Sample records for multiscale computational approach

  1. Towards practical multiscale approach for analysis of reinforced concrete structures

    NASA Astrophysics Data System (ADS)

    Moyeda, Arturo; Fish, Jacob

    2017-12-01

    We present a novel multiscale approach for the analysis of reinforced concrete structural elements that overcomes two major hurdles to the practical use of multiscale technologies: (1) the coupling between material and structural scales that arises when large representative volume elements (RVEs) must be considered, and (2) the computational complexity of solving complex nonlinear multiscale problems. The former is accomplished using a variant of the computational continua framework that accounts for sizeable reinforced concrete RVEs by adjusting the location of quadrature points. The latter is accomplished by means of reduced order homogenization customized for structural elements. The proposed multiscale approach has been verified against direct numerical simulations and validated against experimental results.
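    Reduced order homogenization replaces a detailed RVE computation with precomputed effective quantities. The scheme in this record is far more sophisticated, but the underlying idea of averaging an RVE into effective properties can be illustrated with the classical Voigt/Reuss bounds; the two-phase numbers below (a concrete matrix with steel reinforcement) are hypothetical and not taken from the paper:

```python
import numpy as np

def effective_stiffness(volume_fractions, stiffnesses):
    """Voigt (rule-of-mixtures) estimate of an RVE's effective stiffness:
    the volume-weighted average of the phase stiffnesses (an upper bound),
    together with the Reuss (harmonic) average (a lower bound)."""
    volume_fractions = np.asarray(volume_fractions, dtype=float)
    stiffnesses = np.asarray(stiffnesses, dtype=float)
    voigt = np.sum(volume_fractions * stiffnesses)
    reuss = 1.0 / np.sum(volume_fractions / stiffnesses)
    return voigt, reuss

# Hypothetical phases: concrete matrix (~30 GPa) with 2% steel (~200 GPa)
voigt, reuss = effective_stiffness([0.98, 0.02], [30.0, 200.0])
```

    Any admissible effective stiffness of the RVE lies between the two bounds; a real reduced order model resolves the RVE response far more accurately than either.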

  2. Statistical Field Estimation for Complex Coastal Regions and Archipelagos (PREPRINT)

    DTIC Science & Technology

    2011-04-09

    …and study the computational properties of these schemes. Specifically, we extend a multiscale Objective Analysis (OA) approach to complex coastal regions and archipelagos. The multiscale free-surface code builds on the primitive-equation model of the Harvard Ocean Prediction System (HOPS; Haley et al., 2009).

  3. Filters for Improvement of Multiscale Data from Atomistic Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gardner, David J.; Reynolds, Daniel R.

    Multiscale computational models strive to produce accurate and efficient numerical simulations of systems involving interactions across multiple spatial and temporal scales that typically differ by several orders of magnitude. Some such models utilize a hybrid continuum-atomistic approach, combining continuum approximations with first-principles-based atomistic models to capture multiscale behavior. By following the heterogeneous multiscale method framework for developing multiscale computational models, unknown continuum scale data can be computed from an atomistic model. Concurrently coupling the two models requires performing numerous atomistic simulations, which can dominate the computational cost of the method. Furthermore, when the resulting continuum data is noisy due to sampling error, stochasticity in the model, or randomness in the initial conditions, filtering can result in significant accuracy gains in the computed multiscale data without increasing the size or duration of the atomistic simulations. In this work, we demonstrate the effectiveness of spectral filtering for increasing the accuracy of noisy multiscale data obtained from atomistic simulations. Moreover, we present a robust and automatic method for closely approximating the optimum level of filtering in the case of additive white noise. Improving the accuracy of the filtered simulation data leads to dramatic computational savings by allowing shorter and smaller atomistic simulations to achieve the same desired multiscale simulation precision.
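    The core idea of spectral filtering is easy to demonstrate: when data are smooth apart from additive white noise, zeroing the high Fourier modes discards mostly noise. A minimal numpy sketch with a hand-picked cutoff (the paper's actual contribution, automatically approximating the optimal cutoff, is not reproduced here):

```python
import numpy as np

def spectral_lowpass(samples, keep_modes):
    """Zero all Fourier modes above `keep_modes` and transform back.
    For data that is smooth apart from additive white noise, the
    discarded high frequencies carry mostly noise."""
    coeffs = np.fft.rfft(samples)
    coeffs[keep_modes:] = 0.0
    return np.fft.irfft(coeffs, n=len(samples))

rng = np.random.default_rng(0)
x = np.linspace(0.0, 2.0 * np.pi, 512, endpoint=False)
truth = np.sin(x) + 0.5 * np.cos(3 * x)            # smooth "continuum" field
noisy = truth + 0.1 * rng.standard_normal(x.size)  # sampling noise
filtered = spectral_lowpass(noisy, keep_modes=8)

err_noisy = np.sqrt(np.mean((noisy - truth) ** 2))
err_filtered = np.sqrt(np.mean((filtered - truth) ** 2))
```

    The filtered root-mean-square error drops well below the raw noise level because only a small fraction of the white-noise power lives in the retained low modes.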

  4. Filters for Improvement of Multiscale Data from Atomistic Simulations

    DOE PAGES

    Gardner, David J.; Reynolds, Daniel R.

    2017-01-05

    Multiscale computational models strive to produce accurate and efficient numerical simulations of systems involving interactions across multiple spatial and temporal scales that typically differ by several orders of magnitude. Some such models utilize a hybrid continuum-atomistic approach, combining continuum approximations with first-principles-based atomistic models to capture multiscale behavior. By following the heterogeneous multiscale method framework for developing multiscale computational models, unknown continuum scale data can be computed from an atomistic model. Concurrently coupling the two models requires performing numerous atomistic simulations, which can dominate the computational cost of the method. Furthermore, when the resulting continuum data is noisy due to sampling error, stochasticity in the model, or randomness in the initial conditions, filtering can result in significant accuracy gains in the computed multiscale data without increasing the size or duration of the atomistic simulations. In this work, we demonstrate the effectiveness of spectral filtering for increasing the accuracy of noisy multiscale data obtained from atomistic simulations. Moreover, we present a robust and automatic method for closely approximating the optimum level of filtering in the case of additive white noise. Improving the accuracy of the filtered simulation data leads to dramatic computational savings by allowing shorter and smaller atomistic simulations to achieve the same desired multiscale simulation precision.

  5. Multiscale computing.

    PubMed

    Kobayashi, M; Irino, T; Sweldens, W

    2001-10-23

    Multiscale computing (MSC) involves the computation, manipulation, and analysis of information at different resolution levels. Widespread use of MSC algorithms and the discovery of important relationships between different approaches to implementation were catalyzed, in part, by the recent interest in wavelets. We present two examples that demonstrate how MSC can help scientists understand complex data. The first is from acoustical signal processing and the second is from computer graphics.
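    The wavelet connection mentioned above can be made concrete with the simplest multiscale transform, the Haar wavelet: one level splits a signal into a half-length coarse approximation plus the detail needed to reconstruct it exactly. A small illustrative sketch, not code from the paper:

```python
import numpy as np

def haar_step(signal):
    """One level of the orthonormal Haar wavelet transform: scaled pairwise
    sums give the coarse approximation, scaled pairwise differences the detail."""
    s = np.asarray(signal, dtype=float).reshape(-1, 2)
    approx = (s[:, 0] + s[:, 1]) / np.sqrt(2.0)
    detail = (s[:, 0] - s[:, 1]) / np.sqrt(2.0)
    return approx, detail

def haar_inverse(approx, detail):
    """Exact inverse of haar_step."""
    out = np.empty(2 * len(approx))
    out[0::2] = (approx + detail) / np.sqrt(2.0)
    out[1::2] = (approx - detail) / np.sqrt(2.0)
    return out

signal = np.array([4.0, 2.0, 5.0, 5.0, 1.0, 3.0, 2.0, 0.0])
approx, detail = haar_step(signal)
recon = haar_inverse(approx, detail)
```

    Iterating `haar_step` on the approximation yields the full multi-resolution hierarchy; the orthonormal scaling preserves the signal's energy across levels.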

  6. A multiscale approach to accelerate pore-scale simulation of porous electrodes

    NASA Astrophysics Data System (ADS)

    Zheng, Weibo; Kim, Seung Hyun

    2017-04-01

    A new method to accelerate pore-scale simulation of porous electrodes is presented. The method combines the macroscopic approach with pore-scale simulation by decomposing a physical quantity into macroscopic and local variations. The multiscale method is applied to the potential equation in pore-scale simulation of a Proton Exchange Membrane Fuel Cell (PEMFC) catalyst layer, and validated with the conventional approach for pore-scale simulation. Results show that the multiscale scheme substantially reduces the computational cost without sacrificing accuracy.
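    The decomposition into macroscopic and local variations can be sketched in a few lines: take block (coarse-cell) means as the macroscopic part and the zero-mean residual as the local part. A hypothetical 1-D illustration, not the authors' implementation:

```python
import numpy as np

def decompose(field, block):
    """Split a 1-D field into a macroscopic part (the mean over each block
    of cells) and a local variation (the residual, which has zero mean in
    every block); adding the two recovers the field exactly."""
    field = np.asarray(field, dtype=float)
    means = field.reshape(-1, block).mean(axis=1)
    macro = np.repeat(means, block)
    return macro, field - macro

phi = np.array([1.0, 3.0, 2.0, 6.0, 10.0, 8.0, 9.0, 13.0])
macro, local = decompose(phi, block=4)
```

    In the paper's setting the macroscopic part is cheap to evolve on the coarse grid, so only the local variation needs pore-scale resolution.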

  7. REVIEW OF THE GOVERNING EQUATIONS, COMPUTATIONAL ALGORITHMS, AND OTHER COMPONENTS OF THE MODELS-3 COMMUNITY MULTISCALE AIR QUALITY (CMAQ) MODELING SYSTEM

    EPA Science Inventory

    This article describes the governing equations, computational algorithms, and other components entering into the Community Multiscale Air Quality (CMAQ) modeling system. This system has been designed to approach air quality as a whole by including state-of-the-science capabiliti...

  8. Spatial adaptive sampling in multiscale simulation

    NASA Astrophysics Data System (ADS)

    Rouet-Leduc, Bertrand; Barros, Kipton; Cieren, Emmanuel; Elango, Venmugil; Junghans, Christoph; Lookman, Turab; Mohd-Yusof, Jamaludin; Pavel, Robert S.; Rivera, Axel Y.; Roehm, Dominic; McPherson, Allen L.; Germann, Timothy C.

    2014-07-01

    In a common approach to multiscale simulation, an incomplete set of macroscale equations must be supplemented with constitutive data provided by fine-scale simulation. Collecting statistics from these fine-scale simulations is typically the overwhelming computational cost. We reduce this cost by interpolating the results of fine-scale simulation over the spatial domain of the macro-solver. Unlike previous adaptive sampling strategies, we do not interpolate on the potentially very high dimensional space of inputs to the fine-scale simulation. Our approach is local in space and time, avoids the need for a central database, and is designed to parallelize well on large computer clusters. To demonstrate our method, we simulate one-dimensional elastodynamic shock propagation using the Heterogeneous Multiscale Method (HMM); we find that spatial adaptive sampling requires only ≈ 50 × N^0.14 fine-scale simulations to reconstruct the stress field at all N grid points. Related multiscale approaches, such as Equation Free methods, may also benefit from spatial adaptive sampling.
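    A much-simplified sketch of spatial adaptive sampling: evaluate the expensive fine-scale model on a sparse subset of the macro grid, interpolate in between, and fall back to direct evaluation only where neighbouring samples disagree. All names and tolerances below are illustrative; the paper's scheme is also local in time and avoids a central database:

```python
import numpy as np

def adaptive_sample(fine_model, grid, tol, coarse_step=8):
    """Evaluate `fine_model` only on every `coarse_step`-th grid point and
    linearly interpolate elsewhere; wherever adjacent sampled values differ
    by more than `tol` (e.g. near a shock), resolve that segment directly."""
    samples = grid[::coarse_step]
    values = np.array([fine_model(x) for x in samples])
    result = np.interp(grid, samples, values)
    calls = len(samples)
    for i in range(len(samples) - 1):
        if abs(values[i + 1] - values[i]) > tol:   # steep region: resolve it
            lo, hi = i * coarse_step, (i + 1) * coarse_step
            for j in range(lo, hi):
                result[j] = fine_model(grid[j])
                calls += 1
    return result, calls

# Hypothetical "fine-scale model": a sharp front, standing in for a shock
fine_model = lambda x: float(np.tanh(20.0 * (x - 0.5)))
grid = np.linspace(0.0, 1.0, 129)
approx, calls = adaptive_sample(fine_model, grid, tol=0.5)
exact = np.array([fine_model(x) for x in grid])
```

    Away from the front the interpolant is accurate and cheap; only the two segments straddling the front trigger direct evaluation, so far fewer fine-scale calls are made than grid points.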

  9. Multi-element least square HDMR methods and their applications for stochastic multiscale model reduction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jiang, Lijian, E-mail: ljjiang@hnu.edu.cn; Li, Xinping, E-mail: exping@126.com

    Stochastic multiscale modeling has become a necessary approach to quantify uncertainty and characterize multiscale phenomena for many practical problems, such as flows in stochastic porous media. The numerical treatment of stochastic multiscale models can be very challenging owing to the coexistence of complex uncertainty and multiple physical scales in the models. To address this difficulty efficiently, we construct a computational reduced model. To this end, we propose a multi-element least square high-dimensional model representation (HDMR) method, through which the random domain is adaptively decomposed into a few subdomains, and a local least square HDMR is constructed in each subdomain. These local HDMRs are represented by a finite number of orthogonal basis functions defined in low-dimensional random spaces. The coefficients in the local HDMRs are determined using least square methods. We paste all the local HDMR approximations together to form a global HDMR approximation. To further reduce computational cost, we present a multi-element reduced least-square HDMR, which improves both efficiency and approximation accuracy under certain conditions. To effectively treat heterogeneity and multiscale features in the models, we integrate multiscale finite element methods with multi-element least-square HDMR for stochastic multiscale model reduction. This approach significantly reduces the original model's complexity in both the resolution of the physical space and the high-dimensional stochastic space. We analyze the proposed approach and provide a set of numerical experiments to demonstrate the performance of the presented model reduction techniques.
    Highlights:
    • Multi-element least square HDMR is proposed to treat stochastic models.
    • The random domain is adaptively decomposed into subdomains to obtain an adaptive multi-element HDMR.
    • Least-square reduced HDMR is proposed to enhance computational efficiency and approximation accuracy under certain conditions.
    • Integrating MsFEM and multi-element least square HDMR can significantly reduce computational complexity.
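    A first-order HDMR truncation writes f(x) ≈ f0 + Σ_i f_i(x_i) and fits the component functions by least squares; this is the building block that the record's multi-element method applies subdomain-by-subdomain. A single-element sketch with monomial bases (illustrative only, not the authors' orthogonal-basis construction):

```python
import numpy as np

def first_order_hdmr(xs, ys, degree=3):
    """Least-square first-order HDMR: f(x) ~ f0 + sum_i f_i(x_i), with each
    component f_i expanded in monomials of the single variable x_i and all
    coefficients found by one global least-squares solve."""
    n, d = xs.shape
    cols = [np.ones(n)]                       # constant term f0
    for i in range(d):
        for p in range(1, degree + 1):
            cols.append(xs[:, i] ** p)        # univariate basis for f_i
    A = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(A, ys, rcond=None)

    def predict(x):
        feats = [1.0]
        for i in range(d):
            for p in range(1, degree + 1):
                feats.append(x[i] ** p)
        return float(np.array(feats) @ coef)

    return predict

rng = np.random.default_rng(1)
xs = rng.uniform(-1.0, 1.0, size=(200, 2))
ys = xs[:, 0] ** 2 + 2.0 * xs[:, 1]          # additively separable test function
predict = first_order_hdmr(xs, ys)
```

    For an additively separable function the first-order expansion is exact, so the fit reproduces it essentially to machine precision; capturing interactions would require second-order components f_ij(x_i, x_j).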

  10. A Generalized Hybrid Multiscale Modeling Approach for Flow and Reactive Transport in Porous Media

    NASA Astrophysics Data System (ADS)

    Yang, X.; Meng, X.; Tang, Y. H.; Guo, Z.; Karniadakis, G. E.

    2017-12-01

    Using emerging understanding of biological and environmental processes at fundamental scales to advance predictions of larger system behavior requires the development of multiscale approaches, and there is strong interest in coupling models at different scales together in a hybrid multiscale simulation framework. A limited number of hybrid multiscale simulation methods have been developed for subsurface applications, mostly using application-specific approaches for model coupling. The proposed generalized hybrid multiscale approach is designed to be minimally intrusive to the pre-selected at-scale simulators and provides a set of lightweight C++ scripts to manage a complex multiscale workflow utilizing a concurrent coupling approach. The workflow includes at-scale simulators (using the lattice-Boltzmann method, LBM, at the pore and Darcy scales, respectively), scripts for boundary treatment (coupling and kriging), and a multiscale universal interface (MUI) for data exchange. The current study aims to apply the generalized hybrid multiscale modeling approach to couple pore- and Darcy-scale models for flow and mixing-controlled reaction with precipitation/dissolution in heterogeneous porous media. The model domain is heterogeneously packed, so that the mixing-front geometry is more complex and not known a priori. To address those challenges, the generalized hybrid multiscale modeling approach is further developed to 1) adaptively define the locations of pore-scale subdomains, 2) provide a suite of physical boundary coupling schemes, and 3) consider the dynamic change of the pore structures due to mineral precipitation/dissolution. The results are validated and evaluated by comparison with single-scale simulations in terms of velocities, reactive concentrations and computing cost.
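    The concurrent coupling loop, with solvers exchanging interface data each iteration, can be caricatured by an overlapping Schwarz iteration on a 1-D Laplace problem. Here both "solvers" are exact linear solutions standing in for the pore- and Darcy-scale codes, and the two scalars exchanged stand in for the fields a coupling layer such as MUI would pass. Purely illustrative:

```python
def schwarz_couple(n_iter=25):
    """Overlapping Schwarz iteration for u'' = 0 on [0, 1] with u(0) = 0,
    u(1) = 1, split into subdomains [0, 0.6] and [0.4, 1]. Solutions of
    u'' = 0 are linear, so each subdomain solve is exact and one line;
    the loop mimics the exchange of interface values between two codes."""
    g_right = 0.0   # current guess for u(0.6), supplied to subdomain A
    g_left = 0.0    # current guess for u(0.4), supplied to subdomain B
    for _ in range(n_iter):
        # Subdomain A: u(0) = 0, u(0.6) = g_right -> linear; report u(0.4)
        g_left = g_right * (0.4 / 0.6)
        # Subdomain B: u(0.4) = g_left, u(1) = 1 -> linear; report u(0.6)
        g_right = g_left + (1.0 - g_left) * (0.6 - 0.4) / (1.0 - 0.4)
    return g_left, g_right

g_left, g_right = schwarz_couple()
# exact solution is u(x) = x, so the interface values converge to 0.4 and 0.6
```

    The overlap is what makes the fixed-point iteration contract; with nonoverlapping subdomains a different interface condition (e.g. flux matching, as in mortar methods) is needed.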

  11. A machine learning approach for efficient uncertainty quantification using multiscale methods

    NASA Astrophysics Data System (ADS)

    Chan, Shing; Elsheikh, Ahmed H.

    2018-02-01

    Several multiscale methods account for sub-grid scale features using coarse scale basis functions. For example, in the Multiscale Finite Volume method the coarse scale basis functions are obtained by solving a set of local problems over dual-grid cells. We introduce a data-driven approach for the estimation of these coarse scale basis functions. Specifically, we employ a neural network predictor fitted using a set of solution samples from which it learns to generate subsequent basis functions at a lower computational cost than solving the local problems. The computational advantage of this approach is realized for uncertainty quantification tasks where a large number of realizations has to be evaluated. We attribute the ability to learn these basis functions to the modularity of the local problems and the redundancy of the permeability patches between samples. The proposed method is evaluated on elliptic problems yielding very promising results.
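    The data-driven estimation of basis functions amounts to regression: map a permeability patch to the basis values that the local solve would have produced. The record uses a neural network; the sketch below substitutes ridge regression on synthetic linear data so the idea stays a few lines long (all data and names hypothetical):

```python
import numpy as np

def fit_basis_predictor(patches, bases, ridge=1e-8):
    """Fit a linear map from permeability patches (n_samples x n_features)
    to precomputed coarse-basis values (n_samples x n_outputs) by ridge
    regression; a cheap stand-in for the paper's neural-network predictor."""
    X = np.column_stack([patches, np.ones(len(patches))])   # append bias
    A = X.T @ X + ridge * np.eye(X.shape[1])
    W = np.linalg.solve(A, X.T @ bases)
    return lambda patch: np.append(patch, 1.0) @ W

rng = np.random.default_rng(2)
patches = rng.uniform(size=(100, 5))      # stand-in permeability patches
M = rng.uniform(size=(5, 3))
c = rng.uniform(size=3)
bases = patches @ M + c                   # stand-in "solved" basis values
predict = fit_basis_predictor(patches, bases)
new_patch = rng.uniform(size=5)
```

    Once fitted, `predict` replaces a local solve per coarse cell, which is where the savings appear in many-realization uncertainty quantification; the redundancy of permeability patches across realizations is what makes the regression learnable.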

  12. Intercomparison of Multiscale Modeling Approaches in Simulating Subsurface Flow and Transport

    NASA Astrophysics Data System (ADS)

    Yang, X.; Mehmani, Y.; Barajas-Solano, D. A.; Song, H. S.; Balhoff, M.; Tartakovsky, A. M.; Scheibe, T. D.

    2016-12-01

    Hybrid multiscale simulations that couple models across scales are critical to advancing predictions of larger system behavior using understanding of fundamental processes. In the current study, three hybrid multiscale methods are intercompared: a multiscale loose-coupling method, the multiscale finite volume (MsFV) method and a multiscale mortar method. The loose-coupling method enables a parallel workflow structure based on the Swift scripting environment that manages the complex process of executing coupled micro- and macro-scale models without being intrusive to the at-scale simulators. The MsFV method applies microscale and macroscale models over overlapping subdomains of the modeling domain and enforces continuity of concentration and transport fluxes between models via restriction and prolongation operators. The mortar method is a non-overlapping domain decomposition approach capable of coupling all permutations of pore- and continuum-scale models with each other; Lagrange multipliers are used at interfaces shared between the subdomains to establish continuity of species/fluid mass flux. Subdomain computations can be performed either concurrently or non-concurrently depending on the algorithm used. All of the above methods have been shown to be accurate and efficient in studying flow and transport in porous media. However, there have been no field-scale applications or benchmarking among the various hybrid multiscale approaches. To address this challenge, we apply all three hybrid multiscale methods to simulate water flow and transport in a conceptualized 2D modeling domain of the hyporheic zone, where strong interactions between groundwater and surface water exist across multiple scales. In all three multiscale methods, fine-scale simulations are applied to a thin layer of riverbed alluvial sediments while macroscopic simulations are used for the larger subsurface aquifer domain. Different numerical coupling methods are then applied between scales and inter-compared. Comparisons are drawn in terms of velocity distributions, solute transport behavior, algorithm-induced numerical error and computing cost. The intercomparison work provides support for confidence in a variety of hybrid multiscale methods and motivates further development and applications.
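    The restriction and prolongation operators mentioned for the MsFV-style coupling have a minimal form: restriction averages fine cells into a coarse cell, prolongation injects a coarse value back into its fine cells, and restriction after prolongation returns the coarse field unchanged. An illustrative piecewise-constant pair (real MsFV operators are built from locally computed basis functions):

```python
import numpy as np

def restrict(fine, ratio):
    """Average each block of `ratio` fine cells into one coarse value."""
    return np.asarray(fine, dtype=float).reshape(-1, ratio).mean(axis=1)

def prolong(coarse, ratio):
    """Inject each coarse value into its `ratio` fine cells
    (piecewise-constant interpolation)."""
    return np.repeat(np.asarray(coarse, dtype=float), ratio)

coarse = np.array([1.0, 4.0, 2.0])
fine = prolong(coarse, 4)
```

    The identity `restrict(prolong(c)) = c` is the consistency condition that keeps coarse-scale quantities (here, cell averages) conserved across the scale transfer.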

  13. Finite Dimensional Approximations for Continuum Multiscale Problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Berlyand, Leonid

    2017-01-24

    The completed research project concerns the development of novel computational techniques for modeling nonlinear multiscale physical and biological phenomena. Specifically, it addresses the theoretical development and applications of the homogenization theory (coarse graining) approach to calculating the effective properties of highly heterogeneous biological and bio-inspired materials with many spatial scales and nonlinear behavior. This theory studies properties of strongly heterogeneous media in problems arising in materials science, geoscience, biology, etc. Modeling of such media raises fundamental mathematical questions, primarily in partial differential equations (PDEs) and the calculus of variations, the subject of the PI’s research. The focus of the completed research was on mathematical models of biological and bio-inspired materials with the common theme of multiscale analysis and coarse grain computational techniques. Biological and bio-inspired materials offer the unique ability to create environmentally clean functional materials used for energy conversion and storage. These materials are intrinsically complex, with hierarchical organization occurring on many nested length and time scales. The potential to rationally design and tailor the properties of these materials for broad energy applications has been hampered by the lack of computational techniques able to bridge from the molecular to the macroscopic scale. The project addressed the challenge of computational treatment of such complex materials through the development of a synergistic approach that combines innovative multiscale modeling/analysis techniques with high performance computing.

  14. Petascale computation performance of lightweight multiscale cardiac models using hybrid programming models.

    PubMed

    Pope, Bernard J; Fitch, Blake G; Pitman, Michael C; Rice, John J; Reumann, Matthias

    2011-01-01

    Future multiscale and multiphysics models must use the power of high performance computing (HPC) systems to enable research into human disease, translational medical science, and treatment. Previously we showed that computationally efficient multiscale models will require the use of sophisticated hybrid programming models, mixing distributed message passing processes (e.g. the message passing interface (MPI)) with multithreading (e.g. OpenMP, POSIX pthreads). The objective of this work is to compare the performance of such hybrid programming models when applied to the simulation of a lightweight multiscale cardiac model. Our results show that the hybrid models do not perform favourably when compared to an implementation using only MPI, in contrast to our results using complex physiological models. Thus, with regard to lightweight multiscale cardiac models, the user may not need to increase programming complexity by using a hybrid programming approach. However, considering that model complexity will increase, as will HPC system size in both node count and number of cores per node, it is still foreseeable that we will achieve faster-than-real-time multiscale cardiac simulations on these systems using hybrid programming models.

  15. Multiscale Modeling of Damage Processes in fcc Aluminum: From Atoms to Grains

    NASA Technical Reports Server (NTRS)

    Glaessgen, E. H.; Saether, E.; Yamakov, V.

    2008-01-01

    Molecular dynamics (MD) methods are opening new opportunities for simulating the fundamental processes of material behavior at the atomistic level. However, current analysis is limited to small domains and increasing the size of the MD domain quickly presents intractable computational demands. A preferred approach to surmount this computational limitation has been to combine continuum mechanics-based modeling procedures, such as the finite element method (FEM), with MD analyses thereby reducing the region of atomic scale refinement. Such multiscale modeling strategies can be divided into two broad classifications: concurrent multiscale methods that directly incorporate an atomistic domain within a continuum domain and sequential multiscale methods that extract an averaged response from the atomistic simulation for later use as a constitutive model in a continuum analysis.

  16. Strategies for efficient numerical implementation of hybrid multi-scale agent-based models to describe biological systems

    PubMed Central

    Cilfone, Nicholas A.; Kirschner, Denise E.; Linderman, Jennifer J.

    2015-01-01

    Biologically related processes operate across multiple spatiotemporal scales. For computational modeling methodologies to mimic this biological complexity, individual scale models must be linked in ways that allow for dynamic exchange of information across scales. A powerful methodology is to combine a discrete modeling approach, agent-based models (ABMs), with continuum models to form hybrid models. Hybrid multi-scale ABMs have been used to simulate emergent responses of biological systems. Here, we review two aspects of hybrid multi-scale ABMs: linking individual scale models and efficiently solving the resulting model. We discuss the computational choices associated with aspects of linking individual scale models while simultaneously maintaining model tractability. We demonstrate implementations of existing numerical methods in the context of hybrid multi-scale ABMs. Using an example model describing Mycobacterium tuberculosis infection, we show relative computational speeds of various combinations of numerical methods. Efficient linking and solution of hybrid multi-scale ABMs is key to model portability, modularity, and their use in understanding biological phenomena at a systems level. PMID:26366228
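    The linking of scale-specific solvers discussed in this review is often an operator-splitting loop: advance each agent's intracellular ODE with a few substeps, then apply agent-level rules that read the molecular state. A toy sketch; the decay ODE and threshold rule are hypothetical, not taken from the M. tuberculosis model:

```python
def step_hybrid(agents, dt, n_sub=10):
    """One hybrid step: advance each agent's intracellular ODE (here a
    simple decay dx/dt = -k*x, solved with n_sub forward-Euler substeps),
    then apply an agent-level rule that depends on the molecular state.
    The substep count trades accuracy against cost, a central tuning
    knob when linking scales."""
    h = dt / n_sub
    for agent in agents:
        for _ in range(n_sub):
            agent["x"] -= h * agent["k"] * agent["x"]   # intracellular scale
        agent["active"] = agent["x"] > 0.5              # cellular-scale rule
    return agents

agents = [{"x": 1.0, "k": 1.0}, {"x": 1.0, "k": 0.1}]
agents = step_hybrid(agents, dt=1.0, n_sub=1000)
```

    In a production hybrid ABM the inner solver would be a stiff ODE integrator or a tuned explicit method, and the choice of solver per scale is exactly the efficiency question the review examines.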

  17. Developing a novel hierarchical approach for multiscale structural reliability predictions for ultra-high consequence applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Emery, John M.; Coffin, Peter; Robbins, Brian A.

    Microstructural variabilities are among the predominant sources of uncertainty in structural performance and reliability. We seek to develop efficient algorithms for multiscale calculations for polycrystalline alloys such as aluminum alloy 6061-T6 in environments where ductile fracture is the dominant failure mode. Our approach employs concurrent multiscale methods, but does not focus on their development; they are a necessary but not sufficient ingredient of multiscale reliability predictions. We have focused on how to efficiently use concurrent models for forward propagation, because practical applications cannot include fine-scale details throughout the problem domain due to exorbitant computational demand. Our approach begins with a low-fidelity prediction at the engineering scale that is subsequently refined with multiscale simulation. The results presented in this report focus on plasticity and damage at the meso-scale, efforts to expedite Monte Carlo simulation with microstructural considerations, modeling aspects regarding the geometric representation of grains and second-phase particles, and contrasting algorithms for scale coupling.

  18. A Hybrid Multiscale Framework for Subsurface Flow and Transport Simulations

    DOE PAGES

    Scheibe, Timothy D.; Yang, Xiaofan; Chen, Xingyuan; ...

    2015-06-01

    Extensive research efforts have been invested in reducing model errors to improve the predictive ability of biogeochemical earth and environmental system simulators, with applications ranging from contaminant transport and remediation to impacts of biogeochemical elemental cycling (e.g., carbon and nitrogen) on local ecosystems and regional to global climate. While the bulk of this research has focused on improving model parameterizations in the face of observational limitations, the more challenging type of model error/uncertainty to identify and quantify is model structural error, which arises from incorrect mathematical representations of (or failure to consider) important physical, chemical, or biological processes, properties, or system states in model formulations. While improved process understanding can be achieved through scientific study, such understanding is usually developed at small scales. Process-based numerical models are typically designed for a particular characteristic length and time scale. For application-relevant scales, it is generally necessary to introduce approximations and empirical parameterizations to describe complex systems or processes. This single-scale approach has been the best available to date because of limited understanding of process coupling combined with practical limitations on system characterization and computation. While computational power is increasing significantly and our understanding of biological and environmental processes at fundamental scales is accelerating, using this information to advance our knowledge of the larger system behavior requires the development of multiscale simulators. Accordingly, there has been much recent interest in novel multiscale methods in which microscale and macroscale models are explicitly coupled in a single hybrid multiscale simulation.
    A limited number of hybrid multiscale simulations have been developed for biogeochemical earth systems, but they mostly utilize application-specific and sometimes ad-hoc approaches for model coupling. We are developing a generalized approach to hierarchical model coupling designed for high-performance computational systems, based on the Swift computing workflow framework. In this presentation we will describe the generalized approach and provide two use cases: 1) simulation of a mixing-controlled biogeochemical reaction coupling pore- and continuum-scale models, and 2) simulation of biogeochemical impacts of groundwater–river water interactions coupling fine- and coarse-grid model representations. This generalized framework can be customized for use with any pair of linked models (microscale and macroscale) with minimal intrusiveness to the at-scale simulators. It combines a set of python scripts with the Swift workflow environment to execute a complex multiscale simulation utilizing an approach similar to the well-known Heterogeneous Multiscale Method. User customization is facilitated through user-provided input and output file templates and processing function scripts, and execution within a high-performance computing environment is handled by Swift, such that minimal to no user modification of at-scale codes is required.

  19. Multi-Scale Computational Models for Electrical Brain Stimulation

    PubMed Central

    Seo, Hyeon; Jun, Sung C.

    2017-01-01

    Electrical brain stimulation (EBS) is an appealing method to treat neurological disorders. To achieve optimal stimulation effects and a better understanding of the underlying brain mechanisms, neuroscientists have pursued computational modeling studies for a decade. Recently, multi-scale models that combine a volume conductor head model and multi-compartmental models of cortical neurons have been developed to predict stimulation effects on the macroscopic and microscopic levels more precisely. As the need for better computational models continues to increase, we review here recent multi-scale modeling studies, focusing on approaches that couple a simplified or high-resolution volume conductor head model with multi-compartmental models of cortical neurons and construct realistic fiber models using diffusion tensor imaging (DTI). Further implications for achieving better precision in estimating cellular responses are discussed. PMID:29123476

  20. Facing the challenges of multiscale modelling of bacterial and fungal pathogen–host interactions

    PubMed Central

    Schleicher, Jana; Conrad, Theresia; Gustafsson, Mika; Cedersund, Gunnar; Guthke, Reinhard

    2017-01-01

    Recent and rapidly evolving progress on high-throughput measurement techniques and computational performance has led to the emergence of new disciplines, such as systems medicine and translational systems biology. At the core of these disciplines lies the desire to produce multiscale models: mathematical models that integrate multiple scales of biological organization, ranging from molecular, cellular and tissue models to organ, whole-organism and population scale models. Using such models, hypotheses can systematically be tested. In this review, we present state-of-the-art multiscale modelling of bacterial and fungal infections, considering both the pathogen and host as well as their interaction. Multiscale modelling of the interactions of bacteria, especially Mycobacterium tuberculosis, with the human host is quite advanced. In contrast, models for fungal infections are still in their infancy, in particular regarding infections with the most important human pathogenic fungi, Candida albicans and Aspergillus fumigatus. We reflect on the current availability of computational approaches for multiscale modelling of host–pathogen interactions and point out current challenges. Finally, we provide an outlook for future requirements of multiscale modelling. PMID:26857943

  1. Development of a Renormalization Group Approach to Multi-Scale Plasma Physics Computation

    DTIC Science & Technology

    2012-03-28

    Final Technical Report (STTR Phase II), 29-12-2008 to 16-95-2011: Development of a Renormalization Group Approach to Multi-Scale Plasma Physics Computation.

  2. An Expanded Multi-scale Monte Carlo Simulation Method for Personalized Radiobiological Effect Estimation in Radiotherapy: a feasibility study

    NASA Astrophysics Data System (ADS)

    Zhang, Ying; Feng, Yuanming; Wang, Wei; Yang, Chengwen; Wang, Ping

    2017-03-01

    A novel and versatile “bottom-up” approach is developed to estimate the radiobiological effect of clinical radiotherapy. The model consists of multi-scale Monte Carlo simulations from the organ down to the cell level. At the cellular level, accumulated damages are computed using a spectrum-based accumulation algorithm and a predefined cellular damage database. The damage repair mechanism is modeled by an expanded reaction-rate two-lesion kinetic model, which was calibrated by replicating a radiobiological experiment. Multi-scale modeling is then performed on a lung cancer patient under conventional fractionated irradiation. The cell killing effects of two representative voxels (the isocenter and a peripheral voxel of the tumor) are computed and compared. At the microscopic level, the nucleus dose and damage yields vary among the nuclei within each voxel. A slightly larger percentage of cDSB yield is observed for the peripheral voxel (55.0%) compared to the isocenter one (52.5%). For the isocenter voxel, the survival fraction increases monotonically as the oxygen level is reduced. Under an extreme anoxic condition (0.001%), the survival fraction is calculated to be 80% and the hypoxia reduction factor reaches a maximum value of 2.24. In conclusion, with biologically related variations, the proposed multi-scale approach is more versatile than existing approaches for evaluating personalized radiobiological effects in radiotherapy.

  3. Multiscale Modeling in Computational Biomechanics: Determining Computational Priorities and Addressing Current Challenges

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tawhai, Merryn; Bischoff, Jeff; Einstein, Daniel R.

    2009-05-01

    Abstract In this article, we describe some current multiscale modeling issues in computational biomechanics from the perspective of the musculoskeletal and respiratory systems and mechanotransduction. First, we outline the necessity of multiscale simulations in these biological systems. Then we summarize challenges inherent to multiscale biomechanics modeling, regardless of the subdiscipline, followed by computational challenges that are system-specific. We discuss some of the current tools that have been utilized to aid research in multiscale mechanics simulations, and the priorities to further the field of multiscale biomechanics computation.

  4. Integrating Intracellular Dynamics Using CompuCell3D and Bionetsolver: Applications to Multiscale Modelling of Cancer Cell Growth and Invasion

    PubMed Central

    Andasari, Vivi; Roper, Ryan T.; Swat, Maciej H.; Chaplain, Mark A. J.

    2012-01-01

    In this paper we present a multiscale, individual-based simulation environment that integrates CompuCell3D for lattice-based modelling at the cellular level and Bionetsolver for intracellular modelling. CompuCell3D (CC3D) provides an implementation of the lattice-based Cellular Potts Model or CPM (also known as the Glazier-Graner-Hogeweg or GGH model) and a Monte Carlo method based on the Metropolis algorithm for system evolution. The integration of CC3D for cellular systems with Bionetsolver for subcellular systems enables us to develop a multiscale mathematical model and to study the evolution of cell behaviour due to the dynamics inside the cells, capturing aspects of cell behaviour and interaction that are not possible using continuum approaches. We then apply this multiscale modelling technique to a model of cancer growth and invasion, based on a previously published model of Ramis-Conde et al. (2008) in which individual cell behaviour is driven by a molecular network describing the dynamics of E-cadherin and β-catenin. In that model, which we refer to as the centre-based model, an alternative individual-based modelling technique was used, namely a lattice-free approach. In many respects, the GGH or CPM methodology and the approach of the centre-based model have the same overall goal, that is, to mimic the behaviours and interactions of biological cells. Although the mathematical foundations and computational implementations of the two approaches are very different, the results of the presented simulations are compatible with each other, suggesting that individual-based approaches offer a natural way of describing complex multi-cell, multiscale models. The ability to easily reproduce the results of one modelling approach using an alternative approach is essential from a model cross-validation standpoint and also helps to identify any modelling artefacts specific to a given computational approach. PMID:22461894
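
    As a hedged sketch of the CPM/GGH update rule mentioned above — not CC3D's actual implementation, and omitting the volume-constraint term of the full GGH Hamiltonian — a single Metropolis spin-copy attempt on a toroidal lattice might look like:

```python
import math
import random

def cpm_step(lattice, temperature=1.0, J=1.0):
    """One Metropolis spin-copy attempt of a minimal Cellular Potts Model:
    a random site tries to copy the cell ID of a random neighbor, accepted
    with the Metropolis rule on the change in adhesion energy. The volume
    constraint of the full GGH model is deliberately omitted."""
    n = len(lattice)  # assumes a square n x n lattice with periodic wrap
    def neighbors(i, j):
        return [((i + di) % n, (j + dj) % n)
                for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1))]
    def local_energy(i, j, spin):
        # adhesion: cost J for each neighbor carrying a different cell ID
        return sum(J for (a, b) in neighbors(i, j) if lattice[a][b] != spin)
    i, j = random.randrange(n), random.randrange(n)
    a, b = random.choice(neighbors(i, j))
    old, new = lattice[i][j], lattice[a][b]
    if old == new:
        return False
    dE = local_energy(i, j, new) - local_energy(i, j, old)
    if dE <= 0 or random.random() < math.exp(-dE / temperature):
        lattice[i][j] = new
        return True
    return False

random.seed(0)
grid = [[0] * 4 + [1] * 4 for _ in range(8)]  # two "cells" on an 8x8 torus
for _ in range(200):
    cpm_step(grid)
```

    Each accepted copy moves the boundary between the two cell domains, which is the elementary event from which CPM cell motility and sorting emerge.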

  5. Computational aspects in mechanical modeling of the articular cartilage tissue.

    PubMed

    Mohammadi, Hadi; Mequanint, Kibret; Herzog, Walter

    2013-04-01

    This review focuses on the modeling of articular cartilage (at the tissue level), chondrocyte mechanobiology (at the cell level), and the combination of both in a multiscale computational scheme. The primary objective is to evaluate the advantages and disadvantages of conventional models implemented to study the mechanics of the articular cartilage tissue and chondrocytes. From monophasic material models, the simplest form, to more complicated multiscale theories, these approaches have been frequently used to model articular cartilage and have contributed significantly to modeling joint mechanics, addressing and resolving numerous issues regarding cartilage mechanics and function. Care is required when using different modeling approaches, as the choice of model limits the applications available. In this review, we discuss the conventional models applicable to some of the mechanical aspects of articular cartilage, such as lubrication, swelling pressure, and chondrocyte mechanics, and address some of the issues associated with current modeling approaches. We then suggest future pathways toward a more realistic modeling strategy for simulating the mechanics of cartilage tissue using multiscale and parallelized finite element methods.

  6. A FSI computational framework for vascular physiopathology: A novel flow-tissue multiscale strategy.

    PubMed

    Bianchi, Daniele; Monaldo, Elisabetta; Gizzi, Alessio; Marino, Michele; Filippi, Simonetta; Vairo, Giuseppe

    2017-09-01

    A novel fluid-structure computational framework for vascular applications is herein presented. It addresses the doubly multiscale nature of vascular physiopathology, in terms of both tissue properties and blood flow. Arterial tissues are modelled via a nonlinear multiscale constitutive rationale, based only on parameters with a clear histological and biochemical meaning. Blood flow is described by coupling a three-dimensional fluid domain (undergoing physiological inflow conditions) with a zero-dimensional model, which reproduces the influence of the downstream vasculature, furnishing a realistic description of the proximal outflow pressure. The fluid-structure interaction is managed through an explicit time-marching approach, able to accurately describe tissue nonlinearities within each computational step of the fluid problem. A case study associated with a patient-specific abdominal aortic aneurysm geometry is numerically investigated, highlighting the advantages gained from the proposed multiscale strategy, as well as showing the soundness and effectiveness of the established framework for assessing useful clinical quantities and risk indexes. Copyright © 2017 IPEM. Published by Elsevier Ltd. All rights reserved.

  7. Augmenting Surgery via Multi-scale Modeling and Translational Systems Biology in the Era of Precision Medicine: A Multidisciplinary Perspective

    PubMed Central

    Kassab, Ghassan S.; An, Gary; Sander, Edward A.; Miga, Michael; Guccione, Julius M.; Ji, Songbai; Vodovotz, Yoram

    2016-01-01

    In this era of tremendous technological capabilities and increased focus on improving clinical outcomes, decreasing costs, and increasing precision, there is a need for a more quantitative approach to the field of surgery. Multiscale computational modeling has the potential to bridge the gap to the emerging paradigms of Precision Medicine and Translational Systems Biology, in which quantitative metrics and data guide patient care through improved stratification, diagnosis, and therapy. Achievements by multiple groups have demonstrated the potential for 1) multiscale computational modeling, at a biological level, of diseases treated with surgery and the surgical procedure process at the level of the individual and the population; along with 2) patient-specific, computationally-enabled surgical planning, delivery, and guidance and robotically-augmented manipulation. In this perspective article, we discuss these concepts, and cite emerging examples from the fields of trauma, wound healing, and cardiac surgery. PMID:27015816

  8. An Analysis Platform for Multiscale Hydrogeologic Modeling with Emphasis on Hybrid Multiscale Methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Scheibe, Timothy D.; Murphy, Ellyn M.; Chen, Xingyuan

    2015-01-01

    One of the most significant challenges facing hydrogeologic modelers is the disparity between those spatial and temporal scales at which fundamental flow, transport and reaction processes can best be understood and quantified (e.g., microscopic to pore scales, seconds to days) and those at which practical model predictions are needed (e.g., plume to aquifer scales, years to centuries). While the multiscale nature of hydrogeologic problems is widely recognized, technological limitations in computation and characterization restrict most practical modeling efforts to fairly coarse representations of heterogeneous properties and processes. For some modern problems, the necessary level of simplification is such that model parameters may lose physical meaning and model predictive ability is questionable for any conditions other than those to which the model was calibrated. Recently, there has been broad interest across a wide range of scientific and engineering disciplines in simulation approaches that more rigorously account for the multiscale nature of systems of interest. In this paper, we review a number of such approaches and propose a classification scheme for defining different types of multiscale simulation methods and those classes of problems to which they are most applicable. Our classification scheme is presented in terms of a flow chart (Multiscale Analysis Platform or MAP), and defines several different motifs of multiscale simulation. Within each motif, the member methods are reviewed and example applications are discussed. We focus attention on hybrid multiscale methods, in which two or more models with different physics described at fundamentally different scales are directly coupled within a single simulation. Very recently these methods have begun to be applied to groundwater flow and transport simulations, and we discuss these applications in the context of our classification scheme. As computational and characterization capabilities continue to improve, we envision that hybrid multiscale modeling will become more common and may become a viable alternative to conventional single-scale models in the near future.

  9. An analysis platform for multiscale hydrogeologic modeling with emphasis on hybrid multiscale methods.

    PubMed

    Scheibe, Timothy D; Murphy, Ellyn M; Chen, Xingyuan; Rice, Amy K; Carroll, Kenneth C; Palmer, Bruce J; Tartakovsky, Alexandre M; Battiato, Ilenia; Wood, Brian D

    2015-01-01

    One of the most significant challenges faced by hydrogeologic modelers is the disparity between the spatial and temporal scales at which fundamental flow, transport, and reaction processes can best be understood and quantified (e.g., microscopic to pore scales and seconds to days) and at which practical model predictions are needed (e.g., plume to aquifer scales and years to centuries). While the multiscale nature of hydrogeologic problems is widely recognized, technological limitations in computation and characterization restrict most practical modeling efforts to fairly coarse representations of heterogeneous properties and processes. For some modern problems, the necessary level of simplification is such that model parameters may lose physical meaning and model predictive ability is questionable for any conditions other than those to which the model was calibrated. Recently, there has been broad interest across a wide range of scientific and engineering disciplines in simulation approaches that more rigorously account for the multiscale nature of systems of interest. In this article, we review a number of such approaches and propose a classification scheme for defining different types of multiscale simulation methods and those classes of problems to which they are most applicable. Our classification scheme is presented in terms of a flowchart (Multiscale Analysis Platform), and defines several different motifs of multiscale simulation. Within each motif, the member methods are reviewed and example applications are discussed. We focus attention on hybrid multiscale methods, in which two or more models with different physics described at fundamentally different scales are directly coupled within a single simulation. Very recently these methods have begun to be applied to groundwater flow and transport simulations, and we discuss these applications in the context of our classification scheme. 
As computational and characterization capabilities continue to improve, we envision that hybrid multiscale modeling will become more common and also a viable alternative to conventional single-scale models in the near future. © 2014, National Ground Water Association.

  10. Multiscale Simulation of Microbe Structure and Dynamics

    PubMed Central

    Joshi, Harshad; Singharoy, Abhishek; Sereda, Yuriy V.; Cheluvaraja, Srinath C.; Ortoleva, Peter J.

    2012-01-01

    A multiscale mathematical and computational approach is developed that captures the hierarchical organization of a microbe. It is found that a natural perspective for understanding a microbe is in terms of a hierarchy of variables at various levels of resolution. This hierarchy starts with the N-atom description and terminates with order parameters characterizing a whole microbe. This conceptual framework is used to guide the analysis of the Liouville equation for the probability density of the positions and momenta of the N atoms constituting the microbe and its environment. Using multiscale mathematical techniques, we derive equations for the co-evolution of the order parameters and the probability density of the N-atom state. This approach yields a rigorous way to transfer information between variables on different space-time scales. It elucidates the interplay between equilibrium and far-from-equilibrium processes underlying microbial behavior. It also provides a framework for using coarse-grained nanocharacterization data to guide microbial simulation. It enables a methodical search for free-energy minimizing structures, many of which are typically supported by the set of macromolecules and membranes constituting a given microbe. This suite of capabilities provides a natural framework for arriving at a fundamental understanding of microbial behavior, the analysis of nanocharacterization data, and the computer-aided design of nanostructures for biotechnical and medical purposes. Selected features of the methodology are demonstrated using our multiscale bionanosystem simulator, DeductiveMultiscaleSimulator. Systems used to demonstrate the approach are structural transitions in the cowpea chlorotic mottle virus, the RNA of satellite tobacco mosaic virus, virus-like particles related to human papillomavirus, and the iron-binding protein lactoferrin. PMID:21802438
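
    The Liouville equation referred to above has the standard Hamiltonian form (written here generically; the paper's multiscale analysis introduces additional order-parameter dependence not shown):

```latex
\frac{\partial \rho}{\partial t}
  = -\sum_{i=1}^{N}\left(
      \frac{\mathbf{p}_i}{m_i}\cdot\frac{\partial \rho}{\partial \mathbf{r}_i}
      + \mathbf{F}_i\cdot\frac{\partial \rho}{\partial \mathbf{p}_i}
    \right),
```

    where ρ(r₁,…,r_N, p₁,…,p_N, t) is the N-atom probability density and F_i is the force on atom i.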

  11. Multiscale approach including microfibril scale to assess elastic constants of cortical bone based on neural network computation and homogenization method.

    PubMed

    Barkaoui, Abdelwahed; Chamekh, Abdessalem; Merzouki, Tarek; Hambli, Ridha; Mkaddem, Ali

    2014-03-01

    The complexity and heterogeneity of bone tissue require multiscale modeling to understand its mechanical behavior and its remodeling mechanisms. In this paper, a novel multiscale hierarchical approach including the microfibril scale, based on hybrid neural network (NN) computation and homogenization equations, was developed to link the nanoscopic and macroscopic scales and estimate the elastic properties of human cortical bone. The multiscale model is divided into three main phases: (i) in step 0, the elastic constants of the collagen-water and mineral-water composites are calculated by averaging the upper and lower Hill bounds; (ii) in step 1, the elastic properties of the collagen microfibril are computed using a trained NN simulation, with finite element calculations performed at the nanoscopic level to provide a database for training an in-house NN program; and (iii) in steps 2-10, from the fibril to continuum cortical bone tissue, homogenization equations are used to perform the computation at the higher scales. The NN outputs (elastic properties of the microfibril) are used as inputs for the homogenization computation to determine the properties of the mineralized collagen fibril. The mechanical and geometrical properties of the bone constituents (mineral, collagen, and cross-links) as well as the porosity were taken into consideration. This paper aims to predict analytically the effective elastic constants of cortical bone by modeling its elastic response at these different scales, ranging from the nanostructural to the mesostructural level. The outputs of the lowest scale integrate well with the higher levels and serve as inputs for the next higher scale's modeling. Good agreement was obtained between our predicted results and literature data. Copyright © 2013 John Wiley & Sons, Ltd.
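
    Step 0's averaging of the upper and lower Hill bounds can be illustrated with the classical Voigt-Reuss-Hill scheme; the phase moduli and volume fractions below are illustrative placeholders, not the paper's values.

```python
def voigt_reuss_hill(moduli, fractions):
    """Voigt (upper) and Reuss (lower) bounds on an effective elastic
    modulus of a multi-phase mixture, plus their Hill average."""
    assert abs(sum(fractions) - 1.0) < 1e-9, "volume fractions must sum to 1"
    voigt = sum(f * m for f, m in zip(fractions, moduli))          # iso-strain
    reuss = 1.0 / sum(f / m for f, m in zip(fractions, moduli))    # iso-stress
    return voigt, reuss, 0.5 * (voigt + reuss)

# illustrative stiffnesses (GPa): a mineral-like phase vs. a water-like phase
v, r, hill = voigt_reuss_hill([114.0, 2.3], [0.6, 0.4])
```

    The Hill estimate always lies between the two bounds, which is what makes the average a convenient single-valued input for the next scale up.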

  12. Optimization of a hydrodynamic separator using a multiscale computational fluid dynamics approach.

    PubMed

    Schmitt, Vivien; Dufresne, Matthieu; Vazquez, Jose; Fischer, Martin; Morin, Antoine

    2013-01-01

    This article deals with the optimization of a hydrodynamic separator working on the tangential separation mechanism along a screen. The aim of this study is to optimize the shape of the device to avoid clogging. A multiscale approach is used. This methodology combines measurements and computational fluid dynamics (CFD). A local model enables us to observe the different phenomena occurring at the orifice scale, which shows the potential of expanded metal screens. A global model is used to simulate the flow within the device using a conceptual model of the screen (porous wall). After validation against the experimental measurements, the global model was used to investigate the influence of deflectors and disk plates in the structure.

  13. Parallelization of fine-scale computation in Agile Multiscale Modelling Methodology

    NASA Astrophysics Data System (ADS)

    Macioł, Piotr; Michalik, Kazimierz

    2016-10-01

    Nowadays, multiscale modelling of material behavior is an extensively developed area. An important obstacle to its wide application is its high computational demands. Among other solutions, the parallelization of multiscale computations is promising. Heterogeneous multiscale models are good candidates for parallelization, since communication between sub-models is limited. In this paper, the possibility of parallelizing multiscale models based on the Agile Multiscale Methodology framework is discussed. A sequential, FEM-based macroscopic model has been combined with concurrently computed fine-scale models employing the MatCalc thermodynamic simulator. The main issues investigated in this work are: (i) the speed-up of multiscale models, with special focus on fine-scale computations, and (ii) the loss of computational quality caused by parallel execution. Speed-up has been evaluated on the basis of Amdahl's law. The problem of 'delay error', arising from the parallel execution of fine-scale sub-models controlled by the sequential macroscopic sub-model, is discussed. Some technical aspects of combining third-party commercial modelling software with an in-house multiscale framework and an MPI library are also discussed.
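
    The Amdahl's-law evaluation mentioned above follows the standard formula S(n) = 1 / ((1 - p) + p/n), where p is the fraction of runtime that parallelizes; a minimal sketch (the 90% parallel fraction is an assumed figure, not one from the paper):

```python
def amdahl_speedup(parallel_fraction, n_workers):
    """Amdahl's law: ideal speed-up when a fraction p of the work
    parallelizes perfectly over n workers and the rest stays serial."""
    p = parallel_fraction
    return 1.0 / ((1.0 - p) + p / n_workers)

# e.g. if fine-scale sub-models take 90% of runtime and parallelize well:
s8 = amdahl_speedup(0.9, 8)     # speed-up on 8 workers
s_inf = 1.0 / (1.0 - 0.9)       # asymptotic limit set by the serial 10%
```

    Even with unlimited workers the serial macroscopic sub-model caps the speed-up, which is exactly why the paper focuses on parallelizing the fine-scale computations.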

  14. A Tensor-Product-Kernel Framework for Multiscale Neural Activity Decoding and Control

    PubMed Central

    Li, Lin; Brockmeier, Austin J.; Choi, John S.; Francis, Joseph T.; Sanchez, Justin C.; Príncipe, José C.

    2014-01-01

    Brain machine interfaces (BMIs) have attracted intense attention as a promising technology for directly interfacing computers or prostheses with the brain's motor and sensory areas, thereby bypassing the body. The availability of multiscale neural recordings, including spike trains and local field potentials (LFPs), brings potential opportunities to enhance computational modeling by enriching the characterization of the neural system state. However, heterogeneity in data type (spike timing versus continuous amplitude signals) and spatiotemporal scale complicates the model integration of multiscale neural activity. In this paper, we propose a tensor-product-kernel-based framework to integrate multiscale activity and exploit the complementary information available in multiscale neural recordings. This provides a common mathematical framework for incorporating signals from different domains. The approach is applied to the problems of neural decoding and control. For neural decoding, the framework is able to identify the nonlinear functional relationship between the multiscale neural responses and the stimuli using general-purpose kernel adaptive filtering. In a sensory stimulation experiment, the tensor-product-kernel decoder outperforms decoders that use only a single neural data type. In addition, an adaptive inverse controller for delivering electrical microstimulation patterns that utilizes the tensor-product kernel achieves promising results in emulating the responses to natural stimulation. PMID:24829569
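
    The tensor-product construction rests on the fact that the product of valid kernels on two domains is a valid kernel on their joint space. A hedged sketch using Gaussian kernels as stand-ins for both domains (the paper pairs a spike-train kernel with an LFP kernel, which is not reproduced here):

```python
import math

def gaussian_kernel(x, y, sigma):
    """Gaussian (RBF) kernel between two equal-length feature vectors."""
    d2 = sum((a - b) ** 2 for a, b in zip(x, y))
    return math.exp(-d2 / (2.0 * sigma ** 2))

def tensor_product_kernel(spike_feat_x, lfp_x, spike_feat_y, lfp_y,
                          sigma_spike=1.0, sigma_lfp=1.0):
    """Tensor-product kernel over heterogeneous inputs: similarity on the
    joint (spike, LFP) space is the product of per-domain similarities,
    so a sample is 'close' only if it is close in BOTH modalities."""
    return (gaussian_kernel(spike_feat_x, spike_feat_y, sigma_spike)
            * gaussian_kernel(lfp_x, lfp_y, sigma_lfp))

k = tensor_product_kernel([1.0, 0.0], [0.2], [1.0, 0.0], [0.2])  # → 1.0
```

    A kernel adaptive filter can then use this joint kernel in place of a single-modality one, which is the mechanism behind the decoding gains the abstract reports.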

  15. Multiscale Multifunctional Progressive Fracture of Composite Structures

    NASA Technical Reports Server (NTRS)

    Chamis, C. C.; Minnetyan, L.

    2012-01-01

    A new approach is described for evaluating fracture in composite structures. This approach is independent of classical fracture mechanics parameters like fracture toughness. It relies on computational simulation and is programmed in a stand-alone integrated computer code. It is multiscale and multifunctional because it includes composite mechanics for the composite behavior and finite element analysis for predicting the structural response. It contains seven modules: layered composite mechanics (micro, macro, laminate), finite element analysis, an updating scheme, local fracture, global fracture, stress-based failure modes, and fracture progression. The computer code is called CODSTRAN (Composite Durability Structural ANalysis). It is used in the present paper to evaluate the global fracture of four composite shell problems and one composite built-up structure. Results for the composite shells show that global fracture is enhanced when internal pressure is combined with shear loads. Per the original reference, nothing has been added to this comprehensive report since then.

  16. Modeling and Simulation of High Dimensional Stochastic Multiscale PDE Systems at the Exascale

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kevrekidis, Ioannis

    2017-03-22

    The thrust of the proposal was to exploit modern data-mining tools in a way that will create a systematic, computer-assisted approach to the representation of random media -- and also to the representation of the solutions of an array of important physicochemical processes that take place in/on such media. A parsimonious representation/parametrization of the random media links directly (via uncertainty quantification tools) to good sampling of the distribution of random media realizations. It also links directly to modern multiscale computational algorithms (like the equation-free approach that has been developed in our group) and plays a crucial role in accelerating the scientific computation of solutions of nonlinear PDE models (deterministic or stochastic) in such media – both solutions in particular realizations of the random media, and estimation of the statistics of the solutions over multiple realizations (e.g. expectations).

  17. Multiscale high-order/low-order (HOLO) algorithms and applications

    NASA Astrophysics Data System (ADS)

    Chacón, L.; Chen, G.; Knoll, D. A.; Newman, C.; Park, H.; Taitano, W.; Willert, J. A.; Womeldorff, G.

    2017-02-01

    We review the state of the art in the formulation, implementation, and performance of so-called high-order/low-order (HOLO) algorithms for challenging multiscale problems. HOLO algorithms attempt to couple one or several high-complexity physical models (the high-order model, HO) with low-complexity ones (the low-order model, LO). The primary goal of HOLO algorithms is to achieve nonlinear convergence between HO and LO components while minimizing memory footprint and managing the computational complexity in a practical manner. Key to the HOLO approach is the use of the LO representations to address temporal stiffness, effectively accelerating the convergence of the HO/LO coupled system. The HOLO approach is broadly underpinned by the concept of nonlinear elimination, which enables segregation of the HO and LO components in ways that can effectively use heterogeneous architectures. The accuracy and efficiency benefits of HOLO algorithms are demonstrated with specific applications to radiation transport, gas dynamics, plasmas (both Eulerian and Lagrangian formulations), and ocean modeling. Across this broad application spectrum, HOLO algorithms achieve significant accuracy improvements at a fraction of the cost compared to conventional approaches. It follows that HOLO algorithms hold significant potential for high-fidelity system scale multiscale simulations leveraging exascale computing.

  18. Real-Time Nonlocal Means-Based Despeckling.

    PubMed

    Breivik, Lars Hofsoy; Snare, Sten Roar; Steen, Erik Normann; Solberg, Anne H Schistad

    2017-06-01

    In this paper, we propose a multiscale nonlocal means-based despeckling method for medical ultrasound. The multiscale approach leads to large computational savings and improves despeckling results over single-scale iterative approaches. We present two variants of the method. The first, denoted multiscale nonlocal means (MNLM), yields uniform robust filtering of speckle both in structured and homogeneous regions. The second, denoted unnormalized MNLM (UMNLM), is more conservative in regions of structure assuring minimal disruption of salient image details. Due to the popularity of anisotropic diffusion-based methods in the despeckling literature, we review the connection between anisotropic diffusion and iterative variants of NLM. These iterative variants in turn relate to our multiscale variant. As part of our evaluation, we conduct a simulation study making use of ground truth phantoms generated from clinical B-mode ultrasound images. We evaluate our method against a set of popular methods from the despeckling literature on both fine and coarse speckle noise. In terms of computational efficiency, our method outperforms the other considered methods. Quantitatively on simulations and on a tissue-mimicking phantom, our method is found to be competitive with the state-of-the-art. On clinical B-mode images, our method is found to effectively smooth speckle while preserving low-contrast and highly localized salient image detail.
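
    As a hedged illustration of the core nonlocal means idea (single-scale, 1-D, and far simpler than the multiscale B-mode method of the paper): each sample is replaced by a weighted average of samples whose surrounding patches look similar, with weights decaying in the patch distance.

```python
import math

def nlm_1d(signal, patch=1, search=5, h=0.5):
    """Minimal single-scale nonlocal means on a 1-D signal. For each
    interior sample i, average candidates j in a search window, weighted
    by the squared distance between the patches around i and j.
    Boundary samples are left untouched for simplicity."""
    n = len(signal)
    out = list(signal)
    for i in range(patch, n - patch):
        num = den = 0.0
        for j in range(max(patch, i - search), min(n - patch, i + search + 1)):
            d2 = sum((signal[i + k] - signal[j + k]) ** 2
                     for k in range(-patch, patch + 1))
            w = math.exp(-d2 / (h * h))   # similar patches -> weight near 1
            num += w * signal[j]
            den += w
        out[i] = num / den
    return out

noisy = [0.0, 0.1, -0.1, 0.05, 1.0, 0.95, 1.1, 1.05]
smooth = nlm_1d(noisy)
```

    Because weights come from patch similarity rather than spatial proximity alone, the step edge in the middle of `noisy` is smoothed far less than the flat regions on either side.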

  19. Multiscale turbulence models based on convected fluid microstructure

    NASA Astrophysics Data System (ADS)

    Holm, Darryl D.; Tronci, Cesare

    2012-11-01

    The Euler-Poincaré approach to complex fluids is used to derive multiscale equations for computationally modeling Euler flows as a basis for modeling turbulence. The model is based on a kinematic sweeping ansatz (KSA) which assumes that the mean fluid flow serves as a Lagrangian frame of motion for the fluctuation dynamics. Thus, we regard the motion of a fluid parcel on the computationally resolvable length scales as a moving Lagrange coordinate for the fluctuating (zero-mean) motion of fluid parcels at the unresolved scales. Even in the simplest two-scale version on which we concentrate here, the contributions of the fluctuating motion under the KSA to the mean motion yields a system of equations that extends known results and appears to be suitable for modeling nonlinear backscatter (energy transfer from smaller to larger scales) in turbulence using multiscale methods.

  20. Multiscale Modeling and Uncertainty Quantification for Nuclear Fuel Performance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Estep, Donald; El-Azab, Anter; Pernice, Michael

    2017-03-23

    In this project, we will address the challenges associated with constructing high fidelity multiscale models of nuclear fuel performance. We (*) propose a novel approach for coupling mesoscale and macroscale models, (*) devise efficient numerical methods for simulating the coupled system, and (*) devise and analyze effective numerical approaches for error and uncertainty quantification for the coupled multiscale system. As an integral part of the project, we will carry out analysis of the effects of upscaling and downscaling, investigate efficient methods for stochastic sensitivity analysis of the individual macroscale and mesoscale models, and carry out a posteriori error analysis for computed results. We will pursue development and implementation of solutions in software used at Idaho National Laboratory on models of interest to the Nuclear Energy Advanced Modeling and Simulation (NEAMS) program.

  1. Investigating lithium-ion battery materials during overcharge-induced thermal runaway: an operando and multi-scale X-ray CT study.

    PubMed

    Finegan, Donal P; Scheel, Mario; Robinson, James B; Tjaden, Bernhard; Di Michiel, Marco; Hinds, Gareth; Brett, Dan J L; Shearing, Paul R

    2016-11-16

    Catastrophic failure of lithium-ion batteries occurs across multiple length scales and over very short time periods. A combination of high-speed operando tomography, thermal imaging and electrochemical measurements is used to probe the degradation mechanisms leading up to overcharge-induced thermal runaway of a LiCoO2 pouch cell, through its interrelated dynamic structural, thermal and electrical responses. Failure mechanisms across multiple length scales are explored using a post-mortem multi-scale tomography approach, revealing significant morphological and phase changes in the LiCoO2 electrode microstructure and location-dependent degradation. This combined operando and multi-scale X-ray computed tomography (CT) technique is demonstrated as a comprehensive approach to understanding battery degradation and failure.

  2. Computational design and multiscale modeling of a nanoactuator using DNA actuation.

    PubMed

    Hamdi, Mustapha

    2009-12-02

    Developments in the field of nanobiodevices coupling nanostructures and biological components are of great interest in medical nanorobotics. As the fundamentals of bio/non-bio interaction processes are still poorly understood in the design of these devices, design tools and multiscale dynamics modeling approaches are necessary at the fabrication pre-project stage. This paper proposes a new concept of optimized carbon nanotube based servomotor design for drug delivery and biomolecular transport applications. The design of an encapsulated DNA-multi-walled carbon nanotube actuator is prototyped using multiscale modeling. The system is parametrized by using a quantum level approach and characterized by using a molecular dynamics simulation. Based on the analysis of the simulation results, a servo nanoactuator using ionic current feedback is simulated and analyzed for application as a drug delivery carrier.

  3. A simple, stable, and accurate linear tetrahedral finite element for transient, nearly, and fully incompressible solid dynamics: A dynamic variational multiscale approach [A simple, stable, and accurate tetrahedral finite element for transient, nearly incompressible, linear and nonlinear elasticity: A dynamic variational multiscale approach

    DOE PAGES

    Scovazzi, Guglielmo; Carnes, Brian; Zeng, Xianyi; ...

    2015-11-12

    Here, we propose a new approach for the stabilization of linear tetrahedral finite elements in the case of nearly incompressible transient solid dynamics computations. Our method is based on a mixed formulation, in which the momentum equation is complemented by a rate equation for the evolution of the pressure field, approximated with piecewise linear, continuous finite element functions. The pressure equation is stabilized to prevent spurious pressure oscillations in computations. Incidentally, it is also shown that many stabilized methods previously developed for the static case do not generalize easily to transient dynamics. Extensive tests in the context of linear and nonlinear elasticity are used to corroborate the claim that the proposed method is robust, stable, and accurate.
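
    The abstract does not spell out the equations; a generic mixed velocity-pressure rate form for nearly incompressible elastodynamics (bulk modulus κ assumed; the notation is ours, not the authors') reads:

```latex
\rho \,\dot{\mathbf{v}} = \nabla \cdot \boldsymbol{\sigma}(\mathbf{u}, p) + \rho\, \mathbf{b},
\qquad
\frac{\dot{p}}{\kappa} + \nabla \cdot \mathbf{v} = 0 ,
```

    where both the kinematic fields and p are interpolated with piecewise-linear continuous elements, and the pressure-rate equation is the one carrying the stabilization.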

  5. Adaptation of a Fast Optimal Interpolation Algorithm to the Mapping of Oceanographic Data

    NASA Technical Reports Server (NTRS)

    Menemenlis, Dimitris; Fieguth, Paul; Wunsch, Carl; Willsky, Alan

    1997-01-01

    A fast, recently developed, multiscale optimal interpolation algorithm has been adapted to the mapping of hydrographic and other oceanographic data. This algorithm produces solution and error estimates which are consistent with those obtained from exact least squares methods, but at a small fraction of the computational cost. Problems whose solution would be completely impractical using exact least squares, that is, problems with tens or hundreds of thousands of measurements and estimation grid points, can easily be solved on a small workstation using the multiscale algorithm. In contrast to methods previously proposed for solving large least squares problems, our approach provides estimation error statistics while permitting long-range correlations, using all measurements, and permitting arbitrary measurement locations. The multiscale algorithm itself, published elsewhere, is not the focus of this paper. However, the algorithm requires statistical models having a very particular multiscale structure; it is the development of a class of multiscale statistical models, appropriate for oceanographic mapping problems, with which we concern ourselves in this paper. The approach is illustrated by mapping temperature in the northeastern Pacific. The number of hydrographic stations is kept deliberately small to show that multiscale and exact least squares results are comparable. A portion of the data were not used in the analysis; these data serve to test the multiscale estimates. A major advantage of the present approach is the ability to repeat the estimation procedure a large number of times for sensitivity studies, parameter estimation, and model testing. We have made available by anonymous FTP a set of MATLAB-callable routines which implement the multiscale algorithm and the statistical models developed in this paper.
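
    The exact least-squares (Gauss-Markov) estimate that the multiscale algorithm approximates can be sketched in a few lines; the covariance model, station locations, and parameter values below are illustrative assumptions, not the statistical models developed in the paper:

```python
import numpy as np

def objective_analysis(x_grid, x_obs, y_obs, sigma2, L, noise_var):
    """Gauss-Markov (optimal interpolation) estimate on a 1-D grid:
    the exact least-squares solution that the multiscale algorithm
    approximates at a fraction of the cost.  Squared-exponential
    prior covariance (variance sigma2, length scale L); noise_var is
    the observation-error variance."""
    def cov(a, b):
        d = a[:, None] - b[None, :]
        return sigma2 * np.exp(-0.5 * (d / L) ** 2)

    C_go = cov(x_grid, x_obs)               # grid-to-observation covariance
    C_oo = cov(x_obs, x_obs) + noise_var * np.eye(len(x_obs))
    estimate = C_go @ np.linalg.solve(C_oo, y_obs)   # O(m^3) direct solve
    # posterior (mapping) error variance at every grid point
    err_var = sigma2 - np.einsum('ij,ij->i', C_go,
                                 np.linalg.solve(C_oo, C_go.T).T)
    return estimate, err_var

# map a scalar field sampled at 5 "stations" onto a fine grid
x_obs = np.array([0.1, 0.3, 0.5, 0.7, 0.9])
y_obs = np.sin(2 * np.pi * x_obs)
x_grid = np.linspace(0.0, 1.0, 101)
est, ev = objective_analysis(x_grid, x_obs, y_obs,
                             sigma2=1.0, L=0.2, noise_var=0.01)
```

    With tens of thousands of measurements the dense `C_oo` solve becomes impractical, which is exactly the regime the paper's multiscale statistical models address while still returning the error statistics.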

  6. Developing Higher-Order Materials Knowledge Systems

    NASA Astrophysics Data System (ADS)

    Fast, Anthony Nathan

    2011-12-01

    Advances in computational materials science and novel characterization techniques have allowed scientists to probe deeply into a diverse range of materials phenomena. These activities are producing enormous amounts of information regarding the roles of various hierarchical material features in the overall performance characteristics displayed by the material. Connecting the hierarchical information over disparate domains is at the crux of multiscale modeling. The inherent challenge of performing multiscale simulations is developing scale-bridging relationships to couple material information between well-separated length scales. Much progress has been made in the development of homogenization relationships, which replace heterogeneous material features with effective homogeneous descriptions. These relationships facilitate the flow of information from lower length scales to higher length scales. Meanwhile, most localization relationships that link the information from a higher length scale to a lower length scale are plagued by computationally intensive techniques which are not readily integrated into multiscale simulations. The challenge of executing fully coupled multiscale simulations is augmented by the need to incorporate the evolution of the material structure that may occur under conditions such as material processing. To address these challenges with multiscale simulation, a novel framework called the Materials Knowledge System (MKS) has been developed. This methodology efficiently extracts, stores, and recalls microstructure-property-processing localization relationships. This approach is built on the statistical continuum theories developed by Kröner that express the localization of the response field at the microscale using a series of highly complex convolution integrals, which have historically been evaluated analytically.
The MKS approach dramatically improves the accuracy of these expressions by calibrating the convolution kernels in these expressions to results from previously validated physics-based models. These novel tools have been validated for elastic strain localization in moderate-contrast dual-phase composites by direct comparisons with predictions from finite element models. The versatility of the approach is further demonstrated by its successful application to capturing the structure evolution during spinodal decomposition of a binary alloy. Lastly, some key features in the future application of the MKS approach are developed using the Portevin-Le Chatelier effect. It has been shown with these case studies that the MKS approach is capable of accurately reproducing the results from physics-based models with a drastic reduction in computational requirements.
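
    The calibration step described above (fitting convolution kernels to the output of a trusted model, then recalling responses by cheap convolution) can be illustrated with a toy 1-D, two-phase example; the "physics model" here is a stand-in known convolution, and all names and sizes are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)
N, S = 32, 200          # cells per periodic 1-D microstructure, samples

# microstructure function m[h, s] = 1 if phase h occupies cell s
phase = rng.integers(0, 2, size=(S, N)).astype(float)
m = np.stack([1.0 - phase, phase], axis=1)               # (S, 2, N)

# stand-in for a validated physics model: the "true" response is a
# known periodic convolution of the phase-1 indicator
k_true = np.zeros(N)
k_true[[0, 1, N - 1]] = [0.5, 0.25, 0.25]
p = np.real(np.fft.ifft(np.fft.fft(k_true) * np.fft.fft(phase, axis=-1)))

# calibration: fit influence coefficients frequency-by-frequency,
# P(f) = sum_h A_h(f) M_h(f), by least squares over the S samples
M = np.fft.fft(m, axis=-1)
P = np.fft.fft(p, axis=-1)
A = np.zeros((2, N), dtype=complex)
for f in range(N):
    A[:, f] = np.linalg.lstsq(M[:, :, f], P[:, f], rcond=None)[0]

# recall: predict the response of a new microstructure by convolution
phase_new = rng.integers(0, 2, size=N).astype(float)
m_new = np.stack([1.0 - phase_new, phase_new])
p_pred = np.real(np.fft.ifft((A * np.fft.fft(m_new, axis=-1)).sum(axis=0)))
p_true = np.real(np.fft.ifft(np.fft.fft(k_true) * np.fft.fft(phase_new)))
```

    Because the recall step is a single FFT-based convolution, predictions for new microstructures cost almost nothing compared to re-running the physics model.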

  7. Multiscale Cancer Modeling

    PubMed Central

    Macklin, Paul; Cristini, Vittorio

    2013-01-01

    Simulating cancer behavior across multiple biological scales in space and time, i.e., multiscale cancer modeling, is increasingly being recognized as a powerful tool to refine hypotheses, focus experiments, and enable more accurate predictions. A growing number of examples illustrate the value of this approach in providing quantitative insight into the initiation, progression, and treatment of cancer. In this review, we introduce the most recent and important multiscale cancer modeling works that have successfully established a mechanistic link between different biological scales. Biophysical, biochemical, and biomechanical factors are considered in these models. We also discuss innovative, cutting-edge modeling methods that are moving predictive multiscale cancer modeling toward clinical application. Furthermore, because the development of multiscale cancer models requires a new level of collaboration among scientists from a variety of fields such as biology, medicine, physics, mathematics, engineering, and computer science, an innovative Web-based infrastructure is needed to support this growing community. PMID:21529163

  8. Multiscale Modeling in the Clinic: Drug Design and Development

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Clancy, Colleen E.; An, Gary; Cannon, William R.

    A wide range of length and time scales are relevant to pharmacology, especially in drug development, drug design and drug delivery. Therefore, multi-scale computational modeling and simulation methods and paradigms that advance the linkage of phenomena occurring at these multiple scales have become increasingly important. Multi-scale approaches present in silico opportunities to advance laboratory research to bedside clinical applications in pharmaceuticals research. This is achievable through the capability of modeling to reveal phenomena occurring across multiple spatial and temporal scales, which are not otherwise readily accessible to experimentation. The resultant models, when validated, are capable of making testable predictions to guide drug design and delivery. In this review we describe the goals, methods, and opportunities of multi-scale modeling in drug design and development. We demonstrate the impact of multiple scales of modeling in this field. We indicate the common mathematical techniques employed for multi-scale modeling approaches used in pharmacology and present several examples illustrating the current state-of-the-art regarding drug development for: Excitable Systems (Heart); Cancer (Metastasis and Differentiation); Cancer (Angiogenesis and Drug Targeting); Metabolic Disorders; and Inflammation and Sepsis. We conclude with a focus on barriers to successful clinical translation of drug development, drug design and drug delivery multi-scale models.

  9. Constrained approximation of effective generators for multiscale stochastic reaction networks and application to conditioned path sampling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cotter, Simon L., E-mail: simon.cotter@manchester.ac.uk

    2016-10-15

    Efficient analysis and simulation of multiscale stochastic systems of chemical kinetics is an ongoing area for research, and is the source of many theoretical and computational challenges. In this paper, we present a significant improvement to the constrained approach, which is a method for computing effective dynamics of slowly changing quantities in these systems, but which does not rely on the quasi-steady-state assumption (QSSA). The QSSA can cause errors in the estimation of effective dynamics for systems where the difference in timescales between the "fast" and "slow" variables is not so pronounced. This new application of the constrained approach allows us to compute the effective generator of the slow variables, without the need for expensive stochastic simulations. This is achieved by finding the null space of the generator of the constrained system. For complex systems where this is not possible, or where the constrained subsystem is itself multiscale, the constrained approach can then be applied iteratively. This results in breaking the problem down into finding the solutions to many small eigenvalue problems, which can be efficiently solved using standard methods. Since this methodology does not rely on the quasi-steady-state assumption, the effective dynamics that are approximated are highly accurate, and in the case of systems with only monomolecular reactions, are exact. We will demonstrate this with numerical examples, and also use the effective generators to sample paths of the slow variables which are conditioned on their endpoints, a task which would be computationally intractable for the generator of the full system.
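
    A minimal sketch of the constrained idea (compute the stationary law of the fast subsystem from the null space of its generator, then average the slow propensities over it) for a toy A <-> B system; the rates and the slow reaction are invented for illustration:

```python
import numpy as np

# fast subsystem: A <-> B with N molecules in total; state n = #B
N, k1, k2 = 20, 2.0, 3.0
n = np.arange(N + 1)
Q = np.zeros((N + 1, N + 1))
Q[n[:-1], n[:-1] + 1] = k1 * (N - n[:-1])   # A -> B at rate k1*(N-n)
Q[n[1:], n[1:] - 1] = k2 * n[1:]            # B -> A at rate k2*n
Q[n, n] = -Q.sum(axis=1)                    # generator rows sum to zero

# stationary law of the constrained (fast) system: the left null
# vector of the generator, pi Q = 0, obtained from the SVD null space
pi = np.abs(np.linalg.svd(Q.T)[2][-1])
pi /= pi.sum()

# effective propensity of a slow reaction whose rate is k_s * n_B,
# averaged over the fast quasi-equilibrium
k_s = 0.5
a_eff = k_s * float(pi @ n)
```

    For this monomolecular system the stationary law is binomial with mean N*k1/(k1+k2), so the averaged propensity is exact, matching the paper's remark about monomolecular networks.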

  10. Coupled numerical approach combining finite volume and lattice Boltzmann methods for multi-scale multi-physicochemical processes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Li; He, Ya-Ling; Kang, Qinjun

    2013-12-15

    A coupled (hybrid) simulation strategy spatially combining the finite volume method (FVM) and the lattice Boltzmann method (LBM), called CFVLBM, is developed to simulate coupled multi-scale multi-physicochemical processes. In the CFVLBM, the computational domain of multi-scale problems is divided into two sub-domains, i.e., an open, free fluid region and a region filled with porous materials. The FVM and LBM are used for these two regions, respectively, with information exchanged at the interface between the two sub-domains. A general reconstruction operator (RO) is proposed to derive the distribution functions in the LBM from the corresponding macro scalar, the governing equation of which obeys the convection–diffusion equation. The CFVLBM and the RO are validated in several typical physicochemical problems and then are applied to simulate complex multi-scale coupled fluid flow, heat transfer, mass transport, and chemical reaction in a wall-coated micro reactor. The maximum ratio of the grid size between the FVM and LBM regions is explored and discussed.
    Highlights:
    • A coupled simulation strategy for simulating multi-scale phenomena is developed.
    • Finite volume method and lattice Boltzmann method are coupled.
    • A reconstruction operator is derived to transfer information at the sub-domain interface.
    • Coupled multi-scale multiple physicochemical processes in a micro reactor are simulated.
    • Techniques to save computational resources and improve the efficiency are discussed.
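
    A reconstruction operator of this kind can be sketched for a D2Q9 convection-diffusion lattice; the first-order equilibrium form below is an illustrative assumption and may differ from the paper's operator, which can include non-equilibrium corrections:

```python
import numpy as np

# D2Q9 lattice for a convection-diffusion scalar
w = np.array([4/9] + [1/9] * 4 + [1/36] * 4)
e = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
cs2 = 1.0 / 3.0                        # lattice speed of sound squared

def reconstruct(C, u):
    """Reconstruction operator: build LBM distribution functions at
    the FVM/LBM interface from the macroscopic scalar C and velocity
    u, using the convection-diffusion equilibrium (first order)."""
    return w * C * (1.0 + (e @ u) / cs2)

C = 2.5
u = np.array([0.05, -0.02])
f = reconstruct(C, u)
# by construction, the moments of f recover C and C*u at the interface
```

    The zeroth and first moments of the reconstructed populations return exactly the macroscopic scalar and its flux, which is what makes the hand-off from the FVM side consistent.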

  11. Multiscale Modeling of Carbon/Phenolic Composite Thermal Protection Materials: Atomistic to Effective Properties

    NASA Technical Reports Server (NTRS)

    Arnold, Steven M.; Murthy, Pappu L.; Bednarcyk, Brett A.; Lawson, John W.; Monk, Joshua D.; Bauschlicher, Charles W., Jr.

    2016-01-01

    Next generation ablative thermal protection systems are expected to consist of 3D woven composite architectures. It is well known that composites can be tailored to achieve desired mechanical and thermal properties in various directions and thus can be made fit-for-purpose if the proper combination of constituent materials and microstructures can be realized. In the present work, the first multiscale, atomistically-informed computational analysis of mechanical and thermal properties of a present-day carbon/phenolic composite Thermal Protection System (TPS) material is conducted. Model results are compared to measured in-plane and out-of-plane mechanical and thermal properties to validate the computational approach. Results indicate that given sufficient microstructural fidelity, along with lower-scale constituent properties derived from molecular dynamics simulations, accurate composite level (effective) thermo-elastic properties can be obtained. This suggests that next generation TPS properties can be accurately estimated via atomistically informed multiscale analysis.

  12. Towards multiscale modeling of influenza infection

    PubMed Central

    Murillo, Lisa N.; Murillo, Michael S.; Perelson, Alan S.

    2013-01-01

    Aided by recent advances in computational power, algorithms, and higher fidelity data, increasingly detailed theoretical models of infection with influenza A virus are being developed. We review single scale models as they describe influenza infection from intracellular to global scales, and, in particular, we consider those models that capture details specific to influenza and can be used to link different scales. We discuss the few multiscale models of influenza infection that have been developed in this emerging field. In addition to discussing modeling approaches, we also survey biological data on influenza infection and transmission that are relevant for constructing influenza infection models. We envision that, in the future, multiscale models that capitalize on technical advances in experimental biology and high performance computing could be used to describe the large-scale spatial epidemiology of influenza infection, the evolution of the virus, and transmission between hosts more accurately. PMID:23608630

  13. Collaborative Simulation Grid: Multiscale Quantum-Mechanical/Classical Atomistic Simulations on Distributed PC Clusters in the US and Japan

    NASA Technical Reports Server (NTRS)

    Kikuchi, Hideaki; Kalia, Rajiv; Nakano, Aiichiro; Vashishta, Priya; Iyetomi, Hiroshi; Ogata, Shuji; Kouno, Takahisa; Shimojo, Fuyuki; Tsuruta, Kanji; Saini, Subhash; ...

    2002-01-01

    A multidisciplinary, collaborative simulation has been performed on a Grid of geographically distributed PC clusters. The multiscale simulation approach seamlessly combines i) atomistic simulation based on the molecular dynamics (MD) method and ii) quantum mechanical (QM) calculation based on the density functional theory (DFT), so that accurate but less scalable computations are performed only where they are needed. The multiscale MD/QM simulation code has been Grid-enabled using i) a modular, additive hybridization scheme, ii) multiple QM clustering, and iii) computation/communication overlapping. The Gridified MD/QM simulation code has been used to study environmental effects of water molecules on fracture in silicon. A preliminary run of the code has achieved a parallel efficiency of 94% on 25 PCs distributed over 3 PC clusters in the US and Japan, and a larger test involving 154 processors on 5 distributed PC clusters is in progress.

  14. Multiscale analysis of information dynamics for linear multivariate processes.

    PubMed

    Faes, Luca; Montalto, Alessandro; Stramaglia, Sebastiano; Nollo, Giandomenico; Marinazzo, Daniele

    2016-08-01

    In the study of complex physical and physiological systems represented by multivariate time series, an issue of great interest is the description of the system dynamics over a range of different temporal scales. While information-theoretic approaches to the multiscale analysis of complex dynamics are being increasingly used, the theoretical properties of the applied measures are poorly understood. This study introduces for the first time a framework for the analytical computation of information dynamics for linear multivariate stochastic processes explored at different time scales. After showing that the multiscale processing of a vector autoregressive (VAR) process introduces a moving average (MA) component, we describe how to represent the resulting VARMA process using state-space (SS) models and how to exploit the SS model parameters to compute analytical measures of information storage and information transfer for the original and rescaled processes. The framework is then used to quantify multiscale information dynamics for simulated unidirectionally and bidirectionally coupled VAR processes, showing that rescaling may lead to insightful patterns of information storage and transfer but also to potentially misleading behaviors.

  15. Systems oncology: towards patient-specific treatment regimes informed by multiscale mathematical modelling.

    PubMed

    Powathil, Gibin G; Swat, Maciej; Chaplain, Mark A J

    2015-02-01

    The multiscale complexity of cancer as a disease necessitates a corresponding multiscale modelling approach to produce truly predictive mathematical models capable of improving existing treatment protocols. To capture all the dynamics of solid tumour growth and its progression, mathematical modellers need to couple biological processes occurring at various spatial and temporal scales (from genes to tissues). Because the effectiveness of cancer therapy is considerably affected by intracellular and extracellular heterogeneities as well as by dynamical changes in the tissue microenvironment, any model attempting to optimise existing protocols must consider these factors, ultimately leading to improved multimodal treatment regimes. By improving existing and building new mathematical models of cancer, modellers can play an important role in preventing the use of potentially sub-optimal treatment combinations. In this paper, we analyse a multiscale computational mathematical model for cancer growth and spread, incorporating the effects of radiation therapy and chemotherapy on the patient survival probability, and implement the model using two different cell-based modelling techniques. We show that the insights provided by such multiscale modelling approaches can ultimately help in designing optimal patient-specific multi-modality treatment protocols that may increase patients' quality of life. Copyright © 2014 Elsevier Ltd. All rights reserved.

  16. Multiscale finite element modeling of sheet molding compound (SMC) composite structure based on stochastic mesostructure reconstruction

    DOE PAGES

    Chen, Zhangxing; Huang, Tianyu; Shao, Yimin; ...

    2018-03-15

    Predicting the mechanical behavior of the chopped carbon fiber Sheet Molding Compound (SMC) due to spatial variations in local material properties is critical for the structural performance analysis but is computationally challenging. Such spatial variations are induced by the material flow in the compression molding process. In this work, a new multiscale SMC modeling framework and the associated computational techniques are developed to provide accurate and efficient predictions of SMC mechanical performance. The proposed multiscale modeling framework contains three modules. First, a stochastic algorithm for 3D chip-packing reconstruction is developed to efficiently generate the SMC mesoscale Representative Volume Element (RVE) model for Finite Element Analysis (FEA). A new fiber orientation tensor recovery function is embedded in the reconstruction algorithm to match reconstructions with the target characteristics of fiber orientation distribution. Second, a metamodeling module is established to improve the computational efficiency by creating the surrogates of mesoscale analyses. Third, the macroscale behaviors are predicted by an efficient multiscale model, in which the spatially varying material properties are obtained based on the local fiber orientation tensors. Our approach is further validated through experiments at both meso- and macro-scales, such as tensile tests assisted by Digital Image Correlation (DIC) and mesostructure imaging.

  18. Scalable and fast heterogeneous molecular simulation with predictive parallelization schemes

    NASA Astrophysics Data System (ADS)

    Guzman, Horacio V.; Junghans, Christoph; Kremer, Kurt; Stuehn, Torsten

    2017-11-01

    Multiscale and inhomogeneous molecular systems are challenging topics in the field of molecular simulation. In particular, modeling biological systems in the context of multiscale simulations and exploring material properties are driving a permanent development of new simulation methods and optimization algorithms. In computational terms, those methods require parallelization schemes that make productive use of computational resources for each simulation from its very beginning. Here, we introduce the heterogeneous domain decomposition approach, which is a combination of a heterogeneity-sensitive spatial domain decomposition with an a priori rearrangement of subdomain walls. Within this approach, theoretical modeling and scaling laws for the force computation time are proposed and studied as a function of the number of particles and the spatial resolution ratio. We also show the new approach's capabilities by comparing it to both static domain decomposition algorithms and dynamic load-balancing schemes. Specifically, two representative molecular systems have been simulated and compared to the heterogeneous domain decomposition proposed in this work. These two systems comprise an adaptive resolution simulation of a biomolecule solvated in water and a phase-separated binary Lennard-Jones fluid.
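
    The a priori rearrangement of subdomain walls can be sketched in one dimension: given an estimated per-cell force-computation cost, walls are placed by inverting the cumulative cost instead of splitting the volume evenly. The cost weights below are hypothetical:

```python
import numpy as np

def place_walls(cost_density, n_domains):
    """Place subdomain walls so that each domain carries an equal
    share of the estimated force-computation cost (1-D sketch),
    rather than an equal share of the volume."""
    cum = np.cumsum(cost_density, dtype=float)
    targets = cum[-1] * np.arange(1, n_domains) / n_domains
    return np.searchsorted(cum, targets)     # wall cell indices

# heterogeneous system: left half coarse-grained (cheap), right half
# atomistic (10x cost per cell) -- hypothetical cost weights
cost = np.r_[np.ones(50), 10.0 * np.ones(50)]
walls = place_walls(cost, 4)   # walls shift into the expensive half
```

    An equal-volume split of the same system would load one processor ten times more heavily than another; the cost-balanced walls keep the per-domain work nearly equal from the first step.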

  19. Action recognition using multi-scale histograms of oriented gradients based depth motion trail Images

    NASA Astrophysics Data System (ADS)

    Wang, Guanxi; Tie, Yun; Qi, Lin

    2017-07-01

    In this paper, we propose a novel approach based on depth maps: we compute Multi-Scale Histograms of Oriented Gradients (MSHOG) from sequences of depth maps to recognize actions. Each depth frame in a depth video sequence is projected onto three orthogonal Cartesian planes. Under each projection view, the absolute difference between two consecutive projected maps is accumulated through the depth video sequence to form a Depth Motion Trail Image (DMTI). The MSHOG is then computed from these images for the representation of an action. In addition, we apply L2-Regularized Collaborative Representation (L2-CRC) to classify actions. We evaluate the proposed approach on the MSR Action3D and MSRGesture3D datasets. Promising experimental results demonstrate the effectiveness of the proposed method.
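
    The DMTI accumulation step (before the MSHOG descriptor, which is omitted here) can be sketched as follows; the toy moving-patch sequence stands in for one projected view of a depth video:

```python
import numpy as np

def depth_motion_trail(depth_seq):
    """Accumulate the absolute difference of consecutive projected
    depth maps over the whole sequence into one trail image."""
    depth_seq = np.asarray(depth_seq, dtype=float)
    return np.abs(np.diff(depth_seq, axis=0)).sum(axis=0)

# toy projected sequence: a 2x2 patch moving one pixel per frame
T, H, W = 8, 16, 16
seq = np.zeros((T, H, W))
for t in range(T):
    seq[t, 4:6, t:t + 2] = 1.0
dmti = depth_motion_trail(seq)   # nonzero only along the motion trail
```

    The trail image is nonzero exactly where motion occurred, which is why oriented-gradient descriptors computed on it capture the shape of the action.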

  1. Multiscale high-order/low-order (HOLO) algorithms and applications

    DOE PAGES

    Chacon, Luis; Chen, Guangye; Knoll, Dana Alan; ...

    2016-11-11

    Here, we review the state of the art in the formulation, implementation, and performance of so-called high-order/low-order (HOLO) algorithms for challenging multiscale problems. HOLO algorithms attempt to couple one or several high-complexity physical models (the high-order model, HO) with low-complexity ones (the low-order model, LO). The primary goal of HOLO algorithms is to achieve nonlinear convergence between HO and LO components while minimizing memory footprint and managing the computational complexity in a practical manner. Key to the HOLO approach is the use of the LO representations to address temporal stiffness, effectively accelerating the convergence of the HO/LO coupled system. The HOLO approach is broadly underpinned by the concept of nonlinear elimination, which enables segregation of the HO and LO components in ways that can effectively use heterogeneous architectures. The accuracy and efficiency benefits of HOLO algorithms are demonstrated with specific applications to radiation transport, gas dynamics, plasmas (both Eulerian and Lagrangian formulations), and ocean modeling. Across this broad application spectrum, HOLO algorithms achieve significant accuracy improvements at a fraction of the cost compared to conventional approaches. It follows that HOLO algorithms hold significant potential for high-fidelity system scale multiscale simulations leveraging exascale computing.

  2. Particle-Based Methods for Multiscale Modeling of Blood Flow in the Circulation and in Devices: Challenges and Future Directions

    PubMed Central

    Yamaguchi, Takami; Ishikawa, Takuji; Imai, Y.; Matsuki, N.; Xenos, Mikhail; Deng, Yuefan; Bluestein, Danny

    2010-01-01

    A major computational challenge for multiscale modeling is the coupling of disparate length and timescales between molecular mechanics and macroscopic transport, spanning the spatial and temporal scales characterizing the complex processes taking place in flow-induced blood clotting. Flow and pressure effects on a cell-like platelet can be well represented by a continuum mechanics model down to the order of the micrometer level. However, the molecular effects of adhesion/aggregation bonds are on the order of nanometers. A successful multiscale model of platelet response to flow stresses in devices and the ensuing clotting responses should be able to characterize the clotting reactions and their interactions with the flow. This paper attempts to describe a few of the computational methods that were developed in recent years and became available to researchers in the field. They differ from traditional approaches that dominate the field by expanding on prevailing continuum-based approaches, or by completely departing from them, yielding an expanding toolkit that may facilitate further elucidation of the underlying mechanisms of blood flow and the cellular response to it. We offer a paradigm shift by adopting a multidisciplinary approach with fluid dynamics simulations coupled to biophysical and biochemical transport. PMID:20336827

  3. Multiscale Granger causality

    NASA Astrophysics Data System (ADS)

    Faes, Luca; Nollo, Giandomenico; Stramaglia, Sebastiano; Marinazzo, Daniele

    2017-10-01

    In the study of complex physical and biological systems represented by multivariate stochastic processes, an issue of great relevance is the description of the system dynamics spanning multiple temporal scales. While methods to assess the dynamic complexity of individual processes at different time scales are well established, multiscale analysis of directed interactions has never been formalized theoretically, and empirical evaluations are complicated by practical issues such as filtering and downsampling. Here we extend the very popular measure of Granger causality (GC), a prominent tool for assessing directed lagged interactions between joint processes, to quantify information transfer across multiple time scales. We show that the multiscale processing of a vector autoregressive (AR) process introduces a moving average (MA) component, and describe how to represent the resulting ARMA process using state space (SS) models and to combine the SS model parameters for computing exact GC values at arbitrarily large time scales. We exploit the theoretical formulation to identify peculiar features of multiscale GC in basic AR processes, and demonstrate with numerical simulations the much higher estimation accuracy of the SS approach compared to pure AR modeling of filtered and downsampled data. The improved computational reliability is exploited to disclose meaningful multiscale patterns of information transfer between global temperature and carbon dioxide concentration time series, both in paleoclimate and in recent years.
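
    The effect of rescaling on directed interactions can be illustrated with a naive (pure AR, order-1) GC estimator of the kind the paper improves upon; the coupled process and its coefficients are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def var_sim(n, c):
    """Unidirectionally coupled VAR(1): x drives y with strength c."""
    x, y = np.zeros(n), np.zeros(n)
    ex, ey = rng.standard_normal(n), rng.standard_normal(n)
    for t in range(1, n):
        x[t] = 0.5 * x[t - 1] + ex[t]
        y[t] = 0.5 * y[t - 1] + c * x[t - 1] + ey[t]
    return x, y

def gc(target, driver):
    """Order-1 Granger causality driver -> target: log variance ratio
    of the restricted vs. full linear prediction of the target."""
    T = target[1:]
    A_r = target[:-1].reshape(-1, 1)
    A_f = np.column_stack([target[:-1], driver[:-1]])
    r_r = T - A_r @ np.linalg.lstsq(A_r, T, rcond=None)[0]
    r_f = T - A_f @ np.linalg.lstsq(A_f, T, rcond=None)[0]
    return float(np.log(r_r.var() / r_f.var()))

def rescale(z, tau):
    """Average non-overlapping windows of length tau -- the filtering
    step whose MA effect the SS formulation handles exactly."""
    return z[: len(z) // tau * tau].reshape(-1, tau).mean(axis=1)

x, y = var_sim(20000, 0.8)
gc1 = gc(y, x), gc(x, y)                     # scale 1: x->y, y->x
x2, y2 = rescale(x, 2), rescale(y, 2)
gc2 = gc(y2, x2), gc(x2, y2)                 # scale 2
```

    The naive estimator preserves the dominant direction here, but because the rescaled process is ARMA rather than AR, a finite-order AR fit is misspecified at coarse scales; that bias is what the exact SS computation removes.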

  4. Multiscale analysis and computation for flows in heterogeneous media

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Efendiev, Yalchin; Hou, T. Y.; Durlofsky, L. J.

    Our work in this project is aimed at making fundamental advances in multiscale methods for flow and transport in highly heterogeneous porous media. The main thrust of this research is to develop a systematic multiscale analysis and efficient coarse-scale models that can capture global effects, and to extend existing multiscale approaches to problems with additional physics and uncertainties. A key emphasis is on problems without an apparent scale separation. Multiscale solution methods are currently under active investigation for the simulation of subsurface flow in heterogeneous formations. These procedures capture the effects of fine-scale permeability variations through the calculation of specialized coarse-scale basis functions. Most of the multiscale techniques presented to date employ localization approximations in the calculation of these basis functions. For some highly correlated (e.g., channelized) formations, however, global effects are important and may need to be incorporated into the multiscale basis functions. Other challenging issues facing multiscale simulations are the extension of existing multiscale techniques to problems with additional physics, such as compressibility and capillary effects. In our project, we explore the improvement of multiscale methods through the incorporation of additional (single-phase flow) information and the development of a general multiscale framework for flows in the presence of uncertainties, compressible flow, heterogeneous transport, and geomechanics. We have considered (1) adaptive local-global multiscale methods, (2) multiscale methods for the transport equation, (3) operator-based multiscale methods and solvers, (4) multiscale methods in the presence of uncertainties and applications, (5) multiscale finite element methods for high contrast porous media and their generalizations, and (6) multiscale methods for geomechanics. Below, we present a brief overview of each of these contributions.

  5. Fast online generalized multiscale finite element method using constraint energy minimization

    NASA Astrophysics Data System (ADS)

    Chung, Eric T.; Efendiev, Yalchin; Leung, Wing Tat

    2018-02-01

    Local multiscale methods often construct multiscale basis functions in the offline stage without taking into account input parameters such as source terms and boundary conditions. These basis functions are then used in the online stage with a specific input parameter to solve the global problem at a reduced computational cost. Recently, online approaches have been introduced in which multiscale basis functions are adaptively constructed in selected regions to significantly reduce the error. In multiscale methods, it is desirable to need only 1-2 iterations to reduce the error below a desired threshold. Using the Generalized Multiscale Finite Element Framework [10], it was shown that by choosing a sufficient number of offline basis functions, the error reduction can be made independent of physical parameters such as scales and contrast. In this paper, our goal is to improve on this result. Using our recently proposed approach [4] and a special online basis construction in oversampled regions, we show that the error reduction can be made sufficiently large by appropriately selecting the oversampling regions. Our numerical results show that one can achieve a three-order-of-magnitude error reduction, better than our previous methods. We also develop an adaptive algorithm that enriches the basis in selected regions with large residuals. In our adaptive method, we show that the convergence rate can be determined by a user-defined parameter, and we confirm this with numerical simulations. The analysis of the method is presented.

  6. Modeling and Simulations in Photoelectrochemical Water Oxidation: From Single Level to Multiscale Modeling.

    PubMed

    Zhang, Xueqing; Bieberle-Hütter, Anja

    2016-06-08

    This review summarizes recent developments, challenges, and strategies in the field of modeling and simulations of photoelectrochemical (PEC) water oxidation. We focus on water splitting by metal-oxide semiconductors and discuss topics such as theoretical calculations of light absorption, band gap/band edge, charge transport, and electrochemical reactions at the electrode-electrolyte interface. In particular, we review the mechanisms of the oxygen evolution reaction, strategies to lower overpotential, and computational methods applied to PEC systems with particular focus on multiscale modeling. The current challenges in modeling PEC interfaces and their processes are summarized. At the end, we propose a new multiscale modeling approach to simulate the PEC interface under conditions most similar to those of experiments. This approach will contribute to identifying the limitations at PEC interfaces. Its generic nature allows its application to a number of electrochemical systems. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  7. Multi-scale Computational Electromagnetics for Phenomenology and Saliency Characterization in Remote Sensing

    DTIC Science & Technology

    2016-07-15

    AFRL-AFOSR-JP-TR-2016-0068: Multi-scale Computational Electromagnetics for Phenomenology and Saliency Characterization in Remote Sensing (Hean-Teik...). Extends multi-scale computational electromagnetics to applications in microwave remote sensing, together with extended modelling capability and computational flexibility.

  9. Toward a multiscale modeling framework for understanding serotonergic function

    PubMed Central

    Wong-Lin, KongFatt; Wang, Da-Hui; Moustafa, Ahmed A; Cohen, Jeremiah Y; Nakamura, Kae

    2017-01-01

    Despite its importance in regulating emotion and mental wellbeing, the complex structure and function of the serotonergic system present formidable challenges toward understanding its mechanisms. In this paper, we review studies investigating the interactions between serotonergic and related brain systems and their behavior at multiple scales, with a focus on biologically-based computational modeling. We first discuss serotonergic intracellular signaling and neuronal excitability, followed by neuronal circuit and systems levels. At each level of organization, we will discuss the experimental work accompanied by related computational modeling work. We then suggest that a multiscale modeling approach that integrates the various levels of neurobiological organization could potentially transform the way we understand the complex functions associated with serotonin. PMID:28417684

  10. Hybrid Multiscale Simulation of Hydrologic and Biogeochemical Processes in the River-Groundwater Interaction Zone

    NASA Astrophysics Data System (ADS)

    Yang, X.; Scheibe, T. D.; Chen, X.; Hammond, G. E.; Song, X.

    2015-12-01

    The zone in which river water and groundwater mix plays an important role in natural ecosystems as it regulates the mixing of nutrients that control biogeochemical transformations. Subsurface heterogeneity leads to local hotspots of microbial activity that are important to system function yet difficult to resolve computationally. To address this challenge, we are testing a hybrid multiscale approach that couples models at two distinct scales, based on field research at the U.S. Department of Energy's Hanford Site. The region of interest is a 400 × 400 × 20 m macroscale domain that intersects the aquifer and the river and contains a contaminant plume. However, biogeochemical activity is high in a thin zone (mud layer, <1 m thick) immediately adjacent to the river. This microscale domain is highly heterogeneous and requires fine spatial resolution to adequately represent the effects of local mixing on reactions. It is not computationally feasible to resolve the full macroscale domain at the fine resolution needed in the mud layer, and the reaction network needed in the mud layer is much more complex than that needed in the rest of the macroscale domain. Hence, a hybrid multiscale approach is used to efficiently and accurately predict flow and reactive transport at both scales. In our simulations, models at both scales are simulated using the PFLOTRAN code. Multiple microscale simulations in dynamically defined sub-domains (fine resolution, complex reaction network) are executed and coupled with a macroscale simulation over the entire domain (coarse resolution, simpler reaction network). The objectives of the research include: 1) comparing accuracy and computing cost of the hybrid multiscale simulation with a single-scale simulation; 2) identifying hot spots of microbial activity; and 3) defining macroscopic quantities such as fluxes, residence times and effective reaction rates.

  11. Multiscale geometric modeling of macromolecules I: Cartesian representation

    NASA Astrophysics Data System (ADS)

    Xia, Kelin; Feng, Xin; Chen, Zhan; Tong, Yiying; Wei, Guo-Wei

    2014-01-01

    This paper focuses on the geometric modeling and computational algorithm development of biomolecular structures from two data sources: Protein Data Bank (PDB) and Electron Microscopy Data Bank (EMDB) in the Eulerian (or Cartesian) representation. Molecular surface (MS) contains non-smooth geometric singularities, such as cusps, tips and self-intersecting facets, which often lead to computational instabilities in molecular simulations, and violate the physical principle of surface free energy minimization. Variational multiscale surface definitions are proposed based on geometric flows and solvation analysis of biomolecular systems. Our approach leads to geometric and potential driven Laplace-Beltrami flows for biomolecular surface evolution and formation. The resulting surfaces are free of geometric singularities and minimize the total free energy of the biomolecular system. High order partial differential equation (PDE)-based nonlinear filters are employed for EMDB data processing. We show the efficacy of this approach in feature-preserving noise reduction. After the construction of protein multiresolution surfaces, we explore the analysis and characterization of surface morphology by using a variety of curvature definitions. Apart from the classical Gaussian curvature and mean curvature, maximum curvature, minimum curvature, shape index, and curvedness are also applied to macromolecular surface analysis for the first time. Our curvature analysis is uniquely coupled to the analysis of electrostatic surface potential, which is a by-product of our variational multiscale solvation models. As an expository investigation, we particularly emphasize the numerical algorithms and computational protocols for practical applications of the above multiscale geometric models. Such information may otherwise be scattered over the vast literature on this topic. 
Based on the curvature and electrostatic analysis from our multiresolution surfaces, we introduce a new concept, the polarized curvature, for the prediction of protein binding sites.

  12. Hierarchical detection of red lesions in retinal images by multiscale correlation filtering

    NASA Astrophysics Data System (ADS)

    Zhang, Bob; Wu, Xiangqian; You, Jane; Li, Qin; Karray, Fakhri

    2009-02-01

    This paper presents an approach to the computer-aided diagnosis (CAD) of diabetic retinopathy (DR), a common and severe complication of long-term diabetes that damages the retina and causes blindness. Since red lesions are regarded as the first signs of DR, there has been extensive research on effective detection and localization of these abnormalities in retinal images. In contrast to existing algorithms, a new approach based on Multiscale Correlation Filtering (MSCF) and dynamic thresholding is developed. It consists of two levels: red lesion candidate detection (coarse level) and true red lesion detection (fine level). The approach was evaluated using data from the Retinopathy On-line Challenge (ROC) competition website, and we conclude that our method is effective and efficient.
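    The coarse candidate-detection level can be caricatured as matched filtering with zero-mean Gaussian templates over several scales, keeping the maximum response across scales. Everything below (template shapes, the scale set, the synthetic blob) is an illustrative assumption, not the authors' exact MSCF pipeline:

    ```python
    import numpy as np

    def gaussian_template(sigma):
        """Zero-mean Gaussian matched filter (the shape assumed for a lesion)."""
        r = int(3 * sigma)
        ax = np.arange(-r, r + 1)
        xx, yy = np.meshgrid(ax, ax)
        k = np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma ** 2))
        return k - k.mean()

    def correlate(img, k):
        """Direct 2-D correlation with zero padding (scipy.ndimage would also do)."""
        r = k.shape[0] // 2
        pad = np.pad(img, r)
        out = np.empty_like(img, dtype=float)
        for i in range(img.shape[0]):
            for j in range(img.shape[1]):
                out[i, j] = np.sum(pad[i:i + 2 * r + 1, j:j + 2 * r + 1] * k)
        return out

    def multiscale_response(img, sigmas=(1.0, 2.0, 3.0)):
        """Coarse level: maximum correlation response over scales."""
        return np.max([correlate(img, gaussian_template(s)) for s in sigmas], axis=0)

    # synthetic 'lesion': a dark Gaussian blob on a bright background,
    # inverted so the blob is bright before template matching
    img = np.ones((64, 64))
    yy, xx = np.mgrid[:64, :64]
    img -= 0.8 * np.exp(-((yy - 20) ** 2 + (xx - 30) ** 2) / (2 * 2.0 ** 2))
    resp = multiscale_response(1.0 - img)
    peak = np.unravel_index(np.argmax(resp), resp.shape)
    print(peak)
    ```

    Candidates would then be extracted by thresholding the response map (the paper's dynamic thresholding), with the fine level pruning false positives.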

  13. All-Particle Multiscale Computation of Hypersonic Rarefied Flow

    NASA Astrophysics Data System (ADS)

    Jun, E.; Burt, J. M.; Boyd, I. D.

    2011-05-01

    This study examines a new hybrid particle scheme used as an alternative means of multiscale flow simulation. The hybrid particle scheme employs the direct simulation Monte Carlo (DSMC) method in rarefied flow regions and the low diffusion (LD) particle method in continuum flow regions. The numerical procedures of the low diffusion particle method are implemented within an existing DSMC algorithm. The performance of the LD-DSMC approach is assessed by studying Mach 10 nitrogen flow over a sphere with a global Knudsen number of 0.002. The hybrid scheme results show good overall agreement with results from standard DSMC and CFD computation. Subcell procedures are utilized to improve computational efficiency and reduce sensitivity to DSMC cell size in the hybrid scheme. This makes it possible to perform the LD-DSMC simulation on a much coarser mesh, which leads to a significant reduction in computation time.

  14. Multiscale information modelling for heart morphogenesis

    NASA Astrophysics Data System (ADS)

    Abdulla, T.; Imms, R.; Schleich, J. M.; Summers, R.

    2010-07-01

    Science is made feasible by the adoption of common systems of units. As research has become more data intensive, especially in the biomedical domain, it also requires the adoption of a common system of information models, to make explicit the relationship between one set of data and another, regardless of format. This is being realised through the OBO Foundry, which is developing a suite of reference ontologies, and the NCBO BioPortal, which provides services to integrate biomedical resources and functionality to visualise and create mappings between ontology terms. Biomedical experts tend to be focused at one level of spatial scale, be it biochemistry, cell biology, or anatomy. Likewise, the ontologies they use tend to be focused at a particular level of scale. There is increasing interest in a multiscale systems approach, which attempts to integrate between different levels of scale to gain understanding of emergent effects. This is a return to physiological medicine with a computational emphasis, exemplified by the worldwide Physiome initiative and the European Union funded Network of Excellence in the Virtual Physiological Human. However, little work has been done on how information modelling itself may be tailored to a multiscale systems approach. We demonstrate how this can be done for the complex process of heart morphogenesis, which requires multiscale understanding in both the temporal and spatial domains. Such an effort enables the integration of multiscale metrology.

  15. Effect of Mesoscale and Multiscale Modeling on the Performance of Kevlar Woven Fabric Subjected to Ballistic Impact: A Numerical Study

    NASA Astrophysics Data System (ADS)

    Jia, Xin; Huang, Zhengxiang; Zu, Xudong; Gu, Xiaohui; Xiao, Qiangqiang

    2013-12-01

    In this study, an optimal finite element model of Kevlar woven fabric, more computationally efficient than existing models, was developed to simulate ballistic impact onto fabric. The Kevlar woven fabric was modeled at the yarn-level architecture using the hybrid elements analysis (HEA), which uses solid elements to model the yarns at the impact region and shell elements to model the yarns away from the impact region. Three HEA configurations were constructed, in which the solid element region was set to approximately one, two, and three times the projectile's diameter, with impact velocities of 30 m/s (non-perforation case) and 200 m/s (perforation case), to determine the optimal ratio between the solid element region and the shell element region. To further reduce computational time while maintaining the necessary accuracy, three multiscale models were also developed. These multiscale models resolve the local region at the yarn-level architecture using the HEA approach and the global region at the homogeneous level. The effect of varying the ratio of the local and global areas on the ballistic performance of the fabric is discussed. The deformation and damage mechanisms of the fabric were analyzed and compared among the numerical models. Simulation results indicate that the multiscale model based on HEA accurately reproduces the baseline results while substantially reducing computational time.

  16. Multiscale Modeling: A Review

    NASA Astrophysics Data System (ADS)

    Horstemeyer, M. F.

    This review of multiscale modeling covers a brief history of various multiscale methodologies related to solid materials and the associated experimental influences, the influence of multiscale modeling on different disciplines, and some examples of multiscale modeling in the design of structural components. Although computational multiscale modeling methodologies were developed in the late twentieth century, the fundamental notions of multiscale modeling have been around since da Vinci studied different sizes of ropes. The recent rapid growth in multiscale modeling is the result of the confluence of parallel computing power, experimental capabilities to characterize structure-property relations down to the atomic level, and theories that admit multiple length scales. Research focused on multiscale modeling now spans different disciplines (solid mechanics, fluid mechanics, materials science, physics, mathematics, biology, and chemistry), different regions of the world (most continents), and different length scales (from atoms to autos).

  17. Particle-based methods for multiscale modeling of blood flow in the circulation and in devices: challenges and future directions. Sixth International Bio-Fluid Mechanics Symposium and Workshop March 28-30, 2008 Pasadena, California.

    PubMed

    Yamaguchi, Takami; Ishikawa, Takuji; Imai, Y; Matsuki, N; Xenos, Mikhail; Deng, Yuefan; Bluestein, Danny

    2010-03-01

    A major computational challenge for multiscale modeling is the coupling of the disparate length and time scales between molecular mechanics and macroscopic transport that characterize the complex processes taking place in flow-induced blood clotting. Flow and pressure effects on a cell-like platelet can be well represented by a continuum mechanics model down to the micrometer level; the molecular effects of adhesion/aggregation bonds, however, act on the order of nanometers. A successful multiscale model of platelet response to flow stresses in devices and the ensuing clotting responses should be able to characterize the clotting reactions and their interactions with the flow. This paper describes several computational methods developed in recent years that have become available to researchers in the field. They differ from the traditional approaches that dominate the field either by expanding on prevailing continuum-based approaches or by departing from them completely, yielding an expanding toolkit that may facilitate further elucidation of the underlying mechanisms of blood flow and the cellular response to it. We offer a paradigm shift by adopting a multidisciplinary approach with fluid dynamics simulations coupled to biophysical and biochemical transport.

  19. Scalable and fast heterogeneous molecular simulation with predictive parallelization schemes

    DOE PAGES

    Guzman, Horacio V.; Junghans, Christoph; Kremer, Kurt; ...

    2017-11-27

    Multiscale and inhomogeneous molecular systems are challenging topics in the field of molecular simulation. In particular, modeling biological systems in the context of multiscale simulations and exploring material properties are driving a continual development of new simulation methods and optimization algorithms. In computational terms, those methods require parallelization schemes that make productive use of computational resources for each simulation from its genesis. Here, we introduce the heterogeneous domain decomposition approach, which combines a heterogeneity-sensitive spatial domain decomposition with an a priori rearrangement of subdomain walls. Within this approach, theoretical modeling and scaling laws for the force computation time are proposed and studied as functions of the number of particles and the spatial resolution ratio. We also demonstrate the capabilities of the new approach by comparing it to both static domain decomposition algorithms and dynamic load-balancing schemes. Specifically, two representative molecular systems were simulated and compared to the heterogeneous domain decomposition proposed in this work: an adaptive resolution simulation of a biomolecule solvated in water and a phase-separated binary Lennard-Jones fluid.
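    A one-dimensional sketch of the a priori wall-rearrangement idea: place subdomain walls at weighted quantiles of a per-particle cost estimate, so each domain carries comparable work even when expensive (atomistic) and cheap (coarse-grained) particles are unevenly distributed. The cost model, seed, and helper name are hypothetical:

    ```python
    import numpy as np

    def rearrange_walls(x, cost, n_domains):
        """A priori wall rearrangement along one axis: choose wall positions so
        that each subdomain carries roughly the same total estimated cost."""
        order = np.argsort(x)
        xs, cs = x[order], cost[order]
        cum = np.cumsum(cs)
        targets = cum[-1] * np.arange(1, n_domains) / n_domains
        idx = np.searchsorted(cum, targets)
        return xs[idx]

    # hypothetical adaptive-resolution box: atomistic (expensive) particles on
    # the left half, coarse-grained (cheap) particles on the right half
    rng = np.random.default_rng(1)
    x = rng.uniform(0.0, 1.0, 10000)
    cost = np.where(x < 0.5, 10.0, 1.0)
    walls = rearrange_walls(x, cost, 4)
    print(walls)  # walls crowd into the expensive region
    ```

    A static equal-width decomposition would assign most of the force-computation work to the two left domains; the weighted-quantile walls equalize it up front, which is the "a priori" part of the scheme.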

  1. A multi-scale approach to designing therapeutics for tuberculosis

    DOE PAGES

    Linderman, Jennifer J.; Cilfone, Nicholas A.; Pienaar, Elsje; ...

    2015-04-20

    Approximately one third of the world's population is infected with Mycobacterium tuberculosis. Limited information about how the immune system fights M. tuberculosis and what constitutes protection from the bacteria impacts our ability to develop effective therapies for tuberculosis. We present an in vivo systems biology approach that integrates data from multiple model systems, over multiple length and time scales, into a comprehensive multi-scale and multi-compartment view of the in vivo immune response to M. tuberculosis. We then describe computational models that can be used to study (a) immunomodulation with the cytokines tumor necrosis factor and interleukin 10, (b) oral and inhaled antibiotics, and (c) the effect of vaccination.

  2. Automatic image enhancement based on multi-scale image decomposition

    NASA Astrophysics Data System (ADS)

    Feng, Lu; Wu, Zhuangzhi; Pei, Luo; Long, Xiong

    2014-01-01

    In image processing and computational photography, automatic image enhancement is a long-standing goal. Recent automatic enhancement methods take into account not only global semantics, such as correcting color hue and brightness imbalances, but also the local content of the image, such as human faces or the sky in a landscape. In this paper we describe a new scheme for automatic image enhancement that considers both the global semantics and the local content of the image. Our method employs a multi-scale edge-aware image decomposition to detect underexposed regions and enhance the detail of salient content. Experimental results demonstrate the effectiveness of our approach compared to existing automatic enhancement methods.
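    The base/detail split at the heart of such schemes can be sketched in a few lines. As a simplifying assumption, a separable Gaussian blur stands in for the edge-aware filter (e.g. bilateral or WLS) a production pipeline would use, and the gains and gamma are arbitrary illustrative values:

    ```python
    import numpy as np

    def blur(img, sigma):
        """Separable Gaussian blur: a stand-in for an edge-aware filter."""
        r = int(3 * sigma)
        k = np.exp(-np.arange(-r, r + 1) ** 2 / (2 * sigma ** 2))
        k /= k.sum()
        tmp = np.apply_along_axis(lambda v: np.convolve(v, k, mode="same"), 0, img)
        return np.apply_along_axis(lambda v: np.convolve(v, k, mode="same"), 1, tmp)

    def enhance(img, sigma=3.0, detail_gain=1.5, gamma=0.7):
        """Decompose into base + detail, brighten the base (lifting
        underexposed regions), and amplify the detail layer."""
        base = blur(img, sigma)
        detail = img - base
        return np.clip(np.clip(base, 0, 1) ** gamma + detail_gain * detail, 0, 1)

    # a dark synthetic image with faint texture, values in [0, 1]
    yy, xx = np.mgrid[:32, :32]
    img = 0.15 + 0.05 * np.sin(xx / 2.0)
    out = enhance(img)
    ```

    The gamma on the base layer lifts underexposed regions without flattening texture, since the amplified detail layer is added back on top.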

  3. Seafloor identification in sonar imagery via simulations of Helmholtz equations and discrete optimization

    NASA Astrophysics Data System (ADS)

    Engquist, Björn; Frederick, Christina; Huynh, Quyen; Zhou, Haomin

    2017-06-01

    We present a multiscale approach for identifying features in ocean beds by solving inverse problems in high frequency seafloor acoustics. The setting is based on Sound Navigation And Ranging (SONAR) imaging used in scientific, commercial, and military applications. The forward model incorporates multiscale simulations, by coupling Helmholtz equations and geometrical optics for a wide range of spatial scales in the seafloor geometry. This allows for detailed recovery of seafloor parameters including material type. Simulated backscattered data is generated using numerical microlocal analysis techniques. In order to lower the computational cost of the large-scale simulations in the inversion process, we take advantage of a pre-computed library of representative acoustic responses from various seafloor parameterizations.
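    The library-lookup step in the inversion can be illustrated with a toy stand-in for the expensive forward solve: precompute responses on a parameter grid, then invert an observation by nearest least-squares misfit. The forward model, parameter names (reflectivity, roughness decay), and grid are all hypothetical:

    ```python
    import numpy as np

    def forward(params, freqs):
        """Stand-in for an expensive Helmholtz/geometrical-optics solve: a toy
        damped oscillatory backscatter response with hypothetical parameters
        r (reflectivity) and s (roughness-like decay)."""
        r, s = params
        return r * np.exp(-s * freqs) * np.cos(2 * np.pi * freqs)

    freqs = np.linspace(0.0, 2.0, 50)

    # pre-computed library of representative responses on a parameter grid
    grid = [(r, s) for r in np.linspace(0.1, 1.0, 10)
                   for s in np.linspace(0.2, 2.0, 10)]
    library = np.array([forward(p, freqs) for p in grid])

    def invert(observed):
        """Cheap inversion: return the library entry with smallest misfit."""
        misfit = np.linalg.norm(library - observed, axis=1)
        return grid[int(np.argmin(misfit))]

    obs = forward((0.5, 1.0), freqs)   # noiseless observation on the grid
    est = invert(obs)
    ```

    In the real setting each library entry is a full multiscale simulation, so the one-time cost of building the library is amortized over many inversions.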

  4. Multiscale modeling and computation of optically manipulated nano devices

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bao, Gang, E-mail: baog@zju.edu.cn; Liu, Di, E-mail: richardl@math.msu.edu; Luo, Songting, E-mail: luos@iastate.edu

    2016-07-01

    We present a multiscale modeling and computational scheme for optical-mechanical responses of nanostructures. The multi-physical nature of the problem is a result of the interaction between the electromagnetic (EM) field, the molecular motion, and the electronic excitation. To balance accuracy and complexity, we adopt the semi-classical approach in which the EM field is described classically by the Maxwell equations, while the charged particles follow the Schrödinger equations quantum mechanically. To overcome the numerical challenge of solving the high dimensional multi-component many-body Schrödinger equations, we further simplify the model with Ehrenfest molecular dynamics to determine the motion of the nuclei, and use Time-Dependent Current Density Functional Theory (TD-CDFT) to calculate the excitation of the electrons. This leads to a system of coupled equations that computes the electromagnetic field, the nuclear positions, and the electronic current and charge densities simultaneously. In the regime of linear responses, the resonant frequencies initiating the out-of-equilibrium optical-mechanical responses can be formulated as an eigenvalue problem. A self-consistent multiscale method is designed to deal with the well separated space scales. The isomerization of azobenzene is presented as a numerical example.

  5. Nonlocal and Mixed-Locality Multiscale Finite Element Methods

    DOE PAGES

    Costa, Timothy B.; Bond, Stephen D.; Littlewood, David J.

    2018-03-27

    In many applications the resolution of small-scale heterogeneities remains a significant hurdle to robust and reliable predictive simulations. In particular, while material variability at the mesoscale plays a fundamental role in processes such as material failure, the resolution required to capture mechanisms at this scale is often computationally intractable. Multiscale methods aim to overcome this difficulty through judicious choice of a subscale problem and a robust manner of passing information between scales. One promising approach is the multiscale finite element method, which increases the fidelity of macroscale simulations by solving lower-scale problems that produce enriched multiscale basis functions. In this study, we present the first work toward application of the multiscale finite element method to the nonlocal peridynamic theory of solid mechanics. This is achieved within the context of a discontinuous Galerkin framework that facilitates the description of material discontinuities and does not assume the existence of spatial derivatives. Analysis of the resulting nonlocal multiscale finite element method is achieved using the ambulant Galerkin method, developed here with sufficient generality to allow for application to multiscale finite element methods for both local and nonlocal models that satisfy minimal assumptions. Finally, we conclude with preliminary results on a mixed-locality multiscale finite element method in which a nonlocal model is applied at the fine scale and a local model at the coarse scale.
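    The enriched-basis idea is easiest to see in the simplest local, one-dimensional setting (not the paper's nonlocal peridynamic framework): for -(a(x) u')' = 1 with a rapidly oscillating coefficient, the subscale problem (a phi')' = 0 has a closed form, which yields multiscale basis functions and a coarse stiffness built from harmonic averages. All resolutions and the coefficient below are illustrative choices:

    ```python
    import numpy as np

    # -(a(x) u')' = 1 on (0,1), u(0) = u(1) = 0, with oscillating a(x)
    eps = 1 / 64
    a = lambda x: 1.0 / (2.0 + 1.8 * np.sin(2 * np.pi * x / eps))

    nf = 8192                              # fine grid resolving the oscillations
    xf = np.linspace(0.0, 1.0, nf + 1)
    hf = 1.0 / nf
    inva = 1.0 / a(xf)

    # cumulative trapezoid integrals of 1/a and x/a on the fine grid
    I1 = np.concatenate([[0.0], np.cumsum(0.5 * (inva[1:] + inva[:-1]) * hf)])
    I2 = np.concatenate([[0.0], np.cumsum(0.5 * (xf[1:] * inva[1:] + xf[:-1] * inva[:-1]) * hf)])

    # reference solution: a u' = C - x  =>  u = C*I1 - I2 with u(1) = 0
    C = I2[-1] / I1[-1]
    u_ref = C * I1 - I2

    nc = 8                                 # coarse elements, each spanning many oscillations
    m = nf // nc
    K = np.zeros((nc + 1, nc + 1))
    F = np.zeros(nc + 1)
    for e in range(nc):
        lo, hi = e * m, (e + 1) * m
        se = I1[hi] - I1[lo]               # integral of 1/a over the element
        k = 1.0 / se                       # multiscale element stiffness
        K[e:e + 2, e:e + 2] += np.array([[k, -k], [-k, k]])
        # multiscale basis attached to the left node: remaining 1/a mass / se
        phi = (I1[hi] - I1[lo:hi + 1]) / se
        w = np.full(m + 1, hf)
        w[0] = w[-1] = hf / 2              # trapezoid weights for the load f = 1
        F[e] += np.sum(w * phi)
        F[e + 1] += np.sum(w * (1.0 - phi))

    u = np.zeros(nc + 1)
    u[1:-1] = np.linalg.solve(K[1:-1, 1:-1], F[1:-1])
    err = np.max(np.abs(u - u_ref[::m])) / np.max(np.abs(u_ref))
    print(f"relative nodal error with {nc} coarse elements: {err:.2e}")
    ```

    With only 8 coarse elements the enriched basis reproduces the reference nodal values almost exactly, which a standard linear coarse basis cannot do for such a coefficient; extending this enrichment to nonlocal operators is the subject of the paper.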

  7. Multiscale mobility networks and the spatial spreading of infectious diseases.

    PubMed

    Balcan, Duygu; Colizza, Vittoria; Gonçalves, Bruno; Hu, Hao; Ramasco, José J; Vespignani, Alessandro

    2009-12-22

    Among the realistic ingredients to be considered in the computational modeling of infectious diseases, human mobility represents a crucial challenge both on the theoretical side and in view of the limited availability of empirical data. To study the interplay between short-scale commuting flows and long-range airline traffic in shaping the spatiotemporal pattern of a global epidemic we (i) analyze mobility data from 29 countries around the world and find a gravity model able to provide a global description of commuting patterns up to 300 km and (ii) integrate in a worldwide-structured metapopulation epidemic model a timescale-separation technique for evaluating the force of infection due to multiscale mobility processes in the disease dynamics. Commuting flows are found, on average, to be one order of magnitude larger than airline flows. However, their introduction into the worldwide model shows that the large-scale pattern of the simulated epidemic exhibits only small variations with respect to the baseline case where only airline traffic is considered. The presence of short-range mobility increases, however, the synchronization of subpopulations in close proximity and affects the epidemic behavior at the periphery of the airline transportation infrastructure. The present approach outlines the possibility for the definition of layered computational approaches where different modeling assumptions and granularities can be used consistently in a unifying multiscale framework.
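    A gravity model of the kind fitted to the commuting data above takes the form w_ij = C * N_i^α * N_j^β * f(d_ij), where N_i, N_j are the two populations and f is a decreasing deterrence function of the distance. A minimal sketch follows; the exponential deterrence and all parameter values are illustrative placeholders, not the fitted values from the paper.

```python
import math

def gravity_flow(pop_i, pop_j, dist_km, C=0.01, alpha=0.46, beta=0.64, r=82.0):
    """Estimated commuting flow between two subpopulations.

    Gravity-law form w_ij = C * N_i^alpha * N_j^beta * exp(-d_ij / r).
    All parameter values here are hypothetical, chosen only to illustrate
    the functional form.
    """
    return C * pop_i**alpha * pop_j**beta * math.exp(-dist_km / r)

# Flows fall off with distance: nearby towns exchange more commuters.
near = gravity_flow(50_000, 80_000, 10.0)
far = gravity_flow(50_000, 80_000, 250.0)
assert near > far > 0.0
```

    In the metapopulation setting, such pairwise flows define the short-range mobility layer that is then coupled, via timescale separation, with the long-range airline network.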

  8. Multiscale modeling of mucosal immune responses

    PubMed Central

    2015-01-01

    Computational modeling techniques are playing increasingly important roles in advancing a systems-level mechanistic understanding of biological processes. Computer simulations guide and underpin experimental and clinical efforts. This study presents the ENteric Immune Simulator (ENISI), a multiscale tool for modeling mucosal immune responses. ENISI's modeling environment can simulate in silico experiments from molecular signaling pathways to tissue-level events such as tissue lesion formation. ENISI's architecture integrates multiple modeling technologies, including ABM (agent-based modeling), ODE (ordinary differential equations), SDE (stochastic differential equations), and PDE (partial differential equations). This paper focuses on the implementation and developmental challenges of ENISI. A multiscale model of mucosal immune responses during colonic inflammation, including CD4+ T cell differentiation and tissue-level cell-cell interactions, was developed to illustrate the capabilities, power, and scope of ENISI MSM.
    Background: Computational techniques are becoming increasingly powerful, and modeling tools for biological systems are increasingly needed. Biological systems are inherently multiscale, from molecules to tissues and from nanoseconds to a lifespan of several years or decades. ENISI MSM integrates multiple modeling technologies to understand immunological processes from signaling pathways within cells to lesion formation at the tissue level. This paper examines and summarizes the technical details of ENISI, from its initial version to its latest cutting-edge implementation.
    Implementation: An object-oriented programming approach is adopted to develop a suite of tools based on ENISI. Multiple modeling technologies are integrated to visualize tissues, cells, and proteins; furthermore, performance matching between the scales is addressed.
    Conclusion: We used ENISI MSM for developing predictive multiscale models of the mucosal immune system during gut inflammation. Our modeling predictions dissect the mechanisms by which effector CD4+ T cell responses contribute to tissue damage in the gut mucosa following immune dysregulation. PMID:26329787

  9. Multiscale modeling of mucosal immune responses.

    PubMed

    Mei, Yongguo; Abedi, Vida; Carbo, Adria; Zhang, Xiaoying; Lu, Pinyi; Philipson, Casandra; Hontecillas, Raquel; Hoops, Stefan; Liles, Nathan; Bassaganya-Riera, Josep

    2015-01-01

    Computational techniques are becoming increasingly powerful, and modeling tools for biological systems are increasingly needed. Biological systems are inherently multiscale, from molecules to tissues and from nanoseconds to a lifespan of several years or decades. ENISI MSM integrates multiple modeling technologies to understand immunological processes from signaling pathways within cells to lesion formation at the tissue level. This paper examines and summarizes the technical details of ENISI, from its initial version to its latest cutting-edge implementation. An object-oriented programming approach is adopted to develop a suite of tools based on ENISI. Multiple modeling technologies are integrated to visualize tissues, cells, and proteins; furthermore, performance matching between the scales is addressed. We used ENISI MSM for developing predictive multiscale models of the mucosal immune system during gut inflammation. Our modeling predictions dissect the mechanisms by which effector CD4+ T cell responses contribute to tissue damage in the gut mucosa following immune dysregulation. Computational modeling techniques are playing increasingly important roles in advancing a systems-level mechanistic understanding of biological processes. Computer simulations guide and underpin experimental and clinical efforts. This study presents the ENteric Immune Simulator (ENISI), a multiscale tool for modeling mucosal immune responses. ENISI's modeling environment can simulate in silico experiments from molecular signaling pathways to tissue-level events such as tissue lesion formation. ENISI's architecture integrates multiple modeling technologies, including ABM (agent-based modeling), ODE (ordinary differential equations), SDE (stochastic differential equations), and PDE (partial differential equations). This paper focuses on the implementation and developmental challenges of ENISI. 
A multiscale model of mucosal immune responses during colonic inflammation, including CD4+ T cell differentiation and tissue-level cell-cell interactions, was developed to illustrate the capabilities, power, and scope of ENISI MSM.

  10. Sustainable design and manufacturing of multifunctional polymer nanocomposite coatings: A multiscale systems approach

    NASA Astrophysics Data System (ADS)

    Xiao, Jie

    Polymer nanocomposites have a great potential to be a dominant coating material in a wide range of applications in the automotive, aerospace, ship-making, construction, and pharmaceutical industries. However, how to realize design sustainability for this type of nanostructured material, and how to ensure the true optimality of product quality and process performance in coating manufacturing, remain open challenges. The major challenges arise from the intrinsic multiscale nature of the material-process-product system and the need to manage high levels of complexity and uncertainty in design and manufacturing processes. This research centers on the development of a comprehensive multiscale computational methodology and a computer-aided tool set that can facilitate multifunctional nanocoating design and application from novel function envisioning and idea refinement, to knowledge discovery and design solution derivation, and further to performance testing in industrial applications and life cycle analysis. The principal idea is to achieve exceptional system performance through concurrent characterization and optimization of materials, product and associated manufacturing processes covering a wide range of length and time scales. Multiscale modeling and simulation techniques ranging from microscopic molecular modeling to classical continuum modeling are seamlessly coupled. The tight integration of different methods and theories at individual scales allows the prediction of macroscopic coating performance from the fundamental molecular behavior. Goal-oriented design is also pursued by integrating additional methods for bio-inspired dynamic optimization and computational task management that can be implemented in a hierarchical computing architecture. Furthermore, multiscale systems methodologies are developed to achieve the best possible material application towards sustainable manufacturing. 
Automotive coating manufacturing, which involves paint spraying and curing, is specifically discussed in this dissertation. Nevertheless, the multiscale considerations for sustainable manufacturing, the novel concept of IPP control, and the new PPDE-based optimization method are applicable to other types of manufacturing, e.g., metal coating development through electroplating. It is demonstrated that the methodological developments in this dissertation can greatly assist experimentalists in novel material invention and new knowledge discovery. At the same time, they can provide scientific guidance and reveal various new opportunities and effective strategies for sustainable manufacturing.

  11. A self-consistent first-principle based approach to model carrier mobility in organic materials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Meded, Velimir; Friederich, Pascal; Symalla, Franz

    2015-12-31

    Transport through thin organic amorphous films, utilized in OLEDs and OPVs, has been a challenge to model by using ab-initio methods. Charge carrier mobility depends strongly on the disorder strength and reorganization energy, both of which are significantly affected by the details of the environment of each molecule. Here we present a multi-scale approach to describe carrier mobility in which the materials morphology is generated using DEPOSIT, a Monte Carlo based atomistic simulation approach, or, alternatively, by molecular dynamics calculations performed with GROMACS. From this morphology we extract the material specific hopping rates, as well as the on-site energies, using a fully self-consistent embedding approach to compute the electronic structure parameters, which are then used in an analytic expression for the carrier mobility. We apply this strategy to compute the carrier mobility for a set of widely studied molecules and obtain good agreement between experiment and theory, varying over several orders of magnitude in the mobility, without any freely adjustable parameters. The work focuses on the quantum mechanical step of the multi-scale workflow, explains the concept along with the recently published workflow optimization, which combines density functional with semi-empirical tight binding approaches. This is followed by discussion of the analytic formula and its agreement with established percolation fits as well as kinetic Monte Carlo numerical approaches. Finally, we sketch a unified multi-disciplinary approach that integrates materials science simulation and high performance computing, developed within the EU project MMM@HPC.

  12. A multi-scale Q1/P0 approach to Lagrangian shock hydrodynamics.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shashkov, Mikhail; Love, Edward; Scovazzi, Guglielmo

    A new multi-scale, stabilized method for Q1/P0 finite element computations of Lagrangian shock hydrodynamics is presented. Instabilities (of hourglass type) are controlled by a stabilizing operator derived using the variational multi-scale analysis paradigm. The resulting stabilizing term takes the form of a pressure correction. With respect to currently implemented hourglass control approaches, the novelty of the method resides in its residual-based character. The stabilizing residual has a definite physical meaning, since it embeds a discrete form of the Clausius-Duhem inequality. Effectively, the proposed stabilization samples and acts to counter the production of entropy due to numerical instabilities. The proposed technique is applicable to materials with no shear strength, for which there exists a caloric equation of state. The stabilization operator is incorporated into a mid-point, predictor/multi-corrector time integration algorithm, which conserves mass, momentum and total energy. Encouraging numerical results in the context of compressible gas dynamics confirm the potential of the method.

  13. Equation-free multiscale computation: algorithms and applications.

    PubMed

    Kevrekidis, Ioannis G; Samaey, Giovanni

    2009-01-01

    In traditional physicochemical modeling, one derives evolution equations at the (macroscopic, coarse) scale of interest; these are used to perform a variety of tasks (simulation, bifurcation analysis, optimization) using an arsenal of analytical and numerical techniques. For many complex systems, however, although one observes evolution at a macroscopic scale of interest, accurate models are only given at a more detailed (fine-scale, microscopic) level of description (e.g., lattice Boltzmann, kinetic Monte Carlo, molecular dynamics). Here, we review a framework for computer-aided multiscale analysis, which enables macroscopic computational tasks (over extended spatiotemporal scales) using only appropriately initialized microscopic simulation on short time and length scales. The methodology bypasses the derivation of macroscopic evolution equations when these equations conceptually exist but are not available in closed form, hence the term equation-free. We selectively discuss basic algorithms and underlying principles and illustrate the approach through representative applications. We also discuss potential difficulties and outline areas for future research.
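    The equation-free idea described above (lift a coarse state to a microscopic ensemble, run a short fine-scale burst, restrict back, estimate the coarse time derivative, and extrapolate) can be sketched in miniature. This is an illustrative toy, not the authors' code: the "microscopic" model is a noisy particle ensemble whose mean happens to obey dx/dt = -x, and all numerical parameters are hypothetical.

```python
import random

def micro_step(particles, dt=0.001):
    """One step of the fine-scale simulator: each particle relaxes toward 0
    with small noise. The (assumed unavailable in closed form) macroscopic
    law for the ensemble mean is dx/dt = -x."""
    return [p + dt * (-p) + random.gauss(0, 0.001) for p in particles]

def restrict(particles):
    """Restriction: macroscopic observable = ensemble mean."""
    return sum(particles) / len(particles)

def lift(x, n=1000):
    """Lifting: build a microscopic ensemble consistent with macro state x."""
    return [x + random.gauss(0, 0.01) for _ in range(n)]

def coarse_projective_step(x, burst_steps=50, dt=0.001, project=0.2):
    """One equation-free outer step: lift, run a short micro burst, estimate
    the coarse time derivative from the burst, then extrapolate forward."""
    particles = lift(x)
    x0 = restrict(particles)
    for _ in range(burst_steps):
        particles = micro_step(particles, dt)
    x1 = restrict(particles)
    dxdt = (x1 - x0) / (burst_steps * dt)  # estimated coarse derivative
    return x1 + project * dxdt             # projective (extrapolation) step

random.seed(0)
x = 1.0
for _ in range(10):
    x = coarse_projective_step(x)
# The coarse trajectory decays toward 0, tracking dx/dt = -x without the
# macroscopic equation ever being written down.
assert 0.0 < x < 0.3
```

    The point of the projective step is that the short micro bursts are cheap relative to integrating the fine-scale model over the whole macroscopic time horizon.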

  14. Top down and bottom up engineering of bone.

    PubMed

    Knothe Tate, Melissa L

    2011-01-11

    The goal of this retrospective article is to place the body of my lab's multiscale mechanobiology work in context of top-down and bottom-up engineering of bone. We have used biosystems engineering, computational modeling and novel experimental approaches to understand bone physiology, in health and disease, and across time (in utero, postnatal growth, maturity, aging and death, as well as evolution) and length scales (a single bone like a femur, m; a sample of bone tissue, mm-cm; a cell and its local environment, μm; down to the length scale of the cell's own skeleton, the cytoskeleton, nm). First we introduce the concept of flow in bone and the three calibers of porosity through which fluid flows. Then we describe, in the context of organ-tissue, tissue-cell and cell-molecule length scales, both multiscale computational models and experimental methods to predict flow in bone and to understand the flow of fluid as a means to deliver chemical and mechanical cues in bone. Addressing a number of studies in the context of multiple length and time scales, the importance of appropriate boundary conditions, site specific material parameters, permeability measures and even micro-nanoanatomically correct geometries are discussed in context of model predictions and their value for understanding multiscale mechanobiology of bone. Insights from these multiscale computational modeling and experimental methods are providing us with a means to predict, engineer and manufacture bone tissue in the laboratory and in the human body. Copyright © 2010 Elsevier Ltd. All rights reserved.

  15. A high-order multiscale finite-element method for time-domain acoustic-wave modeling

    NASA Astrophysics Data System (ADS)

    Gao, Kai; Fu, Shubin; Chung, Eric T.

    2018-05-01

    Accurate and efficient wave equation modeling is vital for many applications in fields such as acoustics, electromagnetics, and seismology. However, solving the wave equation in large-scale and highly heterogeneous models is usually computationally expensive because the computational cost is directly proportional to the number of grid points in the model. We develop a novel high-order multiscale finite-element method to reduce the computational cost of time-domain acoustic-wave equation numerical modeling by solving the wave equation on a coarse mesh based on multiscale finite-element theory. In contrast to existing multiscale finite-element methods that use only first-order multiscale basis functions, our new method constructs high-order multiscale basis functions from local elliptic problems which are closely related to the Gauss-Lobatto-Legendre quadrature points in a coarse element. Essentially, these basis functions are determined not only by the order of the Legendre polynomials but also by local medium properties, and therefore can effectively convey the fine-scale information to the coarse-scale solution with high-order accuracy. Numerical tests show that our method can significantly reduce the computation time while maintaining high accuracy for wave equation modeling in highly heterogeneous media by solving the corresponding discrete system only on the coarse mesh with the new high-order multiscale basis functions.
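    The core multiscale-FEM ingredient, a basis function computed from a local problem so that it carries the fine-scale coefficient, can be illustrated in 1D. The sketch below builds the simplest (first-order) multiscale basis on one coarse element; it exploits the fact that for -(a(x) u')' = 0 in 1D the flux a u' is constant, so the local solution is an explicit cumulative sum rather than a full FEM solve. The paper's method is higher-order and multidimensional; this is only the idea in miniature.

```python
def multiscale_basis_1d(a_vals):
    """First-order multiscale basis function on one coarse element [0, 1].

    Represents the solution of the local problem -(a(x) u')' = 0 with
    u(0) = 0, u(1) = 1 on a fine grid of len(a_vals) cells. Since the flux
    q = a * u' is constant in 1D, u at node k is the cumulative sum of
    h / a_j, normalized so that u(1) = 1. The basis thus 'bakes in' the
    fine-scale coefficient a(x).
    """
    n = len(a_vals)
    h = 1.0 / n
    u = [0.0]
    for a in a_vals:
        u.append(u[-1] + h / a)
    total = u[-1]
    return [v / total for v in u]

# With a homogeneous coefficient the basis is the standard linear hat;
# with a rough coefficient it develops fine-scale structure.
flat = multiscale_basis_1d([1.0] * 8)
rough = multiscale_basis_1d([1.0, 100.0, 1.0, 100.0, 1.0, 100.0, 1.0, 100.0])
assert abs(flat[4] - 0.5) < 1e-12            # midpoint of a straight line
assert rough[0] == 0.0 and abs(rough[-1] - 1.0) < 1e-12
```

    A coarse-mesh solve using such basis functions then inherits fine-scale accuracy without ever assembling the full fine-grid system.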

  16. A high-order multiscale finite-element method for time-domain acoustic-wave modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gao, Kai; Fu, Shubin; Chung, Eric T.

    Accurate and efficient wave equation modeling is vital for many applications in fields such as acoustics, electromagnetics, and seismology. However, solving the wave equation in large-scale and highly heterogeneous models is usually computationally expensive because the computational cost is directly proportional to the number of grid points in the model. We develop a novel high-order multiscale finite-element method to reduce the computational cost of time-domain acoustic-wave equation numerical modeling by solving the wave equation on a coarse mesh based on multiscale finite-element theory. In contrast to existing multiscale finite-element methods that use only first-order multiscale basis functions, our new method constructs high-order multiscale basis functions from local elliptic problems which are closely related to the Gauss–Lobatto–Legendre quadrature points in a coarse element. Essentially, these basis functions are determined not only by the order of the Legendre polynomials but also by local medium properties, and therefore can effectively convey the fine-scale information to the coarse-scale solution with high-order accuracy. Numerical tests show that our method can significantly reduce the computation time while maintaining high accuracy for wave equation modeling in highly heterogeneous media by solving the corresponding discrete system only on the coarse mesh with the new high-order multiscale basis functions.

  17. A high-order multiscale finite-element method for time-domain acoustic-wave modeling

    DOE PAGES

    Gao, Kai; Fu, Shubin; Chung, Eric T.

    2018-02-04

    Accurate and efficient wave equation modeling is vital for many applications in fields such as acoustics, electromagnetics, and seismology. However, solving the wave equation in large-scale and highly heterogeneous models is usually computationally expensive because the computational cost is directly proportional to the number of grid points in the model. We develop a novel high-order multiscale finite-element method to reduce the computational cost of time-domain acoustic-wave equation numerical modeling by solving the wave equation on a coarse mesh based on multiscale finite-element theory. In contrast to existing multiscale finite-element methods that use only first-order multiscale basis functions, our new method constructs high-order multiscale basis functions from local elliptic problems which are closely related to the Gauss–Lobatto–Legendre quadrature points in a coarse element. Essentially, these basis functions are determined not only by the order of the Legendre polynomials but also by local medium properties, and therefore can effectively convey the fine-scale information to the coarse-scale solution with high-order accuracy. Numerical tests show that our method can significantly reduce the computation time while maintaining high accuracy for wave equation modeling in highly heterogeneous media by solving the corresponding discrete system only on the coarse mesh with the new high-order multiscale basis functions.

  18. Coupling of Peridynamics and Finite Element Formulation for Multiscale Simulations

    DTIC Science & Technology

    2012-10-16

    unidirectional fiber-reinforced composites, Computer Methods in Applied Mechanics and Engineering 217 (2012) 247-261. [44] S. A. Silling, M. Epton... partition of unity principle, (3) numerical testing for different grid widths to horizon ratios, (4) development of an approach to add another material variable in the given approach...

  19. Predicting the breakdown strength and lifetime of nanocomposites using a multi-scale modeling approach

    NASA Astrophysics Data System (ADS)

    Huang, Yanhui; Zhao, He; Wang, Yixing; Ratcliff, Tyree; Breneman, Curt; Brinson, L. Catherine; Chen, Wei; Schadler, Linda S.

    2017-08-01

    It has been found that doping dielectric polymers with a small amount of nanofiller or molecular additive can stabilize the material under a high field and lead to increased breakdown strength and lifetime. Choosing appropriate fillers is critical to optimizing the material performance, but current research largely relies on experimental trial and error. The employment of computer simulations for nanodielectric design is rarely reported. In this work, we propose a multi-scale modeling approach that employs ab initio, Monte Carlo, and continuum scales to predict the breakdown strength and lifetime of polymer nanocomposites based on the charge trapping effect of the nanofillers. The charge transfer, charge energy relaxation, and space charge effects are modeled in respective hierarchical scales by distinctive simulation techniques, and these models are connected together for high fidelity and robustness. The preliminary results show good agreement with the experimental data, suggesting its promise for use in the computer-aided material design of high performance dielectrics.

  20. Computationally Efficient Multiscale Reactive Molecular Dynamics to Describe Amino Acid Deprotonation in Proteins

    PubMed Central

    2016-01-01

    An important challenge in the simulation of biomolecular systems is a quantitative description of the protonation and deprotonation process of amino acid residues. Despite the seeming simplicity of adding or removing a positively charged hydrogen nucleus, simulating the actual protonation/deprotonation process is inherently difficult. It requires both the explicit treatment of the excess proton, including its charge defect delocalization and Grotthuss shuttling through inhomogeneous moieties (water and amino residues), and extensive sampling of coupled condensed phase motions. In a recent paper (J. Chem. Theory Comput. 2014, 10, 2729−2737), a multiscale approach was developed to map high-level quantum mechanics/molecular mechanics (QM/MM) data into a multiscale reactive molecular dynamics (MS-RMD) model in order to describe amino acid deprotonation in bulk water. In this article, we extend the fitting approach (called FitRMD) to create MS-RMD models for ionizable amino acids within proteins. The resulting models are shown to faithfully reproduce the free energy profiles of the reference QM/MM Hamiltonian for PT inside an example protein, the ClC-ec1 H+/Cl– antiporter. Moreover, we show that the resulting MS-RMD models are computationally efficient enough to then characterize more complex 2-dimensional free energy surfaces due to slow degrees of freedom such as water hydration of internal protein cavities that can be inherently coupled to the excess proton charge translocation. The FitRMD method is thus shown to be an effective way to map ab initio level accuracy into a much more computationally efficient reactive MD method in order to explicitly simulate and quantitatively describe amino acid protonation/deprotonation in proteins. PMID:26734942

  1. Computationally Efficient Multiscale Reactive Molecular Dynamics to Describe Amino Acid Deprotonation in Proteins.

    PubMed

    Lee, Sangyun; Liang, Ruibin; Voth, Gregory A; Swanson, Jessica M J

    2016-02-09

    An important challenge in the simulation of biomolecular systems is a quantitative description of the protonation and deprotonation process of amino acid residues. Despite the seeming simplicity of adding or removing a positively charged hydrogen nucleus, simulating the actual protonation/deprotonation process is inherently difficult. It requires both the explicit treatment of the excess proton, including its charge defect delocalization and Grotthuss shuttling through inhomogeneous moieties (water and amino residues), and extensive sampling of coupled condensed phase motions. In a recent paper (J. Chem. Theory Comput. 2014, 10, 2729-2737), a multiscale approach was developed to map high-level quantum mechanics/molecular mechanics (QM/MM) data into a multiscale reactive molecular dynamics (MS-RMD) model in order to describe amino acid deprotonation in bulk water. In this article, we extend the fitting approach (called FitRMD) to create MS-RMD models for ionizable amino acids within proteins. The resulting models are shown to faithfully reproduce the free energy profiles of the reference QM/MM Hamiltonian for PT inside an example protein, the ClC-ec1 H(+)/Cl(-) antiporter. Moreover, we show that the resulting MS-RMD models are computationally efficient enough to then characterize more complex 2-dimensional free energy surfaces due to slow degrees of freedom such as water hydration of internal protein cavities that can be inherently coupled to the excess proton charge translocation. The FitRMD method is thus shown to be an effective way to map ab initio level accuracy into a much more computationally efficient reactive MD method in order to explicitly simulate and quantitatively describe amino acid protonation/deprotonation in proteins.

  2. Multiscale Support Vector Learning With Projection Operator Wavelet Kernel for Nonlinear Dynamical System Identification.

    PubMed

    Lu, Zhao; Sun, Jing; Butts, Kenneth

    2016-02-03

    A giant leap has been made in the past couple of decades with the introduction of kernel-based learning as a mainstay for designing effective nonlinear computational learning algorithms. In view of the geometric interpretation of conditional expectation and the ubiquity of multiscale characteristics in highly complex nonlinear dynamic systems [1]-[3], this paper presents a new orthogonal projection operator wavelet kernel, aiming at developing an efficient computational learning approach for nonlinear dynamical system identification. In the framework of multiresolution analysis, the proposed projection operator wavelet kernel can fulfill the multiscale, multidimensional learning to estimate complex dependencies. The special advantage of the projection operator wavelet kernel developed in this paper lies in the fact that it has a closed-form expression, which greatly facilitates its application in kernel learning. To the best of our knowledge, it is the first closed-form orthogonal projection wavelet kernel reported in the literature. It provides a link between grid-based wavelets and mesh-free kernel-based methods. Simulation studies for identifying the parallel models of two benchmark nonlinear dynamical systems confirm its superiority in model accuracy and sparsity.
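    For illustration of what a closed-form wavelet kernel looks like in kernel learning, a minimal sketch follows. Note the hedge: this is the classic translation-invariant Mexican-hat wavelet kernel commonly used in wavelet support vector learning, not the orthogonal projection operator wavelet kernel constructed in the paper, whose derivation is different.

```python
import math

def mexican_hat(u):
    """Mother wavelet h(u) = (1 - u^2) * exp(-u^2 / 2)."""
    return (1.0 - u * u) * math.exp(-u * u / 2.0)

def wavelet_kernel(x, z, a=1.0):
    """Translation-invariant wavelet kernel K(x, z) = prod_i h((x_i - z_i)/a).

    The dilation parameter a sets the scale; summing or choosing among
    several values of a is one simple way to expose multiscale structure
    to a kernel machine. This closed-form kernel is illustrative only.
    """
    k = 1.0
    for xi, zi in zip(x, z):
        k *= mexican_hat((xi - zi) / a)
    return k

# The kernel peaks when the inputs coincide and oscillates and decays as
# they separate, capturing local multiscale structure.
assert wavelet_kernel([0.0, 0.0], [0.0, 0.0]) == 1.0
assert wavelet_kernel([0.0], [3.0]) < 1.0
```

    Such a kernel can be dropped into any kernel method (SVM, kernel ridge regression) in place of the usual Gaussian kernel.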

  3. Multiscale modelling approaches for assessing cosmetic ingredients safety.

    PubMed

    Bois, Frédéric Y; Ochoa, Juan G Diaz; Gajewska, Monika; Kovarich, Simona; Mauch, Klaus; Paini, Alicia; Péry, Alexandre; Benito, Jose Vicente Sala; Teng, Sophie; Worth, Andrew

    2017-12-01

    The European Union's ban on animal testing for cosmetic ingredients and products has generated a strong momentum for the development of in silico and in vitro alternative methods. One focus of the COSMOS project was the ab initio prediction of kinetics and toxic effects through multiscale pharmacokinetic modeling and in vitro data integration. In our experience, mathematical or computer modeling and in vitro experiments are complementary. We present here a summary of the main models and results obtained within the framework of the project on these topics. A first section presents our work at the organelle and cellular level. We then move toward modeling cell-level effects (monitored continuously), multiscale physiologically based pharmacokinetic and effect models, and route-to-route extrapolation. We follow with a short presentation of the automated KNIME workflows developed for dissemination and easy use of the models. We end with a discussion of two challenges to the field: our limited ability to deal with massive data and with complex computations. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.

  4. Multiscale Rotation-Invariant Convolutional Neural Networks for Lung Texture Classification.

    PubMed

    Wang, Qiangchang; Zheng, Yuanjie; Yang, Gongping; Jin, Weidong; Chen, Xinjian; Yin, Yilong

    2018-01-01

    We propose a new multiscale rotation-invariant convolutional neural network (MRCNN) model for classifying various lung tissue types on high-resolution computed tomography. MRCNN employs the Gabor local binary pattern, which introduces a useful property in image analysis: invariance to image scale and rotation. In addition, we offer an approach to deal with the problems caused by the imbalanced number of samples between different classes in most of the existing works, accomplished by changing the overlap size between adjacent patches. Experimental results on a public interstitial lung disease database show the superior performance of the proposed method compared with the state of the art.

  5. Multiscale Aspects of Modeling Gas-Phase Nanoparticle Synthesis

    PubMed Central

    Buesser, B.; Gröhn, A.J.

    2013-01-01

    Aerosol reactors are utilized to manufacture nanoparticles in industrially relevant quantities. The development, understanding and scale-up of aerosol reactors can be facilitated with models and computer simulations. This review aims to provide an overview of recent developments of models and simulations and discuss their interconnection in a multiscale approach. A short introduction of the various aerosol reactor types and gas-phase particle dynamics is presented as a background for the later discussion of the models and simulations. Models are presented with decreasing time and length scales in sections on continuum, mesoscale, molecular dynamics and quantum mechanics models. PMID:23729992

  6. A multiphysics and multiscale software environment for modeling astrophysical systems

    NASA Astrophysics Data System (ADS)

    Portegies Zwart, Simon; McMillan, Steve; Harfst, Stefan; Groen, Derek; Fujii, Michiko; Nualláin, Breanndán Ó.; Glebbeek, Evert; Heggie, Douglas; Lombardi, James; Hut, Piet; Angelou, Vangelis; Banerjee, Sambaran; Belkus, Houria; Fragos, Tassos; Fregeau, John; Gaburov, Evghenii; Izzard, Rob; Jurić, Mario; Justham, Stephen; Sottoriva, Andrea; Teuben, Peter; van Bever, Joris; Yaron, Ofer; Zemp, Marcel

    2009-05-01

    We present MUSE, a software framework for combining existing computational tools for different astrophysical domains into a single multiphysics, multiscale application. MUSE facilitates the coupling of existing codes written in different languages by providing inter-language tools and by specifying an interface between each module and the framework that represents a balance between generality and computational efficiency. This approach allows scientists to use combinations of codes to solve highly coupled problems without the need to write new codes for other domains or significantly alter their existing codes. MUSE currently incorporates the domains of stellar dynamics, stellar evolution and stellar hydrodynamics for studying generalized stellar systems. We have now reached a "Noah's Ark" milestone, with (at least) two available numerical solvers for each domain. MUSE can treat multiscale and multiphysics systems in which the time- and size-scales are well separated, like simulating the evolution of planetary systems, small stellar associations, dense stellar clusters, galaxies and galactic nuclei. In this paper we describe three examples calculated using MUSE: the merger of two galaxies, the merger of two evolving stars, and a hybrid N-body simulation. In addition, we demonstrate an implementation of MUSE on a distributed computer which may also include special-purpose hardware, such as GRAPEs or GPUs, to accelerate computations. The current MUSE code base is publicly available as open source at http://muse.li.
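    The framework idea described above, heterogeneous solvers made interoperable by agreeing on one interface between each module and the driver, can be sketched with toy modules. All class and method names below are illustrative stand-ins, not the actual MUSE API.

```python
class Module:
    """Minimal stand-in for a framework module interface: every solver,
    whatever language or hardware it wraps, exposes the same
    evolve_model(t_end) call and tracks its own model time."""
    def __init__(self):
        self.model_time = 0.0
    def evolve_model(self, t_end):
        raise NotImplementedError

class Dynamics(Module):
    """Toy 'stellar dynamics' solver: advances a position at unit speed."""
    def __init__(self):
        super().__init__()
        self.x = 0.0
    def evolve_model(self, t_end):
        self.x += (t_end - self.model_time) * 1.0
        self.model_time = t_end

class Evolution(Module):
    """Toy 'stellar evolution' solver: burns mass at a fixed rate."""
    def __init__(self):
        super().__init__()
        self.mass = 1.0
    def evolve_model(self, t_end):
        self.mass -= (t_end - self.model_time) * 0.01
        self.model_time = t_end

def couple(modules, t_end, dt):
    """Operator-splitting driver: advance every module to the same shared
    time in lockstep; with well-separated scales, this is all the
    framework-level coupling that is needed."""
    t = 0.0
    while t < t_end - 1e-12:
        t = min(t + dt, t_end)
        for m in modules:
            m.evolve_model(t)

dyn, evo = Dynamics(), Evolution()
couple([dyn, evo], t_end=10.0, dt=0.5)
assert abs(dyn.x - 10.0) < 1e-9
assert abs(evo.mass - 0.9) < 1e-9
```

    Because the driver only sees the shared interface, a module can be swapped for an alternative solver of the same domain without touching the rest of the application, which is the "Noah's Ark" property mentioned above.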

  7. Multiscale analysis of neural spike trains.

    PubMed

    Ramezan, Reza; Marriott, Paul; Chenouri, Shojaeddin

    2014-01-30

    This paper studies the multiscale analysis of neural spike trains, through both graphical and Poisson process approaches. We introduce the interspike interval plot, which simultaneously visualizes characteristics of neural spiking activity at different time scales. Using an inhomogeneous Poisson process framework, we discuss multiscale estimates of the intensity functions of spike trains. We also introduce the windowing effect for two multiscale methods. Using quasi-likelihood, we develop bootstrap confidence intervals for the multiscale intensity function. We provide a cross-validation scheme, to choose the tuning parameters, and study its unbiasedness. Studying the relationship between the spike rate and the stimulus signal, we observe that adjusting for the first spike latency is important in cross-validation. We show, through examples, that the correlation between spike trains and spike count variability can be multiscale phenomena. Furthermore, we address the modeling of the periodicity of the spike trains caused by a stimulus signal or by brain rhythms. Within the multiscale framework, we introduce intensity functions for spike trains with multiplicative and additive periodic components. Analyzing a dataset from the retinogeniculate synapse, we compare the fit of these models with the Bayesian adaptive regression splines method and discuss the limitations of the methodology. Computational efficiency, which is usually a challenge in the analysis of spike trains, is one of the highlights of these new models. In an example, we show that the reconstruction quality of a complex intensity function demonstrates the ability of the multiscale methodology to crack the neural code. Copyright © 2013 John Wiley & Sons, Ltd.
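The multiscale intensity estimates described above can be illustrated with a simple kernel-smoothing sketch (not the authors' quasi-likelihood method; the function name, bandwidths, and the synthetic burst are all illustrative assumptions):

```python
import numpy as np

def intensity_estimates(spike_times, t_grid, bandwidths):
    """Gaussian-kernel intensity estimates of a spike train at several
    time scales (bandwidths). Returns a dict: bandwidth -> intensity(t)."""
    est = {}
    for h in bandwidths:
        # Sum of Gaussian kernels centred on each spike; each estimate
        # integrates to the total spike count (units: spikes per unit time).
        diffs = t_grid[:, None] - np.asarray(spike_times)[None, :]
        est[h] = np.exp(-0.5 * (diffs / h) ** 2).sum(axis=1) / (h * np.sqrt(2 * np.pi))
    return est

# Hypothetical example: a Poisson-like burst of 40 spikes around t = 1.0
rng = np.random.default_rng(0)
spikes = np.sort(rng.normal(1.0, 0.05, size=40))
t = np.linspace(0, 2, 201)
multi = intensity_estimates(spikes, t, bandwidths=[0.01, 0.05, 0.25])
```

Each bandwidth plays the role of one time scale: the narrow kernel resolves the burst's fine structure, the wide one the overall firing rate.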

  8. Multi-scale graph-cut algorithm for efficient water-fat separation.

    PubMed

    Berglund, Johan; Skorpil, Mikael

    2017-09-01

To improve the accuracy and robustness to noise of water-fat separation by unifying the multiscale and graph-cut based approaches to B0 correction. A previously proposed water-fat separation algorithm that corrects for B0 field inhomogeneity in 3D by a single quadratic pseudo-Boolean optimization (QPBO) graph cut was incorporated into a multi-scale framework, where field map solutions are propagated from coarse to fine scales for voxels that are not resolved by the graph cut. The accuracy of the single-scale and multi-scale QPBO algorithms was evaluated against benchmark reference datasets. The robustness to noise was evaluated by adding noise to the input data prior to water-fat separation. Both algorithms achieved the highest accuracy when compared with seven previously published methods, while computation times were acceptable for implementation in clinical routine. The multi-scale algorithm was more robust to noise than the single-scale algorithm, while causing only a small increase (+10%) of the reconstruction time. The proposed 3D multi-scale QPBO algorithm offers accurate water-fat separation, robustness to noise, and fast reconstruction. The software implementation is freely available to the research community. Magn Reson Med 78:941-949, 2017. © 2016 International Society for Magnetic Resonance in Medicine.
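The coarse-to-fine propagation step can be sketched in a few lines (a schematic of the idea only, not the published QPBO implementation; the array sizes and values are made up):

```python
import numpy as np

def upsample_nearest(coarse, fine_shape):
    """Nearest-neighbour upsampling of a coarse-scale field map to the fine grid."""
    ry = fine_shape[0] // coarse.shape[0]
    rx = fine_shape[1] // coarse.shape[1]
    return np.repeat(np.repeat(coarse, ry, axis=0), rx, axis=1)

def merge_scales(fine_est, resolved, coarse_est):
    """Keep fine-scale estimates where the fine solve resolved the voxel;
    fill the rest with the value propagated from the coarser scale."""
    return np.where(resolved, fine_est, upsample_nearest(coarse_est, fine_est.shape))

# Hypothetical 4x4 field map: two voxels left unresolved at the fine scale
fine = np.arange(16, dtype=float).reshape(4, 4)
mask = np.ones((4, 4), dtype=bool)
mask[0, 0] = mask[3, 3] = False
coarse = np.array([[100.0, 200.0], [300.0, 400.0]])
merged = merge_scales(fine, mask, coarse)
# merged[0, 0] == 100.0 and merged[3, 3] == 400.0; all other voxels unchanged
```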

  9. Multiscale Feature Analysis of Salivary Gland Branching Morphogenesis

    PubMed Central

    Baydil, Banu; Daley, William P.; Larsen, Melinda; Yener, Bülent

    2012-01-01

    Pattern formation in developing tissues involves dynamic spatio-temporal changes in cellular organization and subsequent evolution of functional adult structures. Branching morphogenesis is a developmental mechanism by which patterns are generated in many developing organs, and it is controlled by underlying molecular pathways. Understanding the relationship between molecular signaling, cellular behavior and the resulting morphological change requires quantification and categorization of the cellular behavior. In this study, tissue-level and cellular changes in the developing salivary gland in response to disruption of ROCK-mediated signaling are modeled by building cell-graphs to compute mathematical features capturing structural properties at multiple scales. These features were used to generate multiscale cell-graph signatures of untreated and ROCK signaling disrupted salivary gland organ explants. From confocal images of mouse submandibular salivary gland organ explants in which epithelial and mesenchymal nuclei were marked, a multiscale feature set capturing global structural, local structural, spectral, and morphological properties of the tissues was derived. Six feature selection algorithms and multiway modeling of the data were used to identify distinct subsets of cell-graph features that can uniquely classify and differentiate between different cell populations. Multiscale cell-graph analysis was most effective in classification of the tissue state. Cellular and tissue organization, as defined by a multiscale subset of cell-graph features, are both quantitatively distinct in epithelial and mesenchymal cell types, both in the presence and absence of ROCK inhibitors. Whereas tensor analysis demonstrated that epithelial tissue was affected the most by inhibition of ROCK signaling, significant multiscale changes in mesenchymal tissue organization were identified with this analysis that were not identified in previous biological studies. 
Here we show how to define and calculate a multiscale feature set as an effective computational approach to identify and quantify changes at multiple biological scales and to distinguish between different states in developing tissues. PMID:22403724

  10. Design of a framework for modeling, integration and simulation of physiological models.

    PubMed

    Erson, E Zeynep; Cavuşoğlu, M Cenk

    2012-09-01

    Multiscale modeling and integration of physiological models carry challenges due to the complex nature of physiological processes. High coupling within and among scales presents a significant challenge in constructing and integrating multiscale physiological models. In order to deal with such challenges in a systematic way, there is a significant need for an information technology framework, together with related analytical and computational tools, that will facilitate integration of models and simulations of complex biological systems. The Physiological Model Simulation, Integration and Modeling Framework (Phy-SIM) is an information technology framework providing the tools to facilitate development, integration and simulation of integrated models of human physiology. Phy-SIM brings software-level solutions to the challenges raised by the complex nature of physiological systems. The aim of Phy-SIM, and of this paper, is to lay a foundation with new approaches such as information flow and modular representation of physiological models. The ultimate goal is to enhance the development of both the models and the integration approaches of multiscale physiological processes; this paper therefore focuses on the design approaches that would achieve such a goal. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.

  11. Multi-scale image segmentation and numerical modeling in carbonate rocks

    NASA Astrophysics Data System (ADS)

    Alves, G. C.; Vanorio, T.

    2016-12-01

    Numerical methods based on computational simulations can be an important tool in estimating physical properties of rocks. These can complement experimental results, especially when time constraints and sample availability are a problem. However, computational models created at different scales can yield conflicting results with respect to the physical laboratory. This problem is exacerbated in carbonate rocks due to their heterogeneity at all scales. We developed a multi-scale approach performing segmentation of the rock images and numerical modeling across several scales, accounting for those heterogeneities. As a first step, we measured the porosity and the elastic properties of a group of carbonate samples with varying micrite content. Then, samples were imaged by Scanning Electron Microscope (SEM) as well as optical microscope at different magnifications. We applied three different image segmentation techniques to create numerical models from the SEM images and performed numerical simulations of the elastic wave-equation. Our results show that a multi-scale approach can efficiently account for micro-porosities in tight micrite-supported samples, yielding acoustic velocities comparable to those obtained experimentally. Nevertheless, in high-porosity samples characterized by larger grain/micrite ratio, results show that SEM scale images tend to overestimate velocities, mostly due to their inability to capture macro- and/or intragranular- porosity. This suggests that, for high-porosity carbonate samples, optical microscope images would be more suited for numerical simulations.

  12. A Multi-Scale Algorithm for Graffito Advertisement Detection from Images of Real Estate

    NASA Astrophysics Data System (ADS)

    Yang, Jun; Zhu, Shi-Jiao

    There is a significant need to detect and extract graffito advertisements embedded in housing images automatically. However, separating the advertisement region is difficult because housing images generally have complex backgrounds. In this paper, a detection algorithm that uses multi-scale Gabor filters to identify graffito regions is proposed. First, multi-scale Gabor filters with different orientations are applied to the housing images; the approach then uses the resulting frequency data to find likely graffito regions from the relationships between different channels, exploiting the complementary abilities of the different filters to solve the detection problem with low computational effort. Finally, the method is tested on several real estate images with embedded graffito advertisements to verify its robustness and efficiency. The experiments demonstrate that graffito regions can be detected quite well.
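A minimal multi-scale Gabor filter bank along the lines described can be sketched as follows (the sigmas, wavelengths, and the stripe test image are illustrative choices, not the paper's parameters):

```python
import numpy as np

def gabor_kernel(sigma, theta, wavelength, size=None):
    """Real part of a Gabor kernel at one scale (sigma) and orientation (theta)."""
    if size is None:
        size = int(6 * sigma) | 1          # odd width covering ~3 sigma
    ax = np.arange(size) - size // 2
    x, y = np.meshgrid(ax, ax)
    xr = x * np.cos(theta) + y * np.sin(theta)
    return np.exp(-(x**2 + y**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / wavelength)

def gabor_responses(img, sigmas=(2, 4, 8), thetas=(0, np.pi/4, np.pi/2, 3*np.pi/4)):
    """Filter the image with a multi-scale, multi-orientation Gabor bank
    (FFT-based circular convolution); returns {(sigma, theta): response}."""
    out = {}
    for s in sigmas:
        for th in thetas:
            k = gabor_kernel(s, th, wavelength=4 * s)
            kh, kw = k.shape
            pad = np.zeros_like(img, dtype=float)
            pad[:kh, :kw] = k
            resp = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(pad)))
            # roll so the kernel is centred on each pixel
            out[(s, th)] = np.roll(resp, (-(kh // 2), -(kw // 2)), axis=(0, 1))
    return out

# Hypothetical test image: vertical stripes with a period of 8 pixels
img = np.zeros((64, 64))
img[:, ::8] = 1.0
responses = gabor_responses(img)
```

The stripes vary horizontally, so the theta = 0 channel (modulation along x) responds far more strongly than the theta = pi/2 channel; comparing channel energies in this way is the kind of inter-channel relationship the detection step can exploit.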

  13. Prediction of water loss and viscoelastic deformation of apple tissue using a multiscale model.

    PubMed

    Aregawi, Wondwosen A; Abera, Metadel K; Fanta, Solomon W; Verboven, Pieter; Nicolai, Bart

    2014-11-19

    A two-dimensional multiscale water transport and mechanical model was developed to predict the water loss and deformation of apple tissue (Malus × domestica Borkh. cv. 'Jonagold') during dehydration. At the macroscopic level, a continuum approach was used to construct a coupled water transport and mechanical model. Water transport in the tissue was simulated using a phenomenological approach based on Fick's second law of diffusion. Mechanical deformation due to shrinkage was based on a structural mechanics model consisting of two parts: Yeoh strain energy functions to account for non-linearity, and Maxwell's rheological model of visco-elasticity. Apparent parameters of the macroscale model were computed from a microscale model. The latter accounted for water exchange between different microscopic structures of the tissue (intercellular space, the cell wall network and cytoplasm) using transport laws with the water potential as the driving force for water exchange between different compartments of the tissue. The microscale deformation mechanics were computed using a model in which the cells were represented as a closed thin-walled structure. The predicted apparent water transport properties of apple cortex tissue from the microscale model showed good agreement with the experimentally measured values. Deviations between calculated and measured mechanical properties of apple tissue were observed at strains larger than 3%, and were attributed to differences in water transport behavior between the experimental compression tests and the simulated dehydration-deformation behavior. Tissue dehydration and deformation in the high relative humidity range (>97% RH) could, however, be accurately predicted by the multiscale model. The multiscale model helped to understand the dynamics of the dehydration process and the importance of the different microstructural compartments (intercellular space, cell wall, membrane and cytoplasm) for water transport and mechanical deformation.
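The macroscale moisture-transport part (Fick's second law) can be illustrated with a one-dimensional explicit finite-difference sketch; the diffusivity, slab thickness, and boundary condition below are rough order-of-magnitude assumptions, not the paper's calibrated values:

```python
import numpy as np

def dehydrate_slab(c0, D, L, t_end, nx=51, c_surf=0.0):
    """Explicit finite-difference solution of Fick's second law
    dc/dt = D d2c/dx2 on a slab [0, L]: drying surface held at c_surf,
    zero-flux symmetry plane at x = 0."""
    dx = L / (nx - 1)
    dt = 0.4 * dx**2 / D                 # stable: D*dt/dx**2 <= 0.5
    c = np.full(nx, c0, dtype=float)
    for _ in range(int(t_end / dt)):
        lap = np.zeros_like(c)
        lap[1:-1] = c[2:] - 2 * c[1:-1] + c[:-2]
        lap[0] = 2 * (c[1] - c[0])       # mirror (no-flux) boundary
        c += D * dt / dx**2 * lap
        c[-1] = c_surf                   # fixed surface concentration
    return c

# Hypothetical numbers: D ~ 1e-10 m^2/s, 5 mm half-slab, one hour of drying
profile = dehydrate_slab(c0=0.85, D=1e-10, L=5e-3, t_end=3600.0)
```

The resulting profile is monotone from the moist interior to the dried surface; a real tissue simulation would feed the apparent diffusivity computed from the microscale model into `D`.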

  14. Active learning of constitutive relation from mesoscopic dynamics for macroscopic modeling of non-Newtonian flows

    NASA Astrophysics Data System (ADS)

    Zhao, Lifei; Li, Zhen; Caswell, Bruce; Ouyang, Jie; Karniadakis, George Em

    2018-06-01

    We simulate complex fluids by means of an on-the-fly coupling of the bulk rheology to the underlying microstructure dynamics. In particular, a continuum model of polymeric fluids is constructed without a pre-specified constitutive relation, but instead it is actively learned from mesoscopic simulations where the dynamics of polymer chains is explicitly computed. To couple the bulk rheology of polymeric fluids and the microscale dynamics of polymer chains, the continuum approach (based on the finite volume method) provides the transient flow field as inputs for the (mesoscopic) dissipative particle dynamics (DPD), and in turn DPD returns an effective constitutive relation to close the continuum equations. In this multiscale modeling procedure, we employ an active learning strategy based on Gaussian process regression (GPR) to minimize the number of expensive DPD simulations, where adaptively selected DPD simulations are performed only as necessary. Numerical experiments are carried out for flow past a circular cylinder of a non-Newtonian fluid, modeled at the mesoscopic level by bead-spring chains. The results show that only five DPD simulations are required to achieve an effective closure of the continuum equations at Reynolds number Re = 10. Furthermore, when Re is increased to 100, only one additional DPD simulation is required for constructing an extended GPR-informed model closure. Compared to traditional message-passing multiscale approaches, applying an active learning scheme to multiscale modeling of non-Newtonian fluids can significantly increase the computational efficiency. Although the method demonstrated here obtains only a local viscosity from the polymer dynamics, it can be extended to other multiscale models of complex fluids whose macro-rheology is unknown.
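The GPR-based active-learning loop can be sketched with a plain NumPy Gaussian process and a toy stand-in for the expensive DPD simulation (the kernel length scale, tolerance, and "constitutive relation" are all invented for illustration):

```python
import numpy as np

def rbf(a, b, ell=1.0):
    """Squared-exponential kernel between two 1-D point sets."""
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / ell) ** 2)

def gpr_fit_predict(x_tr, y_tr, x_te, ell=1.0, noise=1e-6):
    """Plain GP regression: posterior mean and variance at test points."""
    K = rbf(x_tr, x_tr, ell) + noise * np.eye(len(x_tr))
    Ks = rbf(x_te, x_tr, ell)
    mean = Ks @ np.linalg.solve(K, y_tr)
    var = 1.0 - np.sum(Ks * np.linalg.solve(K, Ks.T).T, axis=1)
    return mean, var

def active_learn(expensive_sim, candidates, n_init=2, tol=1e-3, max_calls=10):
    """Query the expensive model (standing in for a DPD run) only where the
    GP is most uncertain, until predictive variance falls below tol."""
    x = list(candidates[:n_init])
    y = [expensive_sim(v) for v in x]
    while len(x) < max_calls:
        _, var = gpr_fit_predict(np.array(x), np.array(y), candidates)
        i = int(np.argmax(var))
        if var[i] < tol:
            break
        x.append(candidates[i])
        y.append(expensive_sim(candidates[i]))
    return np.array(x), np.array(y)

# Toy "constitutive relation": shear-thinning viscosity vs shear rate (made up)
visc = lambda g: 1.0 / (1.0 + g ** 2)
gammas = np.linspace(0.0, 3.0, 31)
xq, yq = active_learn(visc, gammas)
```

The acquisition rule (maximum posterior variance) is what keeps the number of expensive simulations small: new queries land only where the current surrogate is least certain.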

  15. Multi-scale Material Parameter Identification Using LS-DYNA® and LS-OPT®

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stander, Nielen; Basudhar, Anirban; Basu, Ushnish

    2015-09-14

    Ever-tightening regulations on fuel economy, and the likely future regulation of carbon emissions, demand persistent innovation in vehicle design to reduce vehicle mass. Classical methods for computational mass reduction include sizing, shape and topology optimization. One of the few remaining options for weight reduction can be found in materials engineering and material design optimization. Apart from considering different types of materials, by adding material diversity and composite materials, an appealing option in automotive design is to engineer steel alloys for the purpose of reducing plate thickness while retaining sufficient strength and ductility required for durability and safety. A project to develop computational material models for advanced high strength steel is currently being executed under the auspices of the United States Automotive Materials Partnership (USAMP) funded by the US Department of Energy. Under this program, new Third Generation Advanced High Strength Steels (3GAHSS) are being designed, tested and integrated with the remaining design variables of a benchmark vehicle Finite Element model. The objectives of the project are to integrate atomistic, microstructural, forming and performance models to create an integrated computational materials engineering (ICME) toolkit for 3GAHSS. The mechanical properties of Advanced High Strength Steels (AHSS) are controlled by many factors, including phase composition and distribution in the overall microstructure, volume fraction, size and morphology of phase constituents as well as stability of the metastable retained austenite phase. The complex phase transformation and deformation mechanisms in these steels make the well-established traditional techniques obsolete, and a multi-scale microstructure-based modeling approach following the ICME [0] strategy was therefore chosen in this project. 
Multi-scale modeling as a major area of research and development is an outgrowth of the Comprehensive Test Ban Treaty of 1996, which banned surface testing of nuclear devices [1]. This had the effect that experimental work was reduced from large-scale tests to multiscale experiments to provide material models with validation at different length scales. In the subsequent years, industry realized that multi-scale modeling and simulation-based design were transferable to the design optimization of any structural system. Horstemeyer [1] lists a number of advantages of the use of multiscale modeling. Among these are: the reduction of product development time by alleviating costly trial-and-error iterations, as well as the reduction of product costs through innovations in material, product and process designs. Multi-scale modeling can reduce the number of costly large-scale experiments and can increase product quality by providing more accurate predictions. Research tends to be focused on each particular length scale, which enhances accuracy in the long term. This paper serves as an introduction to the LS-OPT and LS-DYNA methodology for multi-scale modeling. It mainly focuses on an approach to integrate material identification using material models of different length scales. As an example, a multi-scale material identification strategy, consisting of a Crystal Plasticity (CP) material model and a homogenized State Variable (SV) model, is discussed, and the parameter identification of the individual material models of different length scales is demonstrated. The paper concludes with thoughts on integrating the multi-scale methodology into the overall vehicle design.
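The idea of calibrating a homogenized state-variable model against lower-length-scale data can be sketched with a toy least-squares identification (the bilinear model, parameter grids, and synthetic "crystal-plasticity" curve are assumptions for illustration, not the LS-OPT workflow):

```python
import numpy as np

def sv_stress(strain, sigma_y, H):
    """Toy homogenised model: bilinear elastic-plastic response (assumption).
    E is a steel-like elastic modulus in MPa."""
    E = 200e3
    s = E * strain
    return np.where(s > sigma_y, sigma_y + H * (strain - sigma_y / E), s)

def identify(strain, stress_lower_scale, grid_sy, grid_H):
    """Brute-force least-squares identification of (sigma_y, H) against a
    lower-length-scale (e.g. crystal-plasticity) stress-strain curve."""
    best, best_err = None, np.inf
    for sy in grid_sy:
        for H in grid_H:
            err = np.sum((sv_stress(strain, sy, H) - stress_lower_scale) ** 2)
            if err < best_err:
                best, best_err = (sy, H), err
    return best

# Synthetic "CP" data generated from known parameters, then recovered
eps = np.linspace(0, 0.05, 100)
data = sv_stress(eps, 400.0, 2000.0)
p = identify(eps, data, np.arange(300, 501, 25), np.arange(1000, 3001, 250))
# recovers sigma_y = 400, H = 2000
```

In practice the optimizer would be gradient-based or surrogate-assisted rather than a grid search, and the "data" would come from CP simulations, but the coupling pattern is the same: the fine-scale model supplies the target response, the coarse-scale model supplies the parameters.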

  16. Asymptotic Expansion Homogenization for Multiscale Nuclear Fuel Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hales, J. D.; Tonks, M. R.; Chockalingam, K.

    2015-03-01

    Engineering scale nuclear fuel performance simulations can benefit by utilizing high-fidelity models running at a lower length scale. Lower length-scale models provide a detailed view of the material behavior that is used to determine the average material response at the macroscale. These lower length-scale calculations may provide insight into material behavior where experimental data is sparse or nonexistent. This multiscale approach is especially useful in the nuclear field, since irradiation experiments are difficult and expensive to conduct. The lower length-scale models complement the experiments by influencing the types of experiments required and by reducing the total number of experiments needed. This multiscale modeling approach is a central motivation in the development of the BISON-MARMOT fuel performance codes at Idaho National Laboratory. These codes seek to provide more accurate and predictive solutions for nuclear fuel behavior. One critical aspect of multiscale modeling is the ability to extract the relevant information from the lower length-scale simulations. One approach, the asymptotic expansion homogenization (AEH) technique, has proven to be an effective method for determining homogenized material parameters. The AEH technique prescribes a system of equations to solve at the microscale that are used to compute homogenized material constants for use at the engineering scale. In this work, we employ AEH to explore the effect of evolving microstructural thermal conductivity and elastic constants on nuclear fuel performance. We show that the AEH approach fits cleanly into the BISON and MARMOT codes and provides a natural, multidimensional homogenization capability.
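Full AEH requires solving a cell problem on the microstructure; as a cheap point of comparison, the classical Voigt and Reuss mixture bounds bracket the homogenized conductivity that such a solve would return (the phase values below are hypothetical):

```python
import numpy as np

def voigt_reuss_bounds(k_phases, vol_fracs):
    """Classical upper (Voigt, arithmetic mean) and lower (Reuss, harmonic
    mean) bounds on the homogenised conductivity of a multiphase
    microstructure; a cheap stand-in for a full AEH cell solve."""
    k = np.asarray(k_phases, float)
    f = np.asarray(vol_fracs, float)
    assert np.isclose(f.sum(), 1.0)
    upper = np.sum(f * k)            # phases in parallel with the flux
    lower = 1.0 / np.sum(f / k)      # phases in series with the flux
    return lower, upper

# Hypothetical two-phase fuel microstructure:
# matrix 4 W/m-K at 90% volume, gas bubbles 0.1 W/m-K at 10%
lo, hi = voigt_reuss_bounds([4.0, 0.1], [0.9, 0.1])
```

An AEH result for an actual microstructure geometry would fall between these two bounds; tracking how the bounds (and the AEH value) shift as the bubble fraction evolves is the kind of microstructure-to-macroscale feedback described above.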

  17. Bridging scales through multiscale modeling: a case study on protein kinase A.

    PubMed

    Boras, Britton W; Hirakis, Sophia P; Votapka, Lane W; Malmstrom, Robert D; Amaro, Rommie E; McCulloch, Andrew D

    2015-01-01

    The goal of multiscale modeling in biology is to use structurally based physico-chemical models to integrate across temporal and spatial scales of biology and thereby improve mechanistic understanding of, for example, how a single mutation can alter organism-scale phenotypes. This approach may also inform therapeutic strategies or identify candidate drug targets that might otherwise have been overlooked. However, in many cases, it remains unclear how best to synthesize information obtained from various scales and analysis approaches, such as atomistic molecular models, Markov state models (MSM), subcellular network models, and whole cell models. In this paper, we use protein kinase A (PKA) activation as a case study to explore how computational methods that model different physical scales can complement each other and integrate into an improved multiscale representation of the biological mechanisms. Using measured crystal structures, we show how molecular dynamics (MD) simulations coupled with atomic-scale MSMs can provide conformations for Brownian dynamics (BD) simulations to feed transitional states and kinetic parameters into protein-scale MSMs. We discuss how milestoning can give reaction probabilities and forward-rate constants of cAMP association events by seamlessly integrating MD and BD simulation scales. These rate constants coupled with MSMs provide a robust representation of the free energy landscape, enabling access to kinetic, and thermodynamic parameters unavailable from current experimental data. These approaches have helped to illuminate the cooperative nature of PKA activation in response to distinct cAMP binding events. Collectively, this approach exemplifies a general strategy for multiscale model development that is applicable to a wide range of biological problems.

  18. Modeling of heterogeneous elastic materials by the multiscale hp-adaptive finite element method

    NASA Astrophysics Data System (ADS)

    Klimczak, Marek; Cecot, Witold

    2018-01-01

    We present an enhancement of the multiscale finite element method (MsFEM) by combining it with the hp-adaptive FEM. Such a discretization-based homogenization technique is a versatile tool for modeling heterogeneous materials with fast oscillating elasticity coefficients. No assumption on periodicity of the domain is required. In order to avoid direct, so-called overkill mesh computations, a coarse mesh with effective stiffness matrices is used and special shape functions are constructed to account for the local heterogeneities at the micro resolution. The automatic adaptivity (hp-type at the macro resolution and h-type at the micro resolution) increases efficiency of computation. In this paper details of the modified MsFEM are presented and a numerical test performed on a Fichera corner domain is presented in order to validate the proposed approach.

  19. An Efficient Multiscale Finite-Element Method for Frequency-Domain Seismic Wave Propagation

    DOE PAGES

    Gao, Kai; Fu, Shubin; Chung, Eric T.

    2018-02-13

    The frequency-domain seismic-wave equation, that is, the Helmholtz equation, has many important applications in seismological studies, yet is very challenging to solve, particularly for large geological models. Iterative solvers, domain decomposition, or parallel strategies can partially alleviate the computational burden, but these approaches may still encounter nontrivial difficulties in complex geological models where a sufficiently fine mesh is required to represent the fine-scale heterogeneities. We develop a novel numerical method to solve the frequency-domain acoustic wave equation on the basis of the multiscale finite-element theory. We discretize a heterogeneous model with a coarse mesh and employ carefully constructed high-order multiscale basis functions to form the basis space for the coarse mesh. Solved from medium- and frequency-dependent local problems, these multiscale basis functions can effectively capture the medium's fine-scale heterogeneity and the source's frequency information, leading to a discrete system matrix with a much smaller dimension compared with those from conventional methods. We then obtain an accurate solution to the acoustic Helmholtz equation by solving only a small linear system instead of a large linear system constructed on the fine mesh in conventional methods. We verify our new method using several models of complicated heterogeneities, and the results show that our new multiscale method can solve the Helmholtz equation in complex models with high accuracy and extremely low computational costs.
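For contrast with the multiscale coarse-mesh approach, the conventional fine-mesh discretization it replaces can be sketched in 1D (the grid, frequency, and velocities below are illustrative assumptions):

```python
import numpy as np

def helmholtz_1d(c, omega, L, src_idx):
    """Second-order finite-difference solve of u'' + (omega/c)^2 u = f on
    [0, L] with homogeneous Dirichlet ends; c may vary per node
    (heterogeneous medium). Every fine-grid node becomes an unknown,
    which is exactly the cost the multiscale basis functions avoid."""
    n = len(c)
    h = L / (n - 1)
    k2 = (omega / np.asarray(c, float)) ** 2
    A = np.zeros((n, n), dtype=complex)
    f = np.zeros(n, dtype=complex)
    for i in range(1, n - 1):
        A[i, i - 1] = A[i, i + 1] = 1.0 / h**2
        A[i, i] = -2.0 / h**2 + k2[i]
    A[0, 0] = A[-1, -1] = 1.0          # u = 0 at both ends
    f[src_idx] = 1.0 / h               # discrete point source
    return np.linalg.solve(A, f)

# Two-layer medium (1500 m/s over 3000 m/s), 25 Hz source at node 50
c = np.where(np.arange(201) < 100, 1500.0, 3000.0)
u = helmholtz_1d(c, omega=2 * np.pi * 25.0, L=1000.0, src_idx=50)
```

In 3D the analogous fine-mesh matrix becomes enormous, which is why solving only a small coarse-mesh system with heterogeneity-aware basis functions yields such large savings.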

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chacón, L., E-mail: chacon@lanl.gov; Chen, G.; Knoll, D.A.

    We review the state of the art in the formulation, implementation, and performance of so-called high-order/low-order (HOLO) algorithms for challenging multiscale problems. HOLO algorithms attempt to couple one or several high-complexity physical models (the high-order model, HO) with low-complexity ones (the low-order model, LO). The primary goal of HOLO algorithms is to achieve nonlinear convergence between HO and LO components while minimizing memory footprint and managing the computational complexity in a practical manner. Key to the HOLO approach is the use of the LO representations to address temporal stiffness, effectively accelerating the convergence of the HO/LO coupled system. The HOLO approach is broadly underpinned by the concept of nonlinear elimination, which enables segregation of the HO and LO components in ways that can effectively use heterogeneous architectures. The accuracy and efficiency benefits of HOLO algorithms are demonstrated with specific applications to radiation transport, gas dynamics, plasmas (both Eulerian and Lagrangian formulations), and ocean modeling. Across this broad application spectrum, HOLO algorithms achieve significant accuracy improvements at a fraction of the cost compared to conventional approaches. It follows that HOLO algorithms hold significant potential for high-fidelity system-scale multiscale simulations leveraging exascale computing.

  2. A Novel Method to Verify Multilevel Computational Models of Biological Systems Using Multiscale Spatio-Temporal Meta Model Checking

    PubMed Central

    Pârvu, Ovidiu; Gilbert, David

    2016-01-01

    Insights gained from multilevel computational models of biological systems can be translated into real-life applications only if the model correctness has been verified first. One of the most frequently employed in silico techniques for computational model verification is model checking. Traditional model checking approaches only consider the evolution of numeric values, such as concentrations, over time and are appropriate for computational models of small scale systems (e.g. intracellular networks). However, for gaining a systems level understanding of how biological organisms function it is essential to consider more complex large scale biological systems (e.g. organs). Verifying computational models of such systems requires capturing both how numeric values and properties of (emergent) spatial structures (e.g. area of multicellular population) change over time and across multiple levels of organization, which are not considered by existing model checking approaches. To address this limitation we have developed a novel approximate probabilistic multiscale spatio-temporal meta model checking methodology for verifying multilevel computational models relative to specifications describing the desired/expected system behaviour. The methodology is generic and supports computational models encoded using various high-level modelling formalisms because it is defined relative to time series data and not the models used to generate it. In addition, the methodology can be automatically adapted to case-study-specific types of spatial structures and properties using the spatio-temporal meta model checking concept. To automate the computational model verification process we have implemented the model checking approach in the software tool Mule (http://mule.modelchecking.org). 
Its applicability is illustrated against four systems biology computational models previously published in the literature encoding the rat cardiovascular system dynamics, the uterine contractions of labour, the Xenopus laevis cell cycle and the acute inflammation of the gut and lung. Our methodology and software will enable computational biologists to efficiently develop reliable multilevel computational models of biological systems. PMID:27187178
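The core of checking a specification against simulated time-series output can be sketched with two bounded temporal operators (this is a schematic of the idea only, not Mule's approximate probabilistic algorithm; the trace and thresholds are invented):

```python
import numpy as np

def eventually_within(signal, pred, t_lo, t_hi, dt):
    """Bounded 'eventually': does pred hold at some sample in [t_lo, t_hi]?"""
    i0, i1 = int(round(t_lo / dt)), int(round(t_hi / dt)) + 1
    return bool(np.any(pred(signal[i0:i1])))

def globally(signal, pred):
    """'Globally': pred holds at every sample of the trace."""
    return bool(np.all(pred(signal)))

# Toy trace standing in for a simulated concentration output (assumption)
t = np.arange(0, 10, 0.1)
conc = 1.0 - np.exp(-t)
ok1 = eventually_within(conc, lambda x: x > 0.9, t_lo=2.0, t_hi=5.0, dt=0.1)
ok2 = globally(conc, lambda x: x >= 0.0)
```

Extending such checks to emergent spatial structures means replacing the scalar predicate with one computed on detected regions (e.g. the area of a multicellular population) at each time point, which is the extension the methodology above formalizes.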

  3. A Novel Method to Verify Multilevel Computational Models of Biological Systems Using Multiscale Spatio-Temporal Meta Model Checking.

    PubMed

    Pârvu, Ovidiu; Gilbert, David

    2016-01-01

    Insights gained from multilevel computational models of biological systems can be translated into real-life applications only if the model correctness has been verified first. One of the most frequently employed in silico techniques for computational model verification is model checking. Traditional model checking approaches only consider the evolution of numeric values, such as concentrations, over time and are appropriate for computational models of small scale systems (e.g. intracellular networks). However for gaining a systems level understanding of how biological organisms function it is essential to consider more complex large scale biological systems (e.g. organs). Verifying computational models of such systems requires capturing both how numeric values and properties of (emergent) spatial structures (e.g. area of multicellular population) change over time and across multiple levels of organization, which are not considered by existing model checking approaches. To address this limitation we have developed a novel approximate probabilistic multiscale spatio-temporal meta model checking methodology for verifying multilevel computational models relative to specifications describing the desired/expected system behaviour. The methodology is generic and supports computational models encoded using various high-level modelling formalisms because it is defined relative to time series data and not the models used to generate it. In addition, the methodology can be automatically adapted to case study specific types of spatial structures and properties using the spatio-temporal meta model checking concept. To automate the computational model verification process we have implemented the model checking approach in the software tool Mule (http://mule.modelchecking.org). 
Its applicability is illustrated against four systems biology computational models previously published in the literature encoding the rat cardiovascular system dynamics, the uterine contractions of labour, the Xenopus laevis cell cycle and the acute inflammation of the gut and lung. Our methodology and software will enable computational biologists to efficiently develop reliable multilevel computational models of biological systems.
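The statistical core of approximate probabilistic model checking can be sketched in a few lines (a generic illustration with an invented growth model, not code from Mule): run many stochastic simulations, evaluate a spatio-temporal property on each trace, and use the observed frequency as an estimate of the probability that the property holds.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_population_area(steps=100):
    # invented stochastic growth model standing in for a multilevel
    # simulation output: the area of a multicellular population over time
    area, trace = 1.0, []
    for _ in range(steps):
        area *= 1.0 + rng.normal(0.01, 0.02)  # noisy ~1% growth per step
        trace.append(area)
    return np.array(trace)

def property_holds(trace, threshold=1.5):
    # spatio-temporal property: "the population area eventually
    # exceeds the threshold" (an 'eventually'-style query)
    return bool(np.any(trace > threshold))

n_traces = 500
frequency = sum(property_holds(simulate_population_area())
                for _ in range(n_traces)) / n_traces
# 'frequency' estimates the probability that the property holds; an
# approximate probabilistic checker compares it against a stated bound
```

A real checker adds statistical guarantees (confidence intervals or sequential hypothesis tests) on top of this frequency estimate.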

  4. Community effort endorsing multiscale modelling, multiscale data science and multiscale computing for systems medicine.

    PubMed

    Zanin, Massimiliano; Chorbev, Ivan; Stres, Blaz; Stalidzans, Egils; Vera, Julio; Tieri, Paolo; Castiglione, Filippo; Groen, Derek; Zheng, Huiru; Baumbach, Jan; Schmid, Johannes A; Basilio, José; Klimek, Peter; Debeljak, Nataša; Rozman, Damjana; Schmidt, Harald H H W

    2017-12-05

Systems medicine holds many promises, but has so far provided only a limited number of proofs of principle. To address this roadblock, possible barriers and challenges of translating systems medicine into clinical practice need to be identified and addressed. The members of the European Cooperation in Science and Technology (COST) Action CA15120 Open Multiscale Systems Medicine (OpenMultiMed) wish to engage the scientific community of systems medicine and multiscale modelling, data science and computing, to provide their feedback in a structured manner. This will result in follow-up white papers and open access resources to accelerate the clinical translation of systems medicine. © The Author 2017. Published by Oxford University Press.

  5. Performance of distributed multiscale simulations

    PubMed Central

    Borgdorff, J.; Ben Belgacem, M.; Bona-Casas, C.; Fazendeiro, L.; Groen, D.; Hoenen, O.; Mizeranschi, A.; Suter, J. L.; Coster, D.; Coveney, P. V.; Dubitzky, W.; Hoekstra, A. G.; Strand, P.; Chopard, B.

    2014-01-01

    Multiscale simulations model phenomena across natural scales using monolithic or component-based code, running on local or distributed resources. In this work, we investigate the performance of distributed multiscale computing of component-based models, guided by six multiscale applications with different characteristics and from several disciplines. Three modes of distributed multiscale computing are identified: supplementing local dependencies with large-scale resources, load distribution over multiple resources, and load balancing of small- and large-scale resources. We find that the first mode has the apparent benefit of increasing simulation speed, and the second mode can increase simulation speed if local resources are limited. Depending on resource reservation and model coupling topology, the third mode may result in a reduction of resource consumption. PMID:24982258

  6. Integrative computational models of cardiac arrhythmias -- simulating the structurally realistic heart

    PubMed Central

    Trayanova, Natalia A; Tice, Brock M

    2009-01-01

    Simulation of cardiac electrical function, and specifically, simulation aimed at understanding the mechanisms of cardiac rhythm disorders, represents an example of a successful integrative multiscale modeling approach, uncovering emergent behavior at the successive scales in the hierarchy of structural complexity. The goal of this article is to present a review of the integrative multiscale models of realistic ventricular structure used in the quest to understand and treat ventricular arrhythmias. It concludes with the new advances in image-based modeling of the heart and the promise it holds for the development of individualized models of ventricular function in health and disease. PMID:20628585

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fogarty, Aoife C., E-mail: fogarty@mpip-mainz.mpg.de; Potestio, Raffaello, E-mail: potestio@mpip-mainz.mpg.de; Kremer, Kurt, E-mail: kremer@mpip-mainz.mpg.de

A fully atomistic modelling of many biophysical and biochemical processes at biologically relevant length- and time scales is beyond our reach with current computational resources, and one approach to overcome this difficulty is the use of multiscale simulation techniques. In such simulations, when system properties necessitate a boundary between resolutions that falls within the solvent region, one can use an approach such as the Adaptive Resolution Scheme (AdResS), in which solvent particles change their resolution on the fly during the simulation. Here, we apply the existing AdResS methodology to biomolecular systems, simulating a fully atomistic protein with an atomistic hydration shell, solvated in a coarse-grained particle reservoir and heat bath. Using as a test case an aqueous solution of the regulatory protein ubiquitin, we first confirm the validity of the AdResS approach for such systems, via an examination of protein and solvent structural and dynamical properties. We then demonstrate how, in addition to providing a computational speedup, such a multiscale AdResS approach can yield otherwise inaccessible physical insights into biomolecular function. We use our methodology to show that protein structure and dynamics can still be correctly modelled using only a few shells of atomistic water molecules. We also discuss aspects of the AdResS methodology peculiar to biomolecular simulations.

  8. Progress in fast, accurate multi-scale climate simulations

    DOE PAGES

    Collins, W. D.; Johansen, H.; Evans, K. J.; ...

    2015-06-01

We present a survey of physical and computational techniques that have the potential to contribute to the next generation of high-fidelity, multi-scale climate simulations. Examples of the climate science problems that can be investigated with more depth with these computational improvements include the capture of remote forcings of localized hydrological extreme events, an accurate representation of cloud features over a range of spatial and temporal scales, and parallel, large ensembles of simulations to more effectively explore model sensitivities and uncertainties. Numerical techniques, such as adaptive mesh refinement, implicit time integration, and separate treatment of fast physical time scales are enabling improved accuracy and fidelity in simulation of dynamics and allowing more complete representations of climate features at the global scale. At the same time, partnerships with computer science teams have focused on taking advantage of evolving computer architectures such as many-core processors and GPUs. As a result, approaches which were previously considered prohibitively costly have become both more efficient and scalable. In combination, progress in these three critical areas is poised to transform climate modeling in the coming decades.
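One of the numerical techniques named above, implicit time integration, can be illustrated with a minimal sketch (generic, not drawn from the survey): for a stiff relaxation equation with a fast time scale, backward Euler stays stable at step sizes for which forward Euler diverges.

```python
import numpy as np

lam, dt, steps = 1000.0, 0.01, 100   # lam*dt = 10: far beyond explicit stability

def g(t):
    return np.cos(t)                  # slowly varying forcing (the "slow" scale)

# model problem: dy/dt = -lam * (y - g(t)), fast relaxation toward g
y_fe = y_be = 0.0
for n in range(steps):
    t = n * dt
    # forward (explicit) Euler: unstable once lam*dt > 2
    y_fe = y_fe + dt * (-lam) * (y_fe - g(t))
    # backward (implicit) Euler: solved in closed form for this linear problem
    y_be = (y_be + dt * lam * g(t + dt)) / (1.0 + dt * lam)
# y_be tracks cos(t) closely at t = 1, while |y_fe| has blown up
```

The same stability argument is what lets implicit schemes step over fast physical time scales without resolving them.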

  9. Data Assimilation and Propagation of Uncertainty in Multiscale Cardiovascular Simulation

    NASA Astrophysics Data System (ADS)

    Schiavazzi, Daniele; Marsden, Alison

    2015-11-01

Cardiovascular modeling is the application of computational tools to predict hemodynamics. State-of-the-art techniques couple a 3D incompressible Navier-Stokes solver with a boundary circulation model and can predict local and peripheral hemodynamics, analyze the post-operative performance of surgical designs and complement clinical data collection, minimizing invasive and risky measurement practices. The ability of these tools to make useful predictions is directly related to their accuracy in representing measured physiologies. Tuning of model parameters is therefore a topic of paramount importance and should include clinical data uncertainty, revealing how this uncertainty will affect the predictions. We propose a fully Bayesian, multi-level approach to data assimilation of uncertain clinical data in multiscale circulation models. To reduce the computational cost, we use a stable, condensed approximation of the 3D model built by linear sparse regression of the pressure/flow rate relationship at the outlets. Finally, we consider the problem of non-invasively propagating the uncertainty in model parameters to the resulting hemodynamics and compare Monte Carlo simulation with Stochastic Collocation approaches based on Polynomial or Multi-resolution Chaos expansions.
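The surrogate-plus-propagation idea can be sketched in a few lines (the quadratic pressure/flow law and all parameter values here are invented for illustration; the paper's condensation uses sparse regression on a full 3D model): fit a cheap polynomial surrogate to a few "expensive" model evaluations, then push parameter uncertainty through it by Monte Carlo.

```python
import numpy as np

rng = np.random.default_rng(0)

def model_3d(q):
    # stand-in for the expensive 3D solve: outlet pressure as a
    # function of flow rate (made-up quadratic resistance law)
    return 0.8 * q + 0.15 * q ** 2

# condensed surrogate of the pressure/flow-rate relationship,
# fitted from a handful of "3D" evaluations
q_train = np.linspace(2.0, 8.0, 7)
coef = np.polyfit(q_train, model_3d(q_train), deg=2)

# propagate uncertainty in the flow rate through the cheap surrogate
q_samples = rng.normal(5.0, 0.25, 20000)      # uncertain clinical flow
p_samples = np.polyval(coef, q_samples)
p_mean, p_std = p_samples.mean(), p_samples.std()
```

Stochastic collocation replaces the random sampling with deterministic quadrature nodes in the parameter space, but the propagation structure is the same.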

  10. Parallel multiscale simulations of a brain aneurysm

    PubMed Central

    Grinberg, Leopold; Fedosov, Dmitry A.; Karniadakis, George Em

    2012-01-01

Cardiovascular pathologies, such as a brain aneurysm, are affected by the global blood circulation as well as by the local microrheology. Hence, developing computational models for such cases requires the coupling of disparate spatial and temporal scales often governed by diverse mathematical descriptions, e.g., by partial differential equations (continuum) and ordinary differential equations for discrete particles (atomistic). However, interfacing atomistic-based with continuum-based domain discretizations is a challenging problem that requires both mathematical and computational advances. We present here a hybrid methodology that enabled us to perform the first multi-scale simulations of platelet depositions on the wall of a brain aneurysm. The large scale flow features in the intracranial network are accurately resolved by using the high-order spectral element Navier-Stokes solver NεκTαr. The blood rheology inside the aneurysm is modeled using a coarse-grained stochastic molecular dynamics approach (the dissipative particle dynamics method) implemented in the parallel code LAMMPS. The continuum and atomistic domains overlap with interface conditions provided by effective forces computed adaptively to ensure continuity of states across the interface boundary. A two-way interaction is allowed with the time-evolving boundary of the (deposited) platelet clusters tracked by an immersed boundary method. The corresponding heterogeneous solvers (NεκTαr and LAMMPS) are linked together by a computational multilevel message passing interface that facilitates modularity and high parallel efficiency. Results of multiscale simulations of clot formation inside the aneurysm in a patient-specific arterial tree are presented. We also discuss the computational challenges involved and present scalability results of our coupled solver on up to 300K computer processors.
Validation of such coupled atomistic-continuum models is a main open issue that has to be addressed in future work. PMID:23734066

  11. Parallel multiscale simulations of a brain aneurysm.

    PubMed

    Grinberg, Leopold; Fedosov, Dmitry A; Karniadakis, George Em

    2013-07-01

Cardiovascular pathologies, such as a brain aneurysm, are affected by the global blood circulation as well as by the local microrheology. Hence, developing computational models for such cases requires the coupling of disparate spatial and temporal scales often governed by diverse mathematical descriptions, e.g., by partial differential equations (continuum) and ordinary differential equations for discrete particles (atomistic). However, interfacing atomistic-based with continuum-based domain discretizations is a challenging problem that requires both mathematical and computational advances. We present here a hybrid methodology that enabled us to perform the first multi-scale simulations of platelet depositions on the wall of a brain aneurysm. The large scale flow features in the intracranial network are accurately resolved by using the high-order spectral element Navier-Stokes solver NεκTαr. The blood rheology inside the aneurysm is modeled using a coarse-grained stochastic molecular dynamics approach (the dissipative particle dynamics method) implemented in the parallel code LAMMPS. The continuum and atomistic domains overlap with interface conditions provided by effective forces computed adaptively to ensure continuity of states across the interface boundary. A two-way interaction is allowed with the time-evolving boundary of the (deposited) platelet clusters tracked by an immersed boundary method. The corresponding heterogeneous solvers (NεκTαr and LAMMPS) are linked together by a computational multilevel message passing interface that facilitates modularity and high parallel efficiency. Results of multiscale simulations of clot formation inside the aneurysm in a patient-specific arterial tree are presented. We also discuss the computational challenges involved and present scalability results of our coupled solver on up to 300K computer processors.
Validation of such coupled atomistic-continuum models is a main open issue that has to be addressed in future work.

  12. Parallel multiscale simulations of a brain aneurysm

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grinberg, Leopold; Fedosov, Dmitry A.; Karniadakis, George Em, E-mail: george_karniadakis@brown.edu

    2013-07-01

Cardiovascular pathologies, such as a brain aneurysm, are affected by the global blood circulation as well as by the local microrheology. Hence, developing computational models for such cases requires the coupling of disparate spatial and temporal scales often governed by diverse mathematical descriptions, e.g., by partial differential equations (continuum) and ordinary differential equations for discrete particles (atomistic). However, interfacing atomistic-based with continuum-based domain discretizations is a challenging problem that requires both mathematical and computational advances. We present here a hybrid methodology that enabled us to perform the first multiscale simulations of platelet depositions on the wall of a brain aneurysm. The large scale flow features in the intracranial network are accurately resolved by using the high-order spectral element Navier–Stokes solver NεκTαr. The blood rheology inside the aneurysm is modeled using a coarse-grained stochastic molecular dynamics approach (the dissipative particle dynamics method) implemented in the parallel code LAMMPS. The continuum and atomistic domains overlap with interface conditions provided by effective forces computed adaptively to ensure continuity of states across the interface boundary. A two-way interaction is allowed with the time-evolving boundary of the (deposited) platelet clusters tracked by an immersed boundary method. The corresponding heterogeneous solvers (NεκTαr and LAMMPS) are linked together by a computational multilevel message passing interface that facilitates modularity and high parallel efficiency. Results of multiscale simulations of clot formation inside the aneurysm in a patient-specific arterial tree are presented. We also discuss the computational challenges involved and present scalability results of our coupled solver on up to 300 K computer processors.
Validation of such coupled atomistic-continuum models is a main open issue that has to be addressed in future work.

  13. Adaptive-projection intrinsically transformed multivariate empirical mode decomposition in cooperative brain-computer interface applications.

    PubMed

    Hemakom, Apit; Goverdovsky, Valentin; Looney, David; Mandic, Danilo P

    2016-04-13

An extension to multivariate empirical mode decomposition (MEMD), termed adaptive-projection intrinsically transformed MEMD (APIT-MEMD), is proposed to cater for power imbalances and inter-channel correlations in real-world multichannel data. It is shown that the APIT-MEMD exhibits similar or better performance than MEMD for a large number of projection vectors, whereas it outperforms MEMD for the critical case of a small number of projection vectors within the sifting algorithm. We also employ the noise-assisted APIT-MEMD within our proposed intrinsic multiscale analysis framework and illustrate the advantages of such an approach in a notoriously noise-dominated cooperative brain-computer interface (BCI) based on steady-state visual evoked potentials and P300 responses. Finally, we show that for a joint cognitive BCI task, the proposed intrinsic multiscale analysis framework improves system performance in terms of the information transfer rate. © 2016 The Author(s).
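At the heart of any EMD variant, including MEMD and APIT-MEMD, is the sifting step: subtracting the mean of spline envelopes drawn through the local extrema. A minimal univariate sketch (with crude endpoint handling; real implementations use more careful boundary treatment and stopping criteria):

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.signal import argrelextrema

def sift_once(t, x):
    """One sifting step of univariate empirical mode decomposition:
    subtract the mean of the upper and lower cubic-spline envelopes."""
    maxima = argrelextrema(x, np.greater)[0]
    minima = argrelextrema(x, np.less)[0]
    # crude boundary handling: pin both envelopes to the endpoints
    maxima = np.concatenate(([0], maxima, [len(x) - 1]))
    minima = np.concatenate(([0], minima, [len(x) - 1]))
    upper = CubicSpline(t[maxima], x[maxima])(t)
    lower = CubicSpline(t[minima], x[minima])(t)
    return x - 0.5 * (upper + lower)

t = np.linspace(0.0, 1.0, 1000)
signal = np.sin(2 * np.pi * 20 * t) + t       # fast mode riding on a slow trend
imf = signal.copy()
for _ in range(3):                            # a few sifting iterations
    imf = sift_once(t, imf)
# away from the boundaries, imf now approximates the fast oscillation
```

MEMD generalises this by projecting multichannel data onto many direction vectors and averaging the resulting envelopes; APIT-MEMD adapts how those projection vectors are chosen.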

  14. Simulation of dilute polymeric fluids in a three-dimensional contraction using a multiscale FENE model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

Griebel, M., E-mail: griebel@ins.uni-bonn.de; Rüttgers, A., E-mail: ruettgers@ins.uni-bonn.de

The multiscale FENE model is applied to a 3D square-square contraction flow problem. For this purpose, the stochastic Brownian configuration field method (BCF) has been coupled with our fully parallelized three-dimensional Navier-Stokes solver NaSt3DGPF. The robustness of the BCF method enables the numerical simulation of high Deborah number flows for which most macroscopic methods suffer from stability issues. The results of our simulations are compared with experimental measurements from the literature and show very good agreement. In particular, flow phenomena such as strong vortex enhancement, streamline divergence and a flow inversion for highly elastic flows are reproduced. Due to their computational complexity, our simulations require massively parallel computations. Using a domain decomposition approach with MPI, the implementation achieves excellent scale-up results for up to 128 processors.

  15. A variational multiscale method for particle-cloud tracking in turbomachinery flows

    NASA Astrophysics Data System (ADS)

    Corsini, A.; Rispoli, F.; Sheard, A. G.; Takizawa, K.; Tezduyar, T. E.; Venturini, P.

    2014-11-01

We present a computational method for simulation of particle-laden flows in turbomachinery. The method is based on a stabilized finite element fluid mechanics formulation and a finite element particle-cloud tracking method. We focus on induced-draft fans used in process industries to extract exhaust gases in the form of a two-phase fluid with a dispersed solid phase. The particle-laden flow causes material wear on the fan blades, degrading their aerodynamic performance, and therefore accurate simulation of the flow is essential for reliable computational turbomachinery analysis and design. The turbulent-flow nature of the problem is handled with a Reynolds-Averaged Navier-Stokes model and Streamline-Upwind/Petrov-Galerkin/Pressure-Stabilizing/Petrov-Galerkin stabilization; the particle-cloud trajectories are calculated based on the flow field and closure models for the turbulence-particle interaction, and one-way dependence is assumed between the flow field and particle dynamics. We propose a closure model utilizing the scale separation feature of the variational multiscale method, and compare that to the closure utilizing the eddy viscosity model. We present computations for axial- and centrifugal-fan configurations, and compare the computed data to those obtained from experiments, analytical approaches, and other computational methods.

  16. Multiscale systems biology of trauma-induced coagulopathy.

    PubMed

    Tsiklidis, Evan; Sims, Carrie; Sinno, Talid; Diamond, Scott L

    2018-07-01

Trauma with hypovolemic shock is an extreme pathological state that challenges the body to maintain blood pressure and oxygenation in the face of hemorrhagic blood loss. In conjunction with surgical actions and transfusion therapy, survival requires the patient's blood to maintain hemostasis to stop bleeding. The physics of the problem are multiscale: (a) the systemic circulation sets the global blood pressure in response to blood loss and resuscitation therapy, (b) local tissue perfusion is altered by localized vasoregulatory mechanisms and bleeding, and (c) altered blood and vessel biology resulting from the trauma as well as local hemodynamics control the assembly of clotting components at the site of injury. Building upon ongoing modeling efforts to simulate arterial or venous thrombosis in a diseased vasculature, computer simulation of trauma-induced coagulopathy is an emerging approach to understand patient risk and predict response. Despite uncertainties in quantifying the patient's dynamic injury burden, multiscale systems biology may help link blood biochemistry at the molecular level to multiorgan responses in the bleeding patient. As an important goal of systems modeling, establishing early metrics of a patient's high-dimensional trajectory may help guide transfusion therapy or warn of subsequent later stage bleeding or thrombotic risks. This article is categorized under: Analytical and Computational Methods > Computational Methods; Biological Mechanisms > Regulatory Biology; Models of Systems Properties and Processes > Mechanistic Models. © 2018 Wiley Periodicals, Inc.

  17. A Review of Computational Methods in Materials Science: Examples from Shock-Wave and Polymer Physics

    PubMed Central

    Steinhauser, Martin O.; Hiermaier, Stefan

    2009-01-01

    This review discusses several computational methods used on different length and time scales for the simulation of material behavior. First, the importance of physical modeling and its relation to computer simulation on multiscales is discussed. Then, computational methods used on different scales are shortly reviewed, before we focus on the molecular dynamics (MD) method. Here we survey in a tutorial-like fashion some key issues including several MD optimization techniques. Thereafter, computational examples for the capabilities of numerical simulations in materials research are discussed. We focus on recent results of shock wave simulations of a solid which are based on two different modeling approaches and we discuss their respective assets and drawbacks with a view to their application on multiscales. Then, the prospects of computer simulations on the molecular length scale using coarse-grained MD methods are covered by means of examples pertaining to complex topological polymer structures including star-polymers, biomacromolecules such as polyelectrolytes and polymers with intrinsic stiffness. This review ends by highlighting new emerging interdisciplinary applications of computational methods in the field of medical engineering where the application of concepts of polymer physics and of shock waves to biological systems holds a lot of promise for improving medical applications such as extracorporeal shock wave lithotripsy or tumor treatment. PMID:20054467
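As a tutorial-style companion to the MD discussion above (a generic textbook sketch, not code from the review): the velocity Verlet integrator for two Lennard-Jones particles in one dimension, with energy conservation as the usual sanity check.

```python
import numpy as np

def lj(r):
    """Lennard-Jones energy and scalar force magnitude (epsilon = sigma = 1)."""
    inv6 = r ** -6
    energy = 4.0 * (inv6 ** 2 - inv6)
    force = 24.0 * (2.0 * inv6 ** 2 - inv6) / r   # -dU/dr: positive = repulsive
    return energy, force

def forces(x):
    r = x[1] - x[0]
    _, f = lj(abs(r))
    # equal and opposite pair forces (Newton's third law)
    return np.array([-f * np.sign(r), f * np.sign(r)])

def total_energy(x, v):
    e, _ = lj(abs(x[1] - x[0]))
    return e + 0.5 * (v[0] ** 2 + v[1] ** 2)      # unit masses

x = np.array([0.0, 1.2])      # slightly stretched past the minimum at 2**(1/6)
v = np.zeros(2)
dt, steps = 0.001, 2000

e0 = total_energy(x, v)
f = forces(x)
for _ in range(steps):
    v += 0.5 * dt * f          # half kick
    x += dt * v                # drift
    f = forces(x)
    v += 0.5 * dt * f          # half kick
e1 = total_energy(x, v)
# for a symplectic integrator like velocity Verlet, e1 stays close to e0
```

Production MD codes layer neighbour lists, cutoffs and thermostats on top of exactly this update.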

  18. Performance of hybrid programming models for multiscale cardiac simulations: preparing for petascale computation.

    PubMed

    Pope, Bernard J; Fitch, Blake G; Pitman, Michael C; Rice, John J; Reumann, Matthias

    2011-10-01

    Future multiscale and multiphysics models that support research into human disease, translational medical science, and treatment can utilize the power of high-performance computing (HPC) systems. We anticipate that computationally efficient multiscale models will require the use of sophisticated hybrid programming models, mixing distributed message-passing processes [e.g., the message-passing interface (MPI)] with multithreading (e.g., OpenMP, Pthreads). The objective of this study is to compare the performance of such hybrid programming models when applied to the simulation of a realistic physiological multiscale model of the heart. Our results show that the hybrid models perform favorably when compared to an implementation using only the MPI and, furthermore, that OpenMP in combination with the MPI provides a satisfactory compromise between performance and code complexity. Having the ability to use threads within MPI processes enables the sophisticated use of all processor cores for both computation and communication phases. Considering that HPC systems in 2012 will have two orders of magnitude more cores than what was used in this study, we believe that faster than real-time multiscale cardiac simulations can be achieved on these systems.

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mehrez, Loujaine; Ghanem, Roger; Aitharaju, Venkat

Design of non-crimp fabric (NCF) composites entails major challenges pertaining to (1) the complex fine-scale morphology of the constituents, (2) the manufacturing-produced inconsistency of this morphology spatially, and thus (3) the ability to build reliable, robust, and efficient computational surrogate models to account for this complex nature. Traditional approaches to construct computational surrogate models have been to average over the fluctuations of the material properties at different scale lengths. This fails to account for the fine-scale features and fluctuations in morphology and material properties of the constituents, as well as fine-scale phenomena such as damage and cracks. In addition, it fails to accurately predict the scatter in macroscopic properties, which is vital to the design process and behavior prediction. In this work, funded in part by the Department of Energy, we present an approach for addressing these challenges by relying on polynomial chaos representations of both input parameters and material properties at different scales. Moreover, we emphasize the efficiency and robustness of integrating the polynomial chaos expansion with multiscale tools to perform multiscale assimilation, characterization, propagation, and prediction, all of which are necessary to construct the data-driven surrogate models required for the design of composites under uncertainty. These data-driven constructions provide an accurate map from parameters (and their uncertainties) at all scales to the system-level behavior relevant for design. While this perspective is quite general and applicable to all multiscale systems, NCF composites present a particular hierarchy of scales that permits the efficient implementation of these concepts.
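The regression-based ("non-intrusive") polynomial chaos construction can be sketched in one dimension (a generic illustration with an invented response function, not the authors' NCF pipeline): expand a response in probabilists' Hermite polynomials of a standard normal input, recover the coefficients by least squares, and read the response mean off the zeroth coefficient.

```python
import numpy as np
from numpy.polynomial import hermite_e as He   # probabilists' Hermite basis

rng = np.random.default_rng(1)

def response(x):
    # stand-in for an expensive multiscale model evaluated at input x
    return np.exp(x)

degree, n_samples = 6, 4000
x = rng.standard_normal(n_samples)             # standard normal germ
y = response(x)

# non-intrusive PCE: least-squares fit in the Hermite basis He_0..He_6
phi = np.stack([He.hermeval(x, np.eye(degree + 1)[j])
                for j in range(degree + 1)], axis=1)
coef, *_ = np.linalg.lstsq(phi, y, rcond=None)

pce_mean = coef[0]   # E[He_0] = 1 and higher modes have zero mean,
                     # so the zeroth coefficient estimates E[response]
```

For exp(X) with X ~ N(0, 1), the exact mean is exp(1/2), which the fitted zeroth coefficient should approximate.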

  20. Recent Enhancements to the Community Multiscale Air Quality Modeling System (CMAQ)

    EPA Science Inventory

    EPA’s Office of Research and Development, Computational Exposure Division held a webinar on January 31, 2017 to present the recent scientific and computational updates made by EPA to the Community Multi-Scale Air Quality Model (CMAQ). Topics covered included: (1) Improveme...

  1. Multiscale fluid-structure interaction modelling to determine the mechanical stimulation of bone cells in a tissue engineered scaffold.

    PubMed

Zhao, Feihu; Vaughan, Ted J; McNamara, Laoise M

    2015-04-01

Recent studies have shown that mechanical stimulation, by means of flow perfusion and mechanical compression (or stretching), enhances osteogenic differentiation of mesenchymal stem cells and bone cells within biomaterial scaffolds in vitro. However, the precise mechanisms by which such stimulation enhances bone regeneration are not yet fully understood. Previous computational studies have sought to characterise the mechanical stimulation of cells within biomaterial scaffolds using either computational fluid dynamics or finite element (FE) approaches. However, the physical environment within a scaffold under perfusion is extremely complex and requires a multiscale and multiphysics approach to study the mechanical stimulation of cells. In this study, we seek to determine the mechanical stimulation of osteoblasts seeded in a biomaterial scaffold under flow perfusion and mechanical compression using multiscale modelling by two-way fluid-structure interaction and FE approaches. The mechanical stimulation, in terms of wall shear stress (WSS) and strain in osteoblasts, is quantified at different locations within the scaffold for cells of different attachment morphologies (attached, bridged). The results show that 75.4 % of the scaffold surface has a WSS of 0.1-10 mPa, which indicates the likelihood of bone cell differentiation at these locations. For attached and bridged osteoblasts, the maximum strains are 397 and 177,200 με, respectively. Additionally, the results from mechanical compression show that attached cells are more stimulated (maximum strain = 22,600 με) than bridged cells (maximum strain = 10,000 με). Such information is important for understanding the biological response of osteoblasts under in vitro stimulation. Finally, a combination of perfusion and compression of a tissue engineering scaffold is suggested for osteogenic differentiation.
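The quoted WSS band can be related to perfusion settings through the classical Poiseuille estimate τ = 4μQ/(πR³) for a cylindrical channel (a back-of-the-envelope sketch with illustrative values, not the paper's scaffold geometry, which required a full fluid-structure interaction model):

```python
import math

def poiseuille_wss(flow_rate, radius, viscosity):
    """Wall shear stress (Pa) for fully developed laminar flow in a
    circular channel: tau = 4 * mu * Q / (pi * R^3)."""
    return 4.0 * viscosity * flow_rate / (math.pi * radius ** 3)

# illustrative values: culture-medium-like viscosity, a 100-um pore
mu = 1.0e-3        # Pa*s
R = 100e-6         # m
Q = 4.0e-12        # m^3/s (about 0.24 uL/min through the pore)
tau = poiseuille_wss(Q, R, mu)                 # ~5 mPa
stimulatory = 1e-4 <= tau <= 1e-2              # the 0.1-10 mPa band quoted above
```

Such a per-pore estimate only brackets the flow-rate regime; the spatial WSS distribution over the scaffold surface requires the multiscale CFD/FE model described in the abstract.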

  2. Computational models of aortic coarctation in hypoplastic left heart syndrome: considerations on validation of a detailed 3D model.

    PubMed

    Biglino, Giovanni; Corsini, Chiara; Schievano, Silvia; Dubini, Gabriele; Giardini, Alessandro; Hsia, Tain-Yen; Pennati, Giancarlo; Taylor, Andrew M

    2014-05-01

    Reliability of computational models for cardiovascular investigations strongly depends on their validation against physical data. This study aims to experimentally validate a computational model of complex congenital heart disease (i.e., surgically palliated hypoplastic left heart syndrome with aortic coarctation) thus demonstrating that hemodynamic information can be reliably extrapolated from the model for clinically meaningful investigations. A patient-specific aortic arch model was tested in a mock circulatory system and the same flow conditions were re-created in silico, by setting an appropriate lumped parameter network (LPN) attached to the same three-dimensional (3D) aortic model (i.e., multi-scale approach). The model included a modified Blalock-Taussig shunt and coarctation of the aorta. Different flow regimes were tested as well as the impact of uncertainty in viscosity. Computational flow and pressure results were in good agreement with the experimental signals, both qualitatively, in terms of the shape of the waveforms, and quantitatively (mean aortic pressure 62.3 vs. 65.1 mmHg, 4.8% difference; mean aortic flow 28.0 vs. 28.4% inlet flow, 1.4% difference; coarctation pressure drop 30.0 vs. 33.5 mmHg, 10.4% difference), proving the reliability of the numerical approach. It was observed that substantial changes in fluid viscosity or using a turbulent model in the numerical simulations did not significantly affect flows and pressures of the investigated physiology. Results highlighted how the non-linear fluid dynamic phenomena occurring in vitro must be properly described to ensure satisfactory agreement. This study presents methodological considerations for using experimental data to preliminarily set up a computational model, and then simulate a complex congenital physiology using a multi-scale approach.
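The lumped-parameter side of such a multi-scale model can be as simple as a two-element Windkessel compartment (a generic sketch with illustrative parameter values, not the LPN tuned in the study):

```python
import numpy as np

def windkessel_2(q, dt, R, C, p0=0.0):
    """Two-element Windkessel: C * dP/dt = Q(t) - P/R, explicit Euler."""
    p = np.empty(len(q))
    p_curr = p0
    for i, qi in enumerate(q):
        p_curr += dt * (qi - p_curr / R) / C
        p[i] = p_curr
    return p

R, C = 1.0, 1.5            # mmHg*s/mL and mL/mmHg (illustrative values)
dt = 0.001                 # s
q = np.full(20000, 5.0)    # constant inflow of 5 mL/s for 20 s
p = windkessel_2(q, dt, R, C)
# at steady state all inflow drains through R, so P -> Q * R = 5 mmHg
```

In a multi-scale simulation this ODE is solved alongside the 3D domain, exchanging pressure and flow at each outlet every time step.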

  3. Multiscale corner detection and classification using local properties and semantic patterns

    NASA Astrophysics Data System (ADS)

    Gallo, Giovanni; Giuoco, Alessandro L.

    2002-05-01

A new technique to detect, localize and classify corners in digital closed curves is proposed. The technique is based on the correct estimation of a support region for each point. We compute multiscale curvature to detect and localize corners. As a further step, with the aid of some local features, it is possible to classify corners into seven distinct types. Classification is performed using a set of rules which describe corners according to preset semantic patterns. Compared with existing techniques, the proposed approach belongs to the family of algorithms that try to explain the curve, instead of simply labeling it. Moreover, our technique works in a manner similar to what are believed to be typical mechanisms of human perception.
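The support-region idea can be sketched with a simple angle-based curvature measure evaluated at a single scale k (a generic illustration, not the authors' algorithm): on a closed digital curve, the measure is zero on straight segments and peaks at corners.

```python
import numpy as np

def corner_strength(pts, k):
    """Angle-based curvature on a closed curve with support half-width k:
    0 on straight segments, pi/2 at a right-angle corner."""
    n = len(pts)
    strength = np.empty(n)
    for i in range(n):
        a = pts[(i - k) % n] - pts[i]
        b = pts[(i + k) % n] - pts[i]
        c = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
        strength[i] = np.pi - np.arccos(np.clip(c, -1.0, 1.0))
    return strength

# closed square contour, 25 points per side (corners at indices 0, 25, 50, 75)
s_ = np.linspace(0.0, 1.0, 25, endpoint=False)
z, o = np.zeros(25), np.ones(25)
pts = np.concatenate([
    np.stack([s_, z], axis=1), np.stack([o, s_], axis=1),
    np.stack([1 - s_, o], axis=1), np.stack([z, 1 - s_], axis=1)])

strength = corner_strength(pts, k=5)
n = len(pts)
corners = [i for i in range(n)
           if strength[i] > 1.0
           and strength[i] > strength[(i - 1) % n]
           and strength[i] > strength[(i + 1) % n]]
```

A multiscale detector repeats this for several values of k and keeps the corners that persist across scales, which is what makes the per-point support-region estimate robust.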

  4. Fast Particle Methods for Multiscale Phenomena Simulations

    NASA Technical Reports Server (NTRS)

    Koumoutsakos, P.; Wray, A.; Shariff, K.; Pohorille, Andrew

    2000-01-01

We are developing particle methods oriented at improving computational modeling capabilities of multiscale physical phenomena in: (i) high Reynolds number unsteady vortical flows, (ii) particle-laden and interfacial flows, (iii) molecular dynamics studies of nanoscale droplets and studies of the structure, functions, and evolution of the earliest living cell. The unifying computational approach involves particle methods implemented in parallel computer architectures. The inherent adaptivity, robustness and efficiency of particle methods make them a multidisciplinary computational tool capable of bridging the gap between micro-scale and continuum flow simulations. Using efficient tree data structures, multipole expansion algorithms, and improved particle-grid interpolation, particle methods allow for simulations using millions of computational elements, making possible the resolution of a wide range of length and time scales of these important physical phenomena. The current challenges in these simulations are in: (i) the proper formulation of particle methods at the molecular and continuum levels for the discretization of the governing equations, (ii) the resolution of the wide range of time and length scales governing the phenomena under investigation, (iii) the minimization of numerical artifacts that may interfere with the physics of the systems under consideration, and (iv) the parallelization of processes such as tree traversal and grid-particle interpolations. We are conducting simulations using vortex methods, molecular dynamics and smooth particle hydrodynamics, exploiting their unifying concepts such as: the solution of the N-body problem on parallel computers, highly accurate particle-particle and grid-particle interpolations, parallel FFTs, and the formulation of processes such as diffusion in the context of particle methods. This approach enables us to bridge seemingly unrelated areas of research.
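One of the ingredients listed, particle-grid interpolation, can be sketched with a cloud-in-cell (linear) deposit onto a periodic 1D grid (a generic illustration; all names and values are invented): each particle splits its weight between its two nearest cells, which conserves the total deposited quantity exactly.

```python
import numpy as np

def deposit_cic(positions, weights, nx, box):
    """Cloud-in-cell deposit of particle weights onto a periodic 1D grid:
    each particle contributes linearly to its two nearest cells."""
    grid = np.zeros(nx)
    dx = box / nx
    xi = (positions % box) / dx
    i0 = np.floor(xi).astype(int)
    frac = xi - i0                       # fractional offset within the cell
    # np.add.at handles repeated indices correctly (unbuffered scatter-add)
    np.add.at(grid, i0 % nx, weights * (1.0 - frac))
    np.add.at(grid, (i0 + 1) % nx, weights * frac)
    return grid

rng = np.random.default_rng(2)
pos = rng.uniform(0.0, 10.0, 1000)       # particle positions in a box of length 10
w = rng.uniform(0.5, 1.5, 1000)          # particle weights (e.g. charge or mass)
rho = deposit_cic(pos, w, nx=64, box=10.0)
```

The same linear kernel, applied in reverse, interpolates grid quantities back to the particles, so momentum exchanged through the grid is conserved.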

  5. Computational Materials: Modeling and Simulation of Nanostructured Materials and Systems

    NASA Technical Reports Server (NTRS)

    Gates, Thomas S.; Hinkley, Jeffrey A.

    2003-01-01

    The paper provides details on the structure and implementation of the Computational Materials program at the NASA Langley Research Center. Examples are given that illustrate the suggested approaches to predicting the behavior and influencing the design of nanostructured materials such as high-performance polymers, composites, and nanotube-reinforced polymers. Primary simulation and measurement methods applicable to multi-scale modeling are outlined. Key challenges including verification and validation of models are highlighted and discussed within the context of NASA's broad mission objectives.

  6. Multiscale Simulations of Dynamics of Ferroelectric Domains

    NASA Astrophysics Data System (ADS)

    Liu, Shi

    Ferroelectrics with switchable polarization have many important technological applications, which heavily rely on the interactions between the polarization and external perturbations. Understanding the dynamical response of ferroelectric materials is crucial for the discovery and development of new design principles and engineering strategies for optimized and breakthrough applications of ferroelectrics. We developed a multiscale computational approach that combines methods at different length and time scales to elucidate the connection between local structures, domain dynamics, and macroscopic finite-temperature properties of ferroelectrics. We started from first-principles calculations of ferroelectrics to build a model interatomic potential, enabling large-scale molecular dynamics (MD) simulations. The atomistic insights of nucleation and growth at the domain wall obtained from MD were then incorporated into a continuum model within the framework of Landau-Ginzburg-Devonshire theory. This progressive theoretical framework allows for the first time an efficient and accurate estimation of macroscopic properties such as the coercive field for a broad range of ferroelectrics from first-principles. This multiscale approach has also been applied to explore the effect of dipolar defects on ferroelectric switching and to understand the origin of giant electro-strain coupling. ONR, NSF, Carnegie Institution for Science.

  7. Multiscale Cloud System Modeling

    NASA Technical Reports Server (NTRS)

    Tao, Wei-Kuo; Moncrieff, Mitchell W.

    2009-01-01

    The central theme of this paper is to describe how cloud system resolving models (CRMs) of grid spacing approximately 1 km have been applied to various important problems in atmospheric science across a wide range of spatial and temporal scales and how these applications relate to other modeling approaches. A long-standing problem concerns the representation of organized precipitating convective cloud systems in weather and climate models. Since CRMs resolve the mesoscale to large scales of motion (i.e., 10 km to global) they explicitly address the cloud system problem. By explicitly representing organized convection, CRMs bypass restrictive assumptions associated with convective parameterization such as the scale gap between cumulus and large-scale motion. Dynamical models provide insight into the physical mechanisms involved with scale interaction and convective organization. Multiscale CRMs simulate convective cloud systems in computational domains up to global and have been applied in place of contemporary convective parameterizations in global models. Multiscale CRMs pose a new challenge for model validation, which is met in an integrated approach involving CRMs, operational prediction systems, observational measurements, and dynamical models in a new international project: the Year of Tropical Convection, which has an emphasis on organized tropical convection and its global effects.

  8. A Comprehensive Specimen-Specific Multiscale Data Set for Anatomical and Mechanical Characterization of the Tibiofemoral Joint

    PubMed Central

    Chokhandre, Snehal; Colbrunn, Robb; Bennetts, Craig; Erdemir, Ahmet

    2015-01-01

    Understanding of tibiofemoral joint mechanics at multiple spatial scales is essential for developing effective preventive measures and treatments for both pathology and injury management. Currently, there is a distinct lack of specimen-specific biomechanical data at multiple spatial scales, e.g., joint, tissue, and cell scales. Comprehensive multiscale data may improve the understanding of the relationship between biomechanical and anatomical markers across various scales. Furthermore, specimen-specific multiscale data for the tibiofemoral joint may assist development and validation of specimen-specific computational models that may be useful for more thorough analyses of the biomechanical behavior of the joint. This study describes an aggregation of procedures for acquisition of multiscale anatomical and biomechanical data for the tibiofemoral joint. Magnetic resonance imaging was used to acquire anatomical morphology at the joint scale. A robotic testing system was used to quantify joint level biomechanical response under various loading scenarios. Tissue level material properties were obtained from the same specimen for the femoral and tibial articular cartilage, medial and lateral menisci, anterior and posterior cruciate ligaments, and medial and lateral collateral ligaments. Histology data were also obtained for all tissue types to measure specimen-specific cell scale information, e.g., cellular distribution. This study is the first of its kind to establish a comprehensive multiscale data set for a musculoskeletal joint and the presented data collection approach can be used as a general template to guide acquisition of specimen-specific comprehensive multiscale data for musculoskeletal joints. PMID:26381404

  9. A Hybrid Multi-Scale Model of Crystal Plasticity for Handling Stress Concentrations

    DOE PAGES

    Sun, Shang; Ramazani, Ali; Sundararaghavan, Veera

    2017-09-04

Microstructural effects become important at regions of stress concentrators such as notches, cracks and contact surfaces. A multiscale model is presented that efficiently captures microstructural details at such critical regions. The approach is based on a multiresolution mesh that includes an explicit microstructure representation at critical regions where stresses are localized. At regions farther away from the stress concentration, a reduced order model that statistically captures the effect of the microstructure is employed. The statistical model is based on a finite element representation of the orientation distribution function (ODF). As an illustrative example, we have applied the multiscaling method to compute the stress intensity factor K_I around the crack tip in a wedge-opening load specimen. The approach is verified against an analytical solution within the linear elasticity approximation and is then extended to allow modeling of microstructural effects on crack tip plasticity.

  10. A Hybrid Multi-Scale Model of Crystal Plasticity for Handling Stress Concentrations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sun, Shang; Ramazani, Ali; Sundararaghavan, Veera

Microstructural effects become important at regions of stress concentrators such as notches, cracks and contact surfaces. A multiscale model is presented that efficiently captures microstructural details at such critical regions. The approach is based on a multiresolution mesh that includes an explicit microstructure representation at critical regions where stresses are localized. At regions farther away from the stress concentration, a reduced order model that statistically captures the effect of the microstructure is employed. The statistical model is based on a finite element representation of the orientation distribution function (ODF). As an illustrative example, we have applied the multiscaling method to compute the stress intensity factor K_I around the crack tip in a wedge-opening load specimen. The approach is verified against an analytical solution within the linear elasticity approximation and is then extended to allow modeling of microstructural effects on crack tip plasticity.

  11. An extended algebraic variational multiscale-multigrid-multifractal method (XAVM4) for large-eddy simulation of turbulent two-phase flow

    NASA Astrophysics Data System (ADS)

    Rasthofer, U.; Wall, W. A.; Gravemeier, V.

    2018-04-01

A novel and comprehensive computational method, referred to as the eXtended Algebraic Variational Multiscale-Multigrid-Multifractal Method (XAVM4), is proposed for large-eddy simulation of the particularly challenging problem of turbulent two-phase flow. The XAVM4 involves multifractal subgrid-scale modeling as well as a Nitsche-type extended finite element method as an approach for two-phase flow. The application of an advanced structural subgrid-scale modeling approach in conjunction with a sharp representation of the discontinuities at the interface between the two bulk fluids promises high-fidelity large-eddy simulation of turbulent two-phase flow. The high potential of the XAVM4 is demonstrated for large-eddy simulation of turbulent two-phase bubbly channel flow, that is, turbulent channel flow carrying a single large bubble of the size of the channel half-width.

  12. Modeling Complex Biological Flows in Multi-Scale Systems using the APDEC Framework

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Trebotich, D

We have developed advanced numerical algorithms to model biological fluids in multiscale flow environments using the software framework developed under the SciDAC APDEC ISIC. The foundation of our computational effort is an approach for modeling DNA-laden fluids as ''bead-rod'' polymers whose dynamics are fully coupled to an incompressible viscous solvent. The method is capable of modeling short range forces and interactions between particles using soft potentials and rigid constraints. Our methods are based on higher-order finite difference methods in complex geometry with adaptivity, leveraging algorithms and solvers in the APDEC Framework. Our Cartesian grid embedded boundary approach to incompressible viscous flow in irregular geometries has also been interfaced to a fast and accurate level-sets method within the APDEC Framework for extracting surfaces from volume renderings of medical image data and used to simulate cardio-vascular and pulmonary flows in critical anatomies.

  13. Modeling complex biological flows in multi-scale systems using the APDEC framework

    NASA Astrophysics Data System (ADS)

    Trebotich, David

    2006-09-01

    We have developed advanced numerical algorithms to model biological fluids in multiscale flow environments using the software framework developed under the SciDAC APDEC ISIC. The foundation of our computational effort is an approach for modeling DNA laden fluids as ''bead-rod'' polymers whose dynamics are fully coupled to an incompressible viscous solvent. The method is capable of modeling short range forces and interactions between particles using soft potentials and rigid constraints. Our methods are based on higher-order finite difference methods in complex geometry with adaptivity, leveraging algorithms and solvers in the APDEC Framework. Our Cartesian grid embedded boundary approach to incompressible viscous flow in irregular geometries has also been interfaced to a fast and accurate level-sets method within the APDEC Framework for extracting surfaces from volume renderings of medical image data and used to simulate cardio-vascular and pulmonary flows in critical anatomies.

  14. Adaptive resolution simulation of a biomolecule and its hydration shell: Structural and dynamical properties

    NASA Astrophysics Data System (ADS)

    Fogarty, Aoife C.; Potestio, Raffaello; Kremer, Kurt

    2015-05-01

    A fully atomistic modelling of many biophysical and biochemical processes at biologically relevant length- and time scales is beyond our reach with current computational resources, and one approach to overcome this difficulty is the use of multiscale simulation techniques. In such simulations, when system properties necessitate a boundary between resolutions that falls within the solvent region, one can use an approach such as the Adaptive Resolution Scheme (AdResS), in which solvent particles change their resolution on the fly during the simulation. Here, we apply the existing AdResS methodology to biomolecular systems, simulating a fully atomistic protein with an atomistic hydration shell, solvated in a coarse-grained particle reservoir and heat bath. Using as a test case an aqueous solution of the regulatory protein ubiquitin, we first confirm the validity of the AdResS approach for such systems, via an examination of protein and solvent structural and dynamical properties. We then demonstrate how, in addition to providing a computational speedup, such a multiscale AdResS approach can yield otherwise inaccessible physical insights into biomolecular function. We use our methodology to show that protein structure and dynamics can still be correctly modelled using only a few shells of atomistic water molecules. We also discuss aspects of the AdResS methodology peculiar to biomolecular simulations.
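The on-the-fly change of solvent resolution described in this record hinges on a smooth weighting function between the atomistic and coarse-grained regions. Below is a minimal sketch of such an AdResS-style weight; the cos^2 crossover is the commonly used form, but the region sizes and force-mixing comment here are illustrative assumptions, not the paper's implementation:

```python
import math

# AdResS-style resolution weight: w = 1 in the atomistic region,
# w = 0 in the coarse-grained reservoir, with a smooth cos^2 crossover
# over a hybrid region of width d_hy.

def adress_weight(r, r_at, d_hy):
    """Weight as a function of distance r from the atomistic centre."""
    if r <= r_at:
        return 1.0                        # fully atomistic
    if r >= r_at + d_hy:
        return 0.0                        # fully coarse-grained
    return math.cos(math.pi * (r - r_at) / (2.0 * d_hy)) ** 2

# Pair forces are then typically mixed as
#   F_ij = w_i * w_j * F_atomistic + (1 - w_i * w_j) * F_cg,
# so solvent particles change resolution smoothly as they drift
# through the hybrid zone.
```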

  15. A Multiphysics and Multiscale Software Environment for Modeling Astrophysical Systems

    NASA Astrophysics Data System (ADS)

    Portegies Zwart, Simon; McMillan, Steve; O'Nualláin, Breanndán; Heggie, Douglas; Lombardi, James; Hut, Piet; Banerjee, Sambaran; Belkus, Houria; Fragos, Tassos; Fregeau, John; Fuji, Michiko; Gaburov, Evghenii; Glebbeek, Evert; Groen, Derek; Harfst, Stefan; Izzard, Rob; Jurić, Mario; Justham, Stephen; Teuben, Peter; van Bever, Joris; Yaron, Ofer; Zemp, Marcel

We present MUSE, a software framework for tying together existing computational tools for different astrophysical domains into a single multiphysics, multiscale workload. MUSE facilitates the coupling of existing codes written in different languages by providing inter-language tools and by specifying an interface between each module and the framework that represents a balance between generality and computational efficiency. This approach allows scientists to use combinations of codes to solve highly-coupled problems without the need to write new codes for other domains or significantly alter their existing codes. MUSE currently incorporates the domains of stellar dynamics, stellar evolution and stellar hydrodynamics for a generalized stellar systems workload. MUSE has now reached a "Noah's Ark" milestone, with two available numerical solvers for each domain. MUSE can treat small stellar associations, galaxies and everything in between, including planetary systems, dense stellar clusters and galactic nuclei. Here we demonstrate an example calculated with MUSE: the merger of two galaxies. In addition, we demonstrate MUSE running on a distributed computer. The current MUSE code base is publicly available as open source at http://muse.li.

  16. A Computational Cluster for Multiscale Simulations of Ionic Liquids

    DTIC Science & Technology

    2008-09-16

Comprehensive final report (Gregory A. Voth, PI; Grant FA955007-1-0512) for the DURIP award "A Computational Cluster for Multiscale Simulations of Ionic Liquids". The focus of this project was to acquire and use computer cluster nodes...

  17. Molecular systems biology of ErbB1 signaling: bridging the gap through multiscale modeling and high-performance computing.

    PubMed

    Shih, Andrew J; Purvis, Jeremy; Radhakrishnan, Ravi

    2008-12-01

The complexity in intracellular signaling mechanisms relevant for the conquest of many diseases resides at different levels of organization with scales ranging from the subatomic realm relevant to catalytic functions of enzymes to the mesoscopic realm relevant to the cooperative association of molecular assemblies and membrane processes. Consequently, the challenge of representing and quantifying functional or dysfunctional modules within the networks remains due to the current limitations in our understanding of mesoscopic biology, i.e., how the components assemble into functional molecular ensembles. A multiscale approach is necessary to treat a hierarchy of interactions ranging from molecular (nm, ns) to signaling (μm, ms) length and time scales, which necessitates the development and application of specialized modeling tools. Complementary to multiscale experimentation (encompassing structural biology, mechanistic enzymology, cell biology, and single molecule studies), multiscale modeling offers a powerful and quantitative alternative for the study of functional intracellular signaling modules. Here, we describe the application of a multiscale approach to signaling mediated by the ErbB1 receptor, which constitutes a network hub for the cell's proliferative, migratory, and survival programs. Through our multiscale model, we mechanistically describe how point mutations in the ErbB1 receptor can profoundly alter signaling characteristics, leading to the onset of oncogenic transformations. Specifically, we describe how the point mutations induce cascading fragility mechanisms at the molecular scale as well as at the scale of the signaling network to preferentially activate the survival factor Akt. We provide a quantitative explanation for how the hallmark of preferential Akt activation in cell lines harboring the constitutively active mutant ErbB1 receptors causes these cell lines to be addicted to ErbB1-mediated generation of survival signals. Consequently, inhibition of ErbB1 activity leads to a remarkable therapeutic response in the addicted cell lines.

  18. Towards a Multiscale Approach to Cybersecurity Modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hogan, Emilie A.; Hui, Peter SY; Choudhury, Sutanay

    2013-11-12

We propose a multiscale approach to modeling cyber networks, with the goal of capturing a view of the network and overall situational awareness with respect to a few key properties (connectivity, distance, and centrality) for a system under an active attack. We focus on theoretical and algorithmic foundations of multiscale graphs, coming from an algorithmic perspective, with the goal of modeling cyber system defense as a specific use case scenario. We first define a notion of multiscale graphs, in contrast with their well-studied single-scale counterparts. We develop multiscale analogs of paths and distance metrics. As a simple, motivating example of a common metric, we present a multiscale analog of the all-pairs shortest-path problem, along with a multiscale analog of a well-known algorithm which solves it. From a cyber defense perspective, this metric might be used to model the distance from an attacker's position in the network to a sensitive machine. In addition, we investigate probabilistic models of connectivity. These models exploit the hierarchy to quantify the likelihood that sensitive targets might be reachable from compromised nodes. We believe that our novel multiscale approach to modeling cyber-physical systems will advance several aspects of cyber defense, specifically allowing for a more efficient and agile approach to defending these systems.
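To make the idea of a multiscale distance metric concrete, here is a toy two-scale sketch. This is not the report's algorithm; the graph, clusters, and weights are invented. A coarse graph over node clusters, with each coarse edge weighted by the cheapest fine edge crossing between the two clusters, yields a fast lower bound on the fine-grained attacker-to-target distance:

```python
import heapq
from collections import defaultdict

def dijkstra(adj, src):
    """Standard single-source shortest paths; adj maps node -> [(nbr, w)]."""
    dist = {src: 0.0}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in adj[u]:
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(heap, (d + w, v))
    return dist

def coarse_graph(adj, cluster):
    """Collapse nodes into clusters, keeping the cheapest crossing edge."""
    best = {}
    for u in adj:
        for v, w in adj[u]:
            cu, cv = cluster[u], cluster[v]
            if cu != cv:
                best[(cu, cv)] = min(best.get((cu, cv), float("inf")), w)
    coarse = defaultdict(list)
    for (cu, cv), w in best.items():
        coarse[cu].append((cv, w))
    return coarse

# Toy network: attacker at A, sensitive machine at D, two enclaves.
fine = {"A": [("B", 1.0)], "B": [("A", 1.0), ("C", 2.0)],
        "C": [("B", 2.0), ("D", 1.0)], "D": [("C", 1.0)]}
cluster = {"A": 0, "B": 0, "C": 1, "D": 1}
```

Since the coarse graph drops all intra-cluster travel costs and keeps only the cheapest inter-cluster edges, its shortest-path distance never exceeds the fine one, so it can serve as a cheap screening metric before any full fine-scale computation.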

  19. A multiscale computational model of spatially resolved calcium cycling in cardiac myocytes: from detailed cleft dynamics to the whole cell concentration profiles

    PubMed Central

    Vierheller, Janine; Neubert, Wilhelm; Falcke, Martin; Gilbert, Stephen H.; Chamakuri, Nagaiah

    2015-01-01

    Mathematical modeling of excitation-contraction coupling (ECC) in ventricular cardiac myocytes is a multiscale problem, and it is therefore difficult to develop spatially detailed simulation tools. ECC involves gradients on the length scale of 100 nm in dyadic spaces and concentration profiles along the 100 μm of the whole cell, as well as the sub-millisecond time scale of local concentration changes and the change of lumenal Ca2+ content within tens of seconds. Our concept for a multiscale mathematical model of Ca2+ -induced Ca2+ release (CICR) and whole cardiomyocyte electrophysiology incorporates stochastic simulation of individual LC- and RyR-channels, spatially detailed concentration dynamics in dyadic clefts, rabbit membrane potential dynamics, and a system of partial differential equations for myoplasmic and lumenal free Ca2+ and Ca2+-binding molecules in the bulk of the cell. We developed a novel computational approach to resolve the concentration gradients from dyadic space to cell level by using a quasistatic approximation within the dyad and finite element methods for integrating the partial differential equations. We show whole cell Ca2+-concentration profiles using three previously published RyR-channel Markov schemes. PMID:26441674

  20. Modelling approaches for evaluating multiscale tendon mechanics

    PubMed Central

    Fang, Fei; Lake, Spencer P.

    2016-01-01

    Tendon exhibits anisotropic, inhomogeneous and viscoelastic mechanical properties that are determined by its complicated hierarchical structure and varying amounts/organization of different tissue constituents. Although extensive research has been conducted to use modelling approaches to interpret tendon structure–function relationships in combination with experimental data, many issues remain unclear (i.e. the role of minor components such as decorin, aggrecan and elastin), and the integration of mechanical analysis across different length scales has not been well applied to explore stress or strain transfer from macro- to microscale. This review outlines mathematical and computational models that have been used to understand tendon mechanics at different scales of the hierarchical organization. Model representations at the molecular, fibril and tissue levels are discussed, including formulations that follow phenomenological and microstructural approaches (which include evaluations of crimp, helical structure and the interaction between collagen fibrils and proteoglycans). Multiscale modelling approaches incorporating tendon features are suggested to be an advantageous methodology to understand further the physiological mechanical response of tendon and corresponding adaptation of properties owing to unique in vivo loading environments. PMID:26855747

  1. Recent advances in computational-analytical integral transforms for convection-diffusion problems

    NASA Astrophysics Data System (ADS)

    Cotta, R. M.; Naveira-Cotta, C. P.; Knupp, D. C.; Zotin, J. L. Z.; Pontes, P. C.; Almeida, A. P.

    2017-10-01

A unifying overview of the Generalized Integral Transform Technique (GITT) as a computational-analytical approach for solving convection-diffusion problems is presented. This work is aimed at bringing together some of the most recent developments on both accuracy and convergence improvements of this well-established hybrid numerical-analytical methodology for partial differential equations. Special emphasis is given to novel algorithm implementations, all directly connected to enhancing the eigenfunction expansion basis, such as a single-domain reformulation strategy for handling complex geometries, an integral balance scheme for dealing with multiscale problems, the adoption of convective eigenvalue problems in formulations with significant convection effects, and the direct integral transformation of nonlinear convection-diffusion problems based on nonlinear eigenvalue problems. Then, selected examples are presented that illustrate the improvement achieved in each class of extension, in terms of convergence acceleration and accuracy gain, which are related to conjugated heat transfer in complex or multiscale microchannel-substrate geometries, the multidimensional Burgers equation model, and diffusive metal extraction through polymeric hollow fiber membranes. Numerical results are reported for each application and, where appropriate, critically compared against the traditional GITT scheme without convergence enhancement and against commercial or dedicated purely numerical approaches.
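The core integral-transform idea is easiest to see on the simplest possible case. The sketch below is a generic textbook illustration, not the paper's enhanced algorithms: 1D transient diffusion u_t = u_xx on (0,1) with homogeneous Dirichlet conditions, expanded in the eigenfunctions sqrt(2)*sin(n*pi*x) so that the PDE becomes decoupled ODEs for the transformed potentials, each solved analytically:

```python
import math

def transform(f, n, m=2000):
    """Transformed initial potential: the integral of f times the n-th
    eigenfunction, approximated here by the trapezoidal rule."""
    h = 1.0 / m
    s = 0.0
    for k in range(m + 1):
        x = k * h
        w = 0.5 if k in (0, m) else 1.0
        s += w * f(x) * math.sqrt(2.0) * math.sin(n * math.pi * x) * h
    return s

def solve(f, x, t, n_terms=20):
    """Inverse transform: each mode decays as exp(-(n*pi)^2 * t)."""
    u = 0.0
    for n in range(1, n_terms + 1):
        u += (transform(f, n) * math.exp(-(n * math.pi) ** 2 * t)
              * math.sqrt(2.0) * math.sin(n * math.pi * x))
    return u

# For f(x) = sin(pi*x) the exact solution is exp(-pi^2 * t) * sin(pi*x),
# so the truncated expansion should reproduce it almost exactly.
```

The enhancements the paper surveys act on the choice of the eigenvalue problem and the convergence of this expansion, not on the transform-invert structure itself.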

  2. Multiscale modeling of porous ceramics using movable cellular automaton method

    NASA Astrophysics Data System (ADS)

    Smolin, Alexey Yu.; Smolin, Igor Yu.; Smolina, Irina Yu.

    2017-10-01

The paper presents a multiscale model for porous ceramics based on the movable cellular automaton method, a particle method in the computational mechanics of solids. The initial scale of the proposed approach corresponds to the characteristic size of the smallest pores in the ceramics. At this scale, we model uniaxial compression of several representative samples with pores of that size accounted for explicitly, each with a unique position in space. As a result, we get the average values of Young's modulus and strength, as well as the parameters of the Weibull distribution of these properties at the current scale level. These data allow us to describe the material behavior at the next scale level, where only the larger pores are considered explicitly, while the influence of the small pores is included via the effective properties determined earlier. If the pore size distribution function of the material has N maxima, we need to perform computations for N-1 levels in order to get the properties step by step from the lowest scale up to the macroscale. The proposed approach was applied to modeling zirconia ceramics with a bimodal pore size distribution. The obtained results show the correct behavior of the model sample at the macroscale.
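The per-level statistics step described above, fitting a Weibull distribution to the strengths obtained from many representative samples, can be sketched as follows. This is a generic median-rank linearization, not the authors' code, and the fitting data in the test are synthetic:

```python
import math

# Fit Weibull modulus m and scale sigma0 to a set of sample strengths
# using the linearization ln(-ln(1 - F)) = m*ln(sigma) - m*ln(sigma0),
# with probability estimates F_i = (i - 0.5)/N for the sorted samples.

def weibull_fit(strengths):
    s = sorted(strengths)
    n = len(s)
    xs = [math.log(v) for v in s]
    ys = [math.log(-math.log(1.0 - (i + 0.5) / n)) for i in range(n)]
    mx = sum(xs) / n
    my = sum(ys) / n
    m = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))        # least-squares slope
    sigma0 = math.exp(mx - my / m)                # from the intercept
    return m, sigma0
```

The fitted (m, sigma0) at one level then parameterize the strength scatter assigned to the effective material at the next, coarser level.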

  3. Multi-scale computational modeling of developmental biology.

    PubMed

    Setty, Yaki

    2012-08-01

Normal development of multicellular organisms is regulated by a highly complex process in which a set of precursor cells proliferate, differentiate and move, forming over time a functioning tissue. To handle their complexity, developmental systems can be studied over distinct scales. The dynamics of each scale is determined by the collective activity of entities at the scale below it. I describe a multi-scale computational approach for modeling developmental systems and detail the methodology through a synthetic example of a developmental system that retains key features of real developmental systems. I discuss the simulation of the system as it emerges from cross-scale and intra-scale interactions and describe how an in silico study can be carried out by modifying these interactions in a way that mimics in vivo experiments. I highlight biological features of the results through a comparison with findings in Caenorhabditis elegans germline development and finally discuss the applications of the approach to real developmental systems and propose future extensions. The source code of the model of the synthetic developmental system can be found at www.wisdom.weizmann.ac.il/~yaki/MultiScaleModel. Contact: yaki.setty@gmail.com. Supplementary data are available at Bioinformatics online.

  4. Multiscale Reactive Molecular Dynamics

    DTIC Science & Technology

    2012-08-15

...biology cannot be described without considering electronic and nuclear-level dynamics and their coupling to slower, cooperative motions of the system. These inherently multiscale problems require computationally efficient and accurate methods to... condensed phase systems with computational efficiency orders of magnitude greater than currently possible with ab initio simulation methods...

  5. Materials integrity in microsystems: a framework for a petascale predictive-science-based multiscale modeling and simulation system

    NASA Astrophysics Data System (ADS)

    To, Albert C.; Liu, Wing Kam; Olson, Gregory B.; Belytschko, Ted; Chen, Wei; Shephard, Mark S.; Chung, Yip-Wah; Ghanem, Roger; Voorhees, Peter W.; Seidman, David N.; Wolverton, Chris; Chen, J. S.; Moran, Brian; Freeman, Arthur J.; Tian, Rong; Luo, Xiaojuan; Lautenschlager, Eric; Challoner, A. Dorian

    2008-09-01

Microsystems have become an integral part of our lives and can be found in homeland security, medical science, aerospace applications and beyond. Many critical microsystem applications are in harsh environments, in which long-term reliability needs to be guaranteed and repair is not feasible. For example, gyroscope microsystems on satellites need to function for over 20 years under severe radiation, thermal cycling, and shock loading. Hence predictive-science-based, verified and validated computational models and algorithms to predict the performance and materials integrity of microsystems in these situations are needed. Confidence in these predictions is improved by quantifying uncertainties and approximation errors. With no full system testing and limited sub-system testing, petascale computing is certainly necessary to span both time and space scales and to reduce the uncertainty in the prediction of long-term reliability. This paper presents the necessary steps to develop a predictive-science-based multiscale modeling and simulation system. The development of this system will be focused on the prediction of the long-term performance of a gyroscope microsystem. The environmental effects to be considered include radiation, thermo-mechanical cycling and shock. Since there will be many material performance issues, attention is restricted to creep resulting from thermal aging and radiation-enhanced mass diffusion, material instability due to radiation and thermo-mechanical cycling, and damage and fracture due to shock. To meet these challenges, we aim to develop an integrated multiscale software analysis system that spans the length scales from the atomistic scale to the scale of the device. The proposed software system will include molecular mechanics, phase field evolution, micromechanics and continuum mechanics software, and state-of-the-art model identification strategies where atomistic properties are calibrated by quantum calculations.
We aim to predict the long-term (in excess of 20 years) integrity of the resonator, electrode base, multilayer metallic bonding pads, and vacuum seals in a prescribed mission. Although multiscale simulations are efficient in the sense that they focus the most computationally intensive models and methods on only the portions of the space-time domain needed, the execution of the multiscale simulations associated with evaluating materials and device integrity for aerospace microsystems will require the application of petascale computing. A component-based software strategy will be used in the development of our massively parallel multiscale simulation system. This approach will allow us to take full advantage of existing single-scale modeling components. An extensive, pervasive thrust in the software system development is verification, validation, and uncertainty quantification (UQ). Each component and the integrated software system need to be carefully verified. A UQ methodology that determines the quality of predictive information available from experimental measurements and packages the information in a form suitable for UQ at various scales needs to be developed. Experiments to validate the model at the nanoscale, microscale, and macroscale are proposed. The development of a petascale predictive-science-based multiscale modeling and simulation system will advance the field of predictive multiscale science so that it can be used to reliably analyze problems of unprecedented complexity, where limited testing resources can be adequately replaced by petascale computational power, advanced verification, validation, and UQ methodologies.

  6. Collaborative Multi-Scale 3d City and Infrastructure Modeling and Simulation

    NASA Astrophysics Data System (ADS)

    Breunig, M.; Borrmann, A.; Rank, E.; Hinz, S.; Kolbe, T.; Schilcher, M.; Mundani, R.-P.; Jubierre, J. R.; Flurl, M.; Thomsen, A.; Donaubauer, A.; Ji, Y.; Urban, S.; Laun, S.; Vilgertshofer, S.; Willenborg, B.; Menninghaus, M.; Steuer, H.; Wursthorn, S.; Leitloff, J.; Al-Doori, M.; Mazroobsemnani, N.

    2017-09-01

    Computer-aided collaborative and multi-scale 3D planning are challenges for complex railway and subway track infrastructure projects in the built environment. Many legal, economic, environmental, and structural requirements have to be taken into account. The stringent use of 3D models in the different phases of the planning process facilitates communication and collaboration between stakeholders such as civil engineers, geological engineers, and decision makers. This paper presents concepts, developments, and experiences gained by an interdisciplinary research group from civil engineering informatics and geo-informatics, combining the skills of both the Building Information Modeling and the 3D GIS worlds. New approaches, including the development of a collaborative platform and 3D multi-scale modelling, are proposed for collaborative planning and simulation to improve the digital 3D planning of subway tracks and other infrastructures. Experiences during this research and lessons learned are presented, as well as an outlook on future research focusing on Building Information Modeling and 3D GIS applications for cities of the future.

  7. The transport of drug in fibrosis. Comment on "Towards a unified approach in the modeling of fibrosis: A review with research perspectives" by Martine Ben Amar and Carlo Bianca

    NASA Astrophysics Data System (ADS)

    Ivancevic, Vladimir

    2016-07-01

    The topic of the review article [1] is the derivation of a multiscale paradigm for the modeling of fibrosis. Firstly, the biological process of physiological and pathological fibrosis, including therapeutic actions, is reviewed. Fibrosis can be a consequence of tissue damage, infections and autoimmune diseases, foreign material, or tumors. Answers to some questions regarding the pathogenesis, progression, and possible regression of fibrosis are still lacking. At each scale of observation, different theoretical tools coming from computational, mathematical, and physical biology have been proposed. However, a complete framework that takes into account the different mechanisms occurring at different scales is still missing. Therefore, with the main aim to define a multiscale approach for the modeling of fibrosis, the authors of [1] have presented different top-down and bottom-up approaches that have been developed in the literature. Specifically, their description refers to models for fibrosis diseases based on ordinary and partial differential equations, agents [2], thermostatted kinetic theory [3-5], coarse-grained structures [6-8], and constitutive laws for fibrous collagen networks [9]. A critical analysis is provided for all frameworks discussed in the paper. Open problems and future research directions referring to both the biological and modeling insight of fibrosis are presented. The paper concludes with the ambitious aim of a multiscale model.

  8. Multiscale spatial and temporal estimation of the b-value

    NASA Astrophysics Data System (ADS)

    García-Hernández, R.; D'Auria, L.; Barrancos, J.; Padilla, G.

    2017-12-01

    The estimation of the spatial and temporal variations of the Gutenberg-Richter b-value is of great importance in different seismological applications. One of the problems affecting its estimation is the heterogeneous distribution of the seismicity, which makes the estimate strongly dependent upon the selected spatial and/or temporal scale. This is especially important in volcanoes, where dense clusters of earthquakes often overlap the background seismicity. Proposed solutions for estimating temporal variations of the b-value include considering equally spaced time intervals or variable intervals having an equal number of earthquakes. Similar approaches have been proposed to image the spatial variations of this parameter as well. We propose a novel multiscale approach, based on the method of Ogata and Katsura (1993), allowing a consistent estimation of the b-value regardless of the considered spatial and/or temporal scales. Our method, named MUST-B (MUltiscale Spatial and Temporal characterization of the B-value), basically consists in computing estimates of the b-value at multiple temporal and spatial scales, extracting for a given spatio-temporal point a statistical estimator of the value, as well as an indication of the characteristic spatio-temporal scale. This approach also includes a consistent estimation of the completeness magnitude (Mc) and of the uncertainties on both b and Mc. We applied this method to example datasets for volcanic (Tenerife, El Hierro) and tectonic areas (Central Italy), as well as an example application at the global scale.
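The workhorse estimator behind such b-value mapping is Aki's maximum-likelihood formula; the sketch below applies it over several temporal window half-widths to mimic the multiscale idea. It is an illustrative sketch, not the MUST-B implementation (which follows Ogata and Katsura), and `dm` is the usual Utsu correction for binned magnitudes.

```python
import numpy as np

def b_value_aki(mags, mc, dm=0.1):
    """Aki (1965) maximum-likelihood b-value for magnitudes >= mc.

    dm is the catalog's magnitude bin width; mc - dm/2 is the Utsu
    correction for binned magnitudes (use dm=0 for continuous values).
    Returns (b, sigma_b) with Aki's first-order uncertainty.
    """
    m = np.asarray(mags, dtype=float)
    m = m[m >= mc]
    if m.size < 2:
        return np.nan, np.nan
    b = np.log10(np.e) / (m.mean() - (mc - 0.5 * dm))
    return b, b / np.sqrt(m.size)

def b_value_multiscale(times, mags, t0, half_widths, mc, dm=0.1):
    """b at time t0 over several temporal scales: one Aki estimate per
    window half-width, rather than a single fixed window."""
    times, mags = np.asarray(times), np.asarray(mags)
    return {w: b_value_aki(mags[np.abs(times - t0) <= w], mc, dm)
            for w in half_widths}
```

For a binned catalog `dm` is typically 0.1; the multiscale wrapper simply reports the estimate at each scale so that a characteristic scale can be picked afterwards.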

  9. Three-Dimensional Multiscale Modeling of Dendritic Spacing Selection During Al-Si Directional Solidification

    NASA Astrophysics Data System (ADS)

    Tourret, Damien; Clarke, Amy J.; Imhoff, Seth D.; Gibbs, Paul J.; Gibbs, John W.; Karma, Alain

    2015-08-01

    We present a three-dimensional extension of the multiscale dendritic needle network (DNN) model. This approach enables quantitative simulations of the unsteady dynamics of complex hierarchical networks in spatially extended dendritic arrays. We apply the model to directional solidification of Al-9.8 wt.%Si alloy and directly compare the model predictions with measurements from experiments with in situ x-ray imaging. We focus on the dynamical selection of primary spacings over a range of growth velocities, and the influence of sample geometry on the selection of spacings. Simulation results show good agreement with experiments. The computationally efficient DNN model opens new avenues for investigating the dynamics of large dendritic arrays at scales relevant to solidification experiments and processes.

  10. Chain conformations dictate multiscale charge transport phenomena in disordered semiconducting polymers

    PubMed Central

    Noriega, Rodrigo; Salleo, Alberto; Spakowitz, Andrew J.

    2013-01-01

    Existing models for the electronic properties of conjugated polymers do not capture the spatial arrangement of the disordered macromolecular chains over which charge transport occurs. Here, we present an analytical and computational description in which the morphology of individual polymer chains is dictated by well-known statistical models and the electronic coupling between units is determined using Marcus theory. The multiscale transport of charges in these materials (high mobility at short length scales, low mobility at long length scales) is naturally described with our framework. Additionally, the dependence of mobility on electric field and temperature is explained in terms of conformational variability and spatial correlation. Our model offers a predictive approach to connecting processing conditions with transport behavior. PMID:24062459

  11. Chain conformations dictate multiscale charge transport phenomena in disordered semiconducting polymers.

    PubMed

    Noriega, Rodrigo; Salleo, Alberto; Spakowitz, Andrew J

    2013-10-08

    Existing models for the electronic properties of conjugated polymers do not capture the spatial arrangement of the disordered macromolecular chains over which charge transport occurs. Here, we present an analytical and computational description in which the morphology of individual polymer chains is dictated by well-known statistical models and the electronic coupling between units is determined using Marcus theory. The multiscale transport of charges in these materials (high mobility at short length scales, low mobility at long length scales) is naturally described with our framework. Additionally, the dependence of mobility on electric field and temperature is explained in terms of conformational variability and spatial correlation. Our model offers a predictive approach to connecting processing conditions with transport behavior.
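The Marcus rate that couples neighboring units is compact enough to state concretely. In the sketch below the coupling `J`, driving force `dG`, and reorganization energy `lam` are illustrative assumed inputs, not values from the paper:

```python
import numpy as np

HBAR = 6.582119569e-16   # reduced Planck constant, eV*s
KB = 8.617333262e-5      # Boltzmann constant, eV/K

def marcus_rate(J, dG, lam, T=300.0):
    """Non-adiabatic Marcus hopping rate between two sites.

    J   : electronic coupling (eV), illustrative assumed value
    dG  : site free-energy difference (eV)
    lam : reorganization energy (eV)
    """
    kT = KB * T
    prefac = (2.0 * np.pi / HBAR) * J ** 2 / np.sqrt(4.0 * np.pi * lam * kT)
    return prefac * np.exp(-(dG + lam) ** 2 / (4.0 * lam * kT))
```

A useful sanity check is detailed balance: forward and backward rates for the same pair differ exactly by the Boltzmann factor exp(-dG/kT).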

  12. Multi-scale finite element modeling allows the mechanics of amphibian neurulation to be elucidated

    NASA Astrophysics Data System (ADS)

    Chen, Xiaoguang; Brodland, G. Wayne

    2008-03-01

    The novel multi-scale computational approach introduced here makes possible a new means for testing hypotheses about the forces that drive specific morphogenetic movements. A 3D model based on this approach is used to investigate neurulation in the axolotl (Ambystoma mexicanum), a type of amphibian. The model is based on geometric data from 3D surface reconstructions of live embryos and from serial sections. Tissue properties are described by a system of cell-based constitutive equations, and parameters in the equations are determined from physical tests. The model includes the effects of Shroom-activated neural ridge reshaping and lamellipodium-driven convergent extension. A typical whole-embryo model consists of 10 239 elements and requires 2 days to run its 100 incremental time steps. The model shows that a normal phenotype does not result if lamellipodium forces are uniform across the width of the neural plate; but it can result if the lamellipodium forces decrease from a maximum value at the mid-sagittal plane to zero at the plate edge. Even the seemingly simple motions of neurulation are found to contain important features that would remain hidden if they were not studied using an advanced computational model. The present model operates in a setting where data are extremely sparse and an important outcome of the study is a better understanding of the role of computational models in such environments.

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kornreich, Drew E; Vaidya, Rajendra U; Ammerman, Curtt N

    Integrated Computational Materials Engineering (ICME) is a novel overarching approach to bridge length and time scales in computational materials science and engineering. This approach integrates all elements of multi-scale modeling (including various empirical and science-based models) with materials informatics to provide users the opportunity to tailor material selections based on stringent application needs. Typically, materials engineering has focused on structural requirements (stress, strain, modulus, fracture toughness, etc.) while multi-scale modeling has been science focused (mechanical threshold strength models, grain-size models, solid-solution strengthening models, etc.). Materials informatics (mechanical property inventories), on the other hand, is extensively data focused. All of these elements are combined within the framework of ICME to create an architecture for the development, selection, and design of new composite materials for challenging environments. We propose development of the foundations for applying ICME to composite materials development for nuclear and high-radiation environments (including nuclear-fusion energy reactors, nuclear-fission reactors, and accelerators). We expect to combine all elements of current material models (including thermo-mechanical and finite-element models) into the ICME framework. This will be accomplished through the use of various mathematical modeling constructs. These constructs will allow the integration of constituent models, which in turn would allow us to use the adaptive strengths of a combinatorial scheme (fabrication and computational) for creating new composite materials. A sample problem where these concepts are used is provided in this summary.

  14. Multi-scale finite element modeling allows the mechanics of amphibian neurulation to be elucidated.

    PubMed

    Chen, Xiaoguang; Brodland, G Wayne

    2008-04-11

    The novel multi-scale computational approach introduced here makes possible a new means for testing hypotheses about the forces that drive specific morphogenetic movements. A 3D model based on this approach is used to investigate neurulation in the axolotl (Ambystoma mexicanum), a type of amphibian. The model is based on geometric data from 3D surface reconstructions of live embryos and from serial sections. Tissue properties are described by a system of cell-based constitutive equations, and parameters in the equations are determined from physical tests. The model includes the effects of Shroom-activated neural ridge reshaping and lamellipodium-driven convergent extension. A typical whole-embryo model consists of 10,239 elements and requires 2 days to run its 100 incremental time steps. The model shows that a normal phenotype does not result if lamellipodium forces are uniform across the width of the neural plate; but it can result if the lamellipodium forces decrease from a maximum value at the mid-sagittal plane to zero at the plate edge. Even the seemingly simple motions of neurulation are found to contain important features that would remain hidden if they were not studied using an advanced computational model. The present model operates in a setting where data are extremely sparse and an important outcome of the study is a better understanding of the role of computational models in such environments.

  15. Multivariate and Multiscale Data Assimilation in Terrestrial Systems: A Review

    PubMed Central

    Montzka, Carsten; Pauwels, Valentijn R. N.; Franssen, Harrie-Jan Hendricks; Han, Xujun; Vereecken, Harry

    2012-01-01

    More and more terrestrial observational networks are being established to monitor climatic, hydrological and land-use changes in different regions of the world. In these networks, time series of states and fluxes are recorded in an automated manner, often with a high temporal resolution. These data are important for the understanding of water, energy, and/or matter fluxes, as well as their biological and physical drivers and interactions with and within the terrestrial system. Similarly, the number and accuracy of variables, which can be observed by spaceborne sensors, are increasing. Data assimilation (DA) methods utilize these observations in terrestrial models in order to increase process knowledge as well as to improve forecasts for the system being studied. The widely implemented automation in observing environmental states and fluxes makes an operational computation more and more feasible, and it opens the perspective of short-time forecasts of the state of terrestrial systems. In this paper, we review the state of the art with respect to DA, focusing on the joint assimilation of observational data from different spatial scales and of different data types. An introduction is given to different DA methods, such as the Ensemble Kalman Filter (EnKF), Particle Filter (PF) and variational methods (3D/4D-VAR). In this review, we distinguish between four major DA approaches: (1) univariate single-scale DA (UVSS), which is the approach used in the majority of published DA applications, (2) univariate multiscale DA (UVMS) referring to a methodology which acknowledges that at least some of the assimilated data are measured at a different scale than the computational grid scale, (3) multivariate single-scale DA (MVSS) dealing with the assimilation of at least two different data types, and (4) combined multivariate multiscale DA (MVMS). 
Finally, we conclude with a discussion on the advantages and disadvantages of the assimilation of multiple data types in a simulation model. Existing approaches can be used to simultaneously update several model states and model parameters if applicable. In other words, the basic principles for multivariate data assimilation are already available. We argue that a better understanding of the measurement errors for different observation types, improved estimates of observation bias and improved multiscale assimilation methods for data which scale nonlinearly are important for properly weighting them in multiscale multivariate data assimilation. In this context, improved cross-validation of different data types, and increased ground truth verification of remote sensing products are required. PMID:23443380

  16. Multivariate and multiscale data assimilation in terrestrial systems: a review.

    PubMed

    Montzka, Carsten; Pauwels, Valentijn R N; Franssen, Harrie-Jan Hendricks; Han, Xujun; Vereecken, Harry

    2012-11-26

    More and more terrestrial observational networks are being established to monitor climatic, hydrological and land-use changes in different regions of the world. In these networks, time series of states and fluxes are recorded in an automated manner, often with a high temporal resolution. These data are important for the understanding of water, energy, and/or matter fluxes, as well as their biological and physical drivers and interactions with and within the terrestrial system. Similarly, the number and accuracy of variables, which can be observed by spaceborne sensors, are increasing. Data assimilation (DA) methods utilize these observations in terrestrial models in order to increase process knowledge as well as to improve forecasts for the system being studied. The widely implemented automation in observing environmental states and fluxes makes an operational computation more and more feasible, and it opens the perspective of short-time forecasts of the state of terrestrial systems. In this paper, we review the state of the art with respect to DA, focusing on the joint assimilation of observational data from different spatial scales and of different data types. An introduction is given to different DA methods, such as the Ensemble Kalman Filter (EnKF), Particle Filter (PF) and variational methods (3D/4D-VAR). In this review, we distinguish between four major DA approaches: (1) univariate single-scale DA (UVSS), which is the approach used in the majority of published DA applications, (2) univariate multiscale DA (UVMS) referring to a methodology which acknowledges that at least some of the assimilated data are measured at a different scale than the computational grid scale, (3) multivariate single-scale DA (MVSS) dealing with the assimilation of at least two different data types, and (4) combined multivariate multiscale DA (MVMS). 
Finally, we conclude with a discussion on the advantages and disadvantages of the assimilation of multiple data types in a simulation model. Existing approaches can be used to simultaneously update several model states and model parameters if applicable. In other words, the basic principles for multivariate data assimilation are already available. We argue that a better understanding of the measurement errors for different observation types, improved estimates of observation bias and improved multiscale assimilation methods for data which scale nonlinearly are important for properly weighting them in multiscale multivariate data assimilation. In this context, improved cross-validation of different data types, and increased ground truth verification of remote sensing products are required.
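Among the methods reviewed, the stochastic (perturbed-observations) EnKF analysis step has a particularly compact form; a minimal sketch, assuming a linear observation operator `H`, is:

```python
import numpy as np

def enkf_update(ens, obs, H, R, rng):
    """Stochastic EnKF analysis step with perturbed observations.

    ens : (n_state, n_ens) forecast ensemble
    obs : (n_obs,) observation vector
    H   : (n_obs, n_state) linear observation operator
    R   : (n_obs, n_obs) observation-error covariance
    """
    n_obs, n_ens = obs.size, ens.shape[1]
    X = ens - ens.mean(axis=1, keepdims=True)        # state anomalies
    HX = H @ ens
    HXp = HX - HX.mean(axis=1, keepdims=True)        # obs-space anomalies
    Pf_Ht = X @ HXp.T / (n_ens - 1)                  # Pf H^T
    S = HXp @ HXp.T / (n_ens - 1) + R                # H Pf H^T + R
    K = Pf_Ht @ np.linalg.inv(S)                     # Kalman gain
    # each member assimilates its own perturbed copy of the observations
    pert = rng.multivariate_normal(np.zeros(n_obs), R, size=n_ens).T
    return ens + K @ (obs[:, None] + pert - HX)
```

Operational terrestrial DA systems add covariance localization and inflation on top of this; the sketch omits both.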

  17. A mixed parallel strategy for the solution of coupled multi-scale problems at finite strains

    NASA Astrophysics Data System (ADS)

    Lopes, I. A. Rodrigues; Pires, F. M. Andrade; Reis, F. J. P.

    2018-02-01

    A mixed parallel strategy for the solution of homogenization-based multi-scale constitutive problems undergoing finite strains is proposed. The approach aims to reduce the computational time and memory requirements of non-linear coupled simulations that use finite element discretization at both scales (FE^2). In the first level of the algorithm, a non-conforming domain decomposition technique, based on the FETI method combined with a mortar discretization at the interface of macroscopic subdomains, is employed. A master-slave scheme, which distributes tasks by macroscopic element and adopts dynamic scheduling, is then used for each macroscopic subdomain composing the second level of the algorithm. This strategy allows the parallelization of FE^2 simulations in computers with either shared memory or distributed memory architectures. The proposed strategy preserves the quadratic rates of asymptotic convergence that characterize the Newton-Raphson scheme. Several examples are presented to demonstrate the robustness and efficiency of the proposed parallel strategy.
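The second-level master-slave scheme with dynamic scheduling can be illustrated with a toy worker pool; `solve_rve` below is a hypothetical stand-in for the micro-scale finite element solve attached to each macroscopic element:

```python
from concurrent.futures import ThreadPoolExecutor, as_completed
import numpy as np

def solve_rve(elem_id, macro_strain):
    """Hypothetical stand-in for a micro-scale (RVE) finite element
    solve at one macroscopic quadrature point; returns a mock
    homogenized stiffness that softens with strain (illustrative)."""
    return 210e9 / (1.0 + np.linalg.norm(macro_strain))

def homogenize_all(strains, n_workers=4):
    """Master-slave step with dynamic scheduling: the master submits
    one macro element per task, and whichever worker finishes first
    pulls the next one, balancing uneven nonlinear RVE costs."""
    results = {}
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        futures = {pool.submit(solve_rve, i, e): i
                   for i, e in enumerate(strains)}
        for fut in as_completed(futures):
            results[futures[fut]] = fut.result()
    return results
```

Handing out one element at a time, rather than fixed static chunks, is the dynamic-scheduling idea; a production FE^2 code would do the same with MPI ranks instead of threads.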

  18. Multiscale Space-Time Computational Methods for Fluid-Structure Interactions

    DTIC Science & Technology

    2015-09-13

    prescribed fully or partially, is from an actual locust, extracted from high-speed, multi-camera video recordings of the locust in a wind tunnel. We use...With creative methods for coupling the fluid and structure, we can increase the scope and efficiency of the FSI modeling. Multiscale methods, which now...play an important role in computational mathematics, can also increase the accuracy and efficiency of the computer modeling techniques. The main

  19. Multi-scale modeling in cell biology

    PubMed Central

    Meier-Schellersheim, Martin; Fraser, Iain D. C.; Klauschen, Frederick

    2009-01-01

    Biomedical research frequently involves performing experiments and developing hypotheses that link different scales of biological systems such as, for instance, the scales of intracellular molecular interactions to the scale of cellular behavior and beyond to the behavior of cell populations. Computational modeling efforts that aim at exploring such multi-scale systems quantitatively with the help of simulations have to incorporate several different simulation techniques due to the different time and space scales involved. Here, we provide a non-technical overview of how different scales of experimental research can be combined with the appropriate computational modeling techniques. We also show that current modeling software permits building and simulating multi-scale models without having to become involved with the underlying technical details of computational modeling. PMID:20448808

  20. Community Multiscale Air Quality Model

    EPA Science Inventory

    The U.S. EPA developed the Community Multiscale Air Quality (CMAQ) system to apply a “one atmosphere” multiscale and multi-pollutant modeling approach based mainly on the “first principles” description of the atmosphere. The multiscale capability is supported by the governing di...

  1. Tug-of-war lacunarity—A novel approach for estimating lacunarity

    NASA Astrophysics Data System (ADS)

    Reiss, Martin A.; Lemmerer, Birgit; Hanslmeier, Arnold; Ahammer, Helmut

    2016-11-01

    Modern instrumentation provides us with massive repositories of digital images that will likely only increase in the future. Therefore, it has become increasingly important to automate the analysis of digital images, e.g., with methods from pattern recognition. These methods aim to characterize the visual appearance of captured textures with quantitative measures. As such, lacunarity is a useful multi-scale measure of a texture's heterogeneity, but it demands high computational effort. Here we investigate a novel approach based on the tug-of-war algorithm, which estimates lacunarity in a single pass over the image. We computed lacunarity for theoretical and real-world sample images, and found that the investigated approach is able to estimate lacunarity with low uncertainties. We conclude that the proposed method combines low computational effort with high accuracy, and that its application may have utility in the analysis of high-resolution images.
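The essence of the single-pass idea is the tug-of-war (AMS) second-moment sketch. The code below applies it to the masses of non-overlapping boxes to estimate a lacunarity of the form E[m^2]/E[m]^2; it is an illustrative sketch under that definition, and the published algorithm differs in details:

```python
import numpy as np

def lacunarity_sketch(img, r, n_est=40, n_med=5, seed=0):
    """Tug-of-war style one-pass lacunarity estimate at box size r.

    Lacunarity here is E[m^2]/E[m]^2 over the masses m of the
    non-overlapping r x r boxes. sum(m^2) is estimated by the AMS
    ("tug-of-war") sketch: each box gets a random +/-1 sign, the
    signed pixel sum Z is accumulated in one pass, and Z^2 is an
    unbiased estimator of sum(m^2). Accuracy comes from a mean of
    n_est copies and a median over n_med groups. Illustrative sketch
    only, not the authors' implementation.
    """
    h, w = img.shape
    img = img[: h // r * r, : w // r * r].astype(float)
    by, bx = img.shape[0] // r, img.shape[1] // r
    n_boxes = by * bx
    ys, xs = np.indices(img.shape)
    box_id = (ys // r) * bx + (xs // r)      # box of every pixel
    rng = np.random.default_rng(seed)
    groups = []
    for _ in range(n_med):
        ests = []
        for _ in range(n_est):
            signs = rng.choice([-1.0, 1.0], size=n_boxes)
            z = np.sum(signs[box_id] * img)  # the single streaming pass
            ests.append(z * z)
        groups.append(np.mean(ests))
    sum_m2 = np.median(groups)               # median-of-means estimate
    return n_boxes * sum_m2 / img.sum() ** 2
```

A uniform image has lacunarity 1 under this definition, and concentrating the same mass into fewer boxes drives the estimate up, which the sketch reproduces.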

  2. Model's sparse representation based on reduced mixed GMsFE basis methods

    NASA Astrophysics Data System (ADS)

    Jiang, Lijian; Li, Qiuqi

    2017-06-01

    In this paper, we propose a model's sparse representation based on reduced mixed generalized multiscale finite element (GMsFE) basis methods for elliptic PDEs with random inputs. A typical application for the elliptic PDEs is the flow in heterogeneous random porous media. The mixed generalized multiscale finite element method (GMsFEM) is one of the accurate and efficient approaches to solve the flow problem in a coarse grid and obtain the velocity with local mass conservation. When the inputs of the PDEs are parameterized by random variables, the GMsFE basis functions usually depend on the random parameters. This leads to a large number of degrees of freedom for the mixed GMsFEM and substantially impacts the computational efficiency. In order to overcome this difficulty, we develop reduced mixed GMsFE basis methods such that the multiscale basis functions are independent of the random parameters and span a low-dimensional space. To this end, a greedy algorithm is used to find a set of optimal samples from a training set scattered in the parameter space. Reduced mixed GMsFE basis functions are constructed based on the optimal samples using two optimal sampling strategies: basis-oriented cross-validation and proper orthogonal decomposition. Although the dimension of the space spanned by the reduced mixed GMsFE basis functions is much smaller than the dimension of the original full-order model, the online computation still depends on the number of coarse degrees of freedom. To significantly improve the online computation, we integrate the reduced mixed GMsFE basis methods with sparse tensor approximation and obtain a sparse representation for the model's outputs. The sparse representation is very efficient for evaluating the model's outputs for many instances of parameters. To illustrate the efficacy of the proposed methods, we present a few numerical examples for elliptic PDEs with multiscale and random inputs. 
In particular, a two-phase flow model in random porous media is simulated by the proposed sparse representation method.
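Of the two sampling strategies, proper orthogonal decomposition reduces to an SVD of a snapshot matrix; a generic sketch (not the authors' GMsFE-specific construction) is:

```python
import numpy as np

def pod_basis(snapshots, tol=1e-6):
    """POD basis from a snapshot matrix.

    snapshots : (n_dof, n_samples) solutions at sampled parameters.
    Returns (Phi, s): an orthonormal basis Phi whose columns capture a
    fraction 1 - tol of the snapshot energy (squared singular values),
    plus the singular values s.
    """
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    energy = np.cumsum(s ** 2) / np.sum(s ** 2)
    k = int(np.searchsorted(energy, 1.0 - tol)) + 1
    return U[:, :k], s
```

A reduced Galerkin system for a new parameter then lives in the span of `Phi`: project the operator and right-hand side as `Phi.T @ A @ Phi` and `Phi.T @ b`, solve the small system, and lift back with `Phi`.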

  3. Model's sparse representation based on reduced mixed GMsFE basis methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jiang, Lijian, E-mail: ljjiang@hnu.edu.cn; Li, Qiuqi, E-mail: qiuqili@hnu.edu.cn

    2017-06-01

    In this paper, we propose a model's sparse representation based on reduced mixed generalized multiscale finite element (GMsFE) basis methods for elliptic PDEs with random inputs. A typical application for the elliptic PDEs is the flow in heterogeneous random porous media. The mixed generalized multiscale finite element method (GMsFEM) is one of the accurate and efficient approaches to solve the flow problem in a coarse grid and obtain the velocity with local mass conservation. When the inputs of the PDEs are parameterized by random variables, the GMsFE basis functions usually depend on the random parameters. This leads to a large number of degrees of freedom for the mixed GMsFEM and substantially impacts the computational efficiency. In order to overcome this difficulty, we develop reduced mixed GMsFE basis methods such that the multiscale basis functions are independent of the random parameters and span a low-dimensional space. To this end, a greedy algorithm is used to find a set of optimal samples from a training set scattered in the parameter space. Reduced mixed GMsFE basis functions are constructed based on the optimal samples using two optimal sampling strategies: basis-oriented cross-validation and proper orthogonal decomposition. Although the dimension of the space spanned by the reduced mixed GMsFE basis functions is much smaller than the dimension of the original full-order model, the online computation still depends on the number of coarse degrees of freedom. To significantly improve the online computation, we integrate the reduced mixed GMsFE basis methods with sparse tensor approximation and obtain a sparse representation for the model's outputs. The sparse representation is very efficient for evaluating the model's outputs for many instances of parameters. To illustrate the efficacy of the proposed methods, we present a few numerical examples for elliptic PDEs with multiscale and random inputs. 
In particular, a two-phase flow model in random porous media is simulated by the proposed sparse representation method.

  4. Computer Laboratory for Multi-scale Simulations of Novel Nanomaterials

    DTIC Science & Technology

    2014-09-15

    schemes for multiscale modeling of polymers. Permselective ion-exchange membranes for protective clothing, fuel cells, and batteries are of special...polyelectrolyte membranes (PEM) with chemical warfare agents (CWA) and their simulants and (2) development of new simulation methods and computational...chemical potential using the gauge cell method and calculation of density profiles. However, the code does not run in parallel environments. For mesoscale

  5. Multiscale solvers and systematic upscaling in computational physics

    NASA Astrophysics Data System (ADS)

    Brandt, A.

    2005-07-01

    Multiscale algorithms can overcome the scale-born bottlenecks that plague most computations in physics. These algorithms employ separate processing at each scale of the physical space, combined with interscale iterative interactions, in ways which use finer scales very sparingly. Having been developed first and well known as multigrid solvers for partial differential equations, highly efficient multiscale techniques have more recently been developed for many other types of computational tasks, including: inverse PDE problems; highly indefinite (e.g., standing wave) equations; Dirac equations in disordered gauge fields; fast computation and updating of large determinants (as needed in QCD); fast integral transforms; integral equations; astrophysics; molecular dynamics of macromolecules and fluids; many-atom electronic structures; global and discrete-state optimization; practical graph problems; image segmentation and recognition; tomography (medical imaging); fast Monte-Carlo sampling in statistical physics; and general, systematic methods of upscaling (accurate numerical derivation of large-scale equations from microscopic laws).
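The multigrid idea underlying these solvers (smooth on the fine grid, correct the remaining smooth error on a coarser grid) can be sketched for a 1-D Poisson problem; this is a generic textbook V-cycle, not code from any of the applications listed:

```python
import numpy as np

def smooth(u, f, sweeps=3, w=2.0 / 3.0):
    """Damped Jacobi for -u'' = f, zero Dirichlet BCs, h = 1/(n+1)."""
    h2 = 1.0 / (u.size + 1) ** 2
    for _ in range(sweeps):
        v = np.empty_like(u)
        v[1:-1] = 0.5 * (u[:-2] + u[2:] + h2 * f[1:-1])
        v[0] = 0.5 * (u[1] + h2 * f[0])
        v[-1] = 0.5 * (u[-2] + h2 * f[-1])
        u = (1 - w) * u + w * v
    return u

def residual(u, f):
    h2 = 1.0 / (u.size + 1) ** 2
    r = np.empty_like(u)
    r[1:-1] = f[1:-1] - (2 * u[1:-1] - u[:-2] - u[2:]) / h2
    r[0] = f[0] - (2 * u[0] - u[1]) / h2
    r[-1] = f[-1] - (2 * u[-1] - u[-2]) / h2
    return r

def v_cycle(u, f):
    """One V-cycle: pre-smooth, coarse-grid correction, post-smooth.
    Grid sizes are 2^k - 1 so each restriction halves the unknowns."""
    u = smooth(u, f)
    if u.size > 3:
        r = residual(u, f)
        rc = 0.25 * (r[:-2:2] + 2 * r[1:-1:2] + r[2::2])  # full weighting
        ec = v_cycle(np.zeros_like(rc), rc)
        e = np.zeros(u.size)                 # linear interpolation up
        e[1::2] = ec
        e[2:-1:2] = 0.5 * (ec[:-1] + ec[1:])
        e[0], e[-1] = 0.5 * ec[0], 0.5 * ec[-1]
        u = u + e
    else:
        u = smooth(u, f, sweeps=50)          # "exact" coarsest solve
    return smooth(u, f)
```

Because the fine scales are visited only for a few smoothing sweeps, the cost per cycle is linear in the number of unknowns, which is the "using finer scales very sparingly" property the abstract refers to.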

  6. Computational Modeling Approaches to Multiscale Design of Icephobic Surfaces

    NASA Technical Reports Server (NTRS)

    Tallman, Aaron; Wang, Yan; Vargas, Mario

    2017-01-01

    To aid in the design of surfaces that prevent icing, a model and computational simulation of impact ice formation at the single-droplet scale was implemented; no existing model simulates the simultaneous impact and freezing of a single supercooled water droplet. The nucleation of a single supercooled droplet impacting on a substrate, in rime ice conditions, was simulated using open-source computational fluid dynamics (CFD) software. For the 10-week project, a low-fidelity feasibility study was the goal.

  7. Coherent multiscale image processing using dual-tree quaternion wavelets.

    PubMed

    Chan, Wai Lam; Choi, Hyeokho; Baraniuk, Richard G

    2008-07-01

    The dual-tree quaternion wavelet transform (QWT) is a new multiscale analysis tool for geometric image features. The QWT is a near shift-invariant tight frame representation whose coefficients sport a magnitude and three phases: two phases encode local image shifts while the third contains image texture information. The QWT is based on an alternative theory for the 2-D Hilbert transform and can be computed using a dual-tree filter bank with linear computational complexity. To demonstrate the properties of the QWT's coherent magnitude/phase representation, we develop an efficient and accurate procedure for estimating the local geometrical structure of an image. We also develop a new multiscale algorithm for estimating the disparity between a pair of images that is promising for image registration and flow estimation applications. The algorithm features multiscale phase unwrapping, linear complexity, and sub-pixel estimation accuracy.
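The idea that local shifts live in phase can be illustrated outside the wavelet setting with plain 1-D Fourier phase correlation; this is a generic analogue, not the QWT disparity algorithm itself:

```python
import numpy as np

def phase_correlation_shift(a, b):
    """Integer circular shift d such that b ~ roll(a, d), recovered
    from the phase of the whitened cross-power spectrum. The QWT
    disparity method works analogously but on quaternion wavelet
    phases, multiscale and with sub-pixel unwrapping."""
    A, B = np.fft.fft(a), np.fft.fft(b)
    cross = A * np.conj(B)
    cross = cross / (np.abs(cross) + 1e-12)  # keep phase, drop magnitude
    corr = np.fft.ifft(cross)                # delta peak at -d mod N
    return int((-np.argmax(np.abs(corr))) % a.size)
```

The whitening step makes the correlation peak depend only on phase, which is why phase-based methods localize shifts so sharply.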

  8. Review of the synergies between computational modeling and experimental characterization of materials across length scales

    DOE PAGES

    Dingreville, Rémi; Karnesky, Richard A.; Puel, Guillaume; ...

    2015-11-16

    With the increasing interplay between experimental and computational approaches at multiple length scales, new research directions are emerging in materials science and computational mechanics. Such cooperative interactions find many applications in the development, characterization and design of complex material systems. This manuscript provides a broad and comprehensive overview of recent trends in which predictive modeling capabilities are developed in conjunction with experiments and advanced characterization to gain a greater insight into structure–property relationships and study various physical phenomena and mechanisms. The focus of this review is on the intersections of multiscale materials experiments and modeling relevant to the materials mechanics community. After a general discussion on the perspective from various communities, the article focuses on the latest experimental and theoretical opportunities. Emphasis is given to the role of experiments in multiscale models, including insights into how computations can be used as discovery tools for materials engineering, rather than to “simply” support experimental work. This is illustrated by examples from several application areas on structural materials. The manuscript ends with a discussion of some problems and open scientific questions that are being explored in order to advance this relatively new field of research.

  9. Multiscale Mechanics of Articular Cartilage: Potentials and Challenges of Coupling Musculoskeletal, Joint, and Microscale Computational Models

    PubMed Central

    Halloran, J. P.; Sibole, S.; van Donkelaar, C. C.; van Turnhout, M. C.; Oomens, C. W. J.; Weiss, J. A.; Guilak, F.; Erdemir, A.

    2012-01-01

    Articular cartilage experiences significant mechanical loads during daily activities. Healthy cartilage provides the capacity for load bearing and regulates the mechanobiological processes for tissue development, maintenance, and repair. Experimental studies at multiple scales have provided a fundamental understanding of macroscopic mechanical function, evaluation of the micromechanical environment of chondrocytes, and the foundations for mechanobiological response. In addition, computational models of cartilage have offered a concise description of experimental data at many spatial levels under healthy and diseased conditions, and have served to generate hypotheses for the mechanical and biological function. Further, modeling and simulation provides a platform for predictive risk assessment, management of dysfunction, as well as a means to relate multiple spatial scales. Simulation-based investigation of cartilage comes with many challenges including both the computational burden and often insufficient availability of data for model development and validation. This review outlines recent modeling and simulation approaches to understand cartilage function from a mechanical systems perspective, and illustrates pathways to associate mechanics with biological function. Computational representations at single scales are provided from the body down to the microstructure, along with attempts to explore multiscale mechanisms of load sharing that dictate the mechanical environment of the cartilage and chondrocytes. PMID:22648577

  10. A Multiscale Computational Model Combining a Single Crystal Plasticity Constitutive Model with the Generalized Method of Cells (GMC) for Metallic Polycrystals.

    PubMed

    Ghorbani Moghaddam, Masoud; Achuthan, Ajit; Bednarcyk, Brett A; Arnold, Steven M; Pineda, Evan J

    2016-05-04

    A multiscale computational model is developed for determining the elasto-plastic behavior of polycrystal metals by employing a single crystal plasticity constitutive model that can capture the microstructural scale stress field on a finite element analysis (FEA) framework. The generalized method of cells (GMC) micromechanics model is used for homogenizing the local field quantities. At first, the stand-alone GMC is applied for studying simple material microstructures such as a repeating unit cell (RUC) containing a single grain or two grains under uniaxial loading conditions. For verification, the results obtained by the stand-alone GMC are compared to those from an analogous FEA model incorporating the same single crystal plasticity constitutive model. This verification is then extended to samples containing tens to hundreds of grains. The results demonstrate that the GMC homogenization combined with the crystal plasticity constitutive framework is a promising approach for failure analysis of structures, as it allows for properly predicting the von Mises stress in the entire RUC, in an average sense, as well as at the local microstructural level, i.e., each individual grain. A two to three order-of-magnitude saving in computational cost was obtained with GMC, at the expense of some accuracy, especially in the prediction of the components of local tensor field quantities and of quantities near the grain boundaries. Finally, the capability of the developed multiscale model linking FEA and GMC to solve real-life-sized structures is demonstrated by successfully analyzing an engine disc component and determining the microstructural scale details of the field quantities.
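    The "average sense" versus "local microstructural level" distinction above amounts to volume averaging at two granularities. The sketch below illustrates only that averaging step, with made-up subcell stress tensors and an assumed 8-grain partition of a 4x4x4 RUC; it is not the GMC or crystal plasticity formulation itself:

```python
import numpy as np

def von_mises(sig):
    """von Mises equivalent stress of a 3x3 stress tensor."""
    dev = sig - np.trace(sig) / 3.0 * np.eye(3)   # deviatoric part
    return float(np.sqrt(1.5 * np.sum(dev * dev)))

rng = np.random.default_rng(0)
# hypothetical per-subcell stresses for a 4x4x4 RUC partitioned into 8 grains
stresses = rng.standard_normal((64, 3, 3))
stresses = 0.5 * (stresses + stresses.transpose(0, 2, 1))   # symmetrize tensors
grain_of = np.repeat(np.arange(8), 8)                       # subcell -> grain map

ruc_avg = stresses.mean(axis=0)                             # whole-RUC average
grain_avg = np.array([stresses[grain_of == g].mean(axis=0) for g in range(8)])
ruc_vm = von_mises(ruc_avg)                                 # "average sense"
grain_vm = [von_mises(s) for s in grain_avg]                # per-grain level
```

    Note that averaging tensors first and then taking von Mises (as here) differs from averaging von Mises values, which is one reason local accuracy degrades faster than RUC-average accuracy in homogenization schemes.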

  11. A Multiscale Computational Model Combining a Single Crystal Plasticity Constitutive Model with the Generalized Method of Cells (GMC) for Metallic Polycrystals

    PubMed Central

    Ghorbani Moghaddam, Masoud; Achuthan, Ajit; Bednarcyk, Brett A.; Arnold, Steven M.; Pineda, Evan J.

    2016-01-01

    A multiscale computational model is developed for determining the elasto-plastic behavior of polycrystal metals by employing a single crystal plasticity constitutive model that can capture the microstructural scale stress field on a finite element analysis (FEA) framework. The generalized method of cells (GMC) micromechanics model is used for homogenizing the local field quantities. At first, the stand-alone GMC is applied for studying simple material microstructures such as a repeating unit cell (RUC) containing a single grain or two grains under uniaxial loading conditions. For verification, the results obtained by the stand-alone GMC are compared to those from an analogous FEA model incorporating the same single crystal plasticity constitutive model. This verification is then extended to samples containing tens to hundreds of grains. The results demonstrate that the GMC homogenization combined with the crystal plasticity constitutive framework is a promising approach for failure analysis of structures, as it allows for properly predicting the von Mises stress in the entire RUC, in an average sense, as well as at the local microstructural level, i.e., each individual grain. A two to three order-of-magnitude saving in computational cost was obtained with GMC, at the expense of some accuracy, especially in the prediction of the components of local tensor field quantities and of quantities near the grain boundaries. Finally, the capability of the developed multiscale model linking FEA and GMC to solve real-life-sized structures is demonstrated by successfully analyzing an engine disc component and determining the microstructural scale details of the field quantities. PMID:28773458

  12. A Computational Systems Biology Software Platform for Multiscale Modeling and Simulation: Integrating Whole-Body Physiology, Disease Biology, and Molecular Reaction Networks

    PubMed Central

    Eissing, Thomas; Kuepfer, Lars; Becker, Corina; Block, Michael; Coboeken, Katrin; Gaub, Thomas; Goerlitz, Linus; Jaeger, Juergen; Loosen, Roland; Ludewig, Bernd; Meyer, Michaela; Niederalt, Christoph; Sevestre, Michael; Siegmund, Hans-Ulrich; Solodenko, Juri; Thelen, Kirstin; Telle, Ulrich; Weiss, Wolfgang; Wendl, Thomas; Willmann, Stefan; Lippert, Joerg

    2011-01-01

    Today, in silico studies and trial simulations already complement experimental approaches in pharmaceutical R&D and have become indispensable tools for decision making and communication with regulatory agencies. While biology is multiscale by nature, project work and software tools usually focus on isolated aspects of drug action, such as pharmacokinetics at the organism scale or pharmacodynamic interaction on the molecular level. We present a modeling and simulation software platform consisting of PK-Sim® and MoBi® capable of building and simulating models that integrate across biological scales. A prototypical multiscale model for the progression of a pancreatic tumor and its response to pharmacotherapy is constructed and virtual patients are treated with a prodrug activated by hepatic metabolization. Tumor growth is driven by signal transduction leading to cell cycle transition and proliferation. Free tumor concentrations of the active metabolite inhibit Raf kinase in the signaling cascade and thereby cell cycle progression. In a virtual clinical study, the individual therapeutic outcome of the chemotherapeutic intervention is simulated for a large population with heterogeneous genomic background. Thereby, the platform allows efficient model building and integration of biological knowledge and prior data from all biological scales. Experimental in vitro model systems can be linked with observations in animal experiments and clinical trials. The interplay between patients, diseases, and drugs and topics with high clinical relevance such as the role of pharmacogenomics, drug–drug, or drug–metabolite interactions can be addressed using this mechanistic, insight-driven multiscale modeling approach. PMID:21483730

  13. A fast solver for the Helmholtz equation based on the generalized multiscale finite-element method

    NASA Astrophysics Data System (ADS)

    Fu, Shubin; Gao, Kai

    2017-11-01

    Conventional finite-element methods for solving the acoustic-wave Helmholtz equation in highly heterogeneous media usually require a finely discretized mesh to represent the medium property variations with sufficient accuracy. Computational costs for solving the Helmholtz equation can therefore be considerably expensive for complicated and large geological models. Based on the generalized multiscale finite-element theory, we develop a novel continuous Galerkin method to solve the Helmholtz equation in acoustic media with spatially variable velocity and mass density. Instead of using conventional polynomial basis functions, we use multiscale basis functions to form the approximation space on the coarse mesh. The multiscale basis functions are obtained by multiplying the eigenfunctions of a carefully designed local spectral problem with an appropriate multiscale partition of unity. These multiscale basis functions can effectively incorporate the characteristics of the heterogeneous medium's fine-scale variations, thus enabling us to obtain an accurate solution to the Helmholtz equation without directly solving the large discrete system formed on the fine mesh. Numerical results show that our new solver can significantly reduce the dimension of the discrete Helmholtz system and markedly reduce the computational time.

  14. Blood Flow: Multi-scale Modeling and Visualization (July 2011)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    2011-01-01

    Multi-scale modeling of arterial blood flow can shed light on the interaction between events happening at micro- and meso-scales (i.e., adhesion of red blood cells to the arterial wall, clot formation) and at macro-scales (i.e., change in flow patterns due to the clot). Coupled numerical simulations of such multi-scale flow require state-of-the-art computers and algorithms, along with techniques for multi-scale visualizations. This animation presents early results of two studies used in the development of a multi-scale visualization methodology. The first illustrates a flow of healthy (red) and diseased (blue) blood cells with a Dissipative Particle Dynamics (DPD) method. Each blood cell is represented by a mesh, small spheres show a sub-set of particles representing the blood plasma, while instantaneous streamlines and slices represent the ensemble average velocity. In the second, we investigate the process of thrombus (blood clot) formation, which may be responsible for the rupture of aneurysms, by concentrating on the platelet blood cells and observing them as they aggregate on the wall of an aneurysm. Simulation was performed on Kraken at the National Institute for Computational Sciences. Visualization was produced using resources of the Argonne Leadership Computing Facility at Argonne National Laboratory.

  15. Electromechanical models of the ventricles

    PubMed Central

    Constantino, Jason; Gurev, Viatcheslav

    2011-01-01

    Computational modeling has traditionally played an important role in dissecting the mechanisms for cardiac dysfunction. Ventricular electromechanical models, likely the most sophisticated virtual organs to date, integrate detailed information across the spatial scales of cardiac electrophysiology and mechanics and are capable of capturing the emergent behavior and the interaction between electrical activation and mechanical contraction of the heart. The goal of this review is to provide an overview of the latest advancements in multiscale electromechanical modeling of the ventricles. We first detail the general framework of multiscale ventricular electromechanical modeling and describe the state of the art in computational techniques and experimental validation approaches. The powerful utility of ventricular electromechanical models in providing a better understanding of cardiac function is then demonstrated by reviewing the latest insights obtained by these models, focusing primarily on the mechanisms by which mechanoelectric coupling contributes to ventricular arrhythmogenesis, the relationship between electrical activation and mechanical contraction in the normal heart, and the mechanisms of mechanical dyssynchrony and resynchronization in the failing heart. Computational modeling of cardiac electromechanics will continue to complement basic science research and clinical cardiology and holds promise to become an important clinical tool aiding the diagnosis and treatment of cardiac disease. PMID:21572017

  16. Agile Multi-Scale Decompositions for Automatic Image Registration

    NASA Technical Reports Server (NTRS)

    Murphy, James M.; Leija, Omar Navarro; Le Moigne, Jacqueline

    2016-01-01

    In recent works, the first and third authors developed an automatic image registration algorithm based on a multiscale hybrid image decomposition with anisotropic shearlets and isotropic wavelets. This prototype showed strong performance, improving robustness over registration with wavelets alone. However, this method imposed a strict hierarchy on the order in which shearlet and wavelet features were used in the registration process, and also involved an unintegrated mixture of MATLAB and C code. In this paper, we introduce a more agile model for generating features, in which a flexible and user-guided mix of shearlet and wavelet features are computed. Compared to the previous prototype, this method introduces a flexibility to the order in which shearlet and wavelet features are used in the registration process. Moreover, the present algorithm is now fully coded in C, making it more efficient and portable than the MATLAB and C prototype. We demonstrate the versatility and computational efficiency of this approach by performing registration experiments with the fully-integrated C algorithm. In particular, meaningful timing studies can now be performed, to give a concrete analysis of the computational costs of the flexible feature extraction. Examples of synthetically warped and real multi-modal images are analyzed.
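    The coarse-to-fine strategy underlying such multiscale registration can be sketched without shearlets or wavelets. The toy below (a simplified stand-in, not the authors' algorithm) estimates an image translation by phase correlation on a decimated grid, then refines the residual at successively finer scales:

```python
import numpy as np

def phase_correlation_shift(ref, mov):
    """Integer translation s with mov ≈ np.roll(ref, s, axis=(0, 1)),
    found as the peak of the normalized cross-power spectrum."""
    F = np.conj(np.fft.fft2(ref)) * np.fft.fft2(mov)
    corr = np.fft.ifft2(F / (np.abs(F) + 1e-12)).real
    idx = np.unravel_index(np.argmax(corr), corr.shape)
    # unwrap circular indices into signed shifts
    return tuple(int(i) if i <= n // 2 else int(i) - n
                 for i, n in zip(idx, corr.shape))

def coarse_to_fine_shift(ref, mov, levels=3):
    """Estimate the shift on a decimated grid first, then refine the
    residual on successively finer grids (coarse-to-fine registration)."""
    shift = np.zeros(2, dtype=int)
    for lvl in reversed(range(levels)):
        f = 2 ** lvl
        unshifted = np.roll(mov, tuple(-shift), axis=(0, 1))
        shift += f * np.array(phase_correlation_shift(ref[::f, ::f],
                                                      unshifted[::f, ::f]))
    return tuple(int(s) for s in shift)
```

    This toy handles only integer circular shifts; real registration pipelines such as the one in the record add feature extraction, sub-pixel refinement, and robustness to non-translational warps.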

  17. Comparative analysis of ventricular assist devices (POLVAD and POLVAD_EXT) based on multiscale FEM model.

    PubMed

    Milenin, Andrzej; Kopernik, Magdalena

    2011-01-01

    The prosthesis, a pulsatory ventricular assist device (VAD), is made of polyurethane (PU) and biocompatible TiN deposited by the pulsed laser deposition (PLD) method. The paper discusses the numerical modelling and computer-aided design of such an artificial organ. Two types of VADs, POLVAD and POLVAD_EXT, are investigated. The main tasks and assumptions of the computer program developed are presented. The multiscale model of VAD based on the finite element method (FEM) is introduced and the analysis of the stress-strain state in macroscale for the blood chamber in both versions of VAD is shown, as well as the verification of the results calculated by applying ABAQUS, a commercial FEM code. The FEM code developed is based on a new approach to the simulation of multilayer materials obtained by using the PLD method. The model in microscale includes two components, i.e., a model of initial stresses (residual stress) caused by the deposition process and a simulation of active loadings observed in the blood chamber of POLVAD and POLVAD_EXT. The computed distributions of stresses and strains in macro- and microscales are helpful in precisely defining the regions of the blood chamber that can be identified as failure-source areas.

  18. The topology of the cosmic web in terms of persistent Betti numbers

    NASA Astrophysics Data System (ADS)

    Pranav, Pratyush; Edelsbrunner, Herbert; van de Weygaert, Rien; Vegter, Gert; Kerber, Michael; Jones, Bernard J. T.; Wintraecken, Mathijs

    2017-03-01

    We introduce a multiscale topological description of the Megaparsec web-like cosmic matter distribution. Betti numbers and topological persistence offer a powerful means of describing the rich connectivity structure of the cosmic web and of its multiscale arrangement of matter and galaxies. Emanating from algebraic topology and Morse theory, Betti numbers and persistence diagrams represent an extension and deepening of the cosmologically familiar topological genus measure and the related geometric Minkowski functionals. In addition to a description of the mathematical background, this study presents the computational procedure for computing Betti numbers and persistence diagrams for density field filtrations. The field may be computed starting from a discrete spatial distribution of galaxies or simulation particles. The main emphasis of this study concerns an extensive and systematic exploration of the imprint of different web-like morphologies and different levels of multiscale clustering in the corresponding computed Betti numbers and persistence diagrams. To this end, we use Voronoi clustering models as templates for a rich variety of web-like configurations and the fractal-like Soneira-Peebles models exemplify a range of multiscale configurations. We have identified the clear imprint of cluster nodes, filaments, walls, and voids in persistence diagrams, along with that of the nested hierarchy of structures in multiscale point distributions. We conclude by outlining the potential of persistent topology for understanding the connectivity structure of the cosmic web, in large simulations of cosmic structure formation and in the challenging context of the observed galaxy distribution in large galaxy surveys.
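    A minimal example of the persistence machinery (here for a 1-D sequence, far simpler than the 3-D density-field filtrations used in the study): the 0-dimensional persistence pairs of a sublevel-set filtration, computed with a union-find structure and the elder rule.

```python
def persistence_0d(values):
    """0-dimensional persistence of the sublevel-set filtration of a 1-D
    sequence: add points in order of increasing value and pair births and
    deaths of connected components by the elder rule."""
    n = len(values)
    parent = list(range(n))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving
            i = parent[i]
        return i
    birth = {}
    added = [False] * n
    pairs = []
    for i in sorted(range(n), key=lambda j: values[j]):
        added[i] = True
        birth[i] = values[i]
        for j in (i - 1, i + 1):                  # 1-D neighbors
            if 0 <= j < n and added[j]:
                ri, rj = find(i), find(j)
                if ri != rj:
                    # elder rule: the younger component dies at the merge
                    old, young = (ri, rj) if birth[ri] <= birth[rj] else (rj, ri)
                    if birth[young] < values[i]:  # skip zero-persistence pairs
                        pairs.append((birth[young], values[i]))
                    parent[young] = old
    for r in {find(i) for i in range(n)}:         # surviving components
        pairs.append((birth[r], float("inf")))
    return sorted(pairs)
```

    For [0, 2, 1, 3, 0.5], the local minima at 0.5 and 1 are born as separate components and die at the merging maxima 3 and 2, while the global minimum persists indefinitely: an infinite bar, the analogue of the Betti-0 count of the full space.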

  19. Self-consistent clustering analysis: an efficient multiscale scheme for inelastic heterogeneous materials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Z.; Bessa, M. A.; Liu, W.K.

    A predictive computational theory is presented for modeling complex, hierarchical materials ranging from metal alloys to polymer nanocomposites. The theory can capture complex mechanisms such as plasticity and failure that span across multiple length scales. This general multiscale material modeling theory relies on sound principles of mathematics and mechanics, and a cutting-edge reduced order modeling method named self-consistent clustering analysis (SCA) [Zeliang Liu, M.A. Bessa, Wing Kam Liu, “Self-consistent clustering analysis: An efficient multi-scale scheme for inelastic heterogeneous materials,” Comput. Methods Appl. Mech. Engrg. 306 (2016) 319–341]. SCA reduces by several orders of magnitude the computational cost of micromechanical and concurrent multiscale simulations, while retaining the microstructure information. This remarkable increase in efficiency is achieved with a data-driven clustering method. Computationally expensive operations are performed in the so-called offline stage, where degrees of freedom (DOFs) are agglomerated into clusters. The interaction tensor of these clusters is computed. In the online or predictive stage, the Lippmann-Schwinger integral equation is solved cluster-wise using a self-consistent scheme to ensure solution accuracy and avoid path dependence. To construct a concurrent multiscale model, this scheme is applied at each material point in a macroscale structure, replacing a conventional constitutive model with the average response computed from the microscale model using just the SCA online stage. A regularized damage theory is incorporated in the microscale that avoids the mesh and RVE size dependence that commonly plagues microscale damage calculations. The SCA method is illustrated with two cases: a carbon fiber reinforced polymer (CFRP) structure with the concurrent multiscale model and an application to fatigue prediction for additively manufactured metals.
For the CFRP problem, a speed-up estimated at about 43,000 is achieved by using the SCA method, as opposed to FE2, enabling the solution of an otherwise computationally intractable problem. The second example uses a crystal plasticity constitutive law and computes the fatigue potency of extrinsic microscale features such as voids. This shows that local stress and strain are captured sufficiently well by SCA. This model has been incorporated in a process-structure-properties prediction framework for process design in additive manufacturing.

  20. Three-dimensional multiscale modeling of dendritic spacing selection during Al-Si directional solidification

    DOE PAGES

    Tourret, Damien; Clarke, Amy J.; Imhoff, Seth D.; ...

    2015-05-27

    We present a three-dimensional extension of the multiscale dendritic needle network (DNN) model. This approach enables quantitative simulations of the unsteady dynamics of complex hierarchical networks in spatially extended dendritic arrays. We apply the model to directional solidification of Al-9.8 wt.%Si alloy and directly compare the model predictions with measurements from experiments with in situ x-ray imaging. The focus is on the dynamical selection of primary spacings over a range of growth velocities, and the influence of sample geometry on the selection of spacings. Simulation results show good agreement with experiments. The computationally efficient DNN model opens new avenues for investigating the dynamics of large dendritic arrays at scales relevant to solidification experiments and processes.

  1. Multiscale analysis of heart rate dynamics: entropy and time irreversibility measures.

    PubMed

    Costa, Madalena D; Peng, Chung-Kang; Goldberger, Ary L

    2008-06-01

    Cardiovascular signals are largely analyzed using traditional time and frequency domain measures. However, such measures fail to account for important properties related to multiscale organization and non-equilibrium dynamics. The complementary role of conventional signal analysis methods and emerging multiscale techniques is, therefore, an important frontier area of investigation. The key finding of this presentation is that two recently developed multiscale computational tools, multiscale entropy and multiscale time irreversibility, are able to extract information from cardiac interbeat interval time series not contained in traditional methods based on mean, variance or Fourier spectrum (two-point correlation) techniques. These new methods, with careful attention to their limitations, may be useful in diagnostics, risk stratification and detection of toxicity of cardiac drugs.
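    The multiscale entropy computation referred to above can be sketched in a few lines: coarse-grain the series at each scale, then compute sample entropy. This is a brute-force illustration with the common default parameters m = 2 and r = 0.15 times the standard deviation, not the authors' implementation:

```python
import numpy as np

def coarse_grain(x, scale):
    """Average consecutive, non-overlapping windows of length `scale`."""
    n = len(x) // scale
    return x[:n * scale].reshape(n, scale).mean(axis=1)

def sample_entropy(x, m=2, r=0.15):
    """Brute-force sample entropy: -ln(A/B) with tolerance r * std(x)."""
    x = np.asarray(x, dtype=float)
    tol = r * x.std()
    n_templ = len(x) - m
    def match_pairs(mm):
        t = np.array([x[i:i + mm] for i in range(n_templ)])
        d = np.max(np.abs(t[:, None, :] - t[None, :, :]), axis=2)  # Chebyshev
        return (np.count_nonzero(d <= tol) - n_templ) / 2.0        # drop self-matches
    a, b = match_pairs(m + 1), match_pairs(m)
    return -np.log(a / b) if a > 0 and b > 0 else float("inf")

def multiscale_entropy(x, max_scale=5, m=2, r=0.15):
    x = np.asarray(x, dtype=float)
    return [sample_entropy(coarse_grain(x, s), m, r)
            for s in range(1, max_scale + 1)]

# white noise: entropy stays positive and finite across the first few scales
mse = multiscale_entropy(np.random.default_rng(0).standard_normal(1000),
                         max_scale=3)
```

    The O(n²) pairwise comparison is fine for short interbeat series; longer records call for the usual radius-search optimizations.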

  2. Multiscale Analysis of Heart Rate Dynamics: Entropy and Time Irreversibility Measures

    PubMed Central

    Peng, Chung-Kang; Goldberger, Ary L.

    2016-01-01

    Cardiovascular signals are largely analyzed using traditional time and frequency domain measures. However, such measures fail to account for important properties related to multiscale organization and nonequilibrium dynamics. The complementary role of conventional signal analysis methods and emerging multiscale techniques is, therefore, an important frontier area of investigation. The key finding of this presentation is that two recently developed multiscale computational tools, multiscale entropy and multiscale time irreversibility, are able to extract information from cardiac interbeat interval time series not contained in traditional methods based on mean, variance or Fourier spectrum (two-point correlation) techniques. These new methods, with careful attention to their limitations, may be useful in diagnostics, risk stratification and detection of toxicity of cardiac drugs. PMID:18172763

  3. Multiscale approach to the physics of radiation damage with ions

    NASA Astrophysics Data System (ADS)

    Surdutovich, Eugene; Solov'yov, Andrey V.

    2013-04-01

    We review a multiscale approach to the physics of ion-beam cancer therapy, an approach suggested in order to understand the interplay of the large number of phenomena involved in the radiation damage scenario, which occurs over a range of temporal, spatial, and energy scales. We briefly overview its history and present the current stage of its development. The differences between the multiscale approach and other methods of understanding and assessing radiation damage are discussed, as well as its relationship to other branches of physics, chemistry and biology.

  4. Multiscale modeling of a low magnetostrictive Fe-27wt%Co-0.5wt%Cr alloy

    NASA Astrophysics Data System (ADS)

    Savary, M.; Hubert, O.; Helbert, A. L.; Baudin, T.; Batonnet, R.; Waeckerlé, T.

    2018-05-01

    The present paper deals with the improvement of a multi-scale approach describing the magneto-mechanical coupling of a Fe-27wt%Co-0.5wt%Cr alloy. The magnetostriction behavior proves very different (low vs. high magnetostriction) when this material is subjected to two different final annealing conditions after cold rolling. The numerical data obtained from the multi-scale approach are in accordance with experimental data for the high-magnetostriction material. A bi-domain structure hypothesis is employed to explain the low magnetostriction behavior, in accordance with the effect of an applied tensile stress. A modification of the multiscale approach is proposed to match this result.

  5. Computational approach on PEB process in EUV resist: multi-scale simulation

    NASA Astrophysics Data System (ADS)

    Kim, Muyoung; Moon, Junghwan; Choi, Joonmyung; Lee, Byunghoon; Jeong, Changyoung; Kim, Heebom; Cho, Maenghyo

    2017-03-01

    For decades, downsizing has been a key issue for high performance and low cost of semiconductors, and extreme ultraviolet lithography is one of the promising candidates to achieve this goal. As the predominant process in extreme ultraviolet lithography for determining resolution and sensitivity, post exposure bake has mainly been studied experimentally, but development of its photoresist has reached a breaking point because the underlying mechanism of the process remains unclear. Herein, we provide a theoretical approach to investigate the underlying mechanism of the post exposure bake process in chemically amplified resist, covering three important reactions during the process: acid generation by photo-acid generator dissociation, acid diffusion, and deprotection. Density functional theory calculations (quantum mechanical simulations) were conducted to quantitatively predict the activation energies and probabilities of the chemical reactions, and these were applied in molecular dynamics simulations to construct a reliable computational model. The overall chemical reactions were then simulated in the molecular dynamics unit cell, and the final configuration of the photoresist was used to predict the line edge roughness. The presented multiscale model unifies phenomena at both the quantum and atomic scales during the post exposure bake process, and it will be helpful for understanding critical factors affecting the performance of the resulting photoresist and for designing next-generation materials.

  6. Multiscale Simulation of Porous Ceramics Based on Movable Cellular Automaton Method

    NASA Astrophysics Data System (ADS)

    Smolin, A.; Smolin, I.; Eremina, G.; Smolina, I.

    2017-10-01

    The paper presents a model for simulating the mechanical behaviour of multiscale porous ceramics based on the movable cellular automaton method, which is a novel particle method in the computational mechanics of solids. The initial scale of the proposed approach corresponds to the characteristic size of the smallest pores in the ceramics. At this scale, we model uniaxial compression of several representative samples with an explicit account of pores of the same size but with random, unique positions in space. As a result, we get the average values of Young’s modulus and strength, as well as the parameters of the Weibull distribution of these properties at the current scale level. These data allow us to describe the material behaviour at the next scale level, where only the larger pores are considered explicitly, while the influence of small pores is included via the effective properties determined at the previous scale level. If the pore size distribution function of the material has N maxima, we need to perform computations for N - 1 levels in order to get the properties from the lowest scale up to the macroscale step by step. The proposed approach was applied to modelling zirconia ceramics with a bimodal pore size distribution. The obtained results show correct behaviour of the model sample at the macroscale.
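    The scale-by-scale transfer of effective properties described above can be caricatured as follows. All numbers, the two porosity levels, and the dilution rule E = E0(1 - p)^2 are illustrative assumptions, not the movable cellular automaton model itself; the point is the pipeline of sampling from the previous level's Weibull fit and re-fitting at the current level:

```python
import numpy as np

rng = np.random.default_rng(42)

def fit_weibull(samples):
    """Estimate Weibull (shape k, scale lam) by linear regression on the
    linearized empirical CDF: ln(-ln(1-F)) = k ln(x) - k ln(lam)."""
    x = np.sort(samples)
    f = (np.arange(1, len(x) + 1) - 0.5) / len(x)   # plotting positions
    k, c = np.polyfit(np.log(x), np.log(-np.log(1.0 - f)), 1)
    return k, np.exp(-c / k)

def effective_modulus(e0, porosity):
    # simple empirical dilution rule (an assumption): E = E0 * (1 - p)^2
    return e0 * (1.0 - porosity) ** 2

def upscale(k, lam, porosity, n_samples=200):
    """One scale level: draw moduli from the lower level's Weibull fit,
    apply this level's porosity, and re-fit Weibull for the next level."""
    e_eff = effective_modulus(lam * rng.weibull(k, n_samples), porosity)
    return fit_weibull(e_eff)

# start from an assumed Weibull fit at the finest (smallest-pore) level,
# then pass the statistics up scale by scale
k, lam = 8.0, 200.0          # shape, scale (GPa) at the finest level
for p in (0.10, 0.15):       # porosity fractions of the two coarser levels
    k, lam = upscale(k, lam, p)
```

    Each pass dilutes the scale parameter while the shape (scatter) parameter is re-estimated from the sampled ensemble, mirroring how the record propagates both the mean properties and their Weibull statistics between levels.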

  7. Progressive Fracture of Composite Structures

    NASA Technical Reports Server (NTRS)

    Chamis, Christos C.; Minnetyan, Levon

    2008-01-01

    A new approach is described for evaluating fracture in composite structures. This approach is independent of classical fracture mechanics parameters like fracture toughness. It relies on computational simulation and is programmed in a stand-alone integrated computer code. It is multiscale and multifunctional because it includes composite mechanics for the composite behavior and finite element analysis for predicting the structural response. It contains seven modules: layered composite mechanics (micro, macro, laminate), finite element analysis, an updating scheme, local fracture, global fracture, stress-based failure modes, and fracture progression. The computer code is called CODSTRAN (Composite Durability Structural ANalysis). It is used in the present paper to evaluate the global fracture of four composite shell problems and one composite built-up structure. Results show that global fracture of the composite shells and of the built-up composite structure is enhanced when internal pressure is combined with shear loads.

  8. Multiscale Materials Science - A Mathematical Approach to the Role of Defects and Uncertainty

    DTIC Science & Technology

    2016-10-28

    AFRL-AFOSR-UK-TR-2016-0034, Multiscale materials science - a mathematical approach to the role of defects and uncertainty. Claude Le Bris, ECOLE... Grant FA8655-13-1-3061 (final report form SF 298, 10/31/2016).

  9. Machine learning action parameters in lattice quantum chromodynamics

    NASA Astrophysics Data System (ADS)

    Shanahan, Phiala E.; Trewartha, Daniel; Detmold, William

    2018-05-01

    Numerical lattice quantum chromodynamics studies of the strong interaction are important in many aspects of particle and nuclear physics, and require significant computing resources to undertake. A number of proposed methods promise improved efficiency of lattice calculations, and access to regions of parameter space that are currently computationally intractable, via multi-scale action-matching approaches that necessitate parametric regression of generated lattice datasets. The applicability of machine learning to this regression task is investigated, with deep neural networks found to provide an efficient solution even in cases where approaches such as principal component analysis fail. The high information content and complex symmetries inherent in lattice QCD datasets require custom neural network layers to be introduced and present opportunities for further development.
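As a rough illustration of the regression task only (not the authors' symmetry-aware architecture), the sketch below fits a small fully connected network to recover a synthetic action coupling from per-configuration observables; the data model, network size, and training schedule are all assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in: recover an action coupling beta from a vector of
# per-configuration observables with a nonlinear beta-dependence.
n_cfg, n_obs = 512, 8
beta = rng.uniform(5.0, 6.0, n_cfg)                  # target coupling
signal = np.tanh(beta - 5.5)                         # nonlinear dependence
X = np.outer(signal, rng.normal(1.0, 0.1, n_obs)) \
    + 0.01 * rng.normal(size=(n_cfg, n_obs))         # noisy observables

# One hidden tanh layer trained by full-batch gradient descent.
W1 = 0.5 * rng.normal(size=(n_obs, 16)); b1 = np.zeros(16)
W2 = 0.5 * rng.normal(size=(16, 1));     b2 = np.zeros(1)
y, lr = beta[:, None], 0.05
for _ in range(3000):
    H = np.tanh(X @ W1 + b1)                         # forward pass
    err = (H @ W2 + b2) - y                          # residuals
    dH = (err @ W2.T) * (1.0 - H ** 2)               # backprop through tanh
    W2 -= lr * (H.T @ err) / n_cfg; b2 -= lr * err.mean(0)
    W1 -= lr * (X.T @ dH) / n_cfg;  b1 -= lr * dH.mean(0)

rmse = float(np.sqrt(np.mean(((np.tanh(X @ W1 + b1) @ W2 + b2) - y) ** 2)))
```

The point of the sketch is only the shape of the problem: many configurations in, one action parameter out, learned by regression.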

  10. A roadmap to computational social neuroscience.

    PubMed

    Tognoli, Emmanuelle; Dumas, Guillaume; Kelso, J A Scott

    2018-02-01

    To complement experimental efforts toward understanding human social interactions at both neural and behavioral levels, two computational approaches are presented: (1) a fully parameterizable mathematical model of a social partner, the Human Dynamic Clamp, which, by virtue of experimentally controlled interactions between Virtual Partners and real people, allows for emergent behaviors to be studied; and (2) a multiscale neurocomputational model of social coordination that enables exploration of social self-organization at all levels, from neuronal patterns to people interacting with each other. These complementary frameworks and the cross product of their analysis aim at understanding the fundamental principles governing social behavior.

  11. Multiscale Computational Analysis of Nitrogen and Oxygen Gas-Phase Thermochemistry in Hypersonic Flows

    NASA Astrophysics Data System (ADS)

    Bender, Jason D.

    Understanding hypersonic aerodynamics is important for the design of next-generation aerospace vehicles for space exploration, national security, and other applications. Ground-level experimental studies of hypersonic flows are difficult and expensive; thus, computational science plays a crucial role in this field. Computational fluid dynamics (CFD) simulations of extremely high-speed flows require models of chemical and thermal nonequilibrium processes, such as dissociation of diatomic molecules and vibrational energy relaxation. Current models are outdated and inadequate for advanced applications. We describe a multiscale computational study of gas-phase thermochemical processes in hypersonic flows, starting at the atomic scale and building systematically up to the continuum scale. The project was part of a larger effort centered on collaborations between aerospace scientists and computational chemists. We discuss the construction of potential energy surfaces for the N4, N2O2, and O4 systems, focusing especially on the multi-dimensional fitting problem. A new local fitting method named L-IMLS-G2 is presented and compared with a global fitting method. Then, we describe the theory of the quasiclassical trajectory (QCT) approach for modeling molecular collisions. We explain how we implemented the approach in a new parallel code for high-performance computing platforms. Results from billions of QCT simulations of high-energy N2 + N2, N2 + N, and N2 + O2 collisions are reported and analyzed. Reaction rate constants are calculated and sets of reactive trajectories are characterized at both thermal equilibrium and nonequilibrium conditions. The data shed light on fundamental mechanisms of dissociation and exchange reactions, and their coupling to internal energy transfer processes, in thermal environments typical of hypersonic flows. We discuss how the outcomes of this investigation and other related studies lay a rigorous foundation for new macroscopic models for hypersonic CFD. This research was supported by the Department of Energy Computational Science Graduate Fellowship and by the Air Force Office of Scientific Research Multidisciplinary University Research Initiative.

  12. Biomaterial science meets computational biology.

    PubMed

    Hutmacher, Dietmar W; Little, J Paige; Pettet, Graeme J; Loessner, Daniela

    2015-05-01

    There is a pressing need for a predictive tool capable of revealing a holistic understanding of fundamental elements in the normal and pathological cell physiology of organoids in order to decipher the mechanoresponse of cells. Therefore, the integration of a systems bioengineering approach into a validated mathematical model is necessary to develop a new simulation tool. Such a tool can be achieved only by combining biomaterials science with computational biology. Systems-level and multi-scale experimental data are incorporated into a single framework, thus representing both single cells and collective cell behaviour. Such a computational platform needs to be validated in order to discover key mechano-biological factors associated with cell-cell and cell-niche interactions.

  13. Evaluating metrics of local topographic position for multiscale geomorphometric analysis

    NASA Astrophysics Data System (ADS)

    Newman, D. R.; Lindsay, J. B.; Cockburn, J. M. H.

    2018-07-01

    The field of geomorphometry has increasingly moved towards the use of multiscale analytical techniques, due to the availability of fine-resolution digital elevation models (DEMs) and the inherent scale-dependency of many DEM-derived attributes such as local topographic position (LTP). LTP is useful for landform and soils mapping and numerous other environmental applications. Multiple LTP metrics have been proposed and applied in the literature; however, elevation percentile (EP) is notable for its robustness to elevation error and applicability to non-Gaussian local elevation distributions, both of which are common characteristics of DEM data sets. Multiscale LTP analysis involves the estimation of spatial patterns using a range of neighborhood sizes, traditionally achieved by applying spatial filtering techniques with varying kernel sizes. While EP can be demonstrated to provide accurate estimates of LTP, the computationally intensive method of its calculation makes it unsuited to multiscale LTP analysis, particularly at large neighborhood sizes or with fine-resolution DEMs. This research assessed the suitability of three LTP metrics for multiscale terrain characterization by quantifying their computational efficiency and by comparing their ability to approximate EP spatial patterns under varying topographic conditions. The tested LTP metrics included: deviation from mean elevation (DEV), percent elevation range (PER), and the novel relative topographic position (RTP) index. The results demonstrated that DEV, calculated using the integral image technique, offers fast and scale-invariant computation. DEV spatial patterns were strongly correlated with EP (r² range of 0.699 to 0.967) under all tested topographic conditions. RTP was also a strong predictor of EP (r² range of 0.594 to 0.917). PER was the weakest predictor of EP (r² range of 0.031 to 0.801) without offering a substantial improvement in computational efficiency over RTP. PER was therefore determined to be unsuitable for most multiscale applications. It was concluded that the scale-invariant property offered by the integral image used by the DEV method counters the minor losses in robustness compared to EP, making DEV the optimal LTP metric for multiscale applications.
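The scale-invariant DEV computation referred to above rests on the integral-image (summed-area-table) trick: once two cumulative sums are built, the mean and standard deviation of any square window cost a handful of lookups, independent of window size. A minimal sketch, assuming a NumPy array DEM and square windows:

```python
import numpy as np

def dev(dem, radius):
    """Deviation from mean elevation: (z - mean) / std over a square
    (2*radius + 1)^2 window, with window sums taken from integral images
    so the per-cell cost is independent of the window size."""
    z = dem.astype(float)
    # summed-area tables of z and z^2, zero-padded on top/left so any
    # window sum is four table lookups
    I1 = np.pad(z, ((1, 0), (1, 0))).cumsum(0).cumsum(1)
    I2 = np.pad(z ** 2, ((1, 0), (1, 0))).cumsum(0).cumsum(1)
    rows, cols = z.shape
    r = radius
    out = np.full_like(z, np.nan)      # borders left undefined
    for i in range(r, rows - r):
        for j in range(r, cols - r):
            top, bot, left, right = i - r, i + r + 1, j - r, j + r + 1
            n = (bot - top) * (right - left)
            s1 = I1[bot, right] - I1[top, right] - I1[bot, left] + I1[top, left]
            s2 = I2[bot, right] - I2[top, right] - I2[bot, left] + I2[top, left]
            mean = s1 / n
            std = np.sqrt(max(s2 / n - mean ** 2, 0.0))
            out[i, j] = (z[i, j] - mean) / std if std > 0 else 0.0
    return out
```

Because the window statistics cost O(1) per cell, sweeping the radius across many scales adds only the per-scale pass itself, which is what makes DEV attractive for multiscale analysis.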

  14. A multiscale climate emulator for long-term morphodynamics (MUSCLE-morpho)

    NASA Astrophysics Data System (ADS)

    Antolínez, José Antonio A.; Méndez, Fernando J.; Camus, Paula; Vitousek, Sean; González, E. Mauricio; Ruggiero, Peter; Barnard, Patrick

    2016-01-01

    Interest in understanding long-term coastal morphodynamics has recently increased as climate change impacts become perceptible and accelerated. Multiscale, behavior-oriented and process-based models, or hybrids of the two, are typically applied with deterministic approaches which require considerable computational effort. In order to reduce the computational cost of modeling large spatial and temporal scales, input reduction and morphological acceleration techniques have been developed. Here we introduce a general framework for reducing dimensionality of wave-driver inputs to morphodynamic models. The proposed framework seeks to account for dependencies with global atmospheric circulation fields and deals simultaneously with seasonality, interannual variability, long-term trends, and autocorrelation of wave height, wave period, and wave direction. The model is also able to reproduce future wave climate time series accounting for possible changes in the global climate system. An application of long-term shoreline evolution is presented by comparing the performance of the real and the simulated wave climate using a one-line model.
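The input-reduction step described above can be illustrated with a plain PCA of a synthetic wave climate. This is a generic sketch, not the MUSCLE-morpho implementation; the variable names, data model, and 90% variance cutoff are assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic hourly wave climate (significant wave height, peak period,
# direction); the shared seasonal cycle is what the reduction should find.
n = 24 * 365
t = np.arange(n)
season = np.sin(2.0 * np.pi * t / n)
hs = 1.5 + 0.5 * season + 0.2 * rng.normal(size=n)    # m
tp = 8.0 + 1.0 * season + 0.5 * rng.normal(size=n)    # s
dr = 270.0 + 20.0 * rng.normal(size=n)                # deg

X = np.column_stack([hs, tp, dr])
Xc = (X - X.mean(0)) / X.std(0)        # standardize each wave parameter
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = S ** 2 / np.sum(S ** 2)    # variance fraction per component
k = int(np.searchsorted(np.cumsum(explained), 0.90)) + 1
reduced = U[:, :k] * S[:k]             # low-dimensional wave-driver inputs
```

The reduced series, rather than the full (Hs, Tp, direction) record, would then drive the morphodynamic model, which is the source of the computational saving.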

  15. Modelling strategies to predict the multi-scale effects of rural land management change

    NASA Astrophysics Data System (ADS)

    Bulygina, N.; Ballard, C. E.; Jackson, B. M.; McIntyre, N.; Marshall, M.; Reynolds, B.; Wheater, H. S.

    2011-12-01

    Changes to the rural landscape due to agricultural land management are ubiquitous, yet predicting the multi-scale effects of land management change on hydrological response remains an important scientific challenge. Much empirical research has been of little generic value due to inadequate design and funding of monitoring programmes, while the modelling issues challenge the capability of data-based, conceptual and physics-based modelling approaches. In this paper we report on a major UK research programme, motivated by a national need to quantify effects of agricultural intensification on flood risk. Working with a consortium of farmers in upland Wales, a multi-scale experimental programme (from experimental plots to 2nd order catchments) was developed to address issues of upland agricultural intensification. This provided data support for a multi-scale modelling programme, in which highly detailed physics-based models were conditioned on the experimental data and used to explore effects of potential field-scale interventions. A meta-modelling strategy was developed to represent detailed modelling in a computationally-efficient manner for catchment-scale simulation; this allowed catchment-scale quantification of potential management options. For more general application to data-sparse areas, alternative approaches were needed. Physics-based models were developed for a range of upland management problems, including the restoration of drained peatlands, afforestation, and changing grazing practices. Their performance was explored using literature and surrogate data; although subject to high levels of uncertainty, important insights were obtained, of practical relevance to management decisions. In parallel, regionalised conceptual modelling was used to explore the potential of indices of catchment response, conditioned on readily-available catchment characteristics, to represent ungauged catchments subject to land management change. Although based in part on speculative relationships, significant predictive power was derived from this approach. Finally, using a formal Bayesian procedure, these different sources of information were combined with local flow data in a catchment-scale conceptual model application, i.e. using small-scale physical properties, regionalised signatures of flow and available flow measurements.

  16. Multiscale Particle-Based Modeling of Flowing Platelets in Blood Plasma Using Dissipative Particle Dynamics and Coarse Grained Molecular Dynamics

    PubMed Central

    Zhang, Peng; Gao, Chao; Zhang, Na; Slepian, Marvin J.; Deng, Yuefan; Bluestein, Danny

    2014-01-01

    We developed a multiscale particle-based model of platelets, to study the transport dynamics of shear stresses between the surrounding fluid and the platelet membrane. This model facilitates a more accurate prediction of the activation potential of platelets by viscous shear stresses, one of the major mechanisms leading to thrombus formation in cardiovascular diseases and in prosthetic cardiovascular devices. The interface of the model couples coarse-grained molecular dynamics (CGMD) with dissipative particle dynamics (DPD). The CGMD handles individual platelets while the DPD models the macroscopic transport of blood plasma in vessels. A hybrid force field is formulated for establishing a functional interface between the platelet membrane and the surrounding fluid, in which the microstructural changes of platelets may respond to the extracellular viscous shear stresses transferred to them. The interaction between the two systems preserves dynamic properties of the flowing platelets, such as the flipping motion. Using this multiscale particle-based approach, we have further studied the effects of the platelet elastic modulus by comparing the action of the flow-induced shear stresses on rigid and deformable platelet models. The results indicate that neglecting the platelet deformability may overestimate the stress on the platelet membrane, which in turn may lead to erroneous predictions of the platelet activation under viscous shear flow conditions. This particle-based fluid-structure interaction multiscale model offers for the first time a computationally feasible approach for simulating deformable platelets interacting with viscous blood flow, aimed at predicting flow-induced platelet activation by using a highly resolved mapping of the stress distribution on the platelet membrane under dynamic flow conditions. PMID:25530818

  17. Statistical CT noise reduction with multiscale decomposition and penalized weighted least squares in the projection domain

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tang Shaojie; Tang Xiangyang; School of Automation, Xi'an University of Posts and Telecommunications, Xi'an, Shaanxi 710121

    2012-09-15

    Purpose: The suppression of noise in x-ray computed tomography (CT) imaging is of clinical relevance for diagnostic image quality and the potential for radiation dose saving. Toward this purpose, statistical noise reduction methods in either the image or projection domain have been proposed, which employ a multiscale decomposition to enhance the performance of noise suppression while maintaining image sharpness. Recognizing the advantages of noise suppression in the projection domain, the authors propose a projection domain multiscale penalized weighted least squares (PWLS) method, in which the angular sampling rate is explicitly taken into consideration to account for the possible variation of inter-view sampling rate in advanced clinical or preclinical applications. Methods: The projection domain multiscale PWLS method is derived by converting an isotropic diffusion partial differential equation in the image domain into the projection domain, wherein a multiscale decomposition is carried out. With adoption of the Markov random field or soft thresholding objective function, the projection domain multiscale PWLS method deals with noise at each scale. To compensate for the degradation in image sharpness caused by the projection domain multiscale PWLS method, an edge enhancement is carried out following the noise reduction. The performance of the proposed method is experimentally evaluated and verified using the projection data simulated by computer and acquired by a CT scanner. Results: The preliminary results show that the proposed projection domain multiscale PWLS method outperforms the projection domain single-scale PWLS method and the image domain multiscale anisotropic diffusion method in noise reduction. In addition, the proposed method can preserve image sharpness very well while the occurrence of 'salt-and-pepper' noise and mosaic artifacts can be avoided. Conclusions: Since the inter-view sampling rate is taken into account in the projection domain multiscale decomposition, the proposed method is anticipated to be useful in advanced clinical and preclinical applications where the inter-view sampling rate varies.
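The scale-by-scale soft thresholding at the core of such methods can be sketched on a single detector row using a Haar pyramid. This is a generic stand-in with an illustrative threshold and depth, not the authors' PWLS formulation with sampling-rate weighting:

```python
import numpy as np

def soft(x, t):
    """Soft-thresholding operator: minimizer of 0.5*(y - x)^2 + t*|y|."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def denoise_projection(row, levels=3, thresh=0.3):
    """Haar-pyramid noise suppression of one detector row: decompose,
    soft-threshold the detail coefficients at every scale, reconstruct.
    Row length must be divisible by 2**levels."""
    details, approx = [], np.asarray(row, float)
    for _ in range(levels):
        coarse = 0.5 * (approx[0::2] + approx[1::2])
        detail = 0.5 * (approx[0::2] - approx[1::2])
        details.append(soft(detail, thresh))
        approx = coarse
    for detail in reversed(details):           # inverse transform
        up = np.empty(2 * detail.size)
        up[0::2] = approx + detail
        up[1::2] = approx - detail
        approx = up
    return approx
```

Thresholding the detail bands suppresses noise at each scale while the coarse approximation, which carries most of the anatomy, passes through untouched.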

  18. Multi-scale heat and mass transfer modelling of cell and tissue cryopreservation

    PubMed Central

    Xu, Feng; Moon, Sangjun; Zhang, Xiaohui; Shao, Lei; Song, Young Seok; Demirci, Utkan

    2010-01-01

    Cells and tissues undergo complex physical processes during cryopreservation. Understanding the underlying physical phenomena is critical to improve current cryopreservation methods and to develop new techniques. Here, we describe multi-scale approaches for modelling cell and tissue cryopreservation including heat transfer at macroscale level, crystallization, cell volume change and mass transport across cell membranes at microscale level. These multi-scale approaches allow us to study cell and tissue cryopreservation. PMID:20047939
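The microscale mass-transport component mentioned above is classically modelled as osmotic water efflux across the cell membrane. A minimal forward-Euler sketch, in arbitrary units and with illustrative parameter values:

```python
def cell_dehydration(V0=1.0, Vb=0.3, M_out=2.0, Lp=0.05, A=5.0,
                     RT=22.4, dt=0.01, n_steps=2000):
    """Osmotic water efflux from a single cell in a hypertonic (partially
    frozen) medium: dV/dt = -Lp*A*RT*(M_out - M_in), where the internal
    osmolality M_in rises as water leaves. Arbitrary units throughout."""
    n_solute = 1.0 * (V0 - Vb)      # solute content at isotonic M = 1
    V = V0
    for _ in range(n_steps):
        M_in = n_solute / (V - Vb)  # Vb is the osmotically inactive volume
        V -= Lp * A * RT * (M_out - M_in) * dt
    return V
```

The cell shrinks until the internal osmolality matches the external one, i.e. V approaches Vb + n_solute/M_out; the cooling-rate dependence of this equilibration underlies the classic trade-off between intracellular ice formation and solution-effects injury.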

  19. Status and future of MUSE

    NASA Astrophysics Data System (ADS)

    Harfst, S.; Portegies Zwart, S.; McMillan, S.

    2008-12-01

    We present MUSE, a software framework for combining existing computational tools from different astrophysical domains into a single multi-physics, multi-scale application. MUSE facilitates the coupling of existing codes written in different languages by providing inter-language tools and by specifying an interface between each module and the framework that represents a balance between generality and computational efficiency. This approach allows scientists to use combinations of codes to solve highly-coupled problems without the need to write new codes for other domains or significantly alter their existing codes. MUSE currently incorporates the domains of stellar dynamics, stellar evolution and stellar hydrodynamics for studying generalized stellar systems. We have now reached a "Noah's Ark" milestone, with (at least) two available numerical solvers for each domain. MUSE can treat multi-scale and multi-physics systems in which the time- and size-scales are well separated, like simulating the evolution of planetary systems, small stellar associations, dense stellar clusters, galaxies and galactic nuclei. In this paper we describe two examples calculated using MUSE: the merger of two galaxies and an N-body simulation with live stellar evolution. In addition, we demonstrate an implementation of MUSE on a distributed computer which may also include special-purpose hardware, such as GRAPEs or GPUs, to accelerate computations. The current MUSE code base is publicly available as open source at http://muse.li.

  20. Module-based multiscale simulation of angiogenesis in skeletal muscle

    PubMed Central

    2011-01-01

    Background Mathematical modeling of angiogenesis has been gaining momentum as a means to shed new light on the biological complexity underlying blood vessel growth. A variety of computational models have been developed, each focusing on different aspects of the angiogenesis process and occurring at different biological scales, ranging from the molecular to the tissue levels. Integration of models at different scales is a challenging and currently unsolved problem. Results We present an object-oriented module-based computational integration strategy to build a multiscale model of angiogenesis that links currently available models. As an example case, we use this approach to integrate modules representing microvascular blood flow, oxygen transport, vascular endothelial growth factor transport and endothelial cell behavior (sensing, migration and proliferation). Modeling methodologies in these modules include algebraic equations, partial differential equations and agent-based models with complex logical rules. We apply this integrated model to simulate exercise-induced angiogenesis in skeletal muscle. The simulation results compare capillary growth patterns between different exercise conditions for a single bout of exercise. Results demonstrate how the computational infrastructure can effectively integrate multiple modules by coordinating their connectivity and data exchange. Model parameterization offers simulation flexibility and a platform for performing sensitivity analysis. Conclusions This systems biology strategy can be applied to larger scale integration of computational models of angiogenesis in skeletal muscle, or other complex processes in other tissues under physiological and pathological conditions. PMID:21463529
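The module-based coupling strategy can be caricatured in a few classes: each module exposes a step() method, and a coordinator advances them in sequence while they exchange data through a shared state. All module rules below are toy placeholders, not the paper's models:

```python
# Schematic of the module-coupling pattern used to integrate models at
# different scales; the biology here is deliberately oversimplified.
class BloodFlowModule:
    def step(self, state):
        # placeholder algebraic rule: oxygen delivery tracks capillary count
        state["oxygen"] = 0.1 * state["capillaries"]
        return state

class VEGFModule:
    def step(self, state):
        # placeholder transport rule: hypoxia drives growth-factor release
        state["vegf"] = max(0.0, 1.0 - state["oxygen"])
        return state

class EndothelialCellModule:
    def step(self, state):
        # placeholder agent rule: sprout a capillary when VEGF is high
        if state["vegf"] > 0.5:
            state["capillaries"] += 1
        return state

def simulate(n_steps):
    """Coordinator: run the modules in order, passing the shared state."""
    state = {"capillaries": 2, "oxygen": 0.0, "vegf": 0.0}
    modules = [BloodFlowModule(), VEGFModule(), EndothelialCellModule()]
    history = []
    for _ in range(n_steps):
        for m in modules:
            state = m.step(state)
        history.append(dict(state))
    return history
```

The design point is the one made in the abstract: because modules only touch the shared state through a fixed interface, an algebraic rule, a PDE solver, or an agent-based model can each sit behind step() interchangeably.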

  1. On a sparse pressure-flow rate condensation of rigid circulation models

    PubMed Central

    Schiavazzi, D. E.; Hsia, T. Y.; Marsden, A. L.

    2015-01-01

    Cardiovascular simulation has shown potential value in clinical decision-making, providing a framework to assess changes in hemodynamics produced by physiological and surgical alterations. State-of-the-art predictions are provided by deterministic multiscale numerical approaches coupling 3D finite element Navier-Stokes simulations to lumped parameter circulation models governed by ODEs. Development of next-generation stochastic multiscale models whose parameters can be learned from available clinical data under uncertainty constitutes a research challenge made more difficult by the high computational cost typically associated with the solution of these models. We present a methodology for constructing reduced representations that condense the behavior of 3D anatomical models using outlet pressure-flow polynomial surrogates, based on multiscale model solutions spanning several heart cycles. Relevance vector machine regression is compared with maximum likelihood estimation, showing that sparse pressure/flow rate approximations offer superior performance in producing working surrogate models to be included in lumped circulation networks. Sensitivities of outlet flow rates are also quantified through a Sobol’ decomposition of their total variance encoded in the orthogonal polynomial expansion. Finally, we show that augmented lumped parameter models including the proposed surrogates accurately reproduce the response of multiscale models they were derived from. In particular, results are presented for models of the coronary circulation with closed loop boundary conditions and the abdominal aorta with open loop boundary conditions. PMID:26671219
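The condensation idea, fitting an outlet pressure-flow polynomial to multiscale solution samples and then sparsifying it, can be sketched as follows. Ordinary least squares in a Legendre basis plus coefficient thresholding stands in for the relevance vector machine; the data and thresholds are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical training pairs: outlet pressure (mmHg) vs flow rate (mL/s)
# sampled from a detailed 3D simulation over several cardiac cycles.
P = rng.uniform(60.0, 120.0, 200)
Q = 0.02 * P + 1e-4 * (P - 90.0) ** 2 + rng.normal(0.0, 0.02, 200)

# Dense least-squares fit in an orthogonal (Legendre) basis...
x = (P - 90.0) / 30.0                  # map pressures into [-1, 1]
deg = 6
V = np.polynomial.legendre.legvander(x, deg)
coef, *_ = np.linalg.lstsq(V, Q, rcond=None)

# ...then a crude sparsification by coefficient thresholding (a stand-in
# for the relevance vector machine regression used in the paper).
sparse_coef = np.where(np.abs(coef) > 0.01, coef, 0.0)

def surrogate(p):
    """Condensed pressure-flow relation for a lumped-parameter network."""
    return np.polynomial.legendre.legval((p - 90.0) / 30.0, sparse_coef)
```

The surrogate replaces repeated 3D solves inside the lumped circulation network with cheap polynomial evaluations, which is where the computational saving comes from.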

  2. Dynamical Approach Study of Spurious Numerics in Nonlinear Computations

    NASA Technical Reports Server (NTRS)

    Yee, H. C.; Mansour, Nagi (Technical Monitor)

    2002-01-01

    The last two decades have been an era when computation is ahead of analysis and when very large scale practical computations are increasingly used in poorly understood multiscale complex nonlinear physical problems and non-traditional fields. Ensuring a higher level of confidence in the predictability and reliability (PAR) of these numerical simulations could play a major role in furthering the design, understanding, affordability and safety of our next generation air and space transportation systems, and systems for planetary and atmospheric sciences, and in understanding the evolution and origin of life. The need to guarantee PAR becomes acute when computations offer the ONLY way of solving these types of data-limited problems. Employing theory from nonlinear dynamical systems, some building blocks to ensure a higher level of confidence in PAR of numerical simulations have been revealed by the author and world expert collaborators in relevant fields. Five building blocks with supporting numerical examples were discussed. The next step is to utilize knowledge gained by including nonlinear dynamics, bifurcation and chaos theories as an integral part of the numerical process. The third step is to design integrated criteria for reliable and accurate algorithms that cater to the different multiscale nonlinear physics. This includes but is not limited to the construction of appropriate adaptive spatial and temporal discretizations that are suitable for the underlying governing equations. In addition, a multiresolution wavelets approach for adaptive numerical dissipation/filter controls for high speed turbulence, acoustics and combustion simulations will be sought. These steps are cornerstones for guarding against spurious numerical solutions that are solutions of the discretized counterparts but are not solutions of the underlying governing equations.

  3. The Structure of Borders in a Small World

    PubMed Central

    Thiemann, Christian; Theis, Fabian; Grady, Daniel; Brune, Rafael; Brockmann, Dirk

    2010-01-01

    Territorial subdivisions and geographic borders are essential for understanding phenomena in sociology, political science, history, and economics. They influence the interregional flow of information and cross-border trade and affect the diffusion of innovation and technology. However, it is unclear if existing administrative subdivisions that typically evolved decades ago still reflect the most plausible organizational structure of today. The complexity of modern human communication, the ease of long-distance movement, and increased interaction across political borders complicate the operational definition and assessment of geographic borders that optimally reflect the multi-scale nature of today's human connectivity patterns. What border structures emerge directly from the interplay of scales in human interactions is an open question. Based on a massive proxy dataset, we analyze a multi-scale human mobility network and compute effective geographic borders inherent to human mobility patterns in the United States. We propose two computational techniques for extracting these borders and for quantifying their strength. We find that effective borders only partially overlap with existing administrative borders, and show that some of the strongest mobility borders exist in unexpected regions. We show that the observed structures cannot be generated by gravity models for human traffic. Finally, we introduce the concept of link significance that clarifies the observed structure of effective borders. Our approach represents a novel type of quantitative, comparative analysis framework for spatially embedded multi-scale interaction networks in general and may yield important insight into a multitude of spatiotemporal phenomena generated by human activity. PMID:21124970

  4. The structure of borders in a small world.

    PubMed

    Thiemann, Christian; Theis, Fabian; Grady, Daniel; Brune, Rafael; Brockmann, Dirk

    2010-11-18

    Territorial subdivisions and geographic borders are essential for understanding phenomena in sociology, political science, history, and economics. They influence the interregional flow of information and cross-border trade and affect the diffusion of innovation and technology. However, it is unclear if existing administrative subdivisions that typically evolved decades ago still reflect the most plausible organizational structure of today. The complexity of modern human communication, the ease of long-distance movement, and increased interaction across political borders complicate the operational definition and assessment of geographic borders that optimally reflect the multi-scale nature of today's human connectivity patterns. What border structures emerge directly from the interplay of scales in human interactions is an open question. Based on a massive proxy dataset, we analyze a multi-scale human mobility network and compute effective geographic borders inherent to human mobility patterns in the United States. We propose two computational techniques for extracting these borders and for quantifying their strength. We find that effective borders only partially overlap with existing administrative borders, and show that some of the strongest mobility borders exist in unexpected regions. We show that the observed structures cannot be generated by gravity models for human traffic. Finally, we introduce the concept of link significance that clarifies the observed structure of effective borders. Our approach represents a novel type of quantitative, comparative analysis framework for spatially embedded multi-scale interaction networks in general and may yield important insight into a multitude of spatiotemporal phenomena generated by human activity.

  5. Modeling Materials: Design for Planetary Entry, Electric Aircraft, and Beyond

    NASA Technical Reports Server (NTRS)

    Thompson, Alexander; Lawson, John W.

    2014-01-01

    NASA missions push the limits of what is possible. The development of high-performance materials must keep pace with the agency's demanding, cutting-edge applications. Researchers at NASA's Ames Research Center are performing multiscale computational modeling to accelerate development times and further the design of next-generation aerospace materials. Multiscale modeling combines several computationally intensive techniques ranging from the atomic level to the macroscale, passing output from one level as input to the next level. These methods are applicable to a wide variety of materials systems. For example: (a) ultra-high-temperature ceramics for hypersonic aircraft: we utilized the full range of multiscale modeling to characterize thermal protection materials for faster, safer air- and spacecraft; (b) planetary entry heat shields for space vehicles: we computed thermal and mechanical properties of ablative composites by combining several methods, from atomistic simulations to macroscale computations; (c) advanced batteries for electric aircraft: we performed large-scale molecular dynamics simulations of advanced electrolytes for ultra-high-energy capacity batteries to enable long-distance electric aircraft service; and (d) shape-memory alloys for high-efficiency aircraft: we used high-fidelity electronic structure calculations to determine phase diagrams in shape-memory transformations. Advances in high-performance computing have been critical to the development of multiscale materials modeling. We used nearly one million processor hours on NASA's Pleiades supercomputer to characterize electrolytes with a fidelity that would be otherwise impossible. For this and other projects, Pleiades enables us to push the physics and accuracy of our calculations to new levels.

  6. Novel Multiscale Modeling Tool Applied to Pseudomonas aeruginosa Biofilm Formation

    PubMed Central

    Biggs, Matthew B.; Papin, Jason A.

    2013-01-01

    Multiscale modeling is used to represent biological systems with increasing frequency and success. Multiscale models are often hybrids of different modeling frameworks and programming languages. We present the MATLAB-NetLogo extension (MatNet) as a novel tool for multiscale modeling. We demonstrate the utility of the tool with a multiscale model of Pseudomonas aeruginosa biofilm formation that incorporates both an agent-based model (ABM) and constraint-based metabolic modeling. The hybrid model correctly recapitulates oxygen-limited biofilm metabolic activity and predicts increased growth rate via anaerobic respiration with the addition of nitrate to the growth media. In addition, a genome-wide survey of metabolic mutants and biofilm formation exemplifies the powerful analyses that are enabled by this computational modeling tool. PMID:24147108

  7. Novel multiscale modeling tool applied to Pseudomonas aeruginosa biofilm formation.

    PubMed

    Biggs, Matthew B; Papin, Jason A

    2013-01-01

    Multiscale modeling is used to represent biological systems with increasing frequency and success. Multiscale models are often hybrids of different modeling frameworks and programming languages. We present the MATLAB-NetLogo extension (MatNet) as a novel tool for multiscale modeling. We demonstrate the utility of the tool with a multiscale model of Pseudomonas aeruginosa biofilm formation that incorporates both an agent-based model (ABM) and constraint-based metabolic modeling. The hybrid model correctly recapitulates oxygen-limited biofilm metabolic activity and predicts increased growth rate via anaerobic respiration with the addition of nitrate to the growth media. In addition, a genome-wide survey of metabolic mutants and biofilm formation exemplifies the powerful analyses that are enabled by this computational modeling tool.

  8. Multiscale modeling of a rectifying bipolar nanopore: Comparing Poisson-Nernst-Planck to Monte Carlo

    NASA Astrophysics Data System (ADS)

    Matejczyk, Bartłomiej; Valiskó, Mónika; Wolfram, Marie-Therese; Pietschmann, Jan-Frederik; Boda, Dezső

    2017-03-01

    In the framework of a multiscale modeling approach, we present a systematic study of a bipolar rectifying nanopore using a continuum and a particle simulation method. The common ground in the two methods is the application of the Nernst-Planck (NP) equation to compute ion transport in the framework of the implicit-water electrolyte model. The difference is that the Poisson-Boltzmann theory is used in the Poisson-Nernst-Planck (PNP) approach, while the Local Equilibrium Monte Carlo (LEMC) method is used in the particle simulation approach (NP+LEMC) to relate the concentration profile to the electrochemical potential profile. Since we consider a bipolar pore which is short and narrow, we perform simulations using two-dimensional PNP. In addition, results of a non-linear version of PNP that takes crowding of ions into account are shown. We observe that the mean field approximation applied in PNP is appropriate to reproduce the basic behavior of the bipolar nanopore (e.g., rectification) for varying parameters of the system (voltage, surface charge, electrolyte concentration, and pore radius). We present current data that characterize the nanopore's behavior as a device, as well as concentration, electrical potential, and electrochemical potential profiles.
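As a toy counterpart to the continuum half of the comparison, the sketch below solves the steady one-dimensional Nernst-Planck (drift-diffusion) equation c'' + v c' = 0 with a constant nondimensional field v by central finite differences. This is an assumption-laden 1D stand-in, not the authors' two-dimensional PNP solver, but the numerical profile can be checked against the exact solution c(x) = A + B e^(-v x).

```python
import numpy as np

def steady_np_profile(v, n=101, c_left=1.0, c_right=0.0):
    """Steady 1D Nernst-Planck with a constant nondimensional field v:
    c'' + v c' = 0 on [0, 1], Dirichlet boundaries, D = 1."""
    h = 1.0 / (n - 1)
    A = np.zeros((n, n))
    b = np.zeros(n)
    A[0, 0] = A[-1, -1] = 1.0          # boundary conditions
    b[0], b[-1] = c_left, c_right
    for i in range(1, n - 1):
        # central differences for c'' and c'
        A[i, i - 1] = 1.0 / h**2 - v / (2 * h)
        A[i, i]     = -2.0 / h**2
        A[i, i + 1] = 1.0 / h**2 + v / (2 * h)
    return np.linalg.solve(A, b)

c = steady_np_profile(v=2.0)
# exact solution at x = 0.5 for v = 2, c(0) = 1, c(1) = 0:
analytic_mid = 1.0 / (np.e + 1.0)
```

The second-order scheme reproduces the exponential concentration profile that drift against diffusion produces; this is the building block the PNP and NP+LEMC approaches share.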

  9. Multiscale modeling of a rectifying bipolar nanopore: Comparing Poisson-Nernst-Planck to Monte Carlo.

    PubMed

    Matejczyk, Bartłomiej; Valiskó, Mónika; Wolfram, Marie-Therese; Pietschmann, Jan-Frederik; Boda, Dezső

    2017-03-28

    In the framework of a multiscale modeling approach, we present a systematic study of a bipolar rectifying nanopore using a continuum and a particle simulation method. The common ground in the two methods is the application of the Nernst-Planck (NP) equation to compute ion transport in the framework of the implicit-water electrolyte model. The difference is that the Poisson-Boltzmann theory is used in the Poisson-Nernst-Planck (PNP) approach, while the Local Equilibrium Monte Carlo (LEMC) method is used in the particle simulation approach (NP+LEMC) to relate the concentration profile to the electrochemical potential profile. Since we consider a bipolar pore which is short and narrow, we perform simulations using two-dimensional PNP. In addition, results of a non-linear version of PNP that takes crowding of ions into account are shown. We observe that the mean field approximation applied in PNP is appropriate to reproduce the basic behavior of the bipolar nanopore (e.g., rectification) for varying parameters of the system (voltage, surface charge, electrolyte concentration, and pore radius). We present current data that characterize the nanopore's behavior as a device, as well as concentration, electrical potential, and electrochemical potential profiles.

  10. Dual-scale Galerkin methods for Darcy flow

    NASA Astrophysics Data System (ADS)

    Wang, Guoyin; Scovazzi, Guglielmo; Nouveau, Léo; Kees, Christopher E.; Rossi, Simone; Colomés, Oriol; Main, Alex

    2018-02-01

    The discontinuous Galerkin (DG) method has found widespread application in elliptic problems with rough coefficients, of which the Darcy flow equations are a prototypical example. One of the long-standing issues of DG approximations is the overall computational cost, and many different strategies have been proposed, such as the variational multiscale DG method, the hybridizable DG method, the multiscale DG method, the embedded DG method, and the Enriched Galerkin method. In this work, we propose a mixed dual-scale Galerkin method, in which the degrees of freedom of a less computationally expensive coarse-scale approximation are linked to the degrees of freedom of a base DG approximation. We show that the proposed approach always attains accuracy similar to or better than that of the base DG method, with a considerable reduction in computational cost. For the specific definition of the coarse-scale space, we consider Raviart-Thomas finite elements for the mass flux and piecewise-linear continuous finite elements for the pressure. We provide a complete analysis of stability and convergence of the proposed method, in addition to a study on its conservation and consistency properties. We also present a battery of numerical tests to verify the results of the analysis, and evaluate a number of possible variations, such as using piecewise-linear continuous finite elements for the coarse-scale mass fluxes.

  11. Invasion emerges from cancer cell adaptation to competitive microenvironments: Quantitative predictions from multiscale mathematical models

    PubMed Central

    Rejniak, Katarzyna A.; Gerlee, Philip

    2013-01-01

    Summary In this review we summarize our recent efforts using mathematical modeling and computation to simulate cancer invasion, with a special emphasis on the tumor microenvironment. We consider cancer progression as a complex multiscale process and approach it with three single-cell based mathematical models that examine the interactions between tumor microenvironment and cancer cells at several scales. The models exploit distinct mathematical and computational techniques, yet they share core elements and can be compared and/or related to each other. The overall aim of using mathematical models is to uncover the fundamental mechanisms that lend cancer progression its direction towards invasion and metastasis. The models effectively simulate various modes of cancer cell adaptation to the microenvironment in a growing tumor. All three point to a general mechanism underlying cancer invasion: competition for adaptation between distinct cancer cell phenotypes, driven by a tumor microenvironment with scarce resources. These theoretical predictions pose an intriguing experimental challenge: test the hypothesis that invasion is an emergent property of cancer cell populations adapting to selective microenvironment pressure, rather than the culmination of cancer progression producing cells with the “invasive phenotype”. In broader terms, we propose that fundamental insights into cancer can be achieved by experimentation interacting with theoretical frameworks provided by computational and mathematical modeling. PMID:18524624

  12. Towards designing an optical-flow based colonoscopy tracking algorithm: a comparative study

    NASA Astrophysics Data System (ADS)

    Liu, Jianfei; Subramanian, Kalpathi R.; Yoo, Terry S.

    2013-03-01

    Automatic co-alignment of optical and virtual colonoscopy images can supplement traditional endoscopic procedures by providing more complete information of clinical value to the gastroenterologist. In this work, we present a comparative analysis of our optical flow based technique for colonoscopy tracking, in relation to current state-of-the-art methods, in terms of tracking accuracy, system stability, and computational efficiency. Our optical-flow based colonoscopy tracking algorithm starts with computing multi-scale dense and sparse optical flow fields to measure image displacements. Camera motion parameters are then determined from the optical flow fields by employing a Focus of Expansion (FOE) constrained egomotion estimation scheme. We analyze the design choices involved in the three major components of our algorithm: dense optical flow, sparse optical flow, and egomotion estimation. Brox's optical flow method [1], due to its high accuracy, was used to compare and evaluate our multi-scale dense optical flow scheme. SIFT [6] and Harris-affine [7] features were used to assess the accuracy of the multi-scale sparse optical flow, because of their wide use in tracking applications; the FOE-constrained egomotion estimation was compared with collinear [2], image deformation [10] and image derivative [4] based egomotion estimation methods, to understand the stability of our tracking system. Two virtual colonoscopy (VC) image sequences were used in the study, since the exact camera parameters (for each frame) were known; dense optical flow results indicated that Brox's method was superior to multi-scale dense optical flow in estimating camera rotational velocities, but the final tracking errors were comparable, viz., 6 mm vs. 8 mm after the VC camera traveled 110 mm. Our approach was computationally more efficient, averaging 7.2 sec. vs. 38 sec. per frame. SIFT and Harris-affine features resulted in tracking errors of up to 70 mm, while our sparse optical flow error was 6 mm. The comparison among egomotion estimation algorithms showed that our FOE-constrained egomotion estimation method achieved the optimal balance between tracking accuracy and robustness. The comparative study demonstrated that our optical-flow based colonoscopy tracking algorithm maintains good accuracy and stability for routine use in clinical practice.
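The FOE constraint can be sketched as a least-squares problem: for purely forward camera motion, every flow vector is collinear with the ray from the focus of expansion to its image point, i.e. cross(p - foe, u) = 0, which is linear in the FOE coordinates. The hypothetical implementation below (not the authors' code) recovers a synthetic FOE exactly from noise-free radial flow.

```python
import numpy as np

def focus_of_expansion(points, flows):
    """Least-squares FOE: each flow vector (u, v) at image point (x, y)
    must satisfy v*fx - u*fy = v*x - u*y (collinearity with the FOE ray)."""
    u, v = flows[:, 0], flows[:, 1]
    x, y = points[:, 0], points[:, 1]
    A = np.stack([v, -u], axis=1)
    b = v * x - u * y
    foe, *_ = np.linalg.lstsq(A, b, rcond=None)
    return foe

# synthetic data: pure forward motion produces flow radiating from the FOE
rng = np.random.default_rng(0)
pts = rng.uniform(-10, 10, size=(50, 2))
true_foe = np.array([3.0, 2.0])
flows = 0.1 * (pts - true_foe)
est = focus_of_expansion(pts, flows)
```

With real, noisy flow fields the same system is solved in the least-squares sense, which is where the robustness comparison in the abstract comes in.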

  13. An approach to multiscale modelling with graph grammars.

    PubMed

    Ong, Yongzhi; Streit, Katarína; Henke, Michael; Kurth, Winfried

    2014-09-01

    Functional-structural plant models (FSPMs) simulate biological processes at different spatial scales. Methods exist for multiscale data representation and modification, but the advantages of using multiple scales in the dynamic aspects of FSPMs remain unclear. Results from multiscale models in various other areas of science that share fundamental modelling issues with FSPMs suggest that potential advantages do exist, and this study therefore aims to introduce an approach to multiscale modelling in FSPMs. A three-part graph data structure and grammar is revisited, and presented with a conceptual framework for multiscale modelling. The framework is used for identifying roles, categorizing and describing scale-to-scale interactions, thus allowing alternative approaches to model development as opposed to correlation-based modelling at a single scale. Reverse information flow (from macro- to micro-scale) is catered for in the framework. The methods are implemented within the programming language XL. Three example models are implemented using the proposed multiscale graph model and framework. The first illustrates the fundamental usage of the graph data structure and grammar, the second uses probabilistic modelling for organs at the fine scale in order to derive crown growth, and the third combines multiscale plant topology with ozone trends and metabolic network simulations in order to model juvenile beech stands under exposure to a toxic trace gas. The graph data structure supports data representation and grammar operations at multiple scales. The results demonstrate that multiscale modelling is a viable method in FSPM and an alternative to correlation-based modelling. Advantages and disadvantages of multiscale modelling are illustrated by comparisons with single-scale implementations, leading to motivations for further research in sensitivity analysis and run-time efficiency for these models.
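Reverse (macro-to-micro) information flow on a two-scale graph can be sketched in plain Python rather than XL; the node names, attributes, and the "stress" rule below are invented for illustration, not taken from the paper's models. A rewrite-style rule reads a macro-scale attribute and rescales a micro-scale growth attribute in the nodes the macro node contains.

```python
# Two-scale graph: a macro node "contains" micro nodes; a macro-scale
# stress signal rescales micro-scale growth (reverse information flow).
graph = {
    "plant": {"scale": "macro", "contains": ["leaf1", "leaf2"], "stress": 0.5},
    "leaf1": {"scale": "micro", "growth": 1.0},
    "leaf2": {"scale": "micro", "growth": 1.0},
}

def apply_macro_to_micro(g):
    """Rewrite rule: scale each contained micro node's growth by
    (1 - stress) of its macro node, returning a new graph."""
    out = {k: dict(v) for k, v in g.items()}
    for node, attrs in g.items():
        if attrs.get("scale") == "macro":
            for child in attrs["contains"]:
                out[child]["growth"] = g[child]["growth"] * (1 - attrs["stress"])
    return out

new = apply_macro_to_micro(graph)
```

A grammar in the paper's sense is a set of such rules applied repeatedly; the point of the sketch is only that scale-to-scale interactions become explicit graph rewrites rather than correlations fitted at a single scale.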

  14. An approach to multiscale modelling with graph grammars

    PubMed Central

    Ong, Yongzhi; Streit, Katarína; Henke, Michael; Kurth, Winfried

    2014-01-01

    Background and Aims Functional–structural plant models (FSPMs) simulate biological processes at different spatial scales. Methods exist for multiscale data representation and modification, but the advantages of using multiple scales in the dynamic aspects of FSPMs remain unclear. Results from multiscale models in various other areas of science that share fundamental modelling issues with FSPMs suggest that potential advantages do exist, and this study therefore aims to introduce an approach to multiscale modelling in FSPMs. Methods A three-part graph data structure and grammar is revisited, and presented with a conceptual framework for multiscale modelling. The framework is used for identifying roles, categorizing and describing scale-to-scale interactions, thus allowing alternative approaches to model development as opposed to correlation-based modelling at a single scale. Reverse information flow (from macro- to micro-scale) is catered for in the framework. The methods are implemented within the programming language XL. Key Results Three example models are implemented using the proposed multiscale graph model and framework. The first illustrates the fundamental usage of the graph data structure and grammar, the second uses probabilistic modelling for organs at the fine scale in order to derive crown growth, and the third combines multiscale plant topology with ozone trends and metabolic network simulations in order to model juvenile beech stands under exposure to a toxic trace gas. Conclusions The graph data structure supports data representation and grammar operations at multiple scales. The results demonstrate that multiscale modelling is a viable method in FSPM and an alternative to correlation-based modelling. Advantages and disadvantages of multiscale modelling are illustrated by comparisons with single-scale implementations, leading to motivations for further research in sensitivity analysis and run-time efficiency for these models. PMID:25134929

  15. A Liver-centric Multiscale Modeling Framework for Xenobiotics ...

    EPA Pesticide Factsheets

    We describe a multi-scale framework for modeling acetaminophen-induced liver toxicity. Acetaminophen is a widely used analgesic. Overdose of acetaminophen can result in liver injury via its biotransformation into toxic products, which induce massive necrosis. Our study focuses on developing a multi-scale computational model to characterize both phase I and phase II metabolism of acetaminophen, by bridging Physiologically Based Pharmacokinetic (PBPK) modeling at the whole-body level, cell movement and blood flow at the tissue level, and cell signaling and drug metabolism at the sub-cellular level. To validate the model, we estimated our model parameters by fitting serum concentrations of acetaminophen and its glucuronide and sulfate metabolites to experiments, and carried out sensitivity analysis on 35 parameters selected from three modules. This multiscale model bridges the CompuCell3D tool used by the Virtual Tissue project with the httk tool developed by the Rapid Exposure and Dosimetry project.
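The whole-body compartmental layer can be caricatured by a two-compartment model: parent drug converts into a metabolite (a phase II conjugate, say) that is then eliminated, integrated here with forward Euler. The rate constants are arbitrary illustrative values, not fitted acetaminophen parameters, and the real framework couples this layer to tissue- and cell-scale modules.

```python
def simulate_pk(dose, k_met, k_elim, dt=0.01, t_end=10.0):
    """Parent -> metabolite -> eliminated, by forward-Euler integration.
    Returns final parent amount, final metabolite amount, and peak metabolite."""
    parent, metabolite = dose, 0.0
    t, peak = 0.0, 0.0
    while t < t_end:
        d_parent = -k_met * parent
        d_met = k_met * parent - k_elim * metabolite
        parent += dt * d_parent
        metabolite += dt * d_met
        peak = max(peak, metabolite)
        t += dt
    return parent, metabolite, peak

parent, metab, peak = simulate_pk(dose=1.0, k_met=1.0, k_elim=0.5)
```

For these rates the metabolite peaks at about half the dose before washing out, the kind of serum time-course that the paper fits against glucuronide and sulfate measurements.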

  16. Machine learning action parameters in lattice quantum chromodynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shanahan, Phiala; Trewartha, Daniel; Detmold, William

    Numerical lattice quantum chromodynamics studies of the strong interaction underpin theoretical understanding of many aspects of particle and nuclear physics. Such studies require significant computing resources to undertake. A number of proposed methods promise improved efficiency of lattice calculations, and access to regions of parameter space that are currently computationally intractable, via multi-scale action-matching approaches that necessitate parametric regression of generated lattice datasets. The applicability of machine learning to this regression task is investigated, with deep neural networks found to provide an efficient solution even in cases where approaches such as principal component analysis fail. Finally, the high information content and complex symmetries inherent in lattice QCD datasets require custom neural network layers to be introduced and present opportunities for further development.

  17. Machine learning action parameters in lattice quantum chromodynamics

    DOE PAGES

    Shanahan, Phiala; Trewartha, Daniel; Detmold, William

    2018-05-16

    Numerical lattice quantum chromodynamics studies of the strong interaction underpin theoretical understanding of many aspects of particle and nuclear physics. Such studies require significant computing resources to undertake. A number of proposed methods promise improved efficiency of lattice calculations, and access to regions of parameter space that are currently computationally intractable, via multi-scale action-matching approaches that necessitate parametric regression of generated lattice datasets. The applicability of machine learning to this regression task is investigated, with deep neural networks found to provide an efficient solution even in cases where approaches such as principal component analysis fail. Finally, the high information content and complex symmetries inherent in lattice QCD datasets require custom neural network layers to be introduced and present opportunities for further development.

  18. Prospects for improving the representation of coastal and shelf seas in global ocean models

    NASA Astrophysics Data System (ADS)

    Holt, Jason; Hyder, Patrick; Ashworth, Mike; Harle, James; Hewitt, Helene T.; Liu, Hedong; New, Adrian L.; Pickles, Stephen; Porter, Andrew; Popova, Ekaterina; Icarus Allen, J.; Siddorn, John; Wood, Richard

    2017-02-01

    Accurately representing coastal and shelf seas in global ocean models represents one of the grand challenges of Earth system science. They are regions of immense societal importance through the goods and services they provide, the hazards they pose and their role in global-scale processes and cycles, e.g. carbon fluxes and dense water formation. However, they are poorly represented in the current generation of global ocean models. In this contribution, we aim to briefly characterise the problem, and then to identify the important physical processes, and their scales, needed to address this issue in the context of the options available to resolve these scales globally and the evolving computational landscape. We find barotropic and topographic scales are well resolved by the current state-of-the-art model resolutions, e.g. nominal 1/12°, and still reasonably well resolved at 1/4°; here, the focus is on process representation. We identify tides, vertical coordinates, river inflows and mixing schemes as four areas where modelling approaches can readily be transferred from regional to global modelling with substantial benefit. In terms of finer-scale processes, we find that a 1/12° global model resolves the first baroclinic Rossby radius for only ˜ 8 % of regions < 500 m deep, but this increases to ˜ 70 % for a 1/72° model, so resolving these scales globally requires substantially finer resolution than the current state of the art. We quantify the benefit of improved resolution and process representation using 1/12° global- and basin-scale northern North Atlantic Nucleus for European Modelling of the Ocean (NEMO) simulations; the latter includes tides and a k-ɛ vertical mixing scheme. These are compared with global stratification observations and 19 models from CMIP5. In terms of correlation and basin-wide rms error, the high-resolution models outperform all these CMIP5 models.
The model with tides shows improved seasonal cycles compared to the high-resolution model without tides. The benefits of resolution are particularly apparent in eastern boundary upwelling zones. To explore the balance between the size of a globally refined model and that of multiscale modelling options (e.g. finite element, finite volume or a two-way nesting approach), we consider a simple scale analysis and a conceptual grid refining approach. We put this analysis in the context of evolving computer systems, discussing model turnaround time, scalability and resource costs. Using a simple cost model compared to a reference configuration (taken to be a 1/4° global model in 2011) and the increasing performance of the UK Research Councils' computer facility, we estimate an unstructured mesh multiscale approach, resolving process scales down to 1.5 km, would use a comparable share of the computer resource by 2021, the two-way nested multiscale approach by 2022, and a 1/72° global model by 2026. However, we also note that a 1/12° global model would not have a computational cost comparable to that of a 1° global model in 2017 until 2027. Hence, we conclude that for computationally expensive models (e.g. for oceanographic research or operational oceanography), resolving scales to ˜ 1.5 km would be routinely practical in about a decade, given substantial effort on numerical and computational development. For complex Earth system models, this extends to about two decades, suggesting the focus here needs to be on improved process parameterisation to meet these challenges.
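The resource projections above amount to compound-growth arithmetic: a cost ratio between model configurations, divided by an assumed annual growth factor for machine throughput. The sketch below uses an assumed 1.6x-per-year throughput growth and a cost scaling of (refinement factor)^3 (two horizontal dimensions plus a proportionally shorter timestep); neither number is taken from the paper's facility data, but the result lands in the same decade-scale range.

```python
import math

def years_until_affordable(cost_ratio, annual_growth=1.6):
    """Years until machine throughput has grown by `cost_ratio`,
    assuming performance multiplies by `annual_growth` each year."""
    return math.log(cost_ratio) / math.log(annual_growth)

# Refining a 1/4-degree global grid to 1/72 degree is an 18-fold refinement;
# with the timestep shrinking in proportion, cost grows roughly as 18**3.
cost_ratio = 18.0 ** 3          # ~5800x the 2011 reference cost
years = years_until_affordable(cost_ratio)   # counted from the 2011 baseline
```

This crude estimate gives roughly 18-19 years from 2011, consistent in order of magnitude with the paper's ~2026 figure for a 1/72° global model.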

  19. Computational Medicine: Translating Models to Clinical Care

    PubMed Central

    Winslow, Raimond L.; Trayanova, Natalia; Geman, Donald; Miller, Michael I.

    2013-01-01

    Because of the inherent complexity of coupled nonlinear biological systems, the development of computational models is necessary for achieving a quantitative understanding of their structure and function in health and disease. Statistical learning is applied to high-dimensional biomolecular data to create models that describe relationships between molecules and networks. Multiscale modeling links networks to cells, organs, and organ systems. Computational approaches are used to characterize anatomic shape and its variations in health and disease. In each case, the purposes of modeling are to capture all that we know about disease and to develop improved therapies tailored to the needs of individuals. We discuss advances in computational medicine, with specific examples in the fields of cancer, diabetes, cardiology, and neurology. Advances in translating these computational methods to the clinic are described, as well as challenges in applying models for improving patient health. PMID:23115356

  20. Computational and experimental single cell biology techniques for the definition of cell type heterogeneity, interplay and intracellular dynamics.

    PubMed

    de Vargas Roditi, Laura; Claassen, Manfred

    2015-08-01

    Novel technological developments enable single cell population profiling with respect to their spatial and molecular setup. These include single cell sequencing, flow cytometry and multiparametric imaging approaches, and open unprecedented possibilities to learn about the heterogeneity, dynamics and interplay of the different cell types which constitute tissues and multicellular organisms. Statistical and dynamic systems theory approaches have been applied to quantitatively describe a variety of cellular processes, such as transcription and cell signaling. Machine learning approaches have been developed to define cell types, their mutual relationships, and differentiation hierarchies shaping heterogeneous cell populations, yielding insights into topics such as immune cell differentiation and tumor cell type composition. This combination of experimental and computational advances has opened perspectives towards learning predictive multi-scale models of heterogeneous cell populations. Copyright © 2014 Elsevier Ltd. All rights reserved.

  1. Toward multiscale modelings of grain-fluid systems

    NASA Astrophysics Data System (ADS)

    Chareyre, Bruno; Yuan, Chao; Montella, Eduard P.; Salager, Simon

    2017-06-01

    Computationally efficient methods have been developed for simulating partially saturated granular materials in the pendular regime. In contrast, one can hardly avoid expensive direct resolutions of two-phase fluid dynamics problems for mixed pendular-funicular situations or even saturated regimes. Following previous developments for single-phase flow, a pore-network approach to the coupling problems is described. The geometry and movements of phases and interfaces are described on the basis of a tetrahedrization of the pore space, introducing elementary objects such as bridges, menisci, pore bodies and pore throats, together with local rules of evolution. As firmly established local rules are still missing on some aspects (entry capillary pressure and pore-scale pressure-saturation relations, forces on the grains, or kinetics of transfers in mixed situations), a multi-scale numerical framework is introduced, enhancing the pore-network approach with the help of direct simulations. Small subsets of a granular system are extracted, in which multiphase scenarios are solved using the lattice Boltzmann method (LBM). In turn, a global problem is assembled and solved at the network scale, as illustrated by a simulated primary drainage.

  2. On the intrinsic flexibility of the opioid receptor through multiscale modeling approaches

    NASA Astrophysics Data System (ADS)

    Vercauteren, Daniel; Fossépré, Mathieu; Leherte, Laurence; Laaksonen, Aatto

    Numerous releases of crystalline structures of G protein-coupled receptors have created the opportunity for computational methods to widely explore their dynamics. Here, we study the biological implications of the intrinsic flexibility of the opioid receptor OR. First, we performed classical all-atom (AA) Molecular Dynamics (MD) simulations of OR in its apo form. We highlighted that the varying degrees of bendability of the α-helices have important consequences for the plasticity of the binding site, which consequently adopts a wide diversity of shapes and volumes, explaining why OR interacts with very diverse ligands. Then, we introduce a new strategy for parameterizing purely mechanical but precise coarse-grained (CG) elastic network models (ENMs). The CG ENMs reproduce with high accuracy the flexibility properties of OR relative to the AA simulations. Finally, we use network modularization to design multi-grained (MG) models. These represent a novel type of low-resolution model, different in nature from CG models in being true multi-resolution models, i.e., each MG grain groups a different number of residues. The three parts constitute a hierarchical, multiscale approach for tackling the flexibility of OR.
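The elastic-network idea can be illustrated with a standard Gaussian network model sketch (a textbook construction, not the authors' parameterization): build a Kirchhoff connectivity matrix from a distance cutoff and read per-site flexibility from the diagonal of its pseudo-inverse. On a toy helical chain, the ends come out more flexible than the interior, as expected.

```python
import numpy as np

def gnm_fluctuations(coords, cutoff=7.0):
    """Gaussian network model: Kirchhoff matrix from a contact cutoff;
    mean-square fluctuations from the diagonal of its pseudo-inverse."""
    n = len(coords)
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    contact = (d < cutoff) & ~np.eye(n, dtype=bool)
    kirchhoff = np.diag(contact.sum(axis=1)) - contact.astype(float)
    return np.diag(np.linalg.pinv(kirchhoff))

# A toy "helix": 20 points on a spiral of radius 3 with pitch 3 per turn
t = np.linspace(0, 4 * np.pi, 20)
coords = np.stack([3 * np.cos(t), 3 * np.sin(t), 1.5 * t / np.pi], axis=1)
msf = gnm_fluctuations(coords)
```

Parameterizing CG models against AA trajectories, as in the abstract, replaces this generic uniform-spring network with springs tuned to reproduce the atomistic fluctuations.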

  3. Evaluation of multiple-scale 3D characterization for coal physical structure with DCM method and synchrotron X-ray CT.

    PubMed

    Wang, Haipeng; Yang, Yushuang; Yang, Jianli; Nie, Yihang; Jia, Jing; Wang, Yudan

    2015-01-01

    Multiscale nondestructive characterization of coal microscopic physical structure can provide important information for coal conversion and coal-bed methane extraction. In this study, the physical structure of a coal sample was investigated by synchrotron-based multiple-energy X-ray CT at three beam energies and two different spatial resolutions. A data-constrained modeling (DCM) approach was used to quantitatively characterize the multiscale compositional distributions at the two resolutions. The volume fractions of each voxel for four different composition groups were obtained at the two resolutions. Between the two resolutions, the difference for DCM computed volume fractions of coal matrix and pores is less than 0.3%, and the difference for mineral composition groups is less than 0.17%. This demonstrates that the DCM approach can account for compositions beyond the X-ray CT imaging resolution with adequate accuracy. By using DCM, it is possible to characterize a relatively large coal sample at a relatively low spatial resolution with minimal loss of the effect due to subpixel fine length scale structures.
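In a linearized caricature, the DCM composition recovery solves for per-voxel volume fractions from attenuation measured at several beam energies, with the fractions (softly) constrained to sum to one. The attenuation coefficients below are invented illustrative numbers, not calibrated values, and real DCM adds regularization and physical constraints beyond this least-squares sketch.

```python
import numpy as np

# Hypothetical linear attenuation coefficients (1/cm) of three composition
# groups (columns: coal matrix, mineral A, mineral B) at three beam energies.
mu = np.array([[0.40, 1.20, 3.00],
               [0.30, 0.90, 2.20],
               [0.25, 0.70, 1.60]])

def voxel_fractions(measured, weight=10.0):
    """Least-squares volume fractions with a soft sum-to-one constraint,
    appended as an extra weighted row of the linear system."""
    A = np.vstack([mu, weight * np.ones(3)])
    b = np.append(measured, weight * 1.0)
    f, *_ = np.linalg.lstsq(A, b, rcond=None)
    return f

true_f = np.array([0.7, 0.2, 0.1])
measured = mu @ true_f          # synthetic multi-energy measurement
f = voxel_fractions(measured)
```

With consistent synthetic data the fractions are recovered exactly; with real CT data the same formulation distributes sub-resolution compositions across voxels, which is how DCM accounts for structure below the imaging resolution.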

  4. Evaluation of the Community Multiscale Air Quality (CMAQ) Model Version 5.2

    EPA Science Inventory

    The Community Multiscale Air Quality (CMAQ) model is a state-of-the-science air quality model that simulates the emission, transport and fate of numerous air pollutants, including ozone and particulate matter. The Computational Exposure Division (CED) of the U.S. Environmental Pr...

  5. Evaluation of the Community Multi-scale Air Quality Model Version 5.2

    EPA Science Inventory

    The Community Multiscale Air Quality (CMAQ) model is a state-of-the-science air quality model that simulates the emission, transport and fate of numerous air pollutants, including ozone and particulate matter. The Computational Exposure Division (CED) of the U.S. Environmental Pr...

  6. Thermal nanostructure: An order parameter multiscale ensemble approach

    NASA Astrophysics Data System (ADS)

    Cheluvaraja, S.; Ortoleva, P.

    2010-02-01

    Deductive all-atom multiscale techniques imply that many nanosystems can be understood in terms of the slow dynamics of order parameters that coevolve with the quasiequilibrium probability density for rapidly fluctuating atomic configurations. The result of this multiscale analysis is a set of stochastic equations for the order parameters whose dynamics is driven by thermal-average forces. We present an efficient algorithm for sampling atomistic configurations in viruses and other supra-million-atom nanosystems. This algorithm allows for sampling of a wide range of configurations without creating an excess of high-energy, improbable ones. It is implemented and used to calculate thermal-average forces. These forces are then used to search the free-energy landscape of a nanosystem for deep minima. The methodology is applied to thermal structures of Cowpea chlorotic mottle virus capsid. The method has wide applicability to other nanosystems whose properties are described by the CHARMM or other interatomic force field. Our implementation, denoted SIMNANOWORLD™, achieves calibration-free nanosystem modeling. Essential atomic-scale detail is preserved via a quasiequilibrium probability density while overall character is provided via predicted values of order parameters. Applications from virology to the computer-aided design of nanocapsules for delivery of therapeutic agents and of vaccines for nonenveloped viruses are envisioned.
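Schematically, the order-parameter dynamics described above is overdamped Langevin motion driven by a thermal-average force. The sketch below substitutes a toy double-well free energy for forces that would really be sampled from atomistic configurations; at low temperature the order parameter settles into the nearest free-energy minimum, which is the "deep minima search" behavior in miniature.

```python
import random, math

def thermal_average_force(phi):
    """Stand-in for a force computed by ensemble sampling: minus the
    gradient of a double-well free energy F(phi) = (phi^2 - 1)^2."""
    return -4.0 * phi * (phi * phi - 1.0)

def evolve_order_parameter(phi0, steps=5000, dt=1e-3, temp=0.01, seed=1):
    """Overdamped Langevin update: drift down the free-energy gradient
    plus Gaussian thermal noise with variance 2*temp*dt."""
    rng = random.Random(seed)
    phi = phi0
    sigma = math.sqrt(2.0 * temp * dt)
    for _ in range(steps):
        phi += dt * thermal_average_force(phi) + sigma * rng.gauss(0.0, 1.0)
    return phi

phi = evolve_order_parameter(phi0=0.3)   # should relax toward the phi = +1 well
```

In the real method each force evaluation is itself an expensive atomistic sampling problem, which is why the efficient configuration-sampling algorithm in the abstract matters.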

  7. Multiscale modelling for tokamak pedestals

    NASA Astrophysics Data System (ADS)

    Abel, I. G.

    2018-04-01

    Pedestal modelling is crucial to predict the performance of future fusion devices. Current modelling efforts suffer either from a lack of kinetic physics, or an excess of computational complexity. To ameliorate these problems, we take a first-principles multiscale approach to the pedestal. We will present three separate sets of equations, covering the dynamics of edge localised modes (ELMs), the inter-ELM pedestal and pedestal turbulence, respectively. Precisely how these equations should be coupled to each other is covered in detail. This framework is completely self-consistent; it is derived from first principles by means of an asymptotic expansion of the fundamental Vlasov-Landau-Maxwell system in appropriate small parameters. The derivation exploits the narrowness of the pedestal region, the smallness of the thermal gyroradius and the low plasma β (the ratio of thermal to magnetic pressures) typical of current pedestal operation to achieve its simplifications. The relationship between this framework and gyrokinetics is analysed, and possibilities to directly match our systems of equations onto multiscale gyrokinetics are explored. A detailed comparison between our model and other models in the literature is performed. Finally, the potential for matching this framework onto an open-field-line region is briefly discussed.

  8. Accelerating Electrostatic Surface Potential Calculation with Multiscale Approximation on Graphics Processing Units

    PubMed Central

    Anandakrishnan, Ramu; Scogland, Tom R. W.; Fenley, Andrew T.; Gordon, John C.; Feng, Wu-chun; Onufriev, Alexey V.

    2010-01-01

    Tools that compute and visualize biomolecular electrostatic surface potential have been used extensively for studying biomolecular function. However, determining the surface potential for large biomolecules on a typical desktop computer can take days or longer using currently available tools and methods. Two commonly used techniques to speed up these types of electrostatic computations are approximations based on multi-scale coarse-graining and parallelization across multiple processors. This paper demonstrates that for the computation of electrostatic surface potential, these two techniques can be combined to deliver significantly greater speed-up than either one separately, something that is in general not always possible. Specifically, the electrostatic potential computation, using an analytical linearized Poisson Boltzmann (ALPB) method, is approximated using the hierarchical charge partitioning (HCP) multiscale method, and parallelized on an ATI Radeon 4870 graphical processing unit (GPU). The implementation delivers a combined 934-fold speed-up for a 476,040 atom viral capsid, compared to an equivalent non-parallel implementation on an Intel E6550 CPU without the approximation. This speed-up is significantly greater than the 42-fold speed-up for the HCP approximation alone or the 182-fold speed-up for the GPU alone. PMID:20452792
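The core HCP idea, replacing a distant group of charges by a single effective charge, can be sketched in a one-level form; the real method is hierarchical and GPU-parallel, and this toy is neither. For an observation point far from the group, the monopole approximation at the charge-weighted centroid agrees with the exact Coulomb sum to well under a percent.

```python
import math, random

def potential_exact(obs, charges):
    """Direct Coulomb sum over (position, charge) pairs (Gaussian units)."""
    return sum(q / math.dist(obs, p) for p, q in charges)

def potential_hcp(obs, charges, threshold=5.0):
    """Crude one-level hierarchical charge partitioning: if the group is
    far from the observation point, use its net charge at the
    charge-weighted centroid; otherwise fall back to the exact sum."""
    qtot = sum(q for _, q in charges)
    cx = sum(p[0] * q for p, q in charges) / qtot
    cy = sum(p[1] * q for p, q in charges) / qtot
    cz = sum(p[2] * q for p, q in charges) / qtot
    if math.dist(obs, (cx, cy, cz)) > threshold:
        return qtot / math.dist(obs, (cx, cy, cz))
    return potential_exact(obs, charges)

rng = random.Random(0)
group = [((rng.uniform(-1, 1), rng.uniform(-1, 1), rng.uniform(-1, 1)), 1.0)
         for _ in range(20)]
obs = (50.0, 0.0, 0.0)
exact = potential_exact(obs, group)
approx = potential_hcp(obs, group)
```

The paper's point is that this kind of approximation and GPU parallelization compose: the per-group work shrinks and the remaining independent sums map well onto many threads.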

  9. Single Image Super-Resolution Based on Multi-Scale Competitive Convolutional Neural Network

    PubMed Central

    Qu, Xiaobo; He, Yifan

    2018-01-01

    Deep convolutional neural networks (CNNs) are successful in single-image super-resolution. Traditional CNNs are limited to exploit multi-scale contextual information for image reconstruction due to the fixed convolutional kernel in their building modules. To restore various scales of image details, we enhance the multi-scale inference capability of CNNs by introducing competition among multi-scale convolutional filters, and build up a shallow network under limited computational resources. The proposed network has the following two advantages: (1) the multi-scale convolutional kernel provides the multi-context for image super-resolution, and (2) the maximum competitive strategy adaptively chooses the optimal scale of information for image reconstruction. Our experimental results on image super-resolution show that the performance of the proposed network outperforms the state-of-the-art methods. PMID:29509666

  10. Single Image Super-Resolution Based on Multi-Scale Competitive Convolutional Neural Network.

    PubMed

    Du, Xiaofeng; Qu, Xiaobo; He, Yifan; Guo, Di

    2018-03-06

    Deep convolutional neural networks (CNNs) are successful in single-image super-resolution. Traditional CNNs are limited to exploit multi-scale contextual information for image reconstruction due to the fixed convolutional kernel in their building modules. To restore various scales of image details, we enhance the multi-scale inference capability of CNNs by introducing competition among multi-scale convolutional filters, and build up a shallow network under limited computational resources. The proposed network has the following two advantages: (1) the multi-scale convolutional kernel provides the multi-context for image super-resolution, and (2) the maximum competitive strategy adaptively chooses the optimal scale of information for image reconstruction. Our experimental results on image super-resolution show that the performance of the proposed network outperforms the state-of-the-art methods.
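
The "maximum competitive strategy" described above can be illustrated with a minimal numpy sketch: filters of several kernel sizes are applied to the same input and, at each pixel, the strongest response wins. The filter weights and sizes here are random placeholders, not the trained network:

```python
import numpy as np

def conv2d_same(img, kernel):
    """Naive zero-padded 'same'-mode filtering (cross-correlation)."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)))
    out = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i+kh, j:j+kw] * kernel)
    return out

def competitive_multiscale(img, kernels):
    """Elementwise maximum over responses of filters with different
    kernel sizes -- each pixel keeps the scale whose filter responds
    most strongly (a maxout-style competition across scales)."""
    responses = np.stack([conv2d_same(img, k) for k in kernels])
    return responses.max(axis=0)

rng = np.random.default_rng(1)
img = rng.random((16, 16))
k3 = rng.normal(size=(3, 3))   # small receptive field
k5 = rng.normal(size=(5, 5))   # larger receptive field
out = competitive_multiscale(img, [k3, k5])
```

In the actual network the competing branches are learned convolutional layers; the max-selection step is the same.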

  11. Multiscale modeling of three-dimensional genome

    NASA Astrophysics Data System (ADS)

    Zhang, Bin; Wolynes, Peter

    The genome, the blueprint of life, contains nearly all the information needed to build and maintain an entire organism. A comprehensive understanding of the genome is of paramount interest to human health and will advance progress in many areas, including life sciences, medicine, and biotechnology. The overarching goal of my research is to understand the structure-dynamics-function relationships of the human genome. In this talk, I will be presenting our efforts in moving towards that goal, with a particular emphasis on studying the three-dimensional organization, the structure of the genome with multi-scale approaches. Specifically, I will discuss the reconstruction of genome structures at both interphase and metaphase by making use of data from chromosome conformation capture experiments. Computationally modeling of chromatin fiber at atomistic level from first principles will also be presented as our effort for studying the genome structure from bottom up.

  12. Advances in multi-scale modeling of solidification and casting processes

    NASA Astrophysics Data System (ADS)

    Liu, Baicheng; Xu, Qingyan; Jing, Tao; Shen, Houfa; Han, Zhiqiang

    2011-04-01

    The development of the aviation, energy and automobile industries requires an advanced integrated product/process R&D systems which could optimize the product and the process design as well. Integrated computational materials engineering (ICME) is a promising approach to fulfill this requirement and make the product and process development efficient, economic, and environmentally friendly. Advances in multi-scale modeling of solidification and casting processes, including mathematical models as well as engineering applications are presented in the paper. Dendrite morphology of magnesium and aluminum alloy of solidification process by using phase field and cellular automaton methods, mathematical models of segregation of large steel ingot, and microstructure models of unidirectionally solidified turbine blade casting are studied and discussed. In addition, some engineering case studies, including microstructure simulation of aluminum casting for automobile industry, segregation of large steel ingot for energy industry, and microstructure simulation of unidirectionally solidified turbine blade castings for aviation industry are discussed.

  13. A fault diagnosis scheme for planetary gearboxes using modified multi-scale symbolic dynamic entropy and mRMR feature selection

    NASA Astrophysics Data System (ADS)

    Li, Yongbo; Yang, Yuantao; Li, Guoyan; Xu, Minqiang; Huang, Wenhu

    2017-07-01

    Health condition identification of planetary gearboxes is crucial to reduce downtime and maximize productivity. This paper aims to develop a novel fault diagnosis method based on modified multi-scale symbolic dynamic entropy (MMSDE) and minimum redundancy maximum relevance (mRMR) to identify the different health conditions of planetary gearboxes. MMSDE is proposed to quantify the regularity of time series, which can assess the dynamical characteristics over a range of scales. MMSDE has obvious advantages in detecting dynamical changes and in computational efficiency. Then, the mRMR approach is introduced to refine the fault features. Lastly, the obtained new features are fed into the least-squares support vector machine (LSSVM) to complete the fault pattern identification. The proposed method is numerically and experimentally demonstrated to be able to recognize the different fault types of planetary gearboxes.
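
The multi-scale entropy idea can be sketched as follows: the signal is coarse-grained at several scales, each coarse-grained series is mapped to symbols, and the Shannon entropy of short symbol words is computed per scale. This is a simplified stand-in for MMSDE (uniform amplitude partition, word length 2), not the authors' exact formulation:

```python
import numpy as np
from collections import Counter

def coarse_grain(x, scale):
    """Non-overlapping moving average at the given scale."""
    n = len(x) // scale
    return x[:n * scale].reshape(n, scale).mean(axis=1)

def symbolize(x, n_symbols=4):
    """Map values to integer symbols by a uniform amplitude partition."""
    edges = np.linspace(x.min(), x.max(), n_symbols + 1)[1:-1]
    return np.digitize(x, edges)

def symbolic_entropy(x, n_symbols=4, word_len=2):
    """Shannon entropy (nats) of length-`word_len` symbol words."""
    s = symbolize(x, n_symbols)
    words = [tuple(s[i:i + word_len]) for i in range(len(s) - word_len + 1)]
    counts = np.array(list(Counter(words).values()), dtype=float)
    p = counts / counts.sum()
    return -np.sum(p * np.log(p))

def multiscale_symbolic_entropy(x, max_scale=5, **kw):
    return [symbolic_entropy(coarse_grain(x, s), **kw)
            for s in range(1, max_scale + 1)]

rng = np.random.default_rng(2)
noise = rng.normal(size=2000)                       # irregular signal
tone = np.sin(np.linspace(0, 40 * np.pi, 2000))     # regular signal
h_noise = multiscale_symbolic_entropy(noise)
h_tone = multiscale_symbolic_entropy(tone)
```

A regular signal produces fewer distinct symbol words than noise, so its entropy is lower; fault-induced changes in regularity shift this profile across scales.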

  14. Evaluation of the Community Multi-scale Air Quality (CMAQ) Model Version 5.1

    EPA Science Inventory

    The Community Multiscale Air Quality (CMAQ) model is a state-of-the-science air quality model that simulates the emission, transport and fate of numerous air pollutants, including ozone and particulate matter. The Computational Exposure Division (CED) of the U.S. Environmental Pr...

  15. Overview and Evaluation of the Community Multiscale Air Quality Model Version 5.2

    EPA Science Inventory

    The Community Multiscale Air Quality (CMAQ) model is a state-of-the-science air quality model that simulates the emission, transport and fate of numerous air pollutants, including ozone and particulate matter. The Computational Exposure Division (CED) of the U.S. Environmental Pr...

  16. Evaluation of the Community Multi-scale Air Quality (CMAQ) Model Version 5.2

    EPA Science Inventory

    The Community Multiscale Air Quality (CMAQ) model is a state-of-the-science air quality model that simulates the emission, transport and fate of numerous air pollutants, including ozone and particulate matter. The Computational Exposure Division (CED) of the U.S. Environmental Pr...

  17. The role of continuity in residual-based variational multiscale modeling of turbulence

    NASA Astrophysics Data System (ADS)

    Akkerman, I.; Bazilevs, Y.; Calo, V. M.; Hughes, T. J. R.; Hulshoff, S.

    2008-02-01

    This paper examines the role of continuity of the basis in the computation of turbulent flows. We compare standard finite elements and non-uniform rational B-splines (NURBS) discretizations that are employed in Isogeometric Analysis (Hughes et al. in Comput Methods Appl Mech Eng, 194:4135-4195, 2005). We make use of quadratic discretizations that are C^0-continuous across element boundaries in standard finite elements, and C^1-continuous in the case of NURBS. The variational multiscale residual-based method (Bazilevs in Isogeometric analysis of turbulence and fluid-structure interaction, PhD thesis, ICES, UT Austin, 2006; Bazilevs et al. in Comput Methods Appl Mech Eng, submitted, 2007; Calo in Residual-based multiscale turbulence modeling: finite volume simulation of bypass transition, PhD thesis, Department of Civil and Environmental Engineering, Stanford University, 2004; Hughes et al. in Proceedings of the XXI international congress of theoretical and applied mechanics (IUTAM), Kluwer, 2004; Scovazzi in Multiscale methods in science and engineering, PhD thesis, Department of Mechanical Engineering, Stanford University, 2004) is employed as a turbulence modeling technique. We find that C^1-continuous discretizations outperform their C^0-continuous counterparts on a per-degree-of-freedom basis. We also find that the effect of continuity is greater for higher Reynolds number flows.

  18. Computer-aided detection of human cone photoreceptor inner segments using multi-scale circular voting

    NASA Astrophysics Data System (ADS)

    Liu, Jianfei; Dubra, Alfredo; Tam, Johnny

    2016-03-01

    Cone photoreceptors are highly specialized cells responsible for the origin of vision in the human eye. Their inner segments can be noninvasively visualized using adaptive optics scanning light ophthalmoscopes (AOSLOs) with nonconfocal split detection capabilities. Monitoring the number of cones can lead to more precise metrics for real-time diagnosis and assessment of disease progression. Cell identification in split detection AOSLO images is hindered by cell regions with heterogeneous intensity arising from shadowing effects and low contrast boundaries due to overlying blood vessels. Here, we present a multi-scale circular voting approach to overcome these challenges through the novel combination of: 1) iterative circular voting to identify candidate cells based on their circular structures, 2) a multi-scale strategy to identify the optimal circular voting response, and 3) clustering to improve robustness while removing false positives. We acquired images from three healthy subjects at various locations on the retina and manually labeled cell locations to create ground-truth for evaluating the detection accuracy. The images span a large range of cell densities. The overall recall, precision, and F1 score were 91±4%, 84±10%, and 87±7% (Mean±SD). Results showed that our method for the identification of cone photoreceptor inner segments performs well even with low contrast cell boundaries and vessel obscuration. These encouraging results demonstrate that the proposed approach can robustly and accurately identify cells in split detection AOSLO images.

  19. A numerical homogenization method for heterogeneous, anisotropic elastic media based on multiscale theory

    DOE PAGES

    Gao, Kai; Chung, Eric T.; Gibson, Richard L.; ...

    2015-06-05

    The development of reliable methods for upscaling fine scale models of elastic media has long been an important topic for rock physics and applied seismology. Several effective medium theories have been developed to provide elastic parameters for materials such as finely layered media or randomly oriented or aligned fractures. In such cases, the analytic solutions for upscaled properties can be used for accurate prediction of wave propagation. However, such theories cannot be applied directly to homogenize elastic media with more complex, arbitrary spatial heterogeneity. We therefore propose a numerical homogenization algorithm based on multiscale finite element methods for simulating elastic wave propagation in heterogeneous, anisotropic elastic media. Specifically, our method used multiscale basis functions obtained from a local linear elasticity problem with appropriately defined boundary conditions. Homogenized, effective medium parameters were then computed using these basis functions, and the approach applied a numerical discretization that is similar to the rotated staggered-grid finite difference scheme. Comparisons of the results from our method and from conventional, analytical approaches for finely layered media showed that the homogenization reliably estimated elastic parameters for this simple geometry. Additional tests examined anisotropic models with arbitrary spatial heterogeneity where the average size of the heterogeneities ranged from several centimeters to several meters, and the ratio between the dominant wavelength and the average size of the arbitrary heterogeneities ranged from 10 to 100. Comparisons to finite-difference simulations proved that the numerical homogenization was equally accurate for these complex cases.
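
For the finely layered limiting case the abstract mentions, the effective-medium result can be written down directly: for propagation normal to the layering, the effective modulus is the thickness-weighted harmonic (Backus/Reuss-type) average of the layer moduli, and the effective density is the arithmetic average. A sketch with made-up layer properties:

```python
import numpy as np

# Fine layered medium: alternating stiff/soft layers of equal thickness.
M = np.array([20e9, 5e9] * 50)          # P-wave moduli per layer [Pa]
rho = np.array([2500.0, 2200.0] * 50)   # densities [kg/m^3]
thick = np.full(M.size, 0.1)            # layer thickness [m]

w = thick / thick.sum()                 # thickness fractions
# Effective properties for propagation normal to the layering:
M_eff = 1.0 / np.sum(w / M)             # harmonic (Reuss/Backus-type) average
rho_eff = np.sum(w * rho)               # arithmetic average
v_eff = np.sqrt(M_eff / rho_eff)        # effective P-wave speed [m/s]
```

The harmonic average (8 GPa here) always lies below the arithmetic (Voigt) average of 12.5 GPa; the numerical homogenization in the paper recovers this analytic value for the layered case and then generalizes to geometries with no closed-form answer.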

  20. Tracking Virus Particles in Fluorescence Microscopy Images Using Multi-Scale Detection and Multi-Frame Association.

    PubMed

    Jaiswal, Astha; Godinez, William J; Eils, Roland; Lehmann, Maik Jorg; Rohr, Karl

    2015-11-01

    Automatic fluorescent particle tracking is an essential task to study the dynamics of a large number of biological structures at a sub-cellular level. We have developed a probabilistic particle tracking approach based on multi-scale detection and two-step multi-frame association. The multi-scale detection scheme allows coping with particles in close proximity. For finding associations, we have developed a two-step multi-frame algorithm, which is based on a temporally semiglobal formulation as well as spatially local and global optimization. In the first step, reliable associations are determined for each particle individually in local neighborhoods. In the second step, the global spatial information over multiple frames is exploited jointly to determine optimal associations. The multi-scale detection scheme and the multi-frame association finding algorithm have been combined with a probabilistic tracking approach based on the Kalman filter. We have successfully applied our probabilistic tracking approach to synthetic as well as real microscopy image sequences of virus particles and quantified the performance. We found that the proposed approach outperforms previous approaches.
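
The Kalman-filter backbone of such a tracker can be sketched compactly: a constant-velocity state model with position-only measurements, iterating predict and update steps. The noise covariances below are illustrative placeholders, and the multi-frame detection-association step described in the abstract is omitted:

```python
import numpy as np

# Constant-velocity Kalman filter for 2-D particle tracking.
# State: [x, y, vx, vy]; measurement: noisy [x, y] detection.
dt = 1.0
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)   # state transition
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)   # measurement model
Q = 0.01 * np.eye(4)                        # process noise (assumed)
R = 0.5 * np.eye(2)                         # measurement noise (assumed)

def kalman_step(x, P, z):
    """One predict/update cycle; z is the associated detection."""
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(4) - K @ H) @ P
    return x, P

# Simulate a particle moving at constant velocity with noisy detections.
rng = np.random.default_rng(3)
true_v = np.array([1.0, 0.5])
x_est, P = np.zeros(4), np.eye(4)
for t in range(1, 60):
    z = t * true_v + rng.normal(0, 0.5, 2)
    x_est, P = kalman_step(x_est, P, z)
```

In the full method, the innovation covariance S also feeds the association scoring: detections far outside the predicted gate are not linked to the track.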

  1. Computational modeling of the obstructive lung diseases asthma and COPD

    PubMed Central

    2014-01-01

    Asthma and chronic obstructive pulmonary disease (COPD) are characterized by airway obstruction and airflow limitation and pose a huge burden to society. These obstructive lung diseases impact the lung physiology across multiple biological scales. Environmental stimuli are introduced via inhalation at the organ scale, and consequently impact upon the tissue, cellular and sub-cellular scale by triggering signaling pathways. These changes are propagated upwards to the organ level again and vice versa. In order to understand the pathophysiology behind these diseases, we need to integrate and understand changes occurring across these scales and this is the driving force for multiscale computational modeling. There is an urgent need for improved diagnosis and assessment of obstructive lung diseases. Standard clinical measures are based on global function tests which ignore the highly heterogeneous regional changes that are characteristic of obstructive lung disease pathophysiology. Advances in scanning technology such as hyperpolarized gas MRI have led to new regional measurements of ventilation, perfusion and gas diffusion in the lungs, while new image processing techniques allow these measures to be combined with information from structural imaging such as Computed Tomography (CT). However, it is not yet known how to derive clinical measures for obstructive diseases from this wealth of new data. Computational modeling offers a powerful approach for investigating this relationship between imaging measurements and disease severity, and understanding the effects of different disease subtypes, which is key to developing improved diagnostic methods. Gaining an understanding of a system as complex as the respiratory system is difficult if not impossible via experimental methods alone. Computational models offer a complementary method to unravel the structure-function relationships occurring within a multiscale, multiphysics system such as this.
Here we review the current state of the art in techniques developed for pulmonary image analysis, the development of structural models of the respiratory system, and predictions of function within these models. We discuss the application of modeling techniques to obstructive lung diseases, namely asthma and emphysema, and the use of models to predict response to therapy. Finally, we introduce a large European project, AirPROM, which is developing multiscale models to investigate structure-function relationships in asthma and COPD. PMID:25471125

  2. Regional variations in growth plate chondrocyte deformation as predicted by three-dimensional multi-scale simulations.

    PubMed

    Gao, Jie; Roan, Esra; Williams, John L

    2015-01-01

    The physis, or growth plate, is a complex disc-shaped cartilage structure that is responsible for longitudinal bone growth. In this study, a multi-scale computational approach was undertaken to better understand how physiological loads are experienced by chondrocytes embedded inside chondrons when subjected to moderate strain under instantaneous compressive loading of the growth plate. Models of representative samples of compressed bone/growth-plate/bone from a 0.67 mm thick 4-month old bovine proximal tibial physis were subjected to a prescribed displacement equal to 20% of the growth plate thickness. At the macroscale level, the applied compressive deformation resulted in an overall compressive strain across the proliferative-hypertrophic zone of 17%. The microscale model predicted that chondrocytes sustained compressive height strains of 12% and 6% in the proliferative and hypertrophic zones, respectively, in the interior regions of the plate. This pattern was reversed within the outer 300 μm region at the free surface where cells were compressed by 10% in the proliferative and 26% in the hypertrophic zones, in agreement with experimental observations. This work provides a new approach to study growth plate behavior under compression and illustrates the need for combining computational and experimental methods to better understand the chondrocyte mechanics in the growth plate cartilage. While the current model is relevant to fast dynamic events, such as heel strike in walking, we believe this approach provides new insight into the mechanical factors that regulate bone growth at the cell level and provides a basis for developing models to help interpret experimental results at varying time scales.

  3. Regional Variations in Growth Plate Chondrocyte Deformation as Predicted By Three-Dimensional Multi-Scale Simulations

    PubMed Central

    Gao, Jie; Roan, Esra; Williams, John L.

    2015-01-01

    The physis, or growth plate, is a complex disc-shaped cartilage structure that is responsible for longitudinal bone growth. In this study, a multi-scale computational approach was undertaken to better understand how physiological loads are experienced by chondrocytes embedded inside chondrons when subjected to moderate strain under instantaneous compressive loading of the growth plate. Models of representative samples of compressed bone/growth-plate/bone from a 0.67 mm thick 4-month old bovine proximal tibial physis were subjected to a prescribed displacement equal to 20% of the growth plate thickness. At the macroscale level, the applied compressive deformation resulted in an overall compressive strain across the proliferative-hypertrophic zone of 17%. The microscale model predicted that chondrocytes sustained compressive height strains of 12% and 6% in the proliferative and hypertrophic zones, respectively, in the interior regions of the plate. This pattern was reversed within the outer 300 μm region at the free surface where cells were compressed by 10% in the proliferative and 26% in the hypertrophic zones, in agreement with experimental observations. This work provides a new approach to study growth plate behavior under compression and illustrates the need for combining computational and experimental methods to better understand the chondrocyte mechanics in the growth plate cartilage. While the current model is relevant to fast dynamic events, such as heel strike in walking, we believe this approach provides new insight into the mechanical factors that regulate bone growth at the cell level and provides a basis for developing models to help interpret experimental results at varying time scales. PMID:25885547

  4. An automated approach for extracting Barrier Island morphology from digital elevation models

    NASA Astrophysics Data System (ADS)

    Wernette, Phillipe; Houser, Chris; Bishop, Michael P.

    2016-06-01

    The response and recovery of a barrier island to extreme storms depends on the elevation of the dune base and crest, both of which can vary considerably alongshore and through time. Quantifying the response to and recovery from storms requires that we can first identify and differentiate the dune(s) from the beach and back-barrier, which in turn depends on accurate identification and delineation of the dune toe, crest and heel. The purpose of this paper is to introduce a multi-scale automated approach for extracting beach, dune (dune toe, dune crest and dune heel), and barrier island morphology. The automated approach introduced here extracts the shoreline and back-barrier shoreline based on elevation thresholds, and extracts the dune toe, dune crest and dune heel based on the average relative relief (RR) across multiple spatial scales of analysis. The multi-scale automated RR approach to extracting the dune toe, dune crest, and dune heel is more objective than traditional approaches because every pixel is analyzed across multiple computational scales and the identification of features is based on the calculated RR values. The RR approach outperformed contemporary approaches and represents a fast, objective means to define important beach and dune features for predicting barrier island response to storms. The RR method also does not require that the dune toe, crest, or heel are spatially continuous, which is important because dune morphology is likely naturally variable alongshore.
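
The relative-relief computation at the heart of the approach can be sketched directly: for each cell, RR = (z - local min) / (local max - local min) within a window, averaged over several window sizes; crests then appear as RR maxima. The window sizes and the synthetic dune profile below are illustrative choices, not the paper's calibrated parameters:

```python
import numpy as np

def relative_relief(dem, window):
    """Relative relief of each cell within a (2*window+1)^2 neighborhood:
    (z - local_min) / (local_max - local_min), in [0, 1]."""
    ny, nx = dem.shape
    rr = np.zeros_like(dem, dtype=float)
    for i in range(ny):
        for j in range(nx):
            patch = dem[max(0, i - window):i + window + 1,
                        max(0, j - window):j + window + 1]
            lo, hi = patch.min(), patch.max()
            rr[i, j] = (dem[i, j] - lo) / (hi - lo) if hi > lo else 0.5
    return rr

def mean_relative_relief(dem, windows=(1, 2, 4)):
    """Average RR across several window sizes; peaks in this surface
    flag candidate dune crests, troughs flag the toe and heel."""
    return np.mean([relative_relief(dem, w) for w in windows], axis=0)

# Synthetic cross-shore profile swept alongshore: a single dune ridge.
x = np.linspace(0, 10, 80)
profile = np.exp(-(x - 5) ** 2)
dem = np.tile(profile, (20, 1))
rr = mean_relative_relief(dem)
crest_col = rr[10].argmax()    # crest = alongshore RR maximum
```

Because every cell is scored the same way at every scale, the feature extraction needs no manually digitized breaklines, which is the objectivity argument made in the abstract.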

  5. Improvements in the Scalability of the NASA Goddard Multiscale Modeling Framework for Hurricane Climate Studies

    NASA Technical Reports Server (NTRS)

    Shen, Bo-Wen; Tao, Wei-Kuo; Chern, Jiun-Dar

    2007-01-01

    Improving our understanding of hurricane inter-annual variability and the impact of climate change (e.g., doubling CO2 and/or global warming) on hurricanes brings both scientific and computational challenges to researchers. As hurricane dynamics involves multiscale interactions among synoptic-scale flows, mesoscale vortices, and small-scale cloud motions, an ideal numerical model suitable for hurricane studies should demonstrate its capabilities in simulating these interactions. The newly-developed multiscale modeling framework (MMF, Tao et al., 2007) and the substantial computing power of the NASA Columbia supercomputer show promise in pursuing the related studies, as the MMF inherits the advantages of two NASA state-of-the-art modeling components: the GEOS4/fvGCM and 2D GCEs. This article focuses on the computational issues and proposes a revised methodology to improve the MMF's performance and scalability. It is shown that this prototype implementation enables 12-fold performance improvements with 364 CPUs, thereby making it more feasible to study hurricane climate.

  6. A real-time multi-scale 2D Gaussian filter based on FPGA

    NASA Astrophysics Data System (ADS)

    Luo, Haibo; Gai, Xingqin; Chang, Zheng; Hui, Bin

    2014-11-01

    Multi-scale 2-D Gaussian filters are widely used in feature extraction (e.g., SIFT, edge detection), image segmentation, image enhancement, image noise removal, and multi-scale shape description. However, their computational complexity remains an issue for real-time image processing systems. To address this problem, we propose an FPGA-based framework for multi-scale 2-D Gaussian filtering. First, a full-hardware architecture based on a parallel pipeline was designed to achieve a high throughput rate. Second, to save multipliers, the 2-D convolution is separated into two 1-D convolutions. Third, a dedicated first-in-first-out memory, named CAFIFO (Column Addressing FIFO), was designed to avoid error propagation induced by clock glitches. Finally, a shared-memory framework was designed to reduce memory costs. As a demonstration, we realized a three-scale 2-D Gaussian filter on a single ALTERA Cyclone III FPGA chip. Experimental results show that the proposed framework can compute a multi-scale 2-D Gaussian filter within one pixel clock period and is thus suitable for real-time image processing. Moreover, the main principle can be extended to other convolution-based operators, such as the Gabor filter and the Sobel operator.
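
The multiplier saving from separating the 2-D convolution into two 1-D passes can be checked in software: a direct (2r+1)^2-tap kernel and a row-then-column separable version produce identical output for a Gaussian (with replicate-edge padding), while the separable form needs only 2(2r+1) multiplies per pixel. A numpy sketch:

```python
import numpy as np

def gaussian_kernel_1d(sigma, radius):
    """Normalized 1-D Gaussian taps on [-radius, radius]."""
    x = np.arange(-radius, radius + 1, dtype=float)
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

def conv2d_full_kernel(img, sigma, radius):
    """Direct 2-D filtering: (2r+1)^2 multiplies per pixel."""
    k1 = gaussian_kernel_1d(sigma, radius)
    k2 = np.outer(k1, k1)                  # separable -> outer product
    pad = np.pad(img, radius, mode='edge')
    out = np.empty_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(pad[i:i+2*radius+1, j:j+2*radius+1] * k2)
    return out

def conv2d_separable(img, sigma, radius):
    """Row pass then column pass: 2*(2r+1) multiplies per pixel --
    the multiplier saving exploited in the FPGA design."""
    k = gaussian_kernel_1d(sigma, radius)
    tmp = np.apply_along_axis(
        lambda row: np.convolve(np.pad(row, radius, mode='edge'), k, 'valid'),
        1, img)
    return np.apply_along_axis(
        lambda col: np.convolve(np.pad(col, radius, mode='edge'), k, 'valid'),
        0, tmp)

rng = np.random.default_rng(4)
img = rng.random((32, 32))
a = conv2d_full_kernel(img, sigma=1.2, radius=3)
b = conv2d_separable(img, sigma=1.2, radius=3)
```

For radius 3 this is 49 versus 14 multiplies per pixel; in hardware the 1-D passes also map naturally onto a row buffer plus a column FIFO, which is where a structure like the CAFIFO comes in.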

  7. Revisiting of Multiscale Static Analysis of Notched Laminates Using the Generalized Method of Cells

    NASA Technical Reports Server (NTRS)

    Naghipour Ghezeljeh, Paria; Arnold, Steven M.; Pineda, Evan J.

    2016-01-01

    Composite material systems generally exhibit a range of behavior on different length scales (from constituent level to macro); therefore, a multiscale framework is beneficial for the design and engineering of these material systems. The complex nature of the observed composite failure during experiments suggests the need for a three-dimensional (3D) multiscale model to attain a reliable prediction. However, the size of a multiscale three-dimensional finite element model can become prohibitively large and computationally costly. Two-dimensional (2D) models are preferred due to computational efficiency, especially if many different configurations have to be analyzed for an in-depth damage tolerance and durability design study. In this study, various 2D and 3D multiscale analyses will be employed to conduct a detailed investigation into the tensile failure of a given multidirectional, notched carbon fiber reinforced polymer laminate. Three-dimensional finite element analysis is typically considered more accurate than a 2D finite element model, as compared with experiments. Nevertheless, in the absence of adequate mesh refinement, large differences may be observed between a 2D and 3D analysis, especially for a shear-dominated layup. This observed difference has not been widely addressed in previous literature and is the main focus of this paper.

  8. Multi-scale Material Appearance

    NASA Astrophysics Data System (ADS)

    Wu, Hongzhi

    Modeling and rendering the appearance of materials is important for a diverse range of applications of computer graphics - from automobile design to movies and cultural heritage. The appearance of materials varies considerably at different scales, posing significant challenges due to the sheer complexity of the data, as well as the need to maintain inter-scale consistency constraints. This thesis presents a series of studies around the modeling, rendering and editing of multi-scale material appearance. To efficiently render material appearance at multiple scales, we develop an object-space precomputed adaptive sampling method, which precomputes a hierarchy of view-independent points that preserve multi-level appearance. To support bi-scale material appearance design, we propose a novel reflectance filtering algorithm, which rapidly computes the large-scale appearance from small-scale details, by exploiting the low-rank structures of Bidirectional Visible Normal Distribution Functions and pre-rotated Bidirectional Reflectance Distribution Functions in the matrix formulation of the rendering algorithm. This approach can guide the physical realization of appearance, as well as the modeling of real-world materials using very sparse measurements. Finally, we present a bi-scale-inspired high-quality general representation for material appearance described by Bidirectional Texture Functions. Our representation is at once compact, easily editable, and amenable to efficient rendering.

  9. A general multiscale framework for the emergent effective elastodynamics of metamaterials

    NASA Astrophysics Data System (ADS)

    Sridhar, A.; Kouznetsova, V. G.; Geers, M. G. D.

    2018-02-01

    This paper presents a general multiscale framework towards the computation of the emergent effective elastodynamics of heterogeneous materials, to be applied for the analysis of acoustic metamaterials and phononic crystals. The generality of the framework is exemplified by two key characteristics. First, the underlying formalism relies on the Floquet-Bloch theorem to derive a robust definition of scales and scale separation. Second, unlike most homogenization approaches that rely on a classical volume average, a generalized homogenization operator is defined with respect to a family of particular projection functions. This yields a generalized macro-scale continuum, instead of the classical Cauchy continuum. This makes it possible (in a micromorphic sense) to homogenize the rich dispersive behavior resulting from both Bragg scattering and local resonance. For an arbitrary unit cell, the homogenization projection functions are constructed using the Floquet-Bloch eigenvectors obtained in the desired frequency regime at select high symmetry points, which effectively resolves the emergent phenomena dominating that regime. Furthermore, a generalized Hill-Mandel condition is proposed that ensures power consistency between the homogenized and full-scale model. A high-order spatio-temporal gradient expansion is used to localize the multiscale problem leading to a series of recursive unit cell problems giving the appropriate micro-mechanical corrections. The developed multiscale method is validated against standard numerical Bloch analysis of the dispersion spectra of example unit cells encompassing multiple high-order branches generated by local resonance and/or Bragg scattering.

  10. Multiscale hidden Markov models for photon-limited imaging

    NASA Astrophysics Data System (ADS)

    Nowak, Robert D.

    1999-06-01

    Photon-limited image analysis is often hindered by low signal-to-noise ratios. A novel Bayesian multiscale modeling and analysis method is developed in this paper to assist in these challenging situations. In addition to providing a very natural and useful framework for modeling and processing images, Bayesian multiscale analysis is often much less computationally demanding compared to classical Markov random field models. This paper focuses on a probabilistic graph model called the multiscale hidden Markov model (MHMM), which captures the key inter-scale dependencies present in natural image intensities. The MHMM framework presented here is specifically designed for photon-limited imaging applications involving Poisson statistics, and applications to image intensity analysis are examined.

  11. On the use of reverse Brownian motion to accelerate hybrid simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bakarji, Joseph; Tartakovsky, Daniel M., E-mail: tartakovsky@stanford.edu

    Multiscale and multiphysics simulations are two rapidly developing fields of scientific computing. Efficient coupling of continuum (deterministic or stochastic) constitutive solvers with their discrete (stochastic, particle-based) counterparts is a common challenge in both kinds of simulations. We focus on interfacial, tightly coupled simulations of diffusion that combine continuum and particle-based solvers. The latter employs the reverse Brownian motion (rBm), a Monte Carlo approach that allows one to enforce inhomogeneous Dirichlet, Neumann, or Robin boundary conditions and is trivially parallelizable. We discuss numerical approaches for improving the accuracy of rBm in the presence of inhomogeneous Neumann boundary conditions and alternative strategies for coupling the rBm solver with its continuum counterpart. Numerical experiments are used to investigate the convergence, stability, and computational efficiency of the proposed hybrid algorithm.
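
The boundary-value Monte Carlo idea underlying the rBm solver can be illustrated with the simplest possible case: walkers released from an interior point of [0, 1] are absorbed at the boundary, and averaging the Dirichlet boundary values they collect estimates the harmonic (steady-state diffusion) solution at that point. The lattice-step walk below is an illustrative simplification, not the paper's rBm algorithm:

```python
import numpy as np

def dirichlet_mc(x0, h=0.05, n_walkers=20000, seed=5):
    """Monte Carlo estimate of u(x0) for u'' = 0 on [0, 1] with
    u(0) = 0 and u(1) = 1: walkers step +-h until absorbed at a
    boundary, and u(x0) is the mean boundary value they collect.
    The exact solution of this problem is u(x) = x."""
    rng = np.random.default_rng(seed)
    x = np.full(n_walkers, float(x0))
    alive = np.ones(n_walkers, dtype=bool)
    hit_right = np.zeros(n_walkers, dtype=bool)   # walkers paying u(1)=1
    while alive.any():
        x[alive] = x[alive] + rng.choice([-h, h], size=alive.sum())
        hr = alive & (x >= 1.0 - 1e-12)
        hl = alive & (x <= 1e-12)
        hit_right |= hr
        alive &= ~(hr | hl)
    # Boundary values are 0 and 1, so the average is the hit fraction.
    return hit_right.mean()

u = dirichlet_mc(0.3)
```

Each walker is independent, which is the "trivially parallelizable" property the abstract highlights; Neumann and Robin conditions require the reflection and killing modifications the paper analyzes.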

  12. A Multiscale Parallel Computing Architecture for Automated Segmentation of the Brain Connectome

    PubMed Central

    Knobe, Kathleen; Newton, Ryan R.; Schlimbach, Frank; Blower, Melanie; Reid, R. Clay

    2015-01-01

    Several groups in neurobiology have embarked on deciphering the brain circuitry using large-scale imaging of a mouse brain and manual tracing of the connections between neurons. Creating a graph of the brain circuitry, also called a connectome, could have a huge impact on the understanding of neurodegenerative diseases such as Alzheimer’s disease. Although considerably smaller than a human brain, a mouse brain already exhibits one billion connections and manually tracing the connectome of a mouse brain can only be achieved partially. This paper proposes to scale up the tracing by using automated image segmentation and a parallel computing approach designed for domain experts. We explain the design decisions behind our parallel approach and we present our results for the segmentation of the vasculature and the cell nuclei, which have been obtained without any manual intervention. PMID:21926011

  13. Computational modeling of brain tumors: discrete, continuum or hybrid?

    NASA Astrophysics Data System (ADS)

    Wang, Zhihui; Deisboeck, Thomas S.

    In spite of all efforts, patients diagnosed with highly malignant brain tumors (gliomas) continue to face a grim prognosis. Achieving significant therapeutic advances will also require a more detailed quantitative understanding of the dynamic interactions among tumor cells, and between these cells and their biological microenvironment. Data-driven computational brain tumor models have the potential to provide experimental tumor biologists with such quantitative and cost-efficient tools to generate and test hypotheses on tumor progression, and to infer fundamental operating principles governing bidirectional signal propagation in multicellular cancer systems. This review highlights the modeling objectives of and challenges with developing such in silico brain tumor models by outlining two distinct computational approaches: discrete and continuum, each with representative examples. Future directions of this integrative computational neuro-oncology field, such as hybrid multiscale multiresolution modeling, are discussed.
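    The continuum class of models surveyed in such reviews is often instantiated as a reaction-diffusion equation, e.g. the Fisher-KPP law dc/dt = D d²c/dx² + ρ c(1−c) for a normalized tumor cell density. A minimal 1D explicit finite-difference sketch, with arbitrary illustrative parameters not taken from the review:

```python
import numpy as np

def fisher_kpp_1d(c0, D=0.1, rho=1.0, dx=0.1, dt=0.01, steps=200):
    """Explicit finite-difference integration of the Fisher-KPP
    equation, a standard continuum model of invasive tumor growth
    (illustrative sketch; zero-flux boundaries, toy parameters)."""
    c = np.asarray(c0, dtype=float).copy()
    assert D * dt / dx**2 < 0.5, "explicit scheme stability limit violated"
    for _ in range(steps):
        lap = np.zeros_like(c)
        lap[1:-1] = (c[2:] - 2 * c[1:-1] + c[:-2]) / dx**2   # interior Laplacian
        # zero-flux (Neumann) boundaries via mirrored neighbours
        lap[0] = 2 * (c[1] - c[0]) / dx**2
        lap[-1] = 2 * (c[-2] - c[-1]) / dx**2
        c += dt * (D * lap + rho * c * (1 - c))              # diffusion + logistic growth
    return c
```

    Starting from a localized seed, the density spreads as a travelling invasion front while the logistic term keeps it bounded by the carrying capacity.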

  14. Computational modeling of brain tumors: discrete, continuum or hybrid?

    NASA Astrophysics Data System (ADS)

    Wang, Zhihui; Deisboeck, Thomas S.

    2008-04-01

    In spite of all efforts, patients diagnosed with highly malignant brain tumors (gliomas) continue to face a grim prognosis. Achieving significant therapeutic advances will also require a more detailed quantitative understanding of the dynamic interactions among tumor cells, and between these cells and their biological microenvironment. Data-driven computational brain tumor models have the potential to provide experimental tumor biologists with such quantitative and cost-efficient tools to generate and test hypotheses on tumor progression, and to infer fundamental operating principles governing bidirectional signal propagation in multicellular cancer systems. This review highlights the modeling objectives of and challenges with developing such in silico brain tumor models by outlining two distinct computational approaches: discrete and continuum, each with representative examples. Future directions of this integrative computational neuro-oncology field, such as hybrid multiscale multiresolution modeling, are discussed.

  15. A Multiscale Approach to Modelling Drug Metabolism by Membrane-Bound Cytochrome P450 Enzymes

    PubMed Central

    Sansom, Mark S. P.; Mulholland, Adrian J.

    2014-01-01

    Cytochrome P450 enzymes are found in all life forms. P450s play an important role in drug metabolism, and have potential uses as biocatalysts. Human P450s are membrane-bound proteins. However, the interactions between P450s and their membrane environment are not well-understood. To date, all P450 crystal structures have been obtained from engineered proteins, from which the transmembrane helix was absent. A significant number of computational studies have been performed on P450s, but the majority of these have been performed on the solubilised forms of P450s. Here we present a multiscale approach for modelling P450s, spanning from coarse-grained and atomistic molecular dynamics simulations to reaction modelling using hybrid quantum mechanics/molecular mechanics (QM/MM) methods. To our knowledge, this is the first application of such an integrated multiscale approach to modelling of a membrane-bound enzyme. We have applied this protocol to a key human P450 involved in drug metabolism: CYP3A4. A biologically realistic model of CYP3A4, complete with its transmembrane helix and a membrane, has been constructed and characterised. The dynamics of this complex have been studied, and the oxidation of the anticoagulant R-warfarin has been modelled in the active site. Calculations have also been performed on the soluble form of the enzyme in aqueous solution. Important differences are observed between the membrane and solution systems, most notably for the gating residues and channels that control access to the active site. The protocol that we describe here is applicable to other membrane-bound enzymes. PMID:25033460

  16. A multiscale approach to modelling drug metabolism by membrane-bound cytochrome P450 enzymes.

    PubMed

    Lonsdale, Richard; Rouse, Sarah L; Sansom, Mark S P; Mulholland, Adrian J

    2014-07-01

    Cytochrome P450 enzymes are found in all life forms. P450s play an important role in drug metabolism, and have potential uses as biocatalysts. Human P450s are membrane-bound proteins. However, the interactions between P450s and their membrane environment are not well-understood. To date, all P450 crystal structures have been obtained from engineered proteins, from which the transmembrane helix was absent. A significant number of computational studies have been performed on P450s, but the majority of these have been performed on the solubilised forms of P450s. Here we present a multiscale approach for modelling P450s, spanning from coarse-grained and atomistic molecular dynamics simulations to reaction modelling using hybrid quantum mechanics/molecular mechanics (QM/MM) methods. To our knowledge, this is the first application of such an integrated multiscale approach to modelling of a membrane-bound enzyme. We have applied this protocol to a key human P450 involved in drug metabolism: CYP3A4. A biologically realistic model of CYP3A4, complete with its transmembrane helix and a membrane, has been constructed and characterised. The dynamics of this complex have been studied, and the oxidation of the anticoagulant R-warfarin has been modelled in the active site. Calculations have also been performed on the soluble form of the enzyme in aqueous solution. Important differences are observed between the membrane and solution systems, most notably for the gating residues and channels that control access to the active site. The protocol that we describe here is applicable to other membrane-bound enzymes.

  17. Multiscale Computation. Needs and Opportunities for BER Science

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Scheibe, Timothy D.; Smith, Jeremy C.

    2015-01-01

    The Environmental Molecular Sciences Laboratory (EMSL), a scientific user facility managed by Pacific Northwest National Laboratory for the U.S. Department of Energy, Office of Biological and Environmental Research (BER), conducted a one-day workshop on August 26, 2014 on the topic of “Multiscale Computation: Needs and Opportunities for BER Science.” Twenty invited participants, from various computational disciplines within the BER program research areas, were charged with the following objectives: (1) identify BER-relevant models and their potential cross-scale linkages that could be exploited to better connect molecular-scale research to BER research at larger scales; and (2) identify critical science directions that will motivate EMSL decisions regarding future computational (hardware and software) architectures.

  18. Introducing Enabling Computational Tools to the Climate Sciences: Multi-Resolution Climate Modeling with Adaptive Cubed-Sphere Grids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jablonowski, Christiane

    The research investigates and advances strategies for bridging the scale discrepancies between local, regional and global phenomena in climate models without the prohibitive computational costs of global cloud-resolving simulations. In particular, the research explores new frontiers in computational geoscience by introducing high-order Adaptive Mesh Refinement (AMR) techniques into climate research. AMR and statically-adapted variable-resolution approaches represent an emerging trend for atmospheric models and are likely to become the new norm in future-generation weather and climate models. The research advances the understanding of multi-scale interactions in the climate system and showcases a pathway to model these interactions effectively with advanced computational tools, like the Chombo AMR library developed at the Lawrence Berkeley National Laboratory. The research is interdisciplinary and combines applied mathematics, scientific computing and the atmospheric sciences. In this research project, a hierarchy of high-order atmospheric models on cubed-sphere computational grids has been developed that serves as an algorithmic prototype for the finite-volume solution-adaptive Chombo-AMR approach. The investigations have focused on the characteristics of both static mesh adaptations and dynamically-adaptive grids that can capture flow fields of interest like tropical cyclones. Six research themes have been chosen. These are (1) the introduction of adaptive mesh refinement techniques into the climate sciences, (2) advanced algorithms for nonhydrostatic atmospheric dynamical cores, (3) an assessment of the interplay between resolved-scale dynamical motions and subgrid-scale physical parameterizations, (4) evaluation techniques for atmospheric model hierarchies, (5) the comparison of AMR refinement strategies and (6) tropical cyclone studies with a focus on multi-scale interactions and variable-resolution modeling.
The results of this research project demonstrate significant advances in all six research areas. The major conclusions are that statically-adaptive variable-resolution modeling is currently becoming mature in the climate sciences, and that AMR holds outstanding promise for future-generation weather and climate models on high-performance computing architectures.

  19. Models, Databases, and Simulation Tools Needed for the Realization of Integrated Computational Materials Engineering. Proceedings of the Symposium Held at Materials Science and Technology 2010

    NASA Technical Reports Server (NTRS)

    Arnold, Steven M. (Editor); Wong, Terry T. (Editor)

    2011-01-01

    Topics covered include: An Annotative Review of Multiscale Modeling and its Application to Scales Inherent in the Field of ICME; and A Multiscale, Nonlinear, Modeling Framework Enabling the Design and Analysis of Composite Materials and Structures.

  20. A review of predictive nonlinear theories for multiscale modeling of heterogeneous materials

    NASA Astrophysics Data System (ADS)

    Matouš, Karel; Geers, Marc G. D.; Kouznetsova, Varvara G.; Gillman, Andrew

    2017-02-01

    Since the beginning of the industrial age, material performance and design have been in the midst of innovation of many disruptive technologies. Today's electronics, space, medical, transportation, and other industries are enriched by development, design and deployment of composite, heterogeneous and multifunctional materials. As a result, materials innovation is now considerably outpaced by other aspects from component design to product cycle. In this article, we review predictive nonlinear theories for multiscale modeling of heterogeneous materials. Deeper attention is given to multiscale modeling in space and to computational homogenization in addressing challenging materials science questions. Moreover, we discuss a state-of-the-art platform in predictive image-based, multiscale modeling with co-designed simulations and experiments that executes on the world's largest supercomputers. Such a modeling framework consists of experimental tools, computational methods, and digital data strategies. Once fully completed, this collaborative and interdisciplinary framework can be the basis of Virtual Materials Testing standards and aids in the development of new material formulations. Moreover, it will decrease the time to market of innovative products.

  1. A review of predictive nonlinear theories for multiscale modeling of heterogeneous materials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Matouš, Karel, E-mail: kmatous@nd.edu; Geers, Marc G.D.; Kouznetsova, Varvara G.

    2017-02-01

    Since the beginning of the industrial age, material performance and design have been in the midst of innovation of many disruptive technologies. Today's electronics, space, medical, transportation, and other industries are enriched by development, design and deployment of composite, heterogeneous and multifunctional materials. As a result, materials innovation is now considerably outpaced by other aspects from component design to product cycle. In this article, we review predictive nonlinear theories for multiscale modeling of heterogeneous materials. Deeper attention is given to multiscale modeling in space and to computational homogenization in addressing challenging materials science questions. Moreover, we discuss a state-of-the-art platform in predictive image-based, multiscale modeling with co-designed simulations and experiments that executes on the world's largest supercomputers. Such a modeling framework consists of experimental tools, computational methods, and digital data strategies. Once fully completed, this collaborative and interdisciplinary framework can be the basis of Virtual Materials Testing standards and aids in the development of new material formulations. Moreover, it will decrease the time to market of innovative products.

  2. From mitochondrial ion channels to arrhythmias in the heart: computational techniques to bridge the spatio-temporal scales

    PubMed Central

    Plank, Gernot; Zhou, Lufang; Greenstein, Joseph L; Cortassa, Sonia; Winslow, Raimond L; O'Rourke, Brian; Trayanova, Natalia A

    2008-01-01

    Computer simulations of electrical behaviour in the whole ventricles have become commonplace during the last few years. The goals of this article are (i) to review the techniques that are currently employed to model cardiac electrical activity in the heart, discussing the strengths and weaknesses of the various approaches, and (ii) to implement a novel modelling approach, based on physiological reasoning, that lifts some of the restrictions imposed by current state-of-the-art ionic models. To illustrate the latter approach, the present study uses a recently developed ionic model of the ventricular myocyte that incorporates an excitation–contraction coupling and mitochondrial energetics model. A paradigm to bridge the vastly disparate spatial and temporal scales, from subcellular processes to the entire organ, and from sub-microseconds to minutes, is presented. Achieving sufficient computational efficiency is the key to success in the quest to develop multiscale realistic models that are expected to lead to better understanding of the mechanisms of arrhythmia induction following failure at the organelle level, and ultimately to the development of novel therapeutic applications. PMID:18603526

  3. An Unified Multiscale Framework for Planar, Surface, and Curve Skeletonization.

    PubMed

    Jalba, Andrei C; Sobiecki, Andre; Telea, Alexandru C

    2016-01-01

    Computing skeletons of 2D shapes, and medial surface and curve skeletons of 3D shapes, is a challenging task. In particular, there is no unified framework that detects all types of skeletons using a single model, and also produces a multiscale representation which allows one to progressively simplify, or regularize, all skeleton types. In this paper, we present such a framework. We model skeleton detection and regularization by a conservative mass transport process from a shape's boundary to its surface skeleton, next to its curve skeleton, and finally to the shape center. The resulting density field can be thresholded to obtain a multiscale representation of progressively simplified surface, or curve, skeletons. We detail a numerical implementation of our framework which is demonstrably stable and has high computational efficiency. We demonstrate our framework on several complex 2D and 3D shapes.

  4. The role of alpha-rhythm states in perceptual learning: insights from experiments and computational models

    PubMed Central

    Sigala, Rodrigo; Haufe, Sebastian; Roy, Dipanjan; Dinse, Hubert R.; Ritter, Petra

    2014-01-01

    During the past two decades growing evidence indicates that brain oscillations in the alpha band (~10 Hz) not only reflect an “idle” state of cortical activity, but also take a more active role in the generation of complex cognitive functions. A recent study shows that more than 60% of the observed inter-subject variability in perceptual learning can be ascribed to ongoing alpha activity. This evidence indicates a significant role of alpha oscillations in perceptual learning and motivates exploration of the potential underlying mechanisms. It is therefore the purpose of this review to highlight existing evidence that ascribes intrinsic alpha oscillations a role in shaping our ability to learn. In the review, we disentangle the alpha rhythm into different neural signatures that control information processing within individual functional building blocks of perceptual learning. We further highlight computational studies that shed light on potential mechanisms regarding how alpha oscillations may modulate information transfer and connectivity changes relevant for learning. To enable testing of those model-based hypotheses, we emphasize the need for multidisciplinary approaches combining assessment of behavior and multi-scale neuronal activity, active modulation of ongoing brain states and computational modeling to reveal the mathematical principles of the complex neuronal interactions. In particular we highlight the relevance of multi-scale modeling frameworks such as the one currently being developed by “The Virtual Brain” project. PMID:24772077

  5. Fiber orientation interpolation for the multiscale analysis of short fiber reinforced composite parts

    NASA Astrophysics Data System (ADS)

    Köbler, Jonathan; Schneider, Matti; Ospald, Felix; Andrä, Heiko; Müller, Ralf

    2018-06-01

    For short fiber reinforced plastic parts the local fiber orientation has a strong influence on the mechanical properties. To enable multiscale computations using surrogate models we advocate a two-step identification strategy. Firstly, for a number of sample orientations an effective model is derived by numerical methods available in the literature. Secondly, to cover a general orientation state, these effective models are interpolated. In this article we develop a novel and effective strategy to carry out this interpolation. Firstly, taking into account symmetry arguments, we reduce the fiber orientation phase space to a triangle in R^2. For an associated triangulation of this triangle we furnish each node with a surrogate model. Then, we use linear interpolation on the fiber orientation triangle to equip each fiber orientation state with an effective stress. The proposed approach is quite general, and works for any physically nonlinear constitutive law on the micro-scale, as long as surrogate models for single fiber orientation states can be extracted. To demonstrate the capabilities of our scheme we study the viscoelastic creep behavior of short glass fiber reinforced PA66, and use Schapery's collocation method together with FFT-based computational homogenization to derive single orientation state effective models. We discuss the efficient implementation of our method, and present results of a component scale computation on a benchmark component by using ABAQUS®.
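    The second step of the strategy, linear interpolation of surrogate models over the fiber orientation triangle, reduces to barycentric weighting of the node models. A minimal sketch, with hypothetical toy linear node laws standing in for the precomputed surrogates:

```python
import numpy as np

def barycentric_weights(p, tri):
    """Barycentric coordinates of point p inside triangle tri (3x2 array)."""
    a, b, c = tri
    T = np.column_stack([b - a, c - a])           # 2x2 edge matrix
    l1, l2 = np.linalg.solve(T, np.asarray(p) - a)
    return np.array([1.0 - l1 - l2, l1, l2])

def interpolated_stress(p, tri, node_models, strain):
    """Effective stress at orientation state p: surrogate models attached
    to the triangle nodes are combined linearly, as in the abstract.
    The node models here are hypothetical stand-ins."""
    w = barycentric_weights(p, tri)
    return sum(wi * model(strain) for wi, model in zip(w, node_models))
```

    At a triangle node the interpolation returns that node's surrogate exactly; at the centroid all three models contribute equally.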

  6. Fiber orientation interpolation for the multiscale analysis of short fiber reinforced composite parts

    NASA Astrophysics Data System (ADS)

    Köbler, Jonathan; Schneider, Matti; Ospald, Felix; Andrä, Heiko; Müller, Ralf

    2018-04-01

    For short fiber reinforced plastic parts the local fiber orientation has a strong influence on the mechanical properties. To enable multiscale computations using surrogate models we advocate a two-step identification strategy. Firstly, for a number of sample orientations an effective model is derived by numerical methods available in the literature. Secondly, to cover a general orientation state, these effective models are interpolated. In this article we develop a novel and effective strategy to carry out this interpolation. Firstly, taking into account symmetry arguments, we reduce the fiber orientation phase space to a triangle in R^2. For an associated triangulation of this triangle we furnish each node with a surrogate model. Then, we use linear interpolation on the fiber orientation triangle to equip each fiber orientation state with an effective stress. The proposed approach is quite general, and works for any physically nonlinear constitutive law on the micro-scale, as long as surrogate models for single fiber orientation states can be extracted. To demonstrate the capabilities of our scheme we study the viscoelastic creep behavior of short glass fiber reinforced PA66, and use Schapery's collocation method together with FFT-based computational homogenization to derive single orientation state effective models. We discuss the efficient implementation of our method, and present results of a component scale computation on a benchmark component by using ABAQUS®.

  7. Multiscale decoding for reliable brain-machine interface performance over time.

    PubMed

    Han-Lin Hsieh; Wong, Yan T; Pesaran, Bijan; Shanechi, Maryam M

    2017-07-01

    Recordings from invasive implants can degrade over time, resulting in a loss of spiking activity for some electrodes. For brain-machine interfaces (BMI), such a signal degradation lowers control performance. Achieving reliable performance over time is critical for BMI clinical viability. One approach to improve BMI longevity is to simultaneously use spikes and other recording modalities such as local field potentials (LFP), which are more robust to signal degradation over time. We have developed a multiscale decoder that can simultaneously model the different statistical profiles of multi-scale spike/LFP activity (discrete spikes vs. continuous LFP). This decoder can also run at multiple time-scales (millisecond for spikes vs. tens of milliseconds for LFP). Here, we validate the multiscale decoder for estimating the movement of 7 major upper-arm joint angles in a non-human primate (NHP) during a 3D reach-to-grasp task. The multiscale decoder uses motor cortical spike/LFP recordings as its input. We show that the multiscale decoder can improve decoding accuracy by adding information from LFP to spikes, while running at the fast millisecond time-scale of the spiking activity. Moreover, this improvement is achieved using relatively few LFP channels, demonstrating the robustness of the approach. These results suggest that using multiscale decoders has the potential to improve the reliability and longevity of BMIs.
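    The core idea of one recursive estimator running at two time-scales can be illustrated with a scalar linear-Gaussian stand-in: a random-walk state observed through a fast, noisy channel (spike-like) every step and a slower, cleaner channel (LFP-like) every tenth step. This is not the paper's point-process decoder; all names and parameter values are invented for illustration.

```python
import numpy as np

def kalman_two_rate(y_fast, y_slow, q=0.01, r_fast=1.0, r_slow=0.5, slow_every=10):
    """Minimal two-timescale Kalman filter (generic linear-Gaussian
    stand-in for a multiscale spike/LFP decoder).  Returns the state
    estimates and the final posterior variance."""
    x, p = 0.0, 1.0
    xs = []
    for t, yf in enumerate(y_fast):
        p += q                                   # predict (random-walk state)
        k = p / (p + r_fast)                     # fast-channel update, every step
        x += k * (yf - x)
        p *= 1 - k
        if y_slow is not None and t % slow_every == 0:
            k = p / (p + r_slow)                 # slow-channel update, every 10th step
            x += k * (y_slow[t // slow_every] - x)
            p *= 1 - k
        xs.append(x)
    return np.array(xs), p
```

    Running the filter with and without the slow channel shows the posterior-variance reduction that the added modality buys, the qualitative effect the abstract reports for adding LFP to spikes.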

  8. Multiscale Mathematics for Biomass Conversion to Renewable Hydrogen

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Plechac, Petr

    2016-03-01

    The overall objective of this project was to develop multiscale models for understanding and eventually designing complex processes for renewables. To the best of our knowledge, our work is the first attempt at modeling complex reacting systems, whose performance relies on underlying multiscale mathematics and developing rigorous mathematical techniques and computational algorithms to study such models. Our specific application lies at the heart of biofuels initiatives of DOE and entails modeling of catalytic systems, to enable economic, environmentally benign, and efficient conversion of biomass into either hydrogen or valuable chemicals.

  9. Accelerating electrostatic surface potential calculation with multi-scale approximation on graphics processing units.

    PubMed

    Anandakrishnan, Ramu; Scogland, Tom R W; Fenley, Andrew T; Gordon, John C; Feng, Wu-chun; Onufriev, Alexey V

    2010-06-01

    Tools that compute and visualize biomolecular electrostatic surface potential have been used extensively for studying biomolecular function. However, determining the surface potential for large biomolecules on a typical desktop computer can take days or longer using currently available tools and methods. Two commonly used techniques to speed up these types of electrostatic computations are approximations based on multi-scale coarse-graining and parallelization across multiple processors. This paper demonstrates that for the computation of electrostatic surface potential, these two techniques can be combined to deliver significantly greater speed-up than either one separately, something that is in general not always possible. Specifically, the electrostatic potential computation, using an analytical linearized Poisson-Boltzmann (ALPB) method, is approximated using the hierarchical charge partitioning (HCP) multi-scale method, and parallelized on an ATI Radeon 4870 graphical processing unit (GPU). The implementation delivers a combined 934-fold speed-up for a 476,040 atom viral capsid, compared to an equivalent non-parallel implementation on an Intel E6550 CPU without the approximation. This speed-up is significantly greater than the 42-fold speed-up for the HCP approximation alone or the 182-fold speed-up for the GPU alone.
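    The flavor of the multi-scale approximation can be conveyed with a toy monopole grouping: distant charges are partitioned into groups and each group is replaced by its net charge at a charge-weighted centroid. This is only a caricature of hierarchical charge partitioning (the paper's HCP/ALPB formulation is far richer); group counts and geometry below are invented.

```python
import numpy as np

def potential_exact(r_eval, pos, q):
    """Direct-sum Coulomb-style potential (arbitrary units) at r_eval."""
    d = np.linalg.norm(pos - r_eval, axis=1)
    return np.sum(q / d)

def potential_grouped(r_eval, pos, q, n_groups=8):
    """Toy multi-scale approximation: partition the charges into groups
    and replace each group by its net charge at a charge-weighted
    centroid, so the cost scales with the number of groups rather than
    the number of charges."""
    total = 0.0
    for block in np.array_split(np.arange(len(q)), n_groups):
        centroid = np.average(pos[block], axis=0, weights=np.abs(q[block]) + 1e-12)
        total += q[block].sum() / np.linalg.norm(centroid - r_eval)
    return total
```

    For an evaluation point far from a compact charge cluster, the grouped sum agrees with the direct sum to well under a percent while touching far fewer terms, which is the kind of error/cost trade-off that multi-scale coarse-graining exploits.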

  10. Source imaging of potential fields through a matrix space-domain algorithm

    NASA Astrophysics Data System (ADS)

    Baniamerian, Jamaledin; Oskooi, Behrooz; Fedi, Maurizio

    2017-01-01

    Imaging of potential fields yields a fast 3D representation of the source distribution of potential fields. Imaging methods are all based on multiscale methods allowing the source parameters of potential fields to be estimated from a simultaneous analysis of the field at various scales or, in other words, at many altitudes. Accuracy in performing upward continuation and differentiation of the field therefore has a key role for this class of methods. We here describe an accurate method for performing upward continuation and vertical differentiation in the space domain. We perform a direct discretization of the integral equations for upward continuation and the Hilbert transform; from these equations we then define matrix operators performing the transformation, which are symmetric (upward continuation) or anti-symmetric (differentiation), respectively. Thanks to these properties, just the first row of the matrices needs to be computed, which dramatically decreases the computational cost. Our approach allows a simple procedure, with the advantage of not involving large data extension or tapering, as would instead be required for Fourier-domain computation. It also allows level-to-drape upward continuation and a stable differentiation at high frequencies; finally, upward continuation and differentiation kernels may be merged into a single kernel. The accuracy of our approach is shown to be important for multi-scale algorithms, such as the continuous wavelet transform or the DEXP (depth from extreme point method), because border errors, which tend to propagate largely at the largest scales, are radically reduced. The application of our algorithm to synthetic and real-case gravity and magnetic data sets confirms the accuracy of our space-domain strategy over FFT algorithms and standard convolution procedures.
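    The matrix construction the abstract describes, computing only the first row and exploiting symmetry, can be sketched for 1D upward continuation with the Poisson kernel. The discretization below is a bare Riemann-sum version with no tapering, and all sizes are illustrative:

```python
import numpy as np

def upward_continuation_matrix(n, dx, dz):
    """Space-domain upward-continuation operator built from its first
    row only: the kernel depends on |x_i - x_j|, so the matrix is
    symmetric Toeplitz.  The 1D Poisson kernel dz / (pi*(x^2 + dz^2)),
    discretized with spacing dx, continues a profile upward by dz."""
    offsets = np.arange(n) * dx
    first_row = (dz / np.pi) * dx / (offsets**2 + dz**2)   # kernel * quadrature weight
    idx = np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
    return first_row[idx]                                  # symmetric Toeplitz matrix

# usage sketch: continue a synthetic anomaly upward by dz = 3
x = np.linspace(-50.0, 50.0, 401)
dx = x[1] - x[0]
field = 1.0 / (x**2 + 4.0)        # Poisson-kernel-shaped anomaly with half-width 2
U = upward_continuation_matrix(len(x), dx, dz=3.0)
continued = U @ field
```

    Because Poisson kernels form a semigroup, continuing 1/(x² + 4) upward by dz = 3 should closely match the analytic result 2.5/(x² + 25), which makes the discretization easy to check.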

  11. Modular-based multiscale modeling on viscoelasticity of polymer nanocomposites

    NASA Astrophysics Data System (ADS)

    Li, Ying; Liu, Zeliang; Jia, Zheng; Liu, Wing Kam; Aldousari, Saad M.; Hedia, Hassan S.; Asiri, Saeed A.

    2017-02-01

    Polymer nanocomposites have been envisioned as advanced materials for improving the mechanical performance of neat polymers used in aerospace, petrochemical, environment and energy industries. With the filler size approaching the nanoscale, composite materials tend to demonstrate remarkable thermomechanical properties, even with addition of a small amount of fillers. These observations confront the classical composite theories and are usually attributed to the high surface-area-to-volume-ratio of the fillers, which can introduce strong nanoscale interfacial effect and relevant long-range perturbation on polymer chain dynamics. Despite decades of research aimed at understanding interfacial effect and improving the mechanical performance of composite materials, it is not currently possible to accurately predict the mechanical properties of polymer nanocomposites directly from their molecular constituents. To overcome this challenge, different theoretical, experimental and computational schemes will be used to uncover the key physical mechanisms at the relevant spatial and temporal scales for predicting and tuning constitutive behaviors in silico, thereby establishing a bottom-up virtual design principle to achieve unprecedented mechanical performance of nanocomposites. A modular-based multiscale modeling approach for viscoelasticity of polymer nanocomposites has been proposed and discussed in this study, including four modules: (A) neat polymer toolbox; (B) interphase toolbox; (C) microstructural toolbox and (D) homogenization toolbox. Integrating these modules together, macroscopic viscoelasticity of polymer nanocomposites could be directly predicted from their molecular constituents. This will maximize the computational ability to design novel polymer composites with advanced performance. 
More importantly, elucidating the viscoelasticity of polymer nanocomposites through fundamental studies is a critical step to generate an integrated computational material engineering principle for discovering and manufacturing new composites with transformative impact on aerospace, automobile, petrochemical industries.

  12. Multi-fluid Dynamics for Supersonic Jet-and-Crossflows and Liquid Plug Rupture

    NASA Astrophysics Data System (ADS)

    Hassan, Ezeldin A.

    Multi-fluid dynamics simulations require appropriate numerical treatments based on the main flow characteristics, such as flow speed, turbulence, thermodynamic state, and time and length scales. In this thesis, two distinct problems are investigated: supersonic jet and crossflow interactions; and liquid plug propagation and rupture in an airway. Gaseous non-reactive ethylene jet and air crossflow simulation represents essential physics for fuel injection in SCRAMJET engines. The regime is highly unsteady, involving shocks, turbulent mixing, and large-scale vortical structures. An eddy-viscosity-based multi-scale turbulence model is proposed to resolve turbulent structures consistent with grid resolution and turbulence length scales. Predictions of the time-averaged fuel concentration from the multi-scale model are improved over Reynolds-averaged Navier-Stokes models originally derived from stationary flow. The benefit of the multi-scale model alone is, however, limited in cases where the vortical structures are small and scattered, thus requiring prohibitively expensive grids to resolve the flow field accurately. Statistical information related to turbulent fluctuations is utilized to estimate an effective turbulent Schmidt number, which is shown to be highly varying in space. Accordingly, an adaptive turbulent Schmidt number approach is proposed, by allowing the resolved field to adaptively influence the value of the turbulent Schmidt number in the multi-scale turbulence model. The proposed model estimates a time-averaged turbulent Schmidt number adapted to the computed flowfield, instead of the constant value common to the eddy-viscosity-based Navier-Stokes models. This approach is assessed using a grid-refinement study for the normal injection case, and tested with 30 degree injection, showing improved results over the constant turbulent Schmidt model in both the mean and variance of fuel concentration predictions.
For the incompressible liquid plug propagation and rupture study, numerical simulations are conducted using an Eulerian-Lagrangian approach with a continuous-interface method. A reconstruction scheme is developed to allow topological changes during plug rupture by altering the connectivity information of the interface mesh. Rupture time is shown to be delayed as the initial precursor film thickness increases. During the plug rupture process, a sudden increase of mechanical stresses on the tube wall is recorded, which can cause tissue damage.

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Savic, Vesna; Hector, Louis G.; Ezzat, Hesham

    This paper presents an overview of a four-year project focused on development of an integrated computational materials engineering (ICME) toolset for third generation advanced high-strength steels (3GAHSS). Following a brief look at ICME as an emerging discipline within the Materials Genome Initiative, technical tasks in the ICME project will be discussed. Specific aims of the individual tasks are multi-scale, microstructure-based material model development using state-of-the-art computational and experimental techniques, forming, toolset assembly, design optimization, integration and technical cost modeling. The integrated approach is initially illustrated using a 980 grade transformation induced plasticity (TRIP) steel, subject to a two-step quenching and partitioning (Q&P) heat treatment, as an example.

  14. DEVELOPMENT OF AN AGGREGATION AND EPISODE SELECTION SCHEME TO SUPPORT THE MODELS-3 COMMUNITY MULTISCALE AIR QUALITY MODEL

    EPA Science Inventory

    The development of an episode selection and aggregation approach, designed to support distributional estimation of use with the Models-3 Community Multiscale Air Quality (CMAQ) model, is described. The approach utilized cluster analysis of the 700-hPa east-west and north-south...

  15. Quantification of pulmonary vessel diameter in low-dose CT images

    NASA Astrophysics Data System (ADS)

    Rudyanto, Rina D.; Ortiz de Solórzano, Carlos; Muñoz-Barrutia, Arrate

    2015-03-01

    Accurate quantification of vessel diameter in low-dose Computed Tomography (CT) images is important for studying pulmonary diseases, in particular for the diagnosis of vascular diseases and the characterization of morphological vascular remodeling in Chronic Obstructive Pulmonary Disease (COPD). In this study, we objectively compare several vessel diameter estimation methods using a physical phantom. Five solid tubes of differing diameters (from 0.898 to 3.980 mm) were embedded in foam, simulating vessels in the lungs. To measure the diameters, we first extracted the vessels using one of two approaches: vessel enhancement using multi-scale Hessian matrix computation, or explicit segmentation using an intensity threshold. We implemented six methods to quantify the diameter: three estimating diameter as a function of the scale used to calculate the Hessian matrix; two calculating an equivalent diameter from the cross-section area obtained by thresholding the intensity and vesselness response, respectively; and finally, one estimating the diameter of the object using the Full Width at Half Maximum (FWHM). We find that the accuracy of frequently used methods estimating vessel diameter from the multi-scale vesselness filter depends on the range and the number of scales used. Moreover, these methods still yield a significant error margin on the challenging estimation of the smallest diameter (on the order of or below the size of the CT point spread function). The performance of the thresholding-based methods naturally depends on the value of the threshold. Finally, we observe that a simple adaptive thresholding approach can achieve a robust and accurate estimation of the smallest vessel diameters.
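
    Of the six methods, the FWHM estimator is the simplest to sketch. The fragment below is an illustrative Python/NumPy reconstruction, not the authors' code: it assumes a 1-D cross-sectional intensity profile and locates the half-maximum crossings by linear interpolation.

```python
import numpy as np

def fwhm_diameter(profile, spacing=1.0):
    """Estimate object diameter as the Full Width at Half Maximum of a
    1-D cross-sectional intensity profile, interpolating linearly at
    the half-maximum crossings."""
    prof = np.asarray(profile, dtype=float)
    base = prof.min()
    half = base + 0.5 * (prof.max() - base)
    idx = np.flatnonzero(prof >= half)
    i0, i1 = idx[0], idx[-1]
    # linear interpolation of the left and right half-max crossings
    if i0 > 0:
        left = (i0 - 1) + (half - prof[i0 - 1]) / (prof[i0] - prof[i0 - 1])
    else:
        left = float(i0)
    if i1 < len(prof) - 1:
        right = i1 + (half - prof[i1]) / (prof[i1 + 1] - prof[i1])
    else:
        right = float(i1)
    return (right - left) * spacing

# Synthetic cross-section: Gaussian profile with sigma = 1 mm, sampled
# every 0.5 mm; the analytic FWHM is 2*sqrt(2 ln 2) ~ 2.355 mm.
x = np.arange(-10, 10.5, 0.5)
profile = np.exp(-x**2 / 2.0)
d = fwhm_diameter(profile, spacing=0.5)
```

    On this synthetic profile the estimate lands close to the analytic value, which is exactly the kind of sub-voxel behavior the comparison above probes near the scanner's point spread function.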

  16. Multiscale Integration of -Omic, Imaging, and Clinical Data in Biomedical Informatics

    PubMed Central

    Phan, John H.; Quo, Chang F.; Cheng, Chihwen; Wang, May Dongmei

    2016-01-01

    This paper reviews challenges and opportunities in multiscale data integration for biomedical informatics. Biomedical data can come from different biological origins, data acquisition technologies, and clinical applications. Integrating such data across multiple scales (e.g., molecular, cellular/tissue, and patient) can lead to more informed decisions for personalized, predictive, and preventive medicine. However, data heterogeneity, community standards in data acquisition, and computational complexity are big challenges for such decision making. This review describes genomic and proteomic (i.e., molecular), histopathological imaging (i.e., cellular/tissue), and clinical (i.e., patient) data; it includes case studies for single-scale (e.g., combining genomic or histopathological image data), multiscale (e.g., combining histopathological image and clinical data), and multiscale and multiplatform (e.g., the Human Protein Atlas and The Cancer Genome Atlas) data integration. Numerous opportunities exist in biomedical informatics research focusing on integration of multiscale and multiplatform data. PMID:23231990

  17. Multiscale integration of -omic, imaging, and clinical data in biomedical informatics.

    PubMed

    Phan, John H; Quo, Chang F; Cheng, Chihwen; Wang, May Dongmei

    2012-01-01

    This paper reviews challenges and opportunities in multiscale data integration for biomedical informatics. Biomedical data can come from different biological origins, data acquisition technologies, and clinical applications. Integrating such data across multiple scales (e.g., molecular, cellular/tissue, and patient) can lead to more informed decisions for personalized, predictive, and preventive medicine. However, data heterogeneity, community standards in data acquisition, and computational complexity are big challenges for such decision making. This review describes genomic and proteomic (i.e., molecular), histopathological imaging (i.e., cellular/tissue), and clinical (i.e., patient) data; it includes case studies for single-scale (e.g., combining genomic or histopathological image data), multiscale (e.g., combining histopathological image and clinical data), and multiscale and multiplatform (e.g., the Human Protein Atlas and The Cancer Genome Atlas) data integration. Numerous opportunities exist in biomedical informatics research focusing on integration of multiscale and multiplatform data.

  18. Multiscale Simulation of Blood Flow in Brain Arteries with an Aneurysm

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Leopold Grinberg; Vitali Morozov; Dmitry A. Fedosov

    2013-04-24

    Multi-scale modeling of arterial blood flow can shed light on the interaction between events happening at micro- and meso-scales (i.e., adhesion of red blood cells to the arterial wall, clot formation) and at macro-scales (i.e., change in flow patterns due to the clot). Coupled numerical simulations of such multi-scale flow require state-of-the-art computers and algorithms, along with techniques for multi-scale visualization. This animation presents results of studies used in the development of a multi-scale visualization methodology. First we use streamlines to show the path the flow takes as it moves through the system, including the aneurysm. Next we investigate the process of thrombus (blood clot) formation, which may be responsible for the rupture of aneurysms, by concentrating on the platelet blood cells, observing as they aggregate on the wall of the aneurysm.

  19. Multiscale modeling and simulation of brain blood flow

    NASA Astrophysics Data System (ADS)

    Perdikaris, Paris; Grinberg, Leopold; Karniadakis, George Em

    2016-02-01

    The aim of this work is to present an overview of recent advances in multi-scale modeling of brain blood flow. In particular, we present some approaches that enable the in silico study of multi-scale and multi-physics phenomena in the cerebral vasculature. We discuss the formulation of continuum and atomistic modeling approaches, present a consistent framework for their concurrent coupling, and list some of the challenges that one needs to overcome in achieving a seamless and scalable integration of heterogeneous numerical solvers. The effectiveness of the proposed framework is demonstrated in a realistic case involving modeling the thrombus formation process taking place on the wall of a patient-specific cerebral aneurysm. This highlights the ability of multi-scale algorithms to resolve important biophysical processes that span several spatial and temporal scales, potentially yielding new insight into the key aspects of brain blood flow in health and disease. Finally, we discuss open questions in multi-scale modeling and emerging topics of future research.

  20. Multiscale simulation of DC corona discharge and ozone generation from nanostructures

    NASA Astrophysics Data System (ADS)

    Wang, Pengxiang

    Atmospheric direct current (dc) corona discharge from micro-sized objects has been widely used as an ion source in many devices, such as photocopiers, laser printers, and electronic air cleaners. Shrinking the size of the discharge electrode to the nanometer range (e.g., through the use of carbon nanotubes, or CNTs) is expected to lead to a significant reduction in power consumption and detrimental ozone production in these devices. The objectives of this study are to unveil the fundamental physics of the nanoscale corona discharge and to evaluate its performance and ozone production through numerical models. The extremely small size of CNTs presents considerable complexity and challenges in modeling CNT corona discharges. A hybrid multiscale model, which combines a kinetic particle-in-cell plus Monte Carlo collision (PIC-MCC) model and a continuum model, is developed to simulate the corona discharge from nanostructures. The multiscale model is developed in several steps. First, a pure PIC-MCC model is developed, and PIC-MCC simulations of the corona plasma from a micro-sized electrode with the same boundary conditions as a prior model are performed to validate the PIC-MCC scheme. The agreement between the PIC-MCC model and the prior continuum model indicates the validity of the PIC-MCC scheme. The validated PIC-MCC scheme is then coupled with a continuum model to simulate the corona discharge from a micro-sized electrode. Unlike the prior continuum model, which predicts only the corona plasma region, the hybrid model successfully predicts the self-consistent discharge process in the entire corona discharge gap, including both the corona plasma region and the unipolar ion region. The voltage-current density curves obtained by the hybrid model agree well with analytical predictions and experimental results. The hybrid modeling approach, which combines the accuracy of a kinetic model and the efficiency of a continuum model, is thus validated for modeling dc corona discharges.
For simulation of corona discharges from nanostructures, a one-dimensional (1-D) multiscale model is used due to the prohibitive computational expense associated with two-dimensional (2-D) modeling. Near the nanoscale discharge electrode surface, a kinetic model based on PIC-MCC is used due to the relatively large Knudsen number in this region. Far away from the nanoscale discharge electrode, a continuum model is used since the Knudsen number there is very small. The multiscale modeling results are compared with experimental data. The quantitative agreement in positive discharges and qualitative agreement in negative discharges validate the modeling approach. The mechanism sustaining the discharge process from nanostructures is revealed and is found to be different from that of discharge from micro- or macro-sized electrodes. Finally, the corona plasma model is combined with a plasma chemistry model and a transport model to predict the ozone production from the nanoscale corona. The dependence of ozone production on the applied potential and air velocity is studied. The electric field distribution in a 2-D multiscale domain (from nanoscale to microscale) is predicted by solving Poisson's equation using a finite difference scheme. The discretized linear equations are solved using a multigrid method within the PETSc framework on a parallel supercomputer. Although the Poisson solver is able to resolve the multiscale field, the prohibitively long computation time limits the use of a 2-D solver in the current PIC-MCC scheme.
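
    The 2-D multigrid/PETSc solve is beyond a short example, but the underlying finite-difference treatment of Poisson's equation can be sketched in 1-D with a direct solve; the manufactured solution below is an assumption used only to verify the stencil, not a configuration from the thesis.

```python
import numpy as np

def poisson_1d(rho, phi_left, phi_right, dx):
    """Solve -phi'' = rho on a uniform 1-D grid (interior points only)
    with Dirichlet boundary values, using the standard second-order
    finite-difference stencil and a dense direct solve."""
    n = len(rho)
    A = (np.diag(np.full(n, 2.0))
         - np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1))
    b = np.asarray(rho, dtype=float) * dx**2
    b[0] += phi_left          # boundary values move to the right-hand side
    b[-1] += phi_right
    return np.linalg.solve(A, b)

# Manufactured solution phi(x) = x(1 - x) on [0, 1], so rho = 2;
# the second-order stencil reproduces a quadratic exactly.
n = 49
dx = 1.0 / (n + 1)
x = dx * np.arange(1, n + 1)
phi = poisson_1d(np.full(n, 2.0), 0.0, 0.0, dx)
err = np.max(np.abs(phi - x * (1 - x)))
```

    In practice the dense solve would be replaced by a multigrid or sparse iterative method, which is precisely the scaling issue the abstract reports for the 2-D case.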

  1. Multi-Scale Modeling to Improve Single-Molecule, Single-Cell Experiments

    NASA Astrophysics Data System (ADS)

    Munsky, Brian; Shepherd, Douglas

    2014-03-01

    Single-cell, single-molecule experiments are producing an unprecedented amount of data to capture the dynamics of biological systems. When integrated with computational models, observations of spatial, temporal and stochastic fluctuations can yield powerful quantitative insight. We concentrate on experiments that localize and count individual molecules of mRNA. These high-precision experiments have large imaging and computational processing costs, and we explore how improved computational analyses can dramatically reduce overall data requirements. In particular, we show how analyses of spatial, temporal and stochastic fluctuations can significantly enhance parameter estimation results for small, noisy data sets. We also show how full probability distribution analyses can constrain parameters with far less data than bulk analyses or statistical moment closures. Finally, we discuss how a systematic modeling progression from simple to more complex analyses can reduce total computational costs by orders of magnitude. We illustrate our approach using single-molecule, spatial mRNA measurements of Interleukin 1-alpha mRNA induction in human THP1 cells following stimulation. Our approach could improve the effectiveness of single-molecule gene regulation analyses for many other processes.

  2. Gene regulatory networks: a coarse-grained, equation-free approach to multiscale computation.

    PubMed

    Erban, Radek; Kevrekidis, Ioannis G; Adalsteinsson, David; Elston, Timothy C

    2006-02-28

    We present computer-assisted methods for analyzing stochastic models of gene regulatory networks. The main idea that underlies this equation-free analysis is the design and execution of appropriately initialized short bursts of stochastic simulations; the results of these are processed to estimate coarse-grained quantities of interest, such as mesoscopic transport coefficients. In particular, using a simple model of a genetic toggle switch, we illustrate the computation of an effective free energy Phi and of a state-dependent effective diffusion coefficient D that characterize an unavailable effective Fokker-Planck equation. Additionally we illustrate the linking of equation-free techniques with continuation methods for performing a form of stochastic "bifurcation analysis"; estimation of mean switching times in the case of a bistable switch is also implemented in this equation-free context. The accuracy of our methods is tested by direct comparison with long-time stochastic simulations. This type of equation-free analysis appears to be a promising approach to computing features of the long-time, coarse-grained behavior of certain classes of complex stochastic models of gene regulatory networks, circumventing the need for long Monte Carlo simulations.
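
    The "short bursts" idea can be sketched as follows, using a hypothetical one-dimensional test SDE instead of the paper's toggle-switch model: an ensemble of appropriately initialized bursts yields local estimates of the effective drift and diffusion coefficients, the ingredients of the unavailable effective Fokker-Planck equation. All names and parameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def burst_estimates(x0, drift, sigma, dt=1e-3, steps=10, reps=20000):
    """Run many short, appropriately initialized stochastic bursts from
    x0 (Euler-Maruyama) and estimate the local effective drift and
    diffusion coefficients, in the equation-free spirit."""
    x = np.full(reps, float(x0))
    for _ in range(steps):
        x += drift(x) * dt + sigma * np.sqrt(dt) * rng.standard_normal(reps)
    tau = steps * dt
    dx = x - x0
    v = dx.mean() / tau            # effective drift at x0
    D = dx.var() / (2.0 * tau)     # effective diffusion at x0
    return v, D

# Test problem with known answers: dX = -theta*X dt + sigma dW,
# so at x0 = 1 the drift is -theta and D = sigma^2 / 2.
theta, sigma = 2.0, 0.5
v, D = burst_estimates(1.0, lambda x: -theta * x, sigma)
# Expected: v ~ -2, D ~ 0.125 (up to burst length and sampling error)
```

    Repeating this over a grid of initial states x0 would tabulate v(x) and D(x), from which an effective free energy can be reconstructed as in the paper.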

  3. Analysis of Gas-Particle Flows through Multi-Scale Simulations

    NASA Astrophysics Data System (ADS)

    Gu, Yile

    Multi-scale structures are inherent in gas-solid flows, which render the modeling efforts challenging. On one hand, detailed simulations where the fine structures are resolved and particle properties can be directly specified can account for complex flow behaviors, but they are too computationally expensive to apply to larger systems. On the other hand, coarse-grained simulations demand far less computation but necessitate constitutive models which are often not readily available for given particle properties. The present study focuses on addressing this issue, as it seeks to provide a general framework through which one can obtain the required constitutive models from detailed simulations. To demonstrate the viability of this general framework, in which closures can be proposed for different particle properties, we focus on the van der Waals force of interaction between particles. We start with Computational Fluid Dynamics (CFD) - Discrete Element Method (DEM) simulations where the fine structures are resolved and the van der Waals force between particles can be directly specified, and obtain closures for stress and drag that are required for coarse-grained simulations. Specifically, we develop a new cohesion model that appropriately accounts for the van der Waals force between particles to be used in CFD-DEM simulations. We then validate this cohesion model and the CFD-DEM approach by showing that it can qualitatively capture experimental results where the addition of small particles to gas fluidization reduces bubble sizes. Based on the DEM and CFD-DEM simulation results, we propose stress models that account for the van der Waals force between particles. Finally, we apply machine learning, specifically neural networks, to obtain a drag model that captures the effects from fine structures and inter-particle cohesion.
We show that this novel approach using neural networks, which can be readily applied to closures other than the drag model considered here, can take advantage of the large amount of data generated from simulations and therefore offers superior modeling performance over traditional approaches.

  4. Assessing multiscale complexity of short heart rate variability series through a model-based linear approach

    NASA Astrophysics Data System (ADS)

    Porta, Alberto; Bari, Vlasta; Ranuzzi, Giovanni; De Maria, Beatrice; Baselli, Giuseppe

    2017-09-01

    We propose a multiscale complexity (MSC) method that assesses irregularity in assigned frequency bands and is appropriate for analyzing short time series. It is grounded on the identification of the coefficients of an autoregressive model, the computation of the mean position of the poles generating the components of the power spectral density in an assigned frequency band, and the assessment of their distance from the unit circle in the complex plane. The MSC method was tested on simulations and applied to short heart period (HP) variability series recorded during graded head-up tilt in 17 subjects (age from 21 to 54 years, median = 28 years, 7 females) and during paced breathing protocols in 19 subjects (age from 27 to 35 years, median = 31 years, 11 females) to assess the contribution of time scales typical of the cardiac autonomic control, namely the low frequency (LF, from 0.04 to 0.15 Hz) and high frequency (HF, from 0.15 to 0.5 Hz) bands, to the complexity of the cardiac regulation. The proposed MSC technique was compared to a traditional model-free multiscale method grounded on information theory, i.e., multiscale entropy (MSE). The approach suggests that the reduction of HP variability complexity observed during graded head-up tilt is due to a regularization of the HP fluctuations in the LF band via a possible intervention of sympathetic control, and that the decrement of HP variability complexity observed during slow breathing is the result of the regularization of the HP variations in both LF and HF bands, thus implying the action of physiological mechanisms working at time scales even different from that of respiration. MSE did not distinguish the experimental conditions at time scales larger than 1. Over short time series, MSC thus allows a more insightful association between cardiac control complexity and the physiological mechanisms modulating cardiac rhythm than a more traditional tool such as MSE.
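
    The pole-based computation (autoregressive identification, poles of the AR polynomial, selection by frequency band, mean distance from the unit circle) can be sketched in Python/NumPy; the least-squares AR fit and the synthetic AR(2) test series are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

def ar_fit(x, p):
    """Least-squares AR(p) fit:
    x[t] = a[0]*x[t-1] + ... + a[p-1]*x[t-p] + e[t]."""
    X = np.column_stack([x[p - k - 1:len(x) - k - 1] for k in range(p)])
    a, *_ = np.linalg.lstsq(X, x[p:], rcond=None)
    return a

def band_pole_distance(x, p, f_lo, f_hi, fs=1.0):
    """Mean distance from the unit circle of the AR poles whose
    frequencies fall inside the assigned band [f_lo, f_hi]."""
    a = ar_fit(x, p)
    poles = np.roots(np.r_[1.0, -a])
    f = np.abs(np.angle(poles)) * fs / (2 * np.pi)
    sel = poles[(f >= f_lo) & (f <= f_hi)]
    return np.mean(1.0 - np.abs(sel)) if sel.size else np.nan

# Synthetic series: AR(2) with a pole pair at radius 0.9 and frequency
# 0.1 cycles/sample, so the band distance should recover ~ 1 - 0.9.
r, f0 = 0.9, 0.1
a1, a2 = 2 * r * np.cos(2 * np.pi * f0), -r**2
x = np.zeros(5000)
e = rng.standard_normal(5000)
for t in range(2, 5000):
    x[t] = a1 * x[t - 1] + a2 * x[t - 2] + e[t]
dist = band_pole_distance(x, 2, 0.05, 0.15)
```

    A smaller distance from the unit circle corresponds to a narrower, more regular oscillation in that band, which is the sense in which the index measures band-limited regularity.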

  5. SU-F-I-10: Spatially Local Statistics for Adaptive Image Filtering

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Iliopoulos, AS; Sun, X; Floros, D

    Purpose: To facilitate adaptive image filtering operations, addressing spatial variations in both noise and signal. Such issues are prevalent in cone-beam projections, where physical effects such as X-ray scattering result in spatially variant noise, violating common assumptions of homogeneous noise and challenging conventional filtering approaches to signal extraction and noise suppression. Methods: We present a computational mechanism for probing into and quantifying the spatial variance of noise throughout an image. The mechanism builds a pyramid of local statistics at multiple spatial scales; local statistical information at each scale includes the (weighted) mean, median, standard deviation, median absolute deviation, as well as the histogram or dynamic range after local mean/median shifting. Based on inter-scale differences of local statistics, the spatial scope of distinguishable noise variation is detected in a semi- or un-supervised manner. Additionally, we propose and demonstrate the incorporation of such information in globally parametrized (i.e., non-adaptive) filters, effectively transforming the latter into spatially adaptive filters. The multi-scale mechanism is materialized by efficient algorithms and implemented on parallel CPU/GPU architectures. Results: We demonstrate the impact of local statistics for adaptive image processing and analysis using cone-beam projections of a Catphan phantom, fitted within an annulus to increase X-ray scattering. The effective spatial scope of local statistics calculations is shown to vary throughout the image domain, necessitating multi-scale noise and signal structure analysis. Filtering results with and without spatial filter adaptation are compared visually, illustrating improvements in imaging signal extraction and noise suppression, and in preserving information in low-contrast regions.
Conclusion: Local image statistics can be incorporated in filtering operations to equip them with adaptivity to spatial signal/noise variations. An efficient multi-scale computational mechanism is developed to curtail processing latency. Spatially adaptive filtering may impact subsequent processing tasks such as reconstruction and numerical gradient computations for deformable registration. NIH Grant No. R01-184173.
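
    A minimal sketch of multi-scale local statistics, assuming NumPy only and a synthetic test image rather than cone-beam data: box-window local means and standard deviations are computed at several scales via integral images, exposing a noise level that varies across the image domain. The function names and the test setup are illustrative, not the authors' implementation.

```python
import numpy as np

def box_mean(img, w):
    """Local mean over a (2w+1)x(2w+1) window via an integral image,
    with edge padding at the borders."""
    pad = np.pad(img, w, mode='edge').astype(float)
    S = pad.cumsum(0).cumsum(1)
    S = np.pad(S, ((1, 0), (1, 0)))      # leading zero row/col for clean sums
    n = 2 * w + 1
    H, W = img.shape
    return (S[n:n + H, n:n + W] - S[:H, n:n + W]
            - S[n:n + H, :W] + S[:H, :W]) / n**2

def local_std(img, w):
    """Local standard deviation from the first two local moments."""
    m = box_mean(img, w)
    var = np.clip(box_mean(img * img, w) - m * m, 0.0, None)
    return np.sqrt(var)

# Pyramid of local statistics at multiple spatial scales; inter-scale
# comparison exposes where the noise level varies across the image.
rng = np.random.default_rng(2)
noise_sigma = np.where(np.arange(64) < 32, 0.1, 0.5)   # varies by column
img = rng.standard_normal((64, 64)) * noise_sigma
stds = {w: local_std(img, w) for w in (2, 4, 8)}
left = stds[8][:, 8:24].mean()    # low-noise half  -> ~0.1
right = stds[8][:, 40:56].mean()  # high-noise half -> ~0.5
```

    A filter parametrized by such local noise estimates (e.g., a smoothing strength proportional to the local standard deviation) becomes spatially adaptive in the sense described above.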

  6. Integrating multi-scale data to create a virtual physiological mouse heart.

    PubMed

    Land, Sander; Niederer, Steven A; Louch, William E; Sejersted, Ole M; Smith, Nicolas P

    2013-04-06

    While the virtual physiological human (VPH) project has made great advances in human modelling, many of the tools and insights developed as part of this initiative are also applicable for facilitating mechanistic understanding of the physiology of a range of other species. This process, in turn, has the potential to provide human relevant insights via a different scientific path. Specifically, the increasing use of mice in experimental research, not yet fully complemented by a similar increase in computational modelling, is currently missing an important opportunity for using and interpreting this growing body of experimental data to improve our understanding of cardiac function. This overview describes our work to address this issue by creating a virtual physiological mouse model of the heart. We describe the similarities between human- and mouse-focused modelling, including the reuse of VPH tools, and the development of methods for investigating parameter sensitivity that are applicable across species. We show how previous results using this approach have already provided important biological insights, and how these can also be used to advance VPH heart models. Finally, we show an example application of this approach to test competing multi-scale hypotheses by investigating variations in length-dependent properties of cardiac muscle.

  7. Integrating multi-scale data to create a virtual physiological mouse heart

    PubMed Central

    Land, Sander; Niederer, Steven A.; Louch, William E.; Sejersted, Ole M.; Smith, Nicolas P.

    2013-01-01

    While the virtual physiological human (VPH) project has made great advances in human modelling, many of the tools and insights developed as part of this initiative are also applicable for facilitating mechanistic understanding of the physiology of a range of other species. This process, in turn, has the potential to provide human relevant insights via a different scientific path. Specifically, the increasing use of mice in experimental research, not yet fully complemented by a similar increase in computational modelling, is currently missing an important opportunity for using and interpreting this growing body of experimental data to improve our understanding of cardiac function. This overview describes our work to address this issue by creating a virtual physiological mouse model of the heart. We describe the similarities between human- and mouse-focused modelling, including the reuse of VPH tools, and the development of methods for investigating parameter sensitivity that are applicable across species. We show how previous results using this approach have already provided important biological insights, and how these can also be used to advance VPH heart models. Finally, we show an example application of this approach to test competing multi-scale hypotheses by investigating variations in length-dependent properties of cardiac muscle. PMID:24427525

  8. Multi-scale genetic dynamic modelling I : an algorithm to compute generators.

    PubMed

    Kirkilionis, Markus; Janus, Ulrich; Sbano, Luca

    2011-09-01

    We present a new approach, or framework, to model dynamic regulatory genetic activity. The framework uses a multi-scale analysis based upon generic assumptions on the relative time scales attached to the different transitions of molecular states defining the genetic system. At the micro-level such systems are regulated by the interaction of two kinds of molecular players: macro-molecules like DNA or polymerases, and smaller molecules acting as transcription factors. The proposed genetic model then represents the larger, less abundant molecules with a finite discrete state space, for example describing different conformations of these molecules. This is in contrast to the representation of the transcription factors, which are, as in classical reaction kinetics, represented by their particle number only. We illustrate the method by considering the genetic activity associated with certain configurations of interacting genes that are fundamental to modelling (synthetic) genetic clocks. A largely unknown question is how the different molecular details incorporated via this more realistic modelling approach lead to different macroscopic regulatory genetic models, whose dynamical behaviour may, in general, differ between model choices. The theory will be applied to a real synthetic clock in a second accompanying article (Kirkilionis et al., Theory Biosci, 2011).

  9. Multi-Scale Computational Modeling of Two-Phased Metal Using GMC Method

    NASA Technical Reports Server (NTRS)

    Moghaddam, Masoud Ghorbani; Achuthan, A.; Bednacyk, B. A.; Arnold, S. M.; Pineda, E. J.

    2014-01-01

    A multi-scale computational model for determining plastic behavior in two-phased CMSX-4 Ni-based superalloys is developed on a finite element analysis (FEA) framework employing a crystal plasticity constitutive model that can capture the microstructural-scale stress field. The generalized method of cells (GMC) micromechanics model is used for homogenizing the local field quantities. First, stand-alone GMC is validated by analyzing a repeating unit cell (RUC) as a two-phased sample with a 72.9% volume fraction of gamma'-precipitate in the gamma-matrix phase and comparing the results with those predicted by FEA models incorporating the same crystal plasticity constitutive model. The global stress-strain behavior and the local field quantity distributions predicted by GMC demonstrated good agreement with FEA. High computational savings, at the expense of some accuracy in the components of local tensor field quantities, were obtained with GMC. Finally, the capability of the developed multi-scale model linking FEA and GMC to solve real-life-sized structures is demonstrated by analyzing an engine disc component and determining the microstructural-scale details of the field quantities.
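
    GMC itself is too involved for a short sketch, but the basic act of homogenizing a two-phased repeating unit cell can be illustrated with the classical Voigt and Reuss estimates at the 72.9% precipitate volume fraction quoted above; the phase stiffness values below are hypothetical placeholders, not CMSX-4 properties, and the bounds are a stand-in for GMC's more accurate homogenization.

```python
def voigt_reuss(E1, E2, vf1):
    """Voigt (uniform-strain) and Reuss (uniform-stress) estimates of
    the effective Young's modulus of a two-phase material."""
    vf2 = 1.0 - vf1
    E_voigt = vf1 * E1 + vf2 * E2          # upper bound
    E_reuss = 1.0 / (vf1 / E1 + vf2 / E2)  # lower bound
    return E_voigt, E_reuss

# Illustrative (hypothetical) phase stiffnesses in GPa, with 72.9%
# precipitate volume fraction as in the RUC above.
Ev, Er = voigt_reuss(130.0, 110.0, 0.729)
```

    Any consistent homogenization, GMC included, must return an effective stiffness between these two bounds, which makes them a cheap sanity check on the multi-scale model.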

  10. Multiscale Multilevel Approach to Solution of Nanotechnology Problems

    NASA Astrophysics Data System (ADS)

    Polyakov, Sergey; Podryga, Viktoriia

    2018-02-01

    The paper is devoted to a multiscale multilevel approach for the solution of nanotechnology problems on supercomputer systems. The approach combines continuum mechanics models with Newtonian dynamics for individual particles. This combination includes three scale levels: macroscopic, mesoscopic and microscopic. For gas-metal technical systems the following models are used. The quasihydrodynamic system of equations is used as the mathematical model at the macrolevel for the gas and solid states. The system of Newton equations is used as the mathematical model at the meso- and microlevels; it is written for nanoparticles of the medium and larger particles moving in the medium. The numerical implementation of the approach is based on the method of splitting into physical processes. The quasihydrodynamic equations are solved by the finite volume method on grids of different types. The Newton equations of motion are solved by Verlet integration in each cell of the grid, independently or in groups of connected cells. In the framework of the general methodology, four classes of algorithms and methods for their parallelization are provided. The parallelization uses the principles of geometric parallelism and efficient partitioning of the computational domain. A special dynamic algorithm is used for load balancing among the solvers. The developed approach was tested on the example of nitrogen outflow from a high-pressure balloon into a vacuum chamber through a micronozzle and a microchannel. The obtained results confirm the high efficiency of the developed methodology.
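
    The Verlet integration of the Newton equations can be sketched in its standard velocity Verlet form (a generic textbook scheme, not the authors' solver), here verified on a harmonic oscillator:

```python
import numpy as np

def velocity_verlet(x, v, force, mass, dt, steps):
    """Velocity Verlet integration of Newton's equations of motion,
    as would be applied to the particles inside each grid cell."""
    a = force(x) / mass
    for _ in range(steps):
        x = x + v * dt + 0.5 * a * dt**2       # position update
        a_new = force(x) / mass                # force at new positions
        v = v + 0.5 * (a + a_new) * dt         # velocity update
        a = a_new
    return x, v

# Harmonic oscillator test (k = m = 1): x(t) = cos(t) for x0=1, v0=0.
x, v = velocity_verlet(np.array([1.0]), np.array([0.0]),
                       lambda x: -x, 1.0, 1e-3, 1000)
```

    The scheme is time-reversible and symplectic, which is why it is a common choice for the particle sub-steps of a physics-splitting method like the one described above.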

  11. Hybrid stochastic simplifications for multiscale gene networks.

    PubMed

    Crudu, Alina; Debussche, Arnaud; Radulescu, Ovidiu

    2009-09-07

    Stochastic simulation of gene networks by Markov processes has important applications in molecular biology. The complexity of exact simulation algorithms scales with the number of discrete jumps to be performed. Approximate schemes reduce the computational time by reducing the number of simulated discrete events. Also, answering important questions about the relation between network topology and intrinsic noise generation and propagation should be based on general mathematical results, which are difficult to obtain for exact models. We propose a unified framework for hybrid simplifications of Markov models of multiscale stochastic gene network dynamics. We discuss several possible hybrid simplifications, and provide algorithms to obtain them from pure jump processes. In hybrid simplifications, some components are discrete and evolve by jumps, while other components are continuous. Hybrid simplifications are obtained by partial Kramers-Moyal expansion [1-3], which is equivalent to the application of the central limit theorem to a sub-model. By averaging and variable aggregation we drastically reduce simulation time and eliminate non-critical reactions. Hybrid and averaged simplifications can be used for more effective simulation algorithms and for obtaining general design principles relating noise to topology and time scales. The simplified models reproduce with good accuracy the stochastic properties of the gene networks, including waiting times in intermittence phenomena, fluctuation amplitudes and stationary distributions. The methods are illustrated on several gene network examples. Hybrid simplifications can be used for onion-like (multi-layered) approaches to multi-scale biochemical systems, in which various descriptions are used at various scales. Sets of discrete and continuous variables are treated with different methods and are coupled together in a physically justified approach.
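
    A minimal sketch of a hybrid simplification, for a hypothetical one-gene system rather than the networks studied in the paper: the rare gene state keeps its discrete jump dynamics, while the abundant protein is approximated by a chemical Langevin (diffusion) component, mirroring the partial Kramers-Moyal idea. All rates and names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

def hybrid_gene(T, dt, kon=0.5, koff=0.5, ksyn=50.0, kdeg=1.0):
    """Hybrid simulation: the gene state (rare, discrete) evolves by
    exponential-waiting-time jumps, while the abundant protein follows
    a chemical Langevin (diffusion) approximation."""
    g, p = 1, 0.0
    t, t_next = 0.0, rng.exponential(1.0 / koff)
    traj = []
    while t < T:
        if t >= t_next:                    # discrete jump of the gene
            g = 1 - g
            rate = koff if g else kon
            t_next = t + rng.exponential(1.0 / rate)
        prod, deg = ksyn * g, kdeg * p
        drift = prod - deg
        noise = np.sqrt(max(prod + deg, 0.0) * dt) * rng.standard_normal()
        p = max(p + drift * dt + noise, 0.0)
        traj.append(p)
        t += dt
    return np.array(traj)

traj = hybrid_gene(T=2000.0, dt=0.01)
mean_p = traj[5000:].mean()   # stationary mean ~ ksyn * P(on) / kdeg = 25
```

    Replacing the abundant species' jumps by a diffusion term is exactly the kind of event-count reduction that makes hybrid simplifications cheaper than exact jump simulation.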

  12. Network neuroscience

    PubMed Central

    Bassett, Danielle S; Sporns, Olaf

    2017-01-01

    Despite substantial recent progress, our understanding of the principles and mechanisms underlying complex brain function and cognition remains incomplete. Network neuroscience proposes to tackle these enduring challenges. Approaching brain structure and function from an explicitly integrative perspective, network neuroscience pursues new ways to map, record, analyze and model the elements and interactions of neurobiological systems. Two parallel trends drive the approach: the availability of new empirical tools to create comprehensive maps and record dynamic patterns among molecules, neurons, brain areas and social systems; and the theoretical framework and computational tools of modern network science. The convergence of empirical and computational advances opens new frontiers of scientific inquiry, including network dynamics, manipulation and control of brain networks, and integration of network processes across spatiotemporal domains. We review emerging trends in network neuroscience and attempt to chart a path toward a better understanding of the brain as a multiscale networked system. PMID:28230844

  13. From photons to big-data applications: terminating terabits

    PubMed Central

    2016-01-01

    Computer architectures have entered a watershed as the quantity of network data generated by user applications exceeds the data-processing capacity of any individual computer end-system. It will become impossible to scale existing computer systems while a gap grows between the quantity of networked data and the capacity for per system data processing. Despite this, the growth in demand in both task variety and task complexity continues unabated. Networked computer systems provide a fertile environment in which new applications develop. As networked computer systems become akin to infrastructure, any limitation upon the growth in capacity and capabilities becomes an important constraint of concern to all computer users. Considering a networked computer system capable of processing terabits per second, as a benchmark for scalability, we critique the state of the art in commodity computing, and propose a wholesale reconsideration in the design of computer architectures and their attendant ecosystem. Our proposal seeks to reduce costs, save power and increase performance in a multi-scale approach that has potential application from nanoscale to data-centre-scale computers. PMID:26809573

  14. From photons to big-data applications: terminating terabits.

    PubMed

    Zilberman, Noa; Moore, Andrew W; Crowcroft, Jon A

    2016-03-06

    Computer architectures have entered a watershed as the quantity of network data generated by user applications exceeds the data-processing capacity of any individual computer end-system. It will become impossible to scale existing computer systems while a gap grows between the quantity of networked data and the capacity for per system data processing. Despite this, the growth in demand in both task variety and task complexity continues unabated. Networked computer systems provide a fertile environment in which new applications develop. As networked computer systems become akin to infrastructure, any limitation upon the growth in capacity and capabilities becomes an important constraint of concern to all computer users. Considering a networked computer system capable of processing terabits per second, as a benchmark for scalability, we critique the state of the art in commodity computing, and propose a wholesale reconsideration in the design of computer architectures and their attendant ecosystem. Our proposal seeks to reduce costs, save power and increase performance in a multi-scale approach that has potential application from nanoscale to data-centre-scale computers. © 2016 The Authors.

  15. Multiscale Simulation Framework for Coupled Fluid Flow and Mechanical Deformation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hou, Thomas; Efendiev, Yalchin; Tchelepi, Hamdi

    2016-05-24

    Our work in this project is aimed at making fundamental advances in multiscale methods for flow and transport in highly heterogeneous porous media. The main thrust of this research is to develop a systematic multiscale analysis and efficient coarse-scale models that can capture global effects and extend existing multiscale approaches to problems with additional physics and uncertainties. A key emphasis is on problems without an apparent scale separation. Multiscale solution methods are currently under active investigation for the simulation of subsurface flow in heterogeneous formations. These procedures capture the effects of fine-scale permeability variations through the calculation of specialized coarse-scale basis functions. Most of the multiscale techniques presented to date employ localization approximations in the calculation of these basis functions. For some highly correlated (e.g., channelized) formations, however, global effects are important and these may need to be incorporated into the multiscale basis functions. Other challenging issues facing multiscale simulations are the extension of existing multiscale techniques to problems with additional physics, such as compressibility, capillary effects, etc. In our project, we explore the improvement of multiscale methods through the incorporation of additional (single-phase flow) information and the development of a general multiscale framework for flows in the presence of uncertainties, compressible flow and heterogeneous transport, and geomechanics. We have considered (1) adaptive local-global multiscale methods, (2) multiscale methods for the transport equation, (3) operator-based multiscale methods and solvers, (4) multiscale methods in the presence of uncertainties and applications, (5) multiscale finite element methods for high contrast porous media and their generalizations, and (6) multiscale methods for geomechanics.

  16. Software Integration in Multi-scale Simulations: the PUPIL System

    NASA Astrophysics Data System (ADS)

    Torras, J.; Deumens, E.; Trickey, S. B.

    2006-10-01

    The state of the art for computational tools in both computational chemistry and computational materials physics includes many algorithms and functionalities which are implemented again and again. Several projects aim to reduce, eliminate, or avoid this problem. Most such efforts seem to be focused within a particular specialty, either quantum chemistry or materials physics. Multi-scale simulations, by their very nature however, cannot respect that specialization. In simulation of fracture, for example, the energy gradients that drive the molecular dynamics (MD) come from a quantum mechanical treatment that most often derives from quantum chemistry. That “QM” region is linked to a surrounding “CM” region in which potentials yield the forces. The approach therefore requires the integration or at least inter-operation of quantum chemistry and materials physics algorithms. The same problem occurs in “QM/MM” simulations in computational biology. The challenge grows if pattern recognition or other analysis codes of some kind must be used as well. The most common mode of inter-operation is user intervention: codes are modified as needed and data files are managed “by hand” by the user (interactively and via shell scripts). User intervention is however inefficient by nature, difficult to transfer to the community, and prone to error. Some progress (e.g. Sethna’s work at Cornell [C.R. Myers et al., Mat. Res. Soc. Symp. Proc., 538(1999) 509, C.-S. Chen et al., Poster presented at the Material Research Society Meeting (2000)]) has been made on using Python scripts to achieve a more efficient level of interoperation. In this communication we present an alternative approach to merging current working packages without the necessity of major recoding and with only a relatively light wrapper interface.
    The scheme supports communication among the different components required for a given multi-scale calculation and access to the functionalities of those components for the potential user. A general main program allows the management of every package with a special communication protocol between their interfaces following the directives introduced by the user, which are stored in an XML structured file. The initial prototype of the PUPIL (Program for User Packages Interfacing and Linking) system has been done using Java as a fast, easy prototyping object oriented (OO) language. In order to test it, we have applied this prototype to a previously studied problem, the fracture of a silica nanorod. We did so by joining two different packages to do a QM/MD calculation. The results show the potential for this software system to do different kinds of simulations and its simplicity of maintenance.
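
    An XML directives file of the kind described above can be handled with a few lines of standard XML tooling. The tag and attribute names below are invented for illustration, since the abstract does not reproduce PUPIL's actual schema (and PUPIL itself is written in Java, so this Python sketch only mirrors the idea):

```python
import xml.etree.ElementTree as ET

# Hypothetical directives file: element and attribute names are
# illustrative, not PUPIL's real schema.
directives = """
<simulation name="silica_nanorod_fracture">
  <package role="QM" id="quantum_code"/>
  <package role="MD" id="dynamics_code"/>
  <link from="QM" to="MD" data="forces"/>
</simulation>
"""

root = ET.fromstring(directives)
packages = {p.get("role"): p.get("id") for p in root.findall("package")}
links = [(l.get("from"), l.get("to"), l.get("data")) for l in root.findall("link")]
print(packages, links)
```

    A main program driven by such a file knows which wrapped package plays each role and which data must flow between their interfaces at every step.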

  17. Multiscale Embedded Gene Co-expression Network Analysis

    PubMed Central

    Song, Won-Min; Zhang, Bin

    2015-01-01

    Gene co-expression network analysis has been shown effective in identifying functional co-expressed gene modules associated with complex human diseases. However, existing techniques to construct co-expression networks require some critical prior information such as predefined number of clusters, numerical thresholds for defining co-expression/interaction, or do not naturally reproduce the hallmarks of complex systems such as the scale-free degree distribution or small-worldness. Previously, a graph filtering technique called Planar Maximally Filtered Graph (PMFG) has been applied to many real-world data sets such as financial stock prices and gene expression to extract meaningful and relevant interactions. However, PMFG is not suitable for large-scale genomic data due to several drawbacks, such as the high computation complexity O(|V|³), the presence of false-positives due to the maximal planarity constraint, and the inadequacy of the clustering framework. Here, we developed a new co-expression network analysis framework called Multiscale Embedded Gene Co-expression Network Analysis (MEGENA) by: i) introducing quality control of co-expression similarities, ii) parallelizing embedded network construction, and iii) developing a novel clustering technique to identify multi-scale clustering structures in Planar Filtered Networks (PFNs). We applied MEGENA to a series of simulated data and the gene expression data in breast carcinoma and lung adenocarcinoma from The Cancer Genome Atlas (TCGA). MEGENA showed improved performance over well-established clustering methods and co-expression network construction approaches. MEGENA revealed not only meaningful multi-scale organizations of co-expressed gene clusters but also novel targets in breast carcinoma and lung adenocarcinoma. PMID:26618778

  18. Data-Driven Hierarchical Structure Kernel for Multiscale Part-Based Object Recognition

    PubMed Central

    Wang, Botao; Xiong, Hongkai; Jiang, Xiaoqian; Zheng, Yuan F.

    2017-01-01

    Detecting generic object categories in images and videos is a fundamental issue in computer vision. However, it faces the challenges of inter- and intraclass diversity, as well as distortions caused by viewpoints, poses, deformations, and so on. To address object variations, this paper constructs a structure kernel and proposes a multiscale part-based model incorporating the discriminative power of kernels. The structure kernel measures the resemblance of part-based objects in three aspects: 1) the global similarity term to measure the resemblance of the global visual appearance of relevant objects; 2) the part similarity term to measure the resemblance of the visual appearance of distinctive parts; and 3) the spatial similarity term to measure the resemblance of the spatial layout of parts. In essence, the deformation of parts in the structure kernel is penalized in a multiscale space with respect to horizontal displacement, vertical displacement, and scale difference. Part similarities are combined with different weights, which are optimized efficiently to maximize the intraclass similarities and minimize the interclass similarities by the normalized stochastic gradient ascent algorithm. In addition, the parameters of the structure kernel are learned during the training process with regard to the distribution of the data in a more discriminative way. With flexible part sizes on scale and displacement, it can be more robust to the intraclass variations, poses, and viewpoints. Theoretical analysis and experimental evaluations demonstrate that the proposed multiscale part-based representation model with structure kernel exhibits accurate and robust performance, and outperforms state-of-the-art object classification approaches. PMID:24808345
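
    The three similarity terms can be sketched as a toy kernel on part-based object descriptions. The feature vectors, weights, and the Gaussian spatial penalty below are illustrative simplifications, not the paper's learned formulation:

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# obj = {"global": feature vector, "parts": [(feature vector, (x, y, scale)), ...]}
def structure_kernel(obj_a, obj_b, w_global=1.0, w_part=1.0, w_spatial=1.0):
    k = w_global * dot(obj_a["global"], obj_b["global"])        # global appearance
    for (fa, (xa, ya, sa)), (fb, (xb, yb, sb)) in zip(obj_a["parts"], obj_b["parts"]):
        k += w_part * dot(fa, fb)                               # part appearance
        # Gaussian penalty on horizontal/vertical displacement and scale difference
        k += w_spatial * math.exp(-((xa - xb) ** 2 + (ya - yb) ** 2 + (sa - sb) ** 2))
    return k

a = {"global": [1.0, 0.0], "parts": [([1.0, 0.0], (0.0, 0.0, 1.0))]}
b = {"global": [1.0, 0.0], "parts": [([1.0, 0.0], (2.0, 0.0, 1.0))]}
print(structure_kernel(a, a), structure_kernel(a, b))
```

    Two objects with identical appearance but displaced parts keep their appearance terms while losing spatial similarity, which is the deformation penalty the abstract describes.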

  19. Multi-tissue and multi-scale approach for nuclei segmentation in H&E stained images.

    PubMed

    Salvi, Massimo; Molinari, Filippo

    2018-06-20

    Accurate nuclei detection and segmentation in histological images is essential for many clinical purposes. While manual annotations are time-consuming and operator-dependent, fully automated segmentation remains a challenging task due to the high variability of cell intensity, size and morphology. Most of the proposed algorithms for the automated segmentation of nuclei were designed for specific organs or tissues. The aim of this study was to develop and validate a fully automated multiscale method, named MANA (Multiscale Adaptive Nuclei Analysis), for nuclei segmentation in different tissues and magnifications. MANA was tested on a dataset of H&E stained tissue images with more than 59,000 annotated nuclei, taken from six organs (colon, liver, bone, prostate, adrenal gland and thyroid) and three magnifications (10×, 20×, 40×). Automatic results were compared with manual segmentations and with three open-source software tools designed for nuclei detection. For each organ, MANA always obtained an F1-score higher than 0.91, with an average F1 of 0.9305 ± 0.0161. The average computational time was about 20 s, independently of the number of nuclei to be detected (always higher than 1000), indicating the efficiency of the proposed technique. To the best of our knowledge, MANA is the first fully automated multi-scale and multi-tissue algorithm for nuclei detection. Overall, the robustness and versatility of MANA allowed it to achieve, on different organs and magnifications, performances in line with or better than those of state-of-the-art algorithms optimized for single tissues.
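
    The reported F1-scores follow the usual detection bookkeeping; the sketch below uses matched nucleus IDs as a stand-in for the spatial matching an actual evaluation against manual annotations would perform:

```python
# Generic detection scoring: precision, recall and F1. Matching by shared
# nucleus IDs is a simplification of spatial (overlap-based) matching.
def f1_score(detected, annotated):
    tp = len(detected & annotated)
    fp = len(detected - annotated)
    fn = len(annotated - detected)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

auto = {1, 2, 3, 4, 5}    # nuclei found by the algorithm
manual = {2, 3, 4, 5, 6}  # ground-truth nuclei
print(f1_score(auto, manual))  # 4 TP, 1 FP, 1 FN -> 0.8
```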

  20. Multiscale Embedded Gene Co-expression Network Analysis.

    PubMed

    Song, Won-Min; Zhang, Bin

    2015-11-01

    Gene co-expression network analysis has been shown effective in identifying functional co-expressed gene modules associated with complex human diseases. However, existing techniques to construct co-expression networks require some critical prior information such as predefined number of clusters, numerical thresholds for defining co-expression/interaction, or do not naturally reproduce the hallmarks of complex systems such as the scale-free degree distribution or small-worldness. Previously, a graph filtering technique called Planar Maximally Filtered Graph (PMFG) has been applied to many real-world data sets such as financial stock prices and gene expression to extract meaningful and relevant interactions. However, PMFG is not suitable for large-scale genomic data due to several drawbacks, such as the high computation complexity O(|V|³), the presence of false-positives due to the maximal planarity constraint, and the inadequacy of the clustering framework. Here, we developed a new co-expression network analysis framework called Multiscale Embedded Gene Co-expression Network Analysis (MEGENA) by: i) introducing quality control of co-expression similarities, ii) parallelizing embedded network construction, and iii) developing a novel clustering technique to identify multi-scale clustering structures in Planar Filtered Networks (PFNs). We applied MEGENA to a series of simulated data and the gene expression data in breast carcinoma and lung adenocarcinoma from The Cancer Genome Atlas (TCGA). MEGENA showed improved performance over well-established clustering methods and co-expression network construction approaches. MEGENA revealed not only meaningful multi-scale organizations of co-expressed gene clusters but also novel targets in breast carcinoma and lung adenocarcinoma.

  1. Identity in agent-based models : modeling dynamic multiscale social processes.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ozik, J.; Sallach, D. L.; Macal, C. M.

    Identity-related issues play central roles in many current events, including those involving factional politics, sectarianism, and tribal conflicts. Two popular models from the computational-social-science (CSS) literature - the Threat Anticipation Program and SharedID models - incorporate notions of identity (individual and collective) and processes of identity formation. A multiscale conceptual framework that extends some ideas presented in these models and draws other capabilities from the broader CSS literature is useful in modeling the formation of political identities. The dynamic, multiscale processes that constitute and transform social identities can be mapped to expressive structures of the framework.

  2. An Overview of the State of the Art in Atomistic and Multiscale Simulation of Fracture

    NASA Technical Reports Server (NTRS)

    Saether, Erik; Yamakov, Vesselin; Phillips, Dawn R.; Glaessgen, Edward H.

    2009-01-01

    The emerging field of nanomechanics is providing a new focus in the study of the mechanics of materials, particularly in simulating fundamental atomic mechanisms involved in the initiation and evolution of damage. Simulating fundamental material processes using first principles in physics strongly motivates the formulation of computational multiscale methods to link macroscopic failure to the underlying atomic processes from which all material behavior originates. This report gives an overview of the state of the art in applying concurrent and sequential multiscale methods to analyze damage and failure mechanisms across length scales.

  3. Filter-based multiscale entropy analysis of complex physiological time series.

    PubMed

    Xu, Yuesheng; Zhao, Liang

    2013-08-01

    Multiscale entropy (MSE) has been widely and successfully used in analyzing the complexity of physiological time series. We reinterpret the averaging process in MSE as filtering a time series by a filter of a piecewise constant type. From this viewpoint, we introduce filter-based multiscale entropy (FME), which filters a time series to generate multiple frequency components, and then we compute the blockwise entropy of the resulting components. By choosing filters adapted to the feature of a given time series, FME is able to better capture its multiscale information and to provide more flexibility for studying its complexity. Motivated by the heart rate turbulence theory, which suggests that the human heartbeat interval time series can be described in piecewise linear patterns, we propose piecewise linear filter multiscale entropy (PLFME) for the complexity analysis of the time series. Numerical results from PLFME are more robust to data of various lengths than those from MSE. The numerical performance of the adaptive piecewise constant filter multiscale entropy without prior information is comparable to that of PLFME, whose design takes prior information into account.
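
    For reference, standard MSE consists of exactly the piecewise-constant filtering step reinterpreted above (non-overlapping averaging at scale τ) followed by sample entropy; a minimal sketch, with illustrative parameters m = 2 and r = 0.2:

```python
import math

def coarse_grain(x, tau):
    # Non-overlapping averages of length tau: the piecewise-constant filter
    n = len(x) // tau
    return [sum(x[i * tau:(i + 1) * tau]) / tau for i in range(n)]

def sample_entropy(x, m=2, r=0.2):
    # Negative log of the conditional probability that sequences matching
    # for m points (within tolerance r) also match for m + 1 points
    def matches(mm):
        t = [tuple(x[i:i + mm]) for i in range(len(x) - mm + 1)]
        return sum(1 for i in range(len(t)) for j in range(i + 1, len(t))
                   if max(abs(a - b) for a, b in zip(t[i], t[j])) <= r)
    b, a = matches(m), matches(m + 1)
    return -math.log(a / b) if a > 0 and b > 0 else float("inf")

x = [(-1.0) ** i for i in range(64)]  # a fast, perfectly regular oscillation
print([sample_entropy(coarse_grain(x, tau)) for tau in (1, 2)])
```

    Replacing `coarse_grain` with a different filter bank is precisely the generalization from MSE to FME that the paper proposes.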

  4. Multiscale Pressure-Balanced Structures in Three-dimensional Magnetohydrodynamic Turbulence

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Liping; Zhang, Lei; Feng, Xueshang

    2017-02-10

    Observations of solar wind turbulence indicate the existence of multiscale pressure-balanced structures (PBSs) in the solar wind. In this work, we conduct a numerical simulation to investigate multiscale PBSs and in particular their formation in compressive magnetohydrodynamic turbulence. By the use of the higher-order Godunov code Athena, a driven compressible turbulence with an imposed uniform guide field is simulated. The simulation results show that both the magnetic pressure and the thermal pressure exhibit a turbulent spectrum with a Kolmogorov-like power law, and that in many regions of the simulation domain they are anticorrelated. The computed wavelet cross-coherence spectra of the magnetic pressure and the thermal pressure, as well as their space series, indicate the existence of multiscale PBSs, with the small PBSs being embedded in the large ones. These multiscale PBSs are likely to be related to the highly oblique-propagating slow-mode waves, as the traced multiscale PBS is found to be traveling in a certain direction at a speed consistent with that predicted theoretically for a slow-mode wave propagating in the same direction.

  5. Fast Decentralized Averaging via Multi-scale Gossip

    NASA Astrophysics Data System (ADS)

    Tsianos, Konstantinos I.; Rabbat, Michael G.

    We are interested in the problem of computing the average consensus in a distributed fashion on random geometric graphs. We describe a new algorithm called Multi-scale Gossip which employs a hierarchical decomposition of the graph to partition the computation into tractable sub-problems. Using only pairwise messages of fixed size that travel at most O(n^{1/3}) hops, our algorithm is robust and has communication cost of O(n log log n log(1/ɛ)) transmissions, which is order-optimal up to the logarithmic factor in n. Simulated experiments verify the good expected performance on graphs of many thousands of nodes.
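
    The building block underneath Multi-scale Gossip is plain pairwise randomized gossip, sketched below on a toy ring graph (the hierarchical decomposition itself is omitted):

```python
import random

# Plain pairwise gossip: each step picks a random edge and both endpoints
# adopt their average. The global average is conserved, and all node
# values converge to it.
def gossip(values, edges, steps=20000, seed=0):
    v = list(values)
    rng = random.Random(seed)
    for _ in range(steps):
        i, j = rng.choice(edges)
        v[i] = v[j] = (v[i] + v[j]) / 2.0
    return v

vals = [0.0, 4.0, 8.0, 12.0]
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]  # a 4-node ring
print(gossip(vals, edges))  # every entry close to the mean, 6.0
```

    The hierarchical decomposition in the paper exists to cut the number of such exchanges: running gossip inside small subgraphs first, then between them, avoids the slow mixing of a flat random walk over the whole graph.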

  6. A fast random walk algorithm for computing the pulsed-gradient spin-echo signal in multiscale porous media.

    PubMed

    Grebenkov, Denis S

    2011-02-01

    A new method for computing the signal attenuation due to restricted diffusion in a linear magnetic field gradient is proposed. A fast random walk (FRW) algorithm for simulating random trajectories of diffusing spin-bearing particles is combined with gradient encoding. As random moves of a FRW are continuously adapted to local geometrical length scales, the method is efficient for simulating pulsed-gradient spin-echo experiments in hierarchical or multiscale porous media such as concrete, sandstones, sedimentary rocks and, potentially, brain or lungs. Copyright © 2010 Elsevier Inc. All rights reserved.
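
    The adaptive-step idea can be sketched in the simplest possible geometry, a single circular pore (a walk-on-spheres-style simplification; the actual method handles multiscale media and adds gradient encoding along the trajectory):

```python
import math
import random

# Sketch of a fast random walk in a circular pore of given radius: each
# jump goes to a uniformly random point on the largest circle centred at
# the walker that still fits inside the pore, so step sizes adapt to the
# local geometric length scale and shrink only near the boundary.
def frw_first_passage(radius=1.0, eps=1e-4, seed=0):
    rng = random.Random(seed)
    x = y = 0.0
    steps = 0
    while True:
        d = radius - math.hypot(x, y)  # distance to the pore wall
        if d < eps:                    # close enough: count as absorbed
            return steps
        theta = rng.uniform(0.0, 2.0 * math.pi)
        x += d * math.cos(theta)
        y += d * math.sin(theta)
        steps += 1

print(frw_first_passage())
```

    Far from walls the walker crosses the pore in a handful of jumps, which is why the cost grows only mildly as the boundary tolerance eps is tightened.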

  7. Modeling Primary Atomization of Liquid Fuels using a Multiphase DNS/LES Approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arienti, Marco; Oefelein, Joe; Doisneau, Francois

    2016-08-01

    As part of a Laboratory Directed Research and Development project, we are developing a modeling-and-simulation capability to study fuel direct injection in automotive engines. Predicting mixing and combustion at realistic conditions remains a challenging objective of energy science and a research priority in Sandia’s mission-critical area of energy security; it is also relevant to many flows in defense and climate applications. High-performance computing applied to this non-linear multi-scale problem is key to engine calculations with increased scientific reliability.

  8. Multiscale Modeling of Multiphase Fluid Flow

    DTIC Science & Technology

    2016-08-01

    the disparate time and length scales involved in modeling fluid flow and heat transfer. Molecular dynamics simulations were carried out to provide a...fluid dynamics methods were used to investigate the heat transfer process in open-cell micro-foam with phase change material; enhancement of natural...Computational fluid dynamics, Heat transfer, Phase change material in Micro-foam, Molecular Dynamics, Multiphase flow, Multiscale modeling, Natural

  9. Multiscale Mathematical Modeling in Dental Tissue Engineering: Toward Computer-Aided Design of a Regenerative System Based on Hydroxyapatite Granules, Focussing on Early and Mid-Term Stiffness Recovery

    PubMed Central

    Scheiner, Stefan; Komlev, Vladimir S.; Gurin, Alexey N.; Hellmich, Christian

    2016-01-01

    We here explore for the very first time how an advanced multiscale mathematical modeling approach may support the design of a provenly successful tissue engineering concept for mandibular bone. The latter employs double-porous, potentially cracked, single millimeter-sized granules packed into an overall conglomerate-type scaffold material, which is then gradually penetrated and partially replaced by newly grown bone tissue. During this process, the newly developing scaffold-bone compound needs to attain the stiffness of mandibular bone under normal physiological conditions. In this context, the question arises how the compound stiffness is driven by the key design parameters of the tissue engineering system: macroporosity, crack density, as well as scaffold resorption/bone formation rates. We here tackle this question by combining the latest state-of-the-art mathematical modeling techniques in the field of multiscale micromechanics, into an unprecedented suite of highly efficient, semi-analytically defined computation steps resolving several levels of hierarchical organization, from the millimeter- down to the nanometer-scale. This includes several types of homogenization schemes, namely such for porous polycrystals with elongated solid elements, for cracked matrix-inclusion composites, as well as for assemblies of coated spherical compounds. Together with the experimentally known stiffnesses of hydroxyapatite crystals and mandibular bone tissue, the new mathematical model suggests that early stiffness recovery (i.e., within several weeks) requires total avoidance of microcracks in the hydroxyapatite scaffolds, while mid-term stiffness recovery (i.e., within several months) is additionally promoted by provision of small granule sizes, in combination with high bone formation and low scaffold resorption rates. PMID:27708584
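
    As a crude illustration of stiffness homogenization across phases (far simpler than the polycrystal and matrix-inclusion schemes the model combines, and with placeholder moduli rather than the experimentally known values used in the paper), elementary Voigt and Reuss bounds already bracket the compound stiffness:

```python
# Elementary two-phase stiffness bounds. Moduli in GPa are illustrative
# placeholders, not the paper's data.
def voigt(e1, e2, f1):
    return f1 * e1 + (1.0 - f1) * e2          # parallel coupling, upper bound

def reuss(e1, e2, f1):
    return 1.0 / (f1 / e1 + (1.0 - f1) / e2)  # series coupling, lower bound

e_scaffold, e_soft, frac = 114.0, 0.5, 0.6    # stiff granules + soft tissue
print(reuss(e_scaffold, e_soft, frac), voigt(e_scaffold, e_soft, frac))
```

    Any acceptable homogenization scheme must land between these two bounds, which provides a quick sanity check on the far more detailed semi-analytical micromechanics suite.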

  10. CNNH_PSS: protein 8-class secondary structure prediction by convolutional neural network with highway.

    PubMed

    Zhou, Jiyun; Wang, Hongpeng; Zhao, Zhishan; Xu, Ruifeng; Lu, Qin

    2018-05-08

    Protein secondary structure is the three dimensional form of local segments of proteins and its prediction is an important problem in protein tertiary structure prediction. Developing computational approaches for protein secondary structure prediction is becoming increasingly urgent. We present a novel deep learning based model, referred to as CNNH_PSS, using a multi-scale CNN with highway. In CNNH_PSS, any two neighboring convolutional layers have a highway to deliver information from the current layer to the output of the next one to keep local contexts. As lower layers extract local context while higher layers extract long-range interdependencies, the highways between neighboring layers allow CNNH_PSS to extract both local contexts and long-range interdependencies. We evaluate CNNH_PSS on two commonly used datasets: CB6133 and CB513. CNNH_PSS outperforms the multi-scale CNN without highway by at least 0.010 Q8 accuracy and also performs better than CNF, DeepCNF and SSpro8, which cannot extract long-range interdependencies, by at least 0.020 Q8 accuracy, demonstrating that both local contexts and long-range interdependencies are indeed useful for prediction. Furthermore, CNNH_PSS also performs better than GSM and DCRNN, which need an extra complex model to extract long-range interdependencies. This demonstrates that CNNH_PSS not only costs less computing resources but also achieves better predictive performance. CNNH_PSS is able to extract both local contexts and long-range interdependencies by combining a multi-scale CNN with a highway network. The evaluations on common datasets and comparisons with state-of-the-art methods indicate that CNNH_PSS is a useful and efficient tool for protein secondary structure prediction.
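
    The highway connection can be sketched in scalar form (real CNNH_PSS layers gate whole feature maps; the weights here are illustrative): a transform gate T blends the layer output H(x) with the untouched input, y = T·H(x) + (1 - T)·x.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Scalar highway unit: the gate t decides how much of the transformed
# signal h passes versus how much of the raw input is carried through.
def highway(x, w_h, b_h, w_t, b_t):
    h = math.tanh(w_h * x + b_h)   # the layer's own transform
    t = sigmoid(w_t * x + b_t)     # transform gate in (0, 1)
    return t * h + (1.0 - t) * x

# A strongly negative gate bias makes the unit almost an identity map,
# which is how highways preserve local context from lower layers.
print(highway(0.7, w_h=1.0, b_h=0.0, w_t=0.0, b_t=-10.0))
```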

  11. 3D hierarchical geometric modeling and multiscale FE analysis as a base for individualized medical diagnosis of bone structure.

    PubMed

    Podshivalov, L; Fischer, A; Bar-Yoseph, P Z

    2011-04-01

    This paper describes a new alternative for individualized mechanical analysis of bone trabecular structure. This new method closes the gap between the classic homogenization approach that is applied to macro-scale models and the modern micro-finite element method that is applied directly to micro-scale high-resolution models. The method is based on multiresolution geometrical modeling that generates intermediate structural levels. A new method for estimating multiscale material properties has also been developed to facilitate reliable and efficient mechanical analysis. What makes this method unique is that it enables direct and interactive analysis of the model at every intermediate level. Such flexibility is of principal importance in the analysis of trabecular porous structure. The method enables physicians to zoom in dynamically and focus on the volume of interest (VOI), thus paving the way for a large class of investigations into the mechanical behavior of bone structure. This is one of the very few methods in the field of computational bio-mechanics that applies mechanical analysis adaptively on large-scale high-resolution models. The proposed computational multiscale FE method can serve as an infrastructure for a future comprehensive computerized system for diagnosis of bone structures. The aim of such a system is to assist physicians in diagnosis, prognosis, drug treatment simulation and monitoring. Such a system can provide a better understanding of the disease, and hence benefit patients by providing better and more individualized treatment and high-quality healthcare. In this paper, we demonstrate the feasibility of our method on a high-resolution model of vertebra L3. Copyright © 2010 Elsevier Inc. All rights reserved.

  12. Multi-scale Mexican spotted owl (Strix occidentalis lucida) nest/roost habitat selection in Arizona and a comparison with single-scale modeling results

    Treesearch

    Brad C. Timm; Kevin McGarigal; Samuel A. Cushman; Joseph L. Ganey

    2016-01-01

    Future habitat selection studies will benefit from taking a multi-scale approach. In addition to potentially providing increased explanatory power and predictive capacity, multi-scale habitat models enhance our understanding of the scales at which species respond to their environment, which is critical knowledge required to implement effective...

  13. Capturing remote mixing due to internal tides using multi-scale modeling tool: SOMAR-LES

    NASA Astrophysics Data System (ADS)

    Santilli, Edward; Chalamalla, Vamsi; Scotti, Alberto; Sarkar, Sutanu

    2016-11-01

    Internal tides that are generated during the interaction of an oscillating barotropic tide with the bottom bathymetry dissipate only a fraction of their energy near the generation region. The rest is radiated away in the form of low- and high-mode internal tides. These internal tides dissipate energy at remote locations when they interact with the upper ocean pycnocline, continental slope, and large scale eddies. Capturing the wide range of length and time scales involved during the life-cycle of internal tides is computationally very expensive. A recently developed multi-scale modeling tool called SOMAR-LES combines the adaptive grid refinement features of SOMAR with the turbulence modeling features of a Large Eddy Simulation (LES) to capture multi-scale processes at a reduced computational cost. Numerical simulations of internal tide generation at idealized bottom bathymetries are performed to demonstrate this multi-scale modeling technique. Although each of the remote mixing phenomena has been considered independently in previous studies, this work aims to capture remote mixing processes during the life cycle of an internal tide in more realistic settings, by allowing multi-level (coarse and fine) grids to co-exist and exchange information during the time stepping process.

  14. Multiscale Analysis of Time Irreversibility Based on Phase-Space Reconstruction and Horizontal Visibility Graph Approach

    NASA Astrophysics Data System (ADS)

    Zhang, Yongping; Shang, Pengjian; Xiong, Hui; Xia, Jianan

Time irreversibility is an important property of nonequilibrium dynamic systems. A visibility graph approach was recently proposed and is generally effective for measuring the time irreversibility of time series. However, its results may be unreliable when dealing with high-dimensional systems. In this work, we consider the joint concept of time irreversibility and adopt the phase-space reconstruction technique to improve this visibility graph approach. Compared with the previous approach, the improved approach gives a more accurate estimate of the irreversibility of time series and is more effective at distinguishing irreversible from reversible stochastic processes. We also use this approach to extract multiscale irreversibility to account for the multiple inherent dynamics of time series. Finally, we apply the approach to detect the multiscale irreversibility of financial time series, and succeed in distinguishing times of financial crisis from plateau periods. In addition, the separation of Asian stock indexes from the other indexes is clearly visible at higher time scales. Simulations and real data support the effectiveness of the improved approach for detecting time irreversibility.
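The directed horizontal visibility graph underlying this family of methods is easy to prototype. The sketch below is an illustrative reconstruction, not the authors' code: the degree cutoff and the two synthetic test signals are arbitrary choices. It builds the graph for a scalar series and scores irreversibility as the Kullback-Leibler divergence between out- and in-degree distributions:

```python
import math
import random

def dhvg_degrees(x):
    """Directed horizontal visibility graph: count, for every node, its
    forward-visible (out) and backward-visible (in) neighbours."""
    n = len(x)
    kout, kin = [0] * n, [0] * n
    for i in range(n):
        top = -math.inf                 # running max of the bars between i and j
        for j in range(i + 1, n):
            if x[j] > top:              # all bars in between are lower than both
                kout[i] += 1
                kin[j] += 1
            top = max(top, x[j])
            if top >= x[i]:             # the view from node i is now blocked
                break
    return kout, kin

def kld_irreversibility(x, kmax=30):
    """Kullback-Leibler divergence between out- and in-degree distributions;
    near zero for a statistically reversible series."""
    kout, kin = dhvg_degrees(x)
    eps = 1e-12
    p = [kout.count(k) / len(x) + eps for k in range(kmax)]
    q = [kin.count(k) / len(x) + eps for k in range(kmax)]
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

random.seed(1)
noise = [random.gauss(0.0, 1.0) for _ in range(2000)]                   # reversible
saw = [(i % 20) + 0.01 * random.gauss(0.0, 1.0) for i in range(2000)]   # not
d_noise = kld_irreversibility(noise)
d_saw = kld_irreversibility(saw)
```

The i.i.d. noise should score near zero, while the sawtooth, which looks different forward and backward, scores much higher.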

  15. Integrated multiscale biomaterials experiment and modelling: a perspective

    PubMed Central

    Buehler, Markus J.; Genin, Guy M.

    2016-01-01

    Advances in multiscale models and computational power have enabled a broad toolset to predict how molecules, cells, tissues and organs behave and develop. A key theme in biological systems is the emergence of macroscale behaviour from collective behaviours across a range of length and timescales, and a key element of these models is therefore hierarchical simulation. However, this predictive capacity has far outstripped our ability to validate predictions experimentally, particularly when multiple hierarchical levels are involved. The state of the art represents careful integration of multiscale experiment and modelling, and yields not only validation, but also insights into deformation and relaxation mechanisms across scales. We present here a sampling of key results that highlight both challenges and opportunities for integrated multiscale experiment and modelling in biological systems. PMID:28981126

  16. A Machine Learning Method for the Prediction of Receptor Activation in the Simulation of Synapses

    PubMed Central

    Montes, Jesus; Gomez, Elena; Merchán-Pérez, Angel; DeFelipe, Javier; Peña, Jose-Maria

    2013-01-01

Chemical synaptic transmission involves the release of a neurotransmitter that diffuses in the extracellular space and interacts with specific receptors located on the postsynaptic membrane. Computer simulation approaches provide fundamental tools for exploring various aspects of synaptic transmission under different conditions. In particular, Monte Carlo methods can track the stochastic movements of neurotransmitter molecules and their interactions with other discrete molecules, the receptors. However, these methods are computationally expensive, even when used with simplified models, preventing their use in large-scale and multi-scale simulations of complex neuronal systems that may involve large numbers of synaptic connections. We have developed a machine-learning-based method that can accurately predict relevant aspects of the behavior of synapses, such as the percentage of open synaptic receptors as a function of time since the release of the neurotransmitter, with considerably lower computational cost than the conventional Monte Carlo alternative. The method is designed to learn patterns and general principles from a corpus of previously generated Monte Carlo simulations of synapses covering a wide range of structural and functional characteristics. These patterns are later used as a predictive model of the behavior of synapses under different conditions without the need for additional computationally expensive Monte Carlo simulations. This is performed in five stages: data sampling, fold creation, machine learning, validation and curve fitting. The resulting procedure is accurate, automatic, and general enough to predict synapse behavior under experimental conditions different from the ones it has been trained on. Since our method efficiently reproduces the results that can be obtained with Monte Carlo simulations at a considerably lower computational cost, it is suitable for the simulation of high numbers of synapses and is therefore an excellent tool for multi-scale simulations. PMID:23894367
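The final curve-fitting stage can be illustrated with a minimal example. The single-exponential form, the helper name, and the synthetic data below are assumptions for illustration, not the paper's actual model or features:

```python
import math

def fit_exponential(ts, ys):
    """Least-squares fit of y = a * exp(-t / tau) using the log-linear
    form ln y = ln a - t / tau (valid while y > 0)."""
    logs = [math.log(y) for y in ys]
    n = len(ts)
    tbar, lbar = sum(ts) / n, sum(logs) / n
    slope = (sum((t - tbar) * (l - lbar) for t, l in zip(ts, logs))
             / sum((t - tbar) ** 2 for t in ts))
    return math.exp(lbar - slope * tbar), -1.0 / slope   # (amplitude, tau)

# stand-in for Monte Carlo output: fraction of open receptors vs. time (ms)
ts = [0.1 * i for i in range(1, 60)]
ys = [0.8 * math.exp(-t / 1.5) for t in ts]
amplitude, tau = fit_exponential(ts, ys)
```

On noiseless synthetic data the fit recovers the generating amplitude and time constant exactly; on real Monte Carlo output it would smooth sampling noise instead.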

  17. Simulating Cancer Growth with Multiscale Agent-Based Modeling

    PubMed Central

    Wang, Zhihui; Butner, Joseph D.; Kerketta, Romica; Cristini, Vittorio; Deisboeck, Thomas S.

    2014-01-01

There have been many techniques developed in recent years to model a variety of cancer behaviors in silico. Agent-based modeling is a specific discrete-based hybrid modeling approach that allows simulating the role of diversity in cell populations as well as within each individual cell; it has therefore become a powerful modeling method widely used by computational cancer researchers. Many aspects of tumor morphology, including phenotype-changing mutations, adaptation to the microenvironment, the process of angiogenesis, the influence of the extracellular matrix, reactions to chemotherapy or surgical intervention, the effects of oxygen and nutrient availability, and metastasis and invasion of healthy tissues, have been incorporated and investigated in agent-based models. In this review, we introduce some of the most recent agent-based models that have provided insight into the understanding of cancer growth and invasion, spanning multiple biological scales in time and space, and we further describe several experimentally testable hypotheses generated by those models. We also discuss some of the current challenges of multiscale agent-based cancer models. PMID:24793698

  18. Spectral sideband produced by a hemispherical concave multilayer on the African shield-bug Calidea panaethiopica (Scutelleridae)

    NASA Astrophysics Data System (ADS)

    Vigneron, Jean Pol; Ouedraogo, Moussa; Colomer, Jean-François; Rassart, Marie

    2009-02-01

The African shield-backed bug Calidea panaethiopica is a very colorful insect which produces a range of iridescent yellow, green, and blue reflections. The cuticle of the dorsal side of the insect, on the shield, the prothorax and part of the head, is pitted with uniformly distributed hemispherical hollow cavities a few tens of micrometers deep. Under normal illumination and viewing, the insect’s muffin-tin-shaped surface gives rise to two distinct colors: a yellow spot arising from the bottom of each well and a blue annular cloud that appears to float around the yellow spot. This effect is explained by multiple reflections on a hemispherical Bragg mirror with a mesoscopic curvature. A multiscale computing methodology was found to be needed to evaluate the reflection spectrum of such a curved multilayer. This multiscale approach is very general and should be useful for dealing with visual effects in many natural and artificial systems.
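The planar building block of such an analysis, the reflectance of a flat multilayer at normal incidence, can be computed with the standard characteristic-matrix (transfer-matrix) method; the curved, multiscale treatment in the paper goes beyond this sketch. The indices and thicknesses below are illustrative choices, not measurements from the insect:

```python
import math

def reflectance(layers, lam, n_in=1.0, n_sub=1.5):
    """Normal-incidence reflectance of a planar multilayer via the
    characteristic (transfer) matrix method. layers = [(index, thickness)]."""
    m11, m12, m21, m22 = 1.0, 0.0, 0.0, 1.0
    for n, d in layers:
        delta = 2.0 * math.pi * n * d / lam          # phase thickness of layer
        c, s = math.cos(delta), math.sin(delta)
        a11, a12, a21, a22 = c, 1j * s / n, 1j * n * s, c
        m11, m12, m21, m22 = (m11 * a11 + m12 * a21, m11 * a12 + m12 * a22,
                              m21 * a11 + m22 * a21, m21 * a12 + m22 * a22)
    b = m11 + m12 * n_sub
    c_ = m21 + m22 * n_sub
    r = (n_in * b - c_) / (n_in * b + c_)
    return abs(r) ** 2

# quarter-wave Bragg stack tuned to 580 nm (illustrative indices)
lam0, n_hi, n_lo = 580.0, 1.8, 1.4
stack = [(n_hi, lam0 / (4 * n_hi)), (n_lo, lam0 / (4 * n_lo))] * 10
R_peak = reflectance(stack, lam0)     # near-unity inside the stop band
R_off = reflectance(stack, 420.0)     # lower, oscillating outside it
```

A curved mirror would require evaluating this spectrum over varying local incidence geometry, which is where the multiscale treatment comes in.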

  19. Effect of solute atoms on dislocation motion in Mg: An electronic structure perspective

    PubMed Central

    Tsuru, T.; Chrzan, D. C.

    2015-01-01

Solution strengthening is a well-known approach to tailoring the mechanical properties of structural alloys. Ultimately, the properties of the dislocation/solute interaction are rooted in the electronic structure of the alloy. Accordingly, we compute the electronic structure associated with, and the energy barriers to, dislocation cross-slip. The energy barriers so obtained can be used in the development of multiscale models for dislocation-mediated plasticity. The computed electronic structure can be used to identify substitutional solutes likely to interact strongly with the dislocation. Using the example of a-type screw dislocations in Mg, we accurately compute the Peierls barrier to prismatic plane slip and argue that Y, Ca, Ti, and Zr should interact strongly with the studied dislocation, and thereby decrease the dislocation slip anisotropy in the alloy. PMID:25740411

  20. Multiphysics and multiscale modelling, data-model fusion and integration of organ physiology in the clinic: ventricular cardiac mechanics.

    PubMed

    Chabiniok, Radomir; Wang, Vicky Y; Hadjicharalambous, Myrianthi; Asner, Liya; Lee, Jack; Sermesant, Maxime; Kuhl, Ellen; Young, Alistair A; Moireau, Philippe; Nash, Martyn P; Chapelle, Dominique; Nordsletten, David A

    2016-04-06

    With heart and cardiovascular diseases continually challenging healthcare systems worldwide, translating basic research on cardiac (patho)physiology into clinical care is essential. Exacerbating this already extensive challenge is the complexity of the heart, relying on its hierarchical structure and function to maintain cardiovascular flow. Computational modelling has been proposed and actively pursued as a tool for accelerating research and translation. Allowing exploration of the relationships between physics, multiscale mechanisms and function, computational modelling provides a platform for improving our understanding of the heart. Further integration of experimental and clinical data through data assimilation and parameter estimation techniques is bringing computational models closer to use in routine clinical practice. This article reviews developments in computational cardiac modelling and how their integration with medical imaging data is providing new pathways for translational cardiac modelling.

  1. Multi-scale modelling of rubber-like materials and soft tissues: an appraisal

    PubMed Central

    Puglisi, G.

    2016-01-01

    We survey, in a partial way, multi-scale approaches for the modelling of rubber-like and soft tissues and compare them with classical macroscopic phenomenological models. Our aim is to show how it is possible to obtain practical mathematical models for the mechanical behaviour of these materials incorporating mesoscopic (network scale) information. Multi-scale approaches are crucial for the theoretical comprehension and prediction of the complex mechanical response of these materials. Moreover, such models are fundamental in the perspective of the design, through manipulation at the micro- and nano-scales, of new polymeric and bioinspired materials with exceptional macroscopic properties. PMID:27118927

  2. Adaptive multiresolution modeling of groundwater flow in heterogeneous porous media

    NASA Astrophysics Data System (ADS)

    Malenica, Luka; Gotovac, Hrvoje; Srzic, Veljko; Andric, Ivo

    2016-04-01

The proposed methodology was originally developed by our scientific team in Split, who designed a multiresolution approach for analyzing flow and transport processes in highly heterogeneous porous media. The main properties of the adaptive Fup multiresolution approach are: 1) the computational capabilities of Fup basis functions with compact support, able to resolve all spatial and temporal scales; 2) a multiresolution representation of heterogeneity as well as of all other input and output variables; 3) an accurate, adaptive and efficient solution strategy; and 4) semi-analytical properties which increase our understanding of the usually complex flow and transport processes in porous media. The main computational idea behind this approach is to separately find the minimum number of basis functions and resolution levels necessary to describe each flow and transport variable with the desired accuracy on a particular adaptive grid. Each variable is therefore analyzed separately, and the adaptive and multi-scale nature of the methodology enables not only computational efficiency and accuracy, but also a description of subsurface processes closely tied to their physical interpretation. The methodology inherently supports a mesh-free procedure, avoiding classical numerical integration, and yields continuous velocity and flux fields, which is vitally important for flow and transport simulations. In this paper, we show recent improvements within the proposed methodology. Since "state of the art" multiresolution approaches usually use the method of lines and only a spatially adaptive procedure, the temporal approximation has rarely been treated as multiscale. We therefore develop a novel adaptive implicit Fup integration scheme that resolves all time scales within each global time step: the algorithm uses smaller time steps only along lines where the solution changes intensively. Application of Fup basis functions enables continuous time approximation, simple interpolation across different temporal lines, and local time-stepping control. A critical aspect of time integration accuracy is the construction of the spatial stencil for accurate calculation of spatial derivatives. While the common approach for wavelets and splines uses a finite difference operator, we develop here a collocation operator that includes solution values and the differential operator. In this way, the new improved algorithm is adaptive in space and time, enabling accurate solutions of groundwater flow problems, especially in highly heterogeneous porous media with large lnK variances and different correlation length scales. In addition, differences between the collocation and finite volume approaches are discussed. Finally, results show application of the methodology to groundwater flow problems in highly heterogeneous confined and unconfined aquifers.
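The adaptive idea, retaining a fine-scale point only where its detail coefficient exceeds a tolerance, can be sketched with simple linear-interpolation details standing in for Fup basis functions. This is an illustrative toy, not the authors' scheme:

```python
import math

def adaptive_levels(values, max_level, tol):
    """Interpolatory multiresolution: keep a finer dyadic point only where
    it deviates from linear interpolation of its coarser neighbours by
    more than tol (a stand-in for a detail coefficient)."""
    n = len(values) - 1               # requires n == 2 ** max_level
    kept = {0, n}                     # the coarsest grid: the two endpoints
    for level in range(1, max_level + 1):
        step = n // 2 ** level
        for i in range(step, n, 2 * step):          # new midpoints at this level
            detail = values[i] - 0.5 * (values[i - step] + values[i + step])
            if abs(detail) > tol:
                kept.add(i)
    return sorted(kept)

n = 256
xs = [i / n for i in range(n + 1)]
# smooth background plus a sharp front near x = 0.7
vals = [math.tanh(80.0 * (x - 0.7)) + 0.1 * x for x in xs]
grid = adaptive_levels(vals, 8, 1e-3)
```

The resulting grid clusters points around the sharp front and keeps almost none in the smooth regions, which is the compression the abstract's adaptive strategy exploits in both space and time.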

  3. Year 2 Report: Protein Function Prediction Platform

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhou, C E

    2012-04-27

Upon completion of our second year of development in a 3-year development cycle, we have completed a prototype protein structure-function annotation and function prediction system: Protein Function Prediction (PFP) platform (v.0.5). We have met our milestones for Years 1 and 2 and are positioned to continue development in completion of our original statement of work, or a reasonable modification thereof, in service to DTRA Programs involved in diagnostics and medical countermeasures research and development. The PFP platform is a multi-scale computational modeling system for protein structure-function annotation and function prediction. As of this writing, PFP is the only existing fully automated, high-throughput, multi-scale modeling, whole-proteome annotation platform, and represents a significant advance in the field of genome annotation (Fig. 1). PFP modules perform protein functional annotations at the sequence, systems biology, protein structure, and atomistic levels of biological complexity (Fig. 2). Because these approaches provide orthogonal means of characterizing proteins and suggesting protein function, PFP processing maximizes the protein functional information that can currently be gained by computational means. Comprehensive annotation of pathogen genomes is essential for bio-defense applications in pathogen characterization, threat assessment, and medical countermeasure design and development in that it can short-cut the time and effort required to select and characterize protein biomarkers.

  4. Improving NASA's Multiscale Modeling Framework for Tropical Cyclone Climate Study

    NASA Technical Reports Server (NTRS)

    Shen, Bo-Wen; Nelson, Bron; Cheung, Samson; Tao, Wei-Kuo

    2013-01-01

    One of the current challenges in tropical cyclone (TC) research is how to improve our understanding of TC interannual variability and the impact of climate change on TCs. Recent advances in global modeling, visualization, and supercomputing technologies at NASA show potential for such studies. In this article, the authors discuss recent scalability improvement to the multiscale modeling framework (MMF) that makes it feasible to perform long-term TC-resolving simulations. The MMF consists of the finite-volume general circulation model (fvGCM), supplemented by a copy of the Goddard cumulus ensemble model (GCE) at each of the fvGCM grid points, giving 13,104 GCE copies. The original fvGCM implementation has a 1D data decomposition; the revised MMF implementation retains the 1D decomposition for most of the code, but uses a 2D decomposition for the massive copies of GCEs. Because the vast majority of computation time in the MMF is spent computing the GCEs, this approach can achieve excellent speedup without incurring the cost of modifying the entire code. Intelligent process mapping allows differing numbers of processes to be assigned to each domain for load balancing. The revised parallel implementation shows highly promising scalability, obtaining a nearly 80-fold speedup by increasing the number of cores from 30 to 3,335.

  5. Microphysics in the Multi-Scale Modeling Systems with Unified Physics

    NASA Technical Reports Server (NTRS)

Tao, Wei-Kuo; Chern, J.; Lang, S.; Matsui, T.; Shen, B.; Zeng, X.; Shi, R.

    2011-01-01

In recent years, exponentially increasing computer power has extended Cloud Resolving Model (CRM) integrations from hours to months, and the number of computational grid points from less than a thousand to close to ten million. Three-dimensional models are now more prevalent. Much attention is devoted to precipitating cloud systems where the crucial 1-km scales are resolved in horizontal domains as large as 10,000 km in two dimensions, and 1,000 x 1,000 km2 in three dimensions. Cloud-resolving models now provide statistical information useful for developing more realistic, physically based parameterizations for climate models and numerical weather prediction (NWP) models. It is also expected that NWP and mesoscale models can be run at grid sizes similar to cloud-resolving models through nesting techniques. Recently, a multi-scale modeling system with unified physics was developed at NASA Goddard. It consists of (1) a cloud-resolving model (Goddard Cumulus Ensemble model, GCE model), (2) a regional-scale model (the NASA-unified Weather Research and Forecasting model, WRF), (3) a coupled CRM and global model (Goddard Multi-scale Modeling Framework, MMF), and (4) a land modeling system. The same microphysical processes, long- and short-wave radiative transfer, land processes, and explicit cloud-radiation and cloud-surface interactive processes are applied throughout this multi-scale modeling system. The modeling system has been coupled with a multi-satellite simulator to use NASA high-resolution satellite data to identify the strengths and weaknesses of cloud and precipitation processes simulated by the model. In this talk, the microphysics developments of the multi-scale modeling system will be presented. In particular, results from using the multi-scale modeling system to study heavy precipitation processes will be presented.

  6. Multiscale modeling methods in biomechanics.

    PubMed

    Bhattacharya, Pinaki; Viceconti, Marco

    2017-05-01

More and more frequently, computational biomechanics deals with problems where the portion of physical reality to be modeled spans over such a large range of spatial and temporal dimensions that it is impossible to represent it as a single space-time continuum. We are forced to consider multiple space-time continua, each representing the phenomenon of interest at a characteristic space-time scale. Multiscale models describe a complex process across multiple scales, and account for how quantities transform as we move from one scale to another. This review offers a set of definitions for this emerging field, and provides a brief summary of the most recent developments on multiscale modeling in biomechanics. Of all possible perspectives, we chose that of the modeling intent, which vastly affects the nature and the structure of each research activity. To this purpose, we organized all reviewed papers into three categories: 'causal confirmation,' where multiscale models are used as materializations of the causation theories; 'predictive accuracy,' where multiscale modeling is aimed at improving predictive accuracy; and 'determination of effect,' where multiscale modeling is used to model how a change at one scale manifests in an effect at another, radically different space-time scale. Consistent with how the volume of computational biomechanics research is distributed across application targets, we extensively reviewed papers targeting the musculoskeletal and the cardiovascular systems, and covered only a few exemplary papers targeting other organ systems. The review shows a research subdomain still in its infancy, where causal confirmation papers remain the most common. WIREs Syst Biol Med 2017, 9:e1375. doi: 10.1002/wsbm.1375 For further resources related to this article, please visit the WIREs website. © 2017 The Authors. WIREs Systems Biology and Medicine published by Wiley Periodicals, Inc.

  7. Multiscale Modeling of PEEK Using Reactive Molecular Dynamics Modeling and Micromechanics

    NASA Technical Reports Server (NTRS)

    Pisani, William A.; Radue, Matthew; Chinkanjanarot, Sorayot; Bednarcyk, Brett A.; Pineda, Evan J.; King, Julia A.; Odegard, Gregory M.

    2018-01-01

    Polyether ether ketone (PEEK) is a high-performance, semi-crystalline thermoplastic that is used in a wide range of engineering applications, including some structural components of aircraft. The design of new PEEK-based materials requires a precise understanding of the multiscale structure and behavior of semi-crystalline PEEK. Molecular Dynamics (MD) modeling can efficiently predict bulk-level properties of single phase polymers, and micromechanics can be used to homogenize those phases based on the overall polymer microstructure. In this study, MD modeling was used to predict the mechanical properties of the amorphous and crystalline phases of PEEK. The hierarchical microstructure of PEEK, which combines the aforementioned phases, was modeled using a multiscale modeling approach facilitated by NASA's MSGMC. The bulk mechanical properties of semi-crystalline PEEK predicted using MD modeling and MSGMC agree well with vendor data, thus validating the multiscale modeling approach.
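A minimal flavor of the homogenization step, far simpler than the MSGMC machinery the paper uses, is the classical Voigt/Reuss bounding of a two-phase composite. The phase stiffnesses and crystallinity below are illustrative placeholders, not the MD-predicted values:

```python
def voigt(e1, e2, v1):
    """Uniform-strain (rule-of-mixtures) upper bound on the modulus."""
    return v1 * e1 + (1.0 - v1) * e2

def reuss(e1, e2, v1):
    """Uniform-stress (inverse rule-of-mixtures) lower bound."""
    return 1.0 / (v1 / e1 + (1.0 - v1) / e2)

# illustrative placeholder moduli (GPa) and crystalline volume fraction
E_cryst, E_amorph, vf = 9.0, 3.5, 0.35
upper = voigt(E_cryst, E_amorph, vf)
lower = reuss(E_cryst, E_amorph, vf)
hill = 0.5 * (upper + lower)    # Voigt-Reuss-Hill average
```

Any admissible homogenization of the two phases must land between these bounds, which is a quick sanity check on more elaborate micromechanics predictions.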

  8. Bio-inspired configurable multiscale extracellular matrix-like structures for functional alignment and guided orientation of cells.

    PubMed

    Bae, Won-Gyu; Kim, Jangho; Choung, Yun-Hoon; Chung, Yesol; Suh, Kahp Y; Pang, Changhyun; Chung, Jong Hoon; Jeong, Hoon Eui

    2015-11-01

Inspired by the hierarchically organized protein fibers in the extracellular matrix (ECM) as well as the physiological importance of multiscale topography, we developed a simple but robust method for the design and manipulation of precisely controllable multiscale hierarchical structures using capillary force lithography in combination with an original wrinkling technique. In this study, based on our proposed fabrication technology, we developed a conceptual platform that can mimic the hierarchical multiscale topographical and orientation cues of the ECM for controlling cell structure and function. We patterned polyurethane acrylate-based nanotopography with various orientations on microgrooves, which could provide the multiscale topographical signals of the ECM to control single-cell and multicellular morphology and orientation with precision. Using our platforms, we found that the structures and orientations of fibroblast cells were greatly influenced by the nanotopography rather than by the microtopography. We also propose a new approach that enables the generation of native ECM with nanofibers in specific three-dimensional (3D) configurations by culturing fibroblast cells on the multiscale substrata. We suggest that our methodology could be used as an efficient strategy for the design and manipulation of various functional platforms, including well-defined 3D tissue structures for advanced regenerative medicine applications. Copyright © 2015 Elsevier Ltd. All rights reserved.

  9. Multiscale Modeling of Structurally-Graded Materials Using Discrete Dislocation Plasticity Models and Continuum Crystal Plasticity Models

    NASA Technical Reports Server (NTRS)

    Saether, Erik; Hochhalter, Jacob D.; Glaessgen, Edward H.

    2012-01-01

    A multiscale modeling methodology that combines the predictive capability of discrete dislocation plasticity and the computational efficiency of continuum crystal plasticity is developed. Single crystal configurations of different grain sizes modeled with periodic boundary conditions are analyzed using discrete dislocation plasticity (DD) to obtain grain size-dependent stress-strain predictions. These relationships are mapped into crystal plasticity parameters to develop a multiscale DD/CP model for continuum level simulations. A polycrystal model of a structurally-graded microstructure is developed, analyzed and used as a benchmark for comparison between the multiscale DD/CP model and the DD predictions. The multiscale DD/CP model follows the DD predictions closely up to an initial peak stress and then follows a strain hardening path that is parallel but somewhat offset from the DD predictions. The difference is believed to be from a combination of the strain rate in the DD simulation and the inability of the DD/CP model to represent non-monotonic material response.
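The mapping of grain-size-dependent DD results into continuum parameters can be illustrated with a Hall-Petch-type fit. This is a hypothetical stand-in for the paper's actual DD-to-CP calibration; all numbers are synthetic:

```python
import math

def fit_hall_petch(sizes, stresses):
    """Linear least squares for sigma_y = sigma0 + k / sqrt(d)."""
    xs = [1.0 / math.sqrt(d) for d in sizes]
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(stresses) / n
    k = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, stresses))
         / sum((x - xbar) ** 2 for x in xs))
    return ybar - k * xbar, k          # (sigma0 [MPa], k [MPa*um^0.5])

# synthetic stand-in for DD yield stresses at several grain sizes (um)
sizes = [1.0, 2.0, 5.0, 10.0]
stresses = [50.0 + 120.0 / math.sqrt(d) for d in sizes]
sigma0, k = fit_hall_petch(sizes, stresses)
sigma_20um = sigma0 + k / math.sqrt(20.0)   # CP input for an unseen grain size
```

Once fitted, the relation supplies grain-size-dependent strength parameters to the continuum model without rerunning the expensive discrete simulations.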

  10. Multi-scale calculation based on dual domain material point method combined with molecular dynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dhakal, Tilak Raj

This dissertation combines the dual domain material point method (DDMP) with molecular dynamics (MD) in an attempt to create a multi-scale numerical method to simulate materials undergoing large deformations with high strain rates. In these types of problems, the material is often in a thermodynamically non-equilibrium state, and conventional constitutive relations are often not available. In this method, the closure quantities, such as stress, at each material point are calculated from a MD simulation of a group of atoms surrounding the material point. Rather than restricting the multi-scale simulation to a small spatial region, such as phase interfaces or crack tips, this multi-scale method can be used to consider non-equilibrium thermodynamic effects in a macroscopic domain. This method takes advantage of the fact that the material points only communicate with mesh nodes, not among themselves; therefore MD simulations for material points can be performed independently in parallel. First, using a one-dimensional shock problem as an example, the numerical properties of the original material point method (MPM), the generalized interpolation material point (GIMP) method, the convected particle domain interpolation (CPDI) method, and the DDMP method are investigated. Among these methods, only the DDMP method converges as the number of particles increases, but the large number of particles needed for convergence makes the method very expensive, especially in our multi-scale method where we calculate stress at each material point using MD simulation. To improve DDMP, the sub-point method is introduced in this dissertation, which provides high-quality numerical solutions with a very small number of particles. The multi-scale method based on DDMP with sub-points is successfully implemented for a one-dimensional problem of shock wave propagation in a cerium crystal. The MD simulation to calculate stress at each material point is performed on a GPU using CUDA to accelerate the computation. The numerical properties of the multiscale method are investigated, and the results from this multi-scale calculation are compared with direct MD simulation results to demonstrate the feasibility of the method. Also, the multi-scale method is applied to a two-dimensional problem of jet formation around a copper notch under a strong impact.

  11. Computational Chemistry Toolkit for Energetic Materials Design

    DTIC Science & Technology

    2006-11-01

industry are aggressively engaged in efforts to develop multiscale modeling and simulation methodologies to model and analyze complex phenomena across...energetic materials design. It is hoped that this toolkit will evolve into a collection of well-integrated multiscale modeling methodologies... [table fragment: columns Experimental / Theoretical / This Work; 1,5-diamino-4-methyl-tetrazolium nitrate: 8.4, 41.7, 47.5; 1,5-diamino-4-methyl-tetrazolium azide: 138.1, 161.6]

  12. Three-Dimensional Visualization of Ozone Process Data.

    DTIC Science & Technology

    1997-06-18

Scattered Multivariate Data. IEEE Computer Graphics & Applications. 11 (May), 47-55. Odman, M.T. and Ingram, C.L. (1996) Multiscale Air Quality Simulation...the Multiscale Air Quality Simulation Platform (MAQSIP) modeling system. MAQSIP is a modular comprehensive air quality modeling system which MCNC...photolyzed back again to nitric oxide. Finally, oxides of nitrogen are terminated through loss or combination into nitric acid, organic nitrates

  13. Control of Thermo-Acoustics Instabilities: The Multi-Scale Extended Kalman Approach

    NASA Technical Reports Server (NTRS)

    Le, Dzu K.; DeLaat, John C.; Chang, Clarence T.

    2003-01-01

    "Multi-Scale Extended Kalman" (MSEK) is a novel model-based control approach recently found to be effective for suppressing combustion instabilities in gas turbines. A control law formulated in this approach for fuel modulation demonstrated steady suppression of a high-frequency combustion instability (less than 500Hz) in a liquid-fuel combustion test rig under engine-realistic conditions. To make-up for severe transport-delays on control effect, the MSEK controller combines a wavelet -like Multi-Scale analysis and an Extended Kalman Observer to predict the thermo-acoustic states of combustion pressure perturbations. The commanded fuel modulation is composed of a damper action based on the predicted states, and a tones suppression action based on the Multi-Scale estimation of thermal excitations and other transient disturbances. The controller performs automatic adjustments of the gain and phase of these actions to minimize the Time-Scale Averaged Variances of the pressures inside the combustion zone and upstream of the injector. The successful demonstration of Active Combustion Control with this MSEK controller completed an important NASA milestone for the current research in advanced combustion technologies.

  14. Hierarchical multi-scale approach to validation and uncertainty quantification of hyper-spectral image modeling

    NASA Astrophysics Data System (ADS)

    Engel, Dave W.; Reichardt, Thomas A.; Kulp, Thomas J.; Graff, David L.; Thompson, Sandra E.

    2016-05-01

    Validating predictive models and quantifying uncertainties inherent in the modeling process is a critical component of the HARD Solids Venture program [1]. Our current research focuses on validating physics-based models predicting the optical properties of solid materials for arbitrary surface morphologies and characterizing the uncertainties in these models. We employ a systematic and hierarchical approach by designing physical experiments and comparing the experimental results with the outputs of computational predictive models. We illustrate this approach through an example comparing a micro-scale forward model to an idealized solid-material system and then propagating the results through a system model to the sensor level. Our efforts should enhance detection reliability of the hyper-spectral imaging technique and the confidence in model utilization and model outputs by users and stakeholders.
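The hierarchical propagation step can be sketched as a simple Monte Carlo pipeline: sample the uncertain micro-scale input, evaluate the micro model, push the result through the system model, and summarize at the sensor level. Both model functions below are hypothetical stand-ins, not the program's actual models:

```python
import random
import statistics

def micro_model(roughness):
    """Hypothetical micro-scale stand-in: reflectivity drops with roughness."""
    return 1.0 / (1.0 + roughness ** 2)

def system_model(reflectivity):
    """Hypothetical sensor-level stand-in: gain and offset of the chain."""
    return 0.9 * reflectivity + 0.05

random.seed(0)
outputs = sorted(system_model(micro_model(random.gauss(0.5, 0.1)))
                 for _ in range(5000))
mean = statistics.fmean(outputs)
lo95 = outputs[int(0.025 * len(outputs))]   # empirical 95% interval
hi95 = outputs[int(0.975 * len(outputs))]
```

Comparing such a propagated interval against experimental measurements at each level of the hierarchy is the essence of the validation strategy the abstract describes.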

  15. Cardiac Multi-detector CT Segmentation Based on Multiscale Directional Edge Detector and 3D Level Set.

    PubMed

    Antunes, Sofia; Esposito, Antonio; Palmisano, Anna; Colantoni, Caterina; Cerutti, Sergio; Rizzo, Giovanna

    2016-05-01

    Extraction of the cardiac surfaces of interest from multi-detector computed tomographic (MDCT) data is a pre-requisite step for cardiac analysis, as well as for image guidance procedures. Most of the existing methods need manual corrections, which is time-consuming. We present a fully automatic segmentation technique for the extraction of the right ventricle, left ventricular endocardium and epicardium from MDCT images. The method consists of a 3D level set surface evolution approach coupled with a new stopping function based on a multiscale directional second-derivative Gaussian filter, which is able to stop propagation precisely on the real boundary of the structures of interest. We validated the segmentation method on 18 MDCT volumes from healthy and pathological subjects, using manual segmentation performed by a team of expert radiologists as the gold standard. Segmentation errors were assessed for each structure, resulting in a surface-to-surface mean error below 0.5 mm and a percentage of surface distance with errors below 1 mm above 80%. Moreover, in comparison to other segmentation approaches already proposed in previous work, our method showed improved accuracy (the percentage of surface distance with errors below 1 mm increased by 8-20% for all structures). The obtained results suggest that our approach is accurate and effective for the segmentation of ventricular cavities and myocardium from MDCT images.
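    The stopping-function idea can be sketched as follows. This is a generic multiscale second-derivative-of-Gaussian edge response mapped to a level-set stopping function, not the authors' exact directional filter; the scale values are illustrative:

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def edge_stopping_function(volume, scales=(1.0, 2.0, 4.0), eps=1e-8):
        """Multiscale second-derivative-of-Gaussian edge response and a
        level-set stopping function g that approaches its minimum on strong
        edges and stays near 1 in homogeneous regions."""
        response = np.zeros_like(volume, dtype=float)
        for sigma in scales:
            for axis in range(volume.ndim):
                # Second derivative along one axis (a directional response).
                order = [0] * volume.ndim
                order[axis] = 2
                d2 = gaussian_filter(volume.astype(float), sigma=sigma, order=order)
                # Scale-normalize so responses are comparable across sigma.
                response = np.maximum(response, sigma**2 * np.abs(d2))
        # Stopping function: ~1 in flat regions, small on boundaries.
        return 1.0 / (1.0 + response / (response.max() + eps))
    ```

    In a level-set evolution, this `g` multiplies the propagation speed so that fronts slow down and stop on the detected boundaries.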

  16. A multiscale fixed stress split iterative scheme for coupled flow and poromechanics in deep subsurface reservoirs

    NASA Astrophysics Data System (ADS)

    Dana, Saumik; Ganis, Benjamin; Wheeler, Mary F.

    2018-01-01

    In coupled flow and poromechanics phenomena representing hydrocarbon production or CO2 sequestration in deep subsurface reservoirs, the spatial domain in which fluid flow occurs is usually much smaller than the spatial domain over which significant deformation occurs. The typical approach is either to impose an overburden pressure directly on the reservoir, treating it as the entire coupled problem domain, or to model flow on a huge domain with zero-permeability cells to mimic the no-flow boundary condition on the interface between the reservoir and the surrounding rock. The former approach precludes a study of land subsidence or uplift and further does not mimic the true effect of the overburden on stress-sensitive reservoirs, whereas the latter approach has huge computational costs. In order to address these challenges, we augment the fixed-stress split iterative scheme with upscaling and downscaling operators to enable modeling flow and mechanics on overlapping nonmatching hexahedral grids. Flow is solved on a finer mesh using a multipoint flux mixed finite element method and mechanics is solved on a coarse mesh using a conforming Galerkin method. The multiscale operators are constructed using a procedure that involves singular value decompositions, a surface intersection algorithm and Delaunay triangulations. We numerically demonstrate the convergence of the augmented scheme using the classical Mandel's problem solution.
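    The structure of the fixed-stress iteration can be illustrated on a single-cell (0D) Biot model with hypothetical parameters; the flow solve holds the volumetric total stress fixed, which contributes the stabilizing term alpha^2/K to the fluid storage coefficient 1/M:

    ```python
    # Single-cell Biot poroelasticity, illustrative (hypothetical) parameters:
    # drained bulk modulus K, Biot coefficient alpha, Biot modulus M,
    # fluid source q, applied total stress sigma_bar.
    K, alpha, M, q, sigma_bar = 10.0, 0.8, 5.0, 1.0, -2.0

    def fixed_stress_split(tol=1e-12, max_iter=50):
        """Fixed-stress split iteration: the flow step holds the volumetric
        total stress sigma_v = K*eps - alpha*p fixed, then the mechanics
        step re-solves equilibrium with the updated pressure."""
        p, eps = 0.0, 0.0
        for k in range(max_iter):
            sigma_v = K * eps - alpha * p               # current total stress
            # Flow step (fixed stress): (1/M + alpha^2/K) p = q - alpha*sigma_v/K
            p_new = (q - alpha * sigma_v / K) / (1.0 / M + alpha**2 / K)
            # Mechanics step: equilibrium K*eps - alpha*p = sigma_bar
            eps_new = (sigma_bar + alpha * p_new) / K
            if abs(p_new - p) < tol and abs(eps_new - eps) < tol:
                return p_new, eps_new, k + 1
            p, eps = p_new, eps_new
        return p, eps, max_iter

    # Exact solution of the coupled 2x2 system, for comparison.
    p_exact = (q - alpha * sigma_bar / K) / (1.0 / M + alpha**2 / K)
    ```

    In the paper's setting the same loop runs between a fine flow mesh and a coarse mechanics mesh, with the up/downscaling operators transferring sigma_v and p between grids.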

  17. Hierarchical algorithms for modeling the ocean on hierarchical architectures

    NASA Astrophysics Data System (ADS)

    Hill, C. N.

    2012-12-01

    This presentation will describe an approach to using accelerator/co-processor technology that maps hierarchical, multi-scale modeling techniques onto an underlying hierarchical hardware architecture. The focus of this work is on making effective use of both the CPU and accelerator/co-processor parts of a system for large-scale ocean modeling. In this work, a lower-resolution basin-scale ocean model is locally coupled to multiple "embedded" limited-area higher-resolution sub-models. The higher-resolution models execute on co-processor/accelerator hardware and do not interact directly with other sub-models. The lower-resolution basin-scale model executes on the system CPU(s). The result is a multi-scale algorithm that aligns with hardware designs in the co-processor/accelerator space. We demonstrate this approach being used to substitute explicit process models for standard parameterizations. Code for our sub-models is implemented through a generic abstraction layer, so that we can target multiple accelerator architectures with different programming environments. We will present two application and implementation examples. One uses the CUDA programming environment and targets GPU hardware; it employs a simple non-hydrostatic two-dimensional sub-model to represent vertical motion more accurately. The second example uses a highly threaded three-dimensional model at high resolution; it targets a MIC/Xeon Phi-like environment and uses sub-models as a way to explicitly compute sub-mesoscale terms. In both cases the accelerator/co-processor capability provides extra compute cycles that allow improved model fidelity at little or no extra wall-clock time cost.

  18. Peridynamic Multiscale Finite Element Methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Costa, Timothy; Bond, Stephen D.; Littlewood, David John

    The problem of computing quantum-accurate design-scale solutions to mechanics problems is rich with applications and serves as the background to modern multiscale science research. The problem can be broken into component problems comprised of communicating across adjacent scales, which when strung together create a pipeline for information to travel from quantum scales to design scales. Traditionally, this involves connections between a) quantum electronic structure calculations and molecular dynamics and between b) molecular dynamics and local partial differential equation models at the design scale. The second step, b), is particularly challenging since the appropriate scales of molecular dynamics and local partial differential equation models do not overlap. The peridynamic model for continuum mechanics provides an advantage in this endeavor, as the basic equations of peridynamics are valid at a wide range of scales, limiting from the classical partial differential equation models valid at the design scale to the scale of molecular dynamics. In this work we focus on the development of multiscale finite element methods for the peridynamic model, in an effort to create a mathematically consistent channel for microscale information to travel from the upper limits of the molecular dynamics scale to the design scale. In particular, we first develop a Nonlocal Multiscale Finite Element Method which solves the peridynamic model at multiple scales to include microscale information at the coarse scale. We then consider a method that solves a fine-scale peridynamic model to build element-support basis functions for a coarse-scale local partial differential equation model, called the Mixed Locality Multiscale Finite Element Method.
    Given decades of research and development into finite element codes for the local partial differential equation models of continuum mechanics, there is a strong desire to couple local and nonlocal models to leverage the speed and state of the art of local models with the flexibility and accuracy of the nonlocal peridynamic model. In the mixed locality method this coupling occurs across scales, so that the nonlocal model can be used to communicate material heterogeneity at scales inappropriate to local partial differential equation models. Additionally, the computational burden of the weak form of the peridynamic model is reduced dramatically by only requiring that the model be solved on local patches of the simulation domain, which may be computed in parallel, taking advantage of the heterogeneous nature of next generation computing platforms. Finally, we present a novel Galerkin framework, the 'Ambulant Galerkin Method', which represents a first step towards a unified mathematical analysis of local and nonlocal multiscale finite element methods, and whose future extension will allow the analysis of multiscale finite element methods that mix models across scales under certain assumptions of the consistency of those models.

  19. Fully implicit adaptive mesh refinement algorithm for reduced MHD

    NASA Astrophysics Data System (ADS)

    Philip, Bobby; Pernice, Michael; Chacon, Luis

    2006-10-01

    In the macroscopic simulation of plasmas, the numerical modeler is faced with the challenge of dealing with multiple time and length scales. Traditional approaches based on explicit time integration techniques and fixed meshes are not suitable for this challenge, as such approaches prevent the modeler from using realistic plasma parameters to keep the computation feasible. We propose here a novel approach, based on implicit methods and structured adaptive mesh refinement (SAMR). Our emphasis is on both accuracy and scalability with the number of degrees of freedom. As a proof-of-principle, we focus on the reduced resistive MHD model as a basic MHD model paradigm, which is truly multiscale. The approach taken here is to adapt mature physics-based technology to AMR grids, and employ AMR-aware multilevel techniques (such as fast adaptive composite grid (FAC) algorithms) for scalability. We demonstrate that the concept is indeed feasible, featuring near-optimal scalability under grid refinement. Results of fully-implicit, dynamically-adaptive AMR simulations in challenging dissipation regimes will be presented on a variety of problems that benefit from this capability, including tearing modes, the island coalescence instability, and the tilt mode instability. L. Chacón et al., J. Comput. Phys. 178 (1), 15-36 (2002); B. Philip, M. Pernice, and L. Chacón, Lecture Notes in Computational Science and Engineering, accepted (2006).

  20. Computational Design of Materials: Planetary Entry to Electric Aircraft and Beyond

    NASA Technical Reports Server (NTRS)

    Thompson, Alexander; Lawson, John W.

    2014-01-01

    NASA's projects and missions push the bounds of what is possible. To support the agency's work, materials development must stay on the cutting edge in order to keep pace. Today, researchers at NASA Ames Research Center perform multiscale modeling to aid the development of new materials and provide insight into existing ones. Multiscale modeling enables researchers to determine micro- and macroscale properties by connecting computational methods ranging from the atomic level (density functional theory, molecular dynamics) to the macroscale (finite element method). The output of one level is passed on as input to the next level, creating a powerful predictive model.
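    The hand-off from one scale to the next can be sketched with a deliberately simple toy, not NASA's actual toolchain: an "atomistic" step extracts a spring constant from the curvature of a Lennard-Jones pair potential (illustrative argon-like parameters), and that output becomes the input to a crude continuum-modulus estimate.

    ```python
    def atomistic_stiffness(epsilon=0.0103, sigma=3.4e-10):
        """Toy 'atomistic' step: spring constant from the curvature of a
        Lennard-Jones pair potential at its minimum r_min = 2^(1/6)*sigma.
        epsilon in eV, sigma in m (illustrative argon-like values)."""
        r_min = 2 ** (1 / 6) * sigma
        # U(r) = 4*eps*((s/r)^12 - (s/r)^6); U''(r_min) = 72*eps/r_min^2
        eV = 1.602176634e-19
        k = 72.0 * epsilon * eV / r_min**2      # J/m^2 == N/m
        return k, r_min

    def continuum_modulus(k, r_min):
        """Hand-off to the 'macroscale' step: dimensional estimate
        E ~ k / r_min for a simple cubic lattice of such springs
        (an order-of-magnitude sketch, not a real material model)."""
        return k / r_min

    k, r_min = atomistic_stiffness()
    E = continuum_modulus(k, r_min)             # Pa, order-of-magnitude
    ```

    Real pipelines pass richer data (force fields fitted to DFT, constitutive laws fitted to MD), but the pattern is the same: the lower level's output becomes the upper level's material parameter.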

  1. Multiscale Mathematics for Biomass Conversion to Renewable Hydrogen

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Plechac, Petr; Vlachos, Dionisios; Katsoulakis, Markos

    2013-09-05

    The overall objective of this project is to develop multiscale models for understanding and eventually designing complex processes for renewables. To the best of our knowledge, our work is the first attempt at modeling complex reacting systems, whose performance relies on underlying multiscale mathematics. Our specific application lies at the heart of biofuels initiatives of DOE and entails modeling of catalytic systems, to enable economic, environmentally benign, and efficient conversion of biomass into either hydrogen or valuable chemicals. Specific goals include: (i) Development of rigorous spatio-temporal coarse-grained kinetic Monte Carlo (KMC) mathematics and simulation for microscopic processes encountered in biomass transformation. (ii) Development of hybrid multiscale simulation that links stochastic simulation to a deterministic partial differential equation (PDE) model for an entire reactor. (iii) Development of hybrid multiscale simulation that links KMC simulation with quantum density functional theory (DFT) calculations. (iv) Development of parallelization of models of (i)-(iii) to take advantage of Petaflop computing and enable real world applications of complex, multiscale models. In this NCE period, we continued addressing these objectives and completed the proposed work. Main initiatives, key results, and activities are outlined.

  2. Multiscale modelling of hydraulic conductivity in vuggy porous media

    PubMed Central

    Daly, K. R.; Roose, T.

    2014-01-01

    Flow in both saturated and non-saturated vuggy porous media, i.e. soil, is inherently multiscale. The complex microporous structure of the soil aggregates and the wider vugs provides a multitude of flow pathways and has received significant attention from the X-ray computed tomography (CT) community with a constant drive to image at higher resolution. Using multiscale homogenization, we derive averaged equations to study the effects of the microscale structure on the macroscopic flow. The averaged model captures the underlying geometry through a series of cell problems and is verified through direct comparison to numerical simulations of the full structure. These methods offer significant reductions in computation time and allow us to perform three-dimensional calculations with complex geometries on a desktop PC. The results show that the surface roughness of the aggregate has a significantly greater effect on the flow than the microstructure within the aggregate. Hence, this is the region in which the resolution of X-ray CT for image-based modelling has the greatest impact. PMID:24511248
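    The flavour of the averaged equations can be seen in the classical layered-medium limit, where the homogenization cell problem is solvable by hand; this generic sketch is not the authors' vuggy-soil model:

    ```python
    import numpy as np

    def layered_effective_conductivity(k_layers, fractions):
        """Classical homogenization result for a periodic layered medium:
        flow parallel to the layers sees the arithmetic mean of the layer
        conductivities, flow perpendicular sees the harmonic mean (the
        solution of the 1D cell problem)."""
        k = np.asarray(k_layers, dtype=float)
        f = np.asarray(fractions, dtype=float)
        f = f / f.sum()                          # normalize volume fractions
        k_parallel = float(np.sum(f * k))        # arithmetic mean
        k_perp = float(1.0 / np.sum(f / k))      # harmonic mean
        return k_parallel, k_perp
    ```

    The large gap between the two means for contrasting layers is exactly the kind of microstructure effect that the full cell problems capture for general (e.g. vuggy) geometries.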

  3. 3D multiscale crack propagation using the XFEM applied to a gas turbine blade

    NASA Astrophysics Data System (ADS)

    Holl, Matthias; Rogge, Timo; Loehnert, Stefan; Wriggers, Peter; Rolfes, Raimund

    2014-01-01

    This work presents a new multiscale technique to investigate advancing cracks in three-dimensional space. This fully adaptive multiscale technique is designed to take into account cracks of different length scales efficiently, by enabling fine-scale domains locally in regions of interest, i.e. where stress concentrations and high stress gradients occur. Due to crack propagation, these regions change during the simulation process. Cracks are modeled using the extended finite element method, such that an accurate and powerful numerical tool is achieved. Restricting ourselves to linear elastic fracture mechanics, the J-integral yields an accurate solution for the stress intensity factors, and the criterion of maximum hoop stress yields a precise direction of growth. If necessary, the crack surface computed on the finest scale is finally transferred back to the corresponding coarser scale. In a final step, the model is applied to a quadrature point of a gas turbine blade to compute crack growth on the microscale of a real structure.

  4. Thermo-Oxidative Induced Damage in Polymer Composites: Microstructure Image-Based Multi-Scale Modeling and Experimental Validation

    NASA Astrophysics Data System (ADS)

    Hussein, Rafid M.; Chandrashekhara, K.

    2017-11-01

    A multi-scale modeling approach is presented to simulate and validate the thermo-oxidation shrinkage and cracking damage of a high-temperature polymer composite. The multi-scale approach couples transient diffusion-reaction and static structural analyses from the macro- to the micro-scale. The micro-scale shrinkage deformation and cracking damage are simulated and validated using 2D and 3D simulations. Localized shrinkage displacement boundary conditions for the micro-scale simulations are determined from the respective meso- and macro-scale simulations, conducted for a cross-ply laminate. The meso-scale geometrical domain and the micro-scale geometry and mesh are developed using the object-oriented finite element (OOF) software. The macro-scale shrinkage and weight loss are measured using unidirectional coupons and used to build the macro-shrinkage model. The cross-ply coupons are used to validate the macro-shrinkage model against shrinkage profiles acquired from scanning electron images of the cracked surface. The macro-shrinkage model deformation shows a discrepancy when the micro-scale image-based cracking is computed; the discrepancy is minimized by assuming a local maximum shrinkage strain 13 times the maximum macro-shrinkage strain of 2.5 × 10⁻⁵. The microcrack damage of the composite is modeled using a static elastic analysis with the extended finite element method and cohesive surfaces, taking into account the spatial evolution of the modulus. The 3D shrinkage displacements are fed to the model using node-wise boundary/domain conditions of the respective oxidized region. The simulated microcrack length, meander, and opening closely match the crack in the area of interest in the scanning electron images.

  5. Hybrid stochastic simplifications for multiscale gene networks

    PubMed Central

    Crudu, Alina; Debussche, Arnaud; Radulescu, Ovidiu

    2009-01-01

    Background Stochastic simulation of gene networks by Markov processes has important applications in molecular biology. The complexity of exact simulation algorithms scales with the number of discrete jumps to be performed. Approximate schemes reduce the computational time by reducing the number of simulated discrete events. Also, answering important questions about the relation between network topology and intrinsic noise generation and propagation should be based on general mathematical results. These general results are difficult to obtain for exact models. Results We propose a unified framework for hybrid simplifications of Markov models of multiscale stochastic gene networks dynamics. We discuss several possible hybrid simplifications, and provide algorithms to obtain them from pure jump processes. In hybrid simplifications, some components are discrete and evolve by jumps, while other components are continuous. Hybrid simplifications are obtained by partial Kramers-Moyal expansion [1-3] which is equivalent to the application of the central limit theorem to a sub-model. By averaging and variable aggregation we drastically reduce simulation time and eliminate non-critical reactions. Hybrid and averaged simplifications can be used for more effective simulation algorithms and for obtaining general design principles relating noise to topology and time scales. The simplified models reproduce with good accuracy the stochastic properties of the gene networks, including waiting times in intermittence phenomena, fluctuation amplitudes and stationary distributions. The methods are illustrated on several gene network examples. Conclusion Hybrid simplifications can be used for onion-like (multi-layered) approaches to multi-scale biochemical systems, in which various descriptions are used at various scales. Sets of discrete and continuous variables are treated with different methods and are coupled together in a physically justified approach. PMID:19735554
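    The kind of pure jump process being simplified can be sketched with an exact Gillespie SSA for a two-state ("intermittent") gene; the rate constants are hypothetical. A hybrid simplification would keep the slow promoter switch as a discrete jump variable while replacing the abundant protein's jumps by a continuous approximation:

    ```python
    import random

    def gillespie_burst(t_end, k_on=0.1, k_off=0.5, k_prod=10.0,
                        k_deg=1.0, seed=0):
        """Exact SSA for a two-state gene: the promoter toggles on/off,
        protein is produced only in the on state and degrades linearly.
        Returns the protein copy number and promoter state at t_end."""
        rng = random.Random(seed)
        t, on, n = 0.0, 0, 0
        while True:
            # Propensities: switch on, switch off, produce, degrade.
            rates = [k_on * (1 - on), k_off * on, k_prod * on, k_deg * n]
            total = sum(rates)
            dt = rng.expovariate(total)          # time to next jump
            if t + dt > t_end:
                break
            t += dt
            r = rng.random() * total             # pick the jump
            if r < rates[0]:
                on = 1
            elif r < rates[0] + rates[1]:
                on = 0
            elif r < rates[0] + rates[1] + rates[2]:
                n += 1
            else:
                n -= 1
        return n, on
    ```

    The cost of the exact algorithm scales with the number of jumps, which is dominated here by the fast production/degradation events; this is precisely what the hybrid Kramers-Moyal-type simplifications avoid.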

  6. Combining computer modelling and cardiac imaging to understand right ventricular pump function.

    PubMed

    Walmsley, John; van Everdingen, Wouter; Cramer, Maarten J; Prinzen, Frits W; Delhaas, Tammo; Lumens, Joost

    2017-10-01

    Right ventricular (RV) dysfunction is a strong predictor of outcome in heart failure and is a key determinant of exercise capacity. Despite these crucial findings, the RV remains understudied in the clinical, experimental, and computer modelling literature. This review outlines how recent advances in using computer modelling and cardiac imaging synergistically help to understand RV function in health and disease. We begin by highlighting the complexity of interactions that make modelling the RV both challenging and necessary, and then summarize the multiscale modelling approaches used to date to simulate RV pump function in the context of these interactions. We go on to demonstrate how these modelling approaches in combination with cardiac imaging have improved understanding of RV pump function in pulmonary arterial hypertension, arrhythmogenic right ventricular cardiomyopathy, dyssynchronous heart failure and cardiac resynchronization therapy, hypoplastic left heart syndrome, and repaired tetralogy of Fallot. We conclude with a perspective on key issues to be addressed by computational models of the RV in the near future.

  7. Combining wet and dry research: experience with model development for cardiac mechano-electric structure-function studies

    PubMed Central

    Quinn, T. Alexander; Kohl, Peter

    2013-01-01

    Since the development of the first mathematical cardiac cell model 50 years ago, computational modelling has become an increasingly powerful tool for the analysis of data and for the integration of information related to complex cardiac behaviour. Current models build on decades of iteration between experiment and theory, representing a collective understanding of cardiac function. All models, whether computational, experimental, or conceptual, are simplified representations of reality and, like tools in a toolbox, suitable for specific applications. Their range of applicability can be explored (and expanded) by iterative combination of ‘wet’ and ‘dry’ investigation, where experimental or clinical data are used to first build and then validate computational models (allowing integration of previous findings, quantitative assessment of conceptual models, and projection across relevant spatial and temporal scales), while computational simulations are utilized for plausibility assessment, hypotheses-generation, and prediction (thereby defining further experimental research targets). When implemented effectively, this combined wet/dry research approach can support the development of a more complete and cohesive understanding of integrated biological function. This review illustrates the utility of such an approach, based on recent examples of multi-scale studies of cardiac structure and mechano-electric function. PMID:23334215

  8. Capturing Multiscale Phenomena via Adaptive Mesh Refinement (AMR) in 2D and 3D Atmospheric Flows

    NASA Astrophysics Data System (ADS)

    Ferguson, J. O.; Jablonowski, C.; Johansen, H.; McCorquodale, P.; Ullrich, P. A.; Langhans, W.; Collins, W. D.

    2017-12-01

    Extreme atmospheric events such as tropical cyclones are inherently complex multiscale phenomena. Such phenomena are a challenge to simulate in conventional atmosphere models, which typically use rather coarse uniform-grid resolutions. To enable study of these systems, Adaptive Mesh Refinement (AMR) can provide sufficient local resolution by dynamically placing high-resolution grid patches selectively over user-defined features of interest, such as a developing cyclone, while limiting the total computational burden of requiring such high-resolution globally. This work explores the use of AMR with a high-order, non-hydrostatic, finite-volume dynamical core, which uses the Chombo AMR library to implement refinement in both space and time on a cubed-sphere grid. The characteristics of the AMR approach are demonstrated via a series of idealized 2D and 3D test cases designed to mimic atmospheric dynamics and multiscale flows. In particular, new shallow-water test cases with forcing mechanisms are introduced to mimic the strengthening of tropical cyclone-like vortices and to include simplified moisture and convection processes. The forced shallow-water experiments quantify the improvements gained from AMR grids, assess how well transient features are preserved across grid boundaries, and determine effective refinement criteria. In addition, results from idealized 3D test cases are shown to characterize the accuracy and stability of the non-hydrostatic 3D AMR dynamical core.

  9. Density-Dependent Formulation of Dispersion-Repulsion Interactions in Hybrid Multiscale Quantum/Molecular Mechanics (QM/MM) Models.

    PubMed

    Curutchet, Carles; Cupellini, Lorenzo; Kongsted, Jacob; Corni, Stefano; Frediani, Luca; Steindal, Arnfinn Hykkerud; Guido, Ciro A; Scalmani, Giovanni; Mennucci, Benedetta

    2018-03-13

    Mixed multiscale quantum/molecular mechanics (QM/MM) models are widely used to explore the structure, reactivity, and electronic properties of complex chemical systems. Whereas such models typically include electrostatics and potentially polarization in so-called electrostatic and polarizable embedding approaches, respectively, nonelectrostatic dispersion and repulsion interactions are instead commonly described through classical potentials despite their quantum mechanical origin. Here we present an extension of the Tkatchenko-Scheffler semiempirical van der Waals (vdW-TS) scheme aimed at describing dispersion and repulsion interactions between quantum and classical regions within a QM/MM polarizable embedding framework. Starting from the vdW-TS expression, we define a dispersion and a repulsion term, both of them density-dependent and consistently based on a Lennard-Jones-like potential. We explore transferable atom type-based parametrization strategies for the MM parameters, based on either vdW-TS calculations performed on isolated fragments or on a direct estimation of the parameters from atomic polarizabilities taken from a polarizable force field. We investigate the performance of the implementation by computing self-consistent interaction energies for the S22 benchmark set, designed to represent typical noncovalent interactions in biological systems, in both equilibrium and out-of-equilibrium geometries. Overall, our results suggest that the present implementation is a promising strategy to include dispersion and repulsion in multiscale QM/MM models incorporating their explicit dependence on the electronic density.
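    The basic ingredients can be sketched as follows: the Tkatchenko-Scheffler combination rule for heteronuclear C6 coefficients and a Lennard-Jones-like pair term whose minimum sits at the combined vdW radius. This is a minimal sketch; the density dependence of the actual scheme (scaling of C6 and R0 by atoms-in-molecules volumes) is omitted, and the carbon parameters in the test are illustrative free-atom values in atomic units.

    ```python
    def c6_combine(c6_a, c6_b, alpha_a, alpha_b):
        """Tkatchenko-Scheffler combination rule for the heteronuclear C6
        coefficient, built from homonuclear C6 values and static dipole
        polarizabilities alpha (all in atomic units)."""
        return 2.0 * c6_a * c6_b / ((alpha_b / alpha_a) * c6_a +
                                    (alpha_a / alpha_b) * c6_b)

    def lj_disp_rep(r, c6_ab, r0_ab):
        """Lennard-Jones-like pair energy C12/r^12 - C6/r^6 with the
        repulsive coefficient chosen so the minimum sits at the combined
        vdW radius: C12 = C6 * r0^6 / 2."""
        c12 = c6_ab * r0_ab**6 / 2.0
        return c12 / r**12 - c6_ab / r**6
    ```

    With this choice the well depth is C6/(2*r0^6), so dispersion and repulsion are controlled by the same two physically motivated parameters.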

  10. A multiscale modelling approach to understand atherosclerosis formation: A patient-specific case study in the aortic bifurcation

    PubMed Central

    Alimohammadi, Mona; Pichardo-Almarza, Cesar; Agu, Obiekezie; Díaz-Zuccarini, Vanessa

    2017-01-01

    Atherogenesis, the formation of plaques in the wall of blood vessels, starts as a result of lipid accumulation (low-density lipoprotein cholesterol) in the vessel wall. Such accumulation is related to the site of endothelial mechanotransduction, the endothelial response to mechanical stimuli and haemodynamics, which determines biochemical processes regulating the vessel wall permeability. This interaction between biomechanical and biochemical phenomena is complex, spanning different biological scales and is patient-specific, requiring tools able to capture such mathematical and biological complexity in a unified framework. Mathematical models offer an elegant and efficient way of doing this, by taking into account multifactorial and multiscale processes and mechanisms, in order to capture the fundamentals of plaque formation in individual patients. In this study, a mathematical model to understand plaque and calcification locations is presented: this model provides strong interpretability and physical meaning through a multiscale, complex index or metric (the penetration site of low-density lipoprotein cholesterol, expressed as volumetric flux). Computed tomography scans of the aortic bifurcation and iliac arteries are analysed and compared with the results of the multifactorial model. The results indicate that the model shows potential to predict the majority of the plaque locations, while also correctly predicting the absence of plaques where none occur. The promising results from this case study provide a proof of concept that can be applied to a larger patient population. PMID:28427316

  11. Stochastic simulation of multiscale complex systems with PISKaS: A rule-based approach.

    PubMed

    Perez-Acle, Tomas; Fuenzalida, Ignacio; Martin, Alberto J M; Santibañez, Rodrigo; Avaria, Rodrigo; Bernardin, Alejandro; Bustos, Alvaro M; Garrido, Daniel; Dushoff, Jonathan; Liu, James H

    2018-03-29

    Computational simulation is a widely employed methodology to study the dynamic behavior of complex systems. Although common approaches are based either on ordinary differential equations or stochastic differential equations, these techniques make several assumptions which, when it comes to biological processes, can often lead to unrealistic models. Among others, model approaches based on differential equations entangle kinetics and causality, failing when complexity increases, separating knowledge from models, and assuming that the average behavior of the population encompasses any individual deviation. To overcome these limitations, simulations based on the Stochastic Simulation Algorithm (SSA) appear as a suitable approach to model complex biological systems. In this work, we review three different models executed in PISKaS: a rule-based framework to produce multiscale stochastic simulations of complex systems. These models span multiple time and spatial scales, ranging from gene regulation up to Game Theory. In the first example, we describe a model of the core regulatory network of gene expression in Escherichia coli, highlighting the continuous model improvement capacities of PISKaS. The second example describes a hypothetical outbreak of the Ebola virus occurring in a compartmentalized environment resembling cities and highways. Finally, in the last example, we illustrate a stochastic model for the prisoner's dilemma, a common approach from the social sciences describing complex interactions involving trust within human populations. As a whole, these models demonstrate the capabilities of PISKaS, providing fertile scenarios in which to explore the dynamics of complex systems.

  12. Simulation model of a gear synchronisation unit for application in a real-time HiL environment

    NASA Astrophysics Data System (ADS)

    Kirchner, Markus; Eberhard, Peter

    2017-05-01

    Gear shifting simulations using the multibody system approach and the finite-element method are standard in the development of transmissions. However, the corresponding models are typically large due to the complex geometries and numerous contacts, which causes long calculation times. The present work sets itself apart from these detailed shifting simulations by proposing a much simpler but powerful synchronisation model which can be computed in real time while still being more realistic than a pure rigid multibody model. The model is therefore even used as part of a Hardware-in-the-Loop (HiL) test rig. The proposed real-time capable synchronisation model combines the rigid multibody system approach with a multiscale simulation approach: the multibody system approach is suitable for describing the large motions, while the multiscale approach, which also uses the finite-element method, is suitable for analysing the contact processes. An efficient contact search for the claws of a car transmission synchronisation unit is described in detail, which shortens the required calculation time of the model considerably. To further shorten the calculation time, the use of a complex pre-synchronisation model with a nonlinear contour is presented. The model has to provide realistic results at the time-step size of the HiL test rig; to meet this specification, a particularly adapted multirate method for the synchronisation model is shown. Results of the real-time capable synchronisation model are verified for plausibility against test-rig measurements. The simulation model is then also used in the HiL test rig with a transmission control unit.

  13. Advanced Composite Wind Turbine Blade Design Based on Durability and Damage Tolerance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abumeri, Galib; Abdi, Frank

    2012-02-16

    The objective of the program was to demonstrate and verify Certification-by-Analysis (CBA) capability for wind turbine blades made from advanced lightweight composite materials. The approach integrated durability and damage tolerance analysis with robust design and virtual testing capabilities to deliver superior, durable, low-weight, low-cost, long-life, and reliable wind blade designs. The GENOA durability and life prediction software suite was used as the primary simulation tool. First, a micromechanics-based computational approach was used to assess the durability of composite laminates with ply drop features commonly used in wind turbine applications. Ply drops occur in composite joints and closures of wind turbine blades to reduce skin thicknesses along the blade span. They increase localized stress concentration, which may cause premature delamination failure in the composite and reduce fatigue service life. Durability and damage tolerance (D&DT) were evaluated utilizing a multi-scale micro-macro progressive failure analysis (PFA) technique. PFA is finite element based and is capable of detecting all stages of material damage, including initiation and propagation of delamination. It assesses multiple failure criteria and includes the effects of manufacturing anomalies (i.e., voids and fiber waviness). Two different approaches have been used within PFA. The first approach is Virtual Crack Closure Technique (VCCT) PFA, while the second one is strength-based. Constituent stiffness and strength properties for glass and carbon based material systems were reverse engineered for use in D&DT evaluation of coupons with ply drops under static loading. Lamina and laminate properties calculated using manufacturing and composite architecture details closely matched published test data. Similarly, resin properties were determined for fatigue life calculation.
    The simulation not only reproduced the static strength and fatigue life observed in the tests, it also showed composite damage and fracture modes that resemble those reported in the tests. The results show that computational simulation can be relied on to enhance the design of tapered composite structures such as the ones used in wind turbine blades. A computational simulation for durability, damage tolerance (D&DT) and reliability of composite wind turbine blade structures in the presence of uncertainties in material properties was performed. A composite turbine blade was first assessed with finite element based multi-scale progressive failure analysis to determine failure modes and locations as well as the fracture load. The D&DT analyses were then validated against a static test performed at Sandia National Laboratories (SNL). The work was followed by a detailed weight analysis to identify the contribution of various materials to the overall weight of the blade. The methodology ensured that certain types of failure modes, such as delamination progression, are contained to reduce risk to the structure. Probabilistic analysis indicated that composite shear strength has a great influence on the blade ultimate load under static loading. Weight was reduced by 12% with robust design without loss in reliability or D&DT. Structural benefits obtained with the use of enhanced matrix properties through nanoparticle infusion were also assessed. Thin unidirectional fiberglass layers enriched with silica nanoparticles were applied to the outer surfaces of a wind blade to improve its overall structural performance and durability. The wind blade was a 9-meter prototype structure manufactured and tested under three-saddle static loading at SNL. The blade manufacturing did not include the use of any nano-material.
    With silica nanoparticles in the glass composite applied to the exterior surfaces of the blade, the durability and damage tolerance (D&DT) results from multi-scale PFA showed an increase in the ultimate load of the blade of 9.2% as compared to the baseline structural performance (without nano-material). The use of nanoparticles led to a delay in the onset of delamination. Load-displacement relationships obtained from testing of the blade with the baseline neat material were compared to those from analytical simulation using neat resin and using silica nanoparticles in the resin. Multi-scale PFA results for the neat material construction closely matched the test results in both load-displacement response and the location and type of damage and failure. AlphaSTAR demonstrated that wind blade structures made from advanced composite materials can be certified with multi-scale progressive failure analysis by following a building-block verification approach.

  14. Chest CT window settings with multiscale adaptive histogram equalization: pilot study.

    PubMed

    Fayad, Laura M; Jin, Yinpeng; Laine, Andrew F; Berkmen, Yahya M; Pearson, Gregory D; Freedman, Benjamin; Van Heertum, Ronald

    2002-06-01

    Multiscale adaptive histogram equalization (MAHE), a wavelet-based algorithm, was investigated as a method of automatic simultaneous display of the full dynamic contrast range of a computed tomographic image. Interpretation times were significantly lower for MAHE-enhanced images compared with those for conventionally displayed images. Diagnostic accuracy, however, was insufficient in this pilot study to allow recommendation of MAHE as a replacement for conventional window display.
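
    MAHE itself operates on wavelet subbands; as a much simpler illustration of the underlying goal of displaying the full dynamic range at once, a plain global histogram equalization applied to a synthetic 12-bit image (hypothetical values, not the published algorithm) might look like:

```python
import numpy as np

def equalize(img, n_bins=4096):
    """Global histogram equalization: remap intensities so the output
    cumulative distribution is approximately uniform on [0, 1]."""
    hist, edges = np.histogram(img.ravel(), bins=n_bins)
    cdf = hist.cumsum().astype(float)
    cdf /= cdf[-1]                        # normalize CDF to [0, 1]
    centers = 0.5 * (edges[:-1] + edges[1:])
    return np.interp(img, centers, cdf)   # shape-preserving lookup

# synthetic 12-bit "CT slice": two low-contrast clusters (lung-like, bone-like)
rng = np.random.default_rng(1)
img = np.concatenate([rng.normal(500, 30, 5000),
                      rng.normal(3000, 30, 5000)]).reshape(100, 100)
out = equalize(img)   # both clusters become visible in a single display range
```

    A conventional window/level display would require two separate settings to show both clusters; the equalized image compresses empty intensity ranges instead.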

  15. Multiscale Macromolecular Simulation: Role of Evolving Ensembles

    PubMed Central

    Singharoy, A.; Joshi, H.; Ortoleva, P.J.

    2013-01-01

    Multiscale analysis provides an algorithm for the efficient simulation of macromolecular assemblies. This algorithm involves the coevolution of a quasiequilibrium probability density of atomic configurations and the Langevin dynamics of spatial coarse-grained variables denoted order parameters (OPs) characterizing nanoscale system features. In practice, implementation of the probability density involves the generation of constant-OP ensembles of atomic configurations. Such ensembles are used to construct thermal forces and diffusion factors that mediate the stochastic OP dynamics. Generation of all-atom ensembles at every Langevin timestep is computationally expensive. Here, multiscale computation for macromolecular systems is made more efficient by a method that self-consistently folds in ensembles of all-atom configurations constructed at earlier steps (the history) of the Langevin evolution. This procedure accounts for the temporal evolution of these ensembles, providing accurate thermal forces and diffusion factors. It is shown that the efficiency and accuracy of the OP-based simulations are increased via the integration of this historical information. Accuracy improves with the square root of the number of historical timesteps included in the calculation. As a result, CPU usage can be decreased by a factor of 3-8 without loss of accuracy. The algorithm is implemented in our existing force-field based multiscale simulation platform and demonstrated via the structural dynamics of viral capsomers. PMID:22978601
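
    The square-root improvement quoted above is the usual Monte Carlo averaging rate: the error of an ensemble-averaged quantity shrinks as 1/√N with the number of samples folded in. A toy demonstration with hypothetical noisy thermal-force estimates (illustrative only, not the authors' code):

```python
import numpy as np

rng = np.random.default_rng(3)
true_force, sigma, trials = 1.5, 0.5, 2000

def rms_error(n):
    """RMS error of the mean of n noisy force estimates, averaged over
    many independent trials, mimicking n historical ensemble snapshots."""
    samples = true_force + rng.normal(scale=sigma, size=(trials, n))
    return np.sqrt(np.mean((samples.mean(axis=1) - true_force) ** 2))

for n in (16, 64, 256):
    print(n, rms_error(n))   # error shrinks roughly as sigma / sqrt(n)
```

    Quadrupling the history length roughly halves the error, consistent with the reported square-root scaling.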

  16. Multiscale and Multiphysics Modeling of Additive Manufacturing of Advanced Materials

    NASA Technical Reports Server (NTRS)

    Liou, Frank; Newkirk, Joseph; Fan, Zhiqiang; Sparks, Todd; Chen, Xueyang; Fletcher, Kenneth; Zhang, Jingwei; Zhang, Yunlu; Kumar, Kannan Suresh; Karnati, Sreekar

    2015-01-01

    The objective of this project is to research and develop a prediction tool for advanced additive manufacturing (AAM) processes for advanced materials, and to develop experimental methods that provide fundamental properties and establish validation data. Aircraft structures and engines demand materials that are stronger, usable at much higher temperatures, provide less acoustic transmission, and enable more aeroelastic tailoring than those currently used. Significant improvements in properties can only be achieved by processing the materials under nonequilibrium conditions, such as AAM processes. AAM processes encompass a class of processes that use a focused heat source to create a melt pool on a substrate. Examples include Electron Beam Freeform Fabrication and Direct Metal Deposition. These types of additive processes enable fabrication of parts directly from CAD drawings. To achieve the desired material properties and geometries of the final structure, it is necessary to assess the impact of process parameters and to predict optimized conditions with numerical modeling as an effective prediction tool. The processing targets are multiple and span different spatial scales, and the associated physical phenomena are multiphysics and multiscale in nature. In this project, the research work was developed to model AAM processes with a multiscale and multiphysics approach. A macroscale model was developed to investigate the residual stresses and distortion in AAM processes. A sequentially coupled, thermomechanical, finite element model was developed and validated experimentally. The results showed the temperature distribution, residual stress, and deformation within the formed deposits and substrates. A mesoscale model was developed to include heat transfer, phase change with a mushy zone, incompressible free surface flow, solute redistribution, and surface tension. Because of the excessive computing time required, a parallel computing approach was also tested.
In addition, after investigating various methods, a Smoothed Particle Hydrodynamics (SPH) model was developed to model the wire-feeding process. Its computational efficiency and simple architecture make it more robust and flexible than other models. More research on material properties may be needed to realistically model the AAM processes. A microscale model was developed to investigate heterogeneous nucleation, dendritic grain growth, epitaxial growth of columnar grains, columnar-to-equiaxed transition, grain transport in the melt, and related phenomena. The orientations of the columnar grains were almost perpendicular to the direction of laser motion. Compared to similar studies in the literature, the multiple-grain morphology modeling result is of the same order of magnitude as the optical morphologies observed in the experiment. Experimental work was conducted to validate the different models. An infrared camera was incorporated as a process monitoring and validation tool to identify the solidus and mushy zones during deposition. The images were successfully processed to identify these regions. This research project has investigated the multiscale and multiphysics nature of the complex AAM processes, leading to an advanced understanding of these processes. The project has also developed several modeling tools and experimental validation tools that will be critical to the future of AAM process qualification and certification.

  17. Multiscale Modeling of UHTC: Thermal Conductivity

    NASA Technical Reports Server (NTRS)

    Lawson, John W.; Murry, Daw; Squire, Thomas; Bauschlicher, Charles W.

    2012-01-01

    We are developing a multiscale framework in computational modeling for the ultra high temperature ceramics (UHTC) ZrB2 and HfB2. These materials are characterized by high melting point, good strength, and reasonable oxidation resistance. They are candidate materials for a number of applications in extreme environments including sharp leading edges of hypersonic aircraft. In particular, we used a combination of ab initio methods, atomistic simulations and continuum computations to obtain insights into fundamental properties of these materials. Ab initio methods were used to compute basic structural, mechanical and thermal properties. From these results, a database was constructed to fit a Tersoff style interatomic potential suitable for atomistic simulations. These potentials were used to evaluate the lattice thermal conductivity of single crystals and the thermal resistance of simple grain boundaries. Finite element method (FEM) computations using atomistic results as inputs were performed with meshes constructed on SEM images thereby modeling the realistic microstructure. These continuum computations showed the reduction in thermal conductivity due to the grain boundary network.

  18. Multiscale measurement error models for aggregated small area health data.

    PubMed

    Aregay, Mehreteab; Lawson, Andrew B; Faes, Christel; Kirby, Russell S; Carroll, Rachel; Watjou, Kevin

    2016-08-01

    Spatial data are often aggregated from a finer (smaller) to a coarser (larger) geographical level. The process of data aggregation induces a scaling effect which smoothes the variation in the data. To address the scaling problem, multiscale models that link the convolution models at different scale levels via a shared random effect have been proposed. One of the main goals in aggregated health data is to investigate the relationship between predictors and an outcome at different geographical levels. In this paper, we extend multiscale models to examine whether a predictor effect at a finer level holds true at a coarser level. To adjust for predictor uncertainty due to aggregation, we applied measurement error models within the multiscale framework. To assess the benefit of using multiscale measurement error models, we compare the performance of multiscale models with and without measurement error on both real and simulated data. We found that ignoring the measurement error in multiscale models underestimates the regression coefficient, while it overestimates the variance of the spatially structured random effect. On the other hand, accounting for the measurement error in multiscale models provides a better model fit and unbiased parameter estimates. © The Author(s) 2016.
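
    The attenuation effect reported here (ignoring measurement error underestimates the regression coefficient) is easy to reproduce in a simple non-spatial simulation with hypothetical values:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
x_true = rng.normal(size=n)                      # true predictor values
y = 2.0 * x_true + rng.normal(size=n)            # outcome, true slope beta = 2
x_obs = x_true + rng.normal(scale=1.0, size=n)   # predictor observed with error

def ols_slope(x, y):
    """Ordinary least-squares slope of y on x."""
    return np.cov(x, y, ddof=0)[0, 1] / np.var(x)

b_true = ols_slope(x_true, y)   # recovers ~2.0
b_naive = ols_slope(x_obs, y)   # attenuated toward ~2.0 * 1/(1+1) = 1.0
print(b_true, b_naive)
```

    With equal signal and error variances, the naive slope is biased toward half the true value; a measurement error model corrects for this attenuation.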

  19. BGK-MD, Version 1.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Haack, Jeffrey; Shohet, Gil

    2016-12-02

    The software implements a heterogeneous multiscale method (HMM), which involves solving a classical molecular dynamics (MD) problem and then computing the entropy production in order to obtain the relaxation times towards equilibrium for use in a Bhatnagar-Gross-Krook (BGK) solver.
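
    The BGK operator referenced here relaxes a velocity distribution toward a local Maxwellian at rate 1/τ; in the HMM, τ would come from the MD-computed entropy production. A schematic single-cell relaxation step with a hypothetical constant τ (not the BGK-MD code itself):

```python
import numpy as np

def bgk_step(f, f_eq, tau, dt):
    """One explicit Euler step of the BGK collision operator:
    df/dt = -(f - f_eq) / tau."""
    return f + dt * (f_eq - f) / tau

# discrete velocity grid, target Maxwellian, and a non-equilibrium pulse
v = np.linspace(-5.0, 5.0, 201)
f_eq = np.exp(-v**2 / 2.0) / np.sqrt(2.0 * np.pi)
f = np.where(np.abs(v) < 1.0, 0.5, 0.0)
f *= f_eq.sum() / f.sum()          # match the zeroth moment (density)

tau, dt = 1.0, 0.01                # tau stands in for the MD-derived value
for _ in range(1000):              # integrate to t = 10 * tau
    f = bgk_step(f, f_eq, tau, dt)
# f has relaxed very close to f_eq, while conserving total density
```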

  20. A theory for protein dynamics: Global anisotropy and a normal mode approach to local complexity

    NASA Astrophysics Data System (ADS)

    Copperman, Jeremy; Romano, Pablo; Guenza, Marina

    2014-03-01

    We propose a novel Langevin equation description for the dynamics of biological macromolecules by projecting the solvent and all atomic degrees of freedom onto a set of coarse-grained sites at the single-residue level. We utilize a multi-scale approach in which molecular dynamics simulations are performed to obtain equilibrium structural correlations, which serve as input to a modified Rouse-Zimm description that can be solved analytically. The normal mode solution provides a minimal basis set to account for important properties of biological polymers such as the anisotropic global structure and internal motion on a complex free-energy surface. This multi-scale modeling method predicts the dynamics of both global rotational diffusion and constrained internal motion from the picosecond to the nanosecond regime, and is quantitative when compared to both simulation trajectories and NMR relaxation times. Utilizing non-equilibrium sampling techniques and an explicit treatment of the free-energy barriers in the mode coordinates, the model is extended to include biologically important fluctuations in the microsecond regime, such as bubble and fork formation in nucleic acids, and protein domain motion. This work was supported by the NSF under the Graduate STEM Fellows in K-12 Education (GK-12) program, grant DGE-0742540, and NSF grant DMR-0804145, with computational support from XSEDE and ACISS.

  1. Multiscale framework for predicting the coupling between deformation and fluid diffusion in porous rocks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Andrade, José E; Rudnicki, John W

    2012-12-14

    In this project, a predictive multiscale framework will be developed to simulate the strong coupling between solid deformation and fluid diffusion in porous rocks. We intend to improve macroscale modeling by incorporating fundamental physical modeling at the microscale in a computationally efficient way. This is an essential step toward further developments in multiphysics modeling, linking hydraulic, thermal, chemical, and geomechanical processes. This research will focus on areas where severe deformations are observed, such as deformation bands, where classical phenomenology breaks down. Multiscale geometric complexities and key geomechanical and hydraulic attributes of deformation bands (e.g., grain sliding and crushing, and pore collapse, causing interstitial fluid expulsion under saturated conditions) can significantly affect the constitutive response of the skeleton and the intrinsic permeability. The discrete element method (DEM) and the lattice Boltzmann method (LBM) will be used to probe the microstructure, under the current state, to extract the evolution of macroscopic constitutive parameters and the permeability tensor. These evolving macroscopic constitutive parameters are then directly used in continuum scale predictions using the finite element method (FEM), accounting for the coupled solid deformation and fluid diffusion. A particularly valuable aspect of this research is the thorough quantitative verification and validation program at different scales. The multiscale homogenization framework will be validated using X-ray computed tomography and 3D digital image correlation in situ at the Advanced Photon Source at Argonne National Laboratory. Also, the hierarchical computations at the specimen level will be validated using the aforementioned techniques on samples of sandstone undergoing deformation banding.

  2. Multiscale study for stochastic characterization of shale samples

    NASA Astrophysics Data System (ADS)

    Tahmasebi, Pejman; Javadpour, Farzam; Sahimi, Muhammad; Piri, Mohammad

    2016-03-01

    Characterization of shale reservoirs, which are typically of low permeability, is very difficult because of the presence of multiscale structures. While three-dimensional (3D) imaging can be an ultimate solution for revealing important complexities of such reservoirs, acquiring such images is costly and time-consuming. On the other hand, high-quality 2D images, which are widely available, also reveal useful information about shales' pore connectivity and size. Most of the current modeling methods that are based on 2D images use limited and insufficient extracted information. One remedy to this shortcoming is direct use of qualitative images, a concept that we introduce in this paper. We demonstrate that higher-order statistics (as opposed to traditional two-point statistics, such as variograms) are necessary for developing an accurate model of shales, and describe an efficient method for using 2D images that is capable of utilizing qualitative and physical information within an image and generating stochastic realizations of shales. We then further refine the model by describing and utilizing several techniques, including an iterative framework, for removing possible artifacts and achieving better pattern reproduction. Next, we introduce a new histogram-matching algorithm that accounts for concealed nanostructures in shale samples. We also present two new multiresolution and multiscale approaches for dealing with the distinct pore structures that are common in shale reservoirs. In the multiresolution method, the original high-quality image is upscaled in a pyramid-like manner in order to achieve more accurate global and long-range structures. The multiscale approach integrates two images, each containing diverse pore networks - the nano- and microscale pores - using a high-resolution image to represent small-scale pores and, at the same time, reconstructing large pores from a low-quality image. Eventually, the results are integrated to generate a 3D model.
The methods are tested on two shale samples for which full 3D samples are available. The quantitative accuracy of the models is demonstrated by computing their morphological and flow properties and comparing them with those of the actual 3D images. The success of the method hinges upon the use of very different low- and high-resolution images.

  3. Mechanical Properties of Graphene Nanoplatelet/Carbon Fiber/Epoxy Hybrid Composites: Multiscale Modeling and Experiments

    NASA Technical Reports Server (NTRS)

    Hadden, C. M.; Klimek-McDonald, D. R.; Pineda, E. J.; King, J. A.; Reichanadter, A. M.; Miskioglu, I.; Gowtham, S.; Odegard, G. M.

    2015-01-01

    Because of the relatively high specific mechanical properties of carbon fiber/epoxy composite materials, they are often used as structural components in aerospace applications. Graphene nanoplatelets (GNPs) can be added to the epoxy matrix to improve the overall mechanical properties of the composite. The resulting GNP/carbon fiber/epoxy hybrid composites have been studied using multiscale modeling to determine the influence of GNP volume fraction, epoxy crosslink density, and GNP dispersion on the mechanical performance. The hierarchical multiscale modeling approach developed herein includes Molecular Dynamics (MD) and micromechanical modeling, and it is validated with experimental testing of the same hybrid composite material system. The results indicate that the multiscale modeling approach is accurate and provides physical insight into the composite mechanical behavior. Also, the results quantify the substantial impact of GNP volume fraction and dispersion on the transverse mechanical properties of the hybrid composite, while the effect on the axial properties is shown to be insignificant.

  5. Mechanical Properties of Graphene Nanoplatelet Carbon Fiber Epoxy Hybrid Composites: Multiscale Modeling and Experiments

    NASA Technical Reports Server (NTRS)

    Hadden, Cameron M.; Klimek-McDonald, Danielle R.; Pineda, Evan J.; King, Julie A.; Reichanadter, Alex M.; Miskioglu, Ibrahim; Gowtham, S.; Odegard, Gregory M.

    2015-01-01

    Because of the relatively high specific mechanical properties of carbon fiber/epoxy composite materials, they are often used as structural components in aerospace applications. Graphene nanoplatelets (GNPs) can be added to the epoxy matrix to improve the overall mechanical properties of the composite. The resulting GNP/carbon fiber/epoxy hybrid composites have been studied using multiscale modeling to determine the influence of GNP volume fraction, epoxy crosslink density, and GNP dispersion on the mechanical performance. The hierarchical multiscale modeling approach developed herein includes Molecular Dynamics (MD) and micromechanical modeling, and it is validated with experimental testing of the same hybrid composite material system. The results indicate that the multiscale modeling approach is accurate and provides physical insight into the composite mechanical behavior. Also, the results quantify the substantial impact of GNP volume fraction and dispersion on the transverse mechanical properties of the hybrid composite, while the effect on the axial properties is shown to be insignificant.

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perdikaris, Paris, E-mail: parisp@mit.edu; Grinberg, Leopold, E-mail: leopoldgrinberg@us.ibm.com; Karniadakis, George Em, E-mail: george-karniadakis@brown.edu

    The aim of this work is to present an overview of recent advances in multi-scale modeling of brain blood flow. In particular, we present some approaches that enable the in silico study of multi-scale and multi-physics phenomena in the cerebral vasculature. We discuss the formulation of continuum and atomistic modeling approaches, present a consistent framework for their concurrent coupling, and list some of the challenges that one needs to overcome in achieving a seamless and scalable integration of heterogeneous numerical solvers. The effectiveness of the proposed framework is demonstrated in a realistic case involving modeling the thrombus formation process taking place on the wall of a patient-specific cerebral aneurysm. This highlights the ability of multi-scale algorithms to resolve important biophysical processes that span several spatial and temporal scales, potentially yielding new insight into the key aspects of brain blood flow in health and disease. Finally, we discuss open questions in multi-scale modeling and emerging topics of future research.

  7. Data fusion of multi-scale representations for structural damage detection

    NASA Astrophysics Data System (ADS)

    Guo, Tian; Xu, Zili

    2018-01-01

    Despite extensive research into structural health monitoring (SHM) over the past decades, there are few methods that can detect multiple sites of slight damage in noisy environments. Here, we introduce a new hybrid method that utilizes multi-scale space theory and a data fusion approach for multiple damage detection in beams and plates. A cascade filtering approach provides a multi-scale space for noisy mode shapes and filters out the fluctuations caused by measurement noise. In multi-scale space, a series of amplification and data fusion algorithms is utilized to search for damage features across all possible scales. We verify the effectiveness of the method by numerical simulation using damaged beams and plates with various types of boundary conditions. Monte Carlo simulations are conducted to illustrate the effectiveness and noise immunity of the proposed method. The applicability is further validated via laboratory case studies focusing on different damage scenarios. Both results demonstrate that the proposed method has superior noise tolerance, as well as damage sensitivity, without requiring knowledge of material properties or boundary conditions.
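
    The multi-scale space in this method comes from cascaded Gaussian filtering of the measured mode shapes. A minimal sketch with a hypothetical noisy beam mode (illustrative, not the authors' implementation):

```python
import numpy as np

def gaussian_kernel(sigma):
    """Normalized 1D Gaussian kernel truncated at 4 standard deviations."""
    radius = int(4 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2.0 * sigma**2))
    return k / k.sum()

def scale_space(signal, sigmas):
    """Filter the signal at a series of increasing Gaussian scales."""
    return [np.convolve(signal, gaussian_kernel(s), mode="same") for s in sigmas]

x = np.linspace(0.0, np.pi, 500)
mode = np.sin(x)                    # first bending mode of a simply supported beam
noisy = mode + 0.05 * np.random.default_rng(2).normal(size=x.size)
smoothed = scale_space(noisy, sigmas=[2, 4, 8, 16])
# at coarser scales, measurement noise is progressively suppressed,
# so damage features that persist across scales can be separated from noise
```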

  8. Multi-scale Modeling in Clinical Oncology: Opportunities and Barriers to Success.

    PubMed

    Yankeelov, Thomas E; An, Gary; Saut, Oliver; Luebeck, E Georg; Popel, Aleksander S; Ribba, Benjamin; Vicini, Paolo; Zhou, Xiaobo; Weis, Jared A; Ye, Kaiming; Genin, Guy M

    2016-09-01

    Hierarchical processes spanning several orders of magnitude of both space and time underlie nearly all cancers. Multi-scale statistical, mathematical, and computational modeling methods are central to designing, implementing and assessing treatment strategies that account for these hierarchies. The basic science underlying these modeling efforts is maturing into a new discipline that is close to influencing and facilitating clinical successes. The purpose of this review is to capture the state-of-the-art as well as the key barriers to success for multi-scale modeling in clinical oncology. We begin with a summary of the long-envisioned promise of multi-scale modeling in clinical oncology, including the synthesis of disparate data types into models that reveal underlying mechanisms and allow for experimental testing of hypotheses. We then evaluate the mathematical techniques employed most widely and present several examples illustrating their application as well as the current gap between pre-clinical and clinical applications. We conclude with a discussion of what we view to be the key challenges and opportunities for multi-scale modeling in clinical oncology.

  9. Multi-scale Modeling in Clinical Oncology: Opportunities and Barriers to Success

    PubMed Central

    Yankeelov, Thomas E.; An, Gary; Saut, Oliver; Luebeck, E. Georg; Popel, Aleksander S.; Ribba, Benjamin; Vicini, Paolo; Zhou, Xiaobo; Weis, Jared A.; Ye, Kaiming; Genin, Guy M.

    2016-01-01

    Hierarchical processes spanning several orders of magnitude of both space and time underlie nearly all cancers. Multi-scale statistical, mathematical, and computational modeling methods are central to designing, implementing and assessing treatment strategies that account for these hierarchies. The basic science underlying these modeling efforts is maturing into a new discipline that is close to influencing and facilitating clinical successes. The purpose of this review is to capture the state-of-the-art as well as the key barriers to success for multi-scale modeling in clinical oncology. We begin with a summary of the long-envisioned promise of multi-scale modeling in clinical oncology, including the synthesis of disparate data types into models that reveal underlying mechanisms and allow for experimental testing of hypotheses. We then evaluate the mathematical techniques employed most widely and present several examples illustrating their application as well as the current gap between pre-clinical and clinical applications. We conclude with a discussion of what we view to be the key challenges and opportunities for multi-scale modeling in clinical oncology. PMID:27384942

  10. A physics based multiscale modeling of cavitating flows.

    PubMed

    Ma, Jingsen; Hsiao, Chao-Tsung; Chahine, Georges L

    2017-03-02

    Numerical modeling of cavitating bubbly flows is challenging due to the wide range of characteristic lengths of the physics at play: from micrometers (e.g., bubble nuclei radius) to meters (e.g., propeller diameter or sheet cavity length). To address this, we present here a multiscale approach which integrates a Discrete Singularities Model (DSM) for dispersed microbubbles and a two-phase Navier-Stokes solver for the bubbly medium, which includes a level set approach to describe large cavities or gaseous pockets. Inter-scale schemes are used to smoothly bridge the two scales, transitioning subgrid DSM bubbles into larger discretized cavities. This approach is demonstrated on several problems including cavitation inception and vapor core formation in a vortex flow, sheet-to-cloud cavitation over a hydrofoil, cavitation behind a blunt body, and cavitation on a propeller. These examples highlight the capabilities of the developed multiscale model in simulating various forms of cavitation.

  11. A physics based multiscale modeling of cavitating flows

    PubMed Central

    Ma, Jingsen; Hsiao, Chao-Tsung; Chahine, Georges L.

    2018-01-01

    Numerical modeling of cavitating bubbly flows is challenging due to the wide range of characteristic lengths of the physics at play: from micrometers (e.g., bubble nuclei radius) to meters (e.g., propeller diameter or sheet cavity length). To address this, we present here a multiscale approach which integrates a Discrete Singularities Model (DSM) for dispersed microbubbles and a two-phase Navier-Stokes solver for the bubbly medium, which includes a level set approach to describe large cavities or gaseous pockets. Inter-scale schemes are used to smoothly bridge the two scales, transitioning subgrid DSM bubbles into larger discretized cavities. This approach is demonstrated on several problems including cavitation inception and vapor core formation in a vortex flow, sheet-to-cloud cavitation over a hydrofoil, cavitation behind a blunt body, and cavitation on a propeller. These examples highlight the capabilities of the developed multiscale model in simulating various forms of cavitation. PMID:29720773

  12. Hierarchical Multi-Scale Approach To Validation and Uncertainty Quantification of Hyper-Spectral Image Modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Engel, David W.; Reichardt, Thomas A.; Kulp, Thomas J.

    Validating predictive models and quantifying uncertainties inherent in the modeling process is a critical component of the HARD Solids Venture program [1]. Our current research focuses on validating physics-based models predicting the optical properties of solid materials for arbitrary surface morphologies and characterizing the uncertainties in these models. We employ a systematic and hierarchical approach by designing physical experiments and comparing the experimental results with the outputs of computational predictive models. We illustrate this approach through an example comparing a micro-scale forward model to an idealized solid-material system and then propagating the results through a system model to the sensor level. Our efforts should enhance detection reliability of the hyper-spectral imaging technique and the confidence in model utilization and model outputs by users and stakeholders.
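
    Propagating micro-scale results through a system model is, at its simplest, a nested Monte Carlo evaluation. The sketch below uses hypothetical one-line stand-ins for the micro and sensor-level models (the program's actual models are far richer):

```python
import random
import statistics

def micro_model(x):
    """Hypothetical micro-scale forward model (stand-in)."""
    return 2.0 * x + 1.0

def system_model(y):
    """Hypothetical system/sensor-level model (stand-in)."""
    return y ** 2

def propagate(n=10000, x_mean=1.0, x_sd=0.1, seed=0):
    """Monte Carlo propagation of micro-scale parameter uncertainty
    through the model hierarchy to the sensor-level response."""
    rng = random.Random(seed)
    samples = [system_model(micro_model(rng.gauss(x_mean, x_sd)))
               for _ in range(n)]
    return statistics.mean(samples), statistics.stdev(samples)
```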

  13. Gaussian Multiscale Aggregation Applied to Segmentation in Hand Biometrics

    PubMed Central

    de Santos Sierra, Alberto; Ávila, Carmen Sánchez; Casanova, Javier Guerra; del Pozo, Gonzalo Bailador

    2011-01-01

    This paper presents an image segmentation algorithm based on Gaussian multiscale aggregation oriented to hand biometric applications. The method is able to isolate the hand from a wide variety of background textures such as carpets, fabric, glass, grass, soil or stones. The evaluation was carried out by using a publicly available synthetic database with 408,000 hand images in different backgrounds, comparing the performance in terms of accuracy and computational cost to two competitive segmentation methods in the literature, namely Lossy Data Compression (LDC) and Normalized Cuts (NCuts). The results highlight that the proposed method outperforms current competitive segmentation methods with regard to computational cost, time performance, accuracy and memory usage. PMID:22247658
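
    A toy version of the coarse-to-fine idea: repeatedly aggregate the image (a 2x2 box average standing in for Gaussian smoothing) and threshold at the coarse scale. This is a drastic simplification of the paper's aggregation algorithm, shown only to illustrate the multiscale structure:

```python
def downsample(img):
    """Average 2x2 blocks: one level of a (simplified) Gaussian pyramid.
    Assumes even image dimensions."""
    h, w = len(img), len(img[0])
    return [[(img[2 * i][2 * j] + img[2 * i][2 * j + 1] +
              img[2 * i + 1][2 * j] + img[2 * i + 1][2 * j + 1]) / 4.0
             for j in range(w // 2)] for i in range(h // 2)]

def segment_multiscale(img, levels=2):
    """Aggregate to a coarse scale, then threshold at the global mean
    there; returns the coarse binary mask."""
    for _ in range(levels):
        img = downsample(img)
    flat = [v for row in img for v in row]
    thr = sum(flat) / len(flat)
    return [[1 if v > thr else 0 for v in row] for row in img]
```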

  14. Gaussian multiscale aggregation applied to segmentation in hand biometrics.

    PubMed

    de Santos Sierra, Alberto; Avila, Carmen Sánchez; Casanova, Javier Guerra; del Pozo, Gonzalo Bailador

    2011-01-01

    This paper presents an image segmentation algorithm based on Gaussian multiscale aggregation oriented to hand biometric applications. The method is able to isolate the hand from a wide variety of background textures such as carpets, fabric, glass, grass, soil or stones. The evaluation was carried out by using a publicly available synthetic database with 408,000 hand images in different backgrounds, comparing the performance in terms of accuracy and computational cost to two competitive segmentation methods existing in literature, namely Lossy Data Compression (LDC) and Normalized Cuts (NCuts). The results highlight that the proposed method outperforms current competitive segmentation methods with regard to computational cost, time performance, accuracy and memory usage.

  15. Object-based classification of global undersea topography and geomorphological features from the SRTM30_PLUS data

    NASA Astrophysics Data System (ADS)

    Dekavalla, Maria; Argialas, Demetre

    2017-07-01

    The analysis of undersea topography and geomorphological features provides necessary information to related disciplines and many applications. The development of an automated knowledge-based classification approach for undersea topography and geomorphological features is challenging due to their multi-scale nature. The aim of the study is to develop and evaluate an automated knowledge-based OBIA approach to: i) decompose the global undersea topography into multi-scale regions of distinct morphometric properties, and ii) assign the derived regions to characteristic geomorphological features. First, the global undersea topography was decomposed, using the SRTM30_PLUS bathymetry data, into so-called morphometric objects of discrete morphometric properties and spatial scales, defined by data-driven methods (local variance graphs and nested means) and multi-scale analysis. The derived morphometric objects were combined with additional relative topographic position information computed with a self-adaptive pattern recognition method (geomorphons) and auxiliary data, and were assigned to characteristic undersea geomorphological feature classes through a knowledge base developed from standard definitions. The decomposition of the SRTM30_PLUS data into morphometric objects was considered successful for the requirements of maximizing intra-object homogeneity and inter-object heterogeneity, based on the near-zero values of Moran's I and the low values of the weighted variance index. The knowledge-based classification approach was tested for its transferability in six case studies of various tectonic settings and achieved the efficient extraction of 11 undersea geomorphological feature classes. The classification results for the six case studies were compared with the digital global seafloor geomorphic features map (GSFM).
The 11 undersea feature classes and their producer's accuracies with respect to the GSFM relevant areas were Basin (95%), Continental Shelf (94.9%), Trough (88.4%), Plateau (78.9%), Continental Slope (76.4%), Trench (71.2%), Abyssal Hill (62.9%), Abyssal Plain (62.4%), Ridge (49.8%), Seamount (48.8%) and Continental Rise (25.4%). The knowledge-based OBIA classification approach was considered transferable, since the percentages of spatial and thematic agreement between most of the classified undersea feature classes and the GSFM exhibited low deviations across the six case studies.
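
    Producer's accuracy, used above to compare each class against the GSFM, is the fraction of reference pixels of a class that the classifier labels correctly. A small sketch with hypothetical counts (not the study's data):

```python
def producers_accuracy(confusion, classes):
    """Producer's accuracy = correctly classified / reference total, per class.

    `confusion[ref][pred]` holds counts, with reference classes as rows.
    """
    acc = {}
    for c in classes:
        ref_total = sum(confusion[c].values())
        acc[c] = confusion[c].get(c, 0) / ref_total if ref_total else float("nan")
    return acc
```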

  16. MODELS-3 COMMUNITY MULTISCALE AIR QUALITY (CMAQ) MODEL AEROSOL COMPONENT 1: MODEL DESCRIPTION

    EPA Science Inventory

    The aerosol component of the Community Multiscale Air Quality (CMAQ) model is designed to be an efficient and economical depiction of aerosol dynamics in the atmosphere. The approach taken represents the particle size distribution as the superposition of three lognormal subdistributions...
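
    The superposition of lognormal modes can be sketched as a sum of lognormal number distributions; the mode parameters below are placeholders, not CMAQ's actual defaults:

```python
import math

def lognormal_mode(d, n_total, d_g, sigma_g):
    """Number distribution dN/dlnD of one lognormal mode with total number
    n_total, geometric mean diameter d_g and geometric std dev sigma_g."""
    ln_sig = math.log(sigma_g)
    return (n_total / (math.sqrt(2 * math.pi) * ln_sig) *
            math.exp(-0.5 * (math.log(d / d_g) / ln_sig) ** 2))

def trimodal(d, modes):
    """Superpose several modes (e.g., Aitken, accumulation, coarse),
    mirroring CMAQ's three-mode representation; each entry of `modes`
    is (n_total, d_g, sigma_g)."""
    return sum(lognormal_mode(d, *m) for m in modes)
```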

  17. Industrial process system assessment: bridging process engineering and life cycle assessment through multiscale modeling.

    EPA Science Inventory

    The Industrial Process System Assessment (IPSA) methodology is a multiple step allocation approach for connecting information from the production line level up to the facility level and vice versa using a multiscale model of process systems. The allocation procedure assigns inputs...

  18. Evaluation of the Community Multi-scale Air Quality (CMAQ) ...

    EPA Pesticide Factsheets

    The Community Multiscale Air Quality (CMAQ) model is a state-of-the-science air quality model that simulates the emission, transport and fate of numerous air pollutants, including ozone and particulate matter. The Computational Exposure Division (CED) of the U.S. Environmental Protection Agency develops the CMAQ model and periodically releases new versions of the model that include bug fixes and various other improvements to the modeling system. In the fall of 2015, CMAQ version 5.1 was released. This new version of CMAQ will contain important bug fixes to several issues that were identified in CMAQv5.0.2 and additionally include updates to other portions of the code. Several annual, and numerous episodic, CMAQv5.1 simulations were performed to assess the impact of these improvements on the model results. These results will be presented, along with a base evaluation of the performance of the CMAQv5.1 modeling system against available surface and upper-air measurements available during the time period simulated. The National Exposure Research Laboratory (NERL) Computational Exposure Division (CED) develops and evaluates data, decision-support tools, and models to be applied to media-specific or receptor-specific problem areas. CED uses modeling-based approaches to characterize exposures, evaluate fate and transport, and support environmental diagnostics/forensics with input from multiple data sources. It also develops media- and receptor-specific models, proces

  19. Damage and failure modelling of hybrid three-dimensional textile composites: a mesh objective multi-scale approach

    PubMed Central

    Patel, Deepak K.

    2016-01-01

    This paper is concerned with predicting the progressive damage and failure of multi-layered hybrid textile composites subjected to uniaxial tensile loading, using a novel two-scale computational mechanics framework. These composites include three-dimensional woven textile composites (3DWTCs) with glass, carbon and Kevlar fibre tows. Progressive damage and failure of 3DWTCs at different length scales are captured in the present model by using a macroscale finite-element (FE) analysis at the representative unit cell (RUC) level, while a closed-form micromechanics analysis is implemented simultaneously at the subscale level using material properties of the constituents (fibre and matrix) as input. The N-layers concentric cylinder (NCYL) model (Zhang and Waas 2014 Acta Mech. 225, 1391–1417; Patel et al. submitted Acta Mech.) to compute local stress, strain and displacement fields in the fibre and matrix is used at the subscale. The 2-CYL fibre–matrix concentric cylinder model is extended to fibre and (N−1) matrix layers, keeping the volume fraction constant, and hence is called the NCYL model, where the matrix damage can be captured locally within each discrete layer of the matrix volume. The influence of matrix microdamage at the subscale causes progressive degradation of fibre tow stiffness and matrix stiffness at the macroscale. The global RUC stiffness matrix remains positive definite until the strain softening responses resulting from different failure modes (such as fibre tow breakage, tow splitting in the transverse direction due to matrix cracking inside the tow, and tensile failure of the surrounding matrix outside the fibre tows) are initiated. At this stage, the macroscopic post-peak softening response is modelled using the mesh objective smeared crack approach (Rots et al. 1985 HERON 30, 1–48; Heinrich and Waas 2012 53rd AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics and Materials Conference, Honolulu, HI, 23–26 April 2012. AIAA 2012-1537). 
Manufacturing-induced geometric imperfections are included in the simulation, where the FE mesh of the unit cell is generated directly from real micro-computed tomography (MCT) data using the code Simpleware. Results from multi-scale analysis for both an idealized perfect geometry and one that includes geometric imperfections are compared with experimental results (Pankow et al. 2012 53rd AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics and Materials Conference, Honolulu, HI, 23–26 April 2012. AIAA 2012-1572). This article is part of the themed issue ‘Multiscale modelling of the structural integrity of composite materials’. PMID:27242294
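
    The post-peak behaviour described above can be illustrated with a one-dimensional smeared-crack law: linear elasticity up to the cracking strain, then linear softening to zero stress. The numbers are illustrative; in mesh-objective implementations the failure strain is scaled with element size so that the dissipated fracture energy is mesh-independent:

```python
def smeared_crack_stress(strain, E, strain_0, strain_f):
    """1-D smeared-crack constitutive law: linear elastic up to the
    cracking strain strain_0, then linear softening to zero stress at
    strain_f (which mesh-objective models tie to the element size)."""
    if strain <= strain_0:
        return E * strain
    if strain >= strain_f:
        return 0.0
    peak = E * strain_0
    return peak * (strain_f - strain) / (strain_f - strain_0)
```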

  20. Damage and failure modelling of hybrid three-dimensional textile composites: a mesh objective multi-scale approach.

    PubMed

    Patel, Deepak K; Waas, Anthony M

    2016-07-13

    This paper is concerned with predicting the progressive damage and failure of multi-layered hybrid textile composites subjected to uniaxial tensile loading, using a novel two-scale computational mechanics framework. These composites include three-dimensional woven textile composites (3DWTCs) with glass, carbon and Kevlar fibre tows. Progressive damage and failure of 3DWTCs at different length scales are captured in the present model by using a macroscale finite-element (FE) analysis at the representative unit cell (RUC) level, while a closed-form micromechanics analysis is implemented simultaneously at the subscale level using material properties of the constituents (fibre and matrix) as input. The N-layers concentric cylinder (NCYL) model (Zhang and Waas 2014 Acta Mech. 225, 1391-1417; Patel et al. submitted Acta Mech.) to compute local stress, strain and displacement fields in the fibre and matrix is used at the subscale. The 2-CYL fibre-matrix concentric cylinder model is extended to fibre and (N-1) matrix layers, keeping the volume fraction constant, and hence is called the NCYL model, where the matrix damage can be captured locally within each discrete layer of the matrix volume. The influence of matrix microdamage at the subscale causes progressive degradation of fibre tow stiffness and matrix stiffness at the macroscale. The global RUC stiffness matrix remains positive definite until the strain softening responses resulting from different failure modes (such as fibre tow breakage, tow splitting in the transverse direction due to matrix cracking inside the tow, and tensile failure of the surrounding matrix outside the fibre tows) are initiated. At this stage, the macroscopic post-peak softening response is modelled using the mesh objective smeared crack approach (Rots et al. 1985 HERON 30, 1-48; Heinrich and Waas 2012 53rd AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics and Materials Conference, Honolulu, HI, 23-26 April 2012. AIAA 2012-1537). 
Manufacturing-induced geometric imperfections are included in the simulation, where the FE mesh of the unit cell is generated directly from real micro-computed tomography (MCT) data using the code Simpleware. Results from multi-scale analysis for both an idealized perfect geometry and one that includes geometric imperfections are compared with experimental results (Pankow et al. 2012 53rd AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics and Materials Conference, Honolulu, HI, 23-26 April 2012. AIAA 2012-1572). This article is part of the themed issue 'Multiscale modelling of the structural integrity of composite materials'.

  1. An adhesive contact mechanics formulation based on atomistically induced surface traction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fan, Houfu; Ren, Bo; Li, Shaofan, E-mail: shaofan@berkeley.edu

    2015-12-01

    In this work, we have developed a novel multiscale computational contact formulation based on the generalized Derjaguin approximation for continua that are characterized by atomistically enriched constitutive relations, in order to study macroscopic interaction between arbitrarily shaped deformable continua. The proposed adhesive contact formulation makes use of the microscopic interaction forces between individual particles in the interacting bodies. In particular, the double-layer volume integral describing the contact interaction (energy, force vector, matrix) is converted into a double-layer surface integral through a mathematically consistent approach that employs the divergence theorem and a special partitioning technique. The proposed contact model is formulated in the nonlinear continuum mechanics framework and implemented using the standard finite element method. With no large penalty constant, the stiffness matrix of the system will in general be well-conditioned, which is of great significance for quasi-static analysis. Three numerical examples are presented to illustrate the capability of the proposed method. Results indicate that with the same mesh configuration, the finite element computation based on the surface integral approach is faster and more accurate than the volume integral based approach. In addition, the proposed approach is energy preserving even in a very long dynamic simulation.

  2. Fast hierarchical knowledge-based approach for human face detection in color images

    NASA Astrophysics Data System (ADS)

    Jiang, Jun; Gong, Jie; Zhang, Guilin; Hu, Ruolan

    2001-09-01

    This paper presents a fast hierarchical knowledge-based approach for automatically detecting multi-scale upright faces in still color images. The approach consists of three levels. At the highest level, skin-like regions are determined by a skin model based on the color attributes hue and saturation in HSV color space, as well as the color attributes red and green in normalized color space. In level 2, a new eye model is devised to select human face candidates in the segmented skin-like regions. An important feature of the eye model is that it is independent of the scale of the face, so faces at different scales can be found by scanning the image only once, which greatly reduces the computation time of face detection. In level 3, a human face mosaic image model, which is well matched to the physical structure of the human face, is applied to judge whether faces are present in the candidate regions. This model includes edge and gray rules. Experimental results show that the approach is highly robust and fast, with broad application prospects in human-computer interaction, video telephony, and related areas.
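
    Level 1 of such a hierarchy reduces to a per-pixel rule. A sketch combining HSV hue/saturation bounds with normalized red/green bounds; every threshold here is an illustrative assumption, not the paper's calibrated values:

```python
import colorsys

def is_skin_pixel(r, g, b):
    """Skin-likeness test in the spirit of the level-1 skin model:
    hue/saturation bounds in HSV plus bounds on normalized r and g.
    All numeric thresholds are hypothetical."""
    h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    total = (r + g + b) or 1          # avoid division by zero for black
    rn, gn = r / total, g / total     # normalized color space
    return (h <= 50 / 360.0 and 0.15 <= s <= 0.9
            and 0.35 <= rn <= 0.65 and 0.25 <= gn <= 0.4)
```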

  3. Mathematical and computational approaches can complement experimental studies of host-pathogen interactions.

    PubMed

    Kirschner, Denise E; Linderman, Jennifer J

    2009-04-01

    In addition to traditional and novel experimental approaches to study host-pathogen interactions, mathematical and computer modelling have recently been applied to address open questions in this area. These modelling tools not only offer an additional avenue for exploring disease dynamics at multiple biological scales, but also complement and extend knowledge gained via experimental tools. In this review, we outline four examples where modelling has complemented current experimental techniques in a way that can or has already pushed our knowledge of host-pathogen dynamics forward. Two of the modelling approaches presented go hand in hand with articles in this issue exploring fluorescence resonance energy transfer and two-photon intravital microscopy. Two others explore virtual or 'in silico' deletion and depletion as well as a new method to understand and guide studies in genetic epidemiology. In each of these examples, the complementary nature of modelling and experiment is discussed. We further note that multi-scale modelling may allow us to integrate information across length (molecular, cellular, tissue, organism, population) and time (e.g. seconds to lifetimes). In sum, when combined, these compatible approaches offer new opportunities for understanding host-pathogen interactions.

  4. Automated Photoreceptor Cell Identification on Nonconfocal Adaptive Optics Images Using Multiscale Circular Voting.

    PubMed

    Liu, Jianfei; Jung, HaeWon; Dubra, Alfredo; Tam, Johnny

    2017-09-01

    Adaptive optics scanning light ophthalmoscopy (AOSLO) has enabled quantification of the photoreceptor mosaic in the living human eye using metrics such as cell density and average spacing. These rely on the identification of individual cells. Here, we demonstrate a novel approach for computer-aided identification of cone photoreceptors on nonconfocal split detection AOSLO images. Algorithms for identification of cone photoreceptors were developed, based on multiscale circular voting (MSCV) in combination with a priori knowledge that split detection images resemble Nomarski differential interference contrast images, in which dark and bright regions are present on the two sides of each cell. The proposed algorithm locates dark and bright region pairs, iteratively refining the identification across multiple scales. Identification accuracy was assessed in data from 10 subjects by comparing automated identifications with manual labeling, followed by computation of density and spacing metrics for comparison to histology and published data. There was good agreement between manual and automated cone identifications with overall recall, precision, and F1 score of 92.9%, 90.8%, and 91.8%, respectively. On average, computed density and spacing values using automated identification were within 10.7% and 11.2% of the expected histology values across eccentricities ranging from 0.5 to 6.2 mm. There was no statistically significant difference between MSCV-based and histology-based density measurements (P = 0.96, Kolmogorov-Smirnov 2-sample test). MSCV can accurately detect cone photoreceptors on split detection images across a range of eccentricities, enabling quick, objective estimation of photoreceptor mosaic metrics, which will be important for future clinical trials utilizing adaptive optics.
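
    Agreement between automated and manual identifications is scored by one-to-one matching within a spatial tolerance, followed by precision, recall and F1. A greedy sketch (the paper may use a different matching strategy):

```python
def match_metrics(auto_pts, manual_pts, tol=2.0):
    """Greedily match automated vs. manual cone locations within a
    distance tolerance, then report (precision, recall, F1)."""
    unmatched = list(manual_pts)
    tp = 0
    for ax, ay in auto_pts:
        for m in unmatched:
            if (ax - m[0]) ** 2 + (ay - m[1]) ** 2 <= tol ** 2:
                unmatched.remove(m)  # each manual point matches once
                tp += 1
                break
    precision = tp / len(auto_pts) if auto_pts else 0.0
    recall = tp / len(manual_pts) if manual_pts else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1
```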

  5. Generalized multiscale finite-element method (GMsFEM) for elastic wave propagation in heterogeneous, anisotropic media

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gao, Kai; Fu, Shubin; Gibson, Richard L.

    It is important to develop fast yet accurate numerical methods for seismic wave propagation to characterize complex geological structures and oil and gas reservoirs. However, the computational cost of conventional numerical modeling methods, such as the finite-difference method and the finite-element method, becomes prohibitively expensive when applied to very large models. We propose a Generalized Multiscale Finite-Element Method (GMsFEM) for elastic wave propagation in heterogeneous, anisotropic media, where we construct basis functions from multiple local problems for both the boundaries and interior of a coarse node support or coarse element. The application of multiscale basis functions can capture the fine-scale medium property variations, and allows us to greatly reduce the degrees of freedom required for the modeling compared with the conventional finite-element method for the wave equation, while restricting the error to low values. We formulate both continuous Galerkin and discontinuous Galerkin versions of the multiscale method, each of which has pros and cons. Applications of the multiscale method to three heterogeneous models show that it can effectively model elastic wave propagation in anisotropic media with a significant reduction in the degrees of freedom of the modeling system.
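
    The degree-of-freedom reduction at the heart of any multiscale finite-element method is a Galerkin projection onto a small basis supported on the coarse grid. The sketch below uses plain hat functions on a 1-D discrete Laplacian; GMsFEM improves on such hats by enriching the basis with solutions of local spectral problems:

```python
def hat_basis(n_fine, n_coarse):
    """Columns are coarse piecewise-linear hat functions sampled at the
    fine nodes (a stand-in for the enriched GMsFEM basis)."""
    P = [[0.0] * n_coarse for _ in range(n_fine)]
    h = (n_fine - 1) / (n_coarse - 1)  # fine nodes per coarse interval
    for i in range(n_fine):
        for j in range(n_coarse):
            P[i][j] = max(0.0, 1.0 - abs(i - j * h) / h)
    return P

def galerkin_project(A, P):
    """Coarse operator P^T A P: the dimension-reduction step that shrinks
    an n_fine x n_fine system to n_coarse x n_coarse."""
    n, m = len(P), len(P[0])
    AP = [[sum(A[i][k] * P[k][j] for k in range(n)) for j in range(m)]
          for i in range(n)]
    return [[sum(P[k][i] * AP[k][j] for k in range(n)) for j in range(m)]
            for i in range(m)]
```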

  6. Generalized Multiscale Finite-Element Method (GMsFEM) for elastic wave propagation in heterogeneous, anisotropic media

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gao, Kai, E-mail: kaigao87@gmail.com; Fu, Shubin, E-mail: shubinfu89@gmail.com; Gibson, Richard L., E-mail: gibson@tamu.edu

    It is important to develop fast yet accurate numerical methods for seismic wave propagation to characterize complex geological structures and oil and gas reservoirs. However, the computational cost of conventional numerical modeling methods, such as the finite-difference method and the finite-element method, becomes prohibitively expensive when applied to very large models. We propose a Generalized Multiscale Finite-Element Method (GMsFEM) for elastic wave propagation in heterogeneous, anisotropic media, where we construct basis functions from multiple local problems for both the boundaries and interior of a coarse node support or coarse element. The application of multiscale basis functions can capture the fine-scale medium property variations, and allows us to greatly reduce the degrees of freedom required for the modeling compared with the conventional finite-element method for the wave equation, while restricting the error to low values. We formulate both continuous Galerkin and discontinuous Galerkin versions of the multiscale method, each of which has pros and cons. Applications of the multiscale method to three heterogeneous models show that it can effectively model elastic wave propagation in anisotropic media with a significant reduction in the degrees of freedom of the modeling system.

  7. Generalized multiscale finite-element method (GMsFEM) for elastic wave propagation in heterogeneous, anisotropic media

    DOE PAGES

    Gao, Kai; Fu, Shubin; Gibson, Richard L.; ...

    2015-04-14

    It is important to develop fast yet accurate numerical methods for seismic wave propagation to characterize complex geological structures and oil and gas reservoirs. However, the computational cost of conventional numerical modeling methods, such as the finite-difference method and the finite-element method, becomes prohibitively expensive when applied to very large models. We propose a Generalized Multiscale Finite-Element Method (GMsFEM) for elastic wave propagation in heterogeneous, anisotropic media, where we construct basis functions from multiple local problems for both the boundaries and interior of a coarse node support or coarse element. The application of multiscale basis functions can capture the fine-scale medium property variations, and allows us to greatly reduce the degrees of freedom required for the modeling compared with the conventional finite-element method for the wave equation, while restricting the error to low values. We formulate both continuous Galerkin and discontinuous Galerkin versions of the multiscale method, each of which has pros and cons. Applications of the multiscale method to three heterogeneous models show that it can effectively model elastic wave propagation in anisotropic media with a significant reduction in the degrees of freedom of the modeling system.

  8. Simulating cancer growth with multiscale agent-based modeling.

    PubMed

    Wang, Zhihui; Butner, Joseph D; Kerketta, Romica; Cristini, Vittorio; Deisboeck, Thomas S

    2015-02-01

    There have been many techniques developed in recent years to in silico model a variety of cancer behaviors. Agent-based modeling is a specific discrete-based hybrid modeling approach that allows simulating the role of diversity in cell populations as well as within each individual cell; it has therefore become a powerful modeling method widely used by computational cancer researchers. Many aspects of tumor morphology including phenotype-changing mutations, the adaptation to microenvironment, the process of angiogenesis, the influence of extracellular matrix, reactions to chemotherapy or surgical intervention, the effects of oxygen and nutrient availability, and metastasis and invasion of healthy tissues have been incorporated and investigated in agent-based models. In this review, we introduce some of the most recent agent-based models that have provided insight into the understanding of cancer growth and invasion, spanning multiple biological scales in time and space, and we further describe several experimentally testable hypotheses generated by those models. We also discuss some of the current challenges of multiscale agent-based cancer models.
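
    A minimal agent-based growth model makes the approach concrete: agents occupy lattice sites and divide stochastically into empty neighbours. Real models layer on phenotypes, microenvironment, angiogenesis and treatment effects; this toy only shows the simulation loop:

```python
import random

def grow_tumor(steps=10, size=21, p_div=0.3, seed=1):
    """Toy agent-based tumour growth on a size x size lattice: each
    occupied site may divide into a random empty von Neumann neighbour
    at each step. Returns the final cell count."""
    rng = random.Random(seed)
    occupied = {(size // 2, size // 2)}  # single seed cell in the centre
    for _ in range(steps):
        for (x, y) in list(occupied):
            if rng.random() < p_div:
                nbrs = [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
                empty = [n for n in nbrs
                         if n not in occupied
                         and 0 <= n[0] < size and 0 <= n[1] < size]
                if empty:
                    occupied.add(rng.choice(empty))
    return len(occupied)
```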

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Na; Zhang, Peng; Kang, Wei

    Multiscale simulations of fluids such as blood represent a major computational challenge of coupling the disparate spatiotemporal scales between molecular and macroscopic transport phenomena characterizing such complex fluids. In this paper, a coarse-grained (CG) particle model is developed for simulating blood flow by modifying the Morse potential, traditionally used in molecular dynamics for modeling vibrating structures. The modified Morse potential is parameterized with effective mass scales for reproducing blood viscous flow properties, including density, pressure, viscosity, compressibility and characteristic flow dynamics of human blood plasma fluid. The parameterization follows a standard inverse-problem approach in which the optimal micro parameters are systematically searched, by gradually decoupling loosely correlated parameter spaces, to match the macro physical quantities of viscous blood flow. The predictions of this particle-based multiscale model compare favorably to classic viscous flow solutions such as Counter-Poiseuille and Couette flows. This demonstrates that such a coarse-grained particle model can be applied to replicate the dynamics of viscous blood flow, with the advantage of bridging the gap between macroscopic flow scales and the cellular scales characterizing blood flow that continuum-based models fail to handle adequately.
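
    The Morse potential and its force are simple closed forms, which is what makes the potential convenient to modify and re-parameterize. A sketch with placeholder parameters (the paper calibrates its modified form against blood-plasma properties via the inverse-problem search described above):

```python
import math

def morse_energy(r, r0=1.0, beta=1.0, eps=1.0):
    """Morse pair potential
    U(r) = eps * (exp(-2*beta*(r - r0)) - 2*exp(-beta*(r - r0)));
    r0 is the equilibrium distance, eps the well depth, beta the width."""
    e = math.exp(-beta * (r - r0))
    return eps * (e * e - 2.0 * e)

def morse_force(r, r0=1.0, beta=1.0, eps=1.0):
    """Radial force -dU/dr: positive = repulsive, negative = attractive."""
    e = math.exp(-beta * (r - r0))
    return 2.0 * beta * eps * (e * e - e)
```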

  10. Physics-based multiscale coupling for full core nuclear reactor simulation

    DOE PAGES

    Gaston, Derek R.; Permann, Cody J.; Peterson, John W.; ...

    2015-10-01

    Numerical simulation of nuclear reactors is a key technology in the quest for improvements in efficiency, safety, and reliability of both existing and future reactor designs. Historically, simulation of an entire reactor was accomplished by linking together multiple existing codes that each simulated a subset of the relevant multiphysics phenomena. Recent advances in the MOOSE (Multiphysics Object Oriented Simulation Environment) framework have enabled a new approach: multiple domain-specific applications, all built on the same software framework, are efficiently linked to create a cohesive application. This is accomplished with a flexible coupling capability that allows a variety of different data exchanges to occur simultaneously on high-performance parallel computational hardware. Examples based on the KAIST-3A benchmark core, as well as a simplified Westinghouse AP-1000 configuration, demonstrate the power of this new framework for tackling, in a coupled, multiscale manner, crucial reactor phenomena such as CRUD-induced power shift and fuel shuffle.
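
    Data exchange between separately developed single-physics solvers is often a fixed-point (Picard) iteration: each application solves with the other's latest field until the exchanged values stop changing. A scalar sketch of that pattern (MOOSE's actual transfer machinery is, of course, far more general):

```python
def picard_couple(solve_a, solve_b, u0, v0, tol=1e-10, max_iter=100):
    """Operator-split coupling of two solvers: alternately solve each
    physics using the other's latest field until the exchanged data
    converges. Returns the coupled solution (u, v)."""
    u, v = u0, v0
    for _ in range(max_iter):
        u_new = solve_a(v)       # physics A sees B's latest field
        v_new = solve_b(u_new)   # physics B sees A's updated field
        if abs(u_new - u) + abs(v_new - v) < tol:
            return u_new, v_new
        u, v = u_new, v_new
    return u, v
```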

  11. Advances and challenges in logical modeling of cell cycle regulation: perspective for multi-scale, integrative yeast cell models

    PubMed Central

    Todd, Robert G.; van der Zee, Lucas

    2016-01-01

    The eukaryotic cell cycle is robustly designed, with interacting molecules organized within a definite topology that ensures temporal precision of its phase transitions. Its underlying dynamics are regulated by molecular switches, for which remarkable insights have been provided by genetic and molecular biology efforts. In a number of cases, this information has been made predictive through computational models. These models have allowed for the identification of novel molecular mechanisms, later validated experimentally. Logical modeling represents one of the youngest approaches to address cell cycle regulation. We summarize the advances that this type of modeling has achieved in reproducing and predicting cell cycle dynamics. Furthermore, we present the challenge that this type of modeling is now ready to tackle: its integration, and that of its formalisms, with intracellular networks, to understand the crosstalk underlying systems-level properties, the ultimate aim of multi-scale models. Specifically, we discuss and illustrate how such an integration may be realized by integrating a minimal logical model of the cell cycle with a metabolic network. PMID:27993914
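
    In logical (Boolean) modeling, each regulator is on or off and the network is advanced by an update rule. A three-node negative-feedback toy, not the actual yeast cell-cycle network, shows how cyclic attractors arise under synchronous updating:

```python
def step(state, rules):
    """Synchronous update of a Boolean network: every node recomputes
    its value from the current state. `rules` maps node -> update fn."""
    return {node: fn(state) for node, fn in rules.items()}

# Hypothetical 3-node negative-feedback loop (illustrative only):
rules = {
    "A": lambda s: not s["C"],
    "B": lambda s: s["A"],
    "C": lambda s: s["B"],
}
```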

  12. Multiscale Investigations of the Early Stage Oxidation on Cu Surfaces

    NASA Astrophysics Data System (ADS)

    Zhu, Qing; Xiao, Penghao; Lian, Xin; Yang, Shen-Che; Henkelman, Graeme; Saidi, Wissam; Yang, Judith; University of Pittsburgh Team; University of Texas at Austin Team

    Previous in situ TEM experiments have shown that the oxidation of the three low-index Cu surfaces (100), (110) and (111) exhibits different oxide nucleation rates, and the resulting oxides form 3-dimensional (3D) islands or 2D rafts under different conditions. In order to better understand these results, we have investigated the early stages of Cu oxidation using a multiscale computational approach that employs density functional theory (DFT), the reactive force field (ReaxFF), and kinetic Monte Carlo (KMC). With DFT calculations, we have compared O2 dissociation barriers on the Cu (100), (110) and (111) surfaces at high oxygen coverage to evaluate the kinetic barrier of sublayer oxidation. We found that O2 dissociation barriers on the Cu(111) surface are all lower than those on the (110) and (100) surfaces. This trend agrees with experimental observations that the (111) surface is easier to oxidize. These DFT-calculated energy barriers are then incorporated into KMC simulations. The large-scale ReaxFF molecular dynamics and KMC simulations detail the oxidation dynamics of the different Cu surfaces, and show the formation of various oxide morphologies that are consistent with experimental observations.
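
    The DFT-to-KMC handoff works by converting each computed barrier into a rate via the Arrhenius expression and drawing events with probability proportional to rate. A single-event sketch (barrier values and prefactor are illustrative):

```python
import math
import random

def kmc_select(barriers, T=300.0, prefactor=1e13, seed=0):
    """One kinetic Monte Carlo event selection: rates follow the Arrhenius
    form k = v * exp(-Ea / (kB*T)). Returns (chosen event index, time step)."""
    kB = 8.617e-5  # Boltzmann constant in eV/K
    rng = random.Random(seed)
    rates = [prefactor * math.exp(-Ea / (kB * T)) for Ea in barriers]
    total = sum(rates)
    r = rng.random() * total
    acc = 0.0
    chosen = len(rates) - 1
    for i, k in enumerate(rates):
        acc += k
        if r <= acc:
            chosen = i
            break
    dt = -math.log(rng.random()) / total  # exponentially distributed time
    return chosen, dt
```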

  13. Multiscale Modeling of Diffusion in a Crowded Environment.

    PubMed

    Meinecke, Lina

    2017-11-01

    We present a multiscale approach to model diffusion in a crowded environment and its effect on reaction rates. Diffusion in biological systems is often modeled by a discrete-space jump process in order to capture the inherent noise of biological systems, which becomes important in the low copy number regime. To model diffusion in the crowded cell environment efficiently, we compute the jump rates in this mesoscopic model from local first exit times, which account for the microscopic positions of the crowding molecules, while the diffusing molecules jump on a coarser Cartesian grid. We then extract a macroscopic description from the resulting jump rates, in which the excluded-volume effect is modeled by a diffusion equation with a space-dependent diffusion coefficient. The crowding molecules can be of arbitrary shape and size, and numerical experiments demonstrate that these factors, together with the size of the diffusing molecule, play a crucial role in determining the magnitude of the decrease in diffusive motion. When correcting the reaction rates for the altered diffusion, we show that molecular crowding either enhances or inhibits chemical reactions depending on local fluctuations of the obstacle density.
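The mesoscopic layer can be illustrated with a 1D caricature: per-site jump rates derived from a space-dependent diffusion coefficient. The linear reduction D0*(1 - phi) below is a deliberate simplification standing in for the paper's first-exit-time computation:

```python
import random

# 1D caricature of the mesoscopic jump process: the jump rate at each grid
# site comes from a space-dependent diffusion coefficient, here reduced
# linearly by a hypothetical local excluded-volume fraction phi.
def jump_rates(phi, d0=1.0, h=0.1):
    """phi[i]: crowding fraction at site i; rate of one jump is D(x)/h^2."""
    return [d0 * (1.0 - p) / h ** 2 for p in phi]

def simulate(phi, n_jumps, seed=0):
    """Jump on the grid with reflecting boundaries; return final site and time."""
    rng = random.Random(seed)
    rates = jump_rates(phi)
    x, t = len(phi) // 2, 0.0
    for _ in range(n_jumps):
        t += rng.expovariate(2.0 * rates[x])  # left + right jump channels
        x += rng.choice((-1, 1))
        x = max(0, min(len(phi) - 1, x))
    return x, t
```

Denser crowding (larger phi) lowers the local rate, so the walker spends more time per jump there, which is the excluded-volume slowdown the abstract describes.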

  14. A Multi-scale Modeling System with Unified Physics to Study Precipitation Processes

    NASA Astrophysics Data System (ADS)

    Tao, W. K.

    2017-12-01

    In recent years, exponentially increasing computer power has extended Cloud Resolving Model (CRM) integrations from hours to months, and the number of computational grid points from less than a thousand to close to ten million. Three-dimensional models are now more prevalent. Much attention is devoted to precipitating cloud systems where the crucial 1-km scales are resolved in horizontal domains as large as 10,000 km in two dimensions, and 1,000 x 1,000 km2 in three dimensions. Cloud resolving models now provide statistical information useful for developing more realistic physically based parameterizations for climate models and numerical weather prediction models. It is also expected that NWP and mesoscale models can be run at grid sizes similar to cloud resolving models through nesting techniques. Recently, a multi-scale modeling system with unified physics was developed at NASA Goddard. It consists of (1) a cloud-resolving model (Goddard Cumulus Ensemble model, GCE model), (2) a regional scale model (a NASA unified weather research and forecast, WRF), and (3) a coupled CRM and global model (Goddard Multi-scale Modeling Framework, MMF). The same microphysical processes, long- and short-wave radiative transfer, land processes, and the explicit cloud-radiation and cloud-land surface interactive processes are applied across this multi-scale modeling system. This modeling system has been coupled with a multi-satellite simulator to use NASA high-resolution satellite data to identify the strengths and weaknesses of cloud and precipitation processes simulated by the model. In this talk, a review of developments and applications of the multi-scale modeling system will be presented. In particular, results from using the multi-scale modeling system to study precipitation processes and their sensitivity to model resolution and microphysics schemes will be presented. Also, how to use the multi-satellite simulator to improve simulated precipitation processes will be discussed.

  15. Using Multi-Scale Modeling Systems and Satellite Data to Study the Precipitation Processes

    NASA Technical Reports Server (NTRS)

    Tao, Wei-Kuo; Chern, J.; Lang, S.; Matsui, T.; Shen, B.; Zeng, X.; Shi, R.

    2011-01-01

    In recent years, exponentially increasing computer power has extended Cloud Resolving Model (CRM) integrations from hours to months, and the number of computational grid points from less than a thousand to close to ten million. Three-dimensional models are now more prevalent. Much attention is devoted to precipitating cloud systems where the crucial 1-km scales are resolved in horizontal domains as large as 10,000 km in two dimensions, and 1,000 x 1,000 km2 in three dimensions. Cloud resolving models now provide statistical information useful for developing more realistic physically based parameterizations for climate models and numerical weather prediction models. It is also expected that NWP and mesoscale models can be run at grid sizes similar to cloud resolving models through nesting techniques. Recently, a multi-scale modeling system with unified physics was developed at NASA Goddard. It consists of (1) a cloud-resolving model (Goddard Cumulus Ensemble model, GCE model), (2) a regional scale model (a NASA unified weather research and forecast, WRF), (3) a coupled CRM and global model (Goddard Multi-scale Modeling Framework, MMF), and (4) a land modeling system. The same microphysical processes, long- and short-wave radiative transfer, land processes, and the explicit cloud-radiation and cloud-land surface interactive processes are applied across this multi-scale modeling system. This modeling system has been coupled with a multi-satellite simulator to use NASA high-resolution satellite data to identify the strengths and weaknesses of cloud and precipitation processes simulated by the model. In this talk, the recent developments and applications of the multi-scale modeling system will be presented. In particular, results from using the multi-scale modeling system to study precipitating systems and hurricanes/typhoons will be presented. High-resolution spatial and temporal visualization will be utilized to show the evolution of precipitation processes. Also, how to use the multi-satellite simulator to improve simulated precipitation processes will be discussed.

  16. Using Multi-Scale Modeling Systems and Satellite Data to Study the Precipitation Processes

    NASA Technical Reports Server (NTRS)

    Tao, Wei-Kuo; Chern, J.; Lang, S.; Matsui, T.; Shen, B.; Zeng, X.; Shi, R.

    2010-01-01

    In recent years, exponentially increasing computer power has extended Cloud Resolving Model (CRM) integrations from hours to months, and the number of computational grid points from less than a thousand to close to ten million. Three-dimensional models are now more prevalent. Much attention is devoted to precipitating cloud systems where the crucial 1-km scales are resolved in horizontal domains as large as 10,000 km in two dimensions, and 1,000 x 1,000 km2 in three dimensions. Cloud resolving models now provide statistical information useful for developing more realistic physically based parameterizations for climate models and numerical weather prediction models. It is also expected that NWP and mesoscale models can be run at grid sizes similar to cloud resolving models through nesting techniques. Recently, a multi-scale modeling system with unified physics was developed at NASA Goddard. It consists of (1) a cloud-resolving model (Goddard Cumulus Ensemble model, GCE model), (2) a regional scale model (a NASA unified weather research and forecast, WRF), (3) a coupled CRM and global model (Goddard Multi-scale Modeling Framework, MMF), and (4) a land modeling system. The same microphysical processes, long- and short-wave radiative transfer, land processes, and the explicit cloud-radiation and cloud-land surface interactive processes are applied across this multi-scale modeling system. This modeling system has been coupled with a multi-satellite simulator to use NASA high-resolution satellite data to identify the strengths and weaknesses of cloud and precipitation processes simulated by the model. In this talk, a review of developments and applications of the multi-scale modeling system will be presented. In particular, results from using multi-scale modeling systems to study the interactions between clouds, precipitation, and aerosols will be presented. Also, how to use the multi-satellite simulator to improve precipitation processes will be discussed.

  17. Using Multi-Scale Modeling Systems to Study the Precipitation Processes

    NASA Technical Reports Server (NTRS)

    Tao, Wei-Kuo

    2010-01-01

    In recent years, exponentially increasing computer power has extended Cloud Resolving Model (CRM) integrations from hours to months, and the number of computational grid points from less than a thousand to close to ten million. Three-dimensional models are now more prevalent. Much attention is devoted to precipitating cloud systems where the crucial 1-km scales are resolved in horizontal domains as large as 10,000 km in two dimensions, and 1,000 x 1,000 km2 in three dimensions. Cloud resolving models now provide statistical information useful for developing more realistic physically based parameterizations for climate models and numerical weather prediction models. It is also expected that NWP and mesoscale models can be run at grid sizes similar to cloud resolving models through nesting techniques. Recently, a multi-scale modeling system with unified physics was developed at NASA Goddard. It consists of (1) a cloud-resolving model (Goddard Cumulus Ensemble model, GCE model), (2) a regional scale model (a NASA unified weather research and forecast, WRF), (3) a coupled CRM and global model (Goddard Multi-scale Modeling Framework, MMF), and (4) a land modeling system. The same microphysical processes, long- and short-wave radiative transfer, land processes, and the explicit cloud-radiation and cloud-land surface interactive processes are applied across this multi-scale modeling system. This modeling system has been coupled with a multi-satellite simulator to use NASA high-resolution satellite data to identify the strengths and weaknesses of cloud and precipitation processes simulated by the model. In this talk, a review of developments and applications of the multi-scale modeling system will be presented. In particular, results from using the multi-scale modeling system to study the interactions between clouds, precipitation, and aerosols will be presented. Also, how to use the multi-satellite simulator to improve precipitation processes will be discussed.

  18. Multiscale methods for gore curvature calculations from FSI modeling of spacecraft parachutes

    NASA Astrophysics Data System (ADS)

    Takizawa, Kenji; Tezduyar, Tayfun E.; Kolesar, Ryan; Boswell, Cody; Kanai, Taro; Montel, Kenneth

    2014-12-01

    There are now sophisticated and powerful methods for computer modeling of parachutes. These methods are capable of addressing some of the most formidable computational challenges encountered in parachute modeling, including fluid-structure interaction (FSI) between the parachute and air flow, design complexities such as those seen in spacecraft parachutes, and operational complexities such as use in clusters and disreefing. One should be able to extract any data or analysis needed from a reliable full-scale parachute model. In some cases, however, parachute engineers may want to quickly perform an extended or repetitive analysis with methods based on simplified models. Some of the data needed by a simplified model can very effectively be extracted from a full-scale computer model that serves as a pilot. A good example of such data is the circumferential curvature of a parachute gore, where a gore is the slice of the parachute canopy between two radial reinforcement cables running from the parachute vent to the skirt. We present the multiscale methods we devised for gore curvature calculation from FSI modeling of spacecraft parachutes. The methods include those based on the multiscale sequentially-coupled FSI technique and the use of NURBS meshes. We show how the methods work for the fully-open and two reefed stages of the Orion spacecraft main and drogue parachutes.
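As a much simpler stand-in for the methods above, circumferential curvature at a canopy point can be estimated from three neighboring points via the circumscribed circle, kappa = 4A/(abc). This generic discrete estimate is for illustration only and is not the authors' NURBS-based technique:

```python
import math

# Discrete curvature from three points via the circumcircle: kappa = 4A/(abc),
# where A is the triangle area and a, b, c its side lengths.
def curvature(p, q, r):
    a = math.dist(q, r)
    b = math.dist(p, r)
    c = math.dist(p, q)
    # Twice the (unsigned) triangle area from the cross product.
    twice_area = abs((q[0] - p[0]) * (r[1] - p[1]) - (r[0] - p[0]) * (q[1] - p[1]))
    if twice_area == 0.0:
        return 0.0  # collinear points: flat
    return 2.0 * twice_area / (a * b * c)  # 4A/(abc), since twice_area = 2A
```

Three points sampled on a circle of radius R recover curvature 1/R, so sweeping this over points along a gore cross-section yields a discrete circumferential curvature profile.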

  19. An agent-based model of leukocyte transendothelial migration during atherogenesis.

    PubMed

    Bhui, Rita; Hayenga, Heather N

    2017-05-01

    A vast amount of work has been dedicated to the effects of hemodynamics and cytokines on leukocyte adhesion and trans-endothelial migration (TEM) and the subsequent accumulation of leukocyte-derived foam cells in the artery wall. However, a comprehensive mechanobiological model to capture these spatiotemporal events and predict the growth and remodeling of an atherosclerotic artery is still lacking. Here, we present a multiscale model of leukocyte TEM and plaque evolution in the left anterior descending (LAD) coronary artery. The approach integrates cellular behaviors via agent-based modeling (ABM) and hemodynamic effects via computational fluid dynamics (CFD). In this computational framework, the ABM implements the diffusion kinetics of key biological proteins, namely Low Density Lipoprotein (LDL), Tumor Necrosis Factor alpha (TNF-α), Interleukin-10 (IL-10) and Interleukin-1 beta (IL-1β), to predict chemotaxis-driven leukocyte migration into and within the artery wall. The ABM also considers wall shear stress (WSS) dependent leukocyte TEM and compensatory arterial remodeling obeying Glagov's phenomenon. Interestingly, using fully developed steady blood flow does not result in a representative number of leukocyte TEM events as compared to pulsatile flow, whereas passing WSS at peak systole of the pulsatile flow waveform does. Moreover, using the model, we have found that leukocyte TEM increases monotonically with decreases in luminal volume. At critical plaque shapes, the WSS changes rapidly, resulting in sudden increases in leukocyte TEM and suggesting lumen volumes that will give rise to rapid plaque growth rates if left untreated. Overall, this multi-scale and multi-physics approach appropriately captures and integrates the spatiotemporal events occurring at the cellular level in order to predict leukocyte transmigration and plaque evolution.
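A WSS-dependent agent rule of the kind used in such an ABM can be sketched as follows. The functional forms, threshold, and gain below are hypothetical placeholders, not the calibrated model of the paper:

```python
import random

# Schematic agent rule for WSS-dependent transendothelial migration: low wall
# shear stress favors adhesion, and a local chemoattractant level raises the
# odds. Parameters wss_half and chemo_gain are hypothetical.
def tem_probability(wss_pa, chemokine, wss_half=1.0, chemo_gain=0.5):
    adhesion = 1.0 / (1.0 + wss_pa / wss_half)  # high WSS suppresses adhesion
    return min(1.0, adhesion * (1.0 + chemo_gain * chemokine))

def step_agents(agents, rng):
    """agents: dicts with 'wss' (Pa) and 'chemokine'; count who transmigrate."""
    return sum(1 for a in agents
               if rng.random() < tem_probability(a["wss"], a["chemokine"]))
```

Because the probability falls with WSS, regions of low or rapidly changing shear (e.g. near a critical plaque shape) accumulate transmigration events, qualitatively echoing the behavior the abstract reports.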

  20. An agent-based model of leukocyte transendothelial migration during atherogenesis

    PubMed Central

    Bhui, Rita; Hayenga, Heather N.

    2017-01-01

    A vast amount of work has been dedicated to the effects of hemodynamics and cytokines on leukocyte adhesion and trans-endothelial migration (TEM) and the subsequent accumulation of leukocyte-derived foam cells in the artery wall. However, a comprehensive mechanobiological model to capture these spatiotemporal events and predict the growth and remodeling of an atherosclerotic artery is still lacking. Here, we present a multiscale model of leukocyte TEM and plaque evolution in the left anterior descending (LAD) coronary artery. The approach integrates cellular behaviors via agent-based modeling (ABM) and hemodynamic effects via computational fluid dynamics (CFD). In this computational framework, the ABM implements the diffusion kinetics of key biological proteins, namely Low Density Lipoprotein (LDL), Tumor Necrosis Factor alpha (TNF-α), Interleukin-10 (IL-10) and Interleukin-1 beta (IL-1β), to predict chemotaxis-driven leukocyte migration into and within the artery wall. The ABM also considers wall shear stress (WSS) dependent leukocyte TEM and compensatory arterial remodeling obeying Glagov’s phenomenon. Interestingly, using fully developed steady blood flow does not result in a representative number of leukocyte TEM events as compared to pulsatile flow, whereas passing WSS at peak systole of the pulsatile flow waveform does. Moreover, using the model, we have found that leukocyte TEM increases monotonically with decreases in luminal volume. At critical plaque shapes, the WSS changes rapidly, resulting in sudden increases in leukocyte TEM and suggesting lumen volumes that will give rise to rapid plaque growth rates if left untreated. Overall, this multi-scale and multi-physics approach appropriately captures and integrates the spatiotemporal events occurring at the cellular level in order to predict leukocyte transmigration and plaque evolution. PMID:28542193

  1. Combining frozen-density embedding with the conductor-like screening model using Lagrangian techniques for response properties.

    PubMed

    Schieschke, Nils; Di Remigio, Roberto; Frediani, Luca; Heuser, Johannes; Höfener, Sebastian

    2017-07-15

    We present the explicit derivation of an approach to the multiscale description of molecules in complex environments that combines frozen-density embedding (FDE) with continuum solvation models, in particular the conductor-like screening model (COSMO). FDE provides an explicit atomistic description of molecule-environment interactions at reduced computational cost, while the outer continuum layer accounts for the effect of long-range isotropic electrostatic interactions. Our treatment is based on a variational Lagrangian framework, enabling rigorous derivations of ground- and excited-state response properties. As an example of the flexibility of the theoretical framework, we derive and discuss FDE + COSMO analytical molecular gradients for excited states within the Tamm-Dancoff approximation (TDA) and for ground states within second-order Møller-Plesset perturbation theory (MP2) and the second-order approximate coupled-cluster singles and doubles model (CC2). It is shown how this method can be used to describe vertical electronic excitation (VEE) energies and Stokes shifts for uracil in water and carbostyril in dimethyl sulfoxide (DMSO), respectively. In addition, VEEs for some simplified protein models are computed, illustrating the performance of this method when applied to larger systems. The interaction terms between the FDE subsystem densities and the continuum can influence excitation energies by up to 0.3 eV and, thus, cannot be neglected for general applications. We find that the net influence of the continuum in the presence of the first FDE shell on the excitation energy amounts to about 0.05 eV for the cases investigated. The present work is an important step toward rigorously derived ab initio multilayer and multiscale modeling approaches. © 2017 Wiley Periodicals, Inc.

  2. Intelligent Fault Diagnosis of Rotary Machinery Based on Unsupervised Multiscale Representation Learning

    NASA Astrophysics Data System (ADS)

    Jiang, Guo-Qian; Xie, Ping; Wang, Xiao; Chen, Meng; He, Qun

    2017-11-01

    The performance of traditional vibration-based fault diagnosis methods greatly depends on handcrafted features extracted using signal processing algorithms, which require significant amounts of domain knowledge and human labor, and do not generalize well to new diagnosis domains. Recently, unsupervised representation learning has provided a promising alternative to feature extraction in traditional fault diagnosis due to its superior ability to learn from unlabeled data. Given that vibration signals usually contain multiple temporal structures, this paper proposes a multiscale representation learning (MSRL) framework to learn useful features directly from raw vibration signals, with the aim of capturing rich and complementary fault pattern information at different scales. In our proposed approach, a coarse-grained procedure is first employed to obtain multiple scale signals from an original vibration signal. Then, sparse filtering, a newly developed unsupervised learning algorithm, is applied to automatically learn useful features from each scale signal, and the learned features at each scale are concatenated to obtain multiscale representations. Finally, the multiscale representations are fed into a supervised classifier to obtain diagnosis results. Our proposed approach is evaluated using two different case studies: motor bearing and wind turbine gearbox fault diagnosis. Experimental results show that the proposed MSRL approach can take full advantage of the availability of unlabeled data to learn discriminative features, and achieves better performance with higher accuracy and stability compared to traditional approaches.
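The coarse-graining and concatenation steps above can be sketched directly; the scale-s series is the mean over consecutive non-overlapping windows of the raw signal. The sparse-filtering feature learner itself is not reproduced here, so any extractor can be plugged in via the `extract` argument:

```python
# Coarse-graining step behind multiscale representations: the scale-s series
# is the mean over consecutive non-overlapping windows of the raw signal.
def coarse_grain(signal, scale):
    n = len(signal) // scale
    return [sum(signal[i * scale:(i + 1) * scale]) / scale for i in range(n)]

def multiscale_features(signal, scales, extract):
    """Concatenate per-scale feature vectors into one multiscale representation.
    `extract` stands in for the paper's sparse-filtering feature learner."""
    features = []
    for s in scales:
        features.extend(extract(coarse_grain(signal, s)))
    return features
```

The concatenated vector is what would be fed to the supervised classifier in the final diagnosis stage.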

  3. Systems Biology for Organotypic Cell Cultures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grego, Sonia; Dougherty, Edward R.; Alexander, Francis J.

    Translating in vitro biological data into actionable information related to human health holds the potential to improve disease treatment and risk assessment of chemical exposures. While genomics has identified regulatory pathways at the cellular level, translation to the organism level requires a multiscale approach accounting for intra-cellular regulation, inter-cellular interaction, and tissue/organ-level effects. Tissue-level effects can now be probed in vitro thanks to recently developed systems of three-dimensional (3D), multicellular, “organotypic” cell cultures, which mimic functional responses of living tissue. However, there remains a knowledge gap regarding interactions across different biological scales, complicating accurate prediction of health outcomes from molecular/genomic data and tissue responses. Systems biology aims at mathematical modeling of complex, non-linear biological systems. We propose to apply a systems biology approach to achieve a computational representation of tissue-level physiological responses by integrating empirical data derived from organotypic culture systems with computational models of intracellular pathways to better predict human responses. Successful implementation of this integrated approach will provide a powerful tool for faster, more accurate and cost-effective screening of potential toxicants and therapeutics. On September 11, 2015, an interdisciplinary group of scientists, engineers, and clinicians gathered for a workshop in Research Triangle Park, North Carolina, to discuss this ambitious goal. Participants represented laboratory-based and computational modeling approaches to pharmacology and toxicology, as well as the pharmaceutical industry, government, non-profits, and academia. Discussions focused on identifying critical system perturbations to model, the computational tools required, and the experimental approaches best suited to generating key data. This consensus report summarizes the discussions held.

  4. Workshop Report: Systems Biology for Organotypic Cell Cultures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grego, Sonia; Dougherty, Edward R.; Alexander, Francis Joseph

    Translating in vitro biological data into actionable information related to human health holds the potential to improve disease treatment and risk assessment of chemical exposures. While genomics has identified regulatory pathways at the cellular level, translation to the organism level requires a multiscale approach accounting for intra-cellular regulation, inter-cellular interaction, and tissue/organ-level effects. Tissue-level effects can now be probed in vitro thanks to recently developed systems of three-dimensional (3D), multicellular, “organotypic” cell cultures, which mimic functional responses of living tissue. However, there remains a knowledge gap regarding interactions across different biological scales, complicating accurate prediction of health outcomes from molecular/genomic data and tissue responses. Systems biology aims at mathematical modeling of complex, non-linear biological systems. We propose to apply a systems biology approach to achieve a computational representation of tissue-level physiological responses by integrating empirical data derived from organotypic culture systems with computational models of intracellular pathways to better predict human responses. Successful implementation of this integrated approach will provide a powerful tool for faster, more accurate and cost-effective screening of potential toxicants and therapeutics. On September 11, 2015, an interdisciplinary group of scientists, engineers, and clinicians gathered for a workshop in Research Triangle Park, North Carolina, to discuss this ambitious goal. Participants represented laboratory-based and computational modeling approaches to pharmacology and toxicology, as well as the pharmaceutical industry, government, non-profits, and academia. Discussions focused on identifying critical system perturbations to model, the computational tools required, and the experimental approaches best suited to generating key data.

  5. Workshop Report: Systems Biology for Organotypic Cell Cultures

    DOE PAGES

    Grego, Sonia; Dougherty, Edward R.; Alexander, Francis Joseph; ...

    2016-11-14

    Translating in vitro biological data into actionable information related to human health holds the potential to improve disease treatment and risk assessment of chemical exposures. While genomics has identified regulatory pathways at the cellular level, translation to the organism level requires a multiscale approach accounting for intra-cellular regulation, inter-cellular interaction, and tissue/organ-level effects. Tissue-level effects can now be probed in vitro thanks to recently developed systems of three-dimensional (3D), multicellular, “organotypic” cell cultures, which mimic functional responses of living tissue. However, there remains a knowledge gap regarding interactions across different biological scales, complicating accurate prediction of health outcomes from molecular/genomic data and tissue responses. Systems biology aims at mathematical modeling of complex, non-linear biological systems. We propose to apply a systems biology approach to achieve a computational representation of tissue-level physiological responses by integrating empirical data derived from organotypic culture systems with computational models of intracellular pathways to better predict human responses. Successful implementation of this integrated approach will provide a powerful tool for faster, more accurate and cost-effective screening of potential toxicants and therapeutics. On September 11, 2015, an interdisciplinary group of scientists, engineers, and clinicians gathered for a workshop in Research Triangle Park, North Carolina, to discuss this ambitious goal. Participants represented laboratory-based and computational modeling approaches to pharmacology and toxicology, as well as the pharmaceutical industry, government, non-profits, and academia. Discussions focused on identifying critical system perturbations to model, the computational tools required, and the experimental approaches best suited to generating key data.

  6. Systems biology for organotypic cell cultures.

    PubMed

    Grego, Sonia; Dougherty, Edward R; Alexander, Francis J; Auerbach, Scott S; Berridge, Brian R; Bittner, Michael L; Casey, Warren; Cooley, Philip C; Dash, Ajit; Ferguson, Stephen S; Fennell, Timothy R; Hawkins, Brian T; Hickey, Anthony J; Kleensang, Andre; Liebman, Michael N J; Martin, Florian; Maull, Elizabeth A; Paragas, Jason; Qiao, Guilin Gary; Ramaiahgari, Sreenivasa; Sumner, Susan J; Yoon, Miyoung

    2017-01-01

    Translating in vitro biological data into actionable information related to human health holds the potential to improve disease treatment and risk assessment of chemical exposures. While genomics has identified regulatory pathways at the cellular level, translation to the organism level requires a multiscale approach accounting for intra-cellular regulation, inter-cellular interaction, and tissue/organ-level effects. Tissue-level effects can now be probed in vitro thanks to recently developed systems of three-dimensional (3D), multicellular, "organotypic" cell cultures, which mimic functional responses of living tissue. However, there remains a knowledge gap regarding interactions across different biological scales, complicating accurate prediction of health outcomes from molecular/genomic data and tissue responses. Systems biology aims at mathematical modeling of complex, non-linear biological systems. We propose to apply a systems biology approach to achieve a computational representation of tissue-level physiological responses by integrating empirical data derived from organotypic culture systems with computational models of intracellular pathways to better predict human responses. Successful implementation of this integrated approach will provide a powerful tool for faster, more accurate and cost-effective screening of potential toxicants and therapeutics. On September 11, 2015, an interdisciplinary group of scientists, engineers, and clinicians gathered for a workshop in Research Triangle Park, North Carolina, to discuss this ambitious goal. Participants represented laboratory-based and computational modeling approaches to pharmacology and toxicology, as well as the pharmaceutical industry, government, non-profits, and academia. Discussions focused on identifying critical system perturbations to model, the computational tools required, and the experimental approaches best suited to generating key data.

  7. Hybrid Parallelization of Adaptive MHD-Kinetic Module in Multi-Scale Fluid-Kinetic Simulation Suite

    DOE PAGES

    Borovikov, Sergey; Heerikhuisen, Jacob; Pogorelov, Nikolai

    2013-04-01

    The Multi-Scale Fluid-Kinetic Simulation Suite provides a computational tool set for solving partially ionized flows. In this paper we focus on recent developments of the kinetic module, which solves the Boltzmann equation using the Monte-Carlo method. The module has recently been redesigned to utilize intra-node hybrid parallelization. We describe in detail the redesign process, implementation issues, and modifications made to the code. Finally, we conduct a performance analysis.
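The general pattern of intra-node parallelization for a Monte-Carlo solver (split the sample budget across workers with independent RNG streams, then reduce partial tallies) can be sketched on a toy problem. This is a generic stand-in for the design, not the actual MS-FLUKSS code, and the pi estimator is purely illustrative:

```python
import random
from concurrent.futures import ThreadPoolExecutor

# Toy intra-node parallel Monte Carlo: each worker gets a private RNG stream
# seeded by its rank, and partial hit counts are reduced into one estimate.
def worker(seed, n_samples):
    rng = random.Random(seed)
    return sum(1 for _ in range(n_samples)
               if rng.random() ** 2 + rng.random() ** 2 < 1.0)

def parallel_pi(n_total, n_workers=4):
    per_worker = n_total // n_workers
    with ThreadPoolExecutor(n_workers) as pool:
        hits = sum(pool.map(worker, range(n_workers), [per_worker] * n_workers))
    return 4.0 * hits / (per_worker * n_workers)
```

Per-worker RNG streams avoid contention on shared random state, which is one of the key issues an intra-node redesign of a Monte-Carlo module has to address.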

  8. Evaluation of the Community Multiscale Air Quality (CMAQ) ...

    EPA Pesticide Factsheets

    This work evaluates particle size-composition distributions simulated by the Community Multiscale Air Quality (CMAQ) model using Micro-Orifice Uniform Deposit Impactor (MOUDI) measurements at 18 sites across North America. Size-resolved measurements of particulate sulfate were compared with the model, which ranged from underestimating to overestimating both the peak diameter and peak particle concentration across the sites. Computing PM2.5 from the modeled size distribution parameters rather than by summing the masses in the Aitken and a
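The diagnostic mentioned above, computing PM2.5 from modeled size distribution parameters, can be sketched for lognormal modes: the PM2.5 mass of a mode is the mode mass times the lognormal CDF evaluated at the 2.5 um cutoff. The mode parameters used below are illustrative placeholders, not CMAQ output:

```python
import math

# PM2.5 from lognormal mode parameters: mass fraction below the cutoff is the
# lognormal CDF, 0.5 * (1 + erf(ln(d_cut/d_g) / (sqrt(2) * ln(sigma_g)))).
def mass_below(mode_mass, dg_um, sigma_g, cutoff_um=2.5):
    """Mass of a lognormal mode (median diameter dg_um, geometric std sigma_g)
    residing below the cutoff diameter."""
    z = math.log(cutoff_um / dg_um) / (math.sqrt(2.0) * math.log(sigma_g))
    return mode_mass * 0.5 * (1.0 + math.erf(z))

def pm25(modes):
    """modes: iterable of (mass, median diameter in um, geometric std dev)."""
    return sum(mass_below(m, dg, sg) for m, dg, sg in modes)
```

A mode whose median diameter equals the cutoff contributes exactly half its mass, while a coarse mode contributes only its sub-2.5 um tail, which is why this differs from simply summing whole-mode masses.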

  9. Multiscale model reduction for shale gas transport in poroelastic fractured media

    NASA Astrophysics Data System (ADS)

    Akkutlu, I. Yucel; Efendiev, Yalchin; Vasilyeva, Maria; Wang, Yuhe

    2018-01-01

    Inherently coupled flow and geomechanics processes in fractured shale media have implications for shale gas production. The system involves highly complex geo-textures comprised of a heterogeneous, anisotropic fracture network spatially embedded in an ultra-tight matrix. In addition, nonlinearities due to viscous flow, diffusion, and desorption in the matrix and high-velocity gas flow in the fractures complicate the transport. In this paper, we develop a multiscale model reduction approach to couple gas flow and geomechanics in fractured shale media. A Discrete Fracture Model (DFM) is used to treat the complex network of fractures on a fine grid. The coupled flow and geomechanics equations are solved using a fixed-stress splitting scheme, by solving the pressure equation using a continuous Galerkin method and the displacement equation using an interior penalty discontinuous Galerkin method. We develop a coarse-grid approximation and coupling using the Generalized Multiscale Finite Element Method (GMsFEM). GMsFEM constructs the multiscale basis functions in a systematic way to capture the fracture networks and their interactions with the shale matrix. Numerical results and an error analysis are provided, showing that the proposed approach accurately captures the coupled process using a few multiscale basis functions, i.e., a small fraction of the degrees of freedom of the fine-scale problem.
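The outer loop of a fixed-stress style splitting can be illustrated on a drastically reduced surrogate: a 2x2 linear system standing in for the coupled flow (App*p + Apu*u = f) and mechanics (Aup*p + Auu*u = g) equations. The real DFM/GMsFEM discretization is far richer; only the alternating solve structure is shown:

```python
# Schematic fixed-stress style splitting on a 2x2 linear surrogate of the
# coupled flow/mechanics system; coefficients below are arbitrary examples.
def fixed_stress_solve(app, apu, aup, auu, f, g, tol=1e-10, max_iter=200):
    p = u = 0.0
    for _ in range(max_iter):
        p_new = (f - apu * u) / app       # pressure solve, displacement frozen
        u_new = (g - aup * p_new) / auu   # mechanics solve, updated pressure
        if abs(p_new - p) + abs(u_new - u) < tol:
            return p_new, u_new
        p, u = p_new, u_new
    return p, u
```

For this surrogate, the iteration converges when the coupling terms are weak relative to the diagonal blocks (contraction factor |apu*aup/(app*auu)| < 1), mirroring the stability considerations of fixed-stress splitting.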

  10. Flexible feature-space-construction architecture and its VLSI implementation for multi-scale object detection

    NASA Astrophysics Data System (ADS)

    Luo, Aiwen; An, Fengwei; Zhang, Xiangyu; Chen, Lei; Huang, Zunkai; Jürgen Mattausch, Hans

    2018-04-01

    Feature extraction techniques are a cornerstone of object detection in computer-vision-based applications. The detection performance of vision-based detection systems is often degraded by, e.g., changes in the illumination intensity of the light source, foreground-background contrast variations or automatic gain control from the camera. In order to avoid such degradation effects, we present a block-based L1-norm-circuit architecture which is configurable for different image-cell sizes, cell-based feature descriptors and image resolutions according to customization parameters from the circuit input. The incorporated flexibility in both the image resolution and the cell size for multi-scale image pyramids leads to lower computational complexity and power consumption. Additionally, an object-detection prototype for performance evaluation in 65 nm CMOS implements the proposed L1-norm circuit together with a histogram of oriented gradients (HOG) descriptor and a support vector machine (SVM) classifier. The proposed parallel architecture with high hardware efficiency enables real-time processing, high detection robustness, small chip-core area as well as low power consumption for multi-scale object detection.
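
    The cell-based descriptor the chip accelerates can be sketched in software: gradient-orientation histograms per image cell, normalized with an L1 norm (cheaper in hardware than the L2 norm of classic HOG). The cell size, bin count, and the synthetic ramp image below are assumed illustration parameters, not values from the paper.

```python
import numpy as np

# L1-normalized, cell-based HOG-like descriptor (software sketch).
def hog_l1(img, cell=8, bins=9):
    gy, gx = np.gradient(img.astype(float))     # image gradients
    mag = np.abs(gx) + np.abs(gy)               # L1 gradient magnitude
    ang = np.mod(np.arctan2(gy, gx), np.pi)     # unsigned orientation in [0, pi)
    h_cells, w_cells = img.shape[0] // cell, img.shape[1] // cell
    feats = np.zeros((h_cells, w_cells, bins))
    for i in range(h_cells):
        for j in range(w_cells):
            sl = np.s_[i * cell:(i + 1) * cell, j * cell:(j + 1) * cell]
            idx = np.minimum((ang[sl] / np.pi * bins).astype(int), bins - 1)
            for b in range(bins):
                feats[i, j, b] = mag[sl][idx == b].sum()
            feats[i, j] /= feats[i, j].sum() + 1e-9   # L1 normalization per cell
    return feats

img = np.tile(np.arange(32), (32, 1))  # horizontal ramp: uniform vertical edges
f = hog_l1(img)
print(f.shape)  # → (4, 4, 9)
```

    On the ramp image every gradient points in the same direction, so each cell's histogram concentrates in a single orientation bin; the hardware version evaluates the same histogram and normalization per cell in parallel.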

  11. A multiscale computational approach to dissect early events in the Erb family receptor mediated activation, differential signaling, and relevance to oncogenic transformations.

    PubMed

    Liu, Yingting; Purvis, Jeremy; Shih, Andrew; Weinstein, Joshua; Agrawal, Neeraj; Radhakrishnan, Ravi

    2007-06-01

    We describe a hierarchical multiscale computational approach based on molecular dynamics simulations, free energy-based molecular docking simulations, deterministic network-based kinetic modeling, and hybrid discrete/continuum stochastic dynamics protocols to study the dimer-mediated receptor activation characteristics of the Erb family receptors, specifically the epidermal growth factor receptor (EGFR). Through these modeling approaches, we are able to extend the prior modeling of EGF-mediated signal transduction by considering specific EGFR tyrosine kinase (EGFRTK) docking interactions mediated by differential binding and phosphorylation of different C-terminal peptide tyrosines on the RTK tail. By modeling signal flows through branching pathways of the EGFRTK resolved on a molecular basis, we are able to transcribe the effects of molecular alterations in the receptor (e.g., mutant forms of the receptor) to differing kinetic behavior and downstream signaling response. Our molecular dynamics simulations show that the drug sensitizing mutation (L834R) of EGFR stabilizes the active conformation to make the system constitutively active. Docking simulations show preferential characteristics (for wildtype vs. mutant receptors) in inhibitor binding as well as preferential enhancement of phosphorylation of particular substrate tyrosines over others. We find that in comparison to the wildtype system, the L834R mutant RTK preferentially binds the inhibitor erlotinib, as well as preferentially phosphorylates the substrate tyrosine Y1068 but not Y1173. We predict that these molecular level changes result in preferential activation of the Akt signaling pathway in comparison to the Erk signaling pathway for cells with normal EGFR expression. For cells with EGFR overexpression, the mutant overactivates both Erk and Akt pathways, in comparison to wildtype. These results are consistent with qualitative experimental measurements reported in the literature.
We discuss these consequences in light of how the network topology and signaling characteristics of altered (mutant) cell lines are shaped differently in relationship to native cell lines.

  12. Intergranular Strain Evolution During Biaxial Loading: A Multiscale FE-FFT Approach

    NASA Astrophysics Data System (ADS)

    Upadhyay, M. V.; Capek, J.; Van Petegem, S.; Lebensohn, R. A.; Van Swygenhoven, H.

    2017-05-01

    Predicting the macroscopic and microscopic mechanical response of metals and alloys subjected to complex loading conditions necessarily requires a synergistic combination of multiscale material models and characterization techniques. This article focuses on the use of a multiscale approach to study the difference between intergranular lattice strain evolution for various grain families measured during in situ neutron diffraction on dog bone and cruciform 316L samples. At the macroscale, finite element simulations capture the complex coupling between applied forces and gauge stresses in cruciform geometries. The predicted gauge stresses are used as macroscopic boundary conditions to drive a mesoscale full-field elasto-viscoplastic fast Fourier transform crystal plasticity model. The results highlight the role of grain neighborhood on the intergranular strain evolution under uniaxial and equibiaxial loading.

  13. A Computational Framework for Efficient Low Temperature Plasma Simulations

    NASA Astrophysics Data System (ADS)

    Verma, Abhishek Kumar; Venkattraman, Ayyaswamy

    2016-10-01

    Over the past years, scientific computing has emerged as an essential tool for the investigation and prediction of low temperature plasma (LTP) applications, which include electronics, nanomaterial synthesis, metamaterials, etc. To further explore LTP behavior with greater fidelity, we present a computational toolbox developed to perform LTP simulations. This framework will allow us to enhance our understanding of multiscale plasma phenomena using high performance computing tools, mainly based on the OpenFOAM FVM distribution. Although aimed at microplasma simulations, the modular framework is able to perform multiscale, multiphysics simulations of physical systems comprising LTP. Salient introductory features include the capability to perform parallel, 3D simulations of LTP applications on unstructured meshes. Performance of the solver is tested with numerical results assessing the accuracy and efficiency of benchmark problems in microdischarge devices. Numerical simulation of a microplasma reactor at atmospheric pressure with hemispherical dielectric-coated electrodes will be discussed, providing an overview of the applicability and future scope of this framework.

  14. A hybrid multiscale Monte Carlo algorithm (HyMSMC) to cope with disparity in time scales and species populations in intracellular networks.

    PubMed

    Samant, Asawari; Ogunnaike, Babatunde A; Vlachos, Dionisios G

    2007-05-24

    The fundamental role that intrinsic stochasticity plays in cellular functions has been shown via numerous computational and experimental studies. In the face of such evidence, it is important that intracellular networks are simulated with stochastic algorithms that can capture molecular fluctuations. However, separation of time scales and disparity in species populations, two common features of intracellular networks, make stochastic simulation of such networks computationally prohibitive. While recent work has addressed each of these challenges separately, a generic algorithm that can simultaneously tackle disparity in time scales and population scales in stochastic systems is currently lacking. In this paper, we propose the hybrid, multiscale Monte Carlo (HyMSMC) method that fills this void. The proposed HyMSMC method blends stochastic singular perturbation concepts, to deal with potential stiffness, with a hybrid of exact and coarse-grained stochastic algorithms, to cope with separation in population sizes. In addition, we introduce the computational singular perturbation (CSP) method as a means of systematically partitioning fast and slow networks and computing relaxation times for convergence. We also propose a new criterion of convergence of fast networks to stochastic low-dimensional manifolds, which further accelerates the algorithm. We use several prototype and biological examples, including a gene expression model displaying bistability, to demonstrate the efficiency, accuracy and applicability of the HyMSMC method. Bistable models serve as stringent tests for the success of multiscale MC methods and illustrate limitations of some literature methods.
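
    The exact kernel that hybrid schemes like HyMSMC build on is the Gillespie stochastic simulation algorithm (SSA). The sketch below runs the SSA for a simple birth-death species; the rate constants are illustrative assumptions, and the coarse-graining of fast reactions that HyMSMC adds on top is not shown.

```python
import random

# Exact SSA for a birth-death process: 0 -> X at rate k_b, X -> 0 at rate k_d*x.
def ssa_birth_death(k_b, k_d, x0, t_end, seed=1):
    rng = random.Random(seed)
    t, x = 0.0, x0
    while True:
        a1, a2 = k_b, k_d * x          # reaction propensities
        a0 = a1 + a2
        t += rng.expovariate(a0)       # exponentially distributed waiting time
        if t > t_end:
            return x
        if rng.random() * a0 < a1:     # pick which reaction fires
            x += 1
        else:
            x -= 1

# The stationary distribution is Poisson with mean k_b / k_d.
samples = [ssa_birth_death(10.0, 1.0, 0, 50.0, seed=s) for s in range(200)]
mean = sum(samples) / len(samples)
print(mean)  # close to 10
```

    Stiffness arises exactly here: if one reaction channel is orders of magnitude faster than the rest, the SSA spends nearly all its steps on it, which is what the singular perturbation and coarse-graining machinery is designed to avoid.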

  15. Phase Separation and d Electronic Orbitals on Cyclic Degradation in Li-Mn-O Compounds: First-Principles Multiscale Modeling and Experimental Observations.

    PubMed

    Kim, Duho; Lim, Jin-Myoung; Park, Min-Sik; Cho, Kyeongjae; Cho, Maenghyo

    2016-07-06

    A combined study involving experiments and multiscale computational approaches is conducted to propose a theoretical solution for the suppression of the Jahn-Teller distortion, which causes severe cyclic degradation. As-synthesized pristine and Al-doped Mn spinel compounds are the focus for understanding the mechanism of the cyclic degradation in terms of the Jahn-Teller distortion, and the Al-doped sample shows enhanced cyclic performance compared with the pristine one. Considering the electronic structures of the two systems using first-principles calculations, the pristine spinel suffers throughout from the Jahn-Teller distortion caused by Mn(3+), indicating an anisotropic electronic structure, whereas the Al-doped spinel exhibits an isotropic electronic structure, indicating suppression of the Jahn-Teller distortion. A multiscale phase field model in the nanodomain shows that the pristine spinel gradually phase-separates into inactive Li0Mn2O4 (i.e., fully delithiated) during cycling. In contrast, the Al-doped spinel does not show phase separation into an inactive phase. This explains why the Al-doped spinel maintains the capacity of the first charge during the subsequent cycles. On the basis of this mechanistic understanding of the origins of the Jahn-Teller distortion and its suppression, fundamental insight is provided for substantially reducing cyclic degradation in the Li-Mn-O compounds of Li-ion batteries.

  16. Multi-scale Material Parameter Identification Using LS-DYNA® and LS-OPT®

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stander, Nielen; Basudhar, Anirban; Basu, Ushnish

    2015-06-15

    Ever-tightening regulations on fuel economy and carbon emissions demand continual innovation in finding ways for reducing vehicle mass. Classical methods for computational mass reduction include sizing, shape and topology optimization. One of the few remaining options for weight reduction can be found in materials engineering and material design optimization. Apart from considering different types of materials by adding material diversity, an appealing option in automotive design is to engineer steel alloys for the purpose of reducing thickness while retaining sufficient strength and ductility required for durability and safety. Such a project was proposed and is currently being executed under the auspices of the United States Automotive Materials Partnership (USAMP) funded by the Department of Energy. Under this program, new steel alloys (Third Generation Advanced High Strength Steel or 3GAHSS) are being designed, tested and integrated with the remaining design variables of a benchmark vehicle Finite Element model. In this project the principal phases identified are (i) material identification, (ii) formability optimization and (iii) multi-disciplinary vehicle optimization. This paper serves as an introduction to the LS-OPT methodology and therefore mainly focuses on the first phase, namely an approach to integrate material identification using material models of different length scales. For this purpose, a multi-scale material identification strategy, consisting of a Crystal Plasticity (CP) material model and a Homogenized State Variable (SV) model, is discussed and demonstrated. The paper concludes with proposals for integrating the multi-scale methodology into the overall vehicle design.

  17. Multi-Scale Surface Descriptors

    PubMed Central

    Cipriano, Gregory; Phillips, George N.; Gleicher, Michael

    2010-01-01

    Local shape descriptors compactly characterize regions of a surface, and have been applied to tasks in visualization, shape matching, and analysis. Classically, curvature has been used as a shape descriptor; however, this differential property characterizes only an infinitesimal neighborhood. In this paper, we provide shape descriptors for surface meshes designed to be multi-scale, that is, capable of characterizing regions of varying size. These descriptors capture statistically the shape of a neighborhood around a central point by fitting a quadratic surface. They therefore mimic differential curvature, are efficient to compute, and encode anisotropy. We show how simple variants of mesh operations can be used to compute the descriptors without resorting to expensive parameterizations, and additionally provide a statistical approximation for reduced computational cost. We show how these descriptors apply to a number of uses in visualization, analysis, and matching of surfaces, particularly to tasks in protein surface analysis. PMID:19834190
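
    The core operation, fitting a quadratic surface to a point neighborhood by least squares, can be sketched directly. The paraboloid sampled below is an assumed example; the quadratic coefficients recovered play the role of the curvature-like, anisotropy-encoding descriptor.

```python
import numpy as np

# Fit z ≈ a x^2 + b xy + c y^2 + d x + e y + f over a neighborhood of points;
# (a, b, c) mimic the second fundamental form and encode anisotropy.
def quadric_descriptor(pts):
    x, y, z = pts[:, 0], pts[:, 1], pts[:, 2]
    A = np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])
    coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)
    return coeffs  # (a, b, c, d, e, f)

# Sample an anisotropic paraboloid z = 2x^2 + 0.5y^2 on a small grid.
g = np.linspace(-1, 1, 9)
X, Y = np.meshgrid(g, g)
pts = np.column_stack([X.ravel(), Y.ravel(), (2 * X**2 + 0.5 * Y**2).ravel()])
a, b, c, d, e, f = quadric_descriptor(pts)
print(round(a, 3), round(c, 3))  # → 2.0 0.5
```

    Scale is controlled by the size of the neighborhood fed to the fit, which is what makes the descriptor multi-scale rather than purely infinitesimal like pointwise curvature.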

  18. Classification of JERS-1 Image Mosaic of Central Africa Using A Supervised Multiscale Classifier of Texture Features

    NASA Technical Reports Server (NTRS)

    Saatchi, Sassan; DeGrandi, Franco; Simard, Marc; Podest, Erika

    1999-01-01

    In this paper, a multiscale approach is introduced to classify the Japanese Research Satellite-1 (JERS-1) mosaic image over the Central African rainforest. A series of texture maps are generated from the 100 m mosaic image at various scales. Using a quadtree model and relating classes at each scale by a Markovian relationship, the multiscale images are classified from coarse to fine scales. The results are verified at various scales and the evolution of classification is monitored by calculating the error at each stage.

  19. Modeling and Simulation of Nanoindentation

    NASA Astrophysics Data System (ADS)

    Huang, Sixie; Zhou, Caizhi

    2017-11-01

    Nanoindentation is a hardness test method applied to small volumes of material; it reveals unique small-scale effects and has sparked many related research activities. To fully understand the phenomena observed during nanoindentation tests, modeling and simulation methods have been developed to predict the mechanical response of materials during nanoindentation. However, challenges remain with those computational approaches, because of their length scale, predictive capability, and accuracy. This article reviews recent progress and challenges for modeling and simulation of nanoindentation, including an overview of molecular dynamics, the quasicontinuum method, discrete dislocation dynamics, and the crystal plasticity finite element method, and discusses how to integrate multiscale modeling approaches seamlessly with experimental studies to understand the length-scale effects and microstructure evolution during nanoindentation tests, creating a unique opportunity to establish new calibration procedures for the nanoindentation technique.

  20. Modeling Impact-induced Failure of Polysilicon MEMS: A Multi-scale Approach.

    PubMed

    Mariani, Stefano; Ghisi, Aldo; Corigliano, Alberto; Zerbini, Sarah

    2009-01-01

    Failure of packaged polysilicon micro-electro-mechanical systems (MEMS) subjected to impacts involves phenomena occurring at several length-scales. In this paper we present a multi-scale finite element approach to properly allow for: (i) the propagation of stress waves inside the package; (ii) the dynamics of the whole MEMS; (iii) the spreading of micro-cracking in the failing part(s) of the sensor. Through Monte Carlo simulations, some effects of polysilicon micro-structure on the failure mode are elucidated.

  1. Multiscale Modeling of Ultra High Temperature Ceramics (UHTC) ZrB2 and HfB2: Application to Lattice Thermal Conductivity

    NASA Technical Reports Server (NTRS)

    Lawson, John W.; Daw, Murray S.; Squire, Thomas H.; Bauschlicher, Charles W.

    2012-01-01

    We are developing a multiscale framework in computational modeling for the ultra high temperature ceramics (UHTC) ZrB2 and HfB2. These materials are characterized by high melting point, good strength, and reasonable oxidation resistance. They are candidate materials for a number of applications in extreme environments including sharp leading edges of hypersonic aircraft. In particular, we used a combination of ab initio methods, atomistic simulations and continuum computations to obtain insights into fundamental properties of these materials. Ab initio methods were used to compute basic structural, mechanical and thermal properties. From these results, a database was constructed to fit a Tersoff style interatomic potential suitable for atomistic simulations. These potentials were used to evaluate the lattice thermal conductivity of single crystals and the thermal resistance of simple grain boundaries. Finite element method (FEM) computations using atomistic results as inputs were performed with meshes constructed on SEM images thereby modeling the realistic microstructure. These continuum computations showed the reduction in thermal conductivity due to the grain boundary network.

  2. Mathematical modeling and computational prediction of cancer drug resistance.

    PubMed

    Sun, Xiaoqiang; Hu, Bin

    2017-06-23

    Diverse forms of resistance to anticancer drugs can lead to the failure of chemotherapy. Drug resistance is one of the most intractable issues for successfully treating cancer in current clinical practice. Effective clinical approaches that could counter drug resistance by restoring the sensitivity of tumors to the targeted agents are urgently needed. As numerous experimental results on resistance mechanisms have been obtained and a mass of high-throughput data has been accumulated, mathematical modeling and computational predictions using systematic and quantitative approaches have become increasingly important, as they can potentially provide deeper insights into resistance mechanisms, generate novel hypotheses or suggest promising treatment strategies for future testing. In this review, we first briefly summarize the current progress of experimentally revealed resistance mechanisms of targeted therapy, including genetic mechanisms, epigenetic mechanisms, posttranslational mechanisms, cellular mechanisms, microenvironmental mechanisms and pharmacokinetic mechanisms. Subsequently, we list several currently available databases and Web-based tools related to drug sensitivity and resistance. Then, we focus primarily on introducing some state-of-the-art computational methods used in drug resistance studies, including mechanism-based mathematical modeling approaches (e.g. molecular dynamics simulation, kinetic model of molecular networks, ordinary differential equation model of cellular dynamics, stochastic model, partial differential equation model, agent-based model, pharmacokinetic-pharmacodynamic model, etc.) and data-driven prediction methods (e.g. omics data-based conventional screening approach for node biomarkers, static network approach for edge biomarkers and module biomarkers, dynamic network approach for dynamic network biomarkers and dynamic module network biomarkers, etc.). 
Finally, we discuss several further questions and future directions for the use of computational methods for studying drug resistance, including inferring drug-induced signaling networks, multiscale modeling, drug combinations and precision medicine. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  3. Multi-Scale Modeling, Surrogate-Based Analysis, and Optimization of Lithium-Ion Batteries for Vehicle Applications

    NASA Astrophysics Data System (ADS)

    Du, Wenbo

    A common attribute of electric-powered aerospace vehicles and systems such as unmanned aerial vehicles, hybrid- and fully-electric aircraft, and satellites is that their performance is usually limited by the energy density of their batteries. Although lithium-ion batteries offer distinct advantages such as high voltage and low weight over other battery technologies, they are a relatively new development, and thus significant gaps in the understanding of the physical phenomena that govern battery performance remain. As a result of this limited understanding, batteries must often undergo a cumbersome design process involving many manual iterations based on rules of thumb and ad-hoc design principles. A systematic study of the relationship between operational, geometric, morphological, and material-dependent properties and performance metrics such as energy and power density is non-trivial due to the multiphysics, multiphase, and multiscale nature of the battery system. To address these challenges, two numerical frameworks are established in this dissertation: a process for analyzing and optimizing several key design variables using surrogate modeling tools and gradient-based optimizers, and a multi-scale model that incorporates more detailed microstructural information into the computationally efficient but limited macro-homogeneous model. In the surrogate modeling process, multi-dimensional maps for the cell energy density with respect to design variables such as the particle size, ion diffusivity, and electron conductivity of the porous cathode material are created. A combined surrogate- and gradient-based approach is employed to identify optimal values for cathode thickness and porosity under various operating conditions, and quantify the uncertainty in the surrogate model. The performance of multiple cathode materials is also compared by defining dimensionless transport parameters. 
The multi-scale model makes use of detailed 3-D FEM simulations conducted at the particle-level. A monodisperse system of ellipsoidal particles is used to simulate the effective transport coefficients and interfacial reaction current density within the porous microstructure. Microscopic simulation results are shown to match well with experimental measurements, while differing significantly from homogenization approximations used in the macroscopic model. Global sensitivity analysis and surrogate modeling tools are applied to couple the two length scales and complete the multi-scale model.
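
    The surrogate-then-optimize step described above can be shown in miniature. The objective below is an assumed stand-in for the expensive battery model, and a simple polynomial fit stands in for whatever surrogate family the dissertation actually uses; the pattern of sampling a few design points, fitting a cheap model, and optimizing the surrogate is the same.

```python
import numpy as np

# Assumed toy "expensive model": energy density vs. electrode thickness,
# penalized at both extremes (thin = low capacity, thick = transport limits).
def expensive_model(thickness):
    return -(thickness - 0.6) ** 2 + 1.0

# 1. Sample the expensive model at a few design points.
samples = np.linspace(0.1, 1.0, 7)
values = np.array([expensive_model(t) for t in samples])

# 2. Fit a cheap quadratic surrogate.
a, b, c = np.polyfit(samples, values, 2)

# 3. Optimize the surrogate analytically instead of the expensive model.
t_opt = -b / (2 * a)
print(round(t_opt, 3))  # → 0.6
```

    In practice the surrogate is refit as new samples arrive near the current optimum, and gradient-based search replaces the closed-form minimum once more design variables are involved.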

  4. A review of combined experimental and computational procedures for assessing biopolymer structure–process–property relationships

    PubMed Central

    Gronau, Greta; Krishnaji, Sreevidhya T.; Kinahan, Michelle E.; Giesa, Tristan; Wong, Joyce Y.; Kaplan, David L.; Buehler, Markus J.

    2013-01-01

    Tailored biomaterials with tunable functional properties are desirable for many applications ranging from drug delivery to regenerative medicine. To improve the predictability of biopolymer materials functionality, multiple design parameters need to be considered, along with appropriate models. In this article we review the state of the art of synthesis and processing related to the design of biopolymers, with an emphasis on the integration of bottom-up computational modeling in the design process. We consider three prominent examples of well-studied biopolymer materials – elastin, silk, and collagen – and assess their hierarchical structure, intriguing functional properties and categorize existing approaches to study these materials. We find that an integrated design approach in which both experiments and computational modeling are used has rarely been applied for these materials due to difficulties in relating insights gained on different length- and time-scales. In this context, multiscale engineering offers a powerful means to accelerate the biomaterials design process for the development of tailored materials that suit the needs posed by the various applications. The combined use of experimental and computational tools has a very broad applicability not only in the field of biopolymers, but can be exploited to tailor the properties of other polymers and composite materials in general. PMID:22938765

  5. A review of combined experimental and computational procedures for assessing biopolymer structure-process-property relationships.

    PubMed

    Gronau, Greta; Krishnaji, Sreevidhya T; Kinahan, Michelle E; Giesa, Tristan; Wong, Joyce Y; Kaplan, David L; Buehler, Markus J

    2012-11-01

    Tailored biomaterials with tunable functional properties are desirable for many applications ranging from drug delivery to regenerative medicine. To improve the predictability of biopolymer materials functionality, multiple design parameters need to be considered, along with appropriate models. In this article we review the state of the art of synthesis and processing related to the design of biopolymers, with an emphasis on the integration of bottom-up computational modeling in the design process. We consider three prominent examples of well-studied biopolymer materials - elastin, silk, and collagen - and assess their hierarchical structure, intriguing functional properties and categorize existing approaches to study these materials. We find that an integrated design approach in which both experiments and computational modeling are used has rarely been applied for these materials due to difficulties in relating insights gained on different length- and time-scales. In this context, multiscale engineering offers a powerful means to accelerate the biomaterials design process for the development of tailored materials that suit the needs posed by the various applications. The combined use of experimental and computational tools has a very broad applicability not only in the field of biopolymers, but can be exploited to tailor the properties of other polymers and composite materials in general. Copyright © 2012 Elsevier Ltd. All rights reserved.

  6. Beyond mean-field approximations for accurate and computationally efficient models of on-lattice chemical kinetics

    NASA Astrophysics Data System (ADS)

    Pineda, M.; Stamatakis, M.

    2017-07-01

    Modeling the kinetics of surface catalyzed reactions is essential for the design of reactors and chemical processes. The majority of microkinetic models employ mean-field approximations, which lead to an approximate description of catalytic kinetics by assuming spatially uncorrelated adsorbates. On the other hand, kinetic Monte Carlo (KMC) methods provide a discrete-space continuous-time stochastic formulation that enables an accurate treatment of spatial correlations in the adlayer, but at a significant computational cost. In this work, we use the so-called cluster mean-field approach to develop higher order approximations that systematically increase the accuracy of kinetic models by treating spatial correlations at a progressively higher level of detail. We further demonstrate our approach on a reduced model for NO oxidation incorporating first nearest-neighbor lateral interactions and construct a sequence of approximations of increasing accuracy, which we compare with KMC and mean-field. The latter is found to perform rather poorly, overestimating the turnover frequency by several orders of magnitude for this system. On the other hand, our approximations, while more computationally intense than the traditional mean-field treatment, still achieve tremendous computational savings compared to KMC simulations, thereby opening the way for employing them in multiscale modeling frameworks.
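
    The baseline mean-field treatment the paper improves upon reduces surface kinetics to a coverage ODE that ignores spatial correlations entirely. A minimal sketch, with assumed rate constants and a single adsorption/desorption channel rather than the paper's NO oxidation network:

```python
# Mean-field coverage kinetics: d(theta)/dt = k_ads*(1 - theta) - k_des*theta.
# The (1 - theta) factor assumes empty sites are uncorrelated with adsorbates,
# which is exactly the approximation cluster mean-field refines.
k_ads, k_des = 2.0, 1.0   # assumed illustrative rate constants
theta, dt = 0.0, 1e-3
for _ in range(20000):
    theta += dt * (k_ads * (1 - theta) - k_des * theta)
print(round(theta, 4))  # → 0.6667, i.e. k_ads / (k_ads + k_des)
```

    Cluster mean-field replaces the single coverage variable with probabilities of small site clusters (pairs, triplets, ...), so that nearest-neighbor correlations enter the rate expressions; KMC sits at the fully correlated end of the same hierarchy.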

  7. Generalization of mixed multiscale finite element methods with applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, C S

    Many science and engineering problems exhibit scale disparity and high contrast. The small scale features cannot be omitted in the physical models because they can affect the macroscopic behavior of the problems. However, resolving all the scales in these problems can be prohibitively expensive. As a consequence, some types of model reduction techniques are required to design efficient solution algorithms. For practical purposes, we are interested in mixed finite element problems as they produce solutions with certain conservative properties. Existing multiscale methods for such problems include the mixed multiscale finite element methods. We show that for complicated problems, the mixed multiscale finite element methods may not be able to produce reliable approximations. This motivates the need of enrichment for coarse spaces. Two enrichment approaches are proposed, one is based on generalized multiscale finite element methods (GMsFEM), while the other is based on spectral element-based algebraic multigrid (rAMGe). The former one, which is called mixed GMsFEM, is developed for both Darcy’s flow and linear elasticity. Application of the algorithm in two-phase flow simulations is demonstrated. For linear elasticity, the algorithm is subtly modified due to the symmetry requirement of the stress tensor. The latter enrichment approach is based on rAMGe. The algorithm differs from GMsFEM in that both of the velocity and pressure spaces are coarsened. Due to the multigrid nature of the algorithm, recursive application is available, which results in an efficient multilevel construction of the coarse spaces. Stability, convergence analysis, and exhaustive numerical experiments are carried out to validate the proposed enrichment approaches.

  8. Dual tree fractional quaternion wavelet transform for disparity estimation.

    PubMed

    Kumar, Sanoj; Kumar, Sanjeev; Sukavanam, Nagarajan; Raman, Balasubramanian

    2014-03-01

    This paper proposes a novel phase based approach for computing disparity as the optical flow from the given pair of consecutive images. A new dual tree fractional quaternion wavelet transform (FrQWT) is proposed by defining the 2D Fourier spectrum up to a single quadrant. In the proposed FrQWT, each quaternion wavelet consists of a real part (a real DWT wavelet) and three imaginary parts that are organized according to the quaternion algebra. The first two FrQWT phases encode the shifts of image features in the absolute horizontal and vertical coordinate system, while the third phase has the texture information. The FrQWT allowed a multi-scale framework for calculating and adjusting local disparities and executing phase unwrapping from coarse to fine scales with linear computational efficiency. Copyright © 2013 ISA. Published by Elsevier Ltd. All rights reserved.
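
    The key idea, reading a displacement out of transform phase rather than intensity, can be shown with a much simpler stand-in for the FrQWT: 1-D phase correlation, which recovers a shift from the phase of the cross-power spectrum. The signal and shift below are assumed synthetic examples.

```python
import numpy as np

# Recover the integer shift between two signals from Fourier phase alone.
def phase_shift(sig_a, sig_b):
    fa, fb = np.fft.fft(sig_a), np.fft.fft(sig_b)
    cross = fa * np.conj(fb)
    cross /= np.abs(cross) + 1e-12      # discard magnitude, keep only phase
    corr = np.fft.ifft(cross).real      # delta peak at the shift
    k = int(np.argmax(corr))
    n = len(sig_a)
    return k if k <= n // 2 else k - n  # wrap to a signed shift

rng = np.random.default_rng(3)
sig = rng.standard_normal(256)
shifted = np.roll(sig, 7)               # simulate a 7-sample disparity
print(phase_shift(shifted, sig))  # → 7
```

    The wavelet formulation in the paper applies the same phase-to-shift principle per scale and per orientation, which is what enables the coarse-to-fine disparity refinement.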

  9. Development and application of air quality models at the US ...

    EPA Pesticide Factsheets

    Overview of the development and application of air quality models at the U.S. EPA, particularly focused on the development and application of the Community Multiscale Air Quality (CMAQ) model developed within the Computational Exposure Division (CED) of the National Exposure Research Laboratory (NERL). This presentation will provide a simple overview of air quality model development and application geared toward a non-technical student audience. The National Exposure Research Laboratory (NERL) Computational Exposure Division (CED) develops and evaluates data, decision-support tools, and models to be applied to media-specific or receptor-specific problem areas. CED uses modeling-based approaches to characterize exposures, evaluate fate and transport, and support environmental diagnostics/forensics with input from multiple data sources. It also develops media- and receptor-specific models, process models, and decision support tools for use both within and outside of EPA.

  10. Conceptual strategies and inter-theory relations: The case of nanoscale cracks

    NASA Astrophysics Data System (ADS)

    Bursten, Julia R.

    2018-05-01

    This paper introduces a new account of inter-theory relations in physics, which I call the conceptual strategies account. Using the example of a multiscale computer simulation model of nanoscale crack propagation in silicon, I illustrate this account and contrast it with existing reductive, emergent, and handshaking approaches. The conceptual strategies account develops the notion that relations among physical theories, and among their models, are constrained but not dictated by limitations from physics, mathematics, and computation, and that conceptual reasoning within those limits is required both to generate and to understand the relations between theories. Conceptual strategies result in a variety of types of relations between theories and models. These relations are themselves epistemic objects, like theories and models, and as such are an under-recognized part of the epistemic landscape of science.

  11. Bringing global gyrokinetic turbulence simulations to the transport timescale using a multiscale approach

    NASA Astrophysics Data System (ADS)

    Parker, Jeffrey; Lodestro, Lynda; Told, Daniel; Merlo, Gabriele; Ricketson, Lee; Campos, Alejandro; Jenko, Frank; Hittinger, Jeffrey

    2017-10-01

    Predictive whole-device simulation models will play an increasingly important role in ensuring the success of fusion experiments and accelerating the development of fusion energy. In the core of tokamak plasmas, a separation of timescales between turbulence and transport makes a single direct simulation of both processes computationally expensive. We present the first demonstration of a multiple-timescale method coupling global gyrokinetic simulations with a transport solver to calculate the self-consistent, steady-state temperature profile. Initial results are highly encouraging, with the coupling method appearing robust to the difficult problem of turbulent fluctuations. The method holds potential for integrating first-principles turbulence simulations into whole-device models and advancing the understanding of global plasma behavior. Work supported by US DOE under Contract DE-AC52-07NA27344 and the Exascale Computing Project (17-SC-20-SC).

  12. Obtaining macroscopic quantities for the contact line problem from Density Functional Theory using asymptotic methods

    NASA Astrophysics Data System (ADS)

    Sibley, David; Nold, Andreas; Kalliadasis, Serafim

    2015-11-01

    Density Functional Theory (DFT), a statistical mechanics of fluids approach, captures microscopic details of the fluid density structure in the vicinity of contact lines, as seen in computations in our recent study. Contact lines describe the location where interfaces between two fluids meet solid substrates, and have stimulated a wealth of research due both to their ubiquity in nature and technological applications and to their rich multiscale behaviour. Whilst progress can be made computationally to capture the microscopic to mesoscopic structure from DFT, complete analytical results to fully bridge to the macroscale are lacking. In this work, we describe our efforts to bring asymptotic methods to DFT to obtain results for contact angles and other macroscopic quantities in various parameter regimes. We acknowledge financial support from the European Research Council via Advanced Grant No. 247031.

  13. Multiscale techniques for parabolic equations.

    PubMed

    Målqvist, Axel; Persson, Anna

    2018-01-01

    We use the local orthogonal decomposition technique introduced in Målqvist and Peterseim (Math Comput 83(290):2583-2603, 2014) to derive a generalized finite element method for linear and semilinear parabolic equations with spatial multiscale coefficients. We consider nonsmooth initial data and a backward Euler scheme for the temporal discretization. Optimal order convergence rate, depending only on the contrast, but not on the variations of the coefficients, is proven in the [Formula: see text]-norm. We present numerical examples, which confirm our theoretical findings.
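The temporal discretization named above, backward Euler applied to a parabolic problem with a spatially multiscale diffusion coefficient, can be illustrated with a plain finite-difference sketch. This is not the authors' localized orthogonal decomposition method; the coefficient, grid, and function names are illustrative assumptions:

```python
import math

def backward_euler_heat(a, u0, dt, nsteps, L=1.0):
    """Backward Euler for u_t = (a(x) u_x)_x on [0, L] with u = 0 at both
    ends; `a` is a (possibly rapidly varying) diffusion coefficient and
    `u0` holds the interior nodal values."""
    n = len(u0)
    h = L / (n + 1)
    u = list(u0)
    for _ in range(nsteps):
        # Assemble the tridiagonal system (I + dt*A) u_new = u_old,
        # sampling a(x) at the flux interfaces between nodes.
        lower, diag, upper, rhs = [], [], [], []
        for i in range(n):
            am = a((i + 0.5) * h)
            ap = a((i + 1.5) * h)
            lower.append(-dt * am / h**2)
            diag.append(1.0 + dt * (am + ap) / h**2)
            upper.append(-dt * ap / h**2)
            rhs.append(u[i])
        # Thomas algorithm: forward elimination, then back substitution.
        for i in range(1, n):
            w = lower[i] / diag[i - 1]
            diag[i] -= w * upper[i - 1]
            rhs[i] -= w * rhs[i - 1]
        u[n - 1] = rhs[n - 1] / diag[n - 1]
        for i in range(n - 2, -1, -1):
            u[i] = (rhs[i] - upper[i] * u[i + 1]) / diag[i]
    return u
```

Because the scheme is implicit, it is stable regardless of the time step even when the coefficient oscillates on a fine scale; obtaining the correct spatial behavior on coarse meshes is what the localized orthogonal decomposition addresses.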

  14. Use of Computational Fluid Dynamics for improving freeze-dryers design and process understanding. Part 1: Modelling the lyophilisation chamber.

    PubMed

    Barresi, Antonello A; Rasetto, Valeria; Marchisio, Daniele L

    2018-05-15

    This manuscript shows how computational models, mainly based on Computational Fluid Dynamics (CFD), can be used to simulate different parts of industrial freeze-drying equipment and to design them properly; in particular, the freeze-dryer chamber and the duct connecting the chamber with the condenser, together with any valves and vanes present, are analysed in this work. In Part 1, it is shown how CFD can be employed to improve specific designs, to perform geometry optimization, to evaluate different design choices, and to assess the effect on product drying and batch variance. Such an approach allows in-depth process understanding and assessment of the critical aspects of lyophilisation. This can be done by running either steady-state or transient simulations with imposed sublimation rates, or with multi-scale approaches. The methodology is demonstrated on freeze-drying equipment of different sizes, investigating the influence of equipment geometry and shelf inter-distance. The effect of valve type (butterfly and mushroom) and shape on duct conductance and critical flow conditions will instead be investigated in Part 2. Copyright © 2018. Published by Elsevier B.V.

  15. A multiscale-based approach for composite materials with embedded PZT filaments for energy harvesting

    NASA Astrophysics Data System (ADS)

    El-Etriby, Ahmed E.; Abdel-Meguid, Mohamed E.; Hatem, Tarek M.; Bahei-El-Din, Yehia A.

    2014-03-01

    Ambient vibrations are a major source of wasted energy; properly exploited, such vibrations can be converted into valuable energy and harvested to power devices, e.g. electronic devices. Accordingly, energy harvesting using smart structures with active piezoelectric ceramics has gained wide interest over the past few years as a method for recovering such wasted energy. This paper provides numerical and experimental analysis of piezoelectric fiber-based composites for energy harvesting applications, proposing a multi-scale modeling approach coupled with experimental verification. The suggested multi-scale approach for predicting the behavior of piezoelectric fiber-based composites uses a micromechanical model based on Transformation Field Analysis (TFA) to calculate the overall material properties of the electrically active composite structure. Capitalizing on the calculated properties, single-phase analysis of a homogeneous structure is conducted using the finite element method. The experimental work involves running dynamic tests on piezoelectric fiber-based composites to simulate the mechanical vibrations experienced by subway train floor tiles. Experimental results agree well with the numerical results for both static and dynamic tests.

  16. Homogenization-based interval analysis for structural-acoustic problem involving periodical composites and multi-scale uncertain-but-bounded parameters.

    PubMed

    Chen, Ning; Yu, Dejie; Xia, Baizhan; Liu, Jian; Ma, Zhengdong

    2017-04-01

    This paper presents a homogenization-based interval analysis method for the prediction of coupled structural-acoustic systems involving periodical composites and multi-scale uncertain-but-bounded parameters. In the structural-acoustic system, the macro plate structure is assumed to be composed of a periodically uniform microstructure. The equivalent macro material properties of the microstructure are computed using the homogenization method. By integrating the first-order Taylor expansion interval analysis method with the homogenization-based finite element method, a homogenization-based interval finite element method (HIFEM) is developed to solve a periodical composite structural-acoustic system with multi-scale uncertain-but-bounded parameters. The corresponding formulations of the HIFEM are deduced. A subinterval technique is also introduced into the HIFEM for higher accuracy. Numerical examples of a hexahedral box and an automobile passenger compartment are given to demonstrate the efficiency of the presented method for a periodical composite structural-acoustic system with multi-scale uncertain-but-bounded parameters.
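The first-order Taylor expansion interval analysis named above, together with the subinterval refinement, can be sketched generically. The response function and parameter box below are hypothetical stand-ins, not the structural-acoustic system of the paper:

```python
def taylor_interval(f, p_mid, p_rad, h=1e-6):
    """First-order Taylor interval bound for f over the box
    [p_mid - p_rad, p_mid + p_rad]: midpoint value plus/minus the sum of
    |df/dp_i| * radius_i, with central-difference gradients."""
    f0 = f(p_mid)
    spread = 0.0
    for i in range(len(p_mid)):
        pp, pm = list(p_mid), list(p_mid)
        pp[i] += h
        pm[i] -= h
        spread += abs((f(pp) - f(pm)) / (2.0 * h)) * p_rad[i]
    return f0 - spread, f0 + spread

def subinterval_bounds(f, p_mid, p_rad, i=0, nsub=4):
    """Subinterval technique: split parameter i into nsub slices, bound
    each slice with the first-order method, and take the envelope."""
    width = 2.0 * p_rad[i] / nsub
    los, his = [], []
    for s in range(nsub):
        mid, rad = list(p_mid), list(p_rad)
        mid[i] = p_mid[i] - p_rad[i] + (s + 0.5) * width
        rad[i] = 0.5 * width
        lo, hi = taylor_interval(f, mid, rad)
        los.append(lo)
        his.append(hi)
    return min(los), max(his)
```

For f(p) = p0^2 + 3 p1 with p0 in [1.9, 2.1] and p1 in [0.8, 1.2], the plain first-order bound is [6.0, 8.0] while the exact range is [6.01, 8.01]; subdividing p0 shrinks the linearization error, which is the role of the subinterval technique in the HIFEM.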

  17. Heterogeneity and Self-Organization of Complex Systems Through an Application to Financial Market with Multiagent Systems

    NASA Astrophysics Data System (ADS)

    Lucas, Iris; Cotsaftis, Michel; Bertelle, Cyrille

    2017-12-01

    Multiagent systems (MAS) provide a useful tool for exploring the complex dynamics and behavior of financial markets, and the MAS approach has now been widely implemented and documented in the empirical literature. This paper introduces the implementation of an innovative multi-scale mathematical model for a computational agent-based financial market. The paper develops a method to quantify the degree of self-organization which emerges in the system and shows that the capacity for self-organization is maximized when the agent behaviors are heterogeneous. Numerical results are presented and analyzed, showing how the global market behavior emerges from specific individual behavior interactions.

  18. Learning relevant features of data with multi-scale tensor networks

    NASA Astrophysics Data System (ADS)

    Miles Stoudenmire, E.

    2018-07-01

    Inspired by coarse-graining approaches used in physics, we show how similar algorithms can be adapted for data. The resulting algorithms are based on layered tree tensor networks and scale linearly with both the dimension of the input and the training set size. Computing most of the layers with an unsupervised algorithm, then optimizing just the top layer for supervised classification of the MNIST and fashion MNIST data sets gives very good results. We also discuss mixing a prior guess for supervised weights together with an unsupervised representation of the data, yielding a smaller number of features nevertheless able to give good performance.

  19. Multi-scale mechanics of granular solids from grain-resolved X-ray measurements

    NASA Astrophysics Data System (ADS)

    Hurley, R. C.; Hall, S. A.; Wright, J. P.

    2017-11-01

    This work discusses an experimental technique for studying the mechanics of three-dimensional (3D) granular solids. The approach combines 3D X-ray diffraction and X-ray computed tomography to measure grain-resolved strains, kinematics and contact fabric in the bulk of a granular solid, from which continuum strains, grain stresses, interparticle forces and coarse-grained elasto-plastic moduli can be determined. We demonstrate the experimental approach and analysis of selected results on a sample of 1099 stiff, frictional grains undergoing multiple uniaxial compression cycles. We investigate the inter-particle force network, elasto-plastic moduli and associated length scales, reversibility of mechanical responses during cyclic loading, the statistics of microscopic responses and microstructure-property relationships. This work serves both to highlight the fundamental insight into granular mechanics furnished by combined X-ray measurements and to describe future directions in the field of granular materials that can be pursued with such approaches.

  20. Construction of multi-scale consistent brain networks: methods and applications.

    PubMed

    Ge, Bao; Tian, Yin; Hu, Xintao; Chen, Hanbo; Zhu, Dajiang; Zhang, Tuo; Han, Junwei; Guo, Lei; Liu, Tianming

    2015-01-01

    Mapping human brain networks provides a basis for studying brain function and dysfunction, and has thus gained significant interest in recent years. However, modeling human brain networks still faces several challenges, including constructing networks at multiple spatial scales and finding common corresponding networks across individuals. As a consequence, many previous methods were designed for a single resolution or scale of brain network, though brain networks are multi-scale in nature. To address this problem, this paper presents a novel approach to constructing multi-scale common structural brain networks from DTI data via an improved multi-scale spectral clustering applied on our recently developed and validated DICCCOLs (Dense Individualized and Common Connectivity-based Cortical Landmarks). Since the DICCCOL landmarks possess intrinsic structural correspondences across individuals and populations, we employed the multi-scale spectral clustering algorithm to group the DICCCOL landmarks and their connections into sub-networks, while preserving the intrinsically established correspondences across multiple scales. Experimental results demonstrated that the proposed method can generate multi-scale consistent and common structural brain networks across subjects, and its reproducibility has been verified by multiple independent datasets. As an application, these multi-scale networks were used to guide the clustering of multi-scale fiber bundles and to compare fiber integrity in schizophrenia and healthy controls. In general, our methods offer a novel and effective framework for brain network modeling and tract-based analysis of DTI data.

  1. Assessment of current atomic scale modelling methods for the investigation of nuclear fuels under irradiation: Example of uranium dioxide

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bertolus, Marjorie; Krack, Matthias; Freyss, Michel

    Multiscale approaches are developed to build more physically based kinetic and mechanical mesoscale models to enhance the predictive capability of fuel performance codes and increase the efficiency of the development of the safer and more innovative nuclear materials needed in the future. Atomic scale methods, and in particular electronic structure and empirical potential methods, form the basis of this multiscale approach. It is therefore essential to know the accuracy of the results computed at this scale if we want to feed them into higher scale models. We focus here on the assessment of the description of interatomic interactions in uranium dioxide using, on the one hand, electronic structure methods, in particular in the density functional theory (DFT) framework, and, on the other hand, empirical potential methods. These two types of methods are complementary: the former make it possible to obtain results from a minimal amount of input data and provide further insight into electronic and magnetic properties, while the latter are irreplaceable for studies where a large number of atoms needs to be considered. We consider basic properties as well as specific ones which are important for the description of nuclear fuel under irradiation, especially energies, which are the main data passed to higher scale models. We limit ourselves to uranium dioxide.

  2. A multiscale approach to simulating the conformational properties of unbound multi-C₂H₂ zinc finger proteins.

    PubMed

    Liu, Lei; Wade, Rebecca C; Heermann, Dieter W

    2015-09-01

    The conformational properties of unbound multi-Cys2 His2 (mC2H2) zinc finger proteins, in which zinc finger domains are connected by flexible linkers, are studied by a multiscale approach. Three methods on different length scales are utilized. First, atomic detail molecular dynamics simulations of one zinc finger and its adjacent flexible linker confirmed that the zinc finger is more rigid than the flexible linker. Second, the end-to-end distance distributions of mC2H2 zinc finger proteins are computed using an efficient atomistic pivoting algorithm, which only takes excluded volume interactions into consideration. The end-to-end distance distribution gradually changes its profile, from left-tailed to right-tailed, as the number of zinc fingers increases. This is explained by using a worm-like chain model. For proteins of a few zinc fingers, an effective bending constraint favors an extended conformation. Only for proteins containing more than nine zinc fingers, is a somewhat compacted conformation preferred. Third, a mesoscale model is modified to study both the local and the global conformational properties of multi-C2H2 zinc finger proteins. Simulations of the CCCTC-binding factor (CTCF), an important mC2H2 zinc finger protein for genome spatial organization, are presented. © 2015 Wiley Periodicals, Inc.
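The pivot idea used in the second step above, resampling chain conformations by rotating everything beyond a randomly chosen monomer and rejecting moves that violate excluded volume, can be sketched on a 2D lattice. This is a toy analogue with hard-core beads only, not the atomistic algorithm used in the paper:

```python
import random

# Lattice rotations about the origin: +90, 180, and -90 degrees.
ROTATIONS = [lambda x, y: (-y, x), lambda x, y: (-x, -y), lambda x, y: (y, -x)]

def pivot_saw(n, nmoves, seed=0):
    """Sample a 2D lattice self-avoiding walk of n steps with pivot moves;
    only excluded-volume (self-avoidance) interactions are kept."""
    rng = random.Random(seed)
    walk = [(i, 0) for i in range(n + 1)]          # straight initial chain
    for _ in range(nmoves):
        k = rng.randrange(1, n)                    # pivot monomer
        rot = rng.choice(ROTATIONS)
        px, py = walk[k]
        tail = []
        for x, y in walk[k + 1:]:                  # rotate the tail about the pivot
            dx, dy = rot(x - px, y - py)
            tail.append((px + dx, py + dy))
        trial = walk[:k + 1] + tail
        if len(set(trial)) == len(trial):          # reject self-intersections
            walk = trial
    return walk

def end_to_end(walk):
    (x0, y0), (x1, y1) = walk[0], walk[-1]
    return ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5
```

Recording `end_to_end` over many accepted states yields the kind of end-to-end distance distribution whose changing shape with chain length the abstract describes.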

  3. NREL Kicks Off Next Phase of Advanced Computer-Aided Battery Engineering |

    Science.gov Websites

    News release, March 16, 2016: NREL kicks off the next phase of advanced computer-aided battery engineering, building on its multi-scale multi-domain (GH-MSMD) model framework for lithium-ion (Li-ion) batteries.

  4. A FRAMEWORK FOR FINE-SCALE COMPUTATIONAL FLUID DYNAMICS AIR QUALITY MODELING AND ANALYSIS

    EPA Science Inventory

    This paper discusses a framework for fine-scale CFD modeling that may be developed to complement the present Community Multi-scale Air Quality (CMAQ) modeling system which itself is a computational fluid dynamics model. A goal of this presentation is to stimulate discussions on w...

  5. Application of empirical and dynamical closure methods to simple climate models

    NASA Astrophysics Data System (ADS)

    Padilla, Lauren Elizabeth

    This dissertation applies empirically- and physically-based methods for closure of uncertain parameters and processes to three model systems that lie on the simple end of climate model complexity. Each model isolates one of three sources of closure uncertainty: uncertain observational data, large dimension, and wide ranging length scales. They serve as efficient test systems toward extension of the methods to more realistic climate models. The empirical approach uses the Unscented Kalman Filter (UKF) to estimate the transient climate sensitivity (TCS) parameter in a globally-averaged energy balance model. Uncertainty in climate forcing and historical temperature make TCS difficult to determine. A range of probabilistic estimates of TCS computed for various assumptions about past forcing and natural variability corroborate ranges reported in the IPCC AR4 found by different means. Also computed are estimates of how quickly uncertainty in TCS may be expected to diminish in the future as additional observations become available. For higher system dimensions the UKF approach may become prohibitively expensive. A modified UKF algorithm is developed in which the error covariance is represented by a reduced-rank approximation, substantially reducing the number of model evaluations required to provide probability densities for unknown parameters. The method estimates the state and parameters of an abstract atmospheric model, known as Lorenz 96, with accuracy close to that of a full-order UKF for 30-60% rank reduction. The physical approach to closure uses the Multiscale Modeling Framework (MMF) to demonstrate closure of small-scale, nonlinear processes that would not be resolved directly in climate models. A one-dimensional, abstract test model with a broad spatial spectrum is developed. The test model couples the Kuramoto-Sivashinsky equation to a transport equation that includes cloud formation and precipitation-like processes. 
In the test model, three main sources of MMF error are evaluated independently. Loss of nonlinear multi-scale interactions and periodic boundary conditions in closure models were the dominant sources of error. Using a reduced-order modeling approach to maximize energy content allowed reduction of the closure model dimension by up to 75% without loss in accuracy. MMF and a comparable alternative model performed equally well compared to direct numerical simulation.
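As a minimal illustration of the UKF machinery described above, the scalar case, one uncertain parameter observed through a nonlinear map, fits in a few lines. This is a generic sigma-point sketch with an assumed observation function, not the dissertation's energy-balance model or its reduced-rank filter:

```python
import math

def ukf_scalar(f, ys, x0, p0, q, r, kappa=2.0):
    """Minimal scalar unscented Kalman filter: track a (nearly constant)
    parameter x from noisy observations y = f(x) + noise.
    q and r are the process and observation noise variances."""
    x, p = x0, p0
    w0 = kappa / (1.0 + kappa)            # sigma-point weights
    wi = 0.5 / (1.0 + kappa)
    for y in ys:
        p += q                            # random-walk parameter model
        s = math.sqrt((1.0 + kappa) * p)
        sig = [x, x + s, x - s]           # sigma points
        wts = [w0, wi, wi]
        z = [f(v) for v in sig]           # propagate through observation map
        zbar = sum(w * zk for w, zk in zip(wts, z))
        pzz = sum(w * (zk - zbar) ** 2 for w, zk in zip(wts, z)) + r
        pxz = sum(w * (v - x) * (zk - zbar)
                  for w, v, zk in zip(wts, sig, z))
        k = pxz / pzz                     # Kalman gain
        x += k * (y - zbar)
        p -= k * k * pzz
    return x, p
```

The posterior variance p shrinks as observations accumulate, which is the mechanism behind the dissertation's estimates of how quickly uncertainty in the transient climate sensitivity diminishes with additional data.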

  6. Parallel algorithm for multiscale atomistic/continuum simulations using LAMMPS

    NASA Astrophysics Data System (ADS)

    Pavia, F.; Curtin, W. A.

    2015-07-01

    Deformation and fracture processes in engineering materials often require simultaneous descriptions over a range of length and time scales, with each scale using a different computational technique. Here we present a high-performance parallel 3D computing framework for executing large multiscale studies that couple an atomic domain, modeled using molecular dynamics and a continuum domain, modeled using explicit finite elements. We use the robust Coupled Atomistic/Discrete-Dislocation (CADD) displacement-coupling method, but without the transfer of dislocations between atoms and continuum. The main purpose of the work is to provide a multiscale implementation within an existing large-scale parallel molecular dynamics code (LAMMPS) that enables use of all the tools associated with this popular open-source code, while extending CADD-type coupling to 3D. Validation of the implementation includes the demonstration of (i) stability in finite-temperature dynamics using Langevin dynamics, (ii) elimination of wave reflections due to large dynamic events occurring in the MD region and (iii) the absence of spurious forces acting on dislocations due to the MD/FE coupling, for dislocations further than 10 Å from the coupling boundary. A first non-trivial example application of dislocation glide and bowing around obstacles is shown, for dislocation lengths of ∼50 nm using fewer than 1 000 000 atoms but reproducing results of extremely large atomistic simulations at much lower computational cost.

  7. Multi-Scale Modeling of a Graphite-Epoxy-Nanotube System

    NASA Technical Reports Server (NTRS)

    Frankland, S. J. V.; Riddick, J. C.; Gates, T. S.

    2005-01-01

    A multi-scale method is utilized to determine some of the constitutive properties of a three component graphite-epoxy-nanotube system. This system is of interest because carbon nanotubes have been proposed as stiffening and toughening agents in the interlaminar regions of carbon fiber/epoxy laminates. The multi-scale method uses molecular dynamics simulation and equivalent-continuum modeling to compute three of the elastic constants of the graphite-epoxy-nanotube system: C11, C22, and C33. The 1-direction is along the nanotube axis, and the graphene sheets lie in the 1-2 plane. It was found that the C11 is only 4% larger than the C22. The nanotube therefore does have a small, but positive effect on the constitutive properties in the interlaminar region.

  8. Simulating and mapping spatial complexity using multi-scale techniques

    USGS Publications Warehouse

    De Cola, L.

    1994-01-01

    A central problem in spatial analysis is the mapping of data for complex spatial fields using relatively simple data structures, such as those of a conventional GIS. This complexity can be measured using such indices as multi-scale variance, which reflects spatial autocorrelation, and multi-fractal dimension, which characterizes the values of fields. These indices are computed for three spatial processes: Gaussian noise, a simple mathematical function, and data for a random walk. Fractal analysis is then used to produce a vegetation map of the central region of California based on a satellite image. This analysis suggests that real world data lie on a continuum between the simple and the random, and that a major GIS challenge is the scientific representation and understanding of rapidly changing multi-scale fields. -Author
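The multi-scale variance index mentioned above can be computed by repeatedly block-averaging a gridded field and recording the variance at each aggregation level; how fast the variance decays reflects spatial autocorrelation (white noise decays by roughly a factor of four per 2x2 level, correlated fields more slowly). A minimal sketch for square power-of-two grids, an illustrative simplification rather than the author's exact index:

```python
import statistics

def multiscale_variance(field):
    """Variances of a square 2^m x 2^m field under successive 2x2 block
    averaging, from the finest scale down to a single cell."""
    out = []
    n = len(field)
    while n >= 1:
        out.append(statistics.pvariance([v for row in field for v in row]))
        if n == 1:
            break
        n //= 2
        # Aggregate: each coarse cell is the mean of a 2x2 block.
        field = [[(field[2 * i][2 * j] + field[2 * i][2 * j + 1]
                   + field[2 * i + 1][2 * j] + field[2 * i + 1][2 * j + 1]) / 4.0
                  for j in range(n)] for i in range(n)]
    return out
```

Comparing the decay profile of a satellite band against that of pure noise or a smooth function places the data on the simple-to-random continuum the abstract describes.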

  9. Integrating Multiscale Modeling with Drug Effects for Cancer Treatment.

    PubMed

    Li, Xiangfang L; Oduola, Wasiu O; Qian, Lijun; Dougherty, Edward R

    2015-01-01

    In this paper, we review multiscale modeling for cancer treatment with the incorporation of drug effects from an applied systems pharmacology perspective. Both classical pharmacology and systems biology are inherently quantitative; however, systems biology focuses more on networks and multifactorial controls over biological processes rather than on drugs and targets in isolation, whereas systems pharmacology has a strong focus on studying drugs with regard to the pharmacokinetic (PK) and pharmacodynamic (PD) relations accompanying drug interactions with multiscale physiology, as well as the prediction of dosage-exposure responses and economic potentials of drugs. Thus, it requires multiscale methods to address the need for integrating models from the molecular levels to the cellular, tissue, and organism levels. It is a common belief that tumorigenesis and tumor growth can be best understood and tackled by employing and integrating a multifaceted approach that includes in vivo and in vitro experiments, in silico models, multiscale tumor modeling, continuous/discrete modeling, agent-based modeling, and multiscale modeling with PK/PD drug effect inputs. We provide an example application of multiscale modeling employing a stochastic hybrid system for the colon cancer cell line HCT-116 with the application of the drug Lapatinib. It is observed that the simulation results are similar to those observed from the setup of the wet-lab experiments at the Translational Genomics Research Institute.

  10. Coarse-grained models using local-density potentials optimized with the relative entropy: Application to implicit solvation

    NASA Astrophysics Data System (ADS)

    Sanyal, Tanmoy; Shell, M. Scott

    2016-07-01

    Bottom-up multiscale techniques are frequently used to develop coarse-grained (CG) models for simulations at extended length and time scales but are often limited by a compromise between computational efficiency and accuracy. The conventional approach to CG nonbonded interactions uses pair potentials which, while computationally efficient, can neglect the inherently multibody contributions of the local environment of a site to its energy, due to degrees of freedom that were coarse-grained out. This effect often causes the CG potential to depend strongly on the overall system density, composition, or other properties, which limits its transferability to states other than the one at which it was parameterized. Here, we propose to incorporate multibody effects into CG potentials through additional nonbonded terms, beyond pair interactions, that depend in a mean-field manner on local densities of different atomic species. This approach is analogous to embedded atom and bond-order models that seek to capture multibody electronic effects in metallic systems. We show that the relative entropy coarse-graining framework offers a systematic route to parameterizing such local density potentials. We then characterize this approach in the development of implicit solvation strategies for interactions between model hydrophobes in an aqueous environment.
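The local-density idea, augmenting pair potentials with a term that depends on a smoothly weighted count of neighbors around each site, can be sketched as follows. The indicator and the density function are illustrative choices, not the paper's relative-entropy-optimized parameterization:

```python
import math

def local_density_energy(positions, cutoff, f):
    """Mean-field local-density energy: each site i contributes f(rho_i),
    where rho_i sums a smooth indicator of distance over all other sites."""
    def indicator(r):
        # Smooth cubic taper: 1 at r = 0, 0 at r = cutoff (assumed form).
        if r >= cutoff:
            return 0.0
        c = r / cutoff
        return 1.0 - 3.0 * c ** 2 + 2.0 * c ** 3

    energy = 0.0
    for i, pi in enumerate(positions):
        rho = sum(indicator(math.dist(pi, pj))
                  for j, pj in enumerate(positions) if j != i)
        energy += f(rho)       # multibody term beyond pair interactions
    return energy
```

Because f acts on the accumulated density rather than on each pair separately, the energy of a site depends on its whole local environment, which is the multibody effect pair potentials miss.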

  11. Single-Scale Fusion: An Effective Approach to Merging Images.

    PubMed

    Ancuti, Codruta O; Ancuti, Cosmin; De Vleeschouwer, Christophe; Bovik, Alan C

    2017-01-01

    Due to its robustness and effectiveness, multi-scale fusion (MSF) based on the Laplacian pyramid decomposition has emerged as a popular technique that has shown utility in many applications. Guided by several intuitive measures (weight maps), the MSF process is versatile and straightforward to implement. However, the number of pyramid levels increases with the image size, which implies sophisticated data management and memory accesses, as well as additional computations. Here, we introduce a simplified formulation that reduces MSF to a single-level process. Starting from the MSF decomposition, we explain both mathematically and intuitively (visually) a way to simplify the classical MSF approach with minimal loss of information. The resulting single-scale fusion (SSF) solution is a close approximation of the MSF process that eliminates important redundant computations. It also provides insights regarding why MSF is so effective. While our simplified expression is derived in the context of high dynamic range imaging, we show its generality on several well-known fusion-based applications, such as image compositing, extended depth of field, medical imaging, and blending thermal (infrared) images with visible light. Besides visual validation, quantitative evaluations demonstrate that our SSF strategy is able to yield results that are highly competitive with traditional MSF approaches.
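To make the MSF baseline concrete, here is a 1D Laplacian-pyramid-style fusion sketch, kept at full resolution (without the decimation of a true pyramid) and using an assumed 3-tap blur; two inputs are blended per level with a progressively blurred weight map:

```python
def blur(x):
    """3-tap [1, 2, 1]/4 blur with clamped ends."""
    n = len(x)
    return [(x[max(i - 1, 0)] + 2 * x[i] + x[min(i + 1, n - 1)]) / 4.0
            for i in range(n)]

def multiscale_fuse(a, b, wa, levels=3):
    """Blend signals a and b with per-sample weight map wa (for a; b gets
    1 - wa). Detail (Laplacian) layers are blended level by level, then
    summed with the blended low-pass residual."""
    out = [0.0] * len(a)
    ga, gb, gw = list(a), list(b), list(wa)
    for _ in range(levels):
        na, nb = blur(ga), blur(gb)
        la = [x - y for x, y in zip(ga, na)]       # detail layer of a
        lb = [x - y for x, y in zip(gb, nb)]       # detail layer of b
        out = [o + w * x + (1 - w) * y
               for o, w, x, y in zip(out, gw, la, lb)]
        ga, gb, gw = na, nb, blur(gw)
    # Blend the remaining low-pass residual.
    return [o + w * x + (1 - w) * y
            for o, w, x, y in zip(out, gw, ga, gb)]
```

When the two inputs are identical, or one weight is 1 everywhere, the telescoping of detail layers plus the residual reconstructs the input exactly; that redundancy across levels is what SSF collapses into a single-level process.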

  12. Multi-Scale Peak and Trough Detection Optimised for Periodic and Quasi-Periodic Neuroscience Data.

    PubMed

    Bishop, Steven M; Ercole, Ari

    2018-01-01

    The reliable detection of peaks and troughs in physiological signals is essential to many investigative techniques in medicine and computational biology. Analysis of the intracranial pressure (ICP) waveform is a particular challenge due to multi-scale features, a changing morphology over time and signal-to-noise limitations. Here we present an efficient peak and trough detection algorithm that extends the scalogram approach of Scholkmann et al., and results in greatly improved algorithm runtime performance. Our improved algorithm (modified Scholkmann) was developed and analysed in MATLAB R2015b. Synthesised waveforms (periodic, quasi-periodic and chirp sinusoids) were degraded with white Gaussian noise to achieve signal-to-noise ratios down to 5 dB and were used to compare the performance of the original Scholkmann and modified Scholkmann algorithms. The modified Scholkmann algorithm has false-positive (0%) and false-negative (0%) detection rates identical to the original Scholkmann when applied to our test suite. Actual compute time for a 200-run Monte Carlo simulation over a multicomponent noisy test signal was 40.96 ± 0.020 s (mean ± 95%CI) for the original Scholkmann and 1.81 ± 0.003 s (mean ± 95%CI) for the modified Scholkmann, demonstrating the expected improvement in runtime complexity from [Formula: see text] to [Formula: see text]. The accurate interpretation of waveform data to identify peaks and troughs is crucial in signal parameterisation, feature extraction and waveform identification tasks. Modification of a standard scalogram technique has produced a robust algorithm with linear computational complexity that is particularly suited to the challenges presented by large, noisy physiological datasets. The algorithm is optimised through a single parameter and can identify sub-waveform features with minimal additional overhead, and is easily adapted to run in real time on commodity hardware.
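A stripped-down version of the multiscale idea, accepting an index as a peak (or trough) only if it is a strict local extremum at every window size up to a chosen scale, illustrates the scalogram principle. The full Scholkmann-style algorithm selects the scale automatically from the scalogram; here it is a fixed parameter:

```python
def multiscale_extrema(x, scales):
    """Indices that are strict local maxima (peaks) or strict local minima
    (troughs) at every window scale k = 1..scales."""
    n = len(x)
    peaks = [i for i in range(scales, n - scales)
             if all(x[i] > x[i - k] and x[i] > x[i + k]
                    for k in range(1, scales + 1))]
    troughs = [i for i in range(scales, n - scales)
               if all(x[i] < x[i - k] and x[i] < x[i + k]
                      for k in range(1, scales + 1))]
    return peaks, troughs
```

Requiring agreement across scales is what suppresses the spurious single-sample extrema that plague naive three-point detectors on noisy physiological signals.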

  13. Automated Photoreceptor Cell Identification on Nonconfocal Adaptive Optics Images Using Multiscale Circular Voting

    PubMed Central

    Liu, Jianfei; Jung, HaeWon; Dubra, Alfredo; Tam, Johnny

    2017-01-01

    Purpose: Adaptive optics scanning light ophthalmoscopy (AOSLO) has enabled quantification of the photoreceptor mosaic in the living human eye using metrics such as cell density and average spacing. These rely on the identification of individual cells. Here, we demonstrate a novel approach for computer-aided identification of cone photoreceptors on nonconfocal split detection AOSLO images. Methods: Algorithms for identification of cone photoreceptors were developed, based on multiscale circular voting (MSCV) in combination with a priori knowledge that split detection images resemble Nomarski differential interference contrast images, in which dark and bright regions are present on the two sides of each cell. The proposed algorithm locates dark and bright region pairs, iteratively refining the identification across multiple scales. Identification accuracy was assessed in data from 10 subjects by comparing automated identifications with manual labeling, followed by computation of density and spacing metrics for comparison to histology and published data. Results: There was good agreement between manual and automated cone identifications with overall recall, precision, and F1 score of 92.9%, 90.8%, and 91.8%, respectively. On average, computed density and spacing values using automated identification were within 10.7% and 11.2% of the expected histology values across eccentricities ranging from 0.5 to 6.2 mm. There was no statistically significant difference between MSCV-based and histology-based density measurements (P = 0.96, Kolmogorov-Smirnov 2-sample test). Conclusions: MSCV can accurately detect cone photoreceptors on split detection images across a range of eccentricities, enabling quick, objective estimation of photoreceptor mosaic metrics, which will be important for future clinical trials utilizing adaptive optics. PMID:28873173

  14. Reduced-Order Biogeochemical Flux Model for High-Resolution Multi-Scale Biophysical Simulations

    NASA Astrophysics Data System (ADS)

    Smith, Katherine; Hamlington, Peter; Pinardi, Nadia; Zavatarelli, Marco

    2017-04-01

Biogeochemical tracers and their interactions with upper ocean physical processes such as submesoscale circulations and small-scale turbulence are critical for understanding the role of the ocean in the global carbon cycle. These interactions can cause small-scale spatial and temporal heterogeneity in tracer distributions that can, in turn, greatly affect carbon exchange rates between the atmosphere and interior ocean. For this reason, it is important to take into account small-scale biophysical interactions when modeling the global carbon cycle. However, explicitly resolving these interactions in an earth system model (ESM) is currently infeasible due to the enormous associated computational cost. As a result, understanding and subsequently parameterizing how these small-scale heterogeneous distributions develop, and how they relate to larger resolved scales, is critical for obtaining improved predictions of carbon exchange rates in ESMs. In order to address this need, we have developed the reduced-order, 17-state-variable Biogeochemical Flux Model (BFM-17), which follows the chemical functional group approach, allowing for non-Redfield stoichiometric ratios and the exchange of matter through units of carbon, nitrate, and phosphate. This model captures the behavior of open-ocean biogeochemical systems without substantially increasing computational cost, thus allowing the model to be combined with computationally intensive, fully three-dimensional, non-hydrostatic large eddy simulations (LES). In this talk, we couple BFM-17 with the Princeton Ocean Model and show good agreement between predicted monthly-averaged results and Bermuda testbed area field data (including the Bermuda-Atlantic Time-series Study and the Bermuda Testbed Mooring). Through these tests, we demonstrate the capability of BFM-17 to accurately model open-ocean biogeochemistry. Additionally, we discuss the use of BFM-17 within a multi-scale LES framework and outline how this will further our understanding of turbulent biophysical interactions in the upper ocean.

  15. A MULTISCALE FRAMEWORK FOR THE STOCHASTIC ASSIMILATION AND MODELING OF UNCERTAINTY ASSOCIATED WITH NCF COMPOSITE MATERIALS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mehrez, Loujaine; Ghanem, Roger; McAuliffe, Colin

A multiscale framework to construct stochastic macroscopic constitutive material models is proposed. A spectral projection approach, specifically polynomial chaos expansion, is used to construct explicit functional relationships between the homogenized properties and input parameters from finer scales. A homogenization engine embedded in Multiscale Designer, a software package for composite materials, is used for the upscaling process. The framework is demonstrated on non-crimp fabric (NCF) composite materials by constructing probabilistic models of the homogenized properties of a non-crimp fabric laminate in terms of the input parameters together with the homogenized properties from finer scales.

  16. Rough Set Approach to Incomplete Multiscale Information System

    PubMed Central

    Yang, Xibei; Qi, Yong; Yu, Dongjun; Yu, Hualong; Song, Xiaoning; Yang, Jingyu

    2014-01-01

A multiscale information system is a knowledge representation system for expressing knowledge at different levels of granulation. In this paper, by considering unknown values, which are ubiquitous in real-world applications, the incomplete multiscale information system is first investigated. The descriptor technique is employed to construct rough sets at different scales for analyzing hierarchically structured data. The problem of unravelling decision rules at different scales is also addressed. Finally, reduct descriptors are formulated to simplify the decision rules that can be derived from different scales. Numerical examples are employed to substantiate the conceptual arguments. PMID:25276852
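The handling of unknown values in rough-set analysis can be illustrated with a tolerance relation, a common stand-in for the paper's descriptor technique (this single-scale sketch is not the authors' construction; the attribute table, the `'*'` wildcard convention, and the target set below are illustrative assumptions):

```python
# Hedged sketch: tolerance-based rough approximations in an incomplete
# information table, where '*' marks an unknown value. Two objects are
# tolerant when they agree on every attribute whose values are both known.
def tolerant(x, y):
    return all(a == b or a == '*' or b == '*' for a, b in zip(x, y))

def approximations(table, target):
    """Lower/upper approximation of the object index set `target`."""
    n = len(table)
    classes = [{j for j in range(n) if tolerant(table[i], table[j])}
               for i in range(n)]
    lower = {i for i in range(n) if classes[i] <= target}   # certainly in
    upper = {i for i in range(n) if classes[i] & target}    # possibly in
    return lower, upper

# hypothetical 4-object, 2-attribute table with one missing value
table = [
    ('high', 'yes'),
    ('high', '*'),
    ('low', 'no'),
    ('low', 'yes'),
]
lower, upper = approximations(table, {0, 3})
print(lower, upper)  # lower is a subset of the target, upper a superset
```

Objects in the boundary region (upper minus lower) are exactly those whose membership is undecidable because of the missing values.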

  17. Adaptive two-regime method: Application to front propagation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Robinson, Martin, E-mail: martin.robinson@maths.ox.ac.uk; Erban, Radek, E-mail: erban@maths.ox.ac.uk; Flegg, Mark, E-mail: mark.flegg@monash.edu

    2014-03-28

The Adaptive Two-Regime Method (ATRM) is developed for hybrid (multiscale) stochastic simulation of reaction-diffusion problems. It efficiently couples detailed Brownian dynamics simulations with coarser lattice-based models. The ATRM is a generalization of the previously developed Two-Regime Method [Flegg et al., J. R. Soc. Interface 9, 859 (2012)] to multiscale problems which require a dynamic selection of regions where detailed Brownian dynamics simulation is used. Typical applications include front propagation and spatio-temporal oscillations. In this paper, the ATRM is used for an in-depth study of front propagation in a stochastic reaction-diffusion system whose mean-field model is given in terms of the Fisher equation [R. Fisher, Ann. Eugen. 7, 355 (1937)], which exhibits a travelling reaction front that is sensitive to stochastic fluctuations at the leading edge of the wavefront. Previous studies of stochastic effects on the Fisher wave propagation speed have focused on lattice-based models; there has been limited progress using off-lattice (Brownian dynamics) models, which suffer from high computational cost, particularly at the high molecular numbers necessary to approach the Fisher mean-field limit. By modelling only the wavefront itself with the off-lattice model, it is shown that the ATRM yields the same Fisher wave results as purely off-lattice models, but at a fraction of the computational cost. An error analysis of the ATRM is also presented for a morphogen gradient model.
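For context on the mean-field limit referenced above, the deterministic Fisher-KPP equation u_t = D u_xx + r u(1 - u) supports a travelling front with asymptotic speed 2*sqrt(rD). The sketch below solves it with explicit finite differences and measures the front speed; all grid parameters and the step initial condition are illustrative choices, and none of the stochastic or hybrid ATRM machinery is reproduced:

```python
# Hedged sketch: mean-field Fisher-KPP front, u_t = D*u_xx + r*u*(1-u),
# solved by explicit finite differences. The measured front speed should
# lie near the classical asymptotic value 2*sqrt(r*D).
def fisher_front_speed(D=1.0, r=1.0, dx=0.5, dt=0.05, nx=400,
                       t1=20.0, t2=40.0):
    # step initial condition: u = 1 behind the front, 0 ahead of it
    u = [1.0 if i * dx < 10.0 else 0.0 for i in range(nx)]

    def front_pos(u):
        # first grid point where u drops below one half
        for i in range(len(u)):
            if u[i] < 0.5:
                return i * dx
        return (len(u) - 1) * dx

    def step(u):
        new = u[:]  # boundary values stay pinned at 1 and 0
        for i in range(1, nx - 1):
            lap = (u[i - 1] - 2.0 * u[i] + u[i + 1]) / dx**2
            new[i] = u[i] + dt * (D * lap + r * u[i] * (1.0 - u[i]))
        return new

    t, x1 = 0.0, None
    while t < t2 - 1e-9:
        u = step(u)
        t += dt
        if x1 is None and t >= t1:
            x1 = front_pos(u)
    return (front_pos(u) - x1) / (t2 - t1)

speed = fisher_front_speed()
print(speed)  # should be near 2*sqrt(D*r) = 2, minus a slow transient
```

The measured speed approaches 2 from below because of the well-known slow logarithmic transient of Fisher fronts, which is one reason the leading-edge fluctuations studied in the paper matter.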

  18. Establishing Multiscale Models for Simulating Whole Limb Estimates of Electric Fields for Osseointegrated Implants

    PubMed Central

    Isaacson, Brad M.; Stinstra, Jeroen G.; Bloebaum, Roy D.; Pasquina, COL Paul F.; MacLeod, Rob S.

    2011-01-01

Although the survival rates of warfighters in recent conflicts are among the highest in military history, those who have sustained proximal limb amputations may face additional rehabilitation challenges. In some of these cases, traditional prosthetic limbs may not provide adequate function for returning to an active lifestyle. Osseointegration has emerged as a potential prosthetic alternative for those with limited residual limb length. Using this technology, direct skeletal attachment occurs between a transcutaneous osseointegrated implant (TOI) and the host bone, thereby eliminating the need for a socket. While reports from the first 100 patients with a TOI have been promising, some rehabilitation regimens require 12-18 months of restricted weight bearing to prevent overloading at the bone-implant interface. Electrically induced osseointegration has been proposed as an option for expediting periprosthetic fixation, and preliminary studies have demonstrated the feasibility of adapting the TOI into a functional cathode. To assure safe and effective electric fields that are conducive to osseoinduction and osseointegration, we have developed multiscale modeling approaches to simulate the expected electric field metrics at the bone-implant interface. We used computed tomography scans and volume segmentation tools to create anatomically accurate models that clearly distinguish tissue parameters and serve as the basis for finite element analysis. This translational computational biology process has supported biomedical electrode design and implant placement, and experiments to date have demonstrated the clinical feasibility of electrically induced osseointegration. PMID:21712151

  19. A Multi-scale Computational Platform to Mechanistically Assess the Effect of Genetic Variation on Drug Responses in Human Erythrocyte Metabolism

    PubMed Central

    Bordbar, Aarash; Palsson, Bernhard O.

    2016-01-01

Progress in systems medicine brings promise for addressing patient heterogeneity and individualized therapies. Recently, genome-scale models of metabolism have been shown to provide insight into the mechanistic link between drug therapies and systems-level off-target effects while being expanded to explicitly include the three-dimensional structure of proteins. The integration of these molecular-level details, such as the physical, structural, and dynamical properties of proteins, notably expands the computational description of biochemical network-level properties and the possibility of understanding and predicting whole cell phenotypes. In this study, we present a multi-scale modeling framework that describes biological processes which range in scale from atomistic details to an entire metabolic network. Using this approach, we can understand how genetic variation, which impacts the structure and reactivity of a protein, influences both native and drug-induced metabolic states. As a proof-of-concept, we study three enzymes (catechol-O-methyltransferase, glucose-6-phosphate dehydrogenase, and glyceraldehyde-3-phosphate dehydrogenase) and their respective genetic variants which have clinically relevant associations. Using all-atom molecular dynamics simulations enables the sampling of long-timescale conformational dynamics of the proteins (and their mutant variants) in complex with their respective native metabolites or drug molecules. We find that changes in a protein's structure due to a mutation influence protein binding affinity to metabolites and/or drug molecules, and inflict large-scale changes in metabolism. PMID:27467583

  20. A Multi-scale Computational Platform to Mechanistically Assess the Effect of Genetic Variation on Drug Responses in Human Erythrocyte Metabolism.

    PubMed

    Mih, Nathan; Brunk, Elizabeth; Bordbar, Aarash; Palsson, Bernhard O

    2016-07-01

Progress in systems medicine brings promise for addressing patient heterogeneity and individualized therapies. Recently, genome-scale models of metabolism have been shown to provide insight into the mechanistic link between drug therapies and systems-level off-target effects while being expanded to explicitly include the three-dimensional structure of proteins. The integration of these molecular-level details, such as the physical, structural, and dynamical properties of proteins, notably expands the computational description of biochemical network-level properties and the possibility of understanding and predicting whole cell phenotypes. In this study, we present a multi-scale modeling framework that describes biological processes which range in scale from atomistic details to an entire metabolic network. Using this approach, we can understand how genetic variation, which impacts the structure and reactivity of a protein, influences both native and drug-induced metabolic states. As a proof-of-concept, we study three enzymes (catechol-O-methyltransferase, glucose-6-phosphate dehydrogenase, and glyceraldehyde-3-phosphate dehydrogenase) and their respective genetic variants which have clinically relevant associations. Using all-atom molecular dynamics simulations enables the sampling of long-timescale conformational dynamics of the proteins (and their mutant variants) in complex with their respective native metabolites or drug molecules. We find that changes in a protein's structure due to a mutation influence protein binding affinity to metabolites and/or drug molecules, and inflict large-scale changes in metabolism.

  1. A Multiscale Computational Model of the Response of Swine Epidermis After Acute Irradiation

    NASA Technical Reports Server (NTRS)

    Hu, Shaowen; Cucinotta, Francis A.

    2012-01-01

Radiation exposure from Solar Particle Events can lead to very high skin dose for astronauts on exploration missions outside the protection of the Earth's magnetic field [1]. The detrimental effects to human skin under such adverse conditions can be predicted by conducting terrestrial experiments on animal models. In this study we apply a computational approach to simulate experimental data on the radiation response of swine epidermis, which closely resembles human epidermis [2]. Incorporating experimentally measured histological and cell kinetic parameters into a multiscale tissue modeling framework, we obtain population kinetics and proliferation indices comparable to unirradiated and acutely irradiated swine experiments [3]. Notably, the basal cell doubling time is 10 to 16 days in the intact population, but drops to 13.6 hr in the regenerating populations surviving irradiation. This nearly 30-fold change is proposed to arise from shortening of the G1 phase duration. We investigate this radiation-induced effect by considering, at the sub-cellular level, the expression and signaling of TGF-beta, as it is recognized as a key regulatory factor of tissue formation and wound healing [4]. This integrated model will allow us to test the validity of various basic biological rules at the cellular level and sub-cellular mechanisms by qualitatively comparing simulation results with published research, and should lead to a fuller understanding of the pathophysiological effects of ionizing radiation on the skin.

  2. Advanced Aerospace Materials by Design

    NASA Technical Reports Server (NTRS)

    Srivastava, Deepak; Djomehri, Jahed; Wei, Chen-Yu

    2004-01-01

The advances in the emerging field of nanophase thermal and structural composite materials; materials with embedded sensors and actuators for morphing structures; light-weight composite materials for energy and power storage; and large surface area materials for in-situ resource generation and waste recycling, are expected to revolutionize the capabilities of virtually every system comprising future robotic and human Moon and Mars exploration missions. A high-performance multiscale simulation platform, including the computational capabilities and resources of Columbia, the new supercomputer, is being developed to discover, validate, and prototype the next generation of such advanced materials. This exhibit will describe the porting and scaling of multiscale physics-based core computer simulation codes for discovering and designing carbon nanotube-polymer composite materials for light-weight load-bearing structural and thermal protection applications.

  3. Multiscale modeling of metabolism, flows, and exchanges in heterogeneous organs

    PubMed Central

    Bassingthwaighte, James B.; Raymond, Gary M.; Butterworth, Erik; Alessio, Adam; Caldwell, James H.

    2010-01-01

Large-scale models accounting for the processes supporting metabolism and function in an organ or tissue with a marked heterogeneity of flows and metabolic rates are computationally complex and tedious to compute. Their use in the analysis of data from positron emission tomography (PET) and magnetic resonance imaging (MRI) requires model reduction since the data are composed of concentration–time curves from hundreds of regions of interest (ROI) within the organ. Within each ROI, one must account for blood flow, intracapillary gradients in concentrations, transmembrane transport, and intracellular reactions. Using modular design, we configured a whole organ model, GENTEX, to allow adaptive usage for multiple reacting molecular species while omitting computation of unused components. The temporal and spatial resolution and the number of species are adaptable, and the numerical accuracy and computational speed are adjustable during optimization runs, with accuracy and spatial resolution increasing as convergence is approached. An application to the interpretation of PET image sequences after intravenous injection of 13NH3 provides functional image maps of regional myocardial blood flows. PMID:20201893

  4. Differential geometry based solvation model. III. Quantum formulation

    PubMed Central

    Chen, Zhan; Wei, Guo-Wei

    2011-01-01

Solvation is of fundamental importance to biomolecular systems. Implicit solvent models, particularly those based on the Poisson-Boltzmann equation for electrostatic analysis, are established approaches for solvation analysis. However, ad hoc solvent-solute interfaces are commonly used in the implicit solvent theory. Recently, we have introduced differential geometry based solvation models which allow the solvent-solute interface to be determined by the variation of a total free energy functional. Atomic fixed partial charges (point charges) are used in our earlier models, which depend on existing molecular mechanical force field software packages for partial charge assignments. As most force field models are parameterized for a certain class of molecules or materials, the use of partial charges limits the accuracy and applicability of our earlier models. Moreover, fixed partial charges do not account for charge rearrangement during the solvation process. The present work proposes a differential geometry based multiscale solvation model which makes use of the electron density computed directly from quantum mechanical principles. To this end, we construct a new multiscale total energy functional which consists of not only polar and nonpolar solvation contributions, but also the electronic kinetic and potential energies. By using the Euler-Lagrange variation, we derive a system of three coupled governing equations, i.e., the generalized Poisson-Boltzmann equation for the electrostatic potential, the generalized Laplace-Beltrami equation for the solvent-solute boundary, and the Kohn-Sham equations for the electronic structure. We develop an iterative procedure to solve the three coupled equations and to minimize the solvation free energy. The present multiscale model is numerically validated for its stability, consistency, and accuracy, and is applied to a few sets of molecules, including a case which is difficult for existing solvation models. Comparison is made to many other classic and quantum models. By using experimental data, we show that the present quantum formulation of our differential geometry based multiscale solvation model improves the predictions of our earlier models, and outperforms some explicit solvation models. PMID:22112067

  5. Adaptive Multiscale Modeling of Geochemical Impacts on Fracture Evolution

    NASA Astrophysics Data System (ADS)

    Molins, S.; Trebotich, D.; Steefel, C. I.; Deng, H.

    2016-12-01

Understanding fracture evolution is essential for many subsurface energy applications, including subsurface storage, shale gas production, fracking, CO2 sequestration, and geothermal energy extraction. Geochemical processes in particular play a significant role in the evolution of fractures through dissolution-driven widening, fines migration, and/or fracture sealing due to precipitation. One obstacle to understanding and exploiting geochemical fracture evolution is that it is a multiscale process, yet current geochemical modeling of fractures cannot capture this multiscale nature of geochemical and mechanical impacts on fracture evolution, being limited to either a continuum or a pore-scale representation. Conventional continuum-scale models treat fractures as preferential flow paths, with their permeability evolving as a function (often, a cubic law) of the fracture aperture. This approach oversimplifies flow within the fracture by omitting pore-scale effects and assuming well-mixed conditions. More recently, pore-scale models along with advanced characterization techniques have allowed for accurate simulations of flow and reactive transport within the pore space (Molins et al., 2014, 2015). However, these models, even with high performance computing, are currently limited in the domain sizes they can treat (Steefel et al., 2013). Thus, there is a critical need for an adaptive modeling capability that can account for separate properties and processes, emergent and otherwise, in the fracture and the rock matrix at different spatial scales. Here we present an adaptive modeling capability that treats geochemical impacts on fracture evolution within a single multiscale framework. Model development makes use of the high-performance simulation capability Chombo-Crunch, leveraged by high-resolution characterization and experiments. The modeling framework is based on the adaptive capability in Chombo, which enables not only mesh refinement but also refinement of the model itself (pore scale or continuum Darcy scale) in a dynamic way, such that the appropriate model is used only when and where it is needed. Explicit flux matching provides coupling between the scales.

  6. Good coupling for the multiscale patch scheme on systems with microscale heterogeneity

    NASA Astrophysics Data System (ADS)

    Bunder, J. E.; Roberts, A. J.; Kevrekidis, I. G.

    2017-05-01

    Computational simulation of microscale detailed systems is frequently only feasible over spatial domains much smaller than the macroscale of interest. The 'equation-free' methodology couples many small patches of microscale computations across space to empower efficient computational simulation over macroscale domains of interest. Motivated by molecular or agent simulations, we analyse the performance of various coupling schemes for patches when the microscale is inherently 'rough'. As a canonical problem in this universality class, we systematically analyse the case of heterogeneous diffusion on a lattice. Computer algebra explores how the dynamics of coupled patches predict the large scale emergent macroscale dynamics of the computational scheme. We determine good design for the coupling of patches by comparing the macroscale predictions from patch dynamics with the emergent macroscale on the entire domain, thus minimising the computational error of the multiscale modelling. The minimal error on the macroscale is obtained when the coupling utilises averaging regions which are between a third and a half of the patch. Moreover, when the symmetry of the inter-patch coupling matches that of the underlying microscale structure, patch dynamics predicts the desired macroscale dynamics to any specified order of error. The results confirm that the patch scheme is useful for macroscale computational simulation of a range of systems with microscale heterogeneity.

  7. A real-space stochastic density matrix approach for density functional electronic structure.

    PubMed

    Beck, Thomas L

    2015-12-21

The recent development of real-space grid methods has led to more efficient, accurate, and adaptable approaches for large-scale electrostatics and density functional electronic structure modeling. With the incorporation of multiscale techniques, linear-scaling real-space solvers are possible for density functional problems if localized orbitals are used to represent the Kohn-Sham energy functional. These methods still suffer from high computational and storage overheads, however, due to extensive matrix operations related to the underlying wave function grid representation. In this paper, an alternative stochastic method is outlined that aims to solve directly for the one-electron density matrix in real space. In order to illustrate aspects of the method, model calculations are performed for simple one-dimensional problems that display some features of the more general problem, such as spatial nodes in the density matrix. This orbital-free approach may prove advantageous as computing architectures become increasingly parallel. Its primary advantage is the near-locality of the random walks, allowing for simultaneous updates of the density matrix in different regions of space partitioned across the processors. In addition, it allows for testing and enforcement of the particle number and idempotency constraints through stabilization of a Feynman-Kac functional integral, as opposed to the extensive matrix operations in traditional approaches.

  8. Fully implicit adaptive mesh refinement MHD algorithm

    NASA Astrophysics Data System (ADS)

    Philip, Bobby

    2005-10-01

In the macroscopic simulation of plasmas, the numerical modeler is faced with the challenge of dealing with multiple time and length scales. The former results in stiffness due to the presence of very fast waves. The latter requires one to resolve the localized features that the system develops. Traditional approaches based on explicit time integration techniques and fixed meshes are not suitable for this challenge, as such approaches prevent the modeler from using realistic plasma parameters to keep the computation feasible. We propose here a novel approach, based on implicit methods and structured adaptive mesh refinement (SAMR). Our emphasis is on both accuracy and scalability with the number of degrees of freedom. To our knowledge, a scalable, fully implicit AMR algorithm has not been accomplished before for MHD. As a proof-of-principle, we focus on the reduced resistive MHD model as a basic MHD model paradigm, which is truly multiscale. The approach taken here is to adapt mature physics-based technology [L. Chacón et al., J. Comput. Phys. 178 (1), 15-36 (2002)] to AMR grids, and employ AMR-aware multilevel techniques (such as fast adaptive composite grid, FAC, algorithms) for scalability. We will demonstrate that the concept is indeed feasible, featuring optimal scalability under grid refinement. Results of fully implicit, dynamically adaptive AMR simulations will be presented on a variety of problems.

  9. Particle swarm optimization method for small retinal vessels detection on multiresolution fundus images.

    PubMed

    Khomri, Bilal; Christodoulidis, Argyrios; Djerou, Leila; Babahenini, Mohamed Chaouki; Cheriet, Farida

    2018-05-01

Retinal vessel segmentation plays an important role in the diagnosis of eye diseases and is considered one of the most challenging tasks in computer-aided diagnosis (CAD) systems. The main goal of this study was to propose a method for blood-vessel segmentation that could deal with the problem of detecting vessels of varying diameters in high- and low-resolution fundus images. We proposed to use the particle swarm optimization (PSO) algorithm to improve the multiscale line detection (MSLD) method. The PSO algorithm was applied to find the best arrangement of scales in the MSLD method and to handle the problem of multiscale response recombination. The performance of the proposed method was evaluated on two low-resolution (DRIVE and STARE) and one high-resolution (HRF) fundus image datasets. The data include healthy (H) and diabetic retinopathy (DR) cases. The proposed approach improved the sensitivity rate over the MSLD by 4.7% for the DRIVE dataset and by 1.8% for the STARE dataset. For the high-resolution dataset, the proposed approach achieved an 87.09% sensitivity rate, whereas the MSLD method achieved an 82.58% sensitivity rate at the same specificity level. When only the smallest vessels were considered, the proposed approach improved the sensitivity rate by 11.02% and 4.42% for the healthy and diabetic cases, respectively. Integrating the proposed method into a comprehensive CAD system for DR screening would allow the reduction of false positives due to missed small vessels misclassified as red lesions.
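The scale-selection step above rests on standard particle swarm optimization. A generic PSO minimizer is sketched below; the sphere objective and all hyperparameters are illustrative stand-ins for the paper's actual MSLD response-recombination fitness, which is not reproduced here:

```python
import random

# Hedged sketch: canonical global-best PSO minimizing f over a box.
# Inertia w and cognitive/social weights c1, c2 are common textbook values.
def pso(f, dim, n=20, iters=200, w=0.7, c1=1.5, c2=1.5,
        lo=-5.0, hi=5.0, seed=0):
    rng = random.Random(seed)
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]              # personal best positions
    pval = [f(p) for p in pos]               # personal best values
    g = min(range(n), key=lambda i: pval[i])
    gbest, gval = pbest[g][:], pval[g]       # global best
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            v = f(pos[i])
            if v < pval[i]:
                pbest[i], pval[i] = pos[i][:], v
                if v < gval:
                    gbest, gval = pos[i][:], v
    return gbest, gval

# toy objective: a 2-D sphere function standing in for a segmentation fitness
best, val = pso(lambda x: sum(xi * xi for xi in x), dim=2)
print(best, val)
```

In the paper's setting, each particle would instead encode a candidate set of line-detector scales and recombination weights, with segmentation accuracy on a training set as the fitness.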

  10. Towards systems biology of the gravity response of higher plants -multiscale analysis of Arabidopsis thaliana root growth

    NASA Astrophysics Data System (ADS)

    Palme, Klaus; Aubry, D.; Bensch, M.; Schmidt, T.; Ronneberger, O.; Neu, C.; Li, X.; Wang, H.; Santos, F.; Wang, B.; Paponov, I.; Ditengou, F. A.; Teale, W. T.; Volkmann, D.; Baluska, F.; Nonis, A.; Trevisan, S.; Ruperti, B.; Dovzhenko, A.

Gravity plays a fundamental role in plant growth and development. Up to now, little is known about the molecular organisation of the signal transduction cascades and networks which coordinate gravity perception and response. Using an integrated systems biology approach, we plan a systems analysis of gravity perception and the subsequent tightly regulated growth response in the model plant Arabidopsis thaliana. This approach addresses questions such as: (i) what are the components of gravity signal transduction pathways? (ii) what are the dynamics of these components? (iii) what is their spatio-temporal regulation in different tissues? Using Arabidopsis thaliana root growth as a model, we obtain insights into the gravity response. New techniques enable identification of the individual genes affected by gravity and the further integration of transcriptomics and proteomics data into interaction networks and cell communication events that operate during gravitropic curvature. Using systematic multiscale analysis, we have identified regulatory networks consisting of transcription factors, the protein degradation machinery, vesicle trafficking, and cellular signalling during the graviresponse. We developed an approach that incorporates key features of the root system across all relevant spatial and temporal scales to describe gene-expression patterns and correlate them with individual gene and protein functions. Combining high-resolution microscopy with novel computational tools, we developed a 3D root model in which quantitative descriptions of cellular network properties and of the multicellular interactions important in root growth and gravitropism can be integrated for the first time.

  11. Application of Mortar Coupling in Multiscale Modelling of Coupled Flow, Transport, and Biofilm Growth in Porous Media

    NASA Astrophysics Data System (ADS)

    Laleian, A.; Valocchi, A. J.; Werth, C. J.

    2017-12-01

Multiscale models of reactive transport in porous media are capable of capturing complex pore-scale processes while leveraging the efficiency of continuum-scale models. In particular, porosity changes caused by biofilm development yield complex feedbacks between transport and reaction that are difficult to quantify at the continuum scale. Pore-scale models, needed to accurately resolve these dynamics, are often impractical for applications due to their computational cost. To address this challenge, we are developing a multiscale model of biofilm growth in which non-overlapping regions at pore and continuum spatial scales are coupled with a mortar method providing continuity at interfaces. We explore two decompositions of coupled pore-scale and continuum-scale regions to study biofilm growth in a transverse mixing zone. In the first decomposition, all reaction is confined to a pore-scale region extending the transverse mixing zone length; only solute transport occurs in the surrounding continuum-scale regions. Relative to a fully pore-scale result, we find the multiscale model with this decomposition has a reduced run time and a consistent result in terms of biofilm growth and solute utilization. In the second decomposition, reaction occurs in both an up-gradient pore-scale region and a down-gradient continuum-scale region. To quantify clogging, the continuum-scale model implements empirical relations between porosity and continuum-scale parameters, such as permeability and the transverse dispersion coefficient. Solutes are sufficiently mixed at the end of the pore-scale region that the initial reaction rate is accurately computed using averaged concentrations in the continuum-scale region. Relative to a fully pore-scale result, we find that the accuracy of biofilm growth in the multiscale model with this decomposition improves as the interface between pore-scale and continuum-scale regions moves down-gradient, where transverse mixing is more fully developed. This decomposition also poses additional challenges with respect to mortar coupling; we explore these challenges and potential solutions. While recent work has demonstrated growing interest in multiscale models, further development is needed for their application to field-scale subsurface contaminant transport and remediation.

  12. Detection of Neuron Membranes in Electron Microscopy Images Using Multi-scale Context and Radon-Like Features

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Seyedhosseini, Mojtaba; Kumar, Ritwik; Jurrus, Elizabeth R.

    2011-10-01

Automated neural circuit reconstruction through electron microscopy (EM) images is a challenging problem. In this paper, we present a novel method that exploits multi-scale contextual information together with Radon-like features (RLF) to learn a series of discriminative models. The main idea is to build a framework capable of extracting information about cell membranes from a large contextual area of an EM image in a computationally efficient way. Toward this goal, we extract RLF that can be computed efficiently from the input image and generate a scale-space representation of the context images that are obtained at the output of each discriminative model in the series. Compared to a single-scale model, the use of a multi-scale representation of the context image gives the subsequent classifiers access to a larger contextual area in an effective way. Our strategy is general and independent of the classifier, and has the potential to be used in any context-based framework. We demonstrate that our method outperforms state-of-the-art algorithms in the detection of neuron membranes in EM images.

  13. Community Multiscale Air Quality Modeling System (CMAQ)

    EPA Pesticide Factsheets

    CMAQ is a computational tool used for air quality management. It models air pollutants including ozone, particulate matter and other air toxics to help determine optimum air quality management scenarios.

  14. An analytical particle mover for the charge- and energy-conserving, nonlinearly implicit, electrostatic particle-in-cell algorithm

    NASA Astrophysics Data System (ADS)

    Chen, G.; Chacón, L.

    2013-08-01

    We propose a 1D analytical particle mover for the recent charge- and energy-conserving electrostatic particle-in-cell (PIC) algorithm in Ref. [G. Chen, L. Chacón, D.C. Barnes, An energy- and charge-conserving, implicit, electrostatic particle-in-cell algorithm, Journal of Computational Physics 230 (2011) 7018-7036]. The approach computes particle orbits exactly for a given piece-wise linear electric field. The resulting PIC algorithm maintains the exact charge and energy conservation properties of the original algorithm, but with improved performance (both in efficiency and robustness against the number of particles and timestep). We demonstrate the advantageous properties of the scheme with a challenging multiscale numerical test case, the ion acoustic wave. Using the analytical mover as a reference, we demonstrate that the choice of error estimator in the Crank-Nicolson mover has significant impact on the overall performance of the implicit PIC algorithm. The generalization of the approach to the multi-dimensional case is outlined, based on a novel and simple charge conserving interpolation scheme.
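The heart of the analytical mover is that, within one cell, a piece-wise linear field E(x) = E0 + E1·(x − x0) yields orbits that are exactly uniform-acceleration, harmonic, or hyperbolic, so the push can be done in closed form. The sketch below shows only that closed-form push for a frozen field; the full algorithm additionally handles cell crossings (by substepping at cell faces) and couples the push to the nonlinearly implicit field solve, neither of which is reproduced here. All names are illustrative:

```python
import math

def push_exact(x, v, dt, q_over_m, E0, E1, x0=0.0):
    """Advance (x, v) by dt under E(x) = E0 + E1*(x - x0), in closed form."""
    if E1 == 0.0:                       # uniform field: constant acceleration
        acc = q_over_m * E0
        return x + v * dt + 0.5 * acc * dt * dt, v + acc * dt
    a = q_over_m * E1
    xeq = x0 - E0 / E1                  # point where the field vanishes
    u, w = x - xeq, v
    if a < 0.0:                         # restoring field: harmonic orbit
        om = math.sqrt(-a)
        c, s = math.cos(om * dt), math.sin(om * dt)
        return xeq + u * c + (w / om) * s, -u * om * s + w * c
    k = math.sqrt(a)                    # repelling field: hyperbolic orbit
    c, s = math.cosh(k * dt), math.sinh(k * dt)
    return xeq + u * c + (w / k) * s, u * k * s + w * c
```

Because the orbit is exact for the given field, the single-particle energy 0.5·v² + 0.5·ω²·(x − xeq)² is conserved to round-off in the harmonic case, independent of dt — the property that makes such a mover a useful reference for judging error estimators in iterative Crank-Nicolson movers.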

  15. Data Decomposition Techniques with Multi-Scale Permutation Entropy Calculations for Bearing Fault Diagnosis

    PubMed Central

    Yasir, Muhammad Naveed; Koh, Bong-Hwan

    2018-01-01

This paper presents the local mean decomposition (LMD) integrated with multi-scale permutation entropy (MPE), also known as LMD-MPE, to investigate rolling element bearing (REB) fault diagnosis from measured vibration signals. First, the LMD decomposed the vibration data or acceleration measurement into separate product functions that are composed of both amplitude and frequency modulation. MPE then calculated the statistical permutation entropy from the product functions to extract the nonlinear features to assess and classify the condition of the healthy and damaged REB system. The comparative experimental results of the conventional LMD-based multi-scale entropy and MPE were presented to verify the validity of the proposed technique. The study found that LMD-MPE’s integrated approach provides reliable, damage-sensitive features when analyzing the bearing condition. The results of REB experimental datasets show that the proposed approach yields more robust outcomes than existing methods. PMID:29690526
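The MPE stage of the pipeline is straightforward to state concretely: coarse-grain the signal at each scale, then compute the normalized Shannon entropy of the ordinal (permutation) patterns. The sketch below shows only this stage applied to a raw signal — the LMD step, and the authors' exact parameter choices, are omitted, and the helper names are assumptions:

```python
import math

def coarse_grain(x, scale):
    """Average non-overlapping windows of length `scale` (coarse-graining step)."""
    n = len(x) // scale
    return [sum(x[i * scale:(i + 1) * scale]) / scale for i in range(n)]

def permutation_entropy(x, m=3):
    """Normalized permutation entropy: entropy of ordinal patterns of length m."""
    counts = {}
    for i in range(len(x) - m + 1):
        pattern = tuple(sorted(range(m), key=lambda k: x[i + k]))  # ordinal ranking
        counts[pattern] = counts.get(pattern, 0) + 1
    total = sum(counts.values())
    h = -sum(c / total * math.log(c / total) for c in counts.values())
    return h / math.log(math.factorial(m))   # scale to [0, 1]

def multiscale_permutation_entropy(x, m=3, max_scale=5):
    """One entropy value per coarse-graining scale 1..max_scale."""
    return [permutation_entropy(coarse_grain(x, s), m) for s in range(1, max_scale + 1)]
```

A monotone signal has a single ordinal pattern and entropy 0, while an irregular signal approaches 1, which is why the entropy-versus-scale curve is damage-sensitive: faults alter the signal's regularity differently at different scales.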

  16. A multi-scale, multi-disciplinary approach for assessing the technological, economic and environmental performance of bio-based chemicals.

    PubMed

    Herrgård, Markus; Sukumara, Sumesh; Campodonico, Miguel; Zhuang, Kai

    2015-12-01

    In recent years, bio-based chemicals have gained interest as a renewable alternative to petrochemicals. However, there is a significant need to assess the technological, biological, economic and environmental feasibility of bio-based chemicals, particularly during the early research phase. Recently, the Multi-scale framework for Sustainable Industrial Chemicals (MuSIC) was introduced to address this issue by integrating modelling approaches at different scales ranging from cellular to ecological scales. This framework can be further extended by incorporating modelling of the petrochemical value chain and the de novo prediction of metabolic pathways connecting existing host metabolism to desirable chemical products. This multi-scale, multi-disciplinary framework for quantitative assessment of bio-based chemicals will play a vital role in supporting engineering, strategy and policy decisions as we progress towards a sustainable chemical industry. © 2015 Authors; published by Portland Press Limited.

  17. Modeling crack propagation in polycrystalline microstructure using variational multiscale method

    DOE PAGES

    Sun, Shang; Sundararaghavan, Veera

    2016-01-01

Crack propagation in a polycrystalline microstructure is analyzed using a novel multiscale model. The model includes an explicit microstructural representation at critical regions (stress concentrators such as notches and cracks) and a reduced order model that statistically captures the microstructure at regions far away from stress concentrations. Crack propagation is modeled in these critical regions using the variational multiscale method. In this approach, a discontinuous displacement field is added to elements that exceed the critical values of normal or tangential tractions during loading. Compared to traditional cohesive zone modeling approaches, the method does not require the use of any special interface elements in the microstructure and thus can model arbitrary crack paths. As a result, the capability of the method in predicting both intergranular and transgranular failure modes in an elastoplastic polycrystal is demonstrated under tensile and three-point bending loads.

  18. Data Decomposition Techniques with Multi-Scale Permutation Entropy Calculations for Bearing Fault Diagnosis.

    PubMed

    Yasir, Muhammad Naveed; Koh, Bong-Hwan

    2018-04-21

This paper presents the local mean decomposition (LMD) integrated with multi-scale permutation entropy (MPE), also known as LMD-MPE, to investigate rolling element bearing (REB) fault diagnosis from measured vibration signals. First, the LMD decomposed the vibration data or acceleration measurement into separate product functions that are composed of both amplitude and frequency modulation. MPE then calculated the statistical permutation entropy from the product functions to extract the nonlinear features to assess and classify the condition of the healthy and damaged REB system. The comparative experimental results of the conventional LMD-based multi-scale entropy and MPE were presented to verify the validity of the proposed technique. The study found that LMD-MPE’s integrated approach provides reliable, damage-sensitive features when analyzing the bearing condition. The results of REB experimental datasets show that the proposed approach yields more robust outcomes than existing methods.

  19. Beyond Low Rank + Sparse: Multi-scale Low Rank Matrix Decomposition

    PubMed Central

    Ong, Frank; Lustig, Michael

    2016-01-01

We present a natural generalization of the recent low rank + sparse matrix decomposition and consider the decomposition of matrices into components of multiple scales. Such decomposition is well motivated in practice as data matrices often exhibit local correlations in multiple scales. Concretely, we propose a multi-scale low rank modeling that represents a data matrix as a sum of block-wise low rank matrices with increasing scales of block sizes. We then consider the inverse problem of decomposing the data matrix into its multi-scale low rank components and approach the problem via a convex formulation. Theoretically, we show that under various incoherence conditions, the convex program recovers the multi-scale low rank components either exactly or approximately. Practically, we provide guidance on selecting the regularization parameters and incorporate cycle spinning to reduce blocking artifacts. Experimentally, we show that the multi-scale low rank decomposition provides a more intuitive decomposition than conventional low rank methods and demonstrate its effectiveness in four applications, including illumination normalization for face images, motion separation for surveillance videos, multi-scale modeling of dynamic contrast-enhanced magnetic resonance imaging, and collaborative filtering exploiting age information. PMID:28450978
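The representation being recovered — a sum of block-wise low rank matrices over increasing block sizes — can be made concrete with a toy sketch. Note this greedy peel-off is only an illustration of the signal model; the paper's actual method solves a convex program (a weighted sum of block-wise nuclear norms), which this sketch does not implement, and both helper names are assumptions:

```python
def rank1(M, iters=60):
    """Best rank-1 approximation of a dense matrix via plain power iteration."""
    m, n = len(M), len(M[0])
    v = [1.0] * n
    u, sigma = [0.0] * m, 1.0
    for _ in range(iters):
        u = [sum(M[i][j] * v[j] for j in range(n)) for i in range(m)]
        nu = sum(x * x for x in u) ** 0.5 or 1.0      # guard against zero blocks
        u = [x / nu for x in u]
        w = [sum(M[i][j] * u[i] for i in range(m)) for j in range(n)]
        sigma = sum(x * x for x in w) ** 0.5 or 1.0   # top singular value estimate
        v = [x / sigma for x in w]
    return [[sigma * u[i] * v[j] for j in range(n)] for i in range(m)]

def multiscale_low_rank(M, block_sizes=(4, 2, 1)):
    """Greedily peel one block-wise rank-1 layer per scale, coarsest blocks first.
    Returns one full-size component per scale plus the final residual; by
    construction the components and residual sum back to M."""
    n = len(M)
    R = [row[:] for row in M]                         # running residual
    components = []
    for b in block_sizes:
        C = [[0.0] * n for _ in range(n)]
        for r0 in range(0, n, b):
            for c0 in range(0, n, b):
                block = [[R[r0 + i][c0 + j] for j in range(b)] for i in range(b)]
                approx = rank1(block)
                for i in range(b):
                    for j in range(b):
                        C[r0 + i][c0 + j] = approx[i][j]
                        R[r0 + i][c0 + j] -= approx[i][j]
        components.append(C)
    return components, R
```

On a globally rank-1 matrix the coarsest component absorbs essentially everything, while locally correlated data would instead load the finer-scale components — the behavior the convex formulation is designed to recover with guarantees.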

  20. Anti-arrhythmic strategies for atrial fibrillation

    PubMed Central

    Grandi, Eleonora; Maleckar, Mary M.

    2016-01-01

Atrial fibrillation (AF), the most common cardiac arrhythmia, is associated with increased risk of cerebrovascular stroke, and with several other pathologies, including heart failure. Current therapies for AF are targeted at reducing risk of stroke (anticoagulation) and tachycardia-induced cardiomyopathy (rate or rhythm control). Rate control, typically achieved by atrioventricular nodal blocking drugs, is often insufficient to alleviate symptoms. Rhythm control approaches include antiarrhythmic drugs, electrical cardioversion, and ablation strategies. Here, we offer several examples of how computational modeling can provide a quantitative framework for integrating multi-scale data to: (a) gain insight into multi-scale mechanisms of AF; (b) identify and test pharmacological and electrical therapies and interventions; and (c) support clinical decisions. We review how modeling approaches have evolved and contributed to the research pipeline and preclinical development and discuss future directions and challenges in the field. PMID:27612549
