Strategies for Efficient Numerical Implementation of Hybrid Multi-scale Agent-Based Models to Describe Biological Systems
Cilfone, Nicholas A.; Kirschner, Denise E.; Linderman, Jennifer J.
2015-01-01
Biologically related processes operate across multiple spatiotemporal scales. For computational modeling methodologies to mimic this biological complexity, individual scale models must be linked in ways that allow for dynamic exchange of information across scales. A powerful methodology is to combine a discrete modeling approach, agent-based models (ABMs), with continuum models to form hybrid models. Hybrid multi-scale ABMs have been used to simulate emergent responses of biological systems. Here, we review two aspects of hybrid multi-scale ABMs: linking individual scale models and efficiently solving the resulting model. We discuss the computational choices associated with aspects of linking individual scale models while simultaneously maintaining model tractability. We demonstrate implementations of existing numerical methods in the context of hybrid multi-scale ABMs. Using an example model describing Mycobacterium tuberculosis infection, we show relative computational speeds of various combinations of numerical methods. Efficient linking and solution of hybrid multi-scale ABMs is key to model portability, modularity, and their use in understanding biological phenomena at a systems level. PMID:26366228
Multi-scale modeling in cell biology
Meier-Schellersheim, Martin; Fraser, Iain D. C.; Klauschen, Frederick
2009-01-01
Biomedical research frequently involves performing experiments and developing hypotheses that link different scales of biological systems such as, for instance, the scales of intracellular molecular interactions to the scale of cellular behavior and beyond to the behavior of cell populations. Computational modeling efforts that aim at exploring such multi-scale systems quantitatively with the help of simulations have to incorporate several different simulation techniques due to the different time and space scales involved. Here, we provide a non-technical overview of how different scales of experimental research can be combined with the appropriate computational modeling techniques. We also show that current modeling software permits building and simulating multi-scale models without having to become involved with the underlying technical details of computational modeling. PMID:20448808
Multi-Scale Computational Models for Electrical Brain Stimulation
Seo, Hyeon; Jun, Sung C.
2017-01-01
Electrical brain stimulation (EBS) is an appealing method to treat neurological disorders. To achieve optimal stimulation effects and a better understanding of the underlying brain mechanisms, neuroscientists have pursued computational modeling studies for a decade. Recently, multi-scale models that combine a volume conductor head model and multi-compartmental models of cortical neurons have been developed to predict stimulation effects on the macroscopic and microscopic levels more precisely. As the need for better computational models continues to increase, we review here recent multi-scale modeling studies, focusing on approaches that couple a simplified or high-resolution volume conductor head model with multi-compartmental models of cortical neurons and construct realistic fiber models using diffusion tensor imaging (DTI). Further implications for achieving better precision in estimating cellular responses are discussed. PMID:29123476
Multi-GPU hybrid programming accelerated three-dimensional phase-field model in binary alloy
NASA Astrophysics Data System (ADS)
Zhu, Changsheng; Liu, Jieqiong; Zhu, Mingfang; Feng, Li
2018-03-01
In the simulation of dendritic growth, computational efficiency and problem scale have an extremely important influence on the usefulness of three-dimensional phase-field models. Seeking a high-performance calculation method that improves computational efficiency and expands the attainable problem scale is therefore of great significance for research on the microstructure of materials. A high-performance calculation method based on an MPI+CUDA hybrid programming model is introduced. Multiple GPUs are used to implement quantitative numerical simulations of a three-dimensional phase-field model in a binary alloy with coupled multi-physical processes. The acceleration achieved by different numbers of GPU nodes at different calculation scales is explored. Building on this multi-GPU calculation model, two optimization schemes are proposed: non-blocking communication optimization and overlap of MPI and GPU computing. The results of the two optimization schemes and the basic multi-GPU model are compared. The calculation results show that the multi-GPU calculation model clearly improves the computational efficiency of the three-dimensional phase-field model, achieving a 13-fold speedup over a single GPU, and the problem scale has been expanded to 8193. Both optimization schemes are shown to be feasible, and the overlap of MPI and GPU computing performs better, reaching 1.7 times the performance of the basic multi-GPU model when 21 GPUs are used.
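As an illustration of the second optimization scheme, here is a minimal sketch of a non-blocking halo exchange overlapped with interior computation, written with mpi4py and NumPy on a 1D slab; the paper's implementation uses 3D CUDA kernels, and the toy diffusion stencil and all names here are assumptions for illustration only.

```python
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()
left, right = (rank - 1) % size, (rank + 1) % size

n = 1024
phi = np.random.rand(n + 2)  # local slab plus one ghost cell per side

for step in range(100):
    # Post the ghost-cell exchange first, without blocking...
    reqs = [comm.Isend(phi[1:2],   dest=left,  tag=0),
            comm.Isend(phi[n:n+1], dest=right, tag=1),
            comm.Irecv(phi[0:1],   source=left,  tag=1),
            comm.Irecv(phi[n+1:],  source=right, tag=0)]
    # ...then update the interior, which needs no remote data, so the
    # communication overlaps the computation (on a GPU, this would be
    # the phase-field kernel launch).
    new = phi.copy()
    new[2:n] = phi[2:n] + 0.1 * (phi[1:n-1] - 2 * phi[2:n] + phi[3:n+1])
    MPI.Request.Waitall(reqs)  # ghost cells are now valid
    new[1] = phi[1] + 0.1 * (phi[0] - 2 * phi[1] + phi[2])
    new[n] = phi[n] + 0.1 * (phi[n-1] - 2 * phi[n] + phi[n+1])
    phi = new
```

Launched with, e.g., mpirun -n 4 python overlap.py, this post-compute-wait pattern is what hides communication latency behind the stencil update.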
Multi-scale Computational Electromagnetics for Phenomenology and Saliency Characterization in Remote Sensing
Hean-Teik
2016-07-15
Report AFRL-AFOSR-JP-TR-2016-0068: applies multi-scale computational electromagnetics to microwave remote sensing and extends the modelling capability with computational flexibility.
Users matter: multi-agent systems model of high performance computing cluster users.
DOE Office of Scientific and Technical Information (OSTI.GOV)
North, M. J.; Hood, C. S.; Decision and Information Sciences
2005-01-01
High performance computing clusters have been a critical resource for computational science for over a decade and have more recently become integral to large-scale industrial analysis. Despite their well-specified components, the aggregate behavior of clusters is poorly understood. The difficulties arise from complicated interactions between cluster components during operation. These interactions have been studied by many researchers, some of whom have identified the need for holistic multi-scale modeling that simultaneously includes network level, operating system level, process level, and user level behaviors. Each of these levels presents its own modeling challenges, but the user level is the most complex due to the adaptability of human beings. In this vein, there are several major user modeling goals, namely descriptive modeling, predictive modeling and automated weakness discovery. This study shows how multi-agent techniques were used to simulate a large-scale computing cluster at each of these levels.
Microphysics in the Multi-Scale Modeling Systems with Unified Physics
NASA Technical Reports Server (NTRS)
Tao, Wei-Kuo; Chern, J.; Lang, S.; Matsui, T.; Shen, B.; Zeng, X.; Shi, R.
2011-01-01
In recent years, exponentially increasing computer power has extended Cloud Resolving Model (CRM) integrations from hours to months and the number of computational grid points from less than a thousand to close to ten million. Three-dimensional models are now more prevalent. Much attention is devoted to precipitating cloud systems where the crucial 1-km scales are resolved in horizontal domains as large as 10,000 km in two dimensions, and 1,000 x 1,000 km2 in three dimensions. Cloud resolving models now provide statistical information useful for developing more realistic, physically based parameterizations for climate models and numerical weather prediction models. It is also expected that NWP and mesoscale models can be run at grid sizes similar to cloud resolving models through nesting techniques. Recently, a multi-scale modeling system with unified physics was developed at NASA Goddard. It consists of (1) a cloud-resolving model (Goddard Cumulus Ensemble model, GCE model), (2) a regional scale model (a NASA unified Weather Research and Forecast model, WRF), (3) a coupled CRM and global model (Goddard Multi-scale Modeling Framework, MMF), and (4) a land modeling system. The same microphysical processes, long- and short-wave radiative transfer, land processes, and the explicit cloud-radiation and cloud-surface interactive processes are applied in this multi-scale modeling system. The modeling system has been coupled with a multi-satellite simulator to use NASA high-resolution satellite data to identify the strengths and weaknesses of cloud and precipitation processes simulated by the model. In this talk, the microphysics developments of the multi-scale modeling system will be presented. In particular, results from using the multi-scale modeling system to study heavy precipitation processes will be presented.
Multi-Scale Computational Modeling of Two-Phased Metal Using GMC Method
NASA Technical Reports Server (NTRS)
Moghaddam, Masoud Ghorbani; Achuthan, A.; Bednarcyk, B. A.; Arnold, S. M.; Pineda, E. J.
2014-01-01
A multi-scale computational model for determining the plastic behavior of two-phased CMSX-4 Ni-based superalloys is developed on a finite element analysis (FEA) framework employing a crystal plasticity constitutive model that can capture the microstructural-scale stress field. The generalized method of cells (GMC) micromechanics model is used for homogenizing the local field quantities. First, stand-alone GMC is validated by analyzing a repeating unit cell (RUC) as a two-phased sample with a 72.9% volume fraction of gamma'-precipitate in the gamma-matrix phase and comparing the results with those predicted by FEA models incorporating the same crystal plasticity constitutive model. The global stress-strain behavior and the local field quantity distributions predicted by GMC demonstrated good agreement with FEA. GMC provided large computational savings at the expense of some accuracy in the components of the local tensor field quantities. Finally, the capability of the developed multi-scale model linking FEA and GMC to solve real-life-sized structures is demonstrated by analyzing an engine disc component and determining the microstructural-scale details of the field quantities.
A FRAMEWORK FOR FINE-SCALE COMPUTATIONAL FLUID DYNAMICS AIR QUALITY MODELING AND ANALYSIS
This paper discusses a framework for fine-scale CFD modeling that may be developed to complement the present Community Multi-scale Air Quality (CMAQ) modeling system which itself is a computational fluid dynamics model. A goal of this presentation is to stimulate discussions on w...
Plank, G; Prassl, AJ; Augustin, C
2014-01-01
Despite the evident multiphysics nature of the heart – it is an electrically controlled mechanical pump – most modeling studies have considered electrophysiology and mechanics in isolation. In no small part, this is due to the formidable modeling challenges involved in building strongly coupled, anatomically accurate and biophysically detailed multi-scale multi-physics models of cardiac electro-mechanics. Among the main challenges are the selection of model components and their adjustment to achieve integration into a consistent organ-scale model; dealing with technical difficulties such as the exchange of data between the electrophysiological and mechanical models, particularly when different spatio-temporal grids are used for discretization; and, finally, the implementation of advanced numerical techniques to deal with the substantial computational burden. In this study we report on progress made in developing a novel modeling framework suited to tackle these challenges. PMID:24043050
CPMIP: measurements of real computational performance of Earth system models in CMIP6
NASA Astrophysics Data System (ADS)
Balaji, Venkatramani; Maisonnave, Eric; Zadeh, Niki; Lawrence, Bryan N.; Biercamp, Joachim; Fladrich, Uwe; Aloisio, Giovanni; Benson, Rusty; Caubel, Arnaud; Durachta, Jeffrey; Foujols, Marie-Alice; Lister, Grenville; Mocavero, Silvia; Underwood, Seth; Wright, Garrett
2017-01-01
A climate model represents a multitude of processes on a variety of timescales and space scales: a canonical example of multi-physics multi-scale modeling. The underlying climate system is physically characterized by sensitive dependence on initial conditions, and natural stochastic variability, so very long integrations are needed to extract signals of climate change. Algorithms generally possess weak scaling and can be I/O and/or memory-bound. Such weak-scaling, I/O, and memory-bound multi-physics codes present particular challenges to computational performance. Traditional metrics of computational efficiency such as performance counters and scaling curves do not tell us enough about real sustained performance from climate models on different machines. They also do not provide a satisfactory basis for comparative information across models. We introduce a set of metrics that can be used for the study of computational performance of climate (and Earth system) models. These measures do not require specialized software or specific hardware counters, and should be accessible to anyone. They are independent of platform and underlying parallel programming models. We show how these metrics can be used to measure actually attained performance of Earth system models on different machines, and identify the most fruitful areas of research and development for performance engineering. We present results for these measures for a diverse suite of models from several modeling centers, and propose to use these measures as the basis for CPMIP, a computational performance model intercomparison project (MIP).
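Two of the paper's headline measures, simulated years per day (SYPD) and core-hours per simulated year (CHSY), can be computed from bookkeeping data alone; a minimal sketch in Python, with invented example numbers:

```python
def sypd(simulated_years: float, wallclock_hours: float) -> float:
    """Simulated Years Per Day: throughput actually attained on the machine."""
    return simulated_years / (wallclock_hours / 24.0)

def chsy(cores: int, wallclock_hours: float, simulated_years: float) -> float:
    """Core-Hours per Simulated Year: the cost measure dual to SYPD."""
    return cores * wallclock_hours / simulated_years

# Example: 10 simulated years in 48 wallclock hours on 9216 cores.
print(sypd(10, 48))         # 5.0 SYPD
print(chsy(9216, 48, 10))   # 44236.8 CHSY
```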
A Multi-scale Modeling System with Unified Physics to Study Precipitation Processes
NASA Astrophysics Data System (ADS)
Tao, W. K.
2017-12-01
In recent years, exponentially increasing computer power has extended Cloud Resolving Model (CRM) integrations from hours to months and the number of computational grid points from less than a thousand to close to ten million. Three-dimensional models are now more prevalent. Much attention is devoted to precipitating cloud systems where the crucial 1-km scales are resolved in horizontal domains as large as 10,000 km in two dimensions, and 1,000 x 1,000 km2 in three dimensions. Cloud resolving models now provide statistical information useful for developing more realistic, physically based parameterizations for climate models and numerical weather prediction models. It is also expected that NWP and mesoscale models can be run at grid sizes similar to cloud resolving models through nesting techniques. Recently, a multi-scale modeling system with unified physics was developed at NASA Goddard. It consists of (1) a cloud-resolving model (Goddard Cumulus Ensemble model, GCE model), (2) a regional scale model (a NASA unified Weather Research and Forecast model, WRF), and (3) a coupled CRM and global model (Goddard Multi-scale Modeling Framework, MMF). The same microphysical processes, long- and short-wave radiative transfer, land processes, and the explicit cloud-radiation and cloud-land surface interactive processes are applied in this multi-scale modeling system. The modeling system has been coupled with a multi-satellite simulator to use NASA high-resolution satellite data to identify the strengths and weaknesses of cloud and precipitation processes simulated by the model. In this talk, a review of developments and applications of the multi-scale modeling system will be presented. In particular, results from using the multi-scale modeling system to study precipitation processes and their sensitivity to model resolution and microphysics schemes will be presented. Also, how to use the multi-satellite simulator to improve the representation of precipitation processes will be discussed.
Using Multi-Scale Modeling Systems and Satellite Data to Study the Precipitation Processes
NASA Technical Reports Server (NTRS)
Tao, Wei-Kuo; Chern, J.; Lang, S.; Matsui, T.; Shen, B.; Zeng, X.; Shi, R.
2011-01-01
In recent years, exponentially increasing computer power has extended Cloud Resolving Model (CRM) integrations from hours to months and the number of computational grid points from less than a thousand to close to ten million. Three-dimensional models are now more prevalent. Much attention is devoted to precipitating cloud systems where the crucial 1-km scales are resolved in horizontal domains as large as 10,000 km in two dimensions, and 1,000 x 1,000 km2 in three dimensions. Cloud resolving models now provide statistical information useful for developing more realistic, physically based parameterizations for climate models and numerical weather prediction models. It is also expected that NWP and mesoscale models can be run at grid sizes similar to cloud resolving models through nesting techniques. Recently, a multi-scale modeling system with unified physics was developed at NASA Goddard. It consists of (1) a cloud-resolving model (Goddard Cumulus Ensemble model, GCE model), (2) a regional scale model (a NASA unified Weather Research and Forecast model, WRF), (3) a coupled CRM and global model (Goddard Multi-scale Modeling Framework, MMF), and (4) a land modeling system. The same microphysical processes, long- and short-wave radiative transfer, land processes, and the explicit cloud-radiation and cloud-land surface interactive processes are applied in this multi-scale modeling system. The modeling system has been coupled with a multi-satellite simulator to use NASA high-resolution satellite data to identify the strengths and weaknesses of cloud and precipitation processes simulated by the model. In this talk, the recent developments and applications of the multi-scale modeling system will be presented. In particular, results from using the multi-scale modeling system to study precipitating systems and hurricanes/typhoons will be presented. High-resolution spatial and temporal visualization will be utilized to show the evolution of precipitation processes. Also, how to use the multi-satellite simulator to improve the representation of precipitation processes will be discussed.
Using Multi-Scale Modeling Systems and Satellite Data to Study the Precipitation Processes
NASA Technical Reports Server (NTRS)
Tao, Wei-Kuo; Chern, J.; Lang, S.; Matsui, T.; Shen, B.; Zeng, X.; Shi, R.
2010-01-01
In recent years, exponentially increasing computer power has extended Cloud Resolving Model (CRM) integrations from hours to months and the number of computational grid points from less than a thousand to close to ten million. Three-dimensional models are now more prevalent. Much attention is devoted to precipitating cloud systems where the crucial 1-km scales are resolved in horizontal domains as large as 10,000 km in two dimensions, and 1,000 x 1,000 km2 in three dimensions. Cloud resolving models now provide statistical information useful for developing more realistic, physically based parameterizations for climate models and numerical weather prediction models. It is also expected that NWP and mesoscale models can be run at grid sizes similar to cloud resolving models through nesting techniques. Recently, a multi-scale modeling system with unified physics was developed at NASA Goddard. It consists of (1) a cloud-resolving model (Goddard Cumulus Ensemble model, GCE model), (2) a regional scale model (a NASA unified Weather Research and Forecast model, WRF), (3) a coupled CRM and global model (Goddard Multi-scale Modeling Framework, MMF), and (4) a land modeling system. The same microphysical processes, long- and short-wave radiative transfer, land processes, and the explicit cloud-radiation and cloud-land surface interactive processes are applied in this multi-scale modeling system. The modeling system has been coupled with a multi-satellite simulator to use NASA high-resolution satellite data to identify the strengths and weaknesses of cloud and precipitation processes simulated by the model. In this talk, a review of developments and applications of the multi-scale modeling system will be presented. In particular, results from using multi-scale modeling systems to study the interactions between clouds, precipitation, and aerosols will be presented. Also, how to use the multi-satellite simulator to improve the representation of precipitation processes will be discussed.
Using Multi-Scale Modeling Systems to Study the Precipitation Processes
NASA Technical Reports Server (NTRS)
Tao, Wei-Kuo
2010-01-01
In recent years, exponentially increasing computer power has extended Cloud Resolving Model (CRM) integrations from hours to months and the number of computational grid points from less than a thousand to close to ten million. Three-dimensional models are now more prevalent. Much attention is devoted to precipitating cloud systems where the crucial 1-km scales are resolved in horizontal domains as large as 10,000 km in two dimensions, and 1,000 x 1,000 km2 in three dimensions. Cloud resolving models now provide statistical information useful for developing more realistic, physically based parameterizations for climate models and numerical weather prediction models. It is also expected that NWP and mesoscale models can be run at grid sizes similar to cloud resolving models through nesting techniques. Recently, a multi-scale modeling system with unified physics was developed at NASA Goddard. It consists of (1) a cloud-resolving model (Goddard Cumulus Ensemble model, GCE model), (2) a regional scale model (a NASA unified Weather Research and Forecast model, WRF), (3) a coupled CRM and global model (Goddard Multi-scale Modeling Framework, MMF), and (4) a land modeling system. The same microphysical processes, long- and short-wave radiative transfer, land processes, and the explicit cloud-radiation and cloud-land surface interactive processes are applied in this multi-scale modeling system. The modeling system has been coupled with a multi-satellite simulator to use NASA high-resolution satellite data to identify the strengths and weaknesses of cloud and precipitation processes simulated by the model. In this talk, a review of developments and applications of the multi-scale modeling system will be presented. In particular, results from using the multi-scale modeling system to study the interactions between clouds, precipitation, and aerosols will be presented. Also, how to use the multi-satellite simulator to improve the representation of precipitation processes will be discussed.
Capturing remote mixing due to internal tides using multi-scale modeling tool: SOMAR-LES
NASA Astrophysics Data System (ADS)
Santilli, Edward; Chalamalla, Vamsi; Scotti, Alberto; Sarkar, Sutanu
2016-11-01
Internal tides that are generated during the interaction of an oscillating barotropic tide with the bottom bathymetry dissipate only a fraction of their energy near the generation region. The rest is radiated away in the form of low- and high-mode internal tides. These internal tides dissipate energy at remote locations when they interact with the upper ocean pycnocline, the continental slope, and large-scale eddies. Capturing the wide range of length and time scales involved during the life cycle of internal tides is computationally very expensive. A recently developed multi-scale modeling tool called SOMAR-LES combines the adaptive grid refinement features of SOMAR with the turbulence modeling features of a Large Eddy Simulation (LES) to capture multi-scale processes at a reduced computational cost. Numerical simulations of internal tide generation over idealized bottom bathymetries are performed to demonstrate this multi-scale modeling technique. Although each of these remote mixing phenomena has been considered independently in previous studies, this work aims to capture remote mixing processes during the life cycle of an internal tide in more realistic settings, by allowing multi-level (coarse and fine) grids to co-exist and exchange information during the time stepping process.
Blood Flow: Multi-scale Modeling and Visualization (July 2011)
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
2011-01-01
Multi-scale modeling of arterial blood flow can shed light on the interaction between events happening at micro- and meso-scales (i.e., adhesion of red blood cells to the arterial wall, clot formation) and at macro-scales (i.e., change in flow patterns due to the clot). Coupled numerical simulations of such multi-scale flow require state-of-the-art computers and algorithms, along with techniques for multi-scale visualization. This animation presents early results of two studies used in the development of a multi-scale visualization methodology. The first illustrates a flow of healthy (red) and diseased (blue) blood cells with a Dissipative Particle Dynamics (DPD) method. Each blood cell is represented by a mesh, small spheres show a sub-set of particles representing the blood plasma, while instantaneous streamlines and slices represent the ensemble average velocity. In the second we investigate the process of thrombus (blood clot) formation, which may be responsible for the rupture of aneurysms, by concentrating on the platelet blood cells, observing as they aggregate on the wall of an aneurysm. Simulation was performed on Kraken at the National Institute for Computational Sciences. Visualization was produced using resources of the Argonne Leadership Computing Facility at Argonne National Laboratory.
Alsmadi, Othman M K; Abo-Hammour, Zaer S
2015-01-01
A robust computational technique for model order reduction (MOR) of multi-time-scale discrete systems (single-input single-output (SISO) and multi-input multi-output (MIMO)) is presented in this paper. This work is motivated by the singular perturbation of multi-time-scale systems, in which some specific dynamics may not have a significant influence on the overall system behavior. The new approach is proposed using genetic algorithms (GA), with the advantages of obtaining a reduced order model, maintaining the exact dominant dynamics in the reduced order, and minimizing the steady state error. The reduction process is performed by obtaining an upper triangular transformed matrix of the system state matrix defined in state space representation, along with the elements of the B, C, and D matrices. The GA computational procedure is based on maximizing a fitness function corresponding to the response deviation between the full and reduced order models. The proposed computational intelligence MOR method is compared to recently published work on MOR techniques, and simulation results show the potential and advantages of the new approach.
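A minimal sketch of the idea: reduce a fourth-order discrete SISO system to second order with a toy genetic algorithm that scores candidates by step-response deviation. NumPy only; the test system, operators and rates are illustrative assumptions, not the authors' settings.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(a, b, u):
    """Difference equation y[k] = -sum(a[i] y[k-1-i]) + sum(b[j] u[k-j])."""
    y = np.zeros_like(u)
    for k in range(len(u)):
        acc = sum(b[j] * u[k - j] for j in range(len(b)) if k - j >= 0)
        acc -= sum(a[i] * y[k - 1 - i] for i in range(len(a)) if k - 1 - i >= 0)
        y[k] = acc
    return y

u = np.ones(100)                              # step input
a_full = np.poly([0.9, 0.5, 0.3, -0.2])[1:]   # stable 4th-order denominator
y_full = simulate(a_full, [0.1], u)

def fitness(p):                               # p = [a1, a2, b0]
    return -np.sum((y_full - simulate(p[:2], p[2:], u)) ** 2)

pop = rng.uniform(-1, 1, size=(60, 3))
for gen in range(200):
    f = np.array([fitness(p) for p in pop])
    parents = pop[np.argsort(f)[::-1][:20]]   # truncation selection
    children = []
    while len(children) < len(pop) - len(parents):
        pa, pb = parents[rng.integers(len(parents), size=2)]
        w = rng.random()
        child = w * pa + (1 - w) * pb         # blend crossover
        children.append(child + rng.normal(0, 0.02, size=3))  # Gaussian mutation
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(p) for p in pop])]
print("reduced model [a1, a2, b0]:", best)
```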
Nagaoka, Tomoaki; Watanabe, Soichi
2012-01-01
Electromagnetic simulation with an anatomically realistic computational human model using the finite-difference time domain (FDTD) method has recently been performed in a number of fields in biomedical engineering. To improve the method's calculation speed and realize large-scale computing with the computational human model, we adapted three-dimensional FDTD code to a multi-GPU cluster environment with Compute Unified Device Architecture and Message Passing Interface. Our multi-GPU cluster system consists of three nodes, with seven GPU boards (NVIDIA Tesla C2070) mounted on each node. We examined the performance of the FDTD calculation in the multi-GPU cluster environment. We confirmed that the FDTD calculation on the multi-GPU cluster is faster than that on a multi-GPU single workstation, and we also found that the GPU cluster system calculates faster than a vector supercomputer. In addition, our GPU cluster system allowed us to perform large-scale FDTD calculations because we were able to use over 100 GB of GPU memory.
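The per-cell update at the core of such an FDTD code is compact. A single-node, 1D free-space sketch in NumPy follows; the paper's code is a 3D CUDA implementation with an anatomical body model, and on the cluster each GPU runs the equivalent loop on its own sub-domain between MPI halo exchanges.

```python
import numpy as np

nz, nt = 400, 600
imp0 = 377.0                 # free-space impedance (normalized grid units)
ez = np.zeros(nz)
hy = np.zeros(nz - 1)

for t in range(nt):
    hy += np.diff(ez) / imp0                         # H update from curl of E
    ez[1:-1] += imp0 * np.diff(hy)                   # E update from curl of H
    ez[nz // 4] += np.exp(-((t - 60) / 15.0) ** 2)   # soft Gaussian source

print("peak |Ez| =", np.abs(ez).max())
```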
Multi-scale computational modeling of developmental biology.
Setty, Yaki
2012-08-01
Normal development of multicellular organisms is regulated by a highly complex process in which a set of precursor cells proliferate, differentiate and move, forming over time a functioning tissue. To handle their complexity, developmental systems can be studied over distinct scales. The dynamics of each scale is determined by the collective activity of entities at the scale below it. I describe a multi-scale computational approach for modeling developmental systems and detail the methodology through a synthetic example of a developmental system that retains key features of real developmental systems. I discuss the simulation of the system as it emerges from cross-scale and intra-scale interactions and describe how an in silico study can be carried out by modifying these interactions in a way that mimics in vivo experiments. I highlight biological features of the results through a comparison with findings in Caenorhabditis elegans germline development, and finally discuss the applications of the approach to real developmental systems and propose future extensions. The source code of the model of the synthetic developmental system can be found at www.wisdom.weizmann.ac.il/~yaki/MultiScaleModel. yaki.setty@gmail.com Supplementary data are available at Bioinformatics online.
Multi-Length Scale-Enriched Continuum-Level Material Model for Kevlar-Fiber-Reinforced Polymer-Matrix Composites
Grujicic, M.; Pandurangan, B.; J.S...
2013-03-01
An extensive set of molecular-level computational analyses regarding the role of various microstructural/morphological defects in Kevlar fibers, linked to the behavior of coarser-scale materials and structures containing Kevlar fibers (e.g., yarns, fabrics, plies, lamina, and laminates). Journal of Materials...
Multiscale Modeling in the Clinic: Drug Design and Development
DOE Office of Scientific and Technical Information (OSTI.GOV)
Clancy, Colleen E.; An, Gary; Cannon, William R.
A wide range of length and time scales are relevant to pharmacology, especially in drug development, drug design and drug delivery. Therefore, multi-scale computational modeling and simulation methods and paradigms that advance the linkage of phenomena occurring at these multiple scales have become increasingly important. Multi-scale approaches present in silico opportunities to advance laboratory research to bedside clinical applications in pharmaceuticals research. This is achievable through the capability of modeling to reveal phenomena occurring across multiple spatial and temporal scales, which are not otherwise readily accessible to experimentation. The resultant models, when validated, are capable of making testable predictions to guide drug design and delivery. In this review we describe the goals, methods, and opportunities of multi-scale modeling in drug design and development. We demonstrate the impact of multiple scales of modeling in this field. We indicate the common mathematical techniques employed for multi-scale modeling approaches used in pharmacology and present several examples illustrating the current state-of-the-art regarding drug development for: Excitable Systems (Heart); Cancer (Metastasis and Differentiation); Cancer (Angiogenesis and Drug Targeting); Metabolic Disorders; and Inflammation and Sepsis. We conclude with a focus on barriers to successful clinical translation of drug development, drug design and drug delivery multi-scale models.
Integrative Systems Models of Cardiac Excitation Contraction Coupling
Greenstein, Joseph L.; Winslow, Raimond L.
2010-01-01
Excitation-contraction coupling in the cardiac myocyte is mediated by a number of highly integrated mechanisms of intracellular Ca2+ transport. The complexity and integrative nature of heart cell electrophysiology and Ca2+-cycling has led to an evolution of computational models that have played a crucial role in shaping our understanding of heart function. An important emerging theme in systems biology is that the detailed nature of local signaling events, such as those that occur in the cardiac dyad, have important consequences at higher biological scales. Multi-scale modeling techniques have revealed many mechanistic links between micro-scale events, such as Ca2+ binding to a channel protein, and macro-scale phenomena, such as excitation-contraction coupling gain. Here we review experimentally based multi-scale computational models of excitation-contraction coupling and the insights that have been gained through their application. PMID:21212390
Multi-scale Modeling in Clinical Oncology: Opportunities and Barriers to Success.
Yankeelov, Thomas E; An, Gary; Saut, Oliver; Luebeck, E Georg; Popel, Aleksander S; Ribba, Benjamin; Vicini, Paolo; Zhou, Xiaobo; Weis, Jared A; Ye, Kaiming; Genin, Guy M
2016-09-01
Hierarchical processes spanning several orders of magnitude of both space and time underlie nearly all cancers. Multi-scale statistical, mathematical, and computational modeling methods are central to designing, implementing and assessing treatment strategies that account for these hierarchies. The basic science underlying these modeling efforts is maturing into a new discipline that is close to influencing and facilitating clinical successes. The purpose of this review is to capture the state-of-the-art as well as the key barriers to success for multi-scale modeling in clinical oncology. We begin with a summary of the long-envisioned promise of multi-scale modeling in clinical oncology, including the synthesis of disparate data types into models that reveal underlying mechanisms and allow for experimental testing of hypotheses. We then evaluate the mathematical techniques employed most widely and present several examples illustrating their application as well as the current gap between pre-clinical and clinical applications. We conclude with a discussion of what we view to be the key challenges and opportunities for multi-scale modeling in clinical oncology. PMID:27384942
Multi-scale Material Appearance
NASA Astrophysics Data System (ADS)
Wu, Hongzhi
Modeling and rendering the appearance of materials is important for a diverse range of computer graphics applications, from automobile design to movies and cultural heritage. The appearance of materials varies considerably at different scales, posing significant challenges due to the sheer complexity of the data, as well as the need to maintain inter-scale consistency constraints. This thesis presents a series of studies around the modeling, rendering and editing of multi-scale material appearance. To efficiently render material appearance at multiple scales, we develop an object-space precomputed adaptive sampling method, which precomputes a hierarchy of view-independent points that preserve multi-level appearance. To support bi-scale material appearance design, we propose a novel reflectance filtering algorithm, which rapidly computes the large-scale appearance from small-scale details by exploiting the low-rank structures of Bidirectional Visible Normal Distribution Functions and pre-rotated Bidirectional Reflectance Distribution Functions in the matrix formulation of the rendering algorithm. This approach can guide the physical realization of appearance, as well as the modeling of real-world materials using very sparse measurements. Finally, we present a bi-scale-inspired, high-quality general representation for material appearance described by Bidirectional Texture Functions. Our representation is at once compact, easily editable, and amenable to efficient rendering.
NREL Kicks Off Next Phase of Advanced Computer-Aided Battery Engineering
2016-03-16
NREL's multi-scale multi-domain (GH-MSMD) model framework for lithium-ion (Li-ion) batteries enters the next phase of the laboratory's computer-aided battery engineering effort.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jablonowski, Christiane
The research investigates and advances strategies for bridging the scale discrepancies between local, regional and global phenomena in climate models without the prohibitive computational costs of global cloud-resolving simulations. In particular, the research explores new frontiers in computational geoscience by introducing high-order Adaptive Mesh Refinement (AMR) techniques into climate research. AMR and statically-adapted variable-resolution approaches represent an emerging trend for atmospheric models and are likely to become the new norm in future-generation weather and climate models. The research advances the understanding of multi-scale interactions in the climate system and showcases a pathway for modeling these interactions effectively with advanced computational tools, like the Chombo AMR library developed at the Lawrence Berkeley National Laboratory. The research is interdisciplinary and combines applied mathematics, scientific computing and the atmospheric sciences. In this research project, a hierarchy of high-order atmospheric models on cubed-sphere computational grids has been developed that serves as an algorithmic prototype for the finite-volume solution-adaptive Chombo-AMR approach. The foci of the investigations have been the characteristics of both static mesh adaptations and dynamically-adaptive grids that can capture flow fields of interest, like tropical cyclones. Six research themes were chosen: (1) the introduction of adaptive mesh refinement techniques into the climate sciences, (2) advanced algorithms for nonhydrostatic atmospheric dynamical cores, (3) an assessment of the interplay between resolved-scale dynamical motions and subgrid-scale physical parameterizations, (4) evaluation techniques for atmospheric model hierarchies, (5) the comparison of AMR refinement strategies, and (6) tropical cyclone studies with a focus on multi-scale interactions and variable-resolution modeling. The results of this research project demonstrate significant advances in all six research areas. The major conclusions are that statically-adaptive variable-resolution modeling is currently becoming mature in the climate sciences, and that AMR holds outstanding promise for future-generation weather and climate models on high-performance computing architectures.
NASA Astrophysics Data System (ADS)
Phillips, M.; Denning, A. S.; Randall, D. A.; Branson, M.
2016-12-01
Multi-scale models of the atmosphere provide an opportunity to investigate processes that are unresolved by traditional Global Climate Models (GCMs) while remaining viable, in terms of computational resources, for climate-length time scales. The Multi-scale Modeling Framework (MMF) represents a shift away from the large horizontal grid spacing of traditional GCMs, which leads to overabundant light precipitation and a lack of heavy events, toward a model in which precipitation intensity is allowed to vary over a much wider range of values. Resolving atmospheric motions at the scale of 4 km makes it possible to recover features of precipitation, such as intense downpours, that were previously obtained only by computationally expensive regional simulations. These heavy precipitation events may have little impact on large-scale moisture and energy budgets, but are outstanding in terms of interaction with the land surface and potential impact on human life. Three versions of the Community Earth System Model were used in this study: the standard CESM; the multi-scale 'Super-Parameterized' CESM, in which large-scale parameterizations have been replaced with a 2D cloud-permitting model; and a multi-instance land version of the SP-CESM, in which each column of the 2D CRM is allowed to interact with an individual land unit. These simulations were carried out using prescribed Sea Surface Temperatures for the period 1979-2006, with daily precipitation saved for all 28 years. Comparisons of the statistical properties of precipitation between model architectures and against observations from rain gauges were made, with specific focus on the detection and evaluation of extreme precipitation events.
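A sketch of the kind of intensity-distribution comparison described here: bin daily precipitation into intensity classes and compare class frequencies and an extreme quantile across model variants. The data below are synthetic stand-ins, not the study's output.

```python
import numpy as np

rng = np.random.default_rng(5)
daily = {"CESM":    rng.gamma(0.40,  6.0, size=10000),   # mm/day, illustrative
         "SP-CESM": rng.gamma(0.25, 12.0, size=10000)}

bins = [0.1, 1, 5, 10, 25, 50, 1000]   # intensity classes (mm/day)
for name, p in daily.items():
    hist, _ = np.histogram(p, bins=bins)
    print(name, "class freq:", np.round(hist / p.size, 3),
          "p99.9 =", round(float(np.quantile(p, 0.999)), 1), "mm/day")
```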
[The research on bidirectional reflectance computer simulation of forest canopy at pixel scale].
Song, Jin-Ling; Wang, Jin-Di; Shuai, Yan-Min; Xiao, Zhi-Qiang
2009-08-01
Computer simulation based on computer graphics generates realistic 3D structural scenes of vegetation and simulates the canopy radiation regime using the radiosity method. In the present paper, the authors extend the computer simulation model to simulate forest canopy bidirectional reflectance at the pixel scale. Trees, however, are complex structures: they are tall and have many branches, so hundreds of thousands or even millions of facets are needed to build a realistic structural scene for a forest, and it is difficult for the radiosity method to compute so many facets. To make the radiosity method practical for forest scenes at the pixel scale, the authors simplify the structure of the forest crowns by abstracting them as ellipsoids. Based on the optical characteristics of the tree components and on the internal energy transport of photons in real crowns, the authors assign optical properties to the ellipsoid surface facets. In the computer simulation of the forest, following the idea of geometric-optics models, a gap model is used to obtain the forest canopy bidirectional reflectance at the pixel scale. The simulation results agree with both the GOMS model and Multi-angle Imaging SpectroRadiometer (MISR) multi-angle remote sensing data (MISR BRF), although some problems remain to be solved. The authors conclude that the study has important value for the application of multi-angle remote sensing and the inversion of vegetation canopy structure parameters.
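As a toy version of the gap-model step, the pixel reflectance can be written as a gap-probability-weighted mixture of crown and soil signatures; the Beer-law gap formula and the component reflectances below are standard simplifications assumed for illustration, not the authors' exact formulation.

```python
import numpy as np

def gap_fraction(lai, theta, G=0.5):
    """Beer-law probability of seeing through a canopy with leaf area index
    `lai` at view zenith angle `theta` (radians); G is the leaf projection factor."""
    return np.exp(-G * lai / np.cos(theta))

def pixel_brf(lai, theta_v, r_crown=0.05, r_soil=0.15):
    """One forest pixel: soil seen through gaps, crown surface otherwise."""
    p = gap_fraction(lai, theta_v)
    return p * r_soil + (1.0 - p) * r_crown

for deg in (0, 30, 60):
    print(deg, "deg:", round(pixel_brf(lai=3.0, theta_v=np.radians(deg)), 4))
```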
Large scale cardiac modeling on the Blue Gene supercomputer.
Reumann, Matthias; Fitch, Blake G; Rayshubskiy, Aleksandr; Keller, David U; Weiss, Daniel L; Seemann, Gunnar; Dössel, Olaf; Pitman, Michael C; Rice, John J
2008-01-01
Multi-scale, multi-physical heart models have not yet been able to include a high degree of accuracy with respect to model detail and spatial resolution due to the computational limitations of current systems. We propose a framework to compute large-scale cardiac models. Decomposition of anatomical data into segments to be distributed on a parallel computer is carried out by optimal recursive bisection (ORB). The algorithm takes into account a computational load parameter which has to be adjusted according to the cell models used. The diffusion term is realized by the monodomain equations. The anatomical data set was given by both ventricles of the Visible Female data set at 0.2 mm resolution. Heterogeneous anisotropy was included in the computation. Model weights as input for the decomposition and load balancing were set to (a) 1 for tissue and 0 for non-tissue elements; (b) 10 for tissue and 1 for non-tissue elements. Scaling results for 512, 1024, 2048, 4096 and 8192 computational nodes were obtained for 10 ms simulation time. The simulations were carried out on an IBM Blue Gene/L parallel computer. A 1 s simulation was then carried out on 2048 nodes for the optimal model load. Load balances did not differ significantly across computational nodes even when the number of data elements distributed to each node differed greatly. Since the ORB algorithm did not take into account the computational load due to communication cycles, the speedup is close to optimal for the computation time but not optimal overall due to the communication overhead. However, the simulation times were reduced from 87 minutes on 512 nodes to 11 minutes on 8192 nodes. This work demonstrates that it is possible to run simulations of the presented detailed cardiac model within hours for the simulation of a heartbeat.
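A compact sketch of the ORB step itself: recursively bisect the longest axis of a voxel weight map at the point where the cumulative weight halves, here with the paper's scheme (b) weights of 10 for tissue and 1 for non-tissue. The toy spherical anatomy is an assumption for illustration.

```python
import numpy as np

def orb(weights, box, parts):
    """Bisect `box` (one (lo, hi) pair per axis) into `parts` sub-boxes of
    roughly equal total weight; `parts` must be a power of two."""
    if parts == 1:
        return [box]
    axis = int(np.argmax([hi - lo for lo, hi in box]))       # cut longest axis
    sub = weights[tuple(slice(lo, hi) for lo, hi in box)]
    other = tuple(i for i in range(weights.ndim) if i != axis)
    profile = np.cumsum(sub.sum(axis=other))
    cut = int(np.searchsorted(profile, profile[-1] / 2.0)) + 1
    lo, hi = box[axis]
    leftbox, rightbox = list(box), list(box)
    leftbox[axis], rightbox[axis] = (lo, lo + cut), (lo + cut, hi)
    return orb(weights, leftbox, parts // 2) + orb(weights, rightbox, parts // 2)

# Toy anatomy: 64^3 volume, spherical "tissue" weighted 10, background 1.
x, y, z = np.mgrid[:64, :64, :64]
w = np.where((x - 32)**2 + (y - 32)**2 + (z - 32)**2 < 24**2, 10.0, 1.0)
for d in orb(w, [(0, 64)] * 3, parts=8):
    print(d, "weight:", w[tuple(slice(lo, hi) for lo, hi in d)].sum())
```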
Multi-scaling modelling in financial markets
NASA Astrophysics Data System (ADS)
Liu, Ruipeng; Aste, Tomaso; Di Matteo, T.
2007-12-01
In recent years, a new wave of interest has brought complexity science into finance, offering a guideline for understanding the mechanisms of financial markets, and researchers with different backgrounds have made increasing contributions, introducing new techniques and methodologies. In this paper, Markov-switching multifractal (MSM) models are briefly reviewed and the multi-scaling properties of different financial data are analyzed by computing the scaling exponents by means of the generalized Hurst exponent H(q). In particular, we have considered H(q) for price data, absolute returns and squared returns of different empirical financial time series. We have computed H(q) for simulated data based on the MSM models with Binomial and Lognormal distributions of the volatility components. The results demonstrate the capacity of the multifractal (MF) models to capture the stylized facts in finance, and the ability of the generalized Hurst exponent approach to detect the scaling features of financial time series.
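The H(q) estimation itself is only a few lines: fit the scaling E|X(t+τ) − X(t)|^q ∝ τ^{qH(q)} over a range of lags and divide the log-log slope by q. A sketch on synthetic random-walk data, for which H(q) ≈ 0.5 at every q:

```python
import numpy as np

def generalized_hurst(x, q, taus=range(1, 20)):
    taus = np.asarray(list(taus))
    moments = [np.mean(np.abs(x[tau:] - x[:-tau]) ** q) for tau in taus]
    slope = np.polyfit(np.log(taus), np.log(moments), 1)[0]
    return slope / q                             # slope = q * H(q)

rng = np.random.default_rng(1)
prices = np.cumsum(rng.normal(size=20000))       # random-walk "log-price"
for q in (1, 2, 3):
    print(f"H({q}) = {generalized_hurst(prices, q):.3f}")
```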
Multi-scale Material Parameter Identification Using LS-DYNA® and LS-OPT®
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stander, Nielen; Basudhar, Anirban; Basu, Ushnish
2015-09-14
Ever-tightening regulations on fuel economy, and the likely future regulation of carbon emissions, demand persistent innovation in vehicle design to reduce vehicle mass. Classical methods for computational mass reduction include sizing, shape and topology optimization. One of the few remaining options for weight reduction can be found in materials engineering and material design optimization. Apart from considering different types of materials, by adding material diversity and composite materials, an appealing option in automotive design is to engineer steel alloys for the purpose of reducing plate thickness while retaining the strength and ductility required for durability and safety. A project to develop computational material models for advanced high strength steel is currently being executed under the auspices of the United States Automotive Materials Partnership (USAMP), funded by the US Department of Energy. Under this program, new Third Generation Advanced High Strength Steels (3GAHSS) are being designed, tested and integrated with the remaining design variables of a benchmark vehicle Finite Element model. The objectives of the project are to integrate atomistic, microstructural, forming and performance models to create an integrated computational materials engineering (ICME) toolkit for 3GAHSS. The mechanical properties of Advanced High Strength Steels (AHSS) are controlled by many factors, including phase composition and distribution in the overall microstructure, volume fraction, size and morphology of phase constituents, as well as the stability of the metastable retained austenite phase. The complex phase transformation and deformation mechanisms in these steels make the well-established traditional techniques obsolete, and a multi-scale microstructure-based modeling approach following the ICME strategy [0] was therefore chosen in this project. Multi-scale modeling as a major area of research and development is an outgrowth of the Comprehensive Test Ban Treaty of 1996, which banned surface testing of nuclear devices [1]. This had the effect that experimental work was reduced from large scale tests to multi-scale experiments to provide material models with validation at different length scales. In the subsequent years industry realized that multi-scale modeling and simulation-based design were transferable to the design optimization of any structural system. Horstemeyer [1] lists a number of advantages of the use of multi-scale modeling. Among these are the reduction of product development time by alleviating costly trial-and-error iterations, as well as the reduction of product costs through innovations in material, product and process designs. Multi-scale modeling can reduce the number of costly large scale experiments and can increase product quality by providing more accurate predictions. Research tends to be focused on each particular length scale, which enhances accuracy in the long term. This paper serves as an introduction to the LS-OPT and LS-DYNA methodology for multi-scale modeling. It mainly focuses on an approach to integrating material identification using material models of different length scales. As an example, a multi-scale material identification strategy, consisting of a Crystal Plasticity (CP) material model and a homogenized State Variable (SV) model, is discussed, and the parameter identification of the individual material models at different length scales is demonstrated. The paper concludes with thoughts on integrating the multi-scale methodology into the overall vehicle design.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miller, David; Agarwal, Deborah A.; Sun, Xin
2011-09-01
The Carbon Capture Simulation Initiative is developing state-of-the-art computational modeling and simulation tools to accelerate the commercialization of carbon capture technology. The CCSI Toolset consists of an integrated multi-scale modeling and simulation framework, which includes extensive use of reduced order models (ROMs) and a comprehensive uncertainty quantification (UQ) methodology. This paper focuses on the interrelation among high performance computing, detailed device simulations, ROMs for scale-bridging, UQ and the integration framework.
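As a schematic of the ROM-plus-UQ workflow, a cheap surrogate fitted to a handful of expensive device-model runs can carry the Monte Carlo burden. In this sketch the 'device model' is a stand-in function and every number is invented; it only illustrates the pattern of few expensive runs, one reduced model, many cheap samples.

```python
import numpy as np

def expensive_device_model(T):   # stand-in for a detailed capture simulation
    return 0.9 - 0.002 * (T - 320.0) ** 2 / 40.0   # capture fraction vs. temperature

train_T = np.linspace(300, 340, 7)                  # 7 costly runs
train_y = expensive_device_model(train_T)
rom = np.polynomial.Polynomial.fit(train_T, train_y, deg=2)   # reduced order model

rng = np.random.default_rng(2)
T_samples = rng.normal(320.0, 5.0, size=100_000)    # uncertain operating temperature
y = rom(T_samples)                                  # 1e5 surrogate evaluations: cheap
print(f"capture fraction: mean={y.mean():.3f}, std={y.std():.4f}")
```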
Progress in fast, accurate multi-scale climate simulations
Collins, W. D.; Johansen, H.; Evans, K. J.; ...
2015-06-01
We present a survey of physical and computational techniques that have the potential to contribute to the next generation of high-fidelity, multi-scale climate simulations. Examples of the climate science problems that can be investigated with more depth with these computational improvements include the capture of remote forcings of localized hydrological extreme events, an accurate representation of cloud features over a range of spatial and temporal scales, and parallel, large ensembles of simulations to more effectively explore model sensitivities and uncertainties. Numerical techniques, such as adaptive mesh refinement, implicit time integration, and separate treatment of fast physical time scales are enabling improved accuracy and fidelity in simulation of dynamics and allowing more complete representations of climate features at the global scale. At the same time, partnerships with computer science teams have focused on taking advantage of evolving computer architectures such as many-core processors and GPUs. As a result, approaches which were previously considered prohibitively costly have become both more efficient and scalable. In combination, progress in these three critical areas is poised to transform climate modeling in the coming decades.
NASA Astrophysics Data System (ADS)
Zhang, Ying; Feng, Yuanming; Wang, Wei; Yang, Chengwen; Wang, Ping
2017-03-01
A novel and versatile “bottom-up” approach is developed to estimate the radiobiological effect of clinical radiotherapy. The model consists of multi-scale Monte Carlo simulations from the organ to the cell level. At the cellular level, accumulated damages are computed using a spectrum-based accumulation algorithm and a predefined cellular damage database. The damage repair mechanism is modeled by an expanded reaction-rate two-lesion kinetic model, which was calibrated by replicating a radiobiological experiment. Multi-scale modeling is then performed on a lung cancer patient under conventional fractionated irradiation. The cell killing effects of two representative voxels (the isocenter and a peripheral voxel of the tumor) are computed and compared. At the microscopic level, the nucleus dose and damage yields vary among the nuclei within each voxel. A slightly larger percentage of cDSB yield is observed for the peripheral voxel (55.0%) compared to the isocenter one (52.5%). For the isocenter voxel, the survival fraction increases monotonically as the oxygen level is reduced. Under an extreme anoxic condition (0.001%), the survival fraction is calculated to be 80% and the hypoxia reduction factor reaches a maximum value of 2.24. In conclusion, by capturing biology-related variations, the proposed multi-scale approach is more versatile than existing approaches for evaluating personalized radiobiological effects in radiotherapy.
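A heavily simplified reaction-rate sketch in the spirit of a two-lesion kinetic model: two DSB classes repaired at first-order rates, a pairwise interaction producing lethal lesions, and survival taken as exp(-lethal). All rate constants and initial yields are invented for illustration, not the paper's calibrated values.

```python
import numpy as np
from scipy.integrate import solve_ivp

lam1, lam2 = 2.0, 0.3    # repair rates (1/h) for simple vs. complex lesions
phi1, phi2 = 0.02, 0.1   # lethal fraction of repaired lesions per class
eta = 1e-4               # pairwise binary-misrepair rate

def rhs(t, y):
    L1, L2, F = y
    pair = eta * (L1 + L2)              # lesion-lesion interaction term
    dL1 = -lam1 * L1 - pair * L1
    dL2 = -lam2 * L2 - pair * L2
    dF = phi1 * lam1 * L1 + phi2 * lam2 * L2 + pair * (L1 + L2)
    return [dL1, dL2, dF]

sol = solve_ivp(rhs, (0, 24), [25.0, 10.0, 0.0], rtol=1e-8)   # 24 h of repair
print("surviving fraction ~", np.exp(-sol.y[2, -1]))
```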
A Liver-centric Multiscale Modeling Framework for Xenobiotics ...
We describe a multi-scale framework for modeling acetaminophen-induced liver toxicity. Acetaminophen is a widely used analgesic. Overdose of acetaminophen can result in liver injury via its biotransformation into a toxic product, which further induces massive necrosis. Our study focuses on developing a multi-scale computational model to characterize both phase I and phase II metabolism of acetaminophen, bridging Physiologically Based Pharmacokinetic (PBPK) modeling at the whole-body level, cell movement and blood flow at the tissue level, and cell signaling and drug metabolism at the sub-cellular level. To validate the model, we estimated our model parameters by fitting serum concentrations of acetaminophen and its glucuronide and sulfate metabolites to experiments, and carried out sensitivity analysis on 35 parameters selected from three modules. This multiscale model bridges the CompuCell3D tool used by the Virtual Tissue project with the httk tool developed by the Rapid Exposure and Dosimetry project.
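A one-compartment, PBPK-flavored sketch of the whole-body module: first-order gut absorption into a central compartment, with parallel glucuronidation and sulfation clearances. All constants are made up for illustration, and this is far simpler than the project's calibrated multi-organ model.

```python
import numpy as np
from scipy.integrate import solve_ivp

ka, V = 1.2, 42.0                           # absorption rate (1/h), volume (L)
k_glu, k_sul, k_other = 0.25, 0.15, 0.05    # elimination pathways (1/h)

def rhs(t, y):
    gut, apap, glu, sul = y
    return [-ka * gut,
            ka * gut - (k_glu + k_sul + k_other) * apap,
            k_glu * apap,                   # cumulative glucuronide formed
            k_sul * apap]                   # cumulative sulfate formed

sol = solve_ivp(rhs, (0, 12), [1000.0, 0, 0, 0], dense_output=True)  # 1000 mg dose
t = np.linspace(0, 12, 5)
print("APAP serum conc. (mg/L):", np.round(sol.sol(t)[1] / V, 2))
```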
SNAVA-A real-time multi-FPGA multi-model spiking neural network simulation architecture.
Sripad, Athul; Sanchez, Giovanny; Zapata, Mireya; Pirrone, Vito; Dorta, Taho; Cambria, Salvatore; Marti, Albert; Krishnamourthy, Karthikeyan; Madrenas, Jordi
2018-01-01
The Spiking Neural Networks (SNN) for Versatile Applications (SNAVA) simulation platform is a scalable and programmable parallel architecture that supports real-time, large-scale, multi-model SNN computation. This parallel architecture is implemented in modern Field-Programmable Gate Array (FPGA) devices to provide high-performance execution and the flexibility to support large-scale SNN models. Flexibility is defined in terms of programmability, which allows easy synapse and neuron implementation. This has been achieved by using special-purpose Processing Elements (PEs) for computing SNNs, and by analyzing and customizing the instruction set according to the processing needs to achieve maximum performance with minimum resources. The parallel architecture is interfaced with customized Graphical User Interfaces (GUIs) to configure the SNN's connectivity, to compile the neuron-synapse model and to monitor SNN activity. Our contribution is a tool that allows prototyping SNNs faster than on CPU/GPU architectures but significantly cheaper than fabricating a customized neuromorphic chip. This could be potentially valuable to the computational neuroscience and neuromorphic engineering communities.
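As a software stand-in for what a SNAVA-style processing element evaluates at each timestep, here is a leaky integrate-and-fire layer update in NumPy; the platform itself executes a user-programmed neuron-synapse model on FPGA PEs, and all parameters below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
n, steps = 128, 200
W = rng.normal(0.0, 0.4, size=(n, n))     # synaptic weight matrix
v = np.zeros(n)                           # membrane potentials
spikes = np.zeros(n, dtype=bool)
tau, v_th, v_reset = 20.0, 1.0, 0.0
fired = np.zeros(n)

for t in range(steps):
    i_syn = W @ spikes + rng.normal(0.08, 0.02, size=n)  # recurrent + noise drive
    v += (-v + tau * i_syn) / tau         # leaky integration with dt = 1
    spikes = v >= v_th                    # threshold test
    v[spikes] = v_reset                   # reset neurons that fired
    fired += spikes

print("mean rate:", fired.mean() / steps, "spikes/step")
```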
DOE Office of Scientific and Technical Information (OSTI.GOV)
Seyedhosseini, Mojtaba; Kumar, Ritwik; Jurrus, Elizabeth R.
2011-10-01
Automated neural circuit reconstruction through electron microscopy (EM) images is a challenging problem. In this paper, we present a novel method that exploits multi-scale contextual information together with Radon-like features (RLF) to learn a series of discriminative models. The main idea is to build a framework which is capable of extracting information about cell membranes from a large contextual area of an EM image in a computationally efficient way. Toward this goal, we extract RLF that can be computed efficiently from the input image and generate a scale-space representation of the context images that are obtained at the output of each discriminative model in the series. Compared to a single-scale model, the use of a multi-scale representation of the context image gives the subsequent classifiers access to a larger contextual area in an effective way. Our strategy is general and independent of the classifier and has the potential to be used in any context based framework. We demonstrate that our method outperforms the state-of-the-art algorithms in detection of neuron membranes in EM images.
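A schematic of the multi-scale context idea, assuming only that each stage's output probability map is smoothed to several scales before being fed to the next classifier (the paper's actual feature construction, including the Radon-like features, is richer than this):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def context_features(prob_map, n_scales=3):
    """Stack progressively smoothed copies of a stage's membrane
    probability map; the next classifier in the series reads these as
    multi-scale context (a sketch of the idea, not the paper's code)."""
    return np.stack([gaussian_filter(prob_map, sigma=2.0**s)
                     for s in range(n_scales)], axis=-1)

ctx = context_features(np.random.rand(64, 64))
print(ctx.shape)  # (64, 64, 3): per-pixel context at three scales
```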
Recent Enhancements to the Community Multiscale Air Quality Modeling System (CMAQ)
EPA’s Office of Research and Development, Computational Exposure Division held a webinar on January 31, 2017 to present the recent scientific and computational updates made by EPA to the Community Multi-Scale Air Quality Model (CMAQ). Topics covered included: (1) Improveme...
Multi-level discriminative dictionary learning with application to large scale image classification.
Shen, Li; Sun, Gang; Huang, Qingming; Wang, Shuhui; Lin, Zhouchen; Wu, Enhua
2015-10-01
The sparse coding technique has shown flexibility and capability in image representation and analysis. It is a powerful tool in many visual applications. Some recent work has shown that incorporating the properties of the task (such as discrimination for a classification task) into dictionary learning is effective for improving accuracy. However, traditional supervised dictionary learning methods suffer from high computational complexity when dealing with large numbers of categories, making them less satisfactory in large scale applications. In this paper, we propose a novel multi-level discriminative dictionary learning method and apply it to large scale image classification. Our method takes advantage of hierarchical category correlation to encode multi-level discriminative information. Each internal node of the category hierarchy is associated with a discriminative dictionary and a classification model. The dictionaries at different layers are learnt to capture the information of different scales. Moreover, each node at lower layers also inherits the dictionary of its parent, so that the categories at lower layers can be described with multi-scale information. The learning of dictionaries and associated classification models is jointly conducted by minimizing an overall tree loss. The experimental results on challenging data sets demonstrate that our approach achieves excellent accuracy and competitive computation cost compared with other sparse coding methods for large scale image classification.
Multi-scale image segmentation and numerical modeling in carbonate rocks
NASA Astrophysics Data System (ADS)
Alves, G. C.; Vanorio, T.
2016-12-01
Numerical methods based on computational simulations can be an important tool in estimating physical properties of rocks. These can complement experimental results, especially when time constraints and sample availability are a problem. However, computational models created at different scales can yield conflicting results with respect to the physical laboratory. This problem is exacerbated in carbonate rocks due to their heterogeneity at all scales. We developed a multi-scale approach performing segmentation of the rock images and numerical modeling across several scales, accounting for those heterogeneities. As a first step, we measured the porosity and the elastic properties of a group of carbonate samples with varying micrite content. Then, samples were imaged with a Scanning Electron Microscope (SEM) as well as an optical microscope at different magnifications. We applied three different image segmentation techniques to create numerical models from the SEM images and performed numerical simulations of the elastic wave equation. Our results show that a multi-scale approach can efficiently account for micro-porosities in tight micrite-supported samples, yielding acoustic velocities comparable to those obtained experimentally. Nevertheless, in high-porosity samples characterized by a larger grain/micrite ratio, results show that SEM-scale images tend to overestimate velocities, mostly due to their inability to capture macro- and/or intragranular porosity. This suggests that, for high-porosity carbonate samples, optical microscope images would be better suited for numerical simulations.
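As a toy version of the segmentation-to-model step, the sketch below thresholds a grayscale image into a binary pore/solid model and reports porosity. Otsu thresholding and the random placeholder image are assumptions standing in for the paper's three (unspecified) segmentation techniques and for real SEM data.

```python
import numpy as np
from skimage.filters import threshold_otsu

def porosity_from_image(image):
    """Segment pores from a grayscale image and report porosity.
    The darker phase is assumed to be pore space."""
    t = threshold_otsu(image)
    pores = image < t
    return pores, pores.mean()         # binary model + porosity fraction

img = np.random.rand(128, 128)         # placeholder for a real SEM image
model, phi = porosity_from_image(img)
print(f"porosity = {phi:.3f}")
```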
NASA Astrophysics Data System (ADS)
Cao, Chao
2009-03-01
Nano-scale physical phenomena and processes, especially those in electronics, have drawn great attention in the past decade. Experiments have shown that electronic and transport properties of functionalized carbon nanotubes are sensitive to adsorption of gas molecules such as H2, NO2, and NH3. Similar measurements have also been performed to study adsorption of proteins on other semiconductor nano-wires. These experiments suggest that nano-scale systems can be useful for making future chemical and biological sensors. Aiming to understand the physical mechanisms underlying and governing property changes at the nano-scale, we start by investigating, via first-principles methods, the electronic structure of Pd-CNTs before and after hydrogen adsorption, and continue with coherent electronic transport using non-equilibrium Green's function techniques combined with density functional theory. Once fully analyzed, our results can be used to interpret and understand experimental data, with a few difficult issues still to be addressed. Finally, we discuss a newly developed multi-scale computing architecture, OPAL, that coordinates simultaneous execution of multiple codes. Inspired by the capabilities of this computing framework, we present a scenario of future modeling and simulation of multi-scale, multi-physical processes.
An Eye Model for Computational Dosimetry Using A Multi-Scale Voxel Phantom
NASA Astrophysics Data System (ADS)
Caracappa, Peter F.; Rhodes, Ashley; Fiedler, Derek
2014-06-01
The lens of the eye is a radiosensitive tissue, with cataract formation being the major concern. Recently reduced recommended dose limits for the lens of the eye have made understanding the dose to this tissue increasingly important. Due to memory limitations, the voxel resolution of computational phantoms used for radiation dose calculations is too coarse to accurately represent the dimensions of the eye. A revised eye model is constructed using physiological data for the dimensions of radiosensitive tissues, and is then transformed into a high-resolution voxel model. This eye model is combined with an existing set of whole body models to form a multi-scale voxel phantom, which is used with the MCNPX code to calculate radiation dose from various exposure types. This phantom provides an accurate representation of the radiation transport through the structures of the eye. Two alternate methods of including a high-resolution eye model within an existing whole body model are developed. The accuracy and performance of each method is compared against existing computational phantoms.
Tawhai, Merryn H; Bates, Jason H T
2011-05-01
Multi-scale modeling of biological systems has recently become fashionable due to the growing power of digital computers as well as to the growing realization that integrative systems behavior is as important to life as is the genome. While it is true that the behavior of a living organism must ultimately be traceable to all its components and their myriad interactions, attempting to codify this in its entirety in a model misses the insights gained from understanding how collections of system components at one level of scale conspire to produce qualitatively different behavior at higher levels. The essence of multi-scale modeling thus lies not in the inclusion of every conceivable biological detail, but rather in the judicious selection of emergent phenomena appropriate to the level of scale being modeled. These principles are exemplified in recent computational models of the lung. Airways responsiveness, for example, is an organ-level manifestation of events that begin at the molecular level within airway smooth muscle cells, yet it is not necessary to invoke all these molecular events to accurately describe the contraction dynamics of a cell, nor is it necessary to invoke all phenomena observable at the level of the cell to account for the changes in overall lung function that occur following methacholine challenge. Similarly, the regulation of pulmonary vascular tone has complex origins within the individual smooth muscle cells that line the blood vessels but, again, many of the fine details of cell behavior average out at the level of the organ to produce an effect on pulmonary vascular pressure that can be described in much simpler terms. The art of multi-scale lung modeling thus reduces not to being limitlessly inclusive, but rather to knowing what biological details to leave out.
NASA Astrophysics Data System (ADS)
Huang, Yanhui; Zhao, He; Wang, Yixing; Ratcliff, Tyree; Breneman, Curt; Brinson, L. Catherine; Chen, Wei; Schadler, Linda S.
2017-08-01
It has been found that doping dielectric polymers with a small amount of nanofiller or molecular additive can stabilize the material under a high field and lead to increased breakdown strength and lifetime. Choosing appropriate fillers is critical to optimizing the material performance, but current research largely relies on experimental trial and error. The employment of computer simulations for nanodielectric design is rarely reported. In this work, we propose a multi-scale modeling approach that employs ab initio, Monte Carlo, and continuum scales to predict the breakdown strength and lifetime of polymer nanocomposites based on the charge trapping effect of the nanofillers. The charge transfer, charge energy relaxation, and space charge effects are modeled in respective hierarchical scales by distinctive simulation techniques, and these models are connected together for high fidelity and robustness. The preliminary results show good agreement with the experimental data, suggesting its promise for use in the computer aided material design of high performance dielectrics.
Towards Personalized Cardiology: Multi-Scale Modeling of the Failing Heart
Amr, Ali; Neumann, Dominik; Georgescu, Bogdan; Seegerer, Philipp; Kamen, Ali; Haas, Jan; Frese, Karen S.; Irawati, Maria; Wirsz, Emil; King, Vanessa; Buss, Sebastian; Mereles, Derliz; Zitron, Edgar; Keller, Andreas; Katus, Hugo A.; Comaniciu, Dorin; Meder, Benjamin
2015-01-01
Background: Despite modern pharmacotherapy and advanced implantable cardiac devices, the overall prognosis and quality of life of heart failure (HF) patients remain poor. This is in part due to insufficient patient stratification and lack of individualized therapy planning, resulting in less effective treatments and a significant number of non-responders. Methods and Results: State-of-the-art clinical phenotyping was acquired, including magnetic resonance imaging (MRI) and biomarker assessment. An individualized, multi-scale model of heart function covering cardiac anatomy, electrophysiology, biomechanics and hemodynamics was estimated using a robust framework. The model was computed on n=46 HF patients, showing for the first time that advanced multi-scale models can be fitted consistently on large cohorts. Novel multi-scale parameters derived from the model of all cases were analyzed and compared against clinical parameters, cardiac imaging, lab tests and survival scores to evaluate the explicative power of the model and its potential for better patient stratification. Model validation was pursued by comparing clinical parameters that were not used in the fitting process against model parameters. Conclusion: This paper illustrates how advanced multi-scale models can complement cardiovascular imaging and how they could be applied in patient care. Based on the obtained results, it becomes conceivable that, after thorough validation, such heart failure models could be applied for patient management and therapy planning in the future, as we illustrate in one patient of our cohort who received CRT-D implantation. PMID:26230546
A multi-scale approach to designing therapeutics for tuberculosis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Linderman, Jennifer J.; Cilfone, Nicholas A.; Pienaar, Elsje
2015-04-20
Approximately one third of the world’s population is infected with Mycobacterium tuberculosis. Limited information about how the immune system fights M. tuberculosis and what constitutes protection from the bacteria impacts our ability to develop effective therapies for tuberculosis. We present an in vivo systems biology approach that integrates data from multiple model systems and over multiple length and time scales into a comprehensive multi-scale and multi-compartment view of the in vivo immune response to M. tuberculosis. Lastly, we describe computational models that can be used to study (a) immunomodulation with the cytokines tumor necrosis factor and interleukin 10, (b) oral and inhaled antibiotics, and (c) the effect of vaccination.
SOMAR-LES: A framework for multi-scale modeling of turbulent stratified oceanic flows
NASA Astrophysics Data System (ADS)
Chalamalla, Vamsi K.; Santilli, Edward; Scotti, Alberto; Jalali, Masoud; Sarkar, Sutanu
2017-12-01
A new multi-scale modeling technique, SOMAR-LES, is presented in this paper. Localized grid refinement gives SOMAR (the Stratified Ocean Model with Adaptive Resolution) access to small scales of the flow which are normally inaccessible to general circulation models (GCMs). SOMAR-LES drives an LES (Large Eddy Simulation) on SOMAR's finest grids, with large-scale forcing supplied by the coarser grids. Three-dimensional simulations of internal tide generation, propagation and scattering are performed to demonstrate this multi-scale modeling technique. In the case of internal tide generation at a two-dimensional bathymetry, SOMAR-LES is able to balance the baroclinic energy budget and accurately model turbulence losses at only 10% of the computational cost required by a non-adaptive solver running at SOMAR-LES's fine grid resolution. This relative cost is significantly reduced in situations with intermittent turbulence or where the location of the turbulence is not known a priori, because SOMAR-LES does not require persistent, global, high resolution. To illustrate this point, we consider a three-dimensional bathymetry with grids adaptively refined along the tidally generated internal waves to capture remote mixing in regions of wave focusing. The computational cost in this case is found to be nearly 25 times smaller than that of a non-adaptive solver at comparable resolution. In the final test case, we consider the scattering of a mode-1 internal wave at an isolated two-dimensional and three-dimensional topography, and we compare the results with the numerical experiments of Legg (2014). We find good agreement with theoretical estimates. SOMAR-LES is less dissipative than the closure scheme employed by Legg (2014) near the bathymetry. Depending on the flow configuration and resolution employed, a reduction of more than an order of magnitude in computational costs is expected, relative to traditional existing solvers.
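A generic tagging criterion of the kind adaptive solvers use to decide where to refine is easy to sketch; the gradient-magnitude test below is an assumption for illustration, not SOMAR's actual refinement logic.

```python
import numpy as np

def tag_cells(field, dx, thresh):
    """Flag cells whose gradient magnitude exceeds a threshold -- a
    generic AMR tagging criterion standing in for SOMAR's refinement
    logic, which the abstract does not specify."""
    gx, gy = np.gradient(field, dx)
    return np.hypot(gx, gy) > thresh

# Refine where an internal-wave-like field varies fastest
x = np.linspace(0.0, 2.0 * np.pi, 128)
field = np.sin(4.0 * x)[None, :] * np.exp(-((x - np.pi) ** 2))[:, None]
print(tag_cells(field, x[1] - x[0], 1.0).sum(), "cells tagged")
```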
Adaptive subdomain modeling: A multi-analysis technique for ocean circulation models
NASA Astrophysics Data System (ADS)
Altuntas, Alper; Baugh, John
2017-07-01
Many coastal and ocean processes of interest operate over large temporal and geographical scales and require a substantial amount of computational resources, particularly when engineering design and failure scenarios are also considered. This study presents an adaptive multi-analysis technique that improves the efficiency of these computations when multiple alternatives are being simulated. The technique, called adaptive subdomain modeling, concurrently analyzes any number of child domains, with each instance corresponding to a unique design or failure scenario, in addition to a full-scale parent domain providing the boundary conditions for its children. To contain the altered hydrodynamics originating from the modifications, the spatial extent of each child domain is adaptively adjusted during runtime depending on the response of the model. The technique is incorporated in ADCIRC++, a re-implementation of the popular ADCIRC ocean circulation model with an updated software architecture designed to facilitate this adaptive behavior and to utilize concurrent executions of multiple domains. The results of our case studies confirm that the method substantially reduces computational effort while maintaining accuracy.
Zhang, Guoqing; Zhang, Xianku; Pang, Hongshuai
2015-09-01
This research is concerned with the problem of 4-degrees-of-freedom (DOF) ship manoeuvring identification modelling from full-scale trial data. To avoid the multi-innovation matrix inversion in the conventional multi-innovation least squares (MILS) algorithm, a new transformed multi-innovation least squares (TMILS) algorithm is first developed by virtue of the coupling identification concept, and much effort is made to guarantee uniformly ultimate convergence. Furthermore, an auto-constructed TMILS scheme is derived for ship manoeuvring motion identification by combination with a statistical index. Compared with existing results, the proposed scheme has a significant computational advantage and is able to estimate the model structure. Illustrative examples demonstrate the effectiveness of the proposed algorithm, including an identification application with full-scale trial data. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
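For readers unfamiliar with the recursion that TMILS builds on, here is a plain recursive least-squares update in Python (the single-innovation case). The paper's contribution is precisely to extend this to p stacked innovations while avoiding the p-by-p matrix inversion of conventional MILS via the coupling identification concept; that transformation is not reproduced here.

```python
import numpy as np

def rls_update(theta, P, phi, y, lam=1.0):
    """One recursive least-squares step: gain, innovation correction,
    covariance update (lam is a forgetting factor)."""
    Pphi = P @ phi
    gain = Pphi / (lam + phi @ Pphi)
    theta = theta + gain * (y - phi @ theta)
    P = (P - np.outer(gain, Pphi)) / lam
    return theta, P

# Identify y = [1.5, -0.7] . phi from noisy samples
rng = np.random.default_rng(1)
theta, P = np.zeros(2), np.eye(2) * 100.0
for _ in range(200):
    phi = rng.normal(size=2)
    y = phi @ np.array([1.5, -0.7]) + 0.01 * rng.normal()
    theta, P = rls_update(theta, P, phi, y)
print(theta)  # ~ [1.5, -0.7]
```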
Efficient and Extensible Quasi-Explicit Modular Nonlinear Multiscale Battery Model: GH-MSMD
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, Gi-Heon; Smith, Kandler; Lawrence-Simon, Jake
2017-03-24
Complex physics and long computation times hinder the adoption of computer-aided engineering models in the design of large-format battery cells and systems. A modular, efficient battery simulation model -- the multiscale multidomain (MSMD) model -- was previously introduced to aid the scale-up of Li-ion material and electrode designs to complete cell and pack designs, capturing electrochemical interplay with 3-D electronic current pathways and thermal response. Here, this paper enhances the computational efficiency of the MSMD model using a separation-of-time-scales principle to decompose model field variables. The decomposition provides a quasi-explicit linkage between the multiple length-scale domains and thus reduces the time-consuming nested iteration when solving model equations across multiple domains. In addition to the particle-, electrode- and cell-length scales treated in the previous work, the present formulation extends to bus-bar- and multi-cell module-length scales. We provide example simulations for several variants of GH electrode-domain models.
Using CellML with OpenCMISS to Simulate Multi-Scale Physiology
Nickerson, David P.; Ladd, David; Hussan, Jagir R.; Safaei, Soroush; Suresh, Vinod; Hunter, Peter J.; Bradley, Christopher P.
2014-01-01
OpenCMISS is an open-source modeling environment aimed, in particular, at the solution of bioengineering problems. OpenCMISS consists of two main parts: a computational library (OpenCMISS-Iron) and a field manipulation and visualization library (OpenCMISS-Zinc). OpenCMISS is designed for the solution of coupled multi-scale, multi-physics problems in a general-purpose parallel environment. CellML is an XML format designed to encode biophysically based systems of ordinary differential equations and both linear and non-linear algebraic equations. A primary design goal of CellML is to allow mathematical models to be encoded in a modular and reusable format to aid reproducibility and interoperability of modeling studies. In OpenCMISS, we make use of CellML models to enable users to configure various aspects of their multi-scale physiological models. This avoids the need for users to be familiar with the OpenCMISS internal code in order to perform customized computational experiments. Examples of this are: cellular electrophysiology models embedded in tissue electrical propagation models; material constitutive relationships for mechanical growth and deformation simulations; time-varying boundary conditions for various problem domains; and fluid constitutive relationships and lumped-parameter models. In this paper, we provide implementation details describing how CellML models are integrated into multi-scale physiological models in OpenCMISS. The external interface OpenCMISS presents to users is also described, including specific examples exemplifying the extensibility and usability these tools provide the physiological modeling and simulation community. We conclude with some thoughts on future extension of OpenCMISS to make use of other community developed information standards, such as FieldML, SED-ML, and BioSignalML. Plans for the integration of accelerator code (graphical processing unit and field programmable gate array) generated from CellML models is also discussed. PMID:25601911
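One common way a cell model is embedded in a tissue propagation solve is operator splitting: advance the cell ODEs pointwise, then apply diffusion. The sketch below shows that pattern on a 1-D periodic grid with a FitzHugh-Nagumo-like cubic reaction term. OpenCMISS uses full finite-element discretizations and CellML-generated code rather than this stencil, so everything here is schematic.

```python
import numpy as np

def split_step(v, cell_rhs, dt, dx, sigma):
    """Godunov splitting: reaction step (the CellML-style cell ODE),
    then an explicit diffusion step with a periodic stencil."""
    v = v + dt * cell_rhs(v)                               # reaction
    lap = (np.roll(v, 1) - 2.0 * v + np.roll(v, -1)) / dx**2
    return v + dt * sigma * lap                            # diffusion

# A cubic excitable "cell model" as the ODE part (illustrative only)
cell = lambda v: v * (1.0 - v) * (v - 0.1)
v = np.zeros(200)
v[:10] = 1.0                                               # stimulated edge
for _ in range(500):
    v = split_step(v, cell, dt=0.05, dx=1.0, sigma=1.0)
print(v.max(), v.argmax())                                 # front propagates
```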
Non-Gaussian Multi-resolution Modeling of Magnetosphere-Ionosphere Coupling Processes
NASA Astrophysics Data System (ADS)
Fan, M.; Paul, D.; Lee, T. C. M.; Matsuo, T.
2016-12-01
The most dynamic coupling between the magnetosphere and ionosphere occurs in the Earth's polar atmosphere. Our objective is to model scale-dependent stochastic characteristics of high-latitude ionospheric electric fields that originate from solar wind magnetosphere-ionosphere interactions. The Earth's high-latitude ionospheric electric field exhibits considerable variability, with increasing non-Gaussian characteristics at decreasing spatio-temporal scales. Accurately representing the underlying stochastic physical process through random field modeling is crucial not only for scientific understanding of the energy, momentum and mass exchanges between the Earth's magnetosphere and ionosphere, but also for modern technological systems including telecommunication, navigation, positioning and satellite tracking. While considerable effort has been made to characterize the large-scale variability of the electric field in the context of Gaussian processes, no attempt has so far been made to model the small-scale non-Gaussian stochastic process observed in the high-latitude ionosphere. We construct a novel random field model using spherical needlets as building blocks. The double localization of spherical needlets in both spatial and frequency domains enables the model to capture the non-Gaussian and multi-resolutional characteristics of the small-scale variability. The estimation procedure is computationally feasible due to the utilization of an adaptive Gibbs sampler. We apply the proposed methodology to the computational simulation output from the Lyon-Fedder-Mobarry (LFM) global magnetohydrodynamics (MHD) magnetosphere model. Our non-Gaussian multi-resolution model characterizes significantly more energy associated with the small-scale ionospheric electric field variability in comparison to Gaussian models. By accurately representing unaccounted-for additional energy and momentum sources to the Earth's upper atmosphere, our novel random field modeling approach will provide a viable remedy to current numerical models' systematic biases resulting from the underestimation of high-latitude energy and momentum sources.
Mathematics, Information, and Life Sciences
2012-03-05
Topics include: INS; chip-scale atomic clocks; ad hoc, polymorphic, and agile networks; laser communications; frequency-agile RF systems … FY12 BAA areas: Bionavigation (Bio), Neuromorphic Computing (Human), Multi-scale Modeling (Math), Foundations of Information Systems (Info), BRI.
Multi-dimensional modelling of gas turbine combustion using a flame sheet model in KIVA II
NASA Technical Reports Server (NTRS)
Cheng, W. K.; Lai, M.-C.; Chue, T.-H.
1991-01-01
A flame sheet model for heat release is incorporated into a multi-dimensional fluid mechanical simulation for gas turbine application. The model assumes that the chemical reaction takes place in thin sheets compared to the length scale of mixing, which is valid for the primary combustion zone in a gas turbine combustor. In this paper, the details of the model are described and computational results are discussed.
Multi-scale and Multi-physics Numerical Methods for Modeling Transport in Mesoscopic Systems
2014-10-13
… functions and wide-band fast multipole methods for Hankel waves; (2) a new linear-scaling discontinuous Galerkin density functional theory, which provides a …; an inflow boundary condition for Wigner quantum transport equations. Also published: a book titled "Computational Methods for Electromagnetic Phenomena …" and a paper on equations in layered media with FMM for Bessel functions, Science China Mathematics (Dec. 2013): 2561.
Orientation-selective aVLSI spiking neurons.
Liu, S C; Kramer, J; Indiveri, G; Delbrück, T; Burg, T; Douglas, R
2001-01-01
We describe a programmable multi-chip VLSI neuronal system that can be used for exploring spike-based information processing models. The system consists of a silicon retina, a PIC microcontroller, and a transceiver chip whose integrate-and-fire neurons are connected in a soft winner-take-all architecture. The circuit on this multi-neuron chip approximates a cortical microcircuit. The neurons can be configured for different computational properties by the virtual connections of a selected set of pixels on the silicon retina. The virtual wiring between the different chips is effected by an event-driven communication protocol that uses asynchronous digital pulses, similar to spikes in a neuronal system. We used the multi-chip spike-based system to synthesize orientation-tuned neurons using both a feedforward model and a feedback model. The performance of our analog hardware spiking model matched the experimental observations and digital simulations of continuous-valued neurons. The multi-chip VLSI system has advantages over computer neuronal models in that it is real-time, and the computational time does not scale with the size of the neuronal network.
NASA Astrophysics Data System (ADS)
Slaughter, A. E.; Permann, C.; Peterson, J. W.; Gaston, D.; Andrs, D.; Miller, J.
2014-12-01
The Idaho National Laboratory (INL)-developed Multiphysics Object Oriented Simulation Environment (MOOSE; www.mooseframework.org) is an open-source, parallel computational framework for enabling the solution of complex, fully implicit multiphysics systems. MOOSE provides a set of computational tools that scientists and engineers can use to create sophisticated multiphysics simulations. Applications built using MOOSE have computed solutions for chemical reaction and transport equations, computational fluid dynamics, solid mechanics, heat conduction, mesoscale materials modeling, geomechanics, and others. To facilitate the coupling of diverse and highly-coupled physical systems, MOOSE employs the Jacobian-free Newton-Krylov (JFNK) method when solving the coupled nonlinear systems of equations arising in multiphysics applications. The MOOSE framework is written in C++, and leverages other high-quality, open-source scientific software packages such as LibMesh, Hypre, and PETSc. MOOSE uses a "hybrid parallel" model which combines both shared memory (thread-based) and distributed memory (MPI-based) parallelism to ensure efficient resource utilization on a wide range of computational hardware. MOOSE-based applications are inherently modular, which allows for simulation expansion (via coupling of additional physics modules) and the creation of multi-scale simulations. Any application developed with MOOSE supports running (in parallel) any other MOOSE-based application. Each application can be developed independently, yet easily communicate with other applications (e.g., conductivity in a slope-scale model could be a constant input, or a complete phase-field micro-structure simulation) without additional code being written. This method of development has proven effective at INL and expedites the development of sophisticated, sustainable, and collaborative simulation tools.
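The core JFNK idea is compact enough to sketch: the Krylov solver only ever needs Jacobian-vector products, which can be approximated by a finite difference of the residual, so no Jacobian matrix is ever assembled. Below is a minimal Python version, without the preconditioning and line searches a production framework like MOOSE adds.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

def jfnk(residual, u0, tol=1e-8, max_newton=20, eps=1e-7):
    """Minimal Jacobian-free Newton-Krylov loop: J(u)v is approximated
    by (F(u + eps*v) - F(u)) / eps, so the Jacobian is never formed."""
    u = u0.astype(float).copy()
    for _ in range(max_newton):
        r = residual(u)
        if np.linalg.norm(r) < tol:
            break
        jv = lambda v: (residual(u + eps * v) - r) / eps
        J = LinearOperator((u.size, u.size), matvec=jv)
        du, _ = gmres(J, -r)           # inexact Newton step
        u = u + du
    return u

# Solve the coupled system x^2 + y - 3 = 0, x + y^2 - 5 = 0
f = lambda u: np.array([u[0]**2 + u[1] - 3.0, u[0] + u[1]**2 - 5.0])
print(jfnk(f, np.array([1.0, 1.0])))   # converges to (1, 2)
```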
The role of zonal flows in the saturation of multi-scale gyrokinetic turbulence
DOE Office of Scientific and Technical Information (OSTI.GOV)
Staebler, G. M.; Candy, J.; Howard, N. T.
2016-06-15
The 2D spectrum of the saturated electric potential from gyrokinetic turbulence simulations that include both ion and electron scales (multi-scale) in axisymmetric tokamak geometry is analyzed. The paradigm that the turbulence is saturated when the zonal (axisymmetric) ExB flow shearing rate competes with linear growth is shown not to apply to the electron-scale turbulence. Instead, it is the mixing rate by the zonal ExB velocity spectrum with the turbulent distribution function that competes with linear growth. A model of this mechanism is shown to be able to capture the suppression of electron-scale turbulence by ion-scale turbulence and the threshold for the increase in electron-scale turbulence when the ion-scale turbulence is reduced. The model computes the strength of the zonal flow velocity and the saturated potential spectrum from the linear growth rate spectrum. The model for the saturated electric potential spectrum is applied to a quasilinear transport model and shown to accurately reproduce the electron and ion energy fluxes of the non-linear gyrokinetic multi-scale simulations. The zonal flow mixing saturation model is also shown to reproduce the non-linear upshift in the critical temperature gradient caused by zonal flows in ion-scale gyrokinetic simulations.
Reduced-Order Biogeochemical Flux Model for High-Resolution Multi-Scale Biophysical Simulations
NASA Astrophysics Data System (ADS)
Smith, Katherine; Hamlington, Peter; Pinardi, Nadia; Zavatarelli, Marco
2017-04-01
Biogeochemical tracers and their interactions with upper ocean physical processes such as submesoscale circulations and small-scale turbulence are critical for understanding the role of the ocean in the global carbon cycle. These interactions can cause small-scale spatial and temporal heterogeneity in tracer distributions that can, in turn, greatly affect carbon exchange rates between the atmosphere and interior ocean. For this reason, it is important to take into account small-scale biophysical interactions when modeling the global carbon cycle. However, explicitly resolving these interactions in an earth system model (ESM) is currently infeasible due to the enormous associated computational cost. As a result, understanding and subsequently parameterizing how these small-scale heterogeneous distributions develop and how they relate to larger resolved scales is critical for obtaining improved predictions of carbon exchange rates in ESMs. In order to address this need, we have developed the reduced-order, 17 state variable Biogeochemical Flux Model (BFM-17) that follows the chemical functional group approach, which allows for non-Redfield stoichiometric ratios and the exchange of matter through units of carbon, nitrate, and phosphate. This model captures the behavior of open-ocean biogeochemical systems without substantially increasing computational cost, thus allowing the model to be combined with computationally-intensive, fully three-dimensional, non-hydrostatic large eddy simulations (LES). In this talk, we couple BFM-17 with the Princeton Ocean Model and show good agreement between predicted monthly-averaged results and Bermuda testbed area field data (including the Bermuda-Atlantic Time-series Study and Bermuda Testbed Mooring). Through these tests, we demonstrate the capability of BFM-17 to accurately model open-ocean biochemistry. Additionally, we discuss the use of BFM-17 within a multi-scale LES framework and outline how this will further our understanding of turbulent biophysical interactions in the upper ocean.
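To show the ODE structure that BFM-17 scales up, here is a three-variable nutrient-phytoplankton-zooplankton toy in Python. All rate constants are illustrative, and the real model tracks 17 state variables in carbon, nitrate, and phosphate units with non-Redfield stoichiometry.

```python
import numpy as np
from scipy.integrate import solve_ivp

def npz(t, y, mu=1.0, k_n=0.5, g=0.4, a=0.7, m_p=0.05, m_z=0.05):
    """Nutrient-phytoplankton-zooplankton toy model; all rates are
    illustrative placeholders, not BFM-17 parameters."""
    n, p, z = y
    uptake = mu * n / (k_n + n) * p       # nutrient-limited growth
    graze = g * p * z                     # zooplankton grazing
    return [-uptake + (1 - a) * graze + m_p * p + m_z * z,  # recycling
            uptake - graze - m_p * p,                       # phytoplankton
            a * graze - m_z * z]                            # zooplankton

sol = solve_ivp(npz, (0.0, 60.0), [1.0, 0.1, 0.05], max_step=0.1)
print(sol.y[:, -1], "total =", sol.y[:, -1].sum())  # mass is conserved
```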
Reduced-Order Biogeochemical Flux Model for High-Resolution Multi-Scale Biophysical Simulations
NASA Astrophysics Data System (ADS)
Smith, K.; Hamlington, P.; Pinardi, N.; Zavatarelli, M.; Milliff, R. F.
2016-12-01
Biogeochemical tracers and their interactions with upper ocean physical processes such as submesoscale circulations and small-scale turbulence are critical for understanding the role of the ocean in the global carbon cycle. These interactions can cause small-scale spatial and temporal heterogeneity in tracer distributions which can, in turn, greatly affect carbon exchange rates between the atmosphere and interior ocean. For this reason, it is important to take into account small-scale biophysical interactions when modeling the global carbon cycle. However, explicitly resolving these interactions in an earth system model (ESM) is currently infeasible due to the enormous associated computational cost. As a result, understanding and subsequently parametrizing how these small-scale heterogeneous distributions develop and how they relate to larger resolved scales is critical for obtaining improved predictions of carbon exchange rates in ESMs. In order to address this need, we have developed the reduced-order, 17 state variable Biogeochemical Flux Model (BFM-17). This model captures the behavior of open-ocean biogeochemical systems without substantially increasing computational cost, thus allowing the model to be combined with computationally-intensive, fully three-dimensional, non-hydrostatic large eddy simulations (LES). In this talk, we couple BFM-17 with the Princeton Ocean Model and show good agreement between predicted monthly-averaged results and Bermuda testbed area field data (including the Bermuda-Atlantic Time Series and Bermuda Testbed Mooring). Through these tests, we demonstrate the capability of BFM-17 to accurately model open-ocean biochemistry. Additionally, we discuss the use of BFM-17 within a multi-scale LES framework and outline how this will further our understanding of turbulent biophysical interactions in the upper ocean.
NASA Astrophysics Data System (ADS)
Liu, Yushi; Poh, Hee Joo
2014-11-01
Computational Fluid Dynamics analysis has become increasingly important in modern urban planning aimed at creating highly livable cities. This paper presents a multi-scale modeling methodology that couples the Weather Research and Forecasting (WRF) model with the open-source CFD simulation tool OpenFOAM. The coupling enables simulation of wind flow and pollutant dispersion in urban built-up areas with high-resolution meshes. In this methodology, the meso-scale model WRF provides the boundary conditions for the micro-scale CFD model OpenFOAM. The advantage is that realistic weather conditions are taken into account in the CFD simulation, and the complexity of the building layout can be handled with ease by OpenFOAM's meshing utilities. The result is validated against the Joint Urban 2003 Tracer Field Tests in Oklahoma City, with reasonably good agreement between the CFD simulation and field observations. The WRF-OpenFOAM coupling provides urban planners with a reliable environmental modeling tool for actual urban built-up areas, and it can be further extended to consider future weather conditions in scenario studies of climate change impact.
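The simplest one-way version of such a coupling maps a WRF near-surface wind speed onto a CFD inlet profile. The neutral log-law sketch below makes that concrete; the roughness length and the single-level input are assumptions, and the paper's coupling passes much richer WRF fields than this.

```python
import numpy as np

KAPPA = 0.41  # von Karman constant

def inlet_profile(z, u_wrf, z_wrf, z0=0.5):
    """Extend a single WRF near-surface wind speed to a CFD inlet via
    the neutral log law; z0 is an assumed urban roughness length (m)."""
    u_star = KAPPA * u_wrf / np.log((z_wrf + z0) / z0)
    return (u_star / KAPPA) * np.log((z + z0) / z0)

z = np.linspace(0.0, 300.0, 7)                  # inlet heights (m)
print(inlet_profile(z, u_wrf=6.0, z_wrf=10.0))  # wind speeds (m/s)
```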
MultiPhyl: a high-throughput phylogenomics webserver using distributed computing
Keane, Thomas M.; Naughton, Thomas J.; McInerney, James O.
2007-01-01
With the number of fully sequenced genomes increasing steadily, there is greater interest in performing large-scale phylogenomic analyses from large numbers of individual gene families. Maximum likelihood (ML) has been shown repeatedly to be one of the most accurate methods for phylogenetic construction. Recently, there have been a number of algorithmic improvements in maximum-likelihood-based tree search methods. However, it can still take a long time to analyse the evolutionary history of many gene families using a single computer. Distributed computing refers to a method of combining the computing power of multiple computers in order to perform some larger overall calculation. In this article, we present the first high-throughput implementation of a distributed phylogenetics platform, MultiPhyl, capable of using the idle computational resources of many heterogeneous non-dedicated machines to form a phylogenetics supercomputer. MultiPhyl allows a user to upload hundreds or thousands of amino acid or nucleotide alignments simultaneously and perform computationally intensive tasks such as model selection, tree searching and bootstrapping of each of the alignments using many desktop machines. The program implements a set of 88 amino acid models and 56 nucleotide maximum likelihood models and a variety of statistical methods for choosing between alternative models. A MultiPhyl webserver is available for public use at: http://www.cs.nuim.ie/distributed/multiphyl.php. PMID:17553837
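Model selection of the kind described typically scores each candidate by an information criterion. Below is a sketch using the Akaike information criterion; the likelihood values are made-up placeholders, and the abstract does not state which criteria MultiPhyl implements.

```python
def aic(log_likelihood, n_free_params):
    # Akaike information criterion: lower is better
    return 2.0 * n_free_params - 2.0 * log_likelihood

# Hypothetical per-model fits for one alignment (values are invented);
# MultiPhyl runs this kind of selection across its 88 amino acid and
# 56 nucleotide candidate models
fits = {"JC69": (-1234.5, 0), "HKY85": (-1201.2, 4), "GTR+G": (-1188.7, 9)}
best = min(fits, key=lambda m: aic(*fits[m]))
print(best, {m: round(aic(*v), 1) for m, v in fits.items()})
```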
Computational Modeling of Multi-Scale Material Features in Cement Paste - An Overview
2015-05-25
Cement and concrete, though commonly used, are among the most complex materials in terms of morphology and structure; … models across the multiple scales are required. In this paper, recent work from our research group on nano- to continuum-level modeling of cementitious … of our research work, consisting of: • Molecular Dynamics (MD) modeling of the nano-scale features of cementitious material chemistry; • micro …
Multiscale Simulation of Blood Flow in Brain Arteries with an Aneurysm
DOE Office of Scientific and Technical Information (OSTI.GOV)
Leopold Grinberg; Vitali Morozov; Dmitry A. Fedosov
2013-04-24
Multi-scale modeling of arterial blood flow can shed light on the interaction between events happening at micro- and meso-scales (i.e., adhesion of red blood cells to the arterial wall, clot formation) and at macro-scales (i.e., change in flow patterns due to the clot). Coupled numerical simulations of such multi-scale flow require state-of-the-art computers and algorithms, along with techniques for multi-scale visualization. This animation presents results of studies used in the development of a multi-scale visualization methodology. First we use streamlines to show the path the flow takes as it moves through the system, including the aneurysm. Next we investigate the process of thrombus (blood clot) formation, which may be responsible for the rupture of aneurysms, by concentrating on the platelet blood cells, observing as they aggregate on the wall of the aneurysm.
Multi-scale computation methods: Their applications in lithium-ion battery research and development
NASA Astrophysics Data System (ADS)
Siqi, Shi; Jian, Gao; Yue, Liu; Yan, Zhao; Qu, Wu; Wangwei, Ju; Chuying, Ouyang; Ruijuan, Xiao
2016-01-01
Based upon advances in theoretical algorithms, modeling and simulation, and computer technologies, the rational design of materials, cells, devices, and packs in the field of lithium-ion batteries is being realized incrementally and will at some point trigger a paradigm shift, with calculations and experiments linked through a large shared database enabling accelerated development of the whole industrial chain. Theory and multi-scale modeling and simulation, as supplements to experimental efforts, can help greatly to close some of the current experimental and technological gaps, as well as predict path-independent properties and help to fundamentally understand path-independent performance across multiple spatial and temporal scales. Project supported by the National Natural Science Foundation of China (Grant Nos. 51372228 and 11234013), the National High Technology Research and Development Program of China (Grant No. 2015AA034201), and the Shanghai Pujiang Program, China (Grant No. 14PJ1403900).
The Parallel System for Integrating Impact Models and Sectors (pSIMS)
NASA Technical Reports Server (NTRS)
Elliott, Joshua; Kelly, David; Chryssanthacopoulos, James; Glotter, Michael; Jhunjhnuwala, Kanika; Best, Neil; Wilde, Michael; Foster, Ian
2014-01-01
We present a framework for massively parallel climate impact simulations: the parallel System for Integrating Impact Models and Sectors (pSIMS). This framework comprises a) tools for ingesting and converting large amounts of data to a versatile datatype based on a common geospatial grid; b) tools for translating this datatype into custom formats for site-based models; c) a scalable parallel framework for performing large ensemble simulations, using any one of a number of different impacts models, on clusters, supercomputers, distributed grids, or clouds; d) tools and data standards for reformatting outputs to common datatypes for analysis and visualization; and e) methodologies for aggregating these datatypes to arbitrary spatial scales such as administrative and environmental demarcations. By automating many time-consuming and error-prone aspects of large-scale climate impacts studies, pSIMS accelerates computational research, encourages model intercomparison, and enhances reproducibility of simulation results. We present the pSIMS design and use example assessments to demonstrate its multi-model, multi-scale, and multi-sector versatility.
NASA Technical Reports Server (NTRS)
Raju, M. S.
2016-01-01
The Open National Combustion Code (OpenNCC) is developed with the aim of advancing the current multi-dimensional computational tools used in the design of advanced technology combustors. In this paper we provide an overview of the spray module, LSPRAY-V, developed as a part of this effort. The spray solver is mainly designed to predict the flow, thermal, and transport properties of a rapidly evaporating multi-component liquid spray. The modeling approach is applicable over a wide range of evaporating conditions (normal, superheated, and supercritical) and is based on several well-established atomization, vaporization, and wall/droplet impingement models. It facilitates large-scale combustor computations through the use of massively parallel computers, with the ability to perform the computations on either structured or unstructured grids. The spray module has multi-liquid and multi-injector capability and can be used in both steady and unsteady calculations. We conclude the paper by providing results for a reacting spray generated by a single injector element with 60° axially swept swirler vanes, a configuration based on the next-generation lean-direct-injection (LDI) combustor concept. The results include comparisons of both combustor exit temperature and EINOx at three different fuel/air ratios.
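The classical limiting case of droplet vaporization, the d²-law, gives a feel for what a spray solver integrates per droplet: the squared diameter decreases linearly in time. The sketch below is only that textbook limit with an illustrative evaporation constant; LSPRAY-V's vaporization models are considerably more elaborate.

```python
import numpy as np

def droplet_diameter(d0, t, k_evap=1.0e-7):
    """Classical d^2-law: d(t)^2 = d0^2 - k_evap * t, clipped at zero.
    k_evap (m^2/s) is an illustrative evaporation-rate constant."""
    return np.sqrt(np.maximum(d0**2 - k_evap * t, 0.0))

t = np.linspace(0.0, 0.01, 5)                 # seconds
print(droplet_diameter(50e-6, t))             # 50-micron droplet history
```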
Multi-Scale Modeling of a Graphite-Epoxy-Nanotube System
NASA Technical Reports Server (NTRS)
Frankland, S. J. V.; Riddick, J. C.; Gates, T. S.
2005-01-01
A multi-scale method is utilized to determine some of the constitutive properties of a three-component graphite-epoxy-nanotube system. This system is of interest because carbon nanotubes have been proposed as stiffening and toughening agents in the interlaminar regions of carbon fiber/epoxy laminates. The multi-scale method uses molecular dynamics simulation and equivalent-continuum modeling to compute three of the elastic constants of the graphite-epoxy-nanotube system: C11, C22, and C33. The 1-direction is along the nanotube axis, and the graphene sheets lie in the 1-2 plane. It was found that C11 is only 4% larger than C22. The nanotube therefore has a small but positive effect on the constitutive properties in the interlaminar region.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kornreich, Drew E; Vaidya, Rajendra U; Ammerman, Curtt N
Integrated Computational Materials Engineering (ICME) is a novel overarching approach to bridge length and time scales in computational materials science and engineering. This approach integrates all elements of multi-scale modeling (including various empirical and science-based models) with materials informatics to provide users the opportunity to tailor material selections based on stringent application needs. Typically, materials engineering has focused on structural requirements (stress, strain, modulus, fracture toughness, etc.) while multi-scale modeling has been science focused (mechanical threshold strength models, grain-size models, solid-solution strengthening models, etc.). Materials informatics (mechanical property inventories), on the other hand, is extensively data focused. All of these elements are combined within the framework of ICME to create an architecture for the development, selection, and design of new composite materials for challenging environments. We propose development of the foundations for applying ICME to composite materials development for nuclear and high-radiation environments (including nuclear-fusion energy reactors, nuclear-fission reactors, and accelerators). We expect to combine all elements of current material models (including thermo-mechanical and finite-element models) into the ICME framework. This will be accomplished through the use of various mathematical modeling constructs. These constructs will allow the integration of constituent models, which in turn will allow us to use the adaptive strengths of a combinatorial scheme (fabrication and computational) for creating new composite materials. A sample problem where these concepts are used is provided in this summary.
Numerical Upscaling of Solute Transport in Fractured Porous Media Based on Flow Aligned Blocks
NASA Astrophysics Data System (ADS)
Leube, P.; Nowak, W.; Sanchez-Vila, X.
2013-12-01
High-contrast or fractured-porous media (FPM) pose one of the largest unresolved challenges for simulating large hydrogeological systems. The high contrast in advective transport between fast conduits and the low-permeability rock matrix, including complex mass transfer processes, leads to the typical complex characteristics of early bulk arrivals and long tailing. Adequate direct representation of FPM requires enormous numerical resolution. For large scales, e.g. the catchment scale, and when allowing for uncertainty in the fracture network architecture or in matrix properties, computational costs quickly reach an intractable level. In such cases, multi-scale simulation techniques have become useful tools. They allow decreasing the complexity of models by aggregating and transferring their parameters to coarser scales and so drastically reduce the computational costs. However, these advantages come at a loss of detail and accuracy. In this work, we develop and test a new multi-scale or upscaled modeling approach based on block upscaling. The novelty is that individual blocks are defined by and aligned with the local flow coordinates. We choose a multi-rate mass transfer (MRMT) model to represent the remaining sub-block non-Fickian behavior within these blocks on the coarse scale. To make the scale transition simple and to save computational costs, we capture sub-block features by temporal moments (TM) of block-wise particle arrival times to be matched with the MRMT model. By predicting spatial mass distributions of injected tracers in a synthetic test scenario, our coarse-scale solution matches reasonably well with the corresponding fine-scale reference solution. For predicting higher TM orders (such as arrival time and effective dispersion), the prediction accuracy steadily decreases. This is compensated to some extent by the MRMT model. If the MRMT model becomes too complex, it loses its effect. We also found that prediction accuracy is sensitive to the choice of the effective dispersion coefficients and to the block resolution. A key advantage of the flow-aligned blocks is that the small-scale velocity field is reproduced quite accurately on the block scale through their flow alignment. Thus, the block-scale transverse dispersivities remain of similar magnitude to local ones, and they do not have to represent macroscopic uncertainty. Also, the flow-aligned blocks minimize numerical dispersion when solving the large-scale transport problem.
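Computing the temporal moments used for the scale transition is straightforward once block-wise particle arrival times are available; a minimal sketch follows (the lognormal sample is a placeholder standing in for arrival times from a fine-scale particle tracker).

```python
import numpy as np

def temporal_moments(arrival_times, n_raw=3):
    """Raw moments of block-wise particle arrival times plus the central
    moments summarizing mean arrival and spread -- the quantities the
    coarse-scale MRMT block model is matched against in this approach."""
    t = np.asarray(arrival_times, dtype=float)
    raw = [np.mean(t**k) for k in range(1, n_raw + 1)]
    mean = t.mean()
    central = [np.mean((t - mean)**k) for k in (2, 3)]
    return raw, central

raw, central = temporal_moments(np.random.lognormal(2.0, 0.8, 10_000))
print("mean arrival:", raw[0], " variance (spread):", central[0])
```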
Multi-atlas learner fusion: An efficient segmentation approach for large-scale data.
Asman, Andrew J; Huo, Yuankai; Plassard, Andrew J; Landman, Bennett A
2015-12-01
We propose multi-atlas learner fusion (MLF), a framework for rapidly and accurately replicating the highly accurate, yet computationally expensive, multi-atlas segmentation framework based on fusing local learners. In the largest whole-brain multi-atlas study yet reported, multi-atlas segmentations are estimated for a training set of 3464 MR brain images. Using these multi-atlas estimates we (1) estimate a low-dimensional representation for selecting locally appropriate example images, and (2) build AdaBoost learners that map a weak initial segmentation to the multi-atlas segmentation result. Thus, to segment a new target image we project the image into the low-dimensional space, construct a weak initial segmentation, and fuse the trained, locally selected, learners. The MLF framework cuts the runtime on a modern computer from 36 h down to 3-8 min - a 270× speedup - by completely bypassing the need for deformable atlas-target registrations. Additionally, we (1) describe a technique for optimizing the weak initial segmentation and the AdaBoost learning parameters, (2) quantify the ability to replicate the multi-atlas result with mean accuracies approaching the multi-atlas intra-subject reproducibility on a testing set of 380 images, (3) demonstrate significant increases in the reproducibility of intra-subject segmentations when compared to a state-of-the-art multi-atlas framework on a separate reproducibility dataset, (4) show that, under the MLF framework, the large-scale data model significantly improves segmentation over the small-scale model, and (5) indicate that the MLF framework has performance comparable to state-of-the-art multi-atlas segmentation algorithms without using non-local information. Copyright © 2015 Elsevier B.V. All rights reserved.
The Australian Computational Earth Systems Simulator
NASA Astrophysics Data System (ADS)
Mora, P.; Muhlhaus, H.; Lister, G.; Dyskin, A.; Place, D.; Appelbe, B.; Nimmervoll, N.; Abramson, D.
2001-12-01
Numerical simulation of the physics and dynamics of the entire earth system offers an outstanding opportunity for advancing earth system science and technology but represents a major challenge due to the range of scales and physical processes involved, as well as the magnitude of the software engineering effort required. However, new simulation and computer technologies are bringing this objective within reach. Under a special competitive national funding scheme to establish new Major National Research Facilities (MNRF), the Australian government together with a consortium of Universities and research institutions have funded construction of the Australian Computational Earth Systems Simulator (ACcESS). The Simulator or computational virtual earth will provide the research infrastructure to the Australian earth systems science community required for simulations of dynamical earth processes at scales ranging from microscopic to global. It will consist of thematic supercomputer infrastructure and an earth systems simulation software system. The Simulator models and software will be constructed over a five year period by a multi-disciplinary team of computational scientists, mathematicians, earth scientists, civil engineers and software engineers. The construction team will integrate numerical simulation models (3D discrete elements/lattice solid model, particle-in-cell large deformation finite-element method, stress reconstruction models, multi-scale continuum models etc) with geophysical, geological and tectonic models, through advanced software engineering and visualization technologies. When fully constructed, the Simulator aims to provide the software and hardware infrastructure needed to model solid earth phenomena including global scale dynamics and mineralisation processes, crustal scale processes including plate tectonics, mountain building, interacting fault system dynamics, and micro-scale processes that control the geological, physical and dynamic behaviour of earth systems. ACcESS represents a part of Australia's contribution to the APEC Cooperation for Earthquake Simulation (ACES) international initiative. Together with other national earth systems science initiatives including the Japanese Earth Simulator and US General Earthquake Model projects, ACcESS aims to provide a driver for scientific advancement and technological breakthroughs including: quantum leaps in understanding of earth evolution at global, crustal, regional and microscopic scales; new knowledge of the physics of crustal fault systems required to underpin the grand challenge of earthquake prediction; new understanding and predictive capabilities of geological processes such as tectonics and mineralisation.
The importance of structural anisotropy in computational models of traumatic brain injury.
Carlsen, Rika W; Daphalapurkar, Nitin P
2015-01-01
Understanding the mechanisms of injury can assist the development of methods for the management and mitigation of traumatic brain injury (TBI). Computational head models can provide valuable insight into the multi-length-scale complexity associated with the primary nature of diffuse axonal injury: how trauma to the head (at the centimeter length scale) translates to the white-matter tissue (at the millimeter length scale), and even further down to the axonal length scale, where physical injury to axons (e.g., axon separation) may occur. However, to accurately represent the development of TBI, the biofidelity of these computational models is of utmost importance. There has been a focused effort to improve the biofidelity of computational models by including more sophisticated material definitions and implementing physiologically relevant measures of injury. This paper summarizes recent computational studies that have incorporated structural anisotropy in both the material definition of the white matter and the injury criterion as a means to improve the predictive capabilities of computational models for TBI. We discuss the role of structural anisotropy on both the mechanical response of the brain tissue and the development of injury. We also outline future directions in the computational modeling of TBI.
2014-09-30
continuation of the evolution of the Regional Oceanic Modeling System (ROMS) as a multi-scale, multi-process model and its utilization for...hydrostatic component of ROMS (Kanarska et al., 2007) is required to increase its efficiency and generality. The non-hydrostatic ROMS involves the solution...instability and wind-driven mixing. For the computational regime where those processes can be partially, but not yet fully resolved, it will
A Fast Bayesian Method for Updating and Forecasting Hourly Ozone Levels
A Bayesian hierarchical space-time model is proposed by combining information from real-time ambient AIRNow air monitoring data and output from a computer simulation model known as the Community Multi-scale Air Quality (Eta-CMAQ) forecast model. A model validation analysis shows...
Collaborative Multi-Scale 3d City and Infrastructure Modeling and Simulation
NASA Astrophysics Data System (ADS)
Breunig, M.; Borrmann, A.; Rank, E.; Hinz, S.; Kolbe, T.; Schilcher, M.; Mundani, R.-P.; Jubierre, J. R.; Flurl, M.; Thomsen, A.; Donaubauer, A.; Ji, Y.; Urban, S.; Laun, S.; Vilgertshofer, S.; Willenborg, B.; Menninghaus, M.; Steuer, H.; Wursthorn, S.; Leitloff, J.; Al-Doori, M.; Mazroobsemnani, N.
2017-09-01
Computer-aided collaborative and multi-scale 3D planning are challenges for complex railway and subway track infrastructure projects in the built environment. Many legal, economic, environmental, and structural requirements have to be taken into account. The stringent use of 3D models in the different phases of the planning process facilitates communication and collaboration between stakeholders such as civil engineers, geological engineers, and decision makers. This paper presents concepts, developments, and experiences gained by an interdisciplinary research group from civil engineering informatics and geo-informatics, combining the skills of both the Building Information Modeling and the 3D GIS worlds. New approaches, including the development of a collaborative platform and 3D multi-scale modelling, are proposed for collaborative planning and simulation to improve the digital 3D planning of subway tracks and other infrastructure. Experiences during this research and lessons learned are presented, as well as an outlook on future research focusing on Building Information Modeling and 3D GIS applications for cities of the future.
Zhang, Yanhang; Barocas, Victor H; Berceli, Scott A; Clancy, Colleen E; Eckmann, David M; Garbey, Marc; Kassab, Ghassan S; Lochner, Donna R; McCulloch, Andrew D; Tran-Son-Tay, Roger; Trayanova, Natalia A
2016-09-01
Cardiovascular diseases (CVDs) are the leading cause of death in the western world. With the current development of clinical diagnostics to more accurately measure the extent and specifics of CVDs, a laudable goal is a better understanding of the structure-function relation in the cardiovascular system. Much of this fundamental understanding comes from the development and study of models that integrate biology, medicine, imaging, and biomechanics. Information from these models provides guidance for developing diagnostics, and implementation of these diagnostics in the clinical setting, in turn, provides data for refining the models. In this review, we introduce multi-scale and multi-physical models for understanding disease development and progression and for designing clinical interventions. We begin with multi-scale models of cardiac electrophysiology and mechanics for diagnosis, clinical decision support, and personalized and precision medicine in cardiology, with examples in arrhythmia and heart failure. We then introduce computational models of vasculature mechanics and associated mechanical forces for understanding vascular disease progression, designing clinical interventions, and elucidating mechanisms that underlie diverse vascular conditions. We conclude with a discussion of barriers that must be overcome to provide enhanced insights, predictions, and decisions in pre-clinical and clinical applications.
Influences of coupled fire-atmosphere interaction on wildfire behavior
NASA Astrophysics Data System (ADS)
Linn, R.; Winterkamp, J.; Jonko, A. K.; Runde, I.; Canfield, J.; Parsons, R.; Sieg, C.
2017-12-01
Two-way interactions between fire and the environment affect fire behavior at scales ranging from buoyancy-induced mixing and turbulence to fire-scale circulations that retard or increase fire spread. Advances in computing have created new opportunities for the exploration of coupled fire-atmosphere behavior using numerical models that represent interactions between the dominant processes driving wildfire behavior, including convective and radiative heat transfer, aerodynamic drag and buoyant response of the atmosphere to heat released by the fire. Such models are not practical for operational, faster-than-real-time fire prediction due to their computational and data requirements. However, they are valuable tools for exploring influences of fire-atmosphere feedbacks on fire behavior as they explicitly simulate atmospheric motions surrounding fires from meter to kilometer scales. We use the coupled fire-atmosphere model FIRETEC to gain new insights into aspects of fire behavior that have been observed in the field and laboratory, to carry out sensitivity analysis that is impractical through observations and to pose new hypotheses that can be tested experimentally. Specifically, we use FIRETEC to study the following multi-scale coupled fire-atmosphere interactions: 1) 3D fire-atmosphere interaction that dictates multi-scale fire line dynamics; 2) influence of vegetation heterogeneity and variability in wind fields on predictability of fire spread; 3) fundamental impacts of topography on fire spread. These numerical studies support new conceptual models for the dominant roles of multi-scale fluid dynamics in determining fire spread, including the roles of crosswind fire line-intensity variations on heat transfer to unburned fuels and the role of fire line depth expansion in upslope acceleration of fires.
Cockrell, Robert Chase; Christley, Scott; Chang, Eugene; An, Gary
2015-01-01
Perhaps the greatest challenge currently facing the biomedical research community is the ability to integrate highly detailed cellular and molecular mechanisms to represent clinical disease states as a pathway to engineer effective therapeutics. This is particularly evident in the representation of organ-level pathophysiology in terms of abnormal tissue structure, which, through histology, remains a mainstay in disease diagnosis and staging. As such, being able to generate anatomic scale simulations is a highly desirable goal. While computational limitations have previously constrained the size and scope of multi-scale computational models, advances in the capacity and availability of high-performance computing (HPC) resources have greatly expanded the ability of computational models of biological systems to achieve anatomic, clinically relevant scale. Diseases of the intestinal tract are prime examples of pathophysiological processes that manifest at multiple scales of spatial resolution, with structural abnormalities present at the microscopic, macroscopic and organ levels. In this paper, we describe a novel, massively parallel computational model of the gut, the Spatially Explicit General-purpose Model of Enteric Tissue_HPC (SEGMEnT_HPC), which extends an existing model of the gut epithelium, SEGMEnT, in order to create cell-for-cell anatomic scale simulations. We present an example implementation of SEGMEnT_HPC that simulates the pathogenesis of ileal pouchitis, an important clinical entity that affects patients following remedial surgery for ulcerative colitis.
Scale effect challenges in urban hydrology highlighted with a distributed hydrological model
NASA Astrophysics Data System (ADS)
Ichiba, Abdellah; Gires, Auguste; Tchiguirinskaia, Ioulia; Schertzer, Daniel; Bompard, Philippe; Ten Veldhuis, Marie-Claire
2018-01-01
Hydrological models are extensively used in urban water management, development and evaluation of future scenarios, and research activities. There is a growing interest in the development of fully distributed and grid-based models. However, some complex questions related to scale effects are not yet fully understood and remain open issues in urban hydrology. In this paper we propose a two-step investigation framework to illustrate the extent of scale effects in urban hydrology. First, fractal tools are used to highlight the scale dependence observed within distributed data input into urban hydrological models. Then an intensive multi-scale modelling effort is carried out to understand scale effects on hydrological model performance. Investigations are conducted using a fully distributed and physically based model, Multi-Hydro, developed at Ecole des Ponts ParisTech. The model is implemented at 17 spatial resolutions ranging from 100 m down to 5 m. Results clearly exhibit the scale effect challenges in urban hydrology modelling. The applicability of fractal concepts highlights the scale dependence observed within distributed data: patterns of geophysical data change when the size of the observation pixel changes. The multi-scale modelling investigation confirms scale effects on hydrological model performance. Results are analysed over three ranges of scales identified in the fractal analysis and confirmed through modelling. This work also discusses some remaining issues in urban hydrology modelling related to the availability of high-quality data at high resolutions, model numerical instabilities, and computation time requirements. The main findings of this paper enable traditional methods of model calibration to be replaced by innovative methods of model resolution alteration based on the spatial variability of the data and the scaling of flows in urban hydrology.
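The fractal analysis referred to above rests on box counting: if the number of occupied boxes N(ε) scales as ε^(-D), the slope of log N against log(1/ε) estimates the fractal dimension D. Below is a minimal, self-contained sketch of that computation on a binary raster (for example, an impervious-area mask); the mask and all names are synthetic, illustrative stand-ins, not the paper's data or code.

    import numpy as np

    def box_count_dimension(mask, sizes=(1, 2, 4, 8, 16, 32)):
        """Estimate the fractal dimension of a binary 2D mask by box counting."""
        counts = []
        for s in sizes:
            h, w = (mask.shape[0] // s) * s, (mask.shape[1] // s) * s
            blocks = mask[:h, :w].reshape(h // s, s, w // s, s)
            occupied = blocks.any(axis=(1, 3)).sum()  # boxes containing any pixel
            counts.append(occupied)
        # Slope of log N(eps) vs log(1/eps) gives the dimension estimate.
        slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)), np.log(counts), 1)
        return slope

    rng = np.random.default_rng(1)
    mask = rng.random((256, 256)) < 0.3  # synthetic "impervious area" raster
    print(box_count_dimension(mask))     # ~2 for a space-filling random mask

A strongly scale-dependent dataset shows a non-trivial slope (D < 2 for a 2D raster), which is the signature the paper uses to delimit its three ranges of scales.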
Crops In Silico: Generating Virtual Crops Using an Integrative and Multi-scale Modeling Platform.
Marshall-Colon, Amy; Long, Stephen P; Allen, Douglas K; Allen, Gabrielle; Beard, Daniel A; Benes, Bedrich; von Caemmerer, Susanne; Christensen, A J; Cox, Donna J; Hart, John C; Hirst, Peter M; Kannan, Kavya; Katz, Daniel S; Lynch, Jonathan P; Millar, Andrew J; Panneerselvam, Balaji; Price, Nathan D; Prusinkiewicz, Przemyslaw; Raila, David; Shekar, Rachel G; Shrivastava, Stuti; Shukla, Diwakar; Srinivasan, Venkatraman; Stitt, Mark; Turk, Matthew J; Voit, Eberhard O; Wang, Yu; Yin, Xinyou; Zhu, Xin-Guang
2017-01-01
Multi-scale models can facilitate whole plant simulations by linking gene networks, protein synthesis, metabolic pathways, physiology, and growth. Whole plant models can be further integrated with ecosystem, weather, and climate models to predict how various interactions respond to environmental perturbations. These models have the potential to fill in missing mechanistic details and generate new hypotheses to prioritize directed engineering efforts. Outcomes will potentially accelerate improvement of crop yield and sustainability and increase future food security. It is time for a paradigm shift in plant modeling, from largely isolated efforts to a connected community that takes advantage of advances in high performance computing and mechanistic understanding of plant processes. Tools for guiding future crop breeding and engineering, understanding the implications of discoveries at the molecular level for whole plant behavior, and improved prediction of plant and ecosystem responses to the environment are urgently needed. The purpose of this perspective is to introduce Crops in silico (cropsinsilico.org), an integrative and multi-scale modeling platform, as one solution that combines isolated modeling efforts toward the generation of virtual crops, and which is open and accessible to the entire plant biology community. The major challenges involved in both the development and deployment of a shared, multi-scale modeling platform, which are summarized in this prospectus, were recently identified during the first Crops in silico Symposium and Workshop.
NASA Astrophysics Data System (ADS)
Marsh, C.; Pomeroy, J. W.; Wheater, H. S.
2016-12-01
There is a need for hydrological land surface schemes that can link to atmospheric models, provide hydrological prediction at multiple scales, and guide the development of multi-objective water prediction systems. Distributed raster-based models suffer from an overrepresentation of topography, leading to wasted computational effort and increased uncertainty due to greater numbers of parameters and initial conditions. The Canadian Hydrological Model (CHM) is a modular, multiphysics, spatially distributed modelling framework designed for representing hydrological processes, including those that operate in cold regions. Unstructured meshes permit variable spatial resolution, allowing coarse resolution where spatial variability is low and fine resolution where required. Model uncertainty is reduced by decreasing the number of computational elements relative to high-resolution rasters. CHM uses a novel multi-objective approach for unstructured triangular mesh generation that fulfills hydrologically important constraints (e.g., basin boundaries, water bodies, soil classification, land cover, elevation, and slope/aspect). This provides an efficient spatial representation of parameters and initial conditions, as well as well-formed and well-graded triangles that are suitable for numerical discretization. CHM uses high-quality open source libraries and high performance computing paradigms to provide a framework for integrating current state-of-the-art process algorithms. The impact of changes to model structure, including individual algorithms, parameters, initial conditions, driving meteorology, and spatial/temporal discretization, can be easily tested. Initial testing of CHM compared spatial scales and model complexity for a spring melt period at a sub-arctic mountain basin. The meshing algorithm reduced the total number of computational elements and preserved the spatial heterogeneity of predictions.
Leveraging unsupervised training sets for multi-scale compartmentalization in renal pathology
NASA Astrophysics Data System (ADS)
Lutnick, Brendon; Tomaszewski, John E.; Sarder, Pinaki
2017-03-01
Clinical pathology relies on manual compartmentalization and quantification of biological structures, which is time consuming and often error-prone. Application of computer vision segmentation algorithms to histopathological image analysis, in contrast, can offer fast, reproducible, and accurate quantitative analysis to aid pathologists. Algorithms tunable to different biologically relevant structures can allow accurate, precise, and reproducible estimates of disease states. In this direction, we have developed a fast, unsupervised computational method for simultaneously separating all biologically relevant structures from histopathological images at multiple scales. Segmentation is achieved by solving an energy optimization problem. Representing the image as a graph, nodes (pixels) are grouped by minimizing a Potts model Hamiltonian, adopted from theoretical physics, where it models interacting electron spins. Pixel relationships (modeled as edges) are used to update the energy of the partitioned graph. By iteratively improving the clustering, the optimal number of segments is revealed. To reduce computational time, the graph is simplified using a Cantor pairing function to intelligently reduce the number of included nodes. The classified nodes are then used to train a multiclass support vector machine to apply the segmentation over the full image. Accurate segmentations of images with as many as 10^6 pixels can be completed in only 5 s, allowing for attainable multi-scale visualization. To establish clinical potential, we employed our method on renal biopsies to quantitatively visualize, for the first time, scale-variant compartments of heterogeneous intra- and extraglomerular structures simultaneously. The utility of our method extends to fields such as oncology and genomics, and to non-biological problems.
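Two ingredients of the method above are easy to state concretely: the Potts model Hamiltonian that scores a candidate labeling, and the Cantor pairing function used to collapse node pairs into single integer keys. The sketch below shows both in isolation on a toy image graph; the 4-neighbor interaction, the intensity-similarity weighting, and all names are illustrative assumptions, not the authors' exact formulation.

    import numpy as np

    def potts_energy(labels, image, J=1.0):
        """Potts-like energy on a 4-connected pixel grid: neighboring pixels
        with the same label lower the energy, weighted by intensity similarity.
        np.roll gives a toroidal boundary, kept for brevity."""
        e = 0.0
        for axis in (0, 1):
            same = labels == np.roll(labels, 1, axis=axis)
            sim = np.exp(-(image - np.roll(image, 1, axis=axis)) ** 2)
            e -= J * np.sum(sim[same])
        return e

    def cantor_pair(a, b):
        """Cantor pairing: a unique integer key for each ordered pair (a, b)."""
        return (a + b) * (a + b + 1) // 2 + b

    rng = np.random.default_rng(2)
    img = rng.random((64, 64))
    lab = (img > 0.5).astype(int)   # a crude 2-segment labeling
    print(potts_energy(lab, img))   # lower energy = more coherent segments
    print(cantor_pair(3, 7))        # -> 62

Minimizing this energy over labelings, while iteratively adjusting the number of labels, is what reveals the "optimal number of segments" mentioned in the abstract.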
NASA Astrophysics Data System (ADS)
Yang, Hong-Yong; Lu, Lan; Cao, Ke-Cai; Zhang, Si-Ying
2010-04-01
In this paper, the relations between network topology and the moving consensus of multi-agent systems are studied. A consensus-prestissimo scale-free network model with static preferential-consensus attachment is presented on the rewired links of a regular network. The effects of the static preferential-consensus BA network on the algebraic connectivity of the topology graph are compared with those of the regular network. The robustness gain to delay is analyzed for variable network topologies of the same scale. The time to reach consensus is studied for the dynamic network with and without communication delays. Computer simulations validate that the convergence speed of multi-agent systems can be greatly improved in the preferential-consensus BA network model under different configurations.
Evaluation of the Community Multi-scale Air Quality Model Version 5.2
The Community Multiscale Air Quality (CMAQ) model is a state-of-the-science air quality model that simulates the emission, transport and fate of numerous air pollutants, including ozone and particulate matter. The Computational Exposure Division (CED) of the U.S. Environmental Pr...
Computer Aided Battery Engineering Consortium
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pesaran, Ahmad
A multi-national lab collaborative team was assembled that includes experts from academia and industry to enhance the recently developed Computer-Aided Battery Engineering for Electric Drive Vehicles (CAEBAT)-II battery crush modeling tools and to develop microstructure models for electrode design, both computationally efficient. Task 1. The new Multi-Scale Multi-Domain model framework (GH-MSMD) provides 100x to 1,000x computation speed-up in battery electrochemical/thermal simulation while retaining modularity of particles and electrode-, cell-, and pack-level domains. The increased speed enables direct use of the full model in parameter identification. Task 2. Mechanical-electrochemical-thermal (MECT) models for mechanical abuse simulation were simultaneously coupled, enabling simultaneous modeling of electrochemical reactions during the short circuit when necessary. The interactions between mechanical failure and battery cell performance were studied, and the flexibility of the model for various battery structures and loading conditions was improved. Model validation is ongoing to compare with test data from Sandia National Laboratories. The ABDT tool was established in ANSYS. Task 3. Microstructural modeling was conducted to enhance next-generation electrode designs. This 3-year project will validate models for a variety of electrodes, complementing Advanced Battery Research programs. Prototype tools have been developed for electrochemical simulation and geometric reconstruction.
NASA Astrophysics Data System (ADS)
Ji, X.; Shen, C.
2017-12-01
Flood inundation presents substantial societal hazards and also changes biogeochemistry for systems like the Amazon. It is often expensive to simulate high-resolution flood inundation and propagation in a long-term watershed-scale model. Due to the Courant-Friedrichs-Lewy (CFL) restriction, high resolution and large local flow velocities both demand prohibitively small time steps, even for parallel codes. Here we develop a parallel surface-subsurface process-based model enhanced by multi-resolution meshes that are adaptively switched on or off. The high-resolution overland flow meshes are enabled only when the flood wave reaches the floodplains. The model applies a semi-implicit, semi-Lagrangian (SISL) scheme in solving the dynamic wave equations, and with the assistance of the multi-mesh method it also adaptively applies the dynamic wave equation only in areas of deep inundation. Therefore, the model achieves a balance between accuracy and computational cost.
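The CFL restriction mentioned above ties the maximum stable explicit time step to the mesh size and wave speed, which is exactly why fine meshes carrying fast flood waves are expensive. A minimal sketch of that bound, Δt ≤ C·Δx/max(|u| + √(gh)) with the gravity-wave celerity included as in shallow-water practice, is shown below; the numbers and names are illustrative.

    import numpy as np

    def max_stable_dt(dx, velocity, depth, courant=0.9, g=9.81):
        """Largest explicit time step allowed by a CFL condition for
        shallow-water flow: wave speed = |u| + sqrt(g*h)."""
        wave_speed = np.abs(velocity) + np.sqrt(g * np.maximum(depth, 0.0))
        return courant * dx / wave_speed.max()

    # 5 m floodplain cells, 2 m/s peak velocity, 3 m peak depth:
    u = np.array([0.5, 1.2, 2.0])
    h = np.array([0.5, 1.5, 3.0])
    print(max_stable_dt(dx=5.0, velocity=u, depth=h))  # ~0.6 s

Semi-implicit, semi-Lagrangian schemes such as SISL relax this constraint, which is what lets the model above take larger steps than a fully explicit solver on the same mesh.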
NASA Astrophysics Data System (ADS)
Ichiba, Abdellah; Gires, Auguste; Tchiguirinskaia, Ioulia; Schertzer, Daniel; Bompard, Philippe; Ten Veldhuis, Marie-Claire
2017-04-01
Nowadays, there is growing interest in small-scale rainfall information, provided by weather radars, for use in urban water management and decision-making. In parallel, increasing interest is devoted to the development of fully distributed and grid-based models, following the increase in computational capabilities and the availability of the high-resolution GIS information needed to implement such models. However, the choice of an appropriate implementation scale that integrates catchment heterogeneity and the full rainfall variability measured by high-resolution radar technologies remains an open issue. This work proposes a two-step investigation of scale effects in urban hydrology and their impact on modeling. In the first step, fractal tools are used to highlight the scale dependency observed within the distributed data used to describe catchment heterogeneity; both the structure of the sewer network and the distribution of impervious areas are analyzed. Then an intensive multi-scale modeling effort is carried out to understand scaling effects on hydrological model performance. Investigations were conducted using a fully distributed and physically based model, Multi-Hydro, developed at Ecole des Ponts ParisTech. The model was implemented at 17 spatial resolutions ranging from 100 m to 5 m, and modeling investigations were performed using both rain gauge rainfall information and high-resolution X-band radar data in order to assess the sensitivity of the model to small-scale rainfall variability. Results from this work demonstrate the scale effect challenges in urban hydrology modeling. Indeed, the fractal concept highlights the scale dependency observed within the distributed data used to implement hydrological models: patterns of geophysical data change when the observation pixel size changes. The multi-scale modeling investigation performed with the Multi-Hydro model at 17 spatial resolutions confirms the scaling effect on hydrological model performance. Results were analyzed over three ranges of scales identified in the fractal analysis and confirmed in the modeling work. The sensitivity of the model to small-scale rainfall variability is discussed as well.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jiang, Lijian, E-mail: ljjiang@hnu.edu.cn; Li, Xinping, E-mail: exping@126.com
Stochastic multiscale modeling has become a necessary approach to quantify uncertainty and characterize multiscale phenomena for many practical problems such as flows in stochastic porous media. The numerical treatment of stochastic multiscale models can be very challenging owing to the existence of complex uncertainty and multiple physical scales in the models. To efficiently handle this difficulty, we construct a computational reduced model. To this end, we propose a multi-element least square high-dimensional model representation (HDMR) method, through which the random domain is adaptively decomposed into a few subdomains, and a local least square HDMR is constructed in each subdomain. These local HDMRs are represented by a finite number of orthogonal basis functions defined in low-dimensional random spaces. The coefficients in the local HDMRs are determined using least square methods. We paste all the local HDMR approximations together to form a global HDMR approximation. To further reduce computational cost, we present a multi-element reduced least-square HDMR, which improves both efficiency and approximation accuracy in certain conditions. To effectively treat heterogeneity properties and multiscale features in the models, we integrate multiscale finite element methods with multi-element least-square HDMR for stochastic multiscale model reduction. This approach significantly reduces the original model's complexity in both the resolution of the physical space and the high-dimensional stochastic space. We analyze the proposed approach, and provide a set of numerical experiments to demonstrate the performance of the presented model reduction techniques. - Highlights: • Multi-element least square HDMR is proposed to treat stochastic models. • The random domain is adaptively decomposed into subdomains to obtain an adaptive multi-element HDMR. • Least-square reduced HDMR is proposed to enhance computational efficiency and approximation accuracy in certain conditions. • Integrating MsFEM and multi-element least square HDMR can significantly reduce computational complexity.
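At first order, an HDMR surrogate approximates f(x) ≈ f0 + Σ_i f_i(x_i), with each component f_i expanded in a small orthogonal basis and the coefficients fitted by least squares. Below is a minimal single-element sketch of that construction with Legendre polynomials on [-1, 1]; it omits the adaptive multi-element decomposition and the multiscale FEM coupling, and all names and data are illustrative.

    import numpy as np
    from numpy.polynomial import legendre

    def fit_first_order_hdmr(X, y, order=3):
        """Least-squares fit of f(x) ~ c0 + sum_i f_i(x_i), each f_i a
        Legendre expansion of the given order (constant term excluded)."""
        n, d = X.shape
        cols = [np.ones(n)]
        for i in range(d):
            for k in range(1, order + 1):
                coef = np.zeros(k + 1)
                coef[k] = 1.0
                cols.append(legendre.legval(X[:, i], coef))  # P_k(x_i)
        A = np.column_stack(cols)
        c, *_ = np.linalg.lstsq(A, y, rcond=None)
        return A, c

    rng = np.random.default_rng(3)
    X = rng.uniform(-1, 1, size=(500, 4))    # 4 random inputs
    y = np.sin(2 * X[:, 0]) + X[:, 1] ** 2 + 0.1 * rng.normal(size=500)
    A, c = fit_first_order_hdmr(X, y)
    print(np.mean((A @ c - y) ** 2))         # surrogate training error

The multi-element variant in the paper would fit one such expansion per adaptively chosen subdomain of the random space and stitch the local fits into a global surrogate.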
NASA Astrophysics Data System (ADS)
Sreekanth, J.; Moore, Catherine
2018-04-01
The application of global sensitivity and uncertainty analysis techniques to groundwater models of deep sedimentary basins is typically challenged by large computational burdens combined with associated numerical stability issues. The highly parameterized approaches required for exploring the predictive uncertainty associated with the heterogeneous hydraulic characteristics of multiple aquifers and aquitards in these sedimentary basins exacerbate these issues. A novel Patch Modelling Methodology is proposed for improving the computational feasibility of stochastic modelling analysis of large-scale and complex groundwater models. The method incorporates a nested groundwater modelling framework that enables efficient simulation of groundwater flow and transport across multiple spatial and temporal scales. The method also allows different processes to be simulated at different model scales. Existing nested model methodologies are extended by employing 'joining predictions' for extrapolating prediction-salient information from one model scale to the next. This establishes a feedback mechanism supporting the transfer of information from child models to parent models as well as from parent models to child models in a computationally efficient manner. This feedback mechanism is simple and flexible, and ensures that while the salient small-scale features influencing larger-scale predictions are transferred back to the larger scale, live coupling of models is not required. The method allows multiple groundwater flow and transport processes to be modelled using separate groundwater models built for the appropriate spatial and temporal scales, within a stochastic framework, while removing the computational burden associated with live model coupling. The utility of the method is demonstrated by application to an actual large-scale aquifer injection scheme in Australia.
Evaluation of the Community Multi-scale Air Quality (CMAQ) Model Version 5.1
The Community Multiscale Air Quality (CMAQ) model is a state-of-the-science air quality model that simulates the emission, transport and fate of numerous air pollutants, including ozone and particulate matter. The Computational Exposure Division (CED) of the U.S. Environmental Pr...
High performance computing (HPC) requirements for the new generation of variable grid resolution (VGR) global climate models differ from those of traditional global models. A VGR global model with 15 km grids over the CONUS stretching to 60 km grids elsewhere will have ~2.5 tim...
Multi-scale Modeling of Chromosomal DNA in Living Cells
NASA Astrophysics Data System (ADS)
Spakowitz, Andrew
The organization and dynamics of chromosomal DNA play a pivotal role in a range of biological processes, including gene regulation, homologous recombination, replication, and segregation. Establishing a quantitative theoretical model of DNA organization and dynamics would be valuable in bridging the gap between the molecular-level packaging of DNA and genome-scale chromosomal processes. Our research group utilizes analytical theory and computational modeling to establish a predictive theoretical model of chromosomal organization and dynamics. In this talk, I will discuss our efforts to develop multi-scale polymer models of chromosomal DNA that are both sufficiently detailed to address specific protein-DNA interactions while capturing experimentally relevant time and length scales. I will demonstrate how these modeling efforts are capable of quantitatively capturing aspects of behavior of chromosomal DNA in both prokaryotic and eukaryotic cells. This talk will illustrate that capturing dynamical behavior of chromosomal DNA at various length scales necessitates a range of theoretical treatments that accommodate the critical physical contributions that are relevant to in vivo behavior at these disparate length and time scales. National Science Foundation, Physics of Living Systems Program (PHY-1305516).
Development of mpi_EPIC model for global agroecosystem modeling
Kang, Shujiang; Wang, Dali; Nichols, Jeff A.; ...
2014-12-31
Models that address policy-maker concerns about multi-scale effects of food and bioenergy production systems are computationally demanding. We integrated the message passing interface algorithm into the process-based EPIC model to accelerate computation of ecosystem effects. Simulation performance was further enhanced by applying the Vampir framework. When this enhanced mpi_EPIC model was tested, total execution time for a global 30-year simulation of a switchgrass cropping system was shortened to less than 0.5 hours on a supercomputer. The results illustrate that mpi_EPIC using parallel design can balance simulation workloads and facilitate large-scale, high-resolution analysis of agricultural production systems, management alternatives and environmental effects.
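The message passing pattern described above, one independent point simulation per grid cell, distributed across ranks and gathered at the end, can be expressed compactly with mpi4py. The sketch below is a much simplified illustration of that pattern under assumed names, not the actual mpi_EPIC design (which is built on the EPIC code itself).

    from mpi4py import MPI
    import numpy as np

    def simulate_cell(cell_id):
        """Stand-in for one EPIC-style point simulation (illustrative only)."""
        rng = np.random.default_rng(cell_id)
        return cell_id, rng.normal()          # e.g., a 30-year mean yield

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    # Root splits the global cell list into one chunk per rank.
    cells = np.array_split(np.arange(10000), size) if rank == 0 else None
    my_cells = comm.scatter(cells, root=0)

    # Each rank runs its share of independent simulations.
    my_results = [simulate_cell(int(c)) for c in my_cells]

    # Gather all results back on the root rank.
    results = comm.gather(my_results, root=0)
    if rank == 0:
        print(sum(len(r) for r in results), "cells simulated")

Run with, for example, mpiexec -n 8 python sketch.py; because the cells are independent, workload balance reduces to splitting the cell list evenly, which is the balancing property the abstract highlights.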
Fu, Min; Wu, Wenming; Hong, Xiafei; Liu, Qiuhua; Jiang, Jialin; Ou, Yaobin; Zhao, Yupei; Gong, Xinqi
2018-04-24
Efficient computational recognition and segmentation of target organs from medical images are foundational in diagnosis and treatment, especially for pancreas cancer. In practice, the diversity in appearance of the pancreas and other abdominal organs makes detailed texture information important in a segmentation algorithm. According to our observations, however, the structures of previous networks, such as the Richer Feature Convolutional Network (RCF), are too coarse to segment the object (pancreas) accurately, especially at the edge. In this paper, we extend the RCF, originally proposed for edge detection, to the challenging task of pancreas segmentation, and put forward a novel pancreas segmentation network. By employing a multi-layer up-sampling structure in place of the simple up-sampling operation in all stages, the proposed network fully exploits the multi-scale detailed context information of the object (pancreas) to perform per-pixel segmentation. Additionally, we train our network on CT scans, obtaining an effective pipeline. With our multi-layer up-sampling model, this pipeline achieves better performance than RCF in the task of single-object (pancreas) segmentation. Combined with multi-scale input, we achieve a DSC (Dice similarity coefficient) of 76.36% on the testing data. Our experiments show that the proposed model outperforms previous networks on our dataset; in other words, it is better at capturing detailed context information. Therefore, our new single-object segmentation model has practical value in automated computational diagnosis.
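The contrast drawn above, a single large up-sampling step versus a staged multi-layer one, can be made concrete in a few lines of PyTorch. The sketch below is a generic illustration of the idea, not the authors' architecture; the layer counts and channel widths are arbitrary assumptions.

    import torch
    import torch.nn as nn

    # One-shot upsampling: a single 8x interpolation (the coarse approach).
    one_shot = nn.Upsample(scale_factor=8, mode='bilinear', align_corners=False)

    # Multi-layer upsampling: three 2x stages, each refined by a convolution,
    # letting the network recover detail progressively at each scale.
    multi_layer = nn.Sequential(
        nn.Upsample(scale_factor=2, mode='bilinear', align_corners=False),
        nn.Conv2d(64, 32, kernel_size=3, padding=1), nn.ReLU(inplace=True),
        nn.Upsample(scale_factor=2, mode='bilinear', align_corners=False),
        nn.Conv2d(32, 16, kernel_size=3, padding=1), nn.ReLU(inplace=True),
        nn.Upsample(scale_factor=2, mode='bilinear', align_corners=False),
        nn.Conv2d(16, 1, kernel_size=3, padding=1),  # per-pixel segmentation logit
    )

    x = torch.randn(1, 64, 32, 32)                   # a coarse 64-channel feature map
    print(one_shot(x).shape, multi_layer(x).shape)   # both reach 256x256 resolution

Both paths reach the same output resolution, but the staged path interleaves learnable convolutions between interpolations, which is what lets it sharpen edges rather than merely enlarge blurry features.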
Hierarchical algorithms for modeling the ocean on hierarchical architectures
NASA Astrophysics Data System (ADS)
Hill, C. N.
2012-12-01
This presentation will describe an approach to using accelerator/co-processor technology that maps hierarchical, multi-scale modeling techniques to an underlying hierarchical hardware architecture. The focus of this work is on making effective use of both the CPU and accelerator/co-processor parts of a system for large-scale ocean modeling. In this work, a lower-resolution basin-scale ocean model is locally coupled to multiple, "embedded", limited-area higher-resolution sub-models. The higher-resolution models execute on co-processor/accelerator hardware and do not interact directly with other sub-models. The lower-resolution basin-scale model executes on the system CPU(s). The result is a multi-scale algorithm that aligns with hardware designs in the co-processor/accelerator space. We demonstrate this approach being used to substitute explicit process models for standard parameterizations. Code for our sub-models is implemented through a generic abstraction layer, so that we can target multiple accelerator architectures with different programming environments. We will present two application and implementation examples. One uses the CUDA programming environment and targets GPU hardware. This example employs a simple non-hydrostatic two-dimensional sub-model to represent vertical motion more accurately. The second example uses a highly threaded three-dimensional model at high resolution. This targets a MIC/Xeon Phi-like environment and uses sub-models as a way to explicitly compute sub-mesoscale terms. In both cases the accelerator/co-processor capability provides extra compute cycles that allow improved model fidelity for little or no extra wall-clock time cost.
NASA Technical Reports Server (NTRS)
Mohr, Karen Irene; Tao, Wei-Kuo; Chern, Jiun-Dar; Kumar, Sujay V.; Peters-Lidard, Christa D.
2013-01-01
The present generation of general circulation models (GCM) use parameterized cumulus schemes and run at hydrostatic grid resolutions. To improve the representation of cloud-scale moist processes and land-atmosphere interactions, a global Multi-scale Modeling Framework (MMF) coupled to the Land Information System (LIS) has been developed at NASA-Goddard Space Flight Center. The MMF-LIS has three components, a finite-volume (fv) GCM (Goddard Earth Observing System Ver. 4, GEOS-4), a 2D cloud-resolving model (Goddard Cumulus Ensemble, GCE), and the LIS, representing the large-scale atmospheric circulation, cloud processes, and land surface processes, respectively. The non-hydrostatic GCE model replaces the single-column cumulus parameterization of fvGCM. The model grid is composed of an array of fvGCM gridcells each with a series of embedded GCE models. A horizontal coupling strategy, GCE-fvGCM-Coupler-LIS, offered significant computational efficiency, with the scalability and I/O capabilities of LIS permitting land-atmosphere interactions at cloud-scale. Global simulations of 2007-2008 and comparisons to observations and reanalysis products were conducted. Using two different versions of the same land surface model but the same initial conditions, divergence in regional, synoptic-scale surface pressure patterns emerged within two weeks. The sensitivity of large-scale circulations to land surface model physics revealed significant functional value to using a scalable, multi-model land surface modeling system in global weather and climate prediction.
Modeling the Webgraph: How Far We Are
NASA Astrophysics Data System (ADS)
Donato, Debora; Laura, Luigi; Leonardi, Stefano; Millozzi, Stefano
The following sections are included: Introduction; Preliminaries; WebBase; In-degree and out-degree; PageRank; Bipartite cliques; Strongly connected components; Stochastic models of the webgraph; Models of the webgraph; A multi-layer model; Large scale simulation; Algorithmic techniques for generating and measuring webgraphs; Data representation and multifiles; Generating webgraphs; Traversal with two bits for each node; Semi-external breadth first search; Semi-external depth first search; Computation of the SCCs; Computation of the bow-tie regions; Disjoint bipartite cliques; PageRank; Summary and outlook.
Zheng, Wei; Yan, Xiaoyong; Zhao, Wei; Qian, Chengshan
2017-12-20
A novel large-scale multi-hop localization algorithm based on regularized extreme learning is proposed in this paper. The large-scale multi-hop localization problem is formulated as a learning problem. Unlike similar localization algorithms, the proposed algorithm overcomes the shortcoming of traditional algorithms that are only applicable to isotropic networks, and therefore adapts well to complex deployment environments. The proposed algorithm is composed of three stages: data acquisition, modeling and location estimation. In the data acquisition stage, the training information between nodes of the given network is collected. In the modeling stage, the model relating hop counts to the physical distances between nodes is constructed using regularized extreme learning. In the location estimation stage, each node finds its own location in a distributed manner. Theoretical analysis and several experiments show that the proposed algorithm adapts to different topological environments with low computational cost. Furthermore, high accuracy can be achieved by this method without setting complex parameters.
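A regularized extreme learning machine is essentially a random, fixed hidden layer followed by a ridge-regression readout, which is what keeps training cheap. The sketch below fits a hop-count-to-distance model in that style; the feature layout (hop counts to a set of anchor nodes), the toy data, and every name are illustrative assumptions rather than the paper's construction.

    import numpy as np

    class RegularizedELM:
        """Random hidden layer + ridge-regularized linear readout."""
        def __init__(self, n_hidden=50, lam=1e-2, seed=0):
            self.n_hidden, self.lam = n_hidden, lam
            self.rng = np.random.default_rng(seed)

        def fit(self, X, y):
            d = X.shape[1]
            self.W = self.rng.normal(size=(d, self.n_hidden))  # fixed random weights
            self.b = self.rng.normal(size=self.n_hidden)
            H = np.tanh(X @ self.W + self.b)                   # hidden activations
            # Ridge solution: beta = (H^T H + lam I)^-1 H^T y
            self.beta = np.linalg.solve(
                H.T @ H + self.lam * np.eye(self.n_hidden), H.T @ y)
            return self

        def predict(self, X):
            return np.tanh(X @ self.W + self.b) @ self.beta

    # Toy data: hop counts to 5 anchors -> distance to a target anchor.
    rng = np.random.default_rng(4)
    hops = rng.integers(1, 12, size=(300, 5)).astype(float)
    dist = 25.0 * hops[:, 0] + rng.normal(scale=5.0, size=300)  # ~25 m per hop
    model = RegularizedELM().fit(hops, dist)
    print(np.mean(np.abs(model.predict(hops) - dist)))          # mean abs error

Because only the linear readout is trained, fitting reduces to one regularized linear solve, which matches the low computational cost claimed in the abstract.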
Modelling strategies to predict the multi-scale effects of rural land management change
NASA Astrophysics Data System (ADS)
Bulygina, N.; Ballard, C. E.; Jackson, B. M.; McIntyre, N.; Marshall, M.; Reynolds, B.; Wheater, H. S.
2011-12-01
Changes to the rural landscape due to agricultural land management are ubiquitous, yet predicting the multi-scale effects of land management change on hydrological response remains an important scientific challenge. Much empirical research has been of little generic value due to inadequate design and funding of monitoring programmes, while the modelling issues challenge the capability of data-based, conceptual and physics-based modelling approaches. In this paper we report on a major UK research programme, motivated by a national need to quantify effects of agricultural intensification on flood risk. Working with a consortium of farmers in upland Wales, a multi-scale experimental programme (from experimental plots to 2nd order catchments) was developed to address issues of upland agricultural intensification. This provided data support for a multi-scale modelling programme, in which highly detailed physics-based models were conditioned on the experimental data and used to explore effects of potential field-scale interventions. A meta-modelling strategy was developed to represent detailed modelling in a computationally-efficient manner for catchment-scale simulation; this allowed catchment-scale quantification of potential management options. For more general application to data-sparse areas, alternative approaches were needed. Physics-based models were developed for a range of upland management problems, including the restoration of drained peatlands, afforestation, and changing grazing practices. Their performance was explored using literature and surrogate data; although subject to high levels of uncertainty, important insights were obtained, of practical relevance to management decisions. In parallel, regionalised conceptual modelling was used to explore the potential of indices of catchment response, conditioned on readily-available catchment characteristics, to represent ungauged catchments subject to land management change. Although based in part on speculative relationships, significant predictive power was derived from this approach. Finally, using a formal Bayesian procedure, these different sources of information were combined with local flow data in a catchment-scale conceptual model application, i.e. using small-scale physical properties, regionalised signatures of flow and available flow measurements.
A Bayesian method for assessing multiscale species-habitat relationships
Stuber, Erica F.; Gruber, Lutz F.; Fontaine, Joseph J.
2017-01-01
Context: Scientists face several theoretical and methodological challenges in appropriately describing fundamental wildlife-habitat relationships in models. The spatial scales of habitat relationships are often unknown, and are expected to follow a multi-scale hierarchy. Typical frequentist or information theoretic approaches often suffer under collinearity in multi-scale studies, fail to converge when models are complex, or represent an intractable computational burden when candidate model sets are large. Objectives: Our objective was to implement an automated, Bayesian method for inference on the spatial scales of habitat variables that best predict animal abundance. Methods: We introduce Bayesian latent indicator scale selection (BLISS), a Bayesian method to select spatial scales of predictors using latent scale indicator variables that are estimated with reversible-jump Markov chain Monte Carlo sampling. BLISS does not suffer from collinearity, and substantially reduces computation time of studies. We present a simulation study to validate our method and apply our method to a case-study of land cover predictors for ring-necked pheasant (Phasianus colchicus) abundance in Nebraska, USA. Results: Our method returns accurate descriptions of the explanatory power of multiple spatial scales, and unbiased and precise parameter estimates under commonly encountered data limitations including spatial scale autocorrelation, effect size, and sample size. BLISS outperforms commonly used model selection methods including stepwise and AIC, and reduces runtime by 90%. Conclusions: Given the pervasiveness of scale-dependency in ecology, and the implications of mismatches between the scales of analyses and ecological processes, identifying the spatial scales over which species are integrating habitat information is an important step in understanding species-habitat relationships. BLISS is a widely applicable method for identifying important spatial scales, propagating scale uncertainty, and testing hypotheses of scaling relationships.
NASA Astrophysics Data System (ADS)
Lin, Y.; O'Malley, D.; Vesselinov, V. V.
2015-12-01
Inverse modeling seeks model parameters given a set of observed state variables. However, for many practical problems, because the observed data sets are often large and the model parameters are often numerous, conventional methods for solving inverse problems can be computationally expensive. We have developed a new, computationally efficient Levenberg-Marquardt method for large-scale inverse modeling. Levenberg-Marquardt methods require the solution of a dense linear system of equations which can be prohibitively expensive to compute for large-scale inverse problems. Our novel method projects the original large-scale linear problem down to a Krylov subspace, such that the dimensionality of the measurements can be significantly reduced. Furthermore, instead of solving the linear system for every Levenberg-Marquardt damping parameter, we store the Krylov subspace computed when solving for the first damping parameter and recycle it for all subsequent damping parameters. The efficiency of our new inverse modeling algorithm is significantly improved by these computational techniques. We apply the new method to invert for a random transmissivity field. Our algorithm is fast enough to solve for the distributed model parameters (transmissivity) at each computational node in the model domain. The inversion is also aided by the use of regularization techniques. The algorithm is coded in Julia and implemented in the MADS computational framework (http://mads.lanl.gov). Julia is an advanced high-level scientific programming language that allows for efficient memory management and utilization of high-performance computational resources. Compared with a Levenberg-Marquardt method using standard linear inversion techniques, our method yields a speed-up ratio of 15 in a multi-core computational environment and a speed-up ratio of 45 in a single-core computational environment. Our new inverse modeling method is therefore a powerful tool for large-scale applications.
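For reference, the basic Levenberg-Marquardt update solves (JᵀJ + λI)δ = -Jᵀr and adjusts the damping λ by trial; the paper's contribution is avoiding the repeated dense solve by projecting onto a reused Krylov subspace. A plain, unprojected sketch of the classic iteration is below, written in Python rather than the authors' Julia, with an illustrative toy residual.

    import numpy as np

    def levenberg_marquardt(residual, jacobian, x0, lam=1e-2, iters=50):
        """Classic LM: solve (J^T J + lam*I) dx = -J^T r each step,
        shrinking lam on success and growing it on failure."""
        x = x0.copy()
        for _ in range(iters):
            r, J = residual(x), jacobian(x)
            dx = np.linalg.solve(J.T @ J + lam * np.eye(x.size), -J.T @ r)
            if np.linalg.norm(residual(x + dx)) < np.linalg.norm(r):
                x, lam = x + dx, lam * 0.5      # accept step, trust model more
            else:
                lam *= 10.0                     # reject step, damp harder
        return x

    # Toy nonlinear least squares: fit y = exp(a*t) + b.
    t = np.linspace(0, 1, 30)
    y = np.exp(1.3 * t) + 0.4
    res = lambda p: np.exp(p[0] * t) + p[1] - y
    jac = lambda p: np.column_stack([t * np.exp(p[0] * t), np.ones_like(t)])
    print(levenberg_marquardt(res, jac, np.array([0.5, 0.0])))  # -> ~[1.3, 0.4]

Note how every change of λ above triggers a fresh linear solve; storing and recycling a Krylov basis across λ values is exactly the cost the paper's method removes.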
Multi-phase models for water and thermal management of proton exchange membrane fuel cell: A review
NASA Astrophysics Data System (ADS)
Zhang, Guobin; Jiao, Kui
2018-07-01
The 3D (three-dimensional) multi-phase CFD (computational fluid dynamics) model is widely utilized in optimizing the water and thermal management of PEM (proton exchange membrane) fuel cells. However, a satisfactory 3D multi-phase CFD model that can simulate the detailed gas-liquid two-phase flow in channels and precisely reflect its effect on performance has yet to be developed, owing to coupling difficulties and computational cost. Meanwhile, the agglomerate model of the CL (catalyst layer) should also be added to 3D CFD models so as to better reflect the concentration loss and optimize CL structure at the macroscopic scale. Besides, the effect of thermal management is perhaps underestimated in current 3D multi-phase CFD simulations due to the lack of coolant channels in the computational domain and the use of constant-temperature boundary conditions. Therefore, 3D CFD simulations at the cell and stack levels with convection boundary conditions are suggested to simulate water and thermal management more accurately. Nevertheless, with the rapid development of PEM fuel cells, current 3D CFD simulations fall short of practical demands, especially at high current density, at low to zero humidity, and for recently developed novel designs such as metal foam flow fields, 3D fine mesh flow fields, and anode circulation.
NASA Astrophysics Data System (ADS)
Weiss, Chester J.
2013-08-01
An essential element for computational hypothesis testing, data inversion and experiment design for electromagnetic geophysics is a robust forward solver, capable of easily and quickly evaluating the electromagnetic response of arbitrary geologic structure. The usefulness of such a solver hinges on the balance among competing desires like ease of use, speed of forward calculation, scalability to large problems or compute clusters, parsimonious use of memory access, accuracy and, by necessity, the ability to faithfully accommodate a broad range of geologic scenarios over extremes in length scale and frequency content. This is indeed a tall order. The present study addresses recent progress toward the development of a forward solver with these properties. Based on the Lorenz-gauged Helmholtz decomposition, a new finite volume solution over Cartesian model domains endowed with complex-valued electrical properties is shown to be stable over the frequency range 10^-2 to 10^10 Hz and the length-scale range 10^-3 to 10^5 m. Benchmark examples are drawn from magnetotellurics, exploration geophysics, geotechnical mapping and laboratory-scale analysis, showing excellent agreement with reference analytic solutions. Computational efficiency is achieved through use of a matrix-free implementation of the quasi-minimum-residual (QMR) iterative solver, which eliminates explicit storage of finite volume matrix elements in favor of "on the fly" computation as needed by the iterative Krylov sequence. Further efficiency is achieved through sparse coupling matrices between the vector and scalar potentials, whose non-zero elements arise only in those parts of the model domain where the conductivity gradient is non-zero. Multi-thread parallelization in the QMR solver through OpenMP pragmas is used to reduce the computational cost of its most expensive step: the single matrix-vector product at each iteration. High-level MPI communicators farm independent processes to available compute nodes for simultaneous computation of multi-frequency or multi-transmitter responses.
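The matrix-free idea above, never storing the system matrix and computing A·v on the fly inside the Krylov iteration, can be illustrated with SciPy's QMR solver and a LinearOperator. The 1D complex-valued Helmholtz-style operator below is a toy stand-in for the finite volume system, not the paper's actual discretization; all names are illustrative.

    import numpy as np
    from scipy.sparse.linalg import LinearOperator, qmr

    n = 2000
    k2 = 0.5 + 0.1j                      # complex "material" term (toy)

    def matvec(v):
        """Apply a 1D Helmholtz-like operator -d2/dx2 + k2 on the fly,
        without ever assembling the matrix."""
        av = np.empty_like(v, dtype=complex)
        av[1:-1] = -(v[2:] - 2 * v[1:-1] + v[:-2]) + k2 * v[1:-1]
        av[0] = 2 * v[0] - v[1] + k2 * v[0]     # simple fixed-value ends
        av[-1] = 2 * v[-1] - v[-2] + k2 * v[-1]
        return av

    # The toy operator is symmetric, so A^T v = A v and rmatvec reuses matvec.
    A = LinearOperator((n, n), matvec=matvec, rmatvec=matvec, dtype=complex)
    b = np.ones(n, dtype=complex)
    x, info = qmr(A, b)
    print(info, np.linalg.norm(matvec(x) - b))   # info == 0 on convergence

In the paper's setting the single matvec per iteration dominates the cost, which is why threading that one product with OpenMP pragmas pays off directly.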
Multi-scale diffuse interface modeling of multi-component two-phase flow with partial miscibility
NASA Astrophysics Data System (ADS)
Kou, Jisheng; Sun, Shuyu
2016-08-01
In this paper, we introduce a diffuse interface model to simulate multi-component two-phase flow with partial miscibility based on a realistic equation of state (e.g. the Peng-Robinson equation of state). Because of partial miscibility, thermodynamic relations are used to model not only interfacial properties but also bulk properties, including density, composition, pressure, and realistic viscosity. To the best of our knowledge, this is the first use of equation-of-state-based diffuse interface modeling for multi-component two-phase flow with partial miscibility. In numerical simulation, the key issue is to resolve the high contrast of scales from the microscopic interface composition to the macroscale bulk fluid motion, since the interface is only nanometers thick. To efficiently solve this challenging problem, we develop a multi-scale simulation method. At the microscopic scale, we deduce a reduced interfacial equation under reasonable assumptions, and then propose a formulation of capillary pressure that is consistent with the macroscale flow equations. Moreover, we show that the Young-Laplace equation is an approximation of this capillarity formulation, and that the formulation is also consistent with the concept of the Tolman length, which is a correction to the Young-Laplace equation. At the macroscopic scale, the interfaces are treated as discontinuous surfaces separating two phases of fluids. Our approach differs from the conventional sharp-interface two-phase flow model in that we use the capillary pressure directly, instead of a combination of surface tension and the Young-Laplace equation, because capillarity can be calculated from our proposed capillarity formulation. A compatible condition is also derived for the pressure in the flow equations. Furthermore, based on the proposed capillarity formulation, we design an efficient numerical method for directly computing the capillary pressure between two fluids composed of multiple components. Finally, numerical tests are carried out to verify the effectiveness of the proposed multi-scale method.
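For orientation, the two relations named above can be written explicitly. In spherical geometry the Young-Laplace equation and its Tolman-length correction take the standard forms below (textbook expressions, not copied from the paper):

    \[
      p_c \;=\; p_{\text{in}} - p_{\text{out}} \;=\; \frac{2\sigma}{r}
      \qquad \text{(Young-Laplace)},
    \]
    \[
      \sigma(r) \;=\; \frac{\sigma_\infty}{1 + 2\delta/r}
      \;\approx\; \sigma_\infty\left(1 - \frac{2\delta}{r}\right)
      \qquad \text{(Tolman correction, } \delta \text{ = Tolman length)},
    \]

so the curvature-corrected capillary pressure behaves as p_c ≈ (2σ_∞/r)(1 - 2δ/r) for r ≫ δ. The paper's capillarity formulation reduces to the first relation in the appropriate limit and remains consistent with the second at small radii.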
Multi-scale diffuse interface modeling of multi-component two-phase flow with partial miscibility
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kou, Jisheng; Sun, Shuyu, E-mail: shuyu.sun@kaust.edu.sa; School of Mathematics and Statistics, Xi'an Jiaotong University, Xi'an 710049
2016-08-01
In this paper, we introduce a diffuse interface model to simulate multi-component two-phase flow with partial miscibility based on a realistic equation of state (e.g. the Peng–Robinson equation of state). Because of partial miscibility, thermodynamic relations are used to model not only interfacial properties but also bulk properties, including density, composition, pressure, and realistic viscosity. To our knowledge, this is the first use of diffuse interface modeling based on an equation of state for multi-component two-phase flow with partial miscibility. In numerical simulation, the key issue is to resolve the high contrast of scales from the microscopic interface composition to the macroscale bulk fluid motion, since the interface is only nanometers thick. To efficiently solve this challenging problem, we develop a multi-scale simulation method. At the microscopic scale, we deduce a reduced interfacial equation under reasonable assumptions, and then propose a formulation of capillary pressure that is consistent with the macroscale flow equations. Moreover, we show that the Young–Laplace equation is an approximation of this capillarity formulation, and that the formulation is also consistent with the concept of the Tolman length, which is a correction to the Young–Laplace equation. At the macroscopic scale, the interfaces are treated as discontinuous surfaces separating two phases of fluids. Our approach differs from conventional sharp-interface two-phase flow models in that we use the capillary pressure directly, instead of a combination of surface tension and the Young–Laplace equation, because capillarity can be calculated from our proposed capillarity formulation. A compatible condition is also derived for the pressure in the flow equations. Furthermore, based on the proposed capillarity formulation, we design an efficient numerical method for directly computing the capillary pressure between two fluids composed of multiple components. Finally, numerical tests are carried out to verify the effectiveness of the proposed multi-scale method.
OpenMP parallelization of a gridded SWAT (SWATG)
NASA Astrophysics Data System (ADS)
Zhang, Ying; Hou, Jinliang; Cao, Yongpan; Gu, Juan; Huang, Chunlin
2017-12-01
Large-scale, long-term and high-spatial-resolution simulation is a common challenge in environmental modeling. SWATG, a gridded, Hydrologic Response Unit (HRU)-based version of the Soil and Water Assessment Tool that integrates a grid modeling scheme with different spatial representations, faces this challenge acutely: its computational cost limits applications of very-high-resolution, large-scale watershed modeling. The OpenMP (Open Multi-Processing) parallel application programming interface is integrated with SWATG (the result is called SWATGP) to accelerate grid modeling at the HRU level. This parallel implementation takes better advantage of the computational power of shared-memory computer systems. We conducted two experiments at multiple temporal and spatial scales of hydrological modeling using SWATG and SWATGP on a high-end server. At 500-m resolution, SWATGP was found to be up to nine times faster than SWATG in modeling a roughly 2000 km2 watershed using a single CPU with 15 threads. The study results demonstrate that parallel models save considerable time relative to traditional sequential simulation runs. Parallel computation of environmental models is beneficial for model applications, especially at large spatial and temporal scales and at high resolutions. The proposed SWATGP model is thus a promising tool for large-scale and high-resolution water resources research and management, in addition to offering data fusion and model coupling capability.
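The loop-level parallelism described here, independent per-HRU computations within each time step distributed across OpenMP threads, can be imitated in Python with Numba's prange, which compiles to OpenMP-style multithreaded loops; the toy water balance below is an invented placeholder, not SWAT's actual process equations.

```python
import numpy as np
from numba import njit, prange

@njit(parallel=True)
def step_hrus(storage, precip, et, runoff_coef):
    """One model time step: each gridded HRU is independent within
    the step, so the loop parallelizes like an OpenMP 'parallel for'."""
    n = storage.shape[0]
    runoff = np.empty(n)
    for i in prange(n):  # iterations distributed across threads
        water = storage[i] + precip[i] - et[i]   # toy water balance
        runoff[i] = max(water, 0.0) * runoff_coef
        storage[i] = water - runoff[i]
    return runoff

# ~1 million HRUs, as in a high-resolution, large-scale watershed
n = 1_000_000
storage = np.full(n, 10.0)
runoff = step_hrus(storage, np.random.rand(n), np.random.rand(n) * 0.5, 0.3)
print(runoff[:3])
```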
A Physiologically Based, Multi-Scale Model of Skeletal Muscle Structure and Function
Röhrle, O.; Davidson, J. B.; Pullan, A. J.
2012-01-01
Models of skeletal muscle can be classified as phenomenological or biophysical. Phenomenological models predict the muscle’s response to a specified input based on experimental measurements. Prominent phenomenological models are the Hill-type muscle models, which have been incorporated into rigid-body modeling frameworks, and three-dimensional continuum-mechanical models. Biophysically based models attempt to predict the muscle’s response as emerging from the underlying physiology of the system. In this contribution, the conventional biophysically based modeling methodology is extended to include several structural and functional characteristics of skeletal muscle. The result is a physiologically based, multi-scale skeletal muscle finite element model that is capable of representing detailed, geometrical descriptions of skeletal muscle fibers and their grouping. Together with a well-established model of motor-unit recruitment, the electro-physiological behavior of single muscle fibers within motor units is computed and linked to a continuum-mechanical constitutive law. The bridging between the cellular level and the organ level has been achieved via a multi-scale constitutive law and homogenization. The effect of homogenization has been investigated by varying the number of embedded skeletal muscle fibers and/or motor units and computing the resulting exerted muscle forces while applying the same excitatory input. All simulations were conducted using an anatomically realistic finite element model of the tibialis anterior muscle. Given the fact that the underlying electro-physiological cellular muscle model is capable of modeling metabolic fatigue effects such as potassium accumulation in the T-tubular space and inorganic phosphate build-up, the proposed framework provides a novel simulation-based way to investigate muscle behavior ranging from motor-unit recruitment to force generation and fatigue. PMID:22993509
Human connectome module pattern detection using a new multi-graph MinMax cut model.
De, Wang; Wang, Yang; Nie, Feiping; Yan, Jingwen; Cai, Weidong; Saykin, Andrew J; Shen, Li; Huang, Heng
2014-01-01
Many recent scientific efforts have been devoted to constructing the human connectome using Diffusion Tensor Imaging (DTI) data, for understanding the large-scale brain networks that underlie higher-level cognition in humans. However, suitable computational network analysis tools are still lacking in human connectome research. To address this problem, we propose a novel multi-graph min-max cut model to detect the consistent network modules from the brain connectivity networks of all studied subjects. A new multi-graph MinMax cut model is introduced to solve this challenging computational neuroscience problem, and an efficient optimization algorithm is derived. In the identified connectome module patterns, each network module shows similar connectivity patterns in all subjects, potentially associating with specific brain functions shared by all subjects. We validate our method by analyzing weighted fiber connectivity networks. The promising empirical results demonstrate the effectiveness of our method.
Multi-fluid Dynamics for Supersonic Jet-and-Crossflows and Liquid Plug Rupture
NASA Astrophysics Data System (ADS)
Hassan, Ezeldin A.
Multi-fluid dynamics simulations require appropriate numerical treatments based on the main flow characteristics, such as flow speed, turbulence, thermodynamic state, and time and length scales. In this thesis, two distinct problems are investigated: supersonic jet and crossflow interactions, and liquid plug propagation and rupture in an airway. The simulation of a gaseous, non-reactive ethylene jet in an air crossflow represents essential physics for fuel injection in SCRAMJET engines. The regime is highly unsteady, involving shocks, turbulent mixing, and large-scale vortical structures. An eddy-viscosity-based multi-scale turbulence model is proposed to resolve turbulent structures consistent with grid resolution and turbulence length scales. Predictions of the time-averaged fuel concentration from the multi-scale model are improved over Reynolds-averaged Navier-Stokes models originally derived for stationary flow. The benefit of the multi-scale model alone is, however, limited in cases where the vortical structures are small and scattered, which would require prohibitively expensive grids to resolve the flow field accurately. Statistical information related to turbulent fluctuations is therefore utilized to estimate an effective turbulent Schmidt number, which is shown to vary strongly in space. Accordingly, an adaptive turbulent Schmidt number approach is proposed, allowing the resolved field to adaptively influence the value of the turbulent Schmidt number in the multi-scale turbulence model. The proposed model estimates a time-averaged turbulent Schmidt number adapted to the computed flowfield, instead of the constant value common to eddy-viscosity-based Navier-Stokes models. This approach is assessed using a grid-refinement study for the normal-injection case, and tested with 30-degree injection, showing improved results over the constant turbulent Schmidt number model in both the mean and variance of fuel concentration predictions. For the incompressible liquid plug propagation and rupture study, numerical simulations are conducted using an Eulerian-Lagrangian approach with a continuous-interface method. A reconstruction scheme is developed to allow topological changes during plug rupture by altering the connectivity information of the interface mesh. Rupture time is shown to be delayed as the initial precursor film thickness increases. During the plug rupture process, a sudden increase of mechanical stresses on the tube wall is recorded, which can cause tissue damage.
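The quantity being adapted, the turbulent Schmidt number, enters through the standard gradient-diffusion closure for the turbulent scalar flux (a textbook relation, given here for context rather than taken from the thesis):

```latex
\overline{u_i' c'} = -\frac{\nu_t}{Sc_t}\,\frac{\partial \bar{C}}{\partial x_i},
\qquad Sc_t \equiv \frac{\nu_t}{D_t},
```

where ν_t is the eddy viscosity and D_t the eddy diffusivity; the adaptive approach lets Sc_t vary in space and time, informed by the resolved fluctuations, instead of fixing a single constant.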
A Multi-Scale Energy Food Systems Modeling Framework For Climate Adaptation
NASA Astrophysics Data System (ADS)
Siddiqui, S.; Bakker, C.; Zaitchik, B. F.; Hobbs, B. F.; Broaddus, E.; Neff, R.; Haskett, J.; Parker, C.
2016-12-01
Our goal is to understand coupled system dynamics across scales in a manner that allows us to quantify the sensitivity of critical human outcomes (nutritional satisfaction, household economic well-being) to development strategies and to climate- or market-induced shocks in sub-Saharan Africa. We adopt both bottom-up and top-down multi-scale modeling approaches, focusing our efforts on food, energy, water (FEW) dynamics to define, parameterize, and evaluate modeled processes nationally as well as across climate zones and communities. Our framework comprises three complementary modeling techniques spanning local, sub-national and national scales to capture interdependencies between sectors, across time scales, and on multiple levels of geographic aggregation. At its center is a multi-player micro-economic (MME) partial equilibrium model for the production, consumption, storage, and transportation of food, energy, and fuels, which is the focus of this presentation. We show why such models can be very useful for linking and integrating across time and spatial scales and across a wide variety of models, including an agent-based model applied to rural villages and larger population centers, an optimization-based electricity infrastructure model at regional scale, and a computable general equilibrium model applied to understand FEW resources and economic patterns at national scale. First, the MME is based on aggregating individual optimization problems for relevant players in an energy, electricity, or food market, and captures important food supply chain components of trade and food distribution, accounting for infrastructure and geography. Second, the model considers food access and utilization by modeling food waste and disaggregating consumption by income and age. Third, the model is set up to evaluate the effects of seasonality and system shocks on supply, demand, infrastructure, and transportation in both energy and food.
Multi-omics and metabolic modelling pipelines: challenges and tools for systems microbiology.
Fondi, Marco; Liò, Pietro
2015-02-01
Integrated -omics approaches are quickly spreading across microbiology research labs, leading to (i) the possibility of detecting previously hidden features of microbial cells, like multi-scale spatial organization, and (ii) tracing molecular components across multiple cellular functional states. This promises to reduce the knowledge gap between genotype and phenotype and poses new challenges for computational microbiologists. We underline how the capability to unravel the complexity of microbial life will strongly depend on the integration of the huge and diverse amount of information that can be derived today from -omics experiments. In this work, we present opportunities and challenges of multi-omics data integration in current systems biology pipelines. We discuss which layers of biological information are important for biotechnological and clinical purposes, with a special focus on bacterial metabolism and modelling procedures. A general review of the most recent computational tools for performing large-scale dataset integration is also presented, together with a possible framework to guide the design of systems biology experiments by microbiologists. Copyright © 2015. Published by Elsevier GmbH.
NASA Astrophysics Data System (ADS)
Zhang, Jingwen; Wang, Xu; Liu, Pan; Lei, Xiaohui; Li, Zejun; Gong, Wei; Duan, Qingyun; Wang, Hao
2017-01-01
The optimization of large-scale reservoir systems is time-consuming due to the intrinsic characteristics of non-commensurable objectives and high dimensionality. One way to address the problem is to employ an efficient multi-objective optimization algorithm in the derivation of large-scale reservoir operating rules. In this study, the Weighted Multi-Objective Adaptive Surrogate Model Optimization (WMO-ASMO) algorithm is used. It consists of three steps: (1) simplifying the large-scale reservoir operating rules with an aggregation-decomposition model, (2) identifying the most sensitive parameters through multivariate adaptive regression splines (MARS) for dimensional reduction, and (3) reducing computational cost and speeding up the search with WMO-ASMO, embedded with the weighted non-dominated sorting genetic algorithm II (WNSGAII). The non-dominated sorting genetic algorithm (NSGAII), WNSGAII and WMO-ASMO are intercompared on the large-scale reservoir system of the Xijiang river basin in China. Results indicate that: (1) WNSGAII surpasses NSGAII in the median of annual power generation, increased by 1.03% (from 523.29 to 528.67 billion kW h), and in the median of the ecological index, improved by 3.87% (from 1.879 to 1.809), with 500 simulations, because of the weighted crowding distance, and (2) WMO-ASMO outperforms NSGAII and WNSGAII in terms of better solutions (annual power generation of 530.032 billion kW h and ecological index of 1.675) with 1000 simulations, with computational time reduced by 20% (from 10 h to 8 h) with 500 simulations. The proposed method is therefore shown to be more efficient and to provide a better Pareto frontier.
NASA Astrophysics Data System (ADS)
Verma, Aman; Mahesh, Krishnan
2012-08-01
The dynamic Lagrangian averaging approach for the dynamic Smagorinsky model for large eddy simulation is extended to an unstructured grid framework and applied to complex flows. The time scale used in the standard Lagrangian model contains an adjustable parameter θ. Here, the Lagrangian time scale is instead computed dynamically from the solution, based on a "surrogate correlation" of the Germano-identity error (GIE), and requires no adjustable parameter. Also, a simple material-derivative relation is used to approximate the GIE at different events along a pathline, instead of Lagrangian tracking or multi-linear interpolation. Previously, the time scale for homogeneous flows was computed by averaging along directions of homogeneity; the present work proposes modifications for inhomogeneous flows. This development allows the Lagrangian averaged dynamic model to be applied to inhomogeneous flows without any adjustable parameter. The proposed model is applied to LES of turbulent channel flow on unstructured zonal grids at various Reynolds numbers. Improvement is observed when compared to other averaging procedures for the dynamic Smagorinsky model, especially at coarse resolutions. The model is also applied to flow over a cylinder at two Reynolds numbers, and good agreement with previous computations and experiments is obtained. Noticeable improvement over the standard Lagrangian model is obtained, and is attributed to a physically consistent Lagrangian time scale. The model also shows good performance when applied to flow past a marine propeller in an off-design condition; it regularizes the eddy viscosity and adjusts locally to the dominant flow features.
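For reference, in the standard Lagrangian-averaged dynamic model (Meneveau, Lund & Cabot, 1996) the Smagorinsky coefficient is the ratio of two pathline-relaxed averages of the Germano-identity terms; θ below is the adjustable constant that the abstract's dynamic time scale eliminates (standard form quoted from the literature, not from the paper itself):

```latex
C_s^2 = \frac{\mathcal{I}_{LM}}{\mathcal{I}_{MM}}, \qquad
\frac{\partial \mathcal{I}_{LM}}{\partial t}
  + \bar{u}_j\,\frac{\partial \mathcal{I}_{LM}}{\partial x_j}
  = \frac{1}{T}\left(L_{ij}M_{ij} - \mathcal{I}_{LM}\right), \qquad
T = \theta\,\bar{\Delta}\,(\mathcal{I}_{LM}\,\mathcal{I}_{MM})^{-1/8},
```

with an analogous relaxation equation for the denominator average.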
Ji, Zhiwei; Su, Jing; Wu, Dan; Peng, Huiming; Zhao, Weiling; Nlong Zhao, Brian; Zhou, Xiaobo
2017-01-31
Multiple myeloma is a malignant, still-incurable plasma cell disorder, owing to refractory disease relapse, immune impairment, and the development of multi-drug resistance. The growth of malignant plasma cells depends on the bone marrow (BM) microenvironment and on evasion of the host's anti-tumor immune response. Hence, we hypothesized that targeting the tumor-stromal cell interaction and the endogenous immune system in the BM will potentially improve the treatment response of multiple myeloma (MM). We therefore propose a computational simulation of myeloma development in its complex microenvironment, which includes immune cell components and bone marrow stromal cells, and predict the effects of combined multi-drug treatment on myeloma cell growth. We constructed a hybrid multi-scale agent-based model (HABM) that combines an ODE system with an agent-based model (ABM). The ODE system models the dynamic changes of intracellular signal transduction, and the ABM models the cell-cell interactions between stromal cells, tumor cells, and immune components in the BM. The model simulates myeloma growth in the bone marrow microenvironment and reveals the important role of the immune system in this process. The predicted outcomes were consistent with experimental observations from previous studies. Moreover, we applied the model to predict the treatment effects of three key therapeutic drugs used for MM, and found that their combination potentially suppresses the growth of myeloma cells and reactivates the immune response. In summary, the proposed model may serve as a novel computational platform for simulating the formation of MM and evaluating its treatment response to multiple drugs.
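A hybrid multi-scale coupling of this kind can be sketched as follows: each agent carries an intracellular ODE state that is integrated over the agent-level time step, and the resulting signal feeds discrete cell-level rules. All equations, rates, and thresholds below are invented placeholders for illustration, not the published HABM.

```python
import numpy as np
from scipy.integrate import solve_ivp

rng = np.random.default_rng(0)

def intracellular_odes(t, y, stimulus):
    """Toy two-species signaling ODE driven by a microenvironment stimulus."""
    s, p = y  # signal, proliferation factor
    return [stimulus - 0.5 * s, 0.8 * s - 0.3 * p]

class TumorCell:
    def __init__(self):
        self.y = np.array([0.0, 0.0])  # intracellular state

    def step(self, dt, stimulus):
        # Micro scale: integrate the ODEs across one ABM time step.
        sol = solve_ivp(intracellular_odes, (0.0, dt), self.y, args=(stimulus,))
        self.y = sol.y[:, -1]
        # Cell scale: divide if the proliferation factor crosses a threshold.
        return self.y[1] > 1.0 and rng.random() < 0.3

cells, dt = [TumorCell()], 1.0
for t in range(20):
    stimulus = 1.0 / (1.0 + 0.01 * len(cells))  # crowding reduces stromal support
    cells += [TumorCell() for c in list(cells) if c.step(dt, stimulus)]
print("cells after 20 steps:", len(cells))
```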
Computational Materials: Modeling and Simulation of Nanostructured Materials and Systems
NASA Technical Reports Server (NTRS)
Gates, Thomas S.; Hinkley, Jeffrey A.
2003-01-01
The paper provides details on the structure and implementation of the Computational Materials program at the NASA Langley Research Center. Examples are given that illustrate the suggested approaches to predicting the behavior and influencing the design of nanostructured materials such as high-performance polymers, composites, and nanotube-reinforced polymers. Primary simulation and measurement methods applicable to multi-scale modeling are outlined. Key challenges including verification and validation of models are highlighted and discussed within the context of NASA's broad mission objectives.
Sibole, Scott C.; Erdemir, Ahmet
2012-01-01
Cells of the musculoskeletal system are known to respond to mechanical loading, and chondrocytes within the cartilage are no exception. However, understanding how joint-level loads relate to cell-level deformations, e.g. in the cartilage, is not a straightforward task. In this study, a multi-scale analysis pipeline was implemented to post-process the results of a macro-scale finite element (FE) tibiofemoral joint model and provide joint-mechanics-based displacement boundary conditions to micro-scale cellular FE models of the cartilage, for the purpose of characterizing chondrocyte deformations in relation to tibiofemoral joint loading. It was possible to identify the load distribution within the knee among its tissue structures, and ultimately within the cartilage among its extracellular matrix, pericellular environment and resident chondrocytes. Various cellular deformation metrics (aspect ratio change, volumetric strain, cellular effective strain and maximum shear strain) were calculated. To illustrate further utility of this multi-scale modeling pipeline, two micro-scale cartilage constructs were considered: an idealized single cell at the centroid of a 100×100×100 μm block, commonly used in past research studies, and an anatomically based representation of the middle zone of tibiofemoral cartilage (an 11-cell model of the same volume). In both cases, chondrocytes experienced amplified deformations compared to those at the macro-scale, predicted by simulating one body weight of compressive loading on the tibiofemoral joint. In the 11-cell case, all cells experienced less deformation than in the single-cell case, and deformation varied considerably among cells residing in the same block. The coupling method proved to be highly scalable due to micro-scale model independence, which allowed exploitation of distributed-memory computing architectures. The method's generalized nature also allows substitution of any macro-scale and/or micro-scale model, providing application to other multi-scale continuum mechanics problems. PMID:22649535
NASA Astrophysics Data System (ADS)
Hussein, Rafid M.; Chandrashekhara, K.
2017-11-01
A multi-scale modeling approach is presented to simulate and validate the thermo-oxidation shrinkage and cracking damage of a high-temperature polymer composite. The approach couples transient diffusion-reaction and static structural analyses from the macro- to the micro-scale. The micro-scale shrinkage deformation and cracking damage are simulated and validated using 2D and 3D simulations. Localized shrinkage displacement boundary conditions for the micro-scale simulations are determined from the respective meso- and macro-scale simulations, conducted for a cross-ply laminate. The meso-scale geometrical domain and the micro-scale geometry and mesh are developed using the object-oriented finite element (OOF) software. The macro-scale shrinkage and weight loss are measured using unidirectional coupons and used to build the macro-shrinkage model. The cross-ply coupons are used to validate the macro-shrinkage model against shrinkage profiles acquired from scanning electron images at the cracked surface. The macro-shrinkage model deformation shows a discrepancy when the micro-scale, image-based cracking is computed; the discrepancy is minimized when the local maximum shrinkage strain is assumed to be 13 times the maximum macro-shrinkage strain of 2.5 × 10^-5. The microcrack damage of the composite is modeled using a static elastic analysis with extended finite elements and cohesive surfaces, accounting for the spatial evolution of the modulus. The 3D shrinkage displacements are fed to the model using node-wise boundary/domain conditions of the respective oxidized region. The simulated microcrack length, meander, and opening closely match the crack in the area of interest in the scanning electron images.
NASA Astrophysics Data System (ADS)
Fonseca, R. A.; Vieira, J.; Fiuza, F.; Davidson, A.; Tsung, F. S.; Mori, W. B.; Silva, L. O.
2013-12-01
A new generation of laser wakefield accelerators (LWFA), supported by the extreme accelerating fields generated in the interaction of PW-class lasers and underdense targets, promises the production of high-quality electron beams in short distances for multiple applications. Achieving this goal will rely heavily on numerical modelling to further understand the underlying physics and identify optimal regimes, but large-scale modelling of these scenarios is computationally heavy and requires the efficient use of state-of-the-art petascale supercomputing systems. We discuss the main difficulties involved in running these simulations and the new developments implemented in the OSIRIS framework to address these issues, ranging from multi-dimensional dynamic load balancing and hybrid distributed/shared memory parallelism to the vectorization of the PIC algorithm. We present the results of the OASCR Joule Metric program on the issue of large-scale modelling of LWFA, demonstrating speedups of over one order of magnitude on the same hardware. Finally, scalability to over ~10^6 cores and sustained performance over ~2 PFlops is demonstrated, opening the way for large-scale modelling of LWFA scenarios.
NASA Astrophysics Data System (ADS)
Kim, S. C.; Hayter, E. J.; Pruhs, R.; Luong, P.; Lackey, T. C.
2016-12-01
The geophysical-scale circulation of the Mid-Atlantic Bight and hydrologic inputs from adjacent Chesapeake Bay watersheds and tributaries influence the hydrodynamics and transport of the James River estuary. Both barotropic and baroclinic transport govern the hydrodynamics of this partially stratified estuary. Modeling the placement of dredged sediment requires accommodating this wide spectrum of atmospheric and hydrodynamic scales. The Geophysical Scale Multi-Block (GSMB) Transport Modeling System is a collection of multiple well-established and U.S. Army Corps of Engineers (USACE)-approved process models. Taking advantage of the parallel computing capability of multi-block modeling, we performed a one-year, three-dimensional simulation of hydrodynamics to support modeling of dredged sediment placement, transport, and morphology change. Model forcing includes spatially and temporally varying meteorological conditions and hydrological inputs from the watershed. Surface heat flux estimates were derived from the National Solar Radiation Database (NSRDB). The open-water boundary condition for water level was obtained from an ADCIRC model application for the U.S. East Coast. Temperature-salinity boundary conditions were obtained from the Environmental Protection Agency (EPA) Chesapeake Bay Program (CBP) long-term monitoring station database. Simulated water levels were calibrated and verified by comparison with National Oceanic and Atmospheric Administration (NOAA) tide gage data. A harmonic analysis of the modeled tides was performed and compared with NOAA tide prediction data. In addition, project-specific circulation was verified using USACE drogue data. Salinity and temperature transport was verified at seven CBP long-term monitoring stations along the navigation channel. Simulation and analysis of model results suggest that GSMB is capable of resolving the long-duration, multi-scale processes inherent to practical engineering problems such as dredged material placement stability.
Asynchronous adaptive time step in quantitative cellular automata modeling
Zhu, Hao; Pang, Peter YH; Sun, Yan; Dhar, Pawan
2004-01-01
Background: The behaviors of cells in metazoans are context dependent, so large-scale multi-cellular modeling is often necessary, for which cellular automata are natural candidates. Two related issues are involved in cellular automata based multi-cellular modeling: how to introduce differential equation based quantitative computing to precisely describe cellular activity, and, building on that, how to address the heavy time consumption of simulation. Results: Based on a modified, language-based cellular automata system that we extended to allow ordinary differential equations in models, we introduce a method implementing asynchronous, adaptive time steps in simulation that can considerably improve efficiency without a significant sacrifice of accuracy. An average speedup of 4-5× is achieved in the given example. Conclusions: Strategies for reducing time consumption in simulation are indispensable for large-scale, quantitative multi-cellular models, because even a small 100 × 100 × 100 tissue slab contains one million cells. Asynchronous, adaptive time stepping is a practical solution in a cellular automata environment. PMID:15222901
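The core idea, letting each cell advance on its own adaptive time step rather than forcing one global synchronous step, can be sketched with an event queue keyed by each cell's next update time; the error-based step control below is a generic heuristic for illustration, not the paper's specific scheme.

```python
import heapq

def asynchronous_ca(cells, t_end, tol=1e-3):
    """Each cell is (state, rate_fn); cells advance independently,
    choosing larger steps when their dynamics are slow."""
    queue = [(0.0, i, 0.1) for i in range(len(cells))]  # (next_time, cell, dt)
    heapq.heapify(queue)
    while queue:
        t, i, dt = heapq.heappop(queue)
        if t >= t_end:
            continue
        state, rate = cells[i]
        # One explicit Euler step plus a two-half-step estimate for error control.
        full = state + dt * rate(state)
        half = state + 0.5 * dt * rate(state)
        half = half + 0.5 * dt * rate(half)
        err = abs(full - half)
        if err > tol:
            dt *= 0.5                      # fast dynamics: refine the step
        else:
            state = half
            if err < 0.1 * tol:
                dt *= 2.0                  # quiescent cell: stretch the step
            cells[i] = (state, rate)
            t += dt
        heapq.heappush(queue, (t, i, dt))
    return [s for s, _ in cells]

# Two cells with very different time scales share one simulation cheaply.
cells = [(1.0, lambda s: -10.0 * s), (1.0, lambda s: -0.01 * s)]
print(asynchronous_ca(cells, t_end=1.0))
```

The fast-decaying cell is forced onto small steps while the slow one coasts on large ones, which is where the reported 4-5× speedup over a globally synchronous step comes from.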
Multi-scale finite element modeling allows the mechanics of amphibian neurulation to be elucidated
NASA Astrophysics Data System (ADS)
Chen, Xiaoguang; Brodland, G. Wayne
2008-03-01
The novel multi-scale computational approach introduced here makes possible a new means for testing hypotheses about the forces that drive specific morphogenetic movements. A 3D model based on this approach is used to investigate neurulation in the axolotl (Ambystoma mexicanum), a type of amphibian. The model is based on geometric data from 3D surface reconstructions of live embryos and from serial sections. Tissue properties are described by a system of cell-based constitutive equations, and parameters in the equations are determined from physical tests. The model includes the effects of Shroom-activated neural ridge reshaping and lamellipodium-driven convergent extension. A typical whole-embryo model consists of 10,239 elements, and running its 100 incremental time steps requires 2 days. The model shows that a normal phenotype does not result if lamellipodium forces are uniform across the width of the neural plate, but it can result if the lamellipodium forces decrease from a maximum value at the mid-sagittal plane to zero at the plate edge. Even the seemingly simple motions of neurulation are found to contain important features that would remain hidden were they not studied using an advanced computational model. The present model operates in a setting where data are extremely sparse, and an important outcome of the study is a better understanding of the role of computational models in such environments.
Multi-scale finite element modeling allows the mechanics of amphibian neurulation to be elucidated.
Chen, Xiaoguang; Brodland, G Wayne
2008-04-11
The novel multi-scale computational approach introduced here makes possible a new means for testing hypotheses about the forces that drive specific morphogenetic movements. A 3D model based on this approach is used to investigate neurulation in the axolotl (Ambystoma mexicanum), a type of amphibian. The model is based on geometric data from 3D surface reconstructions of live embryos and from serial sections. Tissue properties are described by a system of cell-based constitutive equations, and parameters in the equations are determined from physical tests. The model includes the effects of Shroom-activated neural ridge reshaping and lamellipodium-driven convergent extension. A typical whole-embryo model consists of 10,239 elements, and running its 100 incremental time steps requires 2 days. The model shows that a normal phenotype does not result if lamellipodium forces are uniform across the width of the neural plate, but it can result if the lamellipodium forces decrease from a maximum value at the mid-sagittal plane to zero at the plate edge. Even the seemingly simple motions of neurulation are found to contain important features that would remain hidden were they not studied using an advanced computational model. The present model operates in a setting where data are extremely sparse, and an important outcome of the study is a better understanding of the role of computational models in such environments.
Simulation of Left Atrial Function Using a Multi-Scale Model of the Cardiovascular System
Pironet, Antoine; Dauby, Pierre C.; Paeme, Sabine; Kosta, Sarah; Chase, J. Geoffrey; Desaive, Thomas
2013-01-01
During a full cardiac cycle, the left atrium successively behaves as a reservoir, a conduit and a pump. This complex behavior makes it unrealistic to apply the time-varying elastance theory to characterize the left atrium, first, because this theory has known limitations, and second, because it is still uncertain whether the load-independence hypothesis holds. In this study, we aim to bypass this uncertainty by relying on another kind of mathematical model of the cardiac chambers. In the present work, we describe both the left atrium and the left ventricle with a multi-scale model. The multi-scale property of this model comes from the fact that pressure inside a cardiac chamber is derived from a model of sarcomere behavior. Macroscopic model parameters are identified from reference dog hemodynamic data. The multi-scale model of the cardiovascular system including the left atrium is then simulated to show that the physiological roles of the left atrium are correctly reproduced. These include a biphasic pressure wave and an eight-shaped pressure-volume loop. We also test the validity of our model in non-basal conditions by reproducing a preload reduction experiment by inferior vena cava occlusion with the model. We compute the variation of eight indices before and after this experiment and obtain the same variation as experimentally observed for seven out of the eight indices. In summary, the multi-scale mathematical model presented in this work is able to correctly account for the three roles of the left atrium and also exhibits a realistic left atrial pressure-volume loop. Furthermore, the model has been previously presented and validated for the left ventricle. This makes it a proper alternative to the time-varying elastance theory if the focus is set on precisely representing the left atrial and left ventricular behaviors. PMID:23755183
Multi-GPU accelerated three-dimensional FDTD method for electromagnetic simulation.
Nagaoka, Tomoaki; Watanabe, Soichi
2011-01-01
Numerical simulation with a numerical human model using the finite-difference time-domain (FDTD) method has recently been performed in a number of fields in biomedical engineering. To improve the method's calculation speed and enable large-scale computing with the numerical human model, we adapted three-dimensional FDTD code to a multi-GPU environment using the Compute Unified Device Architecture (CUDA). In this study, we used NVIDIA Tesla C2070 boards as the GPGPUs. Multi-GPU performance is evaluated in comparison with that of a single GPU and a vector supercomputer. The calculation speed with four GPUs was approximately 3.5 times faster than with a single GPU, and slightly (approx. 1.3 times) slower than with the supercomputer. The calculation speed of the three-dimensional FDTD method using GPUs can improve significantly as the number of GPUs increases.
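The FDTD update at the heart of such codes is a leapfrog exchange between electric and magnetic fields; a minimal 1D free-space version (normalized units, illustrative only, with none of the paper's human-model material maps or GPU decomposition) looks like this:

```python
import numpy as np

nx, nt = 400, 600
ez = np.zeros(nx)      # electric field
hy = np.zeros(nx - 1)  # magnetic field, staggered half a cell
c = 0.5                # Courant number (<= 1 for 1D stability)

for n in range(nt):
    # Leapfrog updates: H from the curl of E, then E from the curl of H.
    hy += c * (ez[1:] - ez[:-1])
    ez[1:-1] += c * (hy[1:] - hy[:-1])
    ez[nx // 4] += np.exp(-((n - 40) / 12.0) ** 2)  # soft Gaussian source

print("peak |Ez| after propagation:", np.abs(ez).max())
```

In a multi-GPU version, each device owns a contiguous slab of the grid and exchanges one layer of boundary field values with its neighbors every time step, which is why the reported scaling falls short of linear.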
A multi-scale model for geared transmission aero-thermodynamics
NASA Astrophysics Data System (ADS)
McIntyre, Sean M.
A multi-scale, multi-physics computational tool for the simulation of high-performance gearbox aero-thermodynamics was developed and applied to equilibrium and pathological loss-of-lubrication performance simulation. The physical processes at play in these systems include multiphase compressible flow of the air and lubricant within the gearbox, meshing kinematics and tribology, as well as heat transfer by conduction and by free and forced convection. These physics are coupled across their representative space and time scales in the computational framework developed in this dissertation. These scales span eight orders of magnitude, from the thermal response of the full gearbox O(10^0 m; 10^2 s), through effects at the tooth passage time scale O(10^-2 m; 10^-4 s), down to tribological effects on the meshing gear teeth O(10^-6 m; 10^-6 s). Direct numerical simulation of these coupled physics and scales is intractable. Accordingly, a scale-segregated simulation strategy was developed by partitioning and treating the contributing physical mechanisms as sub-problems, each with associated space and time scales and appropriate coupling mechanisms. These are: (1) the long time scale thermal response of the system, (2) the multiphase (air, droplets, and film) aerodynamic flow and convective heat transfer within the gearbox, (3) the high-frequency, time-periodic thermal effects of gear tooth heating while in mesh and its subsequent cooling through the rest of rotation, and (4) meshing effects including tribology and contact mechanics. The overarching goal of this dissertation was to develop software and analysis procedures for gearbox loss-of-lubrication performance. To accommodate these four physical effects and their coupling, each is treated in the CFD code as a sub-problem. These physics modules are coupled algorithmically. Specifically, the high-frequency conduction analysis derives its local heat transfer coefficient and near-wall air temperature boundary conditions from a quasi-steady cyclic-symmetric simulation of the internal flow. This high-frequency conduction solution is coupled directly with a model for the meshing friction, developed by a collaborator, which was adapted for use in a finite-volume CFD code. The local surface heat flux on solid surfaces is calculated by time-averaging the heat flux in the high-frequency analysis. This serves as a fixed-flux boundary condition in the long time scale conduction module. The temperature distribution from this long time scale heat transfer calculation serves as a boundary condition for the internal convection simulation, and as the initial condition for the high-frequency heat transfer module. Using this multi-scale model, simulations were performed for equilibrium and loss-of-lubrication operation of the NASA Glenn Research Center test stand. Results were compared with experimental measurements. In addition to the multi-scale model itself, several other specific contributions were made. Eulerian models for droplets and wall-films were developed and implemented in the CFD code. A novel approach to retaining liquid film on the solid surfaces, and strategies for its mass exchange with droplets, were developed and verified. Models for interfacial transfer between droplets and wall-film were implemented, and include the effects of droplet deposition, splashing, bouncing, as well as film breakup. These models were validated against airfoil data.
To mitigate the observed slow convergence of CFD simulations of the enclosed aerodynamic flows within gearboxes, Fourier stability analysis was applied to the SIMPLE-C fractional-step algorithm. From this, recommendations to accelerate the convergence rate through enhanced pressure-velocity coupling were made, and these were shown to be effective. A fast-running finite-volume reduced-order model of the gearbox aero-thermodynamics was developed and coupled with the tribology model to investigate the sensitivity of loss-of-lubrication predictions to various model and physical parameters. This sensitivity study was instrumental in guiding efforts toward improving the accuracy of the multi-scale model without undue increase in computational cost. In addition, the reduced-order model is now used extensively by a collaborator in tribology model development and testing. Experimental measurements of high-speed gear windage in partially and fully-shrouded configurations were performed to supplement the paucity of available validation data. This measurement program provided measurements of windage loss for a gear of design-relevant size and operating speed, as well as guidance for increasing the accuracy of future measurements.
Energy and time determine scaling in biological and computer designs
Bezerra, George; Edwards, Benjamin; Brown, James; Forrest, Stephanie
2016-01-01
Metabolic rate in animals and power consumption in computers are analogous quantities that scale similarly with size. We analyse vascular systems of mammals and on-chip networks of microprocessors, where natural selection and human engineering, respectively, have produced systems that minimize both energy dissipation and delivery times. Using a simple network model that simultaneously minimizes energy and time, our analysis explains empirically observed trends in the scaling of metabolic rate in mammals and power consumption and performance in microprocessors across several orders of magnitude in size. Just as the evolutionary transitions from unicellular to multicellular animals in biology are associated with shifts in metabolic scaling, our model suggests that the scaling of power and performance will change as computer designs transition to decentralized multi-core and distributed cyber-physical systems. More generally, a single energy–time minimization principle may govern the design of many complex systems that process energy, materials and information. This article is part of the themed issue ‘The major synthetic evolutionary transitions’. PMID:27431524
Energy and time determine scaling in biological and computer designs.
Moses, Melanie; Bezerra, George; Edwards, Benjamin; Brown, James; Forrest, Stephanie
2016-08-19
Metabolic rate in animals and power consumption in computers are analogous quantities that scale similarly with size. We analyse vascular systems of mammals and on-chip networks of microprocessors, where natural selection and human engineering, respectively, have produced systems that minimize both energy dissipation and delivery times. Using a simple network model that simultaneously minimizes energy and time, our analysis explains empirically observed trends in the scaling of metabolic rate in mammals and power consumption and performance in microprocessors across several orders of magnitude in size. Just as the evolutionary transitions from unicellular to multicellular animals in biology are associated with shifts in metabolic scaling, our model suggests that the scaling of power and performance will change as computer designs transition to decentralized multi-core and distributed cyber-physical systems. More generally, a single energy-time minimization principle may govern the design of many complex systems that process energy, materials and information. This article is part of the themed issue 'The major synthetic evolutionary transitions'. © 2016 The Author(s).
Contention Modeling for Multithreaded Distributed Shared Memory Machines: The Cray XMT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Secchi, Simone; Tumeo, Antonino; Villa, Oreste
Distributed Shared Memory (DSM) machines are a wide class of multi-processor computing systems where a large virtually-shared address space is mapped onto a network of physically distributed memories. High memory latency and network contention are two of the main factors that limit performance scaling of such architectures. Modern high-performance computing DSM systems have evolved toward exploitation of massive hardware multi-threading and fine-grained memory hashing to tolerate irregular latencies, avoid network hot-spots and enable high scaling. In order to model the performance of such large-scale machines, parallel simulation has proved to be a promising approach to achieve good accuracy in reasonable times. One of the most critical factors in resolving the simulation speed-accuracy trade-off is network modeling. The Cray XMT is a massively multi-threaded supercomputing architecture that belongs to the DSM class, since it implements a globally-shared address space abstraction on top of a physically distributed memory substrate. In this paper, we discuss the development of a contention-aware network model intended to be integrated in a full-system XMT simulator. We start by measuring the effects of network contention in a 128-processor XMT machine and then investigate the trade-off that exists between simulation accuracy and speed, by comparing three network models which operate at different levels of accuracy. The comparison and model validation is performed by executing a string-matching algorithm on the full-system simulator and on the XMT, using three datasets that generate noticeably different contention patterns.
From macro-scale to micro-scale computational anatomy: a perspective on the next 20 years.
Mori, Kensaku
2016-10-01
This paper gives our perspective on the next two decades of computational anatomy, which has made great strides in the recognition and understanding of human anatomy from conventional clinical images. The results from this field are now used in a variety of medical applications, including quantitative analysis of organ shapes, interventional assistance, surgical navigation, and population analysis. Several anatomical models have also been used in computational anatomy, and these mainly target millimeter-scale shapes. For example, liver-shape models are almost completely modeled at the millimeter scale, and shape variations are described at such scales. Most clinical 3D scanning devices have offered voxel resolutions of just under 1 mm or 0.5 mm for over 25 years, and this resolution has not changed drastically in that time. Although Z-axis (head-to-tail direction) resolution has been drastically improved by the introduction of multi-detector CT scanning devices, in-plane resolutions have not changed very much either. When we look at human anatomy, we can see different anatomical structures at different scales. For example, pulmonary blood vessels and lung lobes can be observed in millimeter-scale images. If we take 10-µm-scale images of a lung specimen, the alveoli and bronchiole regions can be located in them. Most work in millimeter-scale computational anatomy has been done by the medical-image analysis community. In the next two decades, we encourage our community to focus on micro-scale computational anatomy. In this perspective paper, we briefly review the achievements of computational anatomy and its impacts on clinical applications; furthermore, we show several possibilities from the viewpoint of microscopic computational anatomy by discussing experimental results from our recent research activities. Copyright © 2016 Elsevier B.V. All rights reserved.
Crops in silico: A community wide multi-scale computational modeling framework of plant canopies
NASA Astrophysics Data System (ADS)
Srinivasan, V.; Christensen, A.; Borkiewic, K.; Yiwen, X.; Ellis, A.; Panneerselvam, B.; Kannan, K.; Shrivastava, S.; Cox, D.; Hart, J.; Marshall-Colon, A.; Long, S.
2016-12-01
Current crop models predict a looming gap between supply and demand for primary foodstuffs over the next 100 years. While significant yield increases were achieved in major food crops during the early years of the green revolution, current rates of yield increase are insufficient to meet projected food demand. Furthermore, with the projected reduction in arable land, decrease in water availability, and increasing impacts of climate change on future food production, innovative technologies are required to sustainably improve crop yield. To meet these challenges, we are developing Crops in silico (Cis), a biologically informed, multi-scale computational modeling framework that can facilitate whole-plant simulations of crop systems. The Cis framework is capable of linking models of gene networks, protein synthesis, metabolic pathways, physiology, growth, and development in order to investigate crop response to different climate scenarios and resource constraints. This modeling framework will provide the mechanistic detail needed to generate testable hypotheses toward accelerating directed breeding and engineering efforts to increase future food security. A primary objective in building such a framework is to create synergy among an inter-connected community of biologists and modelers to create a realistic virtual plant. The framework advantageously casts detailed mechanistic understanding of individual plant processes at various scales into a common, scalable form that makes use of current advances in high-performance and parallel computing. We are currently designing a user-friendly interface that will make this tool equally accessible to biologists and computer scientists. Critically, this framework will provide the community with much-needed tools for guiding future crop breeding and engineering, understanding the emergent implications of molecular-level discoveries for whole-plant behavior, and improving prediction of plant and ecosystem responses to the environment.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Heroux, Michael; Lethin, Richard
Programming models and environments play essential roles in high performance computing, enabling the conception, design, implementation and execution of science and engineering application codes. Programmer productivity is strongly influenced by the effectiveness of our programming models and environments, as is software sustainability, since our codes have lifespans measured in decades. The advent of new computing architectures, increased concurrency, concerns for resilience, and the increasing demands for high-fidelity, multi-physics, multi-scale and data-intensive computations mean that we have new challenges to address as part of our fundamental R&D requirements. Fortunately, we also have new tools and environments that make design, prototyping and delivery of new programming models easier than ever. The combination of new and challenging requirements and new, powerful toolsets enables significant synergies for the next generation of programming models and environments R&D. This report presents the topics discussed and results from the 2014 DOE Office of Science Advanced Scientific Computing Research (ASCR) Programming Models & Environments Summit, and subsequent discussions among the summit participants and contributors to topics in this report.
Computer Laboratory for Multi-scale Simulations of Novel Nanomaterials
2014-09-15
schemes for multiscale modeling of polymers. Permselective ion-exchange membranes for protective clothing, fuel cells, and batteries are of special... polyelectrolyte membranes (PEM) with chemical warfare agents (CWA) and their simulants and (2) development of new simulation methods and computational... chemical potential using the gauge cell method and calculation of density profiles. However, the code does not run in parallel environments. For mesoscale
Multi-Tissue Computational Modeling Analyzes Pathophysiology of Type 2 Diabetes in MKR Mice
Kumar, Amit; Harrelson, Thomas; Lewis, Nathan E.; Gallagher, Emily J.; LeRoith, Derek; Shiloach, Joseph; Betenbaugh, Michael J.
2014-01-01
Computational models using metabolic reconstructions for in silico simulation of metabolic disorders such as type 2 diabetes mellitus (T2DM) can provide a better understanding of disease pathophysiology and avoid high experimentation costs. However, little computational work using metabolic reconstructions has been performed toward a better understanding of T2DM. In this study, a new algorithm for generating tissue-specific metabolic models is presented, along with the resulting multi-confidence-level (MCL) multi-tissue model. The effect of T2DM on liver, muscle, and fat in MKR mice was first studied by microarray analysis, and the changes in gene expression of frank T2DM MKR mice versus healthy mice were subsequently applied to the multi-tissue model. Using this first multi-tissue genome-scale model of all metabolic pathways in T2DM, we found that the branched-chain amino acid degradation and fatty acid oxidation pathways are downregulated in T2DM MKR mice. Microarray data showed low expression of genes in these pathways in MKR mice versus healthy mice. In addition, flux balance analysis using the MCL multi-tissue model showed that the branched-chain amino acid degradation and fatty acid oxidation pathways were significantly downregulated in MKR mice versus healthy mice. The model was validated using data derived from the literature regarding T2DM. Microarray data were used in conjunction with the model to predict fluxes of various other metabolic pathways in the T2DM mouse model, and alterations in a number of pathways were detected. The T2DM MCL multi-tissue model may explain the high levels of branched-chain amino acids and free fatty acids in the plasma of type 2 diabetic subjects from a metabolic-flux perspective. PMID:25029527
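Flux balance analysis, the technique named here, solves a linear program over the steady-state reaction fluxes (standard formulation, stated for context):

```latex
\max_{v}\; c^{\mathsf T} v
\quad \text{subject to} \quad
S\,v = 0, \qquad v_{\min} \le v \le v_{\max},
```

where S is the stoichiometric matrix, v the vector of reaction fluxes, and c selects an objective such as biomass production; tissue-specific expression data enter by tightening the bounds v_min and v_max on individual reactions.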
Multi-source Geospatial Data Analysis with Google Earth Engine
NASA Astrophysics Data System (ADS)
Erickson, T.
2014-12-01
The Google Earth Engine platform is a cloud computing environment for data analysis that combines a public data catalog with a large-scale computational facility optimized for parallel processing of geospatial data. The data catalog is a multi-petabyte archive of georeferenced datasets that include images from Earth-observing satellite and airborne sensors (examples: USGS Landsat, NASA MODIS, USDA NAIP), weather and climate datasets, and digital elevation models. Earth Engine supports both a just-in-time computation model that enables real-time preview and debugging during algorithm development for open-ended data exploration, and a batch computation mode for applying algorithms over large spatial and temporal extents. The platform automatically handles many traditionally onerous data management tasks, such as data format conversion, reprojection, and resampling, which facilitates writing algorithms that combine data from multiple sensors and/or models. Although the primary use of Earth Engine, to date, has been the analysis of large Earth-observing satellite datasets, the computational platform is generally applicable to a wide variety of use cases that require large-scale geospatial data analyses. This presentation will focus on how Earth Engine facilitates the analysis of geospatial data streams that originate from multiple separate sources (and often communities) and how it enables collaboration during algorithm development and data exploration. The talk will highlight current projects/analyses that are enabled by this functionality. https://earthengine.google.org
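The multi-sensor workflow described above looks roughly like this in the Earth Engine Python client; the dataset ID and band names are taken from the current public catalog and may differ from what was available at the time of the presentation.

```python
import ee

ee.Initialize()  # assumes prior authentication

# Median Landsat 8 surface-reflectance composite over one year.
point = ee.Geometry.Point([-122.29, 37.89])
composite = (
    ee.ImageCollection('LANDSAT/LC08/C02/T1_L2')
      .filterBounds(point)
      .filterDate('2020-01-01', '2021-01-01')
      .median()
)

# NDVI from the composite; reprojection and resampling are handled
# implicitly by the platform when data sources are combined.
ndvi = composite.normalizedDifference(['SR_B5', 'SR_B4'])
mean = ndvi.reduceRegion(ee.Reducer.mean(), point.buffer(5000), scale=30)
print(mean.getInfo())  # just-in-time evaluation happens server-side
```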
Biglino, Giovanni; Corsini, Chiara; Schievano, Silvia; Dubini, Gabriele; Giardini, Alessandro; Hsia, Tain-Yen; Pennati, Giancarlo; Taylor, Andrew M
2014-05-01
Reliability of computational models for cardiovascular investigations strongly depends on their validation against physical data. This study aims to experimentally validate a computational model of complex congenital heart disease (i.e., surgically palliated hypoplastic left heart syndrome with aortic coarctation), thus demonstrating that hemodynamic information can be reliably extrapolated from the model for clinically meaningful investigations. A patient-specific aortic arch model was tested in a mock circulatory system, and the same flow conditions were re-created in silico by attaching an appropriate lumped parameter network (LPN) to the same three-dimensional (3D) aortic model (i.e., a multi-scale approach). The model included a modified Blalock-Taussig shunt and coarctation of the aorta. Different flow regimes were tested, as well as the impact of uncertainty in viscosity. Computational flow and pressure results were in good agreement with the experimental signals, both qualitatively, in terms of the shape of the waveforms, and quantitatively (mean aortic pressure 62.3 vs. 65.1 mmHg, 4.8% difference; mean aortic flow 28.0 vs. 28.4% of inlet flow, 1.4% difference; coarctation pressure drop 30.0 vs. 33.5 mmHg, 10.4% difference), proving the reliability of the numerical approach. Neither substantial changes in fluid viscosity nor the use of a turbulence model in the numerical simulations significantly affected the computed flows and pressures for the investigated physiology. The results highlight how the non-linear fluid dynamic phenomena occurring in vitro must be properly described to ensure satisfactory agreement. This study presents methodological considerations for using experimental data to preliminarily set up a computational model, and then simulating a complex congenital physiology using a multi-scale approach.
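A lumped parameter network of the kind coupled to the 3D model can be illustrated by a minimal three-element Windkessel that integrates outlet pressure from a prescribed flow waveform; the element values and waveform below are generic adult-scale placeholders, not the study's patient-specific LPN.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Three-element Windkessel: proximal resistance Rp, compliance C,
# distal resistance Rd (generic values, mmHg·s/mL and mL/mmHg).
Rp, Rd, C = 0.05, 1.0, 1.2
T = 0.8  # cardiac period, s

def inflow(t):
    """Half-sine systolic ejection, zero in diastole (illustrative)."""
    tc = t % T
    return 300.0 * np.sin(np.pi * tc / 0.3) if tc < 0.3 else 0.0

def windkessel(t, y):
    p_dist = y[0]               # pressure across the compliance
    dp = (inflow(t) - p_dist / Rd) / C  # charging/discharging of C
    return [dp]

sol = solve_ivp(windkessel, (0, 10 * T), [80.0], max_step=1e-3)
p_outlet = sol.y[0] + Rp * np.array([inflow(t) for t in sol.t])
print(f"outlet pressure range: {p_outlet.min():.1f}-{p_outlet.max():.1f} mmHg")
```

In the multi-scale setting, the 3D domain supplies the flow at each outlet and the LPN returns the pressure it imposes there at every time step.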
DOE Office of Scientific and Technical Information (OSTI.GOV)
Behafarid, F.; Shaver, D. R.; Bolotnov, I. A.
The required technological and safety standards for future Gen IV reactors can only be achieved if advanced simulation capabilities become available which combine high performance computing with the necessary level of modeling detail and high accuracy of predictions. The purpose of this paper is to present new results of multi-scale three-dimensional (3D) simulations of the inter-related phenomena which occur as a result of fuel element heat-up and cladding failure, including the injection of a jet of gaseous fission products into a partially blocked Sodium Fast Reactor (SFR) coolant channel, and gas/molten-sodium transport along the coolant channels. The computational approach to the analysis of the overall accident scenario is based on using two inter-communicating computational multiphase fluid dynamics (CMFD) codes: a CFD code, PHASTA, and a RANS code, NPHASE-CMFD. Using the geometry and time history of cladding failure and the gas injection rate, direct numerical simulations (DNS) of two-phase turbulent flow, combined with the level set method, have been performed with the PHASTA code. The model allows one to track the evolution of gas/liquid interfaces at a centimeter scale. The simulated phenomena include the formation and breakup of the jet of fission products injected into the liquid sodium coolant. The PHASTA outflow has been averaged over time to obtain mean phasic velocities and volumetric concentrations, as well as the liquid turbulent kinetic energy and turbulence dissipation rate, all of which have served as the input to the core-scale simulations using the NPHASE-CMFD code. A sliding-window time averaging has been used to capture mean flow parameters for transient cases. The results presented in the paper include testing and validation of the proposed models, as well as the predictions of fission-gas/liquid-sodium transport along a multi-rod fuel assembly of an SFR during a partial loss-of-flow accident. (authors)
3D virtual human atria: A computational platform for studying clinical atrial fibrillation
Aslanidi, Oleg V; Colman, Michael A; Stott, Jonathan; Dobrzynski, Halina; Boyett, Mark R; Holden, Arun V; Zhang, Henggui
2011-01-01
Despite a vast amount of experimental and clinical data on the underlying ionic, cellular and tissue substrates, the mechanisms of common atrial arrhythmias (such as atrial fibrillation, AF) arising from the functional interactions at the whole atria level remain unclear. Computational modelling provides a quantitative framework for integrating such multi-scale data and understanding the arrhythmogenic behaviour that emerges from the collective spatio-temporal dynamics in all parts of the heart. In this study, we have developed a multi-scale hierarchy of biophysically detailed computational models for the human atria – 3D virtual human atria. Primarily, diffusion tensor MRI reconstruction of the tissue geometry and fibre orientation in the human sinoatrial node (SAN) and surrounding atrial muscle was integrated into the 3D model of the whole atria dissected from the Visible Human dataset. The anatomical models were combined with the heterogeneous atrial action potential (AP) models, and used to simulate the AP conduction in the human atria under various conditions: SAN pacemaking and atrial activation in the normal rhythm, break-down of regular AP wave-fronts during rapid atrial pacing, and the genesis of multiple re-entrant wavelets characteristic of AF. Contributions of different properties of the tissue to the mechanisms of the normal rhythm and AF arrhythmogenesis are investigated and discussed. The 3D model of the atria itself was incorporated into the torso model to simulate the body surface ECG patterns in the normal and arrhythmic conditions. Therefore, a state-of-the-art computational platform has been developed, which can be used for studying multi-scale electrical phenomena during atrial conduction and arrhythmogenesis. Results of such simulations can be directly compared with experimental electrophysiological and endocardial mapping data, as well as clinical ECG recordings. More importantly, the virtual human atria can provide validated means for directly dissecting 3D excitation propagation processes within the atrial walls from an in vivo whole heart, which are beyond the current technical capabilities of experimental or clinical set-ups. PMID:21762716
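As a much-simplified illustration of the reaction-diffusion formulation underlying such tissue-scale AP conduction models, the sketch below propagates an excitation wave across a 2D sheet using FitzHugh-Nagumo kinetics in place of the biophysically detailed, heterogeneous atrial cell models used in the study; all parameter values are generic textbook choices.

```python
import numpy as np

# Minimal 2-D monodomain-style excitable medium with FitzHugh-Nagumo
# kinetics (an illustrative stand-in for detailed atrial AP models).
n, D, dt, dx = 200, 0.1, 0.05, 1.0
a, b, eps = 0.1, 0.5, 0.01
u = np.zeros((n, n)); w = np.zeros((n, n))
u[:, :5] = 1.0                       # stimulate the left edge

def laplacian(f):
    # 5-point stencil, periodic boundaries via np.roll for brevity
    return (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
            np.roll(f, 1, 1) + np.roll(f, -1, 1) - 4 * f) / dx**2

for step in range(4000):
    du = D * laplacian(u) + u * (1 - u) * (u - a) - w
    dw = eps * (u - b * w)
    u += dt * du
    w += dt * dw

print("activated fraction:", (u > 0.5).mean())
```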
2014-06-12
[Extraction-damaged abstract; only fragments are recoverable:] the model was used to simulate SAR data for Mangrove (tropical) and Nezer (temperate) forests at P-band, applying a scattering model for radiometry, interferometry, and polarimetry at P- and L-band (IEEE Transactions on Geoscience and Remote Sensing 44(4): 849).
NASA Astrophysics Data System (ADS)
Lawry, B. J.; Encarnacao, A.; Hipp, J. R.; Chang, M.; Young, C. J.
2011-12-01
With the rapid growth of multi-core computing hardware, it is now possible for scientific researchers to run complex, computationally intensive software on affordable, in-house commodity hardware. Multi-core CPUs (Central Processing Units) and GPUs (Graphics Processing Units) are now commonplace in desktops and servers. Developers today have access to extremely powerful hardware that enables the execution of software that could previously only be run on expensive, massively parallel systems. It is no longer cost-prohibitive for an institution to build a parallel computing cluster consisting of commodity multi-core servers. In recent years, our research team has developed a distributed, multi-core computing system and used it to construct global 3D earth models using seismic tomography. Traditionally, computational limitations forced certain assumptions and shortcuts in the calculation of tomographic models; however, with the recent rapid growth in computational hardware, including faster CPUs, increased RAM, and the development of multi-core computers, we are now able to perform seismic tomography, 3D ray tracing and seismic event location using distributed parallel algorithms running on commodity hardware, thereby eliminating the need for many of these shortcuts. We describe the Node Resource Manager (NRM), a system we developed that leverages the capabilities of a parallel computing cluster. NRM is a software-based parallel computing management framework that works in tandem with the Java Parallel Processing Framework (JPPF, http://www.jppf.org/), a third-party library that provides a flexible and innovative way to take advantage of modern multi-core hardware. NRM enables multiple applications to use and share a common set of networked computers, regardless of their hardware platform or operating system. Using NRM, algorithms can be parallelized to run on multiple processing cores of a distributed computing cluster of servers and desktops, which results in a dramatic speedup in execution time. NRM is sufficiently generic to support applications in any domain, as long as the application is parallelizable (i.e., can be subdivided into multiple individual processing tasks). At present, NRM has been effective in decreasing the overall runtime of several algorithms: 1) the generation of a global 3D model of the compressional velocity distribution in the Earth using tomographic inversion, 2) the calculation of the model resolution matrix, model covariance matrix, and travel time uncertainty for the aforementioned velocity model, and 3) the correlation of waveforms with archival data on a massive scale for seismic event detection. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
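NRM and JPPF are Java-based; as a language-neutral illustration of the underlying master-worker task decomposition, here is a hedged Python analogue in which a pool of local worker processes consumes independent tasks. The correlate_window function and its payload are hypothetical stand-ins, not part of NRM.

```python
from multiprocessing import Pool

def correlate_window(task):
    """Stand-in for one independent processing task, e.g. correlating
    one waveform window against an archive segment (hypothetical)."""
    window_id, data = task
    score = sum(x * x for x in data)     # placeholder computation
    return window_id, score

if __name__ == "__main__":
    # a master process farms independent tasks out to worker cores,
    # mirroring the task-decomposition idea behind NRM/JPPF
    tasks = [(i, [float(j) for j in range(1000)]) for i in range(64)]
    with Pool() as pool:                 # defaults to all local cores
        results = pool.map(correlate_window, tasks)
    print(len(results), "tasks completed")
```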
NASA Astrophysics Data System (ADS)
Ryu, Hoon; Jeong, Yosang; Kang, Ji-Hoon; Cho, Kyu Nam
2016-12-01
Modelling of multi-million-atom semiconductor structures is important as it not only predicts the properties of physically realizable novel materials, but can also accelerate advanced device designs. This work describes a new Technology Computer-Aided Design (TCAD) tool for nanoelectronics modelling, which uses an sp3d5s* tight-binding approach to describe multi-million-atom structures and simulate their electronic structures with high performance computing (HPC), including atomic effects such as alloy and dopant disorders. Named the Quantum simulation tool for Advanced Nanoscale Devices (Q-AND), the tool shows good scalability on traditional multi-core HPC clusters, implying a strong capability for large-scale electronic structure simulations, with particularly remarkable performance enhancement on recent clusters of Intel Xeon Phi™ coprocessors. A review of a recent modelling study conducted to understand experimental work on highly phosphorus-doped silicon nanowires is presented to demonstrate the utility of Q-AND. Having been developed via an Intel Parallel Computing Center project, Q-AND will be opened to the public to establish a sound framework for nanoelectronics modelling on advanced many-core HPC clusters. With details of the development methodology and an exemplary study of dopant electronics, this work presents a practical guideline for TCAD development to researchers in the field of computational nanoelectronics.
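The sp3d5s* basis involves many orbitals per atom; as a minimal illustration of the tight-binding workflow (assemble a Hamiltonian from on-site and hopping terms, perturb it with a dopant, diagonalize), here is a single-orbital nearest-neighbour chain. All names and values below are illustrative, not Q-AND parameters.

```python
import numpy as np

def tb_chain(n_sites=100, onsite=0.0, hopping=-1.0, dopant_site=None,
             dopant_shift=-0.5):
    """Single-orbital nearest-neighbour tight-binding chain
    (a toy stand-in for the sp3d5s* basis used by Q-AND)."""
    H = np.diag(np.full(n_sites, onsite))
    for i in range(n_sites - 1):
        H[i, i + 1] = H[i + 1, i] = hopping
    if dopant_site is not None:          # crude 'dopant disorder'
        H[dopant_site, dopant_site] += dopant_shift
    return np.linalg.eigvalsh(H)

energies = tb_chain(dopant_site=50)
print("lowest 5 eigenvalues:", np.round(energies[:5], 3))
```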
Tučník, Petr; Bureš, Vladimír
2016-01-01
Multi-criteria decision-making (MCDM) can be formally implemented by various methods. This study compares the suitability of four selected MCDM methods, namely WPM, TOPSIS, VIKOR, and PROMETHEE, for future applications in agent-based computational economic (ACE) models of larger scale (i.e., over 10 000 agents in one geographical region). These four MCDM methods were selected according to their appropriateness for computational processing in ACE applications. Tests of the selected methods were conducted on four hardware configurations. For each method, 100 tests were performed, which represented one testing iteration. With four testing iterations conducted on each hardware setting, and separate testing of all configurations with the server parameter deactivated and activated, altogether 12 800 data points were collected and subsequently analyzed. An illustrative decision-making scenario was used that allows the mutual comparison of all of the selected decision-making methods. Our test results suggest that although all methods are convenient and can be used in practice, the VIKOR method completed the tests with the best results and thus can be recommended as the most suitable for simulations of large-scale agent-based models.
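Of the four methods compared, TOPSIS is the most compact to illustrate; the sketch below implements its standard steps (vector normalisation, weighting, distances to the ideal and anti-ideal alternatives, closeness coefficient). The decision matrix and weights are made-up examples, not those of the study.

```python
import numpy as np

def topsis(X, weights, benefit):
    """Standard TOPSIS ranking. X: (alternatives x criteria) matrix,
    weights: criterion weights summing to 1, benefit: True where
    larger is better, False where smaller is better."""
    N = X / np.linalg.norm(X, axis=0)        # vector normalisation
    V = N * weights                          # weighted matrix
    ideal = np.where(benefit, V.max(0), V.min(0))
    worst = np.where(benefit, V.min(0), V.max(0))
    d_pos = np.linalg.norm(V - ideal, axis=1)
    d_neg = np.linalg.norm(V - worst, axis=1)
    return d_neg / (d_pos + d_neg)           # closeness in [0, 1]

# illustrative: rank 3 offers on (price, quality) criteria
X = np.array([[250., 7.], [200., 5.], [300., 9.]])
scores = topsis(X, weights=np.array([0.5, 0.5]),
                benefit=np.array([False, True]))
print("ranking (best first):", np.argsort(-scores))
```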
Materials-by-design: computation, synthesis, and characterization from atoms to structures
NASA Astrophysics Data System (ADS)
Yeo, Jingjie; Jung, Gang Seob; Martín-Martínez, Francisco J.; Ling, Shengjie; Gu, Grace X.; Qin, Zhao; Buehler, Markus J.
2018-05-01
In the 50 years that followed Richard Feynman’s exposition of the idea that there is ‘plenty of room at the bottom’ for manipulating individual atoms for the synthesis and manufacture of materials, the materials-by-design paradigm has been developed gradually through synergistic integration of experimental material synthesis and characterization with predictive computational modeling and optimization. This paper reviews how this paradigm creates the possibility to develop materials according to specific, rational designs from the molecular to the macroscopic scale. We discuss promising techniques in experimental small-scale material synthesis and large-scale fabrication methods to manipulate atomistic or macroscale structures, which can be designed by computational modeling. These include recombinant protein technology to produce peptides and proteins with tailored sequences encoded by recombinant DNA, self-assembly processes induced by conformational transitions of proteins, additive manufacturing for designing complex structures, and qualitative and quantitative characterization of materials at different length scales. We describe important material characterization techniques using numerous methods of spectroscopy and microscopy. We detail numerous multi-scale computational modeling techniques that complement these experimental techniques: density functional theory (DFT) at the atomistic scale; fully atomistic and coarse-grained molecular dynamics at the molecular to mesoscale; and continuum modeling at the macroscale. Additionally, we present case studies that utilize experimental and computational approaches in an integrated manner to broaden our understanding of the properties of two-dimensional materials and materials based on silk and silk-elastin-like proteins.
NASA Astrophysics Data System (ADS)
Zhou, Di; Lu, Zhiliang; Guo, Tongqing; Shen, Ennan
2016-06-01
In this paper, two types of unsteady flow problems in turbomachinery, blade flutter and rotor-stator interaction, are studied by means of numerical simulation. For the former, the energy method is often used to predict aeroelastic stability by calculating the aerodynamic work per vibration cycle. The inter-blade phase angle (IBPA) is an important parameter in the computation and may have significant effects on aeroelastic behavior. For the latter, the numbers of blades in each row are usually not equal and the unsteady rotor-stator interactions can be strong. An effective way to perform multi-row calculations is the domain scaling method (DSM). These two cases share a common feature: considering their respective characteristics, the computational domain has to be extended to multiple passages (MP). The present work is aimed at modeling these two issues with the developed MP model. Computational fluid dynamics (CFD) techniques are applied to solve the unsteady Reynolds-averaged Navier-Stokes (RANS) equations and simulate the flow fields. With the parallel technique, the additional time cost due to modeling more passages can be largely reduced. Results are presented for two test cases: a vibrating rotor blade and a turbine stage.
NASA Astrophysics Data System (ADS)
Fu, Yao; Song, Jeong-Hoon
2014-08-01
The Hardy stress definition has been restricted to pair potentials and embedded-atom method potentials due to the basic assumptions in the derivation of a symmetric microscopic stress tensor. The force decomposition required in the Hardy stress expression becomes obscure for multi-body potentials. In this work, we demonstrate the invariance of the Hardy stress expression for a polymer system modeled with multi-body interatomic potentials, including interactions of up to four atoms, by applying central force decomposition of the atomic force. The balance of momentum has been demonstrated to be valid theoretically and tested under various numerical simulation conditions. The validity of momentum conservation justifies the extension of the Hardy stress expression to multi-body potential systems. The computed Hardy stress has been observed to converge to the virial stress of the system with increasing spatial averaging volume. This work provides a feasible and reliable linkage between the atomistic and continuum scales for multi-body potential systems.
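For a pair potential, the configurational part of the virial stress to which the Hardy stress is shown to converge reduces to a sum of r_ij ⊗ f_ij terms over bonds. A minimal sketch for a Lennard-Jones system follows; the kinetic term and the multi-body force decomposition central to the paper are omitted, and sign conventions vary across the literature.

```python
import numpy as np

def virial_stress_pair(pos, volume, eps=1.0, sigma=1.0, rcut=2.5):
    """Configurational virial stress for a Lennard-Jones pair
    potential: sigma_ab = (1/V) sum_{i<j} r_ij,a * f_ij,b."""
    n = len(pos)
    stress = np.zeros((3, 3))
    for i in range(n):
        for j in range(i + 1, n):
            rij = pos[i] - pos[j]
            r = np.linalg.norm(rij)
            if r < rcut:
                # f = -dU/dr for U = 4 eps ((s/r)^12 - (s/r)^6)
                fmag = 24 * eps * (2 * (sigma / r)**12 -
                                   (sigma / r)**6) / r
                fij = fmag * rij / r         # force on i due to j
                stress += np.outer(rij, fij)
    return stress / volume

g = np.arange(4) * 1.2                       # 4x4x4 lattice, 1.2 sigma
pos = np.array(np.meshgrid(g, g, g)).reshape(3, -1).T
print(np.round(virial_stress_pair(pos, volume=4.8**3), 4))
```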
Salient contour extraction from complex natural scene in night vision image
NASA Astrophysics Data System (ADS)
Han, Jing; Yue, Jiang; Zhang, Yi; Bai, Lian-fa
2014-03-01
The theory of center-surround interaction in the non-classical receptive field can be applied to night vision information processing. In this work, an optimized compound receptive field modulation method is proposed to extract salient contours from complex natural scenes in low-light-level (LLL) and infrared images. The key idea is that multi-feature analysis can recognize the inhomogeneity in modulatory coverage more accurately, and that center-surround configurations whose grouping structure satisfies the Gestalt rule deserve a high connection probability. Computationally, a multi-feature contrast-weighted inhibition model is presented to suppress background and lower mutual inhibition among contour elements; a fuzzy connection facilitation model is proposed to achieve enhancement of the contour response, connection of discontinuous contours, and further elimination of randomly distributed noise and texture; and a multi-scale iterative attention method is designed to accomplish the dynamic modulation process and extract contours of targets at multiple sizes. This work provides a series of biologically motivated, high-performance computational visual models for contour detection in cluttered night vision scenes.
Computational Modeling System for Deformation and Failure in Polycrystalline Metals
2009-03-29
[Report fragment; only parts of the outline and summary are recoverable:] FIB/EBSD; the Voronoi Cell FEM (VCFEM) for micromechanical modeling; VCFEM for microstructural damage modeling; adaptive multiscale simulations; development of an accurate and efficient image-based micromechanical finite element model for crystal plasticity and damage, incorporating real morphological topology with evolving strain localization and damage; development of multi-scaling algorithms in the time domain for compression and localization.
Dark matter and neutrino masses from a scale-invariant multi-Higgs portal
NASA Astrophysics Data System (ADS)
Karam, Alexandros; Tamvakis, Kyriakos
2015-10-01
We consider a classically scale invariant version of the Standard Model, extended by an extra dark SU(2)_X gauge group. Apart from the dark gauge bosons and a dark scalar doublet which is coupled to the Standard Model Higgs through a portal coupling, we incorporate right-handed neutrinos and an additional real singlet scalar field. After symmetry breaking à la Coleman-Weinberg, we examine the multi-Higgs sector and impose theoretical and experimental constraints. In addition, by computing the dark matter relic abundance and the spin-independent scattering cross section off a nucleon we determine the viable dark matter mass range in accordance with present limits. The model can be tested in the near future by collider experiments and direct detection searches such as XENON1T.
NASA Astrophysics Data System (ADS)
Li, Gen; Tang, Chun-An; Liang, Zheng-Zhao
2017-01-01
Multi-scale high-resolution modeling of rock failure process is a powerful means in modern rock mechanics studies to reveal the complex failure mechanism and to evaluate engineering risks. However, multi-scale continuous modeling of rock, from deformation, damage to failure, has raised high requirements on the design, implementation scheme and computation capacity of the numerical software system. This study is aimed at developing the parallel finite element procedure, a parallel rock failure process analysis (RFPA) simulator that is capable of modeling the whole trans-scale failure process of rock. Based on the statistical meso-damage mechanical method, the RFPA simulator is able to construct heterogeneous rock models with multiple mechanical properties, deal with and represent the trans-scale propagation of cracks, in which the stress and strain fields are solved for the damage evolution analysis of representative volume element by the parallel finite element method (FEM) solver. This paper describes the theoretical basis of the approach and provides the details of the parallel implementation on a Windows - Linux interactive platform. A numerical model is built to test the parallel performance of FEM solver. Numerical simulations are then carried out on a laboratory-scale uniaxial compression test, and field-scale net fracture spacing and engineering-scale rock slope examples, respectively. The simulation results indicate that relatively high speedup and computation efficiency can be achieved by the parallel FEM solver with a reasonable boot process. In laboratory-scale simulation, the well-known physical phenomena, such as the macroscopic fracture pattern and stress-strain responses, can be reproduced. In field-scale simulation, the formation process of net fracture spacing from initiation, propagation to saturation can be revealed completely. In engineering-scale simulation, the whole progressive failure process of the rock slope can be well modeled. It is shown that the parallel FE simulator developed in this study is an efficient tool for modeling the whole trans-scale failure process of rock from meso- to engineering-scale.
Towards European-scale convection-resolving climate simulations with GPUs: a study with COSMO 4.19
NASA Astrophysics Data System (ADS)
Leutwyler, David; Fuhrer, Oliver; Lapillonne, Xavier; Lüthi, Daniel; Schär, Christoph
2016-09-01
The representation of moist convection in climate models represents a major challenge, due to the small scales involved. Using horizontal grid spacings of O(1 km), convection-resolving weather and climate models allow one to explicitly resolve deep convection. However, due to their extremely demanding computational requirements, they have so far been limited to short simulations and/or small computational domains. Innovations in supercomputing have led to new hybrid node designs, mixing conventional multi-core hardware and accelerators such as graphics processing units (GPUs). One of the first atmospheric models that has been fully ported to these architectures is the COSMO (Consortium for Small-scale Modeling) model. Here we present the convection-resolving COSMO model on continental scales, using a version of the model capable of using GPU accelerators. The verification of a week-long simulation containing winter storm Kyrill shows that, for this case, convection-parameterizing simulations and convection-resolving simulations agree well. Furthermore, we demonstrate the applicability of the approach to longer simulations by conducting a 3-month-long simulation of the summer season 2006. Its results corroborate findings obtained on smaller domains, such as a more credible representation of the diurnal cycle of precipitation in convection-resolving models and a tendency to produce more intense hourly precipitation events. Both simulations also show how the approach allows for the representation of interactions between synoptic-scale and meso-scale atmospheric circulations at scales ranging from 1000 to 10 km. This includes the formation of sharp cold-frontal structures, convection embedded in fronts and small eddies, and the formation and organization of propagating cold pools. Finally, we assess the performance gain from using heterogeneous hardware equipped with GPUs relative to multi-core hardware. With the COSMO model, we now use a weather and climate model that has all the necessary modules required for real-case convection-resolving regional climate simulations on GPUs.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tang, Guoping; D'Azevedo, Ed F; Zhang, Fan
2010-01-01
Calibration of groundwater models involves hundreds to thousands of forward solutions, each of which may solve many transient coupled nonlinear partial differential equations, resulting in a computationally intensive problem. We describe a hybrid MPI/OpenMP approach to exploit two levels of parallelism in software and hardware to reduce calibration time on multi-core computers. HydroGeoChem 5.0 (HGC5) is parallelized using OpenMP for direct solutions for a reactive transport model application and a field-scale coupled flow and transport model application. In the reactive transport model, a single parallelizable loop is identified that accounts for over 97% of the total computational time using GPROF. Addition of a few lines of OpenMP compiler directives to the loop yields a speedup of about 10 on a 16-core compute node. For the field-scale model, parallelizable loops in 14 of 174 HGC5 subroutines that require 99% of the execution time are identified. As these loops are parallelized incrementally, the scalability is found to be limited by a loop where Cray PAT detects a cache miss rate of over 90%. With this loop rewritten, a speedup similar to the first application is achieved. The OpenMP-parallelized code can be run efficiently on multiple workstations in a network or multiple compute nodes on a cluster as slaves using parallel PEST to speed up model calibration. To run calibration on clusters as a single task, the Levenberg-Marquardt algorithm is added to HGC5 with the Jacobian calculation and lambda search parallelized using MPI. With this hybrid approach, 100-200 compute cores are used to reduce the calibration time from weeks to a few hours for these two applications. This approach is applicable to most existing groundwater model codes for many applications.
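The Levenberg-Marquardt step that dominates calibration cost is the Jacobian: one forward solution per parameter, each independent of the others. A hedged Python analogue of that parallelisation is sketched below, with multiprocessing standing in for MPI and a toy exponential forward model standing in for HGC5.

```python
import numpy as np
from multiprocessing import Pool

def forward_model(params):
    """Stand-in for one expensive forward solution (hypothetical)."""
    a, b = params
    t = np.linspace(0, 1, 20)
    return a * np.exp(-b * t)

def jac_column(args):
    """One finite-difference Jacobian column = one extra forward run."""
    params, k, h = args
    p = params.copy(); p[k] += h
    return (forward_model(p) - forward_model(params)) / h

if __name__ == "__main__":
    params, h = np.array([2.0, 3.0]), 1e-6
    with Pool() as pool:   # each column is an independent forward run
        cols = pool.map(jac_column,
                        [(params, k, h) for k in range(len(params))])
    J = np.column_stack(cols)
    print("Jacobian shape:", J.shape)
```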
Dick, Thomas E.; Molkov, Yaroslav I.; Nieman, Gary; Hsieh, Yee-Hsee; Jacono, Frank J.; Doyle, John; Scheff, Jeremy D.; Calvano, Steve E.; Androulakis, Ioannis P.; An, Gary; Vodovotz, Yoram
2012-01-01
Acute inflammation leads to organ failure by engaging catastrophic feedback loops in which stressed tissue evokes an inflammatory response and, in turn, inflammation damages tissue. Manifestations of this maladaptive inflammatory response include cardio-respiratory dysfunction that may be reflected in reduced heart rate and ventilatory pattern variabilities. We have developed signal-processing algorithms that quantify non-linear deterministic characteristics of variability in biologic signals. Now, coalescing under the aegis of the NIH Computational Biology Program and the Society for Complexity in Acute Illness, two research teams performed iterative experiments and computational modeling on inflammation and cardio-pulmonary dysfunction in sepsis as well as on neural control of respiration and ventilatory pattern variability. These teams, with additional collaborators, have recently formed a multi-institutional, interdisciplinary consortium, whose goal is to delineate the fundamental interrelationship between the inflammatory response and physiologic variability. Multi-scale mathematical modeling and complementary physiological experiments will provide insight into autonomic neural mechanisms that may modulate the inflammatory response to sepsis and simultaneously reduce heart rate and ventilatory pattern variabilities associated with sepsis. This approach integrates computational models of neural control of breathing and cardio-respiratory coupling with models that combine inflammation, cardiovascular function, and heart rate variability. The resulting integrated model will provide mechanistic explanations for the phenomena of respiratory sinus-arrhythmia and cardio-ventilatory coupling observed under normal conditions, and the loss of these properties during sepsis. This approach holds the potential of modeling cross-scale physiological interactions to improve both basic knowledge and clinical management of acute inflammatory diseases such as sepsis and trauma. PMID:22783197
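One widely used measure of the non-linear variability characteristics mentioned above is sample entropy; the abstract does not specify the consortium's exact algorithms, so the sketch below is offered only as a representative example of how reduced physiological variability maps to a lower entropy value. The synthetic RR-interval series are assumptions for illustration.

```python
import numpy as np

def sample_entropy(x, m=2, r_frac=0.2):
    """SampEn(m, r) = -ln(A/B): B counts template matches of length m,
    A of length m+1, within Chebyshev tolerance r."""
    x = np.asarray(x, float)
    r = r_frac * x.std()
    def count(mm):
        templ = np.array([x[i:i + mm] for i in range(len(x) - mm)])
        c = 0
        for i in range(len(templ) - 1):
            d = np.max(np.abs(templ[i + 1:] - templ[i]), axis=1)
            c += np.sum(d <= r)
        return c
    B, A = count(m), count(m + 1)
    return -np.log(A / B) if A > 0 and B > 0 else np.inf

rng = np.random.default_rng(1)
t = np.arange(500)
rr_regular = 0.8 + 0.05 * np.sin(2 * np.pi * t / 20)   # low complexity
rr_variable = 0.8 + 0.05 * rng.standard_normal(500)    # high complexity
print("SampEn regular:  %.3f" % sample_entropy(rr_regular))
print("SampEn variable: %.3f" % sample_entropy(rr_variable))
```

A loss of signal complexity, as reported for heart rate and ventilatory pattern variability in sepsis, shows up here as the lower entropy of the regular series.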
Simulating the Thermal Response of High Explosives on Time Scales of Days to Microseconds
NASA Astrophysics Data System (ADS)
Yoh, Jack J.; McClelland, Matthew A.
2004-07-01
We present an overview of computational techniques for simulating the thermal cookoff of high explosives using a multi-physics hydrodynamics code, ALE3D. Recent improvements to the code have aided our computational capability in modeling the response of energetic materials systems exposed to extreme thermal environments, such as fires. We consider an idealized model process for a confined explosive involving the transition from slow heating to rapid deflagration in which the time scale changes from days to hundreds of microseconds. The heating stage involves thermal expansion and decomposition according to an Arrhenius kinetics model while a pressure-dependent burn model is employed during the explosive phase. We describe and demonstrate the numerical strategies employed to make the transition from slow to fast dynamics.
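The slow-to-fast transition can be caricatured in a zero-dimensional model: Arrhenius decomposition releases heat, which accelerates the decomposition until thermal runaway. The sketch below uses illustrative parameter values, not ALE3D inputs, and simply stops when the reaction outruns the explicit time step, the point where a real cookoff code would hand over to a burn model.

```python
import numpy as np

# 0-D thermal-runaway toy model: Arrhenius decomposition heats the
# material, which accelerates decomposition (parameters illustrative).
A, Ea, R = 1e12, 1.6e5, 8.314        # 1/s, J/mol, J/(mol K)
dH, cp = 2.0e6, 1.5e3                # J/kg released, J/(kg K)
heat_ext = 0.02                      # slow external heating, K/s
T, alpha, t, dt = 450.0, 0.0, 0.0, 1e-2

while t < 3e5:
    rate = A * np.exp(-Ea / (R * T)) * (1.0 - alpha)   # d(alpha)/dt
    if rate * dt > 0.5:              # reaction now faster than the step:
        break                        # slow-heating regime has ended
    alpha += dt * rate
    T += dt * (heat_ext + dH / cp * rate)
    t += dt
print(f"thermal runaway after ~{t/3600:.2f} h at T = {T:.0f} K")
```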
Santos, Guido; Lai, Xin; Eberhardt, Martin; Vera, Julio
2018-01-01
Pneumococcal infection is the most frequent cause of pneumonia, and one of the most prevalent diseases worldwide. The population groups at high risk of death from bacterial pneumonia are infants, elderly and immunosuppressed people. These groups are more vulnerable because they have immature or impaired immune systems, the efficacy of their response to vaccines is lower, and antibiotic treatment often does not take place until the inflammatory response triggered is already overwhelming. The immune response to bacterial lung infections involves dynamic interactions between several types of cells whose activation is driven by intracellular molecular networks. A feasible approach to the integration of knowledge and data linking tissue, cellular and intracellular events and the construction of hypotheses in this area is the use of mathematical modeling. For this paper, we used a multi-level computational model to analyse the role of cellular and molecular interactions during the first 10 h after alveolar invasion of Streptococcus pneumoniae bacteria. By “multi-level” we mean that we simulated the interplay between different temporal and spatial scales in a single computational model. In this instance, we included the intracellular scale of processes driving lung epithelial cell activation together with the scale of cell-to-cell interactions at the alveolar tissue. In our analysis, we combined systematic model simulations with logistic regression analysis and decision trees to find genotypic-phenotypic signatures that explain differences in bacteria strain infectivity. According to our simulations, pneumococci benefit from a high dwelling probability and a high proliferation rate during the first stages of infection. In addition to this, the model predicts that during the very early phases of infection the bacterial capsule could be an impediment to the establishment of the alveolar infection because it impairs bacterial colonization. PMID:29868515
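The analysis pipeline, systematic model simulations followed by logistic regression and decision trees to extract genotypic-phenotypic signatures, can be illustrated on synthetic data. The sketch below encodes the paper's qualitative finding (high dwelling probability and proliferation favour establishment, the capsule impedes early colonization) as an assumed generative rule purely for demonstration; none of the numbers come from the study.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
n = 2000
# hypothetical per-simulation strain parameters
dwell = rng.uniform(0, 1, n)          # dwelling probability
prolif = rng.uniform(0, 1, n)         # proliferation rate
capsule = rng.uniform(0, 1, n)        # capsule thickness
# assumed rule echoing the paper's qualitative finding
logit = 4 * dwell + 3 * prolif - 2 * capsule - 2.5
established = (rng.uniform(0, 1, n) < 1 / (1 + np.exp(-logit))).astype(int)

X = np.column_stack([dwell, prolif, capsule])
lr = LogisticRegression().fit(X, established)
tree = DecisionTreeClassifier(max_depth=3).fit(X, established)
print("logistic coefficients:", np.round(lr.coef_[0], 2))
print("tree accuracy:", round(tree.score(X, established), 3))
```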
Microphysics in Multi-scale Modeling System with Unified Physics
NASA Technical Reports Server (NTRS)
Tao, Wei-Kuo
2012-01-01
Recently, a multi-scale modeling system with unified physics was developed at NASA Goddard. It consists of (1) a cloud-resolving model (the Goddard Cumulus Ensemble model, GCE), (2) a regional-scale model (the NASA-unified Weather Research and Forecasting model, WRF), (3) a coupled CRM and global model (the Goddard Multi-scale Modeling Framework, MMF), and (4) a land modeling system. The same microphysical processes, long- and short-wave radiative transfer, and land processes, together with the explicit cloud-radiation and cloud-land surface interactive processes, are applied throughout this multi-scale modeling system. The modeling system has been coupled with a multi-satellite simulator to use NASA high-resolution satellite data to identify the strengths and weaknesses of cloud and precipitation processes simulated by the model. In this talk, a review of developments and applications of the multi-scale modeling system will be presented. In particular, the microphysics development and its performance within the multi-scale modeling system will be discussed.
Advances in multi-scale modeling of solidification and casting processes
NASA Astrophysics Data System (ADS)
Liu, Baicheng; Xu, Qingyan; Jing, Tao; Shen, Houfa; Han, Zhiqiang
2011-04-01
The development of the aviation, energy and automobile industries requires advanced integrated product/process R&D systems that can optimize both the product and the process design. Integrated computational materials engineering (ICME) is a promising approach to fulfill this requirement and make product and process development efficient, economic, and environmentally friendly. Advances in multi-scale modeling of solidification and casting processes, including mathematical models as well as engineering applications, are presented in this paper. The dendrite morphology of magnesium and aluminum alloys during solidification is studied using phase-field and cellular automaton methods, and mathematical models of segregation in large steel ingots and microstructure models of unidirectionally solidified turbine blade castings are discussed. In addition, some engineering case studies are presented, including microstructure simulation of aluminum castings for the automobile industry, segregation of large steel ingots for the energy industry, and microstructure simulation of unidirectionally solidified turbine blade castings for the aviation industry.
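As a toy illustration of the cellular-automaton ingredient of such microstructure models, the sketch below grows solid grains from random nuclei on a 2D grid, with a capture probability that increases with the number of solid neighbours. This is far simpler than the coupled phase-field/cellular-automaton solidification models discussed in the paper; the grid size and capture probability are arbitrary assumptions.

```python
import numpy as np

# Minimal 2-D cellular-automaton "capture" model of grain growth from
# seeds: a liquid cell solidifies with a probability that grows with
# the number of solid neighbours (illustrative only).
rng = np.random.default_rng(2)
n, steps = 120, 60
solid = np.zeros((n, n), bool)
for _ in range(5):                            # random seed nuclei
    solid[rng.integers(n), rng.integers(n)] = True

for _ in range(steps):
    # count solid cells in the 8-neighbourhood (periodic boundaries)
    nb = sum(np.roll(np.roll(solid, di, 0), dj, 1)
             for di in (-1, 0, 1) for dj in (-1, 0, 1)) - solid
    p_capture = 0.12 * nb                     # more solid neighbours,
    solid |= rng.random((n, n)) < p_capture   # higher capture chance
print(f"solid fraction after {steps} steps: {solid.mean():.2f}")
```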
Multi-scale Material Parameter Identification Using LS-DYNA® and LS-OPT®
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stander, Nielen; Basudhar, Anirban; Basu, Ushnish
2015-06-15
Ever-tightening regulations on fuel economy and carbon emissions demand continual innovation in finding ways to reduce vehicle mass. Classical methods for computational mass reduction include sizing, shape and topology optimization. One of the few remaining options for weight reduction can be found in materials engineering and material design optimization. Apart from considering different types of materials by adding material diversity, an appealing option in automotive design is to engineer steel alloys for the purpose of reducing thickness while retaining the strength and ductility required for durability and safety. Such a project was proposed and is currently being executed under the auspices of the United States Automotive Materials Partnership (USAMP), funded by the Department of Energy. Under this program, new steel alloys (Third Generation Advanced High Strength Steel, or 3GAHSS) are being designed, tested and integrated with the remaining design variables of a benchmark vehicle finite element model. The principal phases identified in this project are (i) material identification, (ii) formability optimization and (iii) multi-disciplinary vehicle optimization. This paper serves as an introduction to the LS-OPT methodology and therefore mainly focuses on the first phase, namely an approach to integrating material identification using material models of different length scales. For this purpose, a multi-scale material identification strategy, consisting of a Crystal Plasticity (CP) material model and a Homogenized State Variable (SV) model, is discussed and demonstrated. The paper concludes with proposals for integrating the multi-scale methodology into the overall vehicle design.
Multi-scale and multi-physics model of the uterine smooth muscle with mechanotransduction.
Yochum, Maxime; Laforêt, Jérémy; Marque, Catherine
2018-02-01
Preterm labor is an important public health problem. However, the efficiency of the uterine muscle during labor is governed by complex and still poorly understood mechanisms. This work is a first step towards a model of the uterine muscle, including its electrical and mechanical components, to reach a better understanding of uterine synchronization. The model is used to investigate, by simulation, the possible role of mechanotransduction in the global synchronization of the uterus. Electrical diffusion explains the local propagation of contractile activity, while tissue stretching may play a role in the synchronization of distant parts of the uterine muscle. This work proposes a multi-physics (electrical, mechanical) and multi-scale (cell, tissue, whole uterus) model, which is applied to a realistic 3D uterus mesh. The model includes electrical components at different scales: generation of action potentials at the cell level and electrical diffusion at the tissue level. It then links these electrical events to the mechanical behavior at the cellular level (via the intracellular calcium concentration) by simulating the force generated by each active cell. It thus computes an estimate of the intrauterine pressure (IUP) by integrating the forces generated by each active cell at the whole-uterus level, as well as the stretching of the tissue (using a viscoelastic law for the tissue behavior). Finally, it includes stretch-activated channels (SACs) at the cellular level, which close the loop between the mechanical and the electrical behavior (mechanotransduction). The simulation of different activated regions of the uterus, which in this first proof-of-concept case are electrically isolated, permits the activation of inactive regions through the stretching (induced by the electrically active regions) computed at the whole-organ scale. This evidences the role of mechanotransduction in the global synchronization of the uterus. The results also show the effect on the IUP of the enhanced synchronization induced by the presence of SACs. This simplified model will be further improved in order to permit a better understanding of the global uterine synchronization occurring during efficient labor contractions.
2016-01-01
The problem of multi-scale modelling of damage development in a SiC ceramic fibre-reinforced SiC matrix ceramic composite tube is addressed, with the objective of demonstrating the ability of the finite-element microstructure meshfree (FEMME) model to introduce important aspects of the microstructure into a larger scale model of the component. These are particularly the location, orientation and geometry of significant porosity and the load-carrying capability and quasi-brittle failure behaviour of the fibre tows. The FEMME model uses finite-element and cellular automata layers, connected by a meshfree layer, to efficiently couple the damage in the microstructure with the strain field at the component level. Comparison is made with experimental observations of damage development in an axially loaded composite tube, studied by X-ray computed tomography and digital volume correlation. Recommendations are made for further development of the model to achieve greater fidelity to the microstructure. This article is part of the themed issue ‘Multiscale modelling of the structural integrity of composite materials’. PMID:27242308
Estimation of NH3 Bi-Directional Flux from Managed Agricultural Soils
The Community Multi-Scale Air Quality model (CMAQ v4.7) contains a bi-directional ammonia (NH3) flux option that computes emission and deposition of ammonia derived from commercial fertilizer via a temperature dependent parameterization of canopy and soil compensation ...
Yamaguchi, Satoshi; Inoue, Sayuri; Sakai, Takahiko; Abe, Tomohiro; Kitagawa, Haruaki; Imazato, Satoshi
2017-05-01
The objective of this study was to assess, in silico, the effect of silica nano-filler particle diameter in computer-aided design/manufacturing (CAD/CAM) composite resin (CR) blocks on multi-scale physical properties. CAD/CAM CR blocks consisting of silica nano-filler particles (20, 40, 60, 80, and 100 nm) and a Bis-GMA/TEGDMA matrix, with a filler volume content of 55.161%, were modeled. Young's moduli and Poisson's ratios of the blocks at the macro-scale were calculated by homogenization analysis. Macro-scale CAD/CAM CR blocks (3 × 3 × 3 mm) were modeled and compressive strengths were defined when the fracture loads exceeded 6075 N. Maximum principal stress (MPS) values of the nano-scale models were compared by localization analysis. As the filler size decreased, Young's moduli and compressive strength increased, while Poisson's ratios and MPS decreased. All parameters were significantly correlated with filler particle diameter (Pearson's correlation test, r = -0.949, 0.943, -0.951, 0.976, p < 0.05). The in silico multi-scale model established in this study demonstrates that the Young's moduli, Poisson's ratios, and compressive strengths of CAD/CAM CR blocks can be enhanced by loading silica nano-filler particles of smaller diameter; such blocks thus have the potential to increase fracture resistance.
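The paper's stiffness predictions come from finite-element homogenization; for orientation, a classical analytical estimate such as Halpin-Tsai shows the same qualitative stiffening with filler loading, though, unlike the full nano-scale model, it carries no explicit particle-size dependence. The moduli below are typical literature values for silica and dimethacrylate resin, assumed purely for illustration.

```python
def halpin_tsai(Em, Ef, vf, xi=2.0):
    """Halpin-Tsai estimate of composite modulus: Em, Ef are the matrix
    and filler moduli, vf the filler volume fraction, xi a geometry
    factor (xi=2 is a common choice for roughly spherical particles)."""
    eta = (Ef / Em - 1.0) / (Ef / Em + xi)
    return Em * (1.0 + xi * eta * vf) / (1.0 - eta * vf)

# illustrative: silica filler ~70 GPa, dimethacrylate matrix ~3 GPa,
# filler volume fraction ~0.55 as in the modeled blocks
print(f"E_composite ~ {halpin_tsai(3.0, 70.0, 0.55):.1f} GPa")
```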
A Decade-Long European-Scale Convection-Resolving Climate Simulation on GPUs
NASA Astrophysics Data System (ADS)
Leutwyler, D.; Fuhrer, O.; Ban, N.; Lapillonne, X.; Lüthi, D.; Schar, C.
2016-12-01
Convection-resolving models have proven to be very useful tools in numerical weather prediction and in climate research. However, due to their extremely demanding computational requirements, they have so far been limited to short simulations and/or small computational domains. Innovations in the supercomputing domain have led to new supercomputer designs that involve conventional multi-core CPUs and accelerators such as graphics processing units (GPUs). One of the first atmospheric models that has been fully ported to GPUs is the Consortium for Small-Scale Modeling weather and climate model COSMO. This new version allows us to expand the simulation domain to areas spanning continents and to extend the simulated period up to one decade. We present results from a decade-long, convection-resolving climate simulation over Europe using the GPU-enabled COSMO version on a computational domain with 1536x1536x60 grid points. The simulation is driven by the ERA-Interim reanalysis. The results illustrate how the approach allows for the representation of interactions between synoptic-scale and meso-scale atmospheric circulations at scales ranging from 1000 to 10 km. We discuss some of the advantages and prospects of using GPUs, and focus on the performance of the convection-resolving modeling approach on the European scale. Specifically, we investigate the organization of convective clouds and validate hourly rainfall distributions against various high-resolution data sets.
Multi-Dimensional Quantum Tunneling and Transport Using the Density-Gradient Model
NASA Technical Reports Server (NTRS)
Biegel, Bryan A.; Yu, Zhi-Ping; Ancona, Mario; Rafferty, Conor; Saini, Subhash (Technical Monitor)
1999-01-01
We show that quantum effects are likely to significantly degrade the performance of MOSFETs (metal oxide semiconductor field effect transistor) as these devices are scaled below 100 nm channel length and 2 nm oxide thickness over the next decade. A general and computationally efficient electronic device model including quantum effects would allow us to monitor and mitigate these effects. Full quantum models are too expensive in multi-dimensions. Using a general but efficient PDE solver called PROPHET, we implemented the density-gradient (DG) quantum correction to the industry-dominant classical drift-diffusion (DD) model. The DG model efficiently includes quantum carrier profile smoothing and tunneling in multi-dimensions and for any electronic device structure. We show that the DG model reduces DD model error from as much as 50% down to a few percent in comparison to thin oxide MOS capacitance measurements. We also show the first DG simulations of gate oxide tunneling and transverse current flow in ultra-scaled MOSFETs. The advantages of rapid model implementation using the PDE solver approach will be demonstrated, as well as the applicability of the DG model to any electronic device structure.
SciSpark's SRDD : A Scientific Resilient Distributed Dataset for Multidimensional Data
NASA Astrophysics Data System (ADS)
Palamuttam, R. S.; Wilson, B. D.; Mogrovejo, R. M.; Whitehall, K. D.; Mattmann, C. A.; McGibbney, L. J.; Ramirez, P.
2015-12-01
Remote sensing data and climate model output are multi-dimensional arrays of massive sizes locked away in heterogeneous file formats (HDF5/4, NetCDF 3/4) and metadata models (HDF-EOS, CF), making it difficult to perform multi-stage, iterative science processing since each stage requires writing and reading data to and from disk. We have developed SciSpark, a robust Big Data framework, that extends Apache™ Spark for scaling scientific computations. Apache Spark improves the map-reduce implementation in Apache™ Hadoop for parallel computing on a cluster, by emphasizing in-memory computation, "spilling" to disk only as needed, and relying on lazy evaluation. Central to Spark is the Resilient Distributed Dataset (RDD), an in-memory distributed data structure that extends the functional paradigm provided by the Scala programming language. However, RDDs are ideal for tabular or unstructured data, and not for highly dimensional data. The SciSpark project introduces the Scientific Resilient Distributed Dataset (sRDD), a distributed-computing array structure which supports iterative scientific algorithms for multidimensional data. SciSpark processes data stored in NetCDF and HDF files by partitioning them across time or space and distributing the partitions among a cluster of compute nodes. We show the usability and extensibility of SciSpark by implementing distributed algorithms for geospatial operations on large collections of multi-dimensional grids. In particular we address the problem of scaling an automated method for finding Mesoscale Convective Complexes. SciSpark provides a tensor interface to support the pluggability of different matrix libraries. We evaluate the performance of various matrix libraries, such as Nd4j™ and Breeze™, in distributed pipelines. We detail the architecture and design of SciSpark, our efforts to integrate climate science algorithms, and the parallel ingest and partitioning (sharding) of A-Train satellite observations from model grids. These solutions are encompassed in SciSpark, an open-source software framework for distributed computing on scientific data.
2013-01-01
fabricated today are based on polymer matrix composites containing Kevlarw KM2 reinforcements , the present work will deal with generic PPTA fibers . In...Multi-length scale enriched continuum-level material model for Kevlarw- fiber reinforced polymer-matrix composites”, Journal of Materials...mechanical transverse behavior of p-phenylene terephthalamide (PPTA) fibers Purpose – A series of all-atom molecular-level computational analyses is
Multi-scale gyrokinetic simulations of an Alcator C-Mod, ELM-y H-mode plasma
NASA Astrophysics Data System (ADS)
Howard, N. T.; Holland, C.; White, A. E.; Greenwald, M.; Rodriguez-Fernandez, P.; Candy, J.; Creely, A. J.
2018-01-01
High fidelity, multi-scale gyrokinetic simulations capable of capturing both ion-scale (k_θ ρ_s ~ O(1.0)) and electron-scale (k_θ ρ_e ~ O(1.0)) turbulence were performed in the core of an Alcator C-Mod ELM-y H-mode discharge which exhibits reactor-relevant characteristics. These simulations, performed with all experimental inputs and a realistic ion to electron mass ratio ((m_i/m_e)^(1/2) = 60.0), provide insight into the physics fidelity that may be needed for accurate simulation of the core of fusion reactor discharges. Three multi-scale simulations and a series of separate ion- and electron-scale simulations performed using the GYRO code (Candy and Waltz 2003 J. Comput. Phys. 186 545) are presented. As with earlier multi-scale results in L-mode conditions (Howard et al 2016 Nucl. Fusion 56 014004), both ion-scale and multi-scale simulation results are compared with experimentally inferred ion and electron heat fluxes, as well as the measured values of electron incremental thermal diffusivities, which are indicative of the experimental electron temperature profile stiffness. Consistent with the L-mode results, cross-scale coupling is found to play an important role in the simulation of these H-mode conditions. Extremely stiff ion-scale transport is observed in these high-performance conditions, which likely plays an important role in the reproduction of measurements of perturbative transport. These results provide important insight into the role of multi-scale plasma turbulence in the core of reactor-relevant plasmas and establish important constraints on the fidelity of models needed for predictive simulations.
Modeling and Characterization of Near-Crack-Tip Plasticity from Micro- to Nano-Scales
NASA Technical Reports Server (NTRS)
Glaessgen, Edward H.; Saether, Erik; Hochhalter, Jacob; Smith, Stephen W.; Ransom, Jonathan B.; Yamakov, Vesselin; Gupta, Vipul
2010-01-01
Methodologies for understanding the plastic deformation mechanisms related to crack propagation at the nano-, meso- and micro-length scales are being developed. These efforts include the development and application of several computational methods including atomistic simulation, discrete dislocation plasticity, strain gradient plasticity and crystal plasticity; and experimental methods including electron backscattered diffraction and video image correlation. Additionally, methodologies for multi-scale modeling and characterization that can be used to bridge the relevant length scales from nanometers to millimeters are being developed. The paper focuses on the discussion of newly developed methodologies in these areas and their application to understanding damage processes in aluminum and its alloys.
Modeling and Characterization of Near-Crack-Tip Plasticity from Micro- to Nano-Scales
NASA Technical Reports Server (NTRS)
Glaessgen, Edward H.; Saether, Erik; Hochhalter, Jacob; Smith, Stephen W.; Ransom, Jonathan B.; Yamakov, Vesselin; Gupta, Vipul
2011-01-01
Methodologies for understanding the plastic deformation mechanisms related to crack propagation at the nano-, meso- and micro-length scales are being developed. These efforts include the development and application of several computational methods including atomistic simulation, discrete dislocation plasticity, strain gradient plasticity and crystal plasticity; and experimental methods including electron backscattered diffraction and video image correlation. Additionally, methodologies for multi-scale modeling and characterization that can be used to bridge the relevant length scales from nanometers to millimeters are being developed. The paper focuses on the discussion of newly developed methodologies in these areas and their application to understanding damage processes in aluminum and its alloys.
Computational Challenges in the Analysis of Petrophysics Using Microtomography and Upscaling
NASA Astrophysics Data System (ADS)
Liu, J.; Pereira, G.; Freij-Ayoub, R.; Regenauer-Lieb, K.
2014-12-01
Microtomography provides detailed 3D internal structures of rocks at micro- to tens-of-nanometer resolution and is quickly turning into a new technology for studying the petrophysical properties of materials. An important step is the upscaling of these properties, since imaging at micron or sub-micron resolution can only be performed on samples of millimeter scale or smaller. We present here a recently developed computational workflow for the analysis of microstructures, including the upscaling of material properties. Computations of properties are first performed using conventional material science simulations at the micro- to nano-scale. The subsequent upscaling of these properties is done by a novel renormalization procedure based on percolation theory. We have tested the workflow using different rock samples and biological and food science materials. We have also applied the technique to high-resolution time-lapse synchrotron CT scans. In this contribution we focus on the computational challenges that arise from the big-data problem of analyzing petrophysical properties and their subsequent upscaling. We discuss the following challenges: 1) Characterization of microtomography for extremely large data sets - our current capability. 2) Computational fluid dynamics simulations at the pore scale for permeability estimation - methods, computing cost and accuracy. 3) Solid mechanical computations at the pore scale for estimating elasto-plastic properties - computational stability, cost, and efficiency. 4) Extracting critical exponents from derivative models for scaling laws - models, finite element meshing, and accuracy. Significant progress in each of these challenges is necessary to transform microtomography from a research problem into a robust computational big-data tool for multi-scale scientific and engineering problems.
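As a minimal illustration of the recursive upscaling idea, and not the percolation-based renormalization actually used in the workflow, the sketch below coarse-grains a log-normal permeability field by repeatedly replacing each 2x2 block with its geometric mean; the field statistics are arbitrary assumptions.

```python
import numpy as np

def upscale_once(k):
    """Replace each 2x2 block of the permeability field by its
    geometric mean (a crude stand-in for percolation-based
    renormalisation)."""
    a = k[0::2, 0::2] * k[1::2, 0::2] * k[0::2, 1::2] * k[1::2, 1::2]
    return a ** 0.25

rng = np.random.default_rng(3)
k = 10 ** rng.normal(-14, 1, size=(256, 256))   # log-normal field, m^2
while k.shape[0] > 1:                           # 256 -> 128 -> ... -> 1
    k = upscale_once(k)
print(f"upscaled permeability ~ {k[0, 0]:.2e} m^2")
```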
Towards large scale multi-target tracking
NASA Astrophysics Data System (ADS)
Vo, Ba-Ngu; Vo, Ba-Tuong; Reuter, Stephan; Lam, Quang; Dietmayer, Klaus
2014-06-01
Multi-target tracking is intrinsically an NP-hard problem, and the complexity of multi-target tracking solutions usually does not scale gracefully with problem size. Multi-target tracking for on-line applications involving a large number of targets is extremely challenging. This article demonstrates the capability of the random finite set approach to provide large-scale multi-target tracking algorithms. In particular, it is shown that an approximate filter known as the labeled multi-Bernoulli filter can simultaneously track one thousand five hundred targets in clutter on a standard laptop computer.
Benchmarking NWP Kernels on Multi- and Many-core Processors
NASA Astrophysics Data System (ADS)
Michalakes, J.; Vachharajani, M.
2008-12-01
Increased computing power for weather, climate, and atmospheric science has provided direct benefits for defense, agriculture, the economy, the environment, and public welfare and convenience. Today, very large clusters with many thousands of processors are allowing scientists to move forward with simulations of unprecedented size. But time-critical applications such as real-time forecasting or climate prediction need strong scaling: faster nodes and processors, not more of them. Moreover, the need for good cost-performance has never been greater, both in terms of performance per watt and per dollar. For these reasons, the new generations of multi- and many-core processors being mass-produced for commercial IT and "graphical computing" (video games) are being scrutinized for their ability to exploit the abundant fine-grain parallelism in atmospheric models. We present results of our work to date identifying key computational kernels within the dynamics and physics of a large community NWP model, the Weather Research and Forecast (WRF) model. We benchmark and optimize these kernels on several different multi- and many-core processors. The goals are to (1) characterize and model performance of the kernels in terms of computational intensity, data parallelism, memory bandwidth pressure, memory footprint, etc., (2) enumerate and classify effective strategies for coding and optimizing for these new processors, (3) assess difficulties and opportunities for tool or higher-level language support, and (4) establish a continuing set of kernel benchmarks that can be used to measure and compare the effectiveness of current and future designs of multi- and many-core processors for weather and climate applications.
NASA Astrophysics Data System (ADS)
Du, Wenbo
A common attribute of electric-powered aerospace vehicles and systems such as unmanned aerial vehicles, hybrid- and fully-electric aircraft, and satellites is that their performance is usually limited by the energy density of their batteries. Although lithium-ion batteries offer distinct advantages such as high voltage and low weight over other battery technologies, they are a relatively new development, and thus significant gaps in the understanding of the physical phenomena that govern battery performance remain. As a result of this limited understanding, batteries must often undergo a cumbersome design process involving many manual iterations based on rules of thumb and ad-hoc design principles. A systematic study of the relationship between operational, geometric, morphological, and material-dependent properties and performance metrics such as energy and power density is non-trivial due to the multiphysics, multiphase, and multiscale nature of the battery system. To address these challenges, two numerical frameworks are established in this dissertation: a process for analyzing and optimizing several key design variables using surrogate modeling tools and gradient-based optimizers, and a multi-scale model that incorporates more detailed microstructural information into the computationally efficient but limited macro-homogeneous model. In the surrogate modeling process, multi-dimensional maps for the cell energy density with respect to design variables such as the particle size, ion diffusivity, and electron conductivity of the porous cathode material are created. A combined surrogate- and gradient-based approach is employed to identify optimal values for cathode thickness and porosity under various operating conditions, and quantify the uncertainty in the surrogate model. The performance of multiple cathode materials is also compared by defining dimensionless transport parameters. The multi-scale model makes use of detailed 3-D FEM simulations conducted at the particle-level. A monodisperse system of ellipsoidal particles is used to simulate the effective transport coefficients and interfacial reaction current density within the porous microstructure. Microscopic simulation results are shown to match well with experimental measurements, while differing significantly from homogenization approximations used in the macroscopic model. Global sensitivity analysis and surrogate modeling tools are applied to couple the two length scales and complete the multi-scale model.
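The combined surrogate- and gradient-based approach can be illustrated in a few lines: sample an "expensive" model on a small design of experiments, fit a cheap response surface, then run a gradient-based optimizer on the surrogate. The energy-density function below and its optimum are hypothetical stand-ins for the electrochemical model; none of the values are from the dissertation.

```python
import numpy as np
from scipy.optimize import minimize

def energy_density(x):
    """Hypothetical cell energy density vs (thickness, porosity),
    standing in for an expensive electrochemical simulation.
    Negated so that minimizing means maximizing energy density."""
    thickness, porosity = x
    return -(-(thickness - 0.6)**2 - 2.0 * (porosity - 0.35)**2 + 1.0)

# 1) sample the 'expensive' model on a small design of experiments
rng = np.random.default_rng(4)
X = rng.uniform(0.1, 1.0, size=(30, 2))
y = np.array([energy_density(x) for x in X])

# 2) fit a quadratic response surface by least squares
def basis(x):
    t, p = x
    return np.array([1, t, p, t*t, p*p, t*p])
coeffs, *_ = np.linalg.lstsq(np.array([basis(x) for x in X]), y,
                             rcond=None)
surrogate = lambda x: basis(x) @ coeffs

# 3) gradient-based search on the cheap surrogate
res = minimize(surrogate, x0=[0.5, 0.5], bounds=[(0.1, 1.0)] * 2)
print("optimal (thickness, porosity) ~", np.round(res.x, 3))
```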
Multi-scale computational study of the mechanical regulation of cell mitotic rounding in epithelia
Xu, Zhiliang; Zartman, Jeremiah J.; Alber, Mark
2017-01-01
Mitotic rounding during cell division is critical for preventing daughter cells from inheriting an abnormal number of chromosomes, a condition that occurs frequently in cancer cells. To achieve robust mitotic rounding in epithelial tissues, where most cancers initiate, cells must significantly expand their apical area and transition from a polygonal to a circular apical shape. However, how cells mechanically regulate robust mitotic rounding within packed tissues is unknown. Here, we analyze mitotic rounding using a newly developed multi-scale subcellular element computational model that is calibrated using experimental data. Novel biologically relevant features of the model include separate representations of sub-cellular components, including the apical membrane and cytoplasm of the cell at the tissue scale, as well as a detailed description of cell properties during mitotic rounding. Regression analysis of predictive model simulation results reveals the relative contributions of osmotic pressure, cell-cell adhesion, and cortical stiffness to mitotic rounding. Mitotic area expansion is largely driven by regulation of cytoplasmic pressure. Surprisingly, mitotic shape roundness within physiological ranges is most sensitive to variation in cell-cell adhesivity and stiffness. An understanding of how perturbed mechanical properties impact mitotic rounding has important potential implications for, among other things, how tumors progressively become more genetically unstable, through increased chromosomal aneuploidy, and more aggressive. PMID:28531187
Application of process tomography in gas-solid fluidised beds in different scales and structures
NASA Astrophysics Data System (ADS)
Wang, H. G.; Che, H. Q.; Ye, J. M.; Tu, Q. Y.; Wu, Z. P.; Yang, W. Q.; Ocone, R.
2018-04-01
Gas-solid fluidised beds are commonly used in particle-related processes, e.g. coal combustion and gasification in the power industry, and coating and granulation in the pharmaceutical industry. Because operational efficiency depends on the gas-solid flow characteristics, it is necessary to investigate the flow behaviour. This paper is about the application of process tomography, including electrical capacitance tomography (ECT) and microwave tomography (MWT), to multi-scale gas-solid fluidisation processes in the pharmaceutical and power industries. This is the first time that both ECT and MWT have been applied for this purpose across multiple scales and complex structures. To evaluate the sensor design and image reconstruction, and to investigate the effects of sensor structure and dimension on image quality, a normalised sensitivity coefficient is introduced. In addition, computational fluid dynamics (CFD) analysis based on a computational particle fluid dynamics (CPFD) model and a two-fluid model (TFM) is used. Part of the CPFD-TFM simulation results are compared with and validated against experimental results from ECT and/or MWT. By both simulation and experiment, the complex hydrodynamic flow behaviour at different scales is analysed. Time-series capacitance data are analysed in both the time and frequency domains to reveal the flow characteristics.
NASA Astrophysics Data System (ADS)
Vo, Kiet T.; Sowmya, Arcot
A directional multi-scale modeling scheme based on wavelet and contourlet transforms is employed to describe HRCT lung image textures for classifying four diffuse lung disease patterns: normal, emphysema, ground glass opacity (GGO), and honeycombing. Generalized Gaussian density parameters are used to represent the detail sub-band features obtained by the wavelet and contourlet transforms. In addition, support vector machines (SVMs), which show excellent performance in a variety of pattern classification problems, are used as the classifier. The method is tested on a collection of 89 slices from 38 patients, each slice of size 512x512, 16 bits/pixel, in DICOM format. The dataset contains 70,000 ROIs from those slices marked by experienced radiologists. We employ this technique at different wavelet and contourlet transform scales for diffuse lung disease classification. The technique presented here achieves a best overall sensitivity of 93.40% and specificity of 98.40%.
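A minimal sketch of this feature pipeline is given below: wavelet sub-band decomposition, generalized Gaussian density (GGD) parameters estimated per sub-band by moment matching, and an SVM classifier. The contourlet branch, ROI extraction, and all tuning choices are omitted; the moment-matching root bracket assumes leptokurtic sub-bands, as is typical for wavelet detail coefficients.

```python
# Hedged sketch of the GGD-feature + SVM pipeline (details assumed, not the
# authors' exact implementation).
import numpy as np
import pywt
from scipy.special import gamma
from scipy.optimize import brentq
from sklearn.svm import SVC

def ggd_params(coeffs):
    c = coeffs.ravel()
    m1, m2 = np.mean(np.abs(c)), np.mean(c**2)
    ratio = m1**2 / m2   # = Gamma(2/b)^2 / (Gamma(1/b) * Gamma(3/b))
    f = lambda b: gamma(2 / b) ** 2 / (gamma(1 / b) * gamma(3 / b)) - ratio
    beta = brentq(f, 0.05, 5.0)                   # shape (bracket assumes ratio < ~0.72)
    alpha = m1 * gamma(1 / beta) / gamma(2 / beta)  # scale
    return alpha, beta

def roi_features(roi, wavelet="db4", levels=3):
    coeffs = pywt.wavedec2(roi, wavelet, level=levels)
    return np.array([p for detail in coeffs[1:] for sb in detail for p in ggd_params(sb)])

# Usage: X = np.stack([roi_features(r) for r in rois]); SVC(kernel="rbf").fit(X, labels)
```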
DOE Office of Scientific and Technical Information (OSTI.GOV)
McNab, W; Ezzedine, S; Detwiler, R
2007-02-26
Industrial organic solvents such as trichloroethylene (TCE) and tetrachloroethylene (PCE) constitute a principal class of groundwater contaminants. Cleanup of groundwater plume source areas associated with these compounds is problematic, in part, because the compounds often exist in the subsurface as dense nonaqueous phase liquids (DNAPLs). Ganglia (or 'blobs') of DNAPL serve as persistent sources of contaminants that are difficult to locate and remediate (e.g. Fenwick and Blunt, 1998). Current understanding of the physical and chemical processes associated with dissolution of DNAPLs in the subsurface is incomplete and yet is critical for evaluating long-term behavior of contaminant migration, groundwater cleanup, and the efficacy of source area cleanup technologies. As such, a goal of this project has been to contribute to this critical understanding by investigating the multi-phase, multi-component physics of DNAPL dissolution using state-of-the-art experimental and computational techniques. Through this research, we have explored efficient and accurate conceptual and numerical models for source area contaminant transport that can be used to better inform the modeling of source area contaminants, including those at the LLNL Superfund sites, to re-evaluate existing remediation technologies, and to inspire or develop new remediation strategies. The problem of DNAPL dissolution in natural porous media must be viewed in the context of several scales (Khachikian and Harmon, 2000), including the microscopic level, at which capillary forces, viscous forces, and gravity/buoyancy forces are manifested at the scale of individual pores (Wilson and Conrad, 1984; Chatzis et al., 1988); the mesoscale, where dissolution rates are strongly influenced by the local hydrodynamics; and the field scale. Historically, the physico-chemical processes associated with DNAPL dissolution have been addressed through the use of lumped mass transfer coefficients, which attempt to quantify the dissolution rate in response to local dissolved-phase concentrations distributed across the source area using a volume-averaging approach (Figure 1). The fundamental problem with the lumped mass transfer parameter is that its value is typically derived empirically through column-scale experiments that combine the effects of pore-scale flow, diffusion, and pore-scale geometry in a manner that does not provide a robust theoretical basis for upscaling. In our view, upscaling processes from the pore scale to the field scale requires new computational approaches (Held and Celia, 2001) that are directly linked to experimental studies of dissolution at the pore scale. As such, our investigation has been multi-pronged, combining theory, experiments, numerical modeling, new data analysis approaches, and a synthesis of previous studies (e.g. Glass et al., 2001; Keller et al., 2002) aimed at quantifying how the mechanisms controlling dissolution at the pore scale control the long-term dissolution of source areas at larger scales.
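For contrast with the pore-scale program described above, the lumped mass-transfer description it criticizes fits in a few lines. The sketch below integrates a source-zone mass balance where dissolution is driven by a single lumped coefficient; all parameter values and the volumetric bookkeeping are illustrative assumptions.

```python
# Sketch of a lumped mass-transfer dissolution model (illustrative parameters).
# Mass is tracked per unit aquifer volume (mg/L) so that k_la*(C_sat - C) can
# feed both the DNAPL depletion and the aqueous concentration directly.
import numpy as np
from scipy.integrate import solve_ivp

C_SAT, KLA, FLUSH = 1100.0, 0.05, 0.02   # mg/L, 1/day, 1/day (assumed values)

def rhs(t, y):
    mass, conc = y                        # DNAPL mass, aqueous concentration
    dissolution = KLA * max(C_SAT - conc, 0.0) * (mass > 0)
    return [-dissolution, dissolution - FLUSH * conc]

sol = solve_ivp(rhs, (0.0, 365.0), [5.0e4, 0.0], max_step=1.0)
print("remaining DNAPL mass after 1 year:", sol.y[0, -1])
```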
2016-01-01
Multi-criteria decision-making (MCDM) can be formally implemented by various methods. This study compares the suitability of four selected MCDM methods, namely WPM, TOPSIS, VIKOR, and PROMETHEE, for future applications in agent-based computational economic (ACE) models of larger scale (i.e., over 10 000 agents in one geographical region). These four MCDM methods were selected according to their appropriateness for computational processing in ACE applications. Tests of the selected methods were conducted on four hardware configurations. For each method, 100 tests were performed, which represented one testing iteration. With four testing iterations conducted on each hardware setting and separate testing of all configurations with the -server parameter activated or deactivated, altogether 12,800 data points were collected and subsequently analyzed. An illustrative decision-making scenario was used that allows mutual comparison of all of the selected decision-making methods. Our test results suggest that although all methods are convenient and can be used in practice, the VIKOR method completed the tests with the best results and thus can be recommended as the most suitable for simulations of large-scale agent-based models. PMID:27806061
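For readers unfamiliar with the winning method, a compact VIKOR implementation is sketched below; the compromise-solution acceptability checks and tie-breaking rules are omitted, and the example matrix and weights are invented for illustration.

```python
# Minimal VIKOR sketch (assumes no criterion is constant across alternatives).
import numpy as np

def vikor(X, weights, benefit, v=0.5):
    """X: (n_alternatives, n_criteria); benefit: bool mask, True = maximize."""
    X = np.asarray(X, dtype=float)
    best = np.where(benefit, X.max(0), X.min(0))
    worst = np.where(benefit, X.min(0), X.max(0))
    d = weights * (best - X) / (best - worst)   # normalized weighted regret
    S, R = d.sum(1), d.max(1)                   # group utility, individual regret
    Q = (v * (S - S.min()) / (S.max() - S.min())
         + (1 - v) * (R - R.min()) / (R.max() - R.min()))
    return Q                                    # lower Q = better ranked

Q = vikor([[250, 16, 12], [200, 16, 8], [300, 32, 16]],
          weights=np.array([0.5, 0.3, 0.2]),
          benefit=np.array([False, True, True]))   # cost, benefit, benefit
print(Q.argsort())   # alternatives ranked best-first
```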
Single Image Super-Resolution Based on Multi-Scale Competitive Convolutional Neural Network
Du, Xiaofeng; Qu, Xiaobo; He, Yifan; Guo, Di
2018-01-01
Deep convolutional neural networks (CNNs) are successful in single-image super-resolution. Traditional CNNs are limited to exploit multi-scale contextual information for image reconstruction due to the fixed convolutional kernel in their building modules. To restore various scales of image details, we enhance the multi-scale inference capability of CNNs by introducing competition among multi-scale convolutional filters, and build up a shallow network under limited computational resources. The proposed network has the following two advantages: (1) the multi-scale convolutional kernel provides the multi-context for image super-resolution, and (2) the maximum competitive strategy adaptively chooses the optimal scale of information for image reconstruction. Our experimental results on image super-resolution show that the performance of the proposed network outperforms the state-of-the-art methods. PMID:29509666
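A plausible reading of the competitive multi-scale block is sketched below as an element-wise maximum over parallel convolutions with different kernel sizes; this maxout-style interpretation is inferred from the abstract, not taken from the authors' exact architecture.

```python
# Sketch of a competitive multi-scale convolution block (assumed structure).
import torch
import torch.nn as nn

class MultiScaleCompetitive(nn.Module):
    def __init__(self, in_ch, out_ch):
        super().__init__()
        # Parallel branches with different receptive fields (3x3, 5x5, 7x7).
        self.branches = nn.ModuleList(
            nn.Conv2d(in_ch, out_ch, k, padding=k // 2) for k in (3, 5, 7))

    def forward(self, x):
        # Competition: per pixel and channel, keep the strongest scale response.
        return torch.stack([b(x) for b in self.branches], dim=0).max(dim=0).values

y = MultiScaleCompetitive(1, 32)(torch.randn(1, 1, 48, 48))
print(y.shape)   # torch.Size([1, 32, 48, 48])
```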
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dombroski, M; Melius, C; Edmunds, T
2008-09-24
This study uses the Multi-scale Epidemiologic Simulation and Analysis (MESA) system developed for foreign animal diseases to assess consequences of nationwide human infectious disease outbreaks. A literature review identified the state of the art in both small-scale regional models and large-scale nationwide models and characterized key aspects of a nationwide epidemiological model. The MESA system offers computational advantages over existing epidemiological models and enables a broader array of stochastic analyses of model runs to be conducted because of those computational advantages. However, it has only been demonstrated on foreign animal diseases. This paper applied the MESA modeling methodology to human epidemiology. The methodology divided 2000 US Census data at the census tract level into school-bound children, work-bound workers, elderly, and stay-at-home individuals. The model simulated mixing among these groups by incorporating schools, workplaces, households, and long-distance travel via airports. A baseline scenario with fixed input parameters was run for a nationwide influenza outbreak using relatively simple social distancing countermeasures. Analysis from the baseline scenario showed one of three possible results: (1) the outbreak burned itself out before it had a chance to spread regionally, (2) the outbreak spread regionally and lasted a relatively long time, although constrained geography enabled it to eventually be contained without affecting a disproportionately large number of people, or (3) the outbreak spread through air travel and lasted a long time with unconstrained geography, becoming a nationwide pandemic. These results are consistent with empirical influenza outbreak data. The results showed that simply scaling up a regional small-scale model is unlikely to account for all the complex variables and their interactions involved in a nationwide outbreak. There are several limitations of the methodology that should be explored in future work, including validating the model against reliable historical disease data; improving contact rates, spread methods, and disease parameters through discussions with epidemiological experts; and incorporating realistic behavioral assumptions.
Sigala, Rodrigo; Haufe, Sebastian; Roy, Dipanjan; Dinse, Hubert R.; Ritter, Petra
2014-01-01
During the past two decades growing evidence indicates that brain oscillations in the alpha band (~10 Hz) not only reflect an “idle” state of cortical activity, but also take a more active role in the generation of complex cognitive functions. A recent study shows that more than 60% of the observed inter-subject variability in perceptual learning can be ascribed to ongoing alpha activity. This evidence indicates a significant role of alpha oscillations in perceptual learning and hence motivates exploration of the potential underlying mechanisms. It is therefore the purpose of this review to highlight existing evidence that ascribes intrinsic alpha oscillations a role in shaping our ability to learn. In the review, we disentangle the alpha rhythm into different neural signatures that control information processing within individual functional building blocks of perceptual learning. We further highlight computational studies that shed light on potential mechanisms regarding how alpha oscillations may modulate information transfer and connectivity changes relevant for learning. To enable testing of those model-based hypotheses, we emphasize the need for multidisciplinary approaches combining assessment of behavior and multi-scale neuronal activity, active modulation of ongoing brain states, and computational modeling to reveal the mathematical principles of the complex neuronal interactions. In particular we highlight the relevance of multi-scale modeling frameworks such as the one currently being developed by “The Virtual Brain” project. PMID:24772077
NASA Astrophysics Data System (ADS)
Sinha, Neeraj; Zambon, Andrea; Ott, James; Demagistris, Michael
2015-06-01
Driven by the continuing rapid advances in high-performance computing, multi-dimensional high-fidelity modeling is an increasingly reliable predictive tool capable of providing valuable physical insight into complex post-detonation reacting flow fields. Utilizing a series of test cases featuring blast waves interacting with combustible dispersed clouds in a small-scale test setup under well-controlled conditions, the predictive capabilities of a state-of-the-art code are demonstrated and validated. Leveraging physics-based, first-principles models and solving large systems of equations on highly-resolved grids, the combined effects of finite-rate/multi-phase chemical processes (including thermal ignition), turbulent mixing, and shock interactions are captured across the spectrum of relevant time scales and length scales. Since many scales of motion are generated in a post-detonation environment, even if the initial ambient conditions are quiescent, turbulent mixing plays a major role in the fireball afterburning as well as in the dispersion, mixing, ignition, and burn-out of combustible clouds in its vicinity. Validating these capabilities at the small scale is critical to establishing a reliable predictive tool applicable to more complex and large-scale geometries of practical interest.
3D virtual human atria: A computational platform for studying clinical atrial fibrillation.
Aslanidi, Oleg V; Colman, Michael A; Stott, Jonathan; Dobrzynski, Halina; Boyett, Mark R; Holden, Arun V; Zhang, Henggui
2011-10-01
Despite a vast amount of experimental and clinical data on the underlying ionic, cellular and tissue substrates, the mechanisms of common atrial arrhythmias (such as atrial fibrillation, AF) arising from the functional interactions at the whole atria level remain unclear. Computational modelling provides a quantitative framework for integrating such multi-scale data and understanding the arrhythmogenic behaviour that emerges from the collective spatio-temporal dynamics in all parts of the heart. In this study, we have developed a multi-scale hierarchy of biophysically detailed computational models for the human atria: the 3D virtual human atria. First, diffusion tensor MRI reconstruction of the tissue geometry and fibre orientation in the human sinoatrial node (SAN) and surrounding atrial muscle was integrated into the 3D model of the whole atria dissected from the Visible Human dataset. The anatomical models were combined with the heterogeneous atrial action potential (AP) models, and used to simulate AP conduction in the human atria under various conditions: SAN pacemaking and atrial activation in the normal rhythm, break-down of regular AP wave-fronts during rapid atrial pacing, and the genesis of multiple re-entrant wavelets characteristic of AF. Contributions of different properties of the tissue to mechanisms of the normal rhythm and arrhythmogenesis were investigated. Notably, the simulations showed that tissue heterogeneity caused the break-down of the normal AP wave-fronts at rapid pacing rates, which initiated a pair of re-entrant spiral waves, and tissue anisotropy resulted in a further break-down of the spiral waves into multiple meandering wavelets characteristic of AF. The 3D virtual atria model itself was incorporated into the torso model to simulate the body surface ECG patterns in the normal and arrhythmic conditions. Therefore, a state-of-the-art computational platform has been developed, which can be used for studying multi-scale electrical phenomena during atrial conduction and AF arrhythmogenesis. Results of such simulations can be directly compared with electrophysiological and endocardial mapping data, as well as clinical ECG recordings. The virtual human atria can provide in-depth insights into 3D excitation propagation processes within atrial walls of a whole heart in vivo, which is beyond the current technical capabilities of experimental or clinical set-ups. Copyright © 2011 Elsevier Ltd. All rights reserved.
An interactive display system for large-scale 3D models
NASA Astrophysics Data System (ADS)
Liu, Zijian; Sun, Kun; Tao, Wenbing; Liu, Liman
2018-04-01
With the improvement of 3D reconstruction theory and the rapid development of computer hardware technology, reconstructed 3D models are growing in scale and complexity. Models with tens of thousands of 3D points or triangular meshes are common in practical applications. Due to storage and computing power limitations, it is difficult for common 3D display software, such as MeshLab, to achieve real-time display of and interaction with large-scale 3D models. In this paper, we propose a display system for large-scale 3D scene models. We construct the LOD (Levels of Detail) model of the reconstructed 3D scene in advance, and then use an out-of-core, view-dependent, multi-resolution rendering scheme to realize real-time display of the large-scale 3D model. With the proposed method, our display system is able to render in real time while roaming through the reconstructed scene, and 3D camera poses can also be displayed. Furthermore, memory consumption can be significantly decreased via an internal and external memory exchange mechanism, so that it is possible to display a large-scale reconstructed scene with millions of 3D points or triangular meshes on a regular PC with only 4GB of RAM.
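The core of such a view-dependent multi-resolution scheme is the LOD selection test. The sketch below refines an LOD tree node only while its projected geometric error exceeds a pixel tolerance; the node layout, projection formula, and out-of-core loading are simplifying assumptions, not this system's implementation.

```python
# Sketch of view-dependent LOD selection over a precomputed LOD tree.
import math

def select_lod(node, camera_pos, fov_px, tol_px=1.0, selected=None):
    """node: dict with 'center' (xyz tuple), 'geom_error', 'children' (list)."""
    if selected is None:
        selected = []
    dist = max(math.dist(node["center"], camera_pos), 1e-6)
    screen_err = node["geom_error"] * fov_px / dist   # crude pinhole projection
    if screen_err <= tol_px or not node["children"]:
        selected.append(node)                         # coarse enough: draw as-is
    else:
        for child in node["children"]:                # refine this subtree
            select_lod(child, camera_pos, fov_px, tol_px, selected)
    return selected
```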
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Judith C.
The purpose of this grant is to develop multi-scale theoretical methods to describe the nanoscale oxidation of metal thin films, as the PI (Yang) has extensive previous experience in the experimental elucidation of the initial stages of Cu oxidation, primarily by in situ transmission electron microscopy methods. Through the use and development of computational tools at varying length (and time) scales, from atomistic quantum mechanical calculations and force-field mesoscale simulations to large-scale kinetic Monte Carlo (KMC) modeling, the fundamental underpinnings of the initial stages of Cu oxidation have been elucidated. The development of computational modeling tools allows for accelerated materials discovery. The theoretical tools developed from this program impact a wide range of technologies that depend on surface reactions, including corrosion, catalysis, and nanomaterials fabrication.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Li; He, Ya-Ling; Kang, Qinjun
2013-12-15
A coupled (hybrid) simulation strategy spatially combining the finite volume method (FVM) and the lattice Boltzmann method (LBM), called CFVLBM, is developed to simulate coupled multi-scale multi-physicochemical processes. In the CFVLBM, the computational domain of multi-scale problems is divided into two sub-domains, i.e., an open, free fluid region and a region filled with porous materials. The FVM and LBM are used for these two regions, respectively, with information exchanged at the interface between the two sub-domains. A general reconstruction operator (RO) is proposed to derive the distribution functions in the LBM from the corresponding macro scalar, the governing equation of which obeys the convection–diffusion equation. The CFVLBM and the RO are validated in several typical physicochemical problems and then are applied to simulate complex multi-scale coupled fluid flow, heat transfer, mass transport, and chemical reaction in a wall-coated micro reactor. The maximum ratio of the grid size between the FVM and LBM regions is explored and discussed. -- Highlights: •A coupled simulation strategy for simulating multi-scale phenomena is developed. •Finite volume method and lattice Boltzmann method are coupled. •A reconstruction operator is derived to transfer information at the sub-domains interface. •Coupled multi-scale multiple physicochemical processes in micro reactor are simulated. •Techniques to save computational resources and improve the efficiency are discussed.
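To make the reconstruction idea concrete, the sketch below rebuilds D2Q9 distribution functions for a convection-diffusion scalar from the macroscopic field as the local equilibrium, one common form such an operator can take; non-equilibrium corrections and the paper's specific RO are not reproduced.

```python
# Sketch of an equilibrium-based reconstruction at an FVM/LBM interface
# (D2Q9 lattice, lattice units; higher-order corrections omitted).
import numpy as np

W = np.array([4/9] + [1/9] * 4 + [1/36] * 4)
E = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
CS2 = 1.0 / 3.0   # lattice sound speed squared

def reconstruct(phi, u):
    """phi: scalar handed over from the FVM side; u: local velocity."""
    eu = E @ u
    return W * phi * (1.0 + eu / CS2 + 0.5 * (eu / CS2) ** 2 - (u @ u) / (2 * CS2))

f = reconstruct(phi=1.5, u=np.array([0.05, 0.0]))
print(f.sum())   # recovers phi (= 1.5) by construction
```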
Laghari, Samreen; Niazi, Muaz A
2016-01-01
Computer networks have a tendency to grow at an unprecedented scale. Modern networks involve not only computers but also a wide variety of other interconnected devices ranging from mobile phones to other household items fitted with sensors. This vision of the "Internet of Things" (IoT) implies an inherent difficulty in modeling problems. It is practically impossible to implement and test all scenarios for large-scale and complex adaptive communication networks as part of Complex Adaptive Communication Networks and Environments (CACOONS). The goal of this study is to explore the use of agent-based modeling as part of the Cognitive Agent-based Computing (CABC) framework to model a complex communication network problem. We use Exploratory Agent-based Modeling (EABM), as part of the CABC framework, to develop an autonomous multi-agent architecture for managing carbon footprint in a corporate network. To evaluate the application of complexity in practical scenarios, we have also introduced a company-defined computer usage policy. The conducted experiments demonstrated two important results: first, a CABC-based modeling approach such as agent-based modeling can be an effective approach to modeling complex problems in the domain of IoT; and second, the specific problem of managing the carbon footprint can be solved using a multi-agent system approach.
NASA Astrophysics Data System (ADS)
Zimoń, M. J.; Prosser, R.; Emerson, D. R.; Borg, M. K.; Bray, D. J.; Grinberg, L.; Reese, J. M.
2016-11-01
Filtering of particle-based simulation data can lead to reduced computational costs and enable more efficient information transfer in multi-scale modelling. This paper compares the effectiveness of various signal processing methods to reduce numerical noise and capture the structures of nano-flow systems. In addition, a novel combination of these algorithms is introduced, showing the potential of hybrid strategies to improve further the de-noising performance for time-dependent measurements. The methods were tested on velocity and density fields, obtained from simulations performed with molecular dynamics and dissipative particle dynamics. Comparisons between the algorithms are given in terms of performance, quality of the results and sensitivity to the choice of input parameters. The results provide useful insights on strategies for the analysis of particle-based data and the reduction of computational costs in obtaining ensemble solutions.
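One plausible hybrid of the kind compared above, wavelet soft-thresholding followed by a short moving average, is sketched below; the threshold rule and window length are illustrative choices rather than the paper's tuned settings.

```python
# Sketch of a hybrid de-noising pass for noisy particle-simulation signals.
import numpy as np
import pywt

def hybrid_denoise(signal, wavelet="db4", window=5):
    coeffs = pywt.wavedec(signal, wavelet)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745          # MAD noise estimate
    thr = sigma * np.sqrt(2 * np.log(len(signal)))          # universal threshold
    coeffs = [coeffs[0]] + [pywt.threshold(c, thr, "soft") for c in coeffs[1:]]
    smooth = pywt.waverec(coeffs, wavelet)[: len(signal)]
    return np.convolve(smooth, np.ones(window) / window, mode="same")

t = np.linspace(0, 1, 512)
noisy = np.sin(6 * np.pi * t) + 0.3 * np.random.default_rng(1).normal(size=512)
clean = hybrid_denoise(noisy)
```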
NASA Astrophysics Data System (ADS)
Tourret, D.; Mertens, J. C. E.; Lieberman, E.; Imhoff, S. D.; Gibbs, J. W.; Henderson, K.; Fezzaa, K.; Deriy, A. L.; Sun, T.; Lebensohn, R. A.; Patterson, B. M.; Clarke, A. J.
2017-11-01
We follow an Al-12 at. pct Cu alloy sample from the liquid state to mechanical failure, using in situ X-ray radiography during directional solidification and tensile testing, as well as three-dimensional computed tomography of the microstructure before and after mechanical testing. The solidification processing stage is simulated with a multi-scale dendritic needle network model, and the micromechanical behavior of the solidified microstructure is simulated using voxelized tomography data and an elasto-viscoplastic fast Fourier transform model. This study demonstrates the feasibility of direct in situ monitoring of a metal alloy microstructure from the liquid processing stage up to its mechanical failure, supported by quantitative simulations of microstructure formation and its mechanical behavior.
This paper proposes a general procedure to link meteorological data with air quality models, such as U.S. EPA's Models-3 Community Multi-scale Air Quality (CMAQ) modeling system. CMAQ is intended to be used for studying multi-scale (urban and regional) and multi-pollutant (ozon...
Quinn, T. Alexander; Kohl, Peter
2013-01-01
Since the development of the first mathematical cardiac cell model 50 years ago, computational modelling has become an increasingly powerful tool for the analysis of data and for the integration of information related to complex cardiac behaviour. Current models build on decades of iteration between experiment and theory, representing a collective understanding of cardiac function. All models, whether computational, experimental, or conceptual, are simplified representations of reality and, like tools in a toolbox, suitable for specific applications. Their range of applicability can be explored (and expanded) by iterative combination of ‘wet’ and ‘dry’ investigation, where experimental or clinical data are used to first build and then validate computational models (allowing integration of previous findings, quantitative assessment of conceptual models, and projection across relevant spatial and temporal scales), while computational simulations are utilized for plausibility assessment, hypotheses-generation, and prediction (thereby defining further experimental research targets). When implemented effectively, this combined wet/dry research approach can support the development of a more complete and cohesive understanding of integrated biological function. This review illustrates the utility of such an approach, based on recent examples of multi-scale studies of cardiac structure and mechano-electric function. PMID:23334215
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fu, Yao, E-mail: fu5@mailbox.sc.edu, E-mail: jhsong@cec.sc.edu; Song, Jeong-Hoon, E-mail: fu5@mailbox.sc.edu, E-mail: jhsong@cec.sc.edu
2014-08-07
Hardy stress definition has been restricted to pair potentials and embedded-atom method potentials due to the basic assumptions in the derivation of a symmetric microscopic stress tensor. Force decomposition required in the Hardy stress expression becomes obscure for multi-body potentials. In this work, we demonstrate the invariance of the Hardy stress expression for a polymer system modeled with multi-body interatomic potentials including up to four-atom interactions, by applying central force decomposition of the atomic force. The balance of momentum has been demonstrated to be valid theoretically and tested under various numerical simulation conditions. The validity of momentum conservation justifies the extension of the Hardy stress expression to multi-body potential systems. The computed Hardy stress has been observed to converge to the virial stress of the system with increasing spatial averaging volume. This work provides a feasible and reliable linkage between the atomistic and continuum scales for multi-body potential systems.
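The quantity the Hardy expression is shown to converge to, a virial-type stress built from central pair forces, can be sketched as follows; the kinetic contribution and the multi-body force decomposition itself are omitted here.

```python
# Sketch of the pairwise (central-force) virial stress contribution.
import numpy as np

def virial_stress(positions, pair_forces, volume):
    """pair_forces: iterable of (i, j, f_ij), with f_ij the force on atom i due
    to atom j, listed once per ordered pair (i.e., both directions appear)."""
    sigma = np.zeros((3, 3))
    for i, j, f_ij in pair_forces:
        r_ij = positions[i] - positions[j]
        sigma += np.outer(r_ij, f_ij)   # central forces keep this symmetric
    # Divide by 2 because each pair appears twice; kinetic term omitted.
    return sigma / (2.0 * volume)
```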
Coupled multi-disciplinary simulation of composite engine structures in propulsion environment
NASA Technical Reports Server (NTRS)
Chamis, Christos C.; Singhal, Surendra N.
1992-01-01
A computational simulation procedure is described for the coupled response of multi-layered multi-material composite engine structural components which are subjected to simultaneous multi-disciplinary thermal, structural, vibration, and acoustic loadings including the effect of hostile environments. The simulation is based on a three dimensional finite element analysis technique in conjunction with structural mechanics codes and with acoustic analysis methods. The composite material behavior is assessed at the various composite scales, i.e., the laminate/ply/constituents (fiber/matrix), via a nonlinear material characterization model. Sample cases exhibiting nonlinear geometrical, material, loading, and environmental behavior of aircraft engine fan blades, are presented. Results for deformed shape, vibration frequency, mode shapes, and acoustic noise emitted from the fan blade, are discussed for their coupled effect in hot and humid environments. Results such as acoustic noise for coupled composite-mechanics/heat transfer/structural/vibration/acoustic analyses demonstrate the effectiveness of coupled multi-disciplinary computational simulation and the various advantages of composite materials compared to metals.
Dash, Ranjan K; Li, Yanjun; Kim, Jaeyeon; Beard, Daniel A; Saidel, Gerald M; Cabrera, Marco E
2008-09-09
Control mechanisms of cellular metabolism and energetics in skeletal muscle that may become evident in response to physiological stresses, such as reduction in blood flow and oxygen supply to mitochondria, can be quantitatively understood using a multi-scale computational model. The analysis of dynamic responses from such a model can provide insights into mechanisms of metabolic regulation that may not be evident from experimental studies. For this purpose, a physiologically-based, multi-scale computational model of skeletal muscle cellular metabolism and energetics was developed to describe dynamic responses of key chemical species and reaction fluxes to muscle ischemia. The model, which incorporates key transport and metabolic processes and subcellular compartmentalization, is based on dynamic mass balances of 30 chemical species in both the capillary blood and tissue cell (cytosol and mitochondria) domains. The reaction fluxes in the cytosol and mitochondria are expressed in terms of a general phenomenological Michaelis-Menten equation involving the compartmentalized energy controller ratios ATP/ADP and NADH/NAD(+). The unknown transport and reaction parameters in the model are estimated simultaneously by minimizing the differences between available in vivo experimental data on muscle ischemia and the corresponding model outputs, coupled with the resting linear flux balance constraints, using a robust, nonlinear, constraint-based, reduced gradient optimization algorithm. With the optimal parameter values, the model is able to simulate the dynamic responses of several key metabolite concentrations and metabolic fluxes in the subcellular cytosolic and mitochondrial compartments to the reduced blood flow and oxygen supply to mitochondria associated with muscle ischemia, some of which can be measured and others of which cannot be measured with current experimental techniques. The model can be applied to test complex hypotheses involving dynamic regulation of cellular metabolism and energetics in skeletal muscle during physiological stresses such as ischemia, hypoxia, and exercise.
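The flux form named above can be illustrated with a toy version: a Michaelis-Menten term throttled by the compartmental ATP/ADP controller ratio. The functional form of the control factor and all parameter names and values below are assumptions for illustration, not the paper's calibrated equations.

```python
# Toy sketch of a controller-ratio-modulated Michaelis-Menten flux.
def reaction_flux(substrate, atp_adp, vmax=1.0, km=0.5, k_ctrl=1.0):
    mm = vmax * substrate / (km + substrate)   # classic Michaelis-Menten part
    control = 1.0 / (1.0 + atp_adp / k_ctrl)   # high ATP/ADP throttles the flux
    return mm * control

# A rising ATP/ADP ratio suppresses the flux at fixed substrate level.
print(reaction_flux(2.0, atp_adp=0.5), reaction_flux(2.0, atp_adp=5.0))
```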
Multi-Scale Models for the Scale Interaction of Organized Tropical Convection
NASA Astrophysics Data System (ADS)
Yang, Qiu
Assessing the upscale impact of organized tropical convection from small spatial and temporal scales is a research imperative, not only for having a better understanding of the multi-scale structures of dynamical and convective fields in the tropics, but also for eventually helping in the design of new parameterization strategies to improve the next-generation global climate models. Here self-consistent multi-scale models are derived systematically by following the multi-scale asymptotic methods and used to describe the hierarchical structures of tropical atmospheric flows. The advantages of using these multi-scale models lie in isolating the essential components of multi-scale interaction and providing assessment of the upscale impact of the small-scale fluctuations onto the large-scale mean flow through eddy flux divergences of momentum and temperature in a transparent fashion. Specifically, this thesis includes three research projects about multi-scale interaction of organized tropical convection, involving tropical flows at different scaling regimes and utilizing different multi-scale models correspondingly. Inspired by the observed variability of tropical convection on multiple temporal scales, including daily and intraseasonal time scales, the goal of the first project is to assess the intraseasonal impact of the diurnal cycle on the planetary-scale circulation such as the Hadley cell. As an extension of the first project, the goal of the second project is to assess the intraseasonal impact of the diurnal cycle over the Maritime Continent on the Madden-Julian Oscillation. In the third project, the goals are to simulate the baroclinic aspects of the ITCZ breakdown and assess its upscale impact on the planetary-scale circulation over the eastern Pacific. These simple multi-scale models should be useful to understand the scale interaction of organized tropical convection and help improve the parameterization of unresolved processes in global climate models.
NASA Astrophysics Data System (ADS)
Zhu, Aichun; Wang, Tian; Snoussi, Hichem
2018-03-01
This paper addresses the problems of graphical-model-based human pose estimation in still images, including the diversity of appearances and confounding background clutter. We present a new architecture for estimating human pose using a Convolutional Neural Network (CNN). Firstly, a Relative Mixture Deformable Model (RMDM) is defined for each pair of connected parts to compute the relative spatial information in the graphical model. Secondly, a Local Multi-Resolution Convolutional Neural Network (LMR-CNN) is proposed to train and learn the multi-scale representation of each body part by combining different levels of part context. Thirdly, an LMR-CNN based hierarchical model is defined to explore the context information of limb parts. Finally, the experimental results demonstrate the effectiveness of the proposed deep learning approach for human pose estimation.
Salavati, Hooman; Soltani, M; Amanpour, Saeid
2018-05-06
The mechanisms involved in tumor growth mainly operate at the microenvironment level, where interactions between the intracellular, intercellular, and extracellular scales mediate the dynamics of the tumor. In this work, we present a multi-scale model of solid tumor dynamics to simulate avascular and vascular growth as well as tumor-induced angiogenesis. The extracellular and intercellular scales are modeled using partial differential equations and a cellular Potts model, respectively. Also, a few biochemical and biophysical rules control the dynamics at the intracellular level. In addition, the growth of melanoma tumors is modeled in an animal in-vivo study to evaluate the simulation. The simulation shows that the model successfully reproduces a complete picture of the processes involved in tumor growth, such as avascular and vascular growth as well as angiogenesis. The model incorporates the phenotypes of cancerous cells, including proliferating, quiescent, and necrotic cells, as well as endothelial cells during angiogenesis. The results clearly demonstrate the pivotal effect of angiogenesis on the progression of cancerous cells. Also, the model exhibits important events in tumor-induced angiogenesis, such as anastomosis. Moreover, the computational trend of tumor growth closely follows the observations in the experimental study. Copyright © 2018 Elsevier Inc. All rights reserved.
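For the intercellular scale, a minimal cellular Potts model (CPM) update of the standard kind is sketched below: a label-copy attempt accepted by a Metropolis rule on an adhesion-plus-area-constraint Hamiltonian. The specific Hamiltonian terms and parameters of this paper are not reproduced; everything here is a generic CPM sketch.

```python
# Minimal cellular Potts model step (generic sketch, illustrative parameters).
import numpy as np

NBRS = [(1, 0), (-1, 0), (0, 1), (0, -1)]

def delta_adhesion(grid, x, y, new_label, J):
    # Energy change on the four neighbor bonds if site (x, y) takes new_label.
    nx, ny = grid.shape
    dE = 0.0
    for dx, dy in NBRS:
        nbr = grid[(x + dx) % nx, (y + dy) % ny]
        dE += J * (int(new_label != nbr) - int(grid[x, y] != nbr))
    return dE

def cpm_step(grid, areas, J=2.0, lam=0.5, A0=25, T=1.0, rng=np.random.default_rng()):
    nx, ny = grid.shape
    x, y = rng.integers(nx), rng.integers(ny)
    dx, dy = NBRS[rng.integers(4)]
    src = grid[(x + dx) % nx, (y + dy) % ny]   # label trying to invade (x, y)
    tgt = grid[x, y]
    if src == tgt:
        return
    dE = delta_adhesion(grid, x, y, src, J)
    dE += lam * ((areas[src] + 1 - A0) ** 2 - (areas[src] - A0) ** 2)  # area terms
    dE += lam * ((areas[tgt] - 1 - A0) ** 2 - (areas[tgt] - A0) ** 2)
    if dE <= 0 or rng.random() < np.exp(-dE / T):   # Metropolis acceptance
        grid[x, y] = src
        areas[src] += 1
        areas[tgt] -= 1
```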
Tackling some of the most intricate geophysical challenges via high-performance computing
NASA Astrophysics Data System (ADS)
Khosronejad, A.
2016-12-01
Recently, the world has been witnessing significant enhancements in the computing power of supercomputers. Computer clusters, in conjunction with advanced mathematical algorithms, have set the stage for developing and applying powerful numerical tools to tackle some of the most intricate geophysical challenges that today's engineers face. One such challenge is to understand how turbulent flows, in real-world settings, interact with (a) rigid and/or mobile complex bed bathymetry of waterways and sea-beds in coastal areas; (b) objects with complex geometry that are fully or partially immersed; and (c) the free surface of waterways and water surface waves in the coastal area. This understanding is especially important because turbulent flows in real-world environments are often bounded by geometrically complex boundaries, which dynamically deform and give rise to multi-scale and multi-physics transport phenomena, and are characterized by multi-lateral interactions among various phases (e.g. air/water/sediment phases). Herein, I present some of the multi-scale and multi-physics geophysical fluid mechanics processes that I have attempted to study using an in-house high-performance computational model, the so-called VFS-Geophysics. More specifically, I will present the simulation results of turbulence/sediment/solute/turbine interactions in real-world settings. Parts of the simulations I present are performed to gain scientific insights into processes such as sand wave formation (Khosronejad, A. and Sotiropoulos, F. (2014), Numerical simulation of sand waves in a turbulent open channel flow, Journal of Fluid Mechanics, 753:150-216), while others are carried out to predict the effects of climate change and large flood events on societal infrastructure (Khosronejad, A., et al. (2016), Large eddy simulation of turbulence and solute transport in a forested headwater stream, Journal of Geophysical Research, doi: 10.1002/2014JF003423).
AN OVERVIEW OF REDUCED ORDER MODELING TECHNIQUES FOR SAFETY APPLICATIONS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mandelli, D.; Alfonsi, A.; Talbot, P.
2016-10-01
The RISMC project is developing new advanced simulation-based tools to perform Computational Risk Analysis (CRA) for the existing fleet of U.S. nuclear power plants (NPPs). These tools numerically model not only the thermal-hydraulic behavior of the reactor's primary and secondary systems, but also external event temporal evolution and component/system ageing. Thus, this is not only a multi-physics problem, but also a multi-scale problem (both spatial, µm-mm-m, and temporal, seconds-hours-years). As part of the RISMC CRA approach, a large number of computationally expensive simulation runs may be required. An important aspect is that even though computational power is growing, the overall computational cost of a RISMC analysis using brute-force methods may not be viable for certain cases. A solution being evaluated to address the computational issue is the use of reduced order modeling techniques. During FY2015, we investigated and applied reduced order modeling techniques to decrease the computational cost of RISMC analysis by decreasing the number of simulation runs; for this analysis improvement we used surrogate models instead of the actual simulation codes. This article focuses on the use of reduced order modeling techniques that can be applied to RISMC analyses in order to generate, analyze, and visualize data. In particular, we focus on surrogate models that approximate the simulation results but in a much faster time (microseconds instead of hours/days).
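The surrogate idea is straightforward to sketch: train a fast regressor on a limited set of expensive code runs, then query it inside the risk analysis. Below, a Gaussian-process surrogate stands in for the simulation code; the "simulator" and all numbers are invented for illustration.

```python
# Sketch of a reduced-order surrogate replacing brute-force simulation runs.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def expensive_simulation(x):   # stand-in, e.g., a peak-temperature response
    return 600 + 80 * np.sin(3 * x[..., 0]) + 40 * x[..., 1] ** 2

rng = np.random.default_rng(0)
X_train = rng.uniform(0, 1, (50, 2))                  # 50 affordable code runs
y_train = expensive_simulation(X_train)

surrogate = GaussianProcessRegressor(kernel=RBF(0.2)).fit(X_train, y_train)
X_mc = rng.uniform(0, 1, (100_000, 2))                # cheap Monte Carlo sweep
mean, std = surrogate.predict(X_mc, return_std=True)  # microseconds per sample
```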
DOE Office of Scientific and Technical Information (OSTI.GOV)
Collins, William D; Johansen, Hans; Evans, Katherine J
We present a survey of physical and computational techniques that have the potential to contribute to the next generation of high-fidelity, multi-scale climate simulations. Examples of the climate science problems that can be investigated with more depth include the capture of remote forcings of localized hydrological extreme events, an accurate representation of cloud features over a range of spatial and temporal scales, and parallel, large ensembles of simulations to more effectively explore model sensitivities and uncertainties. Numerical techniques, such as adaptive mesh refinement, implicit time integration, and separate treatment of fast physical time scales, are enabling improved accuracy and fidelity in simulation of dynamics and allow more complete representations of climate features at the global scale. At the same time, partnerships with computer science teams have focused on taking advantage of evolving computer architectures, such as many-core processors and GPUs, so that these approaches, which were previously considered prohibitively costly, have become both more efficient and scalable. In combination, progress in these three critical areas is poised to transform climate modeling in the coming decades.
Advanced computations in plasma physics
NASA Astrophysics Data System (ADS)
Tang, W. M.
2002-05-01
Scientific simulation in tandem with theory and experiment is an essential tool for understanding complex plasma behavior. In this paper we review recent progress and future directions for advanced simulations in magnetically confined plasmas with illustrative examples chosen from magnetic confinement research areas such as microturbulence, magnetohydrodynamics, magnetic reconnection, and others. Significant recent progress has been made in both particle and fluid simulations of fine-scale turbulence and large-scale dynamics, giving increasingly good agreement between experimental observations and computational modeling. This was made possible by innovative advances in analytic and computational methods for developing reduced descriptions of physics phenomena spanning widely disparate temporal and spatial scales together with access to powerful new computational resources. In particular, the fusion energy science community has made excellent progress in developing advanced codes for which computer run-time and problem size scale well with the number of processors on massively parallel machines (MPP's). A good example is the effective usage of the full power of multi-teraflop (multi-trillion floating point computations per second) MPP's to produce three-dimensional, general geometry, nonlinear particle simulations which have accelerated progress in understanding the nature of turbulence self-regulation by zonal flows. It should be emphasized that these calculations, which typically utilized billions of particles for thousands of time-steps, would not have been possible without access to powerful present generation MPP computers and the associated diagnostic and visualization capabilities. In general, results from advanced simulations provide great encouragement for being able to include increasingly realistic dynamics to enable deeper physics insights into plasmas in both natural and laboratory environments. The associated scientific excitement should serve to stimulate improved cross-cutting collaborations with other fields and also to help attract bright young talent to plasma science.
2015-12-02
simplification of the equations but at the expense of introducing modeling errors. We have shown that the Wick solutions have accuracy comparable to... the system of equations for the coefficients of formal power series solutions. Moreover, the structure of this propagator is seemingly universal, i.e.... the problem of computing the numerical solution to kinetic partial differential equations involving many phase variables. These types of equations
A Parallel Finite Set Statistical Simulator for Multi-Target Detection and Tracking
NASA Astrophysics Data System (ADS)
Hussein, I.; MacMillan, R.
2014-09-01
Finite Set Statistics (FISST) is a powerful Bayesian inference tool for the joint detection, classification and tracking of multi-target environments. FISST is capable of handling phenomena such as clutter, misdetections, and target birth and decay. Implicit within the approach are solutions to the data association and target label-tracking problems. Finally, FISST provides generalized information measures that can be used for sensor allocation across different types of tasks such as: searching for new targets, and classification and tracking of known targets. These FISST capabilities have been demonstrated on several small-scale illustrative examples. However, for implementation in a large-scale system as in the Space Situational Awareness problem, these capabilities require a lot of computational power. In this paper, we implement FISST in a parallel environment for the joint detection and tracking of multi-target systems. In this implementation, false alarms and misdetections will be modeled. Target birth and decay will not be modeled in the present paper. We will demonstrate the success of the method for as many targets as we possibly can in a desktop parallel environment. Performance measures will include: number of targets in the simulation, certainty of detected target tracks, computational time as a function of clutter returns and number of targets, among other factors.
A self-consistent first-principle based approach to model carrier mobility in organic materials
DOE Office of Scientific and Technical Information (OSTI.GOV)
Meded, Velimir; Friederich, Pascal; Symalla, Franz
2015-12-31
Transport through thin organic amorphous films, utilized in OLEDs and OPVs, has been a challenge to model using ab-initio methods. Charge carrier mobility depends strongly on the disorder strength and reorganization energy, both of which are significantly affected by the details of the environment of each molecule. Here we present a multi-scale approach to describe carrier mobility in which the materials morphology is generated using DEPOSIT, a Monte Carlo based atomistic simulation approach, or, alternatively, by molecular dynamics calculations performed with GROMACS. From this morphology we extract the material-specific hopping rates, as well as the on-site energies, using a fully self-consistent embedding approach to compute the electronic structure parameters, which are then used in an analytic expression for the carrier mobility. We apply this strategy to compute the carrier mobility for a set of widely studied molecules and obtain good agreement between experiment and theory varying over several orders of magnitude in the mobility without any freely adjustable parameters. The work focuses on the quantum mechanical step of the multi-scale workflow, explains the concept along with the recently published workflow optimization, which combines density functional with semi-empirical tight-binding approaches. This is followed by discussion of the analytic formula and its agreement with established percolation fits as well as kinetic Monte Carlo numerical approaches. Finally, we sketch a unified multi-disciplinary approach that integrates materials science simulation and high-performance computing, developed within the EU project MMM@HPC.
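One standard ingredient of such hopping-based mobility models is a Marcus-type rate built from the electronic coupling, reorganization energy, and site-energy difference. A sketch follows; whether this exact rate expression matches the workflow above is an assumption, and the parameter values are illustrative.

```python
# Marcus-type hopping rate between two molecular sites (illustrative values).
import numpy as np

HBAR = 6.582119569e-16   # eV*s
KB = 8.617333262e-5      # eV/K

def marcus_rate(coupling, lam, dE, T=300.0):
    """coupling J (eV), reorganization energy lam (eV), site-energy change dE (eV)."""
    kT = KB * T
    prefactor = (2 * np.pi / HBAR) * coupling**2 / np.sqrt(4 * np.pi * lam * kT)
    return prefactor * np.exp(-(dE + lam) ** 2 / (4 * lam * kT))

print(f"{marcus_rate(coupling=5e-3, lam=0.25, dE=0.05):.3e} hops/s")
```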
Constructing Neuronal Network Models in Massively Parallel Environments
Ippen, Tammo; Eppler, Jochen M.; Plesser, Hans E.; Diesmann, Markus
2017-01-01
Recent advances in the development of data structures to represent spiking neuron network models enable us to exploit the complete memory of petascale computers for a single brain-scale network simulation. In this work, we investigate how well we can exploit the computing power of such supercomputers for the creation of neuronal networks. Using an established benchmark, we divide the runtime of simulation code into the phase of network construction and the phase during which the dynamical state is advanced in time. We find that on multi-core compute nodes network creation scales well with process-parallel code but exhibits a prohibitively large memory consumption. Thread-parallel network creation, in contrast, exhibits speedup only up to a small number of threads but has little overhead in terms of memory. We further observe that the algorithms creating instances of model neurons and their connections scale well for networks of ten thousand neurons, but do not show the same speedup for networks of millions of neurons. Our work uncovers that the lack of scaling of thread-parallel network creation is due to inadequate memory allocation strategies and demonstrates that thread-optimized memory allocators recover excellent scaling. An analysis of the loop order used for network construction reveals that more complex tests on the locality of operations significantly improve scaling and reduce runtime by allowing construction algorithms to step through large networks more efficiently than in existing code. The combination of these techniques increases performance by an order of magnitude and harnesses the increasingly parallel compute power of the compute nodes in high-performance clusters and supercomputers. PMID:28559808
DOE Office of Scientific and Technical Information (OSTI.GOV)
Procassini, R.J.
1997-12-31
The fine-scale, multi-space resolution that is envisioned for accurate simulations of complex weapons systems in three spatial dimensions implies flop-rate and memory-storage requirements that will only be obtained in the near future through the use of parallel computational techniques. Since the Monte Carlo transport models in these simulations usually stress both of these computational resources, they are prime candidates for parallelization. The MONACO Monte Carlo transport package, which is currently under development at LLNL, will utilize two types of parallelism within the context of a multi-physics design code: decomposition of the spatial domain across processors (spatial parallelism) and distribution of particles in a given spatial subdomain across additional processors (particle parallelism). This implementation of the package will utilize explicit data communication between domains (message passing). Such a parallel implementation of a Monte Carlo transport model will result in non-deterministic communication patterns. The communication of particles between subdomains during a Monte Carlo time step may require a significant level of effort to achieve a high parallel efficiency.
NASA Astrophysics Data System (ADS)
Hamlet, C. L.; Hoffman, K.; Fauci, L.; Tytell, E.
2016-02-01
The lamprey is a model organism for both neurophysiology and locomotion studies. To study the role of sensory feedback as an organism moves through its environment, a 2D, integrative, multi-scale model of an anguilliform swimmer driven by neural activation from a central pattern generator (CPG) is constructed. The CPG in turn drives muscle kinematics and is fully coupled to the surrounding fluid. The system is numerically evolved in time using an immersed boundary framework, producing an emergent swimming mode. Proprioceptive feedback to the CPG, based on experimental observations, adjusts the activation signal as the organism interacts with its environment. Effects on the speed, stability, and cost (metabolic work) of swimming due to nonlinear dependencies associated with muscle force development, combined with proprioceptive feedback to neural activation, are estimated and examined.
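A common minimal abstraction of such a CPG is a chain of coupled phase oscillators with an additive feedback term, sketched below; the coupling and feedback forms are illustrative, not this model's calibrated equations.

```python
# Sketch of a CPG as a chain of coupled phase oscillators with feedback.
import numpy as np

def cpg_rhs(theta, omega, k=2.0, lag=0.5, feedback=None):
    """theta: oscillator phases along the body; returns d(theta)/dt."""
    dtheta = omega * np.ones_like(theta)
    dtheta[1:] += k * np.sin(theta[:-1] - theta[1:] - lag)   # descending coupling
    dtheta[:-1] += k * np.sin(theta[1:] - theta[:-1] + lag)  # ascending coupling
    if feedback is not None:                                 # proprioceptive term
        dtheta += feedback
    return dtheta

theta = np.zeros(10)                   # 10 body segments
for _ in range(10_000):                # forward Euler, dt = 1 ms
    theta += 1e-3 * cpg_rhs(theta, omega=2 * np.pi * 1.0)
```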
NASA Astrophysics Data System (ADS)
Elder, Delwin L.; Johnson, Lewis E.; Tillack, Andreas F.; Robinson, Bruce H.; Haffner, Christian; Heni, Wolfgang; Hoessbacher, Claudia; Fedoryshyn, Yuriy; Salamin, Yannick; Baeuerle, Benedikt; Josten, Arne; Ayata, Masafumi; Koch, Ueli; Leuthold, Juerg; Dalton, Larry R.
2018-02-01
Multi-scale (correlated quantum and statistical mechanics) modeling methods have been advanced and employed to guide the improvement of organic electro-optic (OEO) materials, including by analyzing electric-field-poling-induced electro-optic activity in nanoscopic plasmonic-organic hybrid (POH) waveguide devices. The analysis of in-device electro-optic activity emphasizes the importance of considering both the details of intermolecular interactions within organic electro-optic materials and interactions at interfaces between OEO materials and device architectures. Dramatic improvements in electro-optic device performance, including voltage-length performance, bandwidth, energy efficiency, and optical losses, have been realized. These improvements are critical to applications in telecommunications, computing, sensor technology, and metrology. Multi-scale modeling methods illustrate the complexity of improving the electro-optic activity of organic materials, including the necessity of considering the trade-off between improving poling-induced acentric order through chromophore modification and the reduction of chromophore number density associated with such modification. Computational simulations also emphasize the importance of developing chromophore modifications that serve multiple purposes, including matrix hardening for enhanced thermal and photochemical stability, control of matrix dimensionality, influence on material viscoelasticity, improvement of chromophore molecular hyperpolarizability, control of material dielectric permittivity and index of refraction properties, and control of material conductance. Consideration of new device architectures is critical to the implementation of chip-scale integration of electronics and photonics and to achieving the high bandwidths required for applications such as next-generation (e.g., 5G) telecommunications.
PAM: Particle automata model in simulation of Fusarium graminearum pathogen expansion.
Wcisło, Rafał; Miller, S Shea; Dzwinel, Witold
2016-01-21
The multi-scale nature and inherent complexity of biological systems are a great challenge for computer modeling and classical modeling paradigms. We present a novel particle automata modeling metaphor in the context of developing a 3D model of Fusarium graminearum infection in wheat. The system consisting of the host plant and Fusarium pathogen cells can be represented by an ensemble of discrete particles defined by a set of attributes. The cell-particles can interact with each other, mimicking the mechanical resistance of the cell walls and cell coalescence. The particles can move, while some of their attributes can be changed according to prescribed rules. The rules can represent cellular scales of a complex system, while the integrated particle automata model (PAM) simulates its overall multi-scale behavior. We show that, due to its ability to mimic the mechanical interactions of Fusarium tip cells with the host tissue, the model is able to simulate realistic penetration properties of the colonization process, reproducing both vertical and lateral Fusarium invasion scenarios. The comparison of simulation results with micrographs from laboratory experiments shows encouraging qualitative agreement between the two. Copyright © 2015 Elsevier Ltd. All rights reserved.
Multi-scale genetic dynamic modelling I: an algorithm to compute generators.
Kirkilionis, Markus; Janus, Ulrich; Sbano, Luca
2011-09-01
We present a new framework to model dynamic regulatory genetic activity. The framework uses a multi-scale analysis based upon generic assumptions about the relative time scales attached to the different transitions of molecular states defining the genetic system. At the micro-level such systems are regulated by the interaction of two kinds of molecular players: macro-molecules like DNA or polymerases, and smaller molecules acting as transcription factors. The proposed genetic model then represents the larger, less abundant molecules with a finite discrete state space, for example describing different conformations of these molecules. This is in contrast to the representation of the transcription factors, which are, as in classical reaction kinetics, represented by their particle number only. We illustrate the method by considering the genetic activity associated with certain configurations of interacting genes that are fundamental to modelling (synthetic) genetic clocks. A largely unknown question is how the different molecular details incorporated via this more realistic modelling approach lead to different macroscopic regulatory genetic models, whose dynamical behaviour might, in general, differ between model choices. The theory is applied to a real synthetic clock in a second accompanying article (Kirkilionis et al., Theory Biosci, 2011).
Multi-scale Modeling of Plasticity in Tantalum.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lim, Hojun; Battaile, Corbett Chandler.; Carroll, Jay
In this report, we present a multi-scale computational model to simulate plastic deformation of tantalum, together with validating experiments. At the atomistic/dislocation level, dislocation kink-pair theory is used to formulate temperature- and strain-rate-dependent constitutive equations. The kink-pair theory is calibrated to available data from single crystal experiments to produce accurate and convenient constitutive laws. The model is then implemented into a BCC crystal plasticity finite element method (CP-FEM) model to predict temperature- and strain-rate-dependent yield stresses of single and polycrystalline tantalum, which are compared with existing experimental data from the literature. Furthermore, classical continuum constitutive models describing temperature- and strain-rate-dependent flow behaviors are fit to the yield stresses obtained from the CP-FEM polycrystal predictions. The model is then used to conduct hydrodynamic simulations of the Taylor cylinder impact test and compared with experiments. In order to validate the proposed tantalum CP-FEM model against experiments, we introduce a method for quantitative comparison of CP-FEM models with various experimental techniques. To mitigate the effects of unknown subsurface microstructure, tantalum tensile specimens with a pseudo-two-dimensional grain structure and grain sizes on the order of millimeters are used. A technique combining electron backscatter diffraction (EBSD) and high-resolution digital image correlation (HR-DIC) is used to measure the texture and sub-grain strain fields upon uniaxial tensile loading at various applied strains. Deformed specimens are also analyzed with optical profilometry measurements to obtain out-of-plane strain fields. These high-resolution measurements are directly compared with large-scale CP-FEM predictions. This computational method directly links fundamental dislocation physics to plastic deformation at the grain scale and to engineering-scale applications. Furthermore, direct and quantitative comparisons between experimental measurements and simulations show that the proposed model accurately captures plasticity in the deformation of polycrystalline tantalum.
A Machine Learning Method for the Prediction of Receptor Activation in the Simulation of Synapses
Montes, Jesus; Gomez, Elena; Merchán-Pérez, Angel; DeFelipe, Javier; Peña, Jose-Maria
2013-01-01
Chemical synaptic transmission involves the release of a neurotransmitter that diffuses in the extracellular space and interacts with specific receptors located on the postsynaptic membrane. Computer simulation approaches provide fundamental tools for exploring various aspects of synaptic transmission under different conditions. In particular, Monte Carlo methods can track the stochastic movements of neurotransmitter molecules and their interactions with other discrete molecules, the receptors. However, these methods are computationally expensive, even when used with simplified models, preventing their use in large-scale and multi-scale simulations of complex neuronal systems that may involve large numbers of synaptic connections. We have developed a machine-learning-based method that can accurately predict relevant aspects of the behavior of synapses, such as the percentage of open synaptic receptors as a function of time since the release of the neurotransmitter, with considerably lower computational cost than the conventional Monte Carlo alternative. The method is designed to learn patterns and general principles from a corpus of previously generated Monte Carlo simulations of synapses covering a wide range of structural and functional characteristics. These patterns are later used as a predictive model of the behavior of synapses under different conditions without the need for additional computationally expensive Monte Carlo simulations. This is performed in five stages: data sampling, fold creation, machine learning, validation and curve fitting. The resulting procedure is accurate, automatic, and general enough to predict synapse behavior under experimental conditions different from those it was trained on. Since our method efficiently reproduces the results that can be obtained with Monte Carlo simulations at a considerably lower computational cost, it is suitable for simulating large numbers of synapses and is therefore an excellent tool for multi-scale simulations. PMID:23894367
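A hedged sketch of the surrogate idea, with a cheap synthetic stand-in for the Monte Carlo training corpus and a generic regressor in place of the authors' five-stage pipeline; the parameterization (receptor count, time constant) is hypothetical.

```python
# Learn the fraction of open receptors as a function of synapse parameters
# and time, training on a synthetic stand-in for the Monte Carlo corpus.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
t = np.linspace(0.01, 5.0, 50)        # ms after neurotransmitter release

def mock_monte_carlo(n_receptors, tau):
    """Placeholder for an expensive MC run: noisy rise-and-decay curve."""
    mean = (t / tau) * np.exp(1.0 - t / tau)              # peaks at t = tau
    return mean + rng.normal(0.0, 1.0 / np.sqrt(n_receptors), size=t.shape)

X, y = [], []
for _ in range(200):                                      # training corpus
    n_r, tau = int(rng.integers(20, 200)), rng.uniform(0.2, 2.0)
    X.extend((n_r, tau, ti) for ti in t)
    y.extend(mock_monte_carlo(n_r, tau))

model = RandomForestRegressor(n_estimators=50, random_state=0)
model.fit(np.array(X), np.array(y))
pred = model.predict([(100, 1.0, ti) for ti in t])        # cheap surrogate query
print("predicted peak open fraction:", round(float(pred.max()), 3))
```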
Assessment of Molecular Modeling & Simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
2002-01-03
This report reviews the development and applications of molecular and materials modeling in Europe and Japan in comparison to those in the United States. Topics covered include computational quantum chemistry, molecular simulations by molecular dynamics and Monte Carlo methods, mesoscale modeling of material domains, molecular-structure/macroscale property correlations like QSARs and QSPRs, and related information technologies like informatics and special-purpose molecular-modeling computers. The panel's findings include the following: The United States leads this field in many scientific areas. However, Canada has particular strengths in DFT methods and homogeneous catalysis; Europe in heterogeneous catalysis, mesoscale, and materials modeling; and Japan in materials modeling and special-purpose computing. Major government-industry initiatives are underway in Europe and Japan, notably in multi-scale materials modeling and in development of chemistry-capable ab-initio molecular dynamics codes.
Numerical models for fluid-grains interactions: opportunities and limitations
NASA Astrophysics Data System (ADS)
Esteghamatian, Amir; Rahmani, Mona; Wachs, Anthony
2017-06-01
In the framework of a multi-scale approach, we develop numerical models for suspension flows. At the micro-scale level, we perform particle-resolved numerical simulations using a Distributed Lagrange Multiplier/Fictitious Domain approach. At the meso-scale level, we use a two-way Euler/Lagrange approach with a Gaussian filtering kernel to model fluid-solid momentum transfer. At both the micro and meso scale levels, particles are individually tracked in a Lagrangian way and all inter-particle collisions are computed by a Discrete Element/Soft-sphere method. These numerical models have been extended to handle particles of arbitrary shape (non-spherical, angular and even non-convex) as well as to treat heat and mass transfer. All simulation tools are fully MPI-parallel with standard domain decomposition and run on supercomputers with satisfactory scalability on up to a few thousand cores. The main asset of multi-scale analysis is the ability to extend our comprehension of the dynamics of suspension flows based on the knowledge acquired from high-fidelity micro-scale simulations and to use that knowledge to improve the meso-scale model. We illustrate how we can benefit from this strategy for a fluidized bed, where we introduce a stochastic drag force model derived from micro-scale simulations to recover the proper level of particle fluctuations. Conversely, we discuss the limitations of such modelling tools, such as their limited ability to capture lubrication forces and boundary layers in highly inertial flows. We suggest ways to overcome these limitations in order to further enhance the capabilities of the numerical models.
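The Gaussian-filtered momentum coupling mentioned above can be shown in one dimension: a particle's drag force is spread over nearby grid cells with normalized Gaussian weights so that total momentum is conserved. Grid size, kernel width, and forces are illustrative assumptions, not values from the paper.

```python
# Two-way Euler/Lagrange coupling sketch: spread each particle's reaction
# force onto the fluid grid with a normalized Gaussian kernel.
import numpy as np

nx = 64
dx = 1.0 / nx
x_grid = (np.arange(nx) + 0.5) * dx
sigma = 2.0 * dx                              # filter width ~ a few cells

def gaussian_weights(x_p):
    w = np.exp(-0.5 * ((x_grid - x_p) / sigma) ** 2)
    return w / w.sum()                        # discrete normalization conserves force

f_fluid = np.zeros(nx)
for x_p, f_p in [(0.30, 1.5), (0.62, -0.7)]:  # particle positions and drag forces
    f_fluid += f_p * gaussian_weights(x_p)

print("total momentum source:", f_fluid.sum())  # equals the sum of particle forces
```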
NASA Astrophysics Data System (ADS)
Smith, R. C.; Collins, G. S.; Hill, J.; Piggott, M. D.; Mouradian, S. L.
2015-12-01
Numerical modelling informs risk assessment of tsunami generated by submarine slides; however, for large-scale slides modelling can be complex and computationally challenging. Many previous numerical studies have approximated slides as rigid blocks that move according to prescribed motion. However, wave characteristics are strongly dependent on the motion of the slide, and previous work has recommended that more accurate representation of slide dynamics is needed. We have used the finite-element, adaptive-mesh CFD model Fluidity to perform multi-material simulations of deformable submarine slide-generated waves at real-world scales for a 2D scenario in the Gulf of Mexico. Our high-resolution approach represents slide dynamics with good accuracy, compared to other numerical simulations of this scenario, but precludes tracking of wave propagation over large distances. To enable efficient modelling of further propagation of the waves, we investigate an approach to extract information about the slide evolution from our multi-material simulations in order to drive a single-layer wave propagation model, also using Fluidity, which is much less computationally expensive. The extracted submarine slide geometry and position as a function of time are parameterised using simple polynomial functions. The polynomial functions are used to inform a prescribed velocity boundary condition in a single-layer simulation, mimicking the effect the submarine slide motion has on the water column. The approach is verified by successful comparison of wave generation in the single-layer model with that recorded in the multi-material, multi-layer simulations. We then extend this approach to 3D for further validation of this methodology (using the Gulf of Mexico scenario proposed by Horrillo et al., 2013) and to consider the effect of lateral spreading. This methodology is then used to simulate a series of hypothetical submarine slide events in the Arctic Ocean (based on evidence of historic slides) and examine the hazard posed to the UK coast.
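A small sketch of the parameterization step under stated assumptions: a slide front position sampled from the expensive model is fit with a low-order polynomial whose derivative supplies the prescribed-velocity boundary condition; the sample trajectory below is invented.

```python
# Fit slide front position vs. time with a cubic, then differentiate to get
# the velocity that forces the cheaper single-layer wave model.
import numpy as np

t = np.linspace(0.0, 120.0, 13)                  # s, sampled from the full model
x_front = 0.5 * 0.8 * t**2 / (1.0 + 0.02 * t)    # toy decelerating slide, m

coeffs = np.polyfit(t, x_front, deg=3)           # position ~ cubic in time
position = np.poly1d(coeffs)
velocity = position.deriv()                      # boundary-condition forcing

print("slide velocity at t = 60 s:", round(float(velocity(60.0)), 2), "m/s")
```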
Large-Scale, Parallel, Multi-Sensor Atmospheric Data Fusion Using Cloud Computing
NASA Astrophysics Data System (ADS)
Wilson, B. D.; Manipon, G.; Hua, H.; Fetzer, E. J.
2013-12-01
NASA's Earth Observing System (EOS) is an ambitious facility for studying global climate change. The mandate now is to combine measurements from the instruments on the 'A-Train' platforms (AIRS, AMSR-E, MODIS, MISR, MLS, and CloudSat) and other Earth probes to enable large-scale studies of climate change over decades. Moving to multi-sensor, long-duration analyses of important climate variables presents serious challenges for large-scale data mining and fusion. For example, one might want to compare temperature and water vapor retrievals from one instrument (AIRS) to another (MODIS), and to a model (MERRA), stratify the comparisons using a classification of the 'cloud scenes' from CloudSat, and repeat the entire analysis over 10 years of data. To efficiently assemble such datasets, we are utilizing Elastic Computing in the Cloud and parallel map/reduce-based algorithms. However, these are data-intensive computing problems, so data transfer times and storage costs (for caching) are key issues. SciReduce is a Hadoop-like parallel analysis system, programmed in parallel Python, that is designed from the ground up for Earth science. SciReduce executes inside VMWare images and scales to any number of nodes in the Cloud. Unlike Hadoop, SciReduce operates on bundles of named numeric arrays, which can be passed in memory or serialized to disk in netCDF4 or HDF5. Figure 1 shows the architecture of the full computational system, with SciReduce at the core. Multi-year datasets are automatically 'sharded' by time and space across a cluster of nodes so that years of data (millions of files) can be processed in a massively parallel way. Input variables (arrays) are pulled on-demand into the Cloud using OPeNDAP URLs or other subsetting services, thereby minimizing the size of the cached input and intermediate datasets. We are using SciReduce to automate the production of multiple versions of a ten-year A-Train water vapor climatology under a NASA MEASURES grant. We will present the architecture of SciReduce, describe the achieved 'clock time' speedups in fusing datasets on our own compute nodes and in the public Cloud, and discuss the Cloud cost tradeoffs for storage, compute, and data transfer. We will also present a concept/prototype for staging NASA's A-Train Atmospheric datasets (Levels 2 & 3) in the Amazon Cloud so that any number of compute jobs can be executed 'near' the multi-sensor data. Given such a system, multi-sensor climate studies over 10-20 years of data could be performed in an efficient way, with the researcher paying only his own Cloud compute bill.
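The shard-then-reduce pattern can be illustrated in plain Python; the granule names, the variable, and the per-year mean are hypothetical stand-ins for the water-vapor climatology workflow, and multiprocessing stands in for the cloud cluster.

```python
# Toy shard/map/reduce: granules of a multi-year dataset are binned by year,
# mapped in parallel, and reduced to per-year means.
from collections import defaultdict
from multiprocessing import Pool

granules = [(f"AIRS.{year}.{day:03d}.nc", year, 20.0 + 0.01 * day)
            for year in range(2003, 2013) for day in range(1, 366)]

shards = defaultdict(list)                   # shard key: year
for name, year, value in granules:
    shards[year].append(value)

def mapper(item):
    year, values = item
    return year, sum(values), len(values)    # ship partial sums, not raw arrays

if __name__ == "__main__":
    with Pool(4) as pool:
        partials = pool.map(mapper, shards.items())
    climatology = {year: s / n for year, s, n in partials}   # reduce step
    print(climatology[2003])
```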
Tawhai, Merryn H.; Clark, Alys R.; Burrowes, Kelly S.
2011-01-01
Biophysically-based computational models provide a tool for integrating and explaining experimental data, observations, and hypotheses. Computational models of the pulmonary circulation have evolved from minimal and efficient constructs that have been used to study individual mechanisms that contribute to lung perfusion, to sophisticated multi-scale and -physics structure-based models that predict integrated structure-function relationships within a heterogeneous organ. This review considers the utility of computational models in providing new insights into the function of the pulmonary circulation, and their application in clinically motivated studies. We review mathematical and computational models of the pulmonary circulation based on their application; we begin with models that seek to answer questions in basic science and physiology and progress to models that aim to have clinical application. In looking forward, we discuss the relative merits and clinical relevance of computational models: what important features are still lacking; and how these models may ultimately be applied to further increasing our understanding of the mechanisms occurring in disease of the pulmonary circulation. PMID:22034608
Mesoscale Effective Property Simulations Incorporating Conductive Binder
DOE Office of Scientific and Technical Information (OSTI.GOV)
Trembacki, Bradley L.; Noble, David R.; Brunini, Victor E.
2017-07-26
Lithium-ion battery electrodes are composed of active material particles, binder, and conductive additives that form an electrolyte-filled porous particle composite. The mesoscale (particle-scale) interplay of electrochemistry, mechanical deformation, and transport through this tortuous multi-component network dictates the performance of a battery at the cell level. Effective electrode properties connect mesoscale phenomena with computationally feasible battery-scale simulations. We utilize published tomography data to reconstruct a large subsection (1000+ particles) of an NMC333 cathode into a computational mesh and extract electrode-scale effective properties from finite element continuum-scale simulations. We present a novel method to preferentially place a composite binder phase throughout the mesostructure, a necessary approach due to the difficulty of distinguishing between non-active phases in tomographic data. We compare stress generation and effective thermal, electrical, and ionic conductivities across several binder placement approaches. Isotropic lithiation-dependent mechanical swelling of the NMC particles and the consideration of strain-dependent composite binder conductivity significantly impact the resulting effective property trends and stresses generated. Lastly, our results suggest that composite binder location significantly affects mesoscale behavior, indicating that a binder coating on active particles is not sufficient and that more accurate approaches should be used when calculating effective properties that will inform battery-scale models in this inherently multi-scale battery simulation challenge.
Stochastic Simulation Service: Bridging the Gap between the Computational Expert and the Biologist
Banerjee, Debjani; Bellesia, Giovanni; Daigle, Bernie J.; Douglas, Geoffrey; Gu, Mengyuan; Gupta, Anand; Hellander, Stefan; Horuk, Chris; Nath, Dibyendu; Takkar, Aviral; Lötstedt, Per; Petzold, Linda R.
2016-01-01
We present StochSS: Stochastic Simulation as a Service, an integrated development environment for modeling and simulation of both deterministic and discrete stochastic biochemical systems in up to three dimensions. An easy to use graphical user interface enables researchers to quickly develop and simulate a biological model on a desktop or laptop, which can then be expanded to incorporate increasing levels of complexity. StochSS features state-of-the-art simulation engines. As the demand for computational power increases, StochSS can seamlessly scale computing resources in the cloud. In addition, StochSS can be deployed as a multi-user software environment where collaborators share computational resources and exchange models via a public model repository. We demonstrate the capabilities and ease of use of StochSS with an example of model development and simulation at increasing levels of complexity. PMID:27930676
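As one example of the discrete stochastic engines a tool like StochSS wraps, here is a minimal Gillespie direct-method simulation of a birth-death process; the rate constants are illustrative.

```python
# Minimal Gillespie (SSA) simulation of a single birth-death species.
import math, random

k_birth, k_death = 10.0, 0.1          # molecules/s and 1/s (illustrative)
x, t, t_end = 0, 0.0, 100.0
while t < t_end:
    a_birth, a_death = k_birth, k_death * x           # reaction propensities
    a_total = a_birth + a_death
    t += -math.log(1.0 - random.random()) / a_total   # exponential waiting time
    x += 1 if random.random() * a_total < a_birth else -1

print("copy number at t =", t_end, "s:", x,
      "| expected mean ~", int(k_birth / k_death))
```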
Sanga, Sandeep; Frieboes, Hermann B.; Zheng, Xiaoming; Gatenby, Robert; Bearer, Elaine L.; Cristini, Vittorio
2007-01-01
Empirical evidence and theoretical studies suggest that the phenotype, i.e., the cellular- and molecular-scale dynamics (including proliferation rate and adhesiveness) that arise from microenvironmental factors and gene expression and that govern tumor growth and invasiveness, also determines gross tumor-scale morphology. It has been difficult to quantify the relative effect of these links on disease progression and prognosis using conventional clinical and experimental methods and observables. As a result, successful individualized treatment of highly malignant and invasive cancers, such as glioblastoma, via surgical resection and chemotherapy cannot be offered, and outcomes are generally poor. What is needed is a deterministic, quantifiable method to enable understanding of the connections between phenotype and tumor morphology. Here, we critically review advantages and disadvantages of recent computational modeling efforts (e.g., continuum, discrete, and cellular automata models) that have pursued this understanding. Based on this assessment, we propose and discuss a multi-scale (i.e., from the molecular to the gross tumor scale) mathematical and computational "first-principle" approach based on mass conservation and other physical laws, such as employed in reaction-diffusion systems. Model variables describe known characteristics of tumor behavior, and parameters and functional relationships across scales are informed from in vitro, in vivo and ex vivo biology. We demonstrate that this methodology, once coupled to tumor imaging and tumor biopsy or cell culture data, should enable prediction of tumor growth and therapy outcome through quantification of the relation between the underlying dynamics and morphological characteristics. In particular, morphologic stability analysis of this mathematical model reveals that tumor cell patterning at the tumor-host interface is regulated by cell proliferation, adhesion and other phenotypic characteristics: histopathology information on the tumor boundary can be input to the mathematical model and used as a phenotype-diagnostic tool, and thus to predict collective and individual tumor cell invasion of the surrounding host. This approach further provides a means to deterministically test effects of novel and hypothetical therapy strategies on tumor behavior. PMID:17629503
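A minimal one-dimensional sketch in the spirit of the reaction-diffusion formulation discussed above, with tumor cell density obeying du/dt = D u_xx + rho u (1 - u); the coefficients are illustrative, not values from the paper.

```python
# Explicit finite-difference step for a Fisher-type reaction-diffusion model
# of tumor cell density: diffusion (motility) plus logistic proliferation.
import numpy as np

nx, dx, dt = 100, 0.1, 0.001
D, rho = 0.1, 1.0                      # motility and proliferation rate
u = np.zeros(nx)
u[nx // 2] = 1.0                       # seed tumor at the domain center

for _ in range(5000):
    lap = (np.roll(u, 1) - 2 * u + np.roll(u, -1)) / dx**2   # periodic Laplacian
    u += dt * (D * lap + rho * u * (1.0 - u))

print("invaded fraction of the domain:", float((u > 0.1).mean()))
```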
Toward GEOS-6, A Global Cloud System Resolving Atmospheric Model
NASA Technical Reports Server (NTRS)
Putman, William M.
2010-01-01
NASA is committed to observing and understanding the weather and climate of our home planet through the use of multi-scale modeling systems and space-based observations. Global climate models have evolved to take advantage of the influx of multi- and many-core computing technologies and the availability of large clusters of multi-core microprocessors. GEOS-6 is a next-generation cloud-system-resolving atmospheric model that will place NASA at the forefront of scientific exploration of our atmosphere and climate. Model simulations with GEOS-6 will produce a realistic representation of our atmosphere on the scale of typical satellite observations, bringing visual comprehension of model results to a new level among climate enthusiasts. In preparation for GEOS-6, the agency's flagship Earth System Modeling Framework has been enhanced to support cutting-edge high-resolution global climate and weather simulations. Improvements include a cubed-sphere grid that exposes parallelism, a non-hydrostatic finite-volume dynamical core, and algorithms designed for co-processor technologies, among others. GEOS-6 represents a fundamental advancement in the capability of global Earth system models. The ability to directly compare global simulations at the resolution of spaceborne satellite images will lead to algorithm improvements and better utilization of space-based observations within the GEOS data assimilation system.
Logic circuits based on molecular spider systems.
Mo, Dandan; Lakin, Matthew R; Stefanovic, Darko
2016-08-01
Spatial locality brings the advantages of computation speed-up and sequence reuse to molecular computing. In particular, molecular walkers that undergo localized reactions are of interest for implementing logic computations at the nanoscale. We use molecular spider walkers to implement logic circuits. We develop an extended multi-spider model with a dynamic environment wherein signal transmission is triggered via localized reactions, and use this model to implement three basic gates (AND, OR, NOT) and a cascading mechanism. We develop an algorithm to automatically generate the layout of the circuit. We use a kinetic Monte Carlo algorithm to simulate circuit computations, and we analyze circuit complexity: our design scales linearly with formula size and has a logarithmic time complexity. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Biomanufacturing: a US-China National Science Foundation-sponsored workshop.
Sun, Wei; Yan, Yongnian; Lin, Feng; Spector, Myron
2006-05-01
A recent US-China National Science Foundation-sponsored workshop on biomanufacturing reviewed the state of the art of an array of new technologies for producing scaffolds for tissue engineering, providing precision multi-scale control of material, architecture, and cells. One broad category of such techniques has been termed solid freeform fabrication. The techniques in this category include: stereolithography, selective laser sintering, single- and multiple-nozzle deposition and fused deposition modeling, and three-dimensional printing. The precise and repetitive placement of material and cells in a three-dimensional construct at the micrometer length scale demands computer control. These novel computer-controlled scaffold production techniques, when coupled with computer-based imaging and structural modeling methods for the production of the templates for the scaffolds, define an emerging field of computer-aided tissue engineering. In formulating the questions that remain to be answered and discussing the knowledge required to further advance the field, the workshop provided a basis for recommendations for future work.
Recent Advances in Transferable Coarse-Grained Modeling of Proteins
Kar, Parimal; Feig, Michael
2017-01-01
Computer simulations are indispensable tools for studying the structure and dynamics of biological macromolecules. Biochemical processes occur on different scales of length and time. Atomistic simulations cannot cover the relevant spatiotemporal scales at which cellular processes occur. To address this challenge, coarse-grained (CG) modeling of biological systems is employed. Over the last few years, many CG models for proteins have been developed. However, many of them are not transferable across different systems and different environments. In this review, we discuss those CG protein models that are transferable and that retain chemical specificity. We restrict ourselves to CG models of soluble proteins only. We also briefly review recent progress in multi-scale hybrid all-atom/coarse-grained simulations of proteins. PMID:25443957
Anti-arrhythmic strategies for atrial fibrillation
Grandi, Eleonora; Maleckar, Mary M.
2016-01-01
Atrial fibrillation (AF), the most common cardiac arrhythmia, is associated with increased risk of cerebrovascular stroke, and with several other pathologies, including heart failure. Current therapies for AF are targeted at reducing risk of stroke (anticoagulation) and tachycardia-induced cardiomyopathy (rate or rhythm control). Rate control, typically achieved by atrioventricular nodal blocking drugs, is often insufficient to alleviate symptoms. Rhythm control approaches include antiarrhythmic drugs, electrical cardioversion, and ablation strategies. Here, we offer several examples of how computational modeling can provide a quantitative framework for integrating multi-scale data to: (a) gain insight into multi-scale mechanisms of AF; (b) identify and test pharmacological and electrical therapies and interventions; and (c) support clinical decisions. We review how modeling approaches have evolved and contributed to the research pipeline and preclinical development, and discuss future directions and challenges in the field. PMID:27612549
SPARK: A Framework for Multi-Scale Agent-Based Biomedical Modeling.
Solovyev, Alexey; Mikheev, Maxim; Zhou, Leming; Dutta-Moscato, Joyeeta; Ziraldo, Cordelia; An, Gary; Vodovotz, Yoram; Mi, Qi
2010-01-01
Multi-scale modeling of complex biological systems remains a central challenge in the systems biology community. A method of dynamic knowledge representation known as agent-based modeling enables the study of higher level behavior emerging from discrete events performed by individual components. With the advancement of computer technology, agent-based modeling has emerged as an innovative technique to model the complexities of systems biology. In this work, the authors describe SPARK (Simple Platform for Agent-based Representation of Knowledge), a framework for agent-based modeling specifically designed for systems-level biomedical model development. SPARK is a stand-alone application written in Java. It provides a user-friendly interface, and a simple programming language for developing Agent-Based Models (ABMs). SPARK has the following features specialized for modeling biomedical systems: 1) continuous space that can simulate real physical space; 2) flexible agent size and shape that can represent the relative proportions of various cell types; 3) multiple spaces that can concurrently simulate and visualize multiple scales in biomedical models; 4) a convenient graphical user interface. Existing ABMs of diabetic foot ulcers and acute inflammation were implemented in SPARK. Models of identical complexity were run in both NetLogo and SPARK; the SPARK-based models ran two to three times faster.
Todd, Robert G.; van der Zee, Lucas
2016-01-01
The eukaryotic cell cycle is robustly designed, with interacting molecules organized within a definite topology that ensures temporal precision of its phase transitions. Its underlying dynamics are regulated by molecular switches, for which remarkable insights have been provided by genetic and molecular biology efforts. In a number of cases, this information has been made predictive through computational models. These models have allowed for the identification of novel molecular mechanisms, later validated experimentally. Logical modeling represents one of the youngest approaches to addressing cell cycle regulation. We summarize the advances that this type of modeling has achieved in reproducing and predicting cell cycle dynamics. Furthermore, we present the challenge that this type of modeling is now ready to tackle: its integration with intracellular networks, and their formalisms, to understand the crosstalk underlying systems-level properties, the ultimate aim of multi-scale models. Specifically, we discuss and illustrate how such an integration may be realized by integrating a minimal logical model of the cell cycle with a metabolic network. PMID:27993914
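A toy synchronous Boolean network of the kind logical modeling employs; the three-node rule set below is invented for illustration and is not a published cell-cycle model.

```python
# Synchronous Boolean update of a small hypothetical regulatory network.
rules = {
    "CycB": lambda s: not s["Cdh1"],           # cyclin inhibited by Cdh1
    "Cdh1": lambda s: not s["CycB"] or s["p"],
    "p":    lambda s: s["CycB"],               # activator lagging behind CycB
}

state = {"CycB": True, "Cdh1": False, "p": False}
for step in range(8):
    state = {node: f(state) for node, f in rules.items()}  # all nodes at once
    print(step, state)                          # reaches a fixed point here
```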
A Stratified Acoustic Model Accounting for Phase Shifts for Underwater Acoustic Networks
Wang, Ping; Zhang, Lin; Li, Victor O. K.
2013-01-01
Accurate acoustic channel models are critical for the study of underwater acoustic networks. Existing models include physics-based models and empirical approximation models. The former enjoy good accuracy, but incur heavy computational load, rendering them impractical in large networks. On the other hand, the latter are computationally inexpensive but inaccurate since they do not account for the complex effects of boundary reflection losses, the multi-path phenomenon and ray bending in the stratified ocean medium. In this paper, we propose a Stratified Acoustic Model (SAM) based on frequency-independent geometrical ray tracing, accounting for each ray's phase shift during the propagation. It is a feasible channel model for large scale underwater acoustic network simulation, allowing us to predict the transmission loss with much lower computational complexity than the traditional physics-based models. The accuracy of the model is validated via comparisons with the experimental measurements in two different oceans. Satisfactory agreements with the measurements and with other computationally intensive classical physics-based models are demonstrated. PMID:23669708
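The phase-accounting idea can be made concrete: eigenray contributions are summed coherently, so accumulated phase shifts can reinforce or cancel the total field. The eigenray table below is invented; SAM itself derives amplitudes, delays, and phase shifts from stratified ray tracing.

```python
# Coherent summation of eigenray arrivals into a transmission loss estimate.
import cmath, math

f = 10_000.0                          # source frequency, Hz
eigenrays = [                         # (amplitude, travel time s, extra phase rad)
    (1.00, 0.6667, 0.0),              # direct path
    (0.55, 0.6691, math.pi),          # surface bounce: ~pi phase flip
    (0.40, 0.6712, 0.3),              # bottom bounce: lossy, small phase shift
]

field = sum(a * cmath.exp(1j * (2 * math.pi * f * t + phi))
            for a, t, phi in eigenrays)
tl = -20 * math.log10(abs(field))     # transmission loss re the unit source, dB
print(f"coherent TL = {tl:.1f} dB")
```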
NASA Astrophysics Data System (ADS)
Bishay, Peter L.
This study presents a new family of highly accurate and efficient computational methods, named "Multi-Physics Computational Grains" (MPCGs), for modeling the multi-physics of multifunctional materials and composites at the micro-scale. Each "mathematical grain" has a random polygonal/polyhedral geometrical shape that resembles the natural shapes of material grains at the micro-scale, where each grain is surrounded by an arbitrary number of neighboring grains. The physics incorporated in this study include: Linear Elasticity, Electrostatics, Magnetostatics, Piezoelectricity, Piezomagnetism and Ferroelectricity. However, the methods proposed here can be extended to include more physics (thermo-elasticity, pyroelectricity, electric conduction, heat conduction, etc.) in their formulation, different analysis types (dynamics, fracture, fatigue, etc.), nonlinearities, and different defect shapes, and some of the 2D methods can also be extended to 3D formulations. We present "Multi-Region Trefftz Collocation Grains" (MTCGs) as a simple and efficient method for direct and inverse problems, "Trefftz-Lekhnitskii Computational Grains" (TLCGs) for modeling porous and composite smart materials, "Hybrid Displacement Computational Grains" (HDCGs) as a general method for modeling multifunctional materials and composites, and finally "Radial-Basis-Functions Computational Grains" (RBFCGs) for modeling functionally-graded materials, magneto-electro-elastic (MEE) materials and the switching phenomena in ferroelectric materials. The first three proposed methods are suitable for direct numerical simulation (DNS) of the micromechanics of smart composite/porous materials with non-symmetrical arrangements of voids/inclusions, and require minimal effort in meshing and minimal computation time, since each grain can represent the matrix of a composite and can include a pore or an inclusion. The last three methods provide a stiffness matrix in their formulation and hence can be readily implemented in a finite element routine. Several numerical examples are provided to show the ability and accuracy of the proposed methods to determine the effective material properties of different types of piezo-composites and to detect the damage-prone sites in a microstructure under certain loading types. The last method (RBFCGs) is also suitable for modeling the switching phenomena in ferro-materials (ferroelectric, ferromagnetic, etc.) after incorporating a suitable nonlinear constitutive model and a switching criterion. Since the interaction between grains during loading cycles has a profound influence on the switching phenomena, it is important to simulate the grains with geometrical shapes that are similar to the real shapes of grains as seen in lab experiments. Hence the use of 3D RBFCGs, which allow for the presence of all six variants of the constitutive relations, together with randomly generated crystallographic axes in each grain, as done in the present study, is considered to be the most realistic model that can be used for the direct mesoscale numerical simulation (DMNS) of polycrystalline ferro-materials.
Regional-scale calculation of the LS factor using parallel processing
NASA Astrophysics Data System (ADS)
Liu, Kai; Tang, Guoan; Jiang, Ling; Zhu, A.-Xing; Yang, Jianyi; Song, Xiaodong
2015-05-01
With the increase of data resolution and the increasing application of USLE over large areas, the existing serial implementations of algorithms for computing the LS factor are becoming a bottleneck. In this paper, a parallel processing model based on the message passing interface (MPI) is presented for the calculation of the LS factor, so that massive datasets at a regional scale can be processed efficiently. The parallel model contains algorithms for calculating flow direction, flow accumulation, drainage network, slope, slope length and the LS factor. According to the existence of data dependence, the algorithms are divided into local algorithms and global algorithms. Parallel strategies are designed according to the characteristics of the algorithms, including a decomposition method that maintains the integrity of the results, an optimized workflow that reduces the time spent exporting unnecessary intermediate data, and a buffer-communication-computation strategy that improves communication efficiency. Experiments on a multi-node system show that the proposed parallel model allows efficient calculation of the LS factor at a regional scale with a massive dataset.
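For orientation, here is a per-cell LS computation under one common USLE formulation (an assumption of this sketch, not necessarily the formula used in the paper), vectorized with numpy over a raster tile of invented data.

```python
# Per-cell LS factor under a common USLE form: L = (lambda/22.13)^m and
# S = 65.41 sin^2(theta) + 4.56 sin(theta) + 0.065, with m chosen by slope class.
import numpy as np

rng = np.random.default_rng(1)
slope_len = rng.uniform(5, 120, size=(512, 512))   # flow-path length, m
slope_pct = rng.uniform(0, 40, size=(512, 512))    # slope gradient, percent

theta = np.arctan(slope_pct / 100.0)               # slope angle, rad
m = np.select([slope_pct < 1, slope_pct < 3, slope_pct < 5],
              [0.2, 0.3, 0.4], default=0.5)        # slope-length exponent
L = (slope_len / 22.13) ** m
S = 65.41 * np.sin(theta) ** 2 + 4.56 * np.sin(theta) + 0.065
LS = L * S
print("mean LS over the tile:", float(LS.mean()))
```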
DOE Office of Scientific and Technical Information (OSTI.GOV)
Trahan, Christine Alexandra; Garcia, Omar Fidel; Martino, Anthony A.
2010-08-01
The search is on for new renewable energy, and algal-derived biofuel is a critical piece in the multi-faceted renewable energy puzzle. Algae have 30x more oil than any terrestrial oilseed crop, an ideal composition for biodiesel, and no competition with food crops; they can be grown in waste water and are cleaner than petroleum-based fuels. This project addresses three goals: (1) conduct fundamental research into the effects that dynamic biotic and abiotic stressors have on algal growth and lipid production (genomics/transcriptomics, bioanalytical spectroscopy/chemical imaging); (2) discover spectral signatures for algal health at the benchtop and greenhouse scale (remote sensing, bioanalytical spectroscopy); and (3) develop a computational model for algal growth and productivity at the raceway scale (computational modeling).
2016-01-01
Background: Computer networks have a tendency to grow at an unprecedented scale. Modern networks involve not only computers but also a wide variety of other interconnected devices, ranging from mobile phones to household items fitted with sensors. This vision of the "Internet of Things" (IoT) implies an inherent difficulty in modeling problems. Purpose: It is practically impossible to implement and test all scenarios for large-scale and complex adaptive communication networks as part of Complex Adaptive Communication Networks and Environments (CACOONS). The goal of this study is to explore the use of agent-based modeling as part of the Cognitive Agent-based Computing (CABC) framework to model a complex communication network problem. Method: We use Exploratory Agent-based Modeling (EABM), as part of the CABC framework, to develop an autonomous multi-agent architecture for managing carbon footprint in a corporate network. To evaluate the application of complexity in practical scenarios, we have also introduced a company-defined computer usage policy. Results: The conducted experiments demonstrated two important results: first, a CABC-based modeling approach such as agent-based modeling can be an effective approach to modeling complex problems in the domain of IoT; second, the specific problem of managing the carbon footprint can be solved using a multi-agent system approach. PMID:26812235
A blended continuous–discontinuous finite element method for solving the multi-fluid plasma model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sousa, E.M., E-mail: sousae@uw.edu; Shumlak, U., E-mail: shumlak@uw.edu
The multi-fluid plasma model represents electrons, multiple ion species, and multiple neutral species as separate fluids that interact through short-range collisions and long-range electromagnetic fields. The model spans a large range of temporal and spatial scales, which renders the model stiff and presents numerical challenges. To address the large range of timescales, a blended continuous and discontinuous Galerkin method is proposed, where the massive ion and neutral species are modeled using an explicit discontinuous Galerkin method while the electrons and electromagnetic fields are modeled using an implicit continuous Galerkin method. This approach is able to capture large-gradient ion and neutral physics like shock formation, while resolving high-frequency electron dynamics in a computationally efficient manner. The details of the Blended Finite Element Method (BFEM) are presented. The numerical method is benchmarked for accuracy and tested using a two-fluid one-dimensional soliton problem and an electromagnetic shock problem. The results are compared to conventional finite volume and finite element methods, and demonstrate that the BFEM is particularly effective in resolving physics in stiff problems involving realistic physical parameters, including realistic electron mass and speed of light. The benefit is illustrated by computing a three-fluid plasma application that demonstrates species separation in multi-component plasmas.
Bordbar, Aarash; Palsson, Bernhard O.
2016-01-01
Progress in systems medicine brings promise to addressing patient heterogeneity and individualized therapies. Recently, genome-scale models of metabolism have been shown to provide insight into the mechanistic link between drug therapies and systems-level off-target effects while being expanded to explicitly include the three-dimensional structure of proteins. The integration of these molecular-level details, such as the physical, structural, and dynamical properties of proteins, notably expands the computational description of biochemical network-level properties and the possibility of understanding and predicting whole cell phenotypes. In this study, we present a multi-scale modeling framework that describes biological processes which range in scale from atomistic details to an entire metabolic network. Using this approach, we can understand how genetic variation, which impacts the structure and reactivity of a protein, influences both native and drug-induced metabolic states. As a proof-of-concept, we study three enzymes (catechol-O-methyltransferase, glucose-6-phosphate dehydrogenase, and glyceraldehyde-3-phosphate dehydrogenase) and their respective genetic variants which have clinically relevant associations. Using all-atom molecular dynamic simulations enables the sampling of long timescale conformational dynamics of the proteins (and their mutant variants) in complex with their respective native metabolites or drug molecules. We find that changes in a protein’s structure due to a mutation influences protein binding affinity to metabolites and/or drug molecules, and inflicts large-scale changes in metabolism. PMID:27467583
Mih, Nathan; Brunk, Elizabeth; Bordbar, Aarash; Palsson, Bernhard O
2016-07-01
Progress in systems medicine brings promise to addressing patient heterogeneity and individualized therapies. Recently, genome-scale models of metabolism have been shown to provide insight into the mechanistic link between drug therapies and systems-level off-target effects while being expanded to explicitly include the three-dimensional structure of proteins. The integration of these molecular-level details, such as the physical, structural, and dynamical properties of proteins, notably expands the computational description of biochemical network-level properties and the possibility of understanding and predicting whole cell phenotypes. In this study, we present a multi-scale modeling framework that describes biological processes which range in scale from atomistic details to an entire metabolic network. Using this approach, we can understand how genetic variation, which impacts the structure and reactivity of a protein, influences both native and drug-induced metabolic states. As a proof-of-concept, we study three enzymes (catechol-O-methyltransferase, glucose-6-phosphate dehydrogenase, and glyceraldehyde-3-phosphate dehydrogenase) and their respective genetic variants which have clinically relevant associations. Using all-atom molecular dynamic simulations enables the sampling of long timescale conformational dynamics of the proteins (and their mutant variants) in complex with their respective native metabolites or drug molecules. We find that changes in a protein's structure due to a mutation influences protein binding affinity to metabolites and/or drug molecules, and inflicts large-scale changes in metabolism.
NASA Astrophysics Data System (ADS)
Engwirda, Darren
2017-06-01
An algorithm for the generation of non-uniform, locally orthogonal staggered unstructured spheroidal grids is described. This technique is designed to generate very high-quality staggered Voronoi-Delaunay meshes appropriate for general circulation modelling on the sphere, including applications to atmospheric simulation, ocean-modelling and numerical weather prediction. Using a recently developed Frontal-Delaunay refinement technique, a method for the construction of high-quality unstructured spheroidal Delaunay triangulations is introduced. A locally orthogonal polygonal grid, derived from the associated Voronoi diagram, is computed as the staggered dual. It is shown that use of the Frontal-Delaunay refinement technique allows for the generation of very high-quality unstructured triangulations, satisfying a priori bounds on element size and shape. Grid quality is further improved through the application of hill-climbing-type optimisation techniques. Overall, the algorithm is shown to produce grids with very high element quality and smooth grading characteristics, while imposing relatively low computational expense. A selection of uniform and non-uniform spheroidal grids appropriate for high-resolution, multi-scale general circulation modelling are presented. These grids are shown to satisfy the geometric constraints associated with contemporary unstructured C-grid-type finite-volume models, including the Model for Prediction Across Scales (MPAS-O). The use of user-defined mesh-spacing functions to generate smoothly graded, non-uniform grids for multi-resolution-type studies is discussed in detail.
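A small planar analogue of the staggered primal-dual pairing, using SciPy's Delaunay and Voronoi routines; the paper's method additionally works on the sphere and enforces a priori quality bounds, which this sketch does not attempt.

```python
# Primal Delaunay triangulation and its staggered Voronoi dual in the plane.
import numpy as np
from scipy.spatial import Delaunay, Voronoi

pts = np.random.default_rng(0).uniform(0, 1, size=(200, 2))
tri = Delaunay(pts)          # primal triangulation (cell vertices)
vor = Voronoi(pts)           # staggered dual: polygonal control volumes

print("triangles:", len(tri.simplices), "| Voronoi regions:", len(vor.regions))
```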
TOPICAL REVIEW: Advances and challenges in computational plasma science
NASA Astrophysics Data System (ADS)
Tang, W. M.; Chan, V. S.
2005-02-01
Scientific simulation, which provides a natural bridge between theory and experiment, is an essential tool for understanding complex plasma behaviour. Recent advances in simulations of magnetically confined plasmas are reviewed in this paper, with illustrative examples, chosen from associated research areas such as microturbulence, magnetohydrodynamics and other topics. Progress has been stimulated, in particular, by the exponential growth of computer speed along with significant improvements in computer technology. The advances in both particle and fluid simulations of fine-scale turbulence and large-scale dynamics have produced increasingly good agreement between experimental observations and computational modelling. This was enabled by two key factors: (a) innovative advances in analytic and computational methods for developing reduced descriptions of physics phenomena spanning widely disparate temporal and spatial scales and (b) access to powerful new computational resources. Excellent progress has been made in developing codes for which computer run-time and problem-size scale well with the number of processors on massively parallel processors (MPPs). Examples include the effective usage of the full power of multi-teraflop (multi-trillion floating point computations per second) MPPs to produce three-dimensional, general geometry, nonlinear particle simulations that have accelerated advances in understanding the nature of turbulence self-regulation by zonal flows. These calculations, which typically utilized billions of particles for thousands of time-steps, would not have been possible without access to powerful present generation MPP computers and the associated diagnostic and visualization capabilities. In looking towards the future, the current results from advanced simulations provide great encouragement for being able to include increasingly realistic dynamics to enable deeper physics insights into plasmas in both natural and laboratory environments. This should produce the scientific excitement which will help to (a) stimulate enhanced cross-cutting collaborations with other fields and (b) attract the bright young talent needed for the future health of the field of plasma science.
Multi-petascale highly efficient parallel supercomputer
Asaad, Sameh; Bellofatto, Ralph E.; Blocksome, Michael A.; Blumrich, Matthias A.; Boyle, Peter; Brunheroto, Jose R.; Chen, Dong; Cher, Chen -Yong; Chiu, George L.; Christ, Norman; Coteus, Paul W.; Davis, Kristan D.; Dozsa, Gabor J.; Eichenberger, Alexandre E.; Eisley, Noel A.; Ellavsky, Matthew R.; Evans, Kahn C.; Fleischer, Bruce M.; Fox, Thomas W.; Gara, Alan; Giampapa, Mark E.; Gooding, Thomas M.; Gschwind, Michael K.; Gunnels, John A.; Hall, Shawn A.; Haring, Rudolf A.; Heidelberger, Philip; Inglett, Todd A.; Knudson, Brant L.; Kopcsay, Gerard V.; Kumar, Sameer; Mamidala, Amith R.; Marcella, James A.; Megerian, Mark G.; Miller, Douglas R.; Miller, Samuel J.; Muff, Adam J.; Mundy, Michael B.; O'Brien, John K.; O'Brien, Kathryn M.; Ohmacht, Martin; Parker, Jeffrey J.; Poole, Ruth J.; Ratterman, Joseph D.; Salapura, Valentina; Satterfield, David L.; Senger, Robert M.; Smith, Brian; Steinmacher-Burow, Burkhard; Stockdell, William M.; Stunkel, Craig B.; Sugavanam, Krishnan; Sugawara, Yutaka; Takken, Todd E.; Trager, Barry M.; Van Oosten, James L.; Wait, Charles D.; Walkup, Robert E.; Watson, Alfred T.; Wisniewski, Robert W.; Wu, Peng
2015-07-14
A Multi-Petascale Highly Efficient Parallel Supercomputer delivers 100 petaOPS-scale computing at decreased cost, power and footprint, and allows for a maximum packaging density of processing nodes from an interconnect point of view. The Supercomputer exploits technological advances in VLSI that enable a computing model where many processors can be integrated into a single Application Specific Integrated Circuit (ASIC). Each computing node comprises a system-on-chip ASIC utilizing four or more processors integrated into one die, each having full access to all system resources, enabling adaptive partitioning of the processors to functions such as compute or messaging I/O on an application-by-application basis and, preferably, adaptive partitioning of functions in accordance with various algorithmic phases within an application; if I/O or other processors are underutilized, they can participate in computation or communication. Nodes are interconnected by a five-dimensional torus network with DMA that maximizes the throughput of packet communications between nodes and minimizes latency.
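To make the five-dimensional torus concrete, the sketch below computes minimal hop counts between nodes; the dimension extents are hypothetical stand-ins, not the machine's actual geometry.

```python
# Minimal-hop distance between two nodes in a 5-D torus.
# Illustrative only: the dimension sizes are hypothetical.
from itertools import product

DIMS = (4, 4, 4, 8, 2)  # hypothetical extents of the five torus dimensions

def torus_hops(a, b, dims=DIMS):
    """Hops along each ring: take the shorter of the two directions."""
    return sum(min((x - y) % n, (y - x) % n) for x, y, n in zip(a, b, dims))

# Network diameter = sum of floor(n/2) over dimensions.
diameter = max(torus_hops((0,) * 5, b) for b in product(*[range(n) for n in DIMS]))
print(diameter)  # 2 + 2 + 2 + 4 + 1 = 11 hops worst case
```

The wrap-around links are why a torus keeps worst-case latency low: the diameter grows with the sum of half the dimension extents rather than with the full extents.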
Scheduling based on a dynamic resource connection
NASA Astrophysics Data System (ADS)
Nagiyev, A. E.; Botygin, I. A.; Shersntneva, A. I.; Konyaev, P. A.
2017-02-01
The practical use of distributed computing systems is associated with many problems, including organizing effective interaction between agents located at the nodes of the system, configuring each node to perform a specific task, effectively distributing the system's available information and computational resources, and controlling the multithreading that implements the logic of the research problems being solved. This article describes a method of computing load balancing in distributed automatic systems oriented toward multi-agent and multi-threaded data processing. A scheme for controlling the processing of requests from terminal devices is offered, providing effective dynamic scaling of computing power under peak load. Results of model experiments with the developed load-scheduling algorithm are set out. These results show the effectiveness of the algorithm even with a significant expansion in the number of connected nodes and growth of the distributed computing system's architecture.
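As a rough illustration of the dispatching idea (not the authors' algorithm), the sketch below sends each request to the least-loaded node and attaches a new node when average load crosses a threshold; class names, capacities and thresholds are invented.

```python
# Toy load balancer: least-loaded dispatch plus dynamic scale-out under load.
import heapq

class Scheduler:
    def __init__(self, n_nodes, scale_threshold=0.8, capacity=100):
        self.capacity = capacity                    # work units per node
        self.scale_threshold = scale_threshold
        self.heap = [(0, i) for i in range(n_nodes)]  # (load, node_id)
        heapq.heapify(self.heap)

    def dispatch(self, cost):
        load, node = heapq.heappop(self.heap)       # pick least-loaded node
        heapq.heappush(self.heap, (load + cost, node))
        if self._avg_load() > self.scale_threshold: # peak load: attach a node
            self._add_node()
        return node

    def _avg_load(self):
        return sum(l for l, _ in self.heap) / (self.capacity * len(self.heap))

    def _add_node(self):
        heapq.heappush(self.heap, (0, len(self.heap)))

sched = Scheduler(n_nodes=4)
assignments = [sched.dispatch(cost=30) for _ in range(20)]
```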
NASA Astrophysics Data System (ADS)
Ercan, Mehmet Bulent
Watershed-scale hydrologic models are used for a variety of applications from flood prediction, to drought analysis, to water quality assessments. A particular challenge in applying these models is calibration of the model parameters, many of which are difficult to measure at the watershed scale. A primary goal of this dissertation is to contribute new computational methods and tools for calibration of watershed-scale hydrologic models and the Soil and Water Assessment Tool (SWAT) model, in particular. SWAT is a physically-based, watershed-scale hydrologic model developed to predict the impact of land management practices on water quality and quantity. The dissertation follows a manuscript format, meaning it comprises three separate but interrelated research studies. The first two research studies focus on SWAT model calibration, and the third research study presents an application of the new calibration methods and tools to study climate change impacts on water resources in the Upper Neuse Watershed of North Carolina using SWAT. The objective of the first two studies is to overcome computational challenges associated with calibration of SWAT models. The first study evaluates a parallel SWAT calibration tool built using the Windows Azure cloud environment and a parallel version of the Dynamically Dimensioned Search (DDS) calibration method modified to run in Azure. The calibration tool was tested for six model scenarios constructed using three watersheds of increasing size (the Eno, Upper Neuse, and Neuse) for both a 2 year and 10 year simulation duration. Leveraging the cloud as an on-demand computing resource allowed for a significantly reduced calibration time such that calibration of the Neuse watershed went from taking 207 hours on a personal computer to only 3.4 hours using 256 cores in the Azure cloud. The second study aims at increasing SWAT model calibration efficiency by creating an open source, multi-objective calibration tool using the Non-Dominated Sorting Genetic Algorithm II (NSGA-II). This tool was demonstrated through an application for the Upper Neuse Watershed in North Carolina, USA. The objective functions used for the calibration were Nash-Sutcliffe (E) and Percent Bias (PB), and the objective sites were the Flat, Little, and Eno watershed outlets. The results show that the use of multi-objective calibration algorithms for SWAT calibration improved model performance, especially in terms of minimizing PB, compared to single objective model calibration. The third study builds upon the first two studies by leveraging the new calibration methods and tools to study future climate impacts on the Upper Neuse watershed. Statistically downscaled outputs from eight Global Circulation Models (GCMs) were used for both low and high emission scenarios to drive a well calibrated SWAT model of the Upper Neuse watershed. The objective of the study was to understand the potential hydrologic response of the watershed, which serves as a public water supply for the growing Research Triangle Park region of North Carolina, under projected climate change scenarios. The future climate change scenarios, in general, indicate an increase in precipitation and temperature for the watershed in coming decades. The SWAT simulations using the future climate scenarios, in general, suggest an increase in soil water and water yield, and a decrease in evapotranspiration within the Upper Neuse watershed.
In summary, this dissertation advances the field of watershed-scale hydrologic modeling by (i) providing some of the first work to apply cloud computing for the computationally-demanding task of model calibration; (ii) providing a new, open source library that can be used by SWAT modelers to perform multi-objective calibration of their models; and (iii) advancing understanding of climate change impacts on water resources for an important watershed in the Research Triangle Park region of North Carolina. The third study leveraged the methodological advances presented in the first two studies. Therefore, the dissertation contains three independent but interrelated studies that collectively advance the field of watershed-scale hydrologic modeling and analysis.
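Since both calibration studies build on the Dynamically Dimensioned Search, a minimal sketch of DDS may help: the set of perturbed parameters shrinks as iterations proceed, moving the search from global to local. The objective below is a toy stand-in for a full SWAT run, and the bounds are placeholders.

```python
# Minimal Dynamically Dimensioned Search (DDS) sketch.
import math, random

def dds(objective, lo, hi, max_iter=1000, r=0.2, seed=0):
    rng = random.Random(seed)
    best = [rng.uniform(a, b) for a, b in zip(lo, hi)]
    fbest = objective(best)
    for i in range(1, max_iter):
        # inclusion probability decays with iteration count
        p = 1.0 - math.log(i) / math.log(max_iter)
        idx = [j for j in range(len(best)) if rng.random() < p]
        if not idx:
            idx = [rng.randrange(len(best))]        # always perturb at least one
        cand = best[:]
        for j in idx:
            cand[j] += rng.gauss(0.0, r * (hi[j] - lo[j]))
            if cand[j] < lo[j]:                     # reflect at the lower bound
                cand[j] = min(2 * lo[j] - cand[j], hi[j])
            elif cand[j] > hi[j]:                   # reflect at the upper bound
                cand[j] = max(2 * hi[j] - cand[j], lo[j])
        f = objective(cand)
        if f < fbest:                               # greedy acceptance
            best, fbest = cand, f
    return best, fbest

# Toy objective standing in for a SWAT model run:
best, fbest = dds(lambda x: sum(v * v for v in x), lo=[-5.0] * 4, hi=[5.0] * 4)
```

Because each candidate evaluation is independent of the others within a generation-style variant, the algorithm parallelizes naturally, which is what the cloud-based tool exploits.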
Multi-phase CFD modeling of solid sorbent carbon capture system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ryan, E. M.; DeCroix, D.; Breault, R.
2013-07-01
Computational fluid dynamics (CFD) simulations are used to investigate a low temperature post-combustion carbon capture reactor. The CFD models are based on a small scale solid sorbent carbon capture reactor design from ADA-ES and Southern Company. The reactor is a fluidized bed design based on a silica-supported amine sorbent. CFD models using both Eulerian–Eulerian and Eulerian–Lagrangian multi-phase modeling methods are developed to investigate the hydrodynamics and adsorption of carbon dioxide in the reactor. Models developed in both FLUENT® and BARRACUDA are presented to explore the strengths and weaknesses of state of the art CFD codes for modeling multi-phase carbon capture reactors. The results of the simulations show that the FLUENT® Eulerian–Lagrangian simulations (DDPM) are unstable for the given reactor design, while the BARRACUDA Eulerian–Lagrangian model is able to simulate the system given appropriate simplifying assumptions. FLUENT® Eulerian–Eulerian simulations also provide a stable solution for the carbon capture reactor given the appropriate simplifying assumptions.
Multi-Phase CFD Modeling of Solid Sorbent Carbon Capture System
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ryan, Emily M.; DeCroix, David; Breault, Ronald W.
2013-07-30
Computational fluid dynamics (CFD) simulations are used to investigate a low temperature post-combustion carbon capture reactor. The CFD models are based on a small scale solid sorbent carbon capture reactor design from ADA-ES and Southern Company. The reactor is a fluidized bed design based on a silica-supported amine sorbent. CFD models using both Eulerian-Eulerian and Eulerian-Lagrangian multi-phase modeling methods are developed to investigate the hydrodynamics and adsorption of carbon dioxide in the reactor. Models developed in both FLUENT® and BARRACUDA are presented to explore the strengths and weaknesses of state of the art CFD codes for modeling multi-phase carbon capture reactors. The results of the simulations show that the FLUENT® Eulerian-Lagrangian simulations (DDPM) are unstable for the given reactor design, while the BARRACUDA Eulerian-Lagrangian model is able to simulate the system given appropriate simplifying assumptions. FLUENT® Eulerian-Eulerian simulations also provide a stable solution for the carbon capture reactor given the appropriate simplifying assumptions.
Large-Scale, Parallel, Multi-Sensor Atmospheric Data Fusion Using Cloud Computing
NASA Astrophysics Data System (ADS)
Wilson, B. D.; Manipon, G.; Hua, H.; Fetzer, E.
2013-05-01
NASA's Earth Observing System (EOS) is an ambitious facility for studying global climate change. The mandate now is to combine measurements from the instruments on the "A-Train" platforms (AIRS, AMSR-E, MODIS, MISR, MLS, and CloudSat) and other Earth probes to enable large-scale studies of climate change over decades. Moving to multi-sensor, long-duration analyses of important climate variables presents serious challenges for large-scale data mining and fusion. For example, one might want to compare temperature and water vapor retrievals from one instrument (AIRS) to another (MODIS), and to a model (ECMWF), stratify the comparisons using a classification of the "cloud scenes" from CloudSat, and repeat the entire analysis over 10 years of data. To efficiently assemble such datasets, we are utilizing Elastic Computing in the Cloud and parallel map/reduce-based algorithms. However, these are data-intensive computing problems, so the data transfer times and storage costs (for caching) are key issues. SciReduce is a Hadoop-like parallel analysis system, programmed in parallel python, that is designed from the ground up for Earth science. SciReduce executes inside VMWare images and scales to any number of nodes in the Cloud. Unlike Hadoop, SciReduce operates on bundles of named numeric arrays, which can be passed in memory or serialized to disk in netCDF4 or HDF5. Figure 1 shows the architecture of the full computational system, with SciReduce at the core. Multi-year datasets are automatically "sharded" by time and space across a cluster of nodes so that years of data (millions of files) can be processed in a massively parallel way. Input variables (arrays) are pulled on-demand into the Cloud using OPeNDAP URLs or other subsetting services, thereby minimizing the size of the cached input and intermediate datasets. We are using SciReduce to automate the production of multiple versions of a ten-year A-Train water vapor climatology under a NASA MEASURES grant. We will present the architecture of SciReduce, describe the achieved "clock time" speedups in fusing datasets on our own nodes and in the Cloud, and discuss the Cloud cost tradeoffs for storage, compute, and data transfer. We will also present a concept/prototype for staging NASA's A-Train Atmospheric datasets (Levels 2 & 3) in the Amazon Cloud so that any number of compute jobs can be executed "near" the multi-sensor data. Given such a system, multi-sensor climate studies over 10-20 years of data could be performed in an efficient way, with the researcher paying only his own Cloud compute bill.
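The shard-by-time map-reduce pattern described here can be sketched with plain multiprocessing standing in for SciReduce; the per-year arrays, variable names, and the mean reduction are all illustrative.

```python
# Shard a multi-year dataset by year, map in parallel, reduce to a climatology.
from multiprocessing import Pool
import numpy as np

def map_shard(year):
    # A real shard would pull its arrays on demand (e.g. via OPeNDAP);
    # here we fabricate a small monthly grid stack for one year.
    rng = np.random.default_rng(year)
    data = rng.normal(size=(12, 180, 360))       # 12 monthly global grids
    return data.sum(axis=0), data.shape[0]       # partial sum + month count

def reduce_partials(partials):
    total = sum(s for s, _ in partials)
    count = sum(n for _, n in partials)
    return total / count                         # mean monthly field over all years

if __name__ == "__main__":
    with Pool(4) as pool:
        partials = pool.map(map_shard, range(2003, 2013))  # ten yearly shards
    climatology = reduce_partials(partials)
```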
Uniscale multi-view registration using double dog-leg method
NASA Astrophysics Data System (ADS)
Chen, Chao-I.; Sargent, Dusty; Tsai, Chang-Ming; Wang, Yuan-Fang; Koppel, Dan
2009-02-01
3D computer models of body anatomy can have many uses in medical research and clinical practices. This paper describes a robust method that uses videos of body anatomy to construct multiple, partial 3D structures and then fuse them to form a larger, more complete computer model using the structure-from-motion framework. We employ the Double Dog-Leg (DDL) method, a trust-region based nonlinear optimization method, to jointly optimize the camera motion parameters (rotation and translation) and determine a global scale that all partial 3D structures should agree upon. These optimized motion parameters are used for constructing local structures, and the global scale is essential for multi-view registration after all these partial structures are built. In order to provide a good initial guess of the camera movement parameters and outlier-free 2D point correspondences for DDL, we also propose a two-stage scheme where multi-RANSAC with a normalized eight-point algorithm is first performed and then a few iterations of an over-determined five-point algorithm are used to polish the results. Our experimental results using colonoscopy video show that the proposed scheme always produces more accurate outputs than the standard RANSAC scheme. Furthermore, since we have obtained many reliable point correspondences, time-consuming and error-prone registration methods like the iterative closest points (ICP) based algorithms can be replaced by a simple rigid-body transformation solver when merging partial structures into a larger model.
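A minimal sketch of one double dog-leg trust-region step on a quadratic model m(p) = g·p + ½p·Bp is given below, assuming B is symmetric positive definite; this is a generic textbook form of the step, not the paper's exact implementation.

```python
# One double dog-leg step: blend the Cauchy (steepest-descent) point with a
# biased Newton point, clipped to the trust radius. B must be SPD here.
import numpy as np

def double_dogleg_step(g, B, radius):
    p_newton = np.linalg.solve(B, -g)                   # full Newton step
    if np.linalg.norm(p_newton) <= radius:
        return p_newton                                 # Newton step fits
    gBg = g @ B @ g
    p_cauchy = -(g @ g) / gBg * g                       # model minimizer along -g
    if np.linalg.norm(p_cauchy) >= radius:
        return radius * p_cauchy / np.linalg.norm(p_cauchy)
    gamma = (g @ g) ** 2 / (gBg * (g @ np.linalg.solve(B, g)))
    eta = 0.2 + 0.8 * gamma                             # bias toward Newton point
    # walk from the Cauchy point toward eta * p_newton until hitting the radius
    d = eta * p_newton - p_cauchy
    a, b, c = d @ d, 2 * p_cauchy @ d, p_cauchy @ p_cauchy - radius ** 2
    t = (-b + np.sqrt(b * b - 4 * a * c)) / (2 * a)
    return p_cauchy + t * d

step = double_dogleg_step(np.array([1.0, -2.0]), 4.0 * np.eye(2), radius=0.3)
```

In the paper's setting, g and B would come from the reprojection-error objective over the joint motion-plus-scale parameters.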
NASA Astrophysics Data System (ADS)
Comer, Joanne; Indiana Olbert, Agnieszka; Nash, Stephen; Hartnett, Michael
2017-02-01
Urban developments in coastal zones are often exposed to natural hazards such as flooding. In this research, a state-of-the-art, multi-scale nested flood (MSN_Flood) model is applied to simulate complex coastal-fluvial urban flooding due to the combined effects of tides, surges and river discharges. Cork city, on Ireland's southwest coast, is the study case. The flood modelling system comprises a cascade of four dynamically linked models that resolve the hydrodynamics of Cork Harbour and/or its sub-region at four scales: 90, 30, 6 and 2 m. Results demonstrate that the internalization of the nested boundary through the use of ghost cells, combined with a tailored adaptive interpolation technique, creates a highly dynamic moving boundary that permits flooding and drying of the nested boundary. This novel feature of MSN_Flood provides a high degree of choice regarding the location of the boundaries to the nested domain and therefore flexibility in model application. Through dynamic downscaling, the nested MSN_Flood model facilitates significant improvements in the accuracy of model output without incurring the computational expense of high spatial resolution over the entire model domain. The urban flood model provides full characteristics of water levels and flow regimes necessary for flood hazard identification and flood risk assessment.
A real-time multi-scale 2D Gaussian filter based on FPGA
NASA Astrophysics Data System (ADS)
Luo, Haibo; Gai, Xingqin; Chang, Zheng; Hui, Bin
2014-11-01
Multi-scale 2-D Gaussian filters are widely used in feature extraction (e.g. SIFT, edge detection), image segmentation, image enhancement, image noise removal, multi-scale shape description, etc. However, their computational complexity remains an issue for real-time image processing systems. To address this problem, we propose a framework for a multi-scale 2-D Gaussian filter based on an FPGA. First, a full-hardware architecture based on a parallel pipeline was designed to achieve a high throughput rate. Second, to save multipliers, the 2-D convolution is separated into two 1-D convolutions. Third, a dedicated first-in first-out memory, named CAFIFO (Column Addressing FIFO), was designed to avoid error propagation induced by glitches on the clock. Finally, a shared memory framework was designed to reduce memory costs. As a demonstration, we realized a three-scale 2-D Gaussian filter on a single ALTERA Cyclone III FPGA chip. Experimental results show that the proposed framework can compute multi-scale 2-D Gaussian filtering within one pixel clock period and is thus suitable for real-time image processing. Moreover, the main principle can be generalized to other convolution-based operators, such as the Gabor filter, the Sobel operator and so on.
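The separability trick is easy to show in software: an NxN 2-D Gaussian becomes two 1-D passes, cutting multiplies per pixel from N² to 2N. The NumPy sketch below mirrors the arithmetic the FPGA pipeline implements; image sizes and sigmas are arbitrary.

```python
# Separable 2-D Gaussian: convolve rows, then columns, with the same 1-D kernel.
import numpy as np

def gaussian_kernel_1d(sigma, radius):
    x = np.arange(-radius, radius + 1, dtype=float)
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()                      # normalize so the filter preserves mean

def separable_gaussian(img, sigma):
    k = gaussian_kernel_1d(sigma, radius=int(3 * sigma))
    tmp = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, tmp)

img = np.random.rand(64, 64)
scales = [separable_gaussian(img, s) for s in (1.0, 2.0, 4.0)]  # 3-scale bank
```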
Using the cloud to speed-up calibration of watershed-scale hydrologic models (Invited)
NASA Astrophysics Data System (ADS)
Goodall, J. L.; Ercan, M. B.; Castronova, A. M.; Humphrey, M.; Beekwilder, N.; Steele, J.; Kim, I.
2013-12-01
This research focuses on using the cloud to address computational challenges associated with hydrologic modeling. One example is calibration of a watershed-scale hydrologic model, which can take days of execution time on typical computers. While parallel algorithms for model calibration exist and some researchers have used multi-core computers or clusters to run these algorithms, these solutions do not fully address the challenge because (i) calibration can still be too time consuming even on multi-core personal computers and (ii) few in the community have the time and expertise needed to manage a compute cluster. Given this, another option that we are exploring through this work is the use of the cloud for speeding up calibration of watershed-scale hydrologic models. The cloud used in this capacity provides a means for renting a specific number and type of machines for only the time needed to perform a calibration run. The cloud allows one to precisely balance the duration of the calibration with the financial cost so that, if the budget allows, the calibration can be performed more quickly by renting more machines. Focusing specifically on the SWAT hydrologic model and a parallel version of the DDS calibration algorithm, we show significant speed-ups across a range of watershed sizes using up to 256 cores to perform a model calibration. The tool provides a simple web-based user interface and the ability to monitor job submission and progress during calibration. Finally, this talk concludes with initial work to leverage the cloud for other tasks associated with hydrologic modeling, including tasks related to preparing inputs for constructing place-based hydrologic models.
NASA Technical Reports Server (NTRS)
Kumar, Uttam; Nemani, Ramakrishna R.; Ganguly, Sangram; Kalia, Subodh; Michaelis, Andrew
2017-01-01
In this work, we use a Fully Constrained Least Squares Subpixel Learning Algorithm to unmix global WELD (Web Enabled Landsat Data) to obtain fractions or abundances of substrate (S), vegetation (V) and dark objects (D) classes. Because of the sheer volume of data and compute needs, we leveraged the NASA Earth Exchange (NEX) high performance computing architecture to optimize and scale our algorithm for large-scale processing. Subsequently, the S-V-D abundance maps were characterized into 4 classes, namely forest, farmland, water and urban areas (with nighttime lights data from NPP-VIIRS, the National Polar-orbiting Partnership Visible Infrared Imaging Radiometer Suite), over California, USA using a Random Forest classifier. Validation of these land cover maps with NLCD (National Land Cover Database) 2011 products and NAFD (North American Forest Dynamics) static forest cover maps showed that an overall classification accuracy of over 91 percent was achieved, which is a 6 percent improvement in unmixing-based classification relative to per-pixel-based classification. As such, abundance maps continue to offer a useful alternative to classification maps derived from high-spatial resolution data for forest inventory analysis, multi-class mapping for eco-climatic models and applications, fast multi-temporal trend analysis, and societal and policy-relevant applications needed at the watershed scale.
NASA Astrophysics Data System (ADS)
Ganguly, S.; Kumar, U.; Nemani, R. R.; Kalia, S.; Michaelis, A.
2017-12-01
In this work, we use a Fully Constrained Least Squares Subpixel Learning Algorithm to unmix global WELD (Web Enabled Landsat Data) to obtain fractions or abundances of substrate (S), vegetation (V) and dark objects (D) classes. Because of the sheer volume of data and compute needs, we leveraged the NASA Earth Exchange (NEX) high performance computing architecture to optimize and scale our algorithm for large-scale processing. Subsequently, the S-V-D abundance maps were characterized into 4 classes, namely forest, farmland, water and urban areas (with nighttime lights data from NPP-VIIRS, the National Polar-orbiting Partnership Visible Infrared Imaging Radiometer Suite), over California, USA using a Random Forest classifier. Validation of these land cover maps with NLCD (National Land Cover Database) 2011 products and NAFD (North American Forest Dynamics) static forest cover maps showed that an overall classification accuracy of over 91% was achieved, which is a 6% improvement in unmixing-based classification relative to per-pixel based classification. As such, abundance maps continue to offer a useful alternative to classification maps derived from high-spatial resolution data for forest inventory analysis, multi-class mapping for eco-climatic models and applications, fast multi-temporal trend analysis, and societal and policy-relevant applications needed at the watershed scale.
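A common way to implement fully constrained least squares unmixing (non-negative abundances summing to one) is to append a weighted sum-to-one row and solve non-negative least squares; the sketch below uses that standard trick with invented endmembers, and is not necessarily the authors' exact solver.

```python
# Fully constrained least squares (FCLS) unmixing for one pixel.
import numpy as np
from scipy.optimize import nnls

def fcls(pixel, endmembers, delta=1e3):
    """endmembers: (bands, classes); pixel: (bands,). Returns abundances."""
    E = np.vstack([endmembers, delta * np.ones(endmembers.shape[1])])
    y = np.append(pixel, delta)
    a, _ = nnls(E, y)        # NNLS enforces a >= 0; the weighted row enforces sum ~ 1
    return a

bands, classes = 6, 3        # e.g. substrate, vegetation, dark objects
E = np.abs(np.random.default_rng(1).normal(size=(bands, classes)))
true_a = np.array([0.5, 0.3, 0.2])
a = fcls(E @ true_a, E)
print(a, a.sum())            # recovers true_a closely; abundances sum to ~1
```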
NASA Astrophysics Data System (ADS)
Schildgen, T. F.; Robinson, R. A. J.; Savi, S.; Bookhagen, B.; Tofelde, S.; Strecker, M. R.
2014-12-01
Numerical modelling informs risk assessment of tsunami generated by submarine slides; however, for large-scale slides, modelling can be complex and computationally challenging. Many previous numerical studies have approximated slides as rigid blocks that move according to prescribed motion. However, wave characteristics are strongly dependent on the motion of the slide, and previous work has recommended that more accurate representation of slide dynamics is needed. We have used the finite-element, adaptive-mesh CFD model Fluidity to perform multi-material simulations of deformable submarine slide-generated waves at real-world scales for a 2D scenario in the Gulf of Mexico. Our high-resolution approach represents slide dynamics with good accuracy compared to other numerical simulations of this scenario, but precludes tracking of wave propagation over large distances. To enable efficient modelling of further propagation of the waves, we investigate an approach to extract information about the slide evolution from our multi-material simulations in order to drive a single-layer wave propagation model, also using Fluidity, which is much less computationally expensive. The extracted submarine slide geometry and position as a function of time are parameterised using simple polynomial functions. The polynomial functions are used to inform a prescribed velocity boundary condition in a single-layer simulation, mimicking the effect the submarine slide motion has on the water column. The approach is verified by successful comparison of wave generation in the single-layer model with that recorded in the multi-material, multi-layer simulations. We then extend this approach to 3D for further validation of this methodology (using the Gulf of Mexico scenario proposed by Horrillo et al., 2013) and to consider the effect of lateral spreading. This methodology is then used to simulate a series of hypothetical submarine slide events in the Arctic Ocean (based on evidence of historic slides) and to examine the hazard posed to the UK coast.
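The parameterization step can be sketched simply: fit a low-order polynomial to the extracted slide position over time and differentiate to get the velocity imposed at the boundary of the single-layer model. The sampled positions below are invented for illustration.

```python
# Fit the slide front position vs time, differentiate for the boundary velocity.
import numpy as np

t = np.linspace(0, 120, 13)                   # s, multi-material output times
front = 50 + 8.0 * t + 0.05 * t**2            # m, pretend extracted front positions
pos_poly = np.polynomial.Polynomial.fit(t, front, deg=2)
vel_poly = pos_poly.deriv()                   # slide velocity as a function of time

def boundary_velocity(time):
    """Velocity imposed on the water column at the moving slide boundary."""
    return vel_poly(time)

print(boundary_velocity(60.0))                # m/s at t = 60 s
```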
Quantifying uncertainty and computational complexity for pore-scale simulations
NASA Astrophysics Data System (ADS)
Chen, C.; Yuan, Z.; Wang, P.; Yang, X.; Zhenyan, L.
2016-12-01
Pore-scale simulation is an essential tool for understanding the complex physical processes in many environmental problems, from multi-phase flow in the subsurface to fuel cells. However, in practice, factors such as sample heterogeneity, data sparsity and, in general, our insufficient knowledge of the underlying processes render many simulation parameters, and hence the prediction results, uncertain. Meanwhile, most pore-scale simulations (in particular, direct numerical simulation) incur high computational cost due to finely resolved spatio-temporal scales, which further limits our data/sample collection. To address these challenges, we propose a novel framework based on the general polynomial chaos (gPC) expansion and build a surrogate model representing the essential features of the underlying system. Specifically, we apply the novel framework to analyze the uncertainties of the system behavior based on a series of pore-scale numerical experiments, such as flow and reactive transport in 2D heterogeneous porous media and 3D packed beds. Compared with recent pore-scale uncertainty quantification studies using Monte Carlo techniques, our new framework requires a smaller number of realizations and hence considerably reduces the overall computational cost, while maintaining the desired accuracy.
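A minimal sketch of the surrogate idea, assuming a single standard-normal input: fit Hermite-polynomial coefficients by least squares from a handful of model runs, then evaluate the cheap surrogate in place of the solver. The "expensive model" is a stand-in function, and this one-dimensional form omits the tensorized multi-dimensional bases a real study would use.

```python
# 1-D polynomial chaos surrogate with probabilists' Hermite polynomials.
import numpy as np
from numpy.polynomial.hermite_e import hermevander

def expensive_model(xi):                 # placeholder for a pore-scale run
    return np.exp(0.3 * xi) + 0.1 * xi**2

rng = np.random.default_rng(0)
xi_train = rng.standard_normal(40)       # far fewer runs than Monte Carlo needs
y_train = expensive_model(xi_train)

order = 5
V = hermevander(xi_train, order)         # Vandermonde matrix of He_0..He_5
coef, *_ = np.linalg.lstsq(V, y_train, rcond=None)

surrogate = lambda xi: hermevander(np.atleast_1d(xi), order) @ coef
# By orthogonality under the standard normal, the output mean is just coef[0]:
mean_estimate = coef[0]
```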
Baseline process description for simulating plutonium oxide production for the PreCalc Project
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pike, J. A.
Savannah River National Laboratory (SRNL) started a multi-year project, the PreCalc Project, to develop a computational simulation of a plutonium oxide (PuO2) production facility, with the objective of studying the fundamental relationships between morphological and physicochemical properties. This report provides a detailed baseline process description to be used by SRNL personnel and collaborators to facilitate the initial design and construction of the simulation. The PreCalc Project team selected the HB-Line Plutonium Finishing Facility as the basis for a nominal baseline process, since the facility is operational and significant model validation data can be obtained. The process boundary, as well as process and facility design details necessary for multi-scale, multi-physics models, are provided.
NASA Astrophysics Data System (ADS)
Fiore, S.; Płóciennik, M.; Doutriaux, C.; Blanquer, I.; Barbera, R.; Williams, D. N.; Anantharaj, V. G.; Evans, B. J. K.; Salomoni, D.; Aloisio, G.
2017-12-01
The increased model resolution of comprehensive Earth System Models is rapidly leading to very large volumes of climate simulation output that pose significant scientific data management challenges in terms of data sharing, processing, analysis, visualization, preservation, curation, and archiving. Large-scale global experiments for Climate Model Intercomparison Projects (CMIP) have led to the development of the Earth System Grid Federation (ESGF), a federated data infrastructure which has been serving the CMIP5 experiment, providing access to 2PB of data for the IPCC Assessment Reports. In such a context, running a multi-model data analysis experiment is very challenging, as it requires the availability of a large amount of data related to multiple climate model simulations and scientific data management tools for large-scale data analytics. To address these challenges, a case study on climate models intercomparison data analysis has been defined and implemented in the context of the EU H2020 INDIGO-DataCloud project. The case study has been tested and validated on CMIP5 datasets, in the context of a large-scale, international testbed involving several ESGF sites (LLNL, ORNL and CMCC), one orchestrator site (PSNC) and one more site hosting INDIGO PaaS services (UPV). Additional ESGF sites, such as NCI (Australia) and a couple more in Europe, are also joining the testbed. The added value of the proposed solution is summarized in the following: it implements a server-side paradigm which limits data movement; it relies on a High-Performance Data Analytics (HPDA) stack to address performance; it exploits the INDIGO PaaS layer to support flexible, dynamic and automated deployment of software components; it provides user-friendly web access based on the INDIGO Future Gateway; and finally it integrates, complements and extends the support currently available through ESGF. Overall it provides a new "tool" for climate scientists to run multi-model experiments. At the time this contribution is being written, the proposed testbed represents the first implementation of a distributed large-scale, multi-model experiment in the ESGF/CMIP context, joining together server-side approaches for scientific data analysis, HPDA frameworks, end-to-end workflow management, and cloud computing.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lin, Youzuo; O'Malley, Daniel; Vesselinov, Velimir V.
Inverse modeling seeks model parameters given a set of observations. However, for practical problems, because the number of measurements is often large and the model parameters are also numerous, conventional methods for inverse modeling can be computationally expensive. We have developed a new, computationally efficient parallel Levenberg-Marquardt method for solving inverse modeling problems with a highly parameterized model space. Levenberg-Marquardt methods require the solution of a linear system of equations which can be prohibitively expensive to compute for moderate- to large-scale problems. Our novel method projects the original linear problem down to a Krylov subspace, such that the dimensionality of the problem can be significantly reduced. Furthermore, we store the Krylov subspace computed when using the first damping parameter and recycle the subspace for the subsequent damping parameters. The efficiency of our new inverse modeling algorithm is significantly improved by these computational techniques. We apply this new inverse modeling method to invert for random transmissivity fields in 2D and a random hydraulic conductivity field in 3D. Our algorithm is fast enough to solve for the distributed model parameters (transmissivity) in the model domain. The algorithm is coded in Julia and implemented in the MADS computational framework (http://mads.lanl.gov). Compared with Levenberg-Marquardt methods using standard linear inversion techniques such as QR or SVD methods, our Levenberg-Marquardt method yields a speed-up ratio on the order of ~10^1 to ~10^2 in a multi-core computational environment. Furthermore, our new inverse modeling method is a powerful tool for characterizing subsurface heterogeneity for moderate- to large-scale problems.
Lin, Youzuo; O'Malley, Daniel; Vesselinov, Velimir V.
2016-09-01
Inverse modeling seeks model parameters given a set of observations. However, for practical problems, because the number of measurements is often large and the model parameters are also numerous, conventional methods for inverse modeling can be computationally expensive. We have developed a new, computationally efficient parallel Levenberg-Marquardt method for solving inverse modeling problems with a highly parameterized model space. Levenberg-Marquardt methods require the solution of a linear system of equations which can be prohibitively expensive to compute for moderate- to large-scale problems. Our novel method projects the original linear problem down to a Krylov subspace, such that the dimensionality of the problem can be significantly reduced. Furthermore, we store the Krylov subspace computed when using the first damping parameter and recycle the subspace for the subsequent damping parameters. The efficiency of our new inverse modeling algorithm is significantly improved by these computational techniques. We apply this new inverse modeling method to invert for random transmissivity fields in 2D and a random hydraulic conductivity field in 3D. Our algorithm is fast enough to solve for the distributed model parameters (transmissivity) in the model domain. The algorithm is coded in Julia and implemented in the MADS computational framework (http://mads.lanl.gov). Compared with Levenberg-Marquardt methods using standard linear inversion techniques such as QR or SVD methods, our Levenberg-Marquardt method yields a speed-up ratio on the order of ~10^1 to ~10^2 in a multi-core computational environment. Furthermore, our new inverse modeling method is a powerful tool for characterizing subsurface heterogeneity for moderate- to large-scale problems.
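The core trick can be sketched as follows: build a Krylov basis for the damped normal equations once per Levenberg-Marquardt iteration, then reuse it for every damping value. The Jacobian below is random and the subspace dimension arbitrary; this is a schematic of the idea, not the MADS implementation.

```python
# Krylov-projected damped normal equations with basis recycling across dampings.
import numpy as np

def lanczos_basis(A, b, k):
    """Orthonormal basis Q of K_k(A, b) and the projected matrix T = Q^T A Q."""
    n = b.size
    Q = np.zeros((n, k))
    q = b / np.linalg.norm(b)
    for j in range(k):
        Q[:, j] = q
        w = A @ q
        w -= Q[:, :j + 1] @ (Q[:, :j + 1].T @ w)   # full reorthogonalization
        beta = np.linalg.norm(w)
        if beta < 1e-12:
            Q = Q[:, :j + 1]
            break
        q = w / beta
    return Q, Q.T @ A @ Q

rng = np.random.default_rng(2)
J = rng.normal(size=(300, 120))
r = rng.normal(size=300)
A, g = J.T @ J, J.T @ r
Q, T = lanczos_basis(A, g, k=20)          # built once per LM iteration

for lam in (1e-2, 1e-1, 1.0, 10.0):       # basis recycled across damping values
    y = np.linalg.solve(T + lam * np.eye(T.shape[1]), -(Q.T @ g))
    dx = Q @ y                            # approximate LM step in the full space
```

Because only the small projected system changes with the damping parameter, trying many damping values costs almost nothing beyond the initial basis construction.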
NASA Astrophysics Data System (ADS)
Yamamoto, H.; Nakajima, K.; Zhang, K.; Nanai, S.
2015-12-01
Powerful numerical codes that are capable of modeling complex coupled processes of physics and chemistry have been developed for predicting the fate of CO2 in reservoirs, as well as its potential impacts on groundwater and subsurface environments. However, they are often computationally demanding for solving highly non-linear models at sufficient spatial and temporal resolution. Geological heterogeneity and uncertainties further increase the challenges in modeling work. Two-phase flow simulations in heterogeneous media usually require much longer computational times than those in homogeneous media. Uncertainties in reservoir properties may necessitate stochastic simulations with multiple realizations. Recently, massively parallel supercomputers with more than thousands of processors have become available in scientific and engineering communities. Such supercomputers may attract attention from geoscientists and reservoir engineers for solving large, non-linear models at higher resolution within a reasonable time. However, to make them a useful tool, it is essential to tackle several practical obstacles to utilizing large numbers of processors effectively in general-purpose reservoir simulators. We have implemented massively parallel versions of two TOUGH2-family codes (the multi-phase flow simulator TOUGH2 and the chemically reactive transport simulator TOUGHREACT) on two different types (vector- and scalar-type) of supercomputers with a thousand to tens of thousands of processors. After completing implementation and extensive tune-up on the supercomputers, the computational performance was measured for three simulations with multi-million-grid models, including a simulation of the dissolution-diffusion-convection process that requires high spatial and temporal resolution to simulate the growth of small convective fingers of CO2-dissolved water into larger ones at reservoir scale. The performance measurements confirmed that both simulators exhibit excellent scalability, showing almost linear speedup with the number of processors up to over ten thousand cores. Generally, this allows us to perform coupled multi-physics (THC) simulations on high-resolution geologic models with multi-million grids in a practical time (e.g., less than a second per time step).
Zhu, Lin; Dai, Zhenxue; Gong, Huili; ...
2015-06-12
Understanding the heterogeneity arising from the complex architecture of sedimentary sequences in alluvial fans is challenging. This study develops a statistical inverse framework in a multi-zone transition probability approach for characterizing the heterogeneity in alluvial fans. An analytical solution of the transition probability matrix is used to define the statistical relationships among different hydrofacies and their mean lengths, integral scales, and volumetric proportions. A statistical inversion is conducted to identify the multi-zone transition probability models and estimate the optimal statistical parameters using the modified Gauss–Newton–Levenberg–Marquardt method. The Jacobian matrix is computed by the sensitivity equation method, which results in an accurate inverse solution with quantification of parameter uncertainty. We use the Chaobai River alluvial fan in the Beijing Plain, China, as an example for elucidating the methodology of alluvial fan characterization. The alluvial fan is divided into three sediment zones. In each zone, the explicit mathematical formulations of the transition probability models are constructed with different optimized integral scales and volumetric proportions. The hydrofacies distributions in the three zones are simulated sequentially by the multi-zone transition probability-based indicator simulations. Finally, the result of this study provides the heterogeneous structure of the alluvial fan for further study of flow and transport simulations.
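A sketch of the transition-probability machinery, under the usual Markov-chain assumptions: off-diagonal rates are built from mean lengths and volumetric proportions, T(h) = expm(Rh) gives lag-h transition probabilities, and a 1-D facies column is then simulated as a chain. All numbers below are invented.

```python
# Markov-chain facies model: rate matrix from mean lengths and proportions,
# matrix exponential for lag-h transition probabilities, then 1-D simulation.
import numpy as np
from scipy.linalg import expm

mean_len = np.array([8.0, 3.0, 5.0])     # m, per-facies mean lengths
prop = np.array([0.5, 0.2, 0.3])         # volumetric proportions

R = np.zeros((3, 3))
for i in range(3):
    for j in range(3):
        if i != j:
            R[i, j] = prop[j] / (mean_len[i] * (1 - prop[i]))
    R[i, i] = -1.0 / mean_len[i]         # rows of R sum to zero by construction

h = 1.0                                  # sampling interval (m)
T = expm(R * h)                          # transition probabilities over lag h

rng = np.random.default_rng(3)
state, column = 0, [0]
for _ in range(200):                     # simulate a 200 m facies column
    state = rng.choice(3, p=T[state] / T[state].sum())  # normalize for safety
    column.append(state)
```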
NASA Astrophysics Data System (ADS)
Fiore, Sandro; Płóciennik, Marcin; Doutriaux, Charles; Blanquer, Ignacio; Barbera, Roberto; Donvito, Giacinto; Williams, Dean N.; Anantharaj, Valentine; Salomoni, Davide D.; Aloisio, Giovanni
2017-04-01
In many scientific domains such as climate, data is often n-dimensional and requires tools that support specialized data types and primitives to be properly stored, accessed, analysed and visualized. Moreover, new challenges arise in large-scale scenarios and eco-systems where petabytes (PB) of data can be available and data can be distributed and/or replicated, such as the Earth System Grid Federation (ESGF) serving the Coupled Model Intercomparison Project, Phase 5 (CMIP5) experiment, providing access to 2.5PB of data for the Intergovernmental Panel on Climate Change (IPCC) Fifth Assessment Report (AR5). A case study on climate models intercomparison data analysis addressing several classes of multi-model experiments is being implemented in the context of the EU H2020 INDIGO-DataCloud project. Such experiments require the availability of a large amount of data (multi-terabyte order) related to the output of several climate model simulations, as well as the exploitation of scientific data management tools for large-scale data analytics. More specifically, the talk discusses in detail a use case on precipitation trend analysis in terms of requirements, architectural design solution, and infrastructural implementation. The experiment has been tested and validated on CMIP5 datasets, in the context of a large-scale distributed testbed across the EU and US involving three ESGF sites (LLNL, ORNL, and CMCC) and one central orchestrator site (PSNC). The general setting of the case study involves multi-model data analysis intercomparison challenges, addressed on CMIP5 data made available through the IS-ENES/ESGF infrastructure. The added value of the solution proposed in the INDIGO-DataCloud project is summarized in the following: (i) it implements a different paradigm (from client- to server-side); (ii) it intrinsically reduces data movement; (iii) it makes the end-user setup lightweight; (iv) it fosters re-usability (of data, final/intermediate products, workflows, sessions, etc.) since everything is managed on the server side; (v) it complements, extends and interoperates with the ESGF stack; (vi) it provides a "tool" for scientists to run multi-model experiments; and finally (vii) it can drastically reduce the time-to-solution for these experiments from weeks to hours. At the time the contribution is being written, the proposed testbed represents the first concrete implementation of a distributed multi-model experiment in the ESGF/CMIP context joining server-side and parallel processing, end-to-end workflow management and cloud computing. As opposed to the current scenario based on search & discovery, data download, and client-based data analysis, the INDIGO-DataCloud architectural solution described in this contribution addresses the scientific computing & analytics requirements by providing a paradigm shift based on server-side and high-performance big data frameworks jointly with two-level workflow management systems realized at the PaaS level via a cloud infrastructure.
Biophysical Discovery through the Lens of a Computational Microscope
NASA Astrophysics Data System (ADS)
Amaro, Rommie
With exascale computing power on the horizon, improvements in the underlying algorithms and available structural experimental data are enabling new paradigms for chemical discovery. My work has provided key insights for the systematic incorporation of structural information resulting from state-of-the-art biophysical simulations into protocols for inhibitor and drug discovery. We have shown that many disease targets have druggable pockets that are otherwise ``hidden'' in high resolution x-ray structures, and that this is a common theme across a wide range of targets in different disease areas. We continue to push the limits of computational biophysical modeling by expanding the time and length scales accessible to molecular simulation. My sights are set on, ultimately, the development of detailed physical models of cells, as the fundamental unit of life, and two recent achievements highlight our efforts in this arena. First is the development of a molecular and Brownian dynamics multi-scale modeling framework, which allows us to investigate drug binding kinetics in addition to thermodynamics. In parallel, we have made significant progress developing new tools to extend molecular structure to cellular environments. Collectively, these achievements are enabling the investigation of the chemical and biophysical nature of cells at unprecedented scales.
Patel, Deepak K.
2016-01-01
This paper is concerned with predicting the progressive damage and failure of multi-layered hybrid textile composites subjected to uniaxial tensile loading, using a novel two-scale computational mechanics framework. These composites include three-dimensional woven textile composites (3DWTCs) with glass, carbon and Kevlar fibre tows. Progressive damage and failure of 3DWTCs at different length scales are captured in the present model by using a macroscale finite-element (FE) analysis at the representative unit cell (RUC) level, while a closed-form micromechanics analysis is implemented simultaneously at the subscale level using material properties of the constituents (fibre and matrix) as input. The N-layers concentric cylinder (NCYL) model (Zhang and Waas 2014 Acta Mech. 225, 1391–1417; Patel et al. submitted Acta Mech.) to compute local stress, strain and displacement fields in the fibre and matrix is used at the subscale. The 2-CYL fibre–matrix concentric cylinder model is extended to fibre and (N−1) matrix layers, keeping the volume fraction constant, and hence is called the NCYL model, where the matrix damage can be captured locally within each discrete layer of the matrix volume. The influence of matrix microdamage at the subscale causes progressive degradation of fibre tow stiffness and matrix stiffness at the macroscale. The global RUC stiffness matrix remains positive definite until the strain softening responses resulting from different failure modes (such as fibre tow breakage, tow splitting in the transverse direction due to matrix cracking inside the tow, and tensile failure of the surrounding matrix outside the fibre tows) are initiated. At this stage, the macroscopic post-peak softening response is modelled using the mesh objective smeared crack approach (Rots et al. 1985 HERON 30, 1–48; Heinrich and Waas 2012 53rd AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics and Materials Conference, Honolulu, HI, 23–26 April 2012. AIAA 2012-1537). Manufacturing-induced geometric imperfections are included in the simulation, where the FE mesh of the unit cell is generated directly from real micro-computed tomography (MCT) data using the code Simpleware. Results from the multi-scale analysis for both an idealized perfect geometry and one that includes geometric imperfections are compared with experimental results (Pankow et al. 2012 53rd AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics and Materials Conference, Honolulu, HI, 23–26 April 2012. AIAA 2012-1572). This article is part of the themed issue ‘Multiscale modelling of the structural integrity of composite materials’. PMID:27242294
Patel, Deepak K; Waas, Anthony M
2016-07-13
This paper is concerned with predicting the progressive damage and failure of multi-layered hybrid textile composites subjected to uniaxial tensile loading, using a novel two-scale computational mechanics framework. These composites include three-dimensional woven textile composites (3DWTCs) with glass, carbon and Kevlar fibre tows. Progressive damage and failure of 3DWTCs at different length scales are captured in the present model by using a macroscale finite-element (FE) analysis at the representative unit cell (RUC) level, while a closed-form micromechanics analysis is implemented simultaneously at the subscale level using material properties of the constituents (fibre and matrix) as input. The N-layers concentric cylinder (NCYL) model (Zhang and Waas 2014 Acta Mech. 225, 1391-1417; Patel et al. submitted Acta Mech.) to compute local stress, strain and displacement fields in the fibre and matrix is used at the subscale. The 2-CYL fibre-matrix concentric cylinder model is extended to fibre and (N-1) matrix layers, keeping the volume fraction constant, and hence is called the NCYL model, where the matrix damage can be captured locally within each discrete layer of the matrix volume. The influence of matrix microdamage at the subscale causes progressive degradation of fibre tow stiffness and matrix stiffness at the macroscale. The global RUC stiffness matrix remains positive definite until the strain softening responses resulting from different failure modes (such as fibre tow breakage, tow splitting in the transverse direction due to matrix cracking inside the tow, and tensile failure of the surrounding matrix outside the fibre tows) are initiated. At this stage, the macroscopic post-peak softening response is modelled using the mesh objective smeared crack approach (Rots et al. 1985 HERON 30, 1-48; Heinrich and Waas 2012 53rd AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics and Materials Conference, Honolulu, HI, 23-26 April 2012. AIAA 2012-1537). Manufacturing-induced geometric imperfections are included in the simulation, where the FE mesh of the unit cell is generated directly from real micro-computed tomography (MCT) data using the code Simpleware. Results from the multi-scale analysis for both an idealized perfect geometry and one that includes geometric imperfections are compared with experimental results (Pankow et al. 2012 53rd AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics and Materials Conference, Honolulu, HI, 23-26 April 2012. AIAA 2012-1572). This article is part of the themed issue 'Multiscale modelling of the structural integrity of composite materials'. © 2016 The Author(s).
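The macroscale bookkeeping can be caricatured as below: subscale damage variables degrade the moduli at each load step, and the analysis proceeds while the (here toy, 2x2) stiffness matrix remains positive definite, after which the post-peak smeared-crack regime would take over. All moduli and damage evolution laws are invented placeholders, not the NCYL model.

```python
# Toy progressive-degradation loop with a positive definiteness check.
import numpy as np

def degraded_stiffness(d_fiber, d_matrix):
    """Toy 2x2 stiffness: damage scalars in [0, 1) scale the pristine moduli."""
    E1, E2, c = 160e9, 9e9, 4e9              # invented fibre/matrix/coupling terms
    return np.array([[(1 - d_fiber) * E1, c],
                     [c, (1 - d_matrix) * E2]])

strain, d_f, d_m = 0.0, 0.0, 0.0
while True:
    strain += 1e-4
    d_m = min(0.99, 5.0 * strain)            # matrix microdamage grows faster
    d_f = min(0.99, 0.5 * strain)
    K = degraded_stiffness(d_f, d_m)
    if np.any(np.linalg.eigvalsh(K) <= 0):   # positive definiteness lost
        break                                # hand off to smeared-crack softening
```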
Understanding hydraulic fracturing: a multi-scale problem.
Hyman, J D; Jiménez-Martínez, J; Viswanathan, H S; Carey, J W; Porter, M L; Rougier, E; Karra, S; Kang, Q; Frash, L; Chen, L; Lei, Z; O'Malley, D; Makedonska, N
2016-10-13
Despite the impact that hydraulic fracturing has had on the energy sector, the physical mechanisms that control its efficiency and environmental impacts remain poorly understood in part because the length scales involved range from nanometres to kilometres. We characterize flow and transport in shale formations across and between these scales using integrated computational, theoretical and experimental efforts/methods. At the field scale, we use discrete fracture network modelling to simulate production of a hydraulically fractured well from a fracture network that is based on the site characterization of a shale gas reservoir. At the core scale, we use triaxial fracture experiments and a finite-discrete element model to study dynamic fracture/crack propagation in low permeability shale. We use lattice Boltzmann pore-scale simulations and microfluidic experiments in both synthetic and shale rock micromodels to study pore-scale flow and transport phenomena, including multi-phase flow and fluids mixing. A mechanistic description and integration of these multiple scales is required for accurate predictions of production and the eventual optimization of hydrocarbon extraction from unconventional reservoirs. Finally, we discuss the potential of CO2 as an alternative working fluid, both in fracturing and re-stimulating activities, beyond its environmental advantages. This article is part of the themed issue 'Energy and the subsurface'. © 2016 The Author(s).
Understanding hydraulic fracturing: a multi-scale problem
Hyman, J. D.; Jiménez-Martínez, J.; Viswanathan, H. S.; Carey, J. W.; Porter, M. L.; Rougier, E.; Karra, S.; Kang, Q.; Frash, L.; Chen, L.; Lei, Z.; O’Malley, D.; Makedonska, N.
2016-01-01
Despite the impact that hydraulic fracturing has had on the energy sector, the physical mechanisms that control its efficiency and environmental impacts remain poorly understood in part because the length scales involved range from nanometres to kilometres. We characterize flow and transport in shale formations across and between these scales using integrated computational, theoretical and experimental efforts/methods. At the field scale, we use discrete fracture network modelling to simulate production of a hydraulically fractured well from a fracture network that is based on the site characterization of a shale gas reservoir. At the core scale, we use triaxial fracture experiments and a finite-discrete element model to study dynamic fracture/crack propagation in low permeability shale. We use lattice Boltzmann pore-scale simulations and microfluidic experiments in both synthetic and shale rock micromodels to study pore-scale flow and transport phenomena, including multi-phase flow and fluids mixing. A mechanistic description and integration of these multiple scales is required for accurate predictions of production and the eventual optimization of hydrocarbon extraction from unconventional reservoirs. Finally, we discuss the potential of CO2 as an alternative working fluid, both in fracturing and re-stimulating activities, beyond its environmental advantages. This article is part of the themed issue ‘Energy and the subsurface’. PMID:27597789
Learning Natural Selection in 4th Grade with Multi-Agent-Based Computational Models
ERIC Educational Resources Information Center
Dickes, Amanda Catherine; Sengupta, Pratim
2013-01-01
In this paper, we investigate how elementary school students develop multi-level explanations of population dynamics in a simple predator-prey ecosystem, through scaffolded interactions with a multi-agent-based computational model (MABM). The term "agent" in an MABM indicates individual computational objects or actors (e.g., cars), and these…
A multi-sensor remote sensing approach for measuring primary production from space
NASA Technical Reports Server (NTRS)
Gautier, Catherine
1989-01-01
It is proposed to develop a multi-sensor remote sensing method for computing marine primary productivity from space, based on the capability to measure the primary ocean variables which regulate photosynthesis. The three variables and the sensors which measure them are: (1) downwelling photosynthetically available irradiance, measured by the VISSR sensor on the GOES satellite, (2) sea-surface temperature from AVHRR on NOAA series satellites, and (3) chlorophyll-like pigment concentration from the Nimbus-7/CZCS sensor. These and other measured variables would be combined within empirical or analytical models to compute primary productivity. With this proposed capability of mapping primary productivity on a regional scale, we could begin realizing a more precise and accurate global assessment of its magnitude and variability. Applications would include supplementation and expansion on the horizontal scale of ship-acquired biological data, which is more accurate and which supplies the vertical components of the field, monitoring oceanic response to increased atmospheric carbon dioxide levels, correlation with observed sedimentation patterns and processes, and fisheries management.
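A deliberately hypothetical stand-in for the proposed empirical combination is sketched below: productivity as pigment concentration times a saturating light term times a temperature factor. The functional form and every constant are illustrative placeholders, not the proposed model.

```python
# Hypothetical empirical combination of the three satellite-measured variables.
import numpy as np

def primary_productivity(par, sst, chl, p_opt=4.6):
    """Toy PP estimate: pigment * light limitation * temperature factor.
    par: photosynthetically available radiation; sst: sea-surface temperature
    (deg C); chl: chlorophyll-like pigment concentration. Units are nominal."""
    light_term = par / (par + 5.0)              # saturating light response
    temp_term = np.exp(0.06 * (sst - 20.0))     # mild temperature sensitivity
    return p_opt * chl * light_term * temp_term * 100.0

pp = primary_productivity(par=45.0, sst=18.0, chl=0.3)  # one satellite pixel
```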
Visualization of Spatio-Temporal Relations in Movement Event Using Multi-View
NASA Astrophysics Data System (ADS)
Zheng, K.; Gu, D.; Fang, F.; Wang, Y.; Liu, H.; Zhao, W.; Zhang, M.; Li, Q.
2017-09-01
Spatio-temporal relations among movement events extracted from temporally varying trajectory data can provide useful information about the evolution of individual or collective movers, as well as their interactions with their spatial and temporal contexts. However, the pure statistical tools commonly used by analysts pose many difficulties due to the large number of attributes embedded in multi-scale and multi-semantic trajectory data. The need for models that operate at multiple scales to search for relations at different locations in time and space, and to intuitively interpret what these relations mean, also presents challenges. Since analysts do not know where or when relevant spatio-temporal relations might emerge, these models must compute statistical summaries of multiple attributes at different granularities. In this paper, we propose a multi-view approach to visualize the spatio-temporal relations among movement events. We describe a method for visualizing movement events and spatio-temporal relations that uses multiple displays. A visual interface is presented, in which the user can interactively select or filter spatial and temporal extents to guide the knowledge discovery process. We also demonstrate how this approach can help analysts derive and explain the spatio-temporal relations of movement events from taxi trajectory data.
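The multi-granularity summaries such views require can be sketched as space-time binning: aggregate events at a chosen spatial cell size and temporal window so each linked view queries counts at its own granularity. Field names and bin sizes below are illustrative.

```python
# Space-time binning of movement events at two granularities.
from collections import Counter

events = [  # (longitude, latitude, t_seconds) extracted from trajectories
    (116.35, 39.90, 3600), (116.36, 39.91, 3720), (116.80, 39.95, 7300),
]

def bin_events(events, cell=0.05, window=1800):
    counts = Counter()
    for x, y, t in events:
        key = (int(x // cell), int(y // cell), t // window)
        counts[key] += 1
    return counts

coarse = bin_events(events, cell=0.5, window=3600)   # city scale, hourly
fine = bin_events(events, cell=0.01, window=300)     # block scale, 5-minute
```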
Discontinuous Galerkin methods for modeling Hurricane storm surge
NASA Astrophysics Data System (ADS)
Dawson, Clint; Kubatko, Ethan J.; Westerink, Joannes J.; Trahan, Corey; Mirabito, Christopher; Michoski, Craig; Panda, Nishant
2011-09-01
Storm surge due to hurricanes and tropical storms can result in significant loss of life, property damage, and long-term damage to coastal ecosystems and landscapes. Computer modeling of storm surge can be used for two primary purposes: forecasting of surge as storms approach land for emergency planning and evacuation of coastal populations, and hindcasting of storms for determining risk, development of mitigation strategies, coastal restoration and sustainability. Storm surge is modeled using the shallow water equations, coupled with wind forcing and in some events, models of wave energy. In this paper, we will describe a depth-averaged (2D) model of circulation in spherical coordinates. Tides, riverine forcing, atmospheric pressure, bottom friction, the Coriolis effect and wind stress are all important for characterizing the inundation due to surge. The problem is inherently multi-scale, both in space and time. To model these problems accurately requires significant investments in acquiring high-fidelity input (bathymetry, bottom friction characteristics, land cover data, river flow rates, levees, raised roads and railways, etc.), accurate discretization of the computational domain using unstructured finite element meshes, and numerical methods capable of capturing highly advective flows, wetting and drying, and multi-scale features of the solution. The discontinuous Galerkin (DG) method appears to allow for many of the features necessary to accurately capture storm surge physics. The DG method was developed for modeling shocks and advection-dominated flows on unstructured finite element meshes. It easily allows for adaptivity in both mesh (h) and polynomial order (p) for capturing multi-scale spatial events. Mass conservative wetting and drying algorithms can be formulated within the DG method. In this paper, we will describe the application of the DG method to hurricane storm surge. We discuss the general formulation, and new features which have been added to the model to better capture surge in complex coastal environments. These features include modifications to the method to handle spherical coordinates and maintain still flows, improvements in the stability post-processing (i.e. slope-limiting), and the modeling of internal barriers for capturing overtopping of levees and other structures. We will focus on applications of the model to recent events in the Gulf of Mexico, including Hurricane Ike.
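For readers unfamiliar with DG, a minimal sketch may help: piecewise-linear elements, an upwind numerical flux, and SSP-RK2 time stepping, applied to 1-D linear advection on a periodic domain. The real model solves the shallow water equations on unstructured meshes, but the element-local update plus flux exchange has the same structure.

```python
# Minimal modal DG(P1) solver for u_t + a u_x = 0 on a periodic 1-D grid.
import numpy as np

n, a, L = 64, 1.0, 1.0
h = L / n
x = (np.arange(n) + 0.5) * h
u = np.zeros((n, 2))                       # modal dofs per element: [mean, slope]
u[:, 0] = np.sin(2 * np.pi * x)            # initial condition (means only)

def rhs(u):
    edge = a * (u[:, 0] + u[:, 1])         # upwind (left) value at each right face
    F_r = edge                             # flux leaving element i to the right
    F_l = np.roll(edge, 1)                 # flux entering from the left neighbor
    du = np.empty_like(u)
    du[:, 0] = -(F_r - F_l) / h            # mean: net flux through the element
    du[:, 1] = (2 * a * u[:, 0] - (F_r + F_l)) * 3 / h   # slope: volume + flux terms
    return du

dt = 0.2 * h / a                           # CFL-limited time step
for _ in range(int(1.0 / dt)):             # advect for roughly one period
    u1 = u + dt * rhs(u)                   # SSP-RK2 stage 1
    u = 0.5 * (u + u1 + dt * rhs(u1))      # SSP-RK2 stage 2
```

Because each element only exchanges fluxes with its face neighbors, the update is local, which is what makes DG attractive for h/p adaptivity and parallel storm-surge runs.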
Atomistic- and Meso-Scale Computational Simulations for Developing Multi-Timescale Theory
2017-02-13
AFRL-RV-PS-TR-2016-0161, Air Force Research Laboratory, Kirtland AFB, NM 87117-5776; approved for public release, distribution unlimited.
Large-Scale, Multi-Sensor Atmospheric Data Fusion Using Hybrid Cloud Computing
NASA Astrophysics Data System (ADS)
Wilson, Brian; Manipon, Gerald; Hua, Hook; Fetzer, Eric
2014-05-01
NASA's Earth Observing System (EOS) is an ambitious facility for studying global climate change. The mandate now is to combine measurements from the instruments on the "A-Train" platforms (AIRS, AMSR-E, MODIS, MISR, MLS, and CloudSat) and other Earth probes to enable large-scale studies of climate change over decades. Moving to multi-sensor, long-duration analyses of important climate variables presents serious challenges for large-scale data mining and fusion. For example, one might want to compare temperature and water vapor retrievals from one instrument (AIRS) to another (MODIS), and to a model (ECMWF), stratify the comparisons using a classification of the "cloud scenes" from CloudSat, and repeat the entire analysis over 10 years of data. To efficiently assemble such datasets, we are utilizing Elastic Computing in the Cloud and parallel map-reduce-based algorithms. However, these are data-intensive computing problems, so data transfer times and storage costs (for caching) are key issues. SciReduce is a Hadoop-like parallel analysis system, programmed in parallel Python, that is designed from the ground up for Earth science. SciReduce executes inside VMWare images and scales to any number of nodes in a hybrid Cloud (private Eucalyptus and public Amazon). Unlike Hadoop, SciReduce operates on bundles of named numeric arrays, which can be passed in memory or serialized to disk in netCDF4 or HDF5. Multi-year datasets are automatically "sharded" by time and space across a cluster of nodes so that years of data (millions of files) can be processed in a massively parallel way. Input variables (arrays) are pulled on-demand into the Cloud using OPeNDAP URLs or other subsetting services, thereby minimizing the size of the cached input and intermediate datasets. We are using SciReduce to automate the production of multiple versions of a ten-year A-Train water vapor climatology under a NASA MEASURES grant. We will present the architecture of SciReduce, describe the achieved "clock time" speedups in fusing datasets on our own nodes and in the Cloud, and discuss the Cloud cost tradeoffs for storage, compute, and data transfer. We will also present a concept and prototype for staging NASA's A-Train Atmospheric datasets (Levels 2 & 3) in the Amazon Cloud so that any number of compute jobs can be executed "near" the multi-sensor data. Given such a system, multi-sensor climate studies over 10-20 years of data could be performed.
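The abstract does not spell out SciReduce's API; the following is a minimal, hypothetical sketch of the shard-then-reduce pattern it describes, with invented names and synthetic arrays standing in for co-located AIRS/MODIS retrievals:

```python
from concurrent.futures import ProcessPoolExecutor
import numpy as np

def map_shard(shard):
    # Per-shard work: compare two sensors' temperature retrievals and
    # return partial sums so shards can be processed independently.
    diff = shard["airs_temp"] - shard["modis_temp"]
    return np.nansum(diff), np.count_nonzero(~np.isnan(diff))

def reduce_partials(partials):
    # Combine partial sums into the mean inter-sensor difference.
    total = sum(s for s, _ in partials)
    count = sum(n for _, n in partials)
    return total / count

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # One "bundle of named numeric arrays" per month, as in the abstract.
    shards = [{"airs_temp": 280 + rng.normal(0, 2, 10_000),
               "modis_temp": 280.5 + rng.normal(0, 2, 10_000)}
              for _ in range(120)]  # ten years of monthly shards
    with ProcessPoolExecutor() as pool:
        partials = list(pool.map(map_shard, shards))
    print(reduce_partials(partials))
```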
NASA Astrophysics Data System (ADS)
Engel, Dave W.; Reichardt, Thomas A.; Kulp, Thomas J.; Graff, David L.; Thompson, Sandra E.
2016-05-01
Validating predictive models and quantifying uncertainties inherent in the modeling process is a critical component of the HARD Solids Venture program [1]. Our current research focuses on validating physics-based models predicting the optical properties of solid materials for arbitrary surface morphologies and characterizing the uncertainties in these models. We employ a systematic and hierarchical approach by designing physical experiments and comparing the experimental results with the outputs of computational predictive models. We illustrate this approach through an example comparing a micro-scale forward model to an idealized solid-material system and then propagating the results through a system model to the sensor level. Our efforts should enhance detection reliability of the hyper-spectral imaging technique and the confidence in model utilization and model outputs by users and stakeholders.
Multi-scale and multi-domain computational astrophysics.
van Elteren, Arjen; Pelupessy, Inti; Zwart, Simon Portegies
2014-08-06
Astronomical phenomena are governed by processes on all spatial and temporal scales, ranging from days to the age of the Universe (13.8 Gyr) as well as from kilometre size up to the size of the Universe. This enormous range in scales is contrived, but as long as there is a physical connection between the smallest and largest scales it is important to be able to resolve them all, and for the study of many astronomical phenomena this governance is present. Although covering all these scales is a challenge for numerical modellers, the most challenging aspect is the equally broad and complex range in physics, and the way in which these processes propagate through all scales. In our recent effort to cover all scales and all relevant physical processes on these scales, we have designed the Astrophysics Multipurpose Software Environment (AMUSE). AMUSE is a Python-based framework with production quality community codes and provides a specialized environment to connect this plethora of solvers to a homogeneous problem-solving environment. © 2014 The Author(s) Published by the Royal Society. All rights reserved.
The Structure of Borders in a Small World
Thiemann, Christian; Theis, Fabian; Grady, Daniel; Brune, Rafael; Brockmann, Dirk
2010-01-01
Territorial subdivisions and geographic borders are essential for understanding phenomena in sociology, political science, history, and economics. They influence the interregional flow of information and cross-border trade and affect the diffusion of innovation and technology. However, it is unclear if existing administrative subdivisions that typically evolved decades ago still reflect the most plausible organizational structure of today. The complexity of modern human communication, the ease of long-distance movement, and increased interaction across political borders complicate the operational definition and assessment of geographic borders that optimally reflect the multi-scale nature of today's human connectivity patterns. What border structures emerge directly from the interplay of scales in human interactions is an open question. Based on a massive proxy dataset, we analyze a multi-scale human mobility network and compute effective geographic borders inherent to human mobility patterns in the United States. We propose two computational techniques for extracting these borders and for quantifying their strength. We find that effective borders only partially overlap with existing administrative borders, and show that some of the strongest mobility borders exist in unexpected regions. We show that the observed structures cannot be generated by gravity models for human traffic. Finally, we introduce the concept of link significance that clarifies the observed structure of effective borders. Our approach represents a novel type of quantitative, comparative analysis framework for spatially embedded multi-scale interaction networks in general and may yield important insight into a multitude of spatiotemporal phenomena generated by human activity. PMID:21124970
Multi-scale Modeling of the Evolution of a Large-Scale Nourishment
NASA Astrophysics Data System (ADS)
Luijendijk, A.; Hoonhout, B.
2016-12-01
Morphological predictions are often computed using a single morphological model, commonly forced with schematized boundary conditions representing the time scale of the prediction. Recent model developments now allow us to think and act differently. This study presents recent developments in coastal morphological modeling, focusing on flexible meshes, flexible coupling between models operating at different time scales, and a recently developed morphodynamic model for the intertidal and dry beach. This integrated modeling approach is applied to the Sand Engine mega-nourishment in The Netherlands to illustrate the added value of the integrated approach in both accuracy and computational efficiency. The state-of-the-art Delft3D Flexible Mesh (FM) model is applied at the study site under moderate wave conditions. One of the advantages is that the flexibility of the mesh structure allows a better representation of the water exchange with the lagoon, and of the corresponding morphological behavior, than the curvilinear grid used in the previous version of Delft3D. The XBeach model is applied to compute the morphodynamic response to storm events in detail, incorporating long-wave effects on bed level changes. The recently developed aeolian transport and bed change model AeoLiS is used to compute bed changes in the intertidal and dry beach area. To enable flexible couplings between the three abovementioned models, a component-based environment has been developed using the BMI method. This allows a serial coupling of Delft3D FM and XBeach, steered by a control module that uses a hydrodynamic time series as input. In addition, a parallel online coupling, with information exchange at each time step, is made with the AeoLiS model, which predicts bed level changes in the intertidal and dry beach area. This study presents the first years of evolution of the Sand Engine computed with the integrated modelling approach. Detailed comparisons are made between the observed and computed morphological behaviour of the Sand Engine at an aggregated as well as a sub-system level.
NASA Astrophysics Data System (ADS)
Parsakhoo, Zahra; Shao, Yaping
2017-04-01
Near-surface turbulent mixing has a considerable effect on surface fluxes, cloud formation and convection in the atmospheric boundary layer (ABL). Its quantification is, however, a modeling and computational challenge, since the small eddies are not fully resolved in Eulerian models directly. We have developed a Lagrangian stochastic model to demonstrate multi-scale interactions between convection and land surface heterogeneity in the atmospheric boundary layer, based on the Ito Stochastic Differential Equation (SDE) for air parcels (particles). Due to the complexity of the mixing in the ABL, we find that a linear Ito SDE cannot represent convection properly. Three strategies have been tested to solve the problem: 1) make the deterministic term in the Ito equation non-linear; 2) make the random term in the Ito equation fractional; and 3) modify the Ito equation by including Levy flights. We focus on the third strategy and interpret mixing as an interaction between at least two stochastic processes with different Lagrangian time scales. Work on the model is in progress to include collisions among particles with different characteristics and to apply the 3D model to real cases. One application of the model is emphasized: land surface patterns are generated and then coupled with a Large Eddy Simulation (LES).
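For reference, the linear Ito SDE that such models start from is the standard Langevin-type (Ornstein-Uhlenbeck) model for parcel velocity; a minimal Euler-Maruyama sketch, assuming a single Lagrangian time scale T_L and velocity variance sigma^2:

```python
import numpy as np

# du = -(u/T_L) dt + sqrt(2 sigma^2 / T_L) dW   (Ornstein-Uhlenbeck form)
rng = np.random.default_rng(1)
T_L, sigma, dt = 100.0, 0.5, 1.0          # s, m/s, s
n_steps, n_particles = 5000, 2000

u = np.zeros(n_particles)                 # parcel velocities
x = np.zeros(n_particles)                 # parcel positions
for _ in range(n_steps):
    dW = rng.normal(0.0, np.sqrt(dt), n_particles)
    u += -(u / T_L) * dt + np.sqrt(2.0 * sigma**2 / T_L) * dW
    x += u * dt

# For t >> T_L, Taylor (1921) dispersion predicts var(x) ~ 2 sigma^2 T_L t.
print(x.var(), 2.0 * sigma**2 * T_L * n_steps * dt)
```

Replacing the Gaussian increment dW with a heavy-tailed increment is the essence of the Levy-flight strategy described above.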
Time and length scales within a fire and implications for numerical simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
TIESZEN,SHELDON R.
2000-02-02
A partial non-dimensionalization of the Navier-Stokes equations is used to obtain order-of-magnitude estimates of the rate-controlling transport processes in the reacting portion of a fire plume as a function of length scale. Over continuum length scales, buoyant time scales vary as the square root of the length scale, advection time scales vary as the length scale, and diffusion time scales vary as the square of the length scale. Due to this variation with length scale, each process is dominant over a given range. The relationship of buoyancy and baroclinic vorticity generation is highlighted. For numerical simulation, first-principles solution of fire problems is not possible with foreseeable computational hardware in the near future. Filtered transport equations with subgrid modeling will be required, as only two to three decades of length scale can be captured by solution of discretized conservation equations. By whatever filtering process one employs, one must have humble expectations for the accuracy obtainable by numerical simulation for practical fire problems, which contain important multi-physics/multi-length-scale coupling with up to 10 orders of magnitude in length scale.
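In symbols, the stated scalings are (with L a length scale, g' a reduced gravity, U a velocity scale, and D a diffusivity; generic notation, not taken from the report):

```latex
\tau_{\mathrm{buoy}} \sim \sqrt{L/g'}, \qquad
\tau_{\mathrm{adv}} \sim L/U, \qquad
\tau_{\mathrm{diff}} \sim L^2/D .
```

Because the exponents differ (1/2, 1, 2), whichever time scale is shortest dominates, so each process controls a distinct band of length scales.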
Up-scaling of multi-variable flood loss models from objects to land use units at the meso-scale
NASA Astrophysics Data System (ADS)
Kreibich, Heidi; Schröter, Kai; Merz, Bruno
2016-05-01
Flood risk management increasingly relies on risk analyses, including loss modelling. Most of the flood loss models usually applied in standard practice have in common that complex damaging processes are described by simple approaches like stage-damage functions. Novel multi-variable models significantly improve loss estimation on the micro-scale and may also be advantageous for large-scale applications. However, more input parameters also introduce additional uncertainty, even more so in up-scaling procedures for meso-scale applications, where the parameters need to be estimated on a regional, area-wide basis. To gain more knowledge about the challenges associated with the up-scaling of multi-variable flood loss models, the following approach is applied: single- and multi-variable micro-scale flood loss models are up-scaled and applied on the meso-scale, namely on the basis of ATKIS land-use units. Application and validation are undertaken in 19 municipalities, which were affected during the 2002 flood by the River Mulde in Saxony, Germany, by comparison to official loss data provided by the Saxon Relief Bank (SAB). In the meso-scale, case-study-based model validation, most multi-variable models show smaller errors than the uni-variable stage-damage functions. The results show the suitability of the up-scaling approach and, in accordance with micro-scale validation studies, that multi-variable models are an improvement in flood loss modelling on the meso-scale as well. However, uncertainties remain high, stressing the importance of uncertainty quantification. Thus, the development of probabilistic loss models, like the BT-FLEMO model used in this study, which inherently provide uncertainty information, is the way forward.
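Schematically, the contrast is between a uni-variable stage-damage function and a model with additional predictors; the sketch below uses invented coefficients purely for illustration and is not BT-FLEMO:

```python
import numpy as np

def stage_damage(depth_m):
    # Uni-variable model: relative loss from water depth alone.
    return np.clip(0.25 * np.sqrt(np.maximum(depth_m, 0.0)), 0.0, 1.0)

def multi_variable_loss(depth_m, duration_h, contamination, precaution):
    # Multi-variable model: inundation duration, contamination (0/1) and
    # private precaution (0..1) modulate the depth-based loss ratio.
    adjustment = (1.0 + 0.10 * np.log1p(duration_h / 24.0)
                      + 0.15 * contamination
                      - 0.10 * precaution)
    return np.clip(stage_damage(depth_m) * adjustment, 0.0, 1.0)

print(stage_damage(1.5))
print(multi_variable_loss(1.5, duration_h=48.0, contamination=1, precaution=0.5))
```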
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ely, Geoffrey P.
2013-10-31
This project uses dynamic rupture simulations to investigate high-frequency seismic energy generation. The relevant phenomena (frictional breakdown, shear heating, effective normal-stress fluctuations, material damage, etc.) controlling rupture are strongly interacting and span many orders of magnitude in spatial scale, requiring high-resolution simulations that couple disparate physical processes (e.g., elastodynamics, thermal weakening, pore-fluid transport, and heat conduction). Compounding the computational challenge, we know that natural faults are not planar, but instead have roughness that can be approximated by power laws, potentially leading to large, multiscale fluctuations in normal stress. The capacity to perform 3D rupture simulations that couple these processes will provide guidance for constructing appropriate source models for high-frequency ground motion simulations. The improved rupture models from our multi-scale dynamic rupture simulations will be used to conduct physics-based (3D waveform-modeling-based) probabilistic seismic hazard analysis (PSHA) for California. These calculations will provide numerous important seismic hazard results, including a state-wide extended earthquake rupture forecast with rupture variations for all significant events, a synthetic seismogram catalog for thousands of scenario events, and more than 5000 physics-based seismic hazard curves for California.
Multi-Scale Multi-Physics Modeling of Matrix Transport Properties in Fractured Shale Reservoirs
NASA Astrophysics Data System (ADS)
Mehmani, A.; Prodanovic, M.
2014-12-01
Understanding shale matrix flow behavior is imperative for successful reservoir development for hydrocarbon production and carbon storage. Without a predictive model, significant uncertainties in flowback from the formation, in the communication between the fracture and the matrix, as well as in proper fracturing practice will ensue. Informed by SEM images, we develop deterministic network models that couple pores from multiple scales and their respective fluid physics. The models are used to investigate sorption hysteresis as an affordable way of inferring the nanoscale pore structure at the core scale. In addition, restricted diffusion as a function of pore shape, pore-throat size ratios and network connectivity is computed to make correct interpretation of 2D NMR maps possible. Our novel pore network models have the ability to match sorption hysteresis measurements without any tuning parameters. The results clarify a common misconception of linking type 3 nitrogen hysteresis curves only to the shale pore shape, and show promising sensitivity for nanopore structure inference at the core scale. The results on restricted diffusion shed light on the importance of including shape factors in 2D NMR interpretations. A priori "weighting factors" as a function of pore-throat and throat-length ratios are presented, and the effect of network connectivity on diffusion is quantitatively assessed. We are currently working on verifying our models with experimental data gathered from the Eagle Ford formation.
Validation and Trustworthiness of Multiscale Models of Cardiac Electrophysiology
Pathmanathan, Pras; Gray, Richard A.
2018-01-01
Computational models of cardiac electrophysiology have a long history in basic science applications and device design and evaluation, but have significant potential for clinical applications in all areas of cardiovascular medicine, including functional imaging and mapping, drug safety evaluation, disease diagnosis, patient selection, and therapy optimisation or personalisation. For all stakeholders to be confident in model-based clinical decisions, cardiac electrophysiological (CEP) models must be demonstrated to be trustworthy and reliable. Credibility, that is, the belief in the predictive capability of a computational model, is primarily established by performing validation, in which model predictions are compared to experimental or clinical data. However, there are numerous challenges to performing validation for highly complex multi-scale physiological models such as CEP models. As a result, the credibility of CEP model predictions is usually founded upon a wide range of distinct factors, including various types of validation results, underlying theory, evidence supporting model assumptions, and evidence from model calibration, all at a variety of scales from ion channel to cell to organ. Consequently, the extent to which a CEP model can be trusted for a given application is often unclear, or a matter for debate. The aim of this article is to clarify the potential rationale for the trustworthiness of CEP models by reviewing evidence that has been (or could be) presented to support their credibility. We specifically address the complexity and multi-scale nature of CEP models, which makes traditional model evaluation difficult. In addition, we make explicit some of the credibility justification that we believe is implicitly embedded in the CEP modeling literature. Overall, we provide a fresh perspective on CEP model credibility, and build a depiction and categorisation of the wide-ranging body of credibility evidence for CEP models. This paper also represents a step toward the extension of model evaluation methodologies that are currently being developed by the medical device community to physiological models. PMID:29497385
NASA Astrophysics Data System (ADS)
Langford, Z. L.; Kumar, J.; Hoffman, F. M.
2015-12-01
Observations indicate that over the past several decades, landscape processes in the Arctic have been changing or intensifying. A dynamic Arctic landscape has the potential to alter ecosystems across a broad range of scales. Accurate characterization is useful for understanding the properties and organization of the landscape, for optimal sampling network design, for measurement and process upscaling, and for establishing a landscape-based framework for multi-scale modeling of ecosystem processes. This study seeks to delineate the landscape of the Seward Peninsula of Alaska into ecoregions using large volumes (terabytes) of high-spatial-resolution satellite remote-sensing data. Defining high-resolution ecoregion boundaries is difficult because many ecosystem processes in Arctic ecosystems occur at small local to regional scales, which are poorly resolved by coarse-resolution satellites (e.g., MODIS). We use data-fusion techniques and data analytics algorithms applied to Phased Array type L-band Synthetic Aperture Radar (PALSAR), Interferometric Synthetic Aperture Radar (IFSAR), Satellite for Observation of Earth (SPOT), WorldView-2, WorldView-3, and QuickBird-2 imagery to develop high-resolution (~5 m) ecoregion maps for multiple time periods. Traditional analysis methods and algorithms are insufficient for analyzing and synthesizing such large geospatial data sets, and those algorithms rarely scale out onto large distributed-memory parallel computer systems. We seek to develop computationally efficient algorithms and techniques using high-performance computing for characterization of Arctic landscapes. We will apply a variety of data analytics algorithms, such as cluster analysis, complex object-based image analysis (COBIA), and neural networks. We also propose to use representativeness analysis within the Seward Peninsula domain to determine optimal sampling locations for fine-scale measurements. This methodology should provide an initial framework for analyzing dynamic landscape trends in Arctic ecosystems, such as shrubification and disturbances, and for integrating ecoregions into multi-scale models.
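As an illustration of the cluster-analysis step, a k-means classification of per-pixel feature vectors might look like the following sketch; the synthetic array stands in for the stacked PALSAR/SPOT/WorldView layers:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Synthetic stand-in: six co-registered layers (radar backscatter, optical
# bands, elevation), flattened to one feature vector per ~5 m pixel.
pixels = rng.normal(size=(100_000, 6))

features = StandardScaler().fit_transform(pixels)
labels = KMeans(n_clusters=12, n_init=10, random_state=0).fit_predict(features)
# Reshaping 'labels' back to the image grid yields a candidate ecoregion map.
```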
Bryant, Barbara
2012-01-01
In living cells, DNA is packaged along with protein and RNA into chromatin. Chemical modifications to nucleotides and histone proteins are added, removed and recognized by multi-functional molecular complexes. Here I define a new computational model, in which chromatin modifications are information units that can be written onto a one-dimensional string of nucleosomes, analogous to the symbols written onto cells of a Turing machine tape, and chromatin-modifying complexes are modeled as read-write rules that operate on a finite set of adjacent nucleosomes. I illustrate the use of this “chromatin computer” to solve an instance of the Hamiltonian path problem. I prove that chromatin computers are computationally universal – and therefore more powerful than the logic circuits often used to model transcription factor control of gene expression. Features of biological chromatin provide a rich instruction set for efficient computation of nontrivial algorithms in biological time scales. Modeling chromatin as a computer shifts how we think about chromatin function, suggests new approaches to medical intervention, and lays the groundwork for the engineering of a new class of biological computing machines. PMID:22567109
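A toy rendering of the core abstraction, with nucleosome marks as tape symbols and a chromatin-modifying complex as a local read-write rule; the rule here is invented for illustration, and the paper's Hamiltonian-path construction is far richer:

```python
# Toy "chromatin computer": a 1-D tape of nucleosome marks and one
# read-write rule acting on adjacent positions.
tape = list("UUMUUU")  # U = unmodified, M = methylated

def spread_mark(tape):
    # Rule: a complex that reads M at position i writes M at i+1.
    out = tape[:]
    for i in range(len(tape) - 1):
        if tape[i] == "M":
            out[i + 1] = "M"
    return out

for step in range(4):
    print("".join(tape))
    tape = spread_mark(tape)
```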
Dey, S.
2017-01-01
We present a method to construct and analyse 3D models of underwater scenes using a single cost-effective camera and a standard laptop with (a) free or low-cost software, (b) no computer programming ability, and (c) minimal man hours for both filming and analysis. This study focuses on four key structural complexity metrics: point-to-point distances, linear rugosity (R), fractal dimension (D), and vector dispersion (1/k). We present the first assessment of the accuracy and precision of structure-from-motion (SfM) 3D models from an uncalibrated GoPro™ camera at a small scale (4 m²) and show that they can provide meaningful, ecologically relevant results. Models had root mean square errors of 1.48 cm in X-Y and 1.35 cm in Z, and accuracies of 86.8% (R), 99.6% (D at scales 30–60 cm), 93.6% (D at scales 1–5 cm), and 86.9% (1/k). Values of R were compared to in-situ chain-and-tape measurements, while values of D and 1/k were compared with ground truths from 3D printed objects modelled underwater. All metrics varied less than 3% between independently rendered models. We thereby improve and rigorously validate a tool for ecologists to non-invasively quantify coral reef structural complexity with a variety of multi-scale metrics. PMID:28406937
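For instance, linear rugosity can be computed directly from an ordered transect of XYZ points sampled from such a model; a minimal sketch with synthetic relief:

```python
import numpy as np

def linear_rugosity(points):
    # R = contour (chain) length along the surface / straight-line length,
    # for an ordered transect of XYZ points sampled from the 3D model.
    segments = np.diff(points, axis=0)
    contour = np.linalg.norm(segments, axis=1).sum()
    straight = np.linalg.norm(points[-1] - points[0])
    return contour / straight

x = np.linspace(0.0, 4.0, 200)                  # 4 m transect
z = 0.15 * np.sin(8 * np.pi * x)                # synthetic bottom relief
pts = np.column_stack([x, np.zeros_like(x), z])
print(linear_rugosity(pts))                     # > 1; a flat surface gives 1
```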
NASA Astrophysics Data System (ADS)
Pokhrel, A.; El Hannach, M.; Orfino, F. P.; Dutta, M.; Kjeang, E.
2016-10-01
X-ray computed tomography (XCT), a non-destructive technique, is proposed for three-dimensional, multi-length scale characterization of complex failure modes in fuel cell electrodes. Comparative tomography data sets are acquired for a conditioned beginning of life (BOL) and a degraded end of life (EOL) membrane electrode assembly subjected to cathode degradation by voltage cycling. Micro length scale analysis shows a five-fold increase in crack size and 57% thickness reduction in the EOL cathode catalyst layer, indicating widespread action of carbon corrosion. Complementary nano length scale analysis shows a significant reduction in porosity, increased pore size, and dramatically reduced effective diffusivity within the remaining porous structure of the catalyst layer at EOL. Collapsing of the structure is evident from the combination of thinning and reduced porosity, as uniquely determined by the multi-length scale approach. Additionally, a novel image processing based technique developed for nano scale segregation of pore, ionomer, and Pt/C dominated voxels shows an increase in ionomer volume fraction, Pt/C agglomerates, and severe carbon corrosion at the catalyst layer/membrane interface at EOL. In summary, XCT based multi-length scale analysis enables detailed information needed for comprehensive understanding of the complex failure modes observed in fuel cell electrodes.
NASA Astrophysics Data System (ADS)
Bijeljic, Branko; Icardi, Matteo; Prodanović, Maša
2018-05-01
Substantial progress has been made over the last few decades in understanding the physics of multiphase flow and reactive transport phenomena in subsurface porous media. The confluence of advances in experimental techniques (including micromodels, X-ray microtomography, and Nuclear Magnetic Resonance (NMR)) and in computational power has made it possible to observe static and dynamic multi-scale flow, transport and reactive processes, thus stimulating the development of a new generation of modelling tools from the pore to the field scale. One of the key challenges is to make experiments and models as complementary as possible, continuously improving experimental methods in order to increase the predictive capabilities of theoretical models across scales. This creates a need to establish rigorous benchmark studies of flow, transport and reaction in porous media, which can then serve as the basis for introducing more complex phenomena in future developments.
Strategies for Large Scale Implementation of a Multiscale, Multiprocess Integrated Hydrologic Model
NASA Astrophysics Data System (ADS)
Kumar, M.; Duffy, C.
2006-05-01
Distributed models simulate hydrologic state variables in space and time while taking into account heterogeneities in terrain, surface and subsurface properties, and meteorological forcings. The computational cost and complexity associated with these models increase with their tendency to accurately simulate a large number of interacting physical processes at fine spatio-temporal resolution in a large basin. A hydrologic model run on a coarse spatial discretization of the watershed with a limited number of physical processes imposes a smaller computational load, but this negatively affects the accuracy of model results and restricts physical realization of the problem. So it is imperative to have an integrated modeling strategy (a) which can be universally applied at various scales in order to study the tradeoffs between computational complexity (determined by spatio-temporal resolution), accuracy, and predictive uncertainty in relation to various approximations of physical processes; (b) which can be applied at adaptively different spatial scales in the same domain by taking into account the local heterogeneity of topography and hydrogeologic variables; and (c) which is flexible enough to incorporate different numbers and approximations of process equations depending on model purpose and computational constraints. An efficient implementation of this strategy becomes all the more important for the Great Salt Lake river basin, which is relatively large (~89,000 sq. km) and complex in terms of hydrologic and geomorphic conditions. The types and time scales of hydrologic processes that are dominant in different parts of the basin also differ. Part of the snowmelt runoff generated in the Uinta Mountains infiltrates and contributes as base flow to the Great Salt Lake over a time scale of decades to centuries. The adaptive strategy helps capture the steep topographic and climatic gradient along the Wasatch Front. Here we present the aforesaid modeling strategy along with an associated hydrologic modeling framework which facilitates a seamless, computationally efficient and accurate integration of the process model with the data model. The flexibility of this framework enables multiscale, multiresolution, adaptive refinement/de-refinement and nested modeling simulations with the least computational burden. However, performing these simulations and the related calibration over a large basin at higher spatio-temporal resolutions is computationally intensive and requires increasing computing power. With the advent of parallel processing architectures, high computing performance can be achieved by parallelization of existing serial integrated-hydrologic-model code. This translates to running the same model simulation on a network of a large number of processors, thereby reducing the time needed to obtain a solution. The paper also discusses the implementation of the integrated model on parallel processors, including the mapping of the problem onto a multi-processor environment, methods to incorporate coupling between hydrologic processes using interprocessor communication models, model data structures, and parallel numerical algorithms to obtain high performance.
S3DB core: a framework for RDF generation and management in bioinformatics infrastructures
2010-01-01
Background Biomedical research is set to greatly benefit from the use of semantic web technologies in the design of computational infrastructure. However, beyond well-defined research initiatives, substantial issues of data heterogeneity, source distribution, and privacy currently stand in the way of the personalization of medicine. Results A computational framework for bioinformatic infrastructure was designed to deal with the heterogeneous data sources and the sensitive mixture of public and private data that characterizes the biomedical domain. This framework consists of a logical model built with semantic web tools, coupled with a Markov process that propagates user operator states. An accompanying open-source prototype was developed to meet a series of applications that range from collaborative multi-institution data acquisition efforts to data analysis applications that need to quickly traverse complex data structures. This report describes the two abstractions underlying the S3DB-based infrastructure, logical and numerical, and discusses its generality beyond the immediate confines of existing implementations. Conclusions The emergence of the "web as a computer" requires a formal model for the different functionalities involved in reading and writing to it. The S3DB core model proposed was found to address the design criteria of biomedical computational infrastructure, such as those supporting large-scale multi-investigator research, clinical trials, and molecular epidemiology. PMID:20646315
Tuncer, Necibe; Gulbudak, Hayriye; Cannataro, Vincent L; Martcheva, Maia
2016-09-01
In this article, we discuss the structural and practical identifiability of a nested immuno-epidemiological model of arbovirus diseases, where host-vector transmission rate, host recovery, and disease-induced death rates are governed by the within-host immune system. We incorporate the newest ideas and the most up-to-date features of numerical methods to fit multi-scale models to multi-scale data. For an immunological model, we use Rift Valley Fever Virus (RVFV) time-series data obtained from livestock under laboratory experiments, and for an epidemiological model we incorporate a human compartment to the nested model and use the number of human RVFV cases reported by the CDC during the 2006-2007 Kenya outbreak. We show that the immunological model is not structurally identifiable for the measurements of time-series viremia concentrations in the host. Thus, we study the non-dimensionalized and scaled versions of the immunological model and prove that both are structurally globally identifiable. After fixing estimated parameter values for the immunological model derived from the scaled model, we develop a numerical method to fit observable RVFV epidemiological data to the nested model for the remaining parameter values of the multi-scale system. For the given (CDC) data set, Monte Carlo simulations indicate that only three parameters of the epidemiological model are practically identifiable when the immune model parameters are fixed. Alternatively, we fit the multi-scale data to the multi-scale model simultaneously. Monte Carlo simulations for the simultaneous fitting suggest that the parameters of the immunological model and the parameters of the immuno-epidemiological model are practically identifiable. We suggest that analytic approaches for studying the structural identifiability of nested models are a necessity, so that identifiable parameter combinations can be derived to reparameterize the nested model to obtain an identifiable one. This is a crucial step in developing multi-scale models which explain multi-scale data.
NASA Astrophysics Data System (ADS)
Wen, Xian-Huan; Gómez-Hernández, J. Jaime
1998-03-01
The macrodispersion of an inert solute in a 2-D heterogeneous porous medium is estimated numerically in a series of fields of varying heterogeneity. Four different random function (RF) models are used to model log-transmissivity (ln T) spatial variability, and for each of these models, ln T variance is varied from 0.1 to 2.0. The four RF models share the same univariate Gaussian histogram and the same isotropic covariance, but differ from one another in terms of the spatial connectivity patterns at extreme transmissivity values. More specifically, model A is a multivariate Gaussian model for which, by definition, extreme values (both high and low) are spatially uncorrelated. The other three models are non-multi-Gaussian: model B with high connectivity of high extreme values, model C with high connectivity of low extreme values, and model D with high connectivities of both high and low extreme values. Residence time distributions (RTDs) and macrodispersivities (longitudinal and transverse) are computed on ln T fields corresponding to the different RF models, for two different flow directions and at several scales. They are compared with each other, as well as with predicted values based on first-order analytical results. Numerically derived RTDs and macrodispersivities for the multi-Gaussian model are in good agreement with analytically derived values using first-order theories for log-transmissivity variance up to 2.0. The results from the non-multi-Gaussian models differ from each other and deviate markedly from the multi-Gaussian results even when ln T variance is small. RTDs in non-multi-Gaussian realizations with high connectivity at high extreme values display earlier breakthrough than in multi-Gaussian realizations, whereas later breakthrough and longer tails are observed for RTDs from non-multi-Gaussian realizations with high connectivity at low extreme values. Longitudinal macrodispersivities in the non-multi-Gaussian realizations are, in general, larger than in the multi-Gaussian ones, while transverse macrodispersivities in the non-multi-Gaussian realizations can be larger or smaller than in the multi-Gaussian ones depending on the type of connectivity at extreme values. Comparing the numerical results for different flow directions, it is confirmed that macrodispersivities in multi-Gaussian realizations with isotropic spatial correlation are not flow-direction-dependent. Macrodispersivities in the non-multi-Gaussian realizations, however, are flow-direction-dependent although the covariance of ln T is isotropic (the same for all four models). It is important to account for high connectivities at extreme transmissivity values, a likely situation in some geological formations. Some of the discrepancies between first-order-based analytical results and field-scale tracer test data may be due to the existence of highly connected paths of extreme conductivity values.
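For context, longitudinal macrodispersivity is commonly obtained from the growth rate of the plume's second spatial moment, and first-order theory for 2-D multi-Gaussian isotropic ln T fields predicts an asymptotic value set by the variance and the correlation length (generic notation assumed here):

```latex
A_L(t) \;=\; \frac{1}{2U}\,\frac{\mathrm{d}\sigma_x^2(t)}{\mathrm{d}t},
\qquad
A_L^{\infty} \;\approx\; \sigma_{\ln T}^{2}\,\lambda ,
```

where U is the mean velocity, σ_x² the longitudinal plume variance, and λ the ln T correlation length. The point of the study above is that connectivity of extreme values breaks this first-order prediction even at small variance.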
A dynamic multi-scale Markov model based methodology for remaining life prediction
NASA Astrophysics Data System (ADS)
Yan, Jihong; Guo, Chaozhong; Wang, Xing
2011-05-01
The ability to accurately predict the remaining life of partially degraded components is crucial in prognostics. In this paper, a performance degradation index is designed using multi-feature fusion techniques to represent the deterioration severity of facilities. Based on this indicator, an improved Markov model is proposed for remaining life prediction. The Fuzzy C-Means (FCM) algorithm is employed to perform state division for the Markov model in order to avoid the uncertainty of state division caused by hard division approaches. Considering the influence of both historical and real-time data, a dynamic prediction method is introduced into the Markov model via a weighted coefficient. Multi-scale theory is employed to solve the state-division problem of multi-sample prediction. Consequently, a dynamic multi-scale Markov model is constructed. An experiment based on a Bently-RK4 rotor testbed was designed to validate the dynamic multi-scale Markov model; the experimental results illustrate the effectiveness of the methodology.
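A minimal sketch of the underlying prediction step, with degradation states (such as those produced by FCM clustering of the degradation index) and an absorbing failure state; the transition probabilities are invented, and the paper's dynamic weighting and multi-scale treatment are omitted:

```python
import numpy as np

# States 0..3 = healthy ... failed; in practice the transition matrix is
# estimated from historical state sequences of the equipment.
P = np.array([[0.90, 0.10, 0.00, 0.00],
              [0.00, 0.85, 0.15, 0.00],
              [0.00, 0.00, 0.80, 0.20],
              [0.00, 0.00, 0.00, 1.00]])   # state 3 is absorbing (failure)

# Expected remaining life: solve (I - Q) t = 1 over the transient states,
# where Q is the transient-to-transient block of P.
Q = P[:3, :3]
t = np.linalg.solve(np.eye(3) - Q, np.ones(3))
print(t)   # mean number of steps to failure starting from states 0, 1, 2
```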
Coarse-grained molecular dynamics simulations for giant protein-DNA complexes
NASA Astrophysics Data System (ADS)
Takada, Shoji
Biomolecules are highly hierarchic and intrinsically flexible. Thus, computational modeling calls for multi-scale methodologies. We have been developing a coarse-grained biomolecular model in which, on average, 10-20 atoms are grouped into one coarse-grained (CG) particle. Interactions among CG particles are tuned based on atomistic interactions and the fluctuation matching algorithm. CG molecular dynamics methods enable us to simulate much longer time-scale motions of much larger molecular systems than fully atomistic models. After broad sampling of structures with CG models, we can easily reconstruct atomistic models, from which one can continue conventional molecular dynamics simulations if desired. Here, we describe our CG modeling methodology for protein-DNA complexes, together with various biological applications, such as the DNA duplication initiation complex, model chromatins, and transcription factor dynamics in a chromatin-like environment.
NASA Astrophysics Data System (ADS)
Sizov, Gennadi Y.
In this dissertation, a model-based multi-objective optimal design of permanent magnet ac machines, supplied by sine-wave current regulated drives, is developed and implemented. The design procedure uses an efficient electromagnetic finite element-based solver to accurately model nonlinear material properties and complex geometric shapes associated with magnetic circuit design. Application of an electromagnetic finite element-based solver allows for accurate computation of intricate performance parameters and characteristics. The first contribution of this dissertation is the development of a rapid computational method that allows accurate and efficient exploration of large multi-dimensional design spaces in search of optimum design(s). The computationally efficient finite element-based approach developed in this work provides a framework of tools that allow rapid analysis of synchronous electric machines operating under steady-state conditions. In the developed modeling approach, major steady-state performance parameters such as, winding flux linkages and voltages, average, cogging and ripple torques, stator core flux densities, core losses, efficiencies and saturated machine winding inductances, are calculated with minimum computational effort. In addition, the method includes means for rapid estimation of distributed stator forces and three-dimensional effects of stator and/or rotor skew on the performance of the machine. The second contribution of this dissertation is the development of the design synthesis and optimization method based on a differential evolution algorithm. The approach relies on the developed finite element-based modeling method for electromagnetic analysis and is able to tackle large-scale multi-objective design problems using modest computational resources. Overall, computational time savings of up to two orders of magnitude are achievable, when compared to current and prevalent state-of-the-art methods. These computational savings allow one to expand the optimization problem to achieve more complex and comprehensive design objectives. The method is used in the design process of several interior permanent magnet industrial motors. The presented case studies demonstrate that the developed finite element-based approach practically eliminates the need for using less accurate analytical and lumped parameter equivalent circuit models for electric machine design optimization. The design process and experimental validation of the case-study machines are detailed in the dissertation.
Efficient Geometric Sound Propagation Using Visibility Culling
NASA Astrophysics Data System (ADS)
Chandak, Anish
2011-07-01
Simulating propagation of sound can improve the sense of realism in interactive applications such as video games and can lead to better designs in engineering applications such as architectural acoustics. In this thesis, we present geometric sound propagation techniques which are faster than prior methods and map well to upcoming parallel multi-core CPUs. We model specular reflections by using the image-source method and model finite-edge diffraction by using the well-known Biot-Tolstoy-Medwin (BTM) model. We accelerate the computation of specular reflections by applying novel visibility algorithms, FastV and AD-Frustum, which compute visibility from a point. We accelerate finite-edge diffraction modeling by applying a novel visibility algorithm which computes visibility from a region. Our visibility algorithms are based on frustum tracing and exploit recent advances in fast ray-hierarchy intersections, data-parallel computations, and scalable, multi-core algorithms. The AD-Frustum algorithm adapts its computation to the scene complexity and allows small errors in computing specular reflection paths for higher computational efficiency. FastV and our visibility algorithm from a region are general, object-space, conservative visibility algorithms that together significantly reduce the number of image sources compared to other techniques while preserving the same accuracy. Our geometric propagation algorithms are an order of magnitude faster than prior approaches for modeling specular reflections and two to ten times faster for modeling finite-edge diffraction. Our algorithms are interactive, scale almost linearly on multi-core CPUs, and can handle large, complex, and dynamic scenes. We also compare the accuracy of our sound propagation algorithms with other methods. Once sound propagation is performed, it is desirable to listen to the propagated sound in interactive and engineering applications. We can generate smooth, artifact-free output audio signals by applying efficient audio-processing algorithms. We also present the first efficient audio-processing algorithm for scenarios with simultaneously moving source and moving receiver (MS-MR) which incurs less than 25% overhead compared to static source and moving receiver (SS-MR) or moving source and static receiver (MS-SR) scenario.
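The image-source idea at the heart of the specular-reflection model is simple to state: mirror the source across the reflecting plane and treat the reflected path as a direct one. A first-order sketch for a single rigid plane:

```python
import numpy as np

def image_source(src, plane_point, plane_normal):
    # Mirror the source across the reflecting plane: s' = s - 2((s-p)·n)n.
    n = plane_normal / np.linalg.norm(plane_normal)
    return src - 2.0 * np.dot(src - plane_point, n) * n

src = np.array([0.0, 0.0, 2.0])                     # source position (m)
rcv = np.array([4.0, 0.0, 1.0])                     # receiver position (m)
img = image_source(src, np.zeros(3), np.array([0.0, 0.0, 1.0]))

direct = np.linalg.norm(rcv - src)
reflected = np.linalg.norm(rcv - img)               # path length via the floor
c = 343.0                                           # speed of sound (m/s)
print(direct / c, reflected / c)                    # arrival times of both paths
```

Higher-order reflections repeat the mirroring recursively; the visibility algorithms described above prune image sources whose reflection paths are blocked.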
Scientific Discovery through Advanced Computing in Plasma Science
NASA Astrophysics Data System (ADS)
Tang, William
2005-03-01
Advanced computing is generally recognized to be an increasingly vital tool for accelerating progress in scientific research during the 21st Century. For example, the Department of Energy's ``Scientific Discovery through Advanced Computing'' (SciDAC) Program was motivated in large measure by the fact that formidable scientific challenges in its research portfolio could best be addressed by utilizing the combination of the rapid advances in super-computing technology together with the emergence of effective new algorithms and computational methodologies. The imperative is to translate such progress into corresponding increases in the performance of the scientific codes used to model complex physical systems such as those encountered in high temperature plasma research. If properly validated against experimental measurements and analytic benchmarks, these codes can provide reliable predictive capability for the behavior of a broad range of complex natural and engineered systems. This talk reviews recent progress and future directions for advanced simulations with some illustrative examples taken from the plasma science applications area. Significant recent progress has been made in both particle and fluid simulations of fine-scale turbulence and large-scale dynamics, giving increasingly good agreement between experimental observations and computational modeling. This was made possible by the combination of access to powerful new computational resources together with innovative advances in analytic and computational methods for developing reduced descriptions of physics phenomena spanning a huge range in time and space scales. In particular, the plasma science community has made excellent progress in developing advanced codes for which computer run-time and problem size scale well with the number of processors on massively parallel machines (MPP's). A good example is the effective usage of the full power of multi-teraflop (multi-trillion floating point computations per second) MPP's to produce three-dimensional, general geometry, nonlinear particle simulations which have accelerated progress in understanding the nature of plasma turbulence in magnetically-confined high temperature plasmas. These calculations, which typically utilized billions of particles for thousands of time-steps, would not have been possible without access to powerful present generation MPP computers and the associated diagnostic and visualization capabilities. In general, results from advanced simulations provide great encouragement for being able to include increasingly realistic dynamics to enable deeper physics insights into plasmas in both natural and laboratory environments. The associated scientific excitement should serve to stimulate improved cross-cutting collaborations with other fields and also to help attract bright young talent to the computational science area.
Mapping nonlinear receptive field structure in primate retina at single cone resolution
Li, Peter H; Greschner, Martin; Gunning, Deborah E; Mathieson, Keith; Sher, Alexander; Litke, Alan M; Paninski, Liam
2015-01-01
The function of a neural circuit is shaped by the computations performed by its interneurons, which in many cases are not easily accessible to experimental investigation. Here, we elucidate the transformation of visual signals flowing from the input to the output of the primate retina, using a combination of large-scale multi-electrode recordings from an identified ganglion cell type, visual stimulation targeted at individual cone photoreceptors, and a hierarchical computational model. The results reveal nonlinear subunits in the circuitry of OFF midget ganglion cells, which subserve high-resolution vision. The model explains light responses to a variety of stimuli more accurately than a linear model, including stimuli targeted to cones within and across subunits. The recovered model components are consistent with known anatomical organization of midget bipolar interneurons. These results reveal the spatial structure of linear and nonlinear encoding, at the resolution of single cells and at the scale of complete circuits. DOI: http://dx.doi.org/10.7554/eLife.05241.001 PMID:26517879
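A schematic of the subunit-style cascade such a hierarchical model embodies: each subunit linearly filters its cone inputs, a static nonlinearity is applied per subunit, and the ganglion cell pools the rectified outputs. The weights and nonlinearities below are invented for illustration, not the fitted model:

```python
import numpy as np

rng = np.random.default_rng(0)
n_cones, n_subunits = 12, 4
W = 0.3 * rng.normal(size=(n_subunits, n_cones))   # cone -> subunit weights
v = np.abs(rng.normal(size=n_subunits))            # subunit -> RGC pooling

def rgc_rate(stimulus):
    # LN-LN cascade: linear filter, rectify per subunit, pool, then a
    # smooth output nonlinearity (softplus) for the firing rate.
    subunit_drive = np.maximum(W @ stimulus, 0.0)
    return np.log1p(np.exp(v @ subunit_drive))

print(rgc_rate(rng.normal(size=n_cones)))
```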
Gao, Jie; Roan, Esra; Williams, John L.
2015-01-01
The physis, or growth plate, is a complex disc-shaped cartilage structure that is responsible for longitudinal bone growth. In this study, a multi-scale computational approach was undertaken to better understand how physiological loads are experienced by chondrocytes embedded inside chondrons when subjected to moderate strain under instantaneous compressive loading of the growth plate. Models of representative samples of compressed bone/growth-plate/bone from a 0.67 mm thick 4-month old bovine proximal tibial physis were subjected to a prescribed displacement equal to 20% of the growth plate thickness. At the macroscale level, the applied compressive deformation resulted in an overall compressive strain across the proliferative-hypertrophic zone of 17%. The microscale model predicted that chondrocytes sustained compressive height strains of 12% and 6% in the proliferative and hypertrophic zones, respectively, in the interior regions of the plate. This pattern was reversed within the outer 300 μm region at the free surface where cells were compressed by 10% in the proliferative and 26% in the hypertrophic zones, in agreement with experimental observations. This work provides a new approach to study growth plate behavior under compression and illustrates the need for combining computational and experimental methods to better understand the chondrocyte mechanics in the growth plate cartilage. While the current model is relevant to fast dynamic events, such as heel strike in walking, we believe this approach provides new insight into the mechanical factors that regulate bone growth at the cell level and provides a basis for developing models to help interpret experimental results at varying time scales. PMID:25885547
Han, Guanghui; Liu, Xiabi; Zheng, Guangyuan; Wang, Murong; Huang, Shan
2018-06-06
Ground-glass opacity (GGO) is a common imaging sign on high-resolution CT, and lesions exhibiting it are more likely to be malignant than common solid lung nodules. Automatic recognition of GGO CT imaging signs is of great importance for the early diagnosis and possible cure of lung cancers. Existing GGO recognition methods employ traditional low-level features, and their performance has improved slowly. Considering the high performance of CNN models in the computer vision field, we propose an automatic recognition method for 3D GGO CT imaging signs based on the fusion of hybrid resampling and layer-wise fine-tuned CNN models. Our hybrid resampling is performed over multiple views and multiple receptive fields, which reduces the risk of missing small or large GGOs by adopting representative sampling panels and processing GGOs at multiple scales simultaneously. The layer-wise fine-tuning strategy makes it possible to obtain the optimal fine-tuned model, and fusing multiple CNN models yields better performance than any single trained model. We evaluated our method on the GGO nodule samples in the publicly available LIDC-IDRI dataset of chest CT scans. The experimental results show that our method yields excellent results, with 96.64% sensitivity, 71.43% specificity, and an F1 score of 0.83. Our method is a promising approach for applying deep learning to the computer-aided analysis of specific CT imaging signs with insufficient labeled images.
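A minimal PyTorch-style sketch of the layer-wise fine-tuning idea: freeze a pretrained network and unfreeze only its deepest k parameterized layers, keeping whichever k validates best. The architecture below is a placeholder, not the paper's network:

```python
import torch.nn as nn

model = nn.Sequential(                       # stand-in for a pretrained CNN
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(), nn.Linear(32 * 8 * 8, 2),  # GGO vs non-GGO, 32x32 input
)

def unfreeze_last_k(model, k):
    # Layer-wise fine-tuning: train only the deepest k parameterized layers.
    layers = [m for m in model if sum(1 for _ in m.parameters()) > 0]
    for i, layer in enumerate(layers):
        trainable = i >= len(layers) - k
        for p in layer.parameters():
            p.requires_grad = trainable

# Try k = 1, 2, ... and keep the depth with the best validation score;
# fusing predictions from several such models is the final step.
unfreeze_last_k(model, k=2)
optim_params = [p for p in model.parameters() if p.requires_grad]
```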
Multiscale modeling and computation of optically manipulated nano devices
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bao, Gang, E-mail: baog@zju.edu.cn; Liu, Di, E-mail: richardl@math.msu.edu; Luo, Songting, E-mail: luos@iastate.edu
2016-07-01
We present a multiscale modeling and computational scheme for optical-mechanical responses of nanostructures. The multi-physical nature of the problem is a result of the interaction between the electromagnetic (EM) field, the molecular motion, and the electronic excitation. To balance accuracy and complexity, we adopt the semi-classical approach in which the EM field is described classically by the Maxwell equations, while the charged particles follow the Schrödinger equations quantum mechanically. To overcome the numerical challenge of solving the high-dimensional multi-component many-body Schrödinger equations, we further simplify the model with Ehrenfest molecular dynamics to determine the motion of the nuclei, and use Time-Dependent Current Density Functional Theory (TD-CDFT) to calculate the excitation of the electrons. This leads to a system of coupled equations that computes the electromagnetic field, the nuclear positions, and the electronic current and charge densities simultaneously. In the regime of linear responses, the resonant frequencies initiating the out-of-equilibrium optical-mechanical responses can be formulated as an eigenvalue problem. A self-consistent multiscale method is designed to deal with the well-separated space scales. The isomerization of azobenzene is presented as a numerical example.
Muraro, D; Larrieu, A; Lucas, M; Chopard, J; Byrne, H; Godin, C; King, J
2016-09-07
The growth of the root of Arabidopsis thaliana is sustained by the meristem, a region of cell proliferation and differentiation which is located in the root apex and generates cells which move shootwards, expanding rapidly to cause root growth. The balance between cell division and differentiation is maintained via a signalling network, primarily coordinated by the hormones auxin, cytokinin and gibberellin. Since these hormones interact at different levels of spatial organisation, we develop a multi-scale computational model which enables us to study the interplay between these signalling networks and cell-cell communication during the specification of the root meristem. We investigate the responses of our model to hormonal perturbations, validating the results of our simulations against experimental data. Our simulations suggest that one or more additional components are needed to explain the observed expression patterns of a regulator of cytokinin signalling, ARR1, in roots not producing gibberellin. By searching for novel network components, we identify two mutant lines that affect significantly both root length and meristem size, one of which also differentially expresses a central component of the interaction network (SHY2). More generally, our study demonstrates how a multi-scale investigation can provide valuable insight into the spatio-temporal dynamics of signalling networks in biological tissues. Copyright © 2016 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Zaveri, Mazad Shaheriar
The semiconductor/computer industry has been following Moore's law for several decades and has reaped the benefits in speed and density of the resultant scaling. Transistor density has reached almost one billion per chip, and transistor delays are in picoseconds. However, scaling has slowed down, and the semiconductor industry is now facing several challenges. Hybrid CMOS/nano technologies, such as CMOL, are considered an interim solution to some of these challenges. Another potential architectural solution includes specialized architectures for applications/models in the intelligent computing domain, one aspect of which includes abstract computational models inspired by the neuro/cognitive sciences. Consequently, in this dissertation, we focus on the hardware implementations of Bayesian Memory (BM), which is a (Bayesian) Biologically Inspired Computational Model (BICM). This model is a simplified version of George and Hawkins' model of the visual cortex, which includes an inference framework based on Judea Pearl's belief propagation. We then present a "hardware design space exploration" methodology for implementing and analyzing the (digital and mixed-signal) hardware for the BM. This particular methodology involves: analyzing the computational/operational cost and the related micro-architecture, exploring candidate hardware components, proposing various custom hardware architectures using both traditional CMOS and hybrid nanotechnology (CMOL), and investigating the baseline performance/price of these architectures. The results suggest that CMOL is a promising candidate for implementing a BM. Such implementations can utilize the very high density storage/computation benefits of these new nano-scale technologies much more efficiently; for example, the throughput per 858 mm2 (TPM) obtained for CMOL based architectures is 32 to 40 times better than the TPM for a CMOS based multiprocessor/multi-FPGA system, and almost 2000 times better than the TPM for a PC implementation. We later use this methodology to investigate the hardware implementations of a cortex-scale spiking neural system, which is an approximate neural equivalent of a BICM-based cortex-scale system. The results of this investigation also suggest that CMOL is a promising candidate for implementing such large-scale neuromorphic systems. In general, the assessment of such hypothetical baseline hardware architectures provides the prospects for building large-scale (mammalian cortex-scale) implementations of neuromorphic/Bayesian/intelligent systems using state-of-the-art and beyond state-of-the-art silicon structures.
NASA Astrophysics Data System (ADS)
Jiang, Zeyun; Couples, Gary D.; Lewis, Helen; Mangione, Alessandro
2018-07-01
Limestones containing abundant disc-shaped fossil Nummulites can form significant hydrocarbon reservoirs, but they have a distinctly heterogeneous distribution of pore shapes, sizes and connectivities, which makes it particularly difficult to calculate petrophysical properties and consequent flow outcomes. The severity of the problem rests on the wide length-scale range, from the millimetre scale of the fossil's pore space to the micron scale of rock matrix pores. This work develops a technique to incorporate multi-scale void systems into a pore network, which is used to calculate the petrophysical properties for subsequent flow simulations at different stages in the limestone's petrophysical evolution. While rock pore size, shape and connectivity can be determined, with varying levels of fidelity, using techniques such as X-ray computed tomography (CT) or scanning electron microscopy (SEM), this work represents a more challenging class where the rock of interest is insufficiently sampled or, as here, has been overprinted by extensive chemical diagenesis. The main challenge is integrating multi-scale void structures, derived from both SEM and CT images, into a single model or pore-scale network while still honouring the nature of the connections across these length scales. Pore network flow simulations are used to illustrate the technique and, of equal importance, to demonstrate how supportable earlier-stage petrophysical property distributions can be used to assess the viability of several potential geological event sequences. The results of our flow simulations on generated models highlight the requirement for correct determination of the dominant pore scales (one or more of nm, μm, mm, cm), the spatial correlation and the cross-scale connections.
Fast hierarchical knowledge-based approach for human face detection in color images
NASA Astrophysics Data System (ADS)
Jiang, Jun; Gong, Jie; Zhang, Guilin; Hu, Ruolan
2001-09-01
This paper presents a fast hierarchical knowledge-based approach for automatically detecting multi-scale upright faces in still color images. The approach consists of three levels. At the highest level, skin-like regions are determined by a skin model based on the hue and saturation attributes in HSV color space, as well as the red and green attributes in normalized color space. In level 2, a new eye model is devised to select face candidates in the segmented skin-like regions. An important feature of the eye model is that it is independent of face scale, so faces at different scales can be found by scanning the image only once, which greatly reduces the computation time of face detection. In level 3, a face mosaic image model, which is consistent with the physical structure of the human face, is applied to judge whether faces are present in the candidate regions. This model includes edge and gray rules. Experimental results show that the approach is robust and fast, with wide application prospects in human-computer interaction, video telephony, etc.
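The level-1 skin segmentation described above reduces to thresholding in hue-saturation space. A minimal sketch follows, using illustrative threshold values rather than the paper's calibrated skin model.

```python
import numpy as np
from matplotlib.colors import rgb_to_hsv

def skin_mask(rgb, h_max=50 / 360, s_min=0.23, s_max=0.68):
    """Crude skin-likelihood mask from hue/saturation thresholds.

    rgb: float image in [0, 1], shape (H, W, 3). The thresholds here
    are illustrative, not the values used in the paper.
    """
    hsv = rgb_to_hsv(rgb)
    h, s = hsv[..., 0], hsv[..., 1]
    return (h <= h_max) & (s >= s_min) & (s <= s_max)

img = np.random.default_rng(1).random((120, 160, 3))  # stand-in image
mask = skin_mask(img)
print(f"{mask.mean():.1%} of pixels flagged as skin-like")
```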
NASA Astrophysics Data System (ADS)
Crowell, Andrew Rippetoe
This dissertation describes model reduction techniques for the computation of aerodynamic heat flux and pressure loads for multi-disciplinary analysis of hypersonic vehicles. NASA and the Department of Defense have expressed renewed interest in the development of responsive, reusable hypersonic cruise vehicles capable of sustained high-speed flight and access to space. However, an extensive set of technical challenges have obstructed the development of such vehicles. These technical challenges are partially due to both the inability to accurately test scaled vehicles in wind tunnels and to the time intensive nature of high-fidelity computational modeling, particularly for the fluid using Computational Fluid Dynamics (CFD). The aim of this dissertation is to develop efficient and accurate models for the aerodynamic heat flux and pressure loads to replace the need for computationally expensive, high-fidelity CFD during coupled analysis. Furthermore, aerodynamic heating and pressure loads are systematically evaluated for a number of different operating conditions, including: simple two-dimensional flow over flat surfaces up to three-dimensional flows over deformed surfaces with shock-shock interaction and shock-boundary layer interaction. An additional focus of this dissertation is on the implementation and computation of results using the developed aerodynamic heating and pressure models in complex fluid-thermal-structural simulations. Model reduction is achieved using a two-pronged approach. One prong focuses on developing analytical corrections to isothermal, steady-state CFD flow solutions in order to capture flow effects associated with transient spatially-varying surface temperatures and surface pressures (e.g., surface deformation, surface vibration, shock impingements, etc.). The second prong is focused on minimizing the computational expense of computing the steady-state CFD solutions by developing an efficient surrogate CFD model. The developed two-pronged approach is found to exhibit balanced performance in terms of accuracy and computational expense, relative to several existing approaches. This approach enables CFD-based loads to be implemented into long duration fluid-thermal-structural simulations.
ACCURATE CHEMICAL MASTER EQUATION SOLUTION USING MULTI-FINITE BUFFERS
Cao, Youfang; Terebus, Anna; Liang, Jie
2016-01-01
The discrete chemical master equation (dCME) provides a fundamental framework for studying stochasticity in mesoscopic networks. Because of the multi-scale nature of many networks where reaction rates have large disparity, directly solving dCMEs is intractable due to the exploding size of the state space. It is important to truncate the state space effectively with quantified errors, so accurate solutions can be computed. It is also important to know if all major probabilistic peaks have been computed. Here we introduce the Accurate CME (ACME) algorithm for obtaining direct solutions to dCMEs. With multi-finite buffers for reducing the state space by O(n!), exact steady-state and time-evolving network probability landscapes can be computed. We further describe a theoretical framework of aggregating microstates into a smaller number of macrostates by decomposing a network into independent aggregated birth and death processes, and give an a priori method for rapidly determining steady-state truncation errors. The maximal sizes of the finite buffers for a given error tolerance can also be pre-computed without costly trial solutions of dCMEs. We show exactly computed probability landscapes of three multi-scale networks, namely, a 6-node toggle switch, 11-node phage-lambda epigenetic circuit, and 16-node MAPK cascade network, the latter two with no known solutions. We also show how probabilities of rare events can be computed from first-passage times, another class of unsolved problems challenging for simulation-based techniques due to large separations in time scales. Overall, the ACME method enables accurate and efficient solutions of the dCME for a large class of networks. PMID:27761104
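The finite-buffer idea can be seen on the smallest possible example: truncate a birth-death network at a buffer size and solve the truncated generator directly. The toy sketch below is not the ACME implementation; in practice the adequacy of the truncation is checked by growing the buffer until the answer stops changing, and ACME provides a priori error bounds instead.

```python
import numpy as np

def birth_death_steady_state(k_birth, k_death, n_max):
    """Steady state of a birth-death dCME truncated to the buffer [0, n_max].

    A[to, from] holds transition rates so that dp/dt = A @ p; the
    steady state is the null vector of A, normalized to a probability.
    """
    n = n_max + 1
    A = np.zeros((n, n))
    for i in range(n):
        if i < n_max:
            A[i + 1, i] += k_birth       # birth: i -> i+1
            A[i, i] -= k_birth
        if i > 0:
            A[i - 1, i] += k_death * i   # death: i -> i-1
            A[i, i] -= k_death * i
    w, v = np.linalg.eig(A)
    p = np.abs(np.real(v[:, np.argmin(np.abs(w))]))  # eigenvector of eigenvalue ~0
    return p / p.sum()

p = birth_death_steady_state(k_birth=5.0, k_death=1.0, n_max=40)
print("mean copy number:", (np.arange(41) * p).sum())  # ~ k_birth / k_death
```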
Coupled Kardar-Parisi-Zhang Equations in One Dimension
NASA Astrophysics Data System (ADS)
Ferrari, Patrik L.; Sasamoto, Tomohiro; Spohn, Herbert
2013-11-01
Over the past years our understanding of the scaling properties of the solutions to the one-dimensional KPZ equation has advanced considerably, both theoretically and experimentally. In our contribution we export these insights to the case of coupled KPZ equations in one dimension. We establish equivalence with nonlinear fluctuating hydrodynamics for multi-component driven stochastic lattice gases. To check the predictions of the theory, we perform Monte Carlo simulations of the two-component AHR model. Its steady state is computed using the matrix product ansatz. Thereby all coefficients appearing in the coupled KPZ equations are deduced from the microscopic model. Time correlations in the steady state are simulated and we confirm not only the scaling exponent, but also the scaling function and the non-universal coefficients.
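For readers who want to experiment, a standard Euler-Maruyama discretization of the single-component KPZ equation is sketched below; the coupled multi-component case studied in the paper adds coupling matrices in the nonlinear term. This is a textbook finite-difference scheme, not the authors' code, and the parameter values are illustrative.

```python
import numpy as np

def kpz_step(h, dt, dx, nu, lam, D, rng):
    """One Euler-Maruyama step of the 1D KPZ equation
    dh/dt = nu * h_xx + (lam/2) * (h_x)^2 + sqrt(D) * xi,
    with periodic boundaries and spatially discretized white noise."""
    lap = (np.roll(h, 1) - 2 * h + np.roll(h, -1)) / dx**2
    grad = (np.roll(h, -1) - np.roll(h, 1)) / (2 * dx)
    noise = rng.standard_normal(h.size) * np.sqrt(D * dt / dx)
    return h + dt * (nu * lap + 0.5 * lam * grad**2) + noise

rng = np.random.default_rng(2)
h = np.zeros(256)
for _ in range(5_000):
    h = kpz_step(h, dt=0.01, dx=1.0, nu=1.0, lam=1.0, D=0.1, rng=rng)
print("interface width:", h.std())
```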
Human seizures couple across spatial scales through travelling wave dynamics
NASA Astrophysics Data System (ADS)
Martinet, L.-E.; Fiddyment, G.; Madsen, J. R.; Eskandar, E. N.; Truccolo, W.; Eden, U. T.; Cash, S. S.; Kramer, M. A.
2017-04-01
Epilepsy--the propensity toward recurrent, unprovoked seizures--is a devastating disease affecting 65 million people worldwide. Understanding and treating this disease remains a challenge, as seizures manifest through mechanisms and features that span spatial and temporal scales. Here we address this challenge through the analysis and modelling of human brain voltage activity recorded simultaneously across microscopic and macroscopic spatial scales. We show that during seizure large-scale neural populations spanning centimetres of cortex coordinate with small neural groups spanning cortical columns, and provide evidence that rapidly propagating waves of activity underlie this increased inter-scale coupling. We develop a corresponding computational model to propose specific mechanisms--namely, the effects of an increased extracellular potassium concentration diffusing in space--that support the observed spatiotemporal dynamics. Understanding the multi-scale, spatiotemporal dynamics of human seizures--and connecting these dynamics to specific biological mechanisms--promises new insights to treat this devastating disease.
Assembling Large, Multi-Sensor Climate Datasets Using the SciFlo Grid Workflow System
NASA Astrophysics Data System (ADS)
Wilson, B.; Manipon, G.; Xing, Z.; Fetzer, E.
2009-04-01
NASA's Earth Observing System (EOS) is an ambitious facility for studying global climate change. The mandate now is to combine measurements from the instruments on the "A-Train" platforms (AIRS, AMSR-E, MODIS, MISR, MLS, and CloudSat) and other Earth probes to enable large-scale studies of climate change over periods of years to decades. However, moving from predominantly single-instrument studies to a multi-sensor, measurement-based model for long-duration analysis of important climate variables presents serious challenges for large-scale data mining and data fusion. For example, one might want to compare temperature and water vapor retrievals from one instrument (AIRS) to another instrument (MODIS), and to a model (ECMWF), stratify the comparisons using a classification of the "cloud scenes" from CloudSat, and repeat the entire analysis over years of AIRS data. To perform such an analysis, one must discover & access multiple datasets from remote sites, find the space/time "matchups" between instrument swaths and model grids, understand the quality flags and uncertainties for retrieved physical variables, assemble merged datasets, and compute fused products for further scientific and statistical analysis. To meet these large-scale challenges, we are utilizing a Grid computing and dataflow framework, named SciFlo, in which we are deploying a set of versatile and reusable operators for data query, access, subsetting, co-registration, mining, fusion, and advanced statistical analysis. SciFlo is a semantically-enabled ("smart") Grid Workflow system that ties together a peer-to-peer network of computers into an efficient engine for distributed computation. The SciFlo workflow engine enables scientists to do multi-instrument Earth Science by assembling remotely-invokable Web Services (SOAP or http GET URLs), native executables, command-line scripts, and Python codes into a distributed computing flow. A scientist visually authors the graph of operations in the VizFlow GUI, or uses a text editor to modify the simple XML workflow documents. The SciFlo client & server engines optimize the execution of such distributed workflows and allow the user to transparently find and use datasets and operators without worrying about the actual location of the Grid resources. The engine transparently moves data to the operators, and moves operators to the data (on the dozen trusted SciFlo nodes). SciFlo also deploys a variety of Data Grid services to: query datasets in space and time, locate & retrieve on-line data granules, provide on-the-fly variable and spatial subsetting, perform pairwise instrument matchups for A-Train datasets, and compute fused products. These services are combined into efficient workflows to assemble the desired large-scale, merged climate datasets. SciFlo is currently being applied in several large climate studies: comparisons of aerosol optical depth between MODIS, MISR, the AERONET ground network, and U. Michigan's IMPACT aerosol transport model; characterization of long-term biases in microwave and infrared instruments (AIRS, MLS) by comparisons to GPS temperature retrievals accurate to 0.1 K; and construction of a decade-long, multi-sensor water vapor climatology stratified by classified cloud scene by bringing together datasets from AIRS/AMSU, AMSR-E, MLS, MODIS, and CloudSat (NASA MEASUREs grant, Fetzer PI).
The presentation will discuss the SciFlo technologies, their application in these distributed workflows, and the many challenges encountered in assembling and analyzing these massive datasets.
NASA Technical Reports Server (NTRS)
Chamis, Christos C.; Abumeri, Galib H.
2000-01-01
Aircraft engines are assemblies of dynamically interacting components. Engine updates to keep present aircraft flying safely, and engines for new aircraft, are progressively required to operate under more demanding technological and environmental requirements. Designs that effectively meet those requirements are necessarily collections of multi-scale, multi-level, multi-disciplinary analysis and optimization methods, and probabilistic methods are necessary to quantify the respective uncertainties. These types of methods are the only ones that can formally evaluate advanced composite designs which satisfy those progressively demanding requirements while assuring minimum cost, maximum reliability and maximum durability. Recent research activities at NASA Glenn Research Center have focused on developing multi-scale, multi-level, multidisciplinary analysis and optimization methods. Multi-scale refers to formal methods which describe complex material behavior, metal or composite; multi-level refers to integration of participating disciplines to describe a structural response at the scale of interest; multidisciplinary refers to an open-ended framework for the various existing and yet-to-be-developed discipline constructs required to formally predict/describe a structural response in engine operating environments. For example, these include, but are not limited to: multi-factor models for material behavior, multi-scale composite mechanics, general purpose structural analysis, progressive structural fracture for evaluating durability and integrity, noise and acoustic fatigue, emission requirements, hot fluid mechanics, heat transfer and probabilistic simulations. Many of these, as well as others, are encompassed in an integrated computer code identified as the Engine Structures Technology Benefits Estimator (EST/BEST) or Multi-faceted/Engine Structures Optimization (MP/ESTOP). The discipline modules integrated in MP/ESTOP include: engine cycle (thermodynamics), engine weights, internal fluid mechanics, cost, mission and coupled structural/thermal analysis, various composite property simulators, and probabilistic methods to evaluate uncertainty effects (scatter ranges) in all the design parameters. The objective of this paper is to briefly describe a multi-faceted design analysis and optimization capability for coupled multi-discipline engine structures optimization. Results are presented for engine and aircraft type metrics to illustrate the versatility of that capability. Results are also presented for reliability, noise and fatigue to illustrate its inclusiveness. For example, replacing metal rotors with composites reduces engine weight by 20 percent and noise by 15 percent, and improves reliability by an order of magnitude. Composite designs exist that increase fatigue life by at least two orders of magnitude compared to state-of-the-art metals.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Engel, David W.; Reichardt, Thomas A.; Kulp, Thomas J.
Validating predictive models and quantifying uncertainties inherent in the modeling process is a critical component of the HARD Solids Venture program [1]. Our current research focuses on validating physics-based models predicting the optical properties of solid materials for arbitrary surface morphologies and characterizing the uncertainties in these models. We employ a systematic and hierarchical approach by designing physical experiments and comparing the experimental results with the outputs of computational predictive models. We illustrate this approach through an example comparing a micro-scale forward model to an idealized solid-material system and then propagating the results through a system model to the sensor level. Our efforts should enhance detection reliability of the hyper-spectral imaging technique and the confidence in model utilization and model outputs by users and stakeholders.
Brad C. Timm; Kevin McGarigal; Samuel A. Cushman; Joseph L. Ganey
2016-01-01
Efficacy of future habitat selection studies will benefit by taking a multi-scale approach. In addition to potentially providing increased explanatory power and predictive capacity, multi-scale habitat models enhance our understanding of the scales at which species respond to their environment, which is critical knowledge required to implement effective...
Multi-scale Finite Element Modeling of Eustachian Tube Function: Influence of Mucosal Adhesion
Malik, J.E.; Swarts, J.D.; Ghadiali, S. N.
2017-01-01
The inability to open the collapsible Eustachian tube (ET) leads to the development of chronic Otitis Media (OM). Although mucosal inflammation during OM leads to increased mucin gene expression and elevated adhesion forces within the ET lumen, it is not known how changes in mucosal adhesion alter the biomechanical mechanisms of ET function. In this study, we developed a novel multi-scale finite element model of ET function in adults that utilizes adhesion spring elements to simulate changes in mucosal adhesion. Models were created for six adult subjects and dynamic patterns in muscle contraction were used to simulate the wave-like opening of the ET that occurs during swallowing. Results indicate that ET opening is highly sensitive to the level of mucosal adhesion and that exceeding a critical value of adhesion leads to rapid ET dysfunction. Parameter variation studies and sensitivity analysis indicate that increased mucosal adhesion alters the relative importance of several tissue biomechanical properties. For example, increases in mucosal adhesion reduced the sensitivity of ET function to tensor veli palatini muscle forces but did not alter the insensitivity of ET function to levator veli palatini muscle forces. Interestingly, although changes in cartilage stiffness did not significantly influence ET opening under low adhesion conditions, ET opening was highly sensitive to changes in cartilage stiffness under high adhesion conditions. Therefore, our multi-scale computational models indicate that changes in mucosal adhesion as would occur during inflammatory OM alter the biomechanical mechanisms of ET function. PMID:26891171
Uranium Hydride Nucleation and Growth Model FY'16 ESC Annual Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hill, Mary Ann; Richards, Andrew Walter; Holby, Edward F.
2016-12-20
Uranium hydride corrosion is of great interest to the nuclear industry. Uranium reacts with water and/or hydrogen to form uranium hydride which adversely affects material performance. Hydride nucleation is influenced by thermal history, mechanical defects, oxide thickness, and chemical defects. Information has been gathered from past hydride experiments to formulate a uranium hydride model to be used in a Canned Subassembly (CSA) lifetime prediction model. This multi-scale computer modeling effort started in FY’13, and the fourth generation model is now complete. Additional high-resolution experiments will be run to further test the model.
Anandakrishnan, Ramu; Scogland, Tom R W; Fenley, Andrew T; Gordon, John C; Feng, Wu-chun; Onufriev, Alexey V
2010-06-01
Tools that compute and visualize biomolecular electrostatic surface potential have been used extensively for studying biomolecular function. However, determining the surface potential for large biomolecules on a typical desktop computer can take days or longer using currently available tools and methods. Two commonly used techniques to speed-up these types of electrostatic computations are approximations based on multi-scale coarse-graining and parallelization across multiple processors. This paper demonstrates that for the computation of electrostatic surface potential, these two techniques can be combined to deliver significantly greater speed-up than either one separately, something that is in general not always possible. Specifically, the electrostatic potential computation, using an analytical linearized Poisson-Boltzmann (ALPB) method, is approximated using the hierarchical charge partitioning (HCP) multi-scale method, and parallelized on an ATI Radeon 4870 graphical processing unit (GPU). The implementation delivers a combined 934-fold speed-up for a 476,040 atom viral capsid, compared to an equivalent non-parallel implementation on an Intel E6550 CPU without the approximation. This speed-up is significantly greater than the 42-fold speed-up for the HCP approximation alone or the 182-fold speed-up for the GPU alone. Copyright (c) 2010 Elsevier Inc. All rights reserved.
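The essence of the hierarchical charge partitioning (HCP) approximation described above is that distant charge groups are replaced by their aggregate. The toy two-level, CPU-only sketch below illustrates the idea with a bare Coulomb sum; the paper's implementation is GPU-parallel and uses the ALPB electrostatics model, and everything named here is an assumption for illustration.

```python
import numpy as np

def potential_hcp(charges, coords, targets, groups, cutoff):
    """Coulomb-like potential with a two-level HCP-style approximation.

    Charge groups within `cutoff` of a target are summed exactly;
    distant groups are replaced by a single point charge at their
    centroid, trading accuracy for a large reduction in work.
    """
    group_q = np.array([charges[g].sum() for g in groups])
    group_x = np.array([coords[g].mean(axis=0) for g in groups])
    phi = np.zeros(len(targets))
    for t, x in enumerate(targets):
        for gi, g in enumerate(groups):
            d_group = np.linalg.norm(group_x[gi] - x)
            if d_group > cutoff:                        # far field: aggregate
                phi[t] += group_q[gi] / d_group
            else:                                       # near field: exact
                d = np.linalg.norm(coords[g] - x, axis=1)
                phi[t] += (charges[g] / d).sum()
    return phi

rng = np.random.default_rng(3)
coords = rng.random((1000, 3)) * 50.0
charges = rng.choice([-1.0, 1.0], size=1000)
groups = np.array_split(np.arange(1000), 50)   # residue-like charge groups
targets = rng.random((5, 3)) * 50.0
print(potential_hcp(charges, coords, targets, groups, cutoff=15.0))
```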
Compact Multimedia Systems in Multi-chip Module Technology
NASA Technical Reports Server (NTRS)
Fang, Wai-Chi; Alkalaj, Leon
1995-01-01
This tutorial paper shows advanced multimedia system designs based on multi-chip module (MCM) technologies that provide essential computing, compression, communication, and storage capabilities for various large scale information highway applications.
Sub-discretized surface model with application to contact mechanics in multi-body simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnson, S; Williams, J
2008-02-28
The mechanics of contact between rough and imperfectly spherical adhesive powder grains are often complicated by a variety of factors, several of which vary over sub-grain length scales. These include traction factors that vary spatially over the surface of the individual grains, including high energy electron and acceptor sites (electrostatic), hydrophobic and hydrophilic sites (electrostatic and capillary), surface energy (general adhesion), geometry (van der Waals and mechanical), and elasto-plastic deformation (mechanical). For mechanical deformation and reaction, coupled motions, such as twisting with bending and sliding, as well as surface roughness, add an asymmetry to the contact force which invalidates assumptions for popular models of contact, such as the Hertzian model and its derivatives for the non-adhesive case, and the JKR and DMT models for adhesive contacts. Though several contact laws have been offered to ameliorate these drawbacks, they are often constrained to particular loading paths (most often normal loading) and are relatively complicated to implement computationally. This paper offers a simple and general computational method for augmenting contact law predictions in multi-body simulations through characterization of the contact surfaces using a hierarchically-defined surface sub-discretization. For the case of adhesive contact between powder grains in low stress regimes, this technique allows a variety of existing contact laws to be resolved across scales, allowing moments and torques about the contact area, as well as normal and tangential tractions, to be resolved. This is especially useful for multi-body simulation applications where the modeler desires statistical distributions and calibration for parameters in contact laws commonly used for resolving near-surface contact mechanics. The approach is verified against analytical results for the case of rough, elastic spheres.
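The core bookkeeping of such a sub-discretized surface is resolving per-patch tractions into a net force and moment on the grain. A minimal sketch follows, under the assumption that patch tractions have already been produced by some local contact law; the hierarchical surface definition and the contact laws themselves are omitted.

```python
import numpy as np

def contact_resultants(patch_centers, patch_areas, patch_tractions, ref_point):
    """Resolve distributed patch tractions into a net force and moment.

    patch_centers: (n, 3) centroids of the sub-discretized surface patches
    patch_tractions: (n, 3) traction vectors (force per unit area) on each
    patch, evaluated patch-by-patch by whatever local contact law applies.
    """
    forces = patch_tractions * patch_areas[:, None]
    net_force = forces.sum(axis=0)
    arms = patch_centers - ref_point
    net_moment = np.cross(arms, forces).sum(axis=0)
    return net_force, net_moment

# Toy grain: patches on a unit sphere with inward pressure on one cap.
rng = np.random.default_rng(4)
n = 500
pts = rng.standard_normal((n, 3))
pts /= np.linalg.norm(pts, axis=1, keepdims=True)
areas = np.full(n, 4 * np.pi / n)                    # equal-area approximation
tractions = np.where(pts[:, 2:3] > 0.8, -pts, 0.0)   # pressure on polar cap
F, M = contact_resultants(pts, areas, tractions, ref_point=np.zeros(3))
print("net force:", F, "net moment:", M)
```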
Tebani, Abdellah; Afonso, Carlos; Marret, Stéphane; Bekri, Soumeya
2016-01-01
The rise of technologies that simultaneously measure thousands of data points represents the heart of systems biology. These technologies have had a huge impact on the discovery of next-generation diagnostics, biomarkers, and drugs in the precision medicine era. Systems biology aims to achieve systemic exploration of complex interactions in biological systems. Driven by high-throughput omics technologies and the computational surge, it enables multi-scale and insightful overviews of cells, organisms, and populations. Precision medicine capitalizes on these conceptual and technological advancements and stands on two main pillars: data generation and data modeling. High-throughput omics technologies allow the retrieval of comprehensive and holistic biological information, whereas computational capabilities enable high-dimensional data modeling and, therefore, accessible and user-friendly visualization. Furthermore, bioinformatics has enabled comprehensive multi-omics and clinical data integration for insightful interpretation. Despite their promise, the translation of these technologies into clinically actionable tools has been slow. In this review, we present state-of-the-art multi-omics data analysis strategies in a clinical context. The challenges of omics-based biomarker translation are discussed. Perspectives regarding the use of multi-omics approaches for inborn errors of metabolism (IEM) are presented by introducing a new paradigm shift in addressing IEM investigations in the post-genomic era. PMID:27649151
Effect of thematic map misclassification on landscape multi-metric assessment.
Kleindl, William J; Powell, Scott L; Hauer, F Richard
2015-06-01
Advancements in remote sensing and computational tools have increased our awareness of large-scale environmental problems, thereby creating a need for monitoring, assessment, and management at these scales. Over the last decade, several watershed and regional multi-metric indices have been developed to assist decision-makers with planning actions of these scales. However, these tools use remote-sensing products that are subject to land-cover misclassification, and these errors are rarely incorporated in the assessment results. Here, we examined the sensitivity of a landscape-scale multi-metric index (MMI) to error from thematic land-cover misclassification and the implications of this uncertainty for resource management decisions. Through a case study, we used a simplified floodplain MMI assessment tool, whose metrics were derived from Landsat thematic maps, to initially provide results that were naive to thematic misclassification error. Using a Monte Carlo simulation model, we then incorporated map misclassification error into our MMI, resulting in four important conclusions: (1) each metric had a different sensitivity to error; (2) within each metric, the bias between the error-naive metric scores and simulated scores that incorporate potential error varied in magnitude and direction depending on the underlying land cover at each assessment site; (3) collectively, when the metrics were combined into a multi-metric index, the effects were attenuated; and (4) the index bias indicated that our naive assessment model may overestimate floodplain condition of sites with limited human impacts and, to a lesser extent, either over- or underestimated floodplain condition of sites with mixed land use.
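The error-propagation step described above can be reproduced generically: draw plausible "mapped" land covers from a confusion matrix and recompute the metric each time. The sketch below uses an invented three-class confusion matrix and a trivial metric standing in for the floodplain MMI; none of the numbers come from the paper.

```python
import numpy as np

def mc_metric_distribution(class_fractions, confusion, metric,
                           n_draws=1000, n_pixels=10_000, seed=0):
    """Propagate thematic misclassification into a landscape metric.

    class_fractions: true cover fractions at a site, shape (k,)
    confusion: row-stochastic matrix; confusion[i, j] = P(mapped j | true i)
    metric: function of mapped cover fractions -> score
    Returns the Monte Carlo distribution of the metric score.
    """
    rng = np.random.default_rng(seed)
    k = len(class_fractions)
    scores = np.empty(n_draws)
    for d in range(n_draws):
        true_counts = rng.multinomial(n_pixels, class_fractions)
        mapped = np.zeros(k)
        for i in range(k):
            mapped += rng.multinomial(true_counts[i], confusion[i])
        scores[d] = metric(mapped / n_pixels)
    return scores

# Illustrative 3-class site; the "metric" scores the natural-cover fraction.
conf = np.array([[0.90, 0.08, 0.02],
                 [0.10, 0.85, 0.05],
                 [0.05, 0.05, 0.90]])
scores = mc_metric_distribution([0.6, 0.3, 0.1], conf, lambda f: f[0])
print(f"metric: {scores.mean():.3f} +/- {scores.std():.3f}")
```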
GPU Multi-Scale Particle Tracking and Multi-Fluid Simulations of the Radiation Belts
NASA Astrophysics Data System (ADS)
Ziemba, T.; Carscadden, J.; O'Donnell, D.; Winglee, R.; Harnett, E.; Cash, M.
2007-12-01
The properties of the radiation belts can vary dramatically under the influence of magnetic storms and storm-time substorms. The task of understanding and predicting radiation belt properties is made difficult because those properties are determined by global processes as well as small-scale wave-particle interactions. A full solution to the problem will require major innovations in technique and computer hardware. The proposed work demonstrates linked particle tracking codes with new multi-scale/multi-fluid global simulations that provide the first means to include small-scale processes within the global magnetospheric context. A large hurdle is having sufficient computer hardware to handle the disparate temporal and spatial scales. A major innovation of the work is that the codes are designed to run on graphics processing units (GPUs). GPUs are intrinsically highly parallelized systems that provide more than an order of magnitude computing speed over CPU-based systems, for little more cost than a high-end workstation. Recent advancements in GPU technologies allow for full IEEE float specifications with performance up to several hundred GFLOPs per GPU, and new software architectures have recently become available to ease the transition from graphics-based to scientific applications. This allows for a cheap alternative to standard supercomputing methods and should shorten the time to discovery. A demonstration of the code pushing more than 500,000 particles faster than real time is presented and used to provide new insight into radiation belt dynamics.
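Particle tracking of this kind commonly rests on the Boris pusher, which is embarrassingly parallel across particles and hence well suited to GPUs. A single-particle reference version is sketched below; the abstract does not show the authors' GPU kernels, so this is the standard scheme, not their code.

```python
import numpy as np

def boris_push(x, v, E, B, q_over_m, dt):
    """One step of the Boris particle pusher: a half electric kick,
    a magnetic rotation, and a second half kick. Exactly conserves
    particle speed in a pure magnetic field."""
    t = q_over_m * B * 0.5 * dt
    s = 2.0 * t / (1.0 + np.dot(t, t))
    v_minus = v + q_over_m * E * 0.5 * dt      # half electric kick
    v_prime = v_minus + np.cross(v_minus, t)   # magnetic rotation, step 1
    v_plus = v_minus + np.cross(v_prime, s)    # magnetic rotation, step 2
    v_new = v_plus + q_over_m * E * 0.5 * dt   # second half kick
    return x + v_new * dt, v_new

# Gyration test: uniform B along z, no E -> circular orbit in the x-y plane.
x, v = np.zeros(3), np.array([1.0, 0.0, 0.0])
B, E = np.array([0.0, 0.0, 1.0]), np.zeros(3)
for _ in range(1000):
    x, v = boris_push(x, v, E, B, q_over_m=1.0, dt=0.01)
print("speed conserved:", np.isclose(np.linalg.norm(v), 1.0))
```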
NASA Astrophysics Data System (ADS)
Harfst, S.; Portegies Zwart, S.; McMillan, S.
2008-12-01
We present MUSE, a software framework for combining existing computational tools from different astrophysical domains into a single multi-physics, multi-scale application. MUSE facilitates the coupling of existing codes written in different languages by providing inter-language tools and by specifying an interface between each module and the framework that represents a balance between generality and computational efficiency. This approach allows scientists to use combinations of codes to solve highly-coupled problems without the need to write new codes for other domains or significantly alter their existing codes. MUSE currently incorporates the domains of stellar dynamics, stellar evolution and stellar hydrodynamics for studying generalized stellar systems. We have now reached a ``Noah's Ark'' milestone, with (at least) two available numerical solvers for each domain. MUSE can treat multi-scale and multi-physics systems in which the time- and size-scales are well separated, like simulating the evolution of planetary systems, small stellar associations, dense stellar clusters, galaxies and galactic nuclei. In this paper we describe two examples calculated using MUSE: the merger of two galaxies and an N-body simulation with live stellar evolution. In addition, we demonstrate an implementation of MUSE on a distributed computer which may also include special-purpose hardware, such as GRAPEs or GPUs, to accelerate computations. The current MUSE code base is publicly available as open source at http://muse.li.
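The MUSE pattern, in which each domain solver advances independently and exchanges state at coupling intervals, can be caricatured in a few lines. The module interface and the two toy solvers below are illustrative assumptions, not MUSE's actual API.

```python
import math

class MassLossStar:
    """Toy 'stellar evolution' module: exponential wind mass loss."""
    def __init__(self, mass, rate):
        self.mass, self.rate, self.t = mass, rate, 0.0
    def evolve(self, t_end):
        self.mass *= math.exp(-self.rate * (t_end - self.t))
        self.t = t_end
    def exchange(self, others):
        pass  # this module needs nothing from the others

class Orbit:
    """Toy 'dynamics' module: orbit radius expands adiabatically as the
    host star loses mass (radius * mass ~ const for slow mass loss)."""
    def __init__(self, radius):
        self.radius, self.mass_seen = radius, None
    def evolve(self, t_end):
        pass  # the radius only changes via exchanged mass, below
    def exchange(self, others):
        m = sum(o.mass for o in others if hasattr(o, "mass"))
        if self.mass_seen is not None:
            self.radius *= self.mass_seen / m
        self.mass_seen = m

def evolve_coupled(modules, t_end, dt_couple):
    """Operator-split coupling loop: each module advances independently
    over dt_couple, then state is exchanged between modules."""
    t = 0.0
    while t < t_end:
        t = min(t + dt_couple, t_end)
        for m in modules:
            m.evolve(t)
        for m in modules:
            m.exchange([o for o in modules if o is not m])

star, orbit = MassLossStar(mass=1.0, rate=0.1), Orbit(radius=1.0)
evolve_coupled([star, orbit], t_end=5.0, dt_couple=0.5)
print(f"final mass {star.mass:.3f}, final radius {orbit.radius:.3f}")
```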
Theory-based transport simulations of TFTR L-mode temperature profiles
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bateman, G.
1992-03-01
The temperature profiles from a selection of Tokamak Fusion Test Reactor (TFTR) L-mode discharges (17th European Conference on Controlled Fusion and Plasma Heating, Amsterdam, 1990 (EPS, Petit-Lancy, Switzerland, 1990), p. 114) are simulated with the 1½-D BALDUR transport code (Comput. Phys. Commun. 49, 275 (1988)) using a combination of theoretically derived transport models, called the Multi-Mode Model (Comments Plasma Phys. Controlled Fusion 11, 165 (1988)). The present version of the Multi-Mode Model consists of effective thermal diffusivities resulting from trapped electron modes and ion temperature gradient (η_i) modes, which dominate in the core of the plasma, together with resistive ballooning modes, which dominate in the periphery. Within the context of this transport model and the TFTR simulations reported here, the scaling of confinement with heating power comes from the temperature dependence of the η_i and trapped electron modes, while the scaling with current comes mostly from resistive ballooning modes.
NASA Astrophysics Data System (ADS)
Grunloh, Timothy P.
The objective of this dissertation is to develop a 3-D domain-overlapping coupling method that leverages the superior flow field resolution of the Computational Fluid Dynamics (CFD) code STAR-CCM+ and the fast execution of the System Thermal Hydraulic (STH) code TRACE to efficiently and accurately model thermal hydraulic transport properties in nuclear power plants under complex conditions of regulatory and economic importance. The primary contribution is the novel Stabilized Inertial Domain Overlapping (SIDO) coupling method, which allows for on-the-fly correction of TRACE solutions for local pressures and velocity profiles inside multi-dimensional regions based on the results of the CFD simulation. The method is found to outperform the more frequently-used domain decomposition coupling methods. An STH code such as TRACE is designed to simulate large, diverse component networks, requiring simplifications to the fluid flow equations for reasonable execution times. Empirical correlations are therefore required for many sub-grid processes. The coarse grids used by TRACE diminish sensitivity to small scale geometric details such as Reactor Pressure Vessel (RPV) internals. A CFD code such as STAR-CCM+ uses much finer computational meshes that are sensitive to the geometric details of reactor internals. In turbulent flows, it is infeasible to fully resolve the flow solution, but the correlations used to model turbulence are at a low level. The CFD code can therefore resolve smaller scale flow processes. The development of a 3-D coupling method was carried out with the intention of improving predictive capabilities of transport properties in the downcomer and lower plenum regions of an RPV in reactor safety calculations. These regions are responsible for the multi-dimensional mixing effects that determine the distribution at the core inlet of quantities with reactivity implications, such as fluid temperature and dissolved neutron absorber concentration.
Zhou, Jiyun; Wang, Hongpeng; Zhao, Zhishan; Xu, Ruifeng; Lu, Qin
2018-05-08
Protein secondary structure is the three-dimensional form of local segments of proteins, and its prediction is an important problem in protein tertiary structure prediction. Developing computational approaches for protein secondary structure prediction is becoming increasingly urgent. We present a novel deep learning based model, referred to as CNNH_PSS, using a multi-scale CNN with highway connections. In CNNH_PSS, any two neighboring convolutional layers have a highway that delivers information from the current layer to the output of the next one to preserve local contexts. Since lower layers extract local context while higher layers extract long-range interdependencies, the highways between neighboring layers give CNNH_PSS the ability to extract both local contexts and long-range interdependencies. We evaluate CNNH_PSS on two commonly used datasets: CB6133 and CB513. CNNH_PSS outperforms the multi-scale CNN without highways by at least 0.010 Q8 accuracy and also performs better than CNF, DeepCNF and SSpro8, which cannot extract long-range interdependencies, by at least 0.020 Q8 accuracy, demonstrating that both local contexts and long-range interdependencies are indeed useful for prediction. Furthermore, CNNH_PSS also performs better than GSM and DCRNN, which need extra complex models to extract long-range interdependencies. This demonstrates that CNNH_PSS not only costs fewer computational resources but also achieves better predictive performance. CNNH_PSS extracts both local contexts and long-range interdependencies by combining a multi-scale CNN with highway networks. The evaluations on common datasets and comparisons with state-of-the-art methods indicate that CNNH_PSS is a useful and efficient tool for protein secondary structure prediction.
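The highway mechanism itself is compact: a learned gate interpolates between a layer's transformation and its untransformed input. A PyTorch-style sketch with illustrative hyperparameters follows; it shows the mechanism, not the published CNNH_PSS architecture.

```python
import torch
import torch.nn as nn

class HighwayConv1d(nn.Module):
    """A 1D convolutional block with a highway connection: a learned
    gate mixes the transformed features H(x) with the input x, letting
    local context flow past deeper layers."""
    def __init__(self, channels, kernel_size=3):
        super().__init__()
        pad = kernel_size // 2
        self.transform = nn.Conv1d(channels, channels, kernel_size, padding=pad)
        self.gate = nn.Conv1d(channels, channels, kernel_size, padding=pad)

    def forward(self, x):
        h = torch.relu(self.transform(x))
        g = torch.sigmoid(self.gate(x))   # per-position, per-channel gate
        return g * h + (1.0 - g) * x      # highway: transform or carry

# One batch of 8 protein windows, 21 input feature channels, length 100.
x = torch.randn(8, 21, 100)
block = nn.Sequential(nn.Conv1d(21, 64, 3, padding=1),
                      HighwayConv1d(64), HighwayConv1d(64))
print(block(x).shape)  # torch.Size([8, 64, 100])
```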
Origin of Permeability and Structure of Flows in Fractured Media
NASA Astrophysics Data System (ADS)
De Dreuzy, J.; Darcel, C.; Davy, P.; Erhel, J.; Le Goc, R.; Maillot, J.; Meheust, Y.; Pichot, G.; Poirriez, B.
2013-12-01
After more than three decades of research, flows in fractured media have been shown to result from multi-scale geological structures. Flows result, non-exclusively, from the damage zones of large faults, from percolation within denser networks of smaller fractures, from aperture heterogeneity within the fracture planes, and from some remaining permeability within the matrix. While the effect of each of these causes has been studied independently, global assessments of the main determinisms are still needed. We propose a general approach to determine the geological structures responsible for flows, their permeability and their organization, based on field data and numerical modeling [de Dreuzy et al., 2012b]. Multi-scale synthetic networks are reconstructed from field data and simplified mechanical modeling [Davy et al., 2010]. High-performance numerical methods are developed to comply with the specificities of the geometry and physical properties of the fractured media [Pichot et al., 2010; Pichot et al., 2012]. And, based on a large Monte-Carlo sampling, we determine the key determinisms of fractured permeability and flows (Figure). We illustrate our approach on the respective influence of fracture apertures and fracture correlation patterns at large scale. We show the potential role of fracture intersections, so far overlooked between the fracture and the network scales. We also demonstrate how fracture correlations reduce the bulk fracture permeability. Using this analysis, we highlight the need for more specific in-situ characterization of fracture flow structures. Fracture modeling and characterization are necessary to meet the new requirements of a growing number of applications where fractures appear both as potential advantages to enhance permeability and as drawbacks for safety, e.g. in energy storage, stimulated geothermal energy and non-conventional gas production. References: Davy, P., et al. (2010), A likely universal model of fracture scaling and its consequence for crustal hydromechanics, Journal of Geophysical Research-Solid Earth, 115, 13. de Dreuzy, J.-R., et al. (2012a), Influence of fracture scale heterogeneity on the flow properties of three-dimensional Discrete Fracture Networks (DFN), J. Geophys. Res.-Earth Surf., 117(B11207), 21 pp. de Dreuzy, J.-R., et al. (2012b), Synthetic benchmark for modeling flow in 3D fractured media, Computers and Geosciences. Pichot, G., et al. (2010), A Mixed Hybrid Mortar Method for solving flow in Discrete Fracture Networks, Applicable Analysis, 89(10), 1729-1643. Pichot, G., et al. (2012), Flow simulation in 3D multi-scale fractured networks using non-matching meshes, SIAM Journal on Scientific Computing (SISC), 34(1). Figure: (a) Fracture network with a broad range of fracture lengths. (b) Flows (log-scale) with homogeneous fractures. (c) Flows (log-scale) with heterogeneous fractures [de Dreuzy et al., 2012a]. The impact of the fracture apertures (c) is illustrated on the organization of flows.
NASA Astrophysics Data System (ADS)
Parishani, H.; Pritchard, M. S.; Bretherton, C. S.; Wyant, M. C.; Khairoutdinov, M.; Singh, B.
2017-12-01
Biases and parameterization formulation uncertainties in the representation of boundary layer clouds remain a leading source of possible systematic error in climate projections. Here we show the first results of cloud feedback to +4K SST warming in a new experimental climate model, the ``Ultra-Parameterized (UP)'' Community Atmosphere Model, UPCAM. We have developed UPCAM as an unusually high-resolution implementation of cloud superparameterization (SP) in which a global set of cloud resolving arrays is embedded in a host global climate model. In UP, the cloud-resolving scale includes sufficient internal resolution to explicitly generate the turbulent eddies that form marine stratocumulus and trade cumulus clouds. This is computationally costly but complements other available approaches for studying low clouds and their climate interaction, by avoiding parameterization of the relevant scales. In a recent publication we have shown that UP, while not without its own complexity trade-offs, can produce encouraging improvements in low cloud climatology in multi-month simulations of the present climate and is a promising target for exascale computing (Parishani et al. 2017). Here we show results of its low cloud feedback to warming in multi-year simulations for the first time. References: Parishani, H., M. S. Pritchard, C. S. Bretherton, M. C. Wyant, and M. Khairoutdinov (2017), Toward low-cloud-permitting cloud superparameterization with explicit boundary layer turbulence, J. Adv. Model. Earth Syst., 9, doi:10.1002/2017MS000968.
Reduced Order Model Implementation in the Risk-Informed Safety Margin Characterization Toolkit
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mandelli, Diego; Smith, Curtis L.; Alfonsi, Andrea
2015-09-01
The RISMC project aims to develop new advanced simulation-based tools to perform Probabilistic Risk Analysis (PRA) for the existing fleet of U.S. nuclear power plants (NPPs). These tools numerically model not only the thermo-hydraulic behavior of the reactor primary and secondary systems but also the temporal evolution of external events and component/system ageing. Thus, this is not only a multi-physics problem but also a multi-scale problem (both spatial, µm-mm-m, and temporal, ms-s-minutes-years). As part of the RISMC PRA approach, a large number of computationally expensive simulation runs are required. An important aspect is that even though computational power is regularly growing, the overall computational cost of a RISMC analysis may not be viable for certain cases. A solution that is being evaluated is the use of reduced order modeling techniques. During FY2015, we investigated and applied reduced order modeling techniques to decrease the RISMC analysis computational cost by decreasing the number of simulation runs to perform and employing surrogate models instead of the actual simulation codes. This report focuses on the use of reduced order modeling techniques that can be applied to any RISMC analysis to generate, analyze and visualize data. In particular, we focus on surrogate models that approximate the simulation results but in a much shorter time (µs instead of hours/days). We apply reduced order and surrogate modeling techniques to several RISMC types of analyses using RAVEN and RELAP-7 and show the advantages that can be gained.
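The surrogate idea is simple to demonstrate: fit a cheap emulator on a handful of expensive code runs, then query it densely. The sketch below uses a Gaussian-process regressor as the surrogate; the RISMC toolchain uses RAVEN and RELAP-7, so the analytic function here is only a stand-in for an expensive simulation.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def expensive_simulation(x):
    """Stand-in for a RELAP-7-class code run (seconds to days in reality)."""
    return np.sin(3 * x) + 0.5 * x

# Train a surrogate on a small number of expensive runs...
X_train = np.linspace(0, 2, 12)[:, None]
y_train = expensive_simulation(X_train).ravel()
surrogate = GaussianProcessRegressor(kernel=RBF(length_scale=0.5),
                                     normalize_y=True)
surrogate.fit(X_train, y_train)

# ...then query it thousands of times at negligible cost; the predictive
# standard deviation flags regions where more simulation runs are needed.
X_query = np.linspace(0, 2, 1000)[:, None]
y_pred, y_std = surrogate.predict(X_query, return_std=True)
print(f"max surrogate uncertainty: {y_std.max():.4f}")
```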
Computational fluid dynamics modelling in cardiovascular medicine
Morris, Paul D; Narracott, Andrew; von Tengg-Kobligk, Hendrik; Silva Soto, Daniel Alejandro; Hsiao, Sarah; Lungu, Angela; Evans, Paul; Bressloff, Neil W; Lawford, Patricia V; Hose, D Rodney; Gunn, Julian P
2016-01-01
This paper reviews the methods, benefits and challenges associated with the adoption and translation of computational fluid dynamics (CFD) modelling within cardiovascular medicine. CFD, a specialist area of mathematics and a branch of fluid mechanics, is used routinely in a diverse range of safety-critical engineering systems, which increasingly is being applied to the cardiovascular system. By facilitating rapid, economical, low-risk prototyping, CFD modelling has already revolutionised research and development of devices such as stents, valve prostheses, and ventricular assist devices. Combined with cardiovascular imaging, CFD simulation enables detailed characterisation of complex physiological pressure and flow fields and the computation of metrics which cannot be directly measured, for example, wall shear stress. CFD models are now being translated into clinical tools for physicians to use across the spectrum of coronary, valvular, congenital, myocardial and peripheral vascular diseases. CFD modelling is apposite for minimally-invasive patient assessment. Patient-specific (incorporating data unique to the individual) and multi-scale (combining models of different length- and time-scales) modelling enables individualised risk prediction and virtual treatment planning. This represents a significant departure from traditional dependence upon registry-based, population-averaged data. Model integration is progressively moving towards ‘digital patient’ or ‘virtual physiological human’ representations. When combined with population-scale numerical models, these models have the potential to reduce the cost, time and risk associated with clinical trials. The adoption of CFD modelling signals a new era in cardiovascular medicine. While potentially highly beneficial, a number of academic and commercial groups are addressing the associated methodological, regulatory, education- and service-related challenges. PMID:26512019
Wavefield complexity and stealth structures: Resolution constraints by wave physics
NASA Astrophysics Data System (ADS)
Nissen-Meyer, T.; Leng, K.
2017-12-01
Imaging the Earth's interior relies on understanding how waveforms encode information from heterogeneous multi-scale structure. This relation is given by elastodynamics, but forward modeling in the context of tomography primarily serves to deliver synthetic waveforms and gradients for the inversion procedure. While this is entirely appropriate, it leaves untapped a wealth of complementary inference that can be obtained from the complexity of the wavefield. Here, we are concerned with the imprint of realistic multi-scale Earth structure on the wavefield, and the question of the inherent physical resolution limit of structures encoded in seismograms. We identify parameter and scattering regimes where structures remain invisible as a function of seismic wavelength, structural multi-scale geometry, scattering strength, and propagation path. Ultimately, this will aid in interpreting tomographic images by acknowledging the scope of "forgotten" structures, and shall offer guidance for optimising the selection of seismic data for tomography. To do so, we use our novel 3D modeling method AxiSEM3D, which tackles global wave propagation in visco-elastic, anisotropic 3D structures with undulating boundaries at unprecedented resolution and efficiency by exploiting the inherent azimuthal smoothness of wavefields via a coupled Fourier expansion-spectral-element approach. The method links computational cost to wavefield complexity and thereby lends itself well to exploring the relation between waveforms and structures. We will show various examples of multi-scale heterogeneities which appear or disappear in the waveform, and argue that the nature of the structural power spectrum plays a central role in this. We introduce the concept of wavefield learning to examine the true wavefield complexity for a complexity-dependent modeling framework and to discriminate which scattering structures can be retrieved by surface measurements. This leads to the question of physical invisibility and the tomographic resolution limit, and offers insight as to why tomographic images still show stark differences for smaller-scale heterogeneities despite progress in modeling and data resolution. Finally, we give an outlook on how we expand this modeling framework towards an inversion procedure guided by wavefield complexity.
Multi-scale modelling of rubber-like materials and soft tissues: an appraisal
Puglisi, G.
2016-01-01
We survey, in a partial way, multi-scale approaches for the modelling of rubber-like and soft tissues and compare them with classical macroscopic phenomenological models. Our aim is to show how it is possible to obtain practical mathematical models for the mechanical behaviour of these materials incorporating mesoscopic (network scale) information. Multi-scale approaches are crucial for the theoretical comprehension and prediction of the complex mechanical response of these materials. Moreover, such models are fundamental in the perspective of the design, through manipulation at the micro- and nano-scales, of new polymeric and bioinspired materials with exceptional macroscopic properties. PMID:27118927
Multi-scale gyrokinetic simulation of Alcator C-Mod tokamak discharges
DOE Office of Scientific and Technical Information (OSTI.GOV)
Howard, N. T., E-mail: nthoward@psfc.mit.edu; White, A. E.; Greenwald, M.
2014-03-15
Alcator C-Mod tokamak discharges have been studied with nonlinear gyrokinetic simulation simultaneously spanning both ion and electron spatiotemporal scales. These multi-scale simulations utilized the gyrokinetic model implemented by the GYRO code [J. Candy and R. E. Waltz, J. Comput. Phys. 186, 545 (2003)] and the approximation of reduced electron mass (μ = (m_D/m_e)^0.5 = 20.0) to qualitatively study a pair of Alcator C-Mod discharges: a low-power discharge, previously demonstrated (using realistic-mass, ion-scale simulation) to display an under-prediction of the electron heat flux, and a high-power discharge displaying agreement with both ion and electron heat flux channels [N. T. Howard et al., Nucl. Fusion 53, 123011 (2013)]. These multi-scale simulations demonstrate the importance of electron-scale turbulence in the core of conventional tokamak discharges and suggest it is a viable candidate for explaining the observed under-prediction of electron heat flux. In this paper, we investigate the coupling of turbulence at the ion (k_θ ρ_s ~ O(1.0)) and electron (k_θ ρ_e ~ O(1.0)) scales for experimental plasma conditions exhibiting both strong (high-power) and marginally stable (low-power) low-k (k_θ ρ_s < 1.0) turbulence. It is found that reduced-mass simulation of the plasma exhibiting marginally stable low-k turbulence fails to provide even qualitative insight into the turbulence present in the realistic plasma conditions. In contrast, multi-scale simulation of the plasma condition exhibiting strong turbulence provides valuable insight into the coupling of the ion and electron scales.
Scale Interactions in the Tropics from a Simple Multi-Cloud Model
NASA Astrophysics Data System (ADS)
Niu, X.; Biello, J. A.
2017-12-01
Our lack of a complete understanding of the interaction between moist convection and equatorial waves remains an impediment to the numerical simulation of large-scale organization, such as the Madden-Julian Oscillation (MJO). The aim of this project is to understand interactions across spatial scales in the tropics, using a simplified framework for scale interactions together with a simplified framework describing the basic features of moist convection. Using multiple asymptotic scales, Biello and Majda [1] derived a multi-scale model of moist tropical dynamics (IMMD [1]), which separates three regimes: the planetary-scale climatology, the synoptic-scale waves, and the planetary-scale anomalies. The scales and strength of the observed MJO place it in the regime of planetary-scale anomalies, which are themselves forced by non-linear upscale fluxes from the synoptic-scale waves. In order to close this model and determine whether it provides a self-consistent theory of the MJO, a model for diabatic heating due to moist convection must be implemented along with the IMMD. The multi-cloud parameterization is a model proposed by Khouider and Majda [2] to describe the three basic cloud types (congestus, deep and stratiform) that are most responsible for tropical diabatic heating. We implement a simplified version of the multi-cloud model that is based on results derived from large eddy simulations of convection [3]. We present this simplified multi-cloud model and show results of numerical experiments beginning with a variety of convective forcing states. Preliminary results on upscale fluxes, from synoptic scales to planetary-scale anomalies, will be presented. [1] Biello J A, Majda A J. Intraseasonal multi-scale moist dynamics of the tropical atmosphere. Communications in Mathematical Sciences, 2010, 8(2): 519-540. [2] Khouider B, Majda A J. A simple multicloud parameterization for convectively coupled tropical waves. Part I: Linear analysis. Journal of the Atmospheric Sciences, 2006, 63(4): 1308-1323. [3] Dorrestijn J, Crommelin D T, Biello J A, et al. A data-driven multi-cloud model for stochastic parametrization of deep convection. Philosophical Transactions of the Royal Society of London A: Mathematical, Physical and Engineering Sciences, 2013, 371(1991): 20120374.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Savic, Vesna; Hector, Louis G.; Ezzat, Hesham
This paper presents an overview of a four-year project focused on development of an integrated computational materials engineering (ICME) toolset for third generation advanced high-strength steels (3GAHSS). Following a brief look at ICME as an emerging discipline within the Materials Genome Initiative, technical tasks in the ICME project will be discussed. Specific aims of the individual tasks are multi-scale, microstructure-based material model development using state-of-the-art computational and experimental techniques, forming, toolset assembly, design optimization, integration and technical cost modeling. The integrated approach is initially illustrated using a 980 grade transformation induced plasticity (TRIP) steel, subject to a two-step quenching and partitioning (Q&P) heat treatment, as an example.
Nonlinear plasma wave models in 3D fluid simulations of laser-plasma interaction
NASA Astrophysics Data System (ADS)
Chapman, Thomas; Berger, Richard; Arrighi, Bill; Langer, Steve; Banks, Jeffrey; Brunner, Stephan
2017-10-01
Simulations of laser-plasma interaction (LPI) in inertial confinement fusion (ICF) conditions require multi-mm spatial scales due to the typical laser beam size, and durations of order 100 ps in order for numerical laser reflectivities to converge. To be computationally achievable, these scales necessitate a fluid-like treatment of light and plasma waves with a spatial grid size on the order of the light wavelength. Plasma waves experience many nonlinear phenomena not naturally described by a fluid treatment, such as frequency shifts induced by trapping, a nonlinear (typically suppressed) Landau damping, and mode couplings leading to instabilities that can cause the plasma wave to decay rapidly. These processes affect the onset and saturation of stimulated Raman and Brillouin scattering, and are of direct interest to the modeling and prediction of deleterious LPI in ICF. It is not currently computationally feasible to simulate these Debye length-scale phenomena in 3D across experimental scales. Analytically derived and/or numerically benchmarked models of processes occurring at scales finer than the fluid simulation grid offer a path forward. We demonstrate the impact of a range of kinetic processes on plasma reflectivity via models included in the LPI simulation code pF3D. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
Xiong, Qingang; Ramirez, Emilio; Pannala, Sreekanth; ...
2015-10-09
The impact of bubbling bed hydrodynamics on temporal variations in the exit tar yield for biomass fast pyrolysis was investigated using computational simulations of an experimental laboratory-scale reactor. A multi-fluid computational fluid dynamics model was employed to solve the differential conservation equations in the reactor, and this was combined with a multi-component, multi-step pyrolysis kinetics scheme for biomass to account for chemical reactions. The predicted mean tar yields at the reactor exit appear to match corresponding experimental observations. Parametric studies predicted that increasing the fluidization velocity should improve the mean tar yield but increase its temporal variations. Increases in the mean tar yield coincide with reducing the diameter of sand particles or increasing the initial sand bed height. However, trends in tar yield variability are more complex than the trends in mean yield. The standard deviation in tar yield reaches a maximum with changes in sand particle size; it increases with increases in initial bed height in the freely bubbling state, while reaching a maximum in the slugging state.
Micro-computed tomography pore-scale study of flow in porous media: Effect of voxel resolution
NASA Astrophysics Data System (ADS)
Shah, S. M.; Gray, F.; Crawshaw, J. P.; Boek, E. S.
2016-09-01
A fundamental understanding of flow in porous media at the pore-scale is necessary to be able to upscale average displacement processes from core to reservoir scale. The study of fluid flow in porous media at the pore-scale consists of two key procedures: imaging, the reconstruction of three-dimensional (3D) pore space images; and modelling, such as single- and two-phase flow simulations with Lattice-Boltzmann (LB) or Pore-Network (PN) modelling. Here we analyse pore-scale results to predict petrophysical properties such as porosity, single-phase permeability and multi-phase properties at different length scales. The fundamental issue is to understand the image resolution dependency of transport properties, in order to upscale the flow physics from pore to core scale. In this work, we use a high-resolution micro-computed tomography (micro-CT) scanner to image and reconstruct three-dimensional pore-scale images of five sandstones (Bentheimer, Berea, Clashach, Doddington and Stainton) and five complex carbonates (Ketton, Estaillades, Middle Eastern sample 3, Middle Eastern sample 5 and Indiana Limestone 1) at four different voxel resolutions (4.4 μm, 6.2 μm, 8.3 μm and 10.2 μm), scanning the same physical field of view. Implementing three-phase segmentation (macro-pore phase, intermediate phase and grain phase) on pore-scale images helps to understand the importance of connected macro-porosity in the fluid flow for the samples studied. We then compute the petrophysical properties for all the samples using PN and LB simulations in order to study the influence of voxel resolution on petrophysical properties. We then introduce a numerical coarsening scheme which is used to coarsen a high voxel resolution image (4.4 μm) to lower resolutions (6.2 μm, 8.3 μm and 10.2 μm) and study the impact of coarsening data on macroscopic and multi-phase properties. Numerical coarsening of high-resolution data is found to be superior to using a lower-resolution scan because it avoids the problem of partial volume effects and reduces the scaling effect by preserving the pore-space properties that influence transport. This is evident from the comparison, for both scan-based and numerically coarsened data, of several pore network properties such as the number of pores and throats, average pore and throat radius, and coordination number.
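The numerical coarsening described above amounts to block-averaging the fine voxel grid and re-segmenting it. A minimal sketch in Python; the factor-of-2 coarsening, the synthetic pore volume and the majority-vote threshold are illustrative assumptions, not the authors' exact procedure:

```python
import numpy as np

def coarsen(volume, factor):
    """Block-average a 3D voxel image by an integer factor (e.g. 2),
    mimicking numerical coarsening from 4.4 um toward coarser voxels."""
    nx, ny, nz = (s // factor for s in volume.shape)
    v = volume[:nx * factor, :ny * factor, :nz * factor]
    return v.reshape(nx, factor, ny, factor, nz, factor).mean(axis=(1, 3, 5))

rng = np.random.default_rng(0)
fine = (rng.random((120, 120, 120)) < 0.2).astype(float)  # toy pore space
coarse = coarsen(fine, 2) > 0.5        # majority vote back to pore/grain
print(fine.mean(), coarse.mean())      # porosity before and after coarsening
```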
Multi-scale habitat selection modeling: A review and outlook
Kevin McGarigal; Ho Yi Wan; Kathy A. Zeller; Brad C. Timm; Samuel A. Cushman
2016-01-01
Scale is the lens that focuses ecological relationships. Organisms select habitat at multiple hierarchical levels and at different spatial and/or temporal scales within each level. Failure to properly address scale dependence can result in incorrect inferences in multi-scale habitat selection modeling studies.
Application of Bayesian model averaging to measurements of the primordial power spectrum
NASA Astrophysics Data System (ADS)
Parkinson, David; Liddle, Andrew R.
2010-11-01
Cosmological parameter uncertainties are often stated assuming a particular model, neglecting the model uncertainty, even when Bayesian model selection is unable to identify a conclusive best model. Bayesian model averaging is a method for assessing parameter uncertainties in situations where there is also uncertainty in the underlying model. We apply model averaging to the estimation of the parameters associated with the primordial power spectra of curvature and tensor perturbations. We use CosmoNest and MultiNest to compute the model evidences and posteriors, using cosmic microwave data from WMAP, ACBAR, BOOMERanG, and CBI, plus large-scale structure data from the SDSS DR7. We find that the model-averaged 95% credible interval for the spectral index using all of the data is 0.940
Multi-objective reverse logistics model for integrated computer waste management.
Ahluwalia, Poonam Khanijo; Nema, Arvind K
2006-12-01
This study aimed to address the issues involved in the planning and design of a computer waste management system in an integrated manner. A decision-support tool is presented for selecting an optimum configuration of computer waste management facilities (segregation, storage, treatment/processing, reuse/recycle and disposal) and allocation of waste to these facilities. The model is based on an integer linear programming method with the objectives of minimizing environmental risk as well as cost. The issue of uncertainty in the estimated waste quantities from multiple sources is addressed using the Monte Carlo simulation technique. An illustrated example of computer waste management in Delhi, India is presented to demonstrate the usefulness of the proposed model and to study tradeoffs between cost and risk. The results of the example problem show that it is possible to reduce the environmental risk significantly by a marginal increase in the available cost. The proposed model can serve as a powerful tool to address the environmental problems associated with exponentially growing quantities of computer waste which are presently being managed using rudimentary methods of reuse, recovery and disposal by various small-scale vendors.
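The core of such a decision-support tool is an integer program that assigns waste flows to facilities subject to capacity limits. A minimal single-objective sketch using scipy.optimize.milp; the two sources, two facilities, costs and capacities are invented for illustration, and the paper's environmental-risk objective and Monte Carlo treatment of waste quantities are not reproduced here:

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint

# Toy instance: assign waste from 2 sources to 2 facilities at minimum cost.
# x = [x11, x12, x21, x22], where x_ij = 1 if source i ships to facility j.
cost = np.array([4.0, 6.0, 5.0, 3.0])           # transport + processing cost
assign = LinearConstraint([[1, 1, 0, 0],
                           [0, 0, 1, 1]], 1, 1)  # each source to one facility
cap = LinearConstraint([[10, 0, 8, 0],           # tonnes arriving at facility 1
                        [0, 10, 0, 8]], 0, 12)   # and facility 2; capacity 12 t
res = milp(c=cost, constraints=[assign, cap], integrality=np.ones(4))
print(res.x, res.fun)   # optimal assignment and total cost
```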
Addressing the challenges of standalone multi-core simulations in molecular dynamics
NASA Astrophysics Data System (ADS)
Ocaya, R. O.; Terblans, J. J.
2017-07-01
Computational modelling in materials science involves mathematical abstractions of force fields between particles with the aim to postulate, develop and understand materials by simulation. The aggregated pairwise interactions of the material's particles lead to a deduction of its macroscopic behaviours. For practically meaningful macroscopic scales, a large amount of data is generated, leading to vast execution times. Simulation times of hours, days or weeks for moderately sized problems are not uncommon. The reduction of simulation times, improved result accuracy and the associated software and hardware engineering challenges are the main motivations for much of the ongoing research in the computational sciences. This contribution is concerned mainly with simulations that can be done on a "standalone" computer using Message Passing Interface (MPI) parallel code running on hardware platforms with wide specifications, such as single/multi-processor, multi-core machines with minimal reconfiguration for upward scaling of computational power. The widely available, documented and standardized MPI library provides this functionality through the MPI_Comm_size(), MPI_Comm_rank() and MPI_Reduce() functions. A survey of the literature shows that relatively little is written with respect to the efficient extraction of the inherent computational power in a cluster. In this work, we discuss the main avenues available to tap into this extra power without compromising computational accuracy. We also present methods to overcome the high inertia encountered in single-node-based computational molecular dynamics. We begin by surveying the current state of the art and discuss what it takes to achieve parallelism, efficiency and enhanced computational accuracy through program threads and message passing interfaces. Several code illustrations are given. The pros and cons of writing raw code as opposed to using heuristic, third-party code are also discussed. The growing trend towards graphics processing units and virtual computing clouds for high-performance computing is also discussed. Finally, we present the comparative results of vacancy formation energy calculations using our own parallelized standalone code called Verlet-Stormer velocity (VSV) operating on 30,000 copper atoms. The code is based on the Sutton-Chen implementation of the Finnis-Sinclair pairwise embedded atom potential. A link to the code is also given.
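The three MPI calls named above are enough to parallelize an energy summation over atoms. A minimal sketch using their Python bindings in mpi4py; the per-rank partial sum is a stand-in for the pairwise-potential loop:

```python
# Run with: mpiexec -n 4 python reduce_demo.py
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()          # counterpart of MPI_Comm_rank()
size = comm.Get_size()          # counterpart of MPI_Comm_size()

# Each rank would sum pairwise energies over its own slice of atoms;
# here a dummy partial sum stands in for that loop.
local = np.array([float(rank + 1)])
total = np.zeros(1)
comm.Reduce(local, total, op=MPI.SUM, root=0)   # counterpart of MPI_Reduce()
if rank == 0:
    print("global energy sum:", total[0])
```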
Multiscale modeling and simulation of brain blood flow
NASA Astrophysics Data System (ADS)
Perdikaris, Paris; Grinberg, Leopold; Karniadakis, George Em
2016-02-01
The aim of this work is to present an overview of recent advances in multi-scale modeling of brain blood flow. In particular, we present some approaches that enable the in silico study of multi-scale and multi-physics phenomena in the cerebral vasculature. We discuss the formulation of continuum and atomistic modeling approaches, present a consistent framework for their concurrent coupling, and list some of the challenges that one needs to overcome in achieving a seamless and scalable integration of heterogeneous numerical solvers. The effectiveness of the proposed framework is demonstrated in a realistic case involving modeling the thrombus formation process taking place on the wall of a patient-specific cerebral aneurysm. This highlights the ability of multi-scale algorithms to resolve important biophysical processes that span several spatial and temporal scales, potentially yielding new insight into the key aspects of brain blood flow in health and disease. Finally, we discuss open questions in multi-scale modeling and emerging topics of future research.
Fast Decentralized Averaging via Multi-scale Gossip
NASA Astrophysics Data System (ADS)
Tsianos, Konstantinos I.; Rabbat, Michael G.
We are interested in the problem of computing the average consensus in a distributed fashion on random geometric graphs. We describe a new algorithm called Multi-scale Gossip which employs a hierarchical decomposition of the graph to partition the computation into tractable sub-problems. Using only pairwise messages of fixed size that travel at most O(n^(1/3)) hops, our algorithm is robust and has communication cost of O(n log log n · log(1/ε)) transmissions, which is order-optimal up to the logarithmic factor in n. Simulated experiments verify the good expected performance on graphs of many thousands of nodes.
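The primitive underneath such schemes is plain pairwise gossip: repeatedly pick an edge and replace both endpoint values by their average, which converges to the global mean. A toy simulation; the ring topology and iteration count are arbitrary, and Multi-scale Gossip runs this primitive hierarchically on subgraphs rather than flat as here:

```python
import numpy as np

def pairwise_gossip(x, edges, iters, seed=0):
    """Randomly pick an edge and average its two endpoint values;
    every node converges toward the global mean."""
    rng = np.random.default_rng(seed)
    x = x.copy()
    for _ in range(iters):
        i, j = edges[rng.integers(len(edges))]
        x[i] = x[j] = 0.5 * (x[i] + x[j])
    return x

edges = [(i, (i + 1) % 8) for i in range(8)]   # ring of 8 nodes
x0 = np.arange(8, dtype=float)
print(x0.mean(), pairwise_gossip(x0, edges, 2000))
```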
Year 2 Report: Protein Function Prediction Platform
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhou, C E
2012-04-27
Upon completion of our second year of development in a 3-year development cycle, we have completed a prototype protein structure-function annotation and function prediction system: the Protein Function Prediction (PFP) platform (v.0.5). We have met our milestones for Years 1 and 2 and are positioned to continue development in completion of our original statement of work, or a reasonable modification thereof, in service to DTRA Programs involved in diagnostics and medical countermeasures research and development. The PFP platform is a multi-scale computational modeling system for protein structure-function annotation and function prediction. As of this writing, PFP is the only existing fully automated, high-throughput, multi-scale modeling, whole-proteome annotation platform, and represents a significant advance in the field of genome annotation (Fig. 1). PFP modules perform protein functional annotations at the sequence, systems biology, protein structure, and atomistic levels of biological complexity (Fig. 2). Because these approaches provide orthogonal means of characterizing proteins and suggesting protein function, PFP processing maximizes the protein functional information that can currently be gained by computational means. Comprehensive annotation of pathogen genomes is essential for bio-defense applications in pathogen characterization, threat assessment, and medical countermeasure design and development in that it can short-cut the time and effort required to select and characterize protein biomarkers.
Asynchronously Coupled Models of Ice Loss from Airless Planetary Bodies
NASA Astrophysics Data System (ADS)
Schorghofer, N.
2016-12-01
Ice is found near the surface of dwarf planet Ceres, in some main belt asteroids, and perhaps in NEOs that will be explored or even mined in the future. The simple but important question of how fast ice is lost from airless bodies can present computational challenges. The thermal cycle on the surface repeats on much shorter time-scales than those over which ice retreats; one process acts on the time-scale of hours, the other over billions of years. This multi-scale situation is addressed with asynchronous coupling, where models with different time steps are woven together. The sharp contrast at the retreating ice table is handled with explicit interface tracking. For Ceres, which is covered with a thermally insulating dust mantle, desiccation rates are orders of magnitude slower than had been calculated with simpler models. More model challenges remain: the role of impact devolatilization and the time-scale for complete desiccation of an asteroid. I will also share my experience with code distribution using GitHub and Zenodo.
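Asynchronous coupling of this kind lets an outer loop advance the slow interface with large time steps, while the fast thermal cycle enters only through its averaged effect. A toy sketch of the idea; the retreat law, step sizes and constants are invented for illustration and are not the author's model:

```python
import numpy as np

def mean_sublimation_rate(depth):
    """Stand-in for averaging a resolved diurnal thermal cycle: ice buried
    deeper under an insulating mantle is lost more slowly (invented law)."""
    return 1e-5 * np.exp(-depth / 5.0)   # metres of retreat per year

depth, t, dt_slow = 0.0, 0.0, 1e4        # metres, years, outer step = 10^4 yr
while t < 1e6:
    depth += mean_sublimation_rate(depth) * dt_slow  # explicit interface tracking
    t += dt_slow
print(f"ice table depth after 1 Myr: {depth:.2f} m")
```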
Simeonov, Plamen L
2017-12-01
The goal of this paper is to advance an extensible theory of living systems using an approach to biomathematics and biocomputation that suitably addresses self-organized, self-referential and anticipatory systems with multi-temporal multi-agents. Our first step is to provide foundations for modelling of emergent and evolving dynamic multi-level organic complexes and their sustentative processes in artificial and natural life systems. Main applications are in life sciences, medicine, ecology and astrobiology, as well as robotics, industrial automation, man-machine interface and creative design. Since 2011 over 100 scientists from a number of disciplines have been exploring a substantial set of theoretical frameworks for a comprehensive theory of life known as Integral Biomathics. That effort identified the need for a robust core model of organisms as dynamic wholes, using advanced and adequately computable mathematics. The work described here for that core combines the advantages of a situation and context aware multivalent computational logic for active self-organizing networks, Wandering Logic Intelligence (WLI), and a multi-scale dynamic category theory, Memory Evolutive Systems (MES), hence WLIMES. This is presented to the modeller via a formal augmented reality language as a first step towards practical modelling and simulation of multi-level living systems. Initial work focuses on the design and implementation of this visual language and calculus (VLC) and its graphical user interface. The results will be integrated within the current methodology and practices of theoretical biology and (personalized) medicine to deepen and to enhance the holistic understanding of life. Copyright © 2017 Elsevier B.V. All rights reserved.
Adaptive multi-GPU Exchange Monte Carlo for the 3D Random Field Ising Model
NASA Astrophysics Data System (ADS)
Navarro, Cristóbal A.; Huang, Wei; Deng, Youjin
2016-08-01
This work presents an adaptive multi-GPU Exchange Monte Carlo approach for the simulation of the 3D Random Field Ising Model (RFIM). The design is based on a two-level parallelization. The first level, spin-level parallelism, maps the parallel computation as optimal 3D thread-blocks that simulate blocks of spins in shared memory with minimal halo surface, assuming a constant block volume. The second level, replica-level parallelism, uses multi-GPU computation to handle the simulation of an ensemble of replicas. CUDA's concurrent kernel execution feature is used in order to fill the occupancy of each GPU with many replicas, providing a performance boost that is most noticeable at the smallest values of L. In addition to the two-level parallel design, the work proposes an adaptive multi-GPU approach that dynamically builds a proper temperature set free of exchange bottlenecks. The strategy is based on mid-point insertions at the temperature gaps where the exchange rate is most compromised. The extra work generated by the insertions is balanced across the GPUs independently of where the mid-point insertions were performed. Performance results show that spin-level performance is approximately two orders of magnitude faster than a single-core CPU version and one order of magnitude faster than a parallel multi-core CPU version running on 16 cores. Multi-GPU performance scales well in a weak scaling setting, reaching up to 99% efficiency as long as the number of GPUs and L increase together. The combination of the adaptive approach with the parallel multi-GPU design has extended our simulations to sizes of L = 32, 64 on a workstation with two GPUs. Sizes beyond L = 64 can eventually be studied using larger multi-GPU systems.
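The exchange (parallel tempering) move at the heart of this method swaps neighbouring replicas with Metropolis probability min(1, exp((β_k − β_{k+1})(E_k − E_{k+1}))), and the measured acceptance rates are what drive the mid-point insertions. A minimal sketch; the toy energy draws stand in for equilibrated replicas and the temperature grid is arbitrary:

```python
import numpy as np

def try_swap(E, beta, k, rng):
    """Metropolis exchange of replicas k and k+1 given their energies."""
    d = (beta[k] - beta[k + 1]) * (E[k] - E[k + 1])
    if rng.random() < np.exp(min(0.0, d)):
        E[k], E[k + 1] = E[k + 1], E[k]
        return 1
    return 0

rng = np.random.default_rng(1)
beta = np.linspace(0.2, 1.0, 8)          # inverse temperatures
acc = np.zeros(len(beta) - 1)
for sweep in range(20000):
    E = rng.normal(10.0 / beta, 1.0)     # toy energies of equilibrated replicas
    for k in range(len(beta) - 1):
        acc[k] += try_swap(E, beta, k, rng)
print(acc / 20000)   # gaps with low rates are candidates for mid-point insertion
```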
Evaluation of Drogue Parachute Damping Effects Utilizing the Apollo Legacy Parachute Model
NASA Technical Reports Server (NTRS)
Currin, Kelly M.; Gamble, Joe D.; Matz, Daniel A.; Bretz, David R.
2011-01-01
Drogue parachute damping is required to damp the Orion Multi Purpose Crew Vehicle (MPCV) crew module (CM) oscillations prior to deployment of the main parachutes. During the Apollo program, drogue parachute damping was modeled on the premise that the drogue parachute force vector aligns with the resultant velocity of the parachute attach point on the CM. Equivalent Cm(sub q) and Cm(sub alpha) equations for drogue parachute damping resulting from the Apollo legacy parachute damping model premise have recently been developed. The MPCV computer simulations ANTARES and Osiris have implemented high-fidelity two-body parachute damping models. However, high-fidelity model-based damping motion predictions do not match the damping observed during wind tunnel and full-scale free-flight oscillatory motion. This paper will present the methodology for comparing and contrasting the Apollo legacy parachute damping model with full-scale free-flight oscillatory motion. The analysis shows agreement between the Apollo legacy parachute damping model and full-scale free-flight oscillatory motion.
Analysis of Gas-Particle Flows through Multi-Scale Simulations
NASA Astrophysics Data System (ADS)
Gu, Yile
Multi-scale structures are inherent in gas-solid flows, which render modeling efforts challenging. On one hand, detailed simulations, in which the fine structures are resolved and particle properties can be directly specified, can account for complex flow behaviors, but they are too computationally expensive to apply to larger systems. On the other hand, coarse-grained simulations demand much less computation, but they necessitate constitutive models which are often not readily available for given particle properties. The present study focuses on addressing this issue, as it seeks to provide a general framework through which one can obtain the required constitutive models from detailed simulations. To demonstrate the viability of this general framework, in which closures can be proposed for different particle properties, we focus on the van der Waals force of interaction between particles. We start with Computational Fluid Dynamics (CFD) - Discrete Element Method (DEM) simulations, where the fine structures are resolved and the van der Waals force between particles can be directly specified, and obtain closures for stress and drag that are required for coarse-grained simulations. Specifically, we develop a new cohesion model that appropriately accounts for the van der Waals force between particles to be used in CFD-DEM simulations. We then validate this cohesion model and the CFD-DEM approach by showing that it can qualitatively capture experimental results in which the addition of small particles to gas fluidization reduces bubble sizes. Based on the DEM and CFD-DEM simulation results, we propose stress models that account for the van der Waals force between particles. Finally, we apply machine learning, specifically neural networks, to obtain a drag model that captures the effects of fine structures and inter-particle cohesion. We show that this novel approach using neural networks, which can readily be applied to closures other than the drag considered here, can take advantage of the large amount of data generated from simulations, and therefore offers superior modeling performance over traditional approaches.
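A neural-network drag closure of the kind described is, in essence, a regression from filtered flow quantities to a drag correction. A toy sketch with scikit-learn; the inputs, the synthetic target function and the network size are invented stand-ins for the simulation-generated training data:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Synthetic training set: drag correction as a function of filtered voidage
# and a cohesion number (invented closure, not simulation output).
rng = np.random.default_rng(0)
X = rng.uniform([0.4, 0.0], [1.0, 2.0], size=(2000, 2))
y = (1 - X[:, 0]) ** 2 / X[:, 0] * np.exp(-0.5 * X[:, 1])

net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000,
                   random_state=0).fit(X, y)
print(net.predict([[0.6, 1.0]]))   # drag correction at a query state
```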
The Collaborative Seismic Earth Model Project
NASA Astrophysics Data System (ADS)
Fichtner, A.; van Herwaarden, D. P.; Afanasiev, M.
2017-12-01
We present the first generation of the Collaborative Seismic Earth Model (CSEM). This effort is intended to address grand challenges in tomography that currently inhibit imaging the Earth's interior across the seismically accessible scales: [1] For decades to come, computational resources will remain insufficient for the exploitation of the full observable seismic bandwidth. [2] With the manpower of individual research groups, only small fractions of available waveform data can be incorporated into seismic tomographies. [3] The limited incorporation of prior knowledge on 3D structure leads to slow progress and inefficient use of resources. The CSEM is a multi-scale model of global 3D Earth structure that evolves continuously through successive regional refinements. Taking the current state of the CSEM as the initial model, these refinements are contributed by external collaborators and used to advance the CSEM to the next state. This mode of operation allows the CSEM [1] to harness the distributed manpower and computing power of the community, [2] to make consistent use of prior knowledge, and [3] to combine the different tomographic techniques needed to cover the seismic data bandwidth. Furthermore, the CSEM has the potential to serve as a unified and accessible representation of tomographic Earth models. Generation 1 comprises around 15 regional tomographic refinements, computed with full-waveform inversion. These include continental-scale mantle models of North America, Australasia, Europe and the South Atlantic, as well as detailed regional models of the crust beneath the Iberian Peninsula and western Turkey. A global-scale full-waveform inversion ensures that regional refinements are consistent with whole-Earth structure. This first generation will serve as the basis for further automation and methodological improvements concerning validation and uncertainty quantification.
Kassab, Ghassan S.; An, Gary; Sander, Edward A.; Miga, Michael; Guccione, Julius M.; Ji, Songbai; Vodovotz, Yoram
2016-01-01
In this era of tremendous technological capabilities and increased focus on improving clinical outcomes, decreasing costs, and increasing precision, there is a need for a more quantitative approach to the field of surgery. Multiscale computational modeling has the potential to bridge the gap to the emerging paradigms of Precision Medicine and Translational Systems Biology, in which quantitative metrics and data guide patient care through improved stratification, diagnosis, and therapy. Achievements by multiple groups have demonstrated the potential for 1) multiscale computational modeling, at a biological level, of diseases treated with surgery and the surgical procedure process at the level of the individual and the population; along with 2) patient-specific, computationally-enabled surgical planning, delivery, and guidance and robotically-augmented manipulation. In this perspective article, we discuss these concepts, and cite emerging examples from the fields of trauma, wound healing, and cardiac surgery. PMID:27015816
Initial conditions and modeling for simulations of shock driven turbulent material mixing
Grinstein, Fernando F.
2016-11-17
Here, we focus on the simulation of shock-driven material mixing driven by flow instabilities and initial conditions (IC). Beyond complex multi-scale resolution issues of shocks and variable density turbulence, we must address the equally difficult problem of predicting flow transition promoted by energy deposited at the material interfacial layer during the shock interface interactions. Transition involves unsteady large-scale coherent-structure dynamics capturable by a large eddy simulation (LES) strategy, but not by an unsteady Reynolds-Averaged Navier–Stokes (URANS) approach based on developed equilibrium turbulence assumptions and single-point-closure modeling. On the engineering end of computations, such URANS approaches, with reduced 1D/2D dimensionality and coarser grids, tend to be preferred for faster turnaround in full-scale configurations.
Multi-scale signed envelope inversion
NASA Astrophysics Data System (ADS)
Chen, Guo-Xin; Wu, Ru-Shan; Wang, Yu-Qing; Chen, Sheng-Chang
2018-06-01
Envelope inversion based on the modulation signal model was proposed to reconstruct large-scale structures of underground media. To overcome the shortcomings of conventional envelope inversion, multi-scale envelope inversion was proposed, using a new envelope Fréchet derivative and a multi-scale inversion strategy to invert strong-contrast models. In multi-scale envelope inversion, amplitude demodulation is used to extract the low-frequency information from envelope data. However, using amplitude demodulation alone discards the polarity information of the wavefield, increasing the possibility that the inversion yields multiple solutions. In this paper we propose a new demodulation method that retains both the amplitude and the polarity information of the envelope data. We then introduce this demodulation method into multi-scale envelope inversion and propose a new misfit functional: multi-scale signed envelope inversion. In the numerical tests, we applied the new inversion method to a salt layer model and the SEG/EAGE 2-D Salt model using a low-cut source (frequency components below 4 Hz were truncated). The results demonstrate the effectiveness of the method.
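One plausible construction of such a signed envelope re-signs the Hilbert-transform envelope with the instantaneous polarity of the trace. A sketch of the idea only; the paper defines its own demodulation operator and Fréchet derivative:

```python
import numpy as np
from scipy.signal import hilbert

def signed_envelope(u):
    """Envelope |analytic signal| re-signed by the instantaneous trace
    polarity, so low-frequency envelope content keeps sign information."""
    return np.sign(u) * np.abs(hilbert(u))

t = np.linspace(0.0, 1.0, 2000)
trace = np.sin(2 * np.pi * 40 * t) * np.exp(-((t - 0.5) ** 2) / 0.005)
env = signed_envelope(trace)
print(env.min(), env.max())   # negative lobes survive, unlike plain |hilbert|
```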
Simultaneous Multi-Scale Diffusion Estimation and Tractography Guided by Entropy Spectrum Pathways
Galinsky, Vitaly L.; Frank, Lawrence R.
2015-01-01
We have developed a method for the simultaneous estimation of local diffusion and global fiber tracts based upon the information entropy flow that computes the maximum entropy trajectories between locations and depends upon the global structure of the multi-dimensional and multi-modal diffusion field. Computation of the entropy spectrum pathways requires only solving a simple eigenvector problem for the probability distribution, for which efficient numerical routines exist, and a straightforward integration of the probability conservation through ray tracing of the convective modes, guided by the global structure of the entropy spectrum coupled with small-scale local diffusion. The intervoxel diffusion is sampled by multi-b-shell, multi-q-angle DWI data expanded in spherical waves. This novel approach to fiber tracking incorporates global information about multiple fiber crossings in every individual voxel and ranks it in a scientifically rigorous way. This method has potential significance for a wide range of applications, including studies of brain connectivity. PMID:25532167
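The eigenvector problem mentioned above is a standard dominant-eigenvector computation for a non-negative coupling matrix, for which power iteration suffices. A minimal sketch; the 3x3 matrix is an arbitrary illustration, not a diffusion coupling from the paper:

```python
import numpy as np

def dominant_eigvec(P, tol=1e-12, max_iter=10000):
    """Power iteration for the leading eigenvector of a non-negative matrix,
    the computational core of entropy-spectrum-pathway ranking."""
    v = np.full(P.shape[0], 1.0 / P.shape[0])
    for _ in range(max_iter):
        w = P @ v
        w /= np.linalg.norm(w)
        if np.linalg.norm(w - v) < tol:
            break
        v = w
    return v

P = np.array([[0.6, 0.3, 0.1],
              [0.2, 0.5, 0.3],
              [0.1, 0.2, 0.7]])
print(dominant_eigvec(P))
```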
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhu, Lin; Dai, Zhenxue; Gong, Huili
Understanding the heterogeneity arising from the complex architecture of sedimentary sequences in alluvial fans is challenging. This study develops a statistical inverse framework in a multi-zone transition probability approach for characterizing the heterogeneity in alluvial fans. An analytical solution of the transition probability matrix is used to define the statistical relationships among different hydrofacies and their mean lengths, integral scales, and volumetric proportions. A statistical inversion is conducted to identify the multi-zone transition probability models and estimate the optimal statistical parameters using the modified Gauss–Newton–Levenberg–Marquardt method. The Jacobian matrix is computed by the sensitivity equation method, which results in an accurate inverse solution with quantification of parameter uncertainty. We use the Chaobai River alluvial fan in the Beijing Plain, China, as an example for elucidating the methodology of alluvial fan characterization. The alluvial fan is divided into three sediment zones. In each zone, the explicit mathematical formulations of the transition probability models are constructed with different optimized integral scales and volumetric proportions. The hydrofacies distributions in the three zones are simulated sequentially by the multi-zone transition probability-based indicator simulations. Finally, the result of this study provides the heterogeneous structure of the alluvial fan for further study of flow and transport simulations.
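In the one-dimensional Markov-chain form underlying such approaches, the transition probability matrix over a lag h is the matrix exponential T(h) = expm(R h) of a rate matrix R assembled from mean lengths and volumetric proportions. A minimal sketch under that textbook closure; the three-facies lengths and proportions are invented, and the paper's multi-zone models and inversion are not reproduced here:

```python
import numpy as np
from scipy.linalg import expm

L = np.array([8.0, 5.0, 3.0])   # mean lengths of 3 hydrofacies (m)
p = np.array([0.5, 0.3, 0.2])   # volumetric proportions

# Rate matrix: diagonal from mean lengths, off-diagonals from the
# proportion-weighted closure; each row sums to zero.
R = np.zeros((3, 3))
for j in range(3):
    R[j, j] = -1.0 / L[j]
    for k in range(3):
        if k != j:
            R[j, k] = (1.0 / L[j]) * p[k] / (1.0 - p[j])

def transiogram(h):
    """Facies-to-facies transition probabilities over lag distance h."""
    return expm(R * h)

print(transiogram(2.0))
```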
NASA Astrophysics Data System (ADS)
Rababaah, Haroun; Shirkhodaie, Amir
2009-04-01
The rapidly advancing hardware technology, smart sensors and sensor networks are advancing environment sensing. One major potential of this technology is Large-Scale Surveillance Systems (LS3) for homeland security, battlefield intelligence, facility guarding and other civilian applications. The efficient and effective deployment of LS3 requires addressing a number of aspects impacting the scalability of such systems. The scalability factors are related to: computation and memory utilization efficiency; communication bandwidth utilization; network topology (e.g., centralized, ad-hoc, hierarchical or hybrid); network communication protocol and data routing schemes; and local and global data/information fusion schemes for situational awareness. Although many models have been proposed to address one aspect or another of these issues, few have addressed the need for a multi-modality multi-agent data/information fusion that has characteristics satisfying the requirements of current and future intelligent sensors and sensor networks. In this paper, we present a novel scalable fusion engine for multi-modality multi-agent information fusion for LS3. The new fusion engine is based on a concept we call Energy Logic. Experimental results of this work, as compared to a fuzzy logic model, strongly supported the validity of the new model and inspired future directions for different levels of fusion and different applications.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kreutz, Thomas G; Ogden, Joan M
2000-07-01
In the final report, we present results from a technical and economic assessment of residential scale PEM fuel cell power systems. The objectives of our study are to conceptually design an inexpensive, small-scale PEMFC-based stationary power system that converts natural gas to both electricity and heat, and then to analyze the prospective performance and economics of various system configurations. We developed computer models for residential scale PEMFC cogeneration systems to compare various system designs (e.g., steam reforming vs. partial oxidation, compressed vs. atmospheric pressure, etc.) and determine the most technically and economically attractive system configurations at various scales (e.g., single family, residential, multi-dwelling, neighborhood).
SmallTool - a toolkit for realizing shared virtual environments on the Internet
NASA Astrophysics Data System (ADS)
Broll, Wolfgang
1998-09-01
With increasing graphics capabilities of computers and higher network communication speed, networked virtual environments have become available to a large number of people. While the virtual reality modelling language (VRML) provides users with the ability to exchange 3D data, there is still a lack of appropriate support to realize large-scale multi-user applications on the Internet. In this paper we will present SmallTool, a toolkit to support shared virtual environments on the Internet. The toolkit consists of a VRML-based parsing and rendering library, a device library, and a network library. This paper will focus on the networking architecture, provided by the network library - the distributed worlds transfer and communication protocol (DWTP). DWTP provides an application-independent network architecture to support large-scale multi-user environments on the Internet.
Interactive, graphical processing unit-based evaluation of evacuation scenarios at the state scale
DOE Office of Scientific and Technical Information (OSTI.GOV)
Perumalla, Kalyan S; Aaby, Brandon G; Yoginath, Srikanth B
2011-01-01
In large-scale scenarios, transportation modeling and simulation is severely constrained by simulation time. For example, few real-time simulators scale to evacuation traffic scenarios at the level of an entire state, such as Louisiana (approximately 1 million links) or Florida (2.5 million links). New simulation approaches are needed to overcome severe computational demands of conventional (microscopic or mesoscopic) modeling techniques. Here, a new modeling and execution methodology is explored that holds the potential to provide a tradeoff among the level of behavioral detail, the scale of the transportation network, and real-time execution capabilities. A novel, field-based modeling technique and its implementation on graphical processing units are presented. Although additional research with input from domain experts is needed for refining and validating the models, the techniques reported here afford interactive experience at very large scales of multi-million road segments. Illustrative experiments on a few state-scale networks are described based on an implementation of this approach in a software system called GARFIELD. Current modeling capabilities and implementation limitations are described, along with possible use cases and future research.
Integrating multi-scale data to create a virtual physiological mouse heart.
Land, Sander; Niederer, Steven A; Louch, William E; Sejersted, Ole M; Smith, Nicolas P
2013-04-06
While the virtual physiological human (VPH) project has made great advances in human modelling, many of the tools and insights developed as part of this initiative are also applicable for facilitating mechanistic understanding of the physiology of a range of other species. This process, in turn, has the potential to provide human relevant insights via a different scientific path. Specifically, the increasing use of mice in experimental research, not yet fully complemented by a similar increase in computational modelling, is currently missing an important opportunity for using and interpreting this growing body of experimental data to improve our understanding of cardiac function. This overview describes our work to address this issue by creating a virtual physiological mouse model of the heart. We describe the similarities between human- and mouse-focused modelling, including the reuse of VPH tools, and the development of methods for investigating parameter sensitivity that are applicable across species. We show how previous results using this approach have already provided important biological insights, and how these can also be used to advance VPH heart models. Finally, we show an example application of this approach to test competing multi-scale hypotheses by investigating variations in length-dependent properties of cardiac muscle.
Wacker, Soren; Noskov, Sergei Yu; Perissinotti, Laura L
2017-01-01
The rapid delayed rectifier current IKr is one of the major K+ currents involved in repolarization of the human cardiac action potential. Various inherited or drug-induced forms of the long QT syndrome (LQTS) in humans are linked to functional and structural modifications in the IKr-conducting channels. IKr is carried by the potassium channel Kv11.1, encoded by the gene KCNH2 (commonly referred to as human ether-a-go-go-related gene or hERG) [1, 2]. The first necessary step for predicting emergent drug effects on the heart is determining and modeling the binding thermodynamics and kinetics of primary and major off-target drug interactions with subcellular targets. The bulk of drugs that target hERG channels are known to have complex interactions at the atomic scale. Accordingly, one of the goals of this review is to provide a comprehensive guide to the universe of computational models aiming to refine our understanding of structure-function relations in Kv11.1 and its isoforms. Special emphasis is placed on the mapping of drug binding sites and tentative mechanisms of channel inhibition and activation by drugs. An overview of recent structural models and of the mapping of binding sites for blockers and activators of the IKr current is presented, along with a discussion of agreements and discrepancies among different models. There is an apparent reciprocity or feedback loop between drug binding and the action potential of the cardiac myocytes. Thus one has to connect drug binding to a particular receptor so that its functional consequences impact the action potential duration. The natural pathway is to develop multi-scale models that connect the receptor and cellular scales. The potential for such multi-scale model development is discussed through the lens of common gating models. Accordingly, the second part of this review covers the ongoing development of kinetic models of gating transitions and cardiac ion currents carried by hERG channels with and without drug bound. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.
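Kinetic gating models of the kind reviewed here are systems of linear ODEs over channel states. A minimal three-state sketch, C <-> O <-> I, with hypothetical rate constants; real hERG models use more states and voltage-dependent rates, and drug block would add a further bound state:

```python
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, y, a=2.0, b=0.5, c=1.0, d=0.3):
    """Closed <-> Open <-> Inactivated scheme with made-up rates a..d;
    total occupancy C + O + I is conserved by construction."""
    C, O, I = y
    return [b * O - a * C,
            a * C + d * I - (b + c) * O,
            c * O - d * I]

sol = solve_ivp(rhs, (0.0, 10.0), [1.0, 0.0, 0.0])
print(sol.y[:, -1])   # near steady-state occupancies of C, O, I
```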
Adaptive Numerical Algorithms in Space Weather Modeling
NASA Technical Reports Server (NTRS)
Toth, Gabor; vanderHolst, Bart; Sokolov, Igor V.; DeZeeuw, Darren; Gombosi, Tamas I.; Fang, Fang; Manchester, Ward B.; Meng, Xing; Nakib, Dalal; Powell, Kenneth G.;
2010-01-01
Space weather describes the various processes in the Sun-Earth system that present danger to human health and technology. The goal of space weather forecasting is to provide an opportunity to mitigate these negative effects. Physics-based space weather modeling is characterized by disparate temporal and spatial scales as well as by different physics in different domains. A multi-physics system can be modeled by a software framework comprising several components. Each component corresponds to a physics domain, and each component is represented by one or more numerical models. The publicly available Space Weather Modeling Framework (SWMF) can execute and couple together several components distributed over a parallel machine in a flexible and efficient manner. The framework also allows resolving disparate spatial and temporal scales with independent spatial and temporal discretizations in the various models. Several of the computationally most expensive domains of the framework are modeled by the Block-Adaptive Tree Solar wind Roe Upwind Scheme (BATS-R-US) code that can solve various forms of the magnetohydrodynamics (MHD) equations, including Hall, semi-relativistic, multi-species and multi-fluid MHD, anisotropic pressure, radiative transport and heat conduction. Modeling disparate scales within BATS-R-US is achieved by a block-adaptive mesh both in Cartesian and generalized coordinates. Most recently we have created a new core for BATS-R-US: the Block-Adaptive Tree Library (BATL) that provides a general toolkit for creating, load balancing and message passing in a 1, 2 or 3 dimensional block-adaptive grid. We describe the algorithms of BATL and demonstrate its efficiency and scaling properties for various problems. BATS-R-US uses several time-integration schemes to address multiple time-scales: explicit time stepping with fixed or local time steps, partially steady-state evolution, point-implicit, semi-implicit, explicit/implicit, and fully implicit numerical schemes. Depending on the application, we find that different time stepping methods are optimal. Several of the time integration schemes exploit the block-based granularity of the grid structure. The framework and the adaptive algorithms enable physics based space weather modeling and even forecasting.
Wang, Yan Jason; Nguyen, Monica T; Steffens, Jonathan T; Tong, Zheming; Wang, Yungang; Hopke, Philip K; Zhang, K Max
2013-01-15
A new methodology, referred to as the multi-scale structure, integrates "tailpipe-to-road" (i.e., on-road domain) and "road-to-ambient" (i.e., near-road domain) simulations to elucidate the environmental impacts of particulate emissions from traffic sources. The multi-scale structure is implemented in the CTAG model to 1) generate process-based on-road emission rates of ultrafine particles (UFPs) by explicitly simulating the effects of exhaust properties, traffic conditions, and meteorological conditions and 2) characterize the impacts of traffic-related emissions on micro-environmental air quality near a highway intersection in Rochester, NY. The performance of CTAG, evaluated against the field measurements, shows adequate agreement in capturing the dispersion of carbon monoxide (CO) and the number concentrations of UFPs in the near-road micro-environment. As a proof-of-concept case study, we also apply CTAG to separate the relative impacts of the shutdown of a large coal-fired power plant (CFPP) and the adoption of ultra-low-sulfur diesel (ULSD) on UFP concentrations in the intersection micro-environment. Although CTAG is still computationally expensive compared to the widely-used parameterized dispersion models, it has the potential to advance our capability to predict the impacts of UFP emissions and spatial/temporal variations of air pollutants in complex environments. Furthermore, for the on-road simulations, CTAG can serve as a process-based emission model; combining the on-road and near-road simulations, CTAG becomes a "plume-in-grid" model for mobile emissions. The processed emission profiles can potentially improve regional air quality and climate predictions accordingly. Copyright © 2012 Elsevier B.V. All rights reserved.
Empirical Determination of Competence Areas to Computer Science Education
ERIC Educational Resources Information Center
Zendler, Andreas; Klaudt, Dieter; Seitz, Cornelia
2014-01-01
The authors discuss empirically determined competence areas to K-12 computer science education, emphasizing the cognitive level of competence. The results of a questionnaire with 120 professors of computer science serve as a database. By using multi-dimensional scaling and cluster analysis, four competence areas to computer science education…
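The analysis pipeline named in this abstract, multi-dimensional scaling followed by cluster analysis, is straightforward to reproduce. A toy sketch with scikit-learn; the random dissimilarity matrix stands in for the real questionnaire data, and the choice of four clusters mirrors the four competence areas reported:

```python
import numpy as np
from sklearn.manifold import MDS
from sklearn.cluster import KMeans

# Synthetic stand-in for item-by-item dissimilarities from the questionnaire.
rng = np.random.default_rng(0)
D = rng.random((20, 20))
D = (D + D.T) / 2
np.fill_diagonal(D, 0)

coords = MDS(n_components=2, dissimilarity='precomputed',
             random_state=0).fit_transform(D)          # embed items in 2D
labels = KMeans(n_clusters=4, n_init=10,
                random_state=0).fit_predict(coords)    # group into 4 areas
print(labels)
```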
Mei, Suyu
2012-10-07
Recent years have witnessed much progress in computational modeling for protein subcellular localization. However, there are far fewer computational models for predicting plant protein subcellular multi-localization. In this paper, we propose a multi-label multi-kernel transfer learning model for predicting multiple subcellular locations of plant proteins (MLMK-TLM). The method proposes a multi-label confusion matrix and adapts one-against-all multi-class probabilistic outputs to the multi-label learning scenario, based on which we further extend our published work MK-TLM (multi-kernel transfer learning based on Chou's PseAAC formulation for protein submitochondria localization) to plant protein subcellular multi-localization. With proper homolog knowledge transfer, MLMK-TLM is applicable to novel plant protein subcellular localization in the multi-label learning scenario. The experiments on a plant protein benchmark dataset show that MLMK-TLM outperforms the baseline model. Unlike the existing models, MLMK-TLM also reports its misleading tendency, which is important for a comprehensive survey of a model's multi-labeling performance. Copyright © 2012 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Peng, Haijun; Wang, Wei
2016-10-01
An adaptive surrogate model-based multi-objective optimization strategy that combines the benefits of invariant manifolds and low-thrust control is proposed in this paper for developing, at low computational cost, transfer trajectories between libration orbits around the L1 and L2 libration points in the Sun-Earth system. A new structure for the multi-objective transfer trajectory optimization model is established that divides the transfer trajectory into several segments and specifies whether invariant manifolds or low-thrust control dominates in each segment. To reduce the computational cost of multi-objective transfer trajectory optimization, an adaptive surrogate model based on a mixed sampling strategy is proposed. Numerical simulations show that the results obtained from the adaptive surrogate-based multi-objective optimization agree with those obtained using direct multi-objective optimization methods, while the computational workload of the adaptive surrogate-based optimization is only approximately 10% of that of direct multi-objective optimization. Furthermore, the generating efficiency of the Pareto points of the adaptive surrogate-based multi-objective optimization is approximately 8 times that of direct multi-objective optimization. Therefore, the proposed adaptive surrogate-based multi-objective optimization provides obvious advantages over direct multi-objective optimization methods.
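The essence of a surrogate-assisted loop is to fit a cheap approximation of the expensive model, optimise the approximation, and re-evaluate the true model only at the suggested point. A single-objective skeleton of that loop; the 1D test function, RBF surrogate and sample counts are illustrative stand-ins for the paper's mixed-sampling, multi-objective setup:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

def expensive(x):                       # stand-in for a trajectory simulation
    return np.sin(3 * x) + 0.5 * x ** 2

rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, 12)[:, None]     # initial sample plan
y = expensive(X[:, 0])
for _ in range(10):                     # adaptive refinement loop
    sur = RBFInterpolator(X, y, smoothing=1e-8)  # cheap surrogate
    grid = np.linspace(-2, 2, 401)[:, None]
    x_new = grid[np.argmin(sur(grid))]  # optimise the surrogate, not the model
    X = np.vstack([X, x_new[None, :]])
    y = np.append(y, expensive(x_new[0]))
print(X[np.argmin(y), 0], y.min())      # best point found by the loop
```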
Towards a virtual lung: multi-scale, multi-physics modelling of the pulmonary system.
Burrowes, K S; Swan, A J; Warren, N J; Tawhai, M H
2008-09-28
The essential function of the lung, gas exchange, is dependent on adequate matching of ventilation and perfusion, where air and blood are delivered through complex branching systems exposed to regionally varying transpulmonary and transmural pressures. Structure and function in the lung are intimately related, yet computational models in pulmonary physiology usually simplify or neglect structure. The geometries of the airway and vascular systems and their interaction with parenchymal tissue have an important bearing on regional distributions of air and blood, and therefore on whole lung gas exchange, but this has not yet been addressed by modelling studies. Models for gas exchange have typically incorporated considerable detail at the level of chemical reactions, with little thought for the influence of structure. To date, relatively little attention has been paid to modelling at the cellular or subcellular level in the lung, or to linking information from the protein structure/interaction and cellular levels to the operation of the whole lung. We review previous work in developing anatomically based models of the lung, airways, parenchyma and pulmonary vasculature, and some functional studies in which these models have been used. Models for gas exchange at several spatial scales are briefly reviewed, and the challenges and benefits from modelling cellular function in the lung are discussed.
Lantada, Andrés Díaz; Hengsbach, Stefan; Bade, Klaus
2017-10-16
In this study we present the combination of a math-based design strategy with direct laser writing as a high-precision technology for promoting solid free-form fabrication of multi-scale biomimetic surfaces. Results show a remarkable control of surface topography and wettability properties. Different examples of surfaces inspired by the lotus leaf, which to our knowledge are obtained for the first time following a computer-aided design with this degree of precision, are presented. Design and manufacturing strategies towards microfluidic systems, whose fluid-driving capabilities are obtained just by promoting a design-controlled wettability of their surfaces, are also discussed and illustrated by means of conceptual proofs. According to our experience, the synergies between the presented computer-aided design strategy and the capabilities of direct laser writing, supported by innovative writing strategies to increase final size while maintaining high precision, constitute a relevant step forward towards materials and devices with design-controlled multi-scale and micro-structured surfaces for advanced functionalities. To our knowledge, the surface geometry of the lotus leaf, which has relevant industrial applications thanks to its hydrophobic and self-cleaning behavior, has not previously been modeled and manufactured in an additive way with the degree of precision that we present here.
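A math-based design of a lotus-like surface typically superimposes periodic height functions at two or more scales. A toy height map in that spirit; all wavelengths and amplitudes are invented for illustration and are not the paper's design parameters:

```python
import numpy as np

# Multi-scale, lotus-inspired height map: a micro-papillae lattice with a
# finer roughness texture superimposed (dimensions illustrative only).
x = np.linspace(0.0, 200e-6, 1024)                 # 200 um square patch
X, Y = np.meshgrid(x, x)
papillae = 5e-6 * (np.cos(2 * np.pi * X / 20e-6) *
                   np.cos(2 * np.pi * Y / 20e-6)) ** 8   # ~20 um bumps
texture = 0.3e-6 * (np.sin(2 * np.pi * X / 2e-6) *
                    np.sin(2 * np.pi * Y / 2e-6))        # ~2 um roughness
Z = papillae + texture                              # height map for laser writing
print(Z.min(), Z.max())
```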
NASA Astrophysics Data System (ADS)
van Griensven, Ann; Haest, Pieter Jan; Broekx, Steven; Seuntjens, Piet; Campling, Paul; Ducos, Geraldine; Blaha, Ludek; Slobodnik, Jaroslav
2010-05-01
The European Union (EU) adopted the Water Framework Directive (WFD) in 2000, ensuring that all aquatic ecosystems meet 'good status' by 2015. However, it is a major challenge for river basin managers to meet this requirement in river basins with a high population density as well as intensive agricultural and industrial activities. The EU-financed AQUAREHAB project (FP7) specifically examines the ecological and economic impact of innovative rehabilitation technologies for multi-pressured degraded water bodies. For this purpose, a generic collaborative management tool, 'REACH-ER', is being developed that can be used by stakeholders, citizens and water managers to evaluate the ecological and economical effects of different remedial actions on water bodies. The tool is built using databases from large-scale models simulating the hydrological dynamics of the river basin and sub-basins, the costs of the measures, and the effectiveness of the measures in terms of ecological impact. Knowledge rules are used to describe the relationships between these data in order to compute the flux concentrations or to compute the effectiveness of measures. The management tool specifically addresses nitrate pollution and pollution by organic micropollutants. Detailed models are also used to predict the effectiveness of site remedial technologies using readily available global data. Rules describing ecological impacts are derived from ecotoxicological data for (mixtures of) specific contaminants (msPAF) and ecological indices relating effects to the presence of certain contaminants. Rules describing the cost-effectiveness of measures are derived from linear programming models identifying the least-cost combination of abatement measures to satisfy multi-pollutant reduction targets, and from multi-criteria analysis.
Mixedness determination of rare earth-doped ceramics
NASA Astrophysics Data System (ADS)
Czerepinski, Jennifer H.
The lack of chemical uniformity in a powder mixture, such as clustering of a minor component, can lead to deterioration of materials properties. A method to determine powder mixture quality is to correlate the chemical homogeneity of a multi-component mixture with its particle size distribution and mixing method. This is applicable to rare earth-doped ceramics, which require at least 1-2 nm dopant ion spacing to optimize optical properties. Mixedness simulations were conducted for random heterogeneous mixtures of Nd-doped LaF3 using the Concentric Shell Model of Mixedness (CSMM). Results indicate that multi-scale concentration variance is optimized when the host-to-dopant particle size ratio is 100. In order to verify results from the model, experimental methods that probe a mixture at the micro, meso, and macro scales are needed. To compare CSMM results directly with experiment, an image processing method was developed to calculate variance profiles from electron images. An in-lens (IL) secondary electron image is subtracted from the corresponding Everhart-Thornley (ET) secondary electron image in a Field-Emission Scanning Electron Microscope (FESEM) to produce two phases and pores that can be quantified with 50 nm spatial resolution. A macro was developed to quickly analyze multi-scale compositional variance from these images. Results for a 50:50 mixture of NdF3 and LaF3 agree with the computational model. The method has proven to be applicable only for mixtures with major components and specific particle morphologies, but the macro is useful for any type of imaging that produces excellent phase contrast, such as confocal microscopy. Fluorescence spectroscopy was used as an indirect method to confirm computational results for Nd-doped LaF3 mixtures. Fluorescence lifetime can be used as a quantitative method to indirectly measure chemical homogeneity when the limits of electron microscopy have been reached. Fluorescence lifetime represents the compositional fluctuations of a dopant on the nanoscale while accounting for billions of particles in a fast, non-destructive manner. This study shows how small-scale fluctuations in homogeneity limit the optimization of optical properties, which can be improved by the proper selection of particle size and mixing method.
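The multi-scale compositional variance at the heart of this comparison can be illustrated with a short sketch: given a binary phase map (e.g., thresholded from the ET-minus-IL subtraction image described above; the thresholding step is omitted), the variance of the minor-component fraction is computed over non-overlapping windows of increasing size. This is a generic windowed-variance calculation, not the authors' macro.

import numpy as np

def multiscale_variance(phase_map, window_sizes):
    # phase_map: 2-D array of 0/1 phase labels
    results = {}
    for w in window_sizes:
        ny, nx = phase_map.shape
        trimmed = phase_map[:ny - ny % w, :nx - nx % w]   # drop edge remainder
        blocks = trimmed.reshape(ny // w, w, nx // w, w)
        conc = blocks.mean(axis=(1, 3))                   # component fraction per window
        results[w] = conc.var()                           # variance across windows
    return results

# e.g. a variance-versus-scale profile: multiscale_variance(img, [4, 8, 16, 32, 64])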
Biomaterial science meets computational biology.
Hutmacher, Dietmar W; Little, J Paige; Pettet, Graeme J; Loessner, Daniela
2015-05-01
There is a pressing need for a predictive tool capable of revealing a holistic understanding of fundamental elements in the normal and pathological cell physiology of organoids in order to decipher the mechanoresponse of cells. Therefore, the integration of a systems bioengineering approach into a validated mathematical model is necessary to develop a new simulation tool. This tool can only be innovative by combining biomaterials science with computational biology. Systems-level and multi-scale experimental data are incorporated into a single framework, thus representing both single cells and collective cell behaviour. Such a computational platform needs to be validated in order to discover key mechano-biological factors associated with cell-cell and cell-niche interactions.
NASA Astrophysics Data System (ADS)
Huang, Melin; Huang, Bormin; Huang, Allen H.
2014-10-01
The Weather Research and Forecasting (WRF) model provides operational services worldwide in many areas and is linked to our daily activities, in particular during severe weather events. The Yonsei University (YSU) scheme is one of the planetary boundary layer (PBL) models in WRF. The PBL is responsible for vertical sub-grid-scale fluxes due to eddy transports in the whole atmospheric column, determines the flux profiles within the well-mixed boundary layer and the stable layer, and thus provides atmospheric tendencies of temperature, moisture (including clouds), and horizontal momentum in the entire atmospheric column. The YSU scheme is well suited to massively parallel computation as there are no interactions among horizontal grid points. To accelerate the computation of the YSU scheme, we employ the Intel Many Integrated Core (MIC) architecture, a many-core coprocessor design with efficient parallelization and vectorization capabilities. Our results show that the MIC-based optimization improved the performance of the first version of the multi-threaded code on a Xeon Phi 5110P by a factor of 2.4x. Furthermore, the same CPU-based optimizations improved the performance on an Intel Xeon E5-2603 by a factor of 1.6x as compared to the first version of the multi-threaded code.
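The property the abstract highlights, no interactions among horizontal grid points, is what makes the scheme embarrassingly parallel over columns. A toy Python/Numba sketch of that loop structure (the vertical "physics" here is a placeholder diffusion stencil, not the actual YSU closure):

import numpy as np
from numba import njit, prange

@njit(parallel=True)
def pbl_tendencies(theta):                 # theta: (ncols, nlev)
    out = np.empty_like(theta)
    for i in prange(theta.shape[0]):       # columns are independent: parallel loop
        col = theta[i]
        for k in range(1, col.shape[0] - 1):
            out[i, k] = 0.5 * (col[k - 1] + col[k + 1]) - col[k]   # placeholder mixing
        out[i, 0] = out[i, -1] = 0.0
    return out

tend = pbl_tendencies(np.random.rand(10000, 60))

On MIC hardware the same structure maps to threads over the column index and vector units over the vertical index.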
Zeller, Katherine A.; Vickers, T. Winston; Ernest, Holly B.; Boyce, Walter M.
2017-01-01
The importance of examining multiple hierarchical levels when modeling resource use for wildlife has been acknowledged for decades. Multi-level resource selection functions have recently been promoted as a method to synthesize resource use across nested organizational levels into a single predictive surface. Analyzing multiple scales of selection within each hierarchical level further strengthens multi-level resource selection functions. We extend this multi-level, multi-scale framework to modeling resistance for wildlife by combining multi-scale resistance surfaces from two data types, genetic and movement. Resistance estimation has typically been conducted with one of these data types, or compared between the two. However, we contend it is not an either/or issue and that resistance may be better modeled using a combination of resistance surfaces that represent processes at different hierarchical levels. Resistance surfaces estimated from genetic data characterize temporally broad-scale dispersal and successful breeding over generations, whereas resistance surfaces estimated from movement data represent fine-scale travel and contextualized movement decisions. We used telemetry and genetic data from a long-term study on pumas (Puma concolor) in a highly developed landscape in southern California to develop a multi-level, multi-scale resource selection function and a multi-level, multi-scale resistance surface. We used these multi-level, multi-scale surfaces to identify resource use patches and resistant kernel corridors. Across levels, we found that pumas avoided urban and agricultural areas and roads, and preferred riparian areas and more rugged terrain. For other landscape features, selection differed among levels, as did the scales of selection for each feature. With these results, we developed a conservation plan for one of the most isolated puma populations in the U.S. Our approach captured a wide spectrum of ecological relationships for a population, resulted in effective conservation planning, and can be readily applied to other wildlife species. PMID:28609466
On the predictivity of pore-scale simulations: Estimating uncertainties with multilevel Monte Carlo
NASA Astrophysics Data System (ADS)
Icardi, Matteo; Boccardo, Gianluca; Tempone, Raúl
2016-09-01
A fast method with tunable accuracy is proposed to estimate errors and uncertainties in pore-scale and Digital Rock Physics (DRP) problems. The overall predictivity of these studies can be, in fact, hindered by many factors including sample heterogeneity, computational and imaging limitations, model inadequacy and not perfectly known physical parameters. The typical objective of pore-scale studies is the estimation of macroscopic effective parameters such as permeability, effective diffusivity and hydrodynamic dispersion. However, these are often non-deterministic quantities (i.e., results obtained for a specific pore-scale sample and setup are not totally reproducible by another 'equivalent' sample and setup). The stochastic nature can arise due to the multi-scale heterogeneity, the computational and experimental limitations in considering large samples, and the complexity of the physical models. These approximations, in fact, introduce an error that, being dependent on a large number of complex factors, can be modeled as random. We propose a general simulation tool, based on multilevel Monte Carlo, that can reduce drastically the computational cost needed for computing accurate statistics of effective parameters and other quantities of interest, under any of these random errors. This is, to our knowledge, the first attempt to include Uncertainty Quantification (UQ) in pore-scale physics and simulation. The method can also provide estimates of the discretization error, and it is tested on three-dimensional transport problems in heterogeneous materials, where the sampling procedure is done by generation algorithms able to reproduce realistic consolidated and unconsolidated random sphere and ellipsoid packings and arrangements. A totally automatic workflow is developed in an open-source code [1] that includes rigid body physics and random packing algorithms, unstructured mesh discretization, finite volume solvers, and extrapolation and post-processing techniques. The proposed method can be efficiently used in many porous media applications for problems such as stochastic homogenization/upscaling, propagation of uncertainty from microscopic fluid and rock properties to macro-scale parameters, and robust estimation of Representative Elementary Volume size for arbitrary physics.
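The core of the multilevel Monte Carlo estimator is the telescoping sum over resolution levels, with most samples drawn at the cheap coarse levels. A minimal sketch, assuming a user-supplied sampler that wraps the pore-scale solver at increasing resolutions (the interface below is our assumption):

import numpy as np

def mlmc_estimate(sampler, n_samples):
    # sampler(level, n): n paired samples of P_l - P_{l-1}, with P_{-1} := 0
    estimate, variance = 0.0, 0.0
    for level, n in enumerate(n_samples):
        diffs = np.asarray(sampler(level, n))
        estimate += diffs.mean()
        variance += diffs.var(ddof=1) / n   # cost is concentrated on cheap levels
    return estimate, variance

# Typical allocation: many cheap coarse samples, few expensive fine ones, e.g.
# mean_k, var_k = mlmc_estimate(permeability_sampler, [1000, 200, 40, 8])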
Multi-scale finite element modeling of Eustachian tube function: influence of mucosal adhesion.
Malik, J E; Swarts, J D; Ghadiali, S N
2016-12-01
The inability to open the collapsible Eustachian tube (ET) leads to the development of chronic Otitis Media (OM). Although mucosal inflammation during OM leads to increased mucin gene expression and elevated adhesion forces within the ET lumen, it is not known how changes in mucosal adhesion alter the biomechanical mechanisms of ET function. In this study, we developed a novel multi-scale finite element model of ET function in adults that utilizes adhesion spring elements to simulate changes in mucosal adhesion. Models were created for six adult subjects, and dynamic patterns in muscle contraction were used to simulate the wave-like opening of the ET that occurs during swallowing. Results indicate that ET opening is highly sensitive to the level of mucosal adhesion and that exceeding a critical value of adhesion leads to rapid ET dysfunction. Parameter variation studies and sensitivity analysis indicate that increased mucosal adhesion alters the relative importance of several tissue biomechanical properties. For example, increases in mucosal adhesion reduced the sensitivity of ET function to tensor veli palatini muscle forces but did not alter the insensitivity of ET function to levator veli palatini muscle forces. Interestingly, although changes in cartilage stiffness did not significantly influence ET opening under low adhesion conditions, ET opening was highly sensitive to changes in cartilage stiffness under high adhesion conditions. Therefore, our multi-scale computational models indicate that changes in mucosal adhesion as would occur during inflammatory OM alter the biomechanical mechanisms of ET function. Copyright © 2016 John Wiley & Sons, Ltd.
Grid-Enabled Quantitative Analysis of Breast Cancer
2010-10-01
large-scale, multi-modality computerized image analysis. The central hypothesis of this research is that large-scale image analysis for breast cancer... research, we designed a pilot study utilizing large-scale parallel Grid computing harnessing nationwide infrastructure for medical image analysis. Also
Training Systems Modelers through the Development of a Multi-scale Chagas Disease Risk Model
NASA Astrophysics Data System (ADS)
Hanley, J.; Stevens-Goodnight, S.; Kulkarni, S.; Bustamante, D.; Fytilis, N.; Goff, P.; Monroy, C.; Morrissey, L. A.; Orantes, L.; Stevens, L.; Dorn, P.; Lucero, D.; Rios, J.; Rizzo, D. M.
2012-12-01
The goal of our NSF-sponsored Division of Behavioral and Cognitive Sciences grant is to create a multidisciplinary approach to develop spatially explicit models of vector-borne disease risk, using Chagas disease as our model. Chagas disease is a parasitic disease endemic to Latin America that afflicts an estimated 10 million people. The causative agent (Trypanosoma cruzi) is most commonly transmitted to humans by blood-feeding triatomine insect vectors. Our objectives are: (1) advance knowledge on the multiple interacting factors affecting the transmission of Chagas disease, and (2) provide next-generation genomic and spatial analysis tools applicable to the study of other vector-borne diseases worldwide. This funding is a collaborative effort between the RSENR (UVM), the School of Engineering (UVM), the Department of Biology (UVM), the Department of Biological Sciences (Loyola, New Orleans) and the Laboratory of Applied Entomology and Parasitology (Universidad de San Carlos). Throughout this five-year study, multiple educational groups (i.e., high school, undergraduate, graduate, and postdoctoral) will be trained in systems modeling. This systems approach challenges students to incorporate environmental, social, and economic as well as technical aspects, and enables modelers to simulate and visualize topics that would either be too expensive, complex or difficult to study directly (Yasar and Landau 2003). We launch this research by developing a set of multi-scale, epidemiological models of Chagas disease risk using STELLA® software v.9.1.3 (isee systems, inc., Lebanon, NH). We use this particular system dynamics software as a starting point because of its simple graphical user interface (e.g., behavior-over-time graphs, stock/flow diagrams, and causal loops). To date, high school and undergraduate students have created a set of multi-scale (i.e., homestead, village, and regional) disease models. Modeling the system at multiple spatial scales forces recognition that the system's structure generates its behavior, and STELLA®'s graphical interface allows researchers at multiple educational levels to observe patterns and trends as the system changes over time. Graduate students and postdoctoral researchers will use these initial models to more efficiently communicate and transfer knowledge across disciplines before generating more novel and complex disease risk models. The hope is that these models will improve causal understanding of system patterns and of how best to mitigate disease risk across multiple spatial scales. Yasar O, Landau RH (2003) Elements of computational science and engineering education. SIAM Review 45(4): 787-805.
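For readers unfamiliar with the stock/flow formalism, a STELLA-style homestead model reduces to coupled ODEs. A deliberately minimal vector-host sketch in Python (rate constants are illustrative placeholders, not values from this project):

import numpy as np
from scipy.integrate import odeint

def chagas(y, t, beta_hv=5e-4, beta_vh=1e-3, mu_v=0.02, birth_v=0.03):
    Sh, Ih, Sv, Iv = y                       # human and vector stocks
    new_h = beta_hv * Sh * Iv                # flow: vector -> human transmission
    new_v = beta_vh * Sv * Ih                # flow: human -> vector transmission
    dSv = birth_v * (Sv + Iv) - new_v - mu_v * Sv
    return [-new_h, new_h, dSv, new_v - mu_v * Iv]

t = np.linspace(0, 365 * 5, 1000)            # five years
traj = odeint(chagas, [100, 1, 500, 10], t)  # one homestead's stocks over time

Village- and regional-scale versions would couple many such units, e.g. through vector migration terms between homesteads.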
Scalable domain decomposition solvers for stochastic PDEs in high performance computing
Desai, Ajit; Khalil, Mohammad; Pettit, Chris; ...
2017-09-21
Stochastic spectral finite element models of practical engineering systems may involve solutions of linear systems or linearized systems for non-linear problems with billions of unknowns. For stochastic modeling, it is therefore essential to design robust, parallel and scalable algorithms that can efficiently utilize high-performance computing to tackle such large-scale systems. Domain decomposition based iterative solvers can handle such systems. Although these algorithms exhibit excellent scalabilities, significant algorithmic and implementational challenges exist to extend them to solve extreme-scale stochastic systems using emerging computing platforms. Intrusive polynomial chaos expansion based domain decomposition algorithms are extended here to concurrently handle high resolution in both spatial and stochastic domains using an in-house implementation. Sparse iterative solvers with efficient preconditioners are employed to solve the resulting global and subdomain level local systems through multi-level iterative solvers. We also use parallel sparse matrix-vector operations to reduce the floating-point operations and memory requirements. Numerical and parallel scalabilities of these algorithms are presented for the diffusion equation having a spatially varying diffusion coefficient modeled by a non-Gaussian stochastic process. Scalability of the solvers with respect to the number of random variables is also investigated.
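The deterministic skeleton of such solvers is an overlapping domain decomposition iteration; the sketch below shows a two-subdomain alternating Schwarz solve of a 1-D Poisson problem (the polynomial-chaos layer and the preconditioned Krylov machinery of the paper are omitted):

import numpy as np

n = 101
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]
f = np.ones(n)                                # -u'' = f, u(0) = u(1) = 0
u = np.zeros(n)

def subdomain_solve(nloc, rhs, left, right):
    # direct solve of the local Dirichlet problem
    A = (np.diag(2.0 * np.ones(nloc)) - np.diag(np.ones(nloc - 1), 1)
         - np.diag(np.ones(nloc - 1), -1)) / h**2
    b = rhs.copy()
    b[0] += left / h**2
    b[-1] += right / h**2
    return np.linalg.solve(A, b)

s1, e1, s2, e2 = 1, 60, 40, n - 1             # subdomains overlapping on indices 40..59
for it in range(50):                          # Schwarz sweeps
    u[s1:e1] = subdomain_solve(e1 - s1, f[s1:e1], u[s1 - 1], u[e1])
    u[s2:e2] = subdomain_solve(e2 - s2, f[s2:e2], u[s2 - 1], u[e2])

# u now approximates x*(1-x)/2; scalability comes from solving many such
# subdomains concurrently rather than sweeping them in sequence.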
Ground-motion signature of dynamic ruptures on rough faults
NASA Astrophysics Data System (ADS)
Mai, P. Martin; Galis, Martin; Thingbaijam, Kiran K. S.; Vyas, Jagdish C.
2016-04-01
Natural earthquakes occur on faults characterized by large-scale segmentation and small-scale roughness. This multi-scale geometrical complexity controls the dynamic rupture process, and hence strongly affects the radiated seismic waves and near-field shaking. For a fault system with given segmentation, the question arises as to what conditions produce large-magnitude multi-segment ruptures, as opposed to smaller single-segment events. Similarly, for variable degrees of roughness, ruptures may be arrested prematurely or may break the entire fault. In addition, fault roughness induces rupture incoherence that determines the level of high-frequency radiation. Using HPC-enabled dynamic-rupture simulations, we generate physically self-consistent rough-fault earthquake scenarios (M~6.8) and their associated near-source seismic radiation. Because these computations are too expensive to be conducted routinely for simulation-based seismic hazard assessment, we strive to develop an effective pseudo-dynamic source characterization that produces (almost) the same ground-motion characteristics. Therefore, we examine how variable degrees of fault roughness affect rupture properties and the seismic wavefield, and develop a planar-fault kinematic source representation that emulates the observed dynamic behaviour. We propose an effective workflow for improved pseudo-dynamic source modelling that incorporates rough-fault effects and the associated high-frequency radiation in broadband ground-motion computation for simulation-based seismic hazard assessment.
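Rough-fault geometries of this kind are typically generated as self-affine profiles via spectral synthesis; a short sketch (the Hurst exponent and amplitude-to-length ratio below are assumptions, not values from this study):

import numpy as np

rng = np.random.default_rng(1)
N, L = 2048, 40e3                          # samples along a 40 km fault trace
H, rms_ratio = 0.8, 1e-2                   # Hurst exponent, RMS/length ratio (assumed)

k = np.fft.rfftfreq(N, d=L / N)            # spatial wavenumbers
amp = np.zeros_like(k)
amp[1:] = k[1:] ** (-(0.5 + H))            # amplitude spectrum ~ k^-(0.5+H)
phase = rng.uniform(0.0, 2.0 * np.pi, len(k))
profile = np.fft.irfft(amp * np.exp(1j * phase), n=N)
profile *= rms_ratio * L / profile.std()   # rescale to the target RMS roughness

Varying H and rms_ratio is one concrete way to explore the "variable degrees of roughness" examined above.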
NASA Astrophysics Data System (ADS)
Ganguly, S.; Kumar, U.; Nemani, R. R.; Kalia, S.; Michaelis, A.
2016-12-01
In this work, we use a Fully Constrained Least Squares (FCLS) subpixel learning algorithm to unmix global WELD (Web Enabled Landsat Data) to obtain fractions or abundances of substrate (S), vegetation (V) and dark objects (D) classes. Because of the sheer scale of the data and compute needs, we leveraged the NASA Earth Exchange (NEX) high-performance computing architecture to optimize and scale our algorithm for large-scale processing. Subsequently, the S-V-D abundance maps were characterized into four classes, namely forest, farmland, water and urban areas (with NPP-VIIRS, the National Polar-orbiting Partnership Visible Infrared Imaging Radiometer Suite, nighttime lights data) over California, USA using a Random Forest classifier. Validation of these land cover maps with NLCD (National Land Cover Database) 2011 products and NAFD (North American Forest Dynamics) static forest cover maps showed that an overall classification accuracy of over 91% was achieved, a 6% improvement of unmixing-based classification relative to per-pixel based classification. As such, abundance maps continue to offer a useful alternative to classification maps derived from high-spatial-resolution data for forest inventory analysis, multi-class mapping for eco-climatic models and applications, fast multi-temporal trend analysis, and societal and policy-relevant applications needed at the watershed scale.
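A common way to implement FCLS is to enforce non-negativity with NNLS and the sum-to-one constraint with a heavily weighted appended row; a per-pixel sketch (whether the WELD pipeline uses exactly this weighted-NNLS formulation is our assumption; it is the textbook FCLS construction):

import numpy as np
from scipy.optimize import nnls

def fcls(endmembers, pixel, delta=1e3):
    # endmembers: (bands, k) matrix of S/V/D signatures; pixel: (bands,)
    # delta weights the appended sum-to-one row (soft equality constraint)
    bands, k = endmembers.shape
    E = np.vstack([endmembers, delta * np.ones((1, k))])
    p = np.append(pixel, delta)
    abundances, _ = nnls(E, p)               # non-negative abundances, summing to ~1
    return abundances

# e.g. abundances for one pixel with made-up 6-band signatures:
# a = fcls(np.random.rand(6, 3), np.random.rand(6))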
Highway Traffic Simulations on Multi-Processor Computers
DOT National Transportation Integrated Search
1997-01-01
A computer model has been developed to simulate highway traffic for various degrees of automation with a high degree of fidelity in regard to driver control and vehicle characteristics. The model simulates vehicle maneuvering in a multi-lane highway ...
Design and implementation of space physics multi-model application integration based on web
NASA Astrophysics Data System (ADS)
Jiang, Wenping; Zou, Ziming
With the development of research on the space environment and space science, building a networked online computing environment for space weather, space environment and space physics models has become increasingly important for the Chinese scientific community in recent years. There are two software modes for a space physics multi-model application integrated system (SPMAIS): C/S and B/S. The traditional, stand-alone C/S mode demands that a team or workshop from many disciplines and specialties build its own multi-model application integrated system, and requires the client to be deployed in different physical regions when users visit the integrated system. This brings two shortcomings: it reduces the efficiency of researchers who use the models to compute, and it makes accessing the data inconvenient. It is therefore necessary to create a shared network resource access environment that lets users reach the computing resources of space physics models quickly from a terminal, for conducting space science research and forecasting the space environment. The SPMAIS implements high-performance, first-principles computational models of the space environment in B/S mode and uses these models to predict "space weather", to understand space mission data and to further our understanding of the solar system. The main goal of the SPMAIS is to provide an easy and convenient user-driven online model operating environment. Up to now, the SPMAIS has contained dozens of space environment models, including the international AP8/AE8, IGRF and T96 models, as well as a solar proton prediction model, a geomagnetic transmission model and others developed by Chinese scientists. Another function of the SPMAIS is to integrate space observation data sets, which provide input data for online high-speed model computing. In this paper, the service-oriented architecture (SOA) concept, which divides a system into independent modules according to different business needs, is applied to solve the problem of physical independence between multiple models. The classic MVC (Model View Controller) software design pattern is used to build the architecture of the system, and JSP+servlet+javabean technology integrates the web application programs of the space physics multi-models. This solves the problem of multiple users requesting the same model-computing job and effectively balances computing tasks across servers. In addition, we also completed the following tasks: establishing a standard graphical user interface based on a Java Applet application program; designing the interface between model computing and the visualization of model computing results; realizing three-dimensional network visualization without plug-ins; using Java3D technology to achieve interaction with three-dimensional network scenes; and improving the ability to interact with web pages and dynamic execution capabilities, including rendering of three-dimensional graphics, fonts and color control. Through the design and implementation of the web-based SPMAIS, we provide an online computing and application runtime environment for space physics multi-models. Practical application shows that researchers can benefit from our system in space physics research and engineering applications.
Coarse-grained models of key self-assembly processes in HIV-1
NASA Astrophysics Data System (ADS)
Grime, John
Computational molecular simulations can elucidate microscopic information that is inaccessible to conventional experimental techniques. However, many processes occur over time and length scales that are beyond the current capabilities of atomic-resolution molecular dynamics (MD). One such process is the self-assembly of the HIV-1 viral capsid, a biological structure that is crucial to viral infectivity. The nucleation and growth of capsid structures requires the interaction of large numbers of capsid proteins within a complicated molecular environment. Coarse-grained (CG) models, where degrees of freedom are removed to produce more computationally efficient models, can in principle access large-scale phenomena such as the nucleation and growth of HIV-1 capsid lattice. We report here studies of the self-assembly behaviors of a CG model of HIV-1 capsid protein, including the influence of the local molecular environment on nucleation and growth processes. Our results suggest a multi-stage process, involving several characteristic structures, eventually producing metastable capsid lattice morphologies that are amenable to subsequent capsid dissociation in order to transmit the viral infection.
How do I know if I’ve improved my continental scale flood early warning system?
NASA Astrophysics Data System (ADS)
Cloke, Hannah L.; Pappenberger, Florian; Smith, Paul J.; Wetterhall, Fredrik
2017-04-01
Flood early warning systems mitigate damages and loss of life and are an economically efficient way of enhancing disaster resilience. The use of continental-scale flood early warning systems is growing rapidly. The European Flood Awareness System (EFAS) is a pan-European flood early warning system forced by a multi-model ensemble of numerical weather predictions. Responses to scientific and technical changes can be complex in these computationally expensive continental-scale systems, and improvements need to be tested by evaluating runs of the whole system. It is demonstrated here that forecast skill is not correlated with the value of warnings. In order to tell whether the system has been improved, an evaluation strategy is required that considers both forecast skill and warning value. The combination of a multi-forcing ensemble of EFAS flood forecasts is evaluated with a new skill-value strategy. The full multi-forcing ensemble is recommended for operational forecasting, but there are spatial variations in the optimal forecast combination. Results indicate that optimizing forecasts based on value rather than skill alters the optimal forcing combination and the forecast performance. They also indicate that model diversity and ensemble size are both important in achieving the best overall performance. The use of several evaluation measures that consider both skill and value is strongly recommended when considering improvements to early warning systems.
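One standard way to put a number on warning "value", as distinct from skill, is the cost-loss model: a user with cost/loss ratio C/L acts on warnings, and the warning system's relative economic value is measured against climatology and a perfect forecast. A sketch (this is the generic cost-loss calculation, not necessarily the paper's exact metric):

def relative_value(hits, misses, false_alarms, correct_neg, cost_loss_ratio):
    n = hits + misses + false_alarms + correct_neg
    h, m, f = hits / n, misses / n, false_alarms / n
    r = cost_loss_ratio
    e_climate = min(r, h + m)            # best fixed strategy: always or never protect
    e_forecast = (h + f) * r + m         # protect on warnings, suffer losses on misses
    e_perfect = (h + m) * r
    return (e_climate - e_forecast) / (e_climate - e_perfect)

# The same contingency table yields different value for different users:
# [relative_value(30, 10, 20, 940, r) for r in (0.05, 0.2, 0.5)]

Because value depends on the user's cost-loss ratio while skill does not, skill and value can rank forecast configurations differently, which is the abstract's central point.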
Parallel multiscale simulations of a brain aneurysm
Grinberg, Leopold; Fedosov, Dmitry A.; Karniadakis, George Em
2012-01-01
Cardiovascular pathologies, such as a brain aneurysm, are affected by the global blood circulation as well as by the local microrheology. Hence, developing computational models for such cases requires the coupling of disparate spatial and temporal scales often governed by diverse mathematical descriptions, e.g., by partial differential equations (continuum) and ordinary differential equations for discrete particles (atomistic). However, interfacing atomistic-based with continuum-based domain discretizations is a challenging problem that requires both mathematical and computational advances. We present here a hybrid methodology that enabled us to perform the first multi-scale simulations of platelet depositions on the wall of a brain aneurysm. The large scale flow features in the intracranial network are accurately resolved by using the high-order spectral element Navier-Stokes solver NεκTαr. The blood rheology inside the aneurysm is modeled using a coarse-grained stochastic molecular dynamics approach (the dissipative particle dynamics method) implemented in the parallel code LAMMPS. The continuum and atomistic domains overlap with interface conditions provided by effective forces computed adaptively to ensure continuity of states across the interface boundary. A two-way interaction is allowed with the time-evolving boundary of the (deposited) platelet clusters tracked by an immersed boundary method. The corresponding heterogeneous solvers (NεκTαr and LAMMPS) are linked together by a computational multilevel message passing interface that facilitates modularity and high parallel efficiency. Results of multiscale simulations of clot formation inside the aneurysm in a patient-specific arterial tree are presented. We also discuss the computational challenges involved and present scalability results of our coupled solver on up to 300K computer processors. Validation of such coupled atomistic-continuum models is a main open issue that has to be addressed in future work. PMID:23734066
Photogrammetric Technique for Center of Gravity Determination
NASA Technical Reports Server (NTRS)
Jones, Thomas W.; Johnson, Thomas H.; Shemwell, Dave; Shreves, Christopher M.
2012-01-01
A new measurement technique for determination of the center of gravity (CG) of large-scale objects has been demonstrated. The experimental method was conducted as part of an LS-DYNA model validation program for the Max Launch Abort System (MLAS) crew module. The test was conducted on the full-scale crew module concept at NASA Langley Research Center. Multi-camera photogrammetry was used to measure the test article in several asymmetric configurations. The objective of these measurements was to provide validation of the CG as computed from the original mechanical design. The methodology, measurement technique, and measurement results are presented.
NASA Astrophysics Data System (ADS)
Gulamali, M. Y.; Saunders, J. H.; Jackson, M. D.; Pain, C. C.
2009-04-01
We present results from a new computational multi-fluid dynamics code, designed to model the transport of heat, mass and chemical species during flow of single or multiple immiscible fluid phases through porous media, including gravitational effects and compressibility. The model also captures the electrical phenomena which may arise through electrokinetic, electrochemical and electrothermal coupling. Building on the advanced computational technology of the Imperial College Ocean Model, this new development leads the way towards a complex multiphase code using arbitrary unstructured and adaptive meshes, and domains decomposed to run in parallel over a cluster of workstations or a dedicated parallel computer. These facilities will allow efficient and accurate modelling of multiphase flows which capture large- and small-scale transport phenomena, while preserving the important geology and/or surface topology to make the results physically meaningful and realistic. Applications include modelling of contaminant transport in aquifers, multiphase flow during hydrocarbon production, migration of carbon dioxide during sequestration, and evaluation of the design and safety of nuclear reactors. Simulations of the streaming potential resulting from multiphase flow in laboratory- and field-scale models demonstrate that streaming potential signals originate at fluid fronts, and at geologic boundaries where fluid saturation changes. This suggests that downhole measurements of streaming potential may be used to inform production strategies in oil and gas reservoirs. As water encroaches on an oil production well, the streaming-potential signal associated with the water front encompasses the well even when the front is up to 100 m away, so the potential measured at the well starts to change significantly relative to a distant reference electrode. Variations in the geometry of the encroaching water front could be characterized using an array of electrodes positioned along the well, but a good understanding of the local reservoir geology will be required to identify signals caused by the front. The streaming potential measured at a well will be maximized in low-permeability reservoirs produced at a high rate, and in thick reservoirs with low shale content.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Perdikaris, Paris, E-mail: parisp@mit.edu; Grinberg, Leopold, E-mail: leopoldgrinberg@us.ibm.com; Karniadakis, George Em, E-mail: george-karniadakis@brown.edu
The aim of this work is to present an overview of recent advances in multi-scale modeling of brain blood flow. In particular, we present some approaches that enable the in silico study of multi-scale and multi-physics phenomena in the cerebral vasculature. We discuss the formulation of continuum and atomistic modeling approaches, present a consistent framework for their concurrent coupling, and list some of the challenges that one needs to overcome in achieving a seamless and scalable integration of heterogeneous numerical solvers. The effectiveness of the proposed framework is demonstrated in a realistic case involving modeling the thrombus formation process taking place on the wall of a patient-specific cerebral aneurysm. This highlights the ability of multi-scale algorithms to resolve important biophysical processes that span several spatial and temporal scales, potentially yielding new insight into the key aspects of brain blood flow in health and disease. Finally, we discuss open questions in multi-scale modeling and emerging topics of future research.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schunert, Sebastian; Schwen, Daniel; Ghassemi, Pedram
This work presents a multi-physics, multi-scale approach to modeling the Transient Test Reactor (TREAT), currently being prepared for restart at the Idaho National Laboratory. TREAT fuel is made up of microscopic fuel grains (r ~ 20 µm) dispersed in a graphite matrix. The novelty of this work is in coupling a binary collision Monte Carlo (BCMC) model to the finite-element-based code MOOSE for solving a microscopic heat-conduction problem whose driving source is provided by the BCMC model tracking fission fragment energy deposition. This microscopic model is driven by a transient, engineering-scale neutronics model coupled to an adiabatic heating model. The macroscopic model provides local power densities and neutron energy spectra to the microscopic model. Currently, no feedback from the microscopic to the macroscopic model is considered. TREAT transient 15 is used to exemplify the capabilities of the multi-physics, multi-scale model, and it is found that the average fuel grain temperature differs from the average graphite temperature by 80 K despite the low-power transient. The large temperature difference has strong implications for the Doppler feedback a potential LEU TREAT core would see, and it underpins the need for multi-physics, multi-scale modeling of a TREAT LEU core.
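The one-way coupling described here, a macro-scale transient supplying a power density that drives a micro-scale heat-conduction solve, can be caricatured in a few lines. The sketch below solves explicit 1-D spherically symmetric conduction in a single grain with a time-dependent volumetric source standing in for the neutronics/BCMC chain; all material numbers and the pulse shape are placeholders, not TREAT data:

import numpy as np

nr, R = 100, 20e-6                      # radial points, grain radius ~20 um
r = np.linspace(1e-9, R, nr)
dr = r[1] - r[0]
alpha, rho_cp = 1e-6, 2e6               # diffusivity, volumetric heat capacity (assumed)
dt = 0.2 * dr**2 / alpha                # explicit stability limit
nsteps = 2000
tmax = nsteps * dt
T = np.full(nr, 300.0)

def macro_power(t):                     # stand-in for the engineering-scale model
    return 5e9 * np.exp(-((t - tmax / 2) / (tmax / 8))**2)

for step in range(nsteps):
    t = step * dt
    lap = np.zeros(nr)                  # spherical Laplacian at interior points
    lap[1:-1] = (T[2:] - 2 * T[1:-1] + T[:-2]) / dr**2 \
                + (2.0 / r[1:-1]) * (T[2:] - T[:-2]) / (2 * dr)
    T = T + dt * (alpha * lap + macro_power(t) / rho_cp)
    T[0] = T[1]                         # symmetry at the grain centre
    T[-1] = 300.0                       # graphite matrix treated as a heat sink

The grain-versus-matrix temperature difference reported above is exactly the quantity such a micro solve resolves and a homogenized model misses.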
Multi-scale modeling of multi-component reactive transport in geothermal aquifers
NASA Astrophysics Data System (ADS)
Nick, Hamidreza M.; Raoof, Amir; Wolf, Karl-Heinz; Bruhn, David
2014-05-01
In deep geothermal systems, heat and chemical stresses can cause physical alterations that may have a significant effect on flow and reaction rates, leading to changes in the permeability and porosity of the formations due to mineral precipitation and dissolution. Large-scale modeling of reactive transport in such systems is still challenging. A large area of uncertainty is the way in which the pore-scale information controlling flow and reaction behaves at a larger scale. A possible choice is to use constitutive relationships relating, for example, the permeability and porosity evolutions to the change in the pore geometry. While determining such relationships through laboratory experiments may be limited, pore-network modeling provides an alternative solution. In this work, we introduce a new workflow in which a hybrid finite-element finite-volume method [1,2] and a pore-network modeling approach [3] are employed. Using the pore-scale model, relevant constitutive relations are developed. These relations are then embedded in the continuum-scale model. This approach enables us to study non-isothermal reactive transport in porous media while accounting for micro-scale features under realistic conditions. The performance and applicability of the proposed model are explored for different flow and reaction regimes. References: 1. Matthäi, S.K., et al.: Simulation of solute transport through fractured rock: a higher-order accurate finite-element finite-volume method permitting large time steps. Transport in Porous Media 83.2 (2010): 289-318. 2. Nick, H.M., et al.: Reactive dispersive contaminant transport in coastal aquifers: numerical simulation of a reactive Henry problem. Journal of Contaminant Hydrology 145 (2012), 90-104. 3. Raoof, A., et al.: PoreFlow: a complex pore-network model for simulation of reactive transport in variably saturated porous media. Computers & Geosciences 61 (2013), 160-174.
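The upscaling step of this workflow, distilling pore-network runs into a constitutive relation that the continuum model evaluates, can be sketched as a simple power-law fit k = k0 (phi/phi0)^n; the data points below are invented for illustration:

import numpy as np

phi_pn = np.array([0.18, 0.20, 0.22, 0.24, 0.26])      # pore-network porosities
k_pn = np.array([0.9, 1.4, 2.1, 3.0, 4.2]) * 1e-13     # permeabilities, m^2 (invented)

phi0, k0 = phi_pn[0], k_pn[0]
n = np.polyfit(np.log(phi_pn / phi0), np.log(k_pn / k0), 1)[0]   # fitted exponent

def permeability(phi):
    # constitutive relation embedded in the continuum-scale model
    return k0 * (phi / phi0) ** n

# e.g. permeability enhancement when dissolution raises porosity 0.20 -> 0.23:
# permeability(0.23) / permeability(0.20)

The continuum simulator then only needs phi(x, t) from its reaction step to update k, rather than a fresh pore-scale solve at every grid cell.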
Spatial Variation of Pressure in the Lyophilization Product Chamber Part 1: Computational Modeling.
Ganguly, Arnab; Varma, Nikhil; Sane, Pooja; Bogner, Robin; Pikal, Michael; Alexeenko, Alina
2017-04-01
The flow physics in the product chamber of a freeze dryer involves coupled heat and mass transfer at different length and time scales. The low-pressure environment and the relatively small flow velocities make it difficult to quantify the flow structure experimentally. The current work presents the three-dimensional computational fluid dynamics (CFD) modeling for vapor flow in a laboratory scale freeze dryer validated with experimental data and theory. The model accounts for the presence of a non-condensable gas such as nitrogen or air using a continuum multi-species model. The flow structure at different sublimation rates, chamber pressures, and shelf-gaps are systematically investigated. Emphasis has been placed on accurately predicting the pressure variation across the subliming front. At a chamber set pressure of 115 mtorr and a sublimation rate of 1.3 kg/h/m2, the pressure variation reaches about 9 mtorr. The pressure variation increased linearly with sublimation rate in the range of 0.5 to 1.3 kg/h/m2. The dependence of pressure variation on the shelf-gap was also studied both computationally and experimentally. The CFD modeling results are found to agree within 10% with the experimental measurements. The computational model was also compared to analytical solution valid for small shelf-gaps. Thus, the current work presents validation study motivating broader use of CFD in optimizing freeze-drying process and equipment design.
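Taking the reported linearity at face value and assuming a zero intercept (our assumption; the abstract only states linearity over 0.5-1.3 kg/h/m2), the pressure variation can be written as a simple proportionality anchored at the quoted point (9 mtorr at 1.3 kg/h/m2):

def pressure_variation_mtorr(sublimation_rate):
    # slope ~6.9 mtorr per (kg/h/m2), from 9 mtorr at 1.3 kg/h/m2
    return (9.0 / 1.3) * sublimation_rate

# pressure_variation_mtorr(0.5) -> ~3.5 mtorr
# pressure_variation_mtorr(1.0) -> ~6.9 mtorr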
NASA Astrophysics Data System (ADS)
Zhang, Y.; Sankaranarayanan, S.; Zaitchik, B. F.; Siddiqui, S.
2017-12-01
Africa is home to some of the most climate-vulnerable populations in the world. Energy and agricultural development have diverse impacts on the region's food security and economic well-being, from the household to the national level, particularly considering climate variability and change. Our ultimate goal is to understand coupled Food-Energy-Water (FEW) dynamics across spatial scales in order to quantify the sensitivity of critical human outcomes to FEW development strategies in Ethiopia. We are developing bottom-up and top-down multi-scale models, spanning local, sub-national and national scales, to capture the FEW linkages across communities and climatic adaptation zones. The focus of this presentation is the sub-national-scale multi-player micro-economic (MME) partial-equilibrium model with coupled food and energy sectors for Ethiopia. With fixed large-scale economic, demographic, and resource factors from the national-scale computable general equilibrium (CGE) model and inferences of behavior parameters from the local-scale agent-based model (ABM), the MME studies how shocks such as drought (crop failure) and the development of resilience technologies would influence the FEW system at a sub-national scale. The MME model is based on aggregating individual optimization problems for the relevant players. It includes production, storage, and consumption of food and energy at spatially disaggregated zones, and transportation in between with endogenously modeled infrastructure. The aggregated players in each zone have different roles, such as crop producers, storage managers, and distributors, who make decisions according to their own but interdependent objective functions. The food and energy supply chain across zones is therefore captured. Ethiopia is dominated by rain-fed agriculture, with only 2% of farmland irrigated. Small-scale irrigation has been promoted as a resilience technology that could potentially play a critical role in food security and economic well-being in Ethiopia, but that also intersects with energy and water consumption. Here, we focus on the energy usage for small-scale irrigation and the collective impact on crop production and water resources across zones in the MME model.
NASA Astrophysics Data System (ADS)
Khebbab, Mohamed; Feliachi, Mouloud; El Hadi Latreche, Mohamed
2018-03-01
In this paper, a simulation of eddy current non-destructive testing (EC NDT) on a unidirectional carbon fiber reinforced polymer is performed; for this, a magneto-dynamic formulation in terms of the magnetic vector potential is solved using the finite element heterogeneous multi-scale method (FE HMM). The goal of FE HMM is to compute the homogenized solution without calculating the homogenized tensor explicitly; the solution is based only on the physical characteristics known in the micro domain. This feature is well adapted to EC NDT for evaluating defects in carbon composite materials at the microscopic scale, where defect detection is performed by coil impedance measurement, a quantity intimately linked to the material characteristics at the microscopic level. On this basis, our model can handle different defects such as cracks, inclusions, internal electrical conductivity changes, heterogeneities, etc. The simulation results were compared with the solution obtained for homogenized material using a mixture law, and good agreement was found.
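In the standard elliptic prototype of FE HMM (e.g., Abdulle's formulation; the EC NDT model would use the curl-curl analogue for the vector potential), the macro bilinear form is assembled directly from micro solves on small sampling domains, so the homogenized tensor never appears explicitly:

B_H(u^H, v^H) = \sum_{K \in \mathcal{T}_H} \sum_{j}
  \frac{\omega_{K_j}}{|K_{\delta_j}|}
  \int_{K_{\delta_j}} a(x)\, \nabla u_h^{K_j} \cdot \nabla v_h^{K_j}\, dx

Here the micro functions u_h^{K_j} solve constrained cell problems on sampling domains K_{\delta_j} centred at the macro quadrature points x_{K_j} with weights \omega_{K_j}; only the micro-scale coefficient a(x) (here, the fiber-scale conductivity) is required as input.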
Computational fluid dynamics modelling in cardiovascular medicine.
Morris, Paul D; Narracott, Andrew; von Tengg-Kobligk, Hendrik; Silva Soto, Daniel Alejandro; Hsiao, Sarah; Lungu, Angela; Evans, Paul; Bressloff, Neil W; Lawford, Patricia V; Hose, D Rodney; Gunn, Julian P
2016-01-01
This paper reviews the methods, benefits and challenges associated with the adoption and translation of computational fluid dynamics (CFD) modelling within cardiovascular medicine. CFD, a specialist area of mathematics and a branch of fluid mechanics, is used routinely in a diverse range of safety-critical engineering systems, which increasingly is being applied to the cardiovascular system. By facilitating rapid, economical, low-risk prototyping, CFD modelling has already revolutionised research and development of devices such as stents, valve prostheses, and ventricular assist devices. Combined with cardiovascular imaging, CFD simulation enables detailed characterisation of complex physiological pressure and flow fields and the computation of metrics which cannot be directly measured, for example, wall shear stress. CFD models are now being translated into clinical tools for physicians to use across the spectrum of coronary, valvular, congenital, myocardial and peripheral vascular diseases. CFD modelling is apposite for minimally-invasive patient assessment. Patient-specific (incorporating data unique to the individual) and multi-scale (combining models of different length- and time-scales) modelling enables individualised risk prediction and virtual treatment planning. This represents a significant departure from traditional dependence upon registry-based, population-averaged data. Model integration is progressively moving towards 'digital patient' or 'virtual physiological human' representations. When combined with population-scale numerical models, these models have the potential to reduce the cost, time and risk associated with clinical trials. The adoption of CFD modelling signals a new era in cardiovascular medicine. While potentially highly beneficial, a number of academic and commercial groups are addressing the associated methodological, regulatory, education- and service-related challenges. Published by the BMJ Publishing Group Limited.
Next-Generation MDAC Discrimination Procedure Using Multi-Dimensional Spectral Analyses
2007-09-01
explosions near the Lop Nor, Novaya Zemlya, Semipalatinsk, Nevada, and Indian test sites. We have computed regional phase spectra and are correcting... test sites as mainly due to differences in explosion P and S corner frequencies. Fisk (2007) used source model fits to estimate Pn, Pg, and Lg corner... frequencies for Nevada Test Site (NTS) explosions and found that Lg corner frequencies exhibit similar scaling with source size as for Pn and Pg
Hybrid MPI+OpenMP Programming of an Overset CFD Solver and Performance Investigations
NASA Technical Reports Server (NTRS)
Djomehri, M. Jahed; Jin, Haoqiang H.; Biegel, Bryan (Technical Monitor)
2002-01-01
This report describes a two-level parallelization of a computational fluid dynamics (CFD) solver with multi-zone overset structured grids. The approach is based on a hybrid MPI+OpenMP programming model suitable for shared-memory machines and clusters of shared-memory machines. Performance investigations of the hybrid application on an SGI Origin2000 (O2K) machine are reported using medium- and large-scale test problems.
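A schematic of the two-level pattern, with Python standing in for illustration (mpi4py ranks take the role of MPI processes over zone groups, a thread pool the role of OpenMP within a zone; halo exchange between sub-blocks is omitted for brevity, and this assumes mpi4py is available):

from concurrent.futures import ThreadPoolExecutor
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

zones = [np.random.rand(64, 64) for _ in range(8)]
my_zones = zones[rank::size]              # coarse level: whole zones per MPI rank

def smooth_block(block):                  # fine level: threads within one zone
    return 0.25 * (np.roll(block, 1, 0) + np.roll(block, -1, 0)
                   + np.roll(block, 1, 1) + np.roll(block, -1, 1))

with ThreadPoolExecutor(max_workers=4) as pool:
    for z in my_zones:
        blocks = np.array_split(z, 4)     # thread-sized chunks of the zone
        z[:] = np.vstack(list(pool.map(smooth_block, blocks)))

local = sum(float(np.sum(z**2)) for z in my_zones)
total = comm.allreduce(local, op=MPI.SUM) # inter-zone (overset) communication step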
Modeling Primary Atomization of Liquid Fuels using a Multiphase DNS/LES Approach
DOE Office of Scientific and Technical Information (OSTI.GOV)
Arienti, Marco; Oefelein, Joe; Doisneau, Francois
2016-08-01
As part of a Laboratory Directed Research and Development project, we are developing a modeling-and-simulation capability to study fuel direct injection in automotive engines. Predicting mixing and combustion at realistic conditions remains a challenging objective of energy science, and it is a research priority in Sandia's mission-critical area of energy security, with relevance to many flows in defense and climate applications. High-performance computing applied to this non-linear, multi-scale problem is key to engine calculations with increased scientific reliability.
A Goddard Multi-Scale Modeling System with Unified Physics
NASA Technical Reports Server (NTRS)
Tao, W.K.; Anderson, D.; Atlas, R.; Chern, J.; Houser, P.; Hou, A.; Lang, S.; Lau, W.; Peters-Lidard, C.; Kakar, R.;
2008-01-01
Numerical cloud-resolving models (CRMs), which are based on the non-hydrostatic equations of motion, have been extensively applied to cloud-scale and mesoscale processes during the past four decades. Recent GEWEX Cloud System Study (GCSS) model comparison projects have indicated that CRMs agree with observations in simulating various types of clouds and cloud systems from different geographic locations. Cloud-resolving models now provide statistical information useful for developing more realistic, physically based parameterizations for climate models and numerical weather prediction models. It is also expected that numerical weather prediction (NWP) and regional-scale models can be run at grid sizes similar to cloud-resolving models through nesting techniques. Current and future NASA satellite programs can provide cloud, precipitation, aerosol and other data at very fine spatial and temporal scales. Using these satellite data to improve the understanding of the physical processes that are responsible for the variation in global and regional climate and hydrological systems requires a coupled global circulation model (GCM) and cloud-scale model (termed a super-parameterization or multi-scale modeling framework, MMF). The use of a GCM enables global coverage, and the use of a CRM allows for better and more sophisticated physical parameterization. NASA satellites and field campaigns can provide initial conditions as well as validation through Earth satellite simulators. At Goddard, we have developed a multi-scale modeling system with unified physics. The modeling system consists of a coupled GCM-CRM (or MMF), a state-of-the-art Weather Research and Forecasting model (WRF) and a cloud-resolving model (the Goddard Cumulus Ensemble model). In these models, the same microphysical schemes (2ICE, several 3ICE), radiation (including explicitly calculated cloud optical properties), and surface models are applied. In addition, a comprehensive unified Earth satellite simulator has been developed at GSFC, which is designed to fully utilize the multi-scale modeling system. A brief review of the multi-scale modeling system with unified physics/simulator, together with examples, is presented in this article.
Multi-fidelity methods for uncertainty quantification in transport problems
NASA Astrophysics Data System (ADS)
Tartakovsky, G.; Yang, X.; Tartakovsky, A. M.; Barajas-Solano, D. A.; Scheibe, T. D.; Dai, H.; Chen, X.
2016-12-01
We compare several multi-fidelity approaches for uncertainty quantification in flow and transport simulations that have a lower computational cost than the standard Monte Carlo method. The cost reduction is achieved by combining a small number of high-resolution (high-fidelity) simulations with a large number of low-resolution (low-fidelity) simulations. We propose a new method, the re-scaled Multilevel Monte Carlo (rMLMC) method. The rMLMC is based on the idea that the statistics of quantities of interest depend on scale/resolution. We compare rMLMC with existing multi-fidelity methods such as Multilevel Monte Carlo (MLMC) and reduced basis methods, and discuss the advantages of each approach.
CellML and associated tools and techniques.
Garny, Alan; Nickerson, David P; Cooper, Jonathan; Weber dos Santos, Rodrigo; Miller, Andrew K; McKeever, Steve; Nielsen, Poul M F; Hunter, Peter J
2008-09-13
We have, in the last few years, witnessed the development and availability of an ever increasing number of computer models that describe complex biological structures and processes. The multi-scale and multi-physics nature of these models makes their development particularly challenging, not only from a biological or biophysical viewpoint but also from a mathematical and computational perspective. In addition, the issue of sharing and reusing such models has proved to be particularly problematic, with the published models often lacking information that is required to accurately reproduce the published results. The International Union of Physiological Sciences Physiome Project was launched in 1997 with the aim of tackling the aforementioned issues by providing a framework for the modelling of the human body. As part of this initiative, the specifications of the CellML mark-up language were released in 2001. Now, more than 7 years later, the time has come to assess the situation, in particular with regard to the tools and techniques that are now available to the modelling community. Thus, after introducing CellML, we review and discuss existing editors, validators, online repository, code generators and simulation environments, as well as the CellML Application Program Interface. We also address possible future directions including the need for additional mark-up languages.
Finegan, Donal P; Scheel, Mario; Robinson, James B; Tjaden, Bernhard; Di Michiel, Marco; Hinds, Gareth; Brett, Dan J L; Shearing, Paul R
2016-11-16
Catastrophic failure of lithium-ion batteries occurs across multiple length scales and over very short time periods. A combination of high-speed operando tomography, thermal imaging and electrochemical measurements is used to probe the degradation mechanisms leading up to overcharge-induced thermal runaway of a LiCoO2 pouch cell, through its interrelated dynamic structural, thermal and electrical responses. Failure mechanisms across multiple length scales are explored using a post-mortem multi-scale tomography approach, revealing significant morphological and phase changes in the LiCoO2 electrode microstructure and location-dependent degradation. This combined operando and multi-scale X-ray computed tomography (CT) technique is demonstrated as a comprehensive approach to understanding battery degradation and failure.
The future of human cerebral cartography: a novel approach
Frackowiak, Richard; Markram, Henry
2015-01-01
Cerebral cartography can be understood in a limited, static, neuroanatomical sense. Temporal information from electrical recordings contributes information on regional interactions adding a functional dimension. Selective tagging and imaging of molecules adds biochemical contributions. Cartographic detail can also be correlated with normal or abnormal psychological or behavioural data. Modern cerebral cartography is assimilating all these elements. Cartographers continue to collect ever more precise data in the hope that general principles of organization will emerge. However, even detailed cartographic data cannot generate knowledge without a multi-scale framework making it possible to relate individual observations and discoveries. We propose that, in the next quarter century, advances in cartography will result in progressively more accurate drafts of a data-led, multi-scale model of human brain structure and function. These blueprints will result from analysis of large volumes of neuroscientific and clinical data, by a process of reconstruction, modelling and simulation. This strategy will capitalize on remarkable recent developments in informatics and computer science and on the existence of much existing, addressable data and prior, though fragmented, knowledge. The models will instantiate principles that govern how the brain is organized at different levels and how different spatio-temporal scales relate to each other in an organ-centred context. PMID:25823868
Multi-objective optimization of GENIE Earth system models.
Price, Andrew R; Myerscough, Richard J; Voutchkov, Ivan I; Marsh, Robert; Cox, Simon J
2009-07-13
The tuning of parameters in climate models is essential to provide reliable long-term forecasts of Earth system behaviour. We apply a multi-objective optimization algorithm to the problem of parameter estimation in climate models. This optimization process involves the iterative evaluation of response surface models (RSMs), followed by the execution of multiple Earth system simulations. These computations require an infrastructure that provides high-performance computing for building and searching the RSMs and high-throughput computing for the concurrent evaluation of a large number of models. Grid computing technology is therefore essential to make this algorithm practical for members of the GENIE project.
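A toy rendering of the response-surface loop described above; the simulator, the quadratic surrogates, and the weight sweep are hypothetical stand-ins for the GENIE setup, intended only to show why cheap surrogates plus concurrent true-model evaluations fit a grid-computing workflow.

```python
import numpy as np

def earth_system(p):
    # Stand-in for an expensive simulation with two competing objectives.
    return (p - 1.0) ** 2, (p + 1.0) ** 2

# High-throughput step: evaluate an initial ensemble of parameter values.
p = np.linspace(-3.0, 3.0, 7)
f1, f2 = earth_system(p)

for _ in range(3):                              # iterative RSM refinement
    rsm1 = np.poly1d(np.polyfit(p, f1, 2))      # quadratic response surfaces
    rsm2 = np.poly1d(np.polyfit(p, f2, 2))
    grid = np.linspace(-3.0, 3.0, 1001)
    # Search the cheap surrogates under several objective weightings, then
    # re-evaluate the true model at the candidates found (concurrently, in
    # the real workflow) and augment the training set.
    cand = np.unique([grid[np.argmin(w * rsm1(grid) + (1 - w) * rsm2(grid))]
                      for w in (0.1, 0.5, 0.9)])
    g1, g2 = earth_system(cand)
    p = np.concatenate([p, cand])
    f1 = np.concatenate([f1, g1])
    f2 = np.concatenate([f2, g2])
print(sorted(p.round(2)))
```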
Kirschner, Denise E; Linderman, Jennifer J
2009-04-01
In addition to traditional and novel experimental approaches to study host-pathogen interactions, mathematical and computer modelling have recently been applied to address open questions in this area. These modelling tools not only offer an additional avenue for exploring disease dynamics at multiple biological scales, but also complement and extend knowledge gained via experimental tools. In this review, we outline four examples where modelling has complemented current experimental techniques in a way that can or has already pushed our knowledge of host-pathogen dynamics forward. Two of the modelling approaches presented go hand in hand with articles in this issue exploring fluorescence resonance energy transfer and two-photon intravital microscopy. Two others explore virtual or 'in silico' deletion and depletion as well as a new method to understand and guide studies in genetic epidemiology. In each of these examples, the complementary nature of modelling and experiment is discussed. We further note that multi-scale modelling may allow us to integrate information across length (molecular, cellular, tissue, organism, population) and time (e.g. seconds to lifetimes). In sum, when combined, these compatible approaches offer new opportunities for understanding host-pathogen interactions.
A Liver-Centric Multiscale Modeling Framework for Xenobiotics.
Sluka, James P; Fu, Xiao; Swat, Maciej; Belmonte, Julio M; Cosmanescu, Alin; Clendenon, Sherry G; Wambaugh, John F; Glazier, James A
2016-01-01
We describe a multi-scale, liver-centric in silico modeling framework for acetaminophen pharmacology and metabolism. We focus on a computational model to characterize whole body uptake and clearance, liver transport and phase I and phase II metabolism. We do this by incorporating sub-models that span three scales: Physiologically Based Pharmacokinetic (PBPK) modeling of acetaminophen uptake and distribution at the whole body level, cell and blood flow modeling at the tissue/organ level, and metabolism at the sub-cellular level. We have used standard modeling modalities at each of the three scales. In particular, we have used the Systems Biology Markup Language (SBML) to create both the whole-body and sub-cellular scales. Our modeling approach allows us to run the individual sub-models separately and allows us to easily exchange models at a particular scale without the need to extensively rework the sub-models at other scales. In addition, the use of SBML greatly facilitates the inclusion of biological annotations directly in the model code. The model was calibrated using human in vivo data for acetaminophen and its sulfate and glucuronate metabolites. We then carried out extensive parameter sensitivity studies including the pairwise interaction of parameters. We also simulated population variation of exposure and sensitivity to acetaminophen. Our modeling framework can be extended to the prediction of liver toxicity following acetaminophen overdose, or used as a general purpose pharmacokinetic model for xenobiotics. PMID:27636091
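At the whole-body level such a framework reduces to a small compartmental ODE system. Below is a minimal PBPK-style sketch in Python; the first-order rate constants and compartments are hypothetical illustrations, not the calibrated acetaminophen model.

```python
from scipy.integrate import solve_ivp

# Hypothetical first-order rate constants (1/h); not the calibrated values.
ka, k_sulf, k_gluc = 1.5, 0.2, 0.4

def rhs(t, y):
    # Compartments: gut lumen, plasma, sulfate metabolite, glucuronide metabolite.
    gut, plasma, sulfate, glucuronide = y
    return [-ka * gut,
            ka * gut - (k_sulf + k_gluc) * plasma,
            k_sulf * plasma,
            k_gluc * plasma]

dose_mg = 1000.0                                # oral dose placed in the gut
sol = solve_ivp(rhs, (0.0, 12.0), [dose_mg, 0.0, 0.0, 0.0], max_step=0.05)
print("peak plasma amount (mg):", sol.y[1].max())
```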
Supercomputers ready for use as discovery machines for neuroscience.
Helias, Moritz; Kunkel, Susanne; Masumoto, Gen; Igarashi, Jun; Eppler, Jochen Martin; Ishii, Shin; Fukai, Tomoki; Morrison, Abigail; Diesmann, Markus
2012-01-01
NEST is a widely used tool to simulate biological spiking neural networks. Here we explain the improvements, guided by a mathematical model of memory consumption, that enable us to exploit for the first time the computational power of the K supercomputer for neuroscience. Multi-threaded components for wiring and simulation combine 8 cores per MPI process to achieve excellent scaling. K is capable of simulating networks corresponding to a brain area with 10^8 neurons and 10^12 synapses in the worst-case scenario of random connectivity; for larger networks of the brain its hierarchical organization can be exploited to constrain the number of communicating computer nodes. We discuss the limits of the software technology, comparing maximum filling scaling plots for K and the JUGENE BG/P system. The usability of these machines for network simulations has become comparable to running simulations on a single PC. Turn-around times in the range of minutes even for the largest systems enable a quasi-interactive working style and render simulations on this scale a practical tool for computational neuroscience. PMID:23129998
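The memory-consumption model that guides such work can be illustrated with a back-of-the-envelope estimator; the coefficients below are hypothetical, and the published NEST model separates several infrastructure terms that are lumped into a single base term here.

```python
# Hypothetical per-object coefficients (GB); illustrative only.
def memory_per_process_gb(n_neurons, syn_per_neuron, n_procs,
                          b_base=2.0, b_neuron=1.0e-6, b_synapse=4.0e-8):
    local_neurons = n_neurons / n_procs              # neurons owned by one process
    local_synapses = local_neurons * syn_per_neuron  # synapses stored postsynaptically
    return b_base + b_neuron * local_neurons + b_synapse * local_synapses

# Worst-case random network from the abstract: 1e8 neurons with 1e4 synapses
# each, on ~8.3e4 MPI processes (8 cores per process on the K computer).
print(memory_per_process_gb(1e8, 1e4, n_procs=82_944))
```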
Komini Babu, S.; Chung, H. T.; Wu, G.; ...
2014-08-18
This paper reports the development of a model for simulating polymer electrolyte fuel cells (PEFCs) with non-precious metal catalyst (NPMC) cathodes. NPMCs present an opportunity to dramatically reduce the cost of PEFC electrodes by removing the costly Pt catalyst. To address the significant transport losses in thick NPMC cathodes (ca. >60 µm), we developed a hierarchical electrode model that resolves the unique structure of the NPMCs we studied. A unique feature of the approach is the integration of the model with morphology data extracted from nano-scale resolution X-ray computed tomography (nano-CT) imaging of the electrodes. A notable finding is the impact of liquid water accumulation in the electrode and the significant performance improvement possible if electrode flooding is mitigated.
NCI HPC Scaling and Optimisation in Climate, Weather, Earth system science and the Geosciences
NASA Astrophysics Data System (ADS)
Evans, B. J. K.; Bermous, I.; Freeman, J.; Roberts, D. S.; Ward, M. L.; Yang, R.
2016-12-01
The Australian National Computational Infrastructure (NCI) has a national focus in the Earth system sciences including climate, weather, ocean, water management, environment and geophysics. NCI leads a Program across its partners from the Australian science agencies and research communities to identify priority computational models to scale up. Typically, these cases place a large overall demand on the available computer time, need to scale to higher resolutions, make heavy use of scarce resources such as large memory or bandwidth, or, in some cases, need to meet requirements for transition to a separate operational forecasting system with set time windows. The model codes include the UK Met Office Unified Model atmospheric model (UM), GFDL's Modular Ocean Model (MOM), both the UK Met Office's GC3 and Australian ACCESS coupled-climate systems (including sea ice), 4D-Var data assimilation and satellite processing, the Regional Ocean Model (ROMS), and WaveWatch3, as well as geophysics codes covering hazards, magnetotellurics, seismic inversions, and geodesy. Many of these codes use significant compute resources, both for research applications and within the operational systems. Some of these models are particularly complex, and their behaviour had not been critically analysed for effective use of the NCI supercomputer or for how they could be improved. As part of the Program, we have established a common profiling methodology that uses a suite of open source tools for performing scaling analyses. The most challenging cases are profiling multi-model coupled systems, where the component models have their own complex algorithms and performance issues. We have also found issues within the current suite of profiling tools, and no single tool fully exposes the nature of the code performance. As a result of this work, international collaborations are now in place to ensure that improvements are incorporated within the community models, and our effort can be targeted in a coordinated way. This coordination has involved user stakeholders, the model developer community, and dependent software libraries. For example, we have spent significant time characterising I/O scalability and improving the use of libraries such as NetCDF and HDF5.
Spencer, T J; Hidalgo-Bastida, L A; Cartmell, S H; Halliday, I; Care, C M
2013-04-01
Computer simulations can potentially be used to design, predict, and inform properties for tissue engineering perfusion bioreactors. In this work, we investigate the flow properties that result from a particular poly-L-lactide porous scaffold and a particular choice of perfusion bioreactor vessel design used in bone tissue engineering. We also propose a model to investigate the dynamic seeding properties, such as the homogeneity (or lack thereof) of the cellular distribution within the scaffold of the perfusion bioreactor: a pre-requisite for the subsequent successful uniform growth of a viable bone tissue engineered construct. Flows inside geometrically complex scaffolds have been investigated previously and results shown at these pore scales. Here, our aim is to show, through the use of modern high-performance computers, that the bioreactor device scale enclosing a scaffold can affect the flows and stresses within the pores throughout the scaffold, which has implications for bioreactor design, control, and use. Central to this work is that the boundary conditions are derived from micro computed tomography scans of both a device chamber and scaffold in order to avoid generalizations and uncertainties. Dynamic seeding methods have also been shown to provide certain advantages over static seeding methods. We propose here a novel coupled model for dynamic seeding accounting for flow, species mass transport and cell advection-diffusion-attachment tuned for bone tissue engineering. The model highlights the timescale differences between different species, suggesting that traditional homogeneous porous flow models of transport must be applied with caution to perfusion bioreactors. Our in silico data illustrate the extent to which these experiments have the potential to contribute to future design and development of large-scale bioreactors. Copyright © 2012 Wiley Periodicals, Inc.
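The advection-diffusion-attachment transport that such a seeding model couples to the flow can be sketched in one dimension. The explicit finite-difference scheme and parameter values below are hypothetical stand-ins, not the authors' lattice-Boltzmann implementation; the point is the competing timescales of advection, diffusion, and attachment.

```python
import numpy as np

# 1-D sketch: u_t + v u_x = D u_xx - k_a u for suspended cells u, with the
# attached-cell field obeying a_t = k_a u. All parameter values hypothetical.
nx, L = 200, 0.01                    # 1 cm scaffold length
dx = L / nx
v, D, ka = 1e-4, 1e-9, 1e-2          # m/s, m^2/s, 1/s
dt = 0.4 * min(dx / v, dx * dx / (2 * D))   # respect both stability limits

u = np.zeros(nx); u[0] = 1.0         # normalized inlet concentration
a = np.zeros(nx)
for _ in range(20_000):
    ux = (u - np.roll(u, 1)) / dx                          # upwind advection
    uxx = (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / dx**2 # diffusion
    u = u + dt * (-v * ux + D * uxx - ka * u)
    a += dt * ka * u                                       # deposited cells
    u[0] = 1.0; u[-1] = u[-2]                              # inlet Dirichlet, outflow
print("attached cells, near inlet vs near outlet:", a[1], a[-2])
```

The decay of the deposited profile along the scaffold illustrates the seeding-homogeneity question the model is built to address.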
Schwarz-Christoffel Conformal Mapping based Grid Generation for Global Oceanic Circulation Models
NASA Astrophysics Data System (ADS)
Xu, Shiming
2015-04-01
We propose new grid generation algorithms for global ocean general circulation models (OGCMs). In contrast to conventional dipolar or tripolar grids based on analytical forms, the new algorithms are based on Schwarz-Christoffel (SC) conformal mapping with prescribed boundary information. While dealing with the conventional grid design problem of pole relocation, the approach also addresses more advanced issues of computational efficiency and the new requirements on OGCM grids arising from the recent trend of high-resolution and multi-scale modeling. The proposed grid generation algorithm could potentially achieve the alignment of grid lines to coastlines, enhanced spatial resolution in coastal regions, and easier computational load balance. Since the generated grids are still orthogonal curvilinear, they can be readily utilized in existing Bryan-Cox-Semtner type ocean models. The proposed methodology can also be applied to the grid generation task for regional ocean modeling when a complex land-ocean distribution is present.
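For a flavor of the underlying machinery, here is a minimal numerical evaluation of the SC integral mapping the unit disk to a square. The prevertices and quadrature are the textbook setup, not the authors' boundary-fitted grid algorithm.

```python
import numpy as np

# Prevertices on the unit circle for a square; interior angles of pi/2 give
# exponents alpha_k - 1 = -1/2 in the Schwarz-Christoffel product.
prevertices = np.exp(1j * np.pi * (2 * np.arange(4) + 1) / 4)

def sc_map(z, n=4001):
    # f(z) = int_0^z prod_k (1 - zeta/z_k)^(-1/2) dzeta, integrated along the
    # straight ray 0 -> z by the trapezoidal rule (adequate away from prevertices).
    t = np.linspace(0.0, 1.0, n)
    zeta = t[:, None] * z
    fp = np.prod((1.0 - zeta / prevertices) ** -0.5, axis=1)
    dt = t[1] - t[0]
    return z * dt * (0.5 * fp[0] + fp[1:-1].sum() + 0.5 * fp[-1])

# Interior disk points map into a square-like domain; rays toward the
# prevertices head toward its corners.
print(sc_map(0.8), sc_map(0.8j), sc_map(0.4 + 0.4j))
```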
Prototype Biology-Based Radiation Risk Module Project
NASA Technical Reports Server (NTRS)
Terrier, Douglas; Clayton, Ronald G.; Patel, Zarana; Hu, Shaowen; Huff, Janice
2015-01-01
Biological effects of space radiation and risk mitigation are strategic knowledge gaps for the Evolvable Mars Campaign. The current epidemiology-based NASA Space Cancer Risk (NSCR) model contains large uncertainties (HAT #6.5a) due to lack of information on the radiobiology of galactic cosmic rays (GCR) and lack of human data. The use of experimental models that most accurately replicate the response of human tissues is critical for precision in risk projections. Our proposed study will compare DNA damage, histological, and cell kinetic parameters after irradiation in normal 2D human cells versus 3D tissue models, and it will use a multi-scale computational model (CHASTE) to investigate various biological processes that may contribute to carcinogenesis, including radiation-induced cellular signaling pathways. This cross-disciplinary work, with biological validation of an evolvable mathematical computational model, will help reduce uncertainties within NSCR and aid risk mitigation for radiation-induced carcinogenesis.
Application of the actor model to large scale NDE data analysis
NASA Astrophysics Data System (ADS)
Coughlin, Chris
2018-03-01
The Actor model of concurrent computation discretizes a problem into a series of independent units or actors that interact only through the exchange of messages. Without direct coupling between individual components, an Actor-based system is inherently concurrent and fault-tolerant. These traits lend themselves to so-called "Big Data" applications in which the volume of data to analyze requires a distributed multi-system design. For a practical demonstration of the Actor computational model, a system was developed to assist with the automated analysis of Nondestructive Evaluation (NDE) datasets using the open source Myriad Data Reduction Framework. A machine learning model trained to detect damage in two-dimensional slices of C-Scan data was deployed in a streaming data processing pipeline. To demonstrate the flexibility of the Actor model, the pipeline was deployed on a local system and re-deployed as a distributed system without recompiling, reconfiguring, or restarting the running application.
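A minimal Python rendering of the Actor pattern described above; the class names, the threshold rule, and the queue-backed mailboxes are illustrative stand-ins, not the Myriad framework's API.

```python
import threading
import queue

class Actor(threading.Thread):
    """Minimal actor: private state plus a mailbox; interaction via messages only."""
    def __init__(self):
        super().__init__(daemon=True)
        self.mailbox = queue.Queue()

    def send(self, msg):
        self.mailbox.put(msg)

    def run(self):
        while True:
            msg = self.mailbox.get()
            if msg is None:            # poison pill shuts the actor down
                return
            self.receive(msg)

class SliceAnalyzer(Actor):
    def __init__(self, sink):
        super().__init__()
        self.sink = sink

    def receive(self, cscan_slice):
        # Stand-in for the trained damage detector on one 2-D C-scan slice.
        if max(cscan_slice) > 0.9:
            self.sink.send(cscan_slice)

class Reporter(Actor):
    def receive(self, flagged):
        print("possible damage:", flagged)

reporter = Reporter(); analyzer = SliceAnalyzer(reporter)
reporter.start(); analyzer.start()
for s in ([0.1, 0.2, 0.3], [0.2, 0.95, 0.4]):   # streaming input
    analyzer.send(s)
analyzer.send(None); analyzer.join()
reporter.send(None); reporter.join()
```

Because the actors share nothing and communicate only through mailboxes, the same pipeline can be redistributed across machines without changing the actor logic, which is the portability property the abstract highlights.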
Numerical modeling of landslide-generated tsunami using adaptive unstructured meshes
NASA Astrophysics Data System (ADS)
Wilson, Cian; Collins, Gareth; Desousa Costa, Patrick; Piggott, Matthew
2010-05-01
Landslides impacting into or occurring under water generate waves, which can have devastating environmental consequences. Depending on the characteristics of the landslide the waves can have significant amplitude and potentially propagate over large distances. Linear models of classical earthquake-generated tsunamis cannot reproduce the highly nonlinear generation mechanisms required to accurately predict the consequences of landslide-generated tsunamis. Also, laboratory-scale experimental investigation is limited to simple geometries and short time-scales before wave reflections contaminate the data. Computational fluid dynamics models based on the nonlinear Navier-Stokes equations can simulate landslide-tsunami generation at realistic scales. However, traditional chessboard-like structured meshes introduce superfluous resolution and hence the computing power required for such a simulation can be prohibitively high, especially in three dimensions. Unstructured meshes allow the grid spacing to vary rapidly from high resolution in the vicinity of small scale features to much coarser, lower resolution in other areas. Combining this variable resolution with dynamic mesh adaptivity allows such high resolution zones to follow features like the interface between the landslide and the water whilst minimising the computational costs. Unstructured meshes are also better suited to representing complex geometries and bathymetries allowing more realistic domains to be simulated. Modelling multiple materials, like water, air and a landslide, on an unstructured adaptive mesh poses significant numerical challenges. Novel methods of interface preservation must be considered and coupled to a flow model in such a way that ensures conservation of the different materials. Furthermore this conservation property must be maintained during successive stages of mesh optimisation and interpolation. In this paper we validate a new multi-material adaptive unstructured fluid dynamics model against the well-known Lituya Bay landslide-generated wave experiment and case study [1]. In addition, we explore the effect of physical parameters, such as the shape, velocity and viscosity of the landslide, on wave amplitude and run-up, to quantify their influence on the landslide-tsunami hazard. As well as reproducing the experimental results, the model is shown to have excellent conservation and bounding properties. It also requires fewer nodes than an equivalent resolution fixed mesh simulation, therefore minimising at least one aspect of the computational cost. These computational savings are directly transferable to higher dimensions and some initial three dimensional results are also presented. These reproduce the experiments of DiRisio et al. [2], where an 80cm long landslide analogue was released from the side of an 8.9m diameter conical island in a 50 × 30m tank of water. The resulting impact between the landslide and the water generated waves with an amplitude of 1cm at wave gauges around the island. The range of scales that must be considered in any attempt to numerically reproduce this experiment makes it an ideal case study for our multi-material adaptive unstructured fluid dynamics model. [1] FRITZ, H. M., MOHAMMED, F., & YOO, J. 2009. Lituya Bay Landslide Impact Generated Mega-Tsunami 50th Anniversary. Pure and Applied Geophysics, 166(1), 153-175. [2] DIRISIO, M., DEGIROLAMO, P., BELLOTTI, G., PANIZZO, A., ARISTODEMO, F.,
Construction of multi-scale consistent brain networks: methods and applications.
Ge, Bao; Tian, Yin; Hu, Xintao; Chen, Hanbo; Zhu, Dajiang; Zhang, Tuo; Han, Junwei; Guo, Lei; Liu, Tianming
2015-01-01
Mapping human brain networks provides a basis for studying brain function and dysfunction, and thus has gained significant interest in recent years. However, modeling human brain networks still faces several challenges including constructing networks at multiple spatial scales and finding common corresponding networks across individuals. As a consequence, many previous methods were designed for a single resolution or scale of brain network, though the brain networks are multi-scale in nature. To address this problem, this paper presents a novel approach to constructing multi-scale common structural brain networks from DTI data via an improved multi-scale spectral clustering applied on our recently developed and validated DICCCOLs (Dense Individualized and Common Connectivity-based Cortical Landmarks). Since the DICCCOL landmarks possess intrinsic structural correspondences across individuals and populations, we employed the multi-scale spectral clustering algorithm to group the DICCCOL landmarks and their connections into sub-networks, meanwhile preserving the intrinsically-established correspondences across multiple scales. Experimental results demonstrated that the proposed method can generate multi-scale consistent and common structural brain networks across subjects, and its reproducibility has been verified by multiple independent datasets. As an application, these multi-scale networks were used to guide the clustering of multi-scale fiber bundles and to compare the fiber integrity in schizophrenia and healthy controls. In general, our methods offer a novel and effective framework for brain network modeling and tract-based analysis of DTI data.
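Plain spectral clustering applied at several cluster counts conveys the flavor of the approach. The random affinity matrix below is a stand-in for DICCCOL-based connectivity, and this sketch omits the paper's key ingredient, preservation of correspondences across scales and subjects.

```python
import numpy as np
from sklearn.cluster import SpectralClustering

rng = np.random.default_rng(1)
n = 60                                   # stand-in for DICCCOL landmarks
A = rng.random((n, n))
A = (A + A.T) / 2.0                      # symmetric, nonnegative "connectivity"
np.fill_diagonal(A, 0.0)

labels = {}
for k in (4, 8, 16):                     # coarse-to-fine network scales
    sc = SpectralClustering(n_clusters=k, affinity="precomputed",
                            assign_labels="discretize", random_state=0)
    labels[k] = sc.fit_predict(A)
print({k: np.bincount(v) for k, v in labels.items()})   # sub-network sizes
```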
Modeling and Density Estimation of an Urban Freeway Network Based on Dynamic Graph Hybrid Automata
Chen, Yangzhou; Guo, Yuqi; Wang, Ying
2017-01-01
In this paper, in order to describe complex network systems, we first propose a general modeling framework combining a dynamic graph with hybrid automata, which we name Dynamic Graph Hybrid Automata (DGHA). Then we apply this framework to model traffic flow over an urban freeway network by embedding the Cell Transmission Model (CTM) into the DGHA. In the modeling procedure, we adopt a dual digraph of the road network structure to describe the road topology, use linear hybrid automata to describe the multi-modes of dynamic densities in road segments, and transform the nonlinear expressions of the transmitted traffic flow between two road segments into piecewise linear functions in terms of multi-mode switchings. This modeling procedure is modularized and rule-based, and thus is easily extensible with the help of a combination algorithm for the dynamics of traffic flow. It can describe the dynamics of traffic flow over an urban freeway network with arbitrary topology structures and sizes. Next we analyze the mode types and number in the model of the whole freeway network, and deduce a Piecewise Affine Linear System (PWALS) model. Furthermore, based on the PWALS model, a multi-mode switched state observer is designed to estimate the traffic densities of the freeway network, where a set of observer gain matrices are computed by using the Lyapunov function approach. As an example, we apply the PWALS model and the corresponding switched state observer to traffic flow over Beijing's third ring road. In order to clearly interpret the principle of the proposed method and avoid computational complexity, we adopt a simplified version of the third ring road. Practical application to a large-scale road network will be implemented via a decentralized modeling approach and distributed observer design in future research. PMID:28353664
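A minimal Cell Transmission Model update for a single link shows the demand/supply flux logic that the DGHA embeds; the triangular fundamental diagram parameters below are hypothetical, and the switched-observer machinery is omitted.

```python
import numpy as np

# One freeway link split into 20 cells; hypothetical triangular fundamental diagram.
v, w = 100.0, 20.0               # free-flow and backward wave speeds (km/h)
rho_jam, q_max = 150.0, 2000.0   # jam density (veh/km), capacity (veh/h)
dx = 0.5                         # cell length (km)
dt = dx / v                      # CFL-limited time step (h)

def demand(rho): return np.minimum(v * rho, q_max)
def supply(rho): return np.minimum(w * (rho_jam - rho), q_max)

rho = np.full(20, 30.0)          # initial density (veh/km)
inflow = 1500.0                  # upstream demand (veh/h)
for _ in range(int(1.0 / dt)):   # simulate one hour
    d, s = demand(rho), supply(rho)
    f_in = np.minimum(np.append(inflow, d[:-1]), s)   # flux entering each cell
    f_out = np.append(f_in[1:], d[-1])                # flux leaving each cell
    rho += (dt / dx) * (f_in - f_out)                 # conservation update
print(rho.round(1))
```

The min(demand, supply) flux is exactly the piecewise linear nonlinearity that the paper converts into multi-mode switchings.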
Center for Technology for Advanced Scientific Component Software (TASCS)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kostadin, Damevski
A resounding success of the Scientific Discovery through Advanced Computing (SciDAC) program is that high-performance computational science is now universally recognized as a critical aspect of scientific discovery [71], complementing both theoretical and experimental research. As scientific communities prepare to exploit the unprecedented computing capabilities of emerging leadership-class machines for multi-model simulations at the extreme scale [72], it is more important than ever to address the technical and social challenges of geographically distributed teams that combine expertise in domain science, applied mathematics, and computer science to build robust and flexible codes that can incorporate changes over time. The Center for Technology for Advanced Scientific Component Software (TASCS) tackles these issues by exploiting component-based software development to facilitate collaborative high-performance scientific computing.
Pathmanathan, Pras; Shotwell, Matthew S; Gavaghan, David J; Cordeiro, Jonathan M; Gray, Richard A
2015-01-01
Perhaps the most mature area of multi-scale systems biology is the modelling of the heart. Current models are grounded in over fifty years of research in the development of biophysically detailed models of the electrophysiology (EP) of cardiac cells, but one aspect which is inadequately addressed is the incorporation of uncertainty and physiological variability. Uncertainty quantification (UQ) is the identification and characterisation of the uncertainty in model parameters derived from experimental data, and the computation of the resultant uncertainty in model outputs. It is a necessary tool for establishing the credibility of computational models, and will likely be expected of EP models for future safety-critical clinical applications. The focus of this paper is formal UQ of one major sub-component of cardiac EP models, the steady-state inactivation of the fast sodium current, INa. To better capture average behaviour and quantify variability across cells, we have applied for the first time an 'individual-based' statistical methodology to assess voltage clamp data. Advantages of this approach over a more traditional 'population-averaged' approach are highlighted. The method was used to characterise variability amongst cells isolated from canine epicardium and endocardium, and this variability was then 'propagated forward' through a canine model to determine the resultant uncertainty in model predictions at different scales, such as of upstroke velocity and spiral wave dynamics. Statistically significant differences between epicardial and endocardial cells (a greater half-inactivation voltage and a less steep slope of the steady-state inactivation curve for endocardium) were observed, and the forward propagation revealed a lack of robustness of the model to the underlying variability, but also surprising robustness to variability at the tissue scale. Overall, the methodology can be used to: (i) better analyse voltage clamp data; (ii) characterise underlying population variability; (iii) investigate consequences of variability; and (iv) improve the ability to validate a model. To our knowledge this article is the first to quantify population variability in membrane dynamics in this manner, and the first to perform formal UQ for a component of a cardiac model. The approach is likely to find much wider applicability across systems biology as current application domains reach greater levels of maturity. Published by Elsevier Ltd.
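Steady-state inactivation has a standard Boltzmann form, so the per-cell ('individual-based') fitting step can be sketched directly. The synthetic cells, noise level, and parameter spread below are illustrative, not the canine data.

```python
import numpy as np
from scipy.optimize import curve_fit

def h_inf(V, V_half, k):
    # Steady-state inactivation of I_Na as a Boltzmann function of voltage.
    return 1.0 / (1.0 + np.exp((V - V_half) / k))

V = np.arange(-120.0, -40.0, 10.0)        # clamp potentials (mV)
rng = np.random.default_rng(2)
fits = []
for _ in range(10):                       # fit each "cell" individually
    true = (-75 + 3 * rng.standard_normal(), 5 + 0.5 * rng.standard_normal())
    y = h_inf(V, *true) + 0.02 * rng.standard_normal(V.size)
    popt, _ = curve_fit(h_inf, V, y, p0=(-70.0, 6.0))
    fits.append(popt)
fits = np.array(fits)
print("V_half mean/sd:", fits[:, 0].mean(), fits[:, 0].std())
print("slope  mean/sd:", fits[:, 1].mean(), fits[:, 1].std())
```

The per-cell parameter distribution, rather than a single population-averaged curve, is what gets propagated forward through the cell and tissue models.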
Suzuki, Yuma; Shimizu, Tetsuhide; Yang, Ming
2017-01-01
The quantitative evaluation of biomolecule transport with multi-physics at the nano/micro scale is needed in order to optimize the design of microfluidic devices for biomolecule detection with high sensitivity and rapid diagnosis. This paper investigates the effectiveness of computational simulation, using a numerical model of biomolecule transport with multi-physics near a microchannel surface, for the development of biomolecule-detection devices. Biomolecule transport under fluid drag force, electric double layer (EDL) force, and van der Waals force was modeled with the Newtonian equation of motion. The model's validity was verified by comparing the influence of ion strength and flow velocity on the biomolecule distribution near the surface with experimental results from previous studies. The influence of the acting forces on the distribution near the surface was investigated by simulation. With all acting forces combined, the simulated trend of the distribution with ion strength and flow velocity agreed with the experimental results. Furthermore, the EDL force dominated the distribution near the surface relative to the fluid drag force, except in the case of high velocity and low ion strength. The knowledge gained from the simulation may be useful for the design of biomolecule-detection devices, and the simulation can be expected to serve as a design tool for high detection sensitivity and rapid diagnosis in the future.
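A single-particle sketch of the force balance described above, in an overdamped (drag-dominated) form rather than the paper's full Newtonian equation of motion. The EDL prefactor, Hamaker constant, and drag coefficient are hypothetical; real values follow from the solution chemistry and particle size.

```python
import numpy as np

# Hypothetical parameters; illustrative orders of magnitude only.
kappa = 1e8          # inverse Debye length (1/m), set by ionic strength
A_h = 1e-20          # Hamaker constant (J)
R = 5e-9             # particle radius (m)
gamma = 1e-10        # Stokes drag coefficient (kg/s)
v_flow = 1e-4        # streamwise fluid velocity near the wall (m/s)
F0_edl = 1e-11       # EDL force prefactor (N)

def wall_force(h):
    f_edl = F0_edl * np.exp(-kappa * h)     # screened electrostatic repulsion
    f_vdw = -A_h * R / (6.0 * h ** 2)       # sphere-plate van der Waals attraction
    return f_edl + f_vdw

h, x, dt = 50e-9, 0.0, 1e-7
for _ in range(200_000):                    # overdamped wall-normal update
    h = max(h + dt * wall_force(h) / gamma, 1e-9)
    x += dt * v_flow                        # carried along by the flow
print("standoff (nm):", h * 1e9, " travel (um):", x * 1e6)
```

Raising the ionic strength (larger kappa) shortens the EDL screening length, which is the mechanism behind the ion-strength dependence the paper validates.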
NASA Technical Reports Server (NTRS)
Putnam, William M.
2011-01-01
Earth system models like the Goddard Earth Observing System model (GEOS-5) have been pushing the limits of large clusters of multi-core microprocessors, producing breathtaking fidelity in resolving cloud systems at a global scale. GPU computing presents an opportunity for improving the efficiency of these leading-edge models. A GPU implementation of GEOS-5 will facilitate the use of cloud-system-resolving resolutions in data assimilation and weather prediction, at resolutions near 3.5 km, improving our ability to extract detailed information from high-resolution satellite observations and ultimately producing better weather and climate predictions.
WOMBAT: A Scalable and High-performance Astrophysical Magnetohydrodynamics Code
NASA Astrophysics Data System (ADS)
Mendygral, P. J.; Radcliffe, N.; Kandalla, K.; Porter, D.; O'Neill, B. J.; Nolting, C.; Edmon, P.; Donnert, J. M. F.; Jones, T. W.
2017-02-01
We present a new code for astrophysical magnetohydrodynamics specifically designed and optimized for high performance and scaling on modern and future supercomputers. We describe a novel hybrid OpenMP/MPI programming model that emerged from a collaboration between Cray, Inc. and the University of Minnesota. This design utilizes MPI-RMA optimized for thread scaling, which allows the code to run extremely efficiently at very high thread counts ideal for the latest generation of multi-core and many-core architectures. Such performance characteristics are needed in the era of “exascale” computing. We describe and demonstrate our high-performance design in detail with the intent that it may be used as a model for other, future astrophysical codes intended for applications demanding exceptional performance.
High-resolution time-frequency representation of EEG data using multi-scale wavelets
NASA Astrophysics Data System (ADS)
Li, Yang; Cui, Wei-Gang; Luo, Mei-Lin; Li, Ke; Wang, Lina
2017-09-01
An efficient time-varying autoregressive (TVAR) modelling scheme that expands the time-varying parameters onto multi-scale wavelet basis functions is presented for modelling nonstationary signals, with applications to time-frequency analysis (TFA) of electroencephalogram (EEG) signals. In the new parametric modelling framework, the time-dependent parameters of the TVAR model are locally represented using a novel multi-scale wavelet decomposition scheme, which can capture the smooth trends of time-varying parameters while simultaneously tracking their abrupt changes. A forward orthogonal least squares (FOLS) algorithm, aided by mutual information criteria, is then applied for sparse model term selection and parameter estimation. Two simulation examples illustrate that the proposed multi-scale wavelet basis functions outperform single-scale wavelet basis functions and the Kalman filter algorithm for many nonstationary processes. Furthermore, an application of the proposed method to a real EEG signal demonstrates that the new approach can provide highly time-dependent spectral resolution capability.
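The core idea, expanding each time-varying AR coefficient on basis functions and solving a single regression, can be sketched compactly. Hat functions and ordinary least squares below stand in for the paper's multi-scale wavelets and FOLS term selection.

```python
import numpy as np

def tvar_fit(y, order=2, n_basis=8):
    # Expand each time-varying AR coefficient a_i(t) on local basis functions
    # B_j(t), so y_t = sum_i (sum_j c_ij B_j(t)) y_{t-i} + e_t becomes one
    # linear least-squares problem in the c_ij.
    N = len(y)
    t = np.linspace(0.0, 1.0, N)
    centers = np.linspace(0.0, 1.0, n_basis)
    B = np.maximum(1.0 - np.abs(t[None, :] - centers[:, None]) * (n_basis - 1), 0.0)
    cols = [y[order - i:N - i] * B[j, order:]
            for i in range(1, order + 1) for j in range(n_basis)]
    coef, *_ = np.linalg.lstsq(np.column_stack(cols), y[order:], rcond=None)
    return coef.reshape(order, n_basis), B

rng = np.random.default_rng(0)
y = rng.standard_normal(600)
for k in range(2, 600):                      # synthetic nonstationary AR(2)
    y[k] += (1.6 * k / 600 - 0.8) * y[k - 1] - 0.5 * y[k - 2]
C, B = tvar_fit(y)
a1_hat = C[0] @ B                            # recovered drifting coefficient
print(a1_hat[50], a1_hat[550])               # roughly -0.7 and +0.7
```

A time-frequency representation then follows by evaluating the AR spectrum at each instant from the recovered a_i(t).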
Modeling of Multi-Tube Pulse Detonation Engine Operation
NASA Technical Reports Server (NTRS)
Ebrahimi, Houshang B.; Mohanraj, Rajendran; Merkle, Charles L.
2001-01-01
The present paper explores some preliminary issues concerning the operational characteristics of multiple-tube pulsed detonation engines (PDEs). The study is based on a two-dimensional analysis of the first-pulse operation of two detonation tubes exhausting through a common nozzle. Computations are first performed to assess isolated tube behavior, followed by results for multi-tube flow phenomena. The computations are based on an eight-species, finite-rate transient flow-field model. The results serve as an important precursor to understanding appropriate propellant fill procedures and shock wave propagation in multi-tube, multi-dimensional simulations. Differences in behavior between single and multi-tube PDE models are discussed. The influence of multi-tube geometry and the preferred times for injecting the fresh propellant mixture during multi-tube PDE operation are studied.
NASA Astrophysics Data System (ADS)
Breinl, Korbinian; Di Baldassarre, Giuliano; Girons Lopez, Marc
2017-04-01
We assess uncertainties of multi-site rainfall generation across spatial scales and different climatic conditions. Many research subjects in earth sciences such as floods, droughts or water balance simulations require the generation of long rainfall time series. In large study areas the simulation at multiple sites becomes indispensable to account for the spatial rainfall variability, but becomes more complex compared to a single site due to the intermittent nature of rainfall. Weather generators can be used for extrapolating rainfall time series, and various models have been presented in the literature. Even though the large majority of multi-site rainfall generators is based on similar methods, such as resampling techniques or Markovian processes, they often become too complex. We think that this complexity has been a limit for the application of such tools. Furthermore, the majority of multi-site rainfall generators found in the literature are either not publicly available or intended for being applied at small geographical scales, often only in temperate climates. Here we present a revised, and now publicly available, version of a multi-site rainfall generation code first applied in 2014 in Austria and France, which we call TripleM (Multisite Markov Model). We test this fast and robust code with daily rainfall observations from the United States, in a subtropical, tropical and temperate climate, using rain gauge networks with a maximum site distance above 1,000km, thereby generating one million years of synthetic time series. The modelling of these one million years takes one night on a recent desktop computer. In this research, we first start the simulations with a small station network of three sites and progressively increase the number of sites and the spatial extent, and analyze the changing uncertainties for multiple statistical metrics such as dry and wet spells, rainfall autocorrelation, lagged cross correlations and the inter-annual rainfall variability. Our study contributes to the scientific community of earth sciences and the ongoing debate on extreme precipitation in a changing climate by making a stable, and very easily applicable, multi-site rainfall generation code available to the research community and providing a better understanding of the performance of multi-site rainfall generation depending on spatial scales and climatic conditions.
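A two-site sketch of the occurrence/amount structure common to such generators: first-order Markov chains for wet/dry occurrence, coupled across sites through correlated Gaussian noise, with gamma-distributed wet-day amounts. The transition probabilities, correlation, and gamma parameters are hypothetical, and this is not the TripleM code.

```python
import numpy as np
from scipy.stats import norm, gamma

rng = np.random.default_rng(0)
p01, p11 = 0.3, 0.7                 # P(wet | dry), P(wet | wet), hypothetical
rho = 0.6                           # inter-site occurrence dependence
L = np.linalg.cholesky(np.array([[1.0, rho], [rho, 1.0]]))

def simulate(n_days):
    wet = np.zeros(2, dtype=bool)
    rain = np.zeros((n_days, 2))
    for d in range(n_days):
        u = norm.cdf(L @ rng.standard_normal(2))   # correlated uniforms
        wet = u < np.where(wet, p11, p01)          # Markov occurrence step
        amounts = gamma.rvs(0.8, scale=8.0, size=2, random_state=rng)
        rain[d] = np.where(wet, amounts, 0.0)
    return rain

r = simulate(365 * 20)
print("wet-day frequency:", (r > 0).mean(axis=0))
print("mean daily rainfall:", r.mean(axis=0))
```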
Alnaggar, Mohammed; Di Luzio, Giovanni; Cusatis, Gianluca
2017-04-28
Alkali Silica Reaction (ASR) is known to be a serious problem for concrete worldwide, especially in high humidity and high temperature regions. ASR is a slow process that develops over years to decades and it is influenced by changes in environmental and loading conditions of the structure. The problem becomes even more complicated if one recognizes that other phenomena like creep and shrinkage are coupled with ASR. This results in synergistic mechanisms that cannot be easily understood without a comprehensive computational model. In this paper, coupling between creep, shrinkage and ASR is modeled within the Lattice Discrete Particle Model (LDPM) framework. In order to achieve this, a multi-physics formulation is used to compute the evolution of temperature, humidity, cement hydration, and ASR in both space and time, which is then used within physics-based formulations of cracking, creep and shrinkage. The overall model is calibrated and validated on the basis of experimental data available in the literature. Results show that even during free expansions (zero macroscopic stress), a significant degree of coupling exists because ASR-induced expansions are relaxed by meso-scale creep driven by self-equilibrated stresses at the meso-scale. This explains and highlights the importance of considering ASR and other time-dependent aging and deterioration phenomena at an appropriate length scale in coupled modeling approaches. PMID:28772829
Downscaling modelling system for multi-scale air quality forecasting
NASA Astrophysics Data System (ADS)
Nuterman, R.; Baklanov, A.; Mahura, A.; Amstrup, B.; Weismann, J.
2010-09-01
Urban modelling for real meteorological situations, in general, considers only a small part of the urban area in a micro-meteorological model, and urban heterogeneities outside a modelling domain affect micro-scale processes. Therefore, it is important to build a chain of models of different scales with nesting of higher resolution models into larger scale lower resolution models. Usually, the up-scaled city- or meso-scale models consider parameterisations of urban effects or statistical descriptions of the urban morphology, whereas the micro-scale (street canyon) models are obstacle-resolved and consider a detailed geometry of the buildings and the urban canopy. The developed system consists of meso-, urban- and street-scale models. First, it is the Numerical Weather Prediction (HIgh Resolution Limited Area Model) model combined with the Atmospheric Chemistry Transport (Comprehensive Air quality Model with extensions) model. Several levels of urban parameterisation are considered. They are chosen depending on the selected scales and resolutions. For the regional scale, the urban parameterisation is based on the roughness and flux corrections approach; for the urban scale, the building effects parameterisation is used. Modern methods of computational fluid dynamics allow solving environmental problems connected with atmospheric transport of pollutants within the urban canopy in the presence of penetrable (vegetation) and impenetrable (buildings) obstacles. For local- and micro-scale nesting the Micro-scale Model for Urban Environment is applied. This is a comprehensive obstacle-resolved urban wind-flow and dispersion model based on the Reynolds-averaged Navier-Stokes approach and several turbulence closures, i.e. the k-ε linear eddy-viscosity model, the k-ε non-linear eddy-viscosity model and the Reynolds stress model. Boundary and initial conditions for the micro-scale model are taken from the up-scaled models with corresponding interpolation conserving the mass. For the boundaries a kind of Dirichlet condition is chosen to provide the values based on interpolation from the coarse to the fine grid. When the roughness approach is changed to the obstacle-resolved one in the nested model, the interpolation procedure will increase the computational time (due to additional iterations) for meteorological/chemical fields inside the urban sub-layer. In such situations, as a possible alternative, the perturbation approach can be applied. Here, the effects of the main meteorological variables and chemical species are considered as a sum of two components: background (large-scale) values, described by the coarse-resolution model, and perturbation (micro-scale) features, obtained from the nested fine resolution model.
Simulating and mapping spatial complexity using multi-scale techniques
De Cola, L.
1994-01-01
A central problem in spatial analysis is the mapping of data for complex spatial fields using relatively simple data structures, such as those of a conventional GIS. This complexity can be measured using such indices as multi-scale variance, which reflects spatial autocorrelation, and multi-fractal dimension, which characterizes the values of fields. These indices are computed for three spatial processes: Gaussian noise, a simple mathematical function, and data for a random walk. Fractal analysis is then used to produce a vegetation map of the central region of California based on a satellite image. This analysis suggests that real world data lie on a continuum between the simple and the random, and that a major GIS challenge is the scientific representation and understanding of rapidly changing multi-scale fields.
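Box counting is one standard way to estimate the fractal dimension of such a field; a minimal sketch (binary masks and the box sizes are illustrative, and multi-fractal analysis generalizes this to a spectrum of dimensions):

```python
import numpy as np

def box_count_dimension(mask, sizes=(2, 4, 8, 16, 32)):
    # Slope of log N(s) against log(1/s), where N(s) is the number of
    # occupied s-by-s boxes covering the set.
    counts = []
    n = min(mask.shape)
    for s in sizes:
        m = mask[: n // s * s, : n // s * s]
        blocks = m.reshape(n // s, s, -1, s).sum(axis=(1, 3))
        counts.append(int((blocks > 0).sum()))
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)), np.log(counts), 1)
    return slope

rng = np.random.default_rng(3)
field = rng.random((256, 256)) > 0.5          # dense random binary field
print(box_count_dimension(field))             # close to 2 (space filling)
```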
Reduced-Order Modeling: New Approaches for Computational Physics
NASA Technical Reports Server (NTRS)
Beran, Philip S.; Silva, Walter A.
2001-01-01
In this paper, we review the development of new reduced-order modeling techniques and discuss their applicability to various problems in computational physics. Emphasis is given to methods based on Volterra series representations and the proper orthogonal decomposition. Results are reported for different nonlinear systems to provide clear examples of the construction and use of reduced-order models, particularly in the multi-disciplinary field of computational aeroelasticity. Unsteady aerodynamic and aeroelastic behaviors of two-dimensional and three-dimensional geometries are described. Large increases in computational efficiency are obtained through the use of reduced-order models, thereby justifying the initial computational expense of constructing these models and motivating their use for multi-disciplinary design analysis.
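In its snapshot form, the proper orthogonal decomposition reduces to an SVD of a snapshot matrix. A minimal sketch on synthetic traveling-wave snapshots (the data and the 99.9% energy threshold are illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)
x = np.linspace(0.0, 1.0, 500)
# Columns are snapshots of a noisy traveling wave (stand-in for flow states).
S = np.column_stack([np.sin(2 * np.pi * (x - 0.01 * t))
                     + 0.01 * rng.standard_normal(x.size)
                     for t in range(100)])

U, sv, _ = np.linalg.svd(S, full_matrices=False)
energy = np.cumsum(sv ** 2) / np.sum(sv ** 2)
r = int(np.searchsorted(energy, 0.999)) + 1    # modes capturing 99.9% "energy"
Phi = U[:, :r]                                 # reduced basis (POD modes)
a = Phi.T @ S                                  # modal coordinates per snapshot
err = np.linalg.norm(S - Phi @ a) / np.linalg.norm(S)
print(f"{r} modes, relative reconstruction error {err:.3e}")
```

Projecting the governing equations onto Phi yields the small system whose solution cost is what delivers the efficiency gains discussed above.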
NASA Technical Reports Server (NTRS)
Birisan, Mihnea; Beling, Peter
2011-01-01
New generations of surveillance drones are being outfitted with numerous high definition cameras. The rapid proliferation of fielded sensors and supporting capacity for processing and displaying data will translate into ever more capable platforms, but with increased capability comes increased complexity and scale that may diminish the usefulness of such platforms to human operators. We investigate methods for alleviating strain on analysts by automatically retrieving content specific to their current task using a machine learning technique known as Multi-Instance Learning (MIL). We use MIL to create a real time model of the analysts' task and subsequently use the model to dynamically retrieve relevant content. This paper presents results from a pilot experiment in which a computer agent is assigned analyst tasks such as identifying caravanning vehicles in a simulated vehicle traffic environment. We compare agent performance between MIL aided trials and unaided trials.
NASA Astrophysics Data System (ADS)
Nakano, Masuo; Wada, Akiyoshi; Sawada, Masahiro; Yoshimura, Hiromasa; Onishi, Ryo; Kawahara, Shintaro; Sasaki, Wataru; Nasuno, Tomoe; Yamaguchi, Munehiko; Iriguchi, Takeshi; Sugi, Masato; Takeuchi, Yoshiaki
2017-03-01
Recent advances in high-performance computers facilitate operational numerical weather prediction by global hydrostatic atmospheric models with horizontal resolutions of ∼10 km. Given further advances in such computers and the fact that the hydrostatic balance approximation becomes invalid for spatial scales < 10 km, the development of global nonhydrostatic models with high accuracy is urgently required. The Global 7 km mesh nonhydrostatic Model Intercomparison Project for improving TYphoon forecast (TYMIP-G7) is designed to understand and statistically quantify the advantages of high-resolution nonhydrostatic global atmospheric models to improve tropical cyclone (TC) prediction. A total of 137 sets of 5-day simulations using three next-generation nonhydrostatic global models with horizontal resolutions of 7 km and a conventional hydrostatic global model with a horizontal resolution of 20 km were run on the Earth Simulator. The three 7 km mesh nonhydrostatic models are the nonhydrostatic global spectral atmospheric Double Fourier Series Model (DFSM), the Multi-Scale Simulator for the Geoenvironment (MSSG) and the Nonhydrostatic ICosahedral Atmospheric Model (NICAM). The 20 km mesh hydrostatic model is the operational Global Spectral Model (GSM) of the Japan Meteorological Agency. Compared with the 20 km mesh GSM, the 7 km mesh models reduce systematic errors in the TC track, intensity and wind radii predictions. The benefits of the multi-model ensemble method were confirmed for the 7 km mesh nonhydrostatic global models. While the three 7 km mesh models reproduce the typical axisymmetric mean inner-core structure, including the primary and secondary circulations, the simulated TC structures and their intensities in each case are very different for each model. In addition, the simulated track is not consistently better than that of the 20 km mesh GSM. These results suggest that the development of more sophisticated initialization techniques and model physics is needed to further improve the TC prediction.
NASA Technical Reports Server (NTRS)
Afjeh, Abdollah A.; Reed, John A.
2003-01-01
The following reports are presented on this project: A first year progress report on: Development of a Dynamically Configurable, Object-Oriented Framework for Distributed, Multi-modal Computational Aerospace Systems Simulation; A second year progress report on: Development of a Dynamically Configurable, Object-Oriented Framework for Distributed, Multi-modal Computational Aerospace Systems Simulation; An Extensible, Interchangeable and Sharable Database Model for Improving Multidisciplinary Aircraft Design; Interactive, Secure Web-enabled Aircraft Engine Simulation Using XML Databinding Integration; and Improving the Aircraft Design Process Using Web-based Modeling and Simulation.
Multi-GPU implementation of a VMAT treatment plan optimization algorithm.
Tian, Zhen; Peng, Fei; Folkerts, Michael; Tan, Jun; Jia, Xun; Jiang, Steve B
2015-06-01
Volumetric modulated arc therapy (VMAT) optimization is a computationally challenging problem due to its large data size, high degrees of freedom, and many hardware constraints. High-performance graphics processing units (GPUs) have been used to speed up the computations. However, GPU's relatively small memory size cannot handle cases with a large dose-deposition coefficient (DDC) matrix in cases of, e.g., those with a large target size, multiple targets, multiple arcs, and/or small beamlet size. The main purpose of this paper is to report an implementation of a column-generation-based VMAT algorithm, previously developed in the authors' group, on a multi-GPU platform to solve the memory limitation problem. While the column-generation-based VMAT algorithm has been previously developed, the GPU implementation details have not been reported. Hence, another purpose is to present detailed techniques employed for GPU implementation. The authors also would like to utilize this particular problem as an example problem to study the feasibility of using a multi-GPU platform to solve large-scale problems in medical physics. The column-generation approach generates VMAT apertures sequentially by solving a pricing problem (PP) and a master problem (MP) iteratively. In the authors' method, the sparse DDC matrix is first stored on a CPU in coordinate list format (COO). On the GPU side, this matrix is split into four submatrices according to beam angles, which are stored on four GPUs in compressed sparse row format. Computation of beamlet price, the first step in PP, is accomplished using multi-GPUs. A fast inter-GPU data transfer scheme is accomplished using peer-to-peer access. The remaining steps of PP and MP problems are implemented on CPU or a single GPU due to their modest problem scale and computational loads. Barzilai and Borwein algorithm with a subspace step scheme is adopted here to solve the MP problem. A head and neck (H&N) cancer case is then used to validate the authors' method. The authors also compare their multi-GPU implementation with three different single GPU implementation strategies, i.e., truncating DDC matrix (S1), repeatedly transferring DDC matrix between CPU and GPU (S2), and porting computations involving DDC matrix to CPU (S3), in terms of both plan quality and computational efficiency. Two more H&N patient cases and three prostate cases are used to demonstrate the advantages of the authors' method. The authors' multi-GPU implementation can finish the optimization process within ∼ 1 min for the H&N patient case. S1 leads to an inferior plan quality although its total time was 10 s shorter than the multi-GPU implementation due to the reduced matrix size. S2 and S3 yield the same plan quality as the multi-GPU implementation but take ∼4 and ∼6 min, respectively. High computational efficiency was consistently achieved for the other five patient cases tested, with VMAT plans of clinically acceptable quality obtained within 23-46 s. Conversely, to obtain clinically comparable or acceptable plans for all six of these VMAT cases that the authors have tested in this paper, the optimization time needed in a commercial TPS system on CPU was found to be in an order of several minutes. The results demonstrate that the multi-GPU implementation of the authors' column-generation-based VMAT optimization can handle the large-scale VMAT optimization problem efficiently without sacrificing plan quality. 
The authors' study may serve as an example to shed some light on other large-scale medical physics problems that require multi-GPU techniques.
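As a rough illustration of the matrix-partitioning step described above, the following Python/SciPy sketch splits a sparse DDC matrix held in coordinate list (COO) format column-wise by beam angle into per-device compressed sparse row (CSR) blocks. This is not the authors' CUDA code; the angle_of_col mapping and the choice of four groups are illustrative assumptions.

import numpy as np
from scipy import sparse

def split_ddc_by_angle(rows, cols, vals, shape, angle_of_col, n_gpus=4):
    # Partition a COO matrix column-wise into n_gpus CSR submatrices,
    # grouping beamlet columns by the beam angle they belong to.
    order = np.argsort(angle_of_col)           # columns sorted by beam angle
    groups = np.array_split(order, n_gpus)     # one contiguous angle range per GPU
    subs = []
    for g in groups:
        keep = np.isin(cols, g)
        local = {c: i for i, c in enumerate(g)}    # global -> local column index
        sub = sparse.coo_matrix(
            (vals[keep], (rows[keep], [local[c] for c in cols[keep]])),
            shape=(shape[0], len(g))).tocsr()
        subs.append(sub)                       # each block would live on one GPU
    return subs

In the authors' setup each submatrix resides on its own GPU, so the beamlet-price computation touches only local memory, with peer-to-peer access handling the inter-GPU traffic.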
Challenges and Opportunities in Modeling of the Global Atmosphere
NASA Astrophysics Data System (ADS)
Janjic, Zavisa; Djurdjevic, Vladimir; Vasic, Ratko
2016-04-01
Modeling paradigms on global scales may need to be reconsidered in order to better utilize the power of massively parallel processing. For high computational efficiency with distributed memory, each core should work on a small subdomain of the full integration domain and exchange only a few rows of halo data with the neighbouring cores. Note that this scenario strongly favors horizontally local discretizations. Such locality is relatively easy to achieve in regional models; however, the spherical geometry complicates the problem. The latitude-longitude grid with spatially local, explicit-in-time differencing was an early choice and has remained in use ever since. The problem with this approach is that the grid size in the longitudinal direction tends to zero as the poles are approached. So, in addition to having unnecessarily high resolution near the poles, polar filtering has to be applied in order to use a time step of reasonable size. However, polar filtering requires transpositions involving extra communications as well as more computations. The spectral transform method and semi-implicit semi-Lagrangian schemes opened the way for application of spectral representation, and with some variations such techniques currently dominate global models. Unfortunately, horizontal non-locality is inherent to the spectral representation and to implicit time differencing, which inhibits scaling on a large number of cores. In this respect the lat-lon grid with polar filtering is a step in the right direction, particularly at high resolutions where the Legendre transforms become increasingly expensive. Other grids with reduced variability of grid distances, such as various versions of the cubed sphere and the hexagonal/pentagonal ("soccer ball") grids, were proposed almost fifty years ago. However, on these grids, large-scale (wavenumber 4 and 5) fictitious solutions ("grid imprinting") with significant amplitudes can develop. Because their scales are comparable to those of the dominant Rossby waves, such fictitious solutions are hard to identify and remove. Another new challenge on the global scale is that the limit of validity of the hydrostatic approximation is rapidly being approached. Relaxing the hydrostatic approximation requires careful reformulation of the model dynamics and more computations and communications. The unified Non-hydrostatic Multi-scale Model (NMMB) will be briefly discussed as an example. Its non-hydrostatic dynamics were designed so as to avoid over-specification. The global version is run on the latitude-longitude grid, and the polar filter selectively slows down the waves that would otherwise be unstable, without modifying their amplitudes. The model has been successfully tested on various scales. The skill of the medium-range forecasts produced by the NMMB is comparable to that of other major medium-range models, and its computational efficiency on parallel computers is good.
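The halo-exchange pattern favored here can be made concrete with a minimal mpi4py sketch, assuming a 1-D decomposition with one halo row on each side (the subdomain size and halo width are arbitrary choices for illustration):

import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()
up   = rank - 1 if rank > 0 else MPI.PROC_NULL
down = rank + 1 if rank < size - 1 else MPI.PROC_NULL

field = np.zeros((34, 64))   # 32 interior rows plus one halo row on each side
# send the first interior row up and fill the lower halo, and vice versa
comm.Sendrecv(field[1], dest=up, recvbuf=field[-1], source=down)
comm.Sendrecv(field[-2], dest=down, recvbuf=field[0], source=up)

With such exchanges the communication volume per core scales with the subdomain perimeter rather than its area, which is what makes horizontally local discretizations attractive on large core counts.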
Understanding Hydraulic Fracturing: A Multi-Scale Problem
Hyman, Jeffrey De'Haven; Gimenez Martinez, Joaquin; Viswanathan, Hari S.; ...
2016-09-05
Despite the impact that hydraulic fracturing has had on the energy sector, the physical mechanisms that control its efficiency and environmental impacts remain poorly understood, in part because the length scales involved range from nanometers to kilometers. We characterize flow and transport in shale formations across and between these scales using integrated computational, theoretical, and experimental efforts. At the field scale, we use discrete fracture network modeling to simulate production at a well site whose fracture network is based on a site characterization of a shale formation. At the core scale, we use triaxial fracture experiments and a finite-element/discrete-element fracture propagation model with a coupled fluid solver to study dynamic crack propagation in low-permeability shale. We use lattice Boltzmann pore-scale simulations and microfluidic experiments in both synthetic and real micromodels to study pore-scale flow phenomena such as multiphase flow and mixing. A mechanistic description and integration of these multiple scales is required for accurate predictions of production and the eventual optimization of hydrocarbon extraction from unconventional reservoirs.
Multi-scale simulations of space problems with iPIC3D
NASA Astrophysics Data System (ADS)
Lapenta, Giovanni; Bettarini, Lapo; Markidis, Stefano
The implicit Particle-in-Cell method for the computer simulation of space plasma, and its implementation in a three-dimensional parallel code, called iPIC3D, are presented. The implicit integration in time of the Vlasov-Maxwell system removes the numerical stability constraints and enables kinetic plasma simulations at magnetohydrodynamics scales. Simulations of magnetic reconnection in plasma are presented to show the effectiveness of the algorithm. In particular we show a number of simulations done for large-scale 3D systems using the physical mass ratio for Hydrogen. Most notably, one simulation treats kinetically a box of tens of Earth radii in each direction and was conducted using about 16000 processors of the Pleiades NASA computer. The work is conducted in collaboration with the MMS-IDS theory team from the University of Colorado (M. Goldman, D. Newman and L. Andersson). Reference: Stefano Markidis, Giovanni Lapenta, Rizwan-uddin, "Multi-scale simulations of plasma with iPIC3D", Mathematics and Computers in Simulation, available online 17 October 2009, http://dx.doi.org/10.1016/j.matcom.2009.08.038
Two-Equation Turbulence Models for Prediction of Heat Transfer on a Transonic Turbine Blade
NASA Technical Reports Server (NTRS)
Garg, Vijay K.; Ameri, Ali A.; Gaugler, R. E. (Technical Monitor)
2001-01-01
Two versions of the two-equation k-omega model and a shear stress transport (SST) model are used in a three-dimensional, multi-block Navier-Stokes code and compared with detailed heat transfer measurements on a transonic turbine blade. It is found that the SST model resolves the passage vortex better on the suction side of the blade, thus yielding a better comparison with the experimental data than either of the k-omega models. However, the comparison is still deficient on the suction side of the blade. Use of the SST model does require the computation of distance from a wall, which can be complicated for a multi-block grid such as the present one; however, a relatively easy fix for this problem was devised. Also addressed are issues such as (1) computation of the production term in the turbulence equations for aerodynamic applications and (2) the relation between the computational and experimental values of the turbulence length scale and its influence on the passage vortex on the suction side of the turbine blade.
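The wall-distance computation mentioned above is one place where a simple sketch helps: if the wall surface is gathered as a point cloud (an assumption; the paper does not describe its fix in these terms), the distance field can be obtained with a k-d tree query, independent of the block topology.

import numpy as np
from scipy.spatial import cKDTree

def wall_distance(cell_centres, wall_points):
    # cell_centres: (n, 3) array of cell centres across all blocks;
    # wall_points: (m, 3) array of wall-face centroids gathered from every block.
    tree = cKDTree(wall_points)         # build once per grid
    dist, _ = tree.query(cell_centres)  # nearest wall point for each cell
    return dist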
Large-Scale, Multi-Sensor Atmospheric Data Fusion Using Hybrid Cloud Computing
NASA Astrophysics Data System (ADS)
Wilson, B. D.; Manipon, G.; Hua, H.; Fetzer, E. J.
2015-12-01
NASA's Earth Observing System (EOS) is an ambitious facility for studying global climate change. The mandate now is to combine measurements from the instruments on the "A-Train" platforms (AIRS, MODIS, MLS, and CloudSat) and other Earth probes to enable large-scale studies of climate change over decades. Moving to multi-sensor, long-duration analyses presents serious challenges for large-scale data mining and fusion. For example, one might want to compare temperature and water vapor retrievals from one instrument (AIRS) to another (MODIS), and to a model (ECMWF), stratify the comparisons using a classification of the "cloud scenes" from CloudSat, and repeat the entire analysis over 10 years of data. HySDS is a Hybrid-Cloud Science Data System that has been developed and applied under NASA AIST, MEaSUREs, and ACCESS grants. HySDS uses the SciFlo workflow engine to partition analysis workflows into parallel tasks (e.g., segmenting by time or space) that are pushed into a durable job queue. The tasks are "pulled" from the queue by worker Virtual Machines (VMs) and executed in an on-premise Cloud (Eucalyptus or OpenStack) or at Amazon in the public Cloud or GovCloud. In this way, years of data (millions of files) can be processed in a massively parallel way. Input variables (arrays) are pulled on-demand into the Cloud using OPeNDAP URLs or other subsetting services, thereby minimizing the size of the transferred data. We are using HySDS to automate the production of multiple versions of a ten-year A-Train water vapor climatology under a MEaSUREs grant. We will present the architecture of HySDS, describe the achieved "clock time" speedups in fusing datasets on our own nodes and in the Amazon Cloud, and discuss the Cloud cost tradeoffs for storage, compute, and data transfer. Our system demonstrates how one can pull A-Train variables (Levels 2 & 3) on-demand into the Amazon Cloud, and cache only those variables that are heavily used, so that any number of compute jobs can be executed "near" the multi-sensor data. Decade-long, multi-sensor studies can be performed without pre-staging data, with researchers paying only their own Cloud compute bill.
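The push/pull job pattern described above can be sketched in a few lines of Python, with the standard-library queue standing in for the durable broker and a hypothetical run_task handler; the actual HySDS components are not shown here.

import queue
import threading

jobs = queue.Queue()                  # stand-in for the durable job queue

def run_task(task):                   # hypothetical per-segment handler
    print("processing", task)

def worker():
    while True:
        task = jobs.get()             # pull semantics: idle workers block here
        if task is None:              # sentinel shuts this worker down
            jobs.task_done()
            break
        run_task(task)
        jobs.task_done()

for t in [("AIRS", 2006), ("AIRS", 2007), ("MODIS", 2006)]:
    jobs.put(t)                       # tasks pushed by the workflow engine
threads = [threading.Thread(target=worker) for _ in range(4)]
for th in threads:
    th.start()
for _ in threads:
    jobs.put(None)                    # one sentinel per worker
for th in threads:
    th.join()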
Jung, Jaewoon; Mori, Takaharu; Kobayashi, Chigusa; Matsunaga, Yasuhiro; Yoda, Takao; Feig, Michael; Sugita, Yuji
2015-07-01
GENESIS (Generalized-Ensemble Simulation System) is a new software package for molecular dynamics (MD) simulations of macromolecules. It has two MD simulators, called ATDYN and SPDYN. ATDYN is parallelized based on an atomic decomposition algorithm for the simulations of all-atom force-field models as well as coarse-grained Go-like models. SPDYN is highly parallelized based on a domain decomposition scheme, allowing large-scale MD simulations on supercomputers. Hybrid schemes combining OpenMP and MPI are used in both simulators to target modern multicore computer architectures. Key advantages of GENESIS are (1) the highly parallel performance of SPDYN for very large biological systems consisting of more than one million atoms and (2) the availability of various REMD algorithms (T-REMD, REUS, multi-dimensional REMD for both all-atom and Go-like models under the NVT, NPT, NPAT, and NPγT ensembles). The former is achieved by a combination of the midpoint cell method and the efficient three-dimensional Fast Fourier Transform algorithm, where the domain decomposition space is shared in real-space and reciprocal-space calculations. Other features in SPDYN, such as avoiding concurrent memory access, reducing communication times, and usage of parallel input/output files, also contribute to the performance. We show the REMD simulation results of a mixed (POPC/DMPC) lipid bilayer as a real application using GENESIS. GENESIS is released as free software under the GPLv2 licence and can be easily modified for the development of new algorithms and molecular models. WIREs Comput Mol Sci 2015, 5:310-323. doi: 10.1002/wcms.1220.
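As background for the replica-exchange algorithms listed above, the temperature-exchange step in T-REMD is typically the textbook Metropolis criterion; a short sketch (a standard formula, not GENESIS source code):

import math
import random

def accept_exchange(beta_i, beta_j, E_i, E_j):
    # Replicas i and j at inverse temperatures beta_i, beta_j with potential
    # energies E_i, E_j; the swap is accepted with probability
    # min(1, exp((beta_i - beta_j) * (E_i - E_j))).
    delta = (beta_i - beta_j) * (E_i - E_j)
    return delta >= 0.0 or random.random() < math.exp(delta)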
NASA Astrophysics Data System (ADS)
Luna, Byron Quan; Vidar Vangelsten, Bjørn; Liu, Zhongqiang; Eidsvig, Unni; Nadim, Farrokh
2013-04-01
Landslide risk must be assessed at the appropriate scale in order to allow effective risk management. At present, few deterministic models exist that can perform all the computations required for a complete landslide risk assessment at a regional scale. This arises from the difficulty of precisely defining the location and volume of the released mass and from the inability of the models to compute the displacement for a large number of individual initiation areas (computationally exhaustive). This paper presents a medium-scale, dynamic physical model for rapid mass movements in mountainous and volcanic areas. The deterministic nature of the approach makes it possible to apply it to other sites, since it considers the frictional equilibrium conditions for the initiation process, the rheological resistance of the displaced flow for the run-out process, and a fragility curve that links intensity to economic loss for each building. The model takes into account the triggering effect of an earthquake, intense rainfall, and a combination of both (spatial and temporal). The run-out module of the model treats the flow as a 2-D continuum medium, solving the equations of mass balance and momentum conservation. The model is embedded in an open-source geographical information system (GIS) environment, it is computationally efficient, and it is transparent (understandable and comprehensible) for the end-user. The model was applied to a virtual region, assessing landslide hazard, vulnerability and risk. A Monte Carlo simulation scheme was applied to quantify, propagate and communicate the effects of uncertainty in input parameters on the final results. In this technique, the input distributions are recreated through sampling and the failure criteria are calculated for each stochastic realisation of the site properties. The model is able to identify the released volumes of the critical slopes and the areas threatened by the run-out intensity. The final outcome is the estimation of individual building damage and total economic risk. The research leading to these results has received funding from the European Community's Seventh Framework Programme [FP7/2007-2013] under grant agreement No 265138, New Multi-HAzard and MulTi-RIsK Assessment MethodS for Europe (MATRIX).
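The Monte Carlo scheme can be illustrated with a minimal sketch in which a textbook infinite-slope factor of safety stands in for the model's frictional initiation criterion; all distributions and parameter values below are assumptions for illustration, not values from the study.

import numpy as np

rng = np.random.default_rng(0)
n = 100_000
c   = rng.normal(10e3, 2e3, n)               # cohesion [Pa], assumed distribution
phi = np.radians(rng.normal(30.0, 3.0, n))   # friction angle, assumed distribution
gamma, z, beta = 19e3, 2.0, np.radians(35.0) # unit weight, depth, slope angle

# Infinite-slope factor of safety (dry case); failure where FS < 1.
fs = (c + gamma * z * np.cos(beta)**2 * np.tan(phi)) / (
     gamma * z * np.sin(beta) * np.cos(beta))
print("estimated failure probability:", np.mean(fs < 1.0))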
Chen, Hai; Liang, Xiaoying; Li, Rui
2013-01-01
Multi-Agent Systems (MAS) offer a conceptual approach to including multi-actor decision making in models of land use change. Through MAS-based simulation, this paper shows the application of MAS to micro-scale LUCC and reveals the transformation mechanism between different scales. The paper starts with a description of the context of MAS research. It then adopts the Nested Spatial Choice (NSC) method to construct a multi-scale LUCC decision-making model, and a case study for Mengcha village, Mizhi County, Shaanxi Province is reported. Finally, the potentials and drawbacks of this approach are discussed. From our design and implementation of the MAS in a multi-scale model, a number of observations and conclusions can be drawn regarding implementation and future research directions. (1) The LUCC decision-making and multi-scale transformation framework provides, in our view, a more realistic modeling of the multi-scale decision-making process. (2) Using a continuous function, rather than a discrete function, to construct household decision-making reflects the effects more realistically. (3) This paper attempts a quantitative analysis of household interaction, providing the premise and foundation for researching communication and learning among households. (4) The scale-transformation architecture constructed here helps to accumulate theory and experience for research on the interaction between micro-scale land use decision-making and the macro-scale land use landscape pattern. Our future work will focus on: (1) how to rationally use the risk-aversion principle and incorporate the rule on rotation between household parcels into the model; (2) exploring methods for researching household decision-making over a long period, which would allow us to bridge long-term LUCC data and short-term household decision-making; and (3) researching quantitative methods and models, especially scenario analysis models that may reflect the interaction among different household types.
Influence of heat transfer rates on pressurization of liquid/slush hydrogen propellant tanks
NASA Technical Reports Server (NTRS)
Sasmal, G. P.; Hochstein, J. I.; Hardy, T. L.
1993-01-01
A multi-dimensional computational model of the pressurization process in a liquid/slush hydrogen tank is developed and used to study the influence of heat flux rates at the ullage boundaries on the process. The new model computes these rates and performs an energy balance for the tank wall, whereas previous multi-dimensional models required a priori specification of the boundary heat flux rates. Analyses of both liquid hydrogen and slush hydrogen pressurization were performed to expose differences between the two processes. Graphical displays are presented to establish the dependence of pressurization time, pressurant mass required, and other parameters of interest on ullage boundary heat flux rates and pressurant mass flow rate. Detailed velocity fields and temperature distributions are presented for selected cases to further illuminate the details of the pressurization process. It is demonstrated that ullage boundary heat flux rates do significantly affect the pressurization process and that minimizing heat loss from the ullage and maximizing pressurant flow rate minimizes the mass of pressurant gas required to pressurize the tank. It is further demonstrated that proper dimensionless scaling of pressure and time permits all the pressure histories examined during this study to be displayed as a single curve.
Computational Plume Modeling of Conceptual ARES Vehicle Stage Tests
NASA Technical Reports Server (NTRS)
Allgood, Daniel C.; Ahuja, Vineet
2007-01-01
The plume-induced environment of a conceptual ARES V vehicle stage test at the NASA Stennis Space Center (NASA-SSC) was modeled using computational fluid dynamics (CFD). A full-scale multi-element grid was generated for the NASA-SSC B-2 test stand with the ARES V stage being located in a proposed off-center forward position. The plume produced by the ARES V main power plant (cluster of five RS-68 LOX/LH2 engines) was simulated using a multi-element flow solver - CRUNCH. The primary objective of this work was to obtain a fundamental understanding of the ARES V plume and its impingement characteristics on the B-2 flame-deflector. The location, size and shape of the impingement region were quantified along with the un-cooled deflector wall pressures, temperatures and incident heating rates. Issues with the proposed tests were identified and several of these addressed using the CFD methodology. The final results of this modeling effort will provide useful data and boundary conditions in upcoming engineering studies that are directed towards determining the required facility modifications for ensuring safe and reliable stage testing in support of the Constellation Program.
NASA Astrophysics Data System (ADS)
Quan, Zhe; Wu, Lei
2017-09-01
This article investigates the use of parallel computing for solving the disjunctively constrained knapsack problem. The proposed parallel computing model can be viewed as a cooperative algorithm based on a multi-neighbourhood search. The cooperation system is composed of a team manager and a crowd of team members. The team members aim at applying their own search strategies to explore the solution space. The team manager collects the solutions from the members and shares the best one with them. The performance of the proposed method is evaluated on a group of benchmark data sets. The results obtained are compared to those reached by the best methods from the literature. The results show that the proposed method is able to provide the best solutions in most cases. In order to highlight the robustness of the proposed parallel computing model, a new set of large-scale instances is introduced. Encouraging results have been obtained.
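The manager/members cooperation can be sketched with Python multiprocessing, where a shared best value plays the manager's role; the neighbourhood moves below are placeholders, since the paper's solution encoding is not reproduced here.

import multiprocessing as mp
import random

def member(seed, best, lock):
    rng = random.Random(seed)            # each member has its own strategy/seed
    for _ in range(1000):
        candidate = rng.random()         # placeholder for a neighbourhood move
        with lock:
            if candidate > best.value:   # manager role: keep the best solution
                best.value = candidate

if __name__ == "__main__":
    best = mp.Value("d", 0.0)
    lock = mp.Lock()
    procs = [mp.Process(target=member, args=(s, best, lock)) for s in range(4)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    print("best found:", best.value)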
Large-Scale, Parallel, Multi-Sensor Data Fusion in the Cloud
NASA Astrophysics Data System (ADS)
Wilson, B. D.; Manipon, G.; Hua, H.
2012-12-01
NASA's Earth Observing System (EOS) is an ambitious facility for studying global climate change. The mandate now is to combine measurements from the instruments on the "A-Train" platforms (AIRS, AMSR-E, MODIS, MISR, MLS, and CloudSat) and other Earth probes to enable large-scale studies of climate change over periods of years to decades. However, moving from predominantly single-instrument studies to a multi-sensor, measurement-based model for long-duration analysis of important climate variables presents serious challenges for large-scale data mining and data fusion. For example, one might want to compare temperature and water vapor retrievals from one instrument (AIRS) to another instrument (MODIS), and to a model (ECMWF), stratify the comparisons using a classification of the "cloud scenes" from CloudSat, and repeat the entire analysis over years of AIRS data. To perform such an analysis, one must discover & access multiple datasets from remote sites, find the space/time "matchups" between instrument swaths and model grids, understand the quality flags and uncertainties for retrieved physical variables, assemble merged datasets, and compute fused products for further scientific and statistical analysis. To efficiently assemble such decade-scale datasets in a timely manner, we are utilizing Elastic Computing in the Cloud and parallel map/reduce-based algorithms. "SciReduce" is a Hadoop-like parallel analysis system, programmed in parallel Python, that is designed from the ground up for Earth science. SciReduce executes inside VMWare images and scales to any number of nodes in the Cloud. Unlike Hadoop, in which simple tuples (keys & values) are passed between the map and reduce functions, SciReduce operates on bundles of named numeric arrays, which can be passed in memory or serialized to disk in netCDF4 or HDF5. Thus, SciReduce uses the native datatypes (geolocated grids, swaths, and points) that geo-scientists are familiar with. We are deploying within SciReduce a versatile set of Python operators for data lookup, access, subsetting, co-registration, mining, fusion, and statistical analysis. All operators take in sets of geo-located arrays and generate more arrays. Large, multi-year satellite and model datasets are automatically "sharded" by time and space across a cluster of nodes so that years of data (millions of granules) can be compared or fused in a massively parallel way. Input variables (arrays) are pulled on-demand into the Cloud using OPeNDAP or webification URLs, thereby minimizing the size of the stored input and intermediate datasets. A typical map function might assemble and quality control AIRS Level-2 water vapor profiles for a year of data in parallel, then a reduce function would average the profiles in lat/lon bins (again, in parallel), and a final reduce would aggregate the climatology and write it to output files. We are using SciReduce to automate the production of multiple versions of a multi-year water vapor climatology (AIRS & MODIS), stratified by CloudSat cloud classification, and compare it to models (ECMWF & MERRA reanalysis). We will present the architecture of SciReduce, describe the achieved "clock time" speedups in fusing huge datasets on our own nodes and in the Amazon Cloud, and discuss the Cloud cost tradeoffs for storage, compute, and data transfer.
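The map/reduce flow sketched in this abstract can be reduced to a single-process Python illustration; read_profiles is a hypothetical granule reader and the 1-degree binning is an assumption.

import numpy as np

def map_granule(granule):
    # Map step: read one granule, keep only best-quality retrievals.
    lat, lon, wv, qc = read_profiles(granule)   # hypothetical reader
    good = qc == 0
    return lat[good], lon[good], wv[good]

def reduce_bins(mapped, nlat=180, nlon=360):
    # Reduce step: average water vapor into 1-degree lat/lon bins.
    total = np.zeros((nlat, nlon))
    count = np.zeros((nlat, nlon))
    for lat, lon, wv in mapped:
        i = np.clip((lat + 90).astype(int), 0, nlat - 1)
        j = np.clip((lon + 180).astype(int), 0, nlon - 1)
        np.add.at(total, (i, j), wv)
        np.add.at(count, (i, j), 1)
    return total / np.maximum(count, 1)

In the real system each map call runs on a separate node against one granule, the binned reduce is itself parallelized, and a final reduce aggregates the climatology.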
Large area sub-micron chemical imaging of magnesium in sea urchin teeth.
Masic, Admir; Weaver, James C
2015-03-01
The heterogeneous and site-specific incorporation of inorganic ions can profoundly influence the local mechanical properties of damage tolerant biological composites. Using the sea urchin tooth as a research model, we describe a multi-technique approach to spatially map the distribution of magnesium in this complex multiphase system. Through the combined use of 16-bit backscattered scanning electron microscopy, multi-channel energy dispersive spectroscopy elemental mapping, and diffraction-limited confocal Raman spectroscopy, we demonstrate a new set of high throughput, multi-spectral, high resolution methods for the large scale characterization of mineralized biological materials. In addition, instrument hardware and data collection protocols can be modified such that several of these measurements can be performed on irregularly shaped samples with complex surface geometries and without the need for extensive sample preparation. Using these approaches, in conjunction with whole animal micro-computed tomography studies, we have been able to spatially resolve micron and sub-micron structural features across macroscopic length scales on entire urchin tooth cross-sections and correlate these complex morphological features with local variability in elemental composition. Copyright © 2015 Elsevier Inc. All rights reserved.
Multi-scale heat and mass transfer modelling of cell and tissue cryopreservation
Xu, Feng; Moon, Sangjun; Zhang, Xiaohui; Shao, Lei; Song, Young Seok; Demirci, Utkan
2010-01-01
Cells and tissues undergo complex physical processes during cryopreservation. Understanding the underlying physical phenomena is critical to improve current cryopreservation methods and to develop new techniques. Here, we describe multi-scale approaches for modelling cell and tissue cryopreservation including heat transfer at macroscale level, crystallization, cell volume change and mass transport across cell membranes at microscale level. These multi-scale approaches allow us to study cell and tissue cryopreservation. PMID:20047939
Multi-material 3D Models for Temporal Bone Surgical Simulation.
Rose, Austin S; Kimbell, Julia S; Webster, Caroline E; Harrysson, Ola L A; Formeister, Eric J; Buchman, Craig A
2015-07-01
A simulated, multicolor, multi-material temporal bone model can be created using 3-dimensional (3D) printing; such a model may prove both safe and beneficial in training for actual temporal bone surgical cases. As the process of additive manufacturing, or 3D printing, has become more practical and affordable, a number of applications for the technology in the field of Otolaryngology-Head and Neck Surgery have been considered. One area of promise is temporal bone surgical simulation. Three-dimensional representations of human temporal bones were created from temporal bone computed tomography (CT) scans using biomedical image processing software. Multi-material models were then printed and dissected in a temporal bone laboratory by attending and resident otolaryngologists. A 5-point Likert scale was used to grade the models for their anatomical accuracy and suitability as a simulation of cadaveric and operative temporal bone drilling. The models produced for this study demonstrate significant anatomic detail and a likeness to human cadaver specimens for drilling and dissection. Simulated temporal bones created by this process have potential benefit in surgical training, preoperative simulation for challenging otologic cases, and the standardized testing of temporal bone surgical skills. © The Author(s) 2015.
NASA Astrophysics Data System (ADS)
Folch, A.
2012-08-01
Tephra transport models predict the atmospheric dispersion and sedimentation of tephra depending on meteorology, particle properties, and eruption characteristics, the latter defined by eruption column height, mass eruption rate, and vertical distribution of mass. Models are used for different purposes, from operational forecasting of volcanic ash clouds to hazard assessment of tephra dispersion and fallout. The size of the erupted particles, a key parameter controlling the dynamics of particle sedimentation in the atmosphere, varies within a wide range. The largest, centimetric to millimetric, particles fall out at proximal to medial distances from the volcano and sediment by gravitational settling. At the other extreme, the smallest, micrometric to sub-micrometric, particles can be transported at continental or even global scales and are affected by other deposition and aggregation mechanisms. Different scientific communities have traditionally modeled the dispersion of these two end members. Volcanologists developed families of models suitable for lapilli and coarse ash, aimed at computing fallout deposits and at hazard assessment. In contrast, meteorologists and atmospheric scientists have traditionally used other atmospheric transport models, dealing with finer particles, for tracking the motion of volcanic ash clouds and, eventually, for computing airborne ash concentrations. During the last decade, the increasing demand for model accuracy and forecast reliability has pushed on two fronts. First, the original gap between these different families of models has been filled with the emergence of multi-scale and multi-purpose models. Second, new modeling strategies including, for example, ensemble and probabilistic forecasting or model data assimilation are being investigated for future implementation. This paper reviews the evolution of tephra transport and dispersal models during the last two decades, presents the status and limitations of current modeling strategies, and discusses some emerging perspectives expected to be implemented at the operational level during the next few years. Improvements in both real-time forecasting and long-term hazard assessment are necessary for loss prevention programs at the local, regional, national and international levels.
A High-Performance Cellular Automaton Model of Tumor Growth with Dynamically Growing Domains
Poleszczuk, Jan; Enderling, Heiko
2014-01-01
Tumor growth from a single transformed cancer cell up to a clinically apparent mass spans many spatial and temporal orders of magnitude. Implementation of cellular automata simulations of such tumor growth can be straightforward but computing performance often counterbalances simplicity. Computationally convenient simulation times can be achieved by choosing appropriate data structures, memory and cell handling as well as domain setup. We propose a cellular automaton model of tumor growth with a domain that expands dynamically as the tumor population increases. We discuss memory access, data structures and implementation techniques that yield high-performance multi-scale Monte Carlo simulations of tumor growth. We discuss tumor properties that favor the proposed high-performance design and present simulation results of the tumor growth model. We estimate to which parameters the model is the most sensitive, and show that tumor volume depends on a number of parameters in a non-monotonic manner. PMID:25346862
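One way to realize the dynamically expanding domain described above is to pad the lattice outward whenever occupied sites approach its boundary; a sketch, assuming a square 2-D NumPy lattice rather than the authors' data structures:

import numpy as np

def grow_if_needed(grid, margin=10):
    # grid: square 2-D array; nonzero entries mark occupied (tumor) sites.
    occupied = np.argwhere(grid)
    if occupied.size == 0:
        return grid
    near_low  = occupied.min() < margin
    near_high = (grid.shape[0] - 1 - occupied.max()) < margin
    if near_low or near_high:
        grid = np.pad(grid, margin)    # expand the domain on all sides
    return grid

Memory then tracks the current tumor extent instead of a fixed worst-case box, which is one of the domain-setup choices the authors weigh.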
Agent-based modeling of the interaction between CD8+ T cells and Beta cells in type 1 diabetes.
Ozturk, Mustafa Cagdas; Xu, Qian; Cinar, Ali
2018-01-01
We propose an agent-based model for the simulation of the autoimmune response in T1D. The model incorporates cell behavior from various rules derived from the current literature and is implemented on a high-performance computing system, which enables the simulation of a significant portion of the islets in the mouse pancreas. Simulation results indicate that the model is able to capture the trends that emerge during the progression of the autoimmunity. The multi-scale nature of the model enables definition of rules or equations that govern cellular or sub-cellular level phenomena and observation of the outcomes at the tissue scale. It is expected that such a model would facilitate in vivo clinical studies through rapid testing of hypotheses and planning of future experiments by providing insight into disease progression at different scales, some of which may not be obtained easily in clinical studies. Furthermore, the modular structure of the model simplifies tasks such as the addition of new cell types, and the definition or modification of different behaviors of the environment and the cells with ease.
Beyond Low Rank + Sparse: Multi-scale Low Rank Matrix Decomposition
Ong, Frank; Lustig, Michael
2016-01-01
We present a natural generalization of the recent low rank + sparse matrix decomposition and consider the decomposition of matrices into components of multiple scales. Such a decomposition is well motivated in practice, as data matrices often exhibit local correlations at multiple scales. Concretely, we propose a multi-scale low rank modeling that represents a data matrix as a sum of block-wise low rank matrices with increasing scales of block sizes. We then consider the inverse problem of decomposing the data matrix into its multi-scale low rank components and approach the problem via a convex formulation. Theoretically, we show that under various incoherence conditions, the convex program recovers the multi-scale low rank components either exactly or approximately. Practically, we provide guidance on selecting the regularization parameters and incorporate cycle spinning to reduce blocking artifacts. Experimentally, we show that the multi-scale low rank decomposition provides a more intuitive decomposition than conventional low rank methods and demonstrate its effectiveness in four applications, including illumination normalization for face images, motion separation for surveillance videos, multi-scale modeling of dynamic contrast-enhanced magnetic resonance imaging, and collaborative filtering exploiting age information. PMID:28450978
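In schematic form, the convex program described here can be written as below, where Y is the data matrix, X_i its component at block-size scale i, \|\cdot\|_{(i)} the block-wise nuclear norm at that scale, and \lambda_i the scale-dependent regularization parameters (notation assumed for illustration):

\min_{\{X_i\}} \; \sum_i \lambda_i \|X_i\|_{(i)}
\quad \text{subject to} \quad \sum_i X_i = Y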
VOYAGE!, a Scale Model of the Solar System on the National Mall
NASA Astrophysics Data System (ADS)
Bennett, J. O.; Schoemer, J.; Goldstein, J. J.
1994-12-01
The Laboratory for Astrophysics (LfA) at the National Air and Space Museum (NASM) is proposing a new exhibit: an outdoor model of the Solar System on the National Mall, dedicated to the Spirit of Human Exploration. At one ten-billionth of the size of the actual Solar System, the model would provide a unique educational tool to illustrate the vast distances that characterize our local corner of the universe. Mounted on pedestals along a gravel walkway between the U.S. Capitol and the Washington Monument for 0.6 kilometers (an easy walk for over 10 million visitors a year), plaques would tactilely depict the scaled sizes and distances of the Sun, the planets, and their larger satellites in polished bronze. Porcelain enamel insets in the bronze would display color photographs, language-independent educational pictograms, and an international pictorial listing of spacecraft that have visited these bodies. Designed for a multi-cultural audience of varied ages and educational backgrounds, and with easy access for persons with disabilities, the model would celebrate humanity's long and ongoing relationship with Earth's nearest neighbors. Ideally, this exhibit will be supported by teacher-activity packets, self-guided tours, exportable models, computer software, and multi-lingual audio programs. This proposal is being partially funded by the NASA Solar Systems division.
NASA Astrophysics Data System (ADS)
Beckingham, L. E.; Mitnick, E. H.; Zhang, S.; Voltolini, M.; Yang, L.; Steefel, C. I.; Swift, A.; Cole, D. R.; Sheets, J.; Kneafsey, T. J.; Landrot, G.; Anovitz, L. M.; Mito, S.; Xue, Z.; Ajo Franklin, J. B.; DePaolo, D.
2015-12-01
CO2 sequestration in deep sedimentary formations is a promising means of reducing atmospheric CO2 emissions, but the rate and extent of mineral trapping remain difficult to predict. Reactive transport models provide predictions of mineral trapping based on laboratory mineral reaction rates, which have been shown to have large discrepancies with field rates. This, in part, may be due to poor quantification of mineral reactive surface area in natural porous media. Common estimates of mineral reactive surface area are ad hoc and typically based on grain size, adjusted several orders of magnitude to account for surface roughness and reactivity. This results in orders-of-magnitude discrepancies in estimated surface areas that directly translate into orders-of-magnitude discrepancies in model predictions. Additionally, natural systems can be highly heterogeneous and contain abundant nano- and micro-porosity, which can limit connected porosity and access to mineral surfaces. In this study, mineral-specific accessible surface areas are computed for a sample from the reservoir formation at the Nagaoka pilot CO2 injection site (Japan). Accessible mineral surface areas are determined from a multi-scale image analysis including X-ray microCT, SEM QEMSCAN, XRD, SANS, and SEM-FIB. Powder and flow-through column laboratory experiments are performed and the evolution of solutes in the aqueous phase is tracked. Continuum-scale reactive transport models are used to evaluate the impact of reactive surface area on predictions of experimental reaction rates. Evaluated reactive surface areas include geometric and specific surface areas (e.g., BET) in addition to their reactive-site weighted counterparts. The most accurate predictions of observed powder mineral dissolution rates were obtained through use of grain-size specific surface areas computed from a BET-based correlation. Effectively, this surface area reflects the grain-fluid contact area, or accessible surface area, in the powder dissolution experiment. In the model of the flow-through column experiment, the accessible mineral surface area, computed from the multi-scale image analysis, is evaluated in addition to the traditional surface area estimates.
A uniform approach for programming distributed heterogeneous computing systems
Grasso, Ivan; Pellegrini, Simone; Cosenza, Biagio; Fahringer, Thomas
2014-01-01
Large-scale compute clusters of heterogeneous nodes equipped with multi-core CPUs and GPUs are getting increasingly popular in the scientific community. However, such systems require a combination of different programming paradigms making application development very challenging. In this article we introduce libWater, a library-based extension of the OpenCL programming model that simplifies the development of heterogeneous distributed applications. libWater consists of a simple interface, which is a transparent abstraction of the underlying distributed architecture, offering advanced features such as inter-context and inter-node device synchronization. It provides a runtime system which tracks dependency information enforced by event synchronization to dynamically build a DAG of commands, on which we automatically apply two optimizations: collective communication pattern detection and device-host-device copy removal. We assess libWater’s performance in three compute clusters available from the Vienna Scientific Cluster, the Barcelona Supercomputing Center and the University of Innsbruck, demonstrating improved performance and scaling with different test applications and configurations. PMID:25844015
Lu, Liqiang; Gopalan, Balaji; Benyahia, Sofiane
2017-06-21
Several discrete particle methods exist in the open literature to simulate fluidized bed systems, such as the discrete element method (DEM), time driven hard sphere (TDHS), the coarse-grained particle method (CGPM), coarse grained hard sphere (CGHS), and multi-phase particle-in-cell (MP-PIC). These approaches usually solve the fluid phase in a Eulerian fixed frame of reference and the particle phase using the Lagrangian method. The first difference between these models lies in tracking either real particles or lumped parcels. The second difference is in the treatment of particle-particle interactions: by calculating collision forces (DEM and CGPM), using momentum conservation laws (TDHS and CGHS), or based on a particle stress model (MP-PIC). These model differences lead to a wide range of accuracy and computation speed. However, these models have never been compared directly using the same experimental dataset. In this research, a small-scale fluidized bed is simulated with these methods using the same open-source code MFIX. The results indicate that modeling particle-particle collisions with TDHS increases the computation speed while maintaining good accuracy. Also, lumping a few particles in a parcel increases the computation speed with little loss in accuracy. However, modeling particle-particle interactions with a solids stress model leads to a large loss in accuracy for only a small gain in computation speed. The MP-PIC method predicts an unphysical particle-particle overlap, which results in incorrect voidage distribution and incorrect overall bed hydrodynamics. Based on this study, we recommend the CGHS method for fluidized bed simulations due to its computational speed, which rivals that of MP-PIC while maintaining much better accuracy.
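For reference, the momentum-conservation update at the heart of hard-sphere treatments such as TDHS and CGHS can be sketched for an equal-mass pair with normal restitution coefficient e (an illustration, not MFIX code):

import numpy as np

def hard_sphere_collide(v1, v2, x1, x2, e=0.9):
    # Unit normal from particle 1 to particle 2 at contact.
    n = (x2 - x1) / np.linalg.norm(x2 - x1)
    v_rel = np.dot(v1 - v2, n)        # approach speed along the normal
    if v_rel <= 0.0:
        return v1, v2                 # separating pair: no collision
    j = 0.5 * (1.0 + e) * v_rel       # impulse per unit mass (equal masses)
    return v1 - j * n, v2 + j * n

Because only momentum and restitution enter, no stiff contact forces need to be resolved, which is the source of the speed advantage over soft-sphere DEM noted above.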
NASA Astrophysics Data System (ADS)
Lin, Shian-Jiann; Harris, Lucas; Chen, Jan-Huey; Zhao, Ming
2014-05-01
A multi-scale High-Resolution Atmosphere Model (HiRAM) is being developed at the NOAA Geophysical Fluid Dynamics Laboratory. The model's dynamical framework is the non-hydrostatic extension of the vertically Lagrangian finite-volume dynamical core (Lin 2004, Monthly Wea. Rev.) constructed on a stretchable (via Schmidt transformation) cubed-sphere grid. Physical parametrizations originally designed for IPCC-type climate predictions are in the process of being modified and made more "scale-aware", in an effort to make the model suitable for multi-scale weather-climate applications, with horizontal resolution ranging from 1 km (near the target high-resolution region) to as coarse as 400 km (near the antipodal point). One of the main goals of this development is to enable simulation of high-impact weather phenomena (such as tornadoes, thunderstorms, and category-5 hurricanes) within an IPCC-class climate modeling system, previously thought impossible. We will present preliminary results covering a very wide spectrum of temporal-spatial scales, ranging from simulation of tornado genesis (hours), Madden-Julian Oscillations (intra-seasonal), and tropical cyclones (seasonal), to Quasi-Biennial Oscillations (intra-decadal), using the same global multi-scale modeling system.
Multi-scale graph-cut algorithm for efficient water-fat separation.
Berglund, Johan; Skorpil, Mikael
2017-09-01
To improve the accuracy and robustness to noise of water-fat separation by unifying the multi-scale and graph-cut based approaches to B0 correction. A previously proposed water-fat separation algorithm that corrects for B0 field inhomogeneity in 3D by a single quadratic pseudo-Boolean optimization (QPBO) graph cut was incorporated into a multi-scale framework, where field map solutions are propagated from coarse to fine scales for voxels that are not resolved by the graph cut. The accuracy of the single-scale and multi-scale QPBO algorithms was evaluated against benchmark reference datasets. The robustness to noise was evaluated by adding noise to the input data prior to water-fat separation. Both algorithms achieved the highest accuracy when compared with seven previously published methods, while computation times were acceptable for implementation in clinical routine. The multi-scale algorithm was more robust to noise than the single-scale algorithm, while causing only a small increase (+10%) in reconstruction time. The proposed 3D multi-scale QPBO algorithm offers accurate water-fat separation, robustness to noise, and fast reconstruction. The software implementation is freely available to the research community. Magn Reson Med 78:941-949, 2017. © 2016 International Society for Magnetic Resonance in Medicine.
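The coarse-to-fine propagation can be illustrated with a small sketch (an illustration of the general idea, not the published implementation): the field map solved at a coarse scale is upsampled to seed the next finer scale, where only voxels not yet resolved are revisited.

import numpy as np
from scipy.ndimage import zoom

def propagate(field_coarse, resolved_coarse, factor=2):
    # Upsample the coarse field map to seed the finer scale.
    field = zoom(field_coarse, factor, order=1)
    # Upsample the resolved mask; unresolved voxels get re-solved at this scale.
    resolved = zoom(resolved_coarse.astype(float), factor, order=0) > 0.5
    return field, resolved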
Final Technical Report: Mathematical Foundations for Uncertainty Quantification in Materials Design
DOE Office of Scientific and Technical Information (OSTI.GOV)
Plechac, Petr; Vlachos, Dionisios G.
We developed path-wise information-theory-based and goal-oriented sensitivity analysis and parameter identification methods for complex high-dimensional dynamics, in particular for non-equilibrium extended molecular systems. The combination of these novel methodologies provided the first methods in the literature capable of handling UQ questions for stochastic complex systems with some or all of the following features: (a) multi-scale stochastic models such as (bio)chemical reaction networks with a very large number of parameters; (b) spatially distributed systems such as Kinetic Monte Carlo or Langevin Dynamics; (c) non-equilibrium processes typically associated with coupled physico-chemical mechanisms, driven boundary conditions, hybrid micro-macro systems, etc. A particular computational challenge arises in simulations of multi-scale reaction networks and molecular systems. Mathematical techniques were applied to in silico prediction of novel materials with emphasis on the effect of microstructure on model uncertainty quantification (UQ). We outline acceleration methods to make calculations of real chemistry feasible, followed by two complementary tasks on structure optimization and microstructure-induced UQ.
Development of the US3D Code for Advanced Compressible and Reacting Flow Simulations
NASA Technical Reports Server (NTRS)
Candler, Graham V.; Johnson, Heath B.; Nompelis, Ioannis; Subbareddy, Pramod K.; Drayna, Travis W.; Gidzak, Vladimyr; Barnhardt, Michael D.
2015-01-01
Aerothermodynamics and hypersonic flows involve complex multi-disciplinary physics, including finite-rate gas-phase kinetics, finite-rate internal energy relaxation, gas-surface interactions with finite-rate oxidation and sublimation, transition to turbulence, large-scale unsteadiness, shock-boundary layer interactions, fluid-structure interactions, and thermal protection system ablation and thermal response. Many of the flows have a large range of length and time scales, requiring large computational grids, implicit time integration, and long solution run times. The University of Minnesota NASA US3D code was designed for the simulation of these complex, highly-coupled flows. It has many of the features of the well-established DPLR code, but uses unstructured grids and has many advanced numerical capabilities and physical models for multi-physics problems. The main capabilities of the code are described, the physical modeling approaches are discussed, the different types of numerical flux functions and time integration approaches are outlined, and the parallelization strategy is overviewed. Comparisons between US3D and the NASA DPLR code are presented, and several advanced simulations are presented to illustrate some of the novel features of the code.
Computing Properties of Hadrons, Nuclei and Nuclear Matter from Quantum Chromodynamics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Savage, Martin J.
This project was part of a coordinated software development effort that the nuclear physics lattice QCD community pursues in order to ensure that lattice calculations can make optimal use of present and forthcoming leadership-class and dedicated hardware, including that of the national laboratories, and to prepare for the exploitation of future computational resources in the exascale era. The UW team improved and extended software libraries used in lattice QCD calculations related to multi-nucleon systems, enhanced production running codes related to load balancing multi-nucleon production on large-scale computing platforms, developed SQLite (addressable database) interfaces to efficiently archive and analyze multi-nucleon data, and developed a Mathematica interface for the SQLite databases.
Multi-Scale Validation of a Nanodiamond Drug Delivery System and Multi-Scale Engineering Education
ERIC Educational Resources Information Center
Schwalbe, Michelle Kristin
2010-01-01
This dissertation has two primary concerns: (i) evaluating the uncertainty and prediction capabilities of a nanodiamond drug delivery model using Bayesian calibration and bias correction, and (ii) determining conceptual difficulties of multi-scale analysis from an engineering education perspective. A Bayesian uncertainty quantification scheme…
Approximate l-fold cross-validation with Least Squares SVM and Kernel Ridge Regression
DOE Office of Scientific and Technical Information (OSTI.GOV)
Edwards, Richard E; Zhang, Hao; Parker, Lynne Edwards
2013-01-01
Kernel methods have difficulty scaling to large modern data sets. The scalability issues stem from the computational and memory requirements of working with a large matrix. These requirements have been addressed over the years by using low-rank kernel approximations or by improving the solvers' scalability. However, Least Squares Support Vector Machines (LS-SVM), a popular SVM variant, and Kernel Ridge Regression still have several scalability issues. In particular, the O(n^3) computational complexity for solving a single model and the overall computational complexity associated with tuning hyperparameters are still major problems. We address these problems by introducing an O(n log n) approximate l-fold cross-validation method that uses a multi-level circulant matrix to approximate the kernel. In addition, we prove our algorithm's computational complexity and present empirical runtimes on data sets with approximately 1 million data points. We also validate our approximate method's effectiveness at selecting hyperparameters on real-world and standard benchmark data sets. Lastly, we provide experimental results on using a multi-level circulant kernel approximation to solve LS-SVM problems with hyperparameters selected using our method.
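The O(n log n) cost follows because a circulant matrix is diagonalized by the discrete Fourier transform, so products with the approximated kernel reduce to FFTs; a minimal single-level sketch (the paper's multi-level construction is more involved):

import numpy as np

def circulant_matvec(c, x):
    # y = C @ x, where C is the circulant matrix whose first column is c;
    # the eigenvalues of C are fft(c), so the product costs O(n log n).
    return np.real(np.fft.ifft(np.fft.fft(c) * np.fft.fft(x)))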
Assembling Large, Multi-Sensor Climate Datasets Using the SciFlo Grid Workflow System
NASA Astrophysics Data System (ADS)
Wilson, B. D.; Manipon, G.; Xing, Z.; Fetzer, E.
2008-12-01
NASA's Earth Observing System (EOS) is the world's most ambitious facility for studying global climate change. The mandate now is to combine measurements from the instruments on the A-Train platforms (AIRS, AMSR-E, MODIS, MISR, MLS, and CloudSat) and other Earth probes to enable large-scale studies of climate change over periods of years to decades. However, moving from predominantly single-instrument studies to a multi-sensor, measurement-based model for long-duration analysis of important climate variables presents serious challenges for large-scale data mining and data fusion. For example, one might want to compare temperature and water vapor retrievals from one instrument (AIRS) to another instrument (MODIS), and to a model (ECMWF), stratify the comparisons using a classification of the cloud scenes from CloudSat, and repeat the entire analysis over years of AIRS data. To perform such an analysis, one must discover & access multiple datasets from remote sites, find the space/time matchups between instrument swaths and model grids, understand the quality flags and uncertainties for retrieved physical variables, and assemble merged datasets for further scientific and statistical analysis. To meet these large-scale challenges, we are utilizing a Grid computing and dataflow framework, named SciFlo, in which we are deploying a set of versatile and reusable operators for data query, access, subsetting, co-registration, mining, fusion, and advanced statistical analysis. SciFlo is a semantically-enabled ("smart") Grid Workflow system that ties together a peer-to-peer network of computers into an efficient engine for distributed computation. The SciFlo workflow engine enables scientists to do multi-instrument Earth Science by assembling remotely-invokable Web Services (SOAP or http GET URLs), native executables, command-line scripts, and Python codes into a distributed computing flow. A scientist visually authors the graph of operations in the VizFlow GUI, or uses a text editor to modify the simple XML workflow documents. The SciFlo client & server engines optimize the execution of such distributed workflows and allow the user to transparently find and use datasets and operators without worrying about the actual location of the Grid resources. The engine transparently moves data to the operators, and moves operators to the data (on the dozen trusted SciFlo nodes). SciFlo also deploys a variety of Data Grid services to: query datasets in space and time, locate & retrieve on-line data granules, provide on-the-fly variable and spatial subsetting, and perform pairwise instrument matchups for A-Train datasets. These services are combined into efficient workflows to assemble the desired large-scale, merged climate datasets. SciFlo is currently being applied in several large climate studies: comparisons of aerosol optical depth between MODIS, MISR, the AERONET ground network, and U. Michigan's IMPACT aerosol transport model; characterization of long-term biases in microwave and infrared instruments (AIRS, MLS) by comparisons to GPS temperature retrievals accurate to 0.1 K; and construction of a decade-long, multi-sensor water vapor climatology stratified by classified cloud scene by bringing together datasets from AIRS/AMSU, AMSR-E, MLS, MODIS, and CloudSat (NASA MEaSUREs grant, Fetzer PI). The presentation will discuss the SciFlo technologies, their application in these distributed workflows, and the many challenges encountered in assembling and analyzing these massive datasets.
NASA Astrophysics Data System (ADS)
Grujicic, M.; Bell, W. C.; Arakere, G.; He, T.; Xie, X.; Cheeseman, B. A.
2010-02-01
A meso-scale ballistic material model for a prototypical plain-woven single-ply flexible armor is developed and implemented in a material user subroutine for use in commercial explicit finite element programs. The main intent of the model is to attain computational efficiency when calculating the mechanical response of the multi-ply fabric-based flexible-armor material during its impact with various projectiles without significantly sacrificing the key physical aspects of the fabric microstructure, architecture, and behavior. To validate the new model, a comparative finite element method analysis is carried out in which: (a) the plain-woven single-ply fabric is modeled using conventional shell elements and weaving is done in an explicit manner by snaking the yarns through the fabric and (b) the fabric is treated as a planar continuum surface composed of conventional shell elements to which the new meso-scale unit-cell based material model is assigned. The results obtained show that the material model provides a reasonably good description of the fabric deformation and fracture behavior under different combinations of fixed and free boundary conditions. Finally, the model is used in an investigation of the ability of a multi-ply soft-body armor vest to protect the wearer from impact by a 9-mm round nose projectile. The effects of inter-ply friction, projectile/yarn friction, and the far-field boundary conditions are revealed and the results explained using simple wave mechanics principles, high-deformation rate material behavior, and the role of various energy-absorbing mechanisms in the fabric-based armor systems.
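To make the unit-cell idea concrete, the sketch below caricatures the tensile response of one yarn family in a crimped plain weave: no load in compression, a compliant regime while the weave crimp is pulled straight, and the fiber stiffness thereafter. All parameters and function names are illustrative assumptions, not the authors' material subroutine.

```python
# Hedged sketch of a unit-cell style constitutive response for a plain-woven
# fabric membrane; the two-regime crimp model and all numbers are illustrative.
import numpy as np

def yarn_direction_stress(strain, crimp_strain=0.01, yarn_modulus_gpa=70.0):
    """Nominal stress (GPa) along one yarn family.

    Yarns carry no load in compression, are nearly compliant while their
    crimp (weave waviness) is being pulled straight, and respond with the
    fiber modulus once straightened.
    """
    if strain <= 0.0:
        return 0.0                      # yarns buckle rather than push
    if strain < crimp_strain:
        # small "uncrimping" stiffness, here 1% of the fiber stiffness
        return 0.01 * yarn_modulus_gpa * strain
    return (0.01 * yarn_modulus_gpa * crimp_strain
            + yarn_modulus_gpa * (strain - crimp_strain))

def membrane_stress(eps_warp, eps_weft):
    """Stress contributions of the two yarn families of the unit cell (GPa)."""
    return np.array([yarn_direction_stress(eps_warp),
                     yarn_direction_stress(eps_weft)])
```

A continuum shell element carrying a model of this kind avoids meshing every yarn explicitly, which is the source of the computational savings the abstract describes.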
High Resolution Model Intercomparison Project (HighResMIP v1.0) for CMIP6
NASA Astrophysics Data System (ADS)
Haarsma, Reindert J.; Roberts, Malcolm J.; Vidale, Pier Luigi; Senior, Catherine A.; Bellucci, Alessio; Bao, Qing; Chang, Ping; Corti, Susanna; Fučkar, Neven S.; Guemas, Virginie; von Hardenberg, Jost; Hazeleger, Wilco; Kodama, Chihiro; Koenigk, Torben; Leung, L. Ruby; Lu, Jian; Luo, Jing-Jia; Mao, Jiafu; Mizielinski, Matthew S.; Mizuta, Ryo; Nobre, Paulo; Satoh, Masaki; Scoccimarro, Enrico; Semmler, Tido; Small, Justin; von Storch, Jin-Song
2016-11-01
Robust projections and predictions of climate variability and change, particularly at regional scales, rely on the driving processes being represented with fidelity in model simulations. The role of enhanced horizontal resolution in improved process representation in all components of the climate system is of growing interest, particularly as some recent simulations suggest both the possibility of significant changes in large-scale aspects of circulation as well as improvements in small-scale processes and extremes. However, such high-resolution global simulations at climate timescales, with resolutions of at least 50 km in the atmosphere and 0.25° in the ocean, have been performed at relatively few research centres and generally without overall coordination, primarily due to their computational cost. Assessing the robustness of the response of simulated climate to model resolution requires a large multi-model ensemble using a coordinated set of experiments. The Coupled Model Intercomparison Project 6 (CMIP6) is the ideal framework within which to conduct such a study, due to the strong link to models being developed for the CMIP DECK experiments and other model intercomparison projects (MIPs). Increases in high-performance computing (HPC) resources, as well as the revised experimental design for CMIP6, now enable a detailed investigation of the impact of increased resolution up to synoptic weather scales on the simulated mean climate and its variability. The High Resolution Model Intercomparison Project (HighResMIP) presented in this paper applies, for the first time, a multi-model approach to the systematic investigation of the impact of horizontal resolution. A coordinated set of experiments has been designed to assess both a standard and an enhanced horizontal-resolution simulation in the atmosphere and ocean. The set of HighResMIP experiments is divided into three tiers consisting of atmosphere-only and coupled runs and spanning the period 1950-2050, with the possibility of extending to 2100, together with some additional targeted experiments. This paper describes the experimental set-up of HighResMIP, the analysis plan, the connection with the other CMIP6 endorsed MIPs, as well as the DECK and CMIP6 historical simulations. HighResMIP thereby focuses on one of the CMIP6 broad questions, "what are the origins and consequences of systematic model biases?", but we also discuss how it addresses the World Climate Research Program (WCRP) grand challenges.
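A rough way to see why such high-resolution simulations have been performed at relatively few research centres is the cubic cost scaling of horizontal refinement: refining the grid by a factor r multiplies the number of columns by r², and the CFL condition shortens the stable timestep by a further factor of r. The sketch below is a back-of-envelope estimate only; real costs also depend on vertical resolution, physics packages, I/O, and the machine.

```python
def relative_cost(coarse_km, fine_km):
    """Rough relative cost of a horizontal refinement: r**2 more grid
    columns times an r-times shorter (CFL-limited) timestep ~ r**3."""
    r = coarse_km / fine_km
    return r ** 3

# e.g. moving a ~100 km atmosphere to the ~50 km HighResMIP target:
print(relative_cost(100.0, 50.0))  # -> 8.0, roughly eight times the cost
```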
NASA Astrophysics Data System (ADS)
Spak, S.; Pooley, M.
2012-12-01
The next generation of coupled human and earth systems models promises immense potential and poses grand challenges as these models transition toward new roles as core tools for defining and living within planetary boundaries. New frontiers in community model development include not only computational, organizational, and geophysical process questions, but also the twin objectives of integrating the human dimension more meaningfully and extending applicability to informing policy decisions on a range of new and interconnected issues. We approach these challenges by posing key policy questions that require more comprehensive coupled human and geophysical models, identifying the necessary model and organizational processes and outputs, and working backwards to determine design criteria in response to these needs. We find that modular community earth system model design must:
* seamlessly scale in space (global to urban) and time (nowcasting to paleo-studies), with full coupling across all component systems
* automatically differentiate to provide complete coupled forward and adjoint models for sensitivity studies, optimization applications, and 4DVAR assimilation across Earth and human observing systems (see the sketch after this list)
* incorporate diagnostic tools to quantify uncertainty in couplings, and in how human activity affects them
* integrate accessible community development and application with JIT compilation, cloud computing, game-oriented interfaces, and crowd-sourced problem-solving
We outline accessible near-term objectives toward these goals and describe attempts to incorporate these design objectives in recent pilot activities using atmosphere-land-ocean-biosphere-human models (WRF-Chem, IBIS, UrbanSim) at urban and regional scales for policy applications in climate, energy, and air quality.
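The forward/adjoint pairing called for in the second design criterion can be illustrated on a toy scalar model. The logistic-growth stand-in below is purely hypothetical, chosen only to show how an adjoint propagates sensitivities backwards through a saved trajectory to produce the gradients a 4DVAR system needs; real earth system components would be differentiated automatically, not by hand.

```python
# Toy illustration of a forward model and its hand-derived adjoint.

def forward(x0, r, dt, nsteps):
    """Integrate x' = r*x*(1-x) with forward Euler, saving the trajectory."""
    traj = [x0]
    for _ in range(nsteps):
        x = traj[-1]
        traj.append(x + dt * r * x * (1.0 - x))
    return traj

def adjoint(traj, r, dt):
    """Propagate d(x_final)/d(x_n) backwards through the saved trajectory."""
    lam = 1.0  # sensitivity of the final state to itself
    for x in reversed(traj[:-1]):
        lam *= 1.0 + dt * r * (1.0 - 2.0 * x)  # local Jacobian of one step
    return lam  # = d(x_final)/d(x_0), i.e. the gradient w.r.t. the initial state

traj = forward(0.1, r=1.5, dt=0.1, nsteps=50)
print(adjoint(traj, r=1.5, dt=0.1))
```

The need to store (or recompute) the forward trajectory before the backward sweep is exactly what makes adjoint capability a design-time concern rather than an afterthought.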
Multi scale modeling of 2450MHz electric field effects on microtubule mechanical properties.
Setayandeh, S S; Lohrasebi, A
2016-11-01
Microtubule (MT) rigidity and the response to 2450 MHz electric fields were investigated via a multi-scale modeling approach. Six systems were designed and simulated with all-atom molecular dynamics to cover all feasible types of interaction between the α and β monomers in the MT, and coarse-grained modeling was then used to build MTs of different lengths. Exposure to an external 2450 MHz field reduced MT rigidity, which may perturb MT function. An additional computational setup was designed to study the effect of the 2450 MHz field on the MT response to an AFM tip. At constant tip radius, a higher tip velocity deformed the MT faster, so less time was required for the MT response to change from elastic to plastic; at constant velocity, a smaller tip increased the time required for this elastic-to-plastic transition. Exposing the MT to the 2450 MHz field caused no significant change in the overall MT response to the AFM tip, but the elastic-to-plastic transition occurred more quickly. Copyright © 2016 Elsevier Inc. All rights reserved.
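The elastic-to-plastic transition probed by the simulated AFM tip can be caricatured with a simple bilinear force model. The yield depth, stiffnesses, and helper names below are illustrative assumptions, not parameters from the paper.

```python
# Hedged caricature of an AFM indentation curve with an elastic-to-plastic
# transition; all parameters are illustrative, not taken from the paper.
def indentation_force(depth_nm, yield_depth_nm=2.0,
                      k_elastic=0.3, k_plastic=0.05):
    """Tip force (nN) vs indentation depth: stiff elastic response up to a
    yield depth, then a much softer plastic branch."""
    if depth_nm <= yield_depth_nm:
        return k_elastic * depth_nm
    return (k_elastic * yield_depth_nm
            + k_plastic * (depth_nm - yield_depth_nm))

# The time spent in the elastic regime scales inversely with tip velocity,
# consistent with the reported trend that a faster tip reaches the plastic
# regime sooner:
def time_to_yield(tip_velocity_nm_per_ns, yield_depth_nm=2.0):
    return yield_depth_nm / tip_velocity_nm_per_ns
```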