Multi-Scale Models for the Scale Interaction of Organized Tropical Convection
NASA Astrophysics Data System (ADS)
Yang, Qiu
Assessing the upscale impact of organized tropical convection from small spatial and temporal scales is a research imperative, not only for better understanding the multi-scale structures of dynamical and convective fields in the tropics, but also for eventually helping to design new parameterization strategies to improve next-generation global climate models. Here, self-consistent multi-scale models are derived systematically using multi-scale asymptotic methods and are used to describe the hierarchical structures of tropical atmospheric flows. The advantages of these multi-scale models lie in isolating the essential components of multi-scale interaction and providing, in a transparent fashion, an assessment of the upscale impact of small-scale fluctuations on the large-scale mean flow through eddy flux divergences of momentum and temperature. Specifically, this thesis includes three research projects on the multi-scale interaction of organized tropical convection, involving tropical flows in different scaling regimes and correspondingly utilizing different multi-scale models. Inspired by the observed variability of tropical convection on multiple temporal scales, including daily and intraseasonal time scales, the goal of the first project is to assess the intraseasonal impact of the diurnal cycle on the planetary-scale circulation such as the Hadley cell. As an extension of the first project, the goal of the second project is to assess the intraseasonal impact of the diurnal cycle over the Maritime Continent on the Madden-Julian Oscillation. In the third project, the goals are to simulate the baroclinic aspects of ITCZ breakdown and to assess its upscale impact on the planetary-scale circulation over the eastern Pacific. These simple multi-scale models should be useful for understanding the scale interaction of organized tropical convection and for helping to improve the parameterization of unresolved processes in global climate models.
Parameterizing the Morse Potential for Coarse-Grained Modeling of Blood Plasma
Zhang, Na; Zhang, Peng; Kang, Wei; Bluestein, Danny; Deng, Yuefan
2014-01-01
Multiscale simulations of fluids such as blood represent a major computational challenge of coupling the disparate spatiotemporal scales between molecular and macroscopic transport phenomena characterizing such complex fluids. In this paper, a coarse-grained (CG) particle model is developed for simulating blood flow by modifying the Morse potential, traditionally used in molecular dynamics for modeling vibrating structures. The modified Morse potential is parameterized with effective mass scales to reproduce the viscous flow properties of human blood plasma, including density, pressure, viscosity, compressibility, and characteristic flow dynamics. The parameterization follows a standard inverse-problem approach in which the optimal micro parameters are systematically searched, by gradually decoupling loosely correlated parameter spaces, to match the macro physical quantities of viscous blood flow. The predictions of this particle-based multiscale model compare favorably to classic viscous flow solutions such as Counter-Poiseuille and Couette flows. This demonstrates that such a coarse-grained particle model can replicate the dynamics of viscous blood flow, with the advantage of bridging the gap between the macroscopic flow scales and the cellular scales characterizing blood flow that continuum-based models fail to handle adequately. PMID:24910470
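The CG model above is built on the Morse pair potential; as a point of reference, the sketch below evaluates the standard Morse form and its pair force. The parameter values (De, alpha, r0) are illustrative placeholders, not the fitted blood-plasma parameters or the specific modification described in the paper.

```python
import numpy as np

def morse_potential(r, De=1.0, alpha=1.0, r0=1.0):
    """Standard Morse pair potential U(r) = De * (1 - exp(-alpha*(r - r0)))**2.

    De (well depth), alpha (width), and r0 (equilibrium distance) are
    illustrative placeholders, not the parameterized CG blood-plasma values.
    """
    x = np.exp(-alpha * (r - r0))
    return De * (1.0 - x) ** 2

def morse_force(r, De=1.0, alpha=1.0, r0=1.0):
    """Pair force magnitude F(r) = -dU/dr for the Morse potential."""
    x = np.exp(-alpha * (r - r0))
    return -2.0 * alpha * De * (1.0 - x) * x

r = np.linspace(0.8, 3.0, 5)
print(morse_potential(r))
print(morse_force(r))
```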
Li, Xianfeng; Murthy, N. Sanjeeva; Becker, Matthew L.; Latour, Robert A.
2016-01-01
A multiscale modeling approach is presented for the efficient construction of an equilibrated all-atom model of a cross-linked poly(ethylene glycol) (PEG)-based hydrogel using the all-atom polymer consistent force field (PCFF). The final equilibrated all-atom model was built with a systematic simulation toolset consisting of three consecutive parts: (1) building a global cross-linked PEG-chain network at experimentally determined cross-link density using an on-lattice Monte Carlo method based on the bond fluctuation model, (2) recovering the local molecular structure of the network by transitioning from the lattice model to an off-lattice coarse-grained (CG) model parameterized from PCFF, followed by equilibration using high performance molecular dynamics methods, and (3) recovering the atomistic structure of the network by reverse mapping from the equilibrated CG structure, hydrating the structure with explicitly represented water, followed by final equilibration using PCFF parameterization. The developed three-stage modeling approach has application to a wide range of other complex macromolecular hydrogel systems, including the integration of peptide, protein, and/or drug molecules as side-chains within the hydrogel network for the incorporation of bioactivity for tissue engineering, regenerative medicine, and drug delivery applications. PMID:27013229
Yu, Sungduk; Pritchard, Michael S.
2015-12-17
The effect of global climate model (GCM) time step—which also controls how frequently global and embedded cloud-resolving scales are coupled—is examined in the Superparameterized Community Atmosphere Model version 3.0. Systematic bias reductions of time-mean shortwave cloud forcing (~10 W/m2) and longwave cloud forcing (~5 W/m2) occur as scale coupling frequency increases, but with systematically increasing rainfall variance and extremes throughout the tropics. An overarching change in the vertical structure of deep tropical convection, favoring more bottom-heavy deep convection as the global model time step is reduced, may help orchestrate these responses. The weak temperature gradient approximation is more faithfully satisfied when a high scale coupling frequency (a short global model time step) is used. These findings are distinct from the global model time step sensitivities of conventionally parameterized GCMs and have implications for understanding emergent behaviors of multiscale deep convective organization in superparameterized GCMs. Lastly, the results may also be useful for helping to tune superparameterized GCMs.
NASA Technical Reports Server (NTRS)
Glaessgen, Edward H.; Saether, Erik; Phillips, Dawn R.; Yamakov, Vesselin
2006-01-01
A multiscale modeling strategy is developed to study grain boundary fracture in polycrystalline aluminum. Atomistic simulation is used to model fundamental nanoscale deformation and fracture mechanisms and to develop a constitutive relationship for separation along a grain boundary interface. The nanoscale constitutive relationship is then parameterized within a cohesive zone model to represent variations in grain boundary properties. These variations arise from the presence of vacancies, interstitials, and other defects, in addition to deviations in grain boundary angle from the baseline configuration considered in the molecular dynamics simulation. The parameterized cohesive zone models are then used to model grain boundaries within finite element analyses of aluminum polycrystals.
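The abstract does not give the functional form of the nanoscale traction-separation relationship, so the sketch below uses a generic bilinear cohesive law purely to illustrate how a cohesive zone model is parameterized by a peak traction and critical separations; the values are hypothetical, not the grain-boundary properties derived from the molecular dynamics simulations.

```python
def bilinear_traction(delta, t_max=2.0, d_peak=0.1, d_fail=1.0):
    """Generic bilinear cohesive (traction-separation) law.

    Traction rises linearly to t_max at separation d_peak, then softens
    linearly to zero at d_fail. Units and parameter values are illustrative
    placeholders, not the atomistically derived grain-boundary relationship.
    """
    if delta <= 0.0:
        return 0.0
    if delta < d_peak:
        return t_max * delta / d_peak                        # loading branch
    if delta < d_fail:
        return t_max * (d_fail - delta) / (d_fail - d_peak)  # softening branch
    return 0.0                                               # fully separated

# The fracture energy of this law is the area under the curve: 0.5 * t_max * d_fail.
```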
Multi-Scale Modeling and the Eddy-Diffusivity/Mass-Flux (EDMF) Parameterization
NASA Astrophysics Data System (ADS)
Teixeira, J.
2015-12-01
Turbulence and convection play a fundamental role in many key weather and climate science topics. Unfortunately, current atmospheric models cannot explicitly resolve most turbulent and convective flow. Because of this, turbulence and convection in the atmosphere have to be parameterized - i.e., equations describing the dynamical evolution of the statistical properties of turbulent and convective motions have to be devised. Recently, a variety of different models have been developed that attempt to simulate the atmosphere using variable resolution. A key problem, however, is that parameterizations are in general not explicitly aware of the resolution - the scale-awareness problem. In this context, we will present and discuss a specific approach, the Eddy-Diffusivity/Mass-Flux (EDMF) parameterization, which not only is in itself a multi-scale parameterization but is also particularly well suited to deal with the scale-awareness problems that plague current variable-resolution models. It does so by representing small-scale turbulence using a classic eddy-diffusivity (ED) method, and the larger-scale (boundary layer and tropospheric-scale) eddies as a variety of plumes using the mass-flux (MF) concept.
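At its core, EDMF decomposes a turbulent flux into a local eddy-diffusivity part and a nonlocal mass-flux part, roughly w'phi' ~ -K dphi/dz + M (phi_updraft - phi_mean). A minimal single-plume sketch of that decomposition is given below; the profiles of K, M, and the updraft excess are assumed to be supplied by the host scheme.

```python
import numpy as np

def edmf_scalar_flux(z, phi_mean, K, M, phi_updraft):
    """Eddy-Diffusivity/Mass-Flux decomposition of a scalar flux:

        w'phi'(z) ~ -K(z) * d(phi_mean)/dz + M(z) * (phi_updraft(z) - phi_mean(z))

    All inputs are 1-D arrays on model levels z. This single-plume form is a
    sketch; multi-plume EDMF schemes sum additional mass-flux terms.
    """
    ed_part = -K * np.gradient(phi_mean, z)      # local, downgradient mixing
    mf_part = M * (phi_updraft - phi_mean)       # nonlocal plume transport
    return ed_part + mf_part
```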
Following the examination and evaluation of 12 nucleation parameterizations presented in part 1, 11 of them representing binary, ternary, kinetic, and cluster‐activated nucleation theories are evaluated in the U.S. Environmental Protection Agency Community Multiscale Air Quality ...
The uploaded data consists of the BRACE Na aerosol observations paired with CMAQ model output, the updated model's parameterization of sea salt aerosol emission size distribution, and the model's parameterization of the sea salt emission factor as a function of sea surface temperature. This dataset is associated with the following publication: Gantt, B., J. Kelly, and J. Bash. Updating sea spray aerosol emissions in the Community Multiscale Air Quality (CMAQ) model version 5.0.2. Geoscientific Model Development. Copernicus Publications, Katlenburg-Lindau, Germany, 8: 3733-3746, (2015).
Hierarchical coarse-graining strategy for protein-membrane systems to access mesoscopic scales
Ayton, Gary S.; Lyman, Edward
2014-01-01
An overall multiscale simulation strategy for large-scale coarse-grain simulations of membrane protein systems is presented. The protein is modeled as a heterogeneous elastic network, while the lipids are modeled using the hybrid analytic-systematic (HAS) methodology; in both cases, atomistic-level information obtained from molecular dynamics simulation is used to parameterize the model. A feature of this approach is that liposome length scales are employed in the simulation from the outset (i.e., on the order of half a million lipids plus protein). A route to develop highly coarse-grained models from molecular-scale information is proposed and results for N-BAR domain protein remodeling of a liposome are presented. PMID:20158037
A Goddard Multi-Scale Modeling System with Unified Physics
NASA Technical Reports Server (NTRS)
Tao, W.K.; Anderson, D.; Atlas, R.; Chern, J.; Houser, P.; Hou, A.; Lang, S.; Lau, W.; Peters-Lidard, C.; Kakar, R.;
2008-01-01
Numerical cloud-resolving models (CRMs), which are based on the non-hydrostatic equations of motion, have been extensively applied to cloud-scale and mesoscale processes during the past four decades. Recent GEWEX Cloud System Study (GCSS) model comparison projects have indicated that CRMs agree with observations in simulating various types of clouds and cloud systems from different geographic locations. Cloud-resolving models now provide statistical information useful for developing more realistic physically based parameterizations for climate models and numerical weather prediction models. It is also expected that numerical weather prediction (NWP) and regional-scale models can be run at grid sizes similar to cloud-resolving models through nesting techniques. Current and future NASA satellite programs can provide cloud, precipitation, aerosol and other data at very fine spatial and temporal scales. Using these satellite data to improve the understanding of the physical processes responsible for the variation in global and regional climate and hydrological systems requires a coupled global circulation model (GCM) and cloud-scale model (termed a super-parameterization or multi-scale modeling framework, MMF). The use of a GCM enables global coverage, and the use of a CRM allows for better and more sophisticated physical parameterization. NASA satellites and field campaigns can provide initial conditions as well as validation through the use of Earth satellite simulators. At Goddard, we have developed a multi-scale modeling system with unified physics. The modeling system consists of a coupled GCM-CRM (or MMF), a state-of-the-art Weather Research and Forecasting (WRF) model, and a cloud-resolving model (the Goddard Cumulus Ensemble model). In these models, the same microphysical schemes (2ICE, several 3ICE), radiation (including explicitly calculated cloud optical properties), and surface models are applied. In addition, a comprehensive unified Earth satellite simulator has been developed at GSFC, which is designed to fully utilize the multi-scale modeling system. A brief review of the multi-scale modeling system with unified physics/simulator and examples is presented in this article.
NASA Technical Reports Server (NTRS)
Rasool, Quazi Z.; Zhang, Rui; Lash, Benjamin; Cohan, Daniel S.; Cooter, Ellen J.; Bash, Jesse O.; Lamsal, Lok N.
2016-01-01
Modeling of soil nitric oxide (NO) emissions is highly uncertain and may misrepresent its spatial and temporal distribution. This study builds upon a recently introduced parameterization to improve the timing and spatial distribution of soil NO emission estimates in the Community Multiscale Air Quality (CMAQ) model. The parameterization considers soil parameters, meteorology, land use, and mineral nitrogen (N) availability to estimate NO emissions. We incorporate daily year-specific fertilizer data from the Environmental Policy Integrated Climate (EPIC) agricultural model to replace the annual generic data of the initial parameterization, and use a 12 km resolution soil biome map over the continental USA. CMAQ modeling for July 2011 shows slight differences in model performance in simulating fine particulate matter and ozone at Interagency Monitoring of Protected Visual Environments (IMPROVE) and Clean Air Status and Trends Network (CASTNET) sites and NO2 columns from Ozone Monitoring Instrument (OMI) satellite retrievals. We also simulate how the change in the soil NO emissions scheme affects the expected O3 response to projected emissions reductions.
NASA Astrophysics Data System (ADS)
Liu, C.; Yang, X.; Bailey, V. L.; Bond-Lamberty, B. P.; Hinkle, C.
2013-12-01
Mathematical representations of hydrological and biogeochemical processes in soil, plant, aquatic, and atmospheric systems vary with scale. Process-rich models are typically used to describe hydrological and biogeochemical processes at the pore and small scales, while empirical, correlation-based approaches are often used at the watershed and regional scales. A major challenge for multi-scale modeling is that water flow, biogeochemical processes, and reactive transport are described using different physical laws and/or expressions at the different scales. For example, flow is governed by the Navier-Stokes equations at the pore scale in soils, by Darcy's law in soil columns and aquifers, and by the Navier-Stokes equations again in open water bodies (ponds, lakes, rivers) and the atmospheric surface layer. This research explores whether the physical laws at the different scales and in different physical domains can be unified to form a unified multi-scale model (UMSM) to systematically investigate the cross-scale, cross-domain behavior of fundamental processes at different scales. This presentation will discuss our research on the concept, mathematical equations, and numerical execution of the UMSM. Three-dimensional, multi-scale hydrological processes at the Disney Wilderness Preservation (DWP) site, Florida, will be used as an example to demonstrate the application of the UMSM. In this research, the UMSM was used to simulate hydrological processes in rooting zones at the pore and small scales, including water migration in soils under saturated and unsaturated conditions, root-induced hydrological redistribution, and the role of rooting-zone biogeochemical properties (e.g., root exudates and microbial mucilage) in water storage and wetting/draining. The small-scale simulation results were used to estimate effective water retention properties in soil columns that were superimposed on the bulk soil water retention properties at the DWP site. The UMSM parameterized from the smaller-scale simulations was then used to simulate coupled flow and moisture migration in soils in saturated and unsaturated zones, surface and groundwater exchange, and surface water flow in streams and lakes at the DWP site under dynamic precipitation conditions. Laboratory measurements of soil hydrological and biogeochemical properties are used to parameterize the UMSM at the small scales, and field measurements are used to evaluate the UMSM.
NASA Astrophysics Data System (ADS)
Sanyal, Tanmoy; Shell, M. Scott
2016-07-01
Bottom-up multiscale techniques are frequently used to develop coarse-grained (CG) models for simulations at extended length and time scales but are often limited by a compromise between computational efficiency and accuracy. The conventional approach to CG nonbonded interactions uses pair potentials which, while computationally efficient, can neglect the inherently multibody contributions of the local environment of a site to its energy, due to degrees of freedom that were coarse-grained out. This effect often causes the CG potential to depend strongly on the overall system density, composition, or other properties, which limits its transferability to states other than the one at which it was parameterized. Here, we propose to incorporate multibody effects into CG potentials through additional nonbonded terms, beyond pair interactions, that depend in a mean-field manner on local densities of different atomic species. This approach is analogous to embedded atom and bond-order models that seek to capture multibody electronic effects in metallic systems. We show that the relative entropy coarse-graining framework offers a systematic route to parameterizing such local density potentials. We then characterize this approach in the development of implicit solvation strategies for interactions between model hydrophobes in an aqueous environment.
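A hedged sketch of the local-density idea follows: each CG site accumulates a smoothed count of neighbors within a cutoff, and a mean-field function of that density is added on top of the pair energy. The cubic switching function and quadratic density function below are illustrative stand-ins for the tabulated forms that would be fit by relative-entropy minimization.

```python
import numpy as np

def local_densities(positions, cutoff):
    """Smoothed local density rho_i for each site: a sum over neighbors of an
    indicator function that switches from 1 to 0 at the cutoff (illustrative
    cubic switch). positions is an (N, 3) array of CG site coordinates."""
    n = len(positions)
    rho = np.zeros(n)
    for i in range(n):
        d = np.linalg.norm(positions - positions[i], axis=1)
        d[i] = np.inf                                    # exclude self
        x = np.clip(d / cutoff, 0.0, 1.0)
        rho[i] = np.sum(1.0 - 3.0 * x**2 + 2.0 * x**3)   # smooth cutoff
    return rho

def local_density_energy(rho, a=0.1, rho0=4.0):
    """Mean-field many-body energy added to the CG pair potential. The
    quadratic form in (rho - rho0) is a placeholder for the function that
    relative-entropy coarse-graining would parameterize."""
    return np.sum(a * (rho - rho0) ** 2)
```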
Multiscale Cloud System Modeling
NASA Technical Reports Server (NTRS)
Tao, Wei-Kuo; Moncrieff, Mitchell W.
2009-01-01
The central theme of this paper is to describe how cloud system resolving models (CRMs) with grid spacing of approximately 1 km have been applied to various important problems in atmospheric science across a wide range of spatial and temporal scales, and how these applications relate to other modeling approaches. A long-standing problem concerns the representation of organized precipitating convective cloud systems in weather and climate models. Since CRMs resolve the mesoscale to large scales of motion (i.e., 10 km to global), they explicitly address the cloud system problem. By explicitly representing organized convection, CRMs bypass restrictive assumptions associated with convective parameterization, such as the scale gap between cumulus and large-scale motion. Dynamical models provide insight into the physical mechanisms involved with scale interaction and convective organization. Multiscale CRMs simulate convective cloud systems in computational domains up to global and have been applied in place of contemporary convective parameterizations in global models. Multiscale CRMs pose a new challenge for model validation, which is met in an integrated approach involving CRMs, operational prediction systems, observational measurements, and dynamical models in a new international project: the Year of Tropical Convection, which has an emphasis on organized tropical convection and its global effects.
Land-Atmosphere Coupling in the Multi-Scale Modelling Framework
NASA Astrophysics Data System (ADS)
Kraus, P. M.; Denning, S.
2015-12-01
The Multi-Scale Modeling Framework (MMF), in which cloud-resolving models (CRMs) are embedded within general circulation model (GCM) gridcells to serve as the model's cloud parameterization, has offered a number of benefits to GCM simulations. The coupling of these cloud-resolving models directly to land surface model instances, rather than passing averaged atmospheric variables to a single instance of a land surface model - the logical next step in model development - has recently been accomplished. This new configuration offers conspicuous improvements to estimates of precipitation and canopy through-fall, but overall the model exhibits warm surface temperature biases and low productivity. This work presents modifications to a land-surface model that take advantage of the new multi-scale modeling framework and accommodate the change in spatial scale from a typical GCM range of ~200 km to the CRM grid scale of 4 km. A parameterization is introduced to apportion modeled surface radiation into direct-beam and diffuse components. The diffuse component is then distributed among the land-surface model instances within each GCM cell domain. This substantially reduces the number of excessively low light values provided to the land-surface model when cloudy conditions are modeled in the CRM, associated with its 1-D radiation scheme. The small spatial scale of the CRM, ~4 km, as compared with the typical ~200 km GCM scale, provides much more realistic estimates of precipitation intensity; this permits the elimination of a model parameterization of canopy through-fall. However, runoff at such scales can no longer be considered an immediate flow to the ocean. Allowing sub-surface water flow between land-surface instances within the GCM domain affords better realism and also reduces temperature and productivity biases. The MMF affords a number of opportunities to land-surface modelers, providing both the advantages of direct simulation at the 4 km scale and a much reduced conceptual gap between model resolution and parameterized processes.
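A minimal sketch of the radiation-apportionment step described above, assuming the CRM supplies per-column downwelling shortwave and a diagnosed diffuse fraction; the uniform redistribution rule and the function names are illustrative, not the scheme's actual implementation.

```python
import numpy as np

def apportion_surface_radiation(sw_down_columns, diffuse_fraction):
    """Split each CRM column's surface shortwave into direct and diffuse parts,
    then spread the diffuse part evenly across all land-surface instances in
    the GCM cell (a simple illustrative rule; the actual MMF scheme may
    weight the redistribution differently)."""
    sw = np.asarray(sw_down_columns, dtype=float)
    direct = sw * (1.0 - diffuse_fraction)            # stays with its own column
    diffuse_shared = np.sum(sw * diffuse_fraction) / sw.size
    return direct + diffuse_shared                    # per-instance surface SW
```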
Improvement and Extension of Shape Evaluation Criteria in Multi-Scale Image Segmentation
NASA Astrophysics Data System (ADS)
Sakamoto, M.; Honda, Y.; Kondo, A.
2016-06-01
Over the last decade, multi-scale image segmentation has attracted particular interest and is being used in practice for object-based image analysis. In this study, we have addressed issues in multi-scale image segmentation, especially in improving the validity of merging and the variety of derived region shapes. Firstly, we have introduced constraints on the application of the spectral criterion that suppress excessive merging between dissimilar regions. Secondly, we have extended the evaluation of the smoothness criterion by modifying the definition of the extent of the object, which was introduced to control shape diversity. Thirdly, we have developed a new shape criterion, the aspect ratio. This criterion helps improve how well the shape of a derived object reproduces the actual object of interest. It constrains the aspect ratio of the object's bounding box while preserving the properties controlled by the conventional shape criteria. These improvements and extensions lead to more accurate, flexible, and diverse segmentation results according to the shape characteristics of the target of interest. Furthermore, we also investigated a technique for quantitative and automatic parameterization in multi-scale image segmentation. This approach compares the segmentation result with a training area specified in advance, either maximizing the average area of derived objects or satisfying the F-measure evaluation index. Thus, it has been possible to automate a parameterization suited to the objectives, especially from the viewpoint of shape reproducibility.
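A sketch of the proposed aspect-ratio criterion, assuming it is evaluated from the object's axis-aligned bounding box as described; the normalization to (0, 1] and how the score is weighted against the other shape criteria are not specified in the abstract.

```python
def aspect_ratio_criterion(pixel_coords):
    """Aspect ratio of a segmented region's axis-aligned bounding box.

    pixel_coords: iterable of (row, col) pixels belonging to the region.
    Returns min(height, width) / max(height, width), i.e. 1.0 for a square
    bounding box; the weighting against other shape criteria is assumed here.
    """
    rows = [r for r, _ in pixel_coords]
    cols = [c for _, c in pixel_coords]
    height = max(rows) - min(rows) + 1
    width = max(cols) - min(cols) + 1
    return min(height, width) / max(height, width)

print(aspect_ratio_criterion([(0, 0), (0, 3), (1, 1)]))  # 2 x 4 box -> 0.5
```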
NASA Astrophysics Data System (ADS)
Pritchard, M. S.; Kooperman, G. J.; Zhao, Z.; Wang, M.; Russell, L. M.; Somerville, R. C.; Ghan, S. J.
2011-12-01
Evaluating the fidelity of new aerosol physics in climate models is confounded by uncertainties in source emissions, systematic error in cloud parameterizations, and inadequate sampling of long-range plume concentrations. To explore the degree to which cloud parameterizations distort aerosol processing and scavenging, the Pacific Northwest National Laboratory (PNNL) Aerosol-Enabled Multi-Scale Modeling Framework (AE-MMF), a superparameterized branch of the Community Atmosphere Model Version 5 (CAM5), is applied to represent the unusually active and well sampled North American wildfire season in 2004. In the AE-MMF approach, the evolution of double moment aerosols in the exterior global resolved scale is linked explicitly to convective statistics harvested from an interior cloud resolving scale. The model is configured in retroactive nudged mode to observationally constrain synoptic meteorology, and Arctic wildfire activity is prescribed at high space/time resolution using data from the Global Fire Emissions Database. Comparisons against standard CAM5 bracket the effect of superparameterization to isolate the role of capturing rainfall intermittency on the bulk characteristics of 2004 Arctic plume transport. Ground based lidar and in situ aircraft wildfire plume constraints from the International Consortium for Atmospheric Research on Transport and Transformation field campaign are used as a baseline for model evaluation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Larson, Vincent
2016-11-25
The Multiscale Modeling Framework (MMF) embeds a cloud-resolving model in each grid column of a General Circulation Model (GCM). An MMF model does not need to use a deep convective parameterization, and thereby dispenses with the uncertainties in such parameterizations. However, MMF models grossly under-resolve shallow boundary-layer clouds, and hence those clouds may still benefit from parameterization. In this grant, we successfully created a climate model that embeds a cloud parameterization (“CLUBB”) within an MMF model. This involved interfacing CLUBB's clouds with microphysics and reducing computational cost. We have evaluated the resulting simulated clouds and precipitation with satellite observations. The chief benefit of the project is to provide an MMF model that has an improved representation of clouds and that provides improved simulations of precipitation.
Estimation of NH3 Bi-Directional Flux from Managed Agricultural Soils
The Community Multi-Scale Air Quality model (CMAQ v4.7) contains a bi-directional ammonia (NH3) flux option that computes emission and deposition of ammonia derived from commercial fertilizer via a temperature dependent parameterization of canopy and soil compensation ...
The U.S. Environmental Protection Agency (U.S. EPA) is extending its Models-3/Community Multiscale Air Quality (CMAQ) Modeling System to provide detailed gridded air quality concentration fields and sub-grid variability characterization at neighborhood scales and in urban areas...
A new windblown dust emission treatment was incorporated in the Community Multiscale Air Quality (CMAQ) modeling system. This new model treatment has been built upon previously developed physics-based parameterization schemes from the literature. A distinct and novel feature of t...
Short-term Wind Forecasting at Wind Farms using WRF-LES and Actuator Disk Model
NASA Astrophysics Data System (ADS)
Kirkil, Gokhan
2017-04-01
Short-term wind forecasts are obtained for a wind farm on mountainous terrain using WRF-LES. Multi-scale simulations are also performed using different PBL parameterizations. Turbines are parameterized using an actuator disc model. The LES models improved the forecasts. Statistical error analysis is performed and ramp events are analyzed. The complex topography of the study area affects model performance; in particular, the accuracy of wind forecasts was poor for cross valley-mountain flows. By means of LES, we gain new knowledge about the sources of spatial and temporal variability of wind fluctuations, such as the configuration of the wind turbines.
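The actuator disc model represents each turbine as a momentum sink distributed over the rotor area; the sketch below computes the corresponding thrust from a reference wind speed, with the thrust coefficient and air density as illustrative placeholders rather than values from the turbines simulated in this study.

```python
import math

RHO_AIR = 1.225  # kg m^-3, illustrative constant air density

def actuator_disc_thrust(u_ref, rotor_diameter, ct=0.75):
    """Total thrust T = 0.5 * rho * CT * A * U^2 exerted by one turbine on the
    flow. In an LES this force is distributed over the grid cells that the
    rotor disc intersects and applied with opposite sign to the momentum
    equation (distribution step not shown)."""
    area = math.pi * (0.5 * rotor_diameter) ** 2
    return 0.5 * RHO_AIR * ct * area * u_ref ** 2

print(actuator_disc_thrust(u_ref=10.0, rotor_diameter=100.0))  # thrust in N
```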
Numerical Simulations of a Multiscale Model of Stratified Langmuir Circulation
NASA Astrophysics Data System (ADS)
Malecha, Ziemowit; Chini, Gregory; Julien, Keith
2012-11-01
Langmuir circulation (LC), a prominent form of wind and surface-wave driven shear turbulence in the ocean surface boundary layer (BL), is commonly modeled using the Craik-Leibovich (CL) equations, a phase-averaged variant of the Navier-Stokes (NS) equations. Although surface-wave filtering renders the CL equations more amenable to simulation than are the instantaneous NS equations, simulations in wide domains, hundreds of times the BL depth, currently earn the "grand challenge" designation. To facilitate simulations of LC in such spatially-extended domains, we have derived multiscale CL equations by exploiting the scale separation between submesoscale and BL flows in the upper ocean. The numerical algorithm for simulating this multiscale model resembles super-parameterization schemes used in meteorology, but retains a firm mathematical basis. We have validated our algorithm and here use it to perform multiscale simulations of the interaction between LC and upper ocean density stratification. ZMM, GPC, and KJ gratefully acknowledge funding from NSF CMG Award 0934827.
High performance computing (HPC) requirements for the new generation variable grid resolution (VGR) global climate models differ from that of traditional global models. A VGR global model with 15 km grids over the CONUS stretching to 60 km grids elsewhere will have about ~2.5 tim...
Global sensitivity analysis of multiscale properties of porous materials
NASA Astrophysics Data System (ADS)
Um, Kimoon; Zhang, Xuan; Katsoulakis, Markos; Plechac, Petr; Tartakovsky, Daniel M.
2018-02-01
Ubiquitous uncertainty about pore geometry inevitably undermines the veracity of pore- and multi-scale simulations of transport phenomena in porous media. It raises two fundamental issues: sensitivity of effective material properties to pore-scale parameters and statistical parameterization of Darcy-scale models that accounts for pore-scale uncertainty. Homogenization-based maps of pore-scale parameters onto their Darcy-scale counterparts facilitate both sensitivity analysis (SA) and uncertainty quantification. We treat uncertain geometric characteristics of a hierarchical porous medium as random variables to conduct global SA and to derive probabilistic descriptors of effective diffusion coefficients and effective sorption rate. Our analysis is formulated in terms of solute transport diffusing through a fluid-filled pore space, while sorbing to the solid matrix. Yet it is sufficiently general to be applied to other multiscale porous media phenomena that are amenable to homogenization.
Evaluating and Improving Cloud Processes in the Multi-Scale Modeling Framework
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ackerman, Thomas P.
2015-03-01
The research performed under this grant was intended to improve the embedded cloud model in the Multi-scale Modeling Framework (MMF) for convective clouds by using a 2-moment microphysics scheme rather than the single-moment scheme used in all the MMF runs to date. The technical report and associated documents describe the results of testing the cloud-resolving model with fixed boundary conditions and the evaluation of model results against data. The overarching conclusion is that such model evaluations are problematic because errors in the forcing fields control the results so strongly that variations in parameterization values cannot be usefully constrained.
NASA Technical Reports Server (NTRS)
Moncrieff, Mitchell
2003-01-01
The two studies summarized below represent the results of a one-year extension to the original award grant. These studies involve cloud-resolving simulation, theory, and parameterization of multi-scale convective systems in the Tropics. This work is a contribution to the basic scientific objectives of TRMM and the prospective NASA Global Precipitation Mission.
NASA Astrophysics Data System (ADS)
Pathiraja, S. D.; van Leeuwen, P. J.
2017-12-01
Model uncertainty quantification remains one of the central challenges of effective data assimilation (DA) in complex, partially observed, non-linear systems. Stochastic parameterization methods have been proposed in recent years as a means of capturing the uncertainty associated with unresolved sub-grid scale processes. Such approaches generally require some knowledge of the true sub-grid scale process or rely on full observations of the larger-scale resolved process. We present a methodology for estimating the statistics of sub-grid scale processes using only partial observations of the resolved process. It finds model error realisations over a training period by minimizing their conditional variance, constrained by available observations. A distinctive feature is that these realisations are binned, conditioned on the previous model state, during the minimization process, allowing for the recovery of complex error structures. The efficacy of the approach is demonstrated through numerical experiments on the multi-scale Lorenz '96 model. We consider different parameterizations of the model with both small and large time-scale separations between slow and fast variables. Results are compared to two existing methods for accounting for model uncertainty in DA and shown to provide improved analyses and forecasts.
NASA Astrophysics Data System (ADS)
Pierson, Kyle D.; Hochhalter, Jacob D.; Spear, Ashley D.
2018-05-01
Systematic correlation analysis was performed between simulated micromechanical fields in an uncracked polycrystal and the known path of an eventual fatigue-crack surface based on experimental observation. Concurrent multiscale finite-element simulation of cyclic loading was performed using a high-fidelity representation of grain structure obtained from near-field high-energy x-ray diffraction microscopy measurements. An algorithm was developed to parameterize and systematically correlate the three-dimensional (3D) micromechanical fields from simulation with the 3D fatigue-failure surface from experiment. For comparison, correlation coefficients were also computed between the micromechanical fields and hypothetical, alternative surfaces. The correlation of the fields with hypothetical surfaces was found to be consistently weaker than that with the known crack surface, suggesting that the micromechanical fields of the cyclically loaded, uncracked microstructure might provide some degree of predictiveness for microstructurally small fatigue-crack paths, although the extent of such predictiveness remains to be tested. In general, gradients of the field variables exhibit stronger correlations with crack path than the field variables themselves. Results from the data-driven approach implemented here can be leveraged in future model development for prediction of fatigue-failure surfaces (for example, to facilitate univariate feature selection required by convolution-based models).
NASA Astrophysics Data System (ADS)
Phillips, M.; Denning, A. S.; Randall, D. A.; Branson, M.
2016-12-01
Multi-scale models of the atmosphere provide an opportunity to investigate processes that are unresolved by traditional Global Climate Models while at the same time remaining viable in terms of computational resources for climate-length time scales. The Multi-scale Modeling Framework (MMF) represents a shift away from the large horizontal grid spacing in traditional GCMs, which leads to overabundant light precipitation and a lack of heavy events, toward a model where precipitation intensity is allowed to vary over a much wider range of values. Resolving atmospheric motions on the scale of 4 km makes it possible to recover features of precipitation, such as intense downpours, that were previously only obtained by computationally expensive regional simulations. These heavy precipitation events may have little impact on large-scale moisture and energy budgets, but are outstanding in terms of interaction with the land surface and potential impact on human life. Three versions of the Community Earth System Model were used in this study: the standard CESM, the multi-scale "Super-Parameterized" CESM, where large-scale parameterizations have been replaced with a 2D cloud-permitting model, and a multi-instance land version of the SP-CESM, where each column of the 2D CRM is allowed to interact with an individual land unit. These simulations were carried out using prescribed sea surface temperatures for the period 1979-2006, with daily precipitation saved for all 28 years. Comparisons of the statistical properties of precipitation between model architectures and against observations from rain gauges were made, with specific focus on the detection and evaluation of extreme precipitation events.
Prototype MCS Parameterization for Global Climate Models
NASA Astrophysics Data System (ADS)
Moncrieff, M. W.
2017-12-01
Excellent progress has been made with observational, numerical, and theoretical studies of mesoscale convective system (MCS) processes, but the parameterization of those processes remains in a dire state and is missing from GCMs. The perceived complexity of the distribution, type, and intensity of organized precipitation systems has arguably daunted attention and stifled the development of adequate parameterizations. TRMM observations imply links between convective organization and large-scale meteorological features in the tropics and subtropics that are inadequately treated by GCMs. This calls for improved physical-dynamical treatment of organized convection to enable the next generation of GCMs to reliably address a slew of challenges. The multiscale coherent structure parameterization (MCSP) paradigm is based on the fluid-dynamical concept of coherent structures in turbulent environments. The effects of vertical shear on MCS dynamics, implemented as second-baroclinic convective heating and convective momentum transport, are based on Lagrangian conservation principles, nonlinear dynamical models, and self-similarity. The prototype MCS parameterization, a minimalist proof-of-concept, is applied in the NCAR Community Atmosphere Model, version 5.5 (CAM 5.5). The MCSP generates convectively coupled tropical waves and large-scale precipitation features, notably in the Indo-Pacific warm-pool and Maritime Continent region, a center of action for weather and climate variability around the globe.
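The MCSP adds a second-baroclinic component of convective heating (with an analogous momentum-transport term) on top of the host model's convective tendency; a schematic sketch of such a heating profile follows, with the amplitude and its dependence on vertical shear left as placeholders since the abstract does not state the closure.

```python
import numpy as np

def second_baroclinic_heating(z, z_top, amplitude=1.0):
    """Schematic second-baroclinic heating profile Q2(z) ~ sin(2*pi*z/H):
    heating in one half of the troposphere and cooling in the other, mimicking
    the top-heavy/bottom-heavy dipole of organized mesoscale convection. In the
    actual MCSP the amplitude would be tied to the parameterized convective
    heating and the vertical shear, which is not specified here."""
    z = np.asarray(z, dtype=float)
    return amplitude * np.sin(2.0 * np.pi * z / z_top)
```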
NASA Astrophysics Data System (ADS)
Engquist, Björn; Frederick, Christina; Huynh, Quyen; Zhou, Haomin
2017-06-01
We present a multiscale approach for identifying features in ocean beds by solving inverse problems in high frequency seafloor acoustics. The setting is based on Sound Navigation And Ranging (SONAR) imaging used in scientific, commercial, and military applications. The forward model incorporates multiscale simulations, by coupling Helmholtz equations and geometrical optics for a wide range of spatial scales in the seafloor geometry. This allows for detailed recovery of seafloor parameters including material type. Simulated backscattered data is generated using numerical microlocal analysis techniques. In order to lower the computational cost of the large-scale simulations in the inversion process, we take advantage of a pre-computed library of representative acoustic responses from various seafloor parameterizations.
A Hybrid Multiscale Framework for Subsurface Flow and Transport Simulations
Scheibe, Timothy D.; Yang, Xiaofan; Chen, Xingyuan; ...
2015-06-01
Extensive research efforts have been invested in reducing model errors to improve the predictive ability of biogeochemical earth and environmental system simulators, with applications ranging from contaminant transport and remediation to impacts of biogeochemical elemental cycling (e.g., carbon and nitrogen) on local ecosystems and regional to global climate. While the bulk of this research has focused on improving model parameterizations in the face of observational limitations, the more challenging type of model error/uncertainty to identify and quantify is model structural error, which arises from incorrect mathematical representations of (or failure to consider) important physical, chemical, or biological processes, properties, or system states in model formulations. While improved process understanding can be achieved through scientific study, such understanding is usually developed at small scales. Process-based numerical models are typically designed for a particular characteristic length and time scale. For application-relevant scales, it is generally necessary to introduce approximations and empirical parameterizations to describe complex systems or processes. This single-scale approach has been the best available to date because of limited understanding of process coupling combined with practical limitations on system characterization and computation. While computational power is increasing significantly and our understanding of biological and environmental processes at fundamental scales is accelerating, using this information to advance our knowledge of larger system behavior requires the development of multiscale simulators. Accordingly, there has been much recent interest in novel multiscale methods in which microscale and macroscale models are explicitly coupled in a single hybrid multiscale simulation. A limited number of hybrid multiscale simulations have been developed for biogeochemical earth systems, but they mostly utilize application-specific and sometimes ad hoc approaches for model coupling. We are developing a generalized approach to hierarchical model coupling designed for high-performance computational systems, based on the Swift computing workflow framework. In this presentation we will describe the generalized approach and provide two use cases: 1) simulation of a mixing-controlled biogeochemical reaction coupling pore- and continuum-scale models, and 2) simulation of biogeochemical impacts of groundwater – river water interactions coupling fine- and coarse-grid model representations. This generalized framework can be customized for use with any pair of linked models (microscale and macroscale) with minimal intrusiveness to the at-scale simulators. It combines a set of python scripts with the Swift workflow environment to execute a complex multiscale simulation utilizing an approach similar to the well-known Heterogeneous Multiscale Method. User customization is facilitated through user-provided input and output file templates and processing function scripts, and execution within a high-performance computing environment is handled by Swift, such that minimal to no user modification of at-scale codes is required.
NASA Technical Reports Server (NTRS)
Cheng, Anning; Xu, Kuan-Man
2015-01-01
Five-year simulation experiments with a multi-scale modeling framework (MMF) with an advanced intermediately prognostic higher-order turbulence closure (IPHOC) in its cloud-resolving model (CRM) component, also known as SPCAM-IPHOC (superparameterized Community Atmosphere Model), are performed to understand the fast tropical (30S-30N) cloud response to an instantaneous doubling of CO2 concentration with SST held fixed at present-day values. SPCAM-IPHOC has a substantially improved representation of low-level clouds compared with SPCAM. It is therefore expected that the cloud responses to greenhouse warming in SPCAM-IPHOC are more realistic. The changes in rising motion, surface precipitation, cloud cover, and shortwave and longwave cloud radiative forcing in SPCAM-IPHOC under greenhouse warming will be presented.
A multi-scale study of the adsorption of lanthanum on the (110) surface of tungsten
NASA Astrophysics Data System (ADS)
Samin, Adib J.; Zhang, Jinsuo
2016-07-01
In this study, we utilize a multi-scale approach to studying lanthanum adsorption on the (110) plane of tungsten. The energy of the system is described from density functional theory calculations within the framework of the cluster expansion method. It is found that including two-body figures up to the sixth nearest neighbor yielded a reasonable agreement with density functional theory calculations as evidenced by the reported cross validation score. The results indicate that the interaction between the adsorbate atoms in the adlayer is important and cannot be ignored. The parameterized cluster expansion expression is used in a lattice gas Monte Carlo simulation in the grand canonical ensemble at 773 K and the adsorption isotherm is recorded. Implications of the obtained results for the pyroprocessing application are discussed.
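A minimal sketch of the lattice-gas grand canonical Monte Carlo step implied above: site occupancies on a periodic lattice are flipped with the Metropolis rule at fixed chemical potential. A single nearest-neighbor pair energy stands in for the full cluster-expansion Hamiltonian (which in the study includes pair figures out to the sixth nearest neighbor on W(110)), and all numerical values are illustrative.

```python
import numpy as np

def gcmc_sweep(occ, beta, mu, eps_nn, rng):
    """One grand canonical Monte Carlo sweep over a periodic square lattice.

    occ    : 2-D array of 0/1 site occupancies
    beta   : 1 / (kB * T)
    mu     : adsorbate chemical potential
    eps_nn : nearest-neighbor pair interaction (placeholder for the full
             cluster-expansion Hamiltonian fit to DFT energies)
    """
    n, m = occ.shape
    for _ in range(n * m):
        i, j = rng.integers(n), rng.integers(m)
        nn_sum = (occ[(i + 1) % n, j] + occ[(i - 1) % n, j]
                  + occ[i, (j + 1) % m] + occ[i, (j - 1) % m])
        d_occ = 1 - 2 * occ[i, j]            # +1 attempts insertion, -1 deletion
        d_energy = eps_nn * nn_sum * d_occ   # change in pair energy
        # Metropolis acceptance at fixed chemical potential
        if rng.random() < np.exp(-beta * (d_energy - mu * d_occ)):
            occ[i, j] += d_occ
    return occ

# Illustrative coverage at one (beta, mu) point; repeating over a range of mu
# values traces out an adsorption isotherm.
rng = np.random.default_rng(0)
occ = rng.integers(0, 2, size=(32, 32))
for _ in range(200):
    occ = gcmc_sweep(occ, beta=1.0, mu=-0.5, eps_nn=-0.2, rng=rng)
print("coverage:", occ.mean())
```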
Performance of the Goddard Multiscale Modeling Framework with Goddard Ice Microphysical Schemes
NASA Technical Reports Server (NTRS)
Chern, Jiun-Dar; Tao, Wei-Kuo; Lang, Stephen E.; Matsui, Toshihisa; Li, J.-L.; Mohr, Karen I.; Skofronick-Jackson, Gail M.; Peters-Lidard, Christa D.
2016-01-01
The multiscale modeling framework (MMF), which replaces traditional cloud parameterizations with cloud-resolving models (CRMs) within a host atmospheric general circulation model (GCM), has become a new approach for climate modeling. The embedded CRMs make it possible to apply CRM-based cloud microphysics directly within a GCM. However, most such schemes have never been tested in a global environment for long-term climate simulation. The benefits of using an MMF to evaluate rigorously and improve microphysics schemes are here demonstrated. Four one-moment microphysical schemes are implemented into the Goddard MMF and their results validated against three CloudSat/CALIPSO cloud ice products and other satellite data. The new four-class (cloud ice, snow, graupel, and frozen drops/hail) ice scheme produces a better overall spatial distribution of cloud ice amount, total cloud fractions, net radiation, and total cloud radiative forcing than earlier three-class ice schemes, with biases within the observational uncertainties. Sensitivity experiments are conducted to examine the impact of recently upgraded microphysical processes on global hydrometeor distributions. Five processes dominate the global distributions of cloud ice and snow amount in long-term simulations: (1) allowing for ice supersaturation in the saturation adjustment, (2) three additional correction terms in the depositional growth of cloud ice to snow, (3) accounting for cloud ice fall speeds, (4) limiting cloud ice particle size, and (5) new size-mapping schemes for snow and graupel. Despite the cloud microphysics improvements, systematic errors associated with subgrid processes, cyclic lateral boundaries in the embedded CRMs, and momentum transport remain and will require future improvement.
Subgrid-scale parameterization and low-frequency variability: a response theory approach
NASA Astrophysics Data System (ADS)
Demaeyer, Jonathan; Vannitsem, Stéphane
2016-04-01
Weather and climate models are limited in the possible range of resolved spatial and temporal scales. However, due to the huge space- and time-scale ranges involved in the Earth system dynamics, the effects of many sub-grid processes have to be parameterized. These parameterizations have an impact on the forecasts or projections. They could also affect the low-frequency variability present in the system (such as that associated with ENSO or the NAO). An important question is therefore to know what the impact of stochastic parameterizations is on the low-frequency variability generated by the system and its model representation. In this context, we consider a stochastic subgrid-scale parameterization based on Ruelle's response theory, as proposed in Wouters and Lucarini (2012). We test this approach in the context of a low-order coupled ocean-atmosphere model, detailed in Vannitsem et al. (2015), for which a part of the atmospheric modes is considered unresolved. A natural separation of the phase space into a slow invariant set and its fast complement allows for an analytical derivation of the different terms involved in the parameterization, namely the average, fluctuation, and long-memory terms. Its application to the low-order system reveals that a considerable correction of the low-frequency variability along the invariant subset can be obtained. This new approach to scale separation opens new avenues for subgrid-scale parameterization in multiscale systems used for climate forecasts. References: Vannitsem S, Demaeyer J, De Cruz L, Ghil M. 2015. Low-frequency variability and heat transport in a low-order nonlinear coupled ocean-atmosphere model. Physica D: Nonlinear Phenomena 309: 71-85. Wouters J, Lucarini V. 2012. Disentangling multi-level systems: averaging, correlations and memory. Journal of Statistical Mechanics: Theory and Experiment 2012(03): P03003.
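Schematically, the response-theory parameterization referred to above replaces the coupling to the unresolved modes by three contributions; in generic notation (the precise definitions are those of Wouters and Lucarini, 2012) the parameterized tendency of the resolved variables X can be written as:

```latex
% Generic three-term form: average (deterministic), fluctuation (stochastic),
% and long-memory contributions; h and the noise statistics follow from the
% unresolved dynamics as derived in Wouters & Lucarini (2012).
\dot{X} = F(X)
        + \underbrace{M_{1}(X)}_{\text{average}}
        + \underbrace{\sigma(X)\,\xi(t)}_{\text{fluctuation}}
        + \underbrace{\int_{0}^{\infty} h\bigl(X(t-s),\,s\bigr)\,\mathrm{d}s}_{\text{long memory}}
```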
Multiscale Analysis of Delamination of Carbon Fiber-Epoxy Laminates with Carbon Nanotubes
NASA Technical Reports Server (NTRS)
Riddick, Jaret C.; Frankland, SJV; Gates, TS
2006-01-01
A multi-scale analysis is presented to parametrically describe the Mode I delamination of a carbon fiber/epoxy laminate. In the midplane of the laminate, carbon nanotubes are included for the purpose of selectively enhancing the fracture toughness of the laminate. To analyze the carbon fiber/epoxy/carbon nanotube laminate, the multi-scale methodology presented here links a series of parameterizations taken at various length scales ranging from the atomistic through the micromechanical to the structural level. At the atomistic scale, molecular dynamics simulations are performed in conjunction with an equivalent continuum approach to develop constitutive properties for representative volume elements of the molecular structure of components of the laminate. The molecular-level constitutive results are then used in Mori-Tanaka micromechanics to develop bulk properties for the epoxy-carbon nanotube matrix system. In order to demonstrate a possible application of this multi-scale methodology, a double cantilever beam (DCB) specimen is modeled. An existing analysis is employed which uses discrete springs to model the fiber bridging effect during delamination propagation. In the absence of empirical data or a damage mechanics model describing the effect of CNTs on fracture toughness, several traction laws are postulated, linking CNT volume fraction to fiber bridging in a DCB specimen. Results from this demonstration are presented in terms of DCB specimen load-displacement responses.
Evaluation of wave runup predictions from numerical and parametric models
Stockdon, Hilary F.; Thompson, David M.; Plant, Nathaniel G.; Long, Joseph W.
2014-01-01
Wave runup during storms is a primary driver of coastal evolution, including shoreline and dune erosion and barrier island overwash. Runup and its components, setup and swash, can be predicted from a parameterized model that was developed by comparing runup observations to offshore wave height, wave period, and local beach slope. Because observations during extreme storms are often unavailable, a numerical model is used to simulate the storm-driven runup to compare to the parameterized model and then develop an approach to improve the accuracy of the parameterization. Numerically simulated and parameterized runup were compared to observations to evaluate model accuracies. The analysis demonstrated that setup was accurately predicted by both the parameterized model and numerical simulations. Infragravity swash heights were most accurately predicted by the parameterized model. The numerical model suffered from bias and gain errors that depended on whether a one-dimensional or two-dimensional spatial domain was used. Nonetheless, all of the predictions were significantly correlated to the observations, implying that the systematic errors can be corrected. The numerical simulations did not resolve the incident-band swash motions, as expected, and the parameterized model performed best at predicting incident-band swash heights. An assimilated prediction using a weighted average of the parameterized model and the numerical simulations resulted in a reduction in prediction error variance. Finally, the numerical simulations were extended to include storm conditions that have not been previously observed. These results indicated that the parameterized predictions of setup may need modification for extreme conditions; numerical simulations can be used to extend the validity of the parameterized predictions of infragravity swash; and numerical simulations systematically underpredict incident swash, which is relatively unimportant under extreme conditions.
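The parameterized runup model described here (a function of offshore wave height, wave period, and foreshore beach slope) is consistent with the widely used Stockdon et al. (2006) formulation; a sketch of that parameterization follows, with the caveat that the coefficients are quoted from the 2006 paper rather than restated in this abstract.

```python
import math

def stockdon_runup(H0, T0, beta_f, g=9.81):
    """Empirical 2% exceedance runup parameterization (Stockdon et al., 2006).

    H0     : deep-water significant wave height (m)
    T0     : peak wave period (s)
    beta_f : foreshore beach slope (dimensionless)
    Returns (R2, setup, S_inc, S_ig) in meters.
    """
    L0 = g * T0 ** 2 / (2.0 * math.pi)           # deep-water wavelength
    setup = 0.35 * beta_f * math.sqrt(H0 * L0)   # wave-induced setup
    s_inc = 0.75 * beta_f * math.sqrt(H0 * L0)   # incident-band swash
    s_ig = 0.06 * math.sqrt(H0 * L0)             # infragravity swash
    swash = math.sqrt(s_inc ** 2 + s_ig ** 2)
    r2 = 1.1 * (setup + swash / 2.0)
    return r2, setup, s_inc, s_ig

print(stockdon_runup(H0=2.0, T0=10.0, beta_f=0.08))
```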
On the Relationship between Observed NLDN Lightning ...
Lightning-produced nitrogen oxides (NOX=NO+NO2) in the middle and upper troposphere play an essential role in the production of ozone (O3) and influence the oxidizing capacity of the troposphere. Despite much effort in both observing and modeling lightning NOX during the past decade, considerable uncertainties still exist with the quantification of lightning NOX production and distribution in the troposphere. It is even more challenging for regional chemistry and transport models to accurately parameterize lightning NOX production and distribution in time and space. The Community Multiscale Air Quality Model (CMAQ) parameterizes the lightning NO emissions using local scaling factors adjusted by the convective precipitation rate that is predicted by the upstream meteorological model; the adjustment is based on the observed lightning strikes from the National Lightning Detection Network (NLDN). For this parameterization to be valid, a reasonable a priori relationship between the observed lightning strikes and the modeled convective precipitation rates must exist. In this study, we will present an analysis based on the observed NLDN lightning strikes and CMAQ model simulations over the continental United States for a time period spanning over a decade. Based on the analysis, a new parameterization scheme for lightning NOX will be proposed and the results will be evaluated. The proposed scheme will be beneficial to modeling exercises where the obs
NASA Astrophysics Data System (ADS)
Anderson, William; Meneveau, Charles
2010-05-01
A dynamic subgrid-scale (SGS) parameterization for hydrodynamic surface roughness is developed for large-eddy simulation (LES) of atmospheric boundary layer (ABL) flow over multiscale, fractal-like surfaces. The model consists of two parts. First, a baseline model represents surface roughness at horizontal length-scales that can be resolved in the LES. This model takes the form of a force using a prescribed drag coefficient. This approach is tested in LES of flow over cubes, wavy surfaces, and ellipsoidal roughness elements for which there are detailed experimental data available. Secondly, a dynamic roughness model is built, accounting for SGS surface details of finer resolution than the LES grid width. The SGS boundary condition is based on the logarithmic law of the wall, where the unresolved roughness of the surface is modeled as the product of local root-mean-square (RMS) of the unresolved surface height and an unknown dimensionless model coefficient. This coefficient is evaluated dynamically by comparing the plane-average hydrodynamic drag at two resolutions (grid- and test-filter scale, Germano et al., 1991). The new model is tested on surfaces generated through superposition of random-phase Fourier modes with prescribed, power-law surface-height spectra. The results show that the method yields convergent results and correct trends. Limitations and further challenges are highlighted. Supported by the US National Science Foundation (EAR-0609690).
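A minimal sketch of the kind of rough-wall boundary condition described above: the subgrid roughness length is taken as the product of the unresolved surface-height RMS and a dimensionless coefficient, which the paper determines dynamically by matching plane-averaged drag at grid- and test-filter resolutions; here `alpha` is left as a free parameter and all names are illustrative.

```python
import numpy as np

KAPPA = 0.4  # von Karman constant

def wall_stress(u1, z1, h_rms_sgs, alpha):
    """Log-law wall stress for LES over a rough surface. The subgrid roughness
    length is modeled as z0 = alpha * h_rms_sgs, i.e. the RMS of the unresolved
    surface height times a dimensionless coefficient (determined dynamically in
    the cited work; a free parameter here)."""
    z0 = alpha * h_rms_sgs
    u_tau = KAPPA * u1 / np.log(z1 / z0)   # friction velocity from the log law
    return u_tau**2                        # kinematic wall stress tau_w / rho

# example: first-grid-point velocity 8 m/s at 10 m, 5 cm unresolved height RMS
print(wall_stress(u1=8.0, z1=10.0, h_rms_sgs=0.05, alpha=0.1))
```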
A Multi-scale Modeling System: Developments, Applications and Critical Issues
NASA Technical Reports Server (NTRS)
Tao, Wei-Kuo; Chern, Jiundar; Atlas, Robert; Randall, David; Lin, Xin; Khairoutdinov, Marat; Li, Jui-Lin; Waliser, Duane E.; Hou, Arthur; Peters-Lidard, Christa;
2006-01-01
A multi-scale modeling framework (MMF), which replaces the conventional cloud parameterizations with a cloud-resolving model (CRM) in each grid column of a GCM, constitutes a new and promising approach. The MMF can provide for global coverage and two-way interactions between the CRMs and their parent GCM. The GCM allows global coverage and the CRM allows explicit simulation of cloud processes and their interactions with radiation and surface processes. A new MMF has been developed that is based on the Goddard finite-volume GCM (fvGCM) and the Goddard Cumulus Ensemble (GCE) model. This Goddard MMF produces many features that are similar to another MMF that was developed at Colorado State University (CSU), such as an improved surface precipitation pattern, better cloudiness, improved diurnal variability over both oceans and continents, and a stronger, propagating Madden-Julian oscillation (MJO) compared to their parent GCMs using conventional cloud parameterizations. Both MMFs also produce a precipitation bias in the western Pacific during Northern Hemisphere summer. However, there are also notable differences between the two MMFs. For example, the CSU MMF simulates less rainfall over land than its parent GCM, which is why the CSU MMF simulates less overall global rainfall than its parent GCM. The Goddard MMF overestimates global rainfall because of its oceanic component. Some critical issues associated with the Goddard MMF are presented in this paper.
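To make the coupling structure concrete, the toy sketch below mimics a single MMF time step: each GCM column forces its embedded CRM and receives the CRM's domain-mean feedback. The classes and numbers are placeholders, not the Goddard MMF code.

```python
import numpy as np

class ToyCRM:
    """Stand-in for an embedded cloud-resolving model in one GCM column."""
    def __init__(self, n=32):
        self.T = np.full(n, 300.0)           # CRM temperature field (K)

    def step(self, large_scale_tendency, dt):
        # apply the GCM forcing uniformly, then a crude "convective" mixing
        self.T += large_scale_tendency * dt
        self.T += 0.1 * (self.T.mean() - self.T)
        return self.T.mean()                 # domain-mean state fed back to the GCM

def mmf_step(gcm_T, crms, forcing, dt):
    """One schematic MMF step: every GCM column forces its embedded CRM and
    receives the CRM's domain-mean feedback (two-way interaction)."""
    for col, crm in enumerate(crms):
        gcm_T[col] = crm.step(forcing[col], dt)
    return gcm_T

ncol = 4
crms = [ToyCRM() for _ in range(ncol)]
gcm_T = np.full(ncol, 300.0)
forcing = np.linspace(-1e-5, 1e-5, ncol)     # large-scale tendencies (K/s), made up
print(mmf_step(gcm_T, crms, forcing, dt=1800.0))
```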
A multi-scale study of the adsorption of lanthanum on the (110) surface of tungsten
DOE Office of Scientific and Technical Information (OSTI.GOV)
Samin, Adib J.; Zhang, Jinsuo
In this study, we utilize a multi-scale approach to studying lanthanum adsorption on the (110) plane of tungsten. The energy of the system is described from density functional theory calculations within the framework of the cluster expansion method. It is found that including two-body figures up to the sixth nearest neighbor yielded a reasonable agreement with density functional theory calculations as evidenced by the reported cross validation score. The results indicate that the interaction between the adsorbate atoms in the adlayer is important and cannot be ignored. The parameterized cluster expansion expression is used in a lattice gas Monte Carlo simulation in the grand canonical ensemble at 773 K and the adsorption isotherm is recorded. Implications of the obtained results for the pyroprocessing application are discussed.
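The sketch below illustrates the kind of calculation described above: a grand-canonical lattice-gas Monte Carlo sweep driven by a pair-only cluster expansion on a periodic lattice. The interaction energies, chemical potential, and two-shell truncation are illustrative placeholders rather than the DFT-fitted values of the study.

```python
import numpy as np

rng = np.random.default_rng(0)
L = 32                       # sites per side of a periodic square lattice
kB_T = 0.0666                # kT in eV at 773 K
mu = -0.30                   # adsorption chemical potential (eV), illustrative
J = {1: -0.05, 2: 0.01}      # pair interactions (eV) by neighbor shell, illustrative

occ = np.zeros((L, L), dtype=int)

def site_energy(occ, i, j):
    """Pairwise (two-body) cluster-expansion energy of site (i, j) with its
    first and second neighbor shells."""
    e = 0.0
    shells = {1: [(1, 0), (-1, 0), (0, 1), (0, -1)],
              2: [(1, 1), (1, -1), (-1, 1), (-1, -1)]}
    for shell, nbrs in shells.items():
        for di, dj in nbrs:
            e += J[shell] * occ[(i + di) % L, (j + dj) % L]
    return e

def gcmc_sweep(occ):
    """One grand-canonical Monte Carlo sweep: attempt L*L insertions/removals."""
    for _ in range(L * L):
        i, j = rng.integers(L, size=2)
        dN = 1 - 2 * occ[i, j]                    # +1 insertion, -1 removal
        dE = dN * site_energy(occ, i, j)          # pairwise energy change
        if rng.random() < np.exp(-(dE - mu * dN) / kB_T):
            occ[i, j] += dN
    return occ

for _ in range(200):
    gcmc_sweep(occ)
print("coverage:", occ.mean())     # one point on an adsorption isotherm
```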
A multiscale strength model for tantalum over an extended range of strain rates
NASA Astrophysics Data System (ADS)
Barton, N. R.; Rhee, M.
2013-09-01
A strength model for tantalum is developed and exercised across a range of conditions relevant to various types of experimental observations. The model is based on previous multiscale modeling work combined with experimental observations. As such, the model's parameterization includes a hybrid of quantities that arise directly from predictive sub-scale physics models and quantities that are adjusted to align the model with experimental observations. Given current computing and experimental limitations, the response regions for sub-scale physics simulations and detailed experimental observations have been largely disjoint. In formulating the new model and presenting results here, attention is paid to integrated experimental observations that probe strength response at the elevated strain rates where a previous version of the model has generally been successful in predicting experimental data [Barton et al., J. Appl. Phys. 109(7), 073501 (2011)].
Microphysics in the Multi-Scale Modeling Systems with Unified Physics
NASA Technical Reports Server (NTRS)
Tao, Wei-Kuo; Chern, J.; Lamg, S.; Matsui, T.; Shen, B.; Zeng, X.; Shi, R.
2011-01-01
In recent years, exponentially increasing computer power has extended Cloud Resolving Model (CRM) integrations from hours to months and the number of computational grid points from less than a thousand to close to ten million. Three-dimensional models are now more prevalent. Much attention is devoted to precipitating cloud systems where the crucial 1-km scales are resolved in horizontal domains as large as 10,000 km in two-dimensions, and 1,000 x 1,000 km2 in three-dimensions. Cloud resolving models now provide statistical information useful for developing more realistic physically based parameterizations for climate models and numerical weather prediction models. It is also expected that NWP and mesoscale models can be run at grid sizes similar to cloud-resolving models through nesting techniques. Recently, a multi-scale modeling system with unified physics was developed at NASA Goddard. It consists of (1) a cloud-resolving model (Goddard Cumulus Ensemble model, GCE model), (2) a regional scale model (the NASA unified Weather Research and Forecasting model, WRF), (3) a coupled CRM and global model (Goddard Multi-scale Modeling Framework, MMF), and (4) a land modeling system. The same microphysical processes, longwave and shortwave radiative transfer, and land processes, together with explicit cloud-radiation and cloud-surface interactions, are applied in this multi-scale modeling system. This modeling system has been coupled with a multi-satellite simulator to use NASA high-resolution satellite data to identify the strengths and weaknesses of cloud and precipitation processes simulated by the model. In this talk, the microphysics developments of the multi-scale modeling system will be presented. In particular, the results from using the multi-scale modeling system to study heavy precipitation processes will be presented.
Upscaling the Coupled Water and Heat Transport in the Shallow Subsurface
NASA Astrophysics Data System (ADS)
Sviercoski, R. F.; Efendiev, Y.; Mohanty, B. P.
2018-02-01
Predicting simultaneous movement of liquid water, water vapor, and heat in the shallow subsurface has many practical applications. Multidimensional multiscale models of this region are in demand given (a) the critical role that these processes play in the global water and energy balances and (b) the growing availability of data from airborne and space-borne sensors for parameterizing modeling efforts. On the other hand, numerical models that consider spatial variations of the soil properties, termed here multiscale, are prohibitively expensive. Thus, there is a need for upscaled models that take these features into consideration while remaining computationally affordable. In this paper, a multidimensional multiscale model coupling the water flow and heat transfer and its respective upscaled version are proposed. The formulation is novel as it describes the multidimensional and multiscale tensorial versions of the hydraulic conductivity and the vapor diffusivity, taking into account the tortuosity and porosity properties of the medium. It also includes the coupling with the energy balance equation as a boundary condition describing atmospheric influences at the shallow subsurface. To demonstrate the accuracy of both models, comparisons were made between simulations and field experiments for soil moisture and temperature at depths of 2, 7, and 12 cm over 11 days. The root-mean-square errors showed that the upscaled version of the system captured the multiscale features with similar accuracy. Given the good match between simulated and field data for near-surface soil temperature, the results suggest that it can be regarded as a 1-D variable.
Multi-Scale Surface Descriptors
Cipriano, Gregory; Phillips, George N.; Gleicher, Michael
2010-01-01
Local shape descriptors compactly characterize regions of a surface, and have been applied to tasks in visualization, shape matching, and analysis. Classically, curvature has been used as a shape descriptor; however, this differential property characterizes only an infinitesimal neighborhood. In this paper, we provide shape descriptors for surface meshes designed to be multi-scale, that is, capable of characterizing regions of varying size. These descriptors capture statistically the shape of a neighborhood around a central point by fitting a quadratic surface. They therefore mimic differential curvature, are efficient to compute, and encode anisotropy. We show how simple variants of mesh operations can be used to compute the descriptors without resorting to expensive parameterizations, and additionally provide a statistical approximation for reduced computational cost. We show how these descriptors apply to a number of uses in visualization, analysis, and matching of surfaces, particularly to tasks in protein surface analysis. PMID:19834190
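A minimal sketch of the underlying idea: fit a quadratic patch to the points in a neighborhood and read curvature-like descriptors from its Hessian. Neighborhood selection, weighting, and the mesh-operation tricks of the cited method are omitted; names and the example are illustrative.

```python
import numpy as np

def quadric_descriptor(points, center, normal):
    """Fit z = a*x^2 + b*x*y + c*y^2 + d*x + e*y + f to neighborhood points
    expressed in a local tangent frame at `center`, and return the eigenvalues
    of the fitted Hessian as curvature-like descriptors (schematic version)."""
    n = np.asarray(normal, float)
    n /= np.linalg.norm(n)
    u = np.cross(n, [1.0, 0.0, 0.0])
    if np.linalg.norm(u) < 1e-8:              # normal nearly parallel to the x-axis
        u = np.cross(n, [0.0, 1.0, 0.0])
    u /= np.linalg.norm(u)
    v = np.cross(n, u)
    local = (np.asarray(points, float) - center) @ np.column_stack([u, v, n])
    x, y, z = local[:, 0], local[:, 1], local[:, 2]
    A = np.column_stack([x*x, x*y, y*y, x, y, np.ones_like(x)])
    coeffs = np.linalg.lstsq(A, z, rcond=None)[0]
    a, b, c = coeffs[:3]
    k1, k2 = np.linalg.eigvalsh(np.array([[2*a, b], [b, 2*c]]))
    return k1, k2

# example: a patch of a sphere of radius 2 should give both curvatures near -1/2
rng = np.random.default_rng(1)
xy = rng.uniform(-0.3, 0.3, size=(200, 2))
pts = np.column_stack([xy, np.sqrt(4.0 - (xy**2).sum(axis=1))])
print(quadric_descriptor(pts, center=np.array([0.0, 0.0, 2.0]),
                         normal=np.array([0.0, 0.0, 1.0])))
```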
Full-wave multiscale anisotropy tomography in Southern California
NASA Astrophysics Data System (ADS)
Lin, Yu-Pin; Zhao, Li; Hung, Shu-Huei
2014-12-01
Understanding the spatial variation of anisotropy in the upper mantle is important for characterizing the lithospheric deformation and mantle flow dynamics. In this study, we apply a full-wave approach to image the upper-mantle anisotropy in Southern California using 5954 SKS splitting data. Three-dimensional sensitivity kernels combined with a wavelet-based model parameterization are adopted in a multiscale inversion. Spatial resolution lengths are estimated based on a statistical resolution matrix approach, showing a finest resolution length of ~25 km in regions with densely distributed stations. The anisotropic model displays structural fabric in relation to surface geologic features such as the Salton Trough, the Transverse Ranges, and the San Andreas Fault. The depth variation of anisotropy does not suggest a lithosphere-asthenosphere decoupling. At long wavelengths, the fast directions of anisotropy are aligned with the absolute plate motion inside the Pacific and North American plates.
NASA Astrophysics Data System (ADS)
Berloff, P. S.
2016-12-01
This work aims at developing a framework for dynamically consistent parameterization of mesoscale eddy effects for use in non-eddy-resolving ocean circulation models. The proposed eddy parameterization framework is successfully tested on the classical, wind-driven double-gyre model, which is solved both with explicitly resolved vigorous eddy field and in the non-eddy-resolving configuration with the eddy parameterization replacing the eddy effects. The parameterization focuses on the effect of the stochastic part of the eddy forcing that backscatters and induces eastward jet extension of the western boundary currents and its adjacent recirculation zones. The parameterization locally approximates transient eddy flux divergence by spatially localized and temporally periodic forcing, referred to as the plunger, and focuses on the linear-dynamics flow solution induced by it. The nonlinear self-interaction of this solution, referred to as the footprint, characterizes and quantifies the induced eddy forcing exerted on the large-scale flow. We find that spatial pattern and amplitude of each footprint strongly depend on the underlying large-scale flow, and the corresponding relationships provide the basis for the eddy parameterization and its closure on the large-scale flow properties. Dependencies of the footprints on other important parameters of the problem are also systematically analyzed. The parameterization utilizes the local large-scale flow information, constructs and scales the corresponding footprints, and then sums them up over the gyres to produce the resulting eddy forcing field, which is interactively added to the model as an extra forcing. Thus, the assumed ensemble of plunger solutions can be viewed as a simple model for the cumulative effect of the stochastic eddy forcing. The parameterization framework is implemented in the simplest way, but it provides a systematic strategy for improving the implementation algorithm.
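Read schematically (the notation below is mine, not the paper's), the assembled eddy forcing is a superposition of footprints scaled by the local large-scale flow at the plunger locations:

```latex
\mathbf{F}_{\mathrm{eddy}}(\mathbf{x}) \;\approx\; \sum_i A\!\big(\overline{\mathbf{U}}(\mathbf{x}_i)\big)\,
\Phi\!\big(\mathbf{x}-\mathbf{x}_i;\ \overline{\mathbf{U}}(\mathbf{x}_i)\big),
```

where Phi denotes the footprint obtained from the nonlinear self-interaction of the plunger-forced linear solution and A its amplitude, both diagnosed as functions of the local large-scale flow.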
NASA Technical Reports Server (NTRS)
Gershman, Daniel J.; Gliese, Ulrik; Dorelli, John C.; Avanov, Levon A.; Barrie, Alexander C.; Chornay, Dennis J.; MacDonald, Elizabeth A.; Holland, Matthew P.; Pollock, Craig J.
2015-01-01
The most common instrument for low energy plasmas consists of a top-hat electrostatic analyzer geometry coupled with a microchannel-plate (MCP)-based detection system. While the electrostatic optics for such sensors are readily simulated and parameterized during the laboratory calibration process, the detection system is often less well characterized. Furthermore, due to finite resources, for large sensor suites such as the Fast Plasma Investigation (FPI) on NASA's Magnetospheric Multiscale (MMS) mission, calibration data are increasingly sparse. Measurements must be interpolated and extrapolated to understand instrument behavior for untestable operating modes and yet sensor inter-calibration is critical to mission success. To characterize instruments from a minimal set of parameters we have developed the first comprehensive mathematical description of both sensor electrostatic optics and particle detection systems. We include effects of MCP efficiency, gain, scattering, capacitive crosstalk, and charge cloud spreading at the detector output. Our parameterization enables the interpolation and extrapolation of instrument response to all relevant particle energies, detector high voltage settings, and polar angles from a small set of calibration data. We apply this model to the 32 sensor heads in the Dual Electron Sensor (DES) and 32 sensor heads in the Dual Ion Sensor (DIS) instruments on the 4 MMS observatories and use least squares fitting of calibration data to extract all key instrument parameters. Parameters that will evolve in flight, namely MCP gain, will be determined daily through application of this model to specifically tailored in-flight calibration activities, providing a robust characterization of sensor suite performance throughout mission lifetime. Beyond FPI, our model provides a valuable framework for the simulation and evaluation of future detection system designs and can be used to maximize instrument understanding with minimal calibration resources.
NASA Astrophysics Data System (ADS)
Salas, W.; Torbick, N.
2017-12-01
Rice greenhouse gas (GHG) emissions in production hot spots have been mapped using multiscale satellite imagery and a process-based biogeochemical model. The multiscale Synthetic Aperture Radar (SAR) and optical imagery were co-processed and fed into a machine learning framework to map paddy attributes that are tuned using field observations and surveys. Geospatial maps of rice extent, crop calendar, hydroperiod, and cropping intensity were then used to parameterize the DeNitrification-DeComposition (DNDC) model to estimate emissions. Results, in the Red River Delta for example, show total methane emissions of 345.4 million kg CH4-C, equivalent to 11.5 million tonnes CO2e (carbon dioxide equivalent). We further assessed the role of Alternative Wetting and Drying and the impact on GHG and yield across production hot spots with uncertainty estimates. The approach described in this research provides a framework for using SAR to derive maps of rice and landscape characteristics to drive process models like DNDC. These types of tools and approaches will support the next generation of Monitoring, Reporting, and Verification (MRV) to combat climate change and support ecosystem service markets.
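The quoted totals are mutually consistent under standard unit conversions; the short check below assumes a CH4/C molar-mass ratio of 16/12 and a 100-year global warming potential of 25 for methane (the GWP value is inferred from the quoted numbers, not stated in the abstract).

```python
# Consistency check of the quoted emission totals (assumed GWP of 25 for CH4).
ch4_c_kg = 345.4e6                    # methane emissions expressed as carbon mass
ch4_kg = ch4_c_kg * 16.0 / 12.0       # convert carbon mass to CH4 mass
co2e_tonnes = ch4_kg * 25.0 / 1000.0  # apply GWP and convert kg -> tonnes
print(f"{co2e_tonnes / 1e6:.1f} million tonnes CO2e")   # prints ~11.5
```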
Multiscale solvers and systematic upscaling in computational physics
NASA Astrophysics Data System (ADS)
Brandt, A.
2005-07-01
Multiscale algorithms can overcome the scale-born bottlenecks that plague most computations in physics. These algorithms employ separate processing at each scale of the physical space, combined with interscale iterative interactions, in ways which use finer scales very sparingly. Having been developed first and well known as multigrid solvers for partial differential equations, highly efficient multiscale techniques have more recently been developed for many other types of computational tasks, including: inverse PDE problems; highly indefinite (e.g., standing wave) equations; Dirac equations in disordered gauge fields; fast computation and updating of large determinants (as needed in QCD); fast integral transforms; integral equations; astrophysics; molecular dynamics of macromolecules and fluids; many-atom electronic structures; global and discrete-state optimization; practical graph problems; image segmentation and recognition; tomography (medical imaging); fast Monte-Carlo sampling in statistical physics; and general, systematic methods of upscaling (accurate numerical derivation of large-scale equations from microscopic laws).
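As a concrete illustration of the multigrid idea at the root of these multiscale solvers, here is a minimal V-cycle for the 1-D Poisson equation with weighted-Jacobi smoothing; every choice (smoother, transfer operators, test problem) is illustrative.

```python
import numpy as np

def jacobi(u, f, h, iters=3, omega=2.0/3.0):
    """Weighted-Jacobi smoothing for -u'' = f with homogeneous Dirichlet ends."""
    for _ in range(iters):
        u[1:-1] = (1 - omega) * u[1:-1] + omega * 0.5 * (u[:-2] + u[2:] + h*h*f[1:-1])
    return u

def v_cycle(u, f, h):
    """Smooth, restrict the residual, solve the coarse-grid correction
    recursively, prolong, correct, smooth again (classic multigrid V-cycle)."""
    u = jacobi(u, f, h)
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] + (u[:-2] - 2*u[1:-1] + u[2:]) / (h*h)        # residual of -u'' = f
    rc = r[::2].copy()                                              # full-weighting restriction
    rc[1:-1] = 0.25*r[1:-2:2] + 0.5*r[2:-1:2] + 0.25*r[3::2]
    ec = v_cycle(np.zeros_like(rc), rc, 2*h) if rc.size > 3 else np.zeros_like(rc)
    u += np.interp(np.arange(u.size), np.arange(0, u.size, 2), ec)  # linear prolongation
    return jacobi(u, f, h)

n = 129
x = np.linspace(0.0, 1.0, n)
f = np.pi**2 * np.sin(np.pi * x)          # exact solution of -u'' = f is sin(pi*x)
u = np.zeros(n)
for _ in range(10):
    u = v_cycle(u, f, 1.0 / (n - 1))
print("max error vs. exact solution:", np.abs(u - np.sin(np.pi * x)).max())
```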
NASA Technical Reports Server (NTRS)
Mohr, Karen Irene; Tao, Wei-Kuo; Chern, Jiun-Dar; Kumar, Sujay V.; Peters-Lidard, Christa D.
2013-01-01
The present generation of general circulation models (GCM) use parameterized cumulus schemes and run at hydrostatic grid resolutions. To improve the representation of cloud-scale moist processes and land–atmosphere interactions, a global, Multi-scale Modeling Framework (MMF) coupled to the Land Information System (LIS) has been developed at NASA-Goddard Space Flight Center. The MMF–LIS has three components, a finite-volume (fv) GCM (Goddard Earth Observing System Ver. 4, GEOS-4), a 2D cloud-resolving model (Goddard Cumulus Ensemble, GCE), and the LIS, representing the large-scale atmospheric circulation, cloud processes, and land surface processes, respectively. The non-hydrostatic GCE model replaces the single-column cumulus parameterization of fvGCM. The model grid is composed of an array of fvGCM gridcells each with a series of embedded GCE models. A horizontal coupling strategy, GCE–fvGCM–Coupler–LIS, offered significant computational efficiency, with the scalability and I/O capabilities of LIS permitting land–atmosphere interactions at cloud-scale. Global simulations of 2007–2008 and comparisons to observations and reanalysis products were conducted. Using two different versions of the same land surface model but the same initial conditions, divergence in regional, synoptic-scale surface pressure patterns emerged within two weeks. The sensitivity of large-scale circulations to land surface model physics revealed significant functional value to using a scalable, multi-model land surface modeling system in global weather and climate prediction.
Multiscale Simulation Framework for Coupled Fluid Flow and Mechanical Deformation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hou, Thomas; Efendiev, Yalchin; Tchelepi, Hamdi
2016-05-24
Our work in this project is aimed at making fundamental advances in multiscale methods for flow and transport in highly heterogeneous porous media. The main thrust of this research is to develop a systematic multiscale analysis and efficient coarse-scale models that can capture global effects and extend existing multiscale approaches to problems with additional physics and uncertainties. A key emphasis is on problems without an apparent scale separation. Multiscale solution methods are currently under active investigation for the simulation of subsurface flow in heterogeneous formations. These procedures capture the effects of fine-scale permeability variations through the calculation of specialized coarse-scale basis functions. Most of the multiscale techniques presented to date employ localization approximations in the calculation of these basis functions. For some highly correlated (e.g., channelized) formations, however, global effects are important and these may need to be incorporated into the multiscale basis functions. Other challenging issues facing multiscale simulations are the extension of existing multiscale techniques to problems with additional physics, such as compressibility, capillary effects, etc. In our project, we explore the improvement of multiscale methods through the incorporation of additional (single-phase flow) information and the development of a general multiscale framework for flows in the presence of uncertainties, compressible flow and heterogeneous transport, and geomechanics. We have considered (1) adaptive local-global multiscale methods, (2) multiscale methods for the transport equation, (3) operator-based multiscale methods and solvers, (4) multiscale methods in the presence of uncertainties and applications, (5) multiscale finite element methods for high contrast porous media and their generalizations, and (6) multiscale methods for geomechanics.
Multiscale analysis and computation for flows in heterogeneous media
DOE Office of Scientific and Technical Information (OSTI.GOV)
Efendiev, Yalchin; Hou, T. Y.; Durlofsky, L. J.
Our work in this project is aimed at making fundamental advances in multiscale methods for flow and transport in highly heterogeneous porous media. The main thrust of this research is to develop a systematic multiscale analysis and efficient coarse-scale models that can capture global effects and extend existing multiscale approaches to problems with additional physics and uncertainties. A key emphasis is on problems without an apparent scale separation. Multiscale solution methods are currently under active investigation for the simulation of subsurface flow in heterogeneous formations. These procedures capture the effects of fine-scale permeability variations through the calculation of specialized coarse-scale basis functions. Most of the multiscale techniques presented to date employ localization approximations in the calculation of these basis functions. For some highly correlated (e.g., channelized) formations, however, global effects are important and these may need to be incorporated into the multiscale basis functions. Other challenging issues facing multiscale simulations are the extension of existing multiscale techniques to problems with additional physics, such as compressibility, capillary effects, etc. In our project, we explore the improvement of multiscale methods through the incorporation of additional (single-phase flow) information and the development of a general multiscale framework for flows in the presence of uncertainties, compressible flow and heterogeneous transport, and geomechanics. We have considered (1) adaptive local-global multiscale methods, (2) multiscale methods for the transport equation, (3) operator-based multiscale methods and solvers, (4) multiscale methods in the presence of uncertainties and applications, (5) multiscale finite element methods for high contrast porous media and their generalizations, and (6) multiscale methods for geomechanics. Below, we present a brief overview of each of these contributions.
NASA Astrophysics Data System (ADS)
Salehipour, Hesam; Peltier, W. Richard
2013-04-01
The diapycnal mixing induced by the dissipation of internal tides, which are excited by the interaction of the barotropic tide with bottom topography, has attracted increasing attention. The partition of the dissipation of the barotropic tide between that related to the internal tide and that related to bottom friction is also of considerable interest, as this partition has been shown to shift significantly between the modern and Last Glacial Maximum tidal regimes [Griffiths and Peltier, 2008, 2009]. Ocean general circulation models, though clearly unable to explicitly resolve small-scale mixing processes, currently rely on the introduction of an appropriate parameterization of the contribution to such mixing due to dissipation of the internal tide. One widely used parameterization of this kind (which is currently employed in POP2) is that proposed by Jayne and St. Laurent [GRL 2001] and is based on topographic roughness. This contrasts with the parameterization of Carrere and Lyard [GRL 2003] and Lyard [Ocean Dynamics, 2006] which also considers the flow direction with respect to the topographic features. Both of these parameterizations require the tuning of parameters to arrive at sensible tidal amplitudes. We have developed an original higher-order barotropic tidal model based on the discontinuous Galerkin finite element method applied on global triangular grids [Salehipour et al., submitted to Ocean Modelling] in which we parameterize the energy conversion to baroclinic tides by introducing an anisotropic internal tide drag [Griffiths and Peltier GRL 2008, Griffiths and Peltier J Climate 2009] which also considers the time-dependent angle of attack of the barotropic tidal flow on abyssal topographic features but requires no tuning parameters. The model is massively parallelized, which enables very high resolution modeling of global barotropic tides as well as the implementation of local grid refinement. In this paper we will present maps of energy dissipation for different tidal constituents using grids with resolutions up to 1/18° in coastal regions as well as in areas with high gradients in the bottom topography. The discontinuous Galerkin formulation provides important energy conservation properties as well as enabling the accurate representation of sharp topographic gradients without smoothing, a feature well matched to the multi-scale problem of the dissipation of the internal tide. We will describe the detailed energy budgets delivered by this model under both modern and Last Glacial Maximum oceanographic conditions, including relative sea level and internal density stratification effects. The results of the simulations will be illustrated with global maps with enhanced resolution for the internal tidal dissipation which may be exploited in the parameterization of vertical mixing. We will use the reconstructed paleotopography of the ICE-5G model of Peltier [Annu. Rev. Earth Planet Sci. 2004] as well as the more recent refinement (ICE-6G) to compute the characteristics of the LGM tidal regime and will compare these characteristics to those of the modern ocean.
Huo, Mengmeng; Li, Wenyan; Chaudhuri, Arka Sen; Fan, Yuchao; Han, Xiu; Yang, Chen; Wu, Zhenghong; Qi, Xiaole
2017-09-01
In this study, we developed bio-stimuli-responsive multi-scale hyaluronic acid (HA) nanoparticles encapsulating polyamidoamine (PAMAM) dendrimers as the subunits. These HA/PAMAM nanoparticles of large scale (197.10±3.00 nm) were stable during systemic circulation and then became enriched at the tumor sites; however, they were prone to degradation by the highly expressed hyaluronidase (HAase), releasing the inner PAMAM dendrimers and regaining a small scale (5.77±0.25 nm) with positive charge. After employing a tumor spheroid penetration assay on A549 3D tumor spheroids for 8 h, the fluorescein isothiocyanate (FITC)-labeled multi-scale HA/PAMAM-FITC nanoparticles could penetrate deeply into these tumor spheroids with the degradation of HAase. Moreover, small-animal imaging in male nude mice bearing H22 tumors showed that HA/PAMAM-FITC nanoparticles have more prolonged systemic circulation compared with both PAMAM-FITC nanoparticles and free FITC. In addition, after intravenous administration in mice bearing H22 tumors, methotrexate (MTX)-loaded multi-scale HA/PAMAM-MTX nanoparticles exhibited 2.68-fold greater antitumor activity. Copyright © 2017 Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gettelman, Andrew
2015-10-01
In this project we have been upgrading the Multiscale Modeling Framework (MMF) in the Community Atmosphere Model (CAM), also known as Super-Parameterized CAM (SP-CAM). This has included a major effort to update the coding standards and interface with CAM so that it can be placed on the main development trunk. It has also included development of a new software structure for CAM to be able to handle sub-grid column information. These efforts have formed the major thrust of the work.
Multiscale modelling in immunology: a review.
Cappuccio, Antonio; Tieri, Paolo; Castiglione, Filippo
2016-05-01
One of the greatest challenges in biomedicine is to get a unified view of observations made from the molecular up to the organism scale. Towards this goal, multiscale models have been highly instrumental in contexts such as the cardiovascular field, angiogenesis, neurosciences and tumour biology. More recently, such models are becoming an increasingly important resource to address immunological questions as well. Systematic mining of the literature in multiscale modelling led us to identify three main fields of immunological applications: host-virus interactions, inflammatory diseases and their treatment and development of multiscale simulation platforms for immunological research and for educational purposes. Here, we review the current developments in these directions, which illustrate that multiscale models can consistently integrate immunological data generated at several scales, and can be used to describe and optimize therapeutic treatments of complex immune diseases. © The Author 2015. Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.
A Multi-scale Modeling System with Unified Physics to Study Precipitation Processes
NASA Astrophysics Data System (ADS)
Tao, W. K.
2017-12-01
In recent years, exponentially increasing computer power has extended Cloud Resolving Model (CRM) integrations from hours to months and the number of computational grid points from less than a thousand to close to ten million. Three-dimensional models are now more prevalent. Much attention is devoted to precipitating cloud systems where the crucial 1-km scales are resolved in horizontal domains as large as 10,000 km in two-dimensions, and 1,000 x 1,000 km2 in three-dimensions. Cloud resolving models now provide statistical information useful for developing more realistic physically based parameterizations for climate models and numerical weather prediction models. It is also expected that NWP and mesoscale models can be run at grid sizes similar to cloud-resolving models through nesting techniques. Recently, a multi-scale modeling system with unified physics was developed at NASA Goddard. It consists of (1) a cloud-resolving model (Goddard Cumulus Ensemble model, GCE model), (2) a regional scale model (the NASA unified Weather Research and Forecasting model, WRF), and (3) a coupled CRM and global model (Goddard Multi-scale Modeling Framework, MMF). The same microphysical processes, longwave and shortwave radiative transfer, and land processes, together with explicit cloud-radiation and cloud-land surface interactions, are applied in this multi-scale modeling system. This modeling system has been coupled with a multi-satellite simulator to use NASA high-resolution satellite data to identify the strengths and weaknesses of cloud and precipitation processes simulated by the model. In this talk, a review of developments and applications of the multi-scale modeling system will be presented. In particular, the results from using the multi-scale modeling system to study precipitation processes and their sensitivity to model resolution and microphysics schemes will be presented. Also, how to use the multi-satellite simulator to improve precipitation processes will be discussed.
Using Multi-Scale Modeling Systems and Satellite Data to Study the Precipitation Processes
NASA Technical Reports Server (NTRS)
Tao, Wei-Kuo; Chern, J.; Lamg, S.; Matsui, T.; Shen, B.; Zeng, X.; Shi, R.
2011-01-01
In recent years, exponentially increasing computer power has extended Cloud Resolving Model (CRM) integrations from hours to months and the number of computational grid points from less than a thousand to close to ten million. Three-dimensional models are now more prevalent. Much attention is devoted to precipitating cloud systems where the crucial 1-km scales are resolved in horizontal domains as large as 10,000 km in two-dimensions, and 1,000 x 1,000 km2 in three-dimensions. Cloud resolving models now provide statistical information useful for developing more realistic physically based parameterizations for climate models and numerical weather prediction models. It is also expected that NWP and mesoscale models can be run at grid sizes similar to cloud-resolving models through nesting techniques. Recently, a multi-scale modeling system with unified physics was developed at NASA Goddard. It consists of (1) a cloud-resolving model (Goddard Cumulus Ensemble model, GCE model), (2) a regional scale model (the NASA unified Weather Research and Forecasting model, WRF), (3) a coupled CRM and global model (Goddard Multi-scale Modeling Framework, MMF), and (4) a land modeling system. The same microphysical processes, longwave and shortwave radiative transfer, and land processes, together with explicit cloud-radiation and cloud-land surface interactions, are applied in this multi-scale modeling system. This modeling system has been coupled with a multi-satellite simulator to use NASA high-resolution satellite data to identify the strengths and weaknesses of cloud and precipitation processes simulated by the model. In this talk, the recent developments and applications of the multi-scale modeling system will be presented. In particular, the results from using the multi-scale modeling system to study precipitating systems and hurricanes/typhoons will be presented. The high-resolution spatial and temporal visualization will be utilized to show the evolution of precipitation processes. Also, how to use the multi-satellite simulator to improve precipitation processes will be discussed.
Using Multi-Scale Modeling Systems and Satellite Data to Study the Precipitation Processes
NASA Technical Reports Server (NTRS)
Tao, Wei-Kuo; Chern, J.; Lamg, S.; Matsui, T.; Shen, B.; Zeng, X.; Shi, R.
2010-01-01
In recent years, exponentially increasing computer power has extended Cloud Resolving Model (CRM) integrations from hours to months and the number of computational grid points from less than a thousand to close to ten million. Three-dimensional models are now more prevalent. Much attention is devoted to precipitating cloud systems where the crucial 1-km scales are resolved in horizontal domains as large as 10,000 km in two-dimensions, and 1,000 x 1,000 sq km in three-dimensions. Cloud resolving models now provide statistical information useful for developing more realistic physically based parameterizations for climate models and numerical weather prediction models. It is also expected that NWP and mesoscale models can be run at grid sizes similar to cloud-resolving models through nesting techniques. Recently, a multi-scale modeling system with unified physics was developed at NASA Goddard. It consists of (1) a cloud-resolving model (Goddard Cumulus Ensemble model, GCE model), (2) a regional scale model (the NASA unified Weather Research and Forecasting model, WRF), (3) a coupled CRM and global model (Goddard Multi-scale Modeling Framework, MMF), and (4) a land modeling system. The same microphysical processes, longwave and shortwave radiative transfer, and land processes, together with explicit cloud-radiation and cloud-land surface interactions, are applied in this multi-scale modeling system. This modeling system has been coupled with a multi-satellite simulator to use NASA high-resolution satellite data to identify the strengths and weaknesses of cloud and precipitation processes simulated by the model. In this talk, a review of developments and applications of the multi-scale modeling system will be presented. In particular, the results from using multi-scale modeling systems to study the interactions between clouds, precipitation, and aerosols will be presented. Also, how to use the multi-satellite simulator to improve precipitation processes will be discussed.
Using Multi-Scale Modeling Systems to Study the Precipitation Processes
NASA Technical Reports Server (NTRS)
Tao, Wei-Kuo
2010-01-01
In recent years, exponentially increasing computer power has extended Cloud Resolving Model (CRM) integrations from hours to months and the number of computational grid points from less than a thousand to close to ten million. Three-dimensional models are now more prevalent. Much attention is devoted to precipitating cloud systems where the crucial 1-km scales are resolved in horizontal domains as large as 10,000 km in two-dimensions, and 1,000 x 1,000 km2 in three-dimensions. Cloud resolving models now provide statistical information useful for developing more realistic physically based parameterizations for climate models and numerical weather prediction models. It is also expected that NWP and mesoscale models can be run at grid sizes similar to cloud-resolving models through nesting techniques. Recently, a multi-scale modeling system with unified physics was developed at NASA Goddard. It consists of (1) a cloud-resolving model (Goddard Cumulus Ensemble model, GCE model), (2) a regional scale model (the NASA unified Weather Research and Forecasting model, WRF), (3) a coupled CRM and global model (Goddard Multi-scale Modeling Framework, MMF), and (4) a land modeling system. The same microphysical processes, longwave and shortwave radiative transfer, and land processes, together with explicit cloud-radiation and cloud-land surface interactions, are applied in this multi-scale modeling system. This modeling system has been coupled with a multi-satellite simulator to use NASA high-resolution satellite data to identify the strengths and weaknesses of cloud and precipitation processes simulated by the model. In this talk, a review of developments and applications of the multi-scale modeling system will be presented. In particular, the results from using the multi-scale modeling system to study the interactions between clouds, precipitation, and aerosols will be presented. Also, how to use the multi-satellite simulator to improve precipitation processes will be discussed.
Importance of Physico-Chemical Properties of Aerosols in the Formation of Arctic Ice Clouds
NASA Astrophysics Data System (ADS)
Keita, S. A.; Girard, E.
2014-12-01
Ice clouds play an important role in the Arctic weather and climate system, but interactions between aerosols, clouds, and radiation are poorly understood. Consequently, it is essential to fully understand their properties and especially their formation process. Extensive measurements from ground-based sites and satellite remote sensing reveal the existence of two Types of Ice Clouds (TICs) in the Arctic during the polar night and early spring. TIC-1 are composed of non-precipitating, very small (radar-unseen) ice crystals, whereas TIC-2 are detected by both sensors and are characterized by a low concentration of large precipitating ice crystals. It is hypothesized that TIC-2 formation is linked to the acidification of aerosols, which inhibits the ice-nucleating properties of ice nuclei (IN). As a result, the IN concentration is reduced in these regions, resulting in a smaller concentration of larger ice crystals. Over the past 10 years, several parameterizations of homogeneous and heterogeneous ice nucleation have been developed to reflect the various physical and chemical properties of aerosols. These parameterizations are derived from laboratory studies on aerosols of different chemical compositions. The parameterizations are also developed according to two main approaches: stochastic (nucleation is a probabilistic, time-dependent process) and singular (nucleation occurs at fixed conditions of temperature and humidity and is time-independent). This research aims to better understand the formation process of TICs using newly developed ice nucleation parameterizations. For this purpose, we implement several parameterizations (covering both approaches) into the Limited Area version of the Global Environmental Multiscale model (GEM-LAM) and use them to simulate ice clouds observed during the Indirect and Semi-Direct Arctic Cloud (ISDAC) campaign in Alaska. We use both approaches, but special attention is focused on the new parameterizations of the singular approach. Simulation results for the TIC-2 observed on April 15th and 25th (polluted or acidic cases) and the TIC-1 observed on April 5th (non-polluted cases) will be presented.
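To make the stochastic/singular distinction concrete, the sketch below contrasts the two standard descriptions of heterogeneous freezing: a temperature-only active-site-density form versus a time-dependent nucleation-rate form. The functional shapes are the generic ones from the ice-nucleation literature and all constants are placeholders, not the parameterizations implemented in GEM-LAM.

```python
import numpy as np

def frozen_fraction_singular(T, n_s0=1.0e2, b=0.4, T0=-10.0, area=1.0e-7):
    """Singular view: freezing depends only on temperature via an ice-active
    surface-site density n_s(T) (sites per m^2), so the frozen fraction of a
    population with per-particle surface area `area` (m^2) is
        f(T) = 1 - exp(-n_s(T) * area).
    The n_s framework is standard; all constants here are placeholders."""
    n_s = n_s0 * np.exp(-b * (T - T0))          # grows as T (deg C) drops below T0
    return 1.0 - np.exp(-n_s * area)

def frozen_fraction_stochastic(T, t, J0=1.0e-2, b=0.4, T0=-10.0, area=1.0e-7):
    """Stochastic view: freezing is a rate process, so the frozen fraction also
    depends on the exposure time t (s): f(T, t) = 1 - exp(-J(T) * area * t),
    with a nucleation-rate coefficient J(T) (m^-2 s^-1). Constants are placeholders."""
    J = J0 * np.exp(-b * (T - T0))
    return 1.0 - np.exp(-J * area * t)

for T in (-15.0, -20.0, -25.0):
    print(T, frozen_fraction_singular(T), frozen_fraction_stochastic(T, t=600.0))
```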
Facing the challenges of multiscale modelling of bacterial and fungal pathogen–host interactions
Schleicher, Jana; Conrad, Theresia; Gustafsson, Mika; Cedersund, Gunnar; Guthke, Reinhard
2017-01-01
Abstract Recent and rapidly evolving progress on high-throughput measurement techniques and computational performance has led to the emergence of new disciplines, such as systems medicine and translational systems biology. At the core of these disciplines lies the desire to produce multiscale models: mathematical models that integrate multiple scales of biological organization, ranging from molecular, cellular and tissue models to organ, whole-organism and population scale models. Using such models, hypotheses can systematically be tested. In this review, we present state-of-the-art multiscale modelling of bacterial and fungal infections, considering both the pathogen and host as well as their interaction. Multiscale modelling of the interactions of bacteria, especially Mycobacterium tuberculosis, with the human host is quite advanced. In contrast, models for fungal infections are still in their infancy, in particular regarding infections with the most important human pathogenic fungi, Candida albicans and Aspergillus fumigatus. We reflect on the current availability of computational approaches for multiscale modelling of host–pathogen interactions and point out current challenges. Finally, we provide an outlook for future requirements of multiscale modelling. PMID:26857943
Enhanced representation of soil NO emissions in the ...
Modeling of soil nitric oxide (NO) emissions is highly uncertain and may misrepresent its spatial and temporal distribution. This study builds upon a recently introduced parameterization to improve the timing and spatial distribution of soil NO emission estimates in the Community Multiscale Air Quality (CMAQ) model. The parameterization considers soil parameters, meteorology, land use, and mineral nitrogen (N) availability to estimate NO emissions. We incorporate daily year-specific fertilizer data from the Environmental Policy Integrated Climate (EPIC) agricultural model to replace the annual generic data of the initial parameterization, and use a 12 km resolution soil biome map over the continental USA. CMAQ modeling for July 2011 shows slight differences in model performance in simulating fine particulate matter and ozone from Interagency Monitoring of Protected Visual Environments (IMPROVE) and Clean Air Status and Trends Network (CASTNET) sites and NO2 columns from Ozone Monitoring Instrument (OMI) satellite retrievals. We also simulate how the change in soil NO emissions scheme affects the expected O3 response to projected emissions reductions. The National Exposure Research Laboratory (NERL) Computational Exposure Division (CED) develops and evaluates data, decision-support tools, and models to be applied to media-specific or receptor-specific problem areas. CED uses modeling-based approaches to characterize exposures, evaluate fate and transport, and
The parameterization of microchannel-plate-based detection systems
NASA Astrophysics Data System (ADS)
Gershman, Daniel J.; Gliese, Ulrik; Dorelli, John C.; Avanov, Levon A.; Barrie, Alexander C.; Chornay, Dennis J.; MacDonald, Elizabeth A.; Holland, Matthew P.; Giles, Barbara L.; Pollock, Craig J.
2016-10-01
The most common instrument for low-energy plasmas consists of a top-hat electrostatic analyzer (ESA) geometry coupled with a microchannel-plate-based (MCP-based) detection system. While the electrostatic optics for such sensors are readily simulated and parameterized during the laboratory calibration process, the detection system is often less well characterized. Here we develop a comprehensive mathematical description of particle detection systems. As a function of instrument azimuthal angle, we parameterize (1) particle scattering within the ESA and at the surface of the MCP, (2) the probability distribution of MCP gain for an incident particle, (3) electron charge cloud spreading between the MCP and anode board, and (4) capacitive coupling between adjacent discrete anodes. Using the Dual Electron Spectrometers on the Fast Plasma Investigation on NASA's Magnetospheric Multiscale mission as an example, we demonstrate a method for extracting these fundamental detection system parameters from laboratory calibration. We further show that parameters that will evolve in flight, namely, MCP gain, can be determined through application of this model to specifically tailored in-flight calibration activities. This methodology provides a robust characterization of sensor suite performance throughout mission lifetime. The model developed in this work is not only applicable to existing sensors but also can be used as an analytical design tool for future particle instrumentation.
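As one illustrative piece of such a model, the sketch below shows how an assumed pulse-height (gain) distribution translates into a detection efficiency for a given discrimination threshold; the Gaussian form, the gain-voltage law, and all numbers are placeholders, not the functional forms or values of the cited calibration model.

```python
from math import erf, exp, sqrt

def detection_efficiency(q_thr, gain_mean, gain_sigma):
    """Fraction of MCP pulses whose charge exceeds a discrimination threshold,
    assuming a Gaussian pulse-height (gain) distribution. The Gaussian form is
    a placeholder used only to illustrate how a parameterized gain distribution
    feeds into an efficiency curve."""
    z = (q_thr - gain_mean) / (sqrt(2.0) * gain_sigma)
    return 0.5 * (1.0 - erf(z))

# Efficiency vs. MCP voltage enters through a gain-voltage relation; toy exponential law.
for volts in (1800, 1900, 2000):
    gain = 1.0e6 * exp((volts - 2000) / 150.0)      # mean gain (electrons), illustrative
    print(volts, "V:", round(detection_efficiency(2.0e5, gain, 0.4 * gain), 3))
```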
Aerosol-cloud interactions in a multi-scale modeling framework
NASA Astrophysics Data System (ADS)
Lin, G.; Ghan, S. J.
2017-12-01
Atmospheric aerosols play an important role in changing the Earth's climate through scattering/absorbing solar and terrestrial radiation and interacting with clouds. However, quantification of the aerosol effects remains one of the most uncertain aspects of current and future climate projection. Much of the uncertainty results from the multi-scale nature of aerosol-cloud interactions, which is very challenging to represent in traditional global climate models (GCMs). In contrast, the multi-scale modeling framework (MMF) provides a viable solution, which explicitly resolves clouds and precipitation in the cloud-resolving model (CRM) embedded in each GCM grid column. In the MMF version of the Community Atmosphere Model version 5 (CAM5), aerosol processes are treated with a parameterization called Explicit Clouds Parameterized Pollutants (ECPP). It uses the cloud/precipitation statistics derived from the CRM to treat the cloud processing of aerosols on the GCM grid. However, this treatment resolves clouds on the CRM grid but aerosols on the GCM grid, which is inconsistent with the reality that cloud-aerosol interactions occur on the cloud scale. To overcome this limitation, here we propose a new aerosol treatment in the MMF: Explicit Clouds Explicit Aerosols (ECEP), in which we resolve both clouds and aerosols explicitly on the CRM grid. We first applied the MMF with ECPP to the Accelerated Climate Modeling for Energy (ACME) model to obtain an MMF version of ACME. Further, we also developed an alternative version of ACME-MMF with ECEP. Based on these two models, we have conducted two simulations: one with the ECPP and the other with ECEP. Preliminary results showed that the ECEP simulations tend to predict higher aerosol concentrations than the ECPP simulations, because of more efficient vertical transport from the surface to the upper atmosphere and less efficient wet removal. We also found that cloud droplet number concentrations differ between the two simulations due to the difference in cloud droplet lifetime. Next, we will explore how the ECEP treatment affects the anthropogenic aerosol forcing, particularly the aerosol indirect forcing, by comparing present-day and pre-industrial simulations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hammond, Glenn Edward; Yang, Xiaofan; Song, Xuehang
The groundwater-surface water interaction zone (GSIZ) plays an important role in riverine and watershed ecosystems as the exchange of waters of variable composition and temperature (hydrologic exchange flows) stimulate microbial activity and associated biogeochemical reactions. Variable temporal and spatial scales of hydrologic exchange flows, heterogeneity of the subsurface environment, and complexity of biogeochemical reaction networks in the GSIZ present challenges to incorporation of fundamental process representations and model parameterization across a range of spatial scales (e.g. from pore-scale to field scale). This paper presents a novel hybrid multiscale simulation approach that couples hydrologic-biogeochemical (HBGC) processes between two distinct length scales of interest.
Representing the effects of stratosphere–troposphere ...
Downward transport of ozone (O3) from the stratosphere can be a significant contributor to tropospheric O3 background levels. However, this process often is not well represented in current regional models. In this study, we develop a seasonally and spatially varying potential vorticity (PV)-based function to parameterize upper tropospheric and/or lower stratospheric (UTLS) O3 in a chemistry transport model. This dynamic O3–PV function is developed based on 21-year ozonesonde records from the World Ozone and Ultraviolet Radiation Data Centre (WOUDC) with corresponding PV values from a 21-year Weather Research and Forecasting (WRF) simulation across the Northern Hemisphere from 1990 to 2010. The results suggest strong spatial and seasonal variations of O3/PV ratios, which exhibit large values in the upper layers and in high-latitude regions, with the highest values in spring and the lowest values in autumn over an annual cycle. The newly developed O3/PV function was then applied in the Community Multiscale Air Quality (CMAQ) model for an annual simulation of the year 2006. The simulated UTLS O3 agrees much better with observations in both magnitude and seasonality after the implementation of the new parameterization. Considerable impacts on surface O3 model performance were found in the comparison with observations from three observational networks, i.e., EMEP, CASTNET and WDCGG. With the new parameterization, the negative bias in spring is reduced from
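Functionally, the parameterization amounts to a lookup of an O3/PV ratio that varies with season, level, and latitude, multiplied by the model PV. The sketch below shows that structure only; the table values, units, and indexing are assumptions.

```python
import numpy as np

def utls_ozone_from_pv(pv_pvu, ratio_table, month_idx, lev_idx, lat_idx):
    """Map model potential vorticity (PVU) in the UTLS to an ozone mixing ratio
    (ppb) using a seasonally and spatially varying O3/PV ratio, in the spirit of
    the dynamic O3-PV function described above. In practice `ratio_table` would
    be built from ozonesonde and PV climatologies; the lookup structure here is
    an assumption."""
    ratio = ratio_table[month_idx, lev_idx, lat_idx]   # ppb per PVU
    return ratio * pv_pvu

# toy table: 12 months x 3 levels x 5 latitude bands, larger ratios aloft/poleward
ratio_table = np.fromfunction(
    lambda m, k, j: 20.0 + 10.0 * k + 5.0 * j + 5.0 * np.cos(2 * np.pi * (m - 3) / 12.0),
    (12, 3, 5))
print(utls_ozone_from_pv(4.0, ratio_table, month_idx=3, lev_idx=2, lat_idx=4), "ppb")
```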
On the intrinsic flexibility of the opioid receptor through multiscale modeling approaches
NASA Astrophysics Data System (ADS)
Vercauteren, Daniel; FosséPré, Mathieu; Leherte, Laurence; Laaksonen, Aatto
The numerous released crystal structures of G protein-coupled receptors have created the opportunity for computational methods to widely explore their dynamics. Here, we study the biological implications of the intrinsic flexibility of the opioid receptor (OR). First, we performed classical all-atom (AA) molecular dynamics (MD) simulations of the OR in its apo form. We highlighted that the various degrees of bendability of the α-helices have important consequences for the plasticity of the binding site. Hence, the latter adopts a wide diversity of shapes and volumes, explaining why the OR interacts with very diverse ligands. Then, we introduce a new strategy for parameterizing purely mechanical but precise coarse-grained (CG) elastic network models (ENMs). The CG ENMs reproduce the flexibility properties of the OR with high accuracy relative to the AA simulations. Finally, we use network modularization to design multi-grained (MG) models. These represent a novel type of low-resolution model, different in nature from CG models in being true multi-resolution models, i.e., each MG bead grouping a different number of residues. Together, the three parts constitute a hierarchical, multiscale approach for tackling the flexibility of the OR.
Transverse momentum dependent (TMD) parton distribution functions: Status and prospects*
Angeles-Martinez, R.; Bacchetta, A.; Balitsky, Ian I.; ...
2015-01-01
In this study, we review transverse momentum dependent (TMD) parton distribution functions, their application to topical issues in high-energy physics phenomenology, and their theoretical connections with QCD resummation, evolution and factorization theorems. We illustrate the use of TMDs via examples of multi-scale problems in hadronic collisions. These include transverse momentum qT spectra of Higgs and vector bosons at low qT, and azimuthal correlations in the production of multiple jets associated with heavy bosons at large jet masses. We discuss computational tools for TMDs, and present the application of a new tool, TMDlib, to parton density fits and parameterizations.
NASA Astrophysics Data System (ADS)
Sánchez, M.; Oldenhof, M.; Freitez, J. A.; Mundim, K. C.; Ruette, F.
A systematic improvement of parametric quantum methods (PQM) is performed by considering: (a) a new application of the parameterization procedure to PQMs and (b) novel parametric functionals based on properties of elementary parametric functionals (EPF) [Ruette et al., Int J Quantum Chem 2008, 108, 1831]. Parameterization was carried out by using the simplified generalized simulated annealing (SGSA) method in the CATIVIC program. This code has been parallelized, and comparisons with MOPAC/2007 (PM6) and MINDO/SR were performed for a set of molecules with C-C, C-H, and H-H bonds. Results showed better accuracy than MINDO/SR and MOPAC-2007 for a selected trial set of molecules.
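For readers unfamiliar with this class of fitting, the sketch below shows a plain simulated-annealing search recovering the parameters of a toy potential from reference data; the SGSA scheme in CATIVIC generalizes this (different visiting and acceptance rules), so treat the code as a generic stand-in rather than that algorithm.

```python
import numpy as np

rng = np.random.default_rng(42)

def objective(params, ref_x, ref_y):
    """Sum of squared errors of a model against reference data. The 'model' here
    is a toy Morse-like bond-energy curve; in a real PQM parameterization the
    reference set would be ab initio or experimental molecular properties."""
    de, a, r0 = params
    model = de * (1.0 - np.exp(-a * (ref_x - r0)))**2 - de
    return np.sum((model - ref_y)**2)

def anneal(x0, ref_x, ref_y, t0=1.0, cooling=0.995, steps=5000, step_size=0.05):
    """Plain simulated annealing: random perturbations accepted with a
    temperature-dependent Metropolis criterion."""
    x, fx, t = np.array(x0, float), objective(x0, ref_x, ref_y), t0
    best, fbest = x.copy(), fx
    for _ in range(steps):
        cand = x + step_size * rng.standard_normal(x.size)
        fc = objective(cand, ref_x, ref_y)
        if fc < fx or rng.random() < np.exp(-(fc - fx) / t):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = x.copy(), fx
        t *= cooling
    return best, fbest

# reference curve generated from known parameters, then recovered by annealing
true = (4.5, 1.9, 1.1)
r = np.linspace(0.8, 3.0, 40)
e = true[0] * (1.0 - np.exp(-true[1] * (r - true[2])))**2 - true[0]
print(anneal([3.0, 1.0, 1.5], r, e))
```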
NASA Astrophysics Data System (ADS)
He, Xibing; Shinoda, Wataru; DeVane, Russell; Anderson, Kelly L.; Klein, Michael L.
2010-02-01
A coarse-grained (CG) forcefield for linear alkylbenzene sulfonates (LAS) was systematically parameterized. Thermodynamic data from experiments and structural data obtained from all-atom molecular dynamics were used as targets to parameterize CG potentials for the bonded and non-bonded interactions. The added computational efficiency permits one to employ computer simulation to probe the self-assembly of LAS aqueous solutions into different morphologies starting from a random configuration. The present CG model is shown to accurately reproduce the phase behavior of solutions of pure isomers of sodium dodecylbenzene sulfonate, despite the fact that phase behavior was not directly taken into account in the forcefield parameterization.
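A minimal sketch of the kind of objective such a parameterization minimizes: a weighted sum of thermodynamic and structural mismatches between the CG model and its targets. The observables, weights, and toy data are illustrative; the cited work matches several experimental and all-atom properties simultaneously.

```python
import numpy as np

def cg_objective(cg_density, cg_rdf, target_density, target_rdf,
                 w_thermo=1.0, w_struct=1.0):
    """Combined fitting target for CG non-bonded parameters: a thermodynamic
    term (bulk density against experiment) plus a structural term (radial
    distribution function against an all-atom reference). Choices of
    observables and weights are illustrative only."""
    thermo = ((cg_density - target_density) / target_density)**2
    struct = np.mean((np.asarray(cg_rdf) - np.asarray(target_rdf))**2)
    return w_thermo * thermo + w_struct * struct

# toy usage: a candidate CG model slightly off in density and RDF
r = np.linspace(0.3, 1.5, 50)
target_rdf = 1.0 + 0.5 * np.exp(-((r - 0.50) / 0.10)**2)
cg_rdf = 1.0 + 0.45 * np.exp(-((r - 0.52) / 0.11)**2)
print(cg_objective(998.0, cg_rdf, 1000.0, target_rdf))
```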
Lu, Zhao; Sun, Jing; Butts, Kenneth
2014-05-01
Support vector regression for approximating nonlinear dynamic systems is more delicate than the approximation of indicator functions in support vector classification, particularly for systems that involve multitudes of time scales in their sampled data. The kernel used for support vector learning determines the class of functions from which a support vector machine can draw its solution, and the choice of kernel significantly influences the performance of a support vector machine. In this paper, to bridge the gap between wavelet multiresolution analysis and kernel learning, the closed-form orthogonal wavelet is exploited to construct new multiscale asymmetric orthogonal wavelet kernels for linear programming support vector learning. The closed-form multiscale orthogonal wavelet kernel provides a systematic framework to implement multiscale kernel learning via dyadic dilations and also enables us to represent complex nonlinear dynamics effectively. To demonstrate the superiority of the proposed multiscale wavelet kernel in identifying complex nonlinear dynamic systems, two case studies are presented that aim at building parallel models on benchmark datasets. The development of parallel models that address the long-term/mid-term prediction issue is more intricate and challenging than the identification of series-parallel models where only one-step ahead prediction is required. Simulation results illustrate the effectiveness of the proposed multiscale kernel learning.
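A hedged sketch of the kernel construction idea: a translation-invariant wavelet kernel summed over dyadic dilations of a closed-form mother wavelet, applied to lagged regressors of a toy dynamic system. The Mexican-hat wavelet and kernel ridge regression used below are stand-ins chosen for brevity; the paper's closed-form orthogonal wavelet and linear-programming support vector learning are not reproduced.

```python
import numpy as np

def mexican_hat(u):
    """Closed-form mother wavelet (Mexican hat), an illustrative stand-in."""
    return (1.0 - u ** 2) * np.exp(-0.5 * u ** 2)

def multiscale_wavelet_kernel(X, Z, scales=(1.0, 2.0, 4.0, 8.0)):
    """K(x, z) = sum over dyadic scales a of prod_d psi((x_d - z_d) / a)."""
    K = np.zeros((len(X), len(Z)))
    for a in scales:
        diff = (X[:, None, :] - Z[None, :, :]) / a
        K += np.prod(mexican_hat(diff), axis=2)
    return K

# Toy nonlinear dynamic system: output depends on lagged outputs and input.
rng = np.random.default_rng(2)
u = rng.uniform(-1, 1, 300)
y = np.zeros(300)
for t in range(2, 300):
    y[t] = 0.6 * y[t - 1] - 0.2 * y[t - 2] + np.tanh(u[t - 1]) + 0.05 * rng.normal()

X = np.column_stack([y[1:-1], y[:-2], u[1:-1]])   # lagged regressors
target = y[2:]

# Kernel ridge regression with the multiscale wavelet kernel (stand-in for LP-SVR).
K = multiscale_wavelet_kernel(X, X)
alpha = np.linalg.solve(K + 1e-2 * np.eye(len(K)), target)
pred = K @ alpha
rmse = float(np.sqrt(np.mean((pred - target) ** 2)))
print("training RMSE:", round(rmse, 4))
```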
A review of recent research on improvement of physical parameterizations in the GLA GCM
NASA Technical Reports Server (NTRS)
Sud, Y. C.; Walker, G. K.
1990-01-01
A systematic assessment of the effects of a series of improvements in the physical parameterizations of the Goddard Laboratory for Atmospheres (GLA) general circulation model (GCM) is summarized. The implementation of the Simple Biosphere Model (SiB) in the GCM is followed by a comparison of SiB-GCM simulations with the earlier slab soil hydrology GCM (SSH-GCM) simulations. In the Sahelian context, the biogeophysical component of desertification was analyzed for SiB-GCM simulations. Cumulus parameterization is found to be the primary determinant of the organization of the simulated tropical rainfall of the GLA GCM using the Arakawa-Schubert cumulus parameterization. A comparison of model simulations with station data revealed excessive shortwave radiation accompanied by excessive drying and heating of the land. Perpetual July simulations with and without interactive soil moisture show that 30- to 40-day oscillations may be a natural mode of the simulated earth-atmosphere system.
NASA Astrophysics Data System (ADS)
Maher, Penelope; Vallis, Geoffrey K.; Sherwood, Steven C.; Webb, Mark J.; Sansom, Philip G.
2018-04-01
Convective parameterizations are widely believed to be essential for realistic simulations of the atmosphere. However, their deficiencies also result in model biases. The role of convection schemes in modern atmospheric models is examined using Selected Process On/Off Klima Intercomparison Experiment simulations without parameterized convection and forced with observed sea surface temperatures. Convection schemes are not required for reasonable climatological precipitation. However, they are essential for reasonable daily precipitation and constraining extreme daily precipitation that otherwise develops. Systematic effects on lapse rate and humidity are likewise modest compared with the intermodel spread. Without parameterized convection Kelvin waves are more realistic. An unexpectedly large moist Southern Hemisphere storm track bias is identified. This storm track bias persists without convection schemes, as does the double Intertropical Convergence Zone and excessive ocean precipitation biases. This suggests that model biases originate from processes other than convection or that convection schemes are missing key processes.
Cloud microphysics modification with an online coupled COSMO-MUSCAT regional model
NASA Astrophysics Data System (ADS)
Sudhakar, D.; Quaas, J.; Wolke, R.; Stoll, J.; Muehlbauer, A. D.; Tegen, I.
2015-12-01
The quantification of clouds, aerosols, and aerosol-cloud interactions in models continues to be a challenge (IPCC, 2013). In this context, a two-moment bulk microphysical scheme is used to understand aerosol-cloud interactions in the regional model COSMO (Consortium for Small Scale Modeling). The two-moment scheme in COSMO has been especially designed to represent aerosol effects on the microphysics of mixed-phase clouds (Seifert et al., 2006). To improve the model predictability, the radiation scheme has been coupled with the two-moment microphysical scheme. Further, the cloud microphysics parameterization has been modified by coupling COSMO with MUSCAT (MultiScale Chemistry Aerosol Transport model; Wolke et al., 2004). In this study, we discuss initial results from the online-coupled COSMO-MUSCAT model system with the modified two-moment parameterization scheme, together with the COSP (CFMIP Observational Simulator Package) satellite simulator. This online-coupled model system aims to improve the representation of sub-grid scale processes in regional weather prediction. The constant aerosol concentration used in the Seifert and Beheng (2006) parameterization in the COSMO model has been replaced by aerosol concentrations derived from the MUSCAT model. The cloud microphysical processes from the modified two-moment scheme are compared with those from the stand-alone COSMO model. To validate the robustness of the model simulation, the coupled model system is integrated with the COSP satellite simulator (Muhlbauer et al., 2012). Further, the simulations are compared with MODIS (Moderate Resolution Imaging Spectroradiometer) and ISCCP (International Satellite Cloud Climatology Project) satellite products.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Daleu, C. L.; Plant, R. S.; Woolnough, S. J.
As part of an international intercomparison project, the weak temperature gradient (WTG) and damped gravity wave (DGW) methods are used to parameterize large-scale dynamics in a set of cloud-resolving models (CRMs) and single column models (SCMs). The WTG or DGW method is implemented using a configuration that couples a model to a reference state defined with profiles obtained from the same model in radiative-convective equilibrium. We investigated the sensitivity of each model to changes in SST, given a fixed reference state. We performed a systematic comparison of the WTG and DGW methods in different models, and a systematic comparison of the behavior of those models using the WTG method and the DGW method. The sensitivity to the SST depends on both the large-scale parameterization method and the choice of the cloud model. In general, SCMs display a wider range of behaviors than CRMs. All CRMs using either the WTG or DGW method show an increase of precipitation with SST, while SCMs show sensitivities which are not always monotonic. CRMs using either the WTG or DGW method show a similar relationship between mean precipitation rate and column-relative humidity, while SCMs exhibit a much wider range of behaviors. DGW simulations produce large-scale velocity profiles which are smoother and less top-heavy compared to those produced by the WTG simulations. Lastly, these large-scale parameterization methods provide a useful tool to identify the impact of parameterization differences on model behavior in the presence of two-way feedback between convection and the large-scale circulation.
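The weak temperature gradient coupling can be summarized compactly: the parameterized large-scale pressure velocity is the potential-temperature departure from the reference profile divided by the static stability times a relaxation timescale. The sketch below uses assumed profiles and a 3 h timescale; it is not the intercomparison's exact implementation.

```python
import numpy as np

def wtg_omega(theta, theta_ref, dtheta_ref_dp, tau=3.0 * 3600.0):
    """Weak-temperature-gradient large-scale pressure velocity (Pa/s).

    omega_WTG = (theta - theta_ref) / (tau * d(theta_ref)/dp), so that the
    induced adiabatic tendency relaxes theta toward theta_ref over tau.
    All profile values below are assumed for illustration.
    """
    return (theta - theta_ref) / (tau * dtheta_ref_dp)

# Illustrative profiles on a coarse pressure grid (Pa).
p = np.linspace(1000e2, 200e2, 17)
theta_ref = 300.0 + 40.0 * (1000e2 - p) / 800e2                   # reference profile
dtheta_ref_dp = np.gradient(theta_ref, p)                         # static stability in p
theta = theta_ref + 0.5 * np.sin(np.pi * (1000e2 - p) / 800e2)    # warm column anomaly

omega = wtg_omega(theta, theta_ref, dtheta_ref_dp)
# Negative omega is ascent in pressure coordinates.
print("column-peak ascent (Pa/s):", round(float(omega.min()), 4))
```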
Multiscale Modeling of Plasmon-Exciton Dynamics of Malachite Green Monolayers on Gold Nanoparticles
NASA Astrophysics Data System (ADS)
Smith, Holden; Karam, Tony; Haber, Louis; Lopata, Kenneth
A multi-scale hybrid quantum/classical approach using classical electrodynamics and a collection of discrete two-level quantum systems is used to investigate the coupling dynamics of malachite green monolayers adsorbed to the surface of a spherical gold nanoparticle (NP). This method utilizes the finite difference time domain (FDTD) technique to describe the plasmonic response of the NP and a two-level quantum description for the molecule via the Maxwell/Liouville equation. The molecular parameters are parameterized using CASPT2 for the energies and transition dipole moments, with the dephasing lifetime fit to experiment. This approach is suited to simulating thousands of molecules on the surface of a plasmonic NP. There is good agreement with experimental extinction measurements, predicting the plasmon and molecule depletions. Additionally, this model captures the polariton peaks overlapped with a Fano-type resonance profile observed in the experimental extinction measurements. This technique shows promise for modeling plasmon/molecule interactions in chemical sensing and light harvesting in multi-chromophore systems. This material is based upon work supported by the National Science Foundation under the NSF EPSCoR Cooperative Agreement No. EPS-1003897 and by the Louisiana Board of Regents Research Competitiveness Subprogram under Contract Number LEQSF(2014-17)-RD-A-0.
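A hedged sketch of the two-level quantum part of such a hybrid scheme: the Liouville-von Neumann equation with phenomenological dephasing is integrated for one molecule under a prescribed resonant field standing in for the FDTD near field; all parameter values are assumed for illustration.

```python
import numpy as np

# Two-level molecule driven by a prescribed classical field (atomic units).
omega0 = 0.08        # transition energy (assumed)
mu = 1.0             # transition dipole (assumed)
gamma = 2.0e-3       # pure dephasing rate (assumed)
dt, steps = 0.5, 20000

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
H0 = 0.5 * omega0 * sz

def field(t):
    # Resonant monochromatic drive; in the full model this would be the
    # FDTD near field evaluated at the molecule's position.
    return 1.0e-3 * np.cos(omega0 * t)

def drho_dt(rho, t):
    """Liouville-von Neumann equation with dephasing on the coherences."""
    H = H0 - mu * field(t) * sx
    d = -1j * (H @ rho - rho @ H)
    d[0, 1] -= gamma * rho[0, 1]
    d[1, 0] -= gamma * rho[1, 0]
    return d

rho = np.array([[0, 0], [0, 1]], dtype=complex)   # start in the ground state
pop = []
for n in range(steps):                             # classic RK4 time stepping
    t = n * dt
    k1 = drho_dt(rho, t)
    k2 = drho_dt(rho + 0.5 * dt * k1, t + 0.5 * dt)
    k3 = drho_dt(rho + 0.5 * dt * k2, t + 0.5 * dt)
    k4 = drho_dt(rho + dt * k3, t + dt)
    rho = rho + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    pop.append(rho[0, 0].real)                     # excited-state population

print("peak excited-state population:", round(max(pop), 4))
```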
Challenges of Representing Sub-Grid Physics in an Adaptive Mesh Refinement Atmospheric Model
NASA Astrophysics Data System (ADS)
O'Brien, T. A.; Johansen, H.; Johnson, J. N.; Rosa, D.; Benedict, J. J.; Keen, N. D.; Collins, W.; Goodfriend, E.
2015-12-01
Some of the greatest potential impacts from future climate change are tied to extreme atmospheric phenomena that are inherently multiscale, including tropical cyclones and atmospheric rivers. Extremes are challenging to simulate in conventional climate models due to existing models' coarse resolutions relative to the native length-scales of these phenomena. Studying the weather systems of interest requires an atmospheric model with sufficient local resolution, and sufficient performance for long-duration climate-change simulations. To this end, we have developed a new global climate code with adaptive spatial and temporal resolution. The dynamics are formulated using a block-structured conservative finite volume approach suitable for moist non-hydrostatic atmospheric dynamics. By using both space- and time-adaptive mesh refinement, the solver focuses computational resources only where greater accuracy is needed to resolve critical phenomena. We explore different methods for parameterizing sub-grid physics, such as microphysics, macrophysics, turbulence, and radiative transfer. In particular, we contrast the simplified physics representation of Reed and Jablonowski (2012) with the more complex physics representation used in the System for Atmospheric Modeling of Khairoutdinov and Randall (2003). We also explore the use of a novel macrophysics parameterization that is designed to be explicitly scale-aware.
Ultra-Parameterized CAM: Progress Towards Low-Cloud Permitting Superparameterization
NASA Astrophysics Data System (ADS)
Parishani, H.; Pritchard, M. S.; Bretherton, C. S.; Khairoutdinov, M.; Wyant, M. C.; Singh, B.
2016-12-01
A leading source of uncertainty in climate feedback arises from the representation of low clouds, which are not resolved but depend on small-scale physical processes (e.g. entrainment, boundary layer turbulence) that are heavily parameterized. We show results from recent attempts to achieve an explicit representation of low clouds by pushing the computational limits of cloud superparameterization to resolve boundary-layer eddy scales relevant to marine stratocumulus (250m horizontal and 20m vertical length scales). This extreme configuration is called "ultraparameterization". Effects of varying horizontal vs. vertical resolution are analyzed in the context of altered constraints on the turbulent kinetic energy statistics of the marine boundary layer. We show that 250m embedded horizontal resolution leads to a more realistic boundary layer vertical structure, but also to an unrealistic cloud pulsation that cannibalizes time mean LWP. We explore the hypothesis that feedbacks involving horizontal advection (not typically encountered in offline LES that neglect this degree of freedom) may conspire to produce such effects and present strategies to compensate. The results are relevant to understanding the emergent behavior of quasi-resolved low cloud decks in a multi-scale modeling framework within a previously unencountered grey zone of better resolved boundary-layer turbulence.
Multiscale model reduction for shale gas transport in poroelastic fractured media
NASA Astrophysics Data System (ADS)
Akkutlu, I. Yucel; Efendiev, Yalchin; Vasilyeva, Maria; Wang, Yuhe
2018-01-01
Inherently coupled flow and geomechanics processes in fractured shale media have implications for shale gas production. The system involves highly complex geo-textures comprised of a heterogeneous anisotropic fracture network spatially embedded in an ultra-tight matrix. In addition, nonlinearities due to viscous flow, diffusion, and desorption in the matrix and high-velocity gas flow in the fractures complicate the transport. In this paper, we develop a multiscale model reduction approach to couple gas flow and geomechanics in fractured shale media. A Discrete Fracture Model (DFM) is used to treat the complex network of fractures on a fine grid. The coupled flow and geomechanics equations are solved using a fixed-stress splitting scheme, solving the pressure equation with a continuous Galerkin method and the displacement equation with an interior penalty discontinuous Galerkin method. We develop a coarse-grid approximation and coupling using the Generalized Multiscale Finite Element Method (GMsFEM). GMsFEM constructs the multiscale basis functions in a systematic way to capture the fracture networks and their interactions with the shale matrix. Numerical results and an error analysis are provided, showing that the proposed approach accurately captures the coupled process using a few multiscale basis functions, i.e., a small fraction of the degrees of freedom of the fine-scale problem.
Differential geometry based solvation model. III. Quantum formulation
Chen, Zhan; Wei, Guo-Wei
2011-01-01
Solvation is of fundamental importance to biomolecular systems. Implicit solvent models, particularly those based on the Poisson-Boltzmann equation for electrostatic analysis, are established approaches for solvation analysis. However, ad hoc solvent-solute interfaces are commonly used in implicit solvent theory. Recently, we have introduced differential geometry based solvation models which allow the solvent-solute interface to be determined by the variation of a total free energy functional. Atomic fixed partial charges (point charges) are used in our earlier models, which depend on existing molecular mechanical force field software packages for partial charge assignments. As most force field models are parameterized for a certain class of molecules or materials, the use of partial charges limits the accuracy and applicability of our earlier models. Moreover, fixed partial charges do not account for the charge rearrangement during the solvation process. The present work proposes a differential geometry based multiscale solvation model which makes use of the electron density computed directly from the quantum mechanical principle. To this end, we construct a new multiscale total energy functional which consists of not only polar and nonpolar solvation contributions, but also the electronic kinetic and potential energies. By using the Euler-Lagrange variation, we derive a system of three coupled governing equations, i.e., the generalized Poisson-Boltzmann equation for the electrostatic potential, the generalized Laplace-Beltrami equation for the solvent-solute boundary, and the Kohn-Sham equations for the electronic structure. We develop an iterative procedure to solve the three coupled equations and to minimize the solvation free energy. The present multiscale model is numerically validated for its stability, consistency and accuracy, and is applied to a few sets of molecules, including a case which is difficult for existing solvation models. Comparison is made to many other classical and quantum models. By using experimental data, we show that the present quantum formulation of our differential geometry based multiscale solvation model improves the prediction of our earlier models, and outperforms some explicit solvation models. PMID:22112067
NASA Astrophysics Data System (ADS)
Fritts, Dave; Wang, Ling; Balsley, Ben; Lawrence, Dale
2013-04-01
A number of sources contribute to intermittent small-scale turbulence in the stable boundary layer (SBL). These include Kelvin-Helmholtz instability (KHI), gravity wave (GW) breaking, and fluid intrusions, among others. Indeed, such sources arise naturally in response to even very simple "multi-scale" superpositions of larger-scale GWs and smaller-scale GWs, mean flows, or fine structure (FS) throughout the atmosphere and the oceans. We describe here results of two direct numerical simulations (DNS) of these GW-FS interactions performed at high resolution and high Reynolds number that allow exploration of these turbulence sources and the character and effects of the turbulence that arises in these flows. Results include episodic turbulence generation, a broad range of turbulence scales and intensities, PDFs of dissipation fields exhibiting quasi-log-normal and more complex behavior, local turbulent mixing, and "sheet and layer" structures in potential temperature that closely resemble high-resolution measurements. Importantly, such multi-scale dynamics differ from their larger-scale, quasi-monochromatic gravity wave or quasi-horizontally homogeneous shear flow instabilities in significant ways. The ability to quantify such multi-scale dynamics with new, very high-resolution measurements is also advancing rapidly. New in-situ sensors on small, unmanned aerial vehicles (UAVs), balloons, or tethered systems are enabling definition of SBL (and deeper) environments and turbulence structure and dissipation fields with high spatial and temporal resolution and precision. These new measurement and modeling capabilities promise significant advances in understanding small-scale instability and turbulence dynamics, in quantifying their roles in mixing, transport, and evolution of the SBL environment, and in contributing to improved parameterizations of these dynamics in mesoscale, numerical weather prediction, climate, and general circulation models. We expect such measurement and modeling capabilities to also aid in the design of new and more comprehensive future SBL measurement programs.
Model's sparse representation based on reduced mixed GMsFE basis methods
NASA Astrophysics Data System (ADS)
Jiang, Lijian; Li, Qiuqi
2017-06-01
In this paper, we propose a model's sparse representation based on reduced mixed generalized multiscale finite element (GMsFE) basis methods for elliptic PDEs with random inputs. A typical application for the elliptic PDEs is the flow in heterogeneous random porous media. The mixed generalized multiscale finite element method (GMsFEM) is one of the accurate and efficient approaches to solve the flow problem on a coarse grid and obtain the velocity with local mass conservation. When the inputs of the PDEs are parameterized by random variables, the GMsFE basis functions usually depend on the random parameters. This leads to a large number of degrees of freedom for the mixed GMsFEM and substantially impacts the computational efficiency. In order to overcome the difficulty, we develop reduced mixed GMsFE basis methods such that the multiscale basis functions are independent of the random parameters and span a low-dimensional space. To this end, a greedy algorithm is used to find a set of optimal samples from a training set scattered in the parameter space. Reduced mixed GMsFE basis functions are constructed based on the optimal samples using two optimal sampling strategies: basis-oriented cross-validation and proper orthogonal decomposition. Although the dimension of the space spanned by the reduced mixed GMsFE basis functions is much smaller than the dimension of the original full-order model, the online computation still depends on the number of coarse degrees of freedom. To significantly improve the online computation, we integrate the reduced mixed GMsFE basis methods with sparse tensor approximation and obtain a sparse representation for the model's outputs. The sparse representation is very efficient for evaluating the model's outputs for many instances of parameters. To illustrate the efficacy of the proposed methods, we present a few numerical examples for elliptic PDEs with multiscale and random inputs. In particular, a two-phase flow model in random porous media is simulated by the proposed sparse representation method.
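As a minimal sketch of the proper-orthogonal-decomposition step described above, the snippet below compresses snapshots of a parameterized toy diffusion problem into a low-dimensional basis via an SVD; the 1-D problem, coefficient family, and energy threshold are assumptions standing in for the mixed GMsFE snapshots.

```python
import numpy as np

def solve_diffusion_1d(kappa, n=200):
    """FD solution of -(kappa(x) u')' = 1 on (0,1) with u(0)=u(1)=0."""
    h = 1.0 / n
    x_half = (np.arange(n) + 0.5) * h          # interface locations for kappa
    k = kappa(x_half)
    main = (k[:-1] + k[1:]) / h**2
    off = -k[1:-1] / h**2
    A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
    return np.linalg.solve(A, np.ones(n - 1))

def kappa_factory(theta):
    # Random multiscale-like coefficient parameterized by theta (assumed form).
    return lambda x: (1.0 + 0.9 * np.sin(2 * np.pi * theta[0] * x) ** 2
                      + theta[1] * np.exp(-((x - 0.5) / 0.05) ** 2))

rng = np.random.default_rng(3)
train = [rng.uniform([1, 0], [8, 5]) for _ in range(60)]
snapshots = np.column_stack([solve_diffusion_1d(kappa_factory(t)) for t in train])

# POD: SVD of the snapshot matrix; keep modes capturing 99.99% of the energy.
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
r = int(np.searchsorted(energy, 0.9999)) + 1
basis = U[:, :r]

# Accuracy check for a new parameter: project the full solution onto the basis.
theta_new = np.array([5.3, 2.1])
u_full = solve_diffusion_1d(kappa_factory(theta_new))
u_proj = basis @ (basis.T @ u_full)            # best approximation in the reduced space
print(f"reduced dimension r = {r}, relative error = "
      f"{np.linalg.norm(u_full - u_proj) / np.linalg.norm(u_full):.2e}")
```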
Effects of Planetary Boundary Layer Parameterizations on CWRF Regional Climate Simulation
NASA Astrophysics Data System (ADS)
Liu, S.; Liang, X.
2011-12-01
Planetary Boundary Layer (PBL) parameterizations incorporated in CWRF (Climate extension of the Weather Research and Forecasting model) are first evaluated by comparing simulated PBL heights with observations. Among the 10 evaluated PBL schemes, 2 (CAM, UW) are new in CWRF while the other 8 are original WRF schemes. MYJ, QNSE and UW determine the PBL height from turbulent kinetic energy (TKE) profiles, while the others (YSU, ACM, GFS, CAM, TEMF) use bulk Richardson criteria. All TKE-based schemes (MYJ, MYNN, QNSE, UW, BouLac) substantially underestimate convective or residual PBL heights from noon toward evening, while the others (ACM, CAM, YSU) capture the observed diurnal cycle well; the GFS shows systematic overestimation. These differences among the schemes are representative over most areas of the simulation domain, suggesting systematic behaviors of the parameterizations. Lower PBL heights simulated by the QNSE and MYJ are consistent with their smaller Bowen ratios and heavier rainfall, while higher PBL tops in the GFS correspond to warmer surface temperatures. The effects of PBL parameterizations on the CWRF regional climate simulation are then compared. The QNSE PBL scheme yields systematically heavier rainfall almost everywhere and throughout the year; this is identified with a much greater surface Bowen ratio (smaller sensible versus larger latent heating) and wetter soil moisture than other PBL schemes. Its predecessor, the MYJ scheme, shares the same deficiency to a lesser degree. For temperature, the performance of the QNSE and MYJ schemes remains poor, with substantially larger rms errors in all seasons. The GFS PBL scheme also produces large warm biases. Pronounced sensitivities to the PBL schemes are also found in winter and spring over most areas except the southern U.S. (Southeast, Gulf States, NAM); excluding the outliers (QNSE, MYJ, GFS) that cause extreme biases of -6 to +3°C, the differences among the schemes are still visible (±2°C), with the CAM generally more realistic. The QNSE, MYJ, GFS and BouLac PBL parameterizations are identified as clear outliers in overall performance in representing precipitation, surface air temperature and PBL height variations. Their poor performance may result from deficiencies in physical formulations, dependences on applicable scales, or problematic numerical implementations, requiring future detailed investigation to isolate the actual causes.
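For the bulk-Richardson family of schemes mentioned above, the PBL top is commonly diagnosed as the lowest level at which the bulk Richardson number exceeds a critical value. The sketch below applies such a diagnostic to an assumed sounding with a 0.25 threshold; thresholds and implementation details differ between the actual schemes.

```python
import numpy as np

def pbl_height_bulk_ri(z, theta_v, u, v, ri_crit=0.25, g=9.81):
    """Lowest height where the bulk Richardson number exceeds ri_crit.

    Rib(z) = g * (theta_v(z) - theta_v(sfc)) * (z - z_sfc)
             / (theta_v(sfc) * (du^2 + dv^2))
    """
    du2 = (u - u[0]) ** 2 + (v - v[0]) ** 2 + 1e-6      # avoid divide-by-zero
    rib = g * (theta_v - theta_v[0]) * (z - z[0]) / (theta_v[0] * du2)
    above = np.where(rib > ri_crit)[0]
    if len(above) == 0:
        return z[-1]
    k = above[0]
    # Linear interpolation between the bracketing levels.
    frac = (ri_crit - rib[k - 1]) / (rib[k] - rib[k - 1])
    return z[k - 1] + frac * (z[k] - z[k - 1])

# Assumed afternoon sounding: well-mixed layer capped by a weak inversion.
z = np.arange(10.0, 3000.0, 50.0)
theta_v = np.where(z < 1200.0, 300.0, 300.0 + 0.006 * (z - 1200.0))
u = 3.0 + 0.002 * z
v = np.zeros_like(z)

print("diagnosed PBL height (m):", round(pbl_height_bulk_ri(z, theta_v, u, v), 1))
```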
Cloud Simulations in Response to Turbulence Parameterizations in the GISS Model E GCM
NASA Technical Reports Server (NTRS)
Yao, Mao-Sung; Cheng, Ye
2013-01-01
The response of cloud simulations to turbulence parameterizations is studied systematically using the GISS general circulation model (GCM) E2 employed in the Intergovernmental Panel on Climate Change's (IPCC) Fifth Assessment Report (AR5). Without the turbulence parameterization, the relative humidity (RH) and the low cloud cover peak unrealistically close to the surface; with the dry convection or with only the local turbulence parameterization, these two quantities improve their vertical structures, but the vertical transport of water vapor is still weak in the planetary boundary layers (PBLs); with both local and nonlocal turbulence parameterizations, the RH and low cloud cover have better vertical structures in all latitudes due to more significant vertical transport of water vapor in the PBL. The study also compares the cloud and radiation climatologies obtained from an experiment using a newer version of turbulence parameterization being developed at GISS with those obtained from the AR5 version. This newer scheme differs from the AR5 version in computing nonlocal transports, turbulent length scale, and PBL height and shows significant improvements in cloud and radiation simulations, especially over the subtropical eastern oceans and the southern oceans. The diagnosed PBL heights appear to correlate well with the low cloud distribution over oceans. This suggests that a cloud-producing scheme needs to be constructed in a framework that also takes the turbulence into consideration.
Multiscale Materials Modeling in an Industrial Environment.
Weiß, Horst; Deglmann, Peter; In 't Veld, Pieter J; Cetinkaya, Murat; Schreiner, Eduard
2016-06-07
In this review, we sketch the materials modeling process in industry. We show that predictive and fast modeling is a prerequisite for successful participation in research and development processes in the chemical industry. Stable and highly automated workflows suitable for handling complex systems are a must. In particular, we review approaches to build and parameterize soft matter systems. By satisfying these prerequisites, efficiency for the development of new materials can be significantly improved, as exemplified here for formulation polymer development. This is in fact in line with recent Materials Genome Initiative efforts sponsored by the US government. Valuable contributions to product development are possible today by combining existing modeling techniques in an intelligent fashion, provided modeling and experiment work hand in hand.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hagos, Samson M.; Leung, Lai-Yung R.; Gustafson, William I.
2014-02-28
A multi-scale moisture budget analysis is used to identify the mechanisms responsible for the sensitivity of the water cycle to spatial resolution using idealized regional aquaplanet simulations. In the higher-resolution simulations, moisture transport by eddies dries the boundary layer, enhancing evaporation and precipitation. This effect of the eddies, which is underestimated by the physics parameterizations in the low-resolution simulations, is found to be responsible for the sensitivity of the water cycle, both directly and through its upscale effect on the mean circulation. Correlations among moisture transport by eddies at adjacent ranges of scales provide the potential for reducing this sensitivity by representing the unresolved eddies by their marginally resolved counterparts.
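A hedged sketch of the flux decomposition behind such a budget analysis: the moisture flux on a fine grid is split into the product of coarse-grid means plus a sub-coarse-grid eddy part, which is the portion a coarse-resolution model's physics would have to represent. The synthetic fields and 16x coarsening factor are assumptions.

```python
import numpy as np

def coarsen(field, factor):
    """Block-average a 1-D fine-grid field onto a coarse grid."""
    return field.reshape(-1, factor).mean(axis=1)

# Synthetic fine-grid cross-section: specific humidity q and vertical wind w.
rng = np.random.default_rng(4)
n_fine, factor = 1024, 16
x = np.linspace(0, 2 * np.pi, n_fine, endpoint=False)
q = 0.012 + 0.004 * np.cos(x) + 0.002 * rng.normal(size=n_fine)
w = 0.5 * np.sin(x) + 0.8 * np.sin(24 * x) + 0.3 * rng.normal(size=n_fine)

# Flux carried on the fine grid vs. the flux resolved by coarse-grid means.
flux_total = coarsen(q * w, factor)                  # <qw> on the coarse grid
flux_mean = coarsen(q, factor) * coarsen(w, factor)  # <q><w>
flux_eddy = flux_total - flux_mean                   # sub-coarse-grid eddy flux

share = np.mean(np.abs(flux_eddy)) / np.mean(np.abs(flux_total))
print(f"mean |eddy flux| / |total flux| = {share:.2f}")
```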
NASA Astrophysics Data System (ADS)
Subramanian, Aneesh C.; Palmer, Tim N.
2017-06-01
Stochastic schemes to represent model uncertainty in the European Centre for Medium-Range Weather Forecasts (ECMWF) ensemble prediction system have helped improve its probabilistic forecast skill over the past decade, both by improving its reliability and by reducing the ensemble-mean error. The largest uncertainties in the model arise from the model physics parameterizations. In the tropics, the parameterization of moist convection presents a major challenge for the accurate prediction of weather and climate. Superparameterization is a promising alternative strategy for including the effects of moist convection through explicit turbulent fluxes calculated from a cloud-resolving model (CRM) embedded within a global climate model (GCM). In this paper, we compare the impact of initial random perturbations in embedded CRMs, within the ECMWF ensemble prediction system, with the stochastically perturbed physical tendency (SPPT) scheme as a way to represent model uncertainty in medium-range tropical weather forecasts. We especially focus on forecasts of tropical convection and dynamics during MJO events in October-November 2011. These are well-studied events for MJO dynamics, as they were also heavily observed during the DYNAMO field campaign. We show that a multiscale ensemble modeling approach helps improve forecasts of certain aspects of tropical convection during the MJO events, while it also tends to degrade certain large-scale dynamic fields relative to the stochastically perturbed physical tendencies approach used operationally at ECMWF.
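A hedged sketch of the SPPT idea as commonly described in the literature: the net parameterized tendency is multiplied by (1 + r), where r is a smooth random pattern evolving as a first-order auto-regressive process. The pattern construction, amplitude, and decorrelation time below are illustrative, not the operational ECMWF settings.

```python
import numpy as np

rng = np.random.default_rng(5)

# 1-D periodic "latitude circle" with a smooth AR(1) perturbation pattern.
nx, dt = 256, 900.0                 # grid points, time step (s)
tau = 6 * 3600.0                    # pattern decorrelation time (assumed)
sigma = 0.5                         # pattern standard deviation (assumed)
phi = np.exp(-dt / tau)             # AR(1) autocorrelation per step

def smooth_noise(nx, n_smooth=10):
    """White noise smoothed along the circle to give spatial correlation."""
    e = rng.normal(size=nx)
    kernel = np.ones(n_smooth) / n_smooth
    e = np.convolve(np.tile(e, 3), kernel, mode="same")[nx:2 * nx]
    return e / e.std()

r = sigma * smooth_noise(nx)
for step in range(96):              # evolve the pattern for one day
    r = phi * r + sigma * np.sqrt(1.0 - phi ** 2) * smooth_noise(nx)
    r = np.clip(r, -0.9, 0.9)       # keep the multiplier positive

# Apply to a made-up parameterized heating tendency (K/s).
tendency_param = 2e-5 * np.exp(-((np.arange(nx) - 128) / 30.0) ** 2)
tendency_perturbed = (1.0 + r) * tendency_param
print("min/max multiplier:", round(1 + r.min(), 2), round(1 + r.max(), 2))
```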
Moss, Robert; Grosse, Thibault; Marchant, Ivanny; Lassau, Nathalie; Gueyffier, François; Thomas, S. Randall
2012-01-01
Mathematical models that integrate multi-scale physiological data can offer insight into physiological and pathophysiological function, and may eventually assist in individualized predictive medicine. We present a methodology for performing systematic analyses of multi-parameter interactions in such complex, multi-scale models. Human physiology models are often based on or inspired by Arthur Guyton's whole-body circulatory regulation model. Despite the significance of this model, it has not been the subject of a systematic and comprehensive sensitivity study. Therefore, we use this model as a case study for our methodology. Our analysis of the Guyton model reveals how the multitude of model parameters combine to affect the model dynamics, and how interesting combinations of parameters may be identified. It also includes a “virtual population” from which “virtual individuals” can be chosen, on the basis of exhibiting conditions similar to those of a real-world patient. This lays the groundwork for using the Guyton model for in silico exploration of pathophysiological states and treatment strategies. The results presented here illustrate several potential uses for the entire dataset of sensitivity results and the “virtual individuals” that we have generated, which are included in the supplementary material. More generally, the presented methodology is applicable to modern, more complex multi-scale physiological models. PMID:22761561
Gyrokinetic predictions of multiscale transport in a DIII-D ITER baseline discharge
NASA Astrophysics Data System (ADS)
Holland, C.; Howard, N. T.; Grierson, B. A.
2017-06-01
New multiscale gyrokinetic simulations predict that electron energy transport in a DIII-D ITER baseline discharge with dominant electron heating and low input torque is multiscale in nature, with roughly equal amounts of the electron energy flux Q_e coming from long-wavelength ion-scale (k_y ρ_s < 1) and short-wavelength electron-scale (k_y ρ_s > 1) fluctuations when the gyrokinetic results match independent power balance calculations. Corresponding conventional ion-scale simulations are able to match the power balance ion energy flux Q_i, but systematically underpredict Q_e when doing so. Significant nonlinear cross-scale couplings are observed in the multiscale simulations, but the exact simulation predictions are found to be extremely sensitive to variations of model input parameters within experimental uncertainties. Most notably, depending upon the exact value of the equilibrium E×B shearing rate γ_E×B used, either enhancement or suppression of the long-wavelength turbulence and transport levels in the multiscale simulations is observed relative to what is predicted by ion-scale simulations. While the enhancement of the long-wavelength fluctuations by inclusion of the short-wavelength turbulence was previously observed in similar multiscale simulations of an Alcator C-Mod L-mode discharge, these new results show for the first time a complete suppression of long-wavelength turbulence in a multiscale simulation, for parameters at which conventional ion-scale simulation predicts small but finite levels of low-k turbulence and transport consistent with the power balance Q_i. Although computational resource limitations prevent a fully rigorous validation assessment of these new results, they provide significant new evidence that electron energy transport in burning plasmas is likely to have a strong multiscale character, with significant nonlinear cross-scale couplings that must be fully understood to predict the performance of those plasmas with confidence.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Horstemeyer, Mark R.; Chaudhuri, Santanu
2015-09-30
A multiscale modeling Internal State Variable (ISV) constitutive model was developed that captures the fundamental structure-property relationships. The macroscale ISV model used lower length scale simulations (Butler-Volmer and Electronics Structures results) in order to inform the ISVs at the macroscale. The chemomechanical ISV model was calibrated and validated from experiments with magnesium (Mg) alloys that were investigated under corrosive environments coupled with experimental electrochemical studies. Because the ISV chemomechanical model is physically based, it can be used for other material systems to predict corrosion behavior. As such, others can use the chemomechanical model for analyzing corrosion effects on their designs.
Reyes, Jeanette M; Xu, Yadong; Vizuete, William; Serre, Marc L
2017-01-01
The regulatory Community Multiscale Air Quality (CMAQ) model is a means to understanding the sources, concentrations and regulatory attainment of air pollutants within a model's domain. Substantial resources are allocated to the evaluation of model performance. The Regionalized Air quality Model Performance (RAMP) method introduced here explores novel ways of visualizing and evaluating CMAQ model performance and errors for daily Particulate Matter ≤ 2.5 micrometers (PM2.5) concentrations across the continental United States. The RAMP method performs a non-homogeneous, non-linear, non-homoscedastic model performance evaluation at each CMAQ grid cell. This work demonstrates that CMAQ model performance, for a well-documented 2001 regulatory episode, is non-homogeneous across space/time. The RAMP correction of systematic errors outperforms other model evaluation methods, as demonstrated by a 22.1% reduction in Mean Square Error compared to a constant domain-wide correction. The RAMP method is able to accurately reproduce simulated performance with a correlation of r = 76.1%. Most of the error coming from CMAQ is random error, with only a minority being systematic. Areas of high systematic error are collocated with areas of high random error, implying both error types originate from similar sources. Therefore, addressing the underlying causes of systematic error will have the added benefit of also addressing the underlying causes of random error.
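As a minimal illustration of separating systematic from random error (not the RAMP method itself, which varies across space/time), the sketch below decomposes the mean-square error at a single location into squared bias plus residual variance using synthetic paired observations and model values, and contrasts a constant correction with a local linear one.

```python
import numpy as np

rng = np.random.default_rng(6)

# Synthetic daily PM2.5 at one grid cell: observations and model values
# with a deliberate multiplicative + additive systematic error (assumed).
obs = np.clip(rng.gamma(shape=4.0, scale=3.0, size=365), 1.0, None)
mod = 1.3 * obs + 2.0 + rng.normal(scale=3.0, size=365)

# MSE = (mean error)^2 + variance of the error: bias vs. random spread.
err = mod - obs
mse = np.mean(err ** 2)
systematic = np.mean(err) ** 2
random_part = np.var(err)
print(f"MSE = {mse:.1f}  systematic = {systematic:.1f}  random = {random_part:.1f}")

# A constant (domain-wide style) correction removes only the mean bias ...
mod_const = mod - np.mean(err)
# ... whereas a local linear correction also removes the conditional bias.
slope, intercept = np.polyfit(mod, obs, 1)
mod_local = slope * mod + intercept
for name, m in [("constant", mod_const), ("local linear", mod_local)]:
    print(f"{name:13s} corrected MSE = {np.mean((m - obs) ** 2):.1f}")
```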
NASA Astrophysics Data System (ADS)
Verrier, Sébastien; Crépon, Michel; Thiria, Sylvie
2014-09-01
Spectral scaling properties have already been evidenced in oceanic numerical simulations and have been subject to several interpretations. They can be used to evaluate classical turbulence theories that predict scaling with specific exponents and to evaluate the quality of GCM outputs from a statistical and multiscale point of view. However, a more complete framework based on multifractal cascades is able to generalize the classical but restrictive second-order spectral framework to other moment orders, providing an accurate description of the probability distributions of the fields at multiple scales. The predictions of this formalism still needed systematic verification in oceanic GCMs, while they have recently been confirmed for their atmospheric counterparts in several papers. The present paper is devoted to a systematic analysis of several oceanic fields produced by the NEMO oceanic GCM. Attention is focused on regional, idealized configurations that permit evaluation of the NEMO engine core from a scaling point of view without the limitations introduced by land masks. Based on classical multifractal analysis tools, multifractal properties were evidenced for several oceanic state variables (sea surface temperature and salinity, velocity components, etc.). While first-order structure functions estimated a different nonconservativity parameter H in two scaling ranges, the multi-order statistics of turbulent fluxes were scaling over almost the whole available range. This multifractal scaling was then parameterized with the help of the universal multifractal framework, providing parameters that are coherent with the existing empirical literature. Finally, we argue that knowledge of these properties may be useful for oceanographers. The framework seems very well suited for the statistical evaluation of OGCM outputs. Moreover, it also provides practical solutions for simulating subpixel variability stochastically for GCM downscaling purposes. As an independent perspective, the existence of multifractal properties in oceanic flows also seems interesting for investigating scale dependencies in remote sensing inversion algorithms.
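A hedged sketch of the multiscale statistics mentioned above: q-th order structure functions are computed for a synthetic 1-D transect and their log-log slopes ζ(q) estimated; a Fourier-synthesized power-law field and the fit range are assumptions standing in for the model fields.

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic 1-D "transect": power-law (fractal) field built in Fourier space.
n = 2 ** 14
k = np.fft.rfftfreq(n)
amp = np.zeros_like(k)
amp[1:] = k[1:] ** (-1.8 / 2.0)                 # assumed spectrum E(k) ~ k^-1.8
phase = np.exp(2j * np.pi * rng.random(len(k)))
field = np.fft.irfft(amp * phase, n)
field /= field.std()

def structure_function(f, lags, q):
    """S_q(l) = mean of |f(x + l) - f(x)|^q over the transect."""
    return np.array([np.mean(np.abs(f[lag:] - f[:-lag]) ** q) for lag in lags])

lags = np.unique(np.logspace(0, 3, 20).astype(int))
for q in (1, 2, 3):
    sq = structure_function(field, lags, q)
    zeta_q, _ = np.polyfit(np.log(lags), np.log(sq), 1)
    print(f"q = {q}:  zeta(q) = {zeta_q:.2f}")
# For a monofractal field zeta(q) is linear in q; multifractal cascades bend
# zeta(q), which is what the universal multifractal fit parameterizes.
```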
Parameterization of Model Validating Sets for Uncertainty Bound Optimizations. Revised
NASA Technical Reports Server (NTRS)
Lim, K. B.; Giesy, D. P.
2000-01-01
Given measurement data, a nominal model and a linear fractional transformation uncertainty structure with an allowance on unknown but bounded exogenous disturbances, easily computable tests for the existence of a model validating uncertainty set are given. Under mild conditions, these tests are necessary and sufficient for the case of complex, nonrepeated, block-diagonal structure. For the more general case which includes repeated and/or real scalar uncertainties, the tests are only necessary but become sufficient if a collinearity condition is also satisfied. With the satisfaction of these tests, it is shown that a parameterization of all model validating sets of plant models is possible. The new parameterization is used as a basis for a systematic way to construct or perform uncertainty tradeoff with model validating uncertainty sets which have specific linear fractional transformation structure for use in robust control design and analysis. An illustrative example which includes a comparison of candidate model validating sets is given.
Stochastic Ocean Predictions with Dynamically-Orthogonal Primitive Equations
NASA Astrophysics Data System (ADS)
Subramani, D. N.; Haley, P., Jr.; Lermusiaux, P. F. J.
2017-12-01
The coastal ocean is a prime example of multiscale nonlinear fluid dynamics. Ocean fields in such regions are complex and intermittent, with nonstationary heterogeneous statistics. Due to the limited measurements, there are multiple sources of uncertainty, including the initial conditions, boundary conditions, forcing, parameters, and even the model parameterizations and equations themselves. For efficient and rigorous quantification and prediction of these uncertainties, the stochastic Dynamically Orthogonal (DO) PDEs for a primitive-equation ocean modeling system with a nonlinear free surface are derived and numerical schemes for their space-time integration are obtained. Detailed numerical studies with idealized-to-realistic regional ocean dynamics are completed. These include consistency checks for the numerical schemes and comparisons with ensemble realizations. As an illustrative example, we simulate the 4-d multiscale uncertainty in the Middle Atlantic/New York Bight region during the months of January to March 2017. To provide initial conditions for the uncertainty subspace, uncertainties in the region were objectively analyzed using historical data. The DO primitive equations were subsequently integrated in space and time. The probability distribution function (pdf) of the ocean fields is compared to in-situ, remote sensing, and opportunity data collected during the coincident POSYDON experiment. Results show that our probabilistic predictions had skill and are 3 to 4 orders of magnitude faster than classic ensemble schemes.
NASA Technical Reports Server (NTRS)
Chen, Fei; Yates, David; LeMone, Margaret
2001-01-01
To understand the effects of land-surface heterogeneity and the interactions between the land-surface and the planetary boundary layer at different scales, we develop a multiscale data set. This data set, based on the Cooperative Atmosphere-Surface Exchange Study (CASES97) observations, includes atmospheric, surface, and sub-surface observations obtained from a dense observation network covering a large region on the order of 100 km. We use this data set to drive three land-surface models (LSMs) to generate multi-scale (with three resolutions of 1, 5, and 10 kilometers) gridded surface heat flux maps for the CASES area. Upon validating these flux maps with measurements from surface station and aircraft, we utilize them to investigate several approaches for estimating the area-integrated surface heat flux for the CASES97 domain of 71x74 square kilometers, which is crucial for land surface model development/validation and area water and energy budget studies. This research is aimed at understanding the relative contribution of random turbulence versus organized mesoscale circulations to the area-integrated surface flux at the scale of 100 kilometers, and identifying the most important effective parameters for characterizing the subgrid-scale variability for large-scale atmosphere-hydrology models.
Importance of Chemical Composition of Ice Nuclei on the Formation of Arctic Ice Clouds
NASA Astrophysics Data System (ADS)
Keita, Setigui Aboubacar; Girard, Eric
2016-09-01
Ice clouds play an important role in the Arctic weather and climate system, but interactions between aerosols, clouds and radiation remain poorly understood. Consequently, it is essential to fully understand their properties and especially their formation process. Extensive measurements from ground-based sites and satellite remote sensing reveal the existence of two Types of Ice Clouds (TICs) in the Arctic during the polar night and early spring. TICs-1 are composed of non-precipitating small (radar-unseen) ice crystals of less than 30 μm in diameter. The second type, TICs-2, are detected by radar and are characterized by a low concentration of large precipitating ice crystals (>30 μm). To explain these differences, we hypothesized that TIC-2 formation is linked to the acidification of aerosols, which inhibits the ice nucleating properties of ice nuclei (IN). As a result, the IN concentration is reduced in these regions, resulting in a lower concentration of larger ice crystals. Since the water vapor available for deposition is the same, these crystals reach a larger size. Current weather and climate models cannot simulate these different types of ice clouds. This problem is partly due to the parameterizations implemented for ice nucleation. Over the past 10 years, several parameterizations of homogeneous and heterogeneous ice nucleation on IN of different chemical compositions have been developed. These parameterizations are based on two approaches: stochastic (nucleation is a probabilistic, time-dependent process) and singular (nucleation occurs at fixed conditions of temperature and humidity and is time-independent). The best approach remains unclear. This research aims to better understand the formation process of Arctic TICs using recently developed ice nucleation parameterizations. For this purpose, we have implemented these ice nucleation parameterizations into the Limited Area version of the Global Environmental Multiscale model (GEM-LAM) and use them to simulate ice clouds observed during the Indirect and Semi-Direct Aerosol Campaign (ISDAC) in Alaska. Simulation results for the TICs-2 observed on April 15th and 25th (acidic cases) and the TICs-1 observed on April 5th (non-acidic case) are presented. Our results show that the stochastic approach based on classical nucleation theory with an appropriate contact angle performs best. Parameterizations of ice nucleation based on the singular approach tend to overestimate the ice crystal concentration in TICs-1 and TICs-2. Classical nucleation theory with an appropriate contact angle is the best approach to simulate the ice clouds investigated in this research.
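A hedged sketch of the classical-nucleation-theory approach with a contact angle: the heterogeneous deposition-nucleation rate follows from the homogeneous energy barrier scaled by the geometric factor f(cos θ), so a larger contact angle (e.g. an acid-coated IN) sharply reduces the rate at the same ice supersaturation. All prefactors and contact angles below are rough illustrative values, not those used in GEM-LAM.

```python
import numpy as np

KB = 1.380649e-23          # Boltzmann constant, J/K

def shape_factor(theta_deg):
    """Geometric reduction f(m) of the homogeneous barrier, m = cos(theta)."""
    m = np.cos(np.radians(theta_deg))
    return (2.0 + m) * (1.0 - m) ** 2 / 4.0

def het_nucleation_rate(T, S_ice, theta_deg, sigma_iv=0.106, v_ice=3.25e-29,
                        prefactor=1e25):
    """CNT deposition-nucleation rate per unit IN surface area (m^-2 s^-1).

    Barrier: dG* = 16 pi sigma^3 v^2 / (3 (kT ln S)^2), scaled by f(cos theta).
    sigma_iv (J m^-2), v_ice (m^3 per molecule) and the kinetic prefactor are
    rough literature-style values, assumed for illustration only.
    """
    if S_ice <= 1.0:
        return 0.0
    dg_hom = 16.0 * np.pi * sigma_iv ** 3 * v_ice ** 2 / (
        3.0 * (KB * T * np.log(S_ice)) ** 2)
    dg_het = shape_factor(theta_deg) * dg_hom
    return prefactor * np.exp(-dg_het / (KB * T))

# Low contact angle (efficient, uncoated IN) vs. higher contact angle
# (e.g. acid-coated IN): same supersaturation, very different rates.
T, S_ice = 233.0, 1.25
for theta in (12.0, 26.0):
    J = het_nucleation_rate(T, S_ice, theta)
    print(f"contact angle {theta:5.1f} deg -> J = {J:.3e} m^-2 s^-1")
```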
Year of Tropical Convection (YOTC): Status and Research Agenda
NASA Astrophysics Data System (ADS)
Moncrieff, M. W.; Waliser, D. E.
2009-12-01
The realistic representation of tropical convection in global models is a long-standing challenge for numerical weather prediction and an emerging grand challenge for climate prediction in respect to its physical basis. Insufficient knowledge and practical capabilities in this area disadvantage the modeling and prediction of prominent multi-scale phenomena such as the ITCZ, ENSO, monsoons and their active/break periods, the MJO, subtropical stratus decks, near-surface ocean properties, and tropical cyclones. Science elements include the diurnal cycle of precipitation, multi-scale convective organization, the global energy and water cycle, and interaction between the tropics and extra-tropics which interact strongly on timescales of weeks-to-months: the intersection of weather and climate. To address such challenges, the WCRP and WWRP/THORPEX are conducting a joint international research project, the Year of Tropical Convection (YOTC) which is a coordinated observing, modeling and forecasting project. The focus-year and integrated framework is intended to exploit the vast observational datasets, the modern high-resolution modeling frameworks, and theoretical insights. The over-arching objective is to advance the characterization, diagnosis, modeling, parameterization and prediction of multi-scale organized tropical phenomena and their interaction with the global circulation. The “Year” (May 2008 - April 2010) is intended to leverage recent major investments in Earth Science infrastructure and overlapping observational activities, e.g., Asian Monsoon Years (AMY) and the THORPEX Pacific Asian Regional Campaign (T-PARC). The research agenda involves phenomena and scale-interactions that are problematic for prediction models and have important socio-economic implications: MJO and convectively coupled equatorial waves; easterly waves and tropical cyclones; the monsoons including their intraseasonal variability; the diurnal cycle of precipitation; and two-way tropical-extratropical interaction. This presentation will summarize the status of the above.
Wang, Yan Jason; Nguyen, Monica T; Steffens, Jonathan T; Tong, Zheming; Wang, Yungang; Hopke, Philip K; Zhang, K Max
2013-01-15
A new methodology, referred to as the multi-scale structure, integrates "tailpipe-to-road" (i.e., on-road domain) and "road-to-ambient" (i.e., near-road domain) simulations to elucidate the environmental impacts of particulate emissions from traffic sources. The multi-scale structure is implemented in the CTAG model to 1) generate process-based on-road emission rates of ultrafine particles (UFPs) by explicitly simulating the effects of exhaust properties, traffic conditions, and meteorological conditions and 2) characterize the impacts of traffic-related emissions on micro-environmental air quality near a highway intersection in Rochester, NY. The performance of CTAG, evaluated against the field measurements, shows adequate agreement in capturing the dispersion of carbon monoxide (CO) and the number concentrations of UFPs in the near-road micro-environment. As a proof-of-concept case study, we also apply CTAG to separate the relative impacts of the shutdown of a large coal-fired power plant (CFPP) and the adoption of ultra-low-sulfur diesel (ULSD) on UFP concentrations in the intersection micro-environment. Although CTAG is still computationally expensive compared to the widely used parameterized dispersion models, it has the potential to advance our capability to predict the impacts of UFP emissions and the spatial/temporal variations of air pollutants in complex environments. Furthermore, for the on-road simulations, CTAG can serve as a process-based emission model; combining the on-road and near-road simulations, CTAG becomes a "plume-in-grid" model for mobile emissions. The processed emission profiles can potentially improve regional air quality and climate predictions accordingly.
Toward seamless hydrologic predictions across spatial scales
NASA Astrophysics Data System (ADS)
Samaniego, Luis; Kumar, Rohini; Thober, Stephan; Rakovec, Oldrich; Zink, Matthias; Wanders, Niko; Eisner, Stephanie; Müller Schmied, Hannes; Sutanudjaja, Edwin H.; Warrach-Sagi, Kirsten; Attinger, Sabine
2017-09-01
Land surface and hydrologic models (LSMs/HMs) are used at diverse spatial resolutions ranging from catchment-scale (1-10 km) to global-scale (over 50 km) applications. Applying the same model structure at different spatial scales requires that the model estimates similar fluxes independent of the chosen resolution, i.e., fulfills a flux-matching condition across scales. An analysis of state-of-the-art LSMs and HMs reveals that most do not have consistent hydrologic parameter fields. Multiple experiments with the mHM, Noah-MP, PCR-GLOBWB, and WaterGAP models demonstrate the pitfalls of deficient parameterization practices currently used in most operational models, which are insufficient to satisfy the flux-matching condition. These examples demonstrate that J. Dooge's 1982 statement on the unsolved problem of parameterization in these models remains true. Based on a review of existing parameter regionalization techniques, we postulate that the multiscale parameter regionalization (MPR) technique offers a practical and robust method that provides consistent (seamless) parameter and flux fields across scales. Herein, we develop a general model protocol to describe how MPR can be applied to a particular model and present an example application using the PCR-GLOBWB model. Finally, we discuss potential advantages and limitations of MPR in obtaining the seamless prediction of hydrological fluxes and states across spatial scales.
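A hedged sketch of the MPR idea: a single transfer function maps a high-resolution predictor to a high-resolution parameter field, which is then upscaled to any modeling resolution, so the same global coefficients yield consistent parameter fields across scales. The linear transfer function, synthetic predictor, and harmonic-mean upscaling operator below are illustrative choices, not mHM's calibrated functions.

```python
import numpy as np

def transfer_function(sand_fraction, coeffs=(0.05, 0.30)):
    """Map a fine-grid predictor to a fine-grid parameter (porosity-like).

    The linear form and coefficients are assumed for illustration; MPR uses
    a library of pedotransfer-style functions with calibrated global coefficients.
    """
    a, b = coeffs
    return a + b * sand_fraction

def upscale(fine_field, factor):
    """Harmonic-mean upscaling of a 2-D fine field by an integer factor."""
    ny, nx = fine_field.shape
    blocks = fine_field.reshape(ny // factor, factor, nx // factor, factor)
    return factor * factor / np.sum(1.0 / blocks, axis=(1, 3))

rng = np.random.default_rng(8)
sand = np.clip(rng.normal(0.4, 0.15, size=(64, 64)), 0.05, 0.95)  # fine predictor
param_fine = transfer_function(sand)

# The same coefficients yield parameter fields at several coarser resolutions;
# coarsening factors 2, 8 and 16 stand in for different modeling resolutions
# (the flux-matching check is done on simulated fluxes in practice).
for factor in (2, 8, 16):
    coarse = upscale(param_fine, factor)
    print(f"factor {factor:2d}: domain mean = {coarse.mean():.4f}")
```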
Qu, Zhiyu; Qu, Fuxin; Hou, Changbo; Jing, Fulong
2018-05-19
In an inverse synthetic aperture radar (ISAR) imaging system for targets with complex motion, the azimuth echo signals of the target are usually modeled as multicomponent quadratic frequency modulation (QFM) signals. Estimating the chirp rate (CR) and quadratic chirp rate (QCR) of QFM signals is essential for solving the ISAR image defocus problem. For multicomponent QFM (multi-QFM) signals, conventional CR and QCR estimation algorithms suffer from cross-terms and poor noise robustness. This paper proposes a novel estimation algorithm, the two-dimensional product modified parameterized chirp rate-quadratic chirp rate distribution (2D-PMPCRD), for QFM signal parameter estimation. The 2D-PMPCRD employs a multi-scale parametric symmetric self-correlation function and a modified nonuniform fast Fourier transform-fast Fourier transform to map the signals into chirp rate-quadratic chirp rate (CR-QCR) domains. It greatly suppresses the cross-terms while strengthening the auto-terms by multiplying CR-QCR domains computed with different scale factors. Compared with the high-order ambiguity function-integrated cubic phase function and the modified Lv's distribution, simulation results verify that the 2D-PMPCRD achieves higher noise robustness and better cross-term suppression for multi-QFM signals at a reasonable computational cost.
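A minimal sketch of what the CR and QCR are, for a single noisy QFM component (this is a plain phase-polynomial fit, not the 2D-PMPCRD; signal parameters and noise level are illustrative):

```python
import numpy as np

fs, T = 1000.0, 1.0                      # sampling rate (Hz) and duration (s)
t = np.arange(0, T, 1 / fs)

# Single-component QFM signal: phase = 2*pi*(f0*t + a/2*t^2 + b/6*t^3),
# so the chirp rate (CR) is a and the quadratic chirp rate (QCR) is b.
f0, a, b = 50.0, 30.0, 40.0
s = np.exp(1j * 2 * np.pi * (f0 * t + 0.5 * a * t**2 + b * t**3 / 6))
s += 0.1 * (np.random.randn(t.size) + 1j * np.random.randn(t.size))

# Fit a cubic polynomial to the unwrapped phase and map coefficients back.
phase = np.unwrap(np.angle(s))
c3, c2, c1, c0 = np.polyfit(t, phase, 3)
cr_est = c2 / np.pi          # c2 = 2*pi*a/2  ->  a = c2/pi
qcr_est = 3 * c3 / np.pi     # c3 = 2*pi*b/6  ->  b = 3*c3/pi

print(f"CR  estimate: {cr_est:.1f}  (true {a})")
print(f"QCR estimate: {qcr_est:.1f} (true {b})")
```

For multicomponent signals this direct fit breaks down, which is the cross-term problem the 2D-PMPCRD is designed to address.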
The Collaborative Seismic Earth Model: Generation 1
NASA Astrophysics Data System (ADS)
Fichtner, Andreas; van Herwaarden, Dirk-Philip; Afanasiev, Michael; Simutė, Saulė; Krischer, Lion; Çubuk-Sabuncu, Yeşim; Taymaz, Tuncay; Colli, Lorenzo; Saygin, Erdinc; Villaseñor, Antonio; Trampert, Jeannot; Cupillard, Paul; Bunge, Hans-Peter; Igel, Heiner
2018-05-01
We present a general concept for evolutionary, collaborative, multiscale inversion of geophysical data, specifically applied to the construction of a first-generation Collaborative Seismic Earth Model. This is intended to address the limited resources of individual researchers and the often limited use of previously accumulated knowledge. Model evolution rests on a Bayesian updating scheme, simplified into a deterministic method that honors today's computational restrictions. The scheme is able to harness distributed human and computing power. It furthermore handles conflicting updates, as well as variable parameterizations of different model refinements or different inversion techniques. The first-generation Collaborative Seismic Earth Model comprises 12 refinements from full seismic waveform inversion, ranging from regional crustal- to continental-scale models. A global full-waveform inversion ensures that regional refinements translate into whole-Earth structure.
NASA Astrophysics Data System (ADS)
Bell, C.; Li, Y.; Lopez, E.; Hogue, T. S.
2017-12-01
Decision support tools that quantitatively estimate the cost and performance of infrastructure alternatives are valuable for urban planners. Such a tool is needed to aid in planning stormwater projects to meet diverse goals such as the regulation of stormwater runoff and its pollutants, minimization of economic costs, and maximization of environmental and social benefits in the communities served by the infrastructure. This work gives a brief overview of an integrated decision support tool, called i-DST, that is currently being developed to serve this need. This presentation focuses on the development of a default database for the i-DST that parameterizes water quality treatment efficiency of stormwater best management practices (BMPs) by region. Parameterizing the i-DST by region will allow the tool to perform accurate simulations in all parts of the United States. A national dataset of BMP performance is analyzed to determine which of a series of candidate regionalizations explains the most variance in the national dataset. The data used in the regionalization analysis comes from the International Stormwater BMP Database and data gleaned from an ongoing systematic review of peer-reviewed and gray literature. In addition to identifying a regionalization scheme for water quality performance parameters in the i-DST, our review process will also provide example methods and protocols for systematic reviews in the field of Earth Science.
NASA Astrophysics Data System (ADS)
Lemieux, Jean-François; Dupont, Frédéric; Blain, Philippe; Roy, François; Smith, Gregory C.; Flato, Gregory M.
2016-10-01
In some coastal regions of the Arctic Ocean, grounded ice ridges contribute to stabilizing and maintaining a landfast ice cover. Recently, a grounding scheme representing this effect on sea ice dynamics was introduced and tested in a viscous-plastic sea ice model. This grounding scheme, based on a basal stress parameterization, improves the simulation of landfast ice in many regions such as the East Siberian Sea, the Laptev Sea, and along the coast of Alaska. Nevertheless, in some regions like the Kara Sea, the area of landfast ice is systematically underestimated. This indicates that another mechanism, such as ice arching, is at play in keeping the ice cover landfast. To address this problem, the combination of the basal stress parameterization and tensile strength is investigated using a 0.25° Pan-Arctic CICE-NEMO configuration. Both uniaxial and isotropic tensile strengths notably improve the simulation of landfast ice in the Kara Sea but also in the Laptev Sea. However, the simulated landfast ice season for the Kara Sea is too short compared to observations. This is especially obvious for the onset of the landfast ice season, which systematically occurs later in the model and with a slower build-up. This suggests that improvements to the sea ice thermodynamics could reduce these discrepancies with the data.
NASA Technical Reports Server (NTRS)
Jones, John H.
2010-01-01
Longhi et al. [1] have used the D(Ni) vs. D(Mg) parameterizations of Jones [2, 3] in attempting to explain the Ni systematics of lunar differentiation. A key element of the Jones parameterization and the Longhi et al. models is that, at very high temperatures, Ni may become incompatible in olivine. Unfortunately, there is no actual experimental evidence that this is ever the case [1]. To date, all experiments designed to demonstrate such incompatibility have failed. Here I will investigate the thermodynamic foundations of the D(Ni) vs. D(Mg) trends for olivine/liquid discovered by [2].
NASA Astrophysics Data System (ADS)
Lasaponara, R.; Lanorte, A.; Coluzzi, R.; Masini, N.
2009-04-01
The systematic monitoring of cultural and natural heritage is a basic step for its conservation. Monitoring strategies should constitute an integral component of policies relating to land use, development, and planning. To this aim, remote sensing technologies can be used profitably. This paper deals with the use of multitemporal, multisensor, and multiscale satellite data for assessing and monitoring changes affecting cultural landscapes and archaeological sites. The discussion is focused on some significant test cases selected in Peru (South America) and southern Italy. Artifacts, unearthed sites, and marks of buried remains have been investigated using multitemporal aerial and satellite data, such as QuickBird, ASTER, and Landsat MSS and TM.
The topology of the cosmic web in terms of persistent Betti numbers
NASA Astrophysics Data System (ADS)
Pranav, Pratyush; Edelsbrunner, Herbert; van de Weygaert, Rien; Vegter, Gert; Kerber, Michael; Jones, Bernard J. T.; Wintraecken, Mathijs
2017-03-01
We introduce a multiscale topological description of the Megaparsec web-like cosmic matter distribution. Betti numbers and topological persistence offer a powerful means of describing the rich connectivity structure of the cosmic web and of its multiscale arrangement of matter and galaxies. Emanating from algebraic topology and Morse theory, Betti numbers and persistence diagrams represent an extension and deepening of the cosmologically familiar topological genus measure and the related geometric Minkowski functionals. In addition to a description of the mathematical background, this study presents the computational procedure for computing Betti numbers and persistence diagrams for density field filtrations. The field may be computed starting from a discrete spatial distribution of galaxies or simulation particles. The main emphasis of this study concerns an extensive and systematic exploration of the imprint of different web-like morphologies and different levels of multiscale clustering in the corresponding computed Betti numbers and persistence diagrams. To this end, we use Voronoi clustering models as templates for a rich variety of web-like configurations and the fractal-like Soneira-Peebles models exemplify a range of multiscale configurations. We have identified the clear imprint of cluster nodes, filaments, walls, and voids in persistence diagrams, along with that of the nested hierarchy of structures in multiscale point distributions. We conclude by outlining the potential of persistent topology for understanding the connectivity structure of the cosmic web, in large simulations of cosmic structure formation and in the challenging context of the observed galaxy distribution in large galaxy surveys.
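A minimal sketch of the zeroth Betti number for a superlevel-set filtration on a synthetic 2D field (purely illustrative; it counts connected overdense components with scipy rather than computing full persistence diagrams):

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(1)

# Synthetic 2D "density" field: a smoothed random field standing in for the
# cosmic matter distribution (illustrative only).
field = ndimage.gaussian_filter(rng.standard_normal((256, 256)), sigma=6)

# Superlevel-set filtration: count connected overdense components (Betti-0)
# as the density threshold is lowered. Peaks are born first and then merge;
# a persistence diagram records exactly these birth/death pairs.
for nu in np.linspace(field.max(), field.min(), 8):
    labels, n_components = ndimage.label(field >= nu)
    print(f"threshold {nu:+.3f}: beta_0 = {n_components}")
```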
Gyrokinetic predictions of multiscale transport in a DIII-D ITER baseline discharge
Holland, C.; Howard, N. T.; Grierson, B. A.
2017-05-08
New multiscale gyrokinetic simulations predict that electron energy transport in a DIII-D ITER baseline discharge with dominant electron heating and low input torque is multiscale in nature, with roughly equal amounts of the electron energy flux Qe coming from long wavelength ion-scale (k_yρ_s < 1) and short wavelength electron-scale (k_yρ_s > 1) fluctuations when the gyrokinetic results match independent power balance calculations. Corresponding conventional ion-scale simulations are able to match the power balance ion energy flux Qi, but systematically underpredict Qe when doing so. We observe significant nonlinear cross-scale couplings in the multiscale simulations, but the exact simulation predictions are found to be extremely sensitive to variations of model input parameters within experimental uncertainties. Most notably, depending upon the exact value of the equilibrium E×B shearing rate γ_E×B used, either enhancement or suppression of the long-wavelength turbulence and transport levels in the multiscale simulations is observed relative to what is predicted by ion-scale simulations. And while the enhancement of the long wavelength fluctuations by inclusion of the short wavelength turbulence was previously observed in similar multiscale simulations of an Alcator C-Mod L-mode discharge, these new results show for the first time a complete suppression of long-wavelength turbulence in a multiscale simulation, for parameters at which conventional ion-scale simulation predicts small but finite levels of low-k turbulence and transport consistent with the power balance Qi. Though computational resource limitations prevent a fully rigorous validation assessment of these new results, they provide significant new evidence that electron energy transport in burning plasmas is likely to have a strong multiscale character, with significant nonlinear cross-scale couplings that must be fully understood to predict the performance of those plasmas with confidence.
Stochastic Parameterization: Toward a New View of Weather and Climate Models
Berner, Judith; Achatz, Ulrich; Batté, Lauriane; ...
2017-03-31
The last decade has seen the success of stochastic parameterizations in short-term, medium-range, and seasonal forecasts: operational weather centers now routinely use stochastic parameterization schemes to represent model inadequacy better and to improve the quantification of forecast uncertainty. Developed initially for numerical weather prediction, the inclusion of stochastic parameterizations not only provides better estimates of uncertainty, but it is also extremely promising for reducing long-standing climate biases and is relevant for determining the climate response to external forcing. This article highlights recent developments from different research groups that show that the stochastic representation of unresolved processes in the atmosphere, oceans, land surface, and cryosphere of comprehensive weather and climate models 1) gives rise to more reliable probabilistic forecasts of weather and climate and 2) reduces systematic model bias. We make a case that the use of mathematically stringent methods for the derivation of stochastic dynamic equations will lead to substantial improvements in our ability to accurately simulate weather and climate at all scales. Recent work in mathematics, statistical mechanics, and turbulence is reviewed; its relevance for the climate problem is demonstrated; and future research directions are outlined.
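A schematic sketch of the simplest class of scheme discussed here, an SPPT-like multiplicative perturbation of a parameterized tendency driven by an AR(1) process (timescale, amplitude, and tendency values are hypothetical, not those of any operational scheme):

```python
import numpy as np

rng = np.random.default_rng(2)

def sppt_like_perturbation(tendency, n_steps, tau=6.0, dt=0.5, sigma=0.3):
    """Multiply a parameterized tendency by (1 + r), where r follows an AR(1)
    process with decorrelation time tau (hours); all values are illustrative."""
    phi = np.exp(-dt / tau)                       # AR(1) autocorrelation
    r = 0.0
    perturbed = np.empty(n_steps)
    for k in range(n_steps):
        r = phi * r + sigma * np.sqrt(1 - phi**2) * rng.standard_normal()
        perturbed[k] = tendency * (1.0 + np.clip(r, -0.9, 0.9))
    return perturbed

# Deterministic convective heating tendency of 2 K/day, perturbed over 10 days.
series = sppt_like_perturbation(tendency=2.0, n_steps=480)
print(series.mean(), series.std())
```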
Scale Interactions in the Tropics from a Simple Multi-Cloud Model
NASA Astrophysics Data System (ADS)
Niu, X.; Biello, J. A.
2017-12-01
Our lack of a complete understanding of the interaction between moist convection and equatorial waves remains an impediment to the numerical simulation of large-scale organization, such as the Madden-Julian Oscillation (MJO). The aim of this project is to understand interactions across spatial scales in the tropics within a simplified framework for scale interactions, while using a simplified framework to describe the basic features of moist convection. Using multiple asymptotic scales, Biello and Majda [1] derived a multi-scale model of moist tropical dynamics (IMMD), which separates three regimes: the planetary-scale climatology, the synoptic-scale waves, and the planetary-scale anomalies. The scales and strength of the observed MJO would place it in the regime of planetary-scale anomalies, which are themselves forced by nonlinear upscale fluxes from the synoptic-scale waves. In order to close this model and determine whether it provides a self-consistent theory of the MJO, a model for diabatic heating due to moist convection must be implemented along with the IMMD. The multi-cloud parameterization is a model proposed by Khouider and Majda [2] to describe the three basic cloud types (congestus, deep, and stratiform) that are most responsible for tropical diabatic heating. We implement a simplified version of the multi-cloud model that is based on results derived from large-eddy simulations of convection [3]. We present this simplified multi-cloud model and show results of numerical experiments beginning with a variety of convective forcing states. Preliminary results on upscale fluxes, from synoptic scales to planetary-scale anomalies, will be presented. [1] Biello J A, Majda A J. Intraseasonal multi-scale moist dynamics of the tropical atmosphere. Communications in Mathematical Sciences, 2010, 8(2): 519-540. [2] Khouider B, Majda A J. A simple multicloud parameterization for convectively coupled tropical waves. Part I: Linear analysis. Journal of the Atmospheric Sciences, 2006, 63(4): 1308-1323. [3] Dorrestijn J, Crommelin D T, Biello J A, et al. A data-driven multi-cloud model for stochastic parametrization of deep convection. Philosophical Transactions of the Royal Society of London A: Mathematical, Physical and Engineering Sciences, 2013, 371(1991): 20120374.
Long-term simulations with the coupled WRF–CMAQ (Weather Research and Forecasting–Community Multi-scale Air Quality) model have been conducted to systematically investigate the changes in anthropogenic emissions of SO2 and NOx over the past 16 years (1995–2010) ...
Energy functions for regularization algorithms
NASA Technical Reports Server (NTRS)
Delingette, H.; Hebert, M.; Ikeuchi, K.
1991-01-01
Regularization techniques are widely used for inverse problem solving in computer vision, such as surface reconstruction, edge detection, or optical flow estimation. Energy functions used for regularization algorithms measure how smooth a curve or surface is, and to yield acceptable solutions these energies must satisfy certain properties, such as invariance under Euclidean transformations or invariance under parameterization. The notion of smoothness energy is extended here to the notion of a differential stabilizer, and it is shown that, to avoid the systematic underestimation of curvature for planar curve fitting, it is necessary that circles be the curves of maximum smoothness. A set of stabilizers is proposed that meets this condition as well as invariance under rotation and parameterization.
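A minimal sketch of the kind of quadratic smoothness energy discussed here, applied to 1D curve fitting (the data, the second-difference stabilizer, and the weight lam are illustrative, not the stabilizers proposed in the paper):

```python
import numpy as np

rng = np.random.default_rng(3)

n = 100
x = np.linspace(0, 1, n)
y = np.sin(2 * np.pi * x) + 0.2 * rng.standard_normal(n)   # noisy samples

# Second-difference operator D2 (discrete curvature) and the normal equations
# of the quadratic energy  E(f) = ||f - y||^2 + lam * ||D2 f||^2.
D2 = np.diff(np.eye(n), n=2, axis=0)
lam = 50.0
f = np.linalg.solve(np.eye(n) + lam * D2.T @ D2, y)

# The fit is smooth, but its peak curvature is smaller than that of the true
# sinusoid -- the systematic curvature underestimation the stabilizer
# framework is meant to avoid.
print(np.abs(np.diff(f, 2)).max(), np.abs(np.diff(np.sin(2 * np.pi * x), 2)).max())
```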
Anatomical parameterization for volumetric meshing of the liver
NASA Astrophysics Data System (ADS)
Vera, Sergio; González Ballester, Miguel A.; Gil, Debora
2014-03-01
A coordinate system describing the interior of organs is a powerful tool for a systematic localization of injured tissue. If the same coordinate values are assigned to specific anatomical landmarks, the coordinate system allows integration of data across different medical image modalities. Harmonic mappings have been used to produce parametric coordinate systems over the surface of anatomical shapes, given their flexibility to set values at specific locations through boundary conditions. However, most existing implementations in medical imaging are restricted either to anatomical surfaces or to a depth coordinate whose boundary conditions are given at sites of limited geometric diversity. In this paper we present a method for anatomical volumetric parameterization that extends current harmonic parameterizations to the interior anatomy using information provided by the volume medial surface. We have applied the methodology to define a common reference system for the liver shape and functional anatomy. This reference system sets a solid base for creating anatomical models of the patient's liver and allows livers from several patients to be compared in a common frame of reference.
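A minimal 2D analogue of a harmonic depth coordinate (the grid, the crude "medial" segment, and the boundary values 0/1 are illustrative; the method in the paper works on 3D organ volumes and their medial surfaces):

```python
import numpy as np

# 2D analogue of a harmonic depth coordinate: 0 on the organ boundary,
# 1 on an interior "medial" set, harmonic (Laplace) in between.
n = 65
u = np.zeros((n, n))
interior = np.zeros((n, n), dtype=bool)
interior[1:-1, 1:-1] = True

medial = np.zeros((n, n), dtype=bool)
medial[n // 2, n // 4: 3 * n // 4] = True     # a crude medial segment
u[medial] = 1.0

# Jacobi relaxation: average the four neighbours, keeping boundary and
# medial values fixed so the solution interpolates harmonically between them.
for _ in range(2000):
    avg = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1] + u[1:-1, :-2] + u[1:-1, 2:])
    new = u.copy()
    new[1:-1, 1:-1] = avg
    new[~interior] = 0.0
    new[medial] = 1.0
    u = new

print(u[n // 2 + 10, n // 2])   # depth coordinate of an interior point
```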
Multiscale 2D Inversions of Active-source First-arrival Times in Taiwan
NASA Astrophysics Data System (ADS)
Lin, Y. P.; Zhao, L.; Hung, S. H.
2015-12-01
In this study, we make use of the active-source records collected by the TAIGER (TAiwan Integrated GEodynamics Research) project in 2008 at nearly 1400 locations on the island of Taiwan and the surrounding ocean bottom. We manually picked the first-arrival times from the waveform records to obtain a set of highly accurate P-wave traveltimes. Among the 1400 receivers, more than 1000 were deployed along four almost linear cross-island profiles with inter-seismometer spacing down to 200 m. This ground-truth dataset provides strong constraints on the structure between the exactly known active sources and densely distributed receivers, which can be used to calibrate the seismic structure in the upper crust in Taiwan. In this study, we use this dataset to image the two-dimensional P-wave structure along the four linear profiles. A wavelet parameterization of the model is adopted to achieve an objective and data-adaptive multiscale resolution of the 2D structures. Rigorous estimations of resolution lengths were also conducted to quantify the spatial resolution of the tomographic inversions. The resulting 2D models yield first-arrival time predictions that are in excellent agreement with the observations. The seismic structures along the 2D profiles display strong lateral variations (up to 80% relative to the regional average) with more realistic amplitudes of velocity perturbations and spatial patterns consistent with the geological zonation of Taiwan.
Module-based multiscale simulation of angiogenesis in skeletal muscle
2011-01-01
Background Mathematical modeling of angiogenesis has been gaining momentum as a means to shed new light on the biological complexity underlying blood vessel growth. A variety of computational models have been developed, each focusing on different aspects of the angiogenesis process and occurring at different biological scales, ranging from the molecular to the tissue levels. Integration of models at different scales is a challenging and currently unsolved problem. Results We present an object-oriented module-based computational integration strategy to build a multiscale model of angiogenesis that links currently available models. As an example case, we use this approach to integrate modules representing microvascular blood flow, oxygen transport, vascular endothelial growth factor transport and endothelial cell behavior (sensing, migration and proliferation). Modeling methodologies in these modules include algebraic equations, partial differential equations and agent-based models with complex logical rules. We apply this integrated model to simulate exercise-induced angiogenesis in skeletal muscle. The simulation results compare capillary growth patterns between different exercise conditions for a single bout of exercise. Results demonstrate how the computational infrastructure can effectively integrate multiple modules by coordinating their connectivity and data exchange. Model parameterization offers simulation flexibility and a platform for performing sensitivity analysis. Conclusions This systems biology strategy can be applied to larger scale integration of computational models of angiogenesis in skeletal muscle, or other complex processes in other tissues under physiological and pathological conditions. PMID:21463529
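A schematic sketch of the module-based coupling strategy described above, with hypothetical module names, state fields, and update rules (not the authors' code): a coordinator advances each module in turn and shares a common state so that modules built at different scales can exchange data.

```python
class BloodFlowModule:
    def step(self, state):
        # Placeholder update: oxygen delivery scales with perfusion.
        state["oxygen"] = 0.9 * state["perfusion"]

class GrowthFactorModule:
    def step(self, state):
        # Hypoxia drives VEGF release (illustrative rule).
        state["vegf"] = max(0.0, 1.0 - state["oxygen"])

class EndothelialCellModule:
    def step(self, state):
        # Agents sprout when VEGF exceeds a threshold, feeding back on perfusion.
        if state["vegf"] > 0.5:
            state["capillaries"] += 1
            state["perfusion"] += 0.02

def run(modules, state, n_steps):
    """Coordinator: advances each module in turn over a shared state."""
    for _ in range(n_steps):
        for module in modules:
            module.step(state)
    return state

state = {"perfusion": 0.3, "oxygen": 0.0, "vegf": 0.0, "capillaries": 0}
print(run([BloodFlowModule(), GrowthFactorModule(), EndothelialCellModule()],
          state, n_steps=50))
```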
NASA Astrophysics Data System (ADS)
Bonan, Gordon B.; Patton, Edward G.; Harman, Ian N.; Oleson, Keith W.; Finnigan, John J.; Lu, Yaqiong; Burakowski, Elizabeth A.
2018-04-01
Land surface models used in climate models neglect the roughness sublayer and parameterize within-canopy turbulence in an ad hoc manner. We implemented a roughness sublayer turbulence parameterization in a multilayer canopy model (CLM-ml v0) to test if this theory provides a tractable parameterization extending from the ground through the canopy and the roughness sublayer. We compared the canopy model with the Community Land Model (CLM4.5) at seven forest, two grassland, and three cropland AmeriFlux sites over a range of canopy heights, leaf area indexes, and climates. CLM4.5 has pronounced biases during summer months at forest sites in midday latent heat flux, sensible heat flux, gross primary production, nighttime friction velocity, and the radiative temperature diurnal range. The new canopy model reduces these biases by introducing new physics. Advances in modeling stomatal conductance and canopy physiology beyond what is in CLM4.5 substantially improve model performance at the forest sites. The signature of the roughness sublayer is most evident in nighttime friction velocity and the diurnal cycle of radiative temperature, but is also seen in sensible heat flux. Within-canopy temperature profiles are markedly different compared with profiles obtained using Monin-Obukhov similarity theory, and the roughness sublayer produces cooler daytime and warmer nighttime temperatures. The herbaceous sites also show model improvements, but the improvements are related less systematically to the roughness sublayer parameterization in these canopies. The multilayer canopy with the roughness sublayer turbulence improves simulations compared with CLM4.5 while also advancing the theoretical basis for surface flux parameterizations.
Saa, Pedro; Nielsen, Lars K.
2015-01-01
Kinetic models provide the means to understand and predict the dynamic behaviour of enzymes upon different perturbations. Despite their obvious advantages, classical parameterizations require large amounts of data to fit their parameters. In particular, enzymes displaying complex reaction and regulatory (allosteric) mechanisms require a great number of parameters and are therefore often represented by approximate formulae, thereby facilitating the fitting but ignoring many real kinetic behaviours. Here, we show that full exploration of the plausible kinetic space for any enzyme can be achieved using sampling strategies provided a thermodynamically feasible parameterization is used. To this end, we developed a General Reaction Assembly and Sampling Platform (GRASP) capable of consistently parameterizing and sampling accurate kinetic models using minimal reference data. The parameterization integrates the generalized MWC model and the elementary reaction formalism. By formulating the appropriate thermodynamic constraints, our framework enables parameterization of any oligomeric enzyme kinetics without sacrificing complexity or using simplifying assumptions. This thermodynamically safe parameterization relies on the definition of a reference state upon which feasible parameter sets can be efficiently sampled. Uniform sampling of the kinetic space enabled dissecting enzyme catalysis and revealing the impact of thermodynamics on reaction kinetics. Our analysis distinguished three reaction elasticity regions for common biochemical reactions: a steep linear region (0 > ΔGr > -2 kJ/mol), a transition region (-2 > ΔGr > -20 kJ/mol) and a constant elasticity region (ΔGr < -20 kJ/mol). We also applied this framework to model more complex kinetic behaviours such as the monomeric cooperativity of the mammalian glucokinase and the ultrasensitive response of the phosphoenolpyruvate carboxylase of Escherichia coli. In both cases, our approach described appropriately not only the kinetic behaviour of these enzymes, but also provided insights into the particular features underpinning the observed kinetics. Overall, this framework will enable systematic parameterization and sampling of enzymatic reactions. PMID:25874556
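A reduced illustration of why elasticity depends on the reaction Gibbs energy in the way described above (this is a plain reversible mass-action rate, not the GRASP framework; for v = k+*S - k-*P with k+/k- fixed by ΔGr, the scaled substrate elasticity is 1/(1 - exp(ΔGr/RT))):

```python
import math

RT = 8.314e-3 * 298.15          # kJ/mol at 25 degC

def substrate_elasticity(dG_r):
    """Scaled elasticity of a reversible mass-action rate v = k+*S - k-*P
    with respect to S, written in terms of the reaction Gibbs energy:
    eps_S = 1 / (1 - exp(dG_r / RT)), valid for dG_r < 0 (net forward flux)."""
    return 1.0 / (1.0 - math.exp(dG_r / RT))

# Near equilibrium the elasticity is steep; far from equilibrium it
# saturates at 1, reproducing the three regions noted in the abstract.
for dG in (-0.5, -2.0, -10.0, -20.0, -40.0):   # kJ/mol
    print(f"dG_r = {dG:6.1f} kJ/mol -> elasticity = {substrate_elasticity(dG):6.2f}")
```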
Multi-scale Modeling of Arctic Clouds
NASA Astrophysics Data System (ADS)
Hillman, B. R.; Roesler, E. L.; Dexheimer, D.
2017-12-01
The presence and properties of clouds are critically important to the radiative budget in the Arctic, but clouds are notoriously difficult to represent in global climate models (GCMs). The challenge stems partly from a disconnect between the scales at which these models are formulated and the scale of the physical processes important to the formation of clouds (e.g., convection and turbulence). Because of this, these processes are parameterized in large-scale models. Over the past decades, new approaches have been explored in which a cloud system resolving model (CSRM), or in the extreme a large eddy simulation (LES), is embedded into each gridcell of a traditional GCM to replace the cloud and convective parameterizations and to explicitly simulate more of these important processes. This approach is attractive in that it allows for more explicit simulation of small-scale processes while also allowing for interaction between the small- and large-scale processes. The goal of this study is to quantify the performance of this framework in simulating Arctic clouds relative to a traditional global model, and to explore the limitations of such a framework using coordinated high-resolution (eddy-resolving) simulations. Simulations from the global model are compared with satellite retrievals of cloud fraction partitioned by cloud phase from CALIPSO, and limited-area LES simulations are compared with ground-based and tethered-balloon measurements from the ARM Barrow and Oliktok Point measurement facilities.
Accounting for Fault Roughness in Pseudo-Dynamic Ground-Motion Simulations
NASA Astrophysics Data System (ADS)
Mai, P. Martin; Galis, Martin; Thingbaijam, Kiran K. S.; Vyas, Jagdish C.; Dunham, Eric M.
2017-09-01
Geological faults comprise large-scale segmentation and small-scale roughness. These multi-scale geometrical complexities determine the dynamics of the earthquake rupture process, and therefore affect the radiated seismic wavefield. In this study, we examine how different parameterizations of fault roughness lead to variability in the rupture evolution and the resulting near-fault ground motions. Rupture incoherence naturally induced by fault roughness generates high-frequency radiation that follows an ω^-2 decay in displacement amplitude spectra. Because dynamic rupture simulations are computationally expensive, we test several kinematic source approximations designed to emulate the observed dynamic behavior. When simplifying the rough-fault geometry, we find that perturbations in local moment tensor orientation are important, while perturbations in local source location are not. Thus, a planar fault can be assumed if the local strike, dip, and rake are maintained. We observe that dynamic rake angle variations are anti-correlated with the local dip angles. Testing two parameterizations of a dynamically consistent Yoffe-type source-time function, we show that the seismic wavefield of the approximated kinematic ruptures reproduces well the radiated seismic waves of the complete dynamic source process. This finding opens a new avenue for an improved pseudo-dynamic source characterization that captures the effects of fault roughness on earthquake rupture evolution. By also including the correlations between kinematic source parameters, we outline a new pseudo-dynamic rupture modeling approach for broadband ground-motion simulation.
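A minimal sketch of the kind of input such studies start from: a self-affine rough fault trace generated from a power-law amplitude spectrum with random phases, and the local dip perturbations that the simplified kinematic source retains (the Hurst-like exponent, spacing, and roughness amplitude are illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)

n, dx = 4096, 25.0                     # samples and spacing (m), illustrative
k = np.fft.rfftfreq(n, d=dx)           # wavenumbers (1/m)

# Self-affine roughness: amplitude spectrum ~ k^(-H-0.5) with random phases.
H = 0.8
amp = np.zeros_like(k)
amp[1:] = k[1:] ** (-(H + 0.5))
phase = rng.uniform(0, 2 * np.pi, k.size)
profile = np.fft.irfft(amp * np.exp(1j * phase), n)
profile *= 100.0 / profile.std()       # scale to ~100 m RMS deviation

# Local perturbation of fault orientation (what the simplified kinematic
# source keeps: perturbed local dip/strike on an otherwise planar fault).
local_dip_perturbation = np.degrees(np.arctan(np.gradient(profile, dx)))
print(local_dip_perturbation.std(), "degrees RMS")
```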
Regional Air Quality Model Application of the Aqueous-Phase ...
In most ecosystems, atmospheric deposition is the primary input of mercury. The total wet deposition of mercury in atmospheric chemistry models is sensitive to parameterization of the aqueous-phase reduction of divalent oxidized mercury (Hg2+). However, most atmospheric chemistry models use a parameterization of the aqueous-phase reduction of Hg2+ that has been shown to be unlikely under normal ambient conditions or use a non mechanistic value derived to optimize wet deposition results. Recent laboratory experiments have shown that Hg2+ can be photochemically reduced to elemental mercury (Hg) in the aqueous-phase by dissolved organic matter and a mechanism and the rate for Hg2+ photochemical reduction by dicarboxylic acids (DCA) has been proposed. For the first time in a regional scale model, the DCA mechanism has been applied. The HO2-Hg2+ reduction mechanism, the proposed DCA reduction mechanism, and no aqueous-phase reduction (NAR) of Hg2+ are evaluated against weekly wet deposition totals, concentrations and precipitation observations from the Mercury Deposition Network (MDN) using the Community Multiscale Air Quality (CMAQ) model version 4.7.1. Regional scale simulations of mercury wet deposition using a DCA reduction mechanism evaluated well against observations, and reduced the bias in model evaluation by at least 13% over the other schemes evaluated, although summertime deposition estimates were still biased by −31.4% against observations. The use of t
Multiscale Enaction Model (MEM): the case of complexity and “context-sensitivity” in vision
Laurent, Éric
2014-01-01
I review the data on human visual perception that reveal the critical role played by non-visual contextual factors influencing visual activity. The global perspective that progressively emerges reveals that vision is sensitive to multiple couplings with other systems whose nature and levels of abstraction in science are highly variable. Contrary to some views in which vision is confined to hard-wired modules, largely independent of higher-level or other non-cognitive processes, converging data gathered in this article suggest that visual perception can be theorized in the larger context of biological, physical, and social systems with which it is coupled, and through which it is enacted. Therefore, any attempt to model complexity and multiscale couplings, or to develop a complex synthesis in the fields of mind, brain, and behavior, should involve a systematic empirical study of both the connectedness between systems or subsystems and the embodied, multiscale, and flexible teleology of subsystems. The conceptual model (Multiscale Enaction Model [MEM]) that is introduced in this paper finally relates empirical evidence gathered from psychology to biocomputational data concerning the human brain. Both psychological and biocomputational descriptions of MEM are proposed in order to help fill the gap between scales of scientific analysis and to provide an account of both the autopoiesis-driven search for information and emerging perception. PMID:25566115
NASA Astrophysics Data System (ADS)
Goswami, B. B.; Khouider, B.; Krishna, R. P. M.; Mukhopadhyay, P.; Majda, A.
2017-12-01
A stochastic multicloud (SMCM) cumulus parameterization is implemented in the National Centers for Environmental Prediction (NCEP) Climate Forecast System version 2 (CFSv2) model; the resulting model is referred to as CFSsmcm. We present here results from a systematic attempt to understand the sensitivity of the CFSsmcm model to the SMCM parameters. To assess this sensitivity, we have analyzed a set of 14 five-year-long climate simulations produced by the CFSsmcm model. The model is found to be resilient to minor changes in the parameter values. The middle-tropospheric dryness (MTD) and the stratiform cloud decay timescale are found to be the most crucial parameters in the SMCM formulation in the CFSsmcm model.
Modeling the formation and aging of secondary organic aerosols in Los Angeles during CalNex 2010
NASA Astrophysics Data System (ADS)
Hayes, P. L.; Carlton, A. G.; Baker, K. R.; Ahmadov, R.; Washenfelder, R. A.; Alvarez, S.; Rappenglück, B.; Gilman, J. B.; Kuster, W. C.; de Gouw, J. A.; Zotter, P.; Prévôt, A. S. H.; Szidat, S.; Kleindienst, T. E.; Offenberg, J. H.; Jimenez, J. L.
2014-12-01
Four different parameterizations for the formation and evolution of secondary organic aerosol (SOA) are evaluated using a 0-D box model representing the Los Angeles Metropolitan Region during the CalNex 2010 field campaign. We constrain the model predictions with measurements from several platforms and compare predictions with particle and gas-phase observations from the CalNex Pasadena ground site. That site provides a unique opportunity to study aerosol formation close to anthropogenic emission sources with limited recirculation. The model SOA formed only from the oxidation of VOCs (V-SOA) is insufficient to explain the observed SOA concentrations, even when using SOA parameterizations with multi-generation oxidation that produce much higher yields than have been observed in chamber experiments, or when increasing yields to their upper limit estimates accounting for recently reported losses of vapors to chamber walls. The Community Multiscale Air Quality (WRF-CMAQ) model (version 5.0.1) provides excellent predictions of secondary inorganic particle species but underestimates the observed SOA mass by a factor of 25 when an older VOC-only parameterization is used, which is consistent with many previous model-measurement comparisons for pre-2007 anthropogenic SOA modules in urban areas. Including SOA from primary semi-volatile and intermediate volatility organic compounds (P-S/IVOCs) following the parameterizations of Robinson et al. (2007), Grieshop et al. (2009), or Pye and Seinfeld (2010) improves model/measurement agreement for mass concentration. When comparing the three parameterizations, the Grieshop et al. (2009) parameterization more accurately reproduces both the SOA mass concentration and oxygen-to-carbon ratio inside the urban area. Our results strongly suggest that other precursors besides VOCs, such as P-S/IVOCs, are needed to explain the observed SOA concentrations in Pasadena. All the parameterizations over-predict urban SOA formation at long photochemical ages (≈ 3 days) compared to observations from multiple sites, which can lead to problems in regional and global modeling. Among the explicitly modeled VOCs, the precursor compounds that contribute the greatest SOA mass are methylbenzenes. Polycyclic aromatic hydrocarbons (PAHs) are less important precursors and contribute less than 4% of the SOA mass. The amounts of SOA mass from diesel vehicles, gasoline vehicles, and cooking emissions are estimated to be 16-27, 35-61, and 19-35%, respectively, depending on the parameterization used, which is consistent with the observed fossil fraction of urban SOA, 71 (±3) %. In-basin biogenic VOCs are predicted to contribute only a few percent to SOA. A regional SOA background of approximately 2.1 μg m-3 is also present due to the long distance transport of highly aged OA. The percentage of SOA from diesel vehicle emissions is the same, within the estimated uncertainty, as reported in previous work that analyzed the weekly cycles in OA concentrations (Bahreini et al., 2012; Hayes et al., 2013). However, the modeling work presented here suggests a strong anthropogenic source of modern carbon in SOA, due to cooking emissions, which was not accounted for in those previous studies. Lastly, this work adapts a simple two-parameter model to predict SOA concentration and O/C from urban emissions. This model successfully predicts SOA concentration, and the optimal parameter combination is very similar to that found for Mexico City. 
This approach provides a computationally inexpensive method for predicting urban SOA in global and climate models. We estimate pollution SOA to account for 26 Tg yr-1 of SOA globally, or 17% of global SOA, 1/3 of which is likely to be non-fossil.
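A minimal sketch of the absorptive-partitioning step that the SOA parameterizations compared above share (the volatility bins, total organic mass per bin, and seed value are hypothetical; this is the generic volatility-basis-set closure, not any of the specific Robinson, Grieshop, or Pye and Seinfeld schemes):

```python
import numpy as np

# Hypothetical volatility bins (saturation concentrations C*, ug m-3) and the
# total (gas + particle) organic mass in each bin after oxidation.
c_star = np.array([0.1, 1.0, 10.0, 100.0, 1000.0])
c_total = np.array([0.3, 0.8, 1.5, 3.0, 6.0])       # ug m-3, illustrative

def partition(c_total, c_star, seed_oa=0.5, tol=1e-8):
    """Iterate the absorptive-partitioning closure:
    C_OA = seed + sum_i C_i / (1 + C*_i / C_OA)."""
    c_oa = seed_oa + 0.5 * c_total.sum()             # initial guess
    for _ in range(200):
        xi = 1.0 / (1.0 + c_star / c_oa)             # particle-phase fraction
        new = seed_oa + np.sum(c_total * xi)
        if abs(new - c_oa) < tol:
            break
        c_oa = new
    return c_oa, xi

c_oa, xi = partition(c_total, c_star)
print(f"OA = {c_oa:.2f} ug m-3, bin fractions = {np.round(xi, 2)}")
```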
Multiscale Modeling of Hematologic Disorders
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fedosov, Dmitry A.; Pivkin, Igor; Pan, Wenxiao
Parasitic infectious diseases and other hereditary hematologic disorders are often associated with major changes in the shape and viscoelastic properties of red blood cells (RBCs). Such changes can disrupt blood flow and even brain perfusion, as in the case of cerebral malaria. Modeling of these hematologic disorders requires a seamless multiscale approach, where blood cells and blood flow in the entire arterial tree are represented accurately using physiologically consistent parameters. In this chapter, we present a computational methodology based on dissipative particle dynamics (DPD) which models RBCs as well as whole blood in health and disease. DPD is a Lagrangian method that can be derived from systematic coarse-graining of molecular dynamics but can scale efficiently up to small arteries and can also be used to model RBCs down to the spectrin level. To this end, we present two complementary mathematical models for RBCs and describe a systematic procedure for extracting the relevant input parameters from optical tweezers and microfluidic experiments for single RBCs. We then use these validated RBC models to predict the behavior of whole healthy blood and compare with experimental results. The same procedure is applied to modeling malaria, and results for infected single RBCs and whole blood are presented.
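A minimal sketch of the standard DPD pairwise forces (conservative, dissipative, and random, with the fluctuation-dissipation relation σ² = 2γkT) in reduced units; the parameter values are illustrative and are not the RBC or plasma parameters used in the chapter:

```python
import numpy as np

rng = np.random.default_rng(5)

def dpd_pair_force(r_ij, v_ij, a=25.0, gamma=4.5, kT=1.0, rc=1.0, dt=0.01):
    """Standard DPD force on particle i from particle j (reduced units).
    sigma is tied to gamma by the fluctuation-dissipation theorem."""
    r = np.linalg.norm(r_ij)
    if r >= rc:
        return np.zeros(3)
    e = r_ij / r
    w = 1.0 - r / rc                                   # weight function w(r)
    sigma = np.sqrt(2.0 * gamma * kT)
    f_c = a * w * e                                    # conservative (soft repulsion)
    f_d = -gamma * w**2 * np.dot(e, v_ij) * e          # dissipative
    f_r = sigma * w * rng.standard_normal() / np.sqrt(dt) * e   # random
    return f_c + f_d + f_r

print(dpd_pair_force(np.array([0.4, 0.2, 0.1]), np.array([0.0, -0.1, 0.05])))
```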
Experiments with a Regional Vector-Vorticity Model, and Comparison with Other Models
NASA Astrophysics Data System (ADS)
Konor, C. S.; Dazlich, D. A.; Jung, J.; Randall, D. A.
2017-12-01
The Vector-Vorticity Model (VVM) is an anelastic model with a unique dynamical core that predicts the three-dimensional vorticity instead of the three-dimensional momentum. The VVM is used in the CRMs of the Global Quasi-3D Multiscale Modeling Framework, which is discussed by Joon-Hee Jung and collaborators elsewhere in this session. We are updating the physics package of the VVM, replacing it with the physics package of the System for Atmospheric Modeling (SAM). The new physics package includes double-moment microphysics, Mellor-Yamada turbulence, Monin-Obukhov surface fluxes, and the RRTMG radiation parameterization. We briefly describe the VVM and show results from standard test cases, including TWP-ICE. We compare the results with those obtained using the earlier physics. We also show results from experiments on convective aggregation in radiative-convective equilibrium, and compare with those obtained using both SAM and the Regional Atmospheric Modeling System (RAMS).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jason Heath; Brian McPherson; Thomas Dewers
The assessment of caprocks for geologic CO₂ storage is a multi-scale endeavor. Investigation of a regional caprock - the Kirtland Formation, San Juan Basin, USA - at the pore-network scale indicates high capillary sealing capacity and low permeabilities. Core- and well-scale data, however, indicate a potential seal bypass system as evidenced by multiple mineralized fractures and methane gas saturations within the caprock. Our interpretation of ⁴He concentrations, measured at the top and bottom of the caprock, suggests low fluid fluxes through the caprock: (1) Of the total ⁴He produced in situ (i.e., at the locations of sampling) by uranium and thorium decay since deposition of the Kirtland Formation, a large portion still resides in the pore fluids. (2) Simple advection-only and advection-diffusion models, using the measured ⁴He concentrations, indicate low permeability (~10⁻²⁰ m² or lower) for the thickness of the Kirtland Formation. These findings, however, do not guarantee the lack of a large-scale bypass system. The measured data, located near the boundary conditions of the models (i.e., the overlying and underlying aquifers), limit our testing of conceptual models and the sensitivity of model parameterization. Thus, we suggest approaches for future studies to better assess the presence or lack of a seal bypass system at this particular site and for other sites in general.
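A minimal sketch of the steady 1D advection-diffusion profile such models are built on, showing how the shape of a concentration profile across a caprock constrains the Peclet number (and hence the fluid flux); the thickness, diffusivity, and boundary concentrations are hypothetical:

```python
import numpy as np

def steady_profile(z, L, Pe, c_bottom=1.0, c_top=0.1):
    """Steady 1D advection-diffusion solution for upward flow across a layer:
    C(z) = C_top + (C_bottom - C_top) * (exp(Pe) - exp(Pe*z/L)) / (exp(Pe) - 1),
    with z measured upward from the base and Pe = v*L/D."""
    if abs(Pe) < 1e-12:
        return c_bottom + (c_top - c_bottom) * z / L    # pure-diffusion limit
    return c_top + (c_bottom - c_top) * (np.exp(Pe) - np.exp(Pe * z / L)) / (np.exp(Pe) - 1.0)

L = 500.0                         # caprock thickness (m), illustrative
D = 1e-10                         # effective He diffusivity (m2/s), illustrative
z = np.linspace(0, L, 6)

# Larger Peclet numbers (larger upward velocity v) bend the profile more;
# a nearly linear measured profile therefore argues for small fluxes.
for Pe in (0.1, 1.0, 10.0):
    v = Pe * D / L
    print(f"Pe = {Pe:5.1f}, v = {v:.1e} m/s, C(z) = {np.round(steady_profile(z, L, Pe), 3)}")
```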
Wang, Minghuai; Larson, Vincent E.; Ghan, Steven; ...
2015-04-18
In this study, a higher-order turbulence closure scheme, called Cloud Layers Unified by Binormals (CLUBB), is implemented into a Multi-scale Modeling Framework (MMF) model to improve low cloud simulations. The performance of CLUBB in MMF simulations with two different microphysics configurations (one-moment cloud microphysics without aerosol treatment and two-moment cloud microphysics coupled with aerosol treatment) is evaluated against observations and further compared with results from the Community Atmosphere Model, Version 5 (CAM5) with conventional cloud parameterizations. CLUBB is found to improve low cloud simulations in the MMF, and the improvement is particularly evident in the stratocumulus-to-cumulus transition regions. Compared to the single-moment cloud microphysics, CLUBB with two-moment microphysics produces clouds that are closer to the coast and agrees better with observations. In the stratocumulus-to-cumulus transition regions, CLUBB with two-moment cloud microphysics produces shortwave cloud forcing in better agreement with observations, while CLUBB with single-moment cloud microphysics overestimates shortwave cloud forcing. CLUBB is further found to produce quantitatively similar improvements in the MMF and CAM5, with slightly better performance in the MMF simulations (e.g., MMF with CLUBB generally produces low clouds that are closer to the coast than CAM5 with CLUBB). As a result, improved low cloud simulations in MMF make it an even more attractive tool for studying aerosol-cloud-precipitation interactions.
Zhao, Wei; Marchand, Roger; Fu, Qiang
2017-07-08
Millimeter Wavelength Cloud Radar (MMCR) data from December 1996 to December 2010, collected at the U.S. Department of Energy Atmospheric Radiation Measurement (ARM) program Southern Great Plains (SGP) site, are used to examine the diurnal cycle of hydrometeor occurrence. These data are categorized into clouds (-40 dBZe ≤ reflectivity < -10 dBZe), drizzle and light precipitation (-10 dBZe ≤ reflectivity < 10 dBZe), and heavy precipitation (reflectivity ≥ 10 dBZe). The same criteria are implemented for the observation-equivalent reflectivity calculated by feeding outputs from a Multiscale Modeling Framework (MMF) climate model into a radar simulator. The MMF model consists of the National Center for Atmospheric Research Community Atmosphere Model with conventional cloud parameterizations replaced by a cloud-resolving model. We find that a radar simulator combined with the simple reflectivity categories can be an effective approach for evaluating diurnal variations in model hydrometeor occurrence. It is shown that the MMF only marginally captures observed increases in the occurrence of boundary layer clouds after sunrise in spring and autumn and does not capture diurnal changes in boundary layer clouds during the summer. Above the boundary layer, the MMF captures reasonably well diurnal variations in the vertical structure of clouds and light and heavy precipitation in the summer but not in the spring.
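A minimal sketch applying the three reflectivity categories quoted above to a toy profile; the same thresholds are applied to observed and simulator-derived reflectivity, which is what makes the comparison consistent (the example values are hypothetical):

```python
import numpy as np

def categorize(dbz):
    """Hydrometeor category from equivalent reflectivity (dBZe):
    [-40, -10) cloud, [-10, 10) drizzle/light precip, >= 10 heavy precip."""
    cat = np.full(dbz.shape, "none", dtype=object)
    cat[(dbz >= -40) & (dbz < -10)] = "cloud"
    cat[(dbz >= -10) & (dbz < 10)] = "drizzle/light precip"
    cat[dbz >= 10] = "heavy precip"
    return cat

profile_dbz = np.array([-55.0, -32.0, -18.0, -5.0, 3.0, 14.0])
print(list(zip(profile_dbz, categorize(profile_dbz))))
```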
Updating Sea Spray Aerosol Emissions in the Community Multiscale Air Quality Model
NASA Astrophysics Data System (ADS)
Gantt, B.; Bash, J. O.; Kelly, J.
2014-12-01
Sea spray aerosols (SSA) impact the particle mass concentration and gas-particle partitioning in coastal environments, with implications for human and ecosystem health. In this study, the Community Multiscale Air Quality (CMAQ) model is updated to enhance fine-mode SSA emissions, include sea surface temperature (SST) dependency, and revise surf zone emissions. Based on evaluation with several regional and national observational datasets in the continental U.S., the updated emissions generally improve surface concentration predictions of primary aerosols composed of sea salt and of secondary aerosols affected by sea-salt chemistry at coastal and near-coastal sites. Specifically, the updated emissions lead to better predictions of the magnitude and coastal-to-inland gradient of sodium, chloride, and nitrate concentrations at Bay Regional Atmospheric Chemistry Experiment (BRACE) sites near Tampa, FL. Including SST dependency in the SSA emission parameterization leads to increased sodium concentrations in the southeast U.S. and decreased concentrations along the Pacific coast and northeastern U.S., bringing predictions into closer agreement with observations at most Interagency Monitoring of Protected Visual Environments (IMPROVE) and Chemical Speciation Network (CSN) sites. Model comparison with California Research at the Nexus of Air Quality and Climate Change (CalNex) observations will also be discussed, with particular focus on the South Coast Air Basin, where clean marine air mixes with anthropogenic pollution in a complex environment. These SSA emission updates enable more realistic simulation of chemical processes in coastal environments, both in clean marine air masses and in mixtures of clean marine and polluted conditions.
Andrew T. Hudak; Roger D. Ottmar; Robert E. Vihnanek; Clint S. Wright
2014-01-01
The RxCADRE research team collected multi-scale measurements of pre-, during, and post-fire variables on operational prescribed fires conducted in 2008, 2011, and 2012 in longleaf pine ecosystems in the southeastern USA. Pre- and post-fire surface fuel loads were characterized in alternating pre- and post-fire clip plots systematically established within burn units....
NASA Astrophysics Data System (ADS)
Xu, Kuan-Man; Cheng, Anning
2014-05-01
A high-resolution cloud-resolving model (CRM) embedded in a general circulation model (GCM) is an attractive alternative for climate modeling because it replaces all traditional cloud parameterizations and explicitly simulates cloud physical processes in each grid column of the GCM. Such an approach is called the "Multiscale Modeling Framework" (MMF). MMF still needs to parameterize the subgrid-scale (SGS) processes associated with clouds and large turbulent eddies because circulations associated with planetary boundary layer (PBL) and in-cloud turbulence are unresolved by CRMs with horizontal grid sizes on the order of a few kilometers. A third-order turbulence closure (IPHOC) has been implemented in the CRM component of the super-parameterized Community Atmosphere Model (SPCAM). IPHOC is used to predict (or diagnose) fractional cloudiness and the variability of temperature and water vapor at scales that are not resolved on the CRM's grid. This model has produced promising results, especially for low-level cloud climatology, seasonal variations and diurnal variations (Cheng and Xu 2011, 2013a, b; Xu and Cheng 2013a, b). Because of the enormous computational cost of SPCAM-IPHOC, which is about 400 times that of a conventional CAM, we decided to bypass the CRM and implement IPHOC directly in CAM version 5 (CAM5). IPHOC replaces the PBL/stratocumulus, shallow convection, and cloud macrophysics parameterizations in CAM5. Since there are large discrepancies in the spatial and temporal scales between the CRM and CAM5, the IPHOC used in CAM5 has to be modified from that used in SPCAM. In particular, we diagnose all second- and third-order moments except for the fluxes. These prognostic and diagnostic moments are used to select a double-Gaussian probability density function to describe the SGS variability. We also incorporate a diagnostic PBL height parameterization to represent the strong inversion above the PBL. The goal of this study is to compare the simulated climatology from these three models (CAM5, CAM5-IPHOC and SPCAM-IPHOC), with emphasis on low-level clouds and precipitation. Detailed comparisons of scatter diagrams among the monthly-mean low-level cloudiness, PBL height, surface relative humidity and lower tropospheric stability (LTS) reveal the relative strengths and weaknesses of the three models for five coastal low-cloud regions. Observations from CloudSat and CALIPSO and the ECMWF Interim reanalysis are used as references for the comparisons. We found that the standard CAM5 underestimates cloudiness and produces small cloud fractions at low PBL heights, which contradicts observations. CAM5-IPHOC tends to overestimate low clouds, but its ranges of LTS and PBL height variations are the most realistic. SPCAM-IPHOC seems to produce the most realistic results, with relatively consistent behavior from one region to another. Further comparisons with other atmospheric environmental variables will be helpful to reveal the causes of model deficiencies so that SPCAM-IPHOC results can provide guidance to the other two models.
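A minimal sketch of the generic step a double-Gaussian SGS closure uses to diagnose cloud fraction: the fraction is the weighted probability that total water exceeds saturation under each Gaussian component (the moments below are illustrative, not IPHOC output):

```python
import numpy as np
from scipy.stats import norm

def cloud_fraction(qsat, weights, means, stds):
    """Cloud fraction from a double-Gaussian subgrid PDF of total water:
    CF = sum_i w_i * P(q_i > qsat)."""
    weights, means, stds = map(np.asarray, (weights, means, stds))
    return float(np.sum(weights * norm.sf(qsat, loc=means, scale=stds)))

# Illustrative moments (g/kg): a moist plume mode and a drier environment mode.
qsat = 10.0
print(cloud_fraction(qsat, weights=[0.3, 0.7], means=[10.5, 8.5], stds=[0.8, 1.0]))
```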
Design of a framework for modeling, integration and simulation of physiological models.
Erson, E Zeynep; Cavuşoğlu, M Cenk
2012-09-01
Multiscale modeling and integration of physiological models carry challenges due to the complex nature of physiological processes. High coupling within and among scales presents a significant challenge in constructing and integrating multiscale physiological models. In order to deal with such challenges in a systematic way, there is a significant need for an information technology framework, together with related analytical and computational tools, that will facilitate integration of models and simulations of complex biological systems. The Physiological Model Simulation, Integration and Modeling Framework (Phy-SIM) is an information technology framework providing the tools to facilitate development, integration and simulation of integrated models of human physiology. Phy-SIM brings software-level solutions to the challenges raised by the complex nature of physiological systems. The aim of Phy-SIM, and of this paper, is to lay a foundation with new approaches such as information flow and modular representation of physiological models. The ultimate goal is to enhance the development of both the models and the integration approaches for multiscale physiological processes, and thus this paper focuses on the design approaches that would achieve such a goal.
Multiscale Morphology of Nanoporous Copper Made from Intermetallic Phases
Egle, Tobias; Barroo, Cédric; Janvelyan, Nare; ...
2017-07-11
Many application-relevant properties of nanoporous metals critically depend on their multiscale architecture. For example, the intrinsically high step-edge density of curved surfaces at the nanoscale provides highly reactive sites for catalysis, whereas the macroscale pore and grain morphology determines the macroscopic properties, such as mass transport, electrical conductivity, or mechanical properties. In this work, we systematically study the effects of alloy composition and dealloying conditions on the multiscale morphology of nanoporous copper (np-Cu) made from various commercial Zn–Cu precursor alloys. Using a combination of X-ray diffraction, electron backscatter diffraction, and focused ion beam cross-sectional analysis, our results reveal that the macroscopic grain structure of the starting alloy surprisingly survives the dealloying process, despite a change in crystal structure from body-centered cubic (Zn–Cu starting alloy) to face-centered cubic (Cu). The nanoscale structure can be controlled by the acid used for dealloying, with HCl leading to a larger and more faceted ligament morphology compared to that of H₃PO₄. Finally, anhydrous ethanol dehydrogenation was used as a probe reaction to test the effect of the nanoscale ligament morphology on the apparent activation energy of the reaction.
Senay, Gabriel B.; Bohms, Stefanie; Singh, Ramesh K.; Gowda, Prasanna H.; Velpuri, Naga Manohar; Alemu, Henok; Verdin, James P.
2013-01-01
The increasing availability of multi-scale remotely sensed data and global weather datasets is allowing the estimation of evapotranspiration (ET) at multiple scales. We present a simple but robust method that uses remotely sensed thermal data and model-assimilated weather fields to produce ET for the contiguous United States (CONUS) at monthly and seasonal time scales. The method is based on the Simplified Surface Energy Balance (SSEB) model, which is now parameterized for operational applications, renamed as SSEBop. The innovative aspect of the SSEBop is that it uses predefined boundary conditions that are unique to each pixel for the "hot" and "cold" reference conditions. The SSEBop model was used for computing ET for 12 years (2000-2011) using the MODIS and Global Data Assimilation System (GDAS) data streams. SSEBop ET results compared reasonably well with monthly eddy covariance ET data explaining 64% of the observed variability across diverse ecosystems in the CONUS during 2005. Twelve annual ET anomalies (2000-2011) depicted the spatial extent and severity of the commonly known drought years in the CONUS. More research is required to improve the representation of the predefined boundary conditions in complex terrain at small spatial scales. SSEBop model was found to be a promising approach to conduct water use studies in the CONUS, with a similar opportunity in other parts of the world. The approach can also be applied with other thermal sensors such as Landsat.
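A minimal sketch of the core step of a SSEBop-style calculation: an ET fraction computed from land surface temperature relative to per-pixel cold and hot boundary temperatures, scaled by reference ET (the function name and all numerical values below are illustrative, not the operational SSEBop coefficients):

```python
import numpy as np

def ssebop_like_eta(ts, t_cold, dT, eto, k_factor=1.0):
    """ET fraction from land surface temperature Ts relative to per-pixel
    cold (t_cold) and hot (t_cold + dT) boundary conditions, then actual ET
    as the fraction times a scaled reference ET. Values are illustrative."""
    etf = (t_cold + dT - ts) / dT          # 1 at the cold bound, 0 at the hot bound
    etf = np.clip(etf, 0.0, 1.05)
    return etf * k_factor * eto

ts = np.array([300.0, 308.0, 316.0])       # K, observed LST for three pixels
print(ssebop_like_eta(ts, t_cold=298.0, dT=20.0, eto=6.0))  # mm/day
```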
Improvement of fog predictability in a coupled system of PAFOG and WRF
NASA Astrophysics Data System (ADS)
Kim, Wonheung; Yum, Seong Soo; Kim, Chang Ki
2017-04-01
Fog is difficult to predict because of the multi-scale nature of its formation mechanism: not only the synoptic conditions but also the local meteorological conditions crucially influence fog formation. Coarse vertical resolution and parameterization errors in fog prediction models are also critical reasons for low predictability. In this study, we use a coupled model system of a 3D mesoscale model (WRF) and a single column model with a fine vertical resolution (PAFOG, PArameterized FOG) to simulate fogs formed over the southern coastal region of the Korean Peninsula, where the National Center for Intensive Observation of Severe Weather (NCIO) is located. NCIO is unique in that it has a 300 m meteorological tower built at the location to measure basic meteorological variables (temperature, dew point temperature and winds) at eleven different altitudes, and comprehensive atmospheric physics measurements are made with various remote sensing instruments such as a visibility meter, cloud radar, wind profiler, microwave radiometer, and ceilometer. These measurement data are used as input to the model system and for evaluating the results. In particular, the data for the initial and external forcings, which are tightly connected to the predictability of the coupled model system, are derived from the tower measurements. This study aims at finding the most important factors that influence the fog predictability of the coupled system for NCIO. Nudging of meteorological tower data and soil moisture variability are found to critically influence fog predictability. Detailed results will be discussed at the conference.
Wang, Junmei; Hou, Tingjun
2011-01-01
Molecular mechanical force field (FF) methods are useful in studying condensed phase properties. They are complementary to experiment and can often go beyond experiment in atomic details. Even if a FF is specific for studying the structures, dynamics and functions of biomolecules, it is still important for the FF to accurately reproduce the experimental liquid properties of small molecules that represent the chemical moieties of biomolecules. Otherwise, the force field may not describe the structures and energies of macromolecules in aqueous solutions properly. In this work, we have carried out a systematic study to evaluate the General AMBER Force Field (GAFF) in studying densities and heats of vaporization for a large set of organic molecules that covers the most common chemical functional groups. The latest techniques, such as the particle mesh Ewald (PME) method for calculating electrostatic energies and Langevin dynamics for temperature control, have been applied in the molecular dynamics (MD) simulations. For density, the average percent error (APE) of 71 organic compounds is 4.43% when compared to the experimental values. More encouragingly, the APE drops to 3.43% after the exclusion of two outliers and four other compounds for which the experimental densities have been measured at pressures higher than 1.0 atm. For heat of vaporization, several protocols have been investigated and the best one, P4/ntt0, achieves an average unsigned error (AUE) and a root-mean-square error (RMSE) of 0.93 and 1.20 kcal/mol, respectively. How to reduce the prediction errors through proper van der Waals (vdW) parameterization has been discussed. An encouraging finding in vdW parameterization is that both densities and heats of vaporization approach their "ideal" values in a synchronous fashion when vdW parameters are tuned. The following hydration free energy calculation using thermodynamic integration further justifies the vdW refinement. We conclude that simple vdW parameterization can significantly reduce the prediction errors. We believe that GAFF can greatly improve its performance in predicting liquid properties of organic molecules after a systematic vdW parameterization, which will be reported in a separate paper. PMID:21857814
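For readers less familiar with the benchmark quantities used in such force-field evaluations, the short sketch below computes an average percent error of predicted liquid densities and the standard estimate of the heat of vaporization from gas- and liquid-phase potential energies (Hvap ≈ E_gas − E_liq + RT). The numbers are placeholders for illustration, not values from the study.

    import numpy as np

    R = 1.9872e-3  # gas constant, kcal/(mol K)

    def average_percent_error(predicted, experimental):
        """Average percent error (APE) over a set of compounds."""
        predicted, experimental = np.asarray(predicted), np.asarray(experimental)
        return 100.0 * np.mean(np.abs(predicted - experimental) / experimental)

    def heat_of_vaporization(e_gas, e_liquid, temperature=298.15):
        """Standard estimate: Hvap ~= E_pot(gas) - E_pot(liquid, per molecule) + RT."""
        return e_gas - e_liquid + R * temperature

    # Placeholder data for three hypothetical compounds (densities in g/cm^3)
    rho_pred = [0.78, 1.02, 0.87]
    rho_exp = [0.79, 1.00, 0.86]
    print(average_percent_error(rho_pred, rho_exp))          # APE in percent
    print(heat_of_vaporization(e_gas=-2.1, e_liquid=-10.5))  # ~8.99 kcal/mol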
A multiscale cerebral neurochemical connectome of the rat brain
Schöttler, Judith; Ercsey-Ravasz, Maria; Cosa-Linan, Alejandro; Varga, Melinda; Toroczkai, Zoltan; Spanagel, Rainer
2017-01-01
Understanding the rat neurochemical connectome is fundamental for exploring neuronal information processing. By using advanced data mining, supervised machine learning, and network analysis, this study integrates over 5 decades of neuroanatomical investigations into a multiscale, multilayer neurochemical connectome of the rat brain. This neurochemical connectivity database (ChemNetDB) is supported by comprehensive systematically-determined receptor distribution maps. The rat connectome has an onion-type structural organization and shares a number of structural features with mesoscale connectomes of mouse and macaque. Furthermore, we demonstrate that extremal values of graph theoretical measures (e.g., degree and betweenness) are associated with evolutionary-conserved deep brain structures such as amygdala, bed nucleus of the stria terminalis, dorsal raphe, and lateral hypothalamus, which regulate primitive, yet fundamental functions, such as circadian rhythms, reward, aggression, anxiety, and fear. The ChemNetDB is a freely available resource for systems analysis of motor, sensory, emotional, and cognitive information processing. PMID:28671956
Moreau, Jean-David; Cloetens, Peter; Gomez, Bernard; Daviero-Gomez, Véronique; Néraudeau, Didier; Lafford, Tamzin A; Tafforeau, Paul
2014-02-01
A multiscale approach combining phase-contrast X-ray micro- and nanotomography is applied for imaging a Cretaceous fossil inflorescence in the resolution range from 0.75 μm to 50 nm. The wide range of scale views provides three-dimensional reconstructions from the external gross morphology of the inflorescence fragment to the finest exine sculptures of in situ pollen. This approach enables most of the characteristics usually observed under light microscopy, or with low magnification under scanning and transmission electron microscopy, to be obtained nondestructively. In contrast to previous tomography studies of fossil and extant flowers that used resolutions down to the micron range, we used voxels with a 50 nm side in local tomography scans. This high level of resolution enables systematic affinities of fossil flowers to be established without breaking or slicing specimens.
Modeling and Simulation of High Dimensional Stochastic Multiscale PDE Systems at the Exascale
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kevrekidis, Ioannis
2017-03-22
The thrust of the proposal was to exploit modern data-mining tools in a way that will create a systematic, computer-assisted approach to the representation of random media -- and also to the representation of the solutions of an array of important physicochemical processes that take place in/on such media. A parsimonious representation/parametrization of the random media links directly (via uncertainty quantification tools) to good sampling of the distribution of random media realizations. It also links directly to modern multiscale computational algorithms (like the equation-free approach that has been developed in our group) and plays a crucial role in accelerating the scientific computation of solutions of nonlinear PDE models (deterministic or stochastic) in such media – both solutions in particular realizations of the random media, and estimation of the statistics of the solutions over multiple realizations (e.g. expectations).
2013-01-01
Background The volume of influenza pandemic modelling studies has increased dramatically in the last decade. Many models now incorporate sophisticated parameterization and validation techniques, economic analyses and the behaviour of individuals. Methods We reviewed trends in these aspects in models for influenza pandemic preparedness that aimed to generate policy insights for epidemic management and were published from 2000 to September 2011, i.e. before and after the 2009 pandemic. Results We find that many influenza pandemic models rely on parameters from previous modelling studies, that models are rarely validated using observed data, and that they are seldom applied to low-income countries. Mechanisms for international data sharing would be necessary to facilitate a wider adoption of model validation. The variety of modelling decisions makes it difficult to compare and evaluate models systematically. Conclusions We propose a model Characteristics, Construction, Parameterization and Validation aspects protocol (CCPV protocol) to contribute to the systematisation of the reporting of models, with an emphasis on the incorporation of economic aspects and host behaviour. Model reporting, as already exists in many other fields of modelling, would increase confidence in model results and transparency in their assessment and comparison. PMID:23651557
Structural test of the parameterized-backbone method for protein design.
Plecs, Joseph J; Harbury, Pehr B; Kim, Peter S; Alber, Tom
2004-09-03
Designing new protein folds requires a method for simultaneously optimizing the conformation of the backbone and the side-chains. One approach to this problem is the use of a parameterized backbone, which allows the systematic exploration of families of structures. We report the crystal structure of RH3, a right-handed, three-helix coiled coil that was designed using a parameterized backbone and detailed modeling of core packing. This crystal structure was determined using another rationally designed feature, a metal-binding site that permitted experimental phasing of the X-ray data. RH3 adopted the intended fold, which has not been observed previously in biological proteins. Unanticipated structural asymmetry in the trimer was a principal source of variation within the RH3 structure. The sequence of RH3 differs from that of a previously characterized right-handed tetramer, RH4, at only one position in each 11 amino acid sequence repeat. This close similarity indicates that the design method is sensitive to the core packing interactions that specify the protein structure. Comparison of the structures of RH3 and RH4 indicates that both steric overlap and cavity formation provide strong driving forces for oligomer specificity.
A diapycnal diffusivity model for stratified environmental flows
NASA Astrophysics Data System (ADS)
Bouffard, Damien; Boegman, Leon
2013-06-01
The vertical diffusivity of density, Kρ, regulates ocean circulation, climate and coastal water quality. Kρ is difficult to measure and model in these stratified turbulent flows, resulting in the need for the development of Kρ parameterizations from more readily measurable flow quantities. Typically, Kρ is parameterized from turbulent temperature fluctuations using the Osborn-Cox model or from the buoyancy frequency, N, kinematic viscosity, ν, and the rate of dissipation of turbulent kinetic energy, ɛ, using the Osborn model. More recently, Shih et al. (2005, J. Fluid Mech. 525: 193-214) proposed a laboratory scale parameterization for Kρ, at Prandtl number (the ratio of viscosity to molecular diffusivity) Pr = 0.7, in terms of the turbulence intensity parameter, Reb = ɛ/(νN²), which is the ratio of the destabilizing effect of turbulence to the stabilizing effects of stratification and viscosity. In the present study, we extend the SKIF parameterization, against extensive sets of published data, over 0.7 < Pr < 700 and validate it at field scale. Our results show that the SKIF model must be modified to include a new Buoyancy-controlled mixing regime, between the Molecular and Transitional regimes, where Kρ is captured using the molecular diffusivity and Osborn model, respectively. The Buoyancy-controlled regime occurs over 10Pr
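As a hedged illustration of the quantities discussed above, the sketch below computes the buoyancy Reynolds (turbulence intensity) number Reb = ɛ/(νN²) and the classical Osborn estimate Kρ = Γɛ/N², with a mixing efficiency Γ commonly taken as 0.2; the constant Γ and the example values are generic textbook choices, not the parameterization derived in the study.

    def buoyancy_reynolds_number(eps, nu, n_freq):
        """Turbulence intensity parameter Re_b = eps / (nu * N^2)."""
        return eps / (nu * n_freq**2)

    def k_rho_osborn(eps, n_freq, gamma=0.2):
        """Classical Osborn diffusivity estimate K_rho = gamma * eps / N^2."""
        return gamma * eps / n_freq**2

    # Example: typical open-water values (SI units)
    eps = 1e-8      # dissipation rate of turbulent kinetic energy, W/kg
    nu = 1e-6       # kinematic viscosity, m^2/s
    n_freq = 1e-2   # buoyancy frequency, 1/s
    print(buoyancy_reynolds_number(eps, nu, n_freq))  # 100 -> energetic turbulence
    print(k_rho_osborn(eps, n_freq))                  # 2e-5 m^2/s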
NASA Astrophysics Data System (ADS)
Abedi, S.; Mashhadian, M.; Noshadravan, A.
2015-12-01
Increasing the efficiency and sustainability of hydrocarbon recovery operations from organic-rich shales requires a fundamental understanding of the chemomechanical properties of organic-rich shales. This understanding is manifested in the form of physics-based predictive models capable of capturing the highly heterogeneous and multi-scale structure of organic-rich shale materials. In this work we present a framework of experimental characterization, micromechanical modeling, and uncertainty quantification that spans from the nanoscale to the macroscale. Application of experiments such as coupled grid nano-indentation and energy dispersive x-ray spectroscopy, together with micromechanical modeling attributing the role of organic maturity to the texture of the material, allows us to identify unique clay mechanical properties among different samples that are independent of the maturity of shale formations and total organic content. The results can then be used to inform the physically-based multiscale model for organic-rich shales consisting of three levels that span from the scale of the elementary building blocks (e.g. clay minerals in clay-dominated formations) of organic-rich shales to the scale of the macroscopic inorganic/organic hard/soft inclusion composite. Although this approach is powerful in capturing the effective properties of organic-rich shale in an average sense, it does not account for the uncertainty in compositional and mechanical model parameters. Thus, we take this model one step forward by systematically incorporating the main sources of uncertainty in modeling the multiscale behavior of organic-rich shales. In particular, we account for the uncertainty in the main model parameters at different scales, such as porosity, elastic properties and mineralogy mass percent. To that end, we use the Maximum Entropy Principle and random matrix theory to construct probabilistic descriptions of model inputs based on available information. Monte Carlo simulation is then carried out to propagate the uncertainty and consequently construct probabilistic descriptions of properties at multiple length-scales. The combination of experimental characterization and stochastic multi-scale modeling presented in this work improves the robustness of the prediction of essential subsurface parameters at the engineering scale.
Multiscale pore structure and its effect on gas transport in organic-rich shale
NASA Astrophysics Data System (ADS)
Wu, Tianhao; Li, Xiang; Zhao, Junliang; Zhang, Dongxiao
2017-07-01
A systematic investigation of multiscale pore structure in organic-rich shale by means of a combination of various imaging techniques is presented, including the state-of-the-art helium ion microscope (HIM). The study provides insight into the major features at each scale and identifies affordable techniques for specific objectives from the aspects of resolution, dimension, and cost. Pores that appear to be isolated are connected by smaller pores resolved by higher-resolution imaging. This observation provides valuable information, from the microscopic perspective of pore structure, for understanding how gas accumulates and is transported from where it is generated. A comprehensive workflow is proposed, based on the characteristics acquired from the multiscale pore structure analysis, to simulate the gas transport process. The simulations are carried out at three levels: microscopic transport mechanisms are taken into consideration at level I; the spatial distribution of organic matter, inorganic matter, and macropores is the major issue at level II; and the microfracture orientation and topological structure are the dominant factors at level III. The apparent permeability obtained from the simulations agrees well with values acquired from experiments. By means of the workflow, the impact of the various gas transport mechanisms at different scales can be investigated more individually and precisely than with conventional experiments.
NASA Technical Reports Server (NTRS)
Tao, Wei-Kuo; Chern, Jiun-Dar
2017-01-01
The importance of precipitating mesoscale convective systems (MCSs) has been quantified from TRMM precipitation radar and microwave imager retrievals. MCSs generate more than 50% of the rainfall in most tropical regions. MCSs usually have horizontal scales of a few hundred kilometers (km); therefore, a large domain with several hundred km is required for realistic simulations of MCSs in cloud-resolving models (CRMs). Almost all traditional global and climate models do not have adequate parameterizations to represent MCSs. Typical multi-scale modeling frameworks (MMFs) may also lack the resolution (4 km grid spacing) and domain size (128 km) to realistically simulate MCSs. In this study, the impact of MCSs on precipitation is examined by conducting model simulations using the Goddard Cumulus Ensemble (GCE) model and Goddard MMF (GMMF). The results indicate that both models can realistically simulate MCSs with more grid points (i.e., 128 and 256) and higher resolutions (1 or 2 km) compared to those simulations with fewer grid points (i.e., 32 and 64) and low resolution (4 km). The modeling results also show that the strengths of the Hadley circulations, mean zonal and regional vertical velocities, surface evaporation, and amount of surface rainfall are weaker or reduced in the GMMF when using more CRM grid points and higher CRM resolution. In addition, the results indicate that large-scale surface evaporation and wind feedback are key processes for determining the surface rainfall amount in the GMMF. A sensitivity test with reduced sea surface temperatures shows both reduced surface rainfall and evaporation.
NASA Astrophysics Data System (ADS)
Tao, Wei-Kuo; Chern, Jiun-Dar
2017-06-01
The importance of precipitating mesoscale convective systems (MCSs) has been quantified from TRMM precipitation radar and microwave imager retrievals. MCSs generate more than 50% of the rainfall in most tropical regions. MCSs usually have horizontal scales of a few hundred kilometers (km); therefore, a large domain with several hundred km is required for realistic simulations of MCSs in cloud-resolving models (CRMs). Almost all traditional global and climate models do not have adequate parameterizations to represent MCSs. Typical multiscale modeling frameworks (MMFs) may also lack the resolution (4 km grid spacing) and domain size (128 km) to realistically simulate MCSs. The impact of MCSs on precipitation is examined by conducting model simulations using the Goddard Cumulus Ensemble (GCE, a CRM) model and Goddard MMF that uses the GCEs as its embedded CRMs. Both models can realistically simulate MCSs with more grid points (i.e., 128 and 256) and higher resolutions (1 or 2 km) compared to those simulations with fewer grid points (i.e., 32 and 64) and low resolution (4 km). The modeling results also show the strengths of the Hadley circulations, mean zonal and regional vertical velocities, surface evaporation, and amount of surface rainfall are weaker or reduced in the Goddard MMF when using more CRM grid points and higher CRM resolution. In addition, the results indicate that large-scale surface evaporation and wind feedback are key processes for determining the surface rainfall amount in the GMMF. A sensitivity test with reduced sea surface temperatures shows both reduced surface rainfall and evaporation.
Hierarchical algorithms for modeling the ocean on hierarchical architectures
NASA Astrophysics Data System (ADS)
Hill, C. N.
2012-12-01
This presentation will describe an approach to using accelerator/co-processor technology that maps hierarchical, multi-scale modeling techniques to an underlying hierarchical hardware architecture. The focus of this work is on making effective use of both CPU and accelerator/co-processor parts of a system, for large scale ocean modeling. In the work, a lower resolution basin scale ocean model is locally coupled to multiple, "embedded", limited area higher resolution sub-models. The higher resolution models execute on co-processor/accelerator hardware and do not interact directly with other sub-models. The lower resolution basin scale model executes on the system CPU(s). The result is a multi-scale algorithm that aligns with hardware designs in the co-processor/accelerator space. We demonstrate this approach being used to substitute explicit process models for standard parameterizations. Code for our sub-models is implemented through a generic abstraction layer, so that we can target multiple accelerator architectures with different programming environments. We will present two application and implementation examples. One uses the CUDA programming environment and targets GPU hardware. This example employs a simple non-hydrostatic two dimensional sub-model to represent vertical motion more accurately. The second example uses a highly threaded three-dimensional model at high resolution. This targets a MIC/Xeon Phi like environment and uses sub-models as a way to explicitly compute sub-mesoscale terms. In both cases the accelerator/co-processor capability provides extra compute cycles that allow improved model fidelity for little or no extra wall-clock time cost.
Simulations and Evaluation of Mesoscale Convective Systems in a Multi-scale Modeling Framework (MMF)
NASA Astrophysics Data System (ADS)
Chern, J. D.; Tao, W. K.
2017-12-01
It is well known that mesoscale convective systems (MCSs) produce more than 50% of the rainfall in most tropical regions and play important roles in regional and global water cycles. Simulation of MCSs in global and climate models is a very challenging problem. Typical MCSs have horizontal scales of a few hundred kilometers. Models with a domain of several hundred kilometers and fine enough resolution to properly simulate individual clouds are required to realistically simulate MCSs. The multiscale modeling framework (MMF), which replaces traditional cloud parameterizations with cloud-resolving models (CRMs) within a host atmospheric general circulation model (GCM), has shown some capability of simulating organized MCS-like storm signals and propagation. However, its embedded CRMs typically have a small domain (less than 128 km) and coarse resolution (4 km) and cannot realistically simulate MCSs and individual clouds. In this study, a series of simulations were performed using the Goddard MMF. The impacts of the domain size and model grid resolution of the embedded CRMs on simulating MCSs are examined. The changes of cloud structure, occurrence, and properties such as cloud types, updraft and downdraft, latent heating profile, and cold pool strength in the embedded CRMs are examined in detail. The simulated MCS characteristics are evaluated against satellite measurements using the Goddard Satellite Data Simulator Unit. The results indicate that embedded CRMs with a large domain and fine resolution tend to produce better simulations compared to those simulations with the typical MMF configuration (128 km domain size and 4 km model grid spacing).
A new algorithm for construction of coarse-grained sites of large biomolecules.
Li, Min; Zhang, John Z H; Xia, Fei
2016-04-05
The development of coarse-grained (CG) models for large biomolecules remains a challenge in multiscale simulations, including a rigorous definition of CG representations for them. In this work, we proposed a new stepwise optimization imposed with the boundary-constraint (SOBC) algorithm to construct the CG sites of large biomolecules, based on the scheme of essential dynamics coarse-graining. By means of SOBC, we can rigorously derive the CG representations of biomolecules with lower computational cost. The SOBC is particularly efficient for the CG definition of large systems with thousands of residues. The resulting CG sites can be parameterized as a CG model using the normal mode analysis based fluctuation matching method. Through normal mode analysis, the obtained modes of the CG model can accurately reflect the functionally related slow motions of biomolecules. The SOBC algorithm can be used for the construction of CG sites of large biomolecules such as F-actin and for the study of mechanical properties of biomaterials. © 2015 Wiley Periodicals, Inc.
Lim, Hojun; Battaile, Corbett C.; Brown, Justin L.; ...
2016-06-14
In this work, we develop a tantalum strength model that incorporates effects of temperature, strain rate and pressure. Dislocation kink-pair theory is used to incorporate temperature and strain rate effects, while the pressure dependent yield is obtained through the pressure dependent shear modulus. Material constants used in the model are parameterized from tantalum single crystal tests and polycrystalline ramp compression experiments. It is shown that the proposed strength model agrees well with the temperature and strain rate dependent yield obtained from polycrystalline tantalum experiments. Furthermore, the model accurately reproduces the pressure dependent yield stresses up to 250 GPa. The proposed strength model is then used to conduct simulations of a Taylor cylinder impact test and validated with experiments. This approach provides a physically-based multi-scale strength model that is able to predict the plastic deformation of polycrystalline tantalum through a wide range of temperature, strain and pressure regimes.
Knightes, Christopher D.; Golden, Heather E.; Journey, Celeste A.; Davis, Gary M.; Conrads, Paul; Marvin-DiPasquale, Mark; Brigham, Mark E.; Bradley, Paul M.
2014-01-01
Mercury is a ubiquitous global environmental toxicant responsible for most US fish advisories. Processes governing mercury concentrations in rivers and streams are not well understood, particularly at multiple spatial scales. We investigate how insights gained from reach-scale mercury data and model simulations can be applied at broader watershed scales using a spatially and temporally explicit watershed hydrology and biogeochemical cycling model, VELMA. We simulate fate and transport using reach-scale (0.1 km2) study data and evaluate applications to multiple watershed scales. Reach-scale VELMA parameterization was applied to two nested sub-watersheds (28 km2 and 25 km2) and the encompassing watershed (79 km2). Results demonstrate that simulated flow and total mercury concentrations compare reasonably to observations at different scales, but simulated methylmercury concentrations are out-of-phase with observations. These findings suggest that intricacies of methylmercury biogeochemical cycling and transport are under-represented in VELMA and underscore the complexity of simulating mercury fate and transport.
NASA Technical Reports Server (NTRS)
Saether, Erik; Glaessgen, Edward H.
2009-01-01
Atomistic simulations of intergranular fracture have indicated that grain-scale crack growth in polycrystalline metals can be direction dependent. At these material length scales, the atomic environment greatly influences the nature of intergranular crack propagation, through either brittle or ductile mechanisms, that are a function of adjacent grain orientation and direction of crack propagation. Methods have been developed to obtain cohesive zone models (CZM) directly from molecular dynamics simulations. These CZMs may be incorporated into decohesion finite element formulations to simulate fracture at larger length scales. A new directional decohesion element is presented that calculates the direction of Mode I opening and incorporates a material criterion for dislocation emission based on the local crystallographic environment to automatically select the CZM that best represents crack growth. The simulation of fracture in 2-D and 3-D aluminum polycrystals is used to illustrate the effect of parameterized CZMs and the effectiveness of directional decohesion finite elements.
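To give a concrete sense of what a cohesive zone model encodes, the sketch below evaluates a generic exponential traction-separation law of the Xu-Needleman type for pure normal opening, T(δ) = e·σ_max·(δ/δ₀)·exp(−δ/δ₀); the peak strength and characteristic opening are placeholder values, not parameters extracted from the molecular dynamics simulations described above.

    import math

    def cohesive_traction(delta, sigma_max, delta_0):
        """Exponential traction-separation law for pure normal opening:
        T(delta) = e * sigma_max * (delta / delta_0) * exp(-delta / delta_0).
        The traction peaks at sigma_max when delta == delta_0, then decays (decohesion)."""
        return math.e * sigma_max * (delta / delta_0) * math.exp(-delta / delta_0)

    # Placeholder parameters: peak strength 2.0 GPa, characteristic opening 0.5 nm
    sigma_max, delta_0 = 2.0, 0.5
    for delta in (0.1, 0.5, 1.0, 2.0):   # openings in nm
        print(delta, round(cohesive_traction(delta, sigma_max, delta_0), 3))  # traction in GPa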
Breaking the power law: Multiscale simulations of self-ion irradiated tungsten
NASA Astrophysics Data System (ADS)
Jin, Miaomiao; Permann, Cody; Short, Michael P.
2018-06-01
The initial stage of radiation defect creation has often been shown to follow a power law distribution at short time scales, recently so with tungsten, following many self-organizing patterns found in nature. The evolution of this damage, however, is dominated by interactions between defect clusters, as the coalescence of smaller defects into clusters depends on the balance between transport, absorption, and emission to/from existing clusters. The long-time evolution of radiation-induced defects in tungsten is studied with cluster dynamics parameterized with lower length scale simulations, and is shown to deviate from a power law size distribution. The effects of parameters such as dose rate and total dose, as parameters affecting the strength of the driving force for defect evolution, are also analyzed. Excellent agreement is achieved with regards to an experimentally measured defect size distribution at 30 K. This study provides another satisfactory explanation for experimental observations in addition to that of primary radiation damage, which should be reconciled with additional validation data.
Hosseinbor, Ameer Pasha; Kim, Won Hwa; Adluru, Nagesh; Acharya, Amit; Vorperian, Houri K; Chung, Moo K
2014-01-01
Recently, the HyperSPHARM algorithm was proposed to parameterize multiple disjoint objects in a holistic manner using the 4D hyperspherical harmonics. The HyperSPHARM coefficients are global; they cannot be used to directly infer localized variations in signal. In this paper, we present a unified wavelet framework that links HyperSPHARM to the diffusion wavelet transform. Specifically, we will show that the HyperSPHARM basis forms a subset of a wavelet-based multiscale representation of surface-based signals. This wavelet, termed the hyperspherical diffusion wavelet, is a consequence of the equivalence of isotropic heat diffusion smoothing and the diffusion wavelet transform on the hypersphere. Our framework allows for the statistical inference of highly localized anatomical changes, which we demonstrate in the first-ever developmental study on the hyoid bone investigating gender and age effects. We also show that the hyperspherical wavelet successfully picks up group-wise differences that are barely detectable using SPHARM.
Hosseinbor, A. Pasha; Kim, Won Hwa; Adluru, Nagesh; Acharya, Amit; Vorperian, Houri K.; Chung, Moo K.
2014-01-01
Recently, the HyperSPHARM algorithm was proposed to parameterize multiple disjoint objects in a holistic manner using the 4D hyperspherical harmonics. The HyperSPHARM coefficients are global; they cannot be used to directly infer localized variations in signal. In this paper, we present a unified wavelet framework that links HyperSPHARM to the diffusion wavelet transform. Specifically, we will show that the HyperSPHARM basis forms a subset of a wavelet-based multiscale representation of surface-based signals. This wavelet, termed the hyperspherical diffusion wavelet, is a consequence of the equivalence of isotropic heat diffusion smoothing and the diffusion wavelet transform on the hypersphere. Our framework allows for the statistical inference of highly localized anatomical changes, which we demonstrate in the first-ever developmental study on the hyoid bone investigating gender and age effects. We also show that the hyperspherical wavelet successfully picks up group-wise differences that are barely detectable using SPHARM. PMID:25320783
RACER a Coarse-Grained RNA Model for Capturing Folding Free Energy in Molecular Dynamics Simulations
NASA Astrophysics Data System (ADS)
Cheng, Sara; Bell, David; Ren, Pengyu
RACER is a coarse-grained RNA model that can be used in molecular dynamics simulations to predict native structures and sequence-specific variation of free energy of various RNA structures. RACER is capable of accurate prediction of native structures of duplexes and hairpins (average RMSD of 4.15 angstroms), and RACER can capture sequence-specific variation of free energy in excellent agreement with experimentally measured stabilities (r-squared = 0.98). The RACER model implements a new effective non-bonded potential and re-parameterization of hydrogen bond and Debye-Huckel potentials. Insights from the RACER model include the importance of treating pairing and stacking interactions separately in order to distinguish folded and unfolded states, and the identification of hydrogen-bonding, base stacking, and electrostatic interactions as essential driving forces for RNA folding. Future applications of the RACER model include predicting free energy landscapes of more complex RNA structures and use of RACER for multiscale simulations.
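To make the electrostatic ingredient mentioned above concrete, the sketch below evaluates a generic Debye-Huckel (screened Coulomb) pair interaction of the kind commonly used in coarse-grained nucleic-acid models; the parameter values and functional form are generic textbook choices, not RACER's actual parameterization.

    import math

    def debye_huckel_energy(q_i, q_j, r, kappa, epsilon_r=78.5):
        """Screened Coulomb pair energy U = f * q_i * q_j * exp(-kappa * r) / (eps_r * r).
        Charges in elementary charges, r in nm, kappa (inverse Debye length) in 1/nm;
        f = 138.935 kJ mol^-1 nm e^-2 is the vacuum Coulomb prefactor, so U is in kJ/mol."""
        coulomb_prefactor = 138.935
        return coulomb_prefactor * q_i * q_j * math.exp(-kappa * r) / (epsilon_r * r)

    # Example: two unit backbone charges 1 nm apart at ~150 mM salt (Debye length ~0.78 nm)
    print(debye_huckel_energy(-1.0, -1.0, r=1.0, kappa=1.0 / 0.78))  # ~0.5 kJ/mol, repulsive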
NASA Technical Reports Server (NTRS)
Tao, Wei-Kuo
2006-01-01
Recent GEWEX Cloud System Study (GCSS) model comparison projects have indicated that cloud-resolving models (CRMs) agree with observations better than traditional single-column models in simulating various types of clouds and cloud systems from different geographic locations. Current and future NASA satellite programs can provide cloud, precipitation, aerosol and other data at very fine spatial and temporal scales. Using these satellite data to improve the understanding of the physical processes that are responsible for the variation in global and regional climate and hydrological systems requires a coupled global circulation model (GCM) and cloud-scale model (termed a super-parameterization or multi-scale modeling framework, MMF). The use of a GCM will enable global coverage, and the use of a CRM will allow for better and more sophisticated physical parameterization. NASA satellite and field campaign cloud related datasets can provide initial conditions as well as validation for both the MMF and CFWs. The Goddard MMF is based on the 2D Goddard Cumulus Ensemble (GCE) model and the Goddard finite volume general circulation model (fvGCM), and it has started production runs with two years of results (1998 and 1999). In this talk, I will present: (1) a brief review of the GCE model and its applications to precipitation processes (microphysical and land processes), (2) the Goddard MMF, the major differences between the two existing MMFs (CSU MMF and Goddard MMF), and preliminary results (the comparison with traditional GCMs), and (3) a discussion of the Goddard WRF version (its developments and applications).
NASA Astrophysics Data System (ADS)
Schneider, Tapio; Lan, Shiwei; Stuart, Andrew; Teixeira, João.
2017-12-01
Climate projections continue to be marred by large uncertainties, which originate in processes that need to be parameterized, such as clouds, convection, and ecosystems. But rapid progress is now within reach. New computational tools and methods from data assimilation and machine learning make it possible to integrate global observations and local high-resolution simulations in an Earth system model (ESM) that systematically learns from both and quantifies uncertainties. Here we propose a blueprint for such an ESM. We outline how parameterization schemes can learn from global observations and targeted high-resolution simulations, for example, of clouds and convection, through matching low-order statistics between ESMs, observations, and high-resolution simulations. We illustrate learning algorithms for ESMs with a simple dynamical system that shares characteristics of the climate system; and we discuss the opportunities the proposed framework presents and the challenges that remain to realize it.
NASA Astrophysics Data System (ADS)
Gao, Yang; Leung, L. Ruby; Zhao, Chun; Hagos, Samson
2017-03-01
Simulating summer precipitation is a significant challenge for climate models that rely on cumulus parameterizations to represent moist convection processes. Motivated by recent advances in computing that support very high-resolution modeling, this study aims to systematically evaluate the effects of model resolution and convective parameterizations across the gray zone resolutions. Simulations using the Weather Research and Forecasting model were conducted at grid spacings of 36 km, 12 km, and 4 km for two summers over the conterminous U.S. The convection-permitting simulations at 4 km grid spacing are most skillful in reproducing the observed precipitation spatial distributions and diurnal variability. Notable differences are found between simulations with the traditional Kain-Fritsch (KF) and the scale-aware Grell-Freitas (GF) convection schemes, with the latter more skillful in capturing the nocturnal timing in the Great Plains and North American monsoon regions. The GF scheme also simulates a smoother transition from convective to large-scale precipitation as resolution increases, resulting in reduced sensitivity to model resolution compared to the KF scheme. Nonhydrostatic dynamics has a positive impact on precipitation over complex terrain even at 12 km and 36 km grid spacings. With nudging of the winds toward observations, we show that the conspicuous warm biases in the Southern Great Plains are related to precipitation biases induced by large-scale circulation biases, which are insensitive to model resolution. Overall, notable improvements in simulating summer rainfall and its diurnal variability through convection-permitting modeling and scale-aware parameterizations suggest promising venues for improving climate simulations of water cycle processes.
NASA Astrophysics Data System (ADS)
Lee, S.-H.; Kim, S.-W.; Angevine, W. M.; Bianco, L.; McKeen, S. A.; Senff, C. J.; Trainer, M.; Tucker, S. C.; Zamora, R. J.
2011-03-01
The performance of different urban surface parameterizations in the WRF (Weather Research and Forecasting) model in simulating the urban boundary layer (UBL) was investigated using extensive measurements during the Texas Air Quality Study 2006 field campaign. The extensive field measurements collected at surface (meteorological, wind profiler, energy balance flux) sites, on a research aircraft, and on a research vessel characterized the 3-dimensional atmospheric boundary layer structures over the Houston-Galveston Bay area, providing a unique opportunity for the evaluation of the physical parameterizations. The model simulations were performed over the Houston metropolitan area for a summertime period (12-17 August) using a bulk urban parameterization in the Noah land surface model (original LSM), a modified LSM, and a single-layer urban canopy model (UCM). The UCM simulation compared quite well with the observations over the Houston urban areas, reducing the systematic model biases in the original LSM simulation by 1-2 °C in near-surface air temperature and by 200-400 m in UBL height, on average. A more realistic turbulent (sensible and latent heat) energy partitioning contributed to the improvements in the UCM simulation. The original LSM significantly overestimated the sensible heat flux (~200 W m-2) over the urban areas, resulting in a warmer and higher UBL. The modified LSM slightly reduced the warm and high biases in near-surface air temperature (0.5-1 °C) and UBL height (~100 m) as a result of the effects of urban vegetation. The relatively strong thermal contrast between the Houston area and the water bodies (Galveston Bay and the Gulf of Mexico) in the LSM simulations enhanced the sea/bay breezes, but the model performance in predicting local wind fields was similar among the simulations in terms of statistical evaluations. These results suggest that a proper surface representation (e.g. urban vegetation, surface morphology) and explicit parameterizations of urban physical processes are required for accurate urban atmospheric numerical modeling.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ali, Melkamu; Ye, Sheng; Li, Hongyi
2014-07-19
Subsurface stormflow is an important component of the rainfall-runoff response, especially in steep forested regions. However, its contribution is poorly represented in the current generation of land surface hydrological models (LSMs) and catchment-scale rainfall-runoff models. The lack of a physical basis for common parameterizations precludes a priori estimation (i.e. without calibration), which is a major drawback for prediction in ungauged basins, or for use in global models. This paper is aimed at deriving physically based parameterizations of the storage-discharge relationship relating to subsurface flow. These parameterizations are derived through a two-step up-scaling procedure: firstly, through simulations with a physically based (Darcian) subsurface flow model for idealized three dimensional rectangular hillslopes, accounting for within-hillslope random heterogeneity of soil hydraulic properties, and secondly, through subsequent up-scaling to the catchment scale by accounting for between-hillslope and within-catchment heterogeneity of topographic features (e.g., slope). These theoretical simulation results produced parameterizations of the storage-discharge relationship in terms of soil hydraulic properties, topographic slope and their heterogeneities, which were consistent with results of previous studies. Yet, regionalization of the resulting storage-discharge relations across 50 actual catchments in the eastern United States, and a comparison of the regionalized results with equivalent empirical results obtained on the basis of analysis of observed streamflow recession curves, revealed a systematic inconsistency. It was found that the difference between the theoretical and empirically derived results could be explained, to first order, by climate in the form of a climatic aridity index. This suggests a possible codependence of climate, soils, vegetation and topographic properties, and suggests that the subsurface flow parameterization needed for ungauged locations must account for both the physics of flow in heterogeneous landscapes and the co-dependence of soil and topographic properties with climate, including possibly the mediating role of vegetation.
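As a schematic of what a catchment-scale storage-discharge parameterization looks like in numerical form, the sketch below drains a single conceptual store through a generic power-law relation Q = a·S^b; the functional form and the coefficient values are illustrative assumptions, not the physically derived relations of the paper.

    def discharge_from_storage(storage, a=0.01, b=1.5):
        """Generic power-law storage-discharge relation Q = a * S**b
        (illustrative coefficients; storage in mm, discharge in mm/day)."""
        return a * storage**b

    # Drain a hillslope store forward in time with a simple explicit step
    dt, storage = 1.0, 50.0            # time step (days) and initial storage (mm)
    for day in range(5):
        q = discharge_from_storage(storage)
        storage -= q * dt              # subsurface drainage only, no recharge
        print(day, round(q, 3), round(storage, 2))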
NASA Astrophysics Data System (ADS)
Khodayari, Arezoo; Olsen, Seth C.; Wuebbles, Donald J.; Phoenix, Daniel B.
2015-07-01
Atmospheric chemistry-climate models are often used to calculate the effect of aviation NOx emissions on atmospheric ozone (O3) and methane (CH4). Due to the long (∼10 yr) atmospheric lifetime of methane, model simulations must be run for long time periods, typically for more than 40 simulation years, to reach steady-state if using CH4 emission fluxes. Because of the computational expense of such long runs, studies have traditionally used specified CH4 mixing ratio lower boundary conditions (BCs) and then applied a simple parameterization based on the change in CH4 lifetime between the control and NOx-perturbed simulations to estimate the change in CH4 concentration induced by NOx emissions. In this parameterization a feedback factor (typically a value of 1.4) is used to account for the feedback of CH4 concentrations on its lifetime. Modeling studies comparing simulations using CH4 surface fluxes and fixed mixing ratio BCs are used to examine the validity of this parameterization. The latest version of the Community Earth System Model (CESM), with the CAM5 atmospheric model, was used for this study. Aviation NOx emissions for 2006 were obtained from the AEDT (Aviation Environmental Design Tool) global commercial aircraft emissions. Results show a 31.4 ppb change in CH4 concentration when estimated using the parameterization and a 1.4 feedback factor, and a 28.9 ppb change when the concentration was directly calculated in the CH4 flux simulations. The model calculated value for CH4 feedback on its own lifetime agrees well with the 1.4 feedback factor. Systematic comparisons between the separate runs indicated that the parameterization technique overestimates the CH4 concentration by 8.6%. Therefore, it is concluded that the estimation technique is good to within ∼10% and decreases the computational requirements in our simulations by nearly a factor of 8.
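The parameterization being tested can be summarized in a few lines of arithmetic: the steady-state CH4 change is estimated from the fractional change in CH4 lifetime between a control and a NOx-perturbed run, amplified by a feedback factor. The sketch below reproduces that bookkeeping with the 1.4 feedback factor quoted above; the background CH4 burden and the lifetimes in the example are placeholders, not values from the simulations.

    def delta_ch4_from_lifetime(ch4_ref_ppb, tau_control, tau_perturbed, feedback=1.4):
        """Estimate the steady-state CH4 change (ppb) from the lifetime change
        between control and NOx-perturbed runs, scaled by a feedback factor."""
        fractional_lifetime_change = (tau_perturbed - tau_control) / tau_control
        return feedback * ch4_ref_ppb * fractional_lifetime_change

    # Placeholder numbers: 1800 ppb background, lifetime shortened from 10.000 to 9.875 yr
    print(delta_ch4_from_lifetime(1800.0, 10.000, 9.875))  # about -31.5 ppb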
DOE Office of Scientific and Technical Information (OSTI.GOV)
Du, Qiang
The rational design of materials, the development of accurate and efficient material simulation algorithms, and the determination of the response of materials to environments and loads occurring in practice all require an understanding of mechanics at disparate spatial and temporal scales. The project addresses mathematical and numerical analyses for material problems for which relevant scales range from those usually treated by molecular dynamics all the way up to those most often treated by classical elasticity. The prevalent approach towards developing a multiscale material model couples two or more well known models, e.g., molecular dynamics and classical elasticity, each of which is useful at a different scale, creating a multiscale multi-model. However, the challenges behind such a coupling are formidable and largely arise because the atomistic and continuum models employ nonlocal and local models of force, respectively. The project focuses on a multiscale analysis of the peridynamics materials model. Peridynamics can be used as a transition between molecular dynamics and classical elasticity so that the difficulties encountered when directly coupling those two models are mitigated. In addition, in some situations, peridynamics can be used all by itself as a material model that accurately and efficiently captures the behavior of materials over a wide range of spatial and temporal scales. Peridynamics is well suited to these purposes because it employs a nonlocal model of force, analogous to that of molecular dynamics; furthermore, at sufficiently large length scales and assuming smooth deformation, peridynamics can be approximated by classical elasticity. The project will extend the emerging mathematical and numerical analysis of peridynamics. One goal is to develop a peridynamics-enabled multiscale multi-model that potentially provides a new and more extensive mathematical basis for coupling classical elasticity and molecular dynamics, thus enabling next generation atomistic-to-continuum multiscale simulations. In addition, a rigorous study of finite element discretizations of peridynamics will be considered. Using the fact that peridynamics is spatially derivative free, we will also characterize the space of admissible peridynamic solutions and carry out systematic analyses of the models, in particular rigorously showing how peridynamics encompasses fracture and other failure phenomena. Additional aspects of the project include the mathematical and numerical analysis of peridynamics applied to stochastic peridynamics models. In summary, the project will make feasible mathematically consistent multiscale models for the analysis and design of advanced materials.
Analytical probabilistic proton dose calculation and range uncertainties
NASA Astrophysics Data System (ADS)
Bangert, M.; Hennig, P.; Oelfke, U.
2014-03-01
We introduce the concept of analytical probabilistic modeling (APM) to calculate the mean and the standard deviation of intensity-modulated proton dose distributions under the influence of range uncertainties in closed form. For APM, range uncertainties are modeled with a multivariate Normal distribution p(z) over the radiological depths z. A pencil beam algorithm that parameterizes the proton depth dose d(z) with a weighted superposition of ten Gaussians is used. Hence, the integrals ∫ dz p(z) d(z) and ∫ dz p(z) d(z)² required for the calculation of the expected value and standard deviation of the dose remain analytically tractable and can be efficiently evaluated. The means μk, widths δk, and weights ωk of the Gaussian components parameterizing the depth dose curves are found with least squares fits for all available proton ranges. We observe less than 0.3% average deviation of the Gaussian parameterizations from the original proton depth dose curves. Consequently, APM yields high accuracy estimates for the expected value and standard deviation of intensity-modulated proton dose distributions for two dimensional test cases. APM can accommodate arbitrary correlation models and account for the different nature of random and systematic errors in fractionated radiation therapy. Beneficial applications of APM in robust planning are feasible.
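The closed-form tractability claimed above comes from the fact that a Gaussian range-uncertainty density integrated against a Gaussian depth-dose component is again Gaussian in the component means. The sketch below evaluates that integral for a single (unnormalized) Gaussian dose component under a Normal depth uncertainty and checks it against numerical quadrature; the component parameters are arbitrary illustrative values, not a fitted depth dose curve.

    import numpy as np

    def expected_component_dose(omega, mu_k, delta_k, z0, sigma):
        """Closed form of  int N(z; z0, sigma^2) * omega*exp(-(z-mu_k)^2/(2*delta_k^2)) dz
        = omega * delta_k / sqrt(delta_k^2 + sigma^2)
          * exp(-(z0 - mu_k)^2 / (2 * (delta_k^2 + sigma^2)))."""
        var = delta_k**2 + sigma**2
        return omega * delta_k / np.sqrt(var) * np.exp(-((z0 - mu_k) ** 2) / (2.0 * var))

    # Numerical check with quadrature (illustrative parameters, arbitrary units)
    omega, mu_k, delta_k, z0, sigma = 1.0, 10.0, 0.5, 9.5, 0.3
    z = np.linspace(0.0, 20.0, 200001)
    p = np.exp(-((z - z0) ** 2) / (2 * sigma**2)) / (np.sqrt(2 * np.pi) * sigma)
    d = omega * np.exp(-((z - mu_k) ** 2) / (2 * delta_k**2))
    print(expected_component_dose(omega, mu_k, delta_k, z0, sigma))  # analytic result
    print(np.trapz(p * d, z))                                        # numerical, should match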
NASA Astrophysics Data System (ADS)
Gornostyrev, Yu. N.; Katsnelson, M. I.; Mryasov, Oleg N.; Freeman, A. J.; Trefilov, M. V.
1998-03-01
Theoretical analysis of the fracture behaviour of fcc Au, Ir and Al has been performed within various brittle/ductile criteria (BDC) with ab-initio, embedded atom (EAM), and pseudopotential parameterizations. We systematically examined several important aspects of the fracture behaviour: (i) dislocation structure, (ii) energetics of the cleavage decohesion and (iii) character of the interatomic interactions. Unit dislocation structures were analyzed within a two dimensional generalization of the Peierls-Nabarro model with restoring forces determined from ab-initio total energy calculations and found to be split with well defined highly mobile partials for all considered metals. We find from ab-initio and pseudopotential calculations that, in contrast with most fcc metals, the cleavage decohesion curve for Al differs appreciably from the UBER relation. Finally, using ab-initio, EAM and pseudopotential parameterizations, we demonstrate that (i) Au (as a typical example of a ductile metal) is well described within existing BDCs, (ii) anomalous cleavage-like crack propagation in Ir is driven predominantly by its high elastic modulus and (iii) Al is not described within BDC due to its long-range interatomic interactions (and hence requires adjustments of the brittle/ductile criteria).
NASA Astrophysics Data System (ADS)
Durigon, Angelica; Lier, Quirijn de Jong van; Metselaar, Klaas
2016-10-01
To date, measuring plant transpiration at the canopy scale is laborious, and its estimation by numerical modelling can be used to assess high time frequency data. When using the model by Jacobs (1994) to simulate transpiration of water-stressed plants, it needs to be reparameterized. We compare the importance of model variables affecting simulated transpiration of water-stressed plants. A systematic literature review was performed to recover existing parameterizations to be tested in the model. Data from a field experiment with common bean under full and deficit irrigation were used to correlate estimations to forcing variables applying principal component analysis. New parameterizations resulted in a moderate reduction of prediction errors and in an increase in model performance. The Ags model was sensitive to changes in the mesophyll conductance and leaf angle distribution parameterizations, allowing model improvement. Simulated transpiration could be separated into temporal components. Daily, afternoon-depression and long-term components for the fully irrigated treatment were more related to atmospheric forcing variables (specific humidity deficit between stomata and air, relative air humidity and canopy temperature). Daily and afternoon-depression components for the deficit-irrigated treatment were related to both atmospheric and soil dryness, and the long-term component was related to soil dryness.
(abstract) A Geomagnetic Contribution to Climate Change in this Century
NASA Technical Reports Server (NTRS)
Feynman, J.; Ruzmaikin, A.; Lawrence, J.
1996-01-01
There is a myth that all solar effects can be parameterized by the sunspot number. This is not true. For example, the level of geomagnetic activity during this century was not proportional to the sunspot number. Instead, there is a large systematic increase in geomagnetic activity that is not reflected in the sunspot number. This increase occurred gradually over at least 60 years. The 11 year solar cycle variation was superimposed on this systematic increase. Here we show that this systematic increase in activity is well correlated with the simultaneous increase in terrestrial temperature that occurred during the first half of this century. We discuss these findings in terms of mechanisms by which geomagnetic activity can be coupled to climate. These mechanisms include possible changes in weather patterns and cloud cover due to increased cosmic ray fluxes, or to increased fluxes of high energy electrons. We suggest that this systematic increase in geomagnetic activity contributed (along with anthropogenic effects and possible changes in solar irradiance) to the changes in climate recorded during this period.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Majda, Andrew J.; Xing, Yulong; Mohammadian, Majid
Determining the finite-amplitude preconditioned states in the hurricane embryo, which lead to tropical cyclogenesis, is a central issue in contemporary meteorology. In the embryo there is competition between different preconditioning mechanisms involving hydrodynamics and moist thermodynamics, which can lead to cyclogenesis. Here systematic asymptotic methods from applied mathematics are utilized to develop new simplified moist multi-scale models starting from the moist anelastic equations. Three interesting multi-scale models emerge in the analysis. The balanced mesoscale vortex (BMV) dynamics and the microscale balanced hot tower (BHT) dynamics involve simplified balanced equations without gravity waves for vertical vorticity amplification due to moist heat sources and incorporate nonlinear advective fluxes across scales. The BMV model is the central one for tropical cyclogenesis in the embryo. The moist mesoscale wave (MMW) dynamics involves simplified equations for mesoscale moisture fluctuations, as well as linear hydrostatic waves driven by heat sources from moisture and eddy flux divergences. A simplified cloud physics model for deep convection is introduced here and used to study moist axisymmetric plumes in the BHT model. A simple application in periodic geometry involving the effects of mesoscale vertical shear and moist microscale hot towers on vortex amplification is developed here to illustrate features of the coupled multi-scale models. These results illustrate the use of these models in isolating key mechanisms in the embryo in a simplified context.
Nonlinear intrinsic variables and state reconstruction in multiscale simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dsilva, Carmeline J., E-mail: cdsilva@princeton.edu; Talmon, Ronen, E-mail: ronen.talmon@yale.edu; Coifman, Ronald R., E-mail: coifman@math.yale.edu
2013-11-14
Finding informative low-dimensional descriptions of high-dimensional simulation data (like the ones arising in molecular dynamics or kinetic Monte Carlo simulations of physical and chemical processes) is crucial to understanding physical phenomena, and can also dramatically assist in accelerating the simulations themselves. In this paper, we discuss and illustrate the use of nonlinear intrinsic variables (NIV) in the mining of high-dimensional multiscale simulation data. In particular, we focus on the way NIV allows us to functionally merge different simulation ensembles, and different partial observations of these ensembles, as well as to infer variables not explicitly measured. The approach relies on certain simple features of the underlying process variability to filter out measurement noise and systematically recover a unique reference coordinate frame. We illustrate the approach through two distinct sets of atomistic simulations: a stochastic simulation of an enzyme reaction network exhibiting both fast and slow time scales, and a molecular dynamics simulation of alanine dipeptide in explicit water.
Nonlinear intrinsic variables and state reconstruction in multiscale simulations
NASA Astrophysics Data System (ADS)
Dsilva, Carmeline J.; Talmon, Ronen; Rabin, Neta; Coifman, Ronald R.; Kevrekidis, Ioannis G.
2013-11-01
Finding informative low-dimensional descriptions of high-dimensional simulation data (like the ones arising in molecular dynamics or kinetic Monte Carlo simulations of physical and chemical processes) is crucial to understanding physical phenomena, and can also dramatically assist in accelerating the simulations themselves. In this paper, we discuss and illustrate the use of nonlinear intrinsic variables (NIV) in the mining of high-dimensional multiscale simulation data. In particular, we focus on the way NIV allows us to functionally merge different simulation ensembles, and different partial observations of these ensembles, as well as to infer variables not explicitly measured. The approach relies on certain simple features of the underlying process variability to filter out measurement noise and systematically recover a unique reference coordinate frame. We illustrate the approach through two distinct sets of atomistic simulations: a stochastic simulation of an enzyme reaction network exhibiting both fast and slow time scales, and a molecular dynamics simulation of alanine dipeptide in explicit water.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cao, Zhen; Voth, Gregory A., E-mail: gavoth@uchicago.edu
It is essential to be able to systematically construct coarse-grained (CG) models that can efficiently and accurately reproduce key properties of higher-resolution models such as all-atom. To fulfill this goal, a mapping operator is needed to transform the higher-resolution configuration to a CG configuration. Certain mapping operators, however, may lose information related to the underlying electrostatic properties. In this paper, a new mapping operator based on the centers of charge of CG sites is proposed to address this issue. Four example systems are chosen to demonstrate this concept. Within the multiscale coarse-graining framework, CG models that use this mapping operator are found to better reproduce the structural correlations of atomistic models. The present work also demonstrates the flexibility of the mapping operator and the robustness of the force matching method. For instance, important functional groups can be isolated and emphasized in the CG model.
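As a minimal sketch of what a charge-weighted mapping operator can look like: the grouping of atoms into sites and the use of absolute partial charges as weights (to keep the map well defined for nearly neutral groups) are assumptions for illustration, not the definition used in the paper.

```python
import numpy as np

def center_of_charge_map(positions, charges, site_atom_indices):
    """Map an atomistic configuration onto CG sites via charge-weighted centers.

    positions         : (n_atoms, 3) Cartesian coordinates
    charges           : (n_atoms,) partial charges
    site_atom_indices : list of index arrays, one per CG site (hypothetical grouping)
    """
    cg_sites = []
    for idx in site_atom_indices:
        w = np.abs(charges[idx])
        w = w / w.sum()                      # normalized charge-magnitude weights
        cg_sites.append(w @ positions[idx])  # weighted average position of the group
    return np.array(cg_sites)

# Toy three-atom molecule mapped to a single CG site
pos = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
q = np.array([-0.8, 0.4, 0.4])
print(center_of_charge_map(pos, q, [np.array([0, 1, 2])]))
```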
Scale-dependent intrinsic entropies of complex time series.
Yeh, Jia-Rong; Peng, Chung-Kang; Huang, Norden E
2016-04-13
Multi-scale entropy (MSE) was developed as a measure of complexity for complex time series, and it has been applied widely in recent years. The MSE algorithm is based on the assumption that biological systems possess the ability to adapt and function in an ever-changing environment, and these systems need to operate across multiple temporal and spatial scales, such that their complexity is also multi-scale and hierarchical. Here, we present a systematic approach to apply the empirical mode decomposition algorithm, which can detrend time series on various time scales, prior to analysing a signal's complexity by measuring the irregularity of its dynamics on multiple time scales. Simulated time series of fractal Gaussian noise and human heartbeat time series were used to study the performance of this new approach. We show that our method can successfully quantify the fractal properties of the simulated time series and can accurately distinguish modulations in human heartbeat time series in health and disease. © 2016 The Author(s).
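For orientation, a compact sketch of the classic coarse-graining multi-scale entropy procedure is given below; the method of the paper replaces coarse-graining with empirical-mode-decomposition detrending before the irregularity measure, which is not reproduced here. The sample-entropy implementation is a simplified approximation, and the tolerance (0.15 of the standard deviation) is a common convention rather than a value from the paper.

```python
import numpy as np

def sample_entropy(x, m=2, r_frac=0.15):
    """Approximate sample entropy: negative log of the ratio of (m+1)-point to
    m-point template matches within tolerance r."""
    x = np.asarray(x, dtype=float)
    r = r_frac * x.std()

    def count_matches(length):
        templates = np.array([x[i:i + length] for i in range(len(x) - length)])
        d = np.max(np.abs(templates[:, None, :] - templates[None, :, :]), axis=-1)
        return (d <= r).sum() - len(templates)   # exclude self-matches

    B, A = count_matches(m), count_matches(m + 1)
    return -np.log(A / B) if A > 0 and B > 0 else np.inf

def multiscale_entropy(x, max_scale=10):
    """Classic MSE: coarse-grain by non-overlapping averaging, then sample entropy."""
    out = []
    for s in range(1, max_scale + 1):
        n = (len(x) // s) * s
        coarse = np.asarray(x[:n], dtype=float).reshape(-1, s).mean(axis=1)
        out.append(sample_entropy(coarse))
    return out

rng = np.random.default_rng(2)
print(multiscale_entropy(rng.standard_normal(1000), max_scale=5))
```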
Lunar Geologic Mapping Program: 2008 Update
NASA Technical Reports Server (NTRS)
Gaddis, L.; Tanaka, K.; Skinner, J.; Hawke, B. R.
2008-01-01
The NASA Lunar Geologic Mapping Program is underway and a mappers' handbook is in preparation. This program for systematic, global lunar geologic mapping at 1:2.5M scale incorporates digital, multi-scale data from a wide variety of sources. Many of these datasets have been tied to the new Unified Lunar Control Network 2005 [1] and are available online. This presentation summarizes the current status of this mapping program, the datasets now available, and how they might be used for mapping on the Moon.
Multifractal evaluation of simulated precipitation intensities from the COSMO NWP model
NASA Astrophysics Data System (ADS)
Wolfensberger, Daniel; Gires, Auguste; Tchiguirinskaia, Ioulia; Schertzer, Daniel; Berne, Alexis
2017-12-01
The framework of universal multifractals (UM) characterizes the spatio-temporal variability in geophysical data over a wide range of scales with only a limited number of scale-invariant parameters. This work aims to clarify the link between multifractals (MFs) and more conventional weather descriptors and to show how they can be used to perform a multi-scale evaluation of model data. The first part of this work focuses on a MF analysis of the climatology of precipitation intensities simulated by the COSMO numerical weather prediction model. Analysis of the spatial structure of the MF parameters, and their correlations with external meteorological and topographical descriptors, reveals that simulated precipitation tends to be smoother at higher altitudes, and that the mean intermittency is mostly influenced by the latitude. A hierarchical clustering was performed on the external descriptors, yielding three different clusters, which correspond roughly to Alpine/continental, Mediterranean and temperate regions. Distributions of MF parameters within these three clusters are shown to be statistically significantly different, indicating that the MF signature of rain is indeed geographically dependent. The second part of this work is event-based and focuses on the smaller scales. The MF parameters of precipitation intensities at the ground are compared with those obtained from the Swiss radar composite during three events corresponding to typical synoptic conditions over Switzerland. The results of this analysis show that the COSMO simulations exhibit spatial scaling breaks that are not present in the radar data, indicating that the model is not able to simulate the observed variability at all scales. A comparison of the operational one-moment microphysical parameterization scheme of COSMO with a more advanced two-moment scheme reveals that, while no scheme systematically outperforms the other, the two-moment scheme tends to produce larger extreme values and more discontinuous precipitation fields, which agree better with the radar composite.
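As a rough illustration of the scale-invariance analysis underlying universal multifractals, the sketch below estimates the moment-scaling function K(q) of a gridded rain-rate field by trace moments (averaging over successively coarser boxes). The UM parameters themselves are usually obtained from K(q), e.g., by the double-trace-moment technique, which is not shown; the field size and moment orders are illustrative assumptions.

```python
import numpy as np

def moment_scaling_function(field, qs):
    """Trace-moment estimate of K(q) for a square 2-D field whose side is a power of 2.

    The non-negative field is normalized by its mean, averaged over boxes of
    increasing size, and the q-th moments <eps_lambda**q> are regressed against
    the scale ratio lambda in log-log coordinates; the slopes estimate K(q).
    """
    field = np.asarray(field, dtype=float)
    field = field / field.mean()
    n = field.shape[0]
    lambdas, moments = [], []
    size = 1
    while size <= n:
        coarse = field.reshape(n // size, size, n // size, size).mean(axis=(1, 3))
        lambdas.append(n / size)                        # scale ratio (outer scale / box size)
        moments.append([np.mean(coarse ** q) for q in qs])
        size *= 2
    log_lam, log_mom = np.log(np.array(lambdas)), np.log(np.array(moments))
    return np.array([np.polyfit(log_lam, log_mom[:, j], 1)[0] for j in range(len(qs))])

# Synthetic non-negative random field standing in for a gridded rain-rate map
rng = np.random.default_rng(4)
field = rng.lognormal(mean=0.0, sigma=1.0, size=(256, 256))
K = moment_scaling_function(field, qs=[0.5, 1.0, 1.5, 2.0, 2.5])
```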
2014-09-30
...through downscaling future projection simulations. APPROACH: To address the scientific objectives, we plan to develop, implement, and validate a... ITD and FSD at the same time. The development of MIZMAS will be based on systematic model parameterization, calibration, and validation, and data...
NASA Astrophysics Data System (ADS)
Gantt, B.; Kelly, J. T.; Bash, J. O.
2015-11-01
Sea spray aerosols (SSAs) impact the particle mass concentration and gas-particle partitioning in coastal environments, with implications for human and ecosystem health. Model evaluations of SSA emissions have mainly focused on the global scale, but regional-scale evaluations are also important due to the localized impact of SSAs on atmospheric chemistry near the coast. In this study, SSA emissions in the Community Multiscale Air Quality (CMAQ) model were updated to enhance the fine-mode size distribution, include sea surface temperature (SST) dependency, and reduce surf-enhanced emissions. Predictions from the updated CMAQ model and those of the previous release version, CMAQv5.0.2, were evaluated using several coastal and national observational data sets in the continental US. The updated emissions generally reduced model underestimates of sodium, chloride, and nitrate surface concentrations for coastal sites in the Bay Regional Atmospheric Chemistry Experiment (BRACE) near Tampa, Florida. Including SST dependency to the SSA emission parameterization led to increased sodium concentrations in the southeastern US and decreased concentrations along parts of the Pacific coast and northeastern US. The influence of sodium on the gas-particle partitioning of nitrate resulted in higher nitrate particle concentrations in many coastal urban areas due to increased condensation of nitric acid in the updated simulations, potentially affecting the predicted nitrogen deposition in sensitive ecosystems. Application of the updated SSA emissions to the California Research at the Nexus of Air Quality and Climate Change (CalNex) study period resulted in a modest improvement in the predicted surface concentration of sodium and nitrate at several central and southern California coastal sites. This update of SSA emissions enabled a more realistic simulation of the atmospheric chemistry in coastal environments where marine air mixes with urban pollution.
MULTI-SCALE MODELING AND APPROXIMATION ASSISTED OPTIMIZATION OF BARE TUBE HEAT EXCHANGERS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bacellar, Daniel; Ling, Jiazhen; Aute, Vikrant
2014-01-01
Air-to-refrigerant heat exchangers are very common in air-conditioning, heat pump and refrigeration applications. In these heat exchangers, there is a great benefit in terms of size, weight, refrigerant charge and heat transfer coefficient, by moving from conventional channel sizes (~ 9mm) to smaller channel sizes (< 5mm). This work investigates new designs for air-to-refrigerant heat exchangers with tube outer diameter ranging from 0.5 to 2.0mm. The goal of this research is to develop and optimize the design of these heat exchangers and compare their performance with existing state of the art designs. The air-side performance of various tube bundle configurations is analyzed using a Parallel Parameterized CFD (PPCFD) technique. PPCFD allows for fast-parametric CFD analyses of various geometries with topology change. Approximation techniques drastically reduce the number of CFD evaluations required during optimization. The Maximum Entropy Design method is used for sampling and the Kriging method is used for metamodeling. Metamodels are developed for the air-side heat transfer coefficients and pressure drop as a function of tube-bundle dimensions and air velocity. The metamodels are then integrated with an air-to-refrigerant heat exchanger design code. This integration allows a multi-scale analysis of the air-side performance of heat exchangers, including air-to-refrigerant heat transfer and phase change. Overall optimization is carried out using a multi-objective genetic algorithm. The optimal designs found can exhibit 50 percent size reduction, 75 percent decrease in air side pressure drop and doubled air heat transfer coefficients compared to a high performance compact micro channel heat exchanger with the same capacity and flow rates.
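A minimal sketch of the approximation-assisted loop described above, with a stand-in algebraic function in place of the parallel CFD evaluations: sample the design space, fit a Kriging (Gaussian-process) metamodel, and search the cheap surrogate. The design variables, random sampling (rather than Maximum Entropy Design), and single-objective search (rather than a multi-objective genetic algorithm coupled to a design code) are simplifications assumed for illustration.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import ConstantKernel, RBF

# Stand-in for an expensive CFD evaluation (illustrative air-side pressure drop
# versus tube diameter [mm] and face velocity [m/s])
def expensive_simulation(x):
    tube_diameter, air_velocity = x
    return air_velocity ** 1.8 / tube_diameter

rng = np.random.default_rng(1)
# Sample the design space
X_train = np.column_stack([rng.uniform(0.5, 2.0, 40), rng.uniform(1.0, 5.0, 40)])
y_train = np.array([expensive_simulation(x) for x in X_train])

# Kriging metamodel: Gaussian-process regression with an anisotropic RBF kernel
gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(length_scale=[0.5, 1.0]),
                              normalize_y=True)
gp.fit(X_train, y_train)

# Cheap surrogate-based search in place of repeated CFD runs
X_cand = np.column_stack([rng.uniform(0.5, 2.0, 5000), rng.uniform(1.0, 5.0, 5000)])
pred, std = gp.predict(X_cand, return_std=True)
best_design = X_cand[np.argmin(pred)]
```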
A Multi-Scale Energy Food Systems Modeling Framework For Climate Adaptation
NASA Astrophysics Data System (ADS)
Siddiqui, S.; Bakker, C.; Zaitchik, B. F.; Hobbs, B. F.; Broaddus, E.; Neff, R.; Haskett, J.; Parker, C.
2016-12-01
Our goal is to understand coupled system dynamics across scales in a manner that allows us to quantify the sensitivity of critical human outcomes (nutritional satisfaction, household economic well-being) to development strategies and to climate or market induced shocks in sub-Saharan Africa. We adopt both bottom-up and top-down multi-scale modeling approaches focusing our efforts on food, energy, water (FEW) dynamics to define, parameterize, and evaluate modeled processes nationally as well as across climate zones and communities. Our framework comprises three complementary modeling techniques spanning local, sub-national and national scales to capture interdependencies between sectors, across time scales, and on multiple levels of geographic aggregation. At the center is a multi-player micro-economic (MME) partial equilibrium model for the production, consumption, storage, and transportation of food, energy, and fuels, which is the focus of this presentation. We show why such models can be very useful for linking and integrating across time and spatial scales, as well as a wide variety of models including an agent-based model applied to rural villages and larger population centers, an optimization-based electricity infrastructure model at a regional scale, and a computable general equilibrium model, which is applied to understand FEW resources and economic patterns at national scale. The MME is based on aggregating individual optimization problems for relevant players in an energy, electricity, or food market and captures important food supply chain components of trade and food distribution accounting for infrastructure and geography. Second, our model considers food access and utilization by modeling food waste and disaggregating consumption by income and age. Third, the model is set up to evaluate the effects of seasonality and system shocks on supply, demand, infrastructure, and transportation in both energy and food.
Evaluation of a Cloud Resolving Model Using TRMM Observations for Multiscale Modeling Applications
NASA Technical Reports Server (NTRS)
Posselt, Derek J.; L'Ecuyer, Tristan; Tao, Wei-Kuo; Hou, Arthur Y.; Stephens, Graeme L.
2007-01-01
The climate change simulation community is moving toward use of global cloud resolving models (CRMs); however, current computational resources are not sufficient to run global CRMs over the hundreds of years necessary to produce climate change estimates. As an intermediate step between conventional general circulation models (GCMs) and global CRMs, many climate analysis centers are embedding a CRM in each grid cell of a conventional GCM. These Multiscale Modeling Frameworks (MMFs) represent a theoretical advance over the use of conventional GCM cloud and convection parameterizations, but have been shown to exhibit an overproduction of precipitation in the tropics during the northern hemisphere summer. In this study, simulations of clouds, precipitation, and radiation over the South China Sea using the CRM component of the NASA Goddard MMF are evaluated using retrievals derived from the instruments aboard the Tropical Rainfall Measuring Mission (TRMM) satellite platform for a 46-day time period that spans 5 May - 20 June 1998. The NASA Goddard Cumulus Ensemble (GCE) model is forced with observed large-scale forcing derived from soundings taken during the intensive observing period of the South China Sea Monsoon Experiment. It is found that the GCE configuration used in the NASA Goddard MMF responds too vigorously to the imposed large-scale forcing, accumulating too much moisture and producing too much cloud cover during convective phases, and overdrying the atmosphere and suppressing clouds during monsoon break periods. Sensitivity experiments reveal that changes to ice cloud microphysical parameters have a relatively large effect on simulated clouds, precipitation, and radiation, while changes to grid spacing and domain length have little effect on simulation results. The results motivate a more detailed and quantitative exploration of the sources and magnitude of the uncertainty associated with specified cloud microphysical parameters in the CRM components of MMFs.
Physical controls and predictability of stream hyporheic flow evaluated with a multiscale model
Stonedahl, Susa H.; Harvey, Judson W.; Detty, Joel; Aubeneau, Antoine; Packman, Aaron I.
2012-01-01
Improved predictions of hyporheic exchange based on easily measured physical variables are needed to improve assessment of solute transport and reaction processes in watersheds. Here we compare physically based model predictions for an Indiana stream with stream tracer results interpreted using the Transient Storage Model (TSM). We parameterized the physically based Multiscale Model (MSM) of stream-groundwater interactions with measured stream planform and discharge, stream velocity, streambed hydraulic conductivity and porosity, and topography of the streambed at distinct spatial scales (i.e., ripple, bar, and reach scales). We predicted hyporheic exchange fluxes and hyporheic residence times using the MSM. A Continuous Time Random Walk (CTRW) model was used to convert the MSM output into predictions of in-stream solute transport, which we compared with field observations and TSM parameters obtained by fitting solute transport data. MSM simulations indicated that surface-subsurface exchange through smaller topographic features such as ripples was much faster than exchange through larger topographic features such as bars. However, hyporheic exchange varies nonlinearly with groundwater discharge owing to interactions between flows induced at different topographic scales. MSM simulations showed that groundwater discharge significantly decreased both the volume of water entering the subsurface and the time it spent in the subsurface. The MSM also characterized longer timescales of exchange than were observed by the tracer-injection approach. The tracer data, and corresponding TSM fits, were limited by tracer measurement sensitivity and uncertainty in estimates of background tracer concentrations. Our results indicate that rates and patterns of hyporheic exchange are strongly influenced by a continuum of surface-subsurface hydrologic interactions over a wide range of spatial and temporal scales rather than discrete processes.
Reduced-Order Biogeochemical Flux Model for High-Resolution Multi-Scale Biophysical Simulations
NASA Astrophysics Data System (ADS)
Smith, Katherine; Hamlington, Peter; Pinardi, Nadia; Zavatarelli, Marco
2017-04-01
Biogeochemical tracers and their interactions with upper ocean physical processes such as submesoscale circulations and small-scale turbulence are critical for understanding the role of the ocean in the global carbon cycle. These interactions can cause small-scale spatial and temporal heterogeneity in tracer distributions that can, in turn, greatly affect carbon exchange rates between the atmosphere and interior ocean. For this reason, it is important to take into account small-scale biophysical interactions when modeling the global carbon cycle. However, explicitly resolving these interactions in an earth system model (ESM) is currently infeasible due to the enormous associated computational cost. As a result, understanding and subsequently parameterizing how these small-scale heterogeneous distributions develop and how they relate to larger resolved scales is critical for obtaining improved predictions of carbon exchange rates in ESMs. In order to address this need, we have developed the reduced-order, 17 state variable Biogeochemical Flux Model (BFM-17) that follows the chemical functional group approach, which allows for non-Redfield stoichiometric ratios and the exchange of matter through units of carbon, nitrate, and phosphate. This model captures the behavior of open-ocean biogeochemical systems without substantially increasing computational cost, thus allowing the model to be combined with computationally-intensive, fully three-dimensional, non-hydrostatic large eddy simulations (LES). In this talk, we couple BFM-17 with the Princeton Ocean Model and show good agreement between predicted monthly-averaged results and Bermuda testbed area field data (including the Bermuda-Atlantic Time-series Study and Bermuda Testbed Mooring). Through these tests, we demonstrate the capability of BFM-17 to accurately model open-ocean biochemistry. Additionally, we discuss the use of BFM-17 within a multi-scale LES framework and outline how this will further our understanding of turbulent biophysical interactions in the upper ocean.
NASA Astrophysics Data System (ADS)
Du, Wenbo
A common attribute of electric-powered aerospace vehicles and systems such as unmanned aerial vehicles, hybrid- and fully-electric aircraft, and satellites is that their performance is usually limited by the energy density of their batteries. Although lithium-ion batteries offer distinct advantages such as high voltage and low weight over other battery technologies, they are a relatively new development, and thus significant gaps in the understanding of the physical phenomena that govern battery performance remain. As a result of this limited understanding, batteries must often undergo a cumbersome design process involving many manual iterations based on rules of thumb and ad-hoc design principles. A systematic study of the relationship between operational, geometric, morphological, and material-dependent properties and performance metrics such as energy and power density is non-trivial due to the multiphysics, multiphase, and multiscale nature of the battery system. To address these challenges, two numerical frameworks are established in this dissertation: a process for analyzing and optimizing several key design variables using surrogate modeling tools and gradient-based optimizers, and a multi-scale model that incorporates more detailed microstructural information into the computationally efficient but limited macro-homogeneous model. In the surrogate modeling process, multi-dimensional maps for the cell energy density with respect to design variables such as the particle size, ion diffusivity, and electron conductivity of the porous cathode material are created. A combined surrogate- and gradient-based approach is employed to identify optimal values for cathode thickness and porosity under various operating conditions, and quantify the uncertainty in the surrogate model. The performance of multiple cathode materials is also compared by defining dimensionless transport parameters. The multi-scale model makes use of detailed 3-D FEM simulations conducted at the particle-level. A monodisperse system of ellipsoidal particles is used to simulate the effective transport coefficients and interfacial reaction current density within the porous microstructure. Microscopic simulation results are shown to match well with experimental measurements, while differing significantly from homogenization approximations used in the macroscopic model. Global sensitivity analysis and surrogate modeling tools are applied to couple the two length scales and complete the multi-scale model.
Application of a Multiscale Model of Tantalum Deformation at Megabar Pressures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cavallo, R M; Park, H; Barton, N R
A new multiscale simulation tool has been developed to model the strength of tantalum under high-pressure dynamic compression. This new model combines simulations at multiple length scales to explain macroscopic properties of materials. Previously known continuum models of material response under load have built upon a mixture of theoretical physics and experimental phenomenology. Experimental data, typically measured at static pressures, are used as a means of calibration to construct models that parameterize the material properties; e.g., yield stress, work hardening, strain-rate dependence, etc. The pressure dependence for most models enters through the shear modulus, which is used to scale the flow stress. When these models are applied to data taken far outside the calibrated regions of phase space (e.g., strain rate or pressure) they often diverge in their predicted behavior of material deformation. The new multiscale model, developed at Lawrence Livermore National Laboratory, starts with an interatomic quantum mechanical potential and is based on the motion and multiplication of dislocations. The basis for the macroscale model is plastic deformation by phonon drag and thermally activated dislocation motion and strain hardening resulting from elastic interactions among dislocations. The dislocation density, ρ, and dislocation velocity, ν, are connected to the plastic strain rate, ε^p, via Orowan's equation: ε^p = ρbν/M, where b is the Burgers vector, the shear magnitude associated with a dislocation, and M is the Taylor factor, which accounts for geometric effects in how slip systems accommodate the deformation. The evolution of the dislocation density and velocity is carried out in the continuum model by parameterized fits to smaller-scale simulations, each informed by calculations on smaller length scales down to atomistic dimensions. We apply this new model for tantalum to two sets of experiments and compare the results with more traditional models. The experiments are based on the Barnes technique, in which a low-density material loads against a metal surface containing a pre-imposed rippled pattern. The loaded sample is Rayleigh-Taylor unstable and the ripple amplitudes grow with time. The rate of growth differs depending on the material strength, with stronger materials growing less, even to the point of saturation. One set of experiments was conducted at the pRad facility at LANSCE at Los Alamos National Laboratory in 2007 using high-explosive (HE) driven tantalum samples. The other set of experiments was done at the Omega laser at the Laboratory for Laser Energetics at the University of Rochester, which used high-powered lasers to create plasmas to dynamically compress a rippled tantalum sample. The two techniques provide data at different pressures and strain rates: the HE technique drives the samples at around 2 × 10^5 s^-1 strain rate and pressures near 500 kbar, while the laser technique reaches strain rates around 2 × 10^7 s^-1 and pressures close to 1.4 Mbar. The most recent laser experiments were conducted in February 2010; a sample of the data is shown in Figure 1, a face-on radiograph taken 65 ns after the laser was turned on. From this radiograph, the growth factor is measured, defined as the change in amplitude of the ripples relative to their initial amplitude. Figure 2 shows the resulting growth factors along with various model fits.
The error bars are typically 20-25%. Only the multiscale model predictions match the experimental measurements. The growth factors via the HE technique are determined from multiple side-on proton radiography images and thus provide a full growth curve per single experiment. A sample growth curve is shown in Figure 3, also with various model fits and error bars estimated at 25%. It should be noted that by 7.5 µs the growth in this sample has exceeded the initial target thickness, indicating that localizations not captured in the overall simulation have probably become dominant, i.e., the target is likely breaking up. Application of the multiscale dislocation dynamics model, as implemented in the Ares hydrodynamics code, shows excellent agreement with both the pRad and Omega data. The Steinberg-Lund (SL), Preston-Tonks-Wallace (PTW), and Steinberg-Guinan (SG) models are also compared with the data. The PTW and SG models provide good fits to the pRad data but over-predict the growth (underestimate the strength) on the laser platform. The SL model under-predicts the pRad data and over-predicts the Omega data. The excellent agreement of the multiscale model with the data over two orders of magnitude in strain rate and more than a factor of two in pressure lends credibility to the model. The model will be stressed further by experiments at 5 Mbar and beyond, planned at the National Ignition Facility at LLNL in the near future.
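A small worked example of the Orowan relation quoted above; the input magnitudes are illustrative placeholders rather than the tantalum values used in the multiscale model.

```python
def orowan_plastic_strain_rate(rho, b, v, M):
    """Plastic strain rate from Orowan's equation, eps_p_dot = rho * b * v / M.

    rho : mobile dislocation density [1/m^2]
    b   : Burgers vector magnitude [m]
    v   : mean dislocation velocity [m/s]
    M   : Taylor factor (dimensionless)
    """
    return rho * b * v / M

# Illustrative magnitudes only (not the paper's parameters):
# rho = 1e14 m^-2, b = 2.86e-10 m, v = 10 m/s, M = 3.06
rate = orowan_plastic_strain_rate(1e14, 2.86e-10, 10.0, 3.06)   # ~9e4 s^-1
```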
A Comprehensive Two-moment Warm Microphysical Bulk Scheme :
NASA Astrophysics Data System (ADS)
Caro, D.; Wobrock, W.; Flossmann, A.; Chaumerliac, N.
The microphysical properties of gases, aerosol particles, and hydrometeors have implications at the local scale (precipitation, pollution peaks, ...), at the regional scale (flooding, acid rain, ...), and at the global scale (radiative forcing, ...). A multi-scale approach is therefore needed to understand and forecast cloud-related meteorological phenomena. However, such studies cannot be carried out with detailed microphysical models because of computational limitations, so microphysical bulk schemes must estimate the large-scale properties of clouds that arise from smaller-scale processes and characteristics. Developing such bulk schemes is thus important both for improving our knowledge of the Earth's climate and for forecasting intense meteorological events. Here, a quasi-spectral warm microphysical bulk scheme has been developed to predict the concentrations and mixing ratios of aerosols, cloud droplets and raindrops. It treats, explicitly and analytically, droplet nucleation (Abdul-Razzak et al., 2000), condensation/evaporation (Chaumerliac et al., 1987), breakup and collision-coalescence with the Long (1974) kernels and the Berry and Reinhardt (1974) autoconversion parameterization, as well as aerosol and gas scavenging. The parameterization was first evaluated in the simple dynamical framework of an air parcel model against results from the detailed scavenging model DESCAM (Flossmann et al., 1985). It was then tested in the dynamical framework of a kinematic model (Szumowski et al., 1998) dedicated to the HaRP campaign (Hawaiian Rainband Project, 1990), against both observations and results from the two-dimensional detailed microphysical scheme DESCAM 2-D (Flossmann et al., 1988) implemented in the Clark model (Clark and Farley, 1984).
Coupled fvGCM-GCE Modeling System, TRMM Latent Heating and Cloud Library
NASA Technical Reports Server (NTRS)
Tao, Wei-Kuo
2004-01-01
Recent GEWEX Cloud System Study (GCSS) model comparison projects have indicated that cloud-resolving models (CRMs) agree with observations better than traditional single-column models in simulating various types of clouds and cloud systems from different geographic locations. Current and future NASA satellite programs can provide cloud, precipitation, aerosol and other data at very fine spatial and temporal scales. It requires a coupled global circulation model (GCM) and cloud-scale model (termed a super-parameterization or multi-scale modeling framework, MMF) to use these satellite data to improve the understanding of the physical processes that are responsible for the variation in global and regional climate and hydrological systems. The use of a GCM will enable global coverage, and the use of a CRM will allow for better and more sophisticated physical parameterization. NASA satellite and field campaign cloud related datasets can provide initial conditions as well as validation for both the MMF and CRMs. A seed fund is available at NASA Goddard to build an MMF based on the 2D GCE model and the Goddard finite volume general circulation model (fvGCM). A prototype MMF will be developed by the end of 2004 and production runs will be conducted at the beginning of 2005. The purpose of this proposal is to augment the current Goddard MMF and other cloud modeling activities. In this talk, I will present: (1) A summary of the second Cloud Modeling Workshop held at NASA Goddard, (2) A summary of the third TRMM Latent Heating Workshop held in Nara, Japan, (3) A brief discussion of the Goddard research plan for using the Weather Research and Forecasting (WRF) model, and (4) A brief discussion of using the GCE model to develop a global cloud simulator.
Coupled fvGCM-GCE Modeling System: TRMM Latent Heating and Cloud Library
NASA Technical Reports Server (NTRS)
Tao, Wei-Kuo
2005-01-01
Recent GEWEX Cloud System Study (GCSS) model comparison projects have indicated that cloud-resolving models (CRMs) agree with observations better than traditional single-column models in simulating various types of clouds and cloud systems from different geographic locations. Current and future NASA satellite programs can provide cloud, precipitation, aerosol and other data at very fine spatial and temporal scales. It requires a coupled global circulation model (GCM) and cloud-scale model (termed a super-parameterization or multi-scale modeling framework, MMF) to use these satellite data to improve the understanding of the physical processes that are responsible for the variation in global and regional climate and hydrological systems. The use of a GCM will enable global coverage, and the use of a CRM will allow for better and more sophisticated physical parameterization. NASA satellite and field campaign cloud related datasets can provide initial conditions as well as validation for both the MMF and CRMs. A seed fund is available at NASA Goddard to build an MMF based on the 2D GCE model and the Goddard finite volume general circulation model (fvGCM). A prototype MMF will be developed by the end of 2004 and production runs will be conducted at the beginning of 2005. The purpose of this proposal is to augment the current Goddard MMF and other cloud modeling activities. In this talk, I will present: (1) A summary of the second Cloud Modeling Workshop held at NASA Goddard, (2) A summary of the third TRMM Latent Heating Workshop held in Nara, Japan, and (3) A brief discussion of using the GCE model to develop a global cloud simulator.
Coupled fvGCM-GCE Modeling System, 3D Cloud-Resolving Model and Cloud Library
NASA Technical Reports Server (NTRS)
Tao, Wei-Kuo
2005-01-01
Recent GEWEX Cloud System Study (GCSS) model comparison projects have indicated that cloud-resolving models (CRMs) agree with observations better than traditional single-column models in simulating various types of clouds and cloud systems from different geographic locations. Current and future NASA satellite programs can provide cloud, precipitation, aerosol and other data at very fine spatial and temporal scales. It requires a coupled global circulation model (GCM) and cloud-scale model (termed a super-parameterization or multi-scale modeling framework, MMF) to use these satellite data to improve the understanding of the physical processes that are responsible for the variation in global and regional climate and hydrological systems. The use of a GCM will enable global coverage, and the use of a CRM will allow for better and more sophisticated physical parameterization. NASA satellite and field campaign cloud related datasets can provide initial conditions as well as validation for both the MMF and CRMs. A seed fund is available at NASA Goddard to build an MMF based on the 2D Goddard Cumulus Ensemble (GCE) model and the Goddard finite volume general circulation model (fvGCM). A prototype MMF is being developed and production runs will be conducted at the beginning of 2005. In this talk, I will present: (1) A brief review of the GCE model and its applications to precipitation processes, (2) The Goddard MMF and the major differences between the two existing MMFs (the CSU MMF and the Goddard MMF), (3) A cloud library generated by the Goddard MMF and the 3D GCE model, and (4) A brief discussion of using the GCE model to develop a global cloud simulator.
NASA Technical Reports Server (NTRS)
Hou, Arthur Y.; Einaudi, Franco (Technical Monitor)
2001-01-01
I will discuss the need for accurate rainfall observations to improve our ability to model the Earth's climate and improve short-range weather forecasts. I will give an overview of the recent progress in the use of rainfall data provided by TRMM and other microwave instruments in data assimilation to improve global analyses and diagnose state-dependent systematic errors in physical parameterizations. I will outline the current and future research strategies in preparation for the Global Precipitation Mission.
NASA Technical Reports Server (NTRS)
Frenklach, Michael; Wang, Hai; Rabinowitz, Martin J.
1992-01-01
A method of systematic optimization, solution mapping, as applied to a large-scale dynamic model is presented. The basis of the technique is parameterization of model responses in terms of model parameters by simple algebraic expressions. These expressions are obtained by computer experiments arranged in a factorial design. The developed parameterized responses are then used in a joint multiparameter multidata-set optimization. A brief review of the mathematical background of the technique is given. The concept of active parameters is discussed. The technique is applied to determine an optimum set of parameters for a methane combustion mechanism. Five independent responses - comprising ignition delay times, pre-ignition methyl radical concentration profiles, and laminar premixed flame velocities - were optimized with respect to thirteen reaction rate parameters. The numerical predictions of the optimized model are compared to those computed with several recent literature mechanisms. The utility of the solution mapping technique in situations where the optimum is not unique is also demonstrated.
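A minimal sketch of the solution-mapping workflow described above: run the model at the points of a factorial design, fit simple algebraic (here quadratic) response surfaces in the normalized rate parameters, and use the cheap surfaces in the optimization. The toy "model", the two-parameter design, and the pseudo-observed target are assumptions for illustration, not the methane mechanism or responses of the paper.

```python
import itertools
import numpy as np
from scipy.optimize import minimize

# Toy stand-in for an expensive kinetic simulation: one response as a function of
# two rate parameters k1, k2 (the paper optimizes five responses over thirteen rates)
def run_model(k1, k2):
    return np.log(k1) - 0.5 * np.log(k2) + 0.05 * np.log(k1) * np.log(k2)

# Factorial design in normalized factors x = log(k / k0) / log(f)
k0, f = (1.0e13, 1.0e8), 2.0
levels = [-1.0, 0.0, 1.0]
design = np.array(list(itertools.product(levels, levels)))
responses = np.array([run_model(k0[0] * f ** x1, k0[1] * f ** x2) for x1, x2 in design])

# Quadratic response surface eta(x) = c0 + c1*x1 + c2*x2 + c12*x1*x2 + c11*x1**2 + c22*x2**2
X = np.column_stack([np.ones(len(design)), design[:, 0], design[:, 1],
                     design[:, 0] * design[:, 1], design[:, 0] ** 2, design[:, 1] ** 2])
coef, *_ = np.linalg.lstsq(X, responses, rcond=None)

def surrogate(x):
    x1, x2 = x
    return coef @ np.array([1.0, x1, x2, x1 * x2, x1 ** 2, x2 ** 2])

# Optimization against a pseudo-observed target using the cheap algebraic surface
target = run_model(k0[0] * f ** 0.4, k0[1] * f ** -0.3)
opt = minimize(lambda x: (surrogate(x) - target) ** 2, x0=[0.0, 0.0],
               bounds=[(-1.0, 1.0), (-1.0, 1.0)])
```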
Multiscale Persistent Functions for Biomolecular Structure Characterization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xia, Kelin; Li, Zhiming; Mu, Lin
Here in this paper, we introduce multiscale persistent functions for biomolecular structure characterization. The essential idea is to combine our multiscale rigidity functions (MRFs) with persistent homology analysis, so as to construct a series of multiscale persistent functions, particularly multiscale persistent entropies, for structure characterization. To clarify the fundamental idea of our method, the multiscale persistent entropy (MPE) model is discussed in great detail. Mathematically, unlike the previous persistent entropy (Chintakunta et al. in Pattern Recognit 48(2):391–401, 2015; Merelli et al. in Entropy 17(10):6872–6892, 2015; Rucco et al. in: Proceedings of ECCS 2014, Springer, pp 117–128, 2016), a special resolution parameter is incorporated into our model. Various scales can be achieved by tuning its value. Physically, our MPE can be used in conformational entropy evaluation. More specifically, it is found that our method incorporates in it a natural classification scheme. This is achieved through a density filtration of an MRF built from angular distributions. To further validate our model, a systematic comparison with the traditional entropy evaluation model is done. Additionally, it is found that our model is able to preserve the intrinsic topological features of biomolecular data much better than traditional approaches, particularly for resolutions in the intermediate range. Moreover, by comparing with traditional entropies from various grid sizes, bond angle-based methods and a persistent homology-based support vector machine method (Cang et al. in Mol Based Math Biol 3:140–162, 2015), we find that our MPE method gives the best results in terms of average true positive rate in a classic protein structure classification test. More interestingly, all-alpha and all-beta protein classes can be clearly separated from each other with zero error only in our model. Finally, a special protein structure index (PSI) is proposed, for the first time, to describe the “regularity” of protein structures. Basically, a protein structure is deemed as regular if it has a consistent and orderly configuration. Our PSI model is tested on a database of 110 proteins; we find that structures with larger portions of loops and intrinsically disordered regions are always associated with larger PSI, meaning an irregular configuration, while proteins with larger portions of secondary structures, i.e., alpha-helix or beta-sheet, have smaller PSI. Essentially, PSI can be used to describe the “regularity” information in any system.
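For orientation, the sketch below computes the ordinary persistent entropy of a persistence barcode (the Shannon entropy of normalized bar lengths), which is the quantity the multiscale persistent entropy generalizes by sweeping the resolution parameter of the multiscale rigidity function; that multiscale layer is not reproduced here, and the example barcodes are made up.

```python
import numpy as np

def persistent_entropy(barcode):
    """Shannon entropy of normalized bar lengths in a persistence barcode.

    barcode: iterable of (birth, death) pairs with finite death values.
    """
    lengths = np.array([d - b for b, d in barcode], dtype=float)
    p = lengths / lengths.sum()
    return -np.sum(p * np.log(p))

# A barcode dominated by one long bar gives low entropy,
# while bars of similar length give high entropy.
print(persistent_entropy([(0.0, 5.0), (0.0, 0.5), (0.1, 0.6)]))
print(persistent_entropy([(0.0, 1.0), (0.2, 1.2), (0.5, 1.5)]))
```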
Cell-ECM Interactions During Cancer Invasion
NASA Astrophysics Data System (ADS)
Jiang, Yi
The extracellular matrix (ECM), a fibrous material that forms a network in a tissue, significantly affects many aspects of cellular behavior, including cell movement and proliferation. Transgenic mouse tumor studies indicate that excess collagen, a major component of ECM, enhances tumor formation and invasiveness. Clinically, tumor associated collagen signatures are strong markers for breast cancer survival. However, the underlying mechanisms are unclear since the properties of ECM are complex, with diverse structural and mechanical properties depending on various biophysical parameters. We have developed a three-dimensional elastic fiber network model, and parameterized it with in vitro collagen mechanics. Using this model, we study ECM remodeling as a result of local deformation and cell migration through the ECM as a network percolation problem. We have also developed a three-dimensional, multiscale model of cell migration and interaction with ECM. Our model reproduces quantitative single cell migration experiments. This model is a first step toward a fully biomechanical cell-matrix interaction model and may shed light on tumor associated collagen signatures in breast cancer. This work was partially supported by NIH-U01CA143069.
Knightes, C D; Golden, H E; Journey, C A; Davis, G M; Conrads, P A; Marvin-DiPasquale, M; Brigham, M E; Bradley, P M
2014-04-01
Mercury is a ubiquitous global environmental toxicant responsible for most US fish advisories. Processes governing mercury concentrations in rivers and streams are not well understood, particularly at multiple spatial scales. We investigate how insights gained from reach-scale mercury data and model simulations can be applied at broader watershed scales using a spatially and temporally explicit watershed hydrology and biogeochemical cycling model, VELMA. We simulate fate and transport using reach-scale (0.1 km(2)) study data and evaluate applications to multiple watershed scales. Reach-scale VELMA parameterization was applied to two nested sub-watersheds (28 km(2) and 25 km(2)) and the encompassing watershed (79 km(2)). Results demonstrate that simulated flow and total mercury concentrations compare reasonably to observations at different scales, but simulated methylmercury concentrations are out-of-phase with observations. These findings suggest that intricacies of methylmercury biogeochemical cycling and transport are under-represented in VELMA and underscore the complexity of simulating mercury fate and transport. Published by Elsevier Ltd.
A Mass Diffusion Model for Dry Snow Utilizing a Fabric Tensor to Characterize Anisotropy
NASA Astrophysics Data System (ADS)
Shertzer, Richard H.; Adams, Edward E.
2018-03-01
A homogenization algorithm for randomly distributed microstructures is applied to develop a mass diffusion model for dry snow. Homogenization is a multiscale approach linking constituent behavior at the microscopic level—among ice and air—to the macroscopic material—snow. Principles of continuum mechanics at the microscopic scale describe water vapor diffusion across an ice grain's surface to the air-filled pore space. Volume averaging and a localization assumption scale up and down, respectively, between microscopic and macroscopic scales. The model yields a mass diffusivity expression at the macroscopic scale that is, in general, a second-order tensor parameterized by both bulk and microstructural variables. The model predicts a mass diffusivity of water vapor through snow that is less than that through air. Mass diffusivity is expected to decrease linearly with ice volume fraction. Potential anisotropy in snow's mass diffusivity is captured due to the tensor representation. The tensor is built from directional data assigned to specific, idealized microstructural features. Such anisotropy has been observed in the field and laboratories in snow morphologies of interest such as weak layers of depth hoar and near-surface facets.
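To make the tensor construction concrete, here is a schematic sketch only: a second-order fabric tensor is assembled from directional microstructural data and used to distribute an isotropically reduced diffusivity over directions. The functional form D = D_air * (1 - phi_ice) * 3A is an illustrative assumption consistent with the qualitative statements above, not the expression derived in the paper.

```python
import numpy as np

def fabric_tensor(directions):
    """Second-order fabric tensor from unit direction vectors of microstructural
    features (e.g., bond or pore axes): A = <n (x) n>, normalized so trace(A) = 1."""
    n = np.asarray(directions, dtype=float)
    n = n / np.linalg.norm(n, axis=1, keepdims=True)
    return np.einsum('ki,kj->ij', n, n) / len(n)

def schematic_snow_diffusivity(D_air, phi_ice, directions):
    """Schematic anisotropic mass diffusivity: an isotropic reduction with ice
    volume fraction, distributed over directions by the fabric tensor (the factor
    of 3 makes an isotropic fabric recover D_air*(1-phi_ice)*I). Illustrative only."""
    return D_air * (1.0 - phi_ice) * 3.0 * fabric_tensor(directions)

# Vertically aligned features (depth-hoar-like) give a larger vertical component
dirs = np.array([[0.0, 0.0, 1.0], [0.1, 0.0, 1.0], [0.0, 0.1, 1.0], [1.0, 0.0, 0.0]])
D = schematic_snow_diffusivity(D_air=2.2e-5, phi_ice=0.3, directions=dirs)
```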
NASA Astrophysics Data System (ADS)
Chen, Chen; Arntsen, Christopher; Voth, Gregory A.
2017-10-01
Incorporation of quantum mechanical electronic structure data is necessary to properly capture the physics of many chemical processes. Proton hopping in water, which involves rearrangement of chemical and hydrogen bonds, is one such example of an inherently quantum mechanical process. Standard ab initio molecular dynamics (AIMD) methods, however, do not yet accurately predict the structure of water and are therefore less than optimal for developing force fields. We have instead utilized a recently developed method which minimally biases AIMD simulations to match limited experimental data to develop novel multiscale reactive molecular dynamics (MS-RMD) force fields by using relative entropy minimization. In this paper, we present two new MS-RMD models using such a parameterization: one which employs water with harmonic internal vibrations and another which uses anharmonic water. We show that the newly developed MS-RMD models very closely reproduce the solvation structure of the hydrated excess proton in the target AIMD data. We also find that the use of anharmonic water increases proton hopping, thereby increasing the proton diffusion constant.
Reconstruction of late Holocene climate based on tree growth and mechanistic hierarchical models
Tipton, John; Hooten, Mevin B.; Pederson, Neil; Tingley, Martin; Bishop, Daniel
2016-01-01
Reconstruction of pre-instrumental, late Holocene climate is important for understanding how climate has changed in the past and how climate might change in the future. Statistical prediction of paleoclimate from tree ring widths is challenging because tree ring widths are a one-dimensional summary of annual growth that represents a multi-dimensional set of climatic and biotic influences. We develop a Bayesian hierarchical framework using a nonlinear, biologically motivated tree ring growth model to jointly reconstruct temperature and precipitation in the Hudson Valley, New York. Using a common growth function to describe the response of a tree to climate, we allow for species-specific parameterizations of the growth response. To enable predictive backcasts, we model the climate variables with a vector autoregressive process on an annual timescale coupled with a multivariate conditional autoregressive process that accounts for temporal correlation and cross-correlation between temperature and precipitation on a monthly scale. Our multi-scale temporal model allows for flexibility in the climate response through time at different temporal scales and predicts reasonable climate scenarios given tree ring width data.
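As a small illustration of the annual-scale climate layer described above, the sketch below simulates a bivariate VAR(1) process for temperature and precipitation anomalies; the coefficient and covariance values are placeholders, and the monthly conditional autoregressive layer and the tree-ring growth likelihood of the hierarchical model are omitted.

```python
import numpy as np

rng = np.random.default_rng(42)

# Bivariate VAR(1) on an annual time step:  z_t = A z_{t-1} + e_t,  e_t ~ N(0, Sigma)
# Coefficients are illustrative placeholders, not values estimated in the paper.
A = np.array([[0.5, 0.1],
              [0.0, 0.3]])
Sigma = np.array([[1.0, 0.3],
                  [0.3, 1.5]])
L = np.linalg.cholesky(Sigma)

def simulate_var1(n_years, z0=np.zeros(2)):
    z = np.empty((n_years, 2))
    z_prev = np.array(z0, dtype=float)
    for t in range(n_years):
        z_prev = A @ z_prev + L @ rng.standard_normal(2)
        z[t] = z_prev
    return z

climate = simulate_var1(500)   # columns: temperature anomaly, precipitation anomaly
```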
Samant, Asawari; Ogunnaike, Babatunde A; Vlachos, Dionisios G
2007-05-24
The fundamental role that intrinsic stochasticity plays in cellular functions has been shown via numerous computational and experimental studies. In the face of such evidence, it is important that intracellular networks are simulated with stochastic algorithms that can capture molecular fluctuations. However, separation of time scales and disparity in species population, two common features of intracellular networks, make stochastic simulation of such networks computationally prohibitive. While recent work has addressed each of these challenges separately, a generic algorithm that can simultaneously tackle disparity in time scales and population scales in stochastic systems is currently lacking. In this paper, we propose the hybrid, multiscale Monte Carlo (HyMSMC) method that fills in this void. The proposed HyMSMC method blends stochastic singular perturbation concepts, to deal with potential stiffness, with a hybrid of exact and coarse-grained stochastic algorithms, to cope with separation in population sizes. In addition, we introduce the computational singular perturbation (CSP) method as a means of systematically partitioning fast and slow networks and computing relaxation times for convergence. We also propose a new criterion of convergence of fast networks to stochastic low-dimensional manifolds, which further accelerates the algorithm. We use several prototype and biological examples, including a gene expression model displaying bistability, to demonstrate the efficiency, accuracy and applicability of the HyMSMC method. Bistable models serve as stringent tests for the success of multiscale MC methods and illustrate limitations of some literature methods.
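For context, the exact stochastic simulation that hybrid multiscale schemes such as HyMSMC accelerate is Gillespie's direct method; a compact sketch on a toy fast/slow network is given below. The network, rate constants, and lack of any fast/slow partitioning are illustrative assumptions; the hybrid decomposition and CSP analysis of the paper are not reproduced.

```python
import numpy as np

def gillespie_ssa(x0, stoich, propensity, t_end, rng=np.random.default_rng(7)):
    """Exact stochastic simulation (Gillespie's direct method).

    x0         : initial copy numbers
    stoich     : (n_reactions, n_species) stoichiometry matrix
    propensity : function x -> array of reaction propensities
    """
    t, x = 0.0, np.array(x0, dtype=float)
    times, states = [t], [x.copy()]
    while t < t_end:
        a = propensity(x)
        a0 = a.sum()
        if a0 <= 0:
            break
        t += rng.exponential(1.0 / a0)          # waiting time to the next reaction
        r = rng.choice(len(a), p=a / a0)        # which reaction fires
        x += stoich[r]
        times.append(t)
        states.append(x.copy())
    return np.array(times), np.array(states)

# Toy fast/slow network: fast reversible binding A <-> B, slow conversion B -> C
stoich = np.array([[-1,  1, 0],    # A -> B   (fast)
                   [ 1, -1, 0],    # B -> A   (fast)
                   [ 0, -1, 1]])   # B -> C   (slow)
k = np.array([1e3, 1e3, 1.0])      # disparate rate constants
prop = lambda x: k * np.array([x[0], x[1], x[1]])
times, states = gillespie_ssa([100, 0, 0], stoich, prop, t_end=2.0)
```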
Modeling the formation and aging of secondary organic aerosols in Los Angeles during CalNex 2010
Hayes, P. L.; Carlton, A. G.; Baker, K. R.; ...
2015-05-26
Four different literature parameterizations for the formation and evolution of urban secondary organic aerosol (SOA) frequently used in 3-D models are evaluated using a 0-D box model representing the Los Angeles metropolitan region during the California Research at the Nexus of Air Quality and Climate Change (CalNex) 2010 campaign. We constrain the model predictions with measurements from several platforms and compare predictions with particle- and gas-phase observations from the CalNex Pasadena ground site. That site provides a unique opportunity to study aerosol formation close to anthropogenic emission sources with limited recirculation. The model SOA that formed only from the oxidation of VOCs (V-SOA) is insufficient to explain the observed SOA concentrations, even when using SOA parameterizations with multi-generation oxidation that produce much higher yields than have been observed in chamber experiments, or when increasing yields to their upper limit estimates accounting for recently reported losses of vapors to chamber walls. The Community Multiscale Air Quality (WRF-CMAQ) model (version 5.0.1) provides excellent predictions of secondary inorganic particle species but underestimates the observed SOA mass by a factor of 25 when an older VOC-only parameterization is used, which is consistent with many previous model–measurement comparisons for pre-2007 anthropogenic SOA modules in urban areas. Including SOA from primary semi-volatile and intermediate-volatility organic compounds (P-S/IVOCs) following the parameterizations of Robinson et al. (2007), Grieshop et al. (2009), or Pye and Seinfeld (2010) improves model–measurement agreement for mass concentration. The results from the three parameterizations show large differences (e.g., a factor of 3 in SOA mass) and are not well constrained, underscoring the current uncertainties in this area. Our results strongly suggest that other precursors besides VOCs, such as P-S/IVOCs, are needed to explain the observed SOA concentrations in Pasadena. All the recent parameterizations overpredict urban SOA formation at long photochemical ages (≈ 3 days) compared to observations from multiple sites, which can lead to problems in regional and especially global modeling. However, reducing IVOC emissions by one-half in the model to better match recent IVOC measurements improves SOA predictions at these long photochemical ages. Among the explicitly modeled VOCs, the precursor compounds that contribute the greatest SOA mass are methylbenzenes. Measured polycyclic aromatic hydrocarbons (naphthalenes) contribute 0.7% of the modeled SOA mass. The amounts of SOA mass from diesel vehicles, gasoline vehicles, and cooking emissions are estimated to be 16–27, 35–61, and 19–35%, respectively, depending on the parameterization used, which is consistent with the observed fossil fraction of urban SOA, 71(±3) %. The relative contribution of each source is uncertain by almost a factor of 2 depending on the parameterization used. In-basin biogenic VOCs are predicted to contribute only a few percent to SOA. A regional SOA background of approximately 2.1 μg m-3 is also present due to the long-distance transport of highly aged OA, likely with a substantial contribution from regional biogenic SOA. The percentage of SOA from diesel vehicle emissions is the same, within the estimated uncertainty, as reported in previous work that analyzed the weekly cycles in OA concentrations (Bahreini et al., 2012; Hayes et al., 2013).
However, the modeling work presented here suggests a strong anthropogenic source of modern carbon in SOA, due to cooking emissions, which was not accounted for in those previous studies and which is higher on weekends. Lastly, this work adapts a simple two-parameter model to predict SOA concentration and O/C from urban emissions. This model successfully predicts SOA concentration, and the optimal parameter combination is very similar to that found for Mexico City. This approach provides a computationally inexpensive method for predicting urban SOA in global and climate models. We estimate pollution SOA to account for 26 Tg yr-1 of SOA globally, or 17% of global SOA, one-third of which is likely to be non-fossil.
Modeling the formation and aging of secondary organic aerosols in Los Angeles during CalNex 2010
NASA Astrophysics Data System (ADS)
Hayes, P. L.; Carlton, A. G.; Baker, K. R.; Ahmadov, R.; Washenfelder, R. A.; Alvarez, S.; Rappengluck, B.; Gilman, J. B.; Kuster, W. C.; de Gouw, J. A.; Zotter, P.; Prevot, A. S. H.; Szidat, S.; Kleindienst, T. E.; Offenberg, J. H.; Ma, P. K.; Jimenez, J. L.
2015-05-01
Four different literature parameterizations for the formation and evolution of urban secondary organic aerosol (SOA) frequently used in 3-D models are evaluated using a 0-D box model representing the Los Angeles metropolitan region during the California Research at the Nexus of Air Quality and Climate Change (CalNex) 2010 campaign. We constrain the model predictions with measurements from several platforms and compare predictions with particle- and gas-phase observations from the CalNex Pasadena ground site. That site provides a unique opportunity to study aerosol formation close to anthropogenic emission sources with limited recirculation. The model SOA that formed only from the oxidation of VOCs (V-SOA) is insufficient to explain the observed SOA concentrations, even when using SOA parameterizations with multi-generation oxidation that produce much higher yields than have been observed in chamber experiments, or when increasing yields to their upper limit estimates accounting for recently reported losses of vapors to chamber walls. The Community Multiscale Air Quality (WRF-CMAQ) model (version 5.0.1) provides excellent predictions of secondary inorganic particle species but underestimates the observed SOA mass by a factor of 25 when an older VOC-only parameterization is used, which is consistent with many previous model-measurement comparisons for pre-2007 anthropogenic SOA modules in urban areas. Including SOA from primary semi-volatile and intermediate-volatility organic compounds (P-S/IVOCs) following the parameterizations of Robinson et al. (2007), Grieshop et al. (2009), or Pye and Seinfeld (2010) improves model-measurement agreement for mass concentration. The results from the three parameterizations show large differences (e.g., a factor of 3 in SOA mass) and are not well constrained, underscoring the current uncertainties in this area. Our results strongly suggest that other precursors besides VOCs, such as P-S/IVOCs, are needed to explain the observed SOA concentrations in Pasadena. All the recent parameterizations overpredict urban SOA formation at long photochemical ages (~ 3 days) compared to observations from multiple sites, which can lead to problems in regional and especially global modeling. However, reducing IVOC emissions by one-half in the model to better match recent IVOC measurements improves SOA predictions at these long photochemical ages. Among the explicitly modeled VOCs, the precursor compounds that contribute the greatest SOA mass are methylbenzenes. Measured polycyclic aromatic hydrocarbons (naphthalenes) contribute 0.7% of the modeled SOA mass. The amounts of SOA mass from diesel vehicles, gasoline vehicles, and cooking emissions are estimated to be 16-27, 35-61, and 19-35%, respectively, depending on the parameterization used, which is consistent with the observed fossil fraction of urban SOA, 71(±3) %. The relative contribution of each source is uncertain by almost a factor of 2 depending on the parameterization used. In-basin biogenic VOCs are predicted to contribute only a few percent to SOA. A regional SOA background of approximately 2.1 μg m-3 is also present due to the long-distance transport of highly aged OA, likely with a substantial contribution from regional biogenic SOA. The percentage of SOA from diesel vehicle emissions is the same, within the estimated uncertainty, as reported in previous work that analyzed the weekly cycles in OA concentrations (Bahreini et al., 2012; Hayes et al., 2013). 
However, the modeling work presented here suggests a strong anthropogenic source of modern carbon in SOA, due to cooking emissions, which was not accounted for in those previous studies and which is higher on weekends. Lastly, this work adapts a simple two-parameter model to predict SOA concentration and O/C from urban emissions. This model successfully predicts SOA concentration, and the optimal parameter combination is very similar to that found for Mexico City. This approach provides a computationally inexpensive method for predicting urban SOA in global and climate models. We estimate pollution SOA to account for 26 Tg yr-1 of SOA globally, or 17% of global SOA, one-third of which is likely to be non-fossil.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hayes, P. L.; Carlton, A. G.; Baker, K. R.
Four different literature parameterizations for the formation and evolution of urban secondary organic aerosol (SOA) frequently used in 3-D models are evaluated using a 0-D box model representing the Los Angeles metropolitan region during the California Research at the Nexus of Air Quality and Climate Change (CalNex) 2010 campaign. We constrain the model predictions with measurements from several platforms and compare predictions with particle- and gas-phase observations from the CalNex Pasadena ground site. That site provides a unique opportunity to study aerosol formation close to anthropogenic emission sources with limited recirculation. The model SOA that formed only from the oxidationmore » of VOCs (V-SOA) is insufficient to explain the observed SOA concentrations, even when using SOA parameterizations with multi-generation oxidation that produce much higher yields than have been observed in chamber experiments, or when increasing yields to their upper limit estimates accounting for recently reported losses of vapors to chamber walls. The Community Multiscale Air Quality (WRF-CMAQ) model (version 5.0.1) provides excellent predictions of secondary inorganic particle species but underestimates the observed SOA mass by a factor of 25 when an older VOC-only parameterization is used, which is consistent with many previous model–measurement comparisons for pre-2007 anthropogenic SOA modules in urban areas. Including SOA from primary semi-volatile and intermediate-volatility organic compounds (P-S/IVOCs) following the parameterizations of Robinson et al. (2007), Grieshop et al. (2009), or Pye and Seinfeld (2010) improves model–measurement agreement for mass concentration. The results from the three parameterizations show large differences (e.g., a factor of 3 in SOA mass) and are not well constrained, underscoring the current uncertainties in this area. Our results strongly suggest that other precursors besides VOCs, such as P-S/IVOCs, are needed to explain the observed SOA concentrations in Pasadena. All the recent parameterizations overpredict urban SOA formation at long photochemical ages (≈ 3 days) compared to observations from multiple sites, which can lead to problems in regional and especially global modeling. However, reducing IVOC emissions by one-half in the model to better match recent IVOC measurements improves SOA predictions at these long photochemical ages. Among the explicitly modeled VOCs, the precursor compounds that contribute the greatest SOA mass are methylbenzenes. Measured polycyclic aromatic hydrocarbons (naphthalenes) contribute 0.7% of the modeled SOA mass. The amounts of SOA mass from diesel vehicles, gasoline vehicles, and cooking emissions are estimated to be 16–27, 35–61, and 19–35%, respectively, depending on the parameterization used, which is consistent with the observed fossil fraction of urban SOA, 71(±3) %. The relative contribution of each source is uncertain by almost a factor of 2 depending on the parameterization used. In-basin biogenic VOCs are predicted to contribute only a few percent to SOA. A regional SOA background of approximately 2.1 μg m −3 is also present due to the long-distance transport of highly aged OA, likely with a substantial contribution from regional biogenic SOA. The percentage of SOA from diesel vehicle emissions is the same, within the estimated uncertainty, as reported in previous work that analyzed the weekly cycles in OA concentrations (Bahreini et al., 2012; Hayes et al., 2013). 
However, the modeling work presented here suggests a strong anthropogenic source of modern carbon in SOA, due to cooking emissions, which was not accounted for in those previous studies and which is higher on weekends. Lastly, this work adapts a simple two-parameter model to predict SOA concentration and O/C from urban emissions. This model successfully predicts SOA concentration, and the optimal parameter combination is very similar to that found for Mexico City. This approach provides a computationally inexpensive method for predicting urban SOA in global and climate models. We estimate pollution SOA to account for 26 Tg yr −1 of SOA globally, or 17% of global SOA, one-third of which is likely to be non-fossil.
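The two-parameter model itself is not spelled out above, so the sketch below is only a hedged illustration of how such a closure can be exercised: it assumes a single lumped SOA precursor co-emitted with CO that reacts away with one rate constant k and converts to SOA with one mass yield Y. The variable names and the numerical values of Y, k and P0 are placeholders, not the fitted Pasadena or Mexico City parameters.

```python
import numpy as np

# Minimal two-parameter urban SOA sketch (illustrative only): a lumped precursor P,
# co-emitted with CO, reacts away with rate k and converts to SOA with mass yield Y.
Y = 0.1          # mass yield (hypothetical)
k = 1.0 / 24.0   # reaction rate, 1/h (hypothetical, ~1-day lifetime)
P0 = 40.0        # initial lumped precursor, ug m-3 per unit excess CO (hypothetical)

age_h = np.linspace(0.0, 72.0, 145)          # photochemical age, hours
soa = Y * P0 * (1.0 - np.exp(-k * age_h))    # SOA formed from the lumped precursor

for t in (6, 24, 72):
    print(f"age {t:2d} h: SOA = {Y * P0 * (1 - np.exp(-k * t)):.2f} ug m-3")
```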
Objective calibration of regional climate models
NASA Astrophysics Data System (ADS)
Bellprat, O.; Kotlarski, S.; Lüthi, D.; SchäR, C.
2012-12-01
Climate models are subject to high parametric uncertainty induced by poorly confined model parameters of parameterized physical processes. Uncertain model parameters are typically calibrated in order to increase the agreement of the model with available observations. The common practice is to adjust uncertain model parameters manually, often referred to as expert tuning, which lacks objectivity and transparency in the use of observations. These shortcomings often obscure model inter-comparisons and hinder the implementation of new model parameterizations. Methods that would allow model parameters to be calibrated systematically are unfortunately often not applicable to state-of-the-art climate models, owing to the computational cost imposed by the high dimensionality and non-linearity of the problem. Here we present an approach to objectively calibrate a regional climate model, using reanalysis-driven simulations and building upon a quadratic metamodel presented by Neelin et al. (2010) that serves as a computationally cheap surrogate of the model. Five model parameters originating from different parameterizations are selected for the optimization according to their influence on the model performance. The metamodel accurately estimates spatial averages of 2 m temperature, precipitation and total cloud cover, with an uncertainty of similar magnitude as the internal variability of the regional climate model. The non-linearities of the parameter perturbations are well captured, such that only a limited number of 20-50 simulations are needed to estimate optimal parameter settings. Parameter interactions are small, which allows the number of simulations to be reduced further. In comparison to an ensemble of the same model which has undergone expert tuning, the calibration yields similar optimal model configurations, but with an additional reduction of the model error. The performance range captured is much wider than that sampled by the expert-tuned ensemble, and the presented methodology is effective and objective. It is argued that objective calibration is an attractive tool and could become standard procedure after the introduction of new model implementations, or after a spatial transfer of a regional climate model. Objective calibration of parameterizations with regional models could also serve as a strategy toward improving parameterization packages of global climate models.
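As a rough, hedged illustration of the metamodel idea (not the exact Neelin et al. (2010) formulation, parameters, or performance score used in the paper), one can fit a quadratic surface to the scores of a small simulation ensemble as a function of the calibrated parameters and then minimize that surface; everything below, including the synthetic score function, is a hypothetical stand-in.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical training ensemble: parameter pairs and a performance score
# (e.g., an aggregated error metric) from a handful of model simulations.
rng = np.random.default_rng(0)
params = rng.uniform(-1.0, 1.0, size=(25, 2))   # two scaled parameters p1, p2
true = lambda p: 1.0 + 2.0 * (p[:, 0] - 0.3) ** 2 + (p[:, 1] + 0.2) ** 2 + 0.5 * p[:, 0] * p[:, 1]
score = true(params) + 0.05 * rng.normal(size=25)   # "simulated" scores with noise

# Quadratic metamodel: score ~ c0 + a.p + p^T B p, fitted by least squares.
def design(p):
    p1, p2 = p[:, 0], p[:, 1]
    return np.column_stack([np.ones_like(p1), p1, p2, p1**2, p2**2, p1 * p2])

coef, *_ = np.linalg.lstsq(design(params), score, rcond=None)

def metamodel(p):
    return float(design(np.atleast_2d(p)) @ coef)

opt = minimize(metamodel, x0=[0.0, 0.0], bounds=[(-1, 1), (-1, 1)])
print("estimated optimal parameters:", opt.x)
```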
Scientific goals of the Cooperative Multiscale Experiment (CME)
NASA Technical Reports Server (NTRS)
Cotton, William
1993-01-01
Mesoscale Convective Systems (MCS) form the focus of CME. Recent developments in global climate models, the urgent need to improve the representation of the physics of convection, radiation, the boundary layer, and orography, and the surge of interest in coupling hydrologic, chemistry, and atmospheric models of various scales, have emphasized the need for a broad interdisciplinary and multi-scale approach to understanding and predicting MCS's and their interactions with processes at other scales. The role of mesoscale systems in the large-scale atmospheric circulation, the representation of organized convection and other mesoscale flux sources in terms of bulk properties, and the mutually consistent treatment of water vapor, clouds, radiation, and precipitation, are all key scientific issues concerning which CME will seek to increase understanding. The manner in which convective, mesoscale, and larger scale processes interact to produce and organize MCS's, the moisture cycling properties of MCS's, and the use of coupled cloud/mesoscale models to better understand these processes, are also major objectives of CME. Particular emphasis will be placed on the multi-scale role of MCS's in the hydrological cycle and in the production and transport of chemical trace constituents. The scientific goals of the CME consist of the following: understand how the large and small scales of motion influence the location, structure, intensity, and life cycles of MCS's; understand processes and conditions that determine the relative roles of balanced (slow manifold) and unbalanced (fast manifold) circulations in the dynamics of MCS's throughout their life cycles; assess the predictability of MCS's and improve the quantitative forecasting of precipitation and severe weather events; quantify the upscale feedback of MCS's to the large-scale environment and determine interrelationships between MCS occurrence and variations in the large-scale flow and surface forcing; provide a data base for initialization and verification of coupled regional, mesoscale/hydrologic, mesoscale/chemistry, and prototype mesoscale/cloud-resolving models for prediction of severe weather, ceilings, and visibility; provide a data base for initialization and validation of cloud-resolving models, and for assisting in the fabrication, calibration, and testing of cloud and MCS parameterization schemes; and provide a data base for validation of four dimensional data assimilation schemes and algorithms for retrieving cloud and state parameters from remote sensing instrumentation.
Hu, Meng; Liang, Hualou
2013-04-01
Generalized flash suppression (GFS), in which a salient visual stimulus can be rendered invisible despite continuous retinal input, provides a rare opportunity to directly study the neural mechanism of visual perception. Previous work based on linear methods, such as spectral analysis, on local field potential (LFP) during GFS has shown that the LFP power at distinctive frequency bands is differentially modulated by perceptual suppression. Yet, the linear method alone may be insufficient for a full assessment of neural dynamics due to the fundamentally nonlinear nature of neural signals. In this study, we set forth to analyze the LFP data collected from visual areas V1, V2 and V4 of macaque monkeys performing the GFS task using a nonlinear method - adaptive multi-scale entropy (AME) - to reveal the neural dynamics of perceptual suppression. In addition, we propose a new cross-entropy measure at multiple scales, namely adaptive multi-scale cross-entropy (AMCE), to assess the nonlinear functional connectivity between two cortical areas. We show that: (1) multi-scale entropy exhibits percept-related changes in all three areas, with higher entropy observed during perceptual suppression; (2) the magnitude of the perception-related entropy changes increases systematically over successive hierarchical stages (i.e. from lower areas V1 to V2, up to higher area V4); and (3) cross-entropy between any two cortical areas reveals a higher degree of asynchrony or dissimilarity during perceptual suppression, indicating a decreased functional connectivity between cortical areas. These results, taken together, suggest that perceptual suppression is related to a reduced functional connectivity and increased uncertainty of neural responses, and that the modulation of perceptual suppression is more effective at higher visual cortical areas. AME is demonstrated to be a useful technique in revealing the underlying dynamics of nonlinear/nonstationary neural signals.
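For orientation, the sketch below computes the standard coarse-graining multiscale sample entropy rather than the adaptive (EMD-based) AME variant used in the study; the parameters m, r and the white-noise test series are illustrative choices only.

```python
import numpy as np

def sample_entropy(x, m=2, r_frac=0.2):
    """Plain sample entropy SampEn(m, r) of a 1-D series (brute force)."""
    x = np.asarray(x, dtype=float)
    r = r_frac * x.std()
    N = len(x)
    def matches(length):
        # N - m templates of the given length, compared with the Chebyshev distance
        templ = np.array([x[i:i + length] for i in range(N - m)])
        d = np.max(np.abs(templ[:, None, :] - templ[None, :, :]), axis=2)
        np.fill_diagonal(d, np.inf)          # exclude self-matches
        return np.sum(d <= r)
    b, a = matches(m), matches(m + 1)
    return np.inf if a == 0 or b == 0 else -np.log(a / b)

def multiscale_entropy(x, scales=(1, 2, 3, 4, 5)):
    """Coarse-grain the series at each scale, then compute its sample entropy."""
    x = np.asarray(x, dtype=float)
    out = []
    for s in scales:
        n = len(x) // s
        coarse = x[:n * s].reshape(n, s).mean(axis=1)
        out.append(sample_entropy(coarse))
    return out

# Toy check on white noise (entropy should stay roughly flat across scales)
rng = np.random.default_rng(1)
print(multiscale_entropy(rng.normal(size=1000)))
```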
DOE Office of Scientific and Technical Information (OSTI.GOV)
Neilson, Hilding R.; Lester, John B.; McNeil, Joseph T.
The transit method, employed by Microvariability and Oscillation of Stars (MOST), Kepler, and various ground-based surveys has enabled the characterization of extrasolar planets to unprecedented precision. These results are precise enough to begin to measure planet atmosphere composition, planetary oblateness, starspots, and other phenomena at the level of a few hundred parts per million. However, these results depend on our understanding of stellar limb darkening, that is, the intensity distribution across the stellar disk that is sequentially blocked as the planet transits. Typically, stellar limb darkening is assumed to be a simple parameterization with two coefficients that are derived from stellar atmosphere models or fit directly. In this work, we revisit this assumption and compute synthetic planetary-transit light curves directly from model stellar atmosphere center-to-limb intensity variations (CLIVs) using the plane-parallel Atlas and spherically symmetric SAtlas codes. We compare these light curves to those constructed using best-fit limb-darkening parameterizations. We find that adopting parametric stellar limb-darkening laws leads to systematic differences from the more geometrically realistic model stellar atmosphere CLIV of about 50–100 ppm at the transit center and up to 300 ppm at ingress/egress. While these errors are small, they are systematic, and they appear to limit the precision necessary to measure secondary effects. Our results may also have a significant impact on transit spectra.
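As a hedged illustration of the size of such limb-darkening differences, the sketch below compares transit depths in the small-planet approximation for a quadratic law and a four-parameter (Claret-style) law standing in for a model-atmosphere CLIV; the coefficients, radius ratio, and the stand-in law itself are assumptions, not the Atlas/SAtlas results of the paper.

```python
import numpy as np

def disk_average(I, n=4000):
    """Disk-averaged intensity <I> = 2 * integral_0^1 I(mu) mu dmu (trapezoid rule)."""
    mu = np.linspace(0.0, 1.0, n + 1)
    w = I(mu) * mu
    return 2.0 * np.sum(0.5 * (w[1:] + w[:-1]) * np.diff(mu))

def small_planet_depth(I, mu_planet, k=0.1):
    """Relative flux deficit when a small planet (radius ratio k) covers a point with limb angle mu."""
    return k**2 * I(mu_planet) / disk_average(I)

# Quadratic limb-darkening law; coefficients are illustrative placeholders.
u1, u2 = 0.4, 0.25
I_quad = lambda mu: 1.0 - u1 * (1.0 - mu) - u2 * (1.0 - mu) ** 2

# Stand-in for a model-atmosphere CLIV: a four-parameter (Claret-style) law with
# hypothetical coefficients -- not actual Atlas/SAtlas output.
a = (0.6, -0.2, 0.4, -0.15)
I_cliv = lambda mu: 1.0 - sum(a[i] * (1.0 - mu ** ((i + 1) / 2.0)) for i in range(4))

for rho in (0.0, 0.5, 0.9, 0.99):                    # projected distance from disk centre
    mu = np.sqrt(1.0 - rho ** 2)
    diff = small_planet_depth(I_quad, mu) - small_planet_depth(I_cliv, mu)
    print(f"rho = {rho:4.2f}: depth difference = {diff * 1e6:8.1f} ppm")
```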
Systematic and reliable multiscale modelling of lithium batteries
NASA Astrophysics Data System (ADS)
Atalay, Selcuk; Schmuck, Markus
2017-11-01
Motivated by the increasing interest in lithium batteries as energy storage devices (e.g. cars/bicycles/public transport, social robot companions, mobile phones, and tablets), we investigate three basic cells: (i) a single intercalation host; (ii) a periodic arrangement of intercalation hosts; and (iii) a rigorously upscaled formulation of (ii), as initiated in prior work. By systematically accounting for Li transport and interfacial reactions in (i)-(iii), we compute the associated characteristic current-voltage curves and power densities. Finally, we discuss the influence of how the intercalation particles are arranged. Our findings are expected to improve the understanding of how microscopic properties affect the battery behaviour observed on the macroscale and, at the same time, the upscaled formulation (iii) serves as an efficient computational tool. This work has been supported by EPSRC, UK, through the Grant No. EP/P011713/1.
NASA Astrophysics Data System (ADS)
Bhargava, K.; Kalnay, E.; Carton, J.; Yang, F.
2017-12-01
Systematic forecast errors, arising from model deficiencies, form a significant portion of the total forecast error in weather prediction models like the Global Forecast System (GFS). While much effort has been expended to improve models, substantial model error remains. The aim here is to (i) estimate the model deficiencies in the GFS that lead to systematic forecast errors, and (ii) implement an online (i.e., within the model) correction scheme for the GFS following the methodology of Danforth et al. [2007] and Danforth and Kalnay [2008, GRL]. Analysis increments represent the corrections that new observations make on, in this case, the 6-hr forecast in the analysis cycle. Model bias corrections are estimated from the time average of the analysis increments divided by 6 hr, assuming that initial model errors grow linearly and, as a first step, ignoring the impact of observation bias. During 2012-2016, seasonal means of the 6-hr model bias are generally robust despite changes in model resolution and data assimilation systems, and their broad continental scales explain their insensitivity to model resolution. The daily bias dominates the sub-monthly analysis increments and consists primarily of diurnal and semidiurnal components, also requiring a low-dimensional correction. Analysis increments in 2015 and 2016 are reduced over oceans, which is attributed to improvements in the specification of the SSTs. These results encourage application of online correction, as suggested by Danforth and Kalnay, for mean, seasonal, diurnal and semidiurnal model biases in the GFS to reduce both systematic and random errors. As the short-term error growth is still linear, estimated model bias corrections can be added as a forcing term in the model tendency equation to correct online. Preliminary experiments with the GFS, correcting temperature and specific humidity online, show a reduction in model bias in the 6-hr forecast. This approach can then be used to guide and optimize the design of sub-grid scale physical parameterizations, more accurate discretization of the model dynamics, boundary conditions, radiative transfer codes, and other potential model improvements, which can then replace the empirical correction scheme. The analysis increments also provide guidance in testing new physical parameterizations.
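A minimal sketch of the estimation and application steps described above (time-mean analysis increments divided by the 6-hr window, added as a forcing term) is given below; the array names, shapes, and the synthetic increments are hypothetical, and the real GFS implementation is far more involved.

```python
import numpy as np

def estimate_bias_tendency(analysis_increments, window_hours=6.0):
    """Time-mean analysis increment divided by the assimilation window.

    analysis_increments: array of shape (ncycles, ...) holding x_analysis - x_forecast(6h).
    Returns a tendency (per hour) to be added to the model state each step.
    """
    return np.mean(analysis_increments, axis=0) / window_hours

def step_with_online_correction(state, model_tendency, bias_tendency, dt_hours):
    """One explicit time step with the empirical correction added as a forcing term."""
    return state + dt_hours * (model_tendency + bias_tendency)

# Toy usage with hypothetical shapes (ncycles, nlat, nlon)
incs = np.random.default_rng(2).normal(0.3, 1.0, size=(120, 4, 8))  # synthetic increments
bias_tend = estimate_bias_tendency(incs)                            # roughly +0.05 per hour here
state = np.zeros((4, 8))
state = step_with_online_correction(state, model_tendency=np.zeros((4, 8)),
                                    bias_tendency=bias_tend, dt_hours=0.5)
print(state.mean())
```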
NASA Astrophysics Data System (ADS)
Horstemeyer, M. F.
This review of multiscale modeling covers a brief history of various multiscale methodologies related to solid materials and the associated experimental influences, the various influences of multiscale modeling on different disciplines, and some examples of multiscale modeling in the design of structural components. Although computational multiscale modeling methodologies were developed in the late twentieth century, the fundamental notions of multiscale modeling have been around since da Vinci studied different sizes of ropes. The recent rapid growth in multiscale modeling is the result of the confluence of parallel computing power, experimental capabilities to characterize structure-property relations down to the atomic level, and theories that admit multiple length scales. The now-ubiquitous research focused on multiscale modeling has reached across different disciplines (solid mechanics, fluid mechanics, materials science, physics, mathematics, biology, and chemistry), different regions of the world (most continents), and different length scales (from atoms to autos).
Selway, Nichola; Chan, Vincent; Stokes, Jason R
2017-02-22
Friction (and lubrication) between soft contacts is prevalent in many natural and engineered systems and plays a crucial role in determining their functionality. The contribution of viscoelastic hysteresis losses to friction in these systems has been well-established and defined for dry contacts; however, the influence of fluid viscosity and wetting on these components of friction has largely been overlooked. We provide systematic experimental evidence of the influence of lubricant viscosity and wetting on lubrication across multiple regimes within a viscoelastic contact. These effects are investigated for comparatively smooth and rough elastomeric contacts (PTFE-PDMS and PDMS-PDMS) lubricated by a series of Newtonian fluids with systematically controlled viscosity and static wetting properties, using a ball-on-disc tribometer. The distinct tribological behaviour, characterised generally by a decrease in the friction coefficient with increasing fluid viscosity and wettability, is explained in terms of lubricant dewetting and squeeze-out dynamics and their impact on multi-scale viscoelastic dissipation mechanisms at the bulk-, asperity-, sub-asperity- and molecular-scale. It is proposed that lubrication within the (non-molecularly) smooth contact is governed by localised fluid entrapment and molecular-scale (interfacial) viscoelastic effects, while additional rubber hysteresis stimulated by fluid-asperity interactions, combined with rapid fluid drainage at low speeds within the rough contact, alter the general shape of the Stribeck curve. This fluid viscosity effect is in some agreement with theoretical predictions. Conventional methods for analysing and interpreting tribological data, which typically involve scaling sliding velocity with lubricant viscosity, need to be revised for viscoelastic contacts with consideration of these indirect viscosity effects.
NASA Astrophysics Data System (ADS)
Bush, Stephanie; Turner, Andrew; Woolnough, Steve; Martin, Gill
2013-04-01
Global circulation models (GCMs) are a key tool for understanding and predicting monsoon rainfall, now and under future climate change. However, many GCMs show significant, systematic biases in their simulation of monsoon rainfall and dynamics that spin up over very short time scales and persist in the climate mean state. We describe several of these biases as simulated in the Met Office Unified Model and show they are sensitive to changes in the convective parameterization's entrainment rate. To improve our understanding of the biases and inform efforts to improve convective parameterizations, we explore the reasons for this sensitivity. We show the results of experiments where we increase the entrainment rate in regions of especially large bias: the western equatorial Indian Ocean, western north Pacific and India itself. We use the results to determine whether improvements in biases are due to the local increase in entrainment or are the remote response of the entrainment increase elsewhere in the GCM. We find that feedbacks usually strengthen the local response, but the local response leads to a different mean state change in different regions. We also show results from experiments which demonstrate the spin-up of the local response, which we use to further understand the response in complex regions such as the Western North Pacific. Our work demonstrates that local application of parameterization changes is a powerful tool for understanding their global impact.
NASA Astrophysics Data System (ADS)
Parishani, H.; Pritchard, M. S.; Bretherton, C. S.; Wyant, M. C.; Khairoutdinov, M.; Singh, B.
2017-12-01
Biases and parameterization formulation uncertainties in the representation of boundary layer clouds remain a leading source of possible systematic error in climate projections. Here we show the first results of cloud feedback to +4K SST warming in a new experimental climate model, the "Ultra-Parameterized (UP)" Community Atmosphere Model, UPCAM. We have developed UPCAM as an unusually high-resolution implementation of cloud superparameterization (SP) in which a global set of cloud resolving arrays is embedded in a host global climate model. In UP, the cloud-resolving scale includes sufficient internal resolution to explicitly generate the turbulent eddies that form marine stratocumulus and trade cumulus clouds. This is computationally costly but complements other available approaches for studying low clouds and their climate interaction, by avoiding parameterization of the relevant scales. In a recent publication we have shown that UP, while not without its own complexity trade-offs, can produce encouraging improvements in low cloud climatology in multi-month simulations of the present climate and is a promising target for exascale computing (Parishani et al. 2017). Here we show results of its low cloud feedback to warming in multi-year simulations for the first time. References: Parishani, H., M. S. Pritchard, C. S. Bretherton, M. C. Wyant, and M. Khairoutdinov (2017), Toward low-cloud-permitting cloud superparameterization with explicit boundary layer turbulence, J. Adv. Model. Earth Syst., 9, doi:10.1002/2017MS000968.
Systematic Parameterization of Lignin for the CHARMM Force Field
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vermaas, Joshua; Petridis, Loukas; Beckham, Gregg
Plant cell walls have three primary components, cellulose, hemicellulose, and lignin, the latter of which is a recalcitrant, aromatic heteropolymer that provides structure to plants, water and nutrient transport through plant tissues, and a highly effective defense against pathogens. Overcoming the recalcitrance of lignin is key to effective biomass deconstruction, which would in turn enable the use of biomass as a feedstock for industrial processes. Our understanding of lignin structure in the plant cell wall is hampered by the limitations of the available lignin force fields, which currently only account for a single linkage between lignins and lack explicit parameterization for emerging lignin structures from both natural variants and engineered lignins. Since polymerization of lignin occurs via radical intermediates, multiple C-O and C-C linkages have been isolated, and the current force field represents only a small subset of the diverse lignin structures found in plants. In order to take into account the wide range of lignin polymerization chemistries, monomers and dimer combinations of C-, H-, G-, and S-lignins, as well as combinations with hydroxycinnamic acid linkages, were subjected to extensive quantum mechanical calculations to establish target data from which to build a complete molecular mechanics force field tuned specifically for diverse lignins. This was carried out in a GPU-accelerated global optimization process, whereby all molecules were parameterized simultaneously using the same internal parameter set. By parameterizing lignin specifically, we are able to more accurately represent the interactions and conformations of lignin monomers and dimers relative to a general force field. This new force field will enable computational researchers to study the effects of different linkages on the structure of lignin, construct more accurate plant cell wall models based on observed statistical distributions of lignin that differ between disparate feedstocks, and guide further lignin engineering efforts.
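The GPU-accelerated global optimization used here is not reproduced below; as a hedged, generic illustration of fitting force-field terms to quantum-mechanical target data, the sketch fits CHARMM-style dihedral amplitudes to a synthetic torsion scan by linear least squares, with the phases fixed at 0 or 180 degrees. All values are placeholders.

```python
import numpy as np

# Fit CHARMM-style dihedral terms  V(phi) = sum_n K_n * (1 + cos(n*phi - delta_n))
# to a torsion energy scan.  With delta_n fixed at 0 or 180 deg the problem is
# linear in K_n and solvable by least squares.  The "QM" scan below is synthetic;
# in practice it would come from the quantum mechanical target data.
phi = np.deg2rad(np.arange(0, 360, 15))
qm_energy = (2.0 * (1 + np.cos(phi)) + 0.8 * (1 + np.cos(3 * phi - np.pi))
             + 0.1 * np.random.default_rng(3).normal(size=phi.size))   # synthetic target

multiplicities = [1, 2, 3]
deltas = [0.0, 0.0, np.pi]                       # assumed phases (0 or 180 deg)
A = np.column_stack([1 + np.cos(n * phi - d) for n, d in zip(multiplicities, deltas)]
                    + [np.ones_like(phi)])       # last column absorbs an energy offset
coef, *_ = np.linalg.lstsq(A, qm_energy, rcond=None)
for n, K in zip(multiplicities, coef[:-1]):
    print(f"n={n}: K = {K:6.3f} kcal/mol")
```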
NASA Astrophysics Data System (ADS)
Meng, X.; Lyu, S.; Zhang, T.; Zhao, L.; Li, Z.; Han, B.; Li, S.; Ma, D.; Chen, H.; Ao, Y.; Luo, S.; Shen, Y.; Guo, J.; Wen, L.
2018-04-01
Systematic cold biases exist in simulations of 2 m air temperature over the Tibetan Plateau (TP) when using regional climate models and global atmospheric general circulation models. We updated the albedo in the Weather Research and Forecasting (WRF) Model lower boundary condition using the Global LAnd Surface Satellite Moderate-Resolution Imaging Spectroradiometer albedo products and demonstrated evident improvement of the cold temperature biases in the TP. It is the large overestimation of albedo in winter and spring in the WRF model that produces the large cold temperature biases. The overestimated albedo was caused by the simulated precipitation biases and by the snow albedo parameterization overestimating albedo. Furthermore, light-absorbing aerosols can result in a large reduction of albedo over snow and ice cover. The results suggest the necessity of developing a snow albedo parameterization using observations in the TP, where snow cover and melting are very different from other, lower-elevation regions, and the influence of aerosols should be considered as well. Beyond the snow albedo itself, our results also point to an urgent need to improve precipitation simulation in the TP.
Validating Remotely Sensed Land Surface Evapotranspiration Based on Multi-scale Field Measurements
NASA Astrophysics Data System (ADS)
Jia, Z.; Liu, S.; Ziwei, X.; Liang, S.
2012-12-01
Land surface evapotranspiration plays an important role in the surface energy balance and the water cycle. There have been significant technical and theoretical advances in our knowledge of evapotranspiration over the past two decades. Acquisition of the temporally and spatially continuous distribution of evapotranspiration using remote sensing technology has attracted widespread attention from researchers and managers. However, remote sensing technology still carries many uncertainties, arising from the model mechanism, model inputs, parameterization schemes, and scaling issues in regional estimation. Obtaining remotely sensed evapotranspiration (RS_ET) with known accuracy is required but difficult. As a result, it is indispensable to develop validation methods to quantitatively assess the accuracy and error sources of regional RS_ET estimates. This study proposes an innovative validation method based on multi-scale evapotranspiration acquired from field measurements, with the validation results including the accuracy assessment, error source analysis, and uncertainty analysis of the validation process. It is a potentially useful approach to evaluate the accuracy and analyze the spatio-temporal properties of RS_ET at both the basin and local scales, and it is appropriate for validating RS_ET at diverse resolutions and different time-scales. An independent RS_ET validation using this method over the Hai River Basin, China, in 2002-2009 is presented as a case study. Validation at the basin scale showed good agreement between the 1 km annual RS_ET and the validation data, such as the water-balance evapotranspiration, MODIS evapotranspiration products, precipitation, and land use types. Validation at the local scale also gave good results for monthly and daily RS_ET at 30 m and 1 km resolutions, compared with the multi-scale evapotranspiration measurements from the eddy covariance (EC) and large aperture scintillometer (LAS) instruments, respectively, using a footprint model over three typical landscapes. Although some validation experiments demonstrated that the models yield accurate estimates at flux measurement sites, the question remains whether they perform well over the broader landscape. Moreover, a large number of RS_ET products have been released in recent years. Thus, we also pay attention to the cross-validation of RS_ET derived from multi-source models. "The Multi-scale Observation Experiment on Evapotranspiration over Heterogeneous Land Surfaces: Flux Observation Matrix" campaign was carried out in the middle reaches of the Heihe River Basin, China, in 2012. Flux measurements from an observation matrix composed of 22 EC systems and 4 LAS systems are used to investigate the cross-validation of multi-source models over different landscapes. In this case, six remote sensing models, including an empirical statistical model, one-source and two-source models, a Penman-Monteith-based model, a Priestley-Taylor-based model, and a complementary-relationship-based model, are used to perform an intercomparison. All the results from the two RS_ET validation cases show that the proposed validation methods are reasonable and feasible.
A Hybrid Coarse-graining Approach for Lipid Bilayers at Large Length and Time Scales
Ayton, Gary S.; Voth, Gregory A.
2009-01-01
A hybrid analytic-systematic (HAS) coarse-grained (CG) lipid model is developed and employed in a large-scale simulation of a liposome. The methodology is termed hybrid analytic-systematic as one component of the interaction between CG sites is variationally determined from the multiscale coarse-graining (MS-CG) methodology, while the remaining component utilizes an analytic potential. The systematic component models the in-plane center of mass interaction of the lipids as determined from an atomistic-level MD simulation of a bilayer. The analytic component is based on the well known Gay-Berne ellipsoid of revolution liquid crystal model, and is designed to model the highly anisotropic interactions at a highly coarse-grained level. The HAS CG approach is the first step in an “aggressive” CG methodology designed to model multi-component biological membranes at very large length and timescales. PMID:19281167
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tawhai, Merryn; Bischoff, Jeff; Einstein, Daniel R.
2009-05-01
In this article, we describe some current multiscale modeling issues in computational biomechanics from the perspective of the musculoskeletal and respiratory systems and mechanotransduction. First, we outline the necessity of multiscale simulations in these biological systems. Then we summarize challenges inherent to multiscale biomechanics modeling, regardless of the subdiscipline, followed by computational challenges that are system-specific. We discuss some of the current tools that have been utilized to aid research in multiscale mechanics simulations, and the priorities to further the field of multiscale biomechanics computation.
Progress in the Development of a Global Quasi-3-D Multiscale Modeling Framework
NASA Astrophysics Data System (ADS)
Jung, J.; Konor, C. S.; Randall, D. A.
2017-12-01
The Quasi-3-D Multiscale Modeling Framework (Q3D MMF) is a second-generation MMF, which has the following advances over the first-generation MMF: 1) The cloud-resolving models (CRMs) that replace conventional parameterizations are not confined to the large-scale dynamical-core grid cells, and are seamlessly connected to each other, 2) The CRMs sense the three-dimensional large- and cloud-scale environment, 3) Two perpendicular sets of CRM channels are used, and 4) The CRMs can resolve the steep surface topography along the channel direction. The basic design of the Q3D MMF has been developed and successfully tested in a limited-area modeling framework. Currently, global versions of the Q3D MMF are being developed for both weather and climate applications. The dynamical cores governing the large-scale circulation in the global Q3D MMF are selected from two cube-based global atmospheric models. The CRM used in the model is the 3-D nonhydrostatic anelastic Vector-Vorticity Model (VVM), which has been tested with the limited-area version for its suitability for this framework. As a first step of the development, the VVM has been reconstructed on the cubed-sphere grid so that it can be applied to global channel domains and also easily fitted to the large-scale dynamical cores. We have successfully tested the new VVM by advecting a bell-shaped passive tracer and simulating the evolution of waves resulting from idealized barotropic and baroclinic instabilities. To improve the model, we have also modified the tracer advection scheme to yield positive-definite results and plan to implement a new physics package that includes double-moment microphysics and aerosol physics. The interface for coupling the large-scale dynamical core and the VVM is under development. In this presentation, we shall describe the recent progress in the development and show some test results.
NASA Astrophysics Data System (ADS)
Alessandri, Andrea; Catalano, Franco; De Felice, Matteo; Van Den Hurk, Bart; Doblas Reyes, Francisco; Boussetta, Souhail; Balsamo, Gianpaolo; Miller, Paul
2016-04-01
The EC-Earth earth system model has been recently developed to include the dynamics of vegetation. In its original formulation, vegetation variability is simply operated by the Leaf Area Index (LAI), which affects climate basically by changing the vegetation physiological resistance to evapotranspiration. This coupling has been found to have only a weak effect on the surface climate modeled by EC-Earth. In reality, the effective sub-grid vegetation fractional coverage will vary seasonally and at interannual time-scales in response to leaf-canopy growth, phenology and senescence. Therefore it affects biophysical parameters such as the albedo, surface roughness and soil field capacity. To adequately represent this effect in EC-Earth, we included an exponential dependence of the vegetation cover on the LAI. By comparing two sets of simulations performed with and without the new variable fractional-coverage parameterization, spanning retrospective predictions at the decadal (5-years), seasonal and sub-seasonal time-scales, we show for the first time a significant multi-scale enhancement of vegetation impacts in climate simulation and prediction over land. Particularly large effects at multiple time scales are shown over boreal winter middle-to-high latitudes over Canada, West US, Eastern Europe, Russia and eastern Siberia due to the implemented time-varying shadowing effect by tree-vegetation on snow surfaces. Over Northern Hemisphere boreal forest regions the improved representation of vegetation cover tends to correct the winter warm biases, improves the climate change sensitivity, the decadal potential predictability as well as the skill of forecasts at seasonal and sub-seasonal time-scales. Significant improvements of the prediction of 2m temperature and rainfall are also shown over transitional land surface hot spots. Both the potential predictability at decadal time-scale and seasonal-forecasts skill are enhanced over Sahel, North American Great Plains, Nordeste Brazil and South East Asia, mainly related to improved performance in the surface evapotranspiration.
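The exact functional form used in EC-Earth is not given above; a commonly used closure of this kind is a Lambert-Beer-type saturation of cover with LAI, sketched below with an illustrative extinction-like coefficient (the value of k is an assumption, not the EC-Earth setting).

```python
import numpy as np

def effective_veg_cover(lai, c_max=1.0, k=0.5):
    """Lambert-Beer-type closure: effective cover saturates exponentially with LAI.

    k is an extinction-like coefficient; 0.5 is a placeholder, not the EC-Earth value.
    """
    return c_max * (1.0 - np.exp(-k * np.asarray(lai, dtype=float)))

print(effective_veg_cover([0.5, 1, 2, 4, 6]))   # cover grows and saturates with LAI
```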
NASA Astrophysics Data System (ADS)
Alessandri, A.; Catalano, F.; De Felice, M.; van den Hurk, B.; Doblas-Reyes, F. J.; Boussetta, S.; Balsamo, G.; Miller, P. A.
2016-12-01
The European consortium earth system model (EC-Earth; http://www.ec-earth.org) has been recently developed to include the dynamics of vegetation. In its original formulation, vegetation variability is simply operated by the Leaf Area Index (LAI), which affects climate basically by changing the vegetation physiological resistance to evapotranspiration. This coupling has been found to have only a weak effect on the surface climate modeled by EC-Earth. In reality, the effective sub-grid vegetation fractional coverage will vary seasonally and at interannual time-scales in response to leaf-canopy growth, phenology and senescence. Therefore it affects biophysical parameters such as the albedo, surface roughness and soil field capacity. To adequately represent this effect in EC-Earth, we included an exponential dependence of the vegetation cover on the LAI. By comparing two sets of simulations performed with and without the new variable fractional-coverage parameterization, spanning from centennial (20th Century) simulations and retrospective predictions to the decadal (5-years), seasonal and weather time-scales, we show for the first time a significant multi-scale enhancement of vegetation impacts in climate simulation and prediction over land. Particularly large effects at multiple time scales are shown over boreal winter middle-to-high latitudes over Canada, West US, Eastern Europe, Russia and eastern Siberia due to the implemented time-varying shadowing effect by tree-vegetation on snow surfaces. Over Northern Hemisphere boreal forest regions the improved representation of vegetation cover tends to correct the winter warm biases, improves the climate change sensitivity, the decadal potential predictability as well as the skill of forecasts at seasonal and weather time-scales. Significant improvements of the prediction of 2m temperature and rainfall are also shown over transitional land surface hot spots. Both the potential predictability at decadal time-scale and seasonal-forecasts skill are enhanced over Sahel, North American Great Plains, Nordeste Brazil and South East Asia, mainly related to improved performance in the surface evapotranspiration.
NASA Astrophysics Data System (ADS)
Alessandri, A.; Catalano, F.; De Felice, M.; Hurk, B. V. D.; Doblas-Reyes, F. J.; Boussetta, S.; Balsamo, G.; Miller, P. A.
2017-12-01
Here we demonstrate, for the first time, that the implementation of a realistic representation of vegetation in Earth System Models (ESMs) can significantly improve climate simulation and prediction across multiple time-scales. The effective sub-grid vegetation fractional coverage varies seasonally and at interannual time-scales in response to leaf-canopy growth, phenology and senescence. Therefore it affects biophysical parameters such as the surface resistance to evapotranspiration, albedo, roughness length, and soil field capacity. To adequately represent this effect in the EC-Earth ESM, we included an exponential dependence of the vegetation cover on the Leaf Area Index. By comparing two sets of simulations performed with and without the new variable fractional-coverage parameterization, spanning from centennial (20th Century) simulations and retrospective predictions to the decadal (5-years), seasonal (2-4 months) and weather (4 days) time-scales, we show for the first time a significant multi-scale enhancement of vegetation impacts in climate simulation and prediction over land. Particularly large effects at multiple time scales are shown over boreal winter middle-to-high latitudes over Canada, West US, Eastern Europe, Russia and eastern Siberia due to the implemented time-varying shadowing effect by tree-vegetation on snow surfaces. Over Northern Hemisphere boreal forest regions the improved representation of vegetation cover consistently corrects the winter warm biases, improves the climate change sensitivity, the decadal potential predictability as well as the skill of forecasts at seasonal and weather time-scales. Significant improvements of the prediction of 2m temperature and rainfall are also shown over transitional land surface hot spots. Both the potential predictability at decadal time-scale and seasonal-forecasts skill are enhanced over Sahel, North American Great Plains, Nordeste Brazil and South East Asia, mainly related to improved performance in the surface evapotranspiration. These results are discussed in a peer-reviewed paper accepted for publication in Climate Dynamics (Alessandri et al., 2017; doi:10.1007/s00382-017-3766-y).
Olson, Mark A; Feig, Michael; Brooks, Charles L
2008-04-15
This article examines ab initio methods for the prediction of protein loops by a computational strategy of multiscale conformational sampling and physical energy scoring functions. Our approach consists of initial sampling of loop conformations from lattice-based low-resolution models followed by refinement using all-atom simulations. To allow enhanced conformational sampling, the replica exchange method was implemented. Physical energy functions based on CHARMM19 and CHARMM22 parameterizations with generalized Born (GB) solvent models were applied in scoring loop conformations extracted from the lattice simulations and, in the case of all-atom simulations, the ensemble of conformations was generated and scored with these models. Predictions are reported for 25 loop segments, each eight residues long and taken from a diverse set of 22 protein structures. We find that the simulations generally sampled conformations with low global root-mean-square-deviation (RMSD) for loop backbone coordinates from the known structures, whereas clustering conformations in RMSD space and scoring detected less favorable loop structures. Specifically, the lattice simulations sampled basins that exhibited an average global RMSD of 2.21 +/- 1.42 A, whereas clustering and scoring the loop conformations determined an RMSD of 3.72 +/- 1.91 A. Using CHARMM19/GB to refine the lattice conformations improved the sampling RMSD to 1.57 +/- 0.98 A and detection to 2.58 +/- 1.48 A. We found that further improvement could be gained from extending the upper temperature in the all-atom refinement from 400 to 800 K, where the results typically yield a reduction of approximately 1 A or greater in the RMSD of the detected loop. Overall, CHARMM19 with a simple pairwise GB solvent model is more efficient at sampling low-RMSD loop basins than CHARMM22 with a higher-resolution modified analytical GB model; however, the latter simulation method provides a more accurate description of the all-atom energy surface, yet demands a much greater computational cost. (c) 2007 Wiley Periodicals, Inc.
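As a hedged sketch of the generic sample-cluster-score workflow described above (not the actual CHARMM-based pipeline), the code below computes global backbone RMSD without superposition, clusters conformations greedily with a distance cutoff, and picks the lowest-scoring member of the most populated cluster; the coordinates and energies are synthetic.

```python
import numpy as np

def loop_rmsd(a, b):
    """Global RMSD between two loop backbone coordinate sets (N_atoms, 3),
    without superposition (both loops live in the fixed protein frame)."""
    return np.sqrt(np.mean(np.sum((a - b) ** 2, axis=1)))

def greedy_cluster(coords, cutoff=2.0):
    """Assign each conformation to the first cluster centre within `cutoff` Angstrom."""
    centres, labels = [], []
    for c in coords:
        for j, ref in enumerate(centres):
            if loop_rmsd(c, ref) <= cutoff:
                labels.append(j)
                break
        else:
            centres.append(c)
            labels.append(len(centres) - 1)
    return np.array(labels)

# Synthetic ensemble: 200 conformations of an 8-residue loop (32 backbone atoms)
rng = np.random.default_rng(4)
ensemble = rng.normal(scale=1.5, size=(200, 32, 3))
energies = rng.normal(size=200)                 # stand-in for energy-function scores

labels = greedy_cluster(ensemble, cutoff=2.0)
best_cluster = np.bincount(labels).argmax()     # pick the most populated cluster
members = np.where(labels == best_cluster)[0]
prediction = members[np.argmin(energies[members])]
print(f"{labels.max() + 1} clusters; predicted conformation index: {prediction}")
```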
NASA Astrophysics Data System (ADS)
Dong, X.; Fu, J. S.; Huang, K.; Tong, D.
2015-12-01
The Community Multiscale Air Quality (CMAQ) model has been further developed in terms of simulating natural wind-blown dust in this study, with a series of modifications aimed at improving the model's capability to predict the emission, transport, and chemical reactions of dust aerosols. The default parameterization of threshold friction velocity constants in the CMAQ is revised to avoid double counting of the impact of soil moisture based on the re-analysis of field experiment data; source-dependent speciation profiles for dust emission are derived based on local measurements for the Gobi and Taklamakan deserts in East Asia; and dust heterogeneous chemistry is implemented to simulate the reactions involving dust aerosol. The improved dust module in the CMAQ was applied over East Asia for March and April from 2006 to 2010. Evaluation against observations has demonstrated that the simulation bias of PM10 and aerosol optical depth (AOD) is reduced from -55.42 and -31.97 % in the original CMAQ to -16.05 and -22.1 % in the revised CMAQ, respectively. Comparison with observations at the nearby Gobi stations of Duolun and Yulin indicates that applying a source-dependent profile helps reduce simulation bias for trace metals. Implementing heterogeneous chemistry is also found to result in better agreement with observations for sulfur dioxide (SO2), sulfate (SO42-), nitric acid (HNO3), nitrogen oxides (NOx), and nitrate (NO3-). Investigation of a severe dust storm episode from 19 to 21 March 2010 suggests that the revised CMAQ is capable of capturing the spatial distribution and temporal variations of dust aerosols. Model evaluation indicates potential uncertainties in the excessive soil moisture fraction used by the meteorological simulation. The mass contribution of fine mode aerosol in dust emission may be underestimated by 50 %. The revised CMAQ provides a useful tool for future studies to investigate the emission, transport, and impact of wind-blown dust over East Asia and elsewhere.
NASA Astrophysics Data System (ADS)
Dong, Xinyi; Fu, Joshua S.; Huang, Kan; Tong, Daniel; Zhuang, Guoshun
2016-07-01
The Community Multiscale Air Quality (CMAQ) model has been further developed in terms of simulating natural wind-blown dust in this study, with a series of modifications aimed at improving the model's capability to predict the emission, transport, and chemical reactions of dust. The default parameterization of initial threshold friction velocity constants is revised to correct the double counting of the impact of soil moisture in CMAQ, based on the reanalysis of field experiment data; source-dependent speciation profiles for dust emission are derived based on local measurements for the Gobi and Taklamakan deserts in East Asia; and dust heterogeneous chemistry is also implemented. The improved dust module in the CMAQ is applied over East Asia for March and April from 2006 to 2010. The model evaluation result shows that the simulation bias of PM10 and aerosol optical depth (AOD) is reduced, respectively, from -55.42 and -31.97 % by the original CMAQ to -16.05 and -22.1 % by the revised CMAQ. Comparison with observations at the nearby Gobi stations of Duolun and Yulin indicates that applying a source-dependent profile helps reduce simulation bias for trace metals. Implementing heterogeneous chemistry also results in better agreement with observations for sulfur dioxide (SO2), sulfate (SO42-), nitric acid (HNO3), nitrogen oxides (NOx), and nitrate (NO3-). The investigation of a severe dust storm episode from 19 to 21 March 2010 suggests that the revised CMAQ is capable of capturing the spatial distribution and temporal variation of dust. The model evaluation also indicates potential uncertainty in the excessive soil moisture used by the meteorological simulation. The mass contribution of fine-mode particles in dust emission may be underestimated by 50 %. The revised CMAQ model provides a useful tool for future studies to investigate the emission, transport, and impact of wind-blown dust over East Asia and elsewhere.
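The precise revision of the threshold friction velocity in these papers is not reproduced here; a widely used soil-moisture correction of the dry threshold (the Fécan et al. (1999) form) is sketched below for orientation, with the caveat that the constants follow that general literature and may differ from the revised CMAQ values.

```python
import numpy as np

def moisture_factor(w, clay_pct):
    """Fecan et al. (1999)-type enhancement of the threshold friction velocity by soil moisture.

    w: gravimetric soil moisture (%), clay_pct: clay content (%).  Constants follow the
    commonly cited form and may differ from the revised CMAQ implementation."""
    w_prime = 0.0014 * clay_pct**2 + 0.17 * clay_pct     # residual moisture threshold (%)
    w = np.asarray(w, dtype=float)
    f = np.ones_like(w)
    wet = w > w_prime
    f[wet] = np.sqrt(1.0 + 1.21 * (w[wet] - w_prime) ** 0.68)
    return f

def threshold_friction_velocity(ustar_t_dry, w, clay_pct):
    """Dry threshold times the moisture factor; dust is emitted only when u* exceeds this."""
    return ustar_t_dry * moisture_factor(w, clay_pct)

print(threshold_friction_velocity(0.35, w=[0.5, 2.0, 5.0], clay_pct=10.0))
```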
NASA Astrophysics Data System (ADS)
Anderson, C. J.; Wildhaber, M. L.; Wikle, C. K.; Moran, E. H.; Franz, K. J.; Dey, R.
2012-12-01
Climate change operates over a broad range of spatial and temporal scales. Understanding the effects of change on ecosystems requires accounting for the propagation of information and uncertainty across these scales. For example, to understand potential climate change effects on fish populations in riverine ecosystems, climate conditions predicted by coarse-resolution atmosphere-ocean global climate models must first be translated to the regional climate scale. In turn, this regional information is used to force watershed models, which are used to force river condition models, which impact the population response. A critical challenge in such a multiscale modeling environment is to quantify sources of uncertainty given the highly nonlinear nature of interactions between climate variables and the individual organism. We use a hierarchical modeling approach for accommodating uncertainty in multiscale ecological impact studies. This framework allows for uncertainty due to system models, model parameter settings, and stochastic parameterizations. This approach is a hybrid between physical (deterministic) downscaling and statistical downscaling, recognizing that there is uncertainty in both. We use NARCCAP data to determine confidence in the capability of climate models to simulate relevant processes and to quantify regional climate variability within the context of the hierarchical model of uncertainty quantification. By confidence, we mean the ability of the regional climate model to replicate observed mechanisms. We use the NCEP-driven simulations for this analysis. This provides a base from which regional change can be categorized as either a modification of previously observed mechanisms or the emergence of new processes. The management implications for these categories of change are significantly different, in that procedures to address impacts from existing processes may already be known and need only adjustment, whereas emergent processes may require new management strategies. The results from the hierarchical analysis of uncertainty are used to study the relative change in weights of the endangered Missouri River pallid sturgeon (Scaphirhynchus albus) under a 21st century climate scenario.
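A hedged, toy sketch of propagating uncertainty through such a chain of scale-linked models (global scenario, regional downscaling, river response, population response) by Monte Carlo sampling is given below; every component model, distribution, and coefficient is a stand-in, not the hierarchical Bayesian formulation or the pallid sturgeon model used in the study.

```python
import numpy as np

rng = np.random.default_rng(5)

def regional_climate(global_dT, eps_scale=0.4):
    """Downscale a global temperature change with additive regional-model error (toy)."""
    return 1.2 * global_dT + rng.normal(0.0, eps_scale)

def river_temperature(regional_dT, sensitivity=0.8, eps_scale=0.3):
    """Watershed/river thermal response to regional warming (toy stand-in)."""
    return sensitivity * regional_dT + rng.normal(0.0, eps_scale)

def fish_weight_change(river_dT, beta=-0.05, eps_scale=0.02):
    """Relative change in fish weight per degree of river warming (toy stand-in)."""
    return beta * river_dT + rng.normal(0.0, eps_scale)

n = 10_000
global_dT = rng.normal(2.5, 0.5, size=n)                 # scenario spread across GCMs
outcome = np.array([fish_weight_change(river_temperature(regional_climate(g)))
                    for g in global_dT])
lo, med, hi = np.percentile(outcome, [5, 50, 95])
print(f"relative weight change: median {med:+.3f}, 90% interval [{lo:+.3f}, {hi:+.3f}]")
```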
NASA Astrophysics Data System (ADS)
Alessandri, Andrea; Catalano, Franco; De Felice, Matteo; Van Den Hurk, Bart; Doblas Reyes, Francisco; Boussetta, Souhail; Balsamo, Gianpaolo; Miller, Paul A.
2017-08-01
The EC-Earth earth system model has been recently developed to include the dynamics of vegetation. In its original formulation, vegetation variability is simply operated by the Leaf Area Index (LAI), which affects climate basically by changing the vegetation physiological resistance to evapotranspiration. This coupling has been found to have only a weak effect on the surface climate modeled by EC-Earth. In reality, the effective sub-grid vegetation fractional coverage will vary seasonally and at interannual time-scales in response to leaf-canopy growth, phenology and senescence. Therefore it affects biophysical parameters such as the albedo, surface roughness and soil field capacity. To adequately represent this effect in EC-Earth, we included an exponential dependence of the vegetation cover on the LAI. By comparing two sets of simulations performed with and without the new variable fractional-coverage parameterization, spanning from centennial (twentieth century) simulations and retrospective predictions to the decadal (5-years), seasonal and weather time-scales, we show for the first time a significant multi-scale enhancement of vegetation impacts in climate simulation and prediction over land. Particularly large effects at multiple time scales are shown over boreal winter middle-to-high latitudes over Canada, West US, Eastern Europe, Russia and eastern Siberia due to the implemented time-varying shadowing effect by tree-vegetation on snow surfaces. Over Northern Hemisphere boreal forest regions the improved representation of vegetation cover tends to correct the winter warm biases, improves the climate change sensitivity, the decadal potential predictability as well as the skill of forecasts at seasonal and weather time-scales. Significant improvements of the prediction of 2 m temperature and rainfall are also shown over transitional land surface hot spots. Both the potential predictability at decadal time-scale and seasonal-forecasts skill are enhanced over Sahel, North American Great Plains, Nordeste Brazil and South East Asia, mainly related to improved performance in the surface evapotranspiration.
NASA Astrophysics Data System (ADS)
Alessandri, Andrea; Catalano, Franco; De Felice, Matteo; Van Den Hurk, Bart; Doblas Reyes, Francisco; Boussetta, Souhail; Balsamo, Gianpaolo; Miller, Paul A.
2017-04-01
The EC-Earth earth system model has been recently developed to include the dynamics of vegetation. In its original formulation, vegetation variability is simply operated by the Leaf Area Index (LAI), which affects climate basically by changing the vegetation physiological resistance to evapotranspiration. This coupling has been found to have only a weak effect on the surface climate modeled by EC-Earth. In reality, the effective sub-grid vegetation fractional coverage will vary seasonally and at interannual time-scales in response to leaf-canopy growth, phenology and senescence. Therefore it affects biophysical parameters such as the albedo, surface roughness and soil field capacity. To adequately represent this effect in EC-Earth, we included an exponential dependence of the vegetation cover on the LAI. By comparing two sets of simulations performed with and without the new variable fractional-coverage parameterization, spanning from centennial (20th Century) simulations and retrospective predictions to the decadal (5-years), seasonal and weather time-scales, we show for the first time a significant multi-scale enhancement of vegetation impacts in climate simulation and prediction over land. Particularly large effects at multiple time scales are shown over boreal winter middle-to-high latitudes over Canada, West US, Eastern Europe, Russia and eastern Siberia due to the implemented time-varying shadowing effect by tree-vegetation on snow surfaces. Over Northern Hemisphere boreal forest regions the improved representation of vegetation cover tends to correct the winter warm biases, improves the climate change sensitivity, the decadal potential predictability as well as the skill of forecasts at seasonal and weather time-scales. Significant improvements of the prediction of 2m temperature and rainfall are also shown over transitional land surface hot spots. Both the potential predictability at decadal time-scale and seasonal-forecasts skill are enhanced over Sahel, North American Great Plains, Nordeste Brazil and South East Asia, mainly related to improved performance in the surface evapotranspiration.
Zager, Michael G.; Barton, Hugh A.
2012-01-01
A systems-level mathematical model is presented that describes the effects of inhibiting the enzyme 5α-reductase (5aR) on the ventral prostate of the adult male rat under chronic administration of the 5aR inhibitor, finasteride. 5aR is essential for androgen regulation in males, both in normal conditions and disease states. The hormone kinetics and downstream effects on reproductive organs associated with perturbing androgen regulation are complex and not necessarily intuitive. Inhibition of 5aR decreases the metabolism of testosterone (T) to the potent androgen 5α-dihydrotestosterone (DHT). This results in decreased cell proliferation, fluid production and 5aR expression as well as increased apoptosis in the ventral prostate. These regulatory changes collectively result in decreased prostate size and function, which can be beneficial to men suffering from benign prostatic hyperplasia (BPH) and could play a role in prostate cancer. There are two distinct isoforms of 5aR in male humans and rats, and thus developing a 5aR inhibitor is a challenging pursuit. Several inhibitors are on the market for treatment of BPH, including finasteride and dutasteride. In this effort, comparisons of simulated vs. experimental T and DHT levels and prostate size are depicted, demonstrating the model accurately described an approximate 77% decrease in prostate size and nearly complete depletion of prostatic DHT following 21 days of daily finasteride dosing in rats. This implies T alone is not capable of maintaining a normal prostate size. Further model analysis suggests the possibility of alternative dosing strategies resulting in similar or greater effects on prostate size, due to complex kinetics between T, DHT and gene occupancy. With appropriate scaling and parameterization for humans, this model provides a multiscale modeling platform for drug discovery teams to test and generate hypotheses about drugging strategies for indications like BPH and prostate cancer, such as compound binding properties, dosing regimens, and target validation. PMID:22970204
Community Multiscale Air Quality Model
The U.S. EPA developed the Community Multiscale Air Quality (CMAQ) system to apply a “one atmosphere” multiscale and multi-pollutant modeling approach based mainly on the “first principles” description of the atmosphere. The multiscale capability is supported by the governing di...
Emulating RRTMG Radiation with Deep Neural Networks for the Accelerated Model for Climate and Energy
NASA Astrophysics Data System (ADS)
Pal, A.; Norman, M. R.
2017-12-01
The RRTMG radiation scheme in the Accelerated Model for Climate and Energy Multi-scale Model Framework (ACME-MMF) is a bottleneck and consumes approximately 50% of the computational time. Simulating a case with the RRTMG radiation scheme in ACME-MMF at high throughput and high resolution will therefore require a speed-up of this calculation while retaining physical fidelity. In this study, RRTMG radiation is emulated with Deep Neural Networks (DNNs). The first step towards this goal is to run a case with ACME-MMF and generate input data sets for the DNNs. A principal component analysis of these input data sets is carried out. Artificial data sets are created from the previous data sets to cover a wider input space. These artificial data sets are used in a standalone RRTMG radiation scheme to generate outputs in a cost-effective manner. The resulting input-output pairs are used to train DNNs of multiple architectures (DNN 1). Another DNN (DNN 2) is trained on the inputs to predict the emulation error. A reverse emulation is trained to map the outputs back to the inputs. An error-controlled code built from the two DNNs (1 and 2) determines when, or if, the original parameterization needs to be used.
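A minimal sketch of the emulator-training step is shown below using a generic multilayer perceptron (scikit-learn is used only for brevity); the input/output variables, shapes, and the synthetic "radiation" outputs are placeholders, not the actual RRTMG/ACME-MMF interface, and the error-predicting and reverse-emulation networks are omitted.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

# Placeholder training pairs: rows are atmospheric columns, columns are stacked
# profile variables (inputs) and heating-rate-like profiles (outputs).  In practice
# these would come from standalone RRTMG runs on ACME-MMF-derived states.
rng = np.random.default_rng(6)
X = rng.normal(size=(5000, 120))                          # hypothetical column inputs
W = rng.normal(size=(120, 30)) / np.sqrt(120)
y = np.tanh(X @ W) + 0.01 * rng.normal(size=(5000, 30))   # synthetic "radiation" outputs

xs, ys = StandardScaler().fit(X), StandardScaler().fit(y)
emulator = MLPRegressor(hidden_layer_sizes=(128, 128), activation="relu",
                        max_iter=200, random_state=0)
emulator.fit(xs.transform(X[:4000]), ys.transform(y[:4000]))

pred = ys.inverse_transform(emulator.predict(xs.transform(X[4000:])))
rmse = np.sqrt(np.mean((pred - y[4000:]) ** 2))
print(f"held-out RMSE: {rmse:.3f}")
```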
Impact of entrainment on cloud droplet spectra: theory, observations, and modeling
NASA Astrophysics Data System (ADS)
Grabowski, W.
2016-12-01
Understanding the impact of entrainment and mixing on the microphysical properties of warm boundary layer clouds is an important aspect of the representation of such clouds in large-scale models of weather and climate. Entrainment leads to a reduction of the liquid water content in agreement with fundamental thermodynamics, but its impact on the droplet spectrum is difficult to quantify in observations and modeling. For in-situ (e.g., aircraft) observations, it is impossible to follow air parcels and observe the processes that lead to changes of the droplet spectrum in different regions of a cloud. For similar reasons, traditional modeling methodologies (e.g., the Eulerian large eddy simulation) are not useful either. Moreover, both observations and modeling can resolve only a relatively narrow range of spatial scales. Theory, typically focusing on differences between the idealized concepts of homogeneous and inhomogeneous mixing, is also of limited use for the multiscale turbulent mixing between a cloud and its environment. This presentation will illustrate the above points and argue that Lagrangian large-eddy simulation with an appropriate subgrid-scale scheme may provide key insights and eventually lead to novel parameterizations for large-scale models.
NASA Astrophysics Data System (ADS)
Zhou, L.; Baker, K. R.; Napelenok, S. L.; Pouliot, G.; Elleman, R. A.; ONeill, S. M.; Urbanski, S. P.; Wong, D. C.
2017-12-01
Crop residue burning has long been a common practice in agriculture with the smoke emissions from the burning linked to negative health impacts. A field study in eastern Washington and northern Idaho in August 2013 consisted of multiple burns of well characterized fuels with nearby surface and aerial measurements including trace species concentrations, plume rise height and boundary layer structure. The chemical transport model CMAQ (Community Multiscale Air Quality Model) was used to assess the fire emissions and subsequent vertical plume transport. The study first compared assumptions made by the 2014 National Emission Inventory approach for crop residue burning with the fuel and emissions information obtained from the field study and then investigated the sensitivity of modeled carbon monoxide (CO) and PM2.5 concentrations to these different emission estimates and plume rise treatment with CMAQ. The study suggests that improvements to the current parameterizations are needed in order for CMAQ to reliably reproduce smoke plumes from burning. In addition, there is enough variability in the smoke emissions, stemming from variable field-specific information such as field size, that attempts to model crop residue burning should use field-specific information whenever possible.
Robust curb detection with fusion of 3D-Lidar and camera data.
Tan, Jun; Li, Jian; An, Xiangjing; He, Hangen
2014-05-21
Curb detection is an essential component of Autonomous Land Vehicles (ALV), especially important for safe driving in urban environments. In this paper, we propose a fusion-based curb detection method that exploits 3D-Lidar and camera data. More specifically, we first fuse the sparse 3D-Lidar points and high-resolution camera images together to recover a dense depth image of the captured scene. Based on the recovered dense depth image, we propose a filter-based method to estimate the normal direction within the image. Then, by using the multi-scale normal patterns based on the curb's geometric property, curb point features fitting the patterns are detected in the normal image row by row. After that, we construct a Markov Chain to model the consistency of curb points, which exploits the continuity of the curb, and thus the optimal curb path linking the curb points together can be efficiently estimated by dynamic programming. Finally, we perform post-processing operations to filter the outliers, parameterize the curbs and assign confidence scores to the detected curbs. Extensive evaluations clearly show that our proposed method can detect curbs with strong robustness at real-time speed for both static and dynamic scenes.
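A minimal sketch of the dynamic-programming step described above: given per-row scores for candidate curb columns (random numbers standing in for the normal-pattern response), link one candidate per image row while penalizing lateral jumps. The quadratic smoothness penalty and its weight are assumptions, not the paper's exact formulation.

```python
# Dynamic programming over per-row curb-point scores with a smoothness penalty.
import numpy as np

rng = np.random.default_rng(1)
rows, cols = 50, 200
score = rng.random((rows, cols))   # stand-in for multi-scale normal-pattern response
lam = 0.05                         # assumed penalty weight on lateral jumps

cost = np.full((rows, cols), -np.inf)
back = np.zeros((rows, cols), dtype=int)
cost[0] = score[0]
col_idx = np.arange(cols)

for r in range(1, rows):
    # transition[j, i]: value of reaching column i in row r from column j in row r-1
    transition = cost[r - 1][:, None] - lam * (col_idx[:, None] - col_idx[None, :]) ** 2
    back[r] = transition.argmax(axis=0)
    cost[r] = score[r] + transition.max(axis=0)

# Backtrack the optimal curb path (one column per image row).
path = [int(cost[-1].argmax())]
for r in range(rows - 1, 0, -1):
    path.append(int(back[r, path[-1]]))
path.reverse()
print(path[:10])
```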
Lim, H.; Hale, L. M.; Zimmerman, J. A.; ...
2015-01-05
In this study, we develop an atomistically informed crystal plasticity finite element (CP-FE) model for body-centered-cubic (BCC) α-Fe that incorporates non-Schmid stress dependent slip with temperature and strain rate effects. Based on recent insights obtained from atomistic simulations, we propose a new constitutive model that combines a generalized non-Schmid yield law with aspects from a line tension (LT) model for describing the activation enthalpy required for the motion of dislocation kinks. Atomistic calculations are conducted to quantify the non-Schmid effects while both experimental data and atomistic simulations are used to assess the temperature and strain rate effects. The parameterized constitutive equation is implemented into a BCC CP-FE model to simulate plastic deformation of single and polycrystalline Fe, which is compared with experimental data from the literature. This direct comparison demonstrates that the atomistically informed model accurately captures the effects of crystal orientation, temperature and strain rate on the flow behavior of single crystal Fe. Furthermore, our proposed CP-FE model exhibits temperature and strain rate dependent flow and yield surfaces in polycrystalline Fe that deviate from conventional CP-FE models based on Schmid's law.
Some conservation issues for the dynamical cores of NWP and climate models
NASA Astrophysics Data System (ADS)
Thuburn, J.
2008-03-01
The rationale for designing atmospheric numerical model dynamical cores with certain conservation properties is reviewed. The conceptual difficulties associated with the multiscale nature of realistic atmospheric flow, and its lack of time-reversibility, are highlighted. A distinction is made between robust invariants, which are conserved or nearly conserved in the adiabatic and frictionless limit, and non-robust invariants, which are not conserved in the limit even though they are conserved by exactly adiabatic frictionless flow. For non-robust invariants, a further distinction is made between processes that directly transfer some quantity from large to small scales, and processes involving a cascade through a continuous range of scales; such cascades may either be explicitly parameterized, or handled implicitly by the dynamical core numerics, accepting the implied non-conservation. An attempt is made to estimate the relative importance of different conservation laws. It is argued that satisfactory model performance requires spurious sources of a conservable quantity to be much smaller than any true physical sources; for several conservable quantities the magnitudes of the physical sources are estimated in order to provide benchmarks against which any spurious sources may be measured.
Efficient processing of fluorescence images using directional multiscale representations.
Labate, D; Laezza, F; Negi, P; Ozcan, B; Papadakis, M
2014-01-01
Recent advances in high-resolution fluorescence microscopy have enabled the systematic study of morphological changes in large populations of cells induced by chemical and genetic perturbations, facilitating the discovery of signaling pathways underlying diseases and the development of new pharmacological treatments. In these studies, though, due to the complexity of the data, quantification and analysis of morphological features are for the most part handled manually, significantly slowing data processing and often limiting the information gained to a descriptive level. Thus, there is an urgent need for developing highly efficient automated analysis and processing tools for fluorescent images. In this paper, we present the application of a method based on the shearlet representation for confocal image analysis of neurons. The shearlet representation is a recently developed method designed to combine multiscale data analysis with superior directional sensitivity, making this approach particularly effective for the representation of objects defined over a wide range of scales and with highly anisotropic features. Here, we apply the shearlet representation to problems of soma detection of neurons in culture and extraction of geometrical features of neuronal processes in brain tissue, and propose it as a new framework for large-scale fluorescent image analysis of biomedical data.
Modeling multiscale evolution of numerous voids in shocked brittle material.
Yu, Yin; Wang, Wenqiang; He, Hongliang; Lu, Tiecheng
2014-04-01
The influence of the evolution of numerous voids on macroscopic properties of materials is a multiscale problem that challenges computational research. A shock-wave compression model for brittle material, which can obtain both microscopic evolution and macroscopic shock properties, was developed using discrete element methods (lattice model). Using a model interaction-parameter-mapping procedure, qualitative features, as well as trends in the calculated shock-wave profiles, are shown to agree with experimental results. The shock wave splits into an elastic wave and a deformation wave in porous brittle materials, indicating significant shock plasticity. Void collapses in the deformation wave were the natural reason for volume shrinkage and deformation. However, media slippage and rotation deformations indicated by complex vortex patterns composed of relative velocity vectors were also confirmed as an important source of shock plasticity. With increasing pressure, the contribution from slippage deformation to the final plastic strain increased. Porosity was found to determine the amplitude of the elastic wave; porosity and shock stress together determine propagation speed of the deformation wave, as well as stress and strain on the final equilibrium state. Thus, shock behaviors of porous brittle material can be systematically designed for specific applications.
NASA Technical Reports Server (NTRS)
Collinson, Glyn A.; Dorelli, John Charles; Avanov, Leon A.; Lewis, Gethyn R.; Moore, Thomas E.; Pollock, Craig; Kataria, Dhiren O.; Bedington, Robert; Arridge, Chris S.; Chornay, Dennis J.;
2012-01-01
We report our findings comparing the geometric factor (GF) as determined from simulations and laboratory measurements of the new Dual Electron Spectrometer (DES) being developed at NASA Goddard Space Flight Center as part of the Fast Plasma Investigation on NASA's Magnetospheric Multiscale mission. Particle simulations are increasingly playing an essential role in the design and calibration of electrostatic analyzers, facilitating the identification and mitigation of the many sources of systematic error present in laboratory calibration. While equations for laboratory measurement of the GF have been described in the literature, these are not directly applicable to simulation since the two are carried out under substantially different assumptions and conditions, making direct comparison very challenging. Starting from first principles, we derive generalized expressions for the determination of the GF in simulation and laboratory, and discuss how we have estimated errors in both cases. Finally, we apply these equations to the new DES instrument and show that the results agree within errors. Thus we show that the techniques presented here will produce consistent results between laboratory and simulation, and present the first description of the performance of the new DES instrument in the literature.
Multiscale modeling of a rectifying bipolar nanopore: Comparing Poisson-Nernst-Planck to Monte Carlo
NASA Astrophysics Data System (ADS)
Matejczyk, Bartłomiej; Valiskó, Mónika; Wolfram, Marie-Therese; Pietschmann, Jan-Frederik; Boda, Dezső
2017-03-01
In the framework of a multiscale modeling approach, we present a systematic study of a bipolar rectifying nanopore using a continuum and a particle simulation method. The common ground in the two methods is the application of the Nernst-Planck (NP) equation to compute ion transport in the framework of the implicit-water electrolyte model. The difference is that the Poisson-Boltzmann theory is used in the Poisson-Nernst-Planck (PNP) approach, while the Local Equilibrium Monte Carlo (LEMC) method is used in the particle simulation approach (NP+LEMC) to relate the concentration profile to the electrochemical potential profile. Since we consider a bipolar pore which is short and narrow, we perform simulations using two-dimensional PNP. In addition, results of a non-linear version of PNP that takes crowding of ions into account are shown. We observe that the mean field approximation applied in PNP is appropriate to reproduce the basic behavior of the bipolar nanopore (e.g., rectification) for varying parameters of the system (voltage, surface charge, electrolyte concentration, and pore radius). We present current data that characterize the nanopore's behavior as a device, as well as concentration, electrical potential, and electrochemical potential profiles.
Demuzere, M; Orru, K; Heidrich, O; Olazabal, E; Geneletti, D; Orru, H; Bhave, A G; Mittal, N; Feliu, E; Faehnle, M
2014-12-15
In order to develop climate resilient urban areas and reduce emissions, several opportunities exist starting from conscious planning and design of green (and blue) spaces in these landscapes. Green urban infrastructure has been regarded as beneficial, e.g., by balancing water flows and providing thermal comfort. This article explores the existing evidence on the contribution of green spaces to climate change mitigation and adaptation services. We suggest a framework of ecosystem services for systematizing the evidence on the provision of bio-physical benefits (e.g. CO2 sequestration) as well as social and psychological benefits (e.g. improved health) that enable coping with (adaptation) or reducing the adverse effects (mitigation) of climate change. The multi-functional and multi-scale nature of green urban infrastructure complicates the categorization of services and benefits, since in reality the interactions between various benefits are manifold and appear on different scales. We will show the relevance of the benefits from green urban infrastructures on three spatial scales (i.e. city, neighborhood and site-specific scales). We will further report on co-benefits and trade-offs between the various services, indicating that a benefit could in turn be detrimental in relation to other functions. The manuscript identifies avenues for further research on the role of green urban infrastructure in different types of cities, climates and social contexts. Our systematic understanding of the bio-physical and social processes defining various services allows targeting stressors that may hamper the provision of green urban infrastructure services in individual behavior as well as in wider planning and environmental management in urban areas.
Improved methods for the measurement and analysis of stellar magnetic fields
NASA Technical Reports Server (NTRS)
Saar, Steven H.
1988-01-01
The paper presents several improved methods for the measurement of magnetic fields on cool stars which take into account simple radiative transfer effects and the exact Zeeman patterns. Using these methods, high-resolution, low-noise data can be fitted with theoretical line profiles to determine the mean magnetic field strength in stellar active regions and a model-dependent fraction of the stellar surface (filling factor) covered by these regions. Random errors in the derived field strength and filling factor are parameterized in terms of signal-to-noise ratio, wavelength, spectral resolution, stellar rotation rate, and the magnetic parameters themselves. Weak line blends, if left uncorrected, can have significant systematic effects on the derived magnetic parameters, and thus several methods are developed to compensate partially for them. The magnetic parameters determined by previous methods likely have systematic errors because of such line blends and because of line saturation effects. Other sources of systematic error are explored in detail. These sources of error currently make it difficult to determine the magnetic parameters of individual stars to better than about + or - 20 percent.
A Coupled fcGCM-GCE Modeling System: A 3D Cloud Resolving Model and a Regional Scale Model
NASA Technical Reports Server (NTRS)
Tao, Wei-Kuo
2005-01-01
Recent GEWEX Cloud System Study (GCSS) model comparison projects have indicated that cloud-resolving models (CRMs) agree with observations better than traditional single-column models in simulating various types of clouds and cloud systems from different geographic locations. Current and future NASA satellite programs can provide cloud, precipitation, aerosol and other data at very fine spatial and temporal scales. It requires a coupled global circulation model (GCM) and cloud-scale model (termed a super-parameterization or multi-scale modeling framework, MMF) to use these satellite data to improve the understanding of the physical processes that are responsible for the variation in global and regional climate and hydrological systems. The use of a GCM will enable global coverage, and the use of a CRM will allow for better and more sophisticated physical parameterization. NASA satellite and field campaign cloud related datasets can provide initial conditions as well as validation for both the MMF and CRMs. The Goddard MMF is based on the 2D Goddard Cumulus Ensemble (GCE) model and the Goddard finite volume general circulation model (fvGCM), and it has started production runs with two years of results (1998 and 1999). Also, at Goddard, we have implemented several Goddard microphysical schemes (21CE, several 31CE), Goddard radiation (including explicitly calculated cloud optical properties), and Goddard Land Information (LIS, that includes the CLM and NOAH land surface models) into a next generation regional scale model, WRF. In this talk, I will present: (1) A brief review on GCE model and its applications on precipitation processes (microphysical and land processes), (2) The Goddard MMF and the major difference between two existing MMFs (CSU MMF and Goddard MMF), and preliminary results (the comparison with traditional GCMs), (3) A discussion on the Goddard WRF version (its developments and applications), and (4) The characteristics of the four-dimensional cloud data sets (or cloud library) stored at Goddard.
NASA Technical Reports Server (NTRS)
Tao, Wei-Kuo
2006-01-01
Recent GEWEX Cloud System Study (GCSS) model comparison projects have indicated that cloud-resolving models (CRMs) agree with observations better than traditional single-column models in simulating various types of clouds and cloud systems from different geographic locations. Current and future NASA satellite programs can provide cloud, precipitation, aerosol and other data at very fine spatial and temporal scales. It requires a coupled global circulation model (GCM) and cloud-scale model (termed a super-parameterization or multi-scale modeling framework, MMF) to use these satellite data to improve the understanding of the physical processes that are responsible for the variation in global and regional climate and hydrological systems. The use of a GCM will enable global coverage, and the use of a CRM will allow for better and more sophisticated physical parameterization. NASA satellite and field campaign cloud related datasets can provide initial conditions as well as validation for both the MMF and CRMs. The Goddard MMF is based on the 2D Goddard Cumulus Ensemble (GCE) model and the Goddard finite volume general circulation model (fvGCM), and it has started production runs with two years results (1998 and 1999). Also, at Goddard, we have implemented several Goddard microphysical schemes (21CE, several 31CE), Goddard radiation (including explicitly calculated cloud optical properties), and Goddard Land Information (LIS, that includes the CLM and NOAH land surface models) into a next generation regional scale model, WRF. In this talk, I will present: (1) A brief review on GCE model and its applications on precipitation processes (microphysical and land processes), (2) The Goddard MMF and the major difference between two existing MMFs (CSU MMF and Goddard MMF), and preliminary results (the comparison with traditional GCMs), and (3) A discussion on the Goddard WRF version (its developments and applications).
Upper Mantle Seismic Structure for NE Tibet From Multiscale Tomography Method
NASA Astrophysics Data System (ADS)
Guo, B.; Liu, Q.; Chen, J.
2013-12-01
In real seismic experiments, the spatial sampling of rays inside the studied volume is inherently nonuniform because of the uneven distribution of seismic stations and earthquake events. Conventional seismic tomography schemes adopt a fixed cell size or grid spacing while the actual resolution varies. As a result, either phantom velocity anomalies may arise in regions that are poorly illuminated by the seismic rays, or the most detailed velocity model cannot be extracted from regions with fine ray coverage. We present an adaptive wavelet parameterization for the three-dimensional traveltime seismic tomography problem and apply it to the study of the tectonics of the northeastern Tibet region. Different from traditional parameterization schemes, we discretize the velocity model in terms of Haar wavelets, and the parameters are adjusted adaptively based on both the density and the azimuthal coverage of rays. Therefore, fine grids are used in regions with good data coverage, whereas poorly resolved areas are represented by coarse grids. Using traveltime data recorded by a portable seismic array and the regional seismic network in the northeastern Tibet area, we investigate the P wave velocity structure of the crust and upper mantle. Our results show that the structure of the crust and upper mantle in the northeastern Tibet region manifests strong lateral inhomogeneity, which appears not only in the adjacent areas between the different blocks, but also within each block. The velocity of the crust and upper mantle differs markedly between northeastern Tibet and the Ordos plateau: the former shows a low-velocity feature while the latter exhibits a high-velocity pattern. Between northeastern Tibet and the Ordos plateau there is a transition zone about 200 km wide, which is associated with an extremely complex velocity structure in the crust and upper mantle.
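The idea of keeping fine-scale coefficients only where ray coverage is good can be illustrated with a one-level 2D Haar decomposition; the grid size, the ray-density field, and the 0.5 threshold below are invented for the sketch, whereas the study uses an adaptive multilevel Haar parameterization.

```python
# One-level 2D Haar decomposition of a velocity-perturbation model, keeping
# detail coefficients only where an (assumed) ray-density field is high enough.
import numpy as np

rng = np.random.default_rng(2)
model = rng.normal(size=(64, 64))          # stand-in velocity perturbations
ray_density = rng.random((32, 32))         # assumed normalized hit count per coarse cell

a = model[0::2, 0::2]; b = model[0::2, 1::2]
c = model[1::2, 0::2]; d = model[1::2, 1::2]

approx = (a + b + c + d) / 4.0             # coarse (scaling) coefficients
dh = (a - b + c - d) / 4.0
dv = (a + b - c - d) / 4.0
dd = (a - b - c + d) / 4.0

keep = ray_density > 0.5                   # fine grids only where coverage is good
dh, dv, dd = dh * keep, dv * keep, dd * keep

# Inverse transform: well-covered regions keep full resolution, others stay coarse.
recon = np.empty_like(model)
recon[0::2, 0::2] = approx + dh + dv + dd
recon[0::2, 1::2] = approx - dh + dv - dd
recon[1::2, 0::2] = approx + dh - dv - dd
recon[1::2, 1::2] = approx - dh - dv + dd
```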
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nenes, Athanasios
The goal of this proposed project is to assess the climatic importance and sensitivity of the aerosol indirect effect (AIE) to cloud and aerosol processes and feedbacks, which include organic aerosol hygroscopicity, cloud condensation nuclei (CCN) activation kinetics, giant CCN, cloud-scale entrainment, ice nucleation in mixed-phase and cirrus clouds, and treatment of subgrid variability of vertical velocity. A key objective was to link aerosol, cloud microphysics and dynamics feedbacks in CAM5 with a suite of internally consistent and integrated parameterizations that provide the appropriate degrees of freedom to capture the various aspects of the aerosol indirect effect. The proposal integrated new parameterization elements into the cloud microphysics, moist turbulence and aerosol modules used by the NCAR Community Atmospheric Model version 5 (CAM5). The CAM5 model was then used to systematically quantify the uncertainties of aerosol indirect effects through a series of sensitivity tests with present-day and preindustrial aerosol emissions. New parameterization elements were developed as a result of these efforts, and new diagnostic tools and methodologies were also developed to quantify the impacts of aerosols on clouds and climate within fully coupled models. Observations were used to constrain key uncertainties in the aerosol-cloud links. Advanced sensitivity tools were developed and implemented to probe the drivers of cloud microphysical variability with unprecedented temporal and spatial scale. All these results have been published in top and high impact journals (or are in the final stages of publication). This proposal has also supported a number of outstanding graduate students.
Multiscale mechanics of graphene oxide and graphene based composite films
NASA Astrophysics Data System (ADS)
Cao, Changhong
The mechanical behavior of graphene oxide (GO) is length scale dependent: orders of magnitude different between the bulk forms and monolayer counterparts. Understanding the underlying mechanisms plays a significant role in their versatile application. A systematic multiscale mechanical study from monolayer to multilayer, including the interactions between layers of GO, can provide fundamental support for material engineering. In this thesis, a combined experimental and simulation approach was used to study the multiscale mechanics of graphene oxide, and the methods developed for the GO study are shown to be applicable also to the mechanical study of graphene-based composites. GO is a layered nanomaterial comprised of hierarchical units whose characteristic dimension lies between monolayer GO (0.7 nm - 1.2 nm) and bulk GO papers (≥ 1 μm). Mechanical behaviors of monolayer GO and GO nanosheets (10 nm - 100 nm) were comprehensively studied in this work. Monolayer GO was measured to have an average strength of 24.7 GPa, orders of magnitude higher than previously reported values for GO paper and approximately 50% of the 2D intrinsic strength of pristine graphene. The huge discrepancy between the strength of monolayer GO and that of bulk GO paper motivated the study of GO at the intermediate length scale (GO nanosheets). Experimental results showed that GO nanosheets possess high strength in the gigapascal range. Molecular Dynamics simulations showed that the transition in the failure behavior from interplanar fracture to intraplanar fracture was responsible for the huge strength discrepancy between nanometer scale GO and bulk GO papers. Additionally, the interfacial shear strength between GO layers was found to be a key contributing factor to the distinct mechanical behavior among hierarchical units of GO. The understanding of the multiscale mechanics of GO is transferable to heterogeneous layered nanomaterials, such as graphene-metal oxide based anode materials in Li-ion batteries. The novel methods developed in this work to study GO multilayered structures were also applied to study the mechanics of graphene-TiO2 composites. It was found that a critical thickness range of TiO2 deposition on graphene is required for the observed stiffness enhancement effect of graphene to influence the mechanical behavior of the composite.
Modeling human target acquisition in ground-to-air weapon systems
NASA Technical Reports Server (NTRS)
Phatak, A. V.; Mohr, R. L.; Vikmanis, M.; Wei, K. C.
1982-01-01
The problems associated with formulating and validating mathematical models for describing and predicting human target acquisition response are considered. In particular, the extension of the human observer model to include the acquisition phase as well as the tracking segment is presented. Relationship of the Observer model structure to the more complex Standard Optimal Control model formulation and to the simpler Transfer Function/Noise representation is discussed. Problems pertinent to structural identifiability and the form of the parameterization are elucidated. A systematic approach toward the identification of the observer acquisition model parameters from ensemble tracking error data is presented.
Reduction operators of Burgers equation.
Pocheketa, Oleksandr A; Popovych, Roman O
2013-02-01
The solution of the problem on reduction operators and nonclassical reductions of the Burgers equation is systematically treated and completed. A new proof of the theorem on the special "no-go" case of regular reduction operators is presented, and the representation of the coefficients of operators in terms of solutions of the initial equation is constructed for this case. All possible nonclassical reductions of the Burgers equation to single ordinary differential equations are exhaustively described. Any Lie reduction of the Burgers equation proves to be equivalent via the Hopf-Cole transformation to a parameterized family of Lie reductions of the linear heat equation.
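The Hopf-Cole transformation mentioned above can be checked symbolically. The sketch below verifies, for one arbitrarily chosen solution of the linear heat equation, that u = -2*nu*phi_x/phi satisfies the Burgers equation; the particular choice of phi is only an example.

```python
# Symbolic check of the Hopf-Cole transformation for one heat-equation solution.
import sympy as sp

x, t, k = sp.symbols('x t k', real=True)
nu = sp.symbols('nu', positive=True)

# phi solves the heat equation phi_t = nu * phi_xx (example solution)
phi = 2 + sp.exp(-nu * k**2 * t) * sp.cos(k * x)

u = -2 * nu * sp.diff(phi, x) / phi
burgers_residual = sp.diff(u, t) + u * sp.diff(u, x) - nu * sp.diff(u, x, 2)

print(sp.simplify(burgers_residual))   # expected: 0
```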
NASA Astrophysics Data System (ADS)
Macioł, Piotr; Regulski, Krzysztof
2016-08-01
We present a process of semantic meta-model development for data management in an adaptable multiscale modeling framework. The main problems in ontology design are discussed, and a solution achieved as a result of the research is presented. The main concepts concerning the application and data management background for multiscale modeling were derived from the AM3 approach—object-oriented Agile multiscale modeling methodology. The ontological description of multiscale models enables validation of semantic correctness of data interchange between submodels. We also present a possibility of using the ontological model as a supervisor in conjunction with a multiscale model controller and a knowledge base system. Multiscale modeling formal ontology (MMFO), designed for describing multiscale models' data and structures, is presented. A need for applying meta-ontology in the MMFO development process is discussed. Examples of MMFO application in describing thermo-mechanical treatment of metal alloys are discussed. Present and future applications of MMFO are described.
NASA Technical Reports Server (NTRS)
Pineda, Evan J.; Fassin, Marek; Bednarcyk, Brett A.; Reese, Stefanie; Simon, Jaan-Willem
2017-01-01
Three different multiscale models, based on the method of cells (generalized and high fidelity) micromechanics models were developed and used to predict the elastic properties of C/C-SiC composites. In particular, the following multiscale modeling strategies were employed: Concurrent multiscale modeling of all phases using the generalized method of cells, synergistic (two-way coupling in space) multiscale modeling with the generalized method of cells, and hierarchical (one-way coupling in space) multiscale modeling with the high fidelity generalized method of cells. The three models are validated against data from a hierarchical multiscale finite element model in the literature for a repeating unit cell of C/C-SiC. Furthermore, the multiscale models are used in conjunction with classical lamination theory to predict the stiffness of C/C-SiC plates manufactured via a wet filament winding and liquid silicon infiltration process recently developed by the German Aerospace Institute.
Corsini, Chiara; Biglino, Giovanni; Schievano, Silvia; Hsia, Tain-Yen; Migliavacca, Francesco; Pennati, Giancarlo; Taylor, Andrew M
2014-08-01
The size of the modified Blalock-Taussig shunt and the additional presence of aortic coarctation can affect the hemodynamics of the Norwood physiology. Multiscale modeling was used to gather insight into the effects of these variables, in particular on coronary perfusion. A model was reconstructed from cardiac magnetic resonance imaging data of a representative patient, and then simplified with computer-aided design software. Changes were systematically imposed on the semi-idealized three-dimensional model, resulting in a family of nine models (3-, 3.5-, and 4-mm shunt diameter; 0%, 60%, and 90% coarctation severity). Each model was coupled to a lumped parameter network representing the remainder of the circulation to run multiscale simulations. Simulations were repeated including the effect of preserved cerebral perfusion. The concomitant presence of a large shunt and tight coarctation was detrimental in terms of coronary perfusion (13.4% maximal reduction, 1.07 versus 0.927 mL/s) and oxygen delivery (29% maximum reduction, 422 versus 300 mL·min⁻¹·m⁻²). A variation in the ratio of pulmonary to systemic blood flow from 0.9 to 1.6 also indicated a "stealing" phenomenon to the detriment of the coronary circulation. A difference could be further appreciated in the computational ventricular pressure-volume loops, with augmented systolic pressures and decreased stroke volumes for tighter coarctation. Accounting for constant cerebral perfusion did not produce substantially different results. Multiscale simulations performed in a parametric fashion revealed a reduction in coronary perfusion in the presence of a large modified Blalock-Taussig shunt and severe coarctation in Norwood patients.
Laser induced hierarchical calcium phosphate structures.
Kurella, Anil; Dahotre, Narendra B
2006-11-01
The surface properties of biomedical implant materials control the dynamic interactions at tissue-implant interfaces. At such interfaces, if the nanoscale features influence protein interactions, those of the microscale and mesoscale aid cell orientation and provide tissue integration, respectively. It seems imperative that the synthetic materials expected to replace natural hard tissues are engineered to mimic the complexity of their hierarchical assembly. However, the current surface engineering approaches are single scaled. It is demonstrated that, using laser surface engineering, a controlled multiscale surface can be synthesized for bioactive functions. A systematic organization of a bioactive calcium phosphate coating with multiphase composition on a Ti-alloy substrate, ranging from the nano- to the mesoscale, has been achieved by effectively controlling the thermophysical interactions during laser processing. The morphology of the coating consisted of a periodic arrangement of Ti-rich and Ca-P-deficient star-like phases uniformly distributed inside a Ca-P-rich self-assembled cellular structure, with the presence of CaO, alpha-tricalcium phosphate, CaTiO3, TiO2 and Ti phases in the coating matrix. The cellular structures ranged in diameter from 2.5 μm to 10 μm as an assembly of cuboid-shaped particles with dimensions of approximately 200 nm × 1 μm. The multiscale texture also included nanoscale particles that are the precursors for many of these phases. The rapid cooling associated with the laser processing resulted in the formation, organization and dimensional control of the Ca-P-rich glassy phase into a micron-scale cellular morphology and submicron-scale clusters of the CaTiO3 phase inside the cellular structures. The self-assembly of the coating into a multiscale structure was influenced by chemical and physical interactions among the multiple phases that evolved during laser processing.
Nonholonomic Hamiltonian Method for Meso-macroscale Simulations of Reacting Shocks
NASA Astrophysics Data System (ADS)
Fahrenthold, Eric; Lee, Sangyup
2015-06-01
The seamless integration of macroscale, mesoscale, and molecular scale models of reacting shock physics has been hindered by dramatic differences in the model formulation techniques normally used at different scales. In recent research the authors have developed the first unified discrete Hamiltonian approach to multiscale simulation of reacting shock physics. Unlike previous work, the formulation employs reacting thermomechanical Hamiltonian formulations at all scales, including the continuum. Unlike previous work, the formulation employs a nonholonomic modeling approach to systematically couple the models developed at all scales. Example applications of the method show meso-macroscale shock to detonation simulations in nitromethane and RDX. Research supported by the Defense Threat Reduction Agency.
Parallelization of fine-scale computation in Agile Multiscale Modelling Methodology
NASA Astrophysics Data System (ADS)
Macioł, Piotr; Michalik, Kazimierz
2016-10-01
Nowadays, multiscale modelling of material behavior is an extensively developed area. An important obstacle to its wide application is its high computational demand. Among other approaches, the parallelization of multiscale computations is a promising solution. Heterogeneous multiscale models are good candidates for parallelization, since communication between sub-models is limited. In this paper, the possibility of parallelizing multiscale models based on the Agile Multiscale Methodology framework is discussed. A sequential, FEM-based macroscopic model has been combined with concurrently computed fine-scale models employing the MatCalc thermodynamic simulator. The main issues investigated in this work are: (i) the speed-up of multiscale models, with special focus on fine-scale computations, and (ii) the decrease in computation quality enforced by parallel execution. Speed-up has been evaluated on the basis of Amdahl's law. The problem of `delay error', arising from the parallel execution of fine-scale sub-models controlled by the sequential macroscopic sub-model, is discussed. Some technical aspects of combining third-party commercial modelling software with an in-house multiscale framework and an MPI library are also discussed.
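The speed-up bound referred to above follows directly from Amdahl's law; the parallel fraction below is an assumed value used only for illustration.

```python
# Amdahl's law: speed-up when the fine-scale part (fraction p of the runtime)
# is parallelized over n workers while the macroscale FEM part stays serial.
def amdahl_speedup(p: float, n: int) -> float:
    return 1.0 / ((1.0 - p) + p / n)

p = 0.9   # assumed fraction of runtime spent in fine-scale sub-models
for n in (2, 4, 8, 16, 64):
    print(n, round(amdahl_speedup(p, n), 2))
# even with unlimited workers the speed-up is bounded by 1/(1-p) = 10
```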
Bae, Won-Gyu; Kim, Hong Nam; Kim, Doogon; Park, Suk-Hee; Jeong, Hoon Eui; Suh, Kahp-Yang
2014-02-01
Multiscale, hierarchically patterned surfaces, such as lotus leaves, butterfly wings, and the adhesion pads of gecko lizards, are abundantly found in nature, where microstructures are usually used to strengthen the mechanical stability while nanostructures offer the main functionality, i.e., wettability, structural color, or dry adhesion. To emulate such hierarchical structures in nature, multiscale, multilevel patterning has been extensively utilized for the last few decades towards various applications ranging from wetting control and structural colors to tissue scaffolds. In this review, we highlight recent advances in scalable multiscale patterning to bring about improved functions that can even surpass those found in nature, with particular focus on the analogy between natural and synthetic architectures in terms of the role of different length scales. This review is organized into four sections. First, the role and importance of multiscale, hierarchical structures is described with four representative examples. Second, recent achievements in multiscale patterning are introduced with their strengths and weaknesses. Third, four application areas of wetting control, dry adhesives, selectively filtrating membranes, and multiscale tissue scaffolds are overviewed, stressing how and why multiscale structures need to be incorporated to achieve their performance. Finally, we present future directions and challenges for scalable, multiscale patterned surfaces.
Uncertainty quantification for optical model parameters
Lovell, A. E.; Nunes, F. M.; Sarich, J.; ...
2017-02-21
Although uncertainty quantification has been making its way into nuclear theory, these methods have yet to be explored in the context of reaction theory. For example, it is well known that different parameterizations of the optical potential can result in different cross sections, but these differences have not been systematically studied and quantified. The purpose of our work is to investigate the uncertainties in nuclear reactions that result from fitting a given model to elastic-scattering data, as well as to study how these uncertainties propagate to the inelastic and transfer channels. We use statistical methods to determine a best fit and create corresponding 95% confidence bands. A simple model of the process is fit to elastic-scattering data and used to predict either inelastic or transfer cross sections. In this initial work, we assume that our model is correct, and the only uncertainties come from the variation of the fit parameters. Here, we study a number of reactions involving neutron and deuteron projectiles with energies in the range of 5–25 MeV/u, on targets with mass A=12–208. We investigate the correlations between the parameters in the fit. The case of deuterons on 12C is discussed in detail: the elastic-scattering fit and the prediction of 12C(d,p)13C transfer angular distributions, using both uncorrelated and correlated χ2 minimization functions. The general features for all cases are compiled in a systematic manner to identify trends. This work shows that, in many cases, the correlated χ2 functions (in comparison to the uncorrelated χ2 functions) provide a more natural parameterization of the process. These correlated functions do, however, produce broader confidence bands. Further optimization may require improvement in the models themselves and/or more information included in the fit.
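A generic illustration of the fit-and-propagate procedure (not the reaction-theory model itself): fit a toy two-parameter form to synthetic data with an uncorrelated chi-squared and turn the parameter covariance into an approximate 95% band on the prediction. The model form, data, and errors are invented for the sketch.

```python
# Fit a toy two-parameter model and propagate the parameter covariance to a
# 95% confidence band via linearized error propagation.
import numpy as np
from scipy.optimize import curve_fit

def model(x, a, b):
    return a * np.exp(-b * x)

rng = np.random.default_rng(3)
x = np.linspace(0.1, 5.0, 30)
sigma = 0.05 * np.ones_like(x)
y = model(x, 2.0, 0.7) + rng.normal(scale=sigma)

popt, pcov = curve_fit(model, x, y, p0=[1.0, 1.0], sigma=sigma, absolute_sigma=True)

# Jacobian of the model with respect to (a, b), evaluated at the best fit.
J = np.column_stack([np.exp(-popt[1] * x), -popt[0] * x * np.exp(-popt[1] * x)])
band = 1.96 * np.sqrt(np.einsum('ij,jk,ik->i', J, pcov, J))

y_fit = model(x, *popt)
lower, upper = y_fit - band, y_fit + band
print(popt, pcov[0, 1] / np.sqrt(pcov[0, 0] * pcov[1, 1]))  # best fit and a-b correlation
```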
DOE Office of Scientific and Technical Information (OSTI.GOV)
Holanda, R. F. L.; Lima, J. A. S.; Ribeiro, M. B., E-mail: limajas@astro.iag.usp.b
In this Letter, we propose a new and model-independent cosmological test for the distance-duality (DD) relation, η = D_L(z)(1 + z)^-2 / D_A(z) = 1, where D_L and D_A are, respectively, the luminosity and angular diameter distances. For D_L we consider two sub-samples of Type Ia supernovae (SNe Ia) taken from Constitution data, whereas D_A distances are provided by two samples of galaxy clusters compiled by De Filippis et al. and Bonamente et al. by combining the Sunyaev-Zeldovich effect and X-ray surface brightness. The SNe Ia redshifts of each sub-sample were carefully chosen to coincide with the ones of the associated galaxy cluster sample (Δz < 0.005), thereby allowing a direct test of the DD relation. Since for very low redshifts D_A(z) ≈ D_L(z), we have tested the DD relation by assuming that η is a function of the redshift parameterized by two different expressions: η(z) = 1 + η_0 z and η(z) = 1 + η_0 z/(1 + z), where η_0 is a constant parameter quantifying a possible departure from the strict validity of the reciprocity relation (η_0 = 0). In the best scenario (linear parameterization), we obtain η_0 = -0.28 ± 0.44 (2σ, statistical + systematic errors) for the De Filippis et al. sample (elliptical geometry), a result only marginally compatible with the DD relation. However, for the Bonamente et al. sample (spherical geometry) the constraint is η_0 = -0.42 ± 0.34 (3σ, statistical + systematic errors), which is clearly incompatible with the duality-distance relation.
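For the linear parameterization above, the test reduces to a one-parameter weighted least-squares fit; the redshifts, distances, and errors below are synthetic stand-ins, not the Constitution or cluster samples.

```python
# Fit eta(z) = 1 + eta0*z to eta_obs = D_L*(1+z)**-2 / D_A at matched redshifts.
import numpy as np

z = np.linspace(0.05, 0.5, 25)                 # matched SN Ia / cluster redshifts (synthetic)
D_A = 1000.0 * z / (1.0 + 0.3 * z)             # stand-in angular diameter distances [Mpc]
D_L = D_A * (1.0 + z) ** 2 * (1.0 - 0.1 * z)   # stand-in luminosity distances with a DD violation
sigma = 0.05 * np.ones_like(z)                 # assumed errors on eta_obs

eta_obs = D_L * (1.0 + z) ** -2 / D_A
# chi2(eta0) = sum(((eta_obs - 1 - eta0*z)/sigma)**2) has an analytic minimum:
w = 1.0 / sigma**2
eta0 = np.sum(w * z * (eta_obs - 1.0)) / np.sum(w * z**2)
eta0_err = 1.0 / np.sqrt(np.sum(w * z**2))
print(f"eta0 = {eta0:.3f} +/- {eta0_err:.3f} (1 sigma)")
```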
NEW EQUATIONS OF STATE IN SIMULATIONS OF CORE-COLLAPSE SUPERNOVAE
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hempel, M.; Liebendoerfer, M.; Fischer, T.
2012-03-20
We discuss three new equations of state (EOS) in core-collapse supernova simulations. The new EOS are based on the nuclear statistical equilibrium model of Hempel and Schaffner-Bielich (HS), which includes excluded volume effects and relativistic mean-field (RMF) interactions. We consider the RMF parameterizations TM1, TMA, and FSUgold. These EOS are implemented into our spherically symmetric core-collapse supernova model, which is based on general relativistic radiation hydrodynamics and three-flavor Boltzmann neutrino transport. The results obtained for the new EOS are compared with the widely used EOS of H. Shen et al. and Lattimer and Swesty. The systematic comparison shows that the model description of inhomogeneous nuclear matter is as important as the parameterization of the nuclear interactions for the supernova dynamics and the neutrino signal. Furthermore, several new aspects of nuclear physics are investigated: the HS EOS contains distributions of nuclei, including nuclear shell effects. The appearance of light nuclei, e.g., deuterium and tritium, is also explored, which can become as abundant as alphas and free protons. In addition, we investigate the black hole formation in failed core-collapse supernovae, which is mainly determined by the high-density EOS. We find that temperature effects lead to a systematically faster collapse for the non-relativistic LS EOS in comparison with the RMF EOS. We deduce a new correlation for the time until black hole formation, which allows the determination of the maximum mass of proto-neutron stars, if the neutrino signal from such a failed supernova would be measured in the future. This would give a constraint for the nuclear EOS at finite entropy, complementary to observations of cold neutron stars.
Pion, Kaon, Proton and Antiproton Production in Proton-Proton Collisions
NASA Technical Reports Server (NTRS)
Norbury, John W.; Blattnig, Steve R.
2008-01-01
Inclusive pion, kaon, proton, and antiproton production from proton-proton collisions is studied at a variety of proton energies. Various available parameterizations of Lorentz-invariant differential cross sections as a function of transverse momentum and rapidity are compared with experimental data. The Badhwar and Alper parameterizations are moderately satisfactory for charged pion production. The Badhwar parameterization provides the best fit for charged kaon production. For proton production, the Alper parameterization is best, and for antiproton production the Carey parameterization works best. However, no parameterization is able to fully account for all the data.
Multiscale analysis of heart rate dynamics: entropy and time irreversibility measures.
Costa, Madalena D; Peng, Chung-Kang; Goldberger, Ary L
2008-06-01
Cardiovascular signals are largely analyzed using traditional time and frequency domain measures. However, such measures fail to account for important properties related to multiscale organization and non-equilibrium dynamics. The complementary role of conventional signal analysis methods and emerging multiscale techniques is, therefore, an important frontier area of investigation. The key finding of this presentation is that two recently developed multiscale computational tools--multiscale entropy and multiscale time irreversibility--are able to extract information from cardiac interbeat interval time series not contained in traditional methods based on mean, variance or Fourier spectrum (two-point correlation) techniques. These new methods, with careful attention to their limitations, may be useful in diagnostics, risk stratification and detection of toxicity of cardiac drugs.
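The two steps behind multiscale entropy, coarse-graining the interbeat series at increasing scales and computing sample entropy at each scale, can be sketched as follows; the parameter choices (m = 2, r = 0.15 times the standard deviation) are common defaults and the white-noise series is only a stand-in for real interbeat data.

```python
# Coarse-grain a time series at increasing scale factors and compute sample
# entropy at each scale; m and r are assumed, commonly used values.
import numpy as np

def coarse_grain(x, scale):
    n = len(x) // scale
    return x[:n * scale].reshape(n, scale).mean(axis=1)

def sample_entropy(x, m=2, r=0.2):
    def count_matches(mm):
        templates = np.array([x[i:i + mm] for i in range(len(x) - mm)])
        dist = np.max(np.abs(templates[:, None, :] - templates[None, :, :]), axis=2)
        return int((dist <= r).sum() - len(templates))   # exclude self-matches
    B, A = count_matches(m), count_matches(m + 1)
    return -np.log(A / B) if A > 0 and B > 0 else float('inf')

rng = np.random.default_rng(5)
rr = rng.normal(size=1000)          # stand-in for an interbeat-interval series
r = 0.15 * np.std(rr)               # tolerance fixed from the original series
for scale in (1, 2, 4, 8):
    print(scale, round(sample_entropy(coarse_grain(rr, scale), m=2, r=r), 3))
```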
Hydraulic Conductivity Estimation using Bayesian Model Averaging and Generalized Parameterization
NASA Astrophysics Data System (ADS)
Tsai, F. T.; Li, X.
2006-12-01
Non-uniqueness of the parameterization scheme is an inherent problem in groundwater inverse modeling due to limited data. To cope with the non-uniqueness problem of parameterization, we introduce a Bayesian Model Averaging (BMA) method to integrate a set of selected parameterization methods. The estimation uncertainty in BMA includes the uncertainty in individual parameterization methods as the within-parameterization variance and the uncertainty from using different parameterization methods as the between-parameterization variance. Moreover, the generalized parameterization (GP) method is considered in the geostatistical framework in this study. The GP method aims at increasing the flexibility of parameterization through the combination of a zonation structure and an interpolation method. The use of BMA with GP avoids over-confidence in a single parameterization method. A normalized least-squares estimation (NLSE) is adopted to calculate the posterior probability for each GP. We employ the adjoint state method for the sensitivity analysis on the weighting coefficients in the GP method. The adjoint state method is also applied to the NLSE problem. The proposed methodology is applied to the Alamitos Barrier Project (ABP) in California, where the spatially distributed hydraulic conductivity is estimated. The optimal weighting coefficients embedded in GP are identified through maximum likelihood estimation (MLE), where the misfits between the observed and calculated groundwater heads are minimized. The conditional mean and conditional variance of the estimated hydraulic conductivity distribution using BMA are obtained to assess the estimation uncertainty.
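The BMA bookkeeping described above reduces to weighting each parameterization's estimate by its posterior probability; the per-model means, variances, and weights below are invented numbers used only to show the within- and between-parameterization variance terms.

```python
# Combine K parameterization-specific estimates of ln(K) at one location with
# Bayesian Model Averaging; the numbers are illustrative, not ABP results.
import numpy as np

post_prob = np.array([0.5, 0.3, 0.2])      # posterior model probabilities (e.g., from NLSE)
cond_mean = np.array([-4.1, -3.8, -4.5])   # conditional means from each GP scheme
cond_var = np.array([0.20, 0.35, 0.25])    # within-parameterization variances

bma_mean = np.sum(post_prob * cond_mean)
within = np.sum(post_prob * cond_var)
between = np.sum(post_prob * (cond_mean - bma_mean) ** 2)
bma_var = within + between

print(f"BMA mean = {bma_mean:.3f}, variance = {bma_var:.3f} "
      f"(within {within:.3f} + between {between:.3f})")
```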
Improving Mixed-phase Cloud Parameterization in Climate Model with the ACRF Measurements
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Zhien
Mixed-phase cloud microphysical and dynamical processes are still poorly understood, and their representation in GCMs is a major source of uncertainties in overall cloud feedback in GCMs. Thus improving mixed-phase cloud parameterizations in climate models is critical to reducing the climate forecast uncertainties. This study aims at providing improved knowledge of mixed-phase cloud properties from the long-term ACRF observations and improving mixed-phase cloud simulations in the NCAR Community Atmosphere Model version 5 (CAM5). The key accomplishments are: 1) An improved retrieval algorithm was developed to provide liquid droplet concentration for drizzling or mixed-phase stratiform clouds. 2) A new ice concentration retrieval algorithm for stratiform mixed-phase clouds was developed. 3) A strong seasonal aerosol impact on ice generation in Arctic mixed-phase clouds was identified, which is mainly attributed to the high dust occurrence during the spring season. 4) A suite of multi-sensor algorithms was applied to long-term ARM observations at the Barrow site to provide a complete dataset (LWC and effective radius profiles for the liquid phase, and IWC, Dge profiles and ice concentration for the ice phase) to characterize Arctic stratiform mixed-phase clouds. This multi-year stratiform mixed-phase cloud dataset provides the necessary information to study related processes, evaluate model stratiform mixed-phase cloud simulations, and improve model stratiform mixed-phase cloud parameterization. 5) A new in situ data analysis method was developed to quantify liquid mass partition in convective mixed-phase clouds. For the first time, we reliably compared liquid mass partitions in stratiform and convective mixed-phase clouds. Due to the different dynamics in stratiform and convective mixed-phase clouds, the temperature dependencies of liquid mass partitions are significantly different, owing to much higher ice concentrations in convective mixed-phase clouds. 6) Systematic evaluations of mixed-phase cloud simulations by CAM5 were performed. Measurement results indicate that ice concentrations control stratiform mixed-phase cloud properties. The improvement of ice concentration parameterization in CAM5 was done in close collaboration with Dr. Xiaohong Liu, PNNL (now at University of Wyoming).
Importance of ensembles in projecting regional climate trends
NASA Astrophysics Data System (ADS)
Arritt, Raymond; Daniel, Ariele; Groisman, Pavel
2016-04-01
We have performed an ensemble of simulations using RegCM4 to examine the ability to reproduce observed trends in precipitation intensity and to project future changes through the 21st century for the central United States. We created a matrix of simulations over the CORDEX North America domain for 1950-2099 by driving the regional model with two different global models (HadGEM2-ES and GFDL-ESM2M, both for RCP8.5), by performing simulations at both 50 km and 25 km grid spacing, and by using three different convective parameterizations. The result is a set of 12 simulations (two GCMs by two resolutions by three convective parameterizations) that can be used to systematically evaluate the influence of simulation design on predicted precipitation. The two global models were selected to bracket the range of climate sensitivity in the CMIP5 models: HadGEM2-ES has the highest ECS of the CMIP5 models, while GFDL-ESM2M has one of the lowest. Our evaluation metrics differ from many other RCM studies in that we focus on the skill of the models in reproducing past trends rather than the mean climate state. Trends in frequency of extreme precipitation (defined as amounts exceeding 76.2 mm/day) for most simulations are similar to the observed trend but with notable variations depending on RegCM4 configuration and on the driving GCM. There are complex interactions among resolution, choice of convective parameterization, and the driving GCM that carry over into the future climate projections. We also note that biases in the current climate do not correspond to biases in trends. As an example of these points, the Emanuel scheme is consistently "wet" (positive bias in precipitation) yet it produced the smallest precipitation increase of the three convective parameterizations when used in simulations driven by HadGEM2-ES. However, it produced the largest increase when driven by GFDL-ESM2M. These findings reiterate that ensembles using multiple RCM configurations and driving GCMs are essential for projecting regional climate change, even when a single RCM is used. This research was sponsored by the U.S. Department of Agriculture National Institute of Food and Agriculture.
Multiscale measurement error models for aggregated small area health data.
Aregay, Mehreteab; Lawson, Andrew B; Faes, Christel; Kirby, Russell S; Carroll, Rachel; Watjou, Kevin
2016-08-01
Spatial data are often aggregated from a finer (smaller) to a coarser (larger) geographical level. The process of data aggregation induces a scaling effect which smoothes the variation in the data. To address the scaling problem, multiscale models that link the convolution models at different scale levels via the shared random effect have been proposed. One of the main goals in aggregated health data is to investigate the relationship between predictors and an outcome at different geographical levels. In this paper, we extend multiscale models to examine whether a predictor effect at a finer level holds true at a coarser level. To adjust for predictor uncertainty due to aggregation, we applied measurement error models in the framework of the multiscale approach. To assess the benefit of using multiscale measurement error models, we compare the performance of multiscale models with and without measurement error in both real and simulated data. We found that ignoring the measurement error in multiscale models underestimates the regression coefficient, while it overestimates the variance of the spatially structured random effect. On the other hand, accounting for the measurement error in multiscale models provides a better model fit and unbiased parameter estimates.
Assessment of the GECKO-A Modeling Tool and Simplified 3D Model Parameterizations for SOA Formation
NASA Astrophysics Data System (ADS)
Aumont, B.; Hodzic, A.; La, S.; Camredon, M.; Lannuque, V.; Lee-Taylor, J. M.; Madronich, S.
2014-12-01
Explicit chemical mechanisms aim to embody the current knowledge of the transformations occurring in the atmosphere during the oxidation of organic matter. These explicit mechanisms are therefore useful tools to explore the fate of organic matter during its tropospheric oxidation and examine how these chemical processes shape the composition and properties of the gaseous and the condensed phases. Furthermore, explicit mechanisms provide powerful benchmarks to design and assess simplified parameterizations to be included in 3D models. Nevertheless, the explicit mechanism describing the oxidation of hydrocarbons with backbones larger than a few carbon atoms involves millions of secondary organic compounds, far exceeding the size of chemical mechanisms that can be written manually. Data processing tools can however be designed to overcome these difficulties and automatically generate consistent and comprehensive chemical mechanisms on a systematic basis. The Generator for Explicit Chemistry and Kinetics of Organics in the Atmosphere (GECKO-A) has been developed for the automatic writing of explicit chemical schemes of organic species and their partitioning between the gas and condensed phases. GECKO-A can be viewed as an expert system that mimics the steps by which chemists might develop chemical schemes. GECKO-A generates chemical schemes according to a prescribed protocol assigning reaction pathways and kinetics data on the basis of experimental data and structure-activity relationships. In its current version, GECKO-A can generate the full atmospheric oxidation scheme for most linear, branched and cyclic precursors, including alkanes and alkenes up to C25. Assessments of the GECKO-A modeling tool based on chamber SOA observations will be presented. GECKO-A was recently used to design a parameterization for SOA formation based on a Volatility Basis Set (VBS) approach. First results will be presented.
NASA Astrophysics Data System (ADS)
Baek, Sunghye
2017-07-01
For more efficient and accurate computation of radiative fluxes, improvements have been made in two aspects: integration of the radiative transfer equation over space and over angle. First, the treatment of the Monte Carlo Independent Column Approximation (McICA) is modified with a focus on efficiency, using a reduced number of random samples ("G-packed") within a reconstructed and unified radiation package. The original McICA takes 20% of the radiation CPU time in the Global/Regional Integrated Model system (GRIMs). The CPU time consumed by McICA is reduced by 70% without compromising accuracy. Second, parameterizations of the shortwave two-stream approximation are revised to reduce errors with respect to the 16-stream discrete ordinate method. The delta-scaled two-stream approximation (TSA) is used almost universally in global circulation models (GCMs) but contains systematic errors that overestimate forward-peak scattering as solar elevation decreases. These errors are alleviated by adjusting the parameterizations for each scattering element: aerosol, liquid, ice and snow cloud particles. Parameterizations are determined with 20,129 atmospheric columns of GRIMs data and tested with 13,422 independent data columns. The results show that the root-mean-square error (RMSE) over all atmospheric layers is decreased by 39% on average without a significant increase in computational time. The revised TSA, developed and validated with a separate one-dimensional model, is implemented in GRIMs for medium-range numerical weather forecasting. Monthly averaged global forecast skill scores are unchanged with the revised TSA, but the temperature at lower levels of the atmosphere (pressure ≥ 700 hPa) is slightly increased (< 0.5 K) with the corrected atmospheric absorption.
Systematic Parameterization, Storage, and Representation of Volumetric DICOM Data.
Fischer, Felix; Selver, M Alper; Gezer, Sinem; Dicle, Oğuz; Hillen, Walter
Tomographic medical imaging systems produce hundreds to thousands of slices, enabling three-dimensional (3D) analysis. Radiologists process these images through various tools and techniques in order to generate 3D renderings for various applications, such as surgical planning, medical education, and volumetric measurements. To save and store these visualizations, current systems use snapshots or video exporting, which prevents further optimizations and requires the storage of significant additional data. The Grayscale Softcopy Presentation State extension of the Digital Imaging and Communications in Medicine (DICOM) standard resolves this issue for two-dimensional (2D) data by introducing an extensive set of parameters, namely 2D Presentation States (2DPR), that describe how an image should be displayed. 2DPR allows storing these parameters instead of storing parameter-applied images, which would cause unnecessary duplication of the image data. Since there is currently no corresponding extension for 3D data, in this study, a DICOM-compliant object called 3D Presentation States (3DPR) is proposed for the parameterization and storage of 3D medical volumes. To accomplish this, the 3D medical visualization process is divided into four tasks, namely pre-processing, segmentation, post-processing, and rendering. The important parameters of each task are determined. Special focus is given to the compression of segmented data, parameterization of the rendering process, and DICOM-compliant implementation of the 3DPR object. The use of 3DPR was tested in a radiology department on three clinical cases, which require multiple segmentations and visualizations during the workflow of radiologists. The results show that 3DPR can effectively simplify the workload of physicians by directly regenerating 3D renderings without repeating intermediate tasks, increase efficiency by preserving all user interactions, and provide efficient storage as well as transfer of visualized data.
Spielman, Stephanie J; Wilke, Claus O
2016-11-01
The mutation-selection model of coding sequence evolution has received renewed attention for its use in estimating site-specific amino acid propensities and selection coefficient distributions. Two computationally tractable mutation-selection inference frameworks have been introduced: One framework employs a fixed-effects, highly parameterized maximum likelihood approach, whereas the other employs a random-effects Bayesian Dirichlet Process approach. While both implementations follow the same model, they appear to make distinct predictions about the distribution of selection coefficients. The fixed-effects framework estimates a large proportion of highly deleterious substitutions, whereas the random-effects framework estimates that all substitutions are either nearly neutral or weakly deleterious. It remains unknown, however, how accurately each method infers evolutionary constraints at individual sites. Indeed, selection coefficient distributions pool all site-specific inferences, thereby obscuring a precise assessment of site-specific estimates. Therefore, in this study, we use a simulation-based strategy to determine how accurately each approach recapitulates the selective constraint at individual sites. We find that the fixed-effects approach, despite its extensive parameterization, consistently and accurately estimates site-specific evolutionary constraint. By contrast, the random-effects Bayesian approach systematically underestimates the strength of natural selection, particularly for slowly evolving sites. We also find that, despite the strong differences between their inferred selection coefficient distributions, the fixed- and random-effects approaches yield surprisingly similar inferences of site-specific selective constraint. We conclude that the fixed-effects mutation-selection framework provides the more reliable software platform for model application and future development.
Janisse, Kevyn; Doucet, Stéphanie M.
2017-01-01
Perceptual models of animal vision have greatly contributed to our understanding of animal-animal and plant-animal communication. The receptor-noise model of color contrasts has been central to this research as it quantifies the difference between two colors for any visual system of interest. However, if the properties of the visual system are unknown, assumptions regarding parameter values must be made, generally with unknown consequences. In this study, we conduct a sensitivity analysis of the receptor-noise model using avian visual system parameters to systematically investigate the influence of variation in light environment, photoreceptor sensitivities, photoreceptor densities, and light transmission properties of the ocular media and the oil droplets. We calculated the chromatic contrast of 15 plumage patches to quantify a dichromatism score for 70 species of Galliformes, a group of birds that display a wide range of sexual dimorphism. We found that the photoreceptor densities and the wavelength of maximum sensitivity of the short-wavelength-sensitive photoreceptor 1 (SWS1) can change dichromatism scores by 50% to 100%. In contrast, the light environment, transmission properties of the oil droplets, transmission properties of the ocular media, and the peak sensitivities of the cone photoreceptors had a smaller impact on the scores. By investigating the effect of varying two or more parameters simultaneously, we further demonstrate that improper parameterization could lead to differences between calculated and actual contrasts of more than 650%. Our findings demonstrate that improper parameterization of tetrachromatic visual models can have very large effects on measures of dichromatism scores, potentially leading to erroneous inferences. We urge more complete characterization of avian retinal properties and recommend that researchers either determine whether their species of interest possess an ultraviolet or near-ultraviolet sensitive SWS1 photoreceptor, or present models for both. PMID:28076391
NASA Astrophysics Data System (ADS)
Lin, Shangfei; Sheng, Jinyu
2017-12-01
Depth-induced wave breaking is the primary dissipation mechanism for ocean surface waves in shallow waters. Different parameterizations have been developed to represent the depth-induced wave breaking process in ocean surface wave models. The performance of six commonly used parameterizations in simulating significant wave heights (SWHs) is assessed in this study. The main differences between these six parameterizations are their representations of the breaker index and the fraction of breaking waves. Laboratory and field observations consisting of 882 cases from 14 sources of published observational data are used in the assessment. We demonstrate that the six parameterizations perform reasonably in representing depth-induced wave breaking in shallow waters, but each has its own limitations and drawbacks. The widely used parameterization suggested by Battjes and Janssen (1978, BJ78) underpredicts the SWHs in locally generated wave conditions and overpredicts them in remotely generated wave conditions over flat bottoms. This drawback of BJ78 was addressed by a parameterization suggested by Salmon et al. (2015, SA15), but SA15 has relatively larger errors in SWHs over sloping bottoms than BJ78. We follow SA15 and propose a new parameterization in which the breaker index depends on the normalized water depth in deep waters, as in SA15. In shallow waters, the breaker index of the new parameterization has a nonlinear dependence on the local bottom slope rather than the linear dependence used in SA15. Overall, this new parameterization has the best performance, with an average scatter index of ∼8.2% compared with values between 9.2% and 13.6% for the three best-performing existing parameterizations.
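For reference, the BJ78 formulation mentioned above can be written down compactly: the fraction of breaking waves Qb follows from an implicit relation between the root-mean-square wave height Hrms and the maximum height Hmax = gamma*depth, and the bulk dissipation scales with Qb. The sketch below uses the commonly quoted BJ78 form with a constant breaker index gamma; the depth- and slope-dependent breaker indices of SA15 and of the new parameterization are not reproduced here, and the default coefficient values are only typical choices.

```python
import numpy as np
from scipy.optimize import brentq

def breaking_fraction(Hrms, Hmax):
    """Fraction of breaking waves Qb from the implicit BJ78 relation
    (1 - Qb) / ln(Qb) = -(Hrms / Hmax)**2, solved by root bracketing."""
    beta2 = (Hrms / Hmax) ** 2
    if beta2 >= 1.0:
        return 1.0
    f = lambda qb: (1.0 - qb) + beta2 * np.log(qb)
    if f(1e-12) > 0.0:          # waves far below the breaking limit: negligible breaking
        return 0.0
    return brentq(f, 1e-12, 1.0 - 1e-12)

def bulk_dissipation(Hrms, depth, f_mean, gamma=0.73, alpha=1.0, rho=1025.0, g=9.81):
    """Bulk energy dissipation rate (W/m^2) by depth-induced breaking in the BJ78 form,
    D = (alpha/4) * Qb * f_mean * rho * g * Hmax**2, with Hmax = gamma * depth."""
    Hmax = gamma * depth
    Qb = breaking_fraction(Hrms, Hmax)
    return 0.25 * alpha * Qb * f_mean * rho * g * Hmax ** 2

# Example: 1.2 m rms waves with a 0.1 Hz mean frequency in 2 m of water.
D = bulk_dissipation(Hrms=1.2, depth=2.0, f_mean=0.1)
```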
Defining Clonal Color in Fluorescent Multi-Clonal Tracking
Wu, Juwell W.; Turcotte, Raphaël; Alt, Clemens; Runnels, Judith M.; Tsao, Hensin; Lin, Charles P.
2016-01-01
Clonal heterogeneity and selection underpin many biological processes including development and tumor progression. Combinatorial fluorescent protein expression in germline cells has proven its utility for tracking the formation and regeneration of different organ systems. Such cell populations encoded by combinatorial fluorescent proteins are also attractive tools for understanding clonal expansion and clonal competition in cancer. However, the assignment of clonal identity requires an analytical framework in which clonal markings can be parameterized and validated. Here we present a systematic and quantitative method for RGB analysis of fluorescent melanoma cancer clones. We then demonstrate refined clonal trackability of melanoma cells using this scheme. PMID:27073117
Renormalization group analysis of turbulence
NASA Technical Reports Server (NTRS)
Smith, Leslie M.
1989-01-01
The objective is to understand and extend a recent theory of turbulence based on dynamic renormalization group (RNG) techniques. The application of RNG methods to hydrodynamic turbulence was explored most extensively by Yakhot and Orszag (1986). By systematic elimination of the small scales in the flow, an eddy viscosity was calculated which is consistent with the Kolmogorov inertial range. Further, the assumed smallness of the nonlinear terms in the redefined equations for the large scales results in predictions for important flow constants such as the Kolmogorov constant. It is emphasized that no adjustable parameters are needed. The parameterization of the small scales in a self-consistent manner has important implications for sub-grid modeling.
Origin and thermal evolution of Mars
NASA Technical Reports Server (NTRS)
Schubert, G.; Solomon, Sean C.; Turcotte, D. L.; Drake, M. J.; Sleep, N. H.
1993-01-01
The thermal evolution of Mars is governed by subsolidus mantle convection beneath a thick lithosphere. Models of the interior evolution are developed by parameterizing mantle convective heat transport in terms of mantle viscosity, the superadiabatic temperature rise across the mantle and mantle heat production. Geological, geophysical, and geochemical observations of the composition and structure of the interior and of the timing of major events in Martian evolution, such as global differentiation, atmospheric outgassing and the formation of the hemispherical dichotomy and Tharsis, are used to constrain the model computations. Isotope systematics of SNC meteorites suggest core formation essentially contemporaneously with the completion of accretion. Other aspects of this investigation are discussed.
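The phrase "parameterizing mantle convective heat transport" usually refers to a Nusselt-Rayleigh power-law closure inserted into a global energy balance. The sketch below shows only that structure; every numerical value (viscosity law, heating rate, scaling exponent, radii) is an illustrative placeholder rather than a parameter from this study.

```python
import numpy as np

# Illustrative values for a parameterized-convection sketch (not the paper's).
R_MARS, R_CORE = 3.39e6, 1.7e6                 # planet and core radii (m)
RHO, CP, K, KAPPA, ALPHA, G = 3500., 1200., 4.0, 1e-6, 3e-5, 3.7
ETA0, A_VISC = 1e21, 3e4                       # reference viscosity (Pa s), T dependence (K)
BETA, RA_CRIT = 0.3, 1e3                       # Nu ~ (Ra/Ra_crit)**BETA scaling

def mantle_heat_flux(T_mantle, T_surface=220.0):
    """Convective heat flux (W/m^2) from a Nusselt-Rayleigh power-law closure."""
    d = R_MARS - R_CORE
    dT = T_mantle - T_surface
    eta = ETA0 * np.exp(A_VISC / T_mantle - A_VISC / 1600.0)   # Arrhenius-type viscosity
    Ra = RHO * G * ALPHA * dT * d**3 / (KAPPA * eta)
    Nu = (Ra / RA_CRIT) ** BETA
    return K * dT / d * Nu

def evolve(T0=1800.0, H0=1.7e-11, lam=4.5e-18, t_end=4.5e9 * 3.15e7, dt=1e6 * 3.15e7):
    """Euler integration of the mantle energy balance dT/dt = (H - q*A/V) / (rho*cp)."""
    A = 4 * np.pi * R_MARS**2
    V = 4 / 3 * np.pi * (R_MARS**3 - R_CORE**3)
    T, t, history = T0, 0.0, []
    while t < t_end:
        q = mantle_heat_flux(T)
        H = RHO * H0 * np.exp(-lam * t)        # decaying radiogenic heating (W/m^3)
        T += dt * (H - q * A / V) / (RHO * CP)
        t += dt
        history.append((t, T, q))
    return history
```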
NASA Technical Reports Server (NTRS)
Petty, Grant W.; Katsaros, Kristina B.
1994-01-01
Based on a geometric optics model and the assumption of an isotropic Gaussian surface slope distribution, the component of ocean surface microwave emissivity variation due to large-scale surface roughness is parameterized for the frequencies and approximate viewing angle of the Special Sensor Microwave/Imager. Independent geophysical variables in the parameterization are the effective (microwave frequency dependent) slope variance and the sea surface temperature. Using the same physical model, the change in the effective zenith angle of reflected sky radiation arising from large-scale roughness is also parameterized. Independent geophysical variables in this parameterization are the effective slope variance and the atmospheric optical depth at the frequency in question. Both of the above model-based parameterizations are intended for use in conjunction with empirical parameterizations relating effective slope variance and foam coverage to near-surface wind speed. These empirical parameterizations are the subject of a separate paper.
Good coupling for the multiscale patch scheme on systems with microscale heterogeneity
NASA Astrophysics Data System (ADS)
Bunder, J. E.; Roberts, A. J.; Kevrekidis, I. G.
2017-05-01
Computational simulation of microscale detailed systems is frequently only feasible over spatial domains much smaller than the macroscale of interest. The 'equation-free' methodology couples many small patches of microscale computations across space to empower efficient computational simulation over macroscale domains of interest. Motivated by molecular or agent simulations, we analyse the performance of various coupling schemes for patches when the microscale is inherently 'rough'. As a canonical problem in this universality class, we systematically analyse the case of heterogeneous diffusion on a lattice. Computer algebra explores how the dynamics of coupled patches predict the large scale emergent macroscale dynamics of the computational scheme. We determine good design for the coupling of patches by comparing the macroscale predictions from patch dynamics with the emergent macroscale on the entire domain, thus minimising the computational error of the multiscale modelling. The minimal error on the macroscale is obtained when the coupling utilises averaging regions which are between a third and a half of the patch. Moreover, when the symmetry of the inter-patch coupling matches that of the underlying microscale structure, patch dynamics predicts the desired macroscale dynamics to any specified order of error. The results confirm that the patch scheme is useful for macroscale computational simulation of a range of systems with microscale heterogeneity.
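The canonical microscale problem referred to above, heterogeneous diffusion on a lattice, is easy to set up, and its full-domain emergent macroscale (the harmonic-mean diffusivity in 1-D) is the benchmark against which patch-scheme predictions are compared. The sketch below simulates only the full-domain microscale reference; the patch coupling itself, including the core-averaging regions discussed in the abstract, is not implemented here, and the conductivity field is synthetic.

```python
import numpy as np

def diffusion_step(u, kappa_edge, dt, dx):
    """One explicit step of heterogeneous diffusion on a periodic microscale lattice:
    du_i/dt = [kappa_{i+1/2}(u_{i+1}-u_i) - kappa_{i-1/2}(u_i-u_{i-1})] / dx**2."""
    flux = kappa_edge * (np.roll(u, -1) - u) / dx      # flux through edge i+1/2
    return u + dt * (flux - np.roll(flux, 1)) / dx

# Full-domain reference: for periodic microscale heterogeneity the emergent macroscale
# diffusivity in 1-D is the harmonic mean of the edge conductivities, which is the
# benchmark a well-coupled patch scheme should reproduce.
rng = np.random.default_rng(0)
n, dx = 256, 1.0 / 256
kappa = np.exp(rng.normal(0.0, 0.5, size=n))           # 'rough' microscale conductivity
u = np.sin(2 * np.pi * np.arange(n) * dx)
dt = 0.25 * dx**2 / kappa.max()                        # within the explicit stability limit
for _ in range(1000):
    u = diffusion_step(u, kappa, dt, dx)
kappa_eff = 1.0 / np.mean(1.0 / kappa)
```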
Zufiria, Pedro J; Pastor-Escuredo, David; Úbeda-Medina, Luis; Hernandez-Medina, Miguel A; Barriales-Valbuena, Iker; Morales, Alfredo J; Jacques, Damien C; Nkwambi, Wilfred; Diop, M Bamba; Quinn, John; Hidalgo-Sanchís, Paula; Luengo-Oroz, Miguel
2018-01-01
We propose a framework for the systematic analysis of mobile phone data to identify relevant mobility profiles in a population. The proposed framework allows finding distinct human mobility profiles based on the digital trace of mobile phone users characterized by a Matrix of Individual Trajectories (IT-Matrix). This matrix gathers a consistent and regularized description of individual trajectories that enables multi-scale representations along time and space, which can be used to extract aggregated indicators such as a dynamic multi-scale population count. Unsupervised clustering of individual trajectories generates mobility profiles (clusters of similar individual trajectories) which characterize relevant group behaviors while preserving optimal aggregation levels for detailed and privacy-secured mobility characterization. The application of the proposed framework is illustrated by analyzing fully anonymized data on human mobility from mobile phones in Senegal at the arrondissement level over a calendar year. The analysis of monthly mobility patterns at the livelihood zone resolution resulted in the discovery and characterization of seasonal mobility profiles related to economic activities, agricultural calendars and rainfall. The use of these mobility profiles could support the timely identification of mobility changes in vulnerable populations in response to external shocks (such as natural disasters, civil conflicts or sudden increases in food prices) to monitor food security.
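As a rough illustration of the clustering step, the sketch below one-hot encodes an IT-Matrix of region identifiers and groups the trajectories with k-means. The encoding, the choice of k-means and the synthetic data are all assumptions for the demonstration; the paper's trajectory regularization and multi-scale aggregation are not reproduced.

```python
import numpy as np
from sklearn.cluster import KMeans

def mobility_profiles(it_matrix, n_regions, n_profiles=5, seed=0):
    """Cluster an IT-Matrix (users x time steps of region IDs) into mobility profiles.

    Each trajectory is one-hot encoded per time step so that Euclidean distance counts
    the number of time steps spent in different regions; k-means then groups similar
    trajectories.  This is a minimal stand-in for the paper's unsupervised clustering.
    """
    n_users, n_steps = it_matrix.shape
    onehot = np.zeros((n_users, n_steps * n_regions))
    rows = np.repeat(np.arange(n_users), n_steps)
    cols = (np.arange(n_steps)[None, :].repeat(n_users, axis=0).ravel() * n_regions
            + it_matrix.ravel())
    onehot[rows, cols] = 1.0
    return KMeans(n_clusters=n_profiles, random_state=seed, n_init=10).fit_predict(onehot)

# Example with synthetic data: 200 users, 12 monthly positions among 8 zones.
rng = np.random.default_rng(1)
it = rng.integers(0, 8, size=(200, 12))
profiles = mobility_profiles(it, n_regions=8)
```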
NASA Astrophysics Data System (ADS)
Xu, Hao; Pei, Yongmao; Li, Faxin; Fang, Daining
2018-05-01
The magnetic, electric and mechanical behaviors are strongly coupled in magnetoelectric (ME) materials, making them highly promising for applications in functional devices. In this paper, the magneto-electro-mechanical fully coupled constitutive behaviors of ME laminates are systematically studied both theoretically and experimentally. A new probabilistic domain switching function considering the surface ferromagnetic anisotropy and the interface charge-mediated effect is proposed. Then a multi-scale multi-field coupling nonlinear constitutive model for layered ME composites is developed with physically measurable parameters. Experiments were performed to compare the theoretical predictions with the measured data, and the predictions agree well with the experimental results. The proposed constitutive relation can be used to describe the nonlinear multi-field coupling properties of both ME laminates and thin films. Several novel coupling phenomena, such as electric-field control of magnetization and magnetic-field tuning of polarization, are observed and analyzed. Furthermore, the size effect of the electric tuning of magnetization is predicted, which demonstrates a competition mechanism between the interface strain-mediated effect and the charge-driven effect. Our study offers deep insight into the microscopic coupling mechanisms and macroscopic properties of ME layered composites, which benefits the design of electromagnetic functional devices.
Towards a Multiscale Approach to Cybersecurity Modeling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hogan, Emilie A.; Hui, Peter SY; Choudhury, Sutanay
2013-11-12
We propose a multiscale approach to modeling cyber networks, with the goal of capturing a view of the network and overall situational awareness with respect to a few key properties (connectivity, distance, and centrality) for a system under an active attack. We focus on theoretical and algorithmic foundations of multiscale graphs, coming from an algorithmic perspective, with the goal of modeling cyber system defense as a specific use case scenario. We first define a notion of multiscale graphs, in contrast with their well-studied single-scale counterparts. We develop multiscale analogs of paths and distance metrics. As a simple, motivating example of a common metric, we present a multiscale analog of the all-pairs shortest-path problem, along with a multiscale analog of a well-known algorithm which solves it. From a cyber defense perspective, this metric might be used to model the distance from an attacker's position in the network to a sensitive machine. In addition, we investigate probabilistic models of connectivity. These models exploit the hierarchy to quantify the likelihood that sensitive targets might be reachable from compromised nodes. We believe that our novel multiscale approach to modeling cyber-physical systems will advance several aspects of cyber defense, specifically allowing for a more efficient and agile approach to defending these systems.
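One way to make the two-scale picture concrete is to collapse a fine-grained network onto supernodes and query distances on the coarse graph. The sketch below (using networkx) shows a simple coarsening in which the inter-supernode edge weight is the minimum crossing-edge weight; this is an illustrative choice, not the paper's definition of a multiscale graph or of its shortest-path analog.

```python
import networkx as nx

def coarse_graph(G, partition):
    """Collapse a weighted graph onto supernodes defined by `partition` (node -> block id).

    Edge weights between supernodes are taken as the minimum weight among the original
    edges crossing the two blocks; this is one simple coarsening choice among many.
    """
    C = nx.Graph()
    C.add_nodes_from(set(partition.values()))
    for u, v, data in G.edges(data=True):
        bu, bv = partition[u], partition[v]
        if bu == bv:
            continue
        w = data.get("weight", 1.0)
        if not C.has_edge(bu, bv) or w < C[bu][bv]["weight"]:
            C.add_edge(bu, bv, weight=w)
    return C

# The coarse-scale distance from the block holding an attacker to the block holding a
# sensitive machine can then bound or approximate the fine-scale distance, e.g.:
# nx.shortest_path_length(coarse_graph(G, part), part[attacker], part[target], weight="weight")
```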
Filters for Improvement of Multiscale Data from Atomistic Simulations
Gardner, David J.; Reynolds, Daniel R.
2017-01-05
Multiscale computational models strive to produce accurate and efficient numerical simulations of systems involving interactions across multiple spatial and temporal scales that typically differ by several orders of magnitude. Some such models utilize a hybrid continuum-atomistic approach combining continuum approximations with first-principles-based atomistic models to capture multiscale behavior. By following the heterogeneous multiscale method framework for developing multiscale computational models, unknown continuum scale data can be computed from an atomistic model. Concurrently coupling the two models requires performing numerous atomistic simulations which can dominate the computational cost of the method. Furthermore, when the resulting continuum data is noisy due to sampling error, stochasticity in the model, or randomness in the initial conditions, filtering can result in significant accuracy gains in the computed multiscale data without increasing the size or duration of the atomistic simulations. In this work, we demonstrate the effectiveness of spectral filtering for increasing the accuracy of noisy multiscale data obtained from atomistic simulations. Moreover, we present a robust and automatic method for closely approximating the optimum level of filtering in the case of additive white noise. Improving the accuracy of the filtered simulation data leads to dramatic computational savings by allowing shorter and smaller atomistic simulations to achieve the same desired multiscale simulation precision.
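A minimal version of spectral filtering for additive white noise is sketched below: the noise floor is estimated from the high-frequency end of the spectrum and modes that do not rise clearly above it are discarded. The cutoff criterion (a smoothed power threshold at three times the estimated floor) is an illustrative stand-in for the paper's automatic selection of the optimum filtering level.

```python
import numpy as np

def spectral_filter(signal, keep_fraction=None):
    """Low-pass spectral filter for noisy, uniformly sampled data.

    When `keep_fraction` is None, the white-noise floor is estimated from the top
    quarter of the power spectrum and modes whose smoothed power falls below ~3x
    that floor are zeroed.  The criterion is illustrative, not the paper's method.
    """
    coeffs = np.fft.rfft(signal)
    power = np.abs(coeffs) ** 2
    if keep_fraction is not None:
        cutoff = int(keep_fraction * len(coeffs))
    else:
        noise_floor = np.mean(power[3 * len(power) // 4:])
        smooth = np.convolve(power, np.ones(8) / 8, mode="same")
        above = np.nonzero(smooth > 3.0 * noise_floor)[0]
        cutoff = above.max() + 1 if above.size else 1
    coeffs[cutoff:] = 0.0
    return np.fft.irfft(coeffs, n=len(signal))

# Example: a smooth continuum field corrupted by sampling noise from an atomistic model.
x = np.linspace(0.0, 1.0, 512, endpoint=False)
truth = np.sin(2 * np.pi * x) + 0.3 * np.sin(6 * np.pi * x)
noisy = truth + 0.2 * np.random.default_rng(0).standard_normal(x.size)
filtered = spectral_filter(noisy)
```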
Towards practical multiscale approach for analysis of reinforced concrete structures
NASA Astrophysics Data System (ADS)
Moyeda, Arturo; Fish, Jacob
2017-12-01
We present a novel multiscale approach for analysis of reinforced concrete structural elements that overcomes two major hurdles in utilization of multiscale technologies in practice: (1) coupling between material and structural scales due to consideration of large representative volume elements (RVE), and (2) computational complexity of solving complex nonlinear multiscale problems. The former is accomplished using a variant of computational continua framework that accounts for sizeable reinforced concrete RVEs by adjusting the location of quadrature points. The latter is accomplished by means of reduced order homogenization customized for structural elements. The proposed multiscale approach has been verified against direct numerical simulations and validated against experimental results.
A concurrent multiscale micromorphic molecular dynamics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Shaofan, E-mail: shaofan@berkeley.edu; Tong, Qi
2015-04-21
In this work, we have derived a multiscale micromorphic molecular dynamics (MMMD) from first principles to extend the (Andersen)-Parrinello-Rahman molecular dynamics to the mesoscale and continuum scale. The multiscale micromorphic molecular dynamics is a concurrent three-scale dynamics that couples a fine-scale molecular dynamics, a mesoscale micromorphic dynamics, and a macroscale nonlocal particle dynamics together. By choosing proper statistical closure conditions, we have shown that the original Andersen-Parrinello-Rahman molecular dynamics is the homogeneous and equilibrium case of the proposed multiscale micromorphic molecular dynamics. Specifically, we have shown that the Andersen-Parrinello-Rahman molecular dynamics can be rigorously formulated and justified from first principles, and that its general inhomogeneous case, i.e., the three-scale concurrent multiscale micromorphic molecular dynamics, can take into account macroscale continuum mechanics boundary conditions without the limitations of atomistic or periodic boundary conditions. The discovered multiscale structure and the corresponding multiscale dynamics reveal a seamless transition from the atomistic scale to the continuum scale and the intrinsic coupling mechanism among them based on a first-principles formulation.
NASA Astrophysics Data System (ADS)
Martin, Gill; Levine, Richard; Klingaman, Nicholas; Bush, Stephanie; Turner, Andrew; Woolnough, Steven
2015-04-01
Despite considerable efforts worldwide to improve model simulations of the Asian summer monsoon, significant biases still remain in the climatological seasonal mean rainfall distribution, the timing of the onset, and the northward and eastward extent of the monsoon domain (Sperber et al., 2013). Many modelling studies have shown sensitivity to convection and boundary layer parameterization, cloud microphysics and land surface properties, as well as model resolution. Here we examine the problems in representing short-timescale rainfall variability (related to convection parameterization), problems in representing synoptic-scale systems such as monsoon depressions (related to model resolution), and the relationship of each of these with longer-term systematic biases. Analysis of the spatial distribution of rainfall intensity on timescales ranging from ~30 minutes to daily, in the MetUM and in observations (where available), highlights how rainfall biases in the South Asian monsoon region can arise in models through a combination of incorrect frequency and/or intensity of rainfall on different timescales and in different regions. Over the Indian land area, the typical dry bias is related to sub-daily rainfall events being too infrequent, despite being too intense when they occur. In contrast, the wet bias regions over the equatorial Indian Ocean are mainly related to too frequent occurrence of lower-than-observed 3-hourly rainfall accumulations, which results in too frequent occurrence of higher-than-observed daily rainfall accumulations. This analysis sheds light on the model deficiencies behind the climatological seasonal mean rainfall biases that many models exhibit in this region. Changing physical parameterizations alters this behaviour, with associated adjustments in the climatological rainfall distribution, although the latter is not always improved (Bush et al., 2014). This suggests a more complex interaction between the diabatic heating and the large-scale circulation than is indicated by the intensity and frequency of rainfall alone. Monsoon depressions and low pressure systems are important contributors to monsoon rainfall over central and northern India, areas where MetUM climate simulations typically show deficient monsoon rainfall. Analysis of MetUM climate simulations at resolutions ranging from N96 (~135 km) to N512 (~25 km) suggests that at lower resolution the numbers and intensities of monsoon depressions and low pressure systems, and their associated rainfall, are very low compared with re-analyses/observations. We show that there are substantial increases with horizontal resolution, but resolution is not the only factor. Idealised simulations, either using nudged atmospheric winds or initialised coupled hindcasts, which improve (strengthen) the mean-state monsoon and cyclonic circulation over the Indian peninsula, also result in a substantial increase in monsoon depressions and associated rainfall. This suggests that a more realistic representation of monsoon depressions is possible even at lower resolution if the larger-scale systematic error pattern in the monsoon is improved.
Multiscale Materials Modeling Workshop Summary
DOT National Transportation Integrated Search
2013-12-01
This report summarizes a 2-day workshop held to share information on multiscale material modeling. The aim was to gain expert feedback on the state of the art and identify Exploratory Advanced Research (EAR) Program opportunities for multiscale mater...
NASA Astrophysics Data System (ADS)
Cho, Hyesung; Moon Kim, Sang; Sik Kang, Yun; Kim, Junsoo; Jang, Segeun; Kim, Minhyoung; Park, Hyunchul; Won Bang, Jung; Seo, Soonmin; Suh, Kahp-Yang; Sung, Yung-Eun; Choi, Mansoo
2015-09-01
The production of multiscale architectures is of significant interest in materials science, and the integration of those structures could provide a breakthrough for various applications. Here we report a simple yet versatile strategy that allows for the LEGO-like integrations of microscale membranes by quantitatively controlling the oxygen inhibition effects of ultraviolet-curable materials, leading to multilevel multiscale architectures. The spatial control of oxygen concentration induces different curing contrasts in a resin allowing the selective imprinting and bonding at different sides of a membrane, which enables LEGO-like integration together with the multiscale pattern formation. Utilizing the method, the multilevel multiscale Nafion membranes are prepared and applied to polymer electrolyte membrane fuel cell. Our multiscale membrane fuel cell demonstrates significant enhancement of performance while ensuring mechanical robustness. The performance enhancement is caused by the combined effect of the decrease of membrane resistance and the increase of the electrochemical active surface area.
A complete categorization of multiscale models of infectious disease systems.
Garira, Winston
2017-12-01
Modelling of infectious disease systems has entered a new era in which disease modellers are increasingly turning to multiscale modelling to extend traditional modelling frameworks into new application areas and to achieve higher levels of detail and accuracy in characterizing infectious disease systems. In this paper we present a categorization framework for categorizing multiscale models of infectious disease systems. The categorization framework consists of five integration frameworks and five criteria. We use the categorization framework to give a complete categorization of host-level immuno-epidemiological models (HL-IEMs). This categorization framework is also shown to be applicable in categorizing other types of multiscale models of infectious diseases beyond HL-IEMs through modifying the initial categorization framework presented in this study. Categorization of multiscale models of infectious disease systems in this way is useful in bringing some order to the discussion on the structure of these multiscale models.
Evaluation of improved land use and canopy representation in ...
Biogenic volatile organic compounds (BVOC) participate in reactions that can lead to secondarily formed ozone and particulate matter (PM), impacting air quality and climate. BVOC emissions are important inputs to chemical transport models applied on local to global scales, but considerable uncertainty remains in the representation of canopy parameterizations and emission algorithms for different vegetation species. The Biogenic Emission Inventory System (BEIS) has been used to support both scientific and regulatory model assessments for ozone and PM. Here we describe a new version of BEIS, which includes updated input vegetation data and a revised canopy model formulation for estimating leaf temperature, and evaluate the effect of the updated vegetation data on estimated BVOC. The Biogenic Emission Landuse Database (BELD) was revised to incorporate land use data from the Moderate Resolution Imaging Spectroradiometer (MODIS) land product and 2006 National Land Cover Database (NLCD) land coverage. Vegetation species data are based on the US Forest Service (USFS) Forest Inventory and Analysis (FIA) version 5.1 for 2002–2013 and US Department of Agriculture (USDA) 2007 census of agriculture data. This update results in generally higher BVOC emissions throughout California compared with the previous version of BEIS. Baseline and updated BVOC emission estimates are used in Community Multiscale Air Quality (CMAQ) Model simulations with 4 km grid resolution and evaluated with measurements of isoprene and monoterp
Robust Curb Detection with Fusion of 3D-Lidar and Camera Data
Tan, Jun; Li, Jian; An, Xiangjing; He, Hangen
2014-01-01
Curb detection is an essential component of Autonomous Land Vehicles (ALV), especially important for safe driving in urban environments. In this paper, we propose a fusion-based curb detection method through exploiting 3D-Lidar and camera data. More specifically, we first fuse the sparse 3D-Lidar points and high-resolution camera images together to recover a dense depth image of the captured scene. Based on the recovered dense depth image, we propose a filter-based method to estimate the normal direction within the image. Then, by using the multi-scale normal patterns based on the curb's geometric property, curb point features fitting the patterns are detected in the normal image row by row. After that, we construct a Markov Chain to model the consistency of curb points which utilizes the continuous property of the curb, and thus the optimal curb path which links the curb points together can be efficiently estimated by dynamic programming. Finally, we perform post-processing operations to filter the outliers, parameterize the curbs and give the confidence scores on the detected curbs. Extensive evaluations clearly show that our proposed method can detect curbs with strong robustness at real-time speed for both static and dynamic scenes. PMID:24854364
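The consistency step described above (a Markov chain over per-row curb candidates solved by dynamic programming) can be sketched as a max-score path problem. In the toy implementation below, the transition model is reduced to a bounded column shift with a linear penalty; the score array, shift bound and penalty weight are placeholders rather than the paper's learned quantities.

```python
import numpy as np

def link_curb_points(scores, max_shift=2, smooth_penalty=0.5):
    """Dynamic-programming linkage of row-wise curb candidates into one curb path.

    `scores` is an (n_rows, n_cols) array of per-pixel curb-point scores (e.g. how well
    the local normal pattern matches a curb).  Consecutive rows may shift the curb column
    by at most `max_shift`, and each shift is penalized, a simple stand-in for the
    Markov-chain consistency term used in the paper.
    """
    n_rows, n_cols = scores.shape
    best = scores[0].copy()
    back = np.zeros((n_rows, n_cols), dtype=int)
    for r in range(1, n_rows):
        new_best = np.full(n_cols, -np.inf)
        for c in range(n_cols):
            lo, hi = max(0, c - max_shift), min(n_cols, c + max_shift + 1)
            prev = best[lo:hi] - smooth_penalty * np.abs(np.arange(lo, hi) - c)
            k = int(np.argmax(prev))
            new_best[c] = scores[r, c] + prev[k]
            back[r, c] = lo + k
        best = new_best
    # Backtrack the optimal column index for every row.
    path = np.zeros(n_rows, dtype=int)
    path[-1] = int(np.argmax(best))
    for r in range(n_rows - 1, 0, -1):
        path[r - 1] = back[r, path[r]]
    return path
```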
Chemical transport model simulations of organic aerosol in ...
Gasoline- and diesel-fueled engines are ubiquitous sources of air pollution in urban environments. They emit both primary particulate matter and precursor gases that react to form secondary particulate matter in the atmosphere. In this work, we updated the organic aerosol module and organic emissions inventory of a three-dimensional chemical transport model, the Community Multiscale Air Quality Model (CMAQ), using recent, experimentally derived inputs and parameterizations for mobile sources. The updated model included a revised volatile organic compound (VOC) speciation for mobile sources and secondary organic aerosol (SOA) formation from unspeciated intermediate volatility organic compounds (IVOCs). The updated model was used to simulate air quality in southern California during May and June 2010, when the California Research at the Nexus of Air Quality and Climate Change (CalNex) study was conducted. Compared to the Traditional version of CMAQ, which is commonly used for regulatory applications, the updated model did not significantly alter the predicted organic aerosol (OA) mass concentrations but did substantially improve predictions of OA sources and composition (e.g., the POA–SOA split), as well as ambient IVOC concentrations. The updated model, despite substantial differences in emissions and chemistry, performed similarly to a recently released research version of CMAQ (Woody et al., 2016) that did not include the updated VOC and IVOC emissions and SOA data
Parameterizing deep convection using the assumed probability density function method
Storer, R. L.; Griffin, B. M.; Höft, J.; ...
2014-06-11
Due to their coarse horizontal resolution, present-day climate models must parameterize deep convection. This paper presents single-column simulations of deep convection using a probability density function (PDF) parameterization. The PDF parameterization predicts the PDF of subgrid variability of turbulence, clouds, and hydrometeors. That variability is interfaced to a prognostic microphysics scheme using a Monte Carlo sampling method. The PDF parameterization is used to simulate tropical deep convection, the transition from shallow to deep convection over land, and mid-latitude deep convection. These parameterized single-column simulations are compared with 3-D reference simulations. The agreement is satisfactory except when the convective forcing is weak. The same PDF parameterization is also used to simulate shallow cumulus and stratocumulus layers. The PDF method is sufficiently general to adequately simulate these five deep, shallow, and stratiform cloud cases with a single equation set. This raises hopes that it may be possible in the future, with further refinements at coarse time step and grid spacing, to parameterize all cloud types in a large-scale model in a unified way.
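The essence of an assumed-PDF closure coupled to Monte Carlo sampling can be illustrated in a few lines: draw subgrid samples from the assumed distribution, run each sample through the microphysics, and average. The sketch below uses a single Gaussian in total water and a trivial saturation adjustment as the "microphysics"; the actual scheme uses a multivariate PDF and a prognostic microphysics scheme, neither of which is reproduced here, and the numbers in the example are arbitrary.

```python
import numpy as np

def subgrid_condensation(mean_qt, var_qt, qsat, n_samples=1000, seed=0):
    """Illustrative assumed-PDF closure for a single grid box.

    Subgrid total-water variability is assumed Gaussian; Monte Carlo samples are drawn
    and passed through a toy saturation adjustment to give the grid-mean condensate
    and cloud fraction.
    """
    rng = np.random.default_rng(seed)
    qt = rng.normal(mean_qt, np.sqrt(var_qt), n_samples)
    condensate = np.maximum(qt - qsat, 0.0)      # per-sample "microphysics" stand-in
    cloud_fraction = np.mean(condensate > 0.0)
    return condensate.mean(), cloud_fraction

# A grid box whose mean state is just below saturation still produces partial cloud:
mean_c, cf = subgrid_condensation(mean_qt=9.5e-3, var_qt=1e-7, qsat=1.0e-2)
```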
A Generalized Hybrid Multiscale Modeling Approach for Flow and Reactive Transport in Porous Media
NASA Astrophysics Data System (ADS)
Yang, X.; Meng, X.; Tang, Y. H.; Guo, Z.; Karniadakis, G. E.
2017-12-01
Using emerging understanding of biological and environmental processes at fundamental scales to advance predictions of the larger system behavior requires the development of multiscale approaches, and there is strong interest in coupling models at different scales together in a hybrid multiscale simulation framework. A limited number of hybrid multiscale simulation methods have been developed for subsurface applications, mostly using application-specific approaches for model coupling. The proposed generalized hybrid multiscale approach is designed with minimal intrusiveness to the pre-selected at-scale simulators and provides a set of lightweight C++ scripts to manage a complex multiscale workflow using a concurrent coupling approach. The workflow includes at-scale simulators (using the lattice-Boltzmann method, LBM, at the pore and Darcy scales, respectively), scripts for boundary treatment (coupling and kriging), and a multiscale universal interface (MUI) for data exchange. The current study aims to apply the generalized hybrid multiscale modeling approach to couple pore- and Darcy-scale models for flow and mixing-controlled reaction with precipitation/dissolution in heterogeneous porous media. The model domain is heterogeneously packed, so that the mixing front geometry is more complex and not known a priori. To address these challenges, the generalized hybrid multiscale modeling approach is further developed to 1) adaptively define the locations of pore-scale subdomains, 2) provide a suite of physical boundary coupling schemes and 3) consider the dynamic change of the pore structures due to mineral precipitation/dissolution. The results are validated and evaluated by comparing with single-scale simulations in terms of velocities, reactive concentrations and computing cost.
Alternative transitions between existing representations in multi-scale maps
NASA Astrophysics Data System (ADS)
Dumont, Marion; Touya, Guillaume; Duchêne, Cécile
2018-05-01
Map users may have difficulty performing multi-scale navigation tasks, as cartographic objects may have various representations across scales. We assume that adding intermediate representations could be one way to reduce the differences between existing representations and to ease the transitions across scales. We consider an existing multi-scale map covering the scale range from 1:25k to 1:100k. Based on hypotheses about the design of intermediate representations, we build custom multi-scale maps with alternative transitions. We will conduct a user evaluation in the near future to compare the efficiency of these alternative maps for multi-scale navigation. This paper discusses the hypotheses and the production process of these alternative maps.
Nonlocal and Mixed-Locality Multiscale Finite Element Methods
Costa, Timothy B.; Bond, Stephen D.; Littlewood, David J.
2018-03-27
In many applications the resolution of small-scale heterogeneities remains a significant hurdle to robust and reliable predictive simulations. In particular, while material variability at the mesoscale plays a fundamental role in processes such as material failure, the resolution required to capture mechanisms at this scale is often computationally intractable. Multiscale methods aim to overcome this difficulty through judicious choice of a subscale problem and a robust manner of passing information between scales. One promising approach is the multiscale finite element method, which increases the fidelity of macroscale simulations by solving lower-scale problems that produce enriched multiscale basis functions. Here, in this study, we present the first work toward application of the multiscale finite element method to the nonlocal peridynamic theory of solid mechanics. This is achieved within the context of a discontinuous Galerkin framework that facilitates the description of material discontinuities and does not assume the existence of spatial derivatives. Analysis of the resulting nonlocal multiscale finite element method is achieved using the ambulant Galerkin method, developed here with sufficient generality to allow for application to multiscale finite element methods for both local and nonlocal models that satisfy minimal assumptions. Finally, we conclude with preliminary results on a mixed-locality multiscale finite element method in which a nonlocal model is applied at the fine scale and a local model at the coarse scale.
Construction of multi-scale consistent brain networks: methods and applications.
Ge, Bao; Tian, Yin; Hu, Xintao; Chen, Hanbo; Zhu, Dajiang; Zhang, Tuo; Han, Junwei; Guo, Lei; Liu, Tianming
2015-01-01
Mapping human brain networks provides a basis for studying brain function and dysfunction, and thus has gained significant interest in recent years. However, modeling human brain networks still faces several challenges including constructing networks at multiple spatial scales and finding common corresponding networks across individuals. As a consequence, many previous methods were designed for a single resolution or scale of brain network, though the brain networks are multi-scale in nature. To address this problem, this paper presents a novel approach to constructing multi-scale common structural brain networks from DTI data via an improved multi-scale spectral clustering applied on our recently developed and validated DICCCOLs (Dense Individualized and Common Connectivity-based Cortical Landmarks). Since the DICCCOL landmarks possess intrinsic structural correspondences across individuals and populations, we employed the multi-scale spectral clustering algorithm to group the DICCCOL landmarks and their connections into sub-networks, meanwhile preserving the intrinsically-established correspondences across multiple scales. Experimental results demonstrated that the proposed method can generate multi-scale consistent and common structural brain networks across subjects, and its reproducibility has been verified by multiple independent datasets. As an application, these multi-scale networks were used to guide the clustering of multi-scale fiber bundles and to compare the fiber integrity in schizophrenia and healthy controls. In general, our methods offer a novel and effective framework for brain network modeling and tract-based analysis of DTI data.
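A stripped-down version of the clustering step is sketched below: given a symmetric landmark-to-landmark connectivity matrix, spectral clustering is run at several numbers of clusters to produce sub-networks at multiple scales. Running the scales independently is a simplification; the paper's improved multi-scale spectral clustering additionally enforces correspondence of clusters across scales, which is not implemented here.

```python
import numpy as np
from sklearn.cluster import SpectralClustering

def multiscale_subnetworks(connectivity, scales=(4, 8, 16), seed=0):
    """Group landmarks into sub-networks at several scales by spectral clustering.

    `connectivity` is a symmetric (n_landmarks x n_landmarks) matrix of connection
    strengths (e.g. fiber counts between landmarks).  Each scale is clustered
    independently in this sketch.
    """
    labels = {}
    for k in scales:
        model = SpectralClustering(n_clusters=k, affinity="precomputed",
                                   assign_labels="discretize", random_state=seed)
        labels[k] = model.fit_predict(connectivity)
    return labels

# Example on a synthetic symmetric connectivity matrix for 60 landmarks.
rng = np.random.default_rng(0)
A = rng.random((60, 60))
conn = (A + A.T) / 2.0
subnets = multiscale_subnetworks(conn)
```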
Intercomparison of Multiscale Modeling Approaches in Simulating Subsurface Flow and Transport
NASA Astrophysics Data System (ADS)
Yang, X.; Mehmani, Y.; Barajas-Solano, D. A.; Song, H. S.; Balhoff, M.; Tartakovsky, A. M.; Scheibe, T. D.
2016-12-01
Hybrid multiscale simulations that couple models across scales are critical for advancing predictions of larger system behavior from an understanding of fundamental processes. In the current study, three hybrid multiscale methods are intercompared: the multiscale loose-coupling method, the multiscale finite volume (MsFV) method and the multiscale mortar method. The loose-coupling method enables a parallel workflow structure based on the Swift scripting environment that manages the complex process of executing coupled micro- and macro-scale models without being intrusive to the at-scale simulators. The MsFV method applies microscale and macroscale models over overlapping subdomains of the modeling domain and enforces continuity of concentration and transport fluxes between models via restriction and prolongation operators. The mortar method is a non-overlapping domain decomposition approach capable of coupling all permutations of pore- and continuum-scale models with each other. In doing so, Lagrange multipliers are used at interfaces shared between the subdomains so as to establish continuity of species/fluid mass flux. Subdomain computations can be performed either concurrently or non-concurrently depending on the algorithm used. All the above methods have been proven to be accurate and efficient in studying flow and transport in porous media. However, there have been no field-scale applications or benchmarking among the various hybrid multiscale approaches. To address this challenge, we apply all three hybrid multiscale methods to simulate water flow and transport in a conceptualized 2D modeling domain of the hyporheic zone, where strong interactions between groundwater and surface water exist across multiple scales. In all three multiscale methods, fine-scale simulations are applied to a thin layer of riverbed alluvial sediments while the macroscopic simulations are used for the larger subsurface aquifer domain. Different numerical coupling methods are then applied between scales and inter-compared. Comparisons are drawn in terms of velocity distributions, solute transport behavior, algorithm-induced numerical error and computing cost. The intercomparison work provides support for confidence in a variety of hybrid multiscale methods and motivates further development and applications.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tang Shaojie; Tang Xiangyang; School of Automation, Xi'an University of Posts and Telecommunications, Xi'an, Shaanxi 710121
2012-09-15
Purpose: The suppression of noise in x-ray computed tomography (CT) imaging is of clinical relevance for diagnostic image quality and the potential for radiation dose saving. Toward this purpose, statistical noise reduction methods in either the image or projection domain have been proposed, which employ a multiscale decomposition to enhance the performance of noise suppression while maintaining image sharpness. Recognizing the advantages of noise suppression in the projection domain, the authors propose a projection domain multiscale penalized weighted least squares (PWLS) method, in which the angular sampling rate is explicitly taken into consideration to account for the possible variation of interview sampling rate in advanced clinical or preclinical applications. Methods: The projection domain multiscale PWLS method is derived by converting an isotropic diffusion partial differential equation in the image domain into the projection domain, wherein a multiscale decomposition is carried out. With adoption of the Markov random field or soft thresholding objective function, the projection domain multiscale PWLS method deals with noise at each scale. To compensate for the degradation in image sharpness caused by the projection domain multiscale PWLS method, an edge enhancement is carried out following the noise reduction. The performance of the proposed method is experimentally evaluated and verified using projection data simulated by computer and acquired by a CT scanner. Results: The preliminary results show that the proposed projection domain multiscale PWLS method outperforms the projection domain single-scale PWLS method and the image domain multiscale anisotropic diffusion method in noise reduction. In addition, the proposed method can preserve image sharpness very well while the occurrence of 'salt-and-pepper' noise and mosaic artifacts can be avoided. Conclusions: Since the interview sampling rate is taken into account in the projection domain multiscale decomposition, the proposed method is anticipated to be useful in advanced clinical and preclinical applications where the interview sampling rate varies.
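For readers unfamiliar with PWLS, the sketch below writes out the objective for a single detector row: a data-fidelity term weighted by the inverse noise variance plus a quadratic neighbor penalty, minimized by gradient descent. It is a single-scale, one-dimensional stand-in; the multiscale decomposition, the MRF/soft-thresholding options and the edge enhancement of the proposed method are not reproduced, and the penalty weight is arbitrary.

```python
import numpy as np

def pwls_denoise(projection, variance, beta=0.2, n_iter=200):
    """Penalized weighted least-squares smoothing of one detector row.

    Minimizes sum_i (x_i - y_i)^2 / sigma_i^2 + beta * sum_i (x_i - x_{i-1})^2
    by gradient descent, a single-scale 1-D stand-in for the paper's multiscale
    projection-domain PWLS with an MRF penalty.
    """
    y = np.asarray(projection, dtype=float)
    w = 1.0 / np.asarray(variance, dtype=float)
    x = y.copy()
    step = 0.25 / (w.max() + 2.0 * beta)       # conservative step size for stability
    for _ in range(n_iter):
        grad = 2.0 * w * (x - y)
        grad[1:] += 2.0 * beta * (x[1:] - x[:-1])
        grad[:-1] += 2.0 * beta * (x[:-1] - x[1:])
        x -= step * grad
    return x

# Example: smooth a noisy sinogram row whose per-detector variance is known.
rng = np.random.default_rng(0)
row = np.exp(-np.linspace(-2, 2, 200) ** 2) + rng.normal(scale=0.05, size=200)
smoothed = pwls_denoise(row, variance=np.full(200, 0.05 ** 2))
```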
Symptoms of change in multi-scale observations of arctic ecosystem carbon cycling
NASA Astrophysics Data System (ADS)
Stoy, P. C.; Williams, M. D.; Hartley, I. P.; Street, L.; Hill, T. C.; Prieto-Blanco, A.; Wayolle, A.; Disney, M.; Evans, J.; Fletcher, B.; Poyatos, R.; Wookey, P.; Merbold, L.; Wade, T. J.; Moncrieff, J.
2009-12-01
Arctic ecosystems are responding rapidly to observed climate change. Quantifying the magnitude of these changes, and their implications for the climate system, requires observations of their current structure and function, as well as extrapolation and modelling (i.e. ‘upscaling’) across time and space. Here, we describe the major results of the International Polar Year (IPY) ABACUS project, a multi-scale investigation across arctic Fennoscandia that couples plant and soil process studies, isotope analyses, flux and micrometeorological measurements, process modelling, and aircraft and satellite observations to improve predictions of the response of the arctic terrestrial biosphere to global change. We begin with a synthesis of eddy covariance observations from the global FLUXNET database. We demonstrate that a simple model parameterized using pan-arctic chamber measurements explains over 80% of the variance of half-hourly CO2 fluxes during the growing season across most arctic and montane tundra ecosystems given accurate measurements of leaf area index (LAI), which agrees with the recently proposed ‘functional convergence’ paradigm for tundra vegetation. The ability of MODIS to deliver accurate LAI estimates is briefly discussed and an adjusted algorithm is presented and validated using direct observations. We argue for an Information Theory-based framework for upscaling in Earth science by conceptualizing multi-scale research as a transfer of information across scales. We then demonstrate how error in upscaled arctic C flux estimates can be reduced to less than 4% from their high-resolution counterpart by formally preserving the information content of high spatial and spectral resolution aircraft and satellite imagery. Jaynes’ classic Maximum Entropy (MaxEnt) principle is employed to incorporate logical, biological and physical constraints to reduce error in downscaled flux estimates. Errors are further reduced by assimilating flux, biological and remote sensing data into the DALEC ecosystem model using the ensemble Kalman filter. We use a flux footprint analysis to demonstrate that the ABACUS study ecosystems display functional convergence at chamber, tower and aircraft scales. The importance of the rapidly changing cold and ‘shoulder’ seasons to annual CO2 flux is emphasized; these represent over 20% of annual C exchange at our field sites. The role of moss in determining non-growing season C uptake and loss is highlighted using direct chamber-based observations. We demonstrate ‘priming’ of the decomposition of older forest soil during the period of vegetative activity using 14CO2 observations, and show that tundra ecosystems paradoxically store more C than birch forests in the region. This biological priming of older C stocks is not included in current models of the arctic C cycle.
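The aggregation error that motivates the information-preserving upscaling can be seen in a toy calculation; the saturating flux model and the LAI distribution below are hypothetical and are not ABACUS data. Because the flux model is nonlinear in LAI, the flux computed from the aggregated (mean) LAI differs from the aggregate of the fine-scale fluxes.

    import numpy as np

    def gpp(lai, gpp_max=10.0, k=0.5):
        """Toy saturating flux model: GPP = gpp_max * (1 - exp(-k * LAI))."""
        return gpp_max * (1.0 - np.exp(-k * lai))

    rng = np.random.default_rng(0)
    lai_fine = rng.gamma(shape=2.0, scale=1.0, size=10_000)   # heterogeneous fine-scale LAI

    flux_of_mean = gpp(lai_fine.mean())    # coarse estimate from the upscaled LAI
    mean_of_flux = gpp(lai_fine).mean()    # aggregate of the fine-scale fluxes
    print(f"relative aggregation error: {abs(flux_of_mean - mean_of_flux) / mean_of_flux:.1%}")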
NASA Astrophysics Data System (ADS)
Chang, Y.; Hung, S.; Kuo, B.; Kuochen, H.
2012-12-01
Taiwan is one of the archetypical places for studying the active orogenic process in the world, where the Luzon arc has obliquely collided into the southwest China continental margin since about 5 Ma. Because of the lack of convincing evidence for the structure in the lithospheric mantle and at even greater depths, several competing models have been proposed for the Taiwan mountain-building process. With the deployment of ocean-bottom seismometers (OBSs) on the seafloor around Taiwan from the TAIGER (TAiwan Integrated GEodynamic Research) and IES seismic experiments, the aperture of the seismic network is greatly extended to improve the depth resolution of tomographic imaging, which is critical to illuminate the nature of the arc-continent collision and accretion in Taiwan. In this study, we use relative travel-time residuals between a collection of teleseismic body wave arrivals to tomographically image the velocity structure beneath Taiwan. In addition to those from common distant earthquakes observed across an array of stations, we take advantage of dense seismicity in the vicinity of Taiwan and the source and receiver reciprocity to augment the data coverage from clustered earthquakes recorded by global stations. As waveforms depend on source mechanisms, we carry out a cluster analysis to group phase arrivals with similar waveforms and determine the relative travel-time anomalies within each cluster accurately by a cross-correlation method. The combination of these two datasets particularly enhances the resolvability of the tomographic models offshore of eastern Taiwan, where two subduction systems of opposite polarity operate and have primarily shaped the present tectonic framework of Taiwan. Our inversion adopts wavelet-based, multi-scale parameterization and finite-frequency theory. Not only does this approach make full use of frequency-dependent travel-time data that provide different but complementary sensitivities to velocity heterogeneity, it also objectively addresses the intrinsically multi-scale character of unevenly distributed data, yielding a model with spatially varying, data-adaptive resolution. In addition, we employ a parallelized singular value decomposition algorithm to directly solve for the resolution matrix and point spread functions (PSFs). Treating the spatial distribution of a PSF as the probability density function of a multivariate normal distribution, we use principal component analysis (PCA) to estimate the lengths and directions of its principal axes, which provide a quantitative assessment of the resolvable scale length and degree of smearing of the model and guide the interpretation of robust, trustworthy features in the resolved models.
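The cross-correlation step used to measure relative travel-time anomalies within a waveform cluster can be sketched generically (synthetic traces, not the study's processing code): the lag that maximizes the cross-correlation of two similar waveforms gives their differential arrival time.

    import numpy as np

    def relative_delay(trace_a, trace_b, dt):
        """Time shift of trace_b relative to trace_a (seconds), from the peak
        of their full cross-correlation."""
        xcorr = np.correlate(trace_b - trace_b.mean(), trace_a - trace_a.mean(), mode="full")
        lag = np.argmax(xcorr) - (len(trace_a) - 1)
        return lag * dt

    # Synthetic example: the same wavelet delayed by 0.25 s
    dt = 0.01
    t = np.arange(0.0, 10.0, dt)

    def wavelet(t0):
        return np.exp(-((t - t0) / 0.3) ** 2) * np.cos(2.0 * np.pi * 2.0 * (t - t0))

    print(relative_delay(wavelet(4.0), wavelet(4.25), dt))   # approximately 0.25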
Global model comparison of heterogeneous ice nucleation parameterizations in mixed phase clouds
NASA Astrophysics Data System (ADS)
Yun, Yuxing; Penner, Joyce E.
2012-04-01
A new aerosol-dependent mixed phase cloud parameterization for deposition/condensation/immersion (DCI) ice nucleation and one for contact freezing are compared to the original formulations in a coupled general circulation model and aerosol transport model. The present-day cloud liquid and ice water fields and cloud radiative forcing are analyzed and compared to observations. The new DCI freezing parameterization changes the spatial distribution of the cloud water field. Significant changes are found in the cloud ice water fraction and in the middle cloud fractions. The new DCI freezing parameterization predicts less ice water path (IWP) than the original formulation, especially in the Southern Hemisphere. The smaller IWP leads to a less efficient Bergeron-Findeisen process, resulting in a larger liquid water path, shortwave cloud forcing, and longwave cloud forcing. It is found that contact freezing parameterizations have a greater impact on the cloud water field and radiative forcing than the two DCI freezing parameterizations that we compared. The net solar flux and net longwave flux at the top of the atmosphere change by up to 8.73 and 3.52 W m-2, respectively, due to the use of different DCI and contact freezing parameterizations in mixed phase clouds. The total climate forcing from anthropogenic black carbon/organic matter in mixed phase clouds is estimated to be 0.16-0.93 W m-2 using the aerosol-dependent parameterizations. A sensitivity test with the contact ice nuclei concentration in the original parameterization fit to that recommended by Young (1974) gives results that are closer to the new contact freezing parameterization.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abulencia, A.; Acosta, D.; Adelman, Jahred A.
2006-02-01
The authors describe a measurement of the top quark mass from events produced in pp̄ collisions at a center-of-mass energy of 1.96 TeV, using the Collider Detector at Fermilab. They identify tt̄ candidates where both W bosons from the top quarks decay into leptons (eν, μν, or τν) from a data sample of 360 pb⁻¹. The top quark mass is reconstructed in each event separately by three different methods, which draw upon simulated distributions of the neutrino pseudorapidity, tt̄ longitudinal momentum, or neutrino azimuthal angle in order to extract probability distributions for the top quark mass. For each method, representative mass distributions, or templates, are constructed from simulated samples of signal and background events, and parameterized to form continuous probability density functions. A likelihood fit incorporating these parameterized templates is then performed on the data sample masses in order to derive a final top quark mass. Combining the three template methods, taking into account correlations in their statistical and systematic uncertainties, results in a top quark mass measurement of 170.1 ± 6.0 (stat.) ± 4.1 (syst.) GeV/c².
Lloyd, Christopher W; Shmuylovich, Leonid; Holland, Mark R; Miller, James G; Kovács, Sándor J
2011-08-01
Myocardial tissue characterization represents an extension of currently available echocardiographic imaging. The systematic variation of backscattered energy during the cardiac cycle (the "cyclic variation" of backscatter) has been employed to characterize cardiac function in a wide range of investigations. However, the mechanisms responsible for observed cyclic variation remain incompletely understood. As a step toward determining the features of cardiac structure and function that are responsible for the observed cyclic variation, the present study makes use of a kinematic approach of diastolic function quantitation to identify diastolic function determinants that influence the magnitude and timing of cyclic variation. Echocardiographic measurements of 32 subjects provided data for determination of the cyclic variation of backscatter to diastolic function relation characterized in terms of E-wave determined, kinematic model-based parameters of chamber stiffness, viscosity/relaxation and load. The normalized time delay of cyclic variation appears to be related to the relative viscoelasticity of the chamber and predictive of the kinematic filling dynamics as determined using the parameterized diastolic filling formalism (with r-values ranging from .44 to .59). The magnitude of cyclic variation does not appear to be strongly related to the kinematic parameters. Copyright © 2011 World Federation for Ultrasound in Medicine & Biology. Published by Elsevier Inc. All rights reserved.
Tang, Shuaiqi; Zhang, Minghua; Xie, Shaocheng
2016-01-05
Large-scale atmospheric forcing data can greatly impact the simulations of atmospheric process models including Large Eddy Simulations (LES), Cloud Resolving Models (CRMs) and Single-Column Models (SCMs), and impact the development of physical parameterizations in global climate models. This study describes the development of an ensemble variationally constrained objective analysis of atmospheric large-scale forcing data and its application to evaluate the cloud biases in the Community Atmospheric Model (CAM5). Sensitivities of the variational objective analysis to background data, error covariance matrix and constraint variables are described and used to quantify the uncertainties in the large-scale forcing data. Application of the ensemble forcing in the CAM5 SCM during the March 2000 intensive operational period (IOP) at the Southern Great Plains (SGP) of the Atmospheric Radiation Measurement (ARM) program shows systematic biases in the model simulations that cannot be explained by the uncertainty of large-scale forcing data, which points to the deficiencies of physical parameterizations. The SCM is shown to overestimate high clouds and underestimate low clouds. These biases are found to also exist in the global simulation of CAM5 when it is compared with satellite data.
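A minimal sketch of the variational-constraint idea (hypothetical numbers, not the ARM objective-analysis code): a background estimate of the forcing terms is adjusted by the smallest weighted amount that makes a linear column-budget constraint hold exactly, via a single Lagrange multiplier.

    import numpy as np

    def constrained_adjustment(x_b, B, a, c):
        """Minimize (x - x_b)^T B^{-1} (x - x_b) subject to a^T x = c
        (closed-form solution with one Lagrange multiplier)."""
        Ba = B @ a
        lam = (c - a @ x_b) / (a @ Ba)
        return x_b + lam * Ba

    # Hypothetical example: three budget terms whose sum must equal the observed total
    x_b = np.array([2.0, -1.0, 0.5])     # background estimates
    B = np.diag([0.4, 0.1, 0.2])         # background error variances
    a = np.array([1.0, 1.0, 1.0])        # constraint: the terms must sum to c
    x_a = constrained_adjustment(x_b, B, a, c=1.2)
    print(x_a, x_a.sum())                # the constraint is satisfied exactly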
An approach to multiscale modelling with graph grammars.
Ong, Yongzhi; Streit, Katarína; Henke, Michael; Kurth, Winfried
2014-09-01
Functional-structural plant models (FSPMs) simulate biological processes at different spatial scales. Methods exist for multiscale data representation and modification, but the advantages of using multiple scales in the dynamic aspects of FSPMs remain unclear. Results from multiscale models in various other areas of science that share fundamental modelling issues with FSPMs suggest that potential advantages do exist, and this study therefore aims to introduce an approach to multiscale modelling in FSPMs. A three-part graph data structure and grammar is revisited, and presented with a conceptual framework for multiscale modelling. The framework is used for identifying roles, categorizing and describing scale-to-scale interactions, thus allowing alternative approaches to model development as opposed to correlation-based modelling at a single scale. Reverse information flow (from macro- to micro-scale) is catered for in the framework. The methods are implemented within the programming language XL. Three example models are implemented using the proposed multiscale graph model and framework. The first illustrates the fundamental usage of the graph data structure and grammar, the second uses probabilistic modelling for organs at the fine scale in order to derive crown growth, and the third combines multiscale plant topology with ozone trends and metabolic network simulations in order to model juvenile beech stands under exposure to a toxic trace gas. The graph data structure supports data representation and grammar operations at multiple scales. The results demonstrate that multiscale modelling is a viable method in FSPM and an alternative to correlation-based modelling. Advantages and disadvantages of multiscale modelling are illustrated by comparisons with single-scale implementations, leading to motivations for further research in sensitivity analysis and run-time efficiency for these models.
An approach to multiscale modelling with graph grammars
Ong, Yongzhi; Streit, Katarína; Henke, Michael; Kurth, Winfried
2014-01-01
Background and Aims Functional–structural plant models (FSPMs) simulate biological processes at different spatial scales. Methods exist for multiscale data representation and modification, but the advantages of using multiple scales in the dynamic aspects of FSPMs remain unclear. Results from multiscale models in various other areas of science that share fundamental modelling issues with FSPMs suggest that potential advantages do exist, and this study therefore aims to introduce an approach to multiscale modelling in FSPMs. Methods A three-part graph data structure and grammar is revisited, and presented with a conceptual framework for multiscale modelling. The framework is used for identifying roles, categorizing and describing scale-to-scale interactions, thus allowing alternative approaches to model development as opposed to correlation-based modelling at a single scale. Reverse information flow (from macro- to micro-scale) is catered for in the framework. The methods are implemented within the programming language XL. Key Results Three example models are implemented using the proposed multiscale graph model and framework. The first illustrates the fundamental usage of the graph data structure and grammar, the second uses probabilistic modelling for organs at the fine scale in order to derive crown growth, and the third combines multiscale plant topology with ozone trends and metabolic network simulations in order to model juvenile beech stands under exposure to a toxic trace gas. Conclusions The graph data structure supports data representation and grammar operations at multiple scales. The results demonstrate that multiscale modelling is a viable method in FSPM and an alternative to correlation-based modelling. Advantages and disadvantages of multiscale modelling are illustrated by comparisons with single-scale implementations, leading to motivations for further research in sensitivity analysis and run-time efficiency for these models. PMID:25134929
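A toy rendering of the multiscale graph idea, written in Python rather than the XL language used by the authors (an assumption for illustration, not their data structure): nodes carry a scale attribute, refinement edges connect a coarse node to its finer-scale components, and successor edges encode topology within one scale.

    # Minimal multiscale graph: 'refines_to' edges cross scales, 'successors' stay within one scale
    class Node:
        def __init__(self, name, scale):
            self.name, self.scale = name, scale
            self.refines_to = []   # finer-scale components of this node
            self.successors = []   # same-scale topology (e.g., along a plant axis)

        def refine(self, *children):
            self.refines_to.extend(children)
            return self

    def nodes_at_scale(node, scale):
        """Collect all descendants of `node` that live at the requested scale."""
        if node.scale == scale:
            return [node]
        return [n for child in node.refines_to for n in nodes_at_scale(child, scale)]

    plant = Node("plant", scale=0)
    shoot = Node("shoot", scale=1)
    internode1, internode2 = Node("internode1", scale=2), Node("internode2", scale=2)
    internode1.successors.append(internode2)        # fine-scale topology
    plant.refine(shoot)
    shoot.refine(internode1, internode2)             # cross-scale refinement
    print([n.name for n in nodes_at_scale(plant, scale=2)])   # ['internode1', 'internode2']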
Ho, Lap; Cheng, Haoxiang; Wang, Jun; Simon, James E; Wu, Qingli; Zhao, Danyue; Carry, Eileen; Ferruzzi, Mario G; Faith, Jeremiah; Valcarcel, Breanna; Hao, Ke; Pasinetti, Giulio M
2018-03-05
The development of a given botanical preparation for eventual clinical application requires extensive, detailed characterizations of the chemical composition, as well as the biological availability, biological activity, and safety profiles of the botanical. These issues are typically addressed using diverse experimental protocols and model systems. Based on this consideration, in this study we established a comprehensive database and analysis framework for the collection, collation, and integrative analysis of diverse, multiscale data sets. Using this framework, we conducted an integrative analysis of heterogeneous data from in vivo and in vitro investigation of a complex bioactive dietary polyphenol-rich preparation (BDPP) and built an integrated network linking data sets generated from this multitude of diverse experimental paradigms. We established a comprehensive database and analysis framework as well as a systematic and logical means to catalogue and collate the diverse array of information gathered, which is securely stored and added to in a standardized manner to enable fast querying. We demonstrated the utility of the database in (1) a statistical ranking scheme to prioritize responses to treatments and (2) in-depth reconstruction of functionality studies. By examination of these data sets, the system allows analytical querying of heterogeneous data and access to information related to interactions, mechanisms of action, functions, etc., which ultimately provides a global overview of complex biological responses. Collectively, we present an integrative analysis framework that leads to novel insights on the biological activities of a complex botanical such as BDPP, based on data-driven characterizations of interactions between BDPP-derived phenolic metabolites and their mechanisms of action, as well as synergism and/or potential cancellation of biological functions. Our integrative analytical approach provides a novel means for the systematic integrative analysis of heterogeneous data types in the development of complex botanicals such as polyphenols for eventual clinical and translational applications.
Evaluation of Warm-Rain Microphysical Parameterizations in Cloudy Boundary Layer Transitions
NASA Astrophysics Data System (ADS)
Nelson, K.; Mechem, D. B.
2014-12-01
Common warm-rain microphysical parameterizations used for marine boundary layer (MBL) clouds are either tuned for specific cloud types (e.g., the Khairoutdinov and Kogan 2000 parameterization, "KK2000") or are altogether ill-posed (Kessler 1969). An ideal microphysical parameterization should be "unified" in the sense of being suitable across MBL cloud regimes that include stratocumulus, cumulus rising into stratocumulus, and shallow trade cumulus. The recent parameterization of Kogan (2013, "K2013") was formulated for shallow cumulus but has been shown in a large-eddy simulation environment to work quite well for stratocumulus as well. We report on our efforts to implement and test this parameterization into a regional forecast model (NRL COAMPS). Results from K2013 and KK2000 are compared with the operational Kessler parameterization for a 5-day period of the VOCALS-REx field campaign, which took place over the southeast Pacific. We focus on both the relative performance of the three parameterizations and also on how they compare to the VOCALS-REx observations from the NOAA R/V Ronald H. Brown, in particular estimates of boundary-layer depth, liquid water path (LWP), cloud base, and area-mean precipitation rate obtained from C-band radar.
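For orientation, the two families of schemes being contrasted can be written down in their commonly quoted forms; the coefficients below are the values usually cited in the literature and should be treated as assumptions here, and the K2013 parameterization itself is not reproduced.

    import numpy as np

    def kessler_auto(qc, k=1.0e-3, qc0=0.5e-3):
        """Kessler (1969)-type autoconversion: linear above a cloud-water threshold.
        qc in kg/kg, rate in kg/kg/s; k and qc0 are tunable constants."""
        return k * np.maximum(qc - qc0, 0.0)

    def kk2000_auto(qc, nc):
        """Khairoutdinov & Kogan (2000) autoconversion as commonly quoted:
        rate = 1350 * qc**2.47 * Nc**-1.79, with qc in kg/kg and Nc in cm^-3."""
        return 1350.0 * qc**2.47 * nc**(-1.79)

    qc = 0.8e-3                 # 0.8 g/kg of cloud water
    for nc in (50.0, 200.0):    # clean vs. polluted droplet concentrations
        print(nc, kessler_auto(qc), kk2000_auto(qc, nc))

The strong dependence of the KK2000 rate on droplet number is what ties the autoconversion to the aerosol state, whereas the Kessler form responds only to cloud water.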
Thayer-Calder, K.; Gettelman, A.; Craig, C.; ...
2015-06-30
Most global climate models parameterize separate cloud types using separate parameterizations. This approach has several disadvantages, including obscure interactions between parameterizations and inaccurate triggering of cumulus parameterizations. Alternatively, a unified cloud parameterization uses one equation set to represent all cloud types. Such cloud types include stratiform liquid and ice cloud, shallow convective cloud, and deep convective cloud. Vital to the success of a unified parameterization is a general interface between clouds and microphysics. One such interface involves drawing Monte Carlo samples of subgrid variability of temperature, water vapor, cloud liquid, and cloud ice, and feeding the sample points into a microphysics scheme. This study evaluates a unified cloud parameterization and a Monte Carlo microphysics interface that has been implemented in the Community Atmosphere Model (CAM) version 5.3. Results describing the mean climate and tropical variability from global simulations are presented. The new model shows a degradation in precipitation skill but improvements in short-wave cloud forcing, liquid water path, long-wave cloud forcing, precipitable water, and tropical wave simulation. Also presented are estimations of computational expense and investigation of sensitivity to number of subcolumns.
Thayer-Calder, Katherine; Gettelman, A.; Craig, Cheryl; ...
2015-12-01
Most global climate models parameterize separate cloud types using separate parameterizations. This approach has several disadvantages, including obscure interactions between parameterizations and inaccurate triggering of cumulus parameterizations. Alternatively, a unified cloud parameterization uses one equation set to represent all cloud types. Such cloud types include stratiform liquid and ice cloud, shallow convective cloud, and deep convective cloud. Vital to the success of a unified parameterization is a general interface between clouds and microphysics. One such interface involves drawing Monte Carlo samples of subgrid variability of temperature, water vapor, cloud liquid, and cloud ice, and feeding the sample points into a microphysics scheme. This study evaluates a unified cloud parameterization and a Monte Carlo microphysics interface that has been implemented in the Community Atmosphere Model (CAM) version 5.3. Results describing the mean climate and tropical variability from global simulations are presented. In conclusion, the new model shows a degradation in precipitation skill but improvements in short-wave cloud forcing, liquid water path, long-wave cloud forcing, precipitable water, and tropical wave simulation. Also presented are estimations of computational expense and investigation of sensitivity to number of subcolumns.
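The Monte Carlo interface described in both versions of this abstract can be caricatured as follows; this is a schematic sketch rather than CAM code, and the lognormal subgrid distribution and the toy process rate are assumptions. Subcolumn samples of cloud liquid are drawn from an assumed subgrid PDF, a nonlinear process rate is evaluated on each sample, and the grid-mean tendency is the sample average.

    import numpy as np

    def grid_mean_rate(qc_mean, rel_var, process, n_sub=64, rng=None):
        """Draw subcolumn samples of cloud liquid from an assumed lognormal subgrid
        PDF with the given mean and relative variance, apply a nonlinear process
        rate to each subcolumn, and average."""
        rng = rng or np.random.default_rng(0)
        sigma2 = np.log(1.0 + rel_var)
        mu = np.log(qc_mean) - 0.5 * sigma2
        qc_sub = rng.lognormal(mean=mu, sigma=np.sqrt(sigma2), size=n_sub)
        return process(qc_sub).mean()

    toy_auto = lambda qc: 1.0e-3 * qc**2.47           # nonlinear toy autoconversion rate
    qc_mean = 0.5e-3
    print(toy_auto(qc_mean))                                      # rate of the mean state
    print(grid_mean_rate(qc_mean, rel_var=1.0, process=toy_auto))  # mean of the sampled rates

Because the toy process rate is convex in cloud water, the sampled grid-mean rate exceeds the rate of the mean state, which is exactly the subgrid-variability effect such an interface is designed to capture.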
Novel Multiscale Modeling Tool Applied to Pseudomonas aeruginosa Biofilm Formation
Biggs, Matthew B.; Papin, Jason A.
2013-01-01
Multiscale modeling is used to represent biological systems with increasing frequency and success. Multiscale models are often hybrids of different modeling frameworks and programming languages. We present the MATLAB-NetLogo extension (MatNet) as a novel tool for multiscale modeling. We demonstrate the utility of the tool with a multiscale model of Pseudomonas aeruginosa biofilm formation that incorporates both an agent-based model (ABM) and constraint-based metabolic modeling. The hybrid model correctly recapitulates oxygen-limited biofilm metabolic activity and predicts increased growth rate via anaerobic respiration with the addition of nitrate to the growth media. In addition, a genome-wide survey of metabolic mutants and biofilm formation exemplifies the powerful analyses that are enabled by this computational modeling tool. PMID:24147108
Zanin, Massimiliano; Chorbev, Ivan; Stres, Blaz; Stalidzans, Egils; Vera, Julio; Tieri, Paolo; Castiglione, Filippo; Groen, Derek; Zheng, Huiru; Baumbach, Jan; Schmid, Johannes A; Basilio, José; Klimek, Peter; Debeljak, Nataša; Rozman, Damjana; Schmidt, Harald H H W
2017-12-05
Systems medicine holds many promises, but has so far provided only a limited number of proofs of principle. To address this road block, possible barriers and challenges of translating systems medicine into clinical practice need to be identified and addressed. The members of the European Cooperation in Science and Technology (COST) Action CA15120 Open Multiscale Systems Medicine (OpenMultiMed) wish to engage the scientific community of systems medicine and multiscale modelling, data science and computing, to provide their feedback in a structured manner. This will result in follow-up white papers and open access resources to accelerate the clinical translation of systems medicine. © The Author 2017. Published by Oxford University Press.
Performance of distributed multiscale simulations
Borgdorff, J.; Ben Belgacem, M.; Bona-Casas, C.; Fazendeiro, L.; Groen, D.; Hoenen, O.; Mizeranschi, A.; Suter, J. L.; Coster, D.; Coveney, P. V.; Dubitzky, W.; Hoekstra, A. G.; Strand, P.; Chopard, B.
2014-01-01
Multiscale simulations model phenomena across natural scales using monolithic or component-based code, running on local or distributed resources. In this work, we investigate the performance of distributed multiscale computing of component-based models, guided by six multiscale applications with different characteristics and from several disciplines. Three modes of distributed multiscale computing are identified: supplementing local dependencies with large-scale resources, load distribution over multiple resources, and load balancing of small- and large-scale resources. We find that the first mode has the apparent benefit of increasing simulation speed, and the second mode can increase simulation speed if local resources are limited. Depending on resource reservation and model coupling topology, the third mode may result in a reduction of resource consumption. PMID:24982258
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jablonowski, Christiane
The research investigates and advances strategies for bridging the scale discrepancies between local, regional and global phenomena in climate models without the prohibitive computational costs of global cloud-resolving simulations. In particular, the research explores new frontiers in computational geoscience by introducing high-order Adaptive Mesh Refinement (AMR) techniques into climate research. AMR and statically-adapted variable-resolution approaches represent an emerging trend for atmospheric models and are likely to become the new norm in future-generation weather and climate models. The research advances the understanding of multi-scale interactions in the climate system and showcases a pathway for modeling these interactions effectively with advanced computational tools, like the Chombo AMR library developed at the Lawrence Berkeley National Laboratory. The research is interdisciplinary and combines applied mathematics, scientific computing and the atmospheric sciences. In this research project, a hierarchy of high-order atmospheric models on cubed-sphere computational grids has been developed that serves as an algorithmic prototype for the finite-volume solution-adaptive Chombo-AMR approach. The investigations have focused on the characteristics of both static mesh adaptations and dynamically-adaptive grids that can capture flow fields of interest like tropical cyclones. Six research themes have been chosen. These are (1) the introduction of adaptive mesh refinement techniques into the climate sciences, (2) advanced algorithms for nonhydrostatic atmospheric dynamical cores, (3) an assessment of the interplay between resolved-scale dynamical motions and subgrid-scale physical parameterizations, (4) evaluation techniques for atmospheric model hierarchies, (5) the comparison of AMR refinement strategies and (6) tropical cyclone studies with a focus on multi-scale interactions and variable-resolution modeling. The results of this research project demonstrate significant advances in all six research areas. The major conclusions are that statically-adaptive variable-resolution modeling is currently becoming mature in the climate sciences, and that AMR holds outstanding promise for future-generation weather and climate models on high-performance computing architectures.
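The cell-tagging decision at the heart of AMR can be shown with a minimal 1D sketch; the gradient-based criterion and threshold below are generic illustrations, not the tagging logic of the Chombo library.

    import numpy as np

    def tag_cells_for_refinement(u, threshold):
        """Flag cells whose undivided gradient |u[i+1] - u[i-1]| exceeds a threshold --
        a simple stand-in for the feature-based tagging used in block-structured AMR codes."""
        grad = np.zeros_like(u)
        grad[1:-1] = np.abs(u[2:] - u[:-2])
        return grad > threshold

    x = np.linspace(0.0, 1.0, 101)
    u = np.tanh((x - 0.5) / 0.02)        # a sharp front that needs local resolution
    tags = tag_cells_for_refinement(u, threshold=0.1)
    print(f"{tags.sum()} of {u.size} cells tagged for refinement near the front")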
NASA Astrophysics Data System (ADS)
Beguet, Benoit; Guyon, Dominique; Boukir, Samia; Chehata, Nesrine
2014-10-01
The main goal of this study is to design a method to describe the structure of forest stands from Very High Resolution satellite imagery, relying on some typical variables such as crown diameter, tree height, trunk diameter, tree density and tree spacing. The emphasis is placed on automating the identification of the most relevant image features for the forest structure retrieval task, exploiting both spectral and spatial information. Our approach is based on linear regressions between the forest structure variables to be estimated and various spectral and Haralick texture features. The main drawback of this well-known texture representation is its underlying parameters, which are extremely difficult to set due to the spatial complexity of the forest structure. To tackle this major issue, an automated feature selection process is proposed which is based on statistical modeling, exploring a wide range of parameter values. It provides texture measures for diverse spatial parameters, hence implicitly inducing a multi-scale texture analysis. A new feature selection technique, which we call Random PRiF, is proposed. It relies on random sampling in feature space and carefully addresses the multicollinearity issue in multiple linear regression while ensuring accurate prediction of forest variables. Our automated forest variable estimation scheme was tested on Quickbird and Pléiades panchromatic and multispectral images, acquired at different periods on the maritime pine stands of two sites in South-Western France. It outperforms two well-established variable subset selection techniques. It has been successfully applied to identify the best texture features in modeling the five considered forest structure variables. The RMSE of all predicted forest variables is improved by combining multispectral and panchromatic texture features, with various parameterizations, highlighting the potential of a multi-resolution approach for retrieving forest structure variables from VHR satellite images. Thus an average prediction error of ˜ 1.1 m is expected on crown diameter, ˜ 0.9 m on tree spacing, ˜ 3 m on height and ˜ 0.06 m on diameter at breast height.
Multi-scale dynamical behavior of spatially distributed systems: a deterministic point of view
NASA Astrophysics Data System (ADS)
Mangiarotti, S.; Le Jean, F.; Drapeau, L.; Huc, M.
2015-12-01
Physical and biophysical systems are spatially distributed systems. Their behavior can be observed or modelled spatially at various resolutions. In this work, a deterministic point of view is adopted to analyze multi-scale behavior, taking a set of ordinary differential equations (ODEs) as the elementary part of the system. To perform analyses, scenes of study are generated from ensembles of identical elementary ODE systems. Without any loss of generality, their dynamics is chosen to be chaotic in order to ensure sensitivity to initial conditions, one fundamental property of the atmosphere under unstable conditions [1]. The Rössler system [2] is used for this purpose for both its topological and algebraic simplicity [3,4]. Two cases are thus considered: the chaotic oscillators composing the scene of study are taken either as independent or in phase synchronization. Scale behaviors are analyzed considering the scene of study as aggregations (basically obtained by spatially averaging the signal) or as associations (obtained by concatenating the time series). The global modeling technique is used to perform the numerical analyses [5]. One important result of this work is that, under phase synchronization, a scene of aggregated dynamics can be approximated by the elementary system composing the scene with a modified parameterization [6]. This is shown based on numerical analyses. It is then demonstrated analytically and generalized to a larger class of ODE systems. Preliminary applications to cereal crops observed from satellite are also presented. [1] Lorenz, Deterministic nonperiodic flow. J. Atmos. Sci., 20, 130-141 (1963). [2] Rössler, An equation for continuous chaos, Phys. Lett. A, 57, 397-398 (1976). [3] Gouesbet & Letellier, Global vector-field reconstruction by using a multivariate polynomial L2 approximation on nets, Phys. Rev. E 49, 4955-4972 (1994). [4] Letellier, Roulin & Rössler, Inequivalent topologies of chaos in simple equations, Chaos, Solitons & Fractals, 28, 337-360 (2006). [5] Mangiarotti, Coudret, Drapeau, & Jarlan, Polynomial search and global modeling, Phys. Rev. E 86(4), 046205 (2012). [6] Mangiarotti, Modélisation globale et Caractérisation Topologique de dynamiques environnementales. Habilitation à Diriger des Recherches, Univ. Toulouse 3 (2014).
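The scenes used in these experiments can be reproduced in outline with a short sketch (standard Rössler parameter values; no coupling is included, so this corresponds to the independent-oscillator case rather than the phase-synchronized one): an ensemble of identical Rössler systems is integrated from perturbed initial conditions and the aggregated signal is their average.

    import numpy as np
    from scipy.integrate import solve_ivp

    def rossler(t, state, a=0.2, b=0.2, c=5.7):
        x, y, z = state
        return [-y - z, x + a * y, b + z * (x - c)]

    rng = np.random.default_rng(1)
    t_eval = np.linspace(0.0, 200.0, 4000)
    ensemble = []
    for _ in range(20):    # 20 independent elementary systems with perturbed initial states
        y0 = np.array([1.0, 1.0, 0.0]) + 0.1 * rng.standard_normal(3)
        sol = solve_ivp(rossler, (0.0, 200.0), y0, t_eval=t_eval, rtol=1e-6)
        ensemble.append(sol.y[0])            # keep the x-variable of each oscillator

    aggregated = np.mean(ensemble, axis=0)   # spatially averaged ("aggregated") dynamics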
An optimized algorithm for multiscale wideband deconvolution of radio astronomical images
NASA Astrophysics Data System (ADS)
Offringa, A. R.; Smirnov, O.
2017-10-01
We describe a new multiscale deconvolution algorithm that can also be used in a multifrequency mode. The algorithm only affects the minor clean loop. In single-frequency mode, the minor loop of our improved multiscale algorithm is over an order of magnitude faster than the casa multiscale algorithm, and produces results of similar quality. For multifrequency deconvolution, a technique named joined-channel cleaning is used. In this mode, the minor loop of our algorithm is two to three orders of magnitude faster than casa msmfs. We extend the multiscale mode with automated scale-dependent masking, which allows structures to be cleaned below the noise. We describe a new scale-bias function for use in multiscale cleaning. We test a second deconvolution method that is a variant of the moresane deconvolution technique, and uses a convex optimization technique with isotropic undecimated wavelets as dictionary. On simple well-calibrated data, the convex optimization algorithm produces visually more representative models. On complex or imperfect data, the convex optimization algorithm has stability issues.
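For context, the core of any minor clean loop, stripped to a single scale and one dimension, looks like the sketch below; it is a generic illustration, not the implementation described above. A multiscale version additionally convolves the residual with each scale kernel and uses a scale-bias function to choose which scale to clean at every iteration.

    import numpy as np

    def minor_clean_loop(dirty, psf, gain=0.1, n_iter=200, threshold=1e-3):
        """Repeatedly locate the peak residual, add a scaled component to the model,
        and subtract the correspondingly shifted, scaled PSF from the residual."""
        residual = dirty.copy()
        model = np.zeros_like(dirty)
        centre = len(psf) // 2
        for _ in range(n_iter):
            peak = int(np.argmax(np.abs(residual)))
            if np.abs(residual[peak]) < threshold:
                break
            comp = gain * residual[peak]
            model[peak] += comp
            lo, hi = peak - centre, peak - centre + len(psf)
            p_lo = max(0, -lo)
            p_hi = len(psf) - max(0, hi - len(dirty))
            residual[max(0, lo):min(len(dirty), hi)] -= comp * psf[p_lo:p_hi]
        return model, residual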
Cho, Hyesung; Moon Kim, Sang; Sik Kang, Yun; Kim, Junsoo; Jang, Segeun; Kim, Minhyoung; Park, Hyunchul; Won Bang, Jung; Seo, Soonmin; Suh, Kahp-Yang; Sung, Yung-Eun; Choi, Mansoo
2015-01-01
The production of multiscale architectures is of significant interest in materials science, and the integration of those structures could provide a breakthrough for various applications. Here we report a simple yet versatile strategy that allows for the LEGO-like integration of microscale membranes by quantitatively controlling the oxygen inhibition effects of ultraviolet-curable materials, leading to multilevel multiscale architectures. The spatial control of oxygen concentration induces different curing contrasts in a resin, allowing selective imprinting and bonding at different sides of a membrane, which enables LEGO-like integration together with multiscale pattern formation. Utilizing this method, multilevel multiscale Nafion membranes are prepared and applied to a polymer electrolyte membrane fuel cell. Our multiscale membrane fuel cell demonstrates a significant enhancement of performance while ensuring mechanical robustness. The performance enhancement is caused by the combined effect of the decrease of membrane resistance and the increase of the electrochemically active surface area. PMID:26412619
2D deblending using the multi-scale shaping scheme
NASA Astrophysics Data System (ADS)
Li, Qun; Ban, Xingan; Gong, Renbin; Li, Jinnuo; Ge, Qiang; Zu, Shaohuan
2018-01-01
Deblending can be posed as an inversion problem, which is ill-posed and requires constraints to obtain a unique and stable solution. In a blended record, the signal is coherent whereas the interference is incoherent in some domains (e.g., the common receiver domain and the common offset domain). Because of this difference in sparsity, the coefficients of the signal and of the interference lie in different curvelet scale domains and have different amplitudes. Taking these two differences into account, we propose a 2D multi-scale shaping scheme that constrains sparsity to separate the blended record. In the domains where the signal concentrates, the multi-scale scheme passes all of the coefficients representing the signal, while in the domains where the interference focuses, it suppresses the coefficients representing the interference. Because the interference is strongly suppressed at each iteration, the constraints of the multi-scale shaping operators in all scale domains can be kept weak to guarantee the convergence of the algorithm. We evaluate the performance of the multi-scale shaping scheme and the traditional global shaping scheme using two synthetic examples and one field-data example.
Microphysics in Multi-scale Modeling System with Unified Physics
NASA Technical Reports Server (NTRS)
Tao, Wei-Kuo
2012-01-01
Recently, a multi-scale modeling system with unified physics was developed at NASA Goddard. It consists of (1) a cloud-resolving model (the Goddard Cumulus Ensemble model, GCE), (2) a regional-scale model (the NASA unified Weather Research and Forecasting model, WRF), (3) a coupled CRM and global model (the Goddard Multi-scale Modeling Framework, MMF), and (4) a land modeling system. The same microphysical processes, longwave and shortwave radiative transfer, and land processes, together with explicit cloud-radiation and cloud-land surface interactions, are applied throughout this multi-scale modeling system. The modeling system has been coupled with a multi-satellite simulator so that NASA high-resolution satellite data can be used to identify the strengths and weaknesses of the cloud and precipitation processes simulated by the model. In this talk, a review of developments and applications of the multi-scale modeling system will be presented, with particular emphasis on the microphysics development and its performance within the system.
Multiscale analysis of neural spike trains.
Ramezan, Reza; Marriott, Paul; Chenouri, Shojaeddin
2014-01-30
This paper studies the multiscale analysis of neural spike trains, through both graphical and Poisson process approaches. We introduce the interspike interval plot, which simultaneously visualizes characteristics of neural spiking activity at different time scales. Using an inhomogeneous Poisson process framework, we discuss multiscale estimates of the intensity functions of spike trains. We also introduce the windowing effect for two multiscale methods. Using quasi-likelihood, we develop bootstrap confidence intervals for the multiscale intensity function. We provide a cross-validation scheme, to choose the tuning parameters, and study its unbiasedness. Studying the relationship between the spike rate and the stimulus signal, we observe that adjusting for the first spike latency is important in cross-validation. We show, through examples, that the correlation between spike trains and spike count variability can be multiscale phenomena. Furthermore, we address the modeling of the periodicity of the spike trains caused by a stimulus signal or by brain rhythms. Within the multiscale framework, we introduce intensity functions for spike trains with multiplicative and additive periodic components. Analyzing a dataset from the retinogeniculate synapse, we compare the fit of these models with the Bayesian adaptive regression splines method and discuss the limitations of the methodology. Computational efficiency, which is usually a challenge in the analysis of spike trains, is one of the highlights of these new models. In an example, we show that the reconstruction quality of a complex intensity function demonstrates the ability of the multiscale methodology to crack the neural code. Copyright © 2013 John Wiley & Sons, Ltd.
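One way to picture the multiscale intensity estimates discussed here is to bin the same spike train at several window widths; the binned estimator and the synthetic inhomogeneous Poisson spikes below are generic illustrations, not the authors' estimator or data.

    import numpy as np

    def binned_rate(spike_times, t_max, bin_width):
        """Histogram-based estimate of the firing intensity (spikes/s) at one time scale."""
        edges = np.arange(0.0, t_max + bin_width, bin_width)
        counts, _ = np.histogram(spike_times, bins=edges)
        return edges[:-1], counts / bin_width

    # Synthetic inhomogeneous Poisson spikes with a slow (1 Hz) rate modulation
    rng = np.random.default_rng(3)
    t_max, dt = 10.0, 1e-3
    t = np.arange(0.0, t_max, dt)
    rate = 20.0 * (1.0 + 0.8 * np.sin(2 * np.pi * 1.0 * t))   # spikes per second
    spikes = t[rng.random(t.size) < rate * dt]

    for bw in (0.01, 0.1, 1.0):                               # three analysis scales
        centres, est = binned_rate(spikes, t_max, bw)
        print(f"bin width {bw:>4} s -> mean rate {est.mean():.1f} Hz")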
Nanoparticle interaction potentials constructed by multiscale computation
NASA Astrophysics Data System (ADS)
Lee, Cheng K.; Hua, Chi C.
2010-06-01
The van der Waals (vdW) potentials governing macroscopic objects have long been formulated in the context of classical theories, such as Hamaker's microscopic theory and Lifshitz's continuum theory. This work addresses the possibility of constructing the vdW interaction potentials of nanoparticle species using multiscale simulation schemes. Amorphous silica nanoparticles were considered as a benchmark example for which a series of (SiO2)n (n being an integer) has been systematically surveyed as the potential candidates of the packing units that reproduce known bulk material properties in atomistic molecular dynamics simulations. This strategy led to the identification of spherical Si6O12 molecules, later utilized as the elementary coarse-grained (CG) particles to compute the pair interaction potentials of silica nanoparticles ranging from 0.62 to 100 nm in diameter. The model nanoparticles so built may, in turn, serve as the children CG particles to construct nanoparticles assuming arbitrary sizes and shapes. Major observations are as follows. The pair interaction potentials for all the investigated spherical silica nanoparticles can be cast into a semiempirical, generalized Lennard-Jones 2α-α potential (α being a size-dependent, large integral number). In its reduced form, we discuss the implied universalities for the vdW potentials governing a certain range of amorphous nanoparticle species as well as how thermodynamic transferability can be fulfilled automatically. In view of future applications with colloidal suspensions, we briefly evaluated the vdW potential in the presence of a "screening" medium mimicking the effects of electrical double layers or grafting materials atop the nanoparticle core. The general observations shed new light on strategies to attain a microscopic control over interparticle attractions. In future perspectives, the proposed multiscale computation scheme shall help bridge the current gap between the modeling of polymer chains and macroscopic objects by introducing molecular models coarse-grained at a similar level so that the interactions between these two can be treated in a consistent and faithful way.
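The generalized Lennard-Jones 2α-α form referred to above can be written down directly; the parameter values below are arbitrary placeholders rather than the fitted silica potentials.

    import numpy as np

    def lj_2alpha_alpha(r, epsilon, sigma, alpha):
        """Generalized Lennard-Jones 2a-a pair potential:
        U(r) = epsilon * [ (sigma/r)**(2*alpha) - 2*(sigma/r)**alpha ],
        which has its minimum U = -epsilon at r = sigma."""
        s = (sigma / r) ** alpha
        return epsilon * (s * s - 2.0 * s)

    r = np.linspace(0.9, 3.0, 200)
    for alpha in (6, 24, 96):     # larger alpha -> harder, shorter-ranged interaction
        u = lj_2alpha_alpha(r, epsilon=1.0, sigma=1.0, alpha=alpha)
        print(alpha, r[np.argmin(u)], u.min())   # minimum stays near (sigma, -epsilon)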
NASA Astrophysics Data System (ADS)
Wang, He; Otsu, Hideaki; Sakurai, Hiroyoshi; Ahn, DeukSoon; Aikawa, Masayuki; Ando, Takashi; Araki, Shouhei; Chen, Sidong; Chiga, Nobuyuki; Doornenbal, Pieter; Fukuda, Naoki; Isobe, Tadaaki; Kawakami, Shunsuke; Kawase, Shoichiro; Kin, Tadahiro; Kondo, Yosuke; Koyama, Shupei; Kubono, Shigeru; Maeda, Yukie; Makinaga, Ayano; Matsushita, Masafumi; Matsuzaki, Teiichiro; Michimasa, Shinichiro; Momiyama, Satoru; Nagamine, Shunsuke; Nakamura, Takashi; Nakano, Keita; Niikura, Megumi; Ozaki, Tomoyuki; Saito, Atsumi; Saito, Takeshi; Shiga, Yoshiaki; Shikata, Mizuki; Shimizu, Yohei; Shimoura, Susumu; Sumikama, Toshiyuki; Söderström, Pär-Anders; Suzuki, Hiroshi; Takeda, Hiroyuki; Takeuchi, Satoshi; Taniuchi, Ryo; Togano, Yasuhiro; Tsubota, Junichi; Uesaka, Meiko; Watanabe, Yasushi; Watanabe, Yukinobu; Wimmer, Kathrin; Yamamoto, Tatsuya; Yoshida, Koichi
2017-09-01
Spallation reactions for the long-lived fission products 137Cs, 90Sr and 107Pd have been studied for the purpose of nuclear waste transmutation. The cross sections for proton- and deuteron-induced spallation were obtained in inverse kinematics at the RIKEN Radioactive Isotope Beam Factory. Both the target and energy dependences of the cross sections have been investigated systematically, and the cross-section differences between protons and deuterons are found to be larger for lighter fragments. The experimental data are compared with the SPACS semi-empirical parameterization and with PHITS calculations including both the intra-nuclear cascade and evaporation processes.
Development of renormalization group analysis of turbulence
NASA Technical Reports Server (NTRS)
Smith, L. M.
1990-01-01
The renormalization group (RG) procedure for nonlinear, dissipative systems is now quite standard, and its applications to the problem of hydrodynamic turbulence are becoming well known. In summary, the RG method isolates self-similar behavior and provides a systematic procedure to describe scale-invariant dynamics in terms of large-scale variables only. The parameterization of the small scales in a self-consistent manner has important implications for sub-grid modeling. This paper develops the RG analysis of homogeneous, isotropic turbulence and addresses the meaning and consequences of the epsilon-expansion. The theory is then extended to include a weak mean flow, and application of the RG method to a sequence of models is shown to converge to the Navier-Stokes equations.
Climate model biases in seasonality of continental water storage revealed by satellite gravimetry
Swenson, Sean; Milly, P.C.D.
2006-01-01
Satellite gravimetric observations of monthly changes in continental water storage are compared with outputs from five climate models. All models qualitatively reproduce the global pattern of annual storage amplitude, and the seasonal cycle of global average storage is reproduced well, consistent with earlier studies. However, global average agreements mask systematic model biases in low latitudes. Seasonal extrema of low‐latitude, hemispheric storage generally occur too early in the models, and model‐specific errors in amplitude of the low‐latitude annual variations are substantial. These errors are potentially explicable in terms of neglected or suboptimally parameterized water stores in the land models and precipitation biases in the climate models.
Multiple Scales in Fluid Dynamics and Meteorology: The DFG Priority Programme 1276 MetStröm
NASA Astrophysics Data System (ADS)
von Larcher, Th; Klein, R.
2012-04-01
Geophysical fluid motions are characterized by a very wide range of length and time scales, and by a rich collection of varying physical phenomena. The mathematical description of these motions reflects this multitude of scales and mechanisms in that it involves strong non-linearities and various scale-dependent singular limit regimes. Considerable progress has been made in recent years in the mathematical modelling and numerical simulation of such flows in detailed process studies, numerical weather forecasting, and climate research. One task of outstanding importance in this context has been, and will remain for the foreseeable future, the subgrid-scale parameterization of the net effects of non-resolved processes that take place on spatio-temporal scales not resolvable even by the largest, most recent supercomputers. Since the advent of numerical weather forecasting some 60 years ago, one simple but efficient means to achieve improved forecasting skill has been increased spatio-temporal resolution. This seems quite consistent with the concept of convergence of numerical methods in Applied Mathematics and Computational Fluid Dynamics (CFD) at first glance. Yet, the very notion of increased resolution in atmosphere-ocean science is very different from the one used in Applied Mathematics: for the mathematician, increased resolution provides the benefit of getting closer to the ideal of a converged solution of some given partial differential equations. The atmosphere-ocean scientist, on the other hand, would naturally refine the computational grid and adjust the mathematical model so that it better represents the relevant physical processes that occur at smaller scales. This conceptual contradiction remains largely irrelevant as long as geophysical flow models operate with fixed computational grids and time steps and with subgrid-scale parameterizations optimized accordingly. The picture changes fundamentally when modern techniques from CFD involving spatio-temporal grid adaptivity are invoked in order to further improve the net efficiency in exploiting the given computational resources. In the setting of geophysical flow simulation one must then employ subgrid-scale parameterizations that dynamically adapt to the changing grid sizes and time steps, implement ways to judiciously control and steer the newly available flexibility of resolution, and invent novel ways of quantifying the remaining errors. The DFG priority programme MetStröm brings together expertise from meteorology, fluid dynamics, and applied mathematics to develop model- as well as grid-adaptive numerical simulation concepts in multidisciplinary projects. The goal of this priority programme is to provide simulation models which combine scale-dependent (mathematical) descriptions of key physical processes with adaptive flow discretization schemes. Deterministic continuous approaches, discrete and/or stochastic closures, and their possible interplay are taken into consideration. Research focuses on the theory and methodology of multiscale meteorological-fluid mechanics modelling. Accompanying reference experiments support model validation.
NASA Astrophysics Data System (ADS)
Chawla, Ila; Osuri, Krishna K.; Mujumdar, Pradeep P.; Niyogi, Dev
2018-02-01
Reliable estimates of extreme rainfall events are necessary for an accurate prediction of floods. Most of the global rainfall products are available at a coarse resolution, rendering them less desirable for extreme rainfall analysis. Therefore, regional mesoscale models such as the advanced research version of the Weather Research and Forecasting (WRF) model are often used to provide rainfall estimates at fine grid spacing. Modelling heavy rainfall events is an enduring challenge, as such events depend on multi-scale interactions and on model configuration choices such as grid spacing, physical parameterization and initialization. With this background, the WRF model is implemented in this study to investigate the impact of different processes on extreme rainfall simulation, by considering a representative event that occurred during 15-18 June 2013 over the Ganga Basin in India, which is located at the foothills of the Himalayas. This event is simulated with ensembles involving four different microphysics (MP) schemes, two cumulus (CU) parameterizations, two planetary boundary layer (PBL) schemes and two land surface physics options, as well as different resolutions (grid spacings) within the WRF model. The simulated rainfall is evaluated against the observations from 18 rain gauges and the Tropical Rainfall Measuring Mission Multi-Satellite Precipitation Analysis (TMPA) 3B42RT version 7 data. From the analysis, it is found that the choice of MP scheme influences the spatial pattern of rainfall, while the choice of PBL and CU parameterizations influences the magnitude of rainfall in the model simulations. Further, the WRF run with the Goddard MP, Mellor-Yamada-Janjic PBL and Betts-Miller-Janjic CU schemes is found to perform best in simulating this heavy rain event. The selected configuration is evaluated for several heavy to extremely heavy rainfall events that occurred across different months of the monsoon season in the region. The model performance improved through incorporation of detailed land surface processes involving prognostic soil moisture evolution in the Noah scheme compared to the simple Slab model. To analyse the effect of model grid spacing, two sets of downscaling ratios - (i) 1 : 3, global to regional (G2R) scale and (ii) 1 : 9, global to convection-permitting scale (G2C) - are employed. Results indicate that a higher downscaling ratio (G2C) causes higher variability and consequently large errors in the simulations. Therefore, G2R is adopted as a suitable choice for simulating the heavy rainfall event in the present case study. Further, the WRF-simulated rainfall is found to exhibit less bias when compared with the NCEP FiNaL (FNL) reanalysis data.
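The physics ensemble described above amounts to running the model over the Cartesian product of the scheme choices; the bookkeeping can be sketched as below. The Goddard, Betts-Miller-Janjic, Mellor-Yamada-Janjic, Noah and Slab names are taken from the abstract, while the remaining scheme names are hypothetical stand-ins.

    from itertools import product

    microphysics = ["Goddard", "WSM6", "Thompson", "Morrison"]   # four MP options (illustrative set)
    cumulus      = ["Betts-Miller-Janjic", "Kain-Fritsch"]
    pbl          = ["Mellor-Yamada-Janjic", "YSU"]
    land_surface = ["Slab", "Noah"]

    ensemble = [
        {"mp": mp, "cu": cu, "pbl": p, "lsm": lsm}
        for mp, cu, p, lsm in product(microphysics, cumulus, pbl, land_surface)
    ]
    print(len(ensemble), "candidate configurations")   # 4 x 2 x 2 x 2 = 32

    # In practice each configuration would be run and scored against the rain gauges,
    # e.g. by RMSE of accumulated rainfall, and the best-performing member retained.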
The Berkeley Out-of-Order Machine (BOOM): An Industry-Competitive, Synthesizable, Parameterized RISC-V Processor
Celio, Christopher; Patterson, David; Asanović, Krste
2015-06-13
BOOM is a synthesizable, parameterized, superscalar out-of-order RISC-V core designed to serve as the prototypical baseline processor.
Feedbacks between Air-Quality, Meteorology, and the Forest Environment
NASA Astrophysics Data System (ADS)
Makar, Paul; Akingunola, Ayodeji; Stroud, Craig; Zhang, Junhua; Gong, Wanmin; Moran, Michael; Zheng, Qiong; Brook, Jeffrey; Sills, David
2017-04-01
The outcome of air quality forecasts depends in part on how the local environment surrounding the emissions regions influences chemical reaction rates and transport from those regions to the larger spatial scales. Forested areas alter atmospheric chemistry by reducing photolysis rates and vertical diffusivities within the forest canopy. The emitted pollutants, and their reaction products, are in turn capable of altering meteorology, through the well-known direct and indirect effects of particulate matter on radiative transfer. The combination of these factors was examined using version 2 of the Global Environmental Multiscale - Modelling Air-quality and CHemistry (GEM-MACH) on-line air pollution model. The model configuration used for this study included 12 aerosol size bins, eight aerosol species, homogeneous core Mie scattering, the Milbrandt-Yao two-moment cloud microphysics scheme with cloud condensation nuclei generated from model aerosols using the scheme of Abdul-Razzak and Ghan, and a new parameterization for forest canopy shading and turbulence. The model was nested to 2.5 km resolution for a domain encompassing the lower Great Lakes, for simulations of a period in August of 2015 during the Pan American Games, held in Toronto, Canada. Four scenarios were carried out: (1) a "Base Case" scenario (the original model, in which coupling between chemistry and weather is not permitted; instead, the meteorological model's internal climatologies for aerosol optical and cloud condensation properties are used for direct and indirect effect calculations); (2) a "Feedback" scenario (the aerosol properties were derived from the internally simulated chemistry, and coupled to the meteorological model's radiative transfer and cloud formation modules); (3) a "Forest" scenario (canopy shading and turbulence were added to the Base Case); (4) a "Combined" scenario (including both direct and indirect effect coupling between meteorology and chemistry, as well as the forest canopy parameterization). The simulations suggest that the feedbacks between simulated aerosols and meteorology may strengthen the existing lake breeze circulation, modifying the resulting meteorological and air-quality forecasts, while the forest canopy's influence may extend throughout the planetary boundary layer and may also influence the weather. The simulations will be compared to available observations in order to determine their relative impact on model performance.
Evaluation of gas-particle partitioning in a regional air quality model for organic pollutants
NASA Astrophysics Data System (ADS)
Efstathiou, Christos I.; Matejovičová, Jana; Bieser, Johannes; Lammel, Gerhard
2016-12-01
Persistent organic pollutants (POPs) are of considerable concern due to their well-recognized toxicity and their potential to bioaccumulate and engage in long-range transport. These compounds are semi-volatile and, therefore, partition between the vapour and condensed phases in the atmosphere, while both phases can undergo chemical reactions. This work describes the extension of the Community Multiscale Air Quality (CMAQ) modelling system to POPs with a focus on establishing an adaptable framework that accounts for gaseous chemistry, heterogeneous reactions, and gas-particle partitioning (GPP). The effect of GPP is assessed by implementing a set of independent parameterizations within the CMAQ aerosol module, including the Junge-Pankow (JP) adsorption model, the Harner-Bidleman (HB) organic matter (OM) absorption model, and the dual Dachs-Eisenreich (DE) black carbon (BC) adsorption and OM absorption model. Use of these descriptors in a modified version of CMAQ for benzo[a]pyrene (BaP) results in different fate and transport patterns as demonstrated by regional-scale simulations performed for a European domain during 2006. The dual DE model predicted 24.1 % higher average domain concentrations compared to the HB model, which in turn predicted 119.2 % higher levels than the baseline JP model. Evaluation with measurements from the European Monitoring and Evaluation Programme (EMEP) reveals the capability of the more extensive DE model to better capture the ambient levels and seasonal behaviour of BaP. It is found that the heterogeneous reaction of BaP with O3 may decrease its atmospheric lifetime by 25.2 % (domain and annual average) and near-ground concentrations by 18.8 %. Marginally better model performance was found for one of the six EMEP stations (Košetice) when heterogeneous BaP reactivity was included. Further analysis shows that, for the rest of the EMEP locations, the model continues to underestimate BaP levels, an observation that can be attributed to low emission estimates for such remote areas. These findings suggest that, when modelling the fate and transport of organic pollutants on large spatio-temporal scales, the selection and parameterization of GPP can be as important as degradation (reactivity).
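For readers wanting to see how such partitioning descriptors are typically expressed, the sketch below implements commonly quoted textbook forms of the Junge-Pankow and Harner-Bidleman relations; it is an illustrative approximation, not the CMAQ implementation described above, and all numerical inputs (the Junge constant, vapour pressure, aerosol surface area, Koa, organic-matter fraction, TSP) are assumed placeholder values.

```python
# Minimal sketch (not the CMAQ code) of two gas-particle partitioning models.
# All input values below are illustrative assumptions, not data from the study.
import math

def junge_pankow_fraction(p_L, theta, c=17.2):
    """Particle-bound fraction phi from the Junge-Pankow adsorption model.

    p_L   : sub-cooled liquid vapour pressure of the compound [Pa]
    theta : aerosol surface area per volume of air [cm^2 aerosol / cm^3 air]
    c     : Junge constant [Pa cm]; 17.2 is the conventional default
    """
    return c * theta / (p_L + c * theta)

def harner_bidleman_fraction(log_koa, f_om, tsp):
    """Particle-bound fraction from the Harner-Bidleman OM-absorption model.

    log_koa : log10 octanol-air partition coefficient (dimensionless)
    f_om    : organic-matter mass fraction of the aerosol
    tsp     : total suspended particle concentration [ug m^-3]
    """
    log_kp = log_koa + math.log10(f_om) - 11.91   # Kp in m^3 ug^-1
    kp = 10.0 ** log_kp
    return kp * tsp / (1.0 + kp * tsp)

# Example with benzo[a]pyrene-like magnitudes (illustrative only)
print(junge_pankow_fraction(p_L=7e-5, theta=1.1e-5))
print(harner_bidleman_fraction(log_koa=11.5, f_om=0.2, tsp=25.0))
```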
NASA Astrophysics Data System (ADS)
Chen, Y.; Liu, X.; Mankoff, K. D.; Gulley, J. D.
2016-12-01
The surfaces of subglacial conduits are very complex, coupling multi-scale roughness, large sinuosity, and cross-sectional variations together. Those features significantly affect the friction law and drainage efficiency inside the conduit by altering velocity and pressure distributions, thus exerting considerable influence on the dynamic development of the conduit. Parameterizing the above surface features is a first step towards understanding their hydraulic influences. A Matlab package is developed to extract the roughness field, the conduit centerline, and associated area and curvature data from the conduit surface, acquired from 3D scanning. By using those data, the characteristic vertical and horizontal roughness scales are then estimated based on the structure functions. The centerline sinuosities, defined through three concepts, i.e., the traditional definition of a fluvial river, entropy-based sinuosity, and curvature-based sinuosity, are also calculated and compared. The cross-sectional area and equivalent circular diameter along the centerline are also calculated. Among those features, the roughness is especially important due to its pivotal role in determining the wall friction, and thus an estimation of the equivalent roughness height is of great importance. To achieve such a goal, the original conduit is first simplified into a straight smooth pipe with the same volume and centerline length, and the roughness field obtained above is then reconstructed onto the simplified pipe. An OpenFOAM-based large-eddy simulation (LES) is then performed on the reconstructed pipe. Considering that the Reynolds number is of the order of 10^6, and the relative roughness is larger than 5% for 60% of the conduit, we test the validity of the resistance law for completely rough pipes. The friction factor is calculated based on the pressure drop and mean velocity in the simulation. Combining these two results, the equivalent roughness height can be calculated. However, whether this assumption is applicable to the current case, i.e., high relative roughness, remains an open question. Two other roughness heights, i.e., the vertical roughness scale based on structure functions and the viscous sublayer thickness determined from the wall boundary layer, are also calculated and compared with the equivalent roughness height.
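A minimal sketch of the friction-factor step described above might look as follows, assuming the Darcy-Weisbach relation and the fully rough (Reynolds-number-independent) pipe resistance law; the numerical inputs are placeholders, not values from the conduit scans.

```python
# Sketch of the post-processing step: recover an equivalent sand-grain
# roughness height k_s from the simulated pressure drop and mean velocity,
# assuming the fully rough resistance law. Numbers are placeholders.
import math

def darcy_friction_factor(dp, L, D, rho, U):
    """Darcy friction factor from the pressure drop over a pipe of length L."""
    return 2.0 * dp * D / (L * rho * U**2)

def equivalent_roughness(f, D):
    """Invert the fully rough pipe law 1/sqrt(f) = -2 log10(k_s / (3.7 D))."""
    return 3.7 * D * 10.0 ** (-1.0 / (2.0 * math.sqrt(f)))

f = darcy_friction_factor(dp=1.2e3, L=10.0, D=1.5, rho=1000.0, U=1.0)
print(f, equivalent_roughness(f, D=1.5))
```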
NASA Astrophysics Data System (ADS)
Porritt, R. W.; Becker, T. W.; Auer, L.; Boschi, L.
2017-12-01
We present a whole-mantle, variable resolution, shear-wave tomography model based on newly available and existing seismological datasets including regional body-wave delay times and multi-mode Rayleigh and Love wave phase delays. Our body wave dataset includes 160,000 S wave delays used in the DNA13 regional tomographic model focused on the western and central US, 86,000 S and SKS delays measured on stations in western South America (Porritt et al., in prep), and 3,900,000 S+ phases measured by correlation between data observed at stations in the IRIS global networks (IU, II) and stations in the contiguous US, against synthetic data generated with IRIS Syngine. The surface wave dataset includes fundamental mode and overtone Rayleigh wave data from Schaeffer and Lebedev (2014), ambient noise derived Rayleigh wave and Love wave measurements from Ekstrom (2013), newly computed fundamental mode ambient noise Rayleigh wave phase delays for the contiguous US up to July 2017, and other, previously published, measurements. These datasets, along with a data-adaptive parameterization utilized for the SAVANI model (Auer et al., 2014), should allow significantly finer-scale imaging than previous global models, rivaling that of regional-scale approaches, under the USArray footprint in the contiguous US, while seamlessly integrating into a global model. We parameterize the model for both vertically (vSV) and horizontally (vSH) polarized shear velocities by accounting for the different sensitivities of the various phases and wave types. The resulting, radially anisotropic model should allow for a range of new geodynamic analyses, including estimates of mantle flow induced topography or seismic anisotropy, without generating artifacts due to edge effects, or requiring assumptions about the structure of the region outside the well resolved model space. Our model shows a number of features, including indications of the effects of edge-driven convection in the Cordillera and along the eastern margin and larger-scale convection due to the subduction of the Farallon slab and along the edge of the Laurentia cratonic margin.
NASA Astrophysics Data System (ADS)
Hailegeorgis, Teklu T.; Alfredsen, Knut; Abdella, Yisak S.; Kolberg, Sjur
2015-03-01
Identification of proper parameterizations of spatial heterogeneity is required for precipitation-runoff models. However, relevant studies specifically aimed at hourly runoff simulation in boreal mountainous catchments are not common. We conducted calibration and evaluation of hourly runoff simulation in a boreal mountainous watershed based on six different parameterizations of the spatial heterogeneity of subsurface storage capacity for a semi-distributed (subcatchments hereafter called elements) and distributed (1 × 1 km2 grid) setup. We evaluated representation of element-to-element, grid-to-grid, and probabilistic subcatchment/subbasin, subelement and subgrid heterogeneities. The parameterization cases satisfactorily reproduced the streamflow hydrographs, with Nash-Sutcliffe efficiency values for the calibration and validation periods up to 0.84 and 0.86 respectively, and similarly for the log-transformed streamflow up to 0.85 and 0.90. The parameterizations reproduced the flow duration curves, but predictive reliability in terms of quantile-quantile (Q-Q) plots indicated marked over- and under-predictions. The simple and parsimonious parameterizations with no subelement or no subgrid heterogeneities provided equivalent simulation performance compared to the more complex cases. The results indicated that (i) identification of parameterizations requires measurements from denser precipitation stations than what is required for acceptable calibration of the precipitation-streamflow relationships, (ii) there are challenges in the identification of parameterizations based only on calibration to catchment integrated streamflow observations and (iii) there is a potential preference for the simple and parsimonious parameterizations for operational forecasting, contingent on their equivalent simulation performance for the available input data. In addition, the effects of non-identifiability of parameters (interactions and equifinality) can contribute to the non-identifiability of the parameterizations.
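As a reference for the skill scores quoted above, a minimal computation of the Nash-Sutcliffe efficiency on raw and log-transformed streamflow could be written as below; the arrays are illustrative placeholders, not catchment data.

```python
# Minimal sketch of the two skill scores used above: Nash-Sutcliffe efficiency
# on raw and log-transformed streamflow. Arrays are illustrative placeholders.
import numpy as np

def nse(obs, sim):
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def log_nse(obs, sim, eps=1e-6):
    # eps guards against zero flows before the log transform
    return nse(np.log(np.asarray(obs) + eps), np.log(np.asarray(sim) + eps))

q_obs = [1.2, 3.4, 8.9, 5.1, 2.0]
q_sim = [1.0, 3.8, 8.1, 5.6, 2.3]
print(nse(q_obs, q_sim), log_nse(q_obs, q_sim))
```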
NASA Astrophysics Data System (ADS)
Guo, Yamin; Cheng, Jie; Liang, Shunlin
2018-02-01
Surface downward longwave radiation (SDLR) is a key variable for calculating the earth's surface radiation budget. In this study, we evaluated seven widely used clear-sky parameterization methods using ground measurements collected from 71 globally distributed fluxnet sites. The Bayesian model averaging (BMA) method was also introduced to obtain a multi-model ensemble estimate. As a whole, the parameterization method of Carmona et al. (2014) performs the best, with an average BIAS, RMSE, and R^2 of -0.11 W/m2, 20.35 W/m2, and 0.92, respectively, followed by the parameterization methods of Idso (1981), Prata (Q J R Meteorol Soc 122:1127-1151, 1996), Brunt (Q J R Meteorol Soc 58:389-420, 1932), and Brutsaert (Water Resour Res 11:742-744, 1975). The accuracy of the BMA is close to that of the parameterization method of Carmona et al. (2014) and comparable to that of the parameterization method of Idso (1981). The advantage of the BMA is that it achieves balanced results compared to the integrated single parameterization methods. To fully assess the performance of the parameterization methods, the effects of climate type, land cover, and surface elevation were also investigated. The five parameterization methods and the BMA all failed over land with the tropical climate type, with high water vapor, and had poor results over forest, wetland, and ice. These methods achieved better results over desert, bare land, cropland, and grass and had acceptable accuracies for sites at different elevations, except for the parameterization method of Carmona et al. (2014) over high elevation sites. Thus, a method that can be successfully applied everywhere does not exist.
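To make the compared schemes concrete, the sketch below evaluates Brunt-type and Brutsaert-type clear-sky SDLR in their commonly quoted forms; the coefficients (and hence the numbers produced) are textbook defaults, which may differ from the calibrations evaluated in the study.

```python
# Sketch of two commonly used clear-sky SDLR parameterizations (illustrative).
import numpy as np

SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant [W m^-2 K^-4]

def sdlr_brunt(t_air_K, e_hPa, a=0.52, b=0.065):
    """Brunt-type: emissivity = a + b * sqrt(vapour pressure in hPa)."""
    eps = a + b * np.sqrt(e_hPa)
    return eps * SIGMA * t_air_K ** 4

def sdlr_brutsaert(t_air_K, e_hPa):
    """Brutsaert (1975): emissivity = 1.24 * (e/T)^(1/7), e in hPa, T in K."""
    eps = 1.24 * (e_hPa / t_air_K) ** (1.0 / 7.0)
    return eps * SIGMA * t_air_K ** 4

print(sdlr_brunt(288.0, 10.0), sdlr_brutsaert(288.0, 10.0))
```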
Multi-Scale Validation of a Nanodiamond Drug Delivery System and Multi-Scale Engineering Education
ERIC Educational Resources Information Center
Schwalbe, Michelle Kristin
2010-01-01
This dissertation has two primary concerns: (i) evaluating the uncertainty and prediction capabilities of a nanodiamond drug delivery model using Bayesian calibration and bias correction, and (ii) determining conceptual difficulties of multi-scale analysis from an engineering education perspective. A Bayesian uncertainty quantification scheme…
A weak Galerkin generalized multiscale finite element method
Mu, Lin; Wang, Junping; Ye, Xiu
2016-03-31
In this study, we propose a general framework for the weak Galerkin generalized multiscale (WG-GMS) finite element method for elliptic problems with rapidly oscillating or high contrast coefficients. This general WG-GMS method features high-order accuracy on general meshes and can work with multiscale bases derived by different numerical schemes. A special case is studied under this WG-GMS framework in which the multiscale basis functions are obtained by solving local problems with the weak Galerkin finite element method. Convergence analysis and numerical experiments are obtained for the special case.
Multi-Scale Scattering Transform in Music Similarity Measuring
NASA Astrophysics Data System (ADS)
Wang, Ruobai
The scattering transform is a Mel-frequency-spectrum-based, time-deformation-stable method that can be used in evaluating music similarity. Compared with dynamic time warping, it has better performance in detecting similar audio signals under local time-frequency deformation. Multi-scale scattering combines scattering transforms of different window lengths. This paper argues that the multi-scale scattering transform is a good alternative to dynamic time warping in music similarity measurement. We tested the performance of the multi-scale scattering transform against other popular methods, using data designed to represent different conditions.
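Since the abstract positions the scattering transform against dynamic time warping, a generic textbook DTW distance between two frame-wise feature sequences is sketched below for reference; it is not the comparison code used in the paper, and the feature dimensions are arbitrary.

```python
# Generic dynamic time warping distance between two feature sequences
# (e.g., per-frame spectral vectors). Illustrative baseline only.
import numpy as np

def dtw_distance(x, y):
    """x, y: arrays of shape (n_frames, n_features). Returns the DTW cost."""
    x, y = np.atleast_2d(x), np.atleast_2d(y)
    n, m = len(x), len(y)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(x[i - 1] - y[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return cost[n, m]

a = np.random.rand(40, 13)   # e.g., 40 frames of 13 MFCC-like features
b = np.random.rand(55, 13)
print(dtw_distance(a, b))
```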
NASA Astrophysics Data System (ADS)
Scukins, A.; Nerukh, D.; Pavlov, E.; Karabasov, S.; Markesteijn, A.
2015-09-01
A multiscale Molecular Dynamics/Hydrodynamics implementation of the 2D Mercedes Benz (MB or BN2D) [1] water model is developed and investigated. The concept and the governing equations of multiscale coupling together with the results of the two-way coupling implementation are reported. The sensitivity of the multiscale model for obtaining macroscopic and microscopic parameters of the system, such as macroscopic density and velocity fluctuations, radial distribution and velocity autocorrelation functions of MB particles, is evaluated. Critical issues for extending the current model to large systems are discussed.
Single Image Super-Resolution Based on Multi-Scale Competitive Convolutional Neural Network
Qu, Xiaobo; He, Yifan
2018-01-01
Deep convolutional neural networks (CNNs) are successful in single-image super-resolution. Traditional CNNs are limited in their ability to exploit multi-scale contextual information for image reconstruction due to the fixed convolutional kernel in their building modules. To restore various scales of image details, we enhance the multi-scale inference capability of CNNs by introducing competition among multi-scale convolutional filters, and build up a shallow network under limited computational resources. The proposed network has the following two advantages: (1) the multi-scale convolutional kernel provides the multi-context for image super-resolution, and (2) the maximum competitive strategy adaptively chooses the optimal scale of information for image reconstruction. Our experimental results on image super-resolution show that the performance of the proposed network outperforms the state-of-the-art methods. PMID:29509666
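A minimal sketch of the competitive multi-scale idea, under assumed (not the authors') layer sizes, is a set of parallel convolutions with different kernel sizes whose outputs compete through an element-wise maximum:

```python
# Sketch of a multi-scale competitive convolution block (assumed architecture).
import torch
import torch.nn as nn

class MultiScaleCompetitiveBlock(nn.Module):
    def __init__(self, in_ch, out_ch, kernel_sizes=(3, 5, 7)):
        super().__init__()
        # "same" padding keeps the spatial size identical across branches
        self.branches = nn.ModuleList(
            [nn.Conv2d(in_ch, out_ch, k, padding=k // 2) for k in kernel_sizes]
        )

    def forward(self, x):
        feats = torch.stack([branch(x) for branch in self.branches], dim=0)
        # Maximum competition: each output activation keeps the strongest scale
        return feats.max(dim=0).values

block = MultiScaleCompetitiveBlock(in_ch=1, out_ch=16)
y = block(torch.randn(2, 1, 32, 32))   # e.g., two 32x32 grayscale patches
print(y.shape)                          # torch.Size([2, 16, 32, 32])
```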
NASA Astrophysics Data System (ADS)
Liu, Daiming; Wang, Qingkang; Wang, Qing
2018-05-01
Surface texturing is of great significance for light trapping in solar cells. Herein, a multiscale texture, consisting of microscale pyramids and a nanoscale porous arrangement, was fabricated on crystalline Si by KOH etching and Ag-assisted HF etching processes and subsequently replicated onto glass with high fidelity by a UV nanoimprint method. Light trapping of the multiscale texture was studied by spectral (reflectance, haze ratio) characterizations. Results reveal that the multiscale texture provides broadband reflection reduction, enhanced light scattering and an additional self-cleaning behavior. Compared with the bare cell, the multiscale textured micromorph cell achieves a 4% relative increase in power conversion efficiency. This surface texturing route paves a promising way for developing low-cost, large-scale and high-efficiency solar applications.
A high-order multiscale finite-element method for time-domain acoustic-wave modeling
NASA Astrophysics Data System (ADS)
Gao, Kai; Fu, Shubin; Chung, Eric T.
2018-05-01
Accurate and efficient wave equation modeling is vital for many applications in fields such as acoustics, electromagnetics, and seismology. However, solving the wave equation in large-scale and highly heterogeneous models is usually computationally expensive because the computational cost is directly proportional to the number of grid points in the model. We develop a novel high-order multiscale finite-element method to reduce the computational cost of time-domain acoustic-wave equation numerical modeling by solving the wave equation on a coarse mesh based on the multiscale finite-element theory. In contrast to existing multiscale finite-element methods that use only first-order multiscale basis functions, our new method constructs high-order multiscale basis functions from local elliptic problems which are closely related to the Gauss-Lobatto-Legendre quadrature points in a coarse element. Essentially, these basis functions are not only determined by the order of Legendre polynomials, but also by local medium properties, and therefore can effectively convey the fine-scale information to the coarse-scale solution with high-order accuracy. Numerical tests show that our method can significantly reduce the computation time while maintaining high accuracy for wave equation modeling in highly heterogeneous media by solving the corresponding discrete system only on the coarse mesh with the new high-order multiscale basis functions.
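For contrast with the high-order construction described above, the sketch below builds a standard first-order multiscale basis function in 1-D by solving the local problem -(a(x)u')' = 0 on one coarse element with nodal boundary values; it is a generic MsFEM baseline, not the authors' method, and the oscillatory coefficient is an arbitrary example.

```python
# Sketch of a first-order multiscale basis function on one coarse element.
import numpy as np

def multiscale_basis_1d(a_fine):
    """a_fine: coefficient on n uniform fine cells of one coarse element.
    Returns the basis values at the n+1 fine nodes (the cell width cancels
    because the local right-hand side is zero)."""
    n = len(a_fine)
    A = np.zeros((n + 1, n + 1))
    b = np.zeros(n + 1)
    A[0, 0] = A[n, n] = 1.0
    b[0] = 1.0                      # u = 1 at the left coarse node, 0 at the right
    for i in range(1, n):           # interior rows of -(a u')' = 0
        A[i, i - 1] = -a_fine[i - 1]
        A[i, i] = a_fine[i - 1] + a_fine[i]
        A[i, i + 1] = -a_fine[i]
    return np.linalg.solve(A, b)

x_cells = (np.arange(64) + 0.5) / 64                 # fine-cell centres on [0, 1]
a = 1.0 + 0.9 * np.sin(40 * np.pi * x_cells)         # oscillatory coefficient
phi = multiscale_basis_1d(a)
print(phi[:5])   # deviates from the linear hat function because of a(x)
```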
Integrating Multiscale Modeling with Drug Effects for Cancer Treatment.
Li, Xiangfang L; Oduola, Wasiu O; Qian, Lijun; Dougherty, Edward R
2015-01-01
In this paper, we review multiscale modeling for cancer treatment with the incorporation of drug effects from an applied systems pharmacology perspective. Both classical pharmacology and systems biology are inherently quantitative; however, systems biology focuses more on networks and multifactorial controls over biological processes rather than on drugs and targets in isolation, whereas systems pharmacology has a strong focus on studying drugs with regard to the pharmacokinetic (PK) and pharmacodynamic (PD) relations accompanying drug interactions with multiscale physiology, as well as the prediction of dosage-exposure responses and economic potentials of drugs. Thus, multiscale methods are required to address the need for integrating models from the molecular level to the cellular, tissue, and organism levels. It is a common belief that tumorigenesis and tumor growth can be best understood and tackled by employing and integrating a multifaceted approach that includes in vivo and in vitro experiments, in silico models, multiscale tumor modeling, continuous/discrete modeling, agent-based modeling, and multiscale modeling with PK/PD drug effect inputs. We provide an example application of multiscale modeling employing a stochastic hybrid system for the colon cancer cell line HCT-116 with the application of the drug lapatinib. It is observed that the simulation results are similar to those observed from the setup of the wet-lab experiments at the Translational Genomics Research Institute.
Multiscale decoding for reliable brain-machine interface performance over time.
Han-Lin Hsieh; Wong, Yan T; Pesaran, Bijan; Shanechi, Maryam M
2017-07-01
Recordings from invasive implants can degrade over time, resulting in a loss of spiking activity for some electrodes. For brain-machine interfaces (BMI), such a signal degradation lowers control performance. Achieving reliable performance over time is critical for BMI clinical viability. One approach to improve BMI longevity is to simultaneously use spikes and other recording modalities such as local field potentials (LFP), which are more robust to signal degradation over time. We have developed a multiscale decoder that can simultaneously model the different statistical profiles of multi-scale spike/LFP activity (discrete spikes vs. continuous LFP). This decoder can also run at multiple time-scales (millisecond for spikes vs. tens of milliseconds for LFP). Here, we validate the multiscale decoder for estimating the movement of 7 major upper-arm joint angles in a non-human primate (NHP) during a 3D reach-to-grasp task. The multiscale decoder uses motor cortical spike/LFP recordings as its input. We show that the multiscale decoder can improve decoding accuracy by adding information from LFP to spikes, while running at the fast millisecond time-scale of the spiking activity. Moreover, this improvement is achieved using relatively few LFP channels, demonstrating the robustness of the approach. These results suggest that using multiscale decoders has the potential to improve the reliability and longevity of BMIs.
NASA Technical Reports Server (NTRS)
Chao, Winston C.
2015-01-01
The excessive precipitation over steep and high mountains (EPSM) in GCMs and meso-scale models is due to a lack of parameterization of the thermal effects of the subgrid-scale topographic variation. These thermal effects drive subgrid-scale heated slope induced vertical circulations (SHVC). SHVC provide a ventilation effect of removing heat from the boundary layer of resolvable-scale mountain slopes and depositing it higher up. The lack of SHVC parameterization is the cause of EPSM. The author has previously proposed a method of parameterizing SHVC, here termed SHVC.1. Although this has been successful in avoiding EPSM, the drawback of SHVC.1 is that it suppresses convective type precipitation in the regions where it is applied. In this article we propose a new method of parameterizing SHVC, here termed SHVC.2. In SHVC.2 the potential temperature and mixing ratio of the boundary layer are changed when used as input to the cumulus parameterization scheme over mountainous regions. This allows the cumulus parameterization to assume the additional function of SHVC parameterization. SHVC.2 has been tested in NASA Goddard's GEOS-5 GCM. It achieves the primary goal of avoiding EPSM while also avoiding the suppression of convective-type precipitation in regions where it is applied.
NASA Astrophysics Data System (ADS)
Perler, D.; Geiger, A.; Rothacher, M.
2011-12-01
Water vapor is involved in many atmospheric processes and is therefore a crucial quantity in numerical weather prediction (NWP). Recent flood events in Switzerland have pointed out several deficiencies in planning and prediction methods used for flood risk mitigation. Investigations have shown that one of the limiting factors to forecast such events with NWP models is the insufficient knowledge of the water vapor distribution in the atmosphere. Global Navigation Satellite System (GNSS) ground-based tomography is a technique to monitor the 4D distribution of water vapor in the troposphere and has the potential to considerably improve the initial water vapor field used in NWP. We developed a GNSS tomography software called AWATOS-2 which is based on the Kalman filter technique and provides different parameterizations of the tropospheric wet refractivity field (Perler et al., 2010; Perler et al., 2011). The software can be used for the assimilation of different observations such as GNSS zero-differences, GNSS double-differences and any kind of point observations (e.g. balloons, aircraft). In this talk, we present the results of a long-term study where GPS double-difference delays have been processed. The tomographic solutions have been investigated in view of their assimilation into local NWP models. The data set comprises observations from 46 GPS stations collected during 1 year. The core area of the investigation is located in Central Europe. We analyzed the performance of different voxel parameterizations used in the tomographic reconstruction of the troposphere and developed a new bias correction model which minimizes systematic differences. The correction model reduces the root-mean-square error (RMSE) with respect to the NWP model from 4.6 ppm to 3.0 ppm. After bias correction, high-elevation stations still show high RMSEs. In the presentation, we will discuss the treatment of such stations in terms of assimilation into NWP models and will show how sophisticated voxel parameterizations improve the accuracy. Perler, D.; Hurter, F.; Brockmann, E.; Leuenberger, D.; Ruffieux, D.; Geiger, A. and Rothacher, M. (2010). In Proceedings of the 7th Management Committee (MC7) and Working Group (WG) Meeting, Cologne (Germany), 8 pp. Perler, D.; Geiger, A. and Hurter, F. (2011). 4D GPS water vapor tomography: new parameterized approaches. J. Geodesy 85(8), pp. 539-550, DOI 10.1007/s00190-011-0454-2.
NASA Astrophysics Data System (ADS)
Yadav, S.; Bamotra, S.
2017-12-01
A comprehensive study was done on the mass, composition and sources of fine aerosol associated non-polar organics in Jammu, an urban location in the foothill region of the North-Western Himalayas. Systematic multi-scale sampling was done from October 2015 to February 2017 to collect fine aerosol (PM2.5) samples every week using a Fine Particulate Sampler (Envirotech, APM 550 MFC), which operates at a constant flow rate of 16.7 L/minute. The non-polar organic compounds, comprising n-alkanes, PAHs, isoprenoid hydrocarbons and nicotine, were analyzed using a thermal desorption gas chromatography-mass spectrometry (TD-GC-MS) method. The n-alkane associated diagnostic parameters include the mass-weighted averaged chain length (ACL), the carbon number with maximum concentration (Cmax), petroleum-derived n-alkanes (PNA%), the carbon preference index (CPI) and the percentage contribution of wax n-alkanes from plants (WNA%). These diagnostic parameters, along with PAH-based molecular ratios, were used to understand the diurnal and seasonal variations in different biogenic and petrogenic source contributions in this part of the Himalayas. The presence of source-specific tracers like levoglucosan, retene, isoquinoline and nicotine also corroborated our findings. Further, fine-aerosol-associated black carbon, an important marker for burning, was determined using an optical transmissometer. Significant multiscale variations were found in the fine aerosol load, the associated non-polar organics, source tracers/contributions and black carbon.
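The n-alkane diagnostics named above have simple arithmetic definitions; one commonly used set of forms is sketched below, with the caveat that exact definitions (and the carbon-number ranges) vary between studies and the example concentrations are synthetic.

```python
# Sketch of common n-alkane diagnostic parameters (definitions vary by study).
import numpy as np

def diagnostics(conc):
    """conc: dict {carbon_number: concentration} for the n-alkane homologues."""
    n = np.array(sorted(conc))
    c = np.array([conc[i] for i in n], float)
    odd, even = (n % 2 == 1), (n % 2 == 0)

    cmax = int(n[np.argmax(c)])                              # carbon number at the maximum
    acl = float(np.sum(n[odd] * c[odd]) / np.sum(c[odd]))    # averaged chain length (odd homologues)
    cpi = float(np.sum(c[odd]) / np.sum(c[even]))            # simple odd/even preference index

    # Wax n-alkanes: excess of each odd homologue over its even neighbours
    wax = 0.0
    for i in n[odd]:
        if (i - 1) in conc and (i + 1) in conc:
            wax += max(conc[i] - 0.5 * (conc[i - 1] + conc[i + 1]), 0.0)
    wna_pct = 100.0 * wax / c.sum()
    return cmax, acl, cpi, wna_pct

example = {i: v for i, v in zip(range(20, 34), np.random.rand(14) + 0.1)}
print(diagnostics(example))
```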
van der Sluis, Olaf; Vossen, Bart; Geers, Marc
2018-01-01
Metal-elastomer interfacial systems, often encountered in stretchable electronics, demonstrate remarkably high interface fracture toughness values. Evidently, a large gap exists between the rather small adhesion energy levels at the microscopic scale (‘intrinsic adhesion’) and the large measured macroscopic work-of-separation. This energy gap is closed here by unravelling the underlying dissipative mechanisms through a systematic numerical/experimental multi-scale approach. This self-contained contribution collects and reviews previously published results and addresses the remaining open questions by providing new and independent results obtained from an alternative experimental set-up. In particular, the experimental studies on Cu-PDMS (Poly(dimethylsiloxane)) samples conclusively reveal the essential role of fibrillation mechanisms at the micrometre scale during the metal-elastomer delamination process. The micro-scale numerical analyses on single and multiple fibrils show that the dynamic release of the stored elastic energy by multiple fibril fracture, including the interaction with the adjacent deforming bulk PDMS and its highly nonlinear behaviour, provides a mechanistic understanding of the high work-of-separation. An experimentally validated quantitative relation between the macroscopic work-of-separation and peel front height is established from the simulation results. Finally, it is shown that a micro-mechanically motivated shape of the traction-separation law in cohesive zone models is essential to describe the delamination process in fibrillating metal-elastomer systems in a physically meaningful way. PMID:29393908
Sperry, Megan M; Kartha, Sonia; Granquist, Eric J; Winkelstein, Beth A
2018-07-01
Inter-subject networks are used to model correlations between brain regions and are particularly useful for metabolic imaging techniques, like 2-deoxy-2-[18F]fluoro-D-glucose (FDG) positron emission tomography (PET). Since FDG PET typically produces a single image, correlations cannot be calculated over time. Little focus has been placed on the basic properties of inter-subject networks and whether they are affected by group size and image normalization. FDG PET images were acquired from rats (n = 18), normalized by whole brain, visual cortex, or cerebellar FDG uptake, and used to construct correlation matrices. Group size effects on network stability were investigated by systematically adding rats and evaluating local network connectivity (node strength and clustering coefficient). Modularity and community structure were also evaluated in the differently normalized networks to assess meso-scale network relationships. Local network properties are stable regardless of normalization region for groups of at least 10. Whole brain-normalized networks are more modular than visual cortex- or cerebellum-normalized networks (p < 0.00001); however, community structure is similar at network resolutions where modularity differs most between brain and randomized networks. Hierarchical analysis reveals consistent modules at different scales and clustering of spatially-proximate brain regions. Findings suggest inter-subject FDG PET networks are stable for reasonable group sizes and exhibit multi-scale modularity.
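A bare-bones version of the inter-subject network construction and the node-strength metric discussed above might look like the following; the group size, normalization choice and correlation threshold are arbitrary placeholders.

```python
# Sketch of an inter-subject correlation network and node strength.
import numpy as np

rng = np.random.default_rng(0)
n_subjects, n_regions = 18, 30
uptake = rng.random((n_subjects, n_regions))          # FDG uptake per region

# Normalize each subject by a reference value (e.g., whole-brain mean uptake)
uptake_norm = uptake / uptake.mean(axis=1, keepdims=True)

# Inter-subject network: correlate region pairs across subjects
corr = np.corrcoef(uptake_norm.T)                     # (n_regions, n_regions)
np.fill_diagonal(corr, 0.0)

# Keep only strong positive edges, then compute node strength per region
adj = np.where(corr > 0.3, corr, 0.0)
node_strength = adj.sum(axis=1)
print(node_strength[:5])
```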
A multiphysics and multiscale model for low frequency electromagnetic direct-chill casting
NASA Astrophysics Data System (ADS)
Košnik, N.; Guštin, A. Z.; Mavrič, B.; Šarler, B.
2016-03-01
Simulation and control of macrosegregation, deformation and grain size in low frequency electromagnetic (EM) direct-chill casting (LFEMC) is important for downstream processing. Accordingly, a multiphysics and multiscale model is developed for the solution of the Lorentz force, temperature, velocity, concentration, deformation and grain structure of LFEMC-processed aluminum alloys, with a focus on axisymmetric billets. The mixture equations with the lever rule, a linearized phase diagram, and a stationary thermoelastic solid phase are assumed, together with the EM induction equation for the field imposed by the coil. An explicit diffuse approximate meshless solution procedure [1] is used for solving the EM field, and the explicit local radial basis function collocation method [2] is used for solving the coupled transport phenomena and thermomechanics fields. Pressure-velocity coupling is performed by the fractional step method [3]. The point automata method with a modified KGT model is used to estimate the grain structure [4] in a post-processing mode. Thermal, mechanical, EM and grain structure outcomes of the model are demonstrated. The complicated influences of the process parameters, including the intensity and frequency of the electromagnetic field, can be systematically investigated with the model. The meshless solution framework, with the implemented simplest physical models, will be further extended by including more sophisticated microsegregation and grain structure models, as well as a more realistic solid and solid-liquid phase rheology.
NASA Technical Reports Server (NTRS)
Coffey, V. N.; Chandler, M. O.
2017-01-01
The scientific target of NASA's Magnetospheric Multiscale (MMS) mission is to study the fundamentally important phenomenon of magnetic reconnection. Theoretical models of this process predict a small size, on the order of a hundred kilometers, for the ion diffusion region where ions are demagnetized at the dayside magnetopause. This region may typically sweep over the spacecraft at relatively high speeds of 50 km/s, requiring the fast plasma investigation (FPI) instrument suite to have an extremely high time resolution for measurements of the 3D particle distribution functions. As part of the FPI on MMS, the 16 dual ion spectrometers (DIS) will provide fast (150 ms) 3D ion velocity distributions, from 10 to 30,000 eV/q, by combining the measurements from four dual spectrometers on each of the four MMS spacecraft. For any multispacecraft mission, the response uniformity among the spectrometer set assumes an enhanced importance. Due to these demanding instrument requirements and the effort of calibrating more than 32 sensors (16 × 2) within a tight schedule, a highly systematic and precise calibration was required for measurement repeatability. To illustrate how this challenge was met, a brief overview of the FPI DIS is presented, with a detailed discussion of the calibration approach and its implementation. Finally, a discussion of the DIS performance results, their unit-to-unit variation, and the lessons learned from this calibration effort is presented.
Evaluation of scale-aware subgrid mesoscale eddy models in a global eddy-rich model
NASA Astrophysics Data System (ADS)
Pearson, Brodie; Fox-Kemper, Baylor; Bachman, Scott; Bryan, Frank
2017-07-01
Two parameterizations for horizontal mixing of momentum and tracers by subgrid mesoscale eddies are implemented in a high-resolution global ocean model. These parameterizations follow on the techniques of large eddy simulation (LES). The theory underlying one parameterization (2D Leith due to Leith, 1996) is that of enstrophy cascades in two-dimensional turbulence, while the other (QG Leith) is designed for potential enstrophy cascades in quasi-geostrophic turbulence. Simulations using each of these parameterizations are compared with a control simulation using standard biharmonic horizontal mixing. Simulations using the 2D Leith and QG Leith parameterizations are more realistic than those using biharmonic mixing. In particular, the 2D Leith and QG Leith simulations have more energy in resolved mesoscale eddies, have a spectral slope more consistent with turbulence theory (an inertial enstrophy or potential enstrophy cascade), have bottom drag and vertical viscosity as the primary sinks of energy instead of lateral friction, and have isoneutral parameterized mesoscale tracer transport. The parameterization choice also affects mass transports, but the impact varies regionally in magnitude and sign.
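The 2D Leith closure referred to above sets the eddy viscosity from the grid scale cubed times the magnitude of the vorticity gradient; a minimal sketch, with an assumed nondimensional coefficient and a toy vorticity field, is given below.

```python
# Sketch of a 2D Leith eddy viscosity: nu = (coeff * grid_scale)^3 * |grad(zeta)|.
import numpy as np

def leith_viscosity_2d(vorticity, dx, dy, coeff=1.0):
    dzeta_dy, dzeta_dx = np.gradient(vorticity, dy, dx)
    grad_mag = np.hypot(dzeta_dx, dzeta_dy)
    grid_scale = np.sqrt(dx * dy)
    return (coeff * grid_scale) ** 3 * grad_mag

x = np.linspace(0, 2 * np.pi, 128)
X, Y = np.meshgrid(x, x)
zeta = np.sin(3 * X) * np.cos(4 * Y)                  # toy vorticity field
nu = leith_viscosity_2d(zeta, dx=x[1] - x[0], dy=x[1] - x[0])
print(nu.mean(), nu.max())
```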
Statistical field estimators for multiscale simulations.
Eapen, Jacob; Li, Ju; Yip, Sidney
2005-11-01
We present a systematic approach for generating smooth and accurate fields from particle simulation data using the notions of statistical inference. As an extension to a parametric representation based on the maximum likelihood technique previously developed for velocity and temperature fields, a nonparametric estimator based on the principle of maximum entropy is proposed for particle density and stress fields. Both estimators are applied to represent molecular dynamics data on shear-driven flow in an enclosure which exhibits a high degree of nonlinear characteristics. We show that the present density estimator is a significant improvement over ad hoc bin averaging and is also free of systematic boundary artifacts that appear in the method of smoothing kernel estimates. Similarly, the velocity fields generated by the maximum likelihood estimator do not show any edge effects that can be erroneously interpreted as slip at the wall. For low Reynolds numbers, the velocity fields and streamlines generated by the present estimator are benchmarked against Newtonian continuum calculations. For shear velocities that are a significant fraction of the thermal speed, we observe a form of shear localization that is induced by the confining boundary.
New Vistas in Chemical Product and Process Design.
Zhang, Lei; Babi, Deenesh K; Gani, Rafiqul
2016-06-07
Design of chemicals-based products is broadly classified into those that are process centered and those that are product centered. In this article, the designs of both classes of products are reviewed from a process systems point of view; developments related to the design of the chemical product, its corresponding process, and its integration are highlighted. Although significant advances have been made in the development of systematic model-based techniques for process design (also for optimization, operation, and control), much work is needed to reach the same level for product design. Timeline diagrams illustrating key contributions in product design, process design, and integrated product-process design are presented. The search for novel, innovative, and sustainable solutions must be matched by consideration of issues related to the multidisciplinary nature of problems, the lack of data needed for model development, solution strategies that incorporate multiscale options, and reliability versus predictive power. The need for an integrated model-experiment-based design approach is discussed together with benefits of employing a systematic computer-aided framework with built-in design templates.
Multi-scale calculation based on dual domain material point method combined with molecular dynamics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dhakal, Tilak Raj
This dissertation combines the dual domain material point method (DDMP) with molecular dynamics (MD) in an attempt to create a multi-scale numerical method to simulate materials undergoing large deformations with high strain rates. In these types of problems, the material is often in a thermodynamically non-equilibrium state, and conventional constitutive relations are often not available. In this method, the closure quantities, such as stress, at each material point are calculated from an MD simulation of a group of atoms surrounding the material point. Rather than restricting the multi-scale simulation to a small spatial region, such as phase interfaces or crack tips, this multi-scale method can be used to consider non-equilibrium thermodynamic effects in a macroscopic domain. This method takes advantage of the fact that the material points only communicate with mesh nodes, not among themselves; therefore MD simulations for material points can be performed independently in parallel. First, using a one-dimensional shock problem as an example, the numerical properties of the original material point method (MPM), the generalized interpolation material point (GIMP) method, the convected particle domain interpolation (CPDI) method, and the DDMP method are investigated. Among these methods, only the DDMP method converges as the number of particles increases, but the large number of particles needed for convergence makes the method very expensive, especially in our multi-scale method where we calculate stress in each material point using MD simulation. To improve DDMP, the sub-point method is introduced in this dissertation, which provides high quality numerical solutions with a very small number of particles. The multi-scale method based on DDMP with sub-points is successfully implemented for a one-dimensional problem of shock wave propagation in a cerium crystal. The MD simulation to calculate stress in each material point is performed on a GPU using CUDA to accelerate the computation. The numerical properties of the multi-scale method are investigated, and the results from this multi-scale calculation are compared with direct MD simulation results to demonstrate the feasibility of the method. Also, the multi-scale method is applied to a two-dimensional problem of jet formation around a copper notch under a strong impact.
This paper proposes a general procedure to link meteorological data with air quality models, such as U.S. EPA's Models-3 Community Multi-scale Air Quality (CMAQ) modeling system. CMAQ is intended to be used for studying multi-scale (urban and regional) and multi-pollutant (ozon...
Multiscale Models in the Biomechanics of Plant Growth
Fozard, John A.
2015-01-01
Plant growth occurs through the coordinated expansion of tightly adherent cells, driven by regulated softening of cell walls. It is an intrinsically multiscale process, with the integrated properties of multiple cell walls shaping the whole tissue. Multiscale models encode physical relationships to bring new understanding to plant physiology and development. PMID:25729061
Filter-based multiscale entropy analysis of complex physiological time series.
Xu, Yuesheng; Zhao, Liang
2013-08-01
Multiscale entropy (MSE) has been widely and successfully used in analyzing the complexity of physiological time series. We reinterpret the averaging process in MSE as filtering a time series by a filter of a piecewise constant type. From this viewpoint, we introduce filter-based multiscale entropy (FME), which filters a time series to generate multiple frequency components, and then we compute the blockwise entropy of the resulting components. By choosing filters adapted to the feature of a given time series, FME is able to better capture its multiscale information and to provide more flexibility for studying its complexity. Motivated by the heart rate turbulence theory, which suggests that the human heartbeat interval time series can be described in piecewise linear patterns, we propose piecewise linear filter multiscale entropy (PLFME) for the complexity analysis of the time series. Numerical results from PLFME are more robust to data of various lengths than those from MSE. The numerical performance of the adaptive piecewise constant filter multiscale entropy without prior information is comparable to that of PLFME, whose design takes prior information into account.
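For orientation, the standard MSE pipeline that the filter-based variant generalizes can be sketched as coarse-graining by non-overlapping averaging (the piecewise-constant filter mentioned above) followed by sample entropy at each scale; the parameters m and r below are common defaults, not necessarily those used by the authors.

```python
# Sketch of standard multiscale entropy: coarse-graining plus sample entropy.
import numpy as np

def sample_entropy(x, m=2, r_factor=0.15):
    x = np.asarray(x, float)
    r = r_factor * x.std()
    def count(mm):
        templ = np.array([x[i:i + mm] for i in range(len(x) - mm)])
        d = np.max(np.abs(templ[:, None, :] - templ[None, :, :]), axis=2)
        # matched template pairs, excluding self-comparisons
        return (np.sum(d <= r) - len(templ)) / 2.0
    B, A = count(m), count(m + 1)
    return np.inf if A == 0 or B == 0 else -np.log(A / B)

def multiscale_entropy(x, max_scale=5):
    out = []
    for tau in range(1, max_scale + 1):
        n = len(x) // tau
        coarse = np.asarray(x[:n * tau]).reshape(n, tau).mean(axis=1)
        out.append(sample_entropy(coarse))
    return out

rng = np.random.default_rng(1)
print(multiscale_entropy(rng.standard_normal(1000), max_scale=4))
```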
Multiscale Pressure-Balanced Structures in Three-dimensional Magnetohydrodynamic Turbulence
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Liping; Zhang, Lei; Feng, Xueshang
2017-02-10
Observations of solar wind turbulence indicate the existence of multiscale pressure-balanced structures (PBSs) in the solar wind. In this work, we conduct a numerical simulation to investigate multiscale PBSs and in particular their formation in compressive magnetohydrodynamic turbulence. By the use of the higher-order Godunov code Athena, a driven compressible turbulence with an imposed uniform guide field is simulated. The simulation results show that both the magnetic pressure and the thermal pressure exhibit a turbulent spectrum with a Kolmogorov-like power law, and that in many regions of the simulation domain they are anticorrelated. The computed wavelet cross-coherence spectra of the magnetic pressure and the thermal pressure, as well as their space series, indicate the existence of multiscale PBSs, with the small PBSs being embedded in the large ones. These multiscale PBSs are likely to be related to the highly oblique-propagating slow-mode waves, as the traced multiscale PBS is found to be traveling in a certain direction at a speed consistent with that predicted theoretically for a slow-mode wave propagating in the same direction.
Multi-scale signed envelope inversion
NASA Astrophysics Data System (ADS)
Chen, Guo-Xin; Wu, Ru-Shan; Wang, Yu-Qing; Chen, Sheng-Chang
2018-06-01
Envelope inversion based on the modulation signal model was proposed to reconstruct large-scale structures of underground media. To overcome the shortcomings of conventional envelope inversion, multi-scale envelope inversion was proposed, using a new envelope Fréchet derivative and a multi-scale inversion strategy to invert strong-contrast models. In multi-scale envelope inversion, amplitude demodulation is used to extract the low-frequency information from the envelope data. However, using only the amplitude demodulation method causes the loss of wavefield polarity information, thus increasing the possibility that the inversion yields multiple solutions. In this paper we propose a new demodulation method that retains both the amplitude and the polarity information of the envelope data. We then introduce this demodulation method into multi-scale envelope inversion and propose a new misfit functional: multi-scale signed envelope inversion. In the numerical tests, we applied the new inversion method to a salt-layer model and the SEG/EAGE 2-D salt model using a low-cut source (frequency components below 4 Hz were truncated). The results of the numerical tests demonstrate the effectiveness of this method.
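The polarity problem described above can be illustrated with a few lines of code: the conventional amplitude envelope obtained by Hilbert-transform demodulation is identical for a wavelet and its polarity-reversed copy, which is exactly the information the signed envelope is designed to keep. This is a generic demonstration, not the authors' demodulation operator.

```python
# Demonstration that amplitude demodulation discards wavefield polarity.
import numpy as np
from scipy.signal import hilbert

t = np.linspace(0, 1, 1000)
f0 = 25.0
ricker = (1 - 2 * (np.pi * f0 * (t - 0.5)) ** 2) * np.exp(-(np.pi * f0 * (t - 0.5)) ** 2)

env_pos = np.abs(hilbert(ricker))        # envelope of the wavelet
env_neg = np.abs(hilbert(-ricker))       # envelope of the polarity-flipped wavelet
print(np.allclose(env_pos, env_neg))     # True: the envelope alone drops polarity
```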
Search for subgrid scale parameterization by projection pursuit regression
NASA Technical Reports Server (NTRS)
Meneveau, C.; Lund, T. S.; Moin, Parviz
1992-01-01
The dependence of subgrid-scale stresses on variables of the resolved field is studied using direct numerical simulations of isotropic turbulence, homogeneous shear flow, and channel flow. The projection pursuit algorithm, a promising new regression tool for high-dimensional data, is used to systematically search through a large collection of resolved variables, such as components of the strain rate, vorticity, velocity gradients at neighboring grid points, etc. For the case of isotropic turbulence, the search algorithm recovers the linear dependence on the rate of strain (which is necessary to transfer energy to subgrid scales) but is unable to determine any other more complex relationship. For shear flows, however, new systematic relations beyond eddy viscosity are found. For the homogeneous shear flow, the results suggest that products of the mean rotation rate tensor with both the fluctuating strain rate and fluctuating rotation rate tensors are important quantities in parameterizing the subgrid-scale stresses. A model incorporating these terms is proposed. When evaluated with direct numerical simulation data, this model significantly increases the correlation between the modeled and exact stresses, as compared with the Smagorinsky model. In the case of channel flow, the stresses are found to correlate with products of the fluctuating strain and rotation rate tensors. The mean rates of rotation or strain do not appear to be important in this case, and the model determined for homogeneous shear flow does not perform well when tested with channel flow data. Many questions remain about the physical mechanisms underlying these findings, about possible Reynolds number dependence, and, given the low level of correlations, about their impact on modeling. Nevertheless, demonstration of the existence of causal relations between sgs stresses and large-scale characteristics of turbulent shear flows, in addition to those necessary for energy transfer, provides important insight into the relation between scales in turbulent flows.
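For reference alongside the regression analysis above, the Smagorinsky baseline against which the new model is compared can be sketched as follows for a 2-D resolved velocity field; the coefficient value and the test field are illustrative assumptions.

```python
# Sketch of the Smagorinsky closure: nu_t = (Cs*Delta)^2 |S|, tau = -2 nu_t S_ij.
import numpy as np

def smagorinsky_2d(u, v, dx, dy, cs=0.17):
    dudy, dudx = np.gradient(u, dy, dx)
    dvdy, dvdx = np.gradient(v, dy, dx)
    s11, s22 = dudx, dvdy
    s12 = 0.5 * (dudy + dvdx)
    s_mag = np.sqrt(2.0 * (s11**2 + s22**2 + 2.0 * s12**2))   # |S| = sqrt(2 Sij Sij)
    nu_t = (cs * np.sqrt(dx * dy)) ** 2 * s_mag
    tau11, tau22, tau12 = -2 * nu_t * s11, -2 * nu_t * s22, -2 * nu_t * s12
    return nu_t, (tau11, tau12, tau22)

x = np.linspace(0, 2 * np.pi, 64)
X, Y = np.meshgrid(x, x)
u, v = np.cos(X) * np.sin(Y), -np.sin(X) * np.cos(Y)          # Taylor-Green-like field
nu_t, tau = smagorinsky_2d(u, v, x[1] - x[0], x[1] - x[0])
print(nu_t.mean())
```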
Scheibe, Timothy D; Murphy, Ellyn M; Chen, Xingyuan; Rice, Amy K; Carroll, Kenneth C; Palmer, Bruce J; Tartakovsky, Alexandre M; Battiato, Ilenia; Wood, Brian D
2015-01-01
One of the most significant challenges faced by hydrogeologic modelers is the disparity between the spatial and temporal scales at which fundamental flow, transport, and reaction processes can best be understood and quantified (e.g., microscopic to pore scales and seconds to days) and at which practical model predictions are needed (e.g., plume to aquifer scales and years to centuries). While the multiscale nature of hydrogeologic problems is widely recognized, technological limitations in computation and characterization restrict most practical modeling efforts to fairly coarse representations of heterogeneous properties and processes. For some modern problems, the necessary level of simplification is such that model parameters may lose physical meaning and model predictive ability is questionable for any conditions other than those to which the model was calibrated. Recently, there has been broad interest across a wide range of scientific and engineering disciplines in simulation approaches that more rigorously account for the multiscale nature of systems of interest. In this article, we review a number of such approaches and propose a classification scheme for defining different types of multiscale simulation methods and those classes of problems to which they are most applicable. Our classification scheme is presented in terms of a flowchart (Multiscale Analysis Platform), and defines several different motifs of multiscale simulation. Within each motif, the member methods are reviewed and example applications are discussed. We focus attention on hybrid multiscale methods, in which two or more models with different physics described at fundamentally different scales are directly coupled within a single simulation. Very recently these methods have begun to be applied to groundwater flow and transport simulations, and we discuss these applications in the context of our classification scheme. As computational and characterization capabilities continue to improve, we envision that hybrid multiscale modeling will become more common and also a viable alternative to conventional single-scale models in the near future. © 2014, National Ground Water Association.
Evaluation of Surface Flux Parameterizations with Long-Term ARM Observations
Liu, Gang; Liu, Yangang; Endo, Satoshi
2013-02-01
Surface momentum, sensible heat, and latent heat fluxes are critical for atmospheric processes such as clouds and precipitation, and are parameterized in a variety of models ranging from cloud-resolving models to large-scale weather and climate models. However, direct evaluation of the parameterization schemes for these surface fluxes is rare due to limited observations. This study takes advantage of the long-term observations of surface fluxes collected at the Southern Great Plains site by the Department of Energy Atmospheric Radiation Measurement program to evaluate the six surface flux parameterization schemes commonly used in the Weather Research and Forecasting (WRF) model and three U.S. general circulation models (GCMs). The unprecedented 7-yr-long measurements by the eddy correlation (EC) and energy balance Bowen ratio (EBBR) methods permit statistical evaluation of all six parameterizations under a variety of stability conditions, diurnal cycles, and seasonal variations. The statistical analyses show that the momentum flux parameterization agrees best with the EC observations, followed by latent heat flux, sensible heat flux, and evaporation ratio/Bowen ratio. The overall performance of the parameterizations depends on atmospheric stability, being best under neutral stratification and deteriorating toward both more stable and more unstable conditions. Further diagnostic analysis reveals that in addition to the parameterization schemes themselves, the discrepancies between observed and parameterized sensible and latent heat fluxes may stem from inadequate use of input variables such as surface temperature, moisture availability, and roughness length. The results demonstrate the need for improving the land surface models and measurements of surface properties, which would permit the evaluation of full land surface models.
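As context for the schemes being compared, the sketch below shows only the generic bulk-transfer form that surface-flux parameterizations of this kind share; the exchange coefficients and their stability dependence are precisely what differs between the six evaluated schemes, so the constant coefficients used here are illustrative assumptions, not values from the study.

```python
# Minimal sketch of bulk surface-flux formulas (assumed constant exchange coefficients).
RHO = 1.2      # air density [kg m-3], assumed
CP = 1004.0    # specific heat of dry air [J kg-1 K-1]
LV = 2.5e6     # latent heat of vaporization [J kg-1]

def bulk_fluxes(wind, theta_sfc, theta_air, q_sfc, q_air,
                cd=1.3e-3, ch=1.2e-3, ce=1.2e-3):
    """Return (momentum, sensible heat, latent heat) fluxes from bulk formulas."""
    tau = RHO * cd * wind**2                                  # momentum flux [N m-2]
    shf = RHO * CP * ch * wind * (theta_sfc - theta_air)      # sensible heat [W m-2]
    lhf = RHO * LV * ce * wind * (q_sfc - q_air)              # latent heat [W m-2]
    return tau, shf, lhf

print(bulk_fluxes(5.0, 300.0, 298.0, 0.017, 0.012))
```

In the evaluated schemes, cd, ch, and ce would themselves be functions of atmospheric stability (e.g., via Monin-Obukhov similarity) rather than constants.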
Endalamaw, Abraham; Bolton, W. Robert; Young-Robertson, Jessica M.; ...
2017-09-14
Modeling hydrological processes in the Alaskan sub-arctic is challenging because of the extreme spatial heterogeneity in soil properties and vegetation communities. Nevertheless, modeling and predicting hydrological processes is critical in this region due to its vulnerability to the effects of climate change. Coarse-spatial-resolution datasets used in land surface modeling pose a new challenge in simulating the spatially distributed and basin-integrated processes since these datasets do not adequately represent the small-scale hydrological, thermal, and ecological heterogeneity. The goal of this study is to improve the prediction capacity of mesoscale to large-scale hydrological models by introducing a small-scale parameterization scheme, which better represents the spatial heterogeneity of soil properties and vegetation cover in the Alaskan sub-arctic. The small-scale parameterization schemes are derived from observations and a sub-grid parameterization method in the two contrasting sub-basins of the Caribou Poker Creek Research Watershed (CPCRW) in Interior Alaska: one nearly permafrost-free (LowP) sub-basin and one permafrost-dominated (HighP) sub-basin. The sub-grid parameterization method used in the small-scale parameterization scheme is derived from the watershed topography. We found that observed soil thermal and hydraulic properties – including the distribution of permafrost and vegetation cover heterogeneity – are better represented in the sub-grid parameterization method than the coarse-resolution datasets. Parameters derived from the coarse-resolution datasets and from the sub-grid parameterization method are implemented into the variable infiltration capacity (VIC) mesoscale hydrological model to simulate runoff, evapotranspiration (ET), and soil moisture in the two sub-basins of the CPCRW. Simulated hydrographs based on the small-scale parameterization capture most of the peak and low flows, with similar accuracy in both sub-basins, compared to simulated hydrographs based on the coarse-resolution datasets. On average, the small-scale parameterization scheme improves the total runoff simulation by up to 50 % in the LowP sub-basin and by up to 10 % in the HighP sub-basin from the large-scale parameterization. This study shows that the proposed sub-grid parameterization method can be used to improve the performance of mesoscale hydrological models in the Alaskan sub-arctic watersheds.
Understanding Prairie Fen Hydrology - a Hierarchical Multi-Scale Groundwater Modeling Approach
NASA Astrophysics Data System (ADS)
Sampath, P.; Liao, H.; Abbas, H.; Ma, L.; Li, S.
2012-12-01
Prairie fens provide critical habitat to more than 50 rare species and significantly contribute to the biodiversity of the upper Great Lakes region. The sustainability of these globally unique ecosystems, however, requires that they be fed by a steady supply of pristine, calcareous groundwater. Understanding the hydrology that supports the existence of such fens is essential in preserving these valuable habitats. This research uses process-based multi-scale groundwater modeling for this purpose. Two fen-sites, MacCready Fen and Ives Road Fen, in Southern Michigan were systematically studied. A hierarchy of nested steady-state models was built for each fen-site to capture the system's dynamics at spatial scales ranging from the regional groundwater-shed to the local fens. The models utilize high-resolution Digital Elevation Models (DEM), National Hydrologic Datasets (NHD), a recently assembled water-well database, and results from a state-wide groundwater mapping project to represent the complex hydro-geological and stress framework. The modeling system simulates both shallow glacial and deep bedrock aquifers as well as the interaction between surface water and groundwater. Aquifer heterogeneities were explicitly simulated with multi-scale transition probability geostatistics. A two-way hydraulic head feedback mechanism was set up between the nested models, such that the parent models provided boundary conditions to the child models, and in turn the child models provided local information to the parent models. A hierarchical mass budget analysis was performed to estimate the seepage fluxes at the surface water/groundwater interfaces and to assess the relative importance of the processes at multiple scales that contribute water to the fens. The models were calibrated using observed base-flows at stream gauging stations and/or static water levels at wells. Three-dimensional particle tracking was used to predict the sources of water to the fens. We observed from the multi-scale simulations that the water system that supports the fens is a much larger, more connected, and more complex one than expected. The water in the fen can be traced back to a network of sources, including lakes and wetlands at different elevations, which are connected to a regional mound through a "cascade delivery mechanism". This "master recharge area" is the ultimate source of water not only to the fens in its vicinity, but also for many major rivers and aquifers. The implication of this finding is that prairie fens must be managed as part of a much larger, multi-scale groundwater system and we must consider protection of both short-term and long-term water sources. This will require a fundamental reassessment of our current approach to fen conservation, which is primarily based on protection of individual fens and their immediate surroundings. Clearly, in the future we must plan for conservation of the broad recharge areas and the multiple fen complexes they support.
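To make the two-way feedback between nested models concrete, the sketch below shows the information hand-off in a deliberately simplified 1-D setting (uniform properties, hypothetical grids, a single head exchange per pass). It only illustrates the mechanism; a full implementation would re-solve the parent model with the child-derived heads or fluxes on every iteration and would of course be three-dimensional.

```python
import numpy as np

# Parent (regional) and child (fen-scale) steady-state head solves of -K h'' = R.
def solve_heads(x, h_left, h_right, K=10.0, R=1e-3):
    n, dx = len(x), x[1] - x[0]
    A = np.zeros((n, n)); b = np.full(n, -R * dx**2 / K)
    A[0, 0] = A[-1, -1] = 1.0; b[0], b[-1] = h_left, h_right
    for i in range(1, n - 1):
        A[i, i - 1] = A[i, i + 1] = 1.0
        A[i, i] = -2.0
    return np.linalg.solve(A, b)

x_parent = np.linspace(0.0, 1000.0, 11)      # coarse regional grid [m], assumed
x_child = np.linspace(300.0, 600.0, 31)      # fine local grid [m], assumed
h_parent = solve_heads(x_parent, 100.0, 95.0)

for _ in range(2):
    # parent -> child: interpolate parent heads onto the child model boundaries
    hb = np.interp([x_child[0], x_child[-1]], x_parent, h_parent)
    h_child = solve_heads(x_child, hb[0], hb[1])
    # child -> parent: hand local heads back at nodes covered by the child model
    # (a real scheme would then re-solve the parent with this information)
    inside = (x_parent >= x_child[0]) & (x_parent <= x_child[-1])
    h_parent[inside] = np.interp(x_parent[inside], x_child, h_child)

print(np.round(h_parent, 2))
```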
NASA Astrophysics Data System (ADS)
Yin, Yi; Shang, Pengjian
2013-12-01
We use multiscale detrended fluctuation analysis (MSDFA) and multiscale detrended cross-correlation analysis (MSDCCA) to investigate auto-correlation (AC) and cross-correlation (CC) in the US and Chinese stock markets during 1997-2012. The results show that US and Chinese stock indices differ in terms of their multiscale AC structures. Stock indices in the same region also differ with regard to their multiscale AC structures. We analyze AC and CC behaviors among indices for the same region to determine similarity among six stock indices and divide them into four groups accordingly. We choose S&P500, NQCI, HSI, and the Shanghai Composite Index as representative samples for simplicity. MSDFA and MSDCCA results and average MSDFA spectra for local scaling exponents (LSEs) for individual series are presented. We find that the MSDCCA spectrum for LSE CC between two time series generally tends to be greater than the average MSDFA LSE spectrum for individual series. We obtain detailed multiscale structures and relations for CC between the four representatives. MSDFA and MSDCCA with secant rolling windows of different sizes are then applied to reanalyze the AC and CC. Vertical and horizontal comparisons of different window sizes are made. The MSDFA and MSDCCA results for the original window size are confirmed and some new interesting characteristics and conclusions regarding multiscale correlation structures are obtained.
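For readers unfamiliar with the building block behind MSDFA/MSDCCA, the sketch below implements plain detrended fluctuation analysis; the multiscale variants of the study additionally use secant rolling windows and local scaling exponents. The scale choices and test series here are illustrative assumptions.

```python
import numpy as np

def dfa(series, scales):
    """Return the DFA scaling exponent of a 1-D series (linear local detrending)."""
    profile = np.cumsum(series - np.mean(series))        # integrated (profile) series
    flucts = []
    for s in scales:
        n_seg = len(profile) // s
        resid = []
        for k in range(n_seg):
            seg = profile[k * s:(k + 1) * s]
            t = np.arange(s)
            trend = np.polyval(np.polyfit(t, seg, 1), t)  # local linear trend
            resid.append(np.mean((seg - trend) ** 2))
        flucts.append(np.sqrt(np.mean(resid)))            # fluctuation function F(s)
    # scaling exponent = slope of log F(s) versus log s
    return np.polyfit(np.log(scales), np.log(flucts), 1)[0]

rng = np.random.default_rng(0)
print(dfa(rng.standard_normal(4096), scales=[16, 32, 64, 128, 256]))  # ~0.5 for white noise
```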
Extensions and applications of a second-order land-surface parameterization
NASA Technical Reports Server (NTRS)
Andreou, S. A.; Eagleson, P. S.
1983-01-01
Extensions and applications of a second-order land surface parameterization, proposed by Andreou and Eagleson, are developed. Procedures for evaluating the near-surface storage depth used in one-cell land surface parameterizations are suggested and tested using the model. A sensitivity analysis of the key soil parameters is performed. A case study involving comparison with an "exact" numerical model and another simplified parameterization, under very dry climatic conditions and for two different soil types, is also included.
NASA Technical Reports Server (NTRS)
Stauffer, David R.; Seaman, Nelson L.; Munoz, Ricardo C.
2000-01-01
The objective of this investigation was to study the role of shallow convection on the regional water cycle of the Mississippi and Little Washita Basins using a 3-D mesoscale model, the PSU/NCAR MM5. The underlying premise of the project was that current modeling of regional-scale climate and moisture cycles over the continents is deficient without adequate treatment of shallow convection. It was hypothesized that an improved treatment of the regional water cycle can be achieved by using a 3-D mesoscale numerical model having a detailed land-surface parameterization, an advanced boundary-layer parameterization, and a more complete shallow convection parameterization than are available in most current models. The methodology was based on the application in the MM5 of new or recently improved parameterizations covering these three physical processes. Therefore, the work plan focused on integrating, improving, and testing these parameterizations in the MM5 and applying them to study water-cycle processes over the Southern Great Plains (SGP): (1) the Parameterization for Land-Atmosphere-Cloud Exchange (PLACE) described by Wetzel and Boone; (2) the 1.5-order turbulent kinetic energy (TKE)-predicting scheme of Shafran et al.; and (3) the hybrid-closure sub-grid shallow convection parameterization of Deng. Each of these schemes has been tested extensively through this study and the latter two have been improved significantly to extend their capabilities.
ERIC Educational Resources Information Center
Wood, Brian D.
2009-01-01
Although the multiscale structure of many important processes in engineering is becoming more widely acknowledged, making this connection in the classroom is a difficult task. This is due in part because the concept of multiscale structure itself is challenging and it requires the students to develop new conceptual pictures of physical systems,…
Maria Vergara; Samuel A. Cushman; Fermin Urra; Aritz Ruiz-Gonzalez
2016-01-01
Multispecies and multiscale habitat suitability models (HSM) are important to identify the environmental variables and scales influencing habitat selection and facilitate the comparison of closely related species with different ecological requirements. Objectives: This study explores the multiscale relationships of habitat suitability for the pine (Martes...
A Simple Parameterization of 3 x 3 Magic Squares
ERIC Educational Resources Information Center
Trenkler, Gotz; Schmidt, Karsten; Trenkler, Dietrich
2012-01-01
In this article a new parameterization of magic squares of order three is presented. This parameterization permits an easy computation of their inverses, eigenvalues, eigenvectors and adjoints. Some attention is paid to the Luoshu, one of the oldest magic squares.
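For orientation, a classical three-parameter form for 3 x 3 magic squares (not necessarily the new parameterization introduced in the article) writes every such square in terms of its centre c and two offsets a and b, with magic constant 3c; the sketch below builds it and recovers the Luoshu.

```python
import numpy as np

def magic_square(a, b, c):
    """Classical (a, b, c) parameterization of a 3x3 magic square; magic sum is 3c."""
    return np.array([[c + a,     c - a - b, c + b    ],
                     [c - a + b, c,         c + a - b],
                     [c - b,     c + a + b, c - a    ]])

M = magic_square(a=-1, b=-3, c=5)   # reproduces the Luoshu square [[4,9,2],[3,5,7],[8,1,6]]
print(M)
print(M.sum(axis=0), M.sum(axis=1), np.trace(M), np.trace(np.fliplr(M)))  # all equal 15
```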
Adaptation of a Fast Optimal Interpolation Algorithm to the Mapping of Oceangraphic Data
NASA Technical Reports Server (NTRS)
Menemenlis, Dimitris; Fieguth, Paul; Wunsch, Carl; Willsky, Alan
1997-01-01
A fast, recently developed, multiscale optimal interpolation algorithm has been adapted to the mapping of hydrographic and other oceanographic data. This algorithm produces solution and error estimates which are consistent with those obtained from exact least squares methods, but at a small fraction of the computational cost. Problems whose solution would be completely impractical using exact least squares, that is, problems with tens or hundreds of thousands of measurements and estimation grid points, can easily be solved on a small workstation using the multiscale algorithm. In contrast to methods previously proposed for solving large least squares problems, our approach provides estimation error statistics while permitting long-range correlations, using all measurements, and permitting arbitrary measurement locations. The multiscale algorithm itself, published elsewhere, is not the focus of this paper. However, the algorithm requires statistical models having a very particular multiscale structure; it is the development of a class of multiscale statistical models, appropriate for oceanographic mapping problems, with which we concern ourselves in this paper. The approach is illustrated by mapping temperature in the northeastern Pacific. The number of hydrographic stations is kept deliberately small to show that multiscale and exact least squares results are comparable. A portion of the data were not used in the analysis; these data serve to test the multiscale estimates. A major advantage of the present approach is the ability to repeat the estimation procedure a large number of times for sensitivity studies, parameter estimation, and model testing. We have made available by anonymous Ftp a set of MATLAB-callable routines which implement the multiscale algorithm and the statistical models developed in this paper.
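For reference, the exact least-squares (Gauss-Markov, or optimal interpolation) estimate that the multiscale algorithm is designed to approximate at much lower cost has the familiar closed form sketched below; the prior covariance, observation operator, and grid are illustrative assumptions, not the statistical models developed in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
n_grid, n_obs = 50, 12
x_grid = np.linspace(0.0, 1.0, n_grid)

# Assumed prior covariance with exponential spatial correlation; uncorrelated obs noise.
P = 1.0 * np.exp(-np.abs(x_grid[:, None] - x_grid[None, :]) / 0.2)
R = 0.05 * np.eye(n_obs)

obs_idx = rng.choice(n_grid, n_obs, replace=False)
H = np.zeros((n_obs, n_grid)); H[np.arange(n_obs), obs_idx] = 1.0  # sample at obs points
y = np.sin(2 * np.pi * x_grid[obs_idx]) + 0.2 * rng.standard_normal(n_obs)

K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)   # gain (O(n^3) cost: what multiscale avoids)
x_hat = K @ y                                  # mapped field (zero prior mean assumed)
P_post = P - K @ H @ P                         # estimation error covariance
print(np.round(x_hat[:5], 3), np.round(np.sqrt(np.diag(P_post))[:5], 3))
```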
Soil Structure - A Neglected Component of Land-Surface Models
NASA Astrophysics Data System (ADS)
Fatichi, S.; Or, D.; Walko, R. L.; Vereecken, H.; Kollet, S. J.; Young, M.; Ghezzehei, T. A.; Hengl, T.; Agam, N.; Avissar, R.
2017-12-01
Soil structure is largely absent in most standard sampling and measurements and in the subsequent parameterization of soil hydraulic properties deduced from soil maps and used in Earth System Models. The apparent omission propagates into the pedotransfer functions that deduce parameters of soil hydraulic properties primarily from soil textural information. Such simple parameterization is an essential ingredient in the practical application of any land surface model. Despite the critical role of soil structure (biopores formed by decaying roots, aggregates, etc.) in defining soil hydraulic functions, only a few studies have attempted to incorporate soil structure into models. They mostly looked at the effects on preferential flow and solute transport pathways at the soil profile scale; yet, the role of soil structure in mediating large-scale fluxes remains understudied. Here, we focus on rectifying this gap and demonstrating potential impacts on surface and subsurface fluxes and system-wide eco-hydrologic responses. The study proposes a systematic way for correcting the soil water retention and hydraulic conductivity functions, accounting for soil structure, with major implications for near-saturated hydraulic conductivity. Modification to the basic soil hydraulic parameterization is assumed as a function of biological activity summarized by Gross Primary Production. A land-surface model with dynamic vegetation is used to carry out numerical simulations with and without the role of soil structure for 20 locations characterized by different climates and biomes across the globe. Including soil structure considerably affects the partitioning between infiltration and runoff and, consequently, leakage at the base of the soil profile (recharge). In several locations characterized by wet climates, a few hundred mm per year of surface runoff become deep recharge when soil structure is accounted for. Changes in energy fluxes, total evapotranspiration and vegetation productivity are less significant but they can reach up to 10% in specific locations. Significance for land-surface and hydrological modeling and implications for distributed domains are discussed.
Daly, Christopher; Halbleib, Michael D.; Hannaway, David B.; ...
2017-12-22
Several crops have recently been identified as potential dedicated bioenergy feedstocks for the production of power, fuels, and bioproducts. Despite being identified as early as the 1980s, no systematic work has been undertaken to characterize the spatial distribution of their long-term production potentials in the United States. Such information is a starting point for planners and economic modelers, and there is a need for this spatial information to be developed in a consistent manner for a variety of crops, so that their production potentials can be intercompared to support crop selection decisions. As part of the Sun Grant Regional Feedstock Partnership (RFP), an approach to mapping these potential biomass resources was developed to take advantage of the informational synergy realized when bringing together coordinated field trials, close interaction with expert agronomists, and spatial modeling into a single, collaborative effort. A modeling and mapping system called PRISM-ELM was designed to answer a basic question: How do climate and soil characteristics affect the spatial distribution and long-term production patterns of a given crop? This empirical/mechanistic/biogeographical hybrid model employs a limiting factor approach, where productivity is determined by the most limiting of the factors addressed in submodels that simulate water balance, winter low-temperature response, summer high-temperature response, and soil pH, salinity, and drainage. Yield maps are developed through linear regressions relating soil and climate attributes to reported yield data. The model was parameterized and validated using grain yield data for winter wheat and maize, which served as benchmarks for parameterizing the model for upland and lowland switchgrass, CRP grasses, Miscanthus, biomass sorghum, energycane, willow, and poplar. The resulting maps served as potential production inputs to analyses comparing the viability of biomass crops under various economic scenarios. The modeling and parameterization framework can be expanded to include other biomass crops.
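The limiting-factor logic can be summarized in a few lines. The sketch below is a hedged illustration only: the submodel scores, the way they are combined with the yield regression, and the coefficients are assumptions, not the PRISM-ELM formulation or RFP values.

```python
# Hypothetical limiting-factor yield estimate for one grid cell.
def limiting_factor_yield(scores, slope, intercept):
    """scores: dict of 0-1 suitability scores returned by the submodels."""
    limiter = min(scores, key=scores.get)         # most limiting factor controls the cell
    yield_est = max(0.0, intercept + slope * scores[limiter])
    return yield_est, limiter

scores = {"water_balance": 0.62, "winter_low_temp": 0.88,
          "summer_high_temp": 0.75, "soil_ph_salinity_drainage": 0.93}
yield_est, limiter = limiting_factor_yield(scores, slope=9.0, intercept=0.5)
print(f"estimated yield ~{yield_est:.1f} Mg/ha, limited by {limiter}")
```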
Importance of including ammonium sulfate ((NH4)2SO4) aerosols for ice cloud parameterization in GCMs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bhattacharjee, P. S.; Sud, Yogesh C.; Liu, Xiaohong
2010-02-22
A common deficiency of many cloud-physics parameterizations including NASA's microphysics of clouds with aerosol-cloud interactions (hereafter called McRAS-AC) is that they simulate less (larger) than the observed ice cloud particle number (size). A single column model (SCM) of McRAS-AC and Global Circulation Model (GCM) physics together with an adiabatic parcel model (APM) for ice-cloud nucleation (IN) of aerosols were used to systematically examine the influence of ammonium sulfate ((NH4)2SO4) aerosols, not included in the present formulations of McRAS-AC. Specifically, the influence of (NH4)2SO4 aerosols on the optical properties of both liquid and ice clouds was analyzed. First, an (NH4)2SO4 parameterization was included in the APM to assess its effect vis-à-vis that of the other aerosols. Subsequently, several evaluation tests were conducted over the ARM-SGP and thirteen other locations (sorted into pristine and polluted conditions) distributed over marine and continental sites with the SCM. The statistics of the simulated cloud climatology were evaluated against the available ground and satellite data. The results showed that inclusion of (NH4)2SO4 in the SCM made a remarkable improvement in the simulated effective radius of ice clouds. However, the corresponding ice-cloud optical thickness increased more than is observed. This can be caused by lack of cloud advection and evaporation. We argue that this deficiency can be mitigated by adjusting the other tunable parameters of McRAS-AC such as precipitation efficiency. Inclusion of ice cloud particle splintering introduced through well-established empirical equations is found to further improve the results. Preliminary tests show that these changes make a substantial improvement in simulating the cloud optical properties in the GCM, particularly by simulating a far more realistic cloud distribution over the ITCZ.
NASA Astrophysics Data System (ADS)
Witte, M.; Morrison, H.; Jensen, J. B.; Bansemer, A.; Gettelman, A.
2017-12-01
The spatial covariance of cloud and rain water (or in simpler terms, small and large drops, respectively) is an important quantity for accurate prediction of the accretion rate in bulk microphysical parameterizations that account for subgrid variability using assumed probability density functions (pdfs). Past diagnoses of this covariance from remote sensing, in situ measurements and large eddy simulation output have implicitly assumed that the magnitude of the covariance is insensitive to grain size (i.e. horizontal resolution) and averaging length, but this is not the case because both cloud and rain water exhibit scale invariance across a wide range of scales - from tens of centimeters to tens of kilometers in the case of cloud water, a range that we will show is primarily limited by instrumentation and sampling issues. Since the individual variances systematically vary as a function of spatial scale, it should be expected that the covariance follows a similar relationship. In this study, we quantify the scaling properties of cloud and rain water content and their covariability from high frequency in situ aircraft measurements of marine stratocumulus taken over the southeastern Pacific Ocean aboard the NSF/NCAR C-130 during the VOCALS-REx field experiment of October-November 2008. First we confirm that cloud and rain water scale in distinct manners, indicating that there is a statistically and potentially physically significant difference in the spatial structure of the two fields. Next, we demonstrate that the covariance is a strong function of spatial scale, which implies important caveats regarding the ability of limited-area models with domains smaller than a few tens of kilometers across to accurately reproduce the spatial organization of precipitation. Finally, we present preliminary work on the development of a scale-aware parameterization of cloud-rain water subgrid covariability based on multifractal analysis intended for application in large-scale model microphysics schemes.
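The role of the covariance is easiest to see for a bilinear accretion rate K q_c q_r (a simplification of the nonlinear rates used in many bulk schemes, where the covariance instead enters through the assumed joint pdf): the grid-mean rate is enhanced by one plus the covariance normalized by the product of the means, which is why its dependence on averaging scale matters.

\[
\overline{q_c\,q_r} \;=\; \overline{q_c}\;\overline{q_r} \;+\; \operatorname{cov}(q_c,q_r),
\qquad
\overline{K\,q_c\,q_r} \;=\; K\,\overline{q_c}\;\overline{q_r}
\left(1+\frac{\operatorname{cov}(q_c,q_r)}{\overline{q_c}\;\overline{q_r}}\right).
\]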
NASA Technical Reports Server (NTRS)
Stone, Peter H.; Yao, Mao-Sung
1990-01-01
A number of perpetual January simulations are carried out with a two-dimensional zonally averaged model employing various parameterizations of the eddy fluxes of heat (potential temperature) and moisture. The parameterizations are evaluated by comparing these results with the eddy fluxes calculated in a parallel simulation using a three-dimensional general circulation model with zonally symmetric forcing. The three-dimensional model's performance in turn is evaluated by comparing its results using realistic (nonsymmetric) boundary conditions with observations. Branscome's parameterization of the meridional eddy flux of heat and Leovy's parameterization of the meridional eddy flux of moisture simulate the seasonal and latitudinal variations of these fluxes reasonably well, while somewhat underestimating their magnitudes. New parameterizations of the vertical eddy fluxes are developed that take into account the enhancement of the eddy mixing slope in a growing baroclinic wave due to condensation, and also the effect of eddy fluctuations in relative humidity. The new parameterizations, when tested in the two-dimensional model, simulate the seasonal, latitudinal, and vertical variations of the vertical eddy fluxes quite well, when compared with the three-dimensional model, and only underestimate the magnitude of the fluxes by 10 to 20 percent.
Transition between inverse and direct energy cascades in multiscale optical turbulence
Malkin, V. M.; Fisch, N. J.
2018-03-06
Multiscale turbulence naturally develops and plays an important role in many fluid, gas, and plasma phenomena. Statistical models of multiscale turbulence usually employ Kolmogorov hypotheses of spectral locality of interactions (meaning that interactions primarily occur between pulsations of comparable scales) and scale-invariance of turbulent pulsations. However, optical turbulence described by the nonlinear Schrödinger equation exhibits breaking of both the Kolmogorov locality and scale-invariance. A weaker form of spectral locality that holds for multi-scale optical turbulence enables a derivation of simplified evolution equations that reduce the problem to single-scale modeling. We present the derivation of these equations for Kerr media with random inhomogeneities. Then, we find the analytical solution that exhibits a transition between inverse and direct energy cascades in optical turbulence.
Multiscale Integration of -Omic, Imaging, and Clinical Data in Biomedical Informatics
Phan, John H.; Quo, Chang F.; Cheng, Chihwen; Wang, May Dongmei
2016-01-01
This paper reviews challenges and opportunities in multiscale data integration for biomedical informatics. Biomedical data can come from different biological origins, data acquisition technologies, and clinical applications. Integrating such data across multiple scales (e.g., molecular, cellular/tissue, and patient) can lead to more informed decisions for personalized, predictive, and preventive medicine. However, data heterogeneity, community standards in data acquisition, and computational complexity are big challenges for such decision making. This review describes genomic and proteomic (i.e., molecular), histopathological imaging (i.e., cellular/tissue), and clinical (i.e., patient) data; it includes case studies for single-scale (e.g., combining genomic or histopathological image data), multiscale (e.g., combining histopathological image and clinical data), and multiscale and multiplatform (e.g., the Human Protein Atlas and The Cancer Genome Atlas) data integration. Numerous opportunities exist in biomedical informatics research focusing on integration of multiscale and multiplatform data. PMID:23231990
Macklin, Paul; Cristini, Vittorio
2013-01-01
Simulating cancer behavior across multiple biological scales in space and time, i.e., multiscale cancer modeling, is increasingly being recognized as a powerful tool to refine hypotheses, focus experiments, and enable more accurate predictions. A growing number of examples illustrate the value of this approach in providing quantitative insight on the initiation, progression, and treatment of cancer. In this review, we introduce the most recent and important multiscale cancer modeling works that have successfully established a mechanistic link between different biological scales. Biophysical, biochemical, and biomechanical factors are considered in these models. We also discuss innovative, cutting-edge modeling methods that are moving predictive multiscale cancer modeling toward clinical application. Furthermore, because the development of multiscale cancer models requires a new level of collaboration among scientists from a variety of fields such as biology, medicine, physics, mathematics, engineering, and computer science, an innovative Web-based infrastructure is needed to support this growing community. PMID:21529163
Multiscale Simulation of Blood Flow in Brain Arteries with an Aneurysm
DOE Office of Scientific and Technical Information (OSTI.GOV)
Leopold Grinberg; Vitali Morozov; Dmitry A. Fedosov
2013-04-24
Multi-scale modeling of arterial blood flow can shed light on the interaction between events happening at micro- and meso-scales (i.e., adhesion of red blood cells to the arterial wall, clot formation) and at macro-scales (i.e., change in flow patterns due to the clot). Coupled numerical simulations of such multi-scale flow require state-of-the-art computers and algorithms, along with techniques for multi-scale visualizations. This animation presents results of studies used in the development of a multi-scale visualization methodology. First we use streamlines to show the path the flow is taking as it moves through the system, including the aneurysm. Next we investigate the process of thrombus (blood clot) formation, which may be responsible for the rupture of aneurysms, by concentrating on the platelet blood cells, observing as they aggregate on the wall of the aneurysm.
Goal-oriented robot navigation learning using a multi-scale space representation.
Llofriu, M; Tejera, G; Contreras, M; Pelc, T; Fellous, J M; Weitzenfeld, A
2015-12-01
There has been extensive research in recent years on the multi-scale nature of hippocampal place cells and entorhinal grid cells encoding which led to many speculations on their role in spatial cognition. In this paper we focus on the multi-scale nature of place cells and how they contribute to faster learning during goal-oriented navigation when compared to a spatial cognition system composed of single scale place cells. The task consists of a circular arena with a fixed goal location, in which a robot is trained to find the shortest path to the goal after a number of learning trials. Synaptic connections are modified using a reinforcement learning paradigm adapted to the place cells' multi-scale architecture. The model is evaluated in both simulation and physical robots. We find that larger scale and combined multi-scale representations favor goal-oriented navigation task learning. Copyright © 2015 Elsevier Ltd. All rights reserved.
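The core idea, reinforcement learning over place-cell-like features laid out at several spatial scales, can be sketched in a toy 1-D corridor; the environment, field widths, and learning constants below are illustrative assumptions and not the robot model of the study.

```python
import numpy as np

rng = np.random.default_rng(0)
scales = [0.05, 0.15, 0.4]                        # multi-scale place-field widths (assumed)
centers = np.linspace(0.0, 1.0, 15)

def features(x):                                  # activity of all place cells at position x
    return np.concatenate([np.exp(-(x - centers) ** 2 / (2 * s ** 2)) for s in scales])

w = np.zeros((2, len(centers) * len(scales)))     # linear action-value weights
actions = (-0.05, +0.05)                          # step left / step right
alpha, gamma, eps = 0.05, 0.95, 0.1

for episode in range(300):
    x = 0.1                                       # start; goal is x >= 0.95
    for _ in range(200):
        phi = features(x)
        a = rng.integers(2) if rng.random() < eps else int(np.argmax(w @ phi))
        x_new = float(np.clip(x + actions[a], 0.0, 1.0))
        done = x_new >= 0.95
        r = 1.0 if done else -0.01
        target = r + (0.0 if done else gamma * np.max(w @ features(x_new)))
        w[a] += alpha * (target - w[a] @ phi) * phi   # TD(0) update on the features
        x = x_new
        if done:
            break

print("greedy value near start:", float(np.max(w @ features(0.1))))
```

Dropping the coarser scales from `scales` and rerunning is a quick way to see the slower value propagation that motivates the multi-scale representation.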
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xiao, Heng; Gustafson, Jr., William I.; Hagos, Samson M.
2015-04-18
To better understand the behavior of quasi-equilibrium-based convection parameterizations at higher resolution, we use a diagnostic framework to examine the resolution dependence of the subgrid-scale vertical transport of moist static energy as parameterized by the Zhang-McFarlane convection parameterization (ZM). Grid-scale input to ZM is supplied by coarsening output from cloud-resolving model (CRM) simulations onto subdomains ranging in size from 8 × 8 to 256 × 256 km².
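The coarsening step itself is simple block averaging of the CRM fields onto the chosen subdomain sizes; the sketch below uses a synthetic field and an assumed 1 km grid spacing purely for illustration.

```python
import numpy as np

def coarsen(field, block):
    """Block-average a 2-D field onto non-overlapping block x block subdomains."""
    ny, nx = field.shape
    ny2, nx2 = ny // block * block, nx // block * block   # trim to a multiple of block
    f = field[:ny2, :nx2]
    return f.reshape(ny2 // block, block, nx2 // block, block).mean(axis=(1, 3))

crm_field = np.random.default_rng(2).standard_normal((1024, 1024))  # placeholder CRM field
for block in (8, 32, 128, 256):     # 8x8 ... 256x256 km subdomains at an assumed 1 km spacing
    print(block, coarsen(crm_field, block).shape)
```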
Parameterizing by the Number of Numbers
NASA Astrophysics Data System (ADS)
Fellows, Michael R.; Gaspers, Serge; Rosamond, Frances A.
The usefulness of parameterized algorithmics has often depended on what Niedermeier has called "the art of problem parameterization". In this paper we introduce and explore a novel but general form of parameterization: the number of numbers. Several classic numerical problems, such as Subset Sum, Partition, 3-Partition, Numerical 3-Dimensional Matching, and Numerical Matching with Target Sums, have multisets of integers as input. We initiate the study of parameterizing these problems by the number of distinct integers in the input. We rely on an FPT result for Integer Linear Programming Feasibility to show that all the above-mentioned problems are fixed-parameter tractable when parameterized in this way. In various applied settings, problem inputs often consist in part of multisets of integers or multisets of weighted objects (such as edges in a graph, or jobs to be scheduled). Such number-of-numbers parameterized problems often reduce to subproblems about transition systems of various kinds, parameterized by the size of the system description. We consider several core problems of this kind relevant to number-of-numbers parameterization. Our main hardness result considers the problem: given a non-deterministic Mealy machine M (a finite state automaton outputting a letter on each transition), an input word x, and a census requirement c for the output word specifying how many times each letter of the output alphabet should be written, decide whether there exists a computation of M reading x that outputs a word y that meets the requirement c. We show that this problem is hard for W[1]. If the question is whether there exists an input word x such that a computation of M on x outputs a word that meets c, the problem becomes fixed-parameter tractable.
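To make the "number of numbers" view concrete for Partition: the input multiset is compressed to (distinct value, multiplicity) pairs, and one side of the partition is described by how many copies of each distinct value it takes. The naive search below only makes the parameter visible (it enumerates roughly (n+1)^k multiplicity vectors for k distinct values); the fixed-parameter tractability result in the paper instead goes through Integer Linear Programming Feasibility.

```python
from itertools import product
from collections import Counter

def partition_by_distinct_values(multiset):
    """Return copies of each distinct value forming one half, or None if impossible."""
    counts = Counter(multiset)
    total = sum(multiset)
    if total % 2:
        return None
    values, mults = list(counts), [counts[v] for v in counts]
    for take in product(*(range(m + 1) for m in mults)):   # copies taken of each value
        if sum(t * v for t, v in zip(take, values)) == total // 2:
            return dict(zip(values, take))
    return None

print(partition_by_distinct_values([7, 7, 7, 7, 1, 1, 3, 3]))  # e.g. {7: 2, 1: 1, 3: 1}
```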
Parameterized Cross Sections for Pion Production in Proton-Proton Collisions
NASA Technical Reports Server (NTRS)
Blattnig, Steve R.; Swaminathan, Sudha R.; Kruger, Adam T.; Ngom, Moussa; Norbury, John W.; Tripathi, R. K.
2000-01-01
An accurate knowledge of cross sections for pion production in proton-proton collisions finds wide application in particle physics, astrophysics, cosmic ray physics, and space radiation problems, especially in situations where an incident proton is transported through some medium and knowledge of the output particle spectrum is required when given the input spectrum. In these cases, accurate parameterizations of the cross sections are desired. In this paper much of the experimental data are reviewed and compared with a wide variety of different cross section parameterizations. Therefore, parameterizations of neutral and charged pion cross sections are provided that give a very accurate description of the experimental data. Lorentz invariant differential cross sections, spectral distributions, and total cross section parameterizations are presented.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Larson, Vincent; Gettelman, Andrew; Morrison, Hugh
In state-of-the-art climate models, each cloud type is treated using its own separate cloud parameterization and its own separate microphysics parameterization. This use of separate schemes for separate cloud regimes is undesirable because it is theoretically unfounded, it hampers interpretation of results, and it leads to the temptation to overtune parameters. In this grant, we are creating a climate model that contains a unified cloud parameterization and a unified microphysics parameterization. This model will be used to address the problems of excessive frequency of drizzle in climate models and excessively early onset of deep convection in the Tropics over land. The resulting model will be compared with ARM observations.
Multiscale Study of Currents Affected by Topography
2015-09-30
…topography on the ocean general circulation is challenging because of the multiscale nature of the flow interactions. Small-scale details of the … topography, and the waves, drag, and turbulence generated at the boundary, from meter scale to mesoscale, interact in the boundary layers to influence the …
Multiscale Mathematics for Biomass Conversion to Renewable Hydrogen
DOE Office of Scientific and Technical Information (OSTI.GOV)
Plechac, Petr; Vlachos, Dionisios; Katsoulakis, Markos
2013-09-05
The overall objective of this project is to develop multiscale models for understanding and eventually designing complex processes for renewables. To the best of our knowledge, our work is the first attempt at modeling complex reacting systems, whose performance relies on underlying multiscale mathematics. Our specific application lies at the heart of biofuels initiatives of DOE and entails modeling of catalytic systems, to enable economic, environmentally benign, and efficient conversion of biomass into either hydrogen or valuable chemicals. Specific goals include: (i) Development of rigorous spatio-temporal coarse-grained kinetic Monte Carlo (KMC) mathematics and simulation for microscopic processes encountered in biomass transformation. (ii) Development of hybrid multiscale simulation that links stochastic simulation to a deterministic partial differential equation (PDE) model for an entire reactor. (iii) Development of hybrid multiscale simulation that links KMC simulation with quantum density functional theory (DFT) calculations. (iv) Development of parallelization of models of (i)-(iii) to take advantage of Petaflop computing and enable real world applications of complex, multiscale models. In this NCE period, we continued addressing these objectives and completed the proposed work. Main initiatives, key results, and activities are outlined.
Pope, Bernard J; Fitch, Blake G; Pitman, Michael C; Rice, John J; Reumann, Matthias
2011-01-01
Future multiscale and multiphysics models must use the power of high performance computing (HPC) systems to enable research into human disease, translational medical science, and treatment. Previously we showed that computationally efficient multiscale models will require the use of sophisticated hybrid programming models, mixing distributed message passing processes (e.g. the message passing interface (MPI)) with multithreading (e.g. OpenMP, POSIX pthreads). The objective of this work is to compare the performance of such hybrid programming models when applied to the simulation of a lightweight multiscale cardiac model. Our results show that the hybrid models do not perform favourably when compared to an implementation using only MPI which is in contrast to our results using complex physiological models. Thus, with regards to lightweight multiscale cardiac models, the user may not need to increase programming complexity by using a hybrid programming approach. However, considering that model complexity will increase as well as the HPC system size in both node count and number of cores per node, it is still foreseeable that we will achieve faster than real time multiscale cardiac simulations on these systems using hybrid programming models.
Beyond Low Rank + Sparse: Multi-scale Low Rank Matrix Decomposition
Ong, Frank; Lustig, Michael
2016-01-01
We present a natural generalization of the recent low rank + sparse matrix decomposition and consider the decomposition of matrices into components of multiple scales. Such decomposition is well motivated in practice as data matrices often exhibit local correlations in multiple scales. Concretely, we propose a multi-scale low rank modeling that represents a data matrix as a sum of block-wise low rank matrices with increasing scales of block sizes. We then consider the inverse problem of decomposing the data matrix into its multi-scale low rank components and approach the problem via a convex formulation. Theoretically, we show that under various incoherence conditions, the convex program recovers the multi-scale low rank components either exactly or approximately. Practically, we provide guidance on selecting the regularization parameters and incorporate cycle spinning to reduce blocking artifacts. Experimentally, we show that the multi-scale low rank decomposition provides a more intuitive decomposition than conventional low rank methods and demonstrate its effectiveness in four applications, including illumination normalization for face images, motion separation for surveillance videos, multi-scale modeling of the dynamic contrast enhanced magnetic resonance imaging and collaborative filtering exploiting age information. PMID:28450978
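As a rough sketch of the convex formulation described above (block partitions and weights written generically here; see the article for the precise norms and parameter guidance), the decomposition solves

\[
\min_{X_1,\dots,X_L}\;\sum_{i=1}^{L}\lambda_i \sum_{b\in\mathcal{P}_i}\bigl\lVert R_b(X_i)\bigr\rVert_{*}
\quad\text{subject to}\quad \sum_{i=1}^{L}X_i = Y,
\]

where Y is the data matrix, \(\mathcal{P}_i\) partitions the matrix into blocks at scale i, \(R_b\) extracts (and reshapes) block b, \(\lVert\cdot\rVert_{*}\) is the nuclear norm, and \(\lambda_i\) are scale-dependent regularization weights.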
A fast solver for the Helmholtz equation based on the generalized multiscale finite-element method
NASA Astrophysics Data System (ADS)
Fu, Shubin; Gao, Kai
2017-11-01
Conventional finite-element methods for solving the acoustic-wave Helmholtz equation in highly heterogeneous media usually require finely discretized mesh to represent the medium property variations with sufficient accuracy. Computational costs for solving the Helmholtz equation can therefore be considerably expensive for complicated and large geological models. Based on the generalized multiscale finite-element theory, we develop a novel continuous Galerkin method to solve the Helmholtz equation in acoustic media with spatially variable velocity and mass density. Instead of using conventional polynomial basis functions, we use multiscale basis functions to form the approximation space on the coarse mesh. The multiscale basis functions are obtained from multiplying the eigenfunctions of a carefully designed local spectral problem with an appropriate multiscale partition of unity. These multiscale basis functions can effectively incorporate the characteristics of heterogeneous media's fine-scale variations, thus enable us to obtain accurate solution to the Helmholtz equation without directly solving the large discrete system formed on the fine mesh. Numerical results show that our new solver can significantly reduce the dimension of the discrete Helmholtz equation system, and can also obviously reduce the computational time.
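For reference, the variable-density acoustic Helmholtz problem being solved can be written (up to the sign convention chosen for the source term) as

\[
\nabla\!\cdot\!\left(\frac{1}{\rho(\mathbf{x})}\nabla u\right)
+\frac{\omega^{2}}{\rho(\mathbf{x})\,c^{2}(\mathbf{x})}\,u = f(\mathbf{x}),
\]

with density \(\rho\), velocity \(c\), and angular frequency \(\omega\). In the generalized multiscale approach described above, the coarse-mesh solution is sought in the span of basis functions of the form \(\psi^{(i)}_{j}=\chi_i\,\phi^{(i)}_{j}\), i.e. local spectral-problem eigenfunctions \(\phi^{(i)}_{j}\) multiplied by a partition-of-unity function \(\chi_i\) on each coarse neighborhood.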
Ice-nucleating particle emissions from photochemically aged diesel and biodiesel exhaust
NASA Astrophysics Data System (ADS)
Schill, G. P.; Jathar, S. H.; Kodros, J. K.; Levin, E. J. T.; Galang, A. M.; Friedman, B.; Link, M. F.; Farmer, D. K.; Pierce, J. R.; Kreidenweis, S. M.; DeMott, P. J.
2016-05-01
Immersion-mode ice-nucleating particle (INP) concentrations from an off-road diesel engine were measured using a continuous-flow diffusion chamber at -30°C. Both petrodiesel and biodiesel were utilized, and the exhaust was aged up to 1.5 photochemically equivalent days using an oxidative flow reactor. We found that aged and unaged diesel exhaust of both fuels is not likely to contribute to atmospheric INP concentrations at mixed-phase cloud conditions. To explore this further, a new limit-of-detection parameterization for ice nucleation on diesel exhaust was developed. Using a global-chemical transport model, potential black carbon INP (INPBC) concentrations were determined using a current literature INPBC parameterization and the limit-of-detection parameterization. Model outputs indicate that the current literature parameterization likely overemphasizes INPBC concentrations, especially in the Northern Hemisphere. These results highlight the need to integrate new INPBC parameterizations into global climate models as generalized INPBC parameterizations are not valid for diesel exhaust.
Radiative flux and forcing parameterization error in aerosol-free clear skies
Pincus, Robert; Mlawer, Eli J.; Oreopoulos, Lazaros; ...
2015-07-03
This article reports on the accuracy in aerosol- and cloud-free conditions of the radiation parameterizations used in climate models. Accuracy is assessed relative to observationally validated reference models for fluxes under present-day conditions and forcing (flux changes) from quadrupled concentrations of carbon dioxide. Agreement among reference models is typically within 1 W/m², while parameterized calculations are roughly half as accurate in the longwave and even less accurate, and more variable, in the shortwave. Absorption of shortwave radiation is underestimated by most parameterizations in the present day and has relatively large errors in forcing. Error in present-day conditions is essentially unrelated to error in forcing calculations. Recent revisions to parameterizations have reduced error in most cases. As a result, a dependence on atmospheric conditions, including integrated water vapor, means that global estimates of parameterization error relevant for the radiative forcing of climate change will require much more ambitious calculations.
Routine human-competitive machine intelligence by means of genetic programming
NASA Astrophysics Data System (ADS)
Koza, John R.; Streeter, Matthew J.; Keane, Martin
2004-01-01
Genetic programming is a systematic method for getting computers to automatically solve a problem. Genetic programming starts from a high-level statement of what needs to be done and automatically creates a computer program to solve the problem. The paper demonstrates that genetic programming (1) now routinely delivers high-return human-competitive machine intelligence; (2) is an automated invention machine; (3) can automatically create a general solution to a problem in the form of a parameterized topology; and (4) has delivered a progression of qualitatively more substantial results in synchrony with five approximately order-of-magnitude increases in the expenditure of computer time. Recent results involving the automatic synthesis of the topology and sizing of analog electrical circuits and controllers demonstrate these points.
Time-ordered product expansions for computational stochastic system biology.
Mjolsness, Eric
2013-06-01
The time-ordered product framework of quantum field theory can also be used to understand salient phenomena in stochastic biochemical networks. It is used here to derive Gillespie's stochastic simulation algorithm (SSA) for chemical reaction networks; consequently, the SSA can be interpreted in terms of Feynman diagrams. It is also used here to derive other, more general simulation and parameter-learning algorithms including simulation algorithms for networks of stochastic reaction-like processes operating on parameterized objects, and also hybrid stochastic reaction/differential equation models in which systems of ordinary differential equations evolve the parameters of objects that can also undergo stochastic reactions. Thus, the time-ordered product expansion can be used systematically to derive simulation and parameter-fitting algorithms for stochastic systems.
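A minimal version of the Gillespie SSA that the abstract refers to is sketched below for a toy two-reaction network; the species, rate constants, and end time are illustrative assumptions.

```python
import numpy as np

def gillespie(x, rates, stoich, t_end, seed=0):
    """Exact SSA trajectory for unimolecular reactions with propensities rate*count."""
    rng = np.random.default_rng(seed)
    t, traj = 0.0, [(0.0, x.copy())]
    while t < t_end:
        props = np.array([rates[0] * x[0], rates[1] * x[1]])   # reaction propensities
        a0 = props.sum()
        if a0 == 0.0:
            break
        t += rng.exponential(1.0 / a0)                         # waiting time to next event
        j = rng.choice(len(props), p=props / a0)               # which reaction fires
        x = x + stoich[j]
        traj.append((t, x.copy()))
    return traj

stoich = np.array([[-1, +1],      # A -> B  (rate k1)
                   [+1, -1]])     # B -> A  (rate k2)
out = gillespie(np.array([100, 0]), rates=[1.0, 0.5], stoich=stoich, t_end=5.0)
print(len(out), out[-1])
```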
Arctic sea ice albedo - A comparison of two satellite-derived data sets
NASA Technical Reports Server (NTRS)
Schweiger, Axel J.; Serreze, Mark C.; Key, Jeffrey R.
1993-01-01
Spatial patterns of mean monthly surface albedo for May, June, and July, derived from DMSP Operational Line Scan (OLS) satellite imagery are compared with surface albedos derived from the International Satellite Cloud Climatology Program (ISCCP) monthly data set. Spatial patterns obtained by the two techniques are in general agreement, especially for June and July. Nevertheless, systematic differences in albedo of 0.05 - 0.10 are noted which are most likely related to uncertainties in the simple parameterizations used in the DMSP analyses, problems in the ISCCP cloud-clearing algorithm and other modeling simplifications. However, with respect to the eventual goal of developing a reliable automated retrieval algorithm for compiling a long-term albedo data base, these initial comparisons are very encouraging.
Reconstruction of 3d Models from Point Clouds with Hybrid Representation
NASA Astrophysics Data System (ADS)
Hu, P.; Dong, Z.; Yuan, P.; Liang, F.; Yang, B.
2018-05-01
The three-dimensional (3D) reconstruction of urban buildings from point clouds has long been an active topic in applications related to human activities. However, because building structures differ significantly in complexity, 3D reconstruction remains challenging, especially for freeform surfaces. In this paper, we present a new reconstruction algorithm that represents the 3D models of buildings as a combination of regular structures and irregular surfaces, where the regular structures are parameterized plane primitives and the irregular surfaces are expressed as meshes. The extraction of irregular surfaces starts with an over-segmentation of the unstructured point data into super-voxels; a region-growing approach based on the adjacency graph of the super-voxels is then applied to merge them, and the freeform surfaces are clustered from the voxels that pass a thickness-threshold filter. To extract the regular planar primitives, the remaining voxels with higher flatness are further divided into multiscale super-voxels as basic units, and the final segmented planes are enriched and refined in a mutually reinforcing manner within a global energy-optimization framework. We implemented the proposed algorithms and tested them mainly on two point clouds that differ in point density and urban character; experimental results on complex building structures illustrate the efficacy of the proposed framework.
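The region-growing step over the super-voxel adjacency graph can be sketched as a breadth-first merge of adjacent super-voxels with nearly parallel normals; the graph, normals, and angular threshold below are illustrative assumptions, and the actual pipeline uses additional criteria (e.g., the thickness filter) described above.

```python
import numpy as np
from collections import deque

def grow_regions(normals, adjacency, angle_thresh_deg=10.0):
    """Label connected groups of super-voxels whose adjacent normals are nearly parallel."""
    cos_t = np.cos(np.radians(angle_thresh_deg))
    labels = -np.ones(len(normals), dtype=int)
    region = 0
    for seed in range(len(normals)):
        if labels[seed] != -1:
            continue
        labels[seed] = region
        queue = deque([seed])
        while queue:
            i = queue.popleft()
            for j in adjacency[i]:
                if labels[j] == -1 and abs(np.dot(normals[i], normals[j])) >= cos_t:
                    labels[j] = region
                    queue.append(j)
        region += 1
    return labels

normals = np.array([[0, 0, 1], [0, 0.02, 1], [1, 0, 0], [1, 0.01, 0]], float)
normals /= np.linalg.norm(normals, axis=1, keepdims=True)
adjacency = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(grow_regions(normals, adjacency))   # -> two near-planar regions: [0 0 1 1]
```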
A photosynthesis-based two-leaf canopy stomatal ...
A coupled photosynthesis-stomatal conductance model with single-layer sunlit and shaded leaf canopy scaling is implemented and evaluated in a diagnostic box model with the Pleim-Xiu land surface model (PX LSM) and ozone deposition model components taken directly from the meteorology and air quality modeling system—WRF/CMAQ (Weather Research and Forecasting model and Community Multiscale Air Quality model). The photosynthesis-based model for the PX LSM (PX PSN) is evaluated at a FLUXNET site against different parameterizations and against the current PX LSM approach, which uses a simple Jarvis function (PX Jarvis). Latent heat flux (LH) from PX PSN is further evaluated at five FLUXNET sites with different vegetation types and landscape characteristics. Simulated ozone deposition and flux from PX PSN are evaluated at one of the sites with ozone flux measurements. Overall, the PX PSN simulates LH as well as the PX Jarvis approach. The PX PSN, however, shows distinct advantages over the PX Jarvis approach for grassland that likely result from its treatment of C3 and C4 plants for CO2 assimilation. Simulations using Moderate Resolution Imaging Spectroradiometer (MODIS) leaf area index (LAI) rather than LAI measured at each site assess how the model would perform with grid-averaged data used in WRF/CMAQ. MODIS LAI estimates degrade model performance at all sites except one with exceptionally old and tall trees. Ozone deposition velocity and ozone flux along with LH
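For orientation, coupled photosynthesis-conductance schemes of this type are commonly built around a Ball-Berry-type relation and a sunlit/shaded split of the canopy of the following general form (the exact PX PSN formulation may differ):

\[
g_s = m\,\frac{A_n\,h_s}{c_s} + b,
\qquad
\mathrm{LAI}_{\mathrm{sun}} = \frac{1-e^{-k_b\,\mathrm{LAI}}}{k_b},
\qquad
\mathrm{LAI}_{\mathrm{shade}} = \mathrm{LAI}-\mathrm{LAI}_{\mathrm{sun}},
\]

where \(g_s\) is stomatal conductance, \(A_n\) net CO2 assimilation, \(h_s\) and \(c_s\) the relative humidity and CO2 concentration at the leaf surface, \(m\) and \(b\) empirical coefficients, and \(k_b\) the direct-beam extinction coefficient used to scale leaf-level fluxes to the sunlit and shaded canopy fractions.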
NASA Astrophysics Data System (ADS)
Jeon, Wonbae; Choi, Yunsoo; Roy, Anirban; Pan, Shuai; Price, Daniel; Hwang, Mi-Kyoung; Kim, Kyu Rang; Oh, Inbo
2018-02-01
Oak pollen concentrations over the Houston-Galveston-Brazoria (HGB) area in southeastern Texas were modeled and evaluated against in-situ data. We modified the Community Multi-scale Air Quality (CMAQ) model to include oak pollen emission, dispersion, and deposition. The Oak Pollen Emission Model (OPEM) calculates gridded oak pollen emissions, which are based on a parameterized equation considering a plant-specific factor (Ce), surface characteristics, and meteorology. The simulation period was chosen to be February 21 to April 30 in the spring of 2010, when the observed monthly mean oak pollen concentrations were the highest in six years (2009-2014). The results indicated that Ce and meteorology played an important role in the calculation of oak pollen emissions. While Ce was critical in determining the magnitude of oak pollen emissions, meteorology determined their variability. In particular, the contribution of the meteorology to the variation in oak pollen emissions increased with the oak pollen emission rate. The evaluation results using in-situ surface data revealed that the model underestimated pollen concentrations and was unable to accurately reproduce the peak pollen episodes. The model error was likely due to uncertainty in the climatology-based Ce used for the estimation of oak pollen emissions and inaccuracy in the wind fields from the Weather Research and Forecasting (WRF) model.
Progress on Implementing Additional Physics Schemes into ...
The U.S. Environmental Protection Agency (USEPA) has a team of scientists developing a next generation air quality modeling system employing the Model for Prediction Across Scales – Atmosphere (MPAS-A) as its meteorological foundation. Several preferred physics schemes and options available in the Weather Research and Forecasting (WRF) model are regularly used by the USEPA with the Community Multiscale Air Quality (CMAQ) model to conduct retrospective air quality simulations. These include the Pleim surface layer, the Pleim-Xiu (PX) land surface model with fractional land use for a 40-class National Land Cover Database (NLCD40), the Asymmetric Convective Model 2 (ACM2) planetary boundary layer scheme, the Kain-Fritsch (KF) convective parameterization with subgrid-scale cloud feedback to the radiation schemes and a scale-aware convective time scale, and analysis nudging four-dimensional data assimilation (FDDA). All of these physics modules and options have already been implemented by the USEPA into MPAS-A v4.0, tested, and evaluated (please see the presentations of R. Gilliam and R. Bullock at this workshop). Since the release of MPAS v5.1 in May 2017, work has been under way to implement these preferred physics options into the MPAS-A v5.1 code. Test simulations of a summer month are being conducted on a global variable resolution mesh with the higher resolution cells centered over the contiguous United States. Driving fields for the FDDA and soil nudging are
Towards the Next Generation Air Quality Modeling System ...
The Community Multiscale Air Quality (CMAQ) model of the U.S. Environmental Protection Agency is one of the most widely used air quality models worldwide; it is employed for both research and regulatory applications at major universities and government agencies to improve understanding of the formation and transport of air pollutants. It is noted, however, that air quality issues and climate change assessments need to be addressed globally, recognizing the linkages and interactions between meteorology and atmospheric chemistry across a wide range of scales. Therefore, an effort is currently underway to develop the next generation air quality modeling system (NGAQM), which will be based on a global, integrated meteorology and chemistry system. The Model for Prediction Across Scales – Atmosphere (MPAS-A), a global fully compressible non-hydrostatic model with seamlessly refined centroidal Voronoi grids, has been chosen as the meteorological driver of this modeling system. The initial step of adapting MPAS-A for the NGAQM was to implement and test the physics parameterizations and options that are preferred for retrospective air quality simulations (see the work presented by R. Gilliam, R. Bullock, and J. Herwehe at this workshop). The next step, presented herein, would be to link the chemistry from CMAQ to MPAS-A to build a prototype for the NGAQM. Furthermore, the techniques to harmonize transport processes between CMAQ and MPAS-A, methodologies to connect the chemis
NASA Astrophysics Data System (ADS)
Christianson, D. S.; Varadharajan, C.; Detto, M.; Faybishenko, B.; Gimenez, B.; Jardine, K.; Negron Juarez, R. I.; Pastorello, G.; Powell, T.; Warren, J.; Wolfe, B.; McDowell, N. G.; Kueppers, L. M.; Chambers, J.; Agarwal, D.
2016-12-01
The U.S. Department of Energy's (DOE) Next Generation Ecosystem Experiment (NGEE) Tropics project aims to develop a process-rich tropical forest ecosystem model that is parameterized and benchmarked by field observations. Thus, data synthesis, quality assurance and quality control (QA/QC), and data product generation of a diverse and complex set of ecohydrological observations, including sapflux, leaf surface temperature, soil water content, and leaf gas exchange from sites across the Tropics, are required to support model simulations. We have developed a metadata reporting framework, implemented in conjunction with the NGEE Tropics Data Archive tool, to enable cross-site and cross-method comparison, data interpretability, and QA/QC. We employed a modified User-Centered Design approach, which involved short development cycles based on user-identified needs, and iterative testing with data providers and users. The metadata reporting framework currently has been implemented for sensor-based observations and leverages several existing metadata protocols. The framework consists of templates that define a multi-scale measurement position hierarchy, descriptions of measurement settings, and details about data collection and data file organization. The framework also enables data providers to define data-access permission settings, provenance, and referencing to enable appropriate data usage, citation, and attribution. In addition to describing the metadata reporting framework, we discuss tradeoffs and impressions from both data providers and users during the development process, focusing on the scalability, usability, and efficiency of the framework.
Development and evaluation of a physics-based windblown ...
A new windblown dust emission treatment was incorporated in the Community Multiscale Air Quality (CMAQ) modeling system. This new model treatment has been built upon previously developed physics-based parameterization schemes from the literature. A distinct and novel feature of this scheme, however, is the incorporation of a newly developed dynamic relation for the surface roughness length relevant to small-scale dust generation processes. Through this implementation, the effect of nonerodible elements on the local flow acceleration, drag partitioning, and surface coverage protection is modeled in a physically based and consistent manner. Careful attention is paid in integrating the new windblown dust treatment in the CMAQ model to ensure that the required input parameters are correctly configured. To test the performance of the new dust module in CMAQ, the entire year 2011 is simulated for the continental United States, with particular emphasis on the southwestern United States (SWUS) where windblown dust concentrations are relatively large. Overall, the model shows good performance with the daily mean bias of soil concentrations fluctuating in the range of ±1 µg m−3 for the entire year. Springtime soil concentrations are in quite good agreement (normalized mean bias of 8.3%) with observations, while moderate to high underestimation of soil concentration is seen in the summertime. The latter is attributed to the issue of representing the convective dust sto
Evaluation and intercomparison of five major dry deposition ...
Dry deposition of various pollutants needs to be quantified in air quality monitoring networks as well as in chemical transport models. The inferential method is the most commonly used approach in which the dry deposition velocity (Vd) is empirically parameterized as a function of meteorological and biological conditions and pollutant species’ chemical properties. Earlier model intercomparison studies suggested that existing dry deposition algorithms produce quite different Vd values, e.g., up to a factor of 2 for monthly to annual average values for ozone, and sulfur and nitrogen species (Flechard et al., 2011; Schwede et al., 2011; Wu et al., 2011). To further evaluate model discrepancies using available flux data, this study compared the five dry deposition algorithms commonly used in North America and evaluated the models using five-year Vd(O3) and Vd(SO2) data generated from concentration gradient measurements above a temperate mixed forest in Canada. The five algorithms include: (1) the one used in the Canadian Air and Precipitation Monitoring Network (CAPMoN) and several Canadian air quality models based on Zhang et al. (2003), (2) the one used in the US Clean Air Status and Trends Network (CASTNET) based on Meyers et al. (1998), (3) the one used in the Community Multiscale Air Quality (CMAQ) model described in Pleim and Ran (2011), (4) the Noah land surface model coupled with a photosynthesis-based Gas Exchange Model (Noah-GEM) described in Wu et a
Regularity properties of singular parameterizations in isogeometric analysis.
Takacs, T; Jüttler, B
2012-11-01
Isogeometric analysis (IGA) is a numerical simulation method which is directly based on the NURBS-based representation of CAD models. It exploits the tensor-product structure of 2- or 3-dimensional NURBS objects to parameterize the physical domain. Hence the physical domain is parameterized with respect to a rectangle or to a cube. Consequently, singularly parameterized NURBS surfaces and NURBS volumes are needed in order to represent non-quadrangular or non-hexahedral domains without splitting, thereby producing a very compact and convenient representation. The Galerkin projection introduces finite-dimensional spaces of test functions in the weak formulation of partial differential equations. In particular, the test functions used in isogeometric analysis are obtained by composing the inverse of the domain parameterization with the NURBS basis functions. In the case of singular parameterizations, however, some of the resulting test functions do not necessarily fulfill the required regularity properties. Consequently, numerical methods for the solution of partial differential equations cannot be applied properly. We discuss the regularity properties of the test functions. For one- and two-dimensional domains we consider several important classes of singularities of NURBS parameterizations. For specific cases we derive additional conditions which guarantee the regularity of the test functions. In addition we present a modification scheme for the discretized function space in case of insufficient regularity. It is also shown how these results can be applied for computational domains in higher dimensions that can be parameterized via sweeping.
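The isoparametric construction the abstract refers to can be stated compactly; the notation below is generic.

\[
\varphi_i \;=\; \hat{B}_i \circ \mathbf{G}^{-1}, \qquad \mathbf{G}:\ \hat{\Omega}=[0,1]^d \to \Omega,
\]

where the \hat{B}_i are the NURBS basis functions on the parameter domain \hat{\Omega} and \mathbf{G} is the (possibly singular) domain parameterization. Where the Jacobian determinant \det\nabla\mathbf{G} vanishes, the composed test functions \varphi_i may lose the Sobolev regularity (for instance \varphi_i \in H^1(\Omega)) required by the weak formulation, which is the situation the paper analyzes.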
Parameterization Interactions in Global Aquaplanet Simulations
NASA Astrophysics Data System (ADS)
Bhattacharya, Ritthik; Bordoni, Simona; Suselj, Kay; Teixeira, João.
2018-02-01
Global climate simulations rely on parameterizations of physical processes that have scales smaller than the resolved ones. In the atmosphere, these parameterizations represent moist convection, boundary layer turbulence and convection, cloud microphysics, longwave and shortwave radiation, and the interaction with the land and ocean surface. These parameterizations can generate different climates involving a wide range of interactions among parameterizations and between the parameterizations and the resolved dynamics. To gain a simplified understanding of a subset of these interactions, we perform aquaplanet simulations with the global version of the Weather Research and Forecasting (WRF) model employing a range (in terms of properties) of moist convection and boundary layer (BL) parameterizations. Significant differences are noted in the simulated precipitation amounts, its partitioning between convective and large-scale precipitation, as well as in the radiative impacts. These differences arise from the way the subcloud physics interacts with convection, both directly and through various pathways involving the large-scale dynamics and the boundary layer, convection, and clouds. A detailed analysis of the profiles of the different tendencies (from the different physical processes) for both potential temperature and water vapor is performed. While different combinations of convection and boundary layer parameterizations can lead to different climates, a key conclusion of this study is that similar climates can be simulated with model versions that are different in terms of the partitioning of the tendencies: the vertically distributed energy and water balances in the tropics can be obtained with significantly different profiles of large-scale, convection, and cloud microphysics tendencies.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kao, C.Y.J.; Bossert, J.E.; Winterkamp, J.
1993-10-01
One of the objectives of the DOE ARM Program is to improve the parameterization of clouds in general circulation models (GCMs). The approach taken in this research is two-fold. We first examine the behavior of cumulus parameterization schemes by comparing their performance against the results from explicit cloud simulations with state-of-the-art microphysics. This is conducted in a two-dimensional (2-D) configuration of an idealized convective system. We then apply the cumulus parameterization schemes to realistic three-dimensional (3-D) simulations over the western US for a case with an enormous amount of convection over an extended period of five days. In the 2-D idealized tests, cloud effects are parameterized in the "parameterization cases" with a coarse resolution, whereas each cloud is explicitly resolved by the "microphysics cases" with a much finer resolution. Thus, the capability of the parameterization schemes in reproducing the growth and life cycle of a convective system can be evaluated. These 2-D tests will form the basis for further 3-D realistic simulations which have a model resolution equivalent to that of the next generation of GCMs. Two cumulus parameterizations are used in this research: the Arakawa-Schubert (A-S) scheme (Arakawa and Schubert, 1974) used in Kao and Ogura (1987) and the Kuo scheme (Kuo, 1974) used in Tremback (1990). The numerical model used in this research is the Regional Atmospheric Modeling System (RAMS) developed at Colorado State University (CSU).
Brain Surface Conformal Parameterization Using Riemann Surface Structure
Wang, Yalin; Lui, Lok Ming; Gu, Xianfeng; Hayashi, Kiralee M.; Chan, Tony F.; Toga, Arthur W.; Thompson, Paul M.; Yau, Shing-Tung
2011-01-01
In medical imaging, parameterized 3-D surface models are useful for anatomical modeling and visualization, statistical comparisons of anatomy, and surface-based registration and signal processing. Here we introduce a parameterization method based on Riemann surface structure, which uses a special curvilinear net structure (conformal net) to partition the surface into a set of patches that can each be conformally mapped to a parallelogram. The resulting surface subdivision and the parameterizations of the components are intrinsic and stable (their solutions tend to be smooth functions and the boundary conditions of the Dirichlet problem can be enforced). Conformal parameterization also helps transform partial differential equations (PDEs) that may be defined on 3-D brain surface manifolds to modified PDEs on a two-dimensional parameter domain. Since the Jacobian matrix of a conformal parameterization is diagonal, the modified PDE on the parameter domain is readily solved. To illustrate our techniques, we computed parameterizations for several types of anatomical surfaces in 3-D magnetic resonance imaging scans of the brain, including the cerebral cortex, hippocampi, and lateral ventricles. For surfaces that are topologically homeomorphic to each other and have similar geometrical structures, we show that the parameterization results are consistent and the subdivided surfaces can be matched to each other. Finally, we present an automatic sulcal landmark location algorithm by solving PDEs on cortical surfaces. The landmark detection results are used as constraints for building conformal maps between surfaces that also match explicitly defined landmarks. PMID:17679336
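The remark about the simplified PDEs rests on a standard property of conformal coordinates, stated here in generic notation.

\[
ds^2 = \lambda(u,v)\,\bigl(du^2 + dv^2\bigr), \qquad
\Delta_{M} f = \frac{1}{\lambda(u,v)} \left( \frac{\partial^2 f}{\partial u^2} + \frac{\partial^2 f}{\partial v^2} \right),
\]

so that, up to the scalar conformal factor \lambda, the Laplace-Beltrami operator on the surface patch reduces to the ordinary Laplacian on the planar parameter domain, which is why the transformed PDEs are readily solved there.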
NASA Astrophysics Data System (ADS)
Swenson, S. C.; Lawrence, D. M.
2014-12-01
Estimating the relative contributions of human withdrawals and climate variability to changes in groundwater is a challenging task at present. One method that has been used recently is a model-data synthesis combining GRACE total water storage estimates with simulated water storage estimates from land surface models. In this method, water storage changes due to natural climate variations simulated by a model are removed from total water storage changes observed by GRACE; the residual is then interpreted as anthropogenic groundwater change. If the modeled water storage estimate contains systematic errors, these errors will also be present in the residual groundwater estimate. For example, simulations performed with the Community Land Model (CLM; the land component of the Community Earth System Model) generally show a weak (as much as 50% smaller) seasonal cycle of water storage in semi-arid regions when compared to GRACE satellite water storage estimates. This bias propagates into GRACE-CLM anthropogenic groundwater change estimates, which then exhibit unphysical seasonal variability. The CLM bias can be traced to the parameterization of soil evaporative resistance. Incorporating a new soil resistance parameterization in CLM greatly reduces the seasonal bias with respect to GRACE. In this study, we compare the improved CLM water storage estimates to GRACE and discuss the implications for estimates of anthropogenic groundwater withdrawal, showing examples for the Middle East and Southwestern United States.
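The model-data synthesis described above amounts to a simple residual calculation; the sketch below is a schematic of that bookkeeping, not the authors' processing chain, and assumes the two anomaly series are already on a common grid and time base.

import numpy as np

def anthropogenic_storage_change(grace_tws_anomaly, model_tws_anomaly):
    # Residual estimate of human-driven water storage change: the total water
    # storage anomaly observed by GRACE minus the natural variability simulated
    # by the land surface model.  Any systematic model bias (for example a
    # seasonal cycle that is too weak) is inherited by this residual.
    grace = np.asarray(grace_tws_anomaly, dtype=float)
    model = np.asarray(model_tws_anomaly, dtype=float)
    return grace - model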
A Dynamic Model of Mercury's Magnetospheric Magnetic Field
Johnson, Catherine L.; Philpott, Lydia; Tsyganenko, Nikolai A.; Anderson, Brian J.
2017-01-01
Mercury's solar wind and interplanetary magnetic field environment is highly dynamic, and variations in these external conditions directly control the current systems and magnetic fields inside the planetary magnetosphere. We update our previous static model of Mercury's magnetic field by incorporating variations in the magnetospheric current systems, parameterized as functions of Mercury's heliocentric distance and magnetic activity. The new, dynamic model reproduces the location of the magnetopause current system as a function of systematic pressure variations encountered during Mercury's eccentric orbit, as well as the increase in the cross-tail current intensity with increasing magnetic activity. Despite the enhancements in the external field parameterization, the residuals between the observed and modeled magnetic field inside the magnetosphere indicate that the dynamic model achieves only a modest overall improvement over the previous static model. The spatial distribution of the residuals in the magnetic field components shows substantial improvement of the model accuracy near the dayside magnetopause. Elsewhere, the large-scale distribution of the residuals is similar to that of the static model. This result implies either that magnetic activity varies much faster than can be determined from the spacecraft's passage through the magnetosphere or that the residual fields are due to additional external current systems not represented in the model, or both. Birkeland currents flowing along magnetic field lines between the magnetosphere and planetary high-latitude regions have been identified as one such contribution. PMID:29263560
Impact of Apex Model parameterization strategy on estimated benefit of conservation practices
USDA-ARS?s Scientific Manuscript database
Three parameterized Agriculture Policy Environmental eXtender (APEX) models for corn-soybean rotation on clay pan soils were developed with two objectives: 1. evaluate model performance of three parameterization strategies on a validation watershed; and 2. compare predictions of water quality benefi...
Single-Column Modeling, GCM Parameterizations and Atmospheric Radiation Measurement Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Somerville, R.C.J.; Iacobellis, S.F.
2005-03-18
Our overall goal is identical to that of the Atmospheric Radiation Measurement (ARM) Program: the development of new and improved parameterizations of cloud-radiation effects and related processes, using ARM data at all three ARM sites, and the implementation and testing of these parameterizations in global and regional models. To test recently developed prognostic parameterizations based on detailed cloud microphysics, we have first compared single-column model (SCM) output with ARM observations at the Southern Great Plains (SGP), North Slope of Alaska (NSA) and Tropical Western Pacific (TWP) sites. We focus on the predicted cloud amounts and on a suite of radiative quantities strongly dependent on clouds, such as downwelling surface shortwave radiation. Our results demonstrate the superiority of parameterizations based on comprehensive treatments of cloud microphysics and cloud-radiative interactions. At the SGP and NSA sites, the SCM results simulate the ARM measurements well and are demonstrably more realistic than typical parameterizations found in conventional operational forecasting models. At the TWP site, the model performance depends strongly on details of the scheme, and the results of our diagnostic tests suggest ways to develop improved parameterizations better suited to simulating cloud-radiation interactions in the tropics generally. These advances have made it possible to take the next step and build on this progress, by incorporating our parameterization schemes in state-of-the-art 3D atmospheric models, and diagnosing and evaluating the results using independent data. Because the improved cloud-radiation results have been obtained largely via implementing detailed and physically comprehensive cloud microphysics, we anticipate that improved predictions of hydrologic cycle components, and hence of precipitation, may also be achievable. We are currently testing the performance of our ARM-based parameterizations in state-of-the-art global and regional models. One fruitful strategy for evaluating advances in parameterizations has turned out to be using short-range numerical weather prediction as a test-bed within which to implement and improve parameterizations for modeling and predicting climate variability. The global models we have used to date are the CAM atmospheric component of the National Center for Atmospheric Research (NCAR) CCSM climate model as well as the National Centers for Environmental Prediction (NCEP) numerical weather prediction model, thus allowing testing in both climate simulation and numerical weather prediction modes. We present detailed results of these tests, demonstrating the sensitivity of model performance to changes in parameterizations.
1997-09-30
research is multiscale, interdisciplinary and generic. The methods are applicable to an arbitrary region of the coastal and/or deep ocean and across the...dynamics. OBJECTIVES: General objectives are: (I) To determine for the coastal and/or coupled deep ocean the multiscale processes which occur: i) in...Straits and the eastern basin; iii) extension and application of our balance of terms scheme (EVA) to multiscale, interdisciplinary fields with data
Brad C. Timm; Kevin McGarigal; Samuel A. Cushman; Joseph L. Ganey
2016-01-01
The efficacy of future habitat selection studies will benefit from taking a multi-scale approach. In addition to potentially providing increased explanatory power and predictive capacity, multi-scale habitat models enhance our understanding of the scales at which species respond to their environment, which is critical knowledge required to implement effective...
1990-02-21
LIDS-P-1953, Multiscale System Theory, Albert Benveniste (IRISA-INRIA, Campus de Beaulieu, 35042 Rennes Cedex, France) and Ramine Nikoukhah (INRIA)...the development of a corresponding system theory and a theory of stochastic processes and their estimation. The research presented in this and several
Characterization of Cyclohexanone Inclusions in Class 1 RDX
2014-06-01
characterized with respect to solvent inclusions in support of a U.S. Army Research Laboratory (ARL) program to model Multiscale Response of Energetic...pertinent to their modeling effort under the Multiscale Response of Energetic Materials (MREM) program, and the Weapons and Materials Research...support of a U.S. Army Research Laboratory (ARL) initiative called "Multiscale Modeling of Energetic Materials" (MREM). The MREM program aims, for
Statistical Field Estimation for Complex Coastal Regions and Archipelagos (PREPRINT)
2011-04-09
and study the computational properties of these schemes. Specifically, we extend a multiscale Objective Analysis (OA) approach to complex coastal regions and... multiscale free-surface code builds on the primitive-equation model of the Harvard Ocean Prediction System (HOPS, Haley et al. (2009)). Additionally
2016-07-15
AFRL-AFOSR-JP-TR-2016-0068, Multi-scale Computational Electromagnetics for Phenomenology and Saliency Characterization in Remote Sensing, Hean-Teik... electromagnetics to the application in microwave remote sensing as well as extension of modelling capability with computational flexibility to study
Improved parameterization for the vertical flux of dust aerosols emitted by an eroding soil
USDA-ARS?s Scientific Manuscript database
The representation of the dust cycle in atmospheric circulation models hinges on an accurate parameterization of the vertical dust flux at emission. However, existing parameterizations of the vertical dust flux vary substantially in their scaling with wind friction velocity, require input parameters...
Climate and the equilibrium state of land surface hydrology parameterizations
NASA Technical Reports Server (NTRS)
Entekhabi, Dara; Eagleson, Peter S.
1991-01-01
For given climatic rates of precipitation and potential evaporation, the land surface hydrology parameterizations of atmospheric general circulation models will maintain soil-water storage conditions that balance the moisture input and output. The surface relative soil saturation for such climatic conditions serves as a measure of the land surface parameterization state under a given forcing. The equilibrium value of this variable for alternative parameterizations of land surface hydrology is determined as a function of climate, and the sensitivity of the surface to shifts and changes in climatic forcing is estimated.
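The equilibrium notion used here can be made concrete with a small sketch: given climatic precipitation and potential evaporation, the equilibrium relative saturation is the value at which moisture input balances output. The evaporation-efficiency and runoff functions below are placeholders for whatever forms a particular land surface parameterization uses, and the sketch assumes the imbalance decreases monotonically with saturation.

def equilibrium_saturation(precip, pot_evap, evap_efficiency, runoff, tol=1e-8):
    # Solve P = E(s) + R(s) for the relative soil saturation s* in [0, 1]
    # by bisection, assuming input minus output decreases as s increases.
    def imbalance(s):
        return precip - pot_evap * evap_efficiency(s) - runoff(s)
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if imbalance(mid) > 0.0:   # input still exceeds output: equilibrium is wetter
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Example with toy closures: linear evaporation efficiency and a cubic runoff law.
s_star = equilibrium_saturation(2.0, 5.0, lambda s: s, lambda s: 1.0 * s**3)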
Cross-Section Parameterizations for Pion and Nucleon Production From Negative Pion-Proton Collisions
NASA Technical Reports Server (NTRS)
Norbury, John W.; Blattnig, Steve R.; Norman, Ryan; Tripathi, R. K.
2002-01-01
Ranft has provided parameterizations of Lorentz invariant differential cross sections for pion and nucleon production in pion-proton collisions, which are compared here to some recent data. The Ranft parameterizations are then numerically integrated to form spectral and total cross sections. These numerical integrations are further parameterized to provide formulas for spectral and total cross sections suitable for use in radiation transport codes. The reactions analyzed have charged pions in the initial state and both charged and neutral pions in the final state.
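The reduction from a Lorentz-invariant differential cross section to spectral and total cross sections is standard and can be written compactly; the relations below use generic notation and do not reproduce the Ranft parameterizations themselves.

\[
\frac{d\sigma}{dE} \;=\; p \int_{4\pi} \left( E\,\frac{d^{3}\sigma}{dp^{3}} \right) d\Omega,
\qquad
\sigma \;=\; \int \frac{d\sigma}{dE}\, dE,
\qquad E^{2} = p^{2} + m^{2},
\]

where the bracketed quantity is the Lorentz-invariant differential cross section being parameterized, and the fitted forms of dσ/dE and σ are what enter the radiation transport codes.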
Anisotropic Shear Dispersion Parameterization for Mesoscale Eddy Transport
NASA Astrophysics Data System (ADS)
Reckinger, S. J.; Fox-Kemper, B.
2016-02-01
The effects of mesoscale eddies are universally treated isotropically in general circulation models. However, the processes that the parameterization approximates, such as shear dispersion, typically have strongly anisotropic characteristics. The Gent-McWilliams/Redi mesoscale eddy parameterization is extended for anisotropy and tested using 1-degree Community Earth System Model (CESM) simulations. The sensitivity of the model to anisotropy includes a reduction of temperature and salinity biases, a deepening of the southern ocean mixed-layer depth, and improved ventilation of biogeochemical tracers, particularly in oxygen minimum zones. The parameterization is further extended to include the effects of unresolved shear dispersion, which sets the strength and direction of anisotropy. The shear dispersion parameterization is similar to drifter observations in spatial distribution of diffusivity and high-resolution model diagnosis in the distribution of eddy flux orientation.
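The extension from isotropic to anisotropic mesoscale mixing can be summarized schematically; the notation below is generic and not the exact CESM implementation.

\[
\mathbf{F} = -\,\boldsymbol{\kappa}\,\nabla\tau, \qquad
\boldsymbol{\kappa} = \mathbf{R}(\theta)
\begin{pmatrix} \kappa_{\parallel} & 0 \\ 0 & \kappa_{\perp} \end{pmatrix}
\mathbf{R}(\theta)^{\top},
\]

where \tau is the tracer, the isotropic case corresponds to \kappa_{\parallel} = \kappa_{\perp}, and in the anisotropic scheme the orientation \theta and the ratio \kappa_{\parallel}/\kappa_{\perp} are set by the parameterized (unresolved) shear dispersion rather than prescribed as constants.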
Controllers, observers, and applications thereof
NASA Technical Reports Server (NTRS)
Gao, Zhiqiang (Inventor); Zhou, Wankun (Inventor); Miklosovic, Robert (Inventor); Radke, Aaron (Inventor); Zheng, Qing (Inventor)
2011-01-01
Controller scaling and parameterization are described. Techniques that can be improved by employing the scaling and parameterization include, but are not limited to, controller design, tuning and optimization. The scaling and parameterization methods described here apply to transfer function based controllers, including PID controllers. The parameterization methods also apply to state feedback and state observer based controllers, as well as linear active disturbance rejection (ADRC) controllers. Parameterization simplifies the use of ADRC. A discrete extended state observer (DESO) and a generalized extended state observer (GESO) are described. They improve the performance of the ESO and therefore ADRC. A tracking control algorithm is also described that improves the performance of the ADRC controller. A general algorithm is described for applying ADRC to multi-input multi-output systems. Several specific applications of the control systems and processes are disclosed.
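As one concrete illustration of controller parameterization in this spirit, the widely cited bandwidth parameterization of a second-order ADRC loop reduces all tuning to two bandwidths; the formulas below follow that published convention and are not necessarily the exact expressions claimed in the patent.

def adrc_bandwidth_gains(controller_bw, observer_bw):
    # Bandwidth parameterization of a second-order ADRC loop: the PD-like
    # controller gains and the third-order extended state observer (ESO)
    # gains are all expressed through two bandwidths (rad/s).
    kp = controller_bw ** 2
    kd = 2.0 * controller_bw
    beta1 = 3.0 * observer_bw
    beta2 = 3.0 * observer_bw ** 2
    beta3 = observer_bw ** 3
    return (kp, kd), (beta1, beta2, beta3)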
NASA Astrophysics Data System (ADS)
Pincus, R.; Mlawer, E. J.
2017-12-01
Radiation is a key process in numerical models of the atmosphere. The problem is well understood, and the parameterization of radiation has seen relatively few conceptual advances in the past 15 years. It is nonetheless often the single most expensive component of all physical parameterizations despite being computed less frequently than other terms. This combination of cost and maturity suggests value in a single radiation parameterization that could be shared across models; devoting effort to a single parameterization might allow for fine tuning for efficiency. The challenge lies in the coupling of this parameterization to many disparate representations of clouds and aerosols. This talk will describe RRTMGP, a new radiation parameterization that seeks to balance efficiency and flexibility. This balance is struck by isolating computational tasks in "kernels" that expose as much fine-grained parallelism as possible. These have simple interfaces and are interoperable across programming languages so that they might be replaced by alternative implementations in domain-specific languages. Coupling to the host model makes use of object-oriented features of Fortran 2003, minimizing branching within the kernels and the amount of data that must be transferred. We will show accuracy and efficiency results for a globally representative set of atmospheric profiles using a relatively high-resolution spectral discretization.
Vařeková, Radka Svobodová; Jiroušková, Zuzana; Vaněk, Jakub; Suchomel, Šimon; Koča, Jaroslav
2007-01-01
The Electronegativity Equalization Method (EEM) is a fast approach for charge calculation. A challenging part of the EEM is the parameterization, which is performed using ab initio charges obtained for a set of molecules. The goal of our work was to perform the EEM parameterization for selected sets of organic, organohalogen and organometal molecules. We have performed the most robust parameterization published so far. The EEM parameterization was based on 12 training sets selected from a database of predicted 3D structures (NCI DIS) and from a database of crystallographic structures (CSD). Each set contained from 2000 to 6000 molecules. We have shown that the number of molecules in the training set is very important for the quality of the parameters. We have improved the EEM parameters (STO-3G MPA charges) for elements that were already parameterized, specifically C, O, N, H, S, F and Cl. The new parameters provide more accurate charges than those published previously. We have also developed new parameters for elements that had not yet been parameterized, specifically Br, I, Fe and Zn. We have also performed crossover validation of all obtained parameters using all training sets that included the relevant elements and confirmed that the calculated parameters provide accurate charges.
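For context, the EEM charges themselves come from a small linear system; the sketch below shows the standard textbook formulation, with A and B the fitted per-atom parameters that the parameterization work described above produces (the fitted values themselves are not reproduced here, and kappa is a placeholder screening constant).

import numpy as np

def eem_charges(A, B, R, total_charge=0.0, kappa=1.0):
    # Electronegativity Equalization Method: enforce an equal effective
    # electronegativity chi_bar on every atom,
    #   A_i + B_i*q_i + kappa * sum_{j != i} q_j / R_ij = chi_bar,
    # together with the total-charge constraint sum_i q_i = total_charge,
    # and solve the resulting (n+1) x (n+1) linear system for q and chi_bar.
    n = len(A)
    M = np.zeros((n + 1, n + 1))
    rhs = np.zeros(n + 1)
    for i in range(n):
        M[i, i] = B[i]
        for j in range(n):
            if j != i:
                M[i, j] = kappa / R[i, j]
        M[i, n] = -1.0
        rhs[i] = -A[i]
    M[n, :n] = 1.0
    rhs[n] = total_charge
    solution = np.linalg.solve(M, rhs)
    return solution[:n], solution[n]   # atomic charges, equalized electronegativity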
Spectral cumulus parameterization based on cloud-resolving model
NASA Astrophysics Data System (ADS)
Baba, Yuya
2018-02-01
We have developed a spectral cumulus parameterization using a cloud-resolving model. It includes a new parameterization of the entrainment rate, which was derived from analysis of the cloud properties obtained from the cloud-resolving model simulation and is valid for both shallow and deep convection. The new scheme was examined in a single-column model experiment and compared with the existing parameterization of Gregory (2001, Q J R Meteorol Soc 127:53-72) (GR scheme). The results showed that the GR scheme simulated shallower and more dilute convection than the new scheme. To further validate the physical performance of the parameterizations, Atmospheric Model Intercomparison Project (AMIP) experiments were performed, and the results were compared with reanalysis data. The new scheme performed better than the GR scheme in terms of the mean state and variability of the atmospheric circulation, i.e., the new scheme reduced the positive bias of precipitation in the western Pacific region and the positive bias of outgoing shortwave radiation over the ocean. The new scheme also better simulated the features of convectively coupled equatorial waves and the Madden-Julian oscillation. These improvements were found to derive from the modified parameterization of the entrainment rate, i.e., the proposed parameterization suppressed an excessive increase of entrainment, thereby suppressing an excessive increase of low-level clouds.
NASA Astrophysics Data System (ADS)
Engel, Dave W.; Reichardt, Thomas A.; Kulp, Thomas J.; Graff, David L.; Thompson, Sandra E.
2016-05-01
Validating predictive models and quantifying uncertainties inherent in the modeling process is a critical component of the HARD Solids Venture program [1]. Our current research focuses on validating physics-based models predicting the optical properties of solid materials for arbitrary surface morphologies and characterizing the uncertainties in these models. We employ a systematic and hierarchical approach by designing physical experiments and comparing the experimental results with the outputs of computational predictive models. We illustrate this approach through an example comparing a micro-scale forward model to an idealized solid-material system and then propagating the results through a system model to the sensor level. Our efforts should enhance detection reliability of the hyper-spectral imaging technique and the confidence in model utilization and model outputs by users and stakeholders.
Network Intrusion Detection and Visualization using Aggregations in a Cyber Security Data Warehouse
DOE Office of Scientific and Technical Information (OSTI.GOV)
Czejdo, Bogdan; Ferragut, Erik M; Goodall, John R
2012-01-01
The challenge of achieving situational understanding is a limiting factor in effective, timely, and adaptive cyber-security analysis. Anomaly detection fills a critical role in network assessment and trend analysis, both of which underlie the establishment of comprehensive situational understanding. To that end, we propose a cyber security data warehouse implemented as a hierarchical graph of aggregations that captures anomalies at multiple scales. Each node of our proposed graph is a summarization table of cyber event aggregations, and the edges are aggregation operators. The cyber security data warehouse enables domain experts to quickly traverse a multi-scale aggregation space systematically. We describe the architecture of a test bed system and a summary of results on the IEEE VAST 2012 Cyber Forensics data.
NASA Technical Reports Server (NTRS)
Bednarcyk, Brett A.; Arnold, Steven M.
2012-01-01
A framework for the multiscale design and analysis of composite materials and structures is presented. The ImMAC software suite, developed at NASA Glenn Research Center, embeds efficient, nonlinear micromechanics capabilities within higher scale structural analysis methods such as finite element analysis. The result is an integrated, multiscale tool that relates global loading to the constituent scale, captures nonlinearities at this scale, and homogenizes local nonlinearities to predict their effects at the structural scale. Example applications of the multiscale framework are presented for the stochastic progressive failure of a SiC/Ti composite tensile specimen and the effects of microstructural variations on the nonlinear response of woven polymer matrix composites.
Multiscale modeling of a low magnetostrictive Fe-27wt%Co-0.5wt%Cr alloy
NASA Astrophysics Data System (ADS)
Savary, M.; Hubert, O.; Helbert, A. L.; Baudin, T.; Batonnet, R.; Waeckerlé, T.
2018-05-01
The present paper deals with the improvement of a multi-scale approach describing the magneto-mechanical coupling of Fe-27wt%Co-0.5wt%Cr alloy. The magnetostriction behavior is demonstrated as very different (low magnetostriction vs. high magnetostriction) when this material is submitted to two different final annealing conditions after cold rolling. The numerical data obtained from a multi-scale approach are in accordance with experimental data corresponding to the high magnetostriction level material. A bi-domain structure hypothesis is employed to explain the low magnetostriction behavior, in accordance with the effect of an applied tensile stress. A modification of the multiscale approach is proposed to match this result.
Generalization Performance of Regularized Ranking With Multiscale Kernels.
Zhou, Yicong; Chen, Hong; Lan, Rushi; Pan, Zhibin
2016-05-01
The regularized kernel method for the ranking problem has attracted increasing attention in machine learning. Previous regularized ranking algorithms are usually based on reproducing kernel Hilbert spaces with a single kernel. In this paper, we go beyond this framework by investigating the generalization performance of regularized ranking with multiscale kernels. A novel ranking algorithm with multiscale kernels is proposed and its representer theorem is proved. We establish the upper bound of the generalization error in terms of the complexity of hypothesis spaces. It shows that the multiscale ranking algorithm can achieve satisfactory learning rates under mild conditions. Experiments demonstrate the effectiveness of the proposed method for drug discovery and recommendation tasks.
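In generic notation, a regularized pairwise ranking estimator with a multiscale kernel takes the form below; the specific loss and kernel combination used in the paper may differ, so this is only an orientation for the reader.

\[
f^{*} = \arg\min_{f \in \mathcal{H}_{K}} \;
\frac{1}{n(n-1)} \sum_{i \neq j} \ell\bigl(y_i - y_j,\; f(x_i) - f(x_j)\bigr)
\;+\; \lambda \,\|f\|_{K}^{2},
\qquad
K = \sum_{m=1}^{M} \mu_m \, K_{\sigma_m},
\]

where each K_{\sigma_m} is, for example, a Gaussian kernel with width \sigma_m, so the hypothesis space is built from kernels at several scales rather than from a single kernel.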
Multiscale hidden Markov models for photon-limited imaging
NASA Astrophysics Data System (ADS)
Nowak, Robert D.
1999-06-01
Photon-limited image analysis is often hindered by low signal-to-noise ratios. A novel Bayesian multiscale modeling and analysis method is developed in this paper to assist in these challenging situations. In addition to providing a very natural and useful framework for modeling and processing images, Bayesian multiscale analysis is often much less computationally demanding compared to classical Markov random field models. This paper focuses on a probabilistic graph model called the multiscale hidden Markov model (MHMM), which captures the key inter-scale dependencies present in natural image intensities. The MHMM framework presented here is specifically designed for photon-limited imaging applications involving Poisson statistics, and applications to image intensity analysis are examined.
Deep multi-scale convolutional neural network for hyperspectral image classification
NASA Astrophysics Data System (ADS)
Zhang, Feng-zhe; Yang, Xia
2018-04-01
In this paper, we propose a multi-scale convolutional neural network for the hyperspectral image classification task. Firstly, compared with conventional convolution, we utilize multi-scale convolutions, which possess larger receptive fields, to extract the spectral features of the hyperspectral image. We design a deep neural network with a multi-scale convolution layer that contains 3 different convolution kernel sizes. Secondly, to avoid overfitting of the deep neural network, dropout is utilized, which randomly deactivates neurons and contributes a modest improvement in classification accuracy. In addition, deep learning techniques such as the ReLU activation are utilized in this paper. We conduct experiments on the University of Pavia and Salinas datasets and obtain better classification accuracy compared with other methods.
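A minimal PyTorch sketch of the described building block is given below; the kernel sizes (1, 3, 5), channel counts, and dropout rate are illustrative choices rather than the authors' exact configuration.

import torch
import torch.nn as nn

class MultiScaleConv(nn.Module):
    def __init__(self, in_channels, out_channels_per_branch):
        super().__init__()
        # Three parallel branches with different receptive fields.
        self.branches = nn.ModuleList([
            nn.Conv2d(in_channels, out_channels_per_branch, k, padding=k // 2)
            for k in (1, 3, 5)
        ])
        self.act = nn.ReLU(inplace=True)
        self.drop = nn.Dropout(p=0.5)   # regularization against overfitting

    def forward(self, x):
        # Concatenate the three scales along the channel dimension.
        y = torch.cat([branch(x) for branch in self.branches], dim=1)
        return self.drop(self.act(y))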
NASA Astrophysics Data System (ADS)
White, Jeremy; Stengel, Victoria; Rendon, Samuel; Banta, John
2017-08-01
Computer models of hydrologic systems are frequently used to investigate the hydrologic response of land-cover change. If the modeling results are used to inform resource-management decisions, then providing robust estimates of uncertainty in the simulated response is an important consideration. Here we examine the importance of parameterization, a necessarily subjective process, on uncertainty estimates of the simulated hydrologic response of land-cover change. Specifically, we applied the soil water assessment tool (SWAT) model to a 1.4 km2 watershed in southern Texas to investigate the simulated hydrologic response of brush management (the mechanical removal of woody plants), a discrete land-cover change. The watershed was instrumented before and after brush-management activities were undertaken, and estimates of precipitation, streamflow, and evapotranspiration (ET) are available; these data were used to condition and verify the model. The role of parameterization in brush-management simulation was evaluated by constructing two models, one with 12 adjustable parameters (reduced parameterization) and one with 1305 adjustable parameters (full parameterization). Both models were subjected to global sensitivity analysis as well as Monte Carlo and generalized likelihood uncertainty estimation (GLUE) conditioning to identify important model inputs and to estimate uncertainty in several quantities of interest related to brush management. Many realizations from both parameterizations were identified as behavioral in that they reproduce daily mean streamflow acceptably well according to Nash-Sutcliffe model efficiency coefficient, percent bias, and coefficient of determination. However, the total volumetric ET difference resulting from simulated brush management remains highly uncertain after conditioning to daily mean streamflow, indicating that streamflow data alone are not sufficient to inform the model inputs that influence the simulated outcomes of brush management the most. Additionally, the reduced-parameterization model grossly underestimates uncertainty in the total volumetric ET difference compared to the full-parameterization model; total volumetric ET difference is a primary metric for evaluating the outcomes of brush management. The failure of the reduced-parameterization model to provide robust uncertainty estimates demonstrates the importance of parameterization when attempting to quantify uncertainty in land-cover change simulations.
White, Jeremy; Stengel, Victoria G.; Rendon, Samuel H.; Banta, John
2017-01-01
Computer models of hydrologic systems are frequently used to investigate the hydrologic response of land-cover change. If the modeling results are used to inform resource-management decisions, then providing robust estimates of uncertainty in the simulated response is an important consideration. Here we examine the importance of parameterization, a necessarily subjective process, on uncertainty estimates of the simulated hydrologic response of land-cover change. Specifically, we applied the soil water assessment tool (SWAT) model to a 1.4 km2 watershed in southern Texas to investigate the simulated hydrologic response of brush management (the mechanical removal of woody plants), a discrete land-cover change. The watershed was instrumented before and after brush-management activities were undertaken, and estimates of precipitation, streamflow, and evapotranspiration (ET) are available; these data were used to condition and verify the model. The role of parameterization in brush-management simulation was evaluated by constructing two models, one with 12 adjustable parameters (reduced parameterization) and one with 1305 adjustable parameters (full parameterization). Both models were subjected to global sensitivity analysis as well as Monte Carlo and generalized likelihood uncertainty estimation (GLUE) conditioning to identify important model inputs and to estimate uncertainty in several quantities of interest related to brush management. Many realizations from both parameterizations were identified as behavioral in that they reproduce daily mean streamflow acceptably well according to Nash–Sutcliffe model efficiency coefficient, percent bias, and coefficient of determination. However, the total volumetric ET difference resulting from simulated brush management remains highly uncertain after conditioning to daily mean streamflow, indicating that streamflow data alone are not sufficient to inform the model inputs that influence the simulated outcomes of brush management the most. Additionally, the reduced-parameterization model grossly underestimates uncertainty in the total volumetric ET difference compared to the full-parameterization model; total volumetric ET difference is a primary metric for evaluating the outcomes of brush management. The failure of the reduced-parameterization model to provide robust uncertainty estimates demonstrates the importance of parameterization when attempting to quantify uncertainty in land-cover change simulations.
NASA Astrophysics Data System (ADS)
Liu, Weixin; Jin, Ningde; Han, Yunfeng; Ma, Jing
2018-06-01
In the present study, a multi-scale entropy algorithm was used to characterise the complex flow phenomena of turbulent droplets in high water-cut oil-water two-phase flow. First, we compared multi-scale weighted permutation entropy (MWPE), multi-scale approximate entropy (MAE), multi-scale sample entropy (MSE) and multi-scale complexity measure (MCM) for typical nonlinear systems. The results show that MWPE exhibits satisfactory variability with scale and good noise robustness. Accordingly, we conducted an experiment on vertical upward oil-water two-phase flow with high water-cut and collected the signals of a high-resolution microwave resonant sensor, from which two indexes, the entropy rate and the mean value of MWPE, were extracted. In addition, the effects of total flow rate and water-cut on these two indexes were analysed. Our research shows that MWPE is an effective method for uncovering the dynamic instability of oil-water two-phase flow with high water-cut.
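For orientation, the coarse-graining and ordinal-pattern steps shared by the multi-scale entropy family are sketched below in Python; the weighted variant (MWPE) additionally weights each ordinal pattern by the variance of its segment, which is omitted here for brevity, and the sketch assumes the series is long enough at every scale.

import numpy as np
from itertools import permutations

def coarse_grain(x, scale):
    # Non-overlapping averages of length `scale` (standard multi-scale coarse-graining).
    n = len(x) // scale
    return np.asarray(x[:n * scale]).reshape(n, scale).mean(axis=1)

def permutation_entropy(x, order=3):
    # Shannon entropy of the ordinal patterns of embedding dimension `order`.
    patterns = {p: 0 for p in permutations(range(order))}
    for i in range(len(x) - order + 1):
        patterns[tuple(np.argsort(x[i:i + order]))] += 1
    counts = np.array([c for c in patterns.values() if c > 0], dtype=float)
    probs = counts / counts.sum()
    return -np.sum(probs * np.log(probs))

def multiscale_pe(x, scales=range(1, 21), order=3):
    return [permutation_entropy(coarse_grain(x, s), order) for s in scales]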
Chiverton, John P; Ige, Olubisi; Barnett, Stephanie J; Parry, Tony
2017-11-01
This paper is concerned with the modeling and analysis of the orientation of and distance between steel fibers in X-ray micro-tomography data. The advantage of combining both orientation and separation in a model is that it provides a detailed, readily comparable description of how the steel fibers are arranged. The developed models are designed to summarize the randomness of the orientation distribution of the steel fibers both locally and across an entire volume based on multiscale entropy. Theoretical modeling, simulation, and application to real imaging data are shown here. The theoretical modeling of multiscale entropy for orientation includes a proof giving the final form of the multiscale entropy taken over a linear range of scales. A series of image processing operations are also included to overcome inter-slice connectivity issues and help derive the statistical descriptions of the orientation distributions of the steel fibers. The results demonstrate that multiscale entropy provides unique insights into both simulated and real imaging data of steel fiber reinforced concrete.
Multiscale modeling and simulation of brain blood flow
NASA Astrophysics Data System (ADS)
Perdikaris, Paris; Grinberg, Leopold; Karniadakis, George Em
2016-02-01
The aim of this work is to present an overview of recent advances in multi-scale modeling of brain blood flow. In particular, we present some approaches that enable the in silico study of multi-scale and multi-physics phenomena in the cerebral vasculature. We discuss the formulation of continuum and atomistic modeling approaches, present a consistent framework for their concurrent coupling, and list some of the challenges that one needs to overcome in achieving a seamless and scalable integration of heterogeneous numerical solvers. The effectiveness of the proposed framework is demonstrated in a realistic case involving modeling the thrombus formation process taking place on the wall of a patient-specific cerebral aneurysm. This highlights the ability of multi-scale algorithms to resolve important biophysical processes that span several spatial and temporal scales, potentially yielding new insight into the key aspects of brain blood flow in health and disease. Finally, we discuss open questions in multi-scale modeling and emerging topics of future research.
NASA Technical Reports Server (NTRS)
Saether, Erik; Hochhalter, Jacob D.; Glaessgen, Edward H.
2012-01-01
A multiscale modeling methodology that combines the predictive capability of discrete dislocation plasticity and the computational efficiency of continuum crystal plasticity is developed. Single crystal configurations of different grain sizes modeled with periodic boundary conditions are analyzed using discrete dislocation plasticity (DD) to obtain grain size-dependent stress-strain predictions. These relationships are mapped into crystal plasticity parameters to develop a multiscale DD/CP model for continuum level simulations. A polycrystal model of a structurally-graded microstructure is developed, analyzed and used as a benchmark for comparison between the multiscale DD/CP model and the DD predictions. The multiscale DD/CP model follows the DD predictions closely up to an initial peak stress and then follows a strain hardening path that is parallel but somewhat offset from the DD predictions. The difference is believed to be from a combination of the strain rate in the DD simulation and the inability of the DD/CP model to represent non-monotonic material response.
Using the PORS Problems to Examine Evolutionary Optimization of Multiscale Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Reinhart, Zachary; Molian, Vaelan; Bryden, Kenneth
2013-01-01
Nearly all systems of practical interest are composed of parts assembled across multiple scales. For example, an agrodynamic system is composed of flora and fauna on one scale; soil types, slope, and water runoff on another scale; and management practice and yield on another. Or consider an advanced coal-fired power plant: combustion and pollutant formation occur on one scale, the plant components on another scale, and the overall performance of the power system is measured on yet another. In spite of this, there are few practical tools for the optimization of multiscale systems. This paper examines multiscale optimization of systems composed of discrete elements using the plus-one-recall-store (PORS) problem as a test case or study problem for multiscale systems. From this study, it is found that by recognizing the constraints and patterns present in discrete multiscale systems, the solution time can be significantly reduced and much more complex problems can be optimized.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Emery, John M.; Coffin, Peter; Robbins, Brian A.
Microstructural variabilities are among the predominant sources of uncertainty in structural performance and reliability. We seek to develop efficient algorithms for multiscale calculations for polycrystalline alloys such as aluminum alloy 6061-T6 in environments where ductile fracture is the dominant failure mode. Our approach employs concurrent multiscale methods, but does not focus on their development. They are a necessary but not sufficient ingredient to multiscale reliability predictions. We have focused on how to efficiently use concurrent models for forward propagation because practical applications cannot include fine-scale details throughout the problem domain due to exorbitant computational demand. Our approach begins with a low-fidelity prediction at the engineering scale that is subsequently refined with multiscale simulation. The results presented in this report focus on plasticity and damage at the meso-scale, efforts to expedite Monte Carlo simulation with microstructural considerations, modeling aspects regarding geometric representation of grains and second-phase particles, and contrasting algorithms for scale coupling.
FINAL REPORT (DE-FG02-97ER62338): Single-column modeling, GCM parameterizations, and ARM data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Richard C. J. Somerville
2009-02-27
Our overall goal is the development of new and improved parameterizations of cloud-radiation effects and related processes, using ARM data at all three ARM sites, and the implementation and testing of these parameterizations in global models. To test recently developed prognostic parameterizations based on detailed cloud microphysics, we have compared SCM (single-column model) output with ARM observations at the SGP, NSA and TWP sites. We focus on the predicted cloud amounts and on a suite of radiative quantities strongly dependent on clouds, such as downwelling surface shortwave radiation. Our results demonstrate the superiority of parameterizations based on comprehensive treatments of cloud microphysics and cloud-radiative interactions. At the SGP and NSA sites, the SCM results simulate the ARM measurements well and are demonstrably more realistic than typical parameterizations found in conventional operational forecasting models. At the TWP site, the model performance depends strongly on details of the scheme, and the results of our diagnostic tests suggest ways to develop improved parameterizations better suited to simulating cloud-radiation interactions in the tropics generally. These advances have made it possible to take the next step and build on this progress, by incorporating our parameterization schemes in state-of-the-art three-dimensional atmospheric models, and diagnosing and evaluating the results using independent data. Because the improved cloud-radiation results have been obtained largely via implementing detailed and physically comprehensive cloud microphysics, we anticipate that improved predictions of hydrologic cycle components, and hence of precipitation, may also be achievable.
NASA Astrophysics Data System (ADS)
Madi, Raneem; Huibert de Rooij, Gerrit; Mielenz, Henrike; Mai, Juliane
2018-02-01
Few parametric expressions for the soil water retention curve are suitable for dry conditions. Furthermore, expressions for the soil hydraulic conductivity curves associated with parametric retention functions can behave unrealistically near saturation. We developed a general criterion for water retention parameterizations that ensures physically plausible conductivity curves. Only 3 of the 18 tested parameterizations met this criterion without restrictions on the parameters of a popular conductivity curve parameterization. A fourth required one parameter to be fixed. We estimated parameters by shuffled complex evolution (SCE) with the objective function tailored to various observation methods used to obtain retention curve data. We fitted the four parameterizations with physically plausible conductivities as well as the most widely used parameterization. The performance of the resulting 12 combinations of retention and conductivity curves was assessed in a numerical study with 751 days of semiarid atmospheric forcing applied to unvegetated, uniform, 1 m freely draining columns for four textures. Choosing different parameterizations had a minor effect on evaporation, but cumulative bottom fluxes varied by up to an order of magnitude between them. This highlights the need for a careful selection of the soil hydraulic parameterization that ideally does not only rely on goodness of fit to static soil water retention data but also on hydraulic conductivity measurements. Parameter fits for 21 soils showed that extrapolations into the dry range of the retention curve often became physically more realistic when the parameterization had a logarithmic dry branch, particularly in fine-textured soils where high residual water contents would otherwise be fitted.
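As a point of reference, the most widely used parameterization mentioned above is presumably the van Genuchten retention curve combined with the Mualem conductivity model; the standard forms are recalled below in generic notation (this pairing is an assumption on our part, since the abstract does not name it).

\[
S_e(h) = \bigl[\,1 + (\alpha |h|)^{n}\,\bigr]^{-m}, \qquad m = 1 - \tfrac{1}{n},
\qquad
K(S_e) = K_s \, S_e^{\,l} \Bigl[\,1 - \bigl(1 - S_e^{1/m}\bigr)^{m}\,\Bigr]^{2},
\]

where h is the pressure head, S_e the effective saturation, and K_s, \alpha, n, l fitting parameters; it is this kind of near-saturation behaviour of K(S_e) that the physical-plausibility criterion developed in the paper is designed to screen.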
How certain are the process parameterizations in our models?
NASA Astrophysics Data System (ADS)
Gharari, Shervan; Hrachowitz, Markus; Fenicia, Fabrizio; Matgen, Patrick; Razavi, Saman; Savenije, Hubert; Gupta, Hoshin; Wheater, Howard
2016-04-01
Environmental models are abstract simplifications of real systems. As a result, the elements of these models, including the system architecture (structure), the process parameterizations and the parameters, carry a high level of approximation and simplification. In a conventional model-building exercise the parameter values are the only elements of a model that can vary, while the rest of the modeling elements are often fixed a priori and therefore not subject to change. Once chosen, the process parameterization and model structure usually remain the same throughout the modeling process; the only flexibility comes from the changing parameter values, which enables these models to reproduce the desired observations. This part of modeling practice, parameter identification and uncertainty, has attracted significant attention in the literature in recent years. What remains unexplored, in our view, is to what extent the process parameterization and the system architecture (model structure) can support each other. In other words: does a specific form of process parameterization emerge for a specific model, given its system architecture and data, when little or no assumption is made about the process parameterization itself? In this study we relax the assumption of a specific pre-determined form for the process parameterizations of a rainfall/runoff model and examine how varying the complexity of the system architecture can lead to different, possibly contradictory, parameterization forms than would have been chosen otherwise. This comparison implicitly and explicitly provides an assessment of how uncertain our perception of model process parameterization is, relative to the extent to which the data can support it.
Sensitivity of Pacific Cold Tongue and Double-ITCZ Bias to Convective Parameterization
NASA Astrophysics Data System (ADS)
Woelfle, M.; Bretherton, C. S.; Pritchard, M. S.; Yu, S.
2016-12-01
Many global climate models struggle to accurately simulate annual mean precipitation and sea surface temperature (SST) fields in the tropical Pacific basin. Precipitation biases are dominated by the double intertropical convergence zone (ITCZ) bias, in which models exhibit precipitation maxima straddling the equator while only a single Northern Hemispheric maximum exists in observations. The major SST bias is the enhancement of the equatorial cold tongue. A series of coupled model simulations is used to investigate the sensitivity of the bias development to convective parameterization. Model components are initialized independently prior to coupling to allow analysis of the transient response of the system directly following coupling. These experiments show precipitation and SST patterns to be highly sensitive to convective parameterization. Simulations in which the deep convective parameterization is disabled, forcing all convection to be resolved by the shallow convection parameterization, showed a degradation in both the cold tongue and double-ITCZ biases as precipitation becomes focused into off-equatorial regions of local SST maxima. Simulations using superparameterization in place of traditional cloud parameterizations showed a reduced cold tongue bias at the expense of additional precipitation biases. The equatorial SST responses to changes in convective parameterization are driven by changes in near-equatorial zonal wind stress. The sensitivity of convection to SST is important in determining the precipitation and wind stress fields; however, differences in convective momentum transport also play a role. While no significant improvement is seen in these simulations of the double ITCZ, the system's sensitivity to these changes reaffirms that improved convective parameterizations may provide an avenue for improving simulations of tropical Pacific precipitation and SST.
NASA Astrophysics Data System (ADS)
Määttänen, Anni; Merikanto, Joonas; Henschel, Henning; Duplissy, Jonathan; Makkonen, Risto; Ortega, Ismael K.; Vehkamäki, Hanna
2018-01-01
We have developed new parameterizations of electrically neutral homogeneous and ion-induced sulfuric acid-water particle formation for large ranges of environmental conditions, based on an improved model that has been validated against a particle formation rate data set produced by Cosmics Leaving OUtdoor Droplets (CLOUD) experiments at the European Organization for Nuclear Research (CERN). The model uses a thermodynamically consistent version of the Classical Nucleation Theory normalized using quantum chemical data. Unlike the earlier parameterizations for H2SO4-H2O nucleation, the model is applicable to extremely dry conditions where the one-component sulfuric acid limit is approached. Parameterizations are presented for the critical cluster sulfuric acid mole fraction, the critical cluster radius, the total number of molecules in the critical cluster, and the particle formation rate. If the critical cluster contains only one sulfuric acid molecule, a simple formula for kinetic particle formation can be used; this threshold has also been parameterized. The parameterization for electrically neutral particle formation is valid for the following ranges: temperatures 165-400 K, sulfuric acid concentrations 10^4-10^13 cm^-3, and relative humidities 0.001-100%. The ion-induced particle formation parameterization is valid for temperatures 195-400 K, sulfuric acid concentrations 10^4-10^16 cm^-3, and relative humidities 10^-5-100%. The new parameterizations are thus applicable for the full range of conditions in the Earth's atmosphere relevant for binary sulfuric acid-water particle formation, including both tropospheric and stratospheric conditions. They are also suitable for describing particle formation in the atmosphere of Venus.
2014-09-30
... a good test case to study the multiscale data assimilation capabilities of our GMM-DO filter. We also performed stochastic simulations with our DO ... Morakot and internal tides. The ignorance score and Kullback-Leibler divergence were employed to measure the skill of the multiscale pdf forecasts ... read off from the posterior of the augmented state vector. We implemented this new smoother and tested it using a 2D-in-space stochastic flow exiting ...
Multiscale Modeling in the Clinic: Drug Design and Development
DOE Office of Scientific and Technical Information (OSTI.GOV)
Clancy, Colleen E.; An, Gary; Cannon, William R.
A wide range of length and time scales are relevant to pharmacology, especially in drug development, drug design and drug delivery. Therefore, multi-scale computational modeling and simulation methods and paradigms that advance the linkage of phenomena occurring at these multiple scales have become increasingly important. Multi-scale approaches present in silico opportunities to advance laboratory research to bedside clinical applications in pharmaceuticals research. This is achievable through the capability of modeling to reveal phenomena occurring across multiple spatial and temporal scales, which are not otherwise readily accessible to experimentation. The resultant models, when validated, are capable of making testable predictions to guide drug design and delivery. In this review we describe the goals, methods, and opportunities of multi-scale modeling in drug design and development. We demonstrate the impact of multiple scales of modeling in this field. We indicate the common mathematical techniques employed for multi-scale modeling approaches used in pharmacology and present several examples illustrating the current state-of-the-art regarding drug development for: Excitable Systems (Heart); Cancer (Metastasis and Differentiation); Cancer (Angiogenesis and Drug Targeting); Metabolic Disorders; and Inflammation and Sepsis. We conclude with a focus on barriers to successful clinical translation of drug development, drug design and drug delivery multi-scale models.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gao, Kai; Fu, Shubin; Gibson, Richard L.
2015-04-14
It is important to develop fast yet accurate numerical methods for seismic wave propagation to characterize complex geological structures and oil and gas reservoirs. However, the computational cost of conventional numerical modeling methods, such as the finite-difference method and the finite-element method, becomes prohibitively expensive when applied to very large models. We propose a Generalized Multiscale Finite-Element Method (GMsFEM) for elastic wave propagation in heterogeneous, anisotropic media, where we construct basis functions from multiple local problems for both the boundaries and interior of a coarse node support or coarse element. The application of multiscale basis functions can capture the fine-scale medium property variations, and allows us to greatly reduce the degrees of freedom required to implement the modeling compared with the conventional finite-element method for the wave equation, while restricting the error to low values. We formulate the continuous Galerkin and discontinuous Galerkin formulations of the multiscale method, both of which have pros and cons. Applications of the multiscale method to three heterogeneous models show that our multiscale method can effectively model elastic wave propagation in anisotropic media with a significant reduction in the degrees of freedom in the modeling system.
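To make the basis-construction idea concrete, here is a scalar 1D analogue, not the elastic anisotropic GMsFEM of the paper: assemble fine-scale operators for a rough coefficient, solve a local generalized eigenproblem on each coarse block, keep a few eigenvectors per block as multiscale basis functions, and Galerkin-project onto that reduced space. Grid sizes, the coefficient, and the number of retained modes are all illustrative.

```python
import numpy as np
from scipy.linalg import eigh

# Fine 1D grid on [0, 1] with a rough coefficient (all sizes/values illustrative).
nf = 256                               # fine elements
h = 1.0 / nf
x = np.linspace(0.0, 1.0, nf + 1)
kappa = 1.0 + 100.0 * (np.sin(40.0 * np.pi * x[:-1]) > 0.9)   # per-element coefficient

# Assemble fine-scale stiffness and (weighted) mass matrices with P1 elements;
# boundary conditions are omitted to keep the sketch short.
A = np.zeros((nf + 1, nf + 1))
M = np.zeros((nf + 1, nf + 1))
for e in range(nf):
    A[e:e + 2, e:e + 2] += kappa[e] / h * np.array([[1.0, -1.0], [-1.0, 1.0]])
    M[e:e + 2, e:e + 2] += kappa[e] * h / 6.0 * np.array([[2.0, 1.0], [1.0, 2.0]])

# Local generalized eigenproblems on coarse blocks; keep a few modes per block.
n_blocks, n_basis = 8, 3
nb = nf // n_blocks
Phi = []
for b in range(n_blocks):
    idx = np.arange(b * nb, (b + 1) * nb + 1)
    Ab, Mb = A[np.ix_(idx, idx)], M[np.ix_(idx, idx)]
    _, v = eigh(Ab, Mb)                # eigenvectors sorted by ascending eigenvalue
    for j in range(n_basis):
        phi = np.zeros(nf + 1)
        phi[idx] = v[:, j]             # extend each local mode by zero
        Phi.append(phi)
Phi = np.array(Phi).T                  # multiscale basis, shape (nf+1, n_blocks*n_basis)

# Galerkin projection onto the reduced space: 24 coarse DOFs instead of 257 fine ones.
A_c = Phi.T @ A @ Phi
print(A_c.shape)
```

The reduction in degrees of freedom shown here is the same mechanism the paper exploits for the elastic wave equation, where the local problems and basis functions are vector-valued.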
Bae, Won-Gyu; Kim, Jangho; Choung, Yun-Hoon; Chung, Yesol; Suh, Kahp Y; Pang, Changhyun; Chung, Jong Hoon; Jeong, Hoon Eui
2015-11-01
Inspired by the hierarchically organized protein fibers in extracellular matrix (ECM) as well as the physiological importance of multiscale topography, we developed a simple but robust method for the design and manipulation of precisely controllable multiscale hierarchical structures using capillary force lithography in combination with an original wrinkling technique. In this study, based on our proposed fabrication technology, we approached a conceptual platform that can mimic the hierarchically multiscale topographical and orientation cues of the ECM for controlling cell structure and function. We patterned the polyurethane acrylate-based nanotopography with various orientations on the microgrooves, which could provide multiscale topography signals of ECM to control single and multicellular morphology and orientation with precision. Using our platforms, we found that the structures and orientations of fibroblast cells were greatly influenced by the nanotopography, rather than the microtopography. We also proposed a new approach that enables the generation of native ECM having nanofibers in specific three-dimensional (3D) configurations by culturing fibroblast cells on the multiscale substrata. We suggest that our methodology could be used as efficient strategies for the design and manipulation of various functional platforms, including well-defined 3D tissue structures for advanced regenerative medicine applications. Copyright © 2015 Elsevier Ltd. All rights reserved.
A Tensor-Product-Kernel Framework for Multiscale Neural Activity Decoding and Control
Li, Lin; Brockmeier, Austin J.; Choi, John S.; Francis, Joseph T.; Sanchez, Justin C.; Príncipe, José C.
2014-01-01
Brain machine interfaces (BMIs) have attracted intense attention as a promising technology for directly interfacing computers or prostheses with the brain's motor and sensory areas, thereby bypassing the body. The availability of multiscale neural recordings including spike trains and local field potentials (LFPs) brings potential opportunities to enhance computational modeling by enriching the characterization of the neural system state. However, heterogeneity on data type (spike timing versus continuous amplitude signals) and spatiotemporal scale complicates the model integration of multiscale neural activity. In this paper, we propose a tensor-product-kernel-based framework to integrate the multiscale activity and exploit the complementary information available in multiscale neural activity. This provides a common mathematical framework for incorporating signals from different domains. The approach is applied to the problem of neural decoding and control. For neural decoding, the framework is able to identify the nonlinear functional relationship between the multiscale neural responses and the stimuli using general purpose kernel adaptive filtering. In a sensory stimulation experiment, the tensor-product-kernel decoder outperforms decoders that use only a single neural data type. In addition, an adaptive inverse controller for delivering electrical microstimulation patterns that utilizes the tensor-product kernel achieves promising results in emulating the responses to natural stimulation. PMID:24829569
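The central construction, a tensor-product kernel over heterogeneous inputs, is simply the product of a kernel defined on each input domain. The sketch below pairs a Gaussian kernel on binned spike counts with a Gaussian kernel on LFP features inside a kernel ridge regression decoder; the data are synthetic, and the kernels and regressor are deliberate simplifications of the paper's spike-train kernels and kernel adaptive filtering.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 300
spikes = rng.poisson(2.0, size=(T, 20)).astype(float)   # binned spike counts, 20 units
lfp = rng.standard_normal((T, 8))                        # LFP features, 8 channels/bands
# synthetic target (e.g., a stimulus or kinematic variable) depending on both modalities
y = spikes @ rng.standard_normal(20) + lfp @ rng.standard_normal(8)

def gauss_gram(X, Y, sigma):
    """Gaussian Gram matrix between the rows of X and the rows of Y."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def tensor_gram(Xs, Xl, Ys, Yl, sig_s=3.0, sig_l=2.0):
    """Tensor-product kernel: the product of the per-domain kernels."""
    return gauss_gram(Xs, Ys, sig_s) * gauss_gram(Xl, Yl, sig_l)

# kernel ridge regression decoder on the joint (spike, LFP) representation
lam = 1e-2
K = tensor_gram(spikes, lfp, spikes, lfp)
alpha = np.linalg.solve(K + lam * np.eye(T), y)
y_hat = K @ alpha                      # in-sample decode, kept short for illustration
print(float(np.corrcoef(y, y_hat)[0, 1]))
```

Because the product of positive semidefinite kernels is itself positive semidefinite, the joint kernel can be dropped into any kernel method, which is what makes the framework a "common mathematical framework" for mixed data types.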
Multi-scale graph-cut algorithm for efficient water-fat separation.
Berglund, Johan; Skorpil, Mikael
2017-09-01
To improve the accuracy and robustness to noise in water-fat separation by unifying the multiscale and graph-cut based approaches to B0 correction. A previously proposed water-fat separation algorithm that corrects for B0 field inhomogeneity in 3D by a single quadratic pseudo-Boolean optimization (QPBO) graph cut was incorporated into a multi-scale framework, where field map solutions are propagated from coarse to fine scales for voxels that are not resolved by the graph cut. The accuracy of the single-scale and multi-scale QPBO algorithms was evaluated against benchmark reference datasets. The robustness to noise was evaluated by adding noise to the input data prior to water-fat separation. Both algorithms achieved the highest accuracy when compared with seven previously published methods, while computation times were acceptable for implementation in clinical routine. The multi-scale algorithm was more robust to noise than the single-scale algorithm, while causing only a small increase (+10%) in reconstruction time. The proposed 3D multi-scale QPBO algorithm offers accurate water-fat separation, robustness to noise, and fast reconstruction. The software implementation is freely available to the research community. Magn Reson Med 78:941-949, 2017. © 2016 International Society for Magnetic Resonance in Medicine.
USDA-ARS?s Scientific Manuscript database
Simulation models can be used to make management decisions when properly parameterized. This study aimed to parameterize the ALMANAC (Agricultural Land Management Alternatives with Numerical Assessment Criteria) crop simulation model for dry bean in the semi-arid temperate areas of Mexico. The par...
Midgley, S M
2004-01-21
A novel parameterization of x-ray interaction cross-sections is developed, and employed to describe the x-ray linear attenuation coefficient and mass energy absorption coefficient for both elements and mixtures. The new parameterization scheme addresses the Z-dependence of elemental cross-sections (per electron) using a simple function of atomic number, Z. This obviates the need for a complicated mathematical formalism. Energy-dependent coefficients describe the Z-direction curvature of the cross-sections. The composition-dependent quantities are the electron density and statistical moments describing the elemental distribution. We show that it is possible to describe elemental cross-sections for the entire periodic table and at energies above the K-edge (from 6 keV to 125 MeV), with an accuracy of better than 2%, using a parameterization containing not more than five coefficients. For the biologically important elements 1 ≤ Z ≤ 20 and the energy range 30-150 keV, the parameterization utilizes four coefficients. At higher energies, the parameterization uses fewer coefficients, with only two coefficients needed at megavoltage energies.
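One way to picture the structure described (energy-dependent coefficients multiplying a simple function of Z, with mixtures handled through the electron density and statistical moments of the atomic-number distribution) is the sketch below. The plain polynomial form and the coefficient values are assumptions made purely for illustration; they are not the paper's fitted coefficients or its exact functional form.

```python
import numpy as np

N_A = 6.022e23   # Avogadro's number [1/mol]

# Hypothetical energy-dependent coefficients a_k(E) at one photon energy. In a real
# scheme these would be fitted per energy; a plain polynomial in Z is assumed here.
a = np.array([2.0e-25, 1.0e-26, 5.0e-28])       # cm^2 per electron, multiplying Z^k

def electron_cross_section(Z):
    """Cross-section per electron as a simple function of atomic number Z."""
    return sum(ak * Z ** k for k, ak in enumerate(a))

def mixture_attenuation(mass_fractions, Z, A, rho):
    """Linear attenuation coefficient [1/cm] of a mixture, computed from its
    electron density and the electron-fraction-weighted moments of Z."""
    w, Z, A = map(np.asarray, (mass_fractions, Z, A))
    n_e = rho * N_A * np.sum(w * Z / A)          # electrons per cm^3
    f = (w * Z / A) / np.sum(w * Z / A)          # electron fraction of each element
    moments = np.array([np.sum(f * Z ** k) for k in range(a.size)])
    return n_e * float(np.dot(a, moments))       # equals n_e * sum_i f_i * sigma_e(Z_i)

print(electron_cross_section(8))                              # per-electron value for oxygen
print(mixture_attenuation([0.112, 0.888], [1, 8], [1.008, 16.00], rho=1.0))  # water
```

The point of this organization is that a mixture needs only its electron density and a handful of Z-moments, rather than a full elemental breakdown, once the per-electron coefficients are tabulated against energy.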
A service-oriented distributed semantic mediator: integrating multiscale biomedical information.
Mora, Oscar; Engelbrecht, Gerhard; Bisbal, Jesus
2012-11-01
Biomedical research continuously generates large amounts of heterogeneous and multimodal data spread over multiple data sources. These data, if appropriately shared and exploited, could dramatically improve the research practice itself, and ultimately the quality of health care delivered. This paper presents DISMED (DIstributed Semantic MEDiator), an open source semantic mediator that provides a unified view of a federated environment of multiscale biomedical data sources. DISMED is a Web-based software application to query and retrieve information distributed over a set of registered data sources, using semantic technologies. It also offers a user-friendly interface specifically designed to simplify the usage of these technologies by non-expert users. Although the architecture of the software mediator is generic and domain independent, in the context of this paper DISMED has been evaluated for managing biomedical environments and facilitating research with respect to the handling of scientific data distributed in multiple heterogeneous data sources. As part of this contribution, a quantitative evaluation framework has been developed. It consists of a benchmarking scenario and the definition of five realistic use-cases. This framework, created entirely with public datasets, has been used to compare the performance of DISMED against other available mediators. It is also available to the scientific community in order to evaluate progress in the domain of semantic mediation in a systematic and comparable manner. The results show an average improvement in execution time by DISMED of 55% compared to the second-best alternative in four out of the five use-cases of the experimental evaluation.
An adaptive framework to differentiate receiving water quality impacts on a multi-scale level.
Blumensaat, F; Tränckner, J; Helm, B; Kroll, S; Dirckx, G; Krebs, P
2013-01-01
The paradigm shift in recent years towards sustainable and coherent water resources management on a river basin scale has changed the subject of investigations to a multi-scale problem representing a great challenge for all actors participating in the management process. In this regard, planning engineers often face an inherent conflict to provide reliable decision support for complex questions with a minimum of effort. This trend inevitably increases the risk to base decisions upon uncertain and unverified conclusions. This paper proposes an adaptive framework for integral planning that combines several concepts (flow balancing, water quality monitoring, process modelling, multi-objective assessment) to systematically evaluate management strategies for water quality improvement. As key element, an S/P matrix is introduced to structure the differentiation of relevant 'pressures' in affected regions, i.e. 'spatial units', which helps in handling complexity. The framework is applied to a small, but typical, catchment in Flanders, Belgium. The application to the real-life case shows: (1) the proposed approach is adaptive, covers problems of different spatial and temporal scale, efficiently reduces complexity and finally leads to a transparent solution; and (2) water quality and emission-based performance evaluation must be done jointly as an emission-based performance improvement does not necessarily lead to an improved water quality status, and an assessment solely focusing on water quality criteria may mask non-compliance with emission-based standards. Recommendations derived from the theoretical analysis have been put into practice.
A unified spectral parameterization for wave breaking: from the deep ocean to the surf zone
NASA Astrophysics Data System (ADS)
Filipot, J.
2010-12-01
A new wave-breaking dissipation parameterization designed for spectral wave models is presented. It combines the basic physical quantities of wave breaking, namely the breaking probability and the dissipation rate per unit area. The energy lost by waves is first calculated in physical space before being distributed over the relevant spectral components. This parameterization allows a seamless numerical model from the deep ocean into the surf zone. The transition from deep to shallow water is made possible by a dissipation rate per unit area of breaking waves that varies with the wave height, wavelength and water depth. The parameterization is further tested in the WAVEWATCH III code, from the global ocean to the beach scale. Model errors are smaller than with most specialized deep- or shallow-water parameterizations.
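The "breaking probability times dissipation rate per breaking wave" combination can be illustrated with a Battjes-Janssen-style bore analogy, a much simpler relative of the scale-dependent formulation described above; the coefficients and the fixed-point solver below are illustrative and are not the WAVEWATCH III implementation.

```python
import numpy as np

def bulk_breaking_dissipation(Hrms, Tm, depth, rho=1025.0, g=9.81,
                              alpha=1.0, gamma=0.73):
    """Bulk dissipation [W/m^2]: breaking fraction times per-wave bore dissipation.

    Battjes-Janssen-style sketch: Hmax = gamma*depth, Qb is the fraction of breaking
    waves implied by a Rayleigh wave-height distribution, and each breaking wave
    dissipates like a bore of height Hmax at the mean frequency.
    """
    Hmax = gamma * depth
    b = (Hrms / Hmax) ** 2
    # solve (1 - Qb) / ln(Qb) = -b for the breaking fraction by fixed-point iteration
    Qb = 0.5
    for _ in range(50):
        Qb = np.exp((Qb - 1.0) / b) if b > 0 else 0.0
    fm = 1.0 / Tm
    return 0.25 * alpha * Qb * fm * rho * g * Hmax ** 2

print(bulk_breaking_dissipation(Hrms=1.0, Tm=8.0, depth=2.0))   # shallow-water example
```

The formulation in the abstract generalizes this idea by letting the per-wave dissipation depend on wave height, wavelength and depth for each spectral scale, so the same expression applies from deep water to the surf zone.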
Methods of testing parameterizations: Vertical ocean mixing
NASA Technical Reports Server (NTRS)
Tziperman, Eli
1992-01-01
The ocean's velocity field is characterized by an exceptional variety of scales. While the small-scale oceanic turbulence responsible for the vertical mixing in the ocean occurs on scales of a few centimeters and smaller, the oceanic general circulation is characterized by horizontal scales of thousands of kilometers. In oceanic general circulation models that are typically run today, the vertical structure of the ocean is represented by a few tens of discrete grid points. Such models cannot explicitly model the small-scale mixing processes, and must, therefore, find ways to parameterize them in terms of the larger-scale fields. Finding a parameterization that is both reliable and practical to use in ocean models is not a simple task. Vertical mixing in the ocean is the combined result of many complex processes, and, in fact, mixing is one of the less known and less understood aspects of the oceanic circulation. In present models of the oceanic circulation, the many complex processes responsible for vertical mixing are often parameterized in an oversimplified manner. Yet, finding an adequate parameterization of vertical ocean mixing is crucial to the successful application of ocean models to climate studies. The results of general circulation models for quantities that are of particular interest to climate studies, such as the meridional heat flux carried by the ocean, are quite sensitive to the strength of the vertical mixing. We examine the difficulties in choosing an appropriate vertical mixing parameterization, and the methods that are available for validating different parameterizations by comparing model results to oceanographic data. First, some of the physical processes responsible for vertically mixing the ocean are briefly mentioned, and some possible approaches to the parameterization of these processes in oceanographic general circulation models are described in the following section. We then discuss the role of vertical mixing in the physics of the large-scale ocean circulation, and examine methods of validating mixing parameterizations using large-scale ocean models.
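As a concrete example of the kind of closure under discussion, a Richardson-number-dependent scheme of the Pacanowski-Philander type expresses the vertical viscosity and diffusivity entirely in terms of the resolved large-scale shear and stratification. The sketch below uses commonly quoted coefficient values for illustration only and is not tied to any particular model evaluated in the text.

```python
import numpy as np

def pp_vertical_mixing(Ri, nu0=1.0e-2, nu_b=1.0e-4, kappa_b=1.0e-5, alpha=5.0, n=2):
    """Richardson-number-dependent vertical viscosity and diffusivity [m^2/s].

    Pacanowski-Philander-type closure: mixing is strong where the resolved shear
    makes Ri small, and collapses to background values in stable stratification.
    Coefficient values here are illustrative defaults.
    """
    Ri = np.maximum(Ri, 0.0)                    # clip unstable values for this sketch
    nu = nu0 / (1.0 + alpha * Ri) ** n + nu_b   # vertical viscosity
    kappa = nu / (1.0 + alpha * Ri) + kappa_b   # vertical tracer diffusivity
    return nu, kappa

# Ri = N^2 / (du/dz)^2 would be diagnosed from the model's large-scale T, S, u, v
print(pp_vertical_mixing(np.array([0.0, 0.25, 1.0, 10.0])))
```

Validating such a closure against data, as the text goes on to discuss, amounts to checking whether the implied diffusivities reproduce observed large-scale quantities such as the meridional heat flux.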
Consequences of systematic model drift in DYNAMO MJO hindcasts with SP-CAM and CAM5
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hannah, Walter M.; Maloney, Eric D.; Pritchard, Michael S.
Hindcast simulations of MJO events during the dynamics of the MJO (DYNAMO) field campaign are conducted with two models, one with conventional parameterization (CAM5) and a comparable model that utilizes superparameterization (SP–CAM). SP–CAM is shown to produce a qualitatively better reproduction of the fluctuations of precipitation and low–level zonal wind associated with the first two DYNAMO MJO events compared to CAM5. Interestingly, skill metrics using the real–time multivariate MJO index (RMM) suggest the opposite conclusion that CAM5 has more skill than SP–CAM. This inconsistency can be explained by a systematic increase of RMM amplitude with lead time, which results frommore » a drift of the large–scale wind field in SP–CAM that projects strongly onto the RMM index. CAM5 hindcasts exhibit a contraction of the moisture distribution, in which extreme wet and dry conditions become less frequent with lead time. SP–CAM hindcasts better reproduce the observed moisture distribution, but also have stronger drift patterns of moisture budget terms, such as an increase in drying by meridional advection in SP–CAM. This advection tendency in SP–CAM appears to be associated with enhanced off–equatorial synoptic eddy activity with lead time. In conclusion, systematic drift moisture tendencies in SP–CAM are of similar magnitude to intraseasonal moisture tendencies, and therefore are important for understanding MJO prediction skill.« less
NASA Technical Reports Server (NTRS)
Choi, Hyun-Joo; Chun, Hye-Yeong; Gong, Jie; Wu, Dong L.
2012-01-01
The realism of ray-based spectral parameterization of convective gravity wave drag, which considers the updated moving speed of the convective source and multiple wave propagation directions, is tested against the Atmospheric Infrared Sounder (AIRS) onboard the Aqua satellite. Offline parameterization calculations are performed using the global reanalysis data for January and July 2005, and gravity wave temperature variances (GWTVs) are calculated at z = 2.5 hPa (unfiltered GWTV). AIRS-filtered GWTV, which is directly compared with AIRS, is calculated by applying the AIRS visibility function to the unfiltered GWTV. A comparison between the parameterization calculations and AIRS observations shows that the spatial distribution of the AIRS-filtered GWTV agrees well with that of the AIRS GWTV. However, the magnitude of the AIRS-filtered GWTV is smaller than that of the AIRS GWTV. When an additional cloud top gravity wave momentum flux spectrum with longer horizontal wavelength components that were obtained from the mesoscale simulations is included in the parameterization, both the magnitude and spatial distribution of the AIRS-filtered GWTVs from the parameterization are in good agreement with those of the AIRS GWTVs. The AIRS GWTV can be reproduced reasonably well by the parameterization not only with multiple wave propagation directions but also with two wave propagation directions of 45 degrees (northeast-southwest) and 135 degrees (northwest-southeast), which are optimally chosen for computational efficiency.
The Influence of Microphysical Cloud Parameterization on Microwave Brightness Temperatures
NASA Technical Reports Server (NTRS)
Skofronick-Jackson, Gail M.; Gasiewski, Albin J.; Wang, James R.; Zukor, Dorothy J. (Technical Monitor)
2000-01-01
The microphysical parameterization of clouds and rain cells plays a central role in atmospheric forward radiative transfer models used in calculating passive microwave brightness temperatures. The absorption and scattering properties of a hydrometeor-laden atmosphere are governed by particle phase, size distribution, aggregate density, shape, and dielectric constant. This study identifies the sensitivity of brightness temperatures with respect to the microphysical cloud parameterization. Cloud parameterizations for wideband (6-410 GHz) observations of baseline brightness temperatures were studied for four evolutionary stages of an oceanic convective storm using a five-phase hydrometeor model in a planar-stratified scattering-based radiative transfer model. Five other microphysical cloud parameterizations were compared to the baseline calculations to evaluate brightness temperature sensitivity to gross changes in the hydrometeor size distributions and the ice-air-water ratios in the frozen or partly frozen phase. The comparison shows that enlarging the rain drop size or adding water to the partly frozen hydrometeor mix warms brightness temperatures by up to 55 K at 6 GHz. The cooling signature caused by ice scattering intensifies with increasing ice concentrations and at higher frequencies. An additional comparison to measured Convection and Moisture Experiment (CAMEX-3) brightness temperatures shows that in general all but two parameterizations produce calculated T_B's that fall within the observed clear-air minima and maxima. The exceptions are parameterizations that enhance the scattering characteristics of frozen hydrometeors.
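The sensitivity to "gross changes in the hydrometeor size distributions" can be illustrated with an exponential (Marshall-Palmer-type) rain drop size distribution: for a fixed rain water content, lowering the intercept parameter N0 shifts mass toward larger drops, which is the kind of perturbation the study imposes on its baseline. The numbers below are illustrative and are not the cited parameterizations.

```python
import numpy as np

def exponential_dsd_slope(rain_water_content, N0, rho_w=1000.0):
    """Slope Lambda [1/m] of N(D) = N0 * exp(-Lambda * D) for a given rain water
    content [kg/m^3] and intercept N0 [1/m^4]."""
    return (np.pi * rho_w * N0 / rain_water_content) ** 0.25

W = 1.0e-3                     # 1 g/m^3 of rain water
for N0 in (8.0e6, 2.0e6):      # Marshall-Palmer intercept vs. a "larger drops" choice
    lam = exponential_dsd_slope(W, N0)
    Dm = 4.0 / lam             # mass-weighted mean diameter of an exponential DSD
    print(f"N0={N0:.1e} 1/m^4 -> Lambda={lam:.0f} 1/m, Dm={1e3 * Dm:.2f} mm")
```

Larger drops at the same water content increase emission at low frequencies and scattering at high frequencies, which is the brightness-temperature sensitivity the study quantifies with its full radiative transfer model.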
Multiscale Modeling and Simulation of Material Processing
2006-07-01
Final Performance Report (21-07-2006), covering 05-01-2003 to 04-30-2006: Multiscale Modeling and Simulation of Material Processing. Recoverable fragments discuss GIMP simulations and a contact algorithm for contact pairs (MPM using a single mesh tends to induce early contact), and the development of scaling laws for multiscale simulations from atomistic to continuum using material testing techniques, such as tension and indentation.
Smart Functional Nanoenergetic Materials
2012-08-01
Recoverable fragments: integrated multiscale organization of energetic materials (many biological and physical objects derive their unique properties through ...); a cited reference, "Nanocomposites obtained by DNA-Directed Assembly," Adv. Functional Materials, 22, 323, 2012; and multiscale energetic composites fabricated on pSi substrates, for which Si wafers (highly doped p-type) were photolithographically patterned.
Multi-scale heat and mass transfer modelling of cell and tissue cryopreservation
Xu, Feng; Moon, Sangjun; Zhang, Xiaohui; Shao, Lei; Song, Young Seok; Demirci, Utkan
2010-01-01
Cells and tissues undergo complex physical processes during cryopreservation. Understanding the underlying physical phenomena is critical to improve current cryopreservation methods and to develop new techniques. Here, we describe multi-scale approaches for modelling cell and tissue cryopreservation including heat transfer at macroscale level, crystallization, cell volume change and mass transport across cell membranes at microscale level. These multi-scale approaches allow us to study cell and tissue cryopreservation. PMID:20047939
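The membrane-level mass transport mentioned above is commonly described by a two-parameter osmotic model in which the cell volume relaxes toward osmotic equilibrium as the extracellular solution concentrates during freezing. The explicit-Euler sketch below uses assumed parameter values (permeability, area, volumes, osmolality ramp) chosen for illustration rather than taken from the review.

```python
# Two-parameter osmotic dehydration model for a single cell during freezing.
# All parameter values are illustrative, not taken from the cited review.
Lp = 1.0e-13                # membrane hydraulic permeability [m/(Pa s)]
A_m = 2.0e-9                # membrane surface area [m^2]
R, T = 8.314, 263.0         # gas constant [J/(mol K)], temperature [K]
V0, Vb = 2.0e-15, 0.4e-15   # initial and osmotically inactive cell volume [m^3]
n_s = 300.0 * (V0 - Vb)     # intracellular osmoles at an initial 0.3 osmol/L [mol]

def external_osmolality(t):
    """Extracellular osmolality [mol/m^3] rising as ice concentrates the solution."""
    return 300.0 * (1.0 + 0.05 * t)   # assumed linear ramp during cooling

V, dt = V0, 0.1
for step in range(int(120.0 / dt)):
    t = step * dt
    M_in = n_s / (V - Vb)                                        # intracellular osmolality
    dVdt = -Lp * A_m * R * T * (external_osmolality(t) - M_in)   # water leaves the cell
    V = max(V + dt * dVdt, 1.01 * Vb)                            # stay above inactive volume
print(f"normalized cell volume after cooling: {V / V0:.2f}")
```

Coupling this microscale dehydration model to a macroscale heat transfer solution is exactly the multi-scale linkage the review describes.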
Coherent structures shed by multiscale cut-in trailing edge serrations on lifting wings
NASA Astrophysics Data System (ADS)
Prigent, S. L.; Buxton, O. R. H.; Bruce, P. J. K.
2017-07-01
This experimental study presents the effect of multiscale cut-in trailing edge serrations on the coherent structures shed into the wake of a lifting wing. Two-probe span-wise hot-wire traverses are performed to study spectra, coherence, and phase shift. In addition, planar particle image velocimetry is used to study the spatio-temporal structure of the vortices shed by the airfoils. Compared with a single tone sinusoidal serration, the multiscale ones reduce the vortex shedding energy as well as the span-wise coherence. Results indicate that the vortex shedding is locked into an arch-shaped cell structure. This structure is weakened by the multiscale patterns, which explains the reduction in both shedding energy and coherence.
A procedure for the significance testing of unmodeled errors in GNSS observations
NASA Astrophysics Data System (ADS)
Li, Bofeng; Zhang, Zhetao; Shen, Yunzhong; Yang, Ling
2018-01-01
It is a crucial task to establish a precise mathematical model for global navigation satellite system (GNSS) observations in precise positioning. Due to the spatiotemporal complexity of, and limited knowledge on, systematic errors in GNSS observations, some residual systematic errors inevitably remain even after correction with empirical models and parameterization. These residual systematic errors are referred to as unmodeled errors. However, most existing studies focus on handling the systematic errors that can be properly modeled and simply ignore the unmodeled errors that may actually exist. To further improve the accuracy and reliability of GNSS applications, such unmodeled errors must be handled, especially when they are significant. A first question, therefore, is how to statistically validate the significance of unmodeled errors. In this research, we propose a procedure to examine the significance of these unmodeled errors through the combined use of hypothesis tests. With this testing procedure, three components of unmodeled errors, i.e., the nonstationary signal, the stationary signal and white noise, are identified. The procedure is tested using simulated data and real BeiDou datasets with varying error sources. The results show that the unmodeled errors can be discriminated by our procedure with approximately 90% confidence. The efficiency of the proposed procedure is further confirmed by applying time-domain Allan variance analysis and the frequency-domain fast Fourier transform. In summary, spatiotemporally correlated unmodeled errors commonly exist in GNSS observations and are mainly governed by residual atmospheric biases and multipath. Their patterns may also be affected by the receiver.
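A minimal illustration of this kind of significance testing is to run a whiteness test on positioning residuals and to inspect their overlapping Allan variance for time-correlated (unmodeled) signal. This is a generic sketch of the idea, not the authors' specific combined hypothesis-test procedure; the synthetic "multipath-like" signal and all settings are assumptions.

```python
import numpy as np
from statsmodels.stats.diagnostic import acorr_ljungbox

def overlapping_allan_variance(x, m, tau0=1.0):
    """Overlapping Allan variance of a residual series x at averaging factor m."""
    x = np.asarray(x, float)
    d2 = x[2 * m:] - 2.0 * x[m:-m] + x[:-2 * m]
    return np.sum(d2 ** 2) / (2.0 * m ** 2 * tau0 ** 2 * d2.size)

rng = np.random.default_rng(1)
n = 2000
white = 0.01 * rng.standard_normal(n)                              # measurement noise
multipath_like = 0.01 * np.sin(2 * np.pi * np.arange(n) / 300.0)   # slowly varying signal
residuals = white + multipath_like

# whiteness test: small p-values indicate significant unmodeled correlation
lb = acorr_ljungbox(residuals, lags=[20], return_df=True)
print(lb["lb_pvalue"].iloc[0])

# Allan variance at several averaging factors: departures from the white-noise
# slope reveal correlated (stationary or nonstationary) unmodeled components
for m in (1, 10, 100):
    print(m, overlapping_allan_variance(residuals, m))
```

Removing the sinusoidal term and rerunning both checks shows how the same diagnostics behave when only white noise remains, which is the contrast the testing procedure is designed to detect.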
The application of depletion curves for parameterization of subgrid variability of snow
C. H. Luce; D. G. Tarboton
2004-01-01
Parameterization of subgrid-scale variability in snow accumulation and melt is important for improvements in distributed snowmelt modelling. We have taken the approach of using depletion curves that relate fractional snowcovered area to element-average snow water equivalent to parameterize the effect of snowpack heterogeneity within a physically based mass and energy...
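The object at the center of this approach is the depletion curve itself: a mapping from element-average snow water equivalent (relative to its peak) to fractional snow-covered area, which then scales the melt that a point model would otherwise apply uniformly. The power-law shape and all values below are illustrative, not the curves of the cited work.

```python
import numpy as np

def depletion_curve(swe, swe_max, b=1.5):
    """Fractional snow-covered area from element-average SWE (illustrative power law)."""
    w = np.clip(swe / swe_max, 0.0, 1.0)
    return w ** (1.0 / b)            # concave: area persists while SWE declines

def element_melt(potential_melt, swe, swe_max):
    """Element-average melt = point (snow-covered) melt scaled by the covered fraction."""
    return depletion_curve(swe, swe_max) * potential_melt

swe_max, swe, dt = 0.30, 0.30, 1.0           # peak SWE [m], current SWE [m], daily step
for day in range(60):
    melt = element_melt(0.01, swe, swe_max)  # 10 mm/day potential melt over snow
    swe = max(swe - melt * dt, 0.0)
print(f"SWE after 60 days: {swe:.3f} m")
```

In the cited framework the shape of this curve encodes the subgrid heterogeneity of accumulation, so a physically based mass and energy balance can be run per element rather than per point.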
NASA Astrophysics Data System (ADS)
Mukhopadhyay, P.; Phani Murali Krishna, R.; Goswami, Bidyut B.; Abhik, S.; Ganai, Malay; Mahakur, M.; Khairoutdinov, Marat; Dudhia, Jimmy
2016-05-01
In spite of significant improvements in numerical model physics, resolution and numerics, general circulation models (GCMs) find it difficult to simulate realistic seasonal and intraseasonal variability over the global tropics, and particularly over the Indian summer monsoon (ISM) region. The bias is mainly attributed to the improper representation of physical processes; among these, cloud and convective processes appear to play a major role in modulating model bias. In recent times, the NCEP CFSv2 model has been adopted under the Monsoon Mission for dynamical monsoon forecasts over the Indian region. Analyses of climate free runs of CFSv2 at two resolutions, T126 and T382, show largely similar biases in the simulated seasonal rainfall, in capturing intraseasonal variability at different scales over the global tropics, and in capturing tropical waves. The biases of CFSv2 thus indicate a deficiency in the model's parameterization of cloud and convective processes. With this background, and to improve the model fidelity, two approaches have been adopted. In the first, superparameterization, 32 cloud-resolving models, each with a horizontal resolution of 4 km, are embedded in each GCM (CFSv2) grid column and the conventional sub-grid-scale convective parameterization is deactivated. This is done to demonstrate the role of resolving cloud processes that would otherwise remain unresolved. The superparameterized CFSv2 (SP-CFS) is developed on a coarser version, T62. The model is integrated for six and a half years in climate free-run mode, initialized from 16 May 2008. The analyses reveal that SP-CFS simulates a significantly improved mean state compared to the default CFS. The systematic biases of too little rainfall over the Indian land mass and a too-cold troposphere are substantially reduced. Most importantly, the convectively coupled equatorial waves and the eastward-propagating MJO are simulated with more fidelity in SP-CFS. The reason for this improvement in the model mean state is found to be the systematic improvement in the moisture field, the temperature profile and the moist instability. The model also better simulates the cloud-rainfall relation. This initiative demonstrates the role of cloud processes in the mean state of a coupled GCM. Because the superparameterization approach is computationally expensive, in a second approach the conventional Simplified Arakawa-Schubert (SAS) scheme is replaced by a revised SAS scheme (RSAS), and the older, simplified cloud scheme of Zhao-Carr (1997) is replaced by WSM6 in CFSv2 (hereafter CFS-CR). The primary objective of these modifications is to improve the distribution of convective rain in the model through RSAS and of the grid-scale (large-scale, nonconvective) rain through WSM6. WSM6 computes the tendencies of six classes of hydrometeors (water vapour, cloud water, ice, snow, graupel and rain water) at each model grid point and contributes to the low, middle and high cloud fractions. By incorporating WSM6, for the first time in a global climate model, we are able to show a reasonable simulation of the vertical and spatial distributions of cloud ice and cloud liquid water compared with CloudSat observations. CFS-CR also shows improvement in simulating the annual rainfall cycle and intraseasonal variability over the ISM region. These improvements in CFS-CR are likely associated with the improved distribution of convective and stratiform rainfall in the model.
These initiatives clearly address a long-standing issue of resolving cloud processes in climate models and demonstrate that improved cloud and convective process parameterizations can eventually reduce the systematic bias and improve model fidelity.
Multiscale study of metal nanoparticles
NASA Astrophysics Data System (ADS)
Lee, Byeongchan
Extremely small structures with reduced dimensionality have emerged as a scientific motif for their interesting properties. In particular, metal nanoparticles have been identified as a fundamental material in many catalytic activities; as a consequence, a better understanding of the structure-function relationship of nanoparticles has become crucial. The functional analysis of nanoparticles, reactivity for example, requires an accurate method at the electronic-structure level, whereas the structural analysis to find energetically stable local minima is beyond the scope of quantum mechanical methods, as the computational cost becomes prohibitively high. The challenge is that the inherent length scale and accuracy associated with any single method hardly cover the broad scale range spanned by both structural and functional analyses. In order to address this, and to effectively explore the energetics and reactivity of metal nanoparticles, a hierarchical multiscale modeling approach is developed, in which methodologies of different length scales, i.e. first-principles density functional theory, atomistic calculations, and continuum modeling, are utilized in a sequential fashion. This work has focused on identifying the essential information that bridges two different methods so that successive use of different methods is seamless. The bond characteristics of low-coordination systems have been obtained with first-principles calculations and incorporated into the atomistic simulation. This also rectifies the deficiency of conventional interatomic potentials fitted to bulk properties, and improves the accuracy of atomistic calculations for nanoparticles. For the systematic shape selection of nanoparticles, we have improved the Wulff-type construction using a semi-continuum approach, in which atomistic surface energetics and the crystallinity of materials are added to the continuum framework. The developed multiscale modeling scheme is applied to the rational design of platinum nanoparticles in the range of 2.4 nm to 3.1 nm: energetically favorable structures have been determined in terms of semi-continuum binding energy, and the reactivity of the selected nanoparticle has been investigated based on the local density of states from first-principles calculations. The calculation suggests that the reactivity landscape of particles is more complex than the simple reactivity of clean surfaces, and that the reactivity towards a particular reactant can be predicted for a given structure.
Deciphering Biochemical Network: from particles to planes then to spaces
NASA Astrophysics Data System (ADS)
Ye, Xinhao; Zhang, Siliang; Engineer Research CenterBiotechnology, National
2004-03-01
Today, while we remain captivated by the rise of systems-level thinking in the life sciences, we as biologists have, ironically, fallen into a sub-systematic maze. That is, although rapid advances in the "omics" sciences ceaselessly provide so-called global or large-scale maps that exhibit the corresponding subnets, little attention has been paid to connecting these distinct but closely knit functional modules. Recently, a group of physicists crossed this divide and integrated the multi-scale biological network into a simple "life's pyramid". However, if this pyramid is extended to a 3D structure whose X, Y and Z axes are the temporal, spatial and organizational characteristics respectively, it should be noted that this universal-to-particular pyramid is only a transverse section, while the achievements of the diverse "omics" sciences constitute the corresponding longitudinal ones. On that basis, if the development of systems biology over the last decades is viewed as a leap from discrete particles (typical of the "a paper = a gene" era) to several planes (each corresponding to an "omics" science), we might reasonably predict that a "space" era is coming soon, in which the multi-tiered biological network is untangled and mapped as a whole.