DOE Office of Scientific and Technical Information (OSTI.GOV)
Lehmann, Benjamin V.; Mao, Yao-Yuan; Becker, Matthew R.; ...
2016-12-28
Empirical methods for connecting galaxies to their dark matter halos have become essential for interpreting measurements of the spatial statistics of galaxies. In this work, we present a novel approach for parameterizing the degree of concentration dependence in the abundance matching method. Furthermore, this new parameterization provides a smooth interpolation between two commonly used matching proxies: the peak halo mass and the peak halo maximal circular velocity. This parameterization controls the amount of dependence of galaxy luminosity on halo concentration at a fixed halo mass. Effectively this interpolation scheme enables abundance matching models to have adjustable assembly bias in the resulting galaxy catalogs. With the new $$400\,\mathrm{Mpc}\,h^{-1}$$ DarkSky Simulation, whose larger volume provides lower sample variance, we further show that low-redshift two-point clustering and satellite fraction measurements from SDSS can already provide a joint constraint on this concentration dependence and the scatter within the abundance matching framework.
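As a rough illustration of the interpolation described above, the matching proxy can be written so that a single parameter slides between the mass-like and velocity-like limits, and galaxies are then assigned to halos by rank order. The proxy form, variable names, and scatter treatment below are illustrative assumptions, not the exact prescription of the paper.

```python
import numpy as np

def interpolating_proxy(v_vir, v_max, alpha):
    """Blend between a mass-like proxy (alpha = 0 recovers v_vir, tied to halo
    mass) and a concentration-sensitive proxy (alpha = 1 recovers v_max).
    Assumed form: v_alpha = v_vir * (v_max / v_vir)**alpha."""
    return v_vir * (v_max / v_vir) ** alpha

def abundance_match(proxy, luminosities, scatter_dex=0.15, rng=None):
    """Rank-order match a galaxy luminosity sample to halos by proxy value.
    Scatter is added to the log-proxy before ranking, a simplification of the
    deconvolution treatment normally used in abundance matching."""
    rng = np.random.default_rng() if rng is None else rng
    noisy = np.log10(proxy) + rng.normal(0.0, scatter_dex, proxy.size)
    halo_order = np.argsort(noisy)[::-1]           # highest proxy first
    lum = np.empty(proxy.size)
    lum[halo_order] = np.sort(luminosities)[::-1]  # brightest galaxy first
    return lum
```

With the interpolation parameter treated as free alongside the scatter, clustering and satellite-fraction measurements can then constrain both jointly, as the abstract describes.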
Automatic Parametrization of Somatosensory Evoked Potentials With Chirp Modeling.
Vayrynen, Eero; Noponen, Kai; Vipin, Ashwati; Thow, X Y; Al-Nashash, Hasan; Kortelainen, Jukka; All, Angelo
2016-09-01
In this paper, an approach using polynomial phase chirp signals to model somatosensory evoked potentials (SEPs) is proposed. SEP waveforms are assumed as impulses undergoing group velocity dispersion while propagating along a multipath neural connection. Mathematical analysis of pulse dispersion resulting in chirp signals is performed. An automatic parameterization of SEPs is proposed using chirp models. A Particle Swarm Optimization algorithm is used to optimize the model parameters. Features describing the latencies and amplitudes of SEPs are automatically derived. A rat model is then used to evaluate the automatic parameterization of SEPs in two experimental cases, i.e., anesthesia level and spinal cord injury (SCI). Experimental results show that chirp-based model parameters and the derived SEP features are significant in describing both anesthesia level and SCI changes. The proposed automatic optimization based approach for extracting chirp parameters offers potential for detailed SEP analysis in future studies. The method implementation in Matlab technical computing language is provided online.
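A minimal sketch of the kind of model the abstract describes: a Gaussian-windowed chirp whose instantaneous phase is a polynomial in time, fitted by a global optimizer (a SciPy differential-evolution search stands in here for the Particle Swarm Optimization used by the authors). Parameter names and the phase order are assumptions.

```python
import numpy as np
from scipy.optimize import differential_evolution

def chirp_model(t, amp, t0, width, f0, f1, f2):
    """Gaussian-windowed chirp with polynomial instantaneous phase:
    phi(t) = 2*pi*(f0*tau + f1*tau**2 + f2*tau**3), tau = t - t0."""
    tau = t - t0
    envelope = amp * np.exp(-tau**2 / (2.0 * width**2))
    return envelope * np.cos(2.0 * np.pi * (f0 * tau + f1 * tau**2 + f2 * tau**3))

def fit_sep(t, sep, bounds):
    """Fit chirp parameters to a measured SEP trace by least squares; latency
    and amplitude features can then be read off the fitted parameters."""
    cost = lambda p: np.sum((chirp_model(t, *p) - sep) ** 2)
    return differential_evolution(cost, bounds).x
```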
DOE Office of Scientific and Technical Information (OSTI.GOV)
Banerjee, Tirtha; De Roo, Frederik; Mauder, Matthias
2017-10-19
Parameterizations of biosphere-atmosphere interaction processes in climate models and other hydrological applications require characterization of turbulent transport of momentum and scalars between vegetation canopies and the atmosphere, which is often modeled using a turbulent analogy to molecular diffusion processes. However, simple flux-gradient approaches (K-theory) fail for canopy turbulence. One cause is turbulent transport by large coherent eddies at the canopy scale, which can be linked to sweep-ejection events and bear signatures of non-local organized eddy motions. K-theory, which parameterizes the turbulent flux or stress as proportional to the local concentration or velocity gradient, fails to account for these non-local organized motions. The connection between sweep-ejection cycles and the local turbulent flux can be traced back to the turbulence triple moment $$\overline{C'W'W'}$$. In this work, we use large-eddy simulation to investigate the diagnostic connection between the failure of K-theory and sweep-ejection motions. Analyzed schemes are quadrant analysis (QA) and a complete and incomplete cumulant expansion (CEM and ICEM) method. The latter approaches introduce a turbulence timescale in the modeling. Furthermore, we find that the momentum flux needs a different formulation for the turbulence timescale than the sensible heat flux. In conclusion, accounting for buoyancy in stratified conditions is also deemed to be important in addition to accounting for non-local events to predict the correct momentum or scalar fluxes.
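For reference, the local flux-gradient closure the abstract refers to has the standard textbook form below (notation assumed here); the non-local schemes discussed add corrections whose budgets involve the triple moment $$\overline{C'W'W'}$$.

$$\overline{W'C'} \;=\; -K_c\,\frac{\partial \overline{C}}{\partial z}$$

The cumulant-expansion (CEM/ICEM) corrections effectively relax this purely local relation by introducing a turbulence timescale, which is why, as noted above, momentum and heat can require different timescale formulations.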
NASA Astrophysics Data System (ADS)
Xu, Gang; Li, Ming; Mourrain, Bernard; Rabczuk, Timon; Xu, Jinlan; Bordas, Stéphane P. A.
2018-01-01
In this paper, we propose a general framework for constructing IGA-suitable planar B-spline parameterizations from given complex CAD boundaries consisting of a set of B-spline curves. Instead of forming the computational domain by a simple boundary, planar domains with high genus and more complex boundary curves are considered. Firstly, some pre-processing operations including Bézier extraction and subdivision are performed on each boundary curve in order to generate a high-quality planar parameterization; then a robust planar domain partition framework is proposed to construct high-quality patch-meshing results with few singularities from the discrete boundary formed by connecting the end points of the resulting boundary segments. After the topology information generation of quadrilateral decomposition, the optimal placement of interior Bézier curves corresponding to the interior edges of the quadrangulation is constructed by a global optimization method to achieve a patch-partition with high quality. Finally, after the imposition of C1/G1-continuity constraints on the interface of neighboring Bézier patches with respect to each quad in the quadrangulation, the high-quality Bézier patch parameterization is obtained by a C1-constrained local optimization method to achieve uniform and orthogonal iso-parametric structures while keeping the continuity conditions between patches. The efficiency and robustness of the proposed method are demonstrated by several examples which are compared to results obtained by the skeleton-based parameterization approach.
Nogeire, Theresa M.; Davis, Frank W.; Crooks, Kevin R.; McRae, Brad H.; Lyren, Lisa M.; Boydston, Erin E.
2015-01-01
As natural habitats become fragmented by human activities, animals must increasingly move through human-dominated systems, particularly agricultural landscapes. Mapping areas important for animal movement has therefore become a key part of conservation planning. Models of landscape connectivity are often parameterized using expert opinion and seldom distinguish between the risks and barriers presented by different crop types. Recent research, however, suggests different crop types, such as row crops and orchards, differ in the degree to which they facilitate or impede species movements. Like many mammalian carnivores, bobcats (Lynx rufus) are sensitive to fragmentation and loss of connectivity between habitat patches. We investigated how distinguishing between different agricultural land covers might change conclusions about the relative conservation importance of different land uses in a Mediterranean ecosystem. Bobcats moved relatively quickly in row crops but relatively slowly in orchards, at rates similar to those in natural habitats of woodlands and scrub. We found that parameterizing a connectivity model using empirical data on bobcat movements in agricultural lands and other land covers, instead of parameterizing the model using habitat suitability indices based on expert opinion, altered locations of predicted animal movement routes. These results emphasize that differentiating between types of agriculture can alter conservation planning outcomes.
Parameterizing the Transport Pathways for Cell Invasion in Complex Scaffold Architectures
Ashworth, Jennifer C.; Mehr, Marco; Buxton, Paul G.; Best, Serena M.
2016-01-01
Interconnecting pathways through porous tissue engineering scaffolds play a vital role in determining nutrient supply, cell invasion, and tissue ingrowth. However, the global use of the term “interconnectivity” often fails to describe the transport characteristics of these pathways, giving no clear indication of their potential to support tissue synthesis. This article uses new experimental data to provide a critical analysis of reported methods for the description of scaffold transport pathways, ranging from qualitative image analysis to thorough structural parameterization using X-ray Micro-Computed Tomography. In the collagen scaffolds tested in this study, it was found that the proportion of pore space perceived to be accessible dramatically changed depending on the chosen method of analysis. Measurements of % interconnectivity as defined in this manner varied as a function of direction and connection size, and also showed a dependence on measurement length scale. As an alternative, a method for transport pathway parameterization was investigated, using percolation theory to calculate the diameter of the largest sphere that can travel to infinite distance through a scaffold in a specified direction. As proof of principle, this approach was used to investigate the invasion behavior of primary fibroblasts in response to independent changes in pore wall alignment and pore space accessibility, parameterized using the percolation diameter. The result was that both properties played a distinct role in determining fibroblast invasion efficiency. This example therefore demonstrates the potential of the percolation diameter as a method of transport pathway parameterization, to provide key structural criteria for application-based scaffold design. PMID:26888449
An image morphing technique based on optimal mass preserving mapping.
Zhu, Lei; Yang, Yan; Haker, Steven; Tannenbaum, Allen
2007-06-01
Image morphing, or image interpolation in the time domain, deals with the metamorphosis of one image into another. In this paper, a new class of image morphing algorithms is proposed based on the theory of optimal mass transport. The L2 mass moving energy functional is modified by adding an intensity penalizing term, in order to reduce the undesired double exposure effect. It is an intensity-based approach and, thus, is parameter free. The optimal warping function is computed using an iterative gradient descent approach. This proposed morphing method is also extended to doubly connected domains using a harmonic parameterization technique, along with finite-element methods.
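For context, the L2 mass-moving (Monge-Kantorovich) energy minimized by this class of methods can be written as below, with the mass-preservation constraint expressed through the Jacobian of the warp; this is the standard form, and the intensity-penalizing term mentioned in the abstract is added on top of it (its exact form is as given in the paper, not reproduced here).

$$M[u] \;=\; \int_{\Omega} \mu_0(x)\,\lVert u(x) - x \rVert^2 \, dx, \qquad \mu_0(x) \;=\; \lvert D u(x) \rvert\, \mu_1\bigl(u(x)\bigr),$$

where $$\mu_0$$ and $$\mu_1$$ are the source and target image densities and $$u$$ is the warping map.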
An Image Morphing Technique Based on Optimal Mass Preserving Mapping
Zhu, Lei; Yang, Yan; Haker, Steven; Tannenbaum, Allen
2013-01-01
Image morphing, or image interpolation in the time domain, deals with the metamorphosis of one image into another. In this paper, a new class of image morphing algorithms is proposed based on the theory of optimal mass transport. The L2 mass moving energy functional is modified by adding an intensity penalizing term, in order to reduce the undesired double exposure effect. It is an intensity-based approach and, thus, is parameter free. The optimal warping function is computed using an iterative gradient descent approach. This proposed morphing method is also extended to doubly connected domains using a harmonic parameterization technique, along with finite-element methods. PMID:17547128
Development of Turbulent Biological Closure Parameterizations
2011-09-30
LONG-TERM GOAL: The long-term goals of this project are: (1) to develop a theoretical framework to quantify turbulence induced NPZ interactions. (2) to apply the theory to develop parameterizations to be used in realistic environmental physical biological coupling numerical models. OBJECTIVES: Connect the Goodman and Robinson (2008) statistically based pdf theory to Advection Diffusion Reaction (ADR) modeling of NPZ interaction.
An empirical test of a diffusion model: predicting clouded apollo movements in a novel environment.
Ovaskainen, Otso; Luoto, Miska; Ikonen, Iiro; Rekola, Hanna; Meyke, Evgeniy; Kuussaari, Mikko
2008-05-01
Functional connectivity is a fundamental concept in conservation biology because it sets the level of migration and gene flow among local populations. However, functional connectivity is difficult to measure, largely because it is hard to acquire and analyze movement data from heterogeneous landscapes. Here we apply a Bayesian state-space framework to parameterize a diffusion-based movement model using capture-recapture data on the endangered clouded apollo butterfly. We test whether the model is able to disentangle the inherent movement behavior of the species from landscape structure and sampling artifacts, which is a necessity if the model is to be used to examine how movements depend on landscape structure. We show that this is the case by demonstrating that the model, parameterized with data from a reference landscape, correctly predicts movements in a structurally different landscape. In particular, the model helps to explain why a movement corridor that was constructed as a management measure failed to increase movement among local populations. We illustrate how the parameterized model can be used to derive biologically relevant measures of functional connectivity, thus linking movement data with models of spatial population dynamics.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Guang; Fan, Jiwen; Xu, Kuan-Man
2015-06-01
Arakawa and Wu (2013, hereafter referred to as AW13) recently developed a formal approach to a unified parameterization of atmospheric convection for high-resolution numerical models. The work is based on ideas formulated by Arakawa et al. (2011). It lays the foundation for a new parameterization pathway in the era of high-resolution numerical modeling of the atmosphere. The key parameter in this approach is the convective cloud fraction σ. In conventional parameterization, it is assumed that σ ≪ 1. This assumption is no longer valid when the horizontal resolution of numerical models approaches a few to a few tens of kilometers, since in such situations the convective cloud fraction can be comparable to unity. Therefore, they argue that the conventional approach to parameterizing convective transport must include a factor 1 − σ in order to unify the parameterization for the full range of model resolutions so that it is scale-aware and valid for large convective cloud fractions. While AW13's approach provides important guidance for future convective parameterization development, in this note we intend to show that the conventional approach already has this scale-awareness factor 1 − σ built in, although not recognized for the last forty years. Therefore, it should work well even in situations of large convective cloud fractions in high-resolution numerical models.
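The argument can be made explicit with a little algebra (notation assumed here, with subscripts c and e for the convective and environmental parts of a grid cell and $$\sigma$$ the convective fraction). The AW13 unified eddy transport is

$$\overline{w'\psi'} \;=\; \sigma(1-\sigma)\,(w_c - w_e)\,(\psi_c - \psi_e),$$

while the conventional mass-flux form is written against the grid mean $$\overline{\psi} = \sigma\psi_c + (1-\sigma)\psi_e$$, so that, with $$w_e \approx 0$$,

$$\sigma\, w_c\,(\psi_c - \overline{\psi}) \;=\; \sigma(1-\sigma)\, w_c\,(\psi_c - \psi_e),$$

which already carries the factor $$1-\sigma$$, as the note argues.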
Stochastic Convection Parameterizations: The Eddy-Diffusivity/Mass-Flux (EDMF) Approach (Invited)
NASA Astrophysics Data System (ADS)
Teixeira, J.
2013-12-01
In this presentation it is argued that moist convection parameterizations need to be stochastic in order to be realistic - even in deterministic atmospheric prediction systems. A new unified convection and boundary layer parameterization (EDMF) that optimally combines the Eddy-Diffusivity (ED) approach for smaller-scale boundary layer mixing with the Mass-Flux (MF) approach for larger-scale plumes is discussed. It is argued that for realistic simulations stochastic methods have to be employed in this new unified EDMF. Positive results from the implementation of the EDMF approach in atmospheric models are presented.
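The standard EDMF flux decomposition for a conserved scalar $$\phi$$ combines the two transport mechanisms the abstract describes (this is the widely used form of the scheme, not a detail specific to this presentation):

$$\overline{w'\phi'} \;=\; -K\,\frac{\partial \overline{\phi}}{\partial z} \;+\; M\,\bigl(\phi_u - \overline{\phi}\bigr),$$

where $$K$$ is the eddy diffusivity representing small-scale boundary-layer mixing, $$M$$ is the mass flux of the coherent plumes, and $$\phi_u$$ is the updraft value; stochastic elements can enter, for example, through the sampled plume properties.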
Connections between Graphical Gaussian Models and Factor Analysis
ERIC Educational Resources Information Center
Salgueiro, M. Fatima; Smith, Peter W. F.; McDonald, John W.
2010-01-01
Connections between graphical Gaussian models and classical single-factor models are obtained by parameterizing the single-factor model as a graphical Gaussian model. Models are represented by independence graphs, and associations between each manifest variable and the latent factor are measured by factor partial correlations. Power calculations…
NASA Astrophysics Data System (ADS)
Farzamian, Mohammad; Monteiro Santos, Fernando A.; Khalil, Mohamed A.
2017-12-01
The coupled hydrogeophysical approach has proved to be a valuable tool for improving the use of geoelectrical data for hydrological model parameterization. In the coupled approach, hydrological parameters are directly inferred from geoelectrical measurements in a forward manner to eliminate the uncertainty connected to the independent inversion of electrical resistivity data. Several numerical studies have been conducted to demonstrate the advantages of a coupled approach; however, only a few attempts have been made to apply the coupled approach to actual field data. In this study, we developed a 1D coupled hydrogeophysical code to estimate the van Genuchten-Mualem model parameters, K_s, n, θ_r and α, from time-lapse vertical electrical sounding data collected during a constant inflow infiltration experiment. van Genuchten-Mualem parameters were sampled using the Latin hypercube sampling method to provide a full coverage of the range of each parameter from their distributions. By applying the coupled approach, vertical electrical sounding data were coupled to hydrological models inferred from van Genuchten-Mualem parameter samples to investigate the feasibility of constraining the hydrological model. The key approaches taken in the study are to (1) integrate electrical resistivity and hydrological data and avoiding data inversion, (2) estimate the total water mass recovery of electrical resistivity data and consider it in van Genuchten-Mualem parameters evaluation and (3) correct the influence of subsurface temperature fluctuations during the infiltration experiment on electrical resistivity data. The results of the study revealed that the coupled hydrogeophysical approach can improve the value of geophysical measurements in hydrological model parameterization. However, the approach cannot overcome the technical limitations of the geoelectrical method associated with resolution and of water mass recovery.
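A sketch of two building blocks of such a coupled workflow is given below: Latin hypercube sampling of the van Genuchten-Mualem parameters and a petrophysical link (Archie's law here, as a commonly used stand-in) from simulated water content to bulk resistivity, which is what lets the hydrological model be compared to vertical electrical sounding data without inverting them. Bounds, constants, and function names are illustrative assumptions, not the study's values.

```python
import numpy as np
from scipy.stats import qmc

# Illustrative bounds for (Ks [m/s], n [-], theta_r [-], alpha [1/m]).
lower, upper = [1e-6, 1.2, 0.02, 0.5], [1e-4, 3.0, 0.10, 5.0]
samples = qmc.scale(qmc.LatinHypercube(d=4, seed=0).random(200), lower, upper)

def van_genuchten_theta(h, theta_r, theta_s, alpha, n):
    """Water retention curve theta(h) for pressure head h [m, negative]."""
    m = 1.0 - 1.0 / n
    se = (1.0 + (alpha * np.abs(h)) ** n) ** (-m)
    return theta_r + (theta_s - theta_r) * se

def archie_resistivity(theta, porosity, rho_w, a=1.0, m=1.5, n_sat=2.0):
    """Convert water content to bulk electrical resistivity (Archie's law)."""
    saturation = np.clip(theta / porosity, 1e-3, 1.0)
    return a * rho_w * porosity ** (-m) * saturation ** (-n_sat)
```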
Uniting paradigms of connectivity in marine ecology.
Brown, Christopher J; Harborne, Alastair R; Paris, Claire B; Mumby, Peter J
2016-09-01
The study of connectivity of marine organisms among habitat patches has been dominated by two independent paradigms with distinct conservation strategies. One paradigm is the dispersal of larvae on ocean currents, which suggests networks of marine reserves. The other is the demersal migration of animals from nursery to adult habitats, requiring the conservation of connected ecosystem corridors. Here, we suggest that a common driver, wave exposure, links larval and demersal connectivity across the seascape. To study the effect of linked connectivities on fish abundance at reefs, we parameterize a demographic model for The Bahamas seascape using maps of habitats, empirically forced models of wave exposure and spatially realistic three-dimensional hydrological models of larval dispersal. The integrated empirical-modeling approach enabled us to study linked connectivity on a scale not currently possible by purely empirical studies. We find sheltered environments not only provide greater nursery habitat for juvenile fish but larvae spawned on adjacent reefs have higher retention, thereby creating a synergistic increase in fish abundance. Uniting connectivity paradigms to consider all life stages simultaneously can help explain the evolution of nursery habitat use and simplifies conservation advice: Reserves in sheltered environments have desirable characteristics for biodiversity conservation and can support local fisheries through adult spillover. © 2016 by the Ecological Society of America.
NASA Astrophysics Data System (ADS)
Hansen, Samantha E.; Nyblade, Andrew A.; Benoit, Margaret H.
2012-02-01
While the Cenozoic Afro-Arabian Rift System (AARS) has been the focus of numerous studies, it has long been questioned if low-velocity anomalies in the upper mantle beneath eastern Africa and western Arabia are connected, forming one large anomaly, and if any parts of the anomalous upper mantle structure extend into the lower mantle. To address these questions, we have developed a new image of P-wave velocity variations in the Afro-Arabian mantle using an adaptively parameterized tomography approach and an expanded dataset containing travel-times from earthquakes recorded on many new temporary and permanent seismic networks. Our model shows a laterally continuous, low-velocity region in the upper mantle beneath all of eastern Africa and western Arabia, extending to depths of ~ 500-700 km, as well as a lower mantle anomaly beneath southern Africa that rises from the core-mantle boundary to at least ~ 1100 km depth and possibly connects to the upper mantle anomaly across the transition zone. Geodynamic models which invoke one or more discrete plumes to explain the origin of the AARS are difficult to reconcile with the lateral and depth extent of the upper mantle low-velocity region, as are non-plume models invoking small-scale convection passively induced by lithospheric extension or by edge-flow around thick cratonic lithosphere. Instead, the low-velocity anomaly beneath the AARS can be explained by the African superplume model, where the anomalous upper mantle structure is a continuation of a large, thermo-chemical upwelling in the lower mantle beneath southern Africa. These findings provide further support for a geodynamic connection between processes in Earth's lower mantle and continental break-up within the AARS.
Shortcomings with Tree-Structured Edge Encodings for Neural Networks
NASA Technical Reports Server (NTRS)
Hornby, Gregory S.
2004-01-01
In evolutionary algorithms a common method for encoding neural networks is to use a tree structured assembly procedure for constructing them. Since node operators have difficulties in specifying edge weights and these operators are execution-order dependent, an alternative is to use edge operators. Here we identify three problems with edge operators: in the initialization phase most randomly created genotypes produce an incorrect number of inputs and outputs; variation operators can easily change the number of input/output (I/O) units; and units have a connectivity bias based on their order of creation. Instead of creating I/O nodes as part of the construction process we propose using parameterized operators to connect to preexisting I/O units. Results from experiments show that these parameterized operators greatly improve the probability of creating and maintaining networks with the correct number of I/O units, remove the connectivity bias with I/O units and produce better controllers for a goal-scoring task.
A Testbed for Model Development
NASA Astrophysics Data System (ADS)
Berry, J. A.; Van der Tol, C.; Kornfeld, A.
2014-12-01
Carbon cycle and land-surface models used in global simulations need to be computationally efficient and have a high standard of software engineering. These models also make a number of scaling assumptions to simplify the representation of complex biochemical and structural properties of ecosystems. This makes it difficult to use these models to test new ideas for parameterizations or to evaluate scaling assumptions. The stripped-down nature of these models also makes it difficult to "connect" with current disciplinary research, which tends to be focused on much more nuanced topics than can be included in the models. In our opinion/experience this indicates the need for another type of model that can more faithfully represent the complexity of ecosystems and which has the flexibility to change or interchange parameterizations and to run optimization codes for calibration. We have used the SCOPE (Soil Canopy Observation, Photochemistry and Energy fluxes) model in this way to develop, calibrate, and test parameterizations for solar induced chlorophyll fluorescence, OCS exchange and stomatal parameterizations at the canopy scale. Examples of the data sets and procedures used to develop and test new parameterizations are presented.
A Novel Shape Parameterization Approach
NASA Technical Reports Server (NTRS)
Samareh, Jamshid A.
1999-01-01
This paper presents a novel parameterization approach for complex shapes suitable for a multidisciplinary design optimization application. The approach consists of two basic concepts: (1) parameterizing the shape perturbations rather than the geometry itself and (2) performing the shape deformation by means of the soft objects animation algorithms used in computer graphics. Because the formulation presented in this paper is independent of grid topology, we can treat computational fluid dynamics and finite element grids in a similar manner. The proposed approach is simple, compact, and efficient. Also, the analytical sensitivity derivatives are easily computed for use in a gradient-based optimization. This algorithm is suitable for low-fidelity (e.g., linear aerodynamics and equivalent laminated plate structures) and high-fidelity analysis tools (e.g., nonlinear computational fluid dynamics and detailed finite element modeling). This paper contains the implementation details of parameterizing for planform, twist, dihedral, thickness, and camber. The results are presented for a multidisciplinary design optimization application consisting of nonlinear computational fluid dynamics, detailed computational structural mechanics, performance, and a simple propulsion module.
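A minimal sketch of the central idea, parameterizing the perturbation rather than the geometry: the baseline grid is untouched and only a displacement field, expressed as a weighted sum of smooth shape modes, is applied, so the same routine serves CFD and finite-element grids alike. Names and array shapes are illustrative.

```python
import numpy as np

def deform_grid(points, basis, weights):
    """points : (N, 3) baseline grid coordinates
    basis  : (K, N, 3) smooth perturbation shapes (e.g. twist, camber modes)
    weights: (K,) design variables
    Returns the deformed grid. The sensitivity of the result with respect to
    design variable k is simply basis[k], which is why analytical derivatives
    are cheap in this formulation."""
    return points + np.tensordot(weights, basis, axes=1)
```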
NASA Technical Reports Server (NTRS)
Freitas, Saulo R.; Grell, Georg; Molod, Andrea; Thompson, Matthew A.
2017-01-01
We implemented and began to evaluate an alternative convection parameterization for the NASA Goddard Earth Observing System (GEOS) global model. The parameterization is based on the mass flux approach with several closures, for equilibrium and non-equilibrium convection, and includes scale and aerosol awareness functionalities. Recently, the scheme has been extended to a tri-modal spectral size approach to simulate the transition from shallow, mid, and deep convection regimes. In addition, the inclusion of a new closure for non-equilibrium convection resulted in a substantial gain of realism in model simulation of the diurnal cycle of convection over the land. Here, we briefly introduce the recent developments, implementation, and preliminary results of this parameterization in the NASA GEOS modeling system.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Saenz, Juan A.; Chen, Qingshan; Ringler, Todd
Recent work has shown that taking the thickness-weighted average (TWA) of the Boussinesq equations in buoyancy coordinates results in exact equations governing the prognostic residual mean flow where eddy–mean flow interactions appear in the horizontal momentum equations as the divergence of the Eliassen–Palm flux tensor (EPFT). It has been proposed that, given the mathematical tractability of the TWA equations, the physical interpretation of the EPFT, and its relation to potential vorticity fluxes, the TWA is an appropriate framework for modeling ocean circulation with parameterized eddies. The authors test the feasibility of this proposition and investigate the connections between the TWA framework and the conventional framework used in models, where Eulerian mean flow prognostic variables are solved for. Using the TWA framework as a starting point, this study explores the well-known connections between vertical transfer of horizontal momentum by eddy form drag and eddy overturning by the bolus velocity, used by Greatbatch and Lamb and Gent and McWilliams to parameterize eddies. After implementing the TWA framework in an ocean general circulation model, we verify our analysis by comparing the flows in an idealized Southern Ocean configuration simulated using the TWA and conventional frameworks with the same mesoscale eddy parameterization.
Urban Canopy Effects in Regional Climate Simulations - An Inter-Model Comparison
NASA Astrophysics Data System (ADS)
Halenka, T.; Huszar, P.; Belda, M.; Karlicky, J.
2017-12-01
To assess the impact of cities and urban surfaces on climate, a modeling approach is often used in which an urban parameterization is included in the land-surface interactions. This is especially important when going to higher resolution, which is a common trend in both operational weather prediction and regional climate modelling. Model descriptions of urban-canopy meteorological effects can, however, differ considerably, depending in particular on the underlying surface models and the urban canopy parameterizations, which represents a source of uncertainty. Assessing this uncertainty is important for adaptation and mitigation measures often applied in big cities, especially from a climate change perspective, and is one of the main tasks of the new project OP-PPR Proof of Concept UK. In this study we contribute to the estimation of this uncertainty by performing numerous experiments to assess the urban canopy meteorological forcing over central Europe on climate for the decade 2001-2010, using two regional climate models (RegCM4 and WRF) at 10 km resolution driven by ERA-Interim reanalyses, three surface schemes (BATS and CLM4.5 for RegCM4 and Noah for WRF) and five urban canopy parameterizations: one bulk urban scheme, three single-layer schemes and a multilayer urban scheme. Effects of cities on urban and remote areas were evaluated. There are some differences in the sensitivity of individual canopy model implementations to the UHI effects, depending on season and on the size of the city as well. The effect of reduced diurnal temperature range in cities (around 2 °C in the summer mean) is noticeable in all simulations, independent of urban parameterization type and model, due to the well-known warmer summer city nights. For adaptation and mitigation purposes, the distribution of urban heat island intensity is more important than its average, as it provides information on extreme UHI effects, e.g. during heat waves. We demonstrate that for big central European cities this effect can approach 10 °C, and even for smaller cities these extremes can exceed 5 °C.
Exploring the potential of machine learning to break deadlock in convection parameterization
NASA Astrophysics Data System (ADS)
Pritchard, M. S.; Gentine, P.
2017-12-01
We explore the potential of modern machine learning tools (via TensorFlow) to replace the parameterization of deep convection in climate models. Our strategy begins by generating a large (~1 Tb) training dataset from time-step-level (30-min) output harvested from a one-year integration of a zonally symmetric, uniform-SST aquaplanet configuration of the SuperParameterized Community Atmosphere Model (SPCAM). We harvest the inputs and outputs connecting each of SPCAM's 8,192 embedded cloud-resolving model (CRM) arrays to its host climate model's arterial thermodynamic state variables to afford 143M independent training instances. We demonstrate that this dataset is sufficiently large to induce preliminary convergence for neural network prediction of the desired outputs of SP, i.e. CRM-mean convective heating and moistening profiles. Sensitivity of the machine learning convergence to the nuances of the TensorFlow implementation is discussed, as well as results from pilot tests of the neural network operating inline within SPCAM as a replacement for the (super)parameterization of convection.
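A minimal sketch of the kind of emulator described, using the Keras API shipped with TensorFlow: a dense network mapping a column's stacked thermodynamic state to CRM-mean heating and moistening profiles. Layer sizes, level counts, and variable names are assumptions for illustration, not the authors' configuration.

```python
import tensorflow as tf

n_levels = 30
n_in, n_out = 2 * n_levels, 2 * n_levels   # e.g. (T, q) in; (dT/dt, dq/dt) out

model = tf.keras.Sequential([
    tf.keras.Input(shape=(n_in,)),
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dense(n_out),            # linear output for tendencies
])
model.compile(optimizer="adam", loss="mse")
# model.fit(x_train, y_train, batch_size=1024, epochs=10, validation_split=0.1)
```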
The application of depletion curves for parameterization of subgrid variability of snow
C. H. Luce; D. G. Tarboton
2004-01-01
Parameterization of subgrid-scale variability in snow accumulation and melt is important for improvements in distributed snowmelt modelling. We have taken the approach of using depletion curves that relate fractional snow-covered area to element-average snow water equivalent to parameterize the effect of snowpack heterogeneity within a physically based mass and energy...
Better models are more effectively connected models
NASA Astrophysics Data System (ADS)
Nunes, João Pedro; Bielders, Charles; Darboux, Frederic; Fiener, Peter; Finger, David; Turnbull-Lloyd, Laura; Wainwright, John
2016-04-01
The concept of hydrologic and geomorphologic connectivity describes the processes and pathways which link sources (e.g. rainfall, snow and ice melt, springs, eroded areas and barren lands) to accumulation areas (e.g. foot slopes, streams, aquifers, reservoirs), and the spatial variations thereof. There are many examples of hydrological and sediment connectivity on a watershed scale; in consequence, a process-based understanding of connectivity is crucial to help managers understand their systems and adopt adequate measures for flood prevention, pollution mitigation and soil protection, among others. Modelling is often used as a tool to understand and predict fluxes within a catchment by complementing observations with model results. Catchment models should therefore be able to reproduce the linkages, and thus the connectivity of water and sediment fluxes within the systems under simulation. In modelling, a high level of spatial and temporal detail is desirable to ensure taking into account a maximum number of components, which then enables connectivity to emerge from the simulated structures and functions. However, computational constraints and, in many cases, lack of data prevent the representation of all relevant processes and spatial/temporal variability in most models. In most cases, therefore, the level of detail selected for modelling is too coarse to represent the system in a way in which connectivity can emerge; a problem which can be circumvented by representing fine-scale structures and processes within coarser scale models using a variety of approaches. This poster focuses on the results of ongoing discussions on modelling connectivity held during several workshops within COST Action Connecteur. It assesses the current state of the art of incorporating the concept of connectivity in hydrological and sediment models, as well as the attitudes of modellers towards this issue. The discussion will focus on the different approaches through which connectivity can be represented in models: either by allowing it to emerge from model behaviour or by parameterizing it inside model structures; and on the appropriate scale at which processes should be represented explicitly or implicitly. It will also explore how modellers themselves approach connectivity through the results of a community survey. Finally, it will present the outline of an international modelling exercise aimed at assessing how different modelling concepts can capture connectivity in real catchments.
NASA Astrophysics Data System (ADS)
Liu, Yuefeng; Duan, Zhuoyi; Chen, Song
2017-10-01
Aerodynamic shape optimization design aiming at improving the efficiency of an aircraft has always been a challenging task, especially when the configuration is complex. In this paper, a hybrid FFD-RBF surface parameterization approach has been proposed for designing a civil transport wing-body configuration. This approach is simple and efficient, with the FFD technique used for parameterizing the wing shape and the RBF interpolation approach used for handling the update of the wing-body junction region. Furthermore, combined with the Cuckoo Search algorithm and a Kriging surrogate model with an expected improvement adaptive sampling criterion, an aerodynamic shape optimization design system has been established. Finally, the aerodynamic shape optimization design of the DLR F4 wing-body configuration has been carried out as a test case, and the result shows that the approach proposed in this paper is effective.
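One plausible way the RBF half of such a hybrid scheme can work (SciPy's RBFInterpolator is used here purely for illustration; function and variable names are assumptions): the FFD box deforms the wing surface, and the resulting displacement field is interpolated onto the junction points so the two components remain watertight.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

def propagate_wing_deformation(wing_pts_old, wing_pts_new, junction_pts):
    """Interpolate the FFD-induced wing-surface displacements with radial basis
    functions and apply them to the wing-body junction points."""
    displacement = wing_pts_new - wing_pts_old              # (N, 3)
    rbf = RBFInterpolator(wing_pts_old, displacement, kernel="thin_plate_spline")
    return junction_pts + rbf(junction_pts)
```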
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liou, Kuo-Nan
2016-02-09
Under the support of the aforementioned DOE Grant, we have made two fundamental contributions to atmospheric and climate sciences: (1) the development of an efficient 3-D radiative transfer parameterization for application to intense and intricate inhomogeneous mountain/snow regions, and (2) a stochastic parameterization for light absorption by internally mixed black carbon and dust particles in snow grains, providing understanding and physical insight into snow albedo reduction in climate models. With reference to item (1), we divided solar fluxes reaching mountain surfaces into five components: direct and diffuse fluxes, direct- and diffuse-reflected fluxes, and coupled mountain-mountain flux. "Exact" 3D Monte Carlo photon tracing computations can then be performed for these solar flux components to compare with those calculated from the conventional plane-parallel (PP) radiative transfer program readily available in climate models. Subsequently, parameterizations of the deviations of 3D from PP results for the five flux components were carried out by means of multiple linear regression analysis associated with topographic information, including elevation, solar incident angle, sky view factor, and terrain configuration factor. We derived five regression equations with high statistical correlations for the flux deviations and successfully incorporated this efficient parameterization into the WRF model, which was used as the testbed in connection with the Fu-Liou-Gu PP radiation scheme that has been included in the WRF physics package. Incorporating this 3D parameterization program, we conducted simulations with WRF and CCSM4 to understand and evaluate the mountain/snow effect on snow albedo reduction during the seasonal transition and the interannual variability of snowmelt, cloud cover, and precipitation over the Western United States, as presented in the final report. With reference to item (2), we developed in our previous research a geometric-optics surface-wave approach (GOS) for the computation of light absorption and scattering by complex and inhomogeneous particles for application to aggregates and snow grains with external and internal mixing structures. We demonstrated that a small black carbon (BC) particle on the order of 1 μm internally mixed with snow grains could effectively reduce visible snow albedo by as much as 5-10%. Following this work and within the context of DOE support, we have made two key accomplishments presented in the attached final report.
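The regression step for item (1) can be sketched as an ordinary least-squares fit of the 3-D minus plane-parallel flux deviation against the topographic predictors named above, one fit per flux component; the exact regressors and functional form used in the scheme may differ from this illustration.

```python
import numpy as np

def fit_flux_deviation(elevation, cos_solar_zenith, sky_view, terrain_cfg, deviation):
    """Least-squares regression of the 3-D minus plane-parallel flux deviation
    on topographic predictors, to be applied online as deviation ~ X @ coeffs."""
    X = np.column_stack([np.ones_like(elevation), elevation,
                         cos_solar_zenith, sky_view, terrain_cfg])
    coeffs, *_ = np.linalg.lstsq(X, deviation, rcond=None)
    return coeffs
```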
The Hubbard Dimer: A Complete DFT Solution to a Many-Body Problem
NASA Astrophysics Data System (ADS)
Smith, Justin; Carrascal, Diego; Ferrer, Jaime; Burke, Kieron
2015-03-01
In this work we explain the relationship between density functional theory and strongly correlated models using the simplest possible example, the two-site asymmetric Hubbard model. We discuss the connection between the lattice model and real space, and how this is a simple model for stretched H2. We can solve this elementary example analytically, and with that we can illuminate the underlying logic and aims of DFT. While the many-body solution is analytic, the density functional is given only implicitly. We overcome this difficulty by creating a highly accurate parameterization of the exact functional. We use this parameterization to perform benchmark calculations of the correlation kinetic energy, the adiabatic connection, etc. We also test Hartree-Fock and the Bethe Ansatz Local Density Approximation. We also discuss and illustrate the derivative discontinuity in the exchange-correlation energy and the infamous gap problem in DFT. DGE-1321846, DE-FG02-08ER46496.
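For reference, the two-site (asymmetric) Hubbard Hamiltonian the abstract refers to can be written as

$$\hat{H} \;=\; -t\sum_{\sigma}\bigl(\hat{c}^{\dagger}_{1\sigma}\hat{c}_{2\sigma} + \hat{c}^{\dagger}_{2\sigma}\hat{c}_{1\sigma}\bigr) \;+\; U\sum_{i=1}^{2}\hat{n}_{i\uparrow}\hat{n}_{i\downarrow} \;+\; \sum_{i=1}^{2} v_i\,\hat{n}_i,$$

with hopping $$t$$, on-site repulsion $$U$$, and site potentials $$v_i$$ whose asymmetry sets the "external potential"; the site-occupation difference then plays the role of the density in the exact-functional construction described above.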
NASA Astrophysics Data System (ADS)
Huang, Dong; Liu, Yangang
2014-12-01
Subgrid-scale variability is one of the main reasons why parameterizations are needed in large-scale models. Although some parameterizations started to address the issue of subgrid variability by introducing a subgrid probability distribution function for relevant quantities, the spatial structure has been typically ignored and thus the subgrid-scale interactions cannot be accounted for physically. Here we present a new statistical-physics-like approach whereby the spatial autocorrelation function can be used to physically capture the net effects of subgrid cloud interaction with radiation. The new approach is able to faithfully reproduce the Monte Carlo 3D simulation results with several orders less computational cost, allowing for more realistic representation of cloud radiation interactions in large-scale models.
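A minimal sketch of the statistic at the center of this approach: the normalized spatial autocorrelation of a two-dimensional (e.g. cloud water) field, computed here via FFT under periodic boundary conditions. How that statistic is then mapped onto radiative effects is specific to the scheme and not reproduced here.

```python
import numpy as np

def spatial_autocorrelation(field):
    """Normalized, circular spatial autocorrelation of a 2-D field."""
    anomaly = field - field.mean()
    power = np.abs(np.fft.fft2(anomaly)) ** 2
    corr = np.fft.ifft2(power).real
    corr /= corr[0, 0]                  # unit correlation at zero lag
    return np.fft.fftshift(corr)        # zero lag moved to the array center
```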
Subgrid-scale parameterization and low-frequency variability: a response theory approach
NASA Astrophysics Data System (ADS)
Demaeyer, Jonathan; Vannitsem, Stéphane
2016-04-01
Weather and climate models are limited in the possible range of resolved spatial and temporal scales. However, due to the huge space- and time-scale ranges involved in Earth system dynamics, the effects of many sub-grid processes must be parameterized. These parameterizations have an impact on forecasts or projections. They could also affect the low-frequency variability present in the system (such as that associated with ENSO or NAO). An important question is therefore what the impact of stochastic parameterizations is on the low-frequency variability generated by the system and its model representation. In this context, we consider a stochastic subgrid-scale parameterization based on Ruelle's response theory, as proposed in Wouters and Lucarini (2012). We test this approach in the context of a low-order coupled ocean-atmosphere model, detailed in Vannitsem et al. (2015), for which part of the atmospheric modes is considered unresolved. A natural separation of the phase space into a slow invariant set and its fast complement allows for an analytical derivation of the different terms involved in the parameterization, namely the average, the fluctuation and the long-memory terms. Its application to the low-order system reveals that a considerable correction of the low-frequency variability along the invariant subset can be obtained. This new approach of scale separation opens new avenues for subgrid-scale parameterizations in multiscale systems used for climate forecasts. References: Vannitsem S, Demaeyer J, De Cruz L, Ghil M. 2015. Low-frequency variability and heat transport in a low-order nonlinear coupled ocean-atmosphere model. Physica D: Nonlinear Phenomena 309: 71-85. Wouters J, Lucarini V. 2012. Disentangling multi-level systems: averaging, correlations and memory. Journal of Statistical Mechanics: Theory and Experiment 2012(03): P03003.
Saenz, Juan A.; Chen, Qingshan; Ringler, Todd
2015-05-19
Recent work has shown that taking the thickness-weighted average (TWA) of the Boussinesq equations in buoyancy coordinates results in exact equations governing the prognostic residual mean flow where eddy–mean flow interactions appear in the horizontal momentum equations as the divergence of the Eliassen–Palm flux tensor (EPFT). It has been proposed that, given the mathematical tractability of the TWA equations, the physical interpretation of the EPFT, and its relation to potential vorticity fluxes, the TWA is an appropriate framework for modeling ocean circulation with parameterized eddies. The authors test the feasibility of this proposition and investigate the connections between the TWA framework and the conventional framework used in models, where Eulerian mean flow prognostic variables are solved for. Using the TWA framework as a starting point, this study explores the well-known connections between vertical transfer of horizontal momentum by eddy form drag and eddy overturning by the bolus velocity, used by Greatbatch and Lamb and Gent and McWilliams to parameterize eddies. After implementing the TWA framework in an ocean general circulation model, we verify our analysis by comparing the flows in an idealized Southern Ocean configuration simulated using the TWA and conventional frameworks with the same mesoscale eddy parameterization.
Andrello, Marco; Mouillot, David; Beuvier, Jonathan; Albouy, Camille; Thuiller, Wilfried; Manel, Stéphanie
2013-01-01
Marine protected areas (MPAs) are major tools to protect biodiversity and sustain fisheries. For species with a sedentary adult phase and a dispersive larval phase, the effectiveness of MPA networks for population persistence depends on connectivity through larval dispersal. However, connectivity patterns between MPAs remain largely unknown at large spatial scales. Here, we used a biophysical model to evaluate connectivity between MPAs in the Mediterranean Sea, a region of extremely rich biodiversity that is currently protected by a system of approximately a hundred MPAs. The model was parameterized according to the dispersal capacity of the dusky grouper Epinephelus marginatus, an archetypal conservation-dependent species, with high economic importance and emblematic in the Mediterranean. Using various connectivity metrics and graph theory, we showed that Mediterranean MPAs are far from constituting a true, well-connected network. On average, each MPA was directly connected to four others and MPAs were clustered into several groups. Two MPAs (one in the Balearic Islands and one in Sardinia) emerged as crucial nodes for ensuring multi-generational connectivity. The high heterogeneity of MPA distribution, with low density in the South-Eastern Mediterranean, coupled with a mean dispersal distance of 120 km, leaves about 20% of the continental shelf without any larval supply. This low connectivity, here demonstrated for a major Mediterranean species, poses new challenges for the creation of a future Mediterranean network of well-connected MPAs providing recruitment to the whole continental shelf. This issue is even more critical given that the expected reduction of pelagic larval duration following sea temperature rise will likely decrease connectivity even more. PMID:23861917
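A sketch of the graph-theoretic step described in the abstract, assuming a modeled larval dispersal probability matrix is available (threshold value, names, and metrics below are illustrative):

```python
import numpy as np
import networkx as nx

def mpa_network(dispersal_prob, threshold=1e-4):
    """Directed MPA connectivity graph from a dispersal matrix P, where
    P[i, j] is the modeled probability that a larva released in MPA i
    settles in MPA j."""
    g = nx.DiGraph()
    n = dispersal_prob.shape[0]
    g.add_nodes_from(range(n))
    for i in range(n):
        for j in range(n):
            if i != j and dispersal_prob[i, j] > threshold:
                g.add_edge(i, j, weight=dispersal_prob[i, j])
    return g

# Typical metrics for identifying crucial nodes and clusters:
# nx.betweenness_centrality(g, weight="weight"), nx.degree_centrality(g),
# and the weakly connected components of g.
```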
Approaches for Subgrid Parameterization: Does Scaling Help?
NASA Astrophysics Data System (ADS)
Yano, Jun-Ichi
2016-04-01
Arguably, scaling behavior is a well-established fact in many geophysical systems, and there are already many theoretical studies elucidating this issue. However, the scaling law has been slow to be introduced into "operational" geophysical modelling, notably weather forecast and climate projection models. The main purpose of this presentation is to ask why, and to try to answer this question. As a reference point, the presentation reviews the three major approaches to traditional subgrid parameterization: moment, PDF (probability density function), and mode decomposition. The moment expansion is a standard method for describing subgrid-scale turbulent flows in both the atmosphere and the oceans. The PDF approach is intuitively appealing as it deals with the distribution of variables at the subgrid scale in a more direct manner. The third category, originally proposed by Aubry et al. (1988) in the context of wall boundary-layer turbulence, is specifically designed to represent coherencies in a compact manner by a low-dimensional dynamical system. Their original proposal adopts the proper orthogonal decomposition (POD, or empirical orthogonal functions, EOF) as the mode-decomposition basis, but the methodology can easily be generalized to any decomposition basis. The mass-flux formulation currently adopted in the majority of atmospheric models for parameterizing convection can also be considered a special case of the mode decomposition, adopting segmentally constant modes as the expansion basis. The mode decomposition can, furthermore, be re-interpreted as a type of Galerkin approach for numerically modelling subgrid-scale processes. A simple extrapolation of this re-interpretation further suggests that the subgrid parameterization problem may be re-interpreted as a type of mesh-refinement problem in numerical modelling, and we see a link between the subgrid parameterization and downscaling problems along this line. The mode decomposition approach would also be the best framework for linking the traditional parameterizations and the scaling perspectives. However, by seeing the link more clearly, we also see the strengths and weaknesses of introducing scaling perspectives into parameterizations. Any diagnosis under a mode decomposition would immediately reveal the power-law nature of the spectrum; exploiting this knowledge in an operational parameterization, however, is a different story. It is symbolic that POD studies have focused on representing the largest-scale coherency within a grid box under a high truncation, and this problem is already hard enough. Looked at differently, the scaling law is a very concise way of characterizing many subgrid-scale variabilities. We may even argue that the scaling law can provide almost complete subgrid-scale information for constructing a parameterization, but with a major missing link: its amplitude must be specified by an additional condition. This condition is called "closure" in the parameterization problem and is known to be a tough problem. We should also realize that studies of scaling behavior tend to be statistical in the sense that they hardly provide complete information for constructing a parameterization: can we specify the coefficients of all the decomposition modes by a scaling law perfectly when the first few leading modes are specified?
Arguably, the renormalization group (RNG) is a very powerful tool for reducing a system with scaling behavior to a low dimension, say, under an appropriate mode-decomposition procedure. However, the RNG is an analytical tool: it is extremely hard to apply to real, complex geophysical systems. It appears that we still have a long way to go before we can begin to exploit the scaling law to construct operational subgrid parameterizations in an effective manner.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, Dong; Liu, Yangang
2014-12-18
Subgrid-scale variability is one of the main reasons why parameterizations are needed in large-scale models. Although some parameterizations started to address the issue of subgrid variability by introducing a subgrid probability distribution function for relevant quantities, the spatial structure has been typically ignored and thus the subgrid-scale interactions cannot be accounted for physically. Here we present a new statistical-physics-like approach whereby the spatial autocorrelation function can be used to physically capture the net effects of subgrid cloud interaction with radiation. The new approach is able to faithfully reproduce the Monte Carlo 3D simulation results with several orders less computational cost, allowing for more realistic representation of cloud radiation interactions in large-scale models.
How to assess the impact of a physical parameterization in simulations of moist convection?
NASA Astrophysics Data System (ADS)
Grabowski, Wojciech
2017-04-01
A numerical model capable of simulating moist convection (e.g., a cloud-resolving model or a large-eddy simulation model) consists of a fluid flow solver combined with required representations (i.e., parameterizations) of physical processes. The latter typically include cloud microphysics, radiative transfer, and unresolved turbulent transport. Traditional approaches to investigating the impacts of such parameterizations on convective dynamics involve parallel simulations with different parameterization schemes or with different scheme parameters. Such methodologies are not reliable because of the natural variability of a cloud field that is affected by the feedback between the physics and the dynamics. For instance, changing the cloud microphysics typically leads to a different realization of the cloud-scale flow, and separating dynamical and microphysical impacts is difficult. This presentation will introduce a novel modeling methodology, piggybacking, that allows studying the impact of a physical parameterization on cloud dynamics with confidence. The focus will be on the impact of the cloud microphysics parameterization. Specific examples of the piggybacking approach will include simulations concerning the hypothesized deep convection invigoration in polluted environments, the validity of the saturation adjustment in modeling condensation in moist convection, and the separation of physical impacts from statistical uncertainty in simulations applying particle-based Lagrangian microphysics, the super-droplet method.
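The piggybacking idea can be summarized in a few lines of Python (names are illustrative): two parameterization schemes see exactly the same simulated flow, but only one of them feeds its tendencies back to the dynamics, so differences between the two diagnosed tendency sets isolate the parameterization's impact from flow-realization variability.

```python
def run_piggyback(state, scheme_a, scheme_b, dynamics, n_steps, dt):
    """Scheme A drives the flow; scheme B piggybacks (diagnosed only)."""
    diagnostics = []
    for _ in range(n_steps):
        tend_a = scheme_a(state)             # drives the simulation
        tend_b = scheme_b(state)             # same state, but never applied
        diagnostics.append((tend_a, tend_b))
        state = dynamics(state, tend_a, dt)  # only scheme A feeds back
    return state, diagnostics
```

Swapping which scheme drives and which piggybacks then quantifies how much the flow realization itself responds to the scheme.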
A probabilistic framework to infer brain functional connectivity from anatomical connections.
Deligianni, Fani; Varoquaux, Gael; Thirion, Bertrand; Robinson, Emma; Sharp, David J; Edwards, A David; Rueckert, Daniel
2011-01-01
We present a novel probabilistic framework to learn across several subjects a mapping from brain anatomical connectivity to functional connectivity, i.e. the covariance structure of brain activity. This prediction problem must be formulated as a structured-output learning task, as the predicted parameters are strongly correlated. We introduce a model selection framework based on cross-validation with a parametrization-independent loss function suitable to the manifold of covariance matrices. Our model is based on constraining the conditional independence structure of functional activity by the anatomical connectivity. Subsequently, we learn a linear predictor of a stationary multivariate autoregressive model. This natural parameterization of functional connectivity also enforces the positive-definiteness of the predicted covariance and thus matches the structure of the output space. Our results show that functional connectivity can be explained by anatomical connectivity on a rigorous statistical basis, and that a proper model of functional connectivity is essential to assess this link.
Observational and Modeling Studies of Clouds and the Hydrological Cycle
NASA Technical Reports Server (NTRS)
Somerville, Richard C. J.
1997-01-01
Our approach involved validating parameterizations directly against measurements from field programs, and using this validation to tune existing parameterizations and to guide the development of new ones. We have used a single-column model (SCM) to make the link between observations and parameterizations of clouds, including explicit cloud microphysics (e.g., prognostic cloud liquid water used to determine cloud radiative properties). Surface and satellite radiation measurements were used to provide an initial evaluation of the performance of the different parameterizations. The results of this evaluation were then used to develop improved cloud and cloud-radiation schemes, which were tested in GCM experiments.
Importance of Physico-Chemical Properties of Aerosols in the Formation of Arctic Ice Clouds
NASA Astrophysics Data System (ADS)
Keita, S. A.; Girard, E.
2014-12-01
Ice clouds play an important role in the Arctic weather and climate system, but interactions between aerosols, clouds and radiation are poorly understood. Consequently, it is essential to fully understand their properties and especially their formation process. Extensive measurements from ground-based sites and satellite remote sensing reveal the existence of two Types of Ice Clouds (TICs) in the Arctic during the polar night and early spring. TIC-1 are composed of non-precipitating, very small (radar-unseen) ice crystals, whereas TIC-2 are detected by both sensors and are characterized by a low concentration of large precipitating ice crystals. It is hypothesized that TIC-2 formation is linked to the acidification of aerosols, which inhibits the ice nucleating properties of ice nuclei (IN). As a result, the IN concentration is reduced in these regions, resulting in a smaller concentration of larger ice crystals. Over the past 10 years, several parameterizations of homogeneous and heterogeneous ice nucleation have been developed to reflect the various physical and chemical properties of aerosols. These parameterizations are derived from laboratory studies on aerosols of different chemical compositions. The parameterizations follow two main approaches: stochastic (nucleation is a probabilistic, time-dependent process) and singular (nucleation occurs at fixed conditions of temperature and humidity and is time-independent). This research aims to better understand the formation process of TICs using newly developed ice nucleation parameterizations. For this purpose, we implement several parameterizations (covering both approaches) into the Limited Area version of the Global Multiscale Environmental Model (GEM-LAM) and use them to simulate ice clouds observed during the Indirect and Semi-Direct Aerosol Campaign (ISDAC) in Alaska. We use both approaches, but special attention is focused on the new parameterizations of the singular approach. Simulation results for the TICs-2 observed on April 15th and 25th (polluted or acidic cases) and the TICs-1 observed on April 5th (non-polluted cases) will be presented.
Uncertainty in Modeling Dust Mass Balance and Radiative Forcing from Size Parameterization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhao, Chun; Chen, Siyu; Leung, Lai-Yung R.
2013-11-05
This study examines the uncertainties in simulating the mass balance and radiative forcing of mineral dust due to biases in the aerosol size parameterization. Simulations are conducted quasi-globally (180°W-180°E and 60°S-70°N) using the WRF-Chem model with three different approaches to represent the aerosol size distribution (8-bin, 4-bin, and 3-mode). The biases of the 3-mode and 4-bin approaches against a relatively more accurate 8-bin approach in simulating dust mass balance and radiative forcing are identified. Compared to the 8-bin approach, the 4-bin approach simulates similar but coarser size distributions of dust particles in the atmosphere, while the 3-mode approach retains more fine dust particles but fewer coarse dust particles due to the prescribed geometric standard deviation (σg) of each mode. Although the 3-mode approach yields up to 10 days longer dust mass lifetime over the remote oceanic regions than the 8-bin approach, the three size approaches produce similar dust mass lifetimes (3.2 days to 3.5 days) on quasi-global average, reflecting that the global dust mass lifetime is mainly determined by the dust mass lifetime near the dust source regions. With the same global dust emission (~6000 Tg yr-1), the 8-bin approach produces a dust mass loading of 39 Tg, while the 4-bin and 3-mode approaches produce 3% (40.2 Tg) and 25% (49.1 Tg) higher dust mass loadings, respectively. The difference in dust mass loading between the 8-bin approach and the 4-bin or 3-mode approaches has large spatial variations, with a generally smaller relative difference (<10%) near the surface over the dust source regions. The three size approaches also result in significantly different dry and wet deposition fluxes and number concentrations of dust. The difference in dust aerosol optical depth (AOD) (a factor of 3) among the three size approaches is much larger than their difference (25%) in dust mass loading. Compared to the 8-bin approach, the 4-bin approach yields stronger dust absorptivity, while the 3-mode approach yields weaker dust absorptivity. Overall, on quasi-global average, the three size parameterizations result in a significant difference of a factor of 2-3 in dust surface cooling (-1.02 to -2.87 W m-2) and atmospheric warming (0.39 to 0.96 W m-2), and a tremendous difference of a factor of ~10 in dust TOA cooling (-0.24 to -2.20 W m-2). An uncertainty of a factor of 2 is quantified in the dust emission estimate due to the different size parameterizations. This study also highlights the uncertainties in modeling dust mass and number loading, deposition fluxes, and radiative forcing resulting from different size parameterizations, and motivates further investigation of the impact of size parameterizations on modeling dust impacts on air quality, climate, and ecosystems.
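To make the bin/mode contrast concrete, here is a small sketch (illustrative diameters and mode parameters only, not the WRF-Chem settings) that distributes a lognormal mode with a prescribed geometric standard deviation across sectional size bins:

```python
import numpy as np
from math import erf, log, sqrt

def lognormal_bin_fractions(bin_edges_um, d_median_um, sigma_g):
    """Fraction of a lognormal number mode falling in each sectional bin."""
    def cdf(d):
        return 0.5 * (1.0 + erf(log(d / d_median_um) / (sqrt(2.0) * log(sigma_g))))
    return np.diff([cdf(d) for d in bin_edges_um])

# Illustrative 4-bin dust grid (micrometres) and one coarse mode.
edges = [0.1, 1.0, 2.5, 5.0, 10.0]
print(lognormal_bin_fractions(edges, d_median_um=2.0, sigma_g=2.0))
```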
Parameterization of erodibility in the Rangeland Hydrology and Erosion Model
USDA-ARS?s Scientific Manuscript database
The magnitude of erosion from a hillslope is governed by the availability of sediment and connectivity of runoff and erosion processes. For undisturbed rangelands, sediment is primarily detached and transported by rainsplash and sheetflow (splash-sheet) processes in isolated bare patches, but sedime...
Thayer-Calder, K.; Gettelman, A.; Craig, C.; ...
2015-06-30
Most global climate models parameterize separate cloud types using separate parameterizations. This approach has several disadvantages, including obscure interactions between parameterizations and inaccurate triggering of cumulus parameterizations. Alternatively, a unified cloud parameterization uses one equation set to represent all cloud types. Such cloud types include stratiform liquid and ice cloud, shallow convective cloud, and deep convective cloud. Vital to the success of a unified parameterization is a general interface between clouds and microphysics. One such interface involves drawing Monte Carlo samples of subgrid variability of temperature, water vapor, cloud liquid, and cloud ice, and feeding the sample points into a microphysics scheme. This study evaluates a unified cloud parameterization and a Monte Carlo microphysics interface that has been implemented in the Community Atmosphere Model (CAM) version 5.3. Results describing the mean climate and tropical variability from global simulations are presented. The new model shows a degradation in precipitation skill but improvements in short-wave cloud forcing, liquid water path, long-wave cloud forcing, precipitable water, and tropical wave simulation. Also presented are estimates of computational expense and an investigation of sensitivity to the number of subcolumns.
Thayer-Calder, Katherine; Gettelman, A.; Craig, Cheryl; ...
2015-12-01
Most global climate models parameterize separate cloud types using separate parameterizations. This approach has several disadvantages, including obscure interactions between parameterizations and inaccurate triggering of cumulus parameterizations. Alternatively, a unified cloud parameterization uses one equation set to represent all cloud types. Such cloud types include stratiform liquid and ice cloud, shallow convective cloud, and deep convective cloud. Vital to the success of a unified parameterization is a general interface between clouds and microphysics. One such interface involves drawing Monte Carlo samples of subgrid variability of temperature, water vapor, cloud liquid, and cloud ice, and feeding the sample points into a microphysics scheme. This study evaluates a unified cloud parameterization and a Monte Carlo microphysics interface that has been implemented in the Community Atmosphere Model (CAM) version 5.3. Results describing the mean climate and tropical variability from global simulations are presented. In conclusion, the new model shows a degradation in precipitation skill but improvements in short-wave cloud forcing, liquid water path, long-wave cloud forcing, precipitable water, and tropical wave simulation. Also presented are estimates of computational expense and an investigation of sensitivity to the number of subcolumns.
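A schematic of such a Monte Carlo clouds-microphysics interface (hypothetical variable names and an assumed multivariate normal subgrid PDF; the CAM 5.3 implementation is considerably more involved): subcolumns are drawn from the subgrid distribution, each is passed to a microphysics routine, and the tendencies are averaged back to the grid mean.

```python
import numpy as np

def monte_carlo_interface(grid_mean, subgrid_cov, microphysics, n_subcolumns=10, seed=0):
    """Sample subcolumns of (T, qv, ql, qi) from an assumed joint normal PDF,
    run the microphysics on each, and return grid-mean tendencies.

    `microphysics` is an assumed callable mapping a state dict to a tendency dict.
    """
    rng = np.random.default_rng(seed)
    names = list(grid_mean)                       # e.g. ['T', 'qv', 'ql', 'qi']
    mean = np.array([grid_mean[n] for n in names])
    samples = rng.multivariate_normal(mean, subgrid_cov, size=n_subcolumns)
    tendencies = [microphysics(dict(zip(names, s))) for s in samples]
    return {n: float(np.mean([t[n] for t in tendencies])) for n in names}
```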
Agishev, Ravil; Comerón, Adolfo; Rodriguez, Alejandro; Sicard, Michaël
2014-05-20
In this paper, we show a renewed approach to the generalized methodology for atmospheric lidar assessment, which uses dimensionless parameterization as a core component. It is based on a series of our previous works in which the problem of universal parameterization across many lidar technologies was described and analyzed from different points of view. The modernized dimensionless parameterization concept, applied to relatively new silicon photomultiplier detectors (SiPMs) and traditional photomultiplier (PMT) detectors for remote-sensing instruments, allows the lidar receiver performance to be predicted in the presence of sky background. The renewed approach can be widely used to evaluate a broad range of lidar system capabilities for a variety of lidar remote-sensing applications, as well as to serve as a basis for the selection of appropriate lidar system parameters for a specific application. Such a modernized methodology provides a generalized, uniform, and objective approach for the evaluation of a broad range of lidar types and systems (aerosol, Raman, DIAL) operating on different targets (backscatter or topographic) and under intense sky background conditions. It can be used within the lidar community to compare different lidar instruments.
Stochastic Convection Parameterizations
NASA Technical Reports Server (NTRS)
Teixeira, Joao; Reynolds, Carolyn; Suselj, Kay; Matheou, Georgios
2012-01-01
Keywords: computational fluid dynamics, radiation, clouds, turbulence, convection, gravity waves, surface interaction, radiation interaction, cloud and aerosol microphysics, complexity (vegetation, biogeochemistry), radiation versus turbulence/convection, stochastic approach, non-linearities, Monte Carlo, high resolutions, large-eddy simulations, cloud structure, plumes, saturation in the tropics, forecasting, parameterizations, stochastic, radiation-cloud interaction, hurricane forecasts
Data error and highly parameterized groundwater models
Hill, M.C.
2008-01-01
Strengths and weaknesses of highly parameterized models, in which the number of parameters exceeds the number of observations, are demonstrated using a synthetic test case. Results suggest that the approach can yield close matches to observations but also serious errors in system representation. It is proposed that avoiding the difficulties of highly parameterized models requires close evaluation of: (1) model fit, (2) performance of the regression, and (3) estimated parameter distributions. Comparisons to hydrogeologic information are expected to be critical to obtaining credible models. Copyright © 2008 IAHS Press.
Liu, Ping; Li, Guodong; Liu, Xinggao
2015-09-01
Control vector parameterization (CVP) is an important approach for the engineering optimization of industrial dynamic processes. However, its major defect, the low optimization efficiency caused by repeatedly solving the differential equations in the generated nonlinear programming (NLP) problem, limits its wide application in the engineering optimization of industrial dynamic processes. A novel, highly effective control parameterization approach, fast-CVP, is first proposed to improve the optimization efficiency for industrial dynamic processes, in which costate gradient formulae are employed and a fast approximate scheme is presented to solve the differential equations in dynamic process simulation. Three well-known engineering optimization benchmark problems for industrial dynamic processes are demonstrated as illustrations. The results show that the proposed fast approach performs well: at least 90% of the computation time can be saved in contrast to the traditional CVP method, which reveals the effectiveness of the proposed fast engineering optimization approach for industrial dynamic processes. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
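A minimal illustration of the control vector parameterization idea (a toy problem with hypothetical names, not one of the paper's benchmarks and not the fast-CVP scheme itself): the control is approximated as piecewise constant over equal time intervals, the ODE is integrated for a given parameter vector, and a standard NLP solver adjusts the parameters.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import minimize

T_FINAL = 1.0

def simulate(u_params):
    """Integrate dx/dt = -x + u(t), with u(t) piecewise constant on equal intervals."""
    n = len(u_params)
    def u(t):
        return u_params[min(int(t / T_FINAL * n), n - 1)]
    sol = solve_ivp(lambda t, x: -x + u(t), (0.0, T_FINAL), [0.0], max_step=0.01)
    return sol.y[0, -1]

def objective(u_params):
    # Reach x(T) = 0.5 while lightly penalizing control effort.
    return (simulate(u_params) - 0.5) ** 2 + 1e-3 * float(np.sum(np.square(u_params)))

result = minimize(objective, x0=np.zeros(5), method='SLSQP')
print(result.x, objective(result.x))
```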
An inter-model comparison of urban canopy effects on climate
NASA Astrophysics Data System (ADS)
Halenka, Tomas; Karlicky, Jan; Huszar, Peter; Belda, Michal; Bardachova, Tatsiana
2017-04-01
The role of cities is increasing and will continue to increase in the future, as the population within urban areas is growing faster, with an estimated 84% of Europeans living in urban areas by about the middle of the 21st century. To assess the impact of cities and, in general, urban surfaces on climate, a modeling approach is well suited. Moreover, with higher resolution, urban areas become better resolved in regional models and their relatively significant impacts should not be neglected. Model descriptions of urban-canopy-related meteorological effects can, however, differ considerably given the differences in the driving models, the underlying surface models and the urban canopy parameterizations, representing a certain uncertainty. In this study we try to contribute to the estimation of this uncertainty by performing numerous experiments to assess the urban canopy meteorological forcing over central Europe on climate for the decade 2001-2010, using two driving models (RegCM4 and WRF) at 10 km resolution driven by ERA-Interim reanalyses, three surface schemes (BATS and CLM4.5 for RegCM4 and Noah for WRF) and five urban canopy parameterizations: one bulk urban scheme, three single-layer schemes and a multilayer urban scheme. In RegCM4 we used our implementation of the Single Layer Urban Canopy Model (SLUCM) in the BATS scheme and the CLM4.5 option with an urban parameterization also based on the SLUCM concept; in WRF we used all three options, i.e., bulk, SLUCM and the more complex and sophisticated Building Environment Parameterization (BEP) coupled with the Building Energy Model (BEM). As reference simulations, runs with no urban areas and with no urban parameterizations were performed. Effects of cities on urban and rural areas were evaluated. A reduction of the diurnal temperature range in cities (around 2 °C in summer) is noticeable in all simulations, independent of the urban parameterization type and model. The well-known warmer summer city nights also appear in all simulations. Further, a winter boundary-layer increase of 100-200 m, together with wind reduction, is visible in all simulations. The spatial distribution of the night-time temperature response of the models to urban canopy forcing is rather similar in each set-up, showing temperature increases of up to 3 °C in summer. In general, much lower increases are modeled for day-time conditions, which can even be slightly negative due to the dominance of shadowing in urban canyons, especially in the morning hours. The winter temperature response, driven mainly by anthropogenic heat (AH), is strong in urban schemes where the building-street energy exchange is better resolved and smaller where AH is simply prescribed as an additive flux to the sensible heat. Somewhat larger differences between the models are encountered for the response of wind and the height of the planetary boundary layer (ZPBL), with dominant increases from a few tens of meters up to 250 m depending on the model. Comparison of observed diurnal temperature amplitudes from ECAD data with model results, and of hourly data from Prague with model hourly values, shows an improvement when urban effects are considered. The larger spread encountered for wind and turbulence (as ZPBL) should be considered when choices of urban canopy schemes are made, especially in connection with modeling the transport of pollutants within and from cities. Another conclusion is that choosing more complex urban schemes does not necessarily improve model performance, and using simpler and computationally less demanding (e.g., single-layer) urban schemes is often sufficient.
Approaches to highly parameterized inversion-A guide to using PEST for groundwater-model calibration
Doherty, John E.; Hunt, Randall J.
2010-01-01
Highly parameterized groundwater models can create calibration difficulties. Regularized inversion-the combined use of large numbers of parameters with mathematical approaches for stable parameter estimation-is becoming a common approach to address these difficulties and enhance the transfer of information contained in field measurements to parameters used to model that system. Though commonly used in other industries, regularized inversion is somewhat imperfectly understood in the groundwater field. There is concern that this unfamiliarity can lead to underuse, and misuse, of the methodology. This document is constructed to facilitate the appropriate use of regularized inversion for calibrating highly parameterized groundwater models. The presentation is directed at an intermediate- to advanced-level modeler, and it focuses on the PEST software suite-a frequently used tool for highly parameterized model calibration and one that is widely supported by commercial graphical user interfaces. A brief overview of the regularized inversion approach is provided, and techniques for mathematical regularization offered by PEST are outlined, including Tikhonov, subspace, and hybrid schemes. Guidelines for applying regularized inversion techniques are presented after a logical progression of steps for building suitable PEST input. The discussion starts with use of pilot points as a parameterization device and processing/grouping observations to form multicomponent objective functions. A description of potential parameter solution methodologies and resources available through the PEST software and its supporting utility programs follows. Directing the parameter-estimation process through PEST control variables is then discussed, including guidance for monitoring and optimizing the performance of PEST. Comprehensive listings of PEST control variables, and of the roles performed by PEST utility support programs, are presented in the appendixes.
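As a toy illustration of the Tikhonov flavor of regularized inversion (not the PEST implementation; names and numbers below are made up): a penalty that pulls the solution toward preferred parameter values is added to the data misfit, so that a problem with more parameters than observations still has a stable solution.

```python
import numpy as np

def regularized_gauss_newton_step(J, residual, reg_weight=1.0):
    """One Gauss-Newton update for a Tikhonov-regularized least-squares problem.

    J: Jacobian of simulated observations with respect to parameters.
    residual: observed minus simulated values.
    The identity penalty pulls the update toward zero parameter change,
    a simple stand-in for preferred-value regularization.
    """
    normal_matrix = J.T @ J + reg_weight * np.eye(J.shape[1])
    return np.linalg.solve(normal_matrix, J.T @ residual)

# Illustrative: 3 observations, 10 parameters (under-determined without regularization).
rng = np.random.default_rng(1)
J = rng.normal(size=(3, 10))
print(regularized_gauss_newton_step(J, residual=np.ones(3)))
```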
Improving the Predictability of Severe Water Levels along the Coasts of Marginal Seas
NASA Astrophysics Data System (ADS)
Ridder, N. N.; de Vries, H.; van den Brink, H.; De Vries, H.
2016-12-01
Extreme water levels can lead to catastrophic consequences with severe societal and economic repercussions. Particularly vulnerable are countries that are largely situated below sea level. To support and optimize forecast models, as well as future adaptation efforts, this study assesses the modeled contribution of storm surges and astronomical tides to total water levels under different air-sea momentum transfer parameterizations in a numerical surge model (WAQUA/DCSMv5) of the North Sea. It particularly focuses on the implications for the representation of extreme and rapidly recurring severe water levels over the past decades based on the example of the Netherlands. For this, WAQUA/DCSMv5, which is currently used to forecast coastal water levels in the Netherlands, is forced with ERA Interim reanalysis data. Model results are obtained from two different methodologies to parameterize air-sea momentum transfer. The first calculates the governing wind stress forcing using a drag coefficient derived from the conventional approach of wind speed dependent Charnock constants. The other uses instantaneous wind stress from the parameterization of the quasi-linear theory applied within the ECMWF wave model which is expected to deliver a more realistic forcing. The performance of both methods is tested by validating the model output with observations, paying particular attention to their ability to reproduce rapidly succeeding high water levels and extreme events. In a second step, the common features of and connections between these events are analyzed. The results of this study will allow recommendations for the improvement of water level forecasts within marginal seas and support decisions by policy makers. Furthermore, they will strengthen the general understanding of severe and extreme water levels as a whole and help to extend the currently limited knowledge about clustering events.
Comparing Habitat Suitability and Connectivity Modeling Methods for Conserving Pronghorn Migrations
Poor, Erin E.; Loucks, Colby; Jakes, Andrew; Urban, Dean L.
2012-01-01
Terrestrial long-distance migrations are declining globally: in North America, nearly 75% have been lost. Yet there has been limited research comparing habitat suitability and connectivity models to identify migration corridors across increasingly fragmented landscapes. Here we use pronghorn (Antilocapra americana) migrations in prairie habitat to compare two types of models that identify habitat suitability: maximum entropy (Maxent) and expert-based (Analytic Hierarchy Process). We used distance to wells, distance to water, NDVI, land cover, distance to roads, terrain shape and fence presence to parameterize the models. We then used the output of these models as cost surfaces to compare two common connectivity models, least-cost modeling (LCM) and circuit theory. Using pronghorn movement data from spring and fall migrations, we identified potential migration corridors by combining each habitat suitability model with each connectivity model. The best performing model combination was Maxent with LCM corridors across both seasons. Maxent out-performed expert-based habitat suitability models for both spring and fall migrations. However, expert-based corridors can perform relatively well and are a cost-effective alternative if species location data are unavailable. Corridors created using LCM out-performed circuit theory, as measured by the number of pronghorn GPS locations present within the corridors. We suggest the use of a tiered approach using different corridor widths for prioritizing conservation and mitigation actions, such as fence removal or conservation easements. PMID:23166656
Comparing habitat suitability and connectivity modeling methods for conserving pronghorn migrations.
Poor, Erin E; Loucks, Colby; Jakes, Andrew; Urban, Dean L
2012-01-01
Terrestrial long-distance migrations are declining globally: in North America, nearly 75% have been lost. Yet there has been limited research comparing habitat suitability and connectivity models to identify migration corridors across increasingly fragmented landscapes. Here we use pronghorn (Antilocapra americana) migrations in prairie habitat to compare two types of models that identify habitat suitability: maximum entropy (Maxent) and expert-based (Analytic Hierarchy Process). We used distance to wells, distance to water, NDVI, land cover, distance to roads, terrain shape and fence presence to parameterize the models. We then used the output of these models as cost surfaces to compare two common connectivity models, least-cost modeling (LCM) and circuit theory. Using pronghorn movement data from spring and fall migrations, we identified potential migration corridors by combining each habitat suitability model with each connectivity model. The best performing model combination was Maxent with LCM corridors across both seasons. Maxent out-performed expert-based habitat suitability models for both spring and fall migrations. However, expert-based corridors can perform relatively well and are a cost-effective alternative if species location data are unavailable. Corridors created using LCM out-performed circuit theory, as measured by the number of pronghorn GPS locations present within the corridors. We suggest the use of a tiered approach using different corridor widths for prioritizing conservation and mitigation actions, such as fence removal or conservation easements.
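The least-cost modeling step can be sketched compactly (a toy cost surface and 4-connected grid; the pronghorn study used GIS corridor tools rather than this code): suitability is inverted to a cost raster and Dijkstra's algorithm gives accumulated least-cost distances from a source cell.

```python
import numpy as np
from scipy.sparse import lil_matrix
from scipy.sparse.csgraph import dijkstra

def accumulated_cost(cost, source_cell):
    """Least-cost distance from one cell to all others on a 4-connected grid."""
    rows, cols = cost.shape
    graph = lil_matrix((rows * cols, rows * cols))
    for r in range(rows):
        for c in range(cols):
            i = r * cols + c
            for dr, dc in ((0, 1), (1, 0)):
                rr, cc = r + dr, c + dc
                if rr < rows and cc < cols:
                    j = rr * cols + cc
                    w = 0.5 * (cost[r, c] + cost[rr, cc])  # average cost of the move
                    graph[i, j] = w
                    graph[j, i] = w
    return dijkstra(graph.tocsr(), indices=[source_cell]).reshape(rows, cols)

# Cost taken as the inverse of habitat suitability (illustrative values).
suitability = np.random.default_rng(0).uniform(0.2, 1.0, size=(20, 20))
acc = accumulated_cost(1.0 / suitability, source_cell=0)
# A corridor can then be delineated from cells with a low summed cost to both endpoints.
```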
Newton Algorithms for Analytic Rotation: An Implicit Function Approach
ERIC Educational Resources Information Center
Boik, Robert J.
2008-01-01
In this paper implicit function-based parameterizations for orthogonal and oblique rotation matrices are proposed. The parameterizations are used to construct Newton algorithms for minimizing differentiable rotation criteria applied to "m" factors and "p" variables. The speed of the new algorithms is compared to that of existing algorithms and to…
Engelmann Spruce Site Index Models: A Comparison of Model Functions and Parameterizations
Nigh, Gordon
2015-01-01
Engelmann spruce (Picea engelmannii Parry ex Engelm.) is a high-elevation species found in western Canada and western USA. As this species becomes increasingly targeted for harvesting, better height growth information is required for good management of this species. This project was initiated to fill this need. The objective of the project was threefold: develop a site index model for Engelmann spruce; compare the fits and modelling and application issues between three model formulations and four parameterizations; and more closely examine the grounded-Generalized Algebraic Difference Approach (g-GADA) model parameterization. The model fitting data consisted of 84 stem analyzed Engelmann spruce site trees sampled across the Engelmann Spruce – Subalpine Fir biogeoclimatic zone. The fitted models were based on the Chapman-Richards function, a modified Hossfeld IV function, and the Schumacher function. The model parameterizations that were tested are indicator variables, mixed-effects, GADA, and g-GADA. Model evaluation was based on the finite-sample corrected version of Akaike’s Information Criteria and the estimated variance. Model parameterization had more of an influence on the fit than did model formulation, with the indicator variable method providing the best fit, followed by the mixed-effects modelling (9% increase in the variance for the Chapman-Richards and Schumacher formulations over the indicator variable parameterization), g-GADA (optimal approach) (335% increase in the variance), and the GADA/g-GADA (with the GADA parameterization) (346% increase in the variance). Factors related to the application of the model must be considered when selecting the model for use as the best fitting methods have the most barriers in their application in terms of data and software requirements. PMID:25853472
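For reference, the Chapman-Richards growth form used as one of the compared model functions can be written compactly; the parameter values below are placeholders for illustration, not the fitted Engelmann spruce coefficients.

```python
import numpy as np

def chapman_richards(age, asymptote, rate, shape):
    """Chapman-Richards height-growth function: H(t) = a * (1 - exp(-b * t)) ** c."""
    return asymptote * (1.0 - np.exp(-rate * np.asarray(age, dtype=float))) ** shape

# Illustrative heights (m) at several ages for made-up parameters.
print(chapman_richards([20, 50, 80, 120], asymptote=30.0, rate=0.03, shape=1.3))
```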
Non-perturbational surface-wave inversion: A Dix-type relation for surface waves
Haney, Matt; Tsai, Victor C.
2015-01-01
We extend the approach underlying the well-known Dix equation in reflection seismology to surface waves. Within the context of surface wave inversion, the Dix-type relation we derive for surface waves allows accurate depth profiles of shear-wave velocity to be constructed directly from phase velocity data, in contrast to perturbational methods. The depth profiles can subsequently be used as an initial model for nonlinear inversion. We provide examples of the Dix-type relation for under-parameterized and over-parameterized cases. In the under-parameterized case, we use the theory to estimate crustal thickness, crustal shear-wave velocity, and mantle shear-wave velocity across the Western U.S. from phase velocity maps measured at 8-, 20-, and 40-s periods. By adopting a thin-layer formalism and an over-parameterized model, we show how a regularized inversion based on the Dix-type relation yields smooth depth profiles of shear-wave velocity. In the process, we quantitatively demonstrate the depth sensitivity of surface-wave phase velocity as a function of frequency and the accuracy of the Dix-type relation. We apply the over-parameterized approach to a near-surface data set within the frequency band from 5 to 40 Hz and find overall agreement between the inverted model and the result of full nonlinear inversion.
NASA Technical Reports Server (NTRS)
Genthon, Christophe; Le Treut, Herve; Sadourny, Robert; Jouzel, Jean
1990-01-01
A Charney-Branscome based parameterization has been tested as a way of representing the eddy sensible heat transports missing in a zonally averaged dynamic model (ZADM) of the atmosphere. The ZADM used is a zonally averaged version of a general circulation model (GCM). The parameterized transports in the ZADM are gauged against the corresponding fluxes explicitly simulated in the GCM, using the same zonally averaged boundary conditions in both models. The Charney-Branscome approach neglects stationary eddies and transient barotropic disturbances and relies on a set of simplifying assumptions, including the linear approximation, to describe growing transient baroclinic eddies. Nevertheless, fairly satisfactory results are obtained when the parameterization is performed interactively with the model. Compared with noninteractive tests, a very efficient restoring feedback effect between the modeled zonal-mean climate and the parameterized meridional eddy transport is identified.
Subgrid-scale physical parameterization in atmospheric modeling: How can we make it consistent?
NASA Astrophysics Data System (ADS)
Yano, Jun-Ichi
2016-07-01
Approaches to subgrid-scale physical parameterization in atmospheric modeling are reviewed by taking turbulent combustion flow research as a point of reference. Three major general approaches are considered for its consistent development: moment, distribution density function (DDF), and mode decomposition. The moment expansion is a standard method for describing subgrid-scale turbulent flows both in geophysics and in engineering. The DDF (commonly called PDF) approach is intuitively appealing, as it deals with a distribution of variables at the subgrid scale in a more direct manner. Mode decomposition was originally applied by Aubry et al (1988 J. Fluid Mech. 192 115-73) in the context of wall boundary-layer turbulence. It is specifically designed to represent coherencies in a compact manner by a low-dimensional dynamical system. Their original proposal adopts the proper orthogonal decomposition (empirical orthogonal functions) as the mode-decomposition basis. However, the methodology can easily be generalized to any decomposition basis. Among these, the wavelet basis is a particularly attractive alternative. The mass-flux formulation that is currently adopted in the majority of atmospheric models for parameterizing convection can also be considered a special case of mode decomposition, adopting segmentally constant modes as the expansion basis. This perspective further identifies a very basic but also general geometrical constraint imposed on the mass-flux formulation: the segmentally constant approximation. Mode decomposition can, furthermore, be understood by analogy with a Galerkin method in numerical modeling. This analogy suggests that subgrid parameterization may be re-interpreted as a type of mesh refinement in numerical modeling. A link between the subgrid parameterization and downscaling problems is also pointed out.
Djurfeldt, Mikael
2012-07-01
The connection-set algebra (CSA) is a novel and general formalism for the description of connectivity in neuronal network models, from small-scale to large-scale structure. The algebra provides operators to form more complex sets of connections from simpler ones and also provides parameterization of such sets. CSA is expressive enough to describe a wide range of connection patterns, including multiple types of random and/or geometrically dependent connectivity, and can serve as a concise notation for network structure in scientific writing. CSA implementations allow for a scalable and efficient representation of connectivity in parallel neuronal network simulators and could even allow explicit representation of connections in computer memory to be avoided. The expressiveness of CSA makes prototyping of network structure easy. A C++ version of the algebra has been implemented and used in a large-scale neuronal network simulation (Djurfeldt et al., IBM J Res Dev 52(1/2):31-42, 2008b) and an implementation in Python has been publicly released.
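A toy rendering of the flavor of such connection-set operators in plain Python (simplified stand-ins, not the released CSA library API): connection sets are finite sets of (source, target) pairs that can be combined, masked, and subtracted.

```python
import random
from itertools import product

def all_to_all(sources, targets):
    """Full connectivity: the Cartesian product of source and target indices."""
    return set(product(sources, targets))

def random_mask(connections, p, seed=0):
    """Bernoulli mask: keep each connection independently with probability p."""
    rng = random.Random(seed)
    return {c for c in connections if rng.random() < p}

def minus(connections, excluded):
    """Set difference, e.g. to remove self-connections."""
    return connections - excluded

pre, post = range(4), range(4)
one_to_one = {(i, i) for i in range(4)}
network = minus(random_mask(all_to_all(pre, post), p=0.3), one_to_one)
print(sorted(network))
```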
Approaches in highly parameterized inversion - GENIE, a general model-independent TCP/IP run manager
Muffels, Christopher T.; Schreuder, Willem A.; Doherty, John E.; Karanovic, Marinko; Tonkin, Matthew J.; Hunt, Randall J.; Welter, David E.
2012-01-01
GENIE is a model-independent suite of programs that can be used to generally distribute, manage, and execute multiple model runs via the TCP/IP infrastructure. The suite consists of a file distribution interface, a run manager, a run executor, and a routine that can be compiled as part of a program and used to exchange model runs with the run manager. Because communication is via a standard protocol (TCP/IP), any computer connected to the Internet can serve in any of the capacities offered by this suite. Model independence is consistent with the existing template and instruction file protocols of the widely used PEST parameter estimation program. This report describes (1) the problem addressed; (2) the approach used by GENIE to queue, distribute, and retrieve model runs; and (3) user instructions, classes, and functions developed. It also includes (4) an example to illustrate the linking of GENIE with Parallel PEST using the interface routine.
A novel surface registration algorithm with biomedical modeling applications.
Huang, Heng; Shen, Li; Zhang, Rong; Makedon, Fillia; Saykin, Andrew; Pearlman, Justin
2007-07-01
In this paper, we propose a novel surface matching algorithm for arbitrarily shaped but simply connected 3-D objects. The spherical harmonic (SPHARM) method is used to describe these 3-D objects, and a novel surface registration approach is presented. The proposed technique is applied to various applications of medical image analysis. The results are compared with those using the traditional method, in which the first-order ellipsoid is used for establishing surface correspondence and aligning objects. In these applications, our surface alignment method is demonstrated to be more accurate and flexible than the traditional approach. This is due in large part to the fact that a new surface parameterization is generated by a shortcut that employs a useful rotational property of spherical harmonic basis functions for a fast implementation. In order to achieve a suitable computational speed for practical applications, we propose a fast alignment algorithm that improves the computational complexity of the new surface registration method from O(n^3) to O(n^2).
Parameterizing unresolved obstacles with source terms in wave modeling: A real-world application
NASA Astrophysics Data System (ADS)
Mentaschi, Lorenzo; Kakoulaki, Georgia; Vousdoukas, Michalis; Voukouvalas, Evangelos; Feyen, Luc; Besio, Giovanni
2018-06-01
Parameterizing the dissipative effects of small, unresolved coastal features is fundamental to improving the skill of wave models. The established technique to deal with this problem consists in reducing the amount of energy advected within the propagation scheme, and it is currently available only for regular grids. To find a more general approach, Mentaschi et al. (2015b) formulated a technique based on source terms and validated it on synthetic case studies. This technique separates the parameterization of the unresolved features from the energy advection, and can therefore be applied to any numerical scheme and to any type of mesh. Here we developed an open-source library for the estimation of the transparency coefficients needed by this approach, from bathymetric data and for any type of mesh. The spectral wave model WAVEWATCH III was used to show that in a real-world domain, such as the Caribbean Sea, the proposed approach has skill comparable to, and sometimes better than, the established propagation-based technique.
Multidisciplinary Aerodynamic-Structural Shape Optimization Using Deformation (MASSOUD)
NASA Technical Reports Server (NTRS)
Samareh, Jamshid A.
2000-01-01
This paper presents a multidisciplinary shape parameterization approach. The approach consists of two basic concepts: (1) parameterizing the shape perturbations rather than the geometry itself and (2) performing the shape deformation by means of the soft object animation algorithms used in computer graphics. Because the formulation presented in this paper is independent of grid topology, we can treat computational fluid dynamics and finite element grids in the same manner. The proposed approach is simple, compact, and efficient. Also, the analytical sensitivity derivatives are easily computed for use in a gradient-based optimization. This algorithm is suitable for low-fidelity (e.g., linear aerodynamics and equivalent laminate plate structures) and high-fidelity (e.g., nonlinear computational fluid dynamics and detailed finite element modeling) analysis tools. This paper contains the implementation details of parameterizing for planform, twist, dihedral, thickness, camber, and free-form surface. Results are presented for a multidisciplinary application consisting of nonlinear computational fluid dynamics, detailed computational structural mechanics, and a simple performance module.
Multidisciplinary Aerodynamic-Structural Shape Optimization Using Deformation (MASSOUD)
NASA Technical Reports Server (NTRS)
Samareh, Jamshid A.
2000-01-01
This paper presents a multidisciplinary shape parameterization approach. The approach consists of two basic concepts: (1) parameterizing the shape perturbations rather than the geometry itself and (2) performing the shape deformation by means of the soft object animation algorithms used in computer graphics. Because the formulation presented in this paper is independent of grid topology, we can treat computational fluid dynamics and finite element grids in a similar manner. The proposed approach is simple, compact, and efficient. Also, the analytical sensitivity derivatives are easily computed for use in a gradient-based optimization. This algorithm is suitable for low-fidelity (e.g., linear aerodynamics and equivalent laminated plate structures) and high-fidelity (e.g., nonlinear computational fluid dynamics and detailed finite element modeling) analysis tools. This paper contains the implementation details of parameterizing for planform, twist, dihedral, thickness, camber, and free-form surface. Results are presented for a multidisciplinary design optimization application consisting of nonlinear computational fluid dynamics, detailed computational structural mechanics, and a simple performance module.
Fienen, Michael N.; D'Oria, Marco; Doherty, John E.; Hunt, Randall J.
2013-01-01
The application bgaPEST is a highly parameterized inversion software package implementing the Bayesian Geostatistical Approach in a framework compatible with the parameter estimation suite PEST. Highly parameterized inversion refers to cases in which parameters are distributed in space or time and are correlated with one another. The Bayesian aspect of bgaPEST is related to Bayesian probability theory in which prior information about parameters is formally revised on the basis of the calibration dataset used for the inversion. Conceptually, this approach formalizes the conditionality of estimated parameters on the specific data and model available. The geostatistical component of the method refers to the way in which prior information about the parameters is used. A geostatistical autocorrelation function is used to enforce structure on the parameters to avoid overfitting and unrealistic results. Bayesian Geostatistical Approach is designed to provide the smoothest solution that is consistent with the data. Optionally, users can specify a level of fit or estimate a balance between fit and model complexity informed by the data. Groundwater and surface-water applications are used as examples in this text, but the possible uses of bgaPEST extend to any distributed parameter applications.
Vařeková, Radka Svobodová; Jiroušková, Zuzana; Vaněk, Jakub; Suchomel, Šimon; Koča, Jaroslav
2007-01-01
The Electronegativity Equalization Method (EEM) is a fast approach for charge calculation. A challenging part of the EEM is the parameterization, which is performed using ab initio charges obtained for a set of molecules. The goal of our work was to perform the EEM parameterization for selected sets of organic, organohalogen and organometal molecules. We have performed the most robust parameterization published so far. The EEM parameterization was based on 12 training sets selected from a database of predicted 3D structures (NCI DIS) and from a database of crystallographic structures (CSD). Each set contained from 2000 to 6000 molecules. We have shown that the number of molecules in the training set is very important for quality of the parameters. We have improved EEM parameters (STO-3G MPA charges) for elements that were already parameterized, specifically: C, O, N, H, S, F and Cl. The new parameters provide more accurate charges than those published previously. We have also developed new parameters for elements that were not parameterized yet, specifically for Br, I, Fe and Zn. We have also performed crossover validation of all obtained parameters using all training sets that included relevant elements and confirmed that calculated parameters provide accurate charges.
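Once parameters are available, applying EEM amounts to solving a small linear system; the sketch below shows that step with a toy diatomic geometry and made-up A_i, B_i values (not the published parameter sets), assuming the standard electronegativity-equalization and total-charge constraints.

```python
import numpy as np

def eem_charges(A, B, coords, total_charge=0.0, kappa=1.0):
    """Solve the EEM linear system for atomic charges.

    A, B: per-atom EEM parameters; coords: N x 3 atomic positions.
    Unknowns: the N charges plus the equalized molecular electronegativity.
    kappa is an illustrative Coulomb scaling constant.
    """
    A = np.asarray(A, dtype=float)
    B = np.asarray(B, dtype=float)
    xyz = np.asarray(coords, dtype=float)
    n = len(A)
    dist = np.linalg.norm(xyz[:, None, :] - xyz[None, :, :], axis=-1)
    matrix = np.zeros((n + 1, n + 1))
    rhs = np.zeros(n + 1)
    for i in range(n):
        off_diag = kappa / np.where(dist[i] > 0.0, dist[i], 1.0)
        matrix[i, :n] = np.where(np.arange(n) == i, B[i], off_diag)
        matrix[i, n] = -1.0       # minus the common molecular electronegativity
        rhs[i] = -A[i]
    matrix[n, :n] = 1.0           # total-charge conservation
    rhs[n] = total_charge
    return np.linalg.solve(matrix, rhs)[:n]

# Toy diatomic, 1.0 distance unit apart, with made-up parameters.
print(eem_charges(A=[2.0, 3.0], B=[6.0, 7.0], coords=[[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]]))
```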
Evaluation of wave runup predictions from numerical and parametric models
Stockdon, Hilary F.; Thompson, David M.; Plant, Nathaniel G.; Long, Joseph W.
2014-01-01
Wave runup during storms is a primary driver of coastal evolution, including shoreline and dune erosion and barrier island overwash. Runup and its components, setup and swash, can be predicted from a parameterized model that was developed by comparing runup observations to offshore wave height, wave period, and local beach slope. Because observations during extreme storms are often unavailable, a numerical model is used to simulate the storm-driven runup to compare to the parameterized model and then develop an approach to improve the accuracy of the parameterization. Numerically simulated and parameterized runup were compared to observations to evaluate model accuracies. The analysis demonstrated that setup was accurately predicted by both the parameterized model and numerical simulations. Infragravity swash heights were most accurately predicted by the parameterized model. The numerical model suffered from bias and gain errors that depended on whether a one-dimensional or two-dimensional spatial domain was used. Nonetheless, all of the predictions were significantly correlated to the observations, implying that the systematic errors can be corrected. The numerical simulations did not resolve the incident-band swash motions, as expected, and the parameterized model performed best at predicting incident-band swash heights. An assimilated prediction using a weighted average of the parameterized model and the numerical simulations resulted in a reduction in prediction error variance. Finally, the numerical simulations were extended to include storm conditions that have not been previously observed. These results indicated that the parameterized predictions of setup may need modification for extreme conditions; numerical simulations can be used to extend the validity of the parameterized predictions of infragravity swash; and numerical simulations systematically underpredict incident swash, which is relatively unimportant under extreme conditions.
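The assimilation step described above can be sketched as an inverse-variance weighted average (the actual weighting used in the study is not reproduced here; the weights and numbers below are illustrative assumptions).

```python
import numpy as np

def assimilate(param_pred, numeric_pred, param_err_var, numeric_err_var):
    """Combine parameterized and numerically simulated runup with inverse-variance
    weights; with independent errors the combined error variance is smaller than
    either input's."""
    w_p = 1.0 / param_err_var
    w_n = 1.0 / numeric_err_var
    combined = (w_p * np.asarray(param_pred) + w_n * np.asarray(numeric_pred)) / (w_p + w_n)
    return combined, 1.0 / (w_p + w_n)

pred, var = assimilate([1.2, 1.5], [1.0, 1.7], param_err_var=0.04, numeric_err_var=0.09)
print(pred, var)
```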
A novel representation of groundwater dynamics in large-scale land surface modelling
NASA Astrophysics Data System (ADS)
Rahman, Mostaquimur; Rosolem, Rafael; Kollet, Stefan
2017-04-01
Land surface processes are connected to groundwater dynamics via shallow soil moisture. For example, groundwater affects evapotranspiration (by influencing the variability of soil moisture) and runoff generation mechanisms. However, contemporary Land Surface Models (LSM) generally consider isolated soil columns and free drainage lower boundary condition for simulating hydrology. This is mainly due to the fact that incorporating detailed groundwater dynamics in LSMs usually requires considerable computing resources, especially for large-scale applications (e.g., continental to global). Yet, these simplifications undermine the potential effect of groundwater dynamics on land surface mass and energy fluxes. In this study, we present a novel approach of representing high-resolution groundwater dynamics in LSMs that is computationally efficient for large-scale applications. This new parameterization is incorporated in the Joint UK Land Environment Simulator (JULES) and tested at the continental-scale.
10 Ways to Improve the Representation of MCSs in Climate Models
NASA Astrophysics Data System (ADS)
Schumacher, C.
2017-12-01
1. The first way to improve the representation of mesoscale convective systems (MCSs) in global climate models (GCMs) is to recognize that MCSs are important to climate. That may be obvious to most of the people attending this session, but it cannot be taken for granted in the wider community. The fact that MCSs produce large amounts of the global rainfall and that they dramatically impact the atmosphere via transports of heat, moisture, and momentum must be continuously stressed. 2-4. There have traditionally been three approaches to representing MCSs and/or their impacts in GCMs. The first is to focus on improving cumulus parameterizations by implementing features like cold pools that are assumed to better organize convection. The second is to focus on including mesoscale processes in the cumulus parameterization, such as mesoscale vertical motions. The third is simply to buy your way out with higher resolution, using techniques like super-parameterization or global cloud-resolving model runs. All of these approaches have their pros and cons, but none of them satisfactorily solves the MCS climate modeling problem. 5-10. Looking forward, there is active discussion and there are new ideas in the modeling community on how to better represent convective organization in models. A number of ideas are a dramatic shift from the traditional plume-based cumulus parameterizations of most GCMs, such as implementing mesoscale parameterizations based on their physical impacts (e.g., via heating), on empirical relationships derived from big data/machine learning, or on stochastic approaches. Regardless of the technique employed, smart evaluation processes using observations are paramount to refining and constraining the inevitable tunable parameters in any parameterization.
Maria C. Mateo-Sanchez; Niko Balkenhol; Samuel Cushman; Trinidad Perez; Ana Dominguez; Santiago Saura
2015-01-01
Resistance models provide a key foundation for landscape connectivity analyses and are widely used to delineate wildlife corridors. Currently, there is no general consensus regarding the most effective empirical methods to parameterize resistance models, but habitat data (species' presence data and related habitat suitability models) and genetic data are the...
Rediscovering the doldrums in high resolution simulations of the tropical Atlantic
NASA Astrophysics Data System (ADS)
Klocke, Daniel; Brueck, Matthias; Stevens, Bjorn
2017-04-01
When sailors started crossing the tropics, they quickly discovered and learned to fear the belt of calm and variable winds around the ITCZ. They named this region the doldrums. The doldrums are such a persistent and dominant part of the general circulation that they were marked and described in the earliest maps and studies concerned with the winds over the oceans. After the invention of steam ships, interest in the doldrums faded and the doldrums mainly lived on in colloquial language as an expression for stagnation, listlessness and depression, describing the experience of the sailors rather than the region. The research focus shifted to the ITCZ, which describes more the position of a variable front of strong convergence and maximum precipitation residing in the doldrums. GCMs continue to correctly simulate the position of the ITCZ, which is partly determined by small-scale processes which are parameterized. In support of the NARVAL measurement campaign, convection-permitting simulations were conducted for a winter and a summer month covering the tropical Atlantic (9000x3300 km) with the Icosahedral Nonhydrostatic Model (ICON). These simulations reveal the doldrums with their embedded convective structures, including cold pools, convective storms and squall lines. The calms and the high variability of the wind direction between the trades, connected to convective activity in the ITCZ, are presented using the high-resolution simulations and compared to historical and current observations. Comparisons with current NWP models using parameterized convection show that parameterized models struggle to reproduce some of the essential features connected to the doldrums.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kao, C.Y.J.; Bossert, J.E.; Winterkamp, J.
1993-10-01
One of the objectives of the DOE ARM Program is to improve the parameterization of clouds in general circulation models (GCMs). The approach taken in this research is twofold. We first examine the behavior of cumulus parameterization schemes by comparing their performance against the results from explicit cloud simulations with state-of-the-art microphysics. This is conducted in a two-dimensional (2-D) configuration of an idealized convective system. We then apply the cumulus parameterization schemes to realistic three-dimensional (3-D) simulations over the western US for a case with an enormous amount of convection over an extended period of five days. In the 2-D idealized tests, cloud effects are parameterized in the "parameterization cases" with a coarse resolution, whereas each cloud is explicitly resolved by the "microphysics cases" with a much finer resolution. Thus, the capability of the parameterization schemes to reproduce the growth and life cycle of a convective system can be evaluated. These 2-D tests will form the basis for further 3-D realistic simulations that have a model resolution equivalent to that of the next generation of GCMs. Two cumulus parameterizations are used in this research: the Arakawa-Schubert (A-S) scheme (Arakawa and Schubert, 1974) used in Kao and Ogura (1987) and the Kuo scheme (Kuo, 1974) used in Tremback (1990). The numerical model used in this research is the Regional Atmospheric Modeling System (RAMS) developed at Colorado State University (CSU).
NASA Technical Reports Server (NTRS)
Tapiador, Francisco; Tao, Wei-Kuo; Angelis, Carlos F.; Martinez, Miguel A.; Cecilia Marcos; Antonio Rodriguez; Hou, Arthur; Jong Shi, Jain
2012-01-01
Ensembles of numerical model forecasts are of interest to operational early warning forecasters as the spread of the ensemble provides an indication of the uncertainty of the alerts, and the mean value is deemed to outperform the forecasts of the individual models. This paper explores two ensembles on a severe weather episode in Spain, aiming to ascertain the relative usefulness of each one. One ensemble uses sensible choices of physical parameterizations (precipitation microphysics, land surface physics, and cumulus physics) while the other follows a perturbed initial conditions approach. The results show that, depending on the parameterizations, large differences can be expected in terms of storm location, spatial structure of the precipitation field, and rain intensity. It is also found that the spread of the perturbed initial conditions ensemble is smaller than the dispersion due to physical parameterizations. This confirms that in severe weather situations operational forecasts should address moist physics deficiencies to realize the full benefits of the ensemble approach, in addition to optimizing initial conditions. The results also provide insights into differences in simulations arising from ensembles of weather models using several combinations of different physical parameterizations.
NASA Technical Reports Server (NTRS)
Suarez, M. J.; Arakawa, A.; Randall, D. A.
1983-01-01
A planetary boundary layer (PBL) parameterization for general circulation models (GCMs) is presented. It uses a mixed-layer approach in which the PBL is assumed to be capped by discontinuities in the mean vertical profiles. Both clear and cloud-topped boundary layers are parameterized. Particular emphasis is placed on the formulation of the coupling between the PBL and both the free atmosphere and cumulus convection. For this purpose a modified sigma-coordinate is introduced in which the PBL top and the lower boundary are both coordinate surfaces. The use of a bulk PBL formulation with this coordinate is extensively discussed. Results are presented from a July simulation produced by the UCLA GCM. PBL-related variables are shown, to illustrate the various regimes the parameterization is capable of simulating.
Importance of Chemical Composition of Ice Nuclei on the Formation of Arctic Ice Clouds
NASA Astrophysics Data System (ADS)
Keita, Setigui Aboubacar; Girard, Eric
2016-09-01
Ice clouds play an important role in the Arctic weather and climate system, but interactions between aerosols, clouds and radiation remain poorly understood. Consequently, it is essential to fully understand their properties and especially their formation process. Extensive measurements from ground-based sites and satellite remote sensing reveal the existence of two Types of Ice Clouds (TICs) in the Arctic during the polar night and early spring. TICs-1 are composed of non-precipitating small (radar-unseen) ice crystals of less than 30 μm in diameter. The second type, TICs-2, are detected by radar and are characterized by a low concentration of large precipitating ice crystals (>30 μm). To explain these differences, we hypothesized that TIC-2 formation is linked to the acidification of aerosols, which inhibits the ice nucleating properties of ice nuclei (IN). As a result, the IN concentration is reduced in these regions, resulting in a lower concentration of larger ice crystals. Water vapor available for deposition being the same, these crystals reach a larger size. Current weather and climate models cannot simulate these different types of ice clouds. This problem is partly due to the parameterizations implemented for ice nucleation. Over the past 10 years, several parameterizations of homogeneous and heterogeneous ice nucleation on IN of different chemical compositions have been developed. These parameterizations are based on two approaches: stochastic (nucleation is a probabilistic, time-dependent process) and singular (nucleation occurs at fixed conditions of temperature and humidity and is time-independent). The best approach remains unclear. This research aims to better understand the formation process of Arctic TICs using recently developed ice nucleation parameterizations. For this purpose, we have implemented these ice nucleation parameterizations into the Limited Area version of the Global Multiscale Environmental Model (GEM-LAM) and use them to simulate ice clouds observed during the Indirect and Semi-Direct Aerosol Campaign (ISDAC) in Alaska. Simulation results for the TICs-2 observed on April 15th and 25th (acidic cases) and the TICs-1 observed on April 5th (non-acidic cases) are presented. Our results show that the stochastic approach based on classical nucleation theory with the appropriate contact angle performs better. Parameterizations of ice nucleation based on the singular approach tend to overestimate the ice crystal concentration in TICs-1 and TICs-2. Classical nucleation theory with the appropriate contact angle is the best approach for simulating the ice clouds investigated in this research.
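For orientation, the way a contact angle enters classical nucleation theory can be sketched with the textbook compatibility factor that scales the homogeneous energy barrier; the prefactor and barrier values below are placeholders, not quantities from the parameterizations tested in the study.

```python
import numpy as np

K_BOLTZMANN = 1.380649e-23  # J/K

def contact_angle_factor(theta_deg):
    """CNT compatibility factor f(theta) = (2 + cos(theta)) * (1 - cos(theta))**2 / 4."""
    m = np.cos(np.deg2rad(theta_deg))
    return (2.0 + m) * (1.0 - m) ** 2 / 4.0

def heterogeneous_rate(theta_deg, dg_hom, temperature, prefactor=1e25):
    """Heterogeneous nucleation rate per unit IN surface area (illustrative).

    dg_hom is the homogeneous nucleation energy barrier; both dg_hom and the
    kinetic prefactor are placeholder values for this sketch."""
    barrier = contact_angle_factor(theta_deg) * dg_hom
    return prefactor * np.exp(-barrier / (K_BOLTZMANN * temperature))

# A smaller contact angle (a better ice nucleus) gives a higher rate.
print(heterogeneous_rate(12.0, dg_hom=5e-19, temperature=238.0))
print(heterogeneous_rate(26.0, dg_hom=5e-19, temperature=238.0))
```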
NASA Astrophysics Data System (ADS)
Neggers, Roel
2016-04-01
Boundary-layer schemes have always formed an integral part of General Circulation Models (GCMs) used for numerical weather and climate prediction. The spatial and temporal scales associated with boundary-layer processes and clouds are typically much smaller than those at which GCMs are discretized, which makes their representation through parameterization a necessity. The need for generally applicable boundary-layer parameterizations has motivated many scientific studies, which in effect has created its own active research field in the atmospheric sciences. Of particular interest has been the evaluation of boundary-layer schemes at the "process level". This means that parameterized physics are studied in isolation from the larger-scale circulation, using prescribed forcings and excluding any upscale interaction. Although feedbacks are thus prevented, the benefit is an enhanced model transparency, which might aid an investigator in identifying model errors and understanding model behavior. The popularity and success of the process-level approach is demonstrated by the many past and ongoing model intercomparison studies that have been organized by initiatives such as GCSS/GASS. A common thread in the results of these studies is that although most schemes somehow manage to capture first-order aspects of boundary-layer cloud fields, there certainly remains room for improvement in many areas. Only too often are boundary-layer parameterizations still found to be at the heart of problems in large-scale models, negatively affecting the forecast skill of NWP models or causing uncertainty in numerical predictions of future climate. How to break this parameterization "deadlock" remains an open problem. This presentation attempts to give an overview of the various existing methods for the process-level evaluation of boundary-layer physics in large-scale models. This includes (i) idealized case studies, (ii) longer-term evaluation at permanent meteorological sites (the testbed approach), and (iii) process-level evaluation at climate time scales. The advantages and disadvantages of each approach will be identified and discussed, and some thoughts about possible future developments will be given.
Price, Malcolm J; Welton, Nicky J; Briggs, Andrew H; Ades, A E
2011-01-01
Standard approaches to estimation of Markov models with data from randomized controlled trials tend either to make a judgment about which transition(s) treatments act on, or to assume that treatment has a separate effect on every transition. An alternative is to fit a series of models that assume that treatment acts on specific transitions. Investigators can then choose among alternative models using goodness-of-fit statistics. However, structural uncertainty about any chosen parameterization will remain, and this may have implications for the resulting decision and the need for further research. We describe a Bayesian approach to model estimation and model selection. Structural uncertainty about which parameterization to use is accounted for using model averaging, and we developed a formula for calculating the expected value of perfect information (EVPI) in averaged models. Marginal posterior distributions are generated for each of the cost-effectiveness parameters using Markov Chain Monte Carlo simulation in WinBUGS, or Monte Carlo simulation in Excel (Microsoft Corp., Redmond, WA). We illustrate the approach with an example of treatments for asthma using aggregate-level data from a connected network of four treatments compared in three pair-wise randomized controlled trials. The standard errors of incremental net benefit using the structured models are reduced by up to eight- or ninefold compared to the unstructured models, and the expected loss attached to decision uncertainty is reduced by factors of several hundred. Model averaging had considerable influence on the EVPI. Alternative structural assumptions can alter the treatment decision and have an overwhelming effect on model uncertainty and expected value of information. Structural uncertainty can be accounted for by model averaging, and the EVPI can be calculated for averaged models. Copyright © 2011 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
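As an illustration of the EVPI calculation for model-averaged decision models described above, the following sketch estimates EVPI by Monte Carlo from hypothetical posterior net-benefit samples under two structural parameterizations. All numbers (treatment count, means, scales, model weights) are invented for illustration and are not taken from the asthma example.

```python
import numpy as np

rng = np.random.default_rng(42)
n_sims, n_treatments = 10_000, 4

# Hypothetical posterior samples of net benefit for each treatment under two
# alternative structural parameterizations (model 1 and model 2).
nb_model1 = rng.normal(loc=[0.0, 120.0, 80.0, 150.0], scale=400.0, size=(n_sims, n_treatments))
nb_model2 = rng.normal(loc=[0.0, 200.0, 60.0, 100.0], scale=400.0, size=(n_sims, n_treatments))

# Posterior model probabilities (hypothetical) used for model averaging.
w1 = 0.6
pick_model1 = rng.random(n_sims) < w1
nb_avg = np.where(pick_model1[:, None], nb_model1, nb_model2)

# EVPI = E[max_d NB] - max_d E[NB], evaluated on the model-averaged samples.
evpi = nb_avg.max(axis=1).mean() - nb_avg.mean(axis=0).max()
print(f"EVPI under model averaging: {evpi:.1f}")
```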
NASA Astrophysics Data System (ADS)
Määttänen, Anni; Merikanto, Joonas; Henschel, Henning; Duplissy, Jonathan; Makkonen, Risto; Ortega, Ismael K.; Vehkamäki, Hanna
2018-01-01
We have developed new parameterizations of electrically neutral homogeneous and ion-induced sulfuric acid-water particle formation for large ranges of environmental conditions, based on an improved model that has been validated against a particle formation rate data set produced by Cosmics Leaving OUtdoor Droplets (CLOUD) experiments at the European Organization for Nuclear Research (CERN). The model uses a thermodynamically consistent version of the Classical Nucleation Theory normalized using quantum chemical data. Unlike the earlier parameterizations for H2SO4-H2O nucleation, the model is applicable to extremely dry conditions where the one-component sulfuric acid limit is approached. Parameterizations are presented for the critical cluster sulfuric acid mole fraction, the critical cluster radius, the total number of molecules in the critical cluster, and the particle formation rate. If the critical cluster contains only one sulfuric acid molecule, a simple formula for kinetic particle formation can be used: this threshold has also been parameterized. The parameterization for electrically neutral particle formation is valid for the following ranges: temperatures 165-400 K, sulfuric acid concentrations 10^4-10^13 cm^-3, and relative humidities 0.001-100%. The ion-induced particle formation parameterization is valid for temperatures 195-400 K, sulfuric acid concentrations 10^4-10^16 cm^-3, and relative humidities 10^-5-100%. The new parameterizations are thus applicable for the full range of conditions in the Earth's atmosphere relevant for binary sulfuric acid-water particle formation, including both tropospheric and stratospheric conditions. They are also suitable for describing particle formation in the atmosphere of Venus.
NASA Astrophysics Data System (ADS)
Hoose, C.; Hande, L. B.; Mohler, O.; Niemand, M.; Paukert, M.; Reichardt, I.; Ullrich, R.
2016-12-01
Between 0 and -37°C, ice formation in clouds is triggered by aerosol particles acting as heterogeneous ice nuclei. At lower temperatures, heterogeneous ice nucleation on aerosols can occur at lower supersaturations than homogeneous freezing of solutes. In laboratory experiments, the ice nucleation ability of different aerosol species (e.g. desert dusts, soot, biological particles) has been studied in detail and quantified via various theoretical or empirical parameterization approaches. For experiments in the AIDA cloud chamber, we have quantified the ice nucleation efficiency via a temperature- and supersaturation-dependent ice nucleation active site density. Here we present a new empirical parameterization scheme for immersion and deposition ice nucleation on desert dust and soot based on these experimental data. The application of this parameterization to the simulation of cirrus clouds, deep convective clouds and orographic clouds will be shown, including the extension of the scheme to the treatment of freezing of rain drops. The results are compared to other heterogeneous ice nucleation schemes. Furthermore, an aerosol-dependent parameterization of contact ice nucleation is presented.
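A minimal sketch of the singular, active-site-density approach mentioned above: the number of nucleated ice crystals follows from the available aerosol surface area times a temperature-dependent site density n_s(T). The exponential form is a common choice for desert dust, but the coefficients and example numbers below are placeholders, not the AIDA-derived values.

```python
import numpy as np

def n_s(temperature_k, a=0.5, b=8.0):
    """Ice-nucleation active site density (per m^2) as a function of temperature.
    Exponential form often used for desert dust; a and b are placeholder values."""
    return np.exp(a * (273.15 - temperature_k) + b)

def ice_crystal_number(n_aerosol, surface_area_per_particle, temperature_k):
    """Number of ice crystals nucleated per m^3 of air in the singular approach."""
    total_surface = n_aerosol * surface_area_per_particle   # m^2 of aerosol surface per m^3 of air
    return total_surface * n_s(temperature_k)

# Example: 100 dust particles per cm^3, 1 um^2 of surface each, at -20 degC.
print(ice_crystal_number(100e6, 1e-12, 253.15))
```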
Multi-Scale Modeling and the Eddy-Diffusivity/Mass-Flux (EDMF) Parameterization
NASA Astrophysics Data System (ADS)
Teixeira, J.
2015-12-01
Turbulence and convection play a fundamental role in many key weather and climate science topics. Unfortunately, current atmospheric models cannot explicitly resolve most turbulent and convective flow. Because of this, turbulence and convection in the atmosphere have to be parameterized - i.e. equations describing the dynamical evolution of the statistical properties of turbulent and convective motions have to be devised. Recently a variety of different models have been developed that attempt to simulate the atmosphere using variable resolution. A key problem, however, is that parameterizations are in general not explicitly aware of the resolution - the scale-awareness problem. In this context, we will present and discuss a specific approach, the Eddy-Diffusivity/Mass-Flux (EDMF) parameterization, which is not only itself a multi-scale parameterization but is also particularly well suited to deal with the scale-awareness problems that plague current variable-resolution models. It does so by representing small-scale turbulence using a classic Eddy-Diffusivity (ED) method, and the larger-scale (boundary layer and tropospheric-scale) eddies as a variety of plumes using the Mass-Flux (MF) concept.
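A minimal sketch of the EDMF flux decomposition described above, in which the turbulent flux of a scalar is the sum of a downgradient eddy-diffusivity term and a mass-flux term carried by plumes. The profiles and coefficient values are invented for illustration and are not tied to any particular model.

```python
import numpy as np

def edmf_flux(phi_mean, phi_updraft, K, M, dz):
    """Eddy-Diffusivity/Mass-Flux closure for a scalar flux w'phi':
       flux = -K * d(phi_mean)/dz + M * (phi_updraft - phi_mean)
    All inputs are 1-D vertical profiles; K is an eddy diffusivity (m2/s) and
    M is the updraft mass flux expressed as a velocity scale (m/s)."""
    dphi_dz = np.gradient(phi_mean, dz)
    return -K * dphi_dz + M * (phi_updraft - phi_mean)

z = np.arange(0.0, 2000.0, 100.0)
theta_mean = 300.0 + 0.003 * z           # weakly stable mean potential-temperature profile
theta_up = theta_mean + 0.5              # slightly warmer plume
print(edmf_flux(theta_mean, theta_up, K=10.0, M=0.03, dz=100.0)[:5])
```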
Parameterized reduced-order models using hyper-dual numbers.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fike, Jeffrey A.; Brake, Matthew Robert
2013-10-01
The goal of most computational simulations is to accurately predict the behavior of a real, physical system. Accurate predictions often require very computationally expensive analyses and so reduced order models (ROMs) are commonly used. ROMs aim to reduce the computational cost of the simulations while still providing accurate results by including all of the salient physics of the real system in the ROM. However, real, physical systems often deviate from the idealized models used in simulations due to variations in manufacturing or other factors. One approach to this issue is to create a parameterized model in order to characterize the effect of perturbations from the nominal model on the behavior of the system. This report presents a methodology for developing parameterized ROMs, which is based on Craig-Bampton component mode synthesis and the use of hyper-dual numbers to calculate the derivatives necessary for the parameterization.
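To illustrate the hyper-dual-number machinery referenced above, the sketch below implements a minimal hyper-dual class (addition and multiplication only) and uses it to obtain exact first and second derivatives of a simple polynomial. It is a generic illustration of the technique, not the implementation used with Craig-Bampton component mode synthesis; the example function is arbitrary.

```python
class HyperDual:
    """Minimal hyper-dual number a + b*e1 + c*e2 + d*e1*e2 with e1^2 = e2^2 = 0.
    Evaluating f(HyperDual(x, 1, 1, 0)) returns f(x) in .real, f'(x) in .eps1
    (and .eps2), and f''(x) in .eps12, free of truncation and cancellation error."""
    def __init__(self, real, eps1=0.0, eps2=0.0, eps12=0.0):
        self.real, self.eps1, self.eps2, self.eps12 = real, eps1, eps2, eps12

    def __add__(self, other):
        o = other if isinstance(other, HyperDual) else HyperDual(other)
        return HyperDual(self.real + o.real, self.eps1 + o.eps1,
                         self.eps2 + o.eps2, self.eps12 + o.eps12)
    __radd__ = __add__

    def __mul__(self, other):
        o = other if isinstance(other, HyperDual) else HyperDual(other)
        return HyperDual(self.real * o.real,
                         self.real * o.eps1 + self.eps1 * o.real,
                         self.real * o.eps2 + self.eps2 * o.real,
                         self.real * o.eps12 + self.eps1 * o.eps2
                         + self.eps2 * o.eps1 + self.eps12 * o.real)
    __rmul__ = __mul__

def f(p):                     # stand-in design-dependent quantity, f(p) = 3 p^3 + 2 p
    return 3 * p * p * p + 2 * p

y = f(HyperDual(2.0, 1.0, 1.0, 0.0))
print(y.real, y.eps1, y.eps12)   # f(2) = 28, f'(2) = 38, f''(2) = 36
```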
A note on: "A Gaussian-product stochastic Gent-McWilliams parameterization"
NASA Astrophysics Data System (ADS)
Jansen, Malte F.
2017-02-01
This note builds on a recent article by Grooms (2016), which introduces a new stochastic parameterization for eddy buoyancy fluxes. The closure proposed by Grooms accounts for the fact that eddy fluxes arise as the product of two approximately Gaussian variables, which in turn leads to a distinctly non-Gaussian distribution. The directionality of the stochastic eddy fluxes, however, remains somewhat ad hoc and depends on the reference frame of the chosen coordinate system. This note presents a modification of the approach proposed by Grooms, which eliminates this shortcoming. Eddy fluxes are computed based on a stochastic mixing length model, which leads to a frame-invariant formulation. As in the original closure proposed by Grooms, eddy fluxes are proportional to the product of two Gaussian variables, and the parameterization reduces to the Gent and McWilliams parameterization for the mean buoyancy fluxes.
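A quick numerical illustration of the point that a product of two correlated, approximately Gaussian variables is distinctly non-Gaussian, which underlies the stochastic flux closure discussed above. The correlation value and sample size are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Eddy buoyancy flux modeled as the product of two correlated Gaussian factors
# (e.g. a stochastic mixing velocity and a buoyancy anomaly).
rho = 0.5                                     # assumed correlation between the factors
u = rng.standard_normal(n)
b = rho * u + np.sqrt(1.0 - rho**2) * rng.standard_normal(n)
flux = u * b

# The product is non-Gaussian: mean near rho, heavy tails, positive skewness.
skew = ((flux - flux.mean())**3).mean() / flux.std()**3
print(flux.mean(), flux.std(), skew)
```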
Bevilacqua, Antonio; Speranza, Barbara; Sinigaglia, Milena; Corbo, Maria Rosaria
2015-01-01
Predictive Microbiology (PM) deals with the mathematical modeling of microorganisms in foods for different applications (challenge tests, evaluation of microbiological shelf life, prediction of the microbiological hazards connected with foods, etc.). An interesting and important part of PM focuses on the use of primary functions to fit data on the death kinetics of spoilage, pathogenic, and useful microorganisms following thermal or non-conventional treatments; these functions can also be used to model survivors throughout storage. The main topic of this review is the most important death models (negative Gompertz, log-linear, shoulder/tail, Weibull, Weibull+tail, re-parameterized Weibull, biphasic approach, etc.), with the aim of pinpointing the benefits and the limits of each model; in addition, the last section addresses the most important tools for using death kinetics and predictive microbiology in a user-friendly way. PMID:28231222
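As an example of one of the primary death models listed above, the sketch below evaluates a Weibull-type survival curve; the parameter values are illustrative only and do not come from the review.

```python
import numpy as np

def weibull_survivors(t, log10_n0, delta, p):
    """Weibull-type inactivation model:
       log10 N(t) = log10 N0 - (t / delta)**p
    delta is the time for the first decimal reduction and p is the shape
    parameter (p < 1 produces tailing, p > 1 produces a shoulder)."""
    return log10_n0 - (t / delta) ** p

t = np.linspace(0.0, 10.0, 6)                 # minutes of treatment
print(weibull_survivors(t, log10_n0=7.0, delta=2.5, p=0.8))
```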
NASA Astrophysics Data System (ADS)
Medellín, G.; Brinkkemper, J. A.; Torres-Freyermuth, A.; Appendini, C. M.; Mendoza, E. T.; Salles, P.
2016-01-01
We present a downscaling approach for the study of wave-induced extreme water levels at a location on a barrier island in Yucatán (Mexico). Wave information from a 30-year wave hindcast is validated with in situ measurements at 8 m water depth. The maximum dissimilarity algorithm is employed for the selection of 600 representative cases, encompassing different combinations of wave characteristics and tidal level. The selected cases are propagated from 8 m water depth to the shore using the coupling of a third-generation wave model and a phase-resolving non-hydrostatic nonlinear shallow-water equation model. Extreme wave run-up, R2%, is estimated for the simulated cases and can be further employed to reconstruct the 30-year time series using an interpolation algorithm. Downscaling results show run-up saturation during more energetic wave conditions and modulation owing to tides. The latter suggests that the R2% can be parameterized using a hyperbolic-like formulation with dependency on both wave height and tidal level. The new parametric formulation is in agreement with the downscaling results (r2 = 0.78), allowing a fast calculation of wave-induced extreme water levels at this location. Finally, an assessment of beach vulnerability to wave-induced extreme water levels is conducted at the study area by employing the two approaches (reconstruction/parameterization) and a storm impact scale. The 30-year extreme water level hindcast allows the calculation of beach vulnerability as a function of return periods. It is shown that the downscaling-derived parameterization provides reasonable results as compared with the numerical approach. This methodology can be extended to other locations and can be further improved by incorporating the storm surge contributions to the extreme water level.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Parsons, Taylor; Guo, Yi; Veers, Paul
Software models that use design-level input variables and physics-based engineering analysis for estimating the mass and geometrical properties of components in large-scale machinery can be very useful for analyzing design trade-offs in complex systems. This study uses DriveSE, an OpenMDAO-based drivetrain model that uses stress and deflection criteria to size drivetrain components within a geared, upwind wind turbine. Because a full lifetime fatigue load spectrum can only be defined using computationally-expensive simulations in programs such as FAST, a parameterized fatigue loads spectrum that depends on wind conditions, rotor diameter, and turbine design life has been implemented. The parameterized fatigue spectrum is only used in this paper to demonstrate the proposed fatigue analysis approach. This paper details a three-part investigation of the parameterized approach and a comparison of the DriveSE model with and without fatigue analysis on the main shaft system. It compares loads from three turbines of varying size and determines if and when fatigue governs drivetrain sizing compared to extreme load-driven design. It also investigates the model's sensitivity to shaft material parameters. The intent of this paper is to demonstrate how fatigue considerations in addition to extreme loads can be brought into a system engineering optimization.
Analysis of sensitivity to different parameterization schemes for a subtropical cyclone
NASA Astrophysics Data System (ADS)
Quitián-Hernández, L.; Fernández-González, S.; González-Alemán, J. J.; Valero, F.; Martín, M. L.
2018-05-01
A sensitivity analysis to diverse WRF model physical parameterization schemes is carried out over the lifecycle of a subtropical cyclone (STC). STCs are low-pressure systems that share tropical and extratropical characteristics, with hybrid thermal structures. In October 2014, an STC made landfall in the Canary Islands, causing widespread damage from strong winds and precipitation there. The system began to develop on October 18 and its effects lasted until October 21. Accurate simulation of this type of cyclone continues to be a major challenge because of its rapid intensification and unique characteristics. In the present study, several numerical simulations were performed using the WRF model to carry out a sensitivity analysis of its various parameterization schemes for the development and intensification of the STC. The combination of parameterization schemes that best simulated this type of phenomenon was thereby determined. In particular, the parameterization combinations that included the Tiedtke cumulus schemes had the most positive effects on model results. Moreover, concerning STC track validation, optimal results were attained once the STC was fully formed and all convective processes had stabilized. Furthermore, to identify the parameterization schemes that optimally categorize the STC structure, a verification using Cyclone Phase Space was performed. Consequently, the combinations of parameterizations including the Tiedtke cumulus schemes were again the best at categorizing the cyclone's subtropical structure. For strength validation, related atmospheric variables such as wind speed and precipitable water were analyzed. Finally, the effects of using a deterministic or probabilistic approach in simulating intense convective phenomena were evaluated.
Lightning Scaling Laws Revisited
NASA Technical Reports Server (NTRS)
Boccippio, D. J.; Arnold, James E. (Technical Monitor)
2000-01-01
Scaling laws relating storm electrical generator power (and hence lightning flash rate) to charge transport velocity and storm geometry were originally posed by Vonnegut (1963). These laws were later simplified to yield simple parameterizations for lightning based upon cloud top height, with separate parameterizations derived over land and ocean. It is demonstrated that the most recent ocean parameterization: (1) yields predictions of storm updraft velocity which appear inconsistent with observation, and (2) is formally inconsistent with the theory from which it purports to derive. Revised formulations consistent with Vonnegut's original framework are presented. These demonstrate that Vonnegut's theory is, to first order, consistent with observation. The implications of assuming that flash rate is set by the electrical generator power, rather than the electrical generator current, are examined. The two approaches yield significantly different predictions about the dependence of charge transfer per flash on storm dimensions, which should be empirically testable. The two approaches also differ significantly in their explanation of regional variability in lightning observations.
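For orientation, the kind of cloud-top-height power-law parameterizations being revisited here look like the sketch below. The land and ocean coefficients are the commonly quoted Price-and-Rind-style values, reproduced from memory for illustration only; the abstract argues that the ocean branch is inconsistent with the theory it purports to derive from, so treat these numbers as illustrative rather than recommended.

```python
def flash_rate_from_cloud_top(height_km, over_land=True):
    """Cloud-top-height lightning parameterization (power laws fit separately
    over land and ocean). Coefficients are illustrative placeholders of the
    commonly cited forms; returns flashes per minute per storm."""
    if over_land:
        return 3.44e-5 * height_km ** 4.9
    return 6.4e-4 * height_km ** 1.73

for h_km in (8.0, 12.0, 16.0):
    print(h_km, flash_rate_from_cloud_top(h_km, True), flash_rate_from_cloud_top(h_km, False))
```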
Ruths, Derek; Muller, Melissa; Tseng, Jen-Te; Nakhleh, Luay; Ram, Prahlad T
2008-02-29
Reconstructing cellular signaling networks and understanding how they work are major endeavors in cell biology. The scale and complexity of these networks, however, render their analysis using experimental biology approaches alone very challenging. As a result, computational methods have been developed and combined with experimental biology approaches, producing powerful tools for the analysis of these networks. These computational methods mostly fall on either end of a spectrum of model parameterization. On one end is a class of structural network analysis methods; these typically use the network connectivity alone to generate hypotheses about global properties. On the other end is a class of dynamic network analysis methods; these use, in addition to the connectivity, kinetic parameters of the biochemical reactions to predict the network's dynamic behavior. These predictions provide detailed insights into the properties that determine aspects of the network's structure and behavior. However, the difficulty of obtaining numerical values of kinetic parameters is widely recognized to limit the applicability of this latter class of methods. Several researchers have observed that the connectivity of a network alone can provide significant insights into its dynamics. Motivated by this fundamental observation, we present the signaling Petri net, a non-parametric model of cellular signaling networks, and the signaling Petri net-based simulator, a Petri net execution strategy for characterizing the dynamics of signal flow through a signaling network using token distribution and sampling. The result is a very fast method, which can analyze large-scale networks, and provide insights into the trends of molecules' activity-levels in response to an external stimulus, based solely on the network's connectivity. We have implemented the signaling Petri net-based simulator in the PathwayOracle toolkit, which is publicly available at http://bioinfo.cs.rice.edu/pathwayoracle. Using this method, we studied a MAPK1,2 and AKT signaling network downstream from EGFR in two breast tumor cell lines. We analyzed, both experimentally and computationally, the activity level of several molecules in response to a targeted manipulation of TSC2 and mTOR-Raptor. The results from our method agreed with experimental results in greater than 90% of the cases considered, and in those where they did not agree, our approach provided valuable insights into discrepancies between known network connectivities and experimental observations.
The rational parameterization theorem for multisite post-translational modification systems.
Thomson, Matthew; Gunawardena, Jeremy
2009-12-21
Post-translational modification of proteins plays a central role in cellular regulation but its study has been hampered by the exponential increase in substrate modification forms ("modforms") with increasing numbers of sites. We consider here biochemical networks arising from post-translational modification under mass-action kinetics, allowing for multiple substrates, having different types of modification (phosphorylation, methylation, acetylation, etc.) on multiple sites, acted upon by multiple forward and reverse enzymes (in total number L), using general enzymatic mechanisms. These assumptions are substantially more general than in previous studies. We show that the steady-state modform concentrations constitute an algebraic variety that can be parameterized by rational functions of the L free enzyme concentrations, with coefficients which are rational functions of the rate constants. The parameterization allows steady states to be calculated by solving L algebraic equations, a dramatic reduction compared to simulating an exponentially large number of differential equations. This complexity collapse enables analysis in contexts that were previously intractable and leads to biological predictions that we review. Our results lay a foundation for the systems biology of post-translational modification and suggest deeper connections between biochemical networks and algebraic geometry.
Methods of testing parameterizations: Vertical ocean mixing
NASA Technical Reports Server (NTRS)
Tziperman, Eli
1992-01-01
The ocean's velocity field is characterized by an exceptional variety of scales. While the small-scale oceanic turbulence responsible for vertical mixing in the ocean occurs on scales of a few centimeters and smaller, the oceanic general circulation is characterized by horizontal scales of thousands of kilometers. In oceanic general circulation models that are typically run today, the vertical structure of the ocean is represented by a few tens of discrete grid points. Such models cannot explicitly model the small-scale mixing processes, and must, therefore, find ways to parameterize them in terms of the larger-scale fields. Finding a parameterization that is both reliable and practical to use in ocean models is not a simple task. Vertical mixing in the ocean is the combined result of many complex processes, and, in fact, mixing is one of the less known and less understood aspects of the oceanic circulation. In present models of the oceanic circulation, the many complex processes responsible for vertical mixing are often parameterized in an oversimplified manner. Yet, finding an adequate parameterization of vertical ocean mixing is crucial to the successful application of ocean models to climate studies. The results of general circulation models for quantities that are of particular interest to climate studies, such as the meridional heat flux carried by the ocean, are quite sensitive to the strength of the vertical mixing. We examine the difficulties in choosing an appropriate vertical mixing parameterization, and the methods that are available for validating different parameterizations by comparing model results to oceanographic data. First, some of the physical processes responsible for vertically mixing the ocean are briefly mentioned, and some possible approaches to the parameterization of these processes in oceanographic general circulation models are described in the following section. We then discuss the role of vertical mixing in the physics of the large-scale ocean circulation, and examine methods of validating mixing parameterizations using large-scale ocean models.
An analogue conceptual rainfall-runoff model for educational purposes
NASA Astrophysics Data System (ADS)
Herrnegger, Mathew; Riedl, Michael; Schulz, Karsten
2016-04-01
Conceptual rainfall-runoff models, in which runoff processes are modelled with a series of connected linear and non-linear reservoirs, remain widely applied tools in science and practice. Additionally, the concept is appreciated in teaching due to its relative simplicity in explaining and exploring hydrological processes of catchments. However, when a series of reservoirs is used, the model system becomes highly parameterized and complex, and the traceability of the model results becomes more difficult to explain to an audience not accustomed to numerical modelling. Since the simulations are normally performed with code that is not visible, the results are also not easily comprehensible. This contribution therefore presents a liquid analogue model, in which a conceptual rainfall-runoff model is reproduced by a physical model. This consists of different acrylic glass containers representing different storage components within a catchment, e.g. soil water or groundwater storage. The containers are equipped and connected with pipes, in which water movement represents different flow processes, e.g. surface runoff, percolation or base flow. Water from a storage container is pumped to the upper part of the model and represents effective rainfall input. The water then flows by gravity through the different pipes and storages. Valves are used for controlling the flows within the analogue model, comparable to the parameterization procedure in numerical models. Additionally, an inexpensive microcontroller-based board and sensors are used to measure storage water levels, with online visualization of the states as time series data, building a bridge between the analogue and the digital world. The ability to physically witness the different flows and water levels in the storages makes the analogue model attractive to the audience. Hands-on experiments can be performed with students, in which different scenarios or catchment types can be simulated, not only with the analogue model but also in parallel with the digital model, thereby connecting the real world with science. The effects of different parameterization setups, which are important not only in the hydrological sciences, can be shown in a tangible way. The use of the analogue model in the context of "children meet University" events seems an attractive approach for showing a younger audience the basic ideas of catchment modelling concepts, which would otherwise not be possible.
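A digital counterpart of the analogue model might be a toy cascade of linear reservoirs like the sketch below; the storage constants and percolation fraction are arbitrary illustrative values, not a calibrated parameterization of any real catchment.

```python
import numpy as np

def linear_reservoir_cascade(rainfall, k_soil=20.0, k_ground=100.0, frac_perc=0.5, dt=1.0):
    """Toy two-storage conceptual rainfall-runoff model (analogous to two connected
    containers): each storage drains linearly, Q = S / k, and a fixed fraction of
    the soil outflow percolates to the groundwater storage."""
    s_soil = s_ground = 0.0
    runoff = []
    for p in rainfall:
        s_soil += p * dt
        q_soil = s_soil / k_soil * dt            # soil-storage outflow
        s_soil -= q_soil
        perc = frac_perc * q_soil                # percolation to groundwater
        s_ground += perc
        q_ground = s_ground / k_ground * dt      # baseflow
        s_ground -= q_ground
        runoff.append((q_soil - perc) + q_ground)
    return np.array(runoff)

rain = np.zeros(100)
rain[10:15] = 5.0                                # a five-step effective rainfall pulse
print(linear_reservoir_cascade(rain)[:25])
```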
NASA Astrophysics Data System (ADS)
Fripp, Jurgen; Crozier, Stuart; Warfield, Simon K.; Ourselin, Sébastien
2006-03-01
Subdivision surfaces and parameterization are desirable for many algorithms that are commonly used in Medical Image Analysis. However, extracting an accurate surface and parameterization can be difficult for many anatomical objects of interest, due to noisy segmentations and the inherent variability of the object. The thin cartilages of the knee are an example of this, especially after damage is incurred from injuries or conditions like osteoarthritis. As a result, the cartilages can have different topologies or exist in multiple pieces. In this paper we present a topology preserving (genus 0) subdivision-based parametric deformable model that is used to extract the surfaces of the patella and tibial cartilages in the knee. These surfaces have minimal thickness in areas without cartilage. The algorithm inherently incorporates several desirable properties, including: shape based interpolation, sub-division remeshing and parameterization. To illustrate the usefulness of this approach, the surfaces and parameterizations of the patella cartilage are used to generate a 3D statistical shape model.
Whys and Hows of the Parameterized Interval Analyses: A Guide for the Perplexed
NASA Astrophysics Data System (ADS)
Elishakoff, I.
2013-10-01
Novel elements of the parameterized interval analysis developed in [1, 2] are emphasized in this response to Professor E.D. Popova, or possibly to others who may be perplexed by the parameterized interval analysis. It is also shown that the overwhelming majority of comments by Popova [3] are based on a misreading of our paper [1]. Partial responsibility for this misreading can be attributed to the fact that the explanations provided in [1] were laconic. These could have been more extensive in view of the novelty of our approach [1, 2]. It is our duty, therefore, to reiterate, in this response, the whys and hows of the parameterization of intervals, introduced in [1] to incorporate possibly available information on dependencies between the various intervals describing the problem at hand. This possibility appears to have been discarded by standard interval analysis, which may, as a result, lead to overdesign and possibly to the divorce of engineers from the otherwise beautiful interval analysis.
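The following sketch illustrates the basic motivation for parameterized intervals: when the dependency between occurrences of the same interval is kept through a shared parameter, the spurious widening of standard interval arithmetic disappears. The numbers are arbitrary and the construction is only a schematic reading of the idea in [1, 2].

```python
import numpy as np

# Standard interval arithmetic treats every occurrence of an interval as
# independent, so x - x for x in [9, 11] yields [-2, 2] instead of 0.
lo, hi = 9.0, 11.0
naive_diff = (lo - hi, hi - lo)

# A parameterized interval keeps the dependency: x(t) = x0 + r*t with t in [-1, 1],
# so any expression can be swept over the single shared parameter t.
x0, r = 10.0, 1.0
t = np.linspace(-1.0, 1.0, 201)
x = x0 + r * t
param_diff = (np.min(x - x), np.max(x - x))       # exactly (0, 0)

print("naive interval x - x:        ", naive_diff)
print("parameterized interval x - x:", param_diff)
```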
An empirical approach for estimating stress-coupling lengths for marine-terminating glaciers
Enderlin, Ellyn; Hamilton, Gordon S.; O'Neel, Shad; Bartholomaus, Timothy C.; Morlighem, Mathieu; Holt, John W.
2016-01-01
Here we present a new empirical method to estimate the stress-coupling length (SCL) for marine-terminating glaciers using high-resolution observations. We use the empirically determined periodicity in resistive stress oscillations as a proxy for the SCL. Application of our empirical method to two well-studied tidewater glaciers (Helheim Glacier, SE Greenland, and Columbia Glacier, Alaska, USA) demonstrates that SCL estimates obtained using this approach are consistent with theory (i.e., they can be parameterized as a function of the ice thickness) and with prior, independent SCL estimates. In order to accurately resolve stress variations, we suggest that similar empirical stress-coupling parameterizations be employed in future analyses of glacier dynamics.
NASA Astrophysics Data System (ADS)
Freitas, S.; Grell, G. A.; Molod, A.
2017-12-01
We implemented and began to evaluate an alternative convection parameterization for the NASA Goddard Earth Observing System (GEOS) global model. The parameterization (Grell and Freitas, 2014) is based on the mass-flux approach with several closures, for equilibrium and non-equilibrium convection, and includes scale- and aerosol-awareness functionalities. Scale dependence for deep convection is implemented either by using the method described by Arakawa et al. (2011), or through lateral spreading of the subsidence terms. Aerosol effects are included through the dependence of autoconversion and evaporation on the CCN number concentration. Recently, the scheme has been extended to a tri-modal spectral size approach to simulate the transition from shallow, congestus, and deep convection regimes. In addition, the inclusion of a new closure for non-equilibrium convection resulted in a substantial gain of realism in the model simulation of the diurnal cycle of convection over land. Also, a beta PDF is now employed to represent the normalized mass flux profile. This opens an additional avenue for applying stochasticism in the scheme.
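A minimal sketch of the beta-PDF representation of the normalized mass-flux profile mentioned above; the shape parameters are placeholders rather than the values used in the GEOS implementation, and perturbing them is one conceivable way to introduce stochasticism.

```python
import numpy as np
from scipy.stats import beta

def normalized_mass_flux(z_norm, a=2.0, b=3.5):
    """Normalized convective mass-flux profile represented by a beta PDF of the
    normalized height z_norm = (z - z_base) / (z_top - z_base), rescaled so the
    maximum equals one. Shape parameters a and b are illustrative placeholders."""
    profile = beta.pdf(z_norm, a, b)
    return profile / profile.max()

z_norm = np.linspace(0.0, 1.0, 11)
print(np.round(normalized_mass_flux(z_norm), 3))
```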
Improved Satellite-based Crop Yield Mapping by Spatially Explicit Parameterization of Crop Phenology
NASA Astrophysics Data System (ADS)
Jin, Z.; Azzari, G.; Lobell, D. B.
2016-12-01
Field-scale mapping of crop yields with satellite data often relies on the use of crop simulation models. However, these approaches can be hampered by inaccuracies in the simulation of crop phenology. Here we present and test an approach that uses dense time series of Landsat 7 and 8 acquisitions to calibrate various parameters related to crop phenology simulation, such as leaf number and leaf appearance rates. These parameters are then mapped across the Midwestern United States for maize and soybean, and for two different simulation models. We then implement our recently developed Scalable satellite-based Crop Yield Mapper (SCYM) with simulations reflecting the improved phenology parameterizations, and compare to prior estimates based on default phenology routines. Our preliminary results show that the proposed method can effectively alleviate the underestimation of early-season LAI by the default Agricultural Production Systems sIMulator (APSIM), and that spatially explicit parameterization of the phenology model substantially improves SCYM performance in capturing the spatiotemporal variation in maize and soybean yield. The scheme presented in our study thus preserves the scalability of SCYM, while significantly reducing its uncertainty.
Liu, Ping; Li, Guodong; Liu, Xinggao; Xiao, Long; Wang, Yalin; Yang, Chunhua; Gui, Weihua
2018-02-01
A high-quality control method is essential for the implementation of an aircraft autopilot system. An optimal control problem model considering the safe aerodynamic envelope is therefore established to improve the control quality of aircraft flight level tracking. A novel non-uniform control vector parameterization (CVP) method with time grid refinement is then proposed for solving the optimal control problem. By introducing the Hilbert-Huang transform (HHT) analysis, an efficient time grid refinement approach is presented and an adaptive time grid is automatically obtained. With this refinement, the proposed method needs fewer optimization parameters to achieve better control quality when compared with the uniform refinement CVP method, whereas the computational cost is lower. Two well-known flight level altitude tracking problems and one minimum time cost problem are tested as illustrations, with the uniform refinement control vector parameterization method adopted as the baseline for comparison. Numerical results show that the proposed method achieves better performance in terms of optimization accuracy and computational cost; meanwhile, the control quality is efficiently improved. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
USDA-ARS's Scientific Manuscript database
Given a time series of potential evapotranspiration and rainfall data, there are at least two approaches for estimating vertical percolation rates. One approach involves solving Richards' equation (RE) with a plant uptake model. An alternative approach involves applying a simple soil moisture accoun...
Using an Adjoint Approach to Eliminate Mesh Sensitivities in Computational Design
NASA Technical Reports Server (NTRS)
Nielsen, Eric J.; Park, Michael A.
2006-01-01
An algorithm for efficiently incorporating the effects of mesh sensitivities in a computational design framework is introduced. The method is based on an adjoint approach and eliminates the need for explicit linearizations of the mesh movement scheme with respect to the geometric parameterization variables, an expense that has hindered practical large-scale design optimization using discrete adjoint methods. The effects of the mesh sensitivities can be accounted for through the solution of an adjoint problem equivalent in cost to a single mesh movement computation, followed by an explicit matrix-vector product scaling with the number of design variables and the resolution of the parameterized surface grid. The accuracy of the implementation is established and dramatic computational savings obtained using the new approach are demonstrated using several test cases. Sample design optimizations are also shown.
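The cost argument above can be illustrated on a small linear stand-in problem: the adjoint approach needs one extra linear solve, followed by a cheap matrix-vector product per design variable, and its gradient matches a finite-difference check. The sketch below is generic linear algebra only, not the actual mesh-movement or flow solver, and all matrices are randomly generated placeholders.

```python
import numpy as np

rng = np.random.default_rng(1)
n, n_design = 50, 8

A = np.diag(np.full(n, 4.0)) + rng.normal(scale=0.1, size=(n, n))   # stand-in "flow" Jacobian
B = rng.normal(size=(n, n_design))                                  # residual sensitivity to design vars
d = rng.normal(size=n_design)                                       # design variables
u = np.linalg.solve(A, B @ d)                                       # state from residual A u - B d = 0
g = rng.normal(size=n)                                              # dJ/du for objective J = g . u

# Adjoint approach: one linear solve, then a matrix-vector product per design variable.
psi = np.linalg.solve(A.T, g)
grad_adjoint = B.T @ psi

# Finite-difference check (one extra solve per design variable).
eps = 1e-6
grad_fd = np.array([
    (g @ np.linalg.solve(A, B @ (d + eps * np.eye(n_design)[k])) - g @ u) / eps
    for k in range(n_design)
])
print(np.max(np.abs(grad_adjoint - grad_fd)))
```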
NASA Astrophysics Data System (ADS)
Subramanian, Aneesh C.; Palmer, Tim N.
2017-06-01
Stochastic schemes to represent model uncertainty in the European Centre for Medium-Range Weather Forecasts (ECMWF) ensemble prediction system have helped improve its probabilistic forecast skill over the past decade, by both improving its reliability and reducing the ensemble mean error. The largest uncertainties in the model arise from the model physics parameterizations. In the tropics, the parameterization of moist convection presents a major challenge for the accurate prediction of weather and climate. Superparameterization is a promising alternative strategy for including the effects of moist convection through explicit turbulent fluxes calculated from a cloud-resolving model (CRM) embedded within a global climate model (GCM). In this paper, we compare the impact of initial random perturbations in embedded CRMs, within the ECMWF ensemble prediction system, with the stochastically perturbed physical tendency (SPPT) scheme as a way to represent model uncertainty in medium-range tropical weather forecasts. We especially focus on forecasts of tropical convection and dynamics during MJO events in October-November 2011. These are well-studied events for MJO dynamics as they were also heavily observed during the DYNAMO field campaign. We show that a multiscale ensemble modeling approach helps improve forecasts of certain aspects of tropical convection during the MJO events, while it also tends to deteriorate certain large-scale dynamic fields with respect to the stochastically perturbed physical tendencies approach that is used operationally at ECMWF.
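A minimal sketch of an SPPT-style perturbation of parameterized tendencies as described above: tendencies are multiplied by one plus a zero-mean random pattern. For simplicity the pattern here is white noise per model column, whereas the operational scheme uses spatially and temporally correlated patterns; all array shapes and amplitudes are illustrative.

```python
import numpy as np

def sppt_perturb(tendencies, rng, sigma=0.3, clip=0.9):
    """SPPT-style perturbation: multiply the net physics tendency in each column
    by (1 + r), where r is a zero-mean random multiplier clipped to avoid sign
    reversal. Here r is simple white noise, one value per column."""
    r = np.clip(rng.normal(0.0, sigma, size=tendencies.shape[1]), -clip, clip)
    return tendencies * (1.0 + r)            # broadcast one multiplier per column

rng = np.random.default_rng(7)
phys_tend = rng.normal(size=(60, 128))       # (levels, columns) net physics tendencies
perturbed = sppt_perturb(phys_tend, rng)
print(perturbed.shape, np.abs(perturbed / phys_tend - 1.0).max())
```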
A Comparative Study of Probability Collectives Based Multi-agent Systems and Genetic Algorithms
NASA Technical Reports Server (NTRS)
Huang, Chien-Feng; Wolpert, David H.; Bieniawski, Stefan; Strauss, Charles E. M.
2005-01-01
We compare Genetic Algorithms (GA's) with Probability Collectives (PC), a new framework for distributed optimization and control. In contrast to GA's, PC-based methods do not update populations of solutions. Instead they update an explicitly parameterized probability distribution p over the space of solutions. That updating of p arises as the optimization of a functional of p. The functional is chosen so that any p that optimizes it should be peaked about good solutions. The PC approach works in both continuous and discrete problems. It does not suffer from the resolution limitation of the finite bit-length encoding of parameters into GA alleles. It also has deep connections with both game theory and statistical physics. We review the PC approach using its motivation as the information-theoretic formulation of bounded rationality for multi-agent systems. It is then compared with GA's on a diverse set of problems. To handle high-dimensional surfaces, in the PC method investigated here p is restricted to a product distribution. Each distribution in that product is controlled by a separate agent. The test functions were selected for their difficulty using either traditional gradient descent or genetic algorithms. On those functions the PC-based approach significantly outperforms traditional GA's in rate of descent, avoidance of trapping in false minima, and long-term optimization.
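A toy sketch of the product-distribution idea described above: each variable ("agent") keeps its own categorical distribution, joint samples are reweighted by a Boltzmann factor of their cost, and each marginal is nudged toward the reweighted empirical distribution. This is a simplified stand-in for the PC update, not the information-theoretic formulation used in the paper; all tuning values are arbitrary.

```python
import numpy as np

def pc_minimize(objective, n_vars, n_values, iters=200, samples=200, temp=0.5, lr=0.5, seed=0):
    """Toy Probability Collectives-style optimizer over a product of categorical
    distributions, one per variable, on a shared discretized value grid."""
    rng = np.random.default_rng(seed)
    values = np.linspace(-5.0, 5.0, n_values)
    p = np.full((n_vars, n_values), 1.0 / n_values)
    for _ in range(iters):
        idx = np.stack([rng.choice(n_values, size=samples, p=p[i]) for i in range(n_vars)], axis=1)
        costs = np.array([objective(values[row]) for row in idx])
        w = np.exp(-(costs - costs.min()) / temp)        # Boltzmann weights on sampled joint solutions
        for i in range(n_vars):
            target = np.bincount(idx[:, i], weights=w, minlength=n_values)
            p[i] = (1 - lr) * p[i] + lr * target / target.sum()
    return values[p.argmax(axis=1)]

# Minimize a simple quadratic; the solution should concentrate near 1.0 in each variable.
print(pc_minimize(lambda x: np.sum((x - 1.0) ** 2), n_vars=5, n_values=21, seed=3))
```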
Jirousková, Zuzana; Vareková, Radka Svobodová; Vanek, Jakub; Koca, Jaroslav
2009-05-01
The electronegativity equalization method (EEM) was developed by Mortier et al. as a semiempirical method based on density-functional theory. After parameterization, in which the EEM parameters A(i), B(i), and the adjusting factor kappa are obtained, this approach can be used for the calculation of the average electronegativity and the charge distribution in a molecule. The aim of this work is to perform the EEM parameterization using the Merz-Kollman-Singh (MK) charge distribution scheme obtained from B3LYP/6-31G* and HF/6-31G* calculations. To achieve this goal, we selected a set of 380 organic molecules from the Cambridge Structural Database (CSD) and used the methodology which was recently successfully applied to EEM parameterization for calculating HF/STO-3G Mulliken charges on large sets of molecules. In the case of B3LYP/6-31G* MK charges, we have improved the EEM parameters for already parameterized elements, specifically C, H, N, O, and F. Moreover, we have also developed EEM parameters for S, Br, Cl, and Zn, which had not yet been parameterized for this level of theory and basis set. In the case of HF/6-31G* MK charges, we have developed EEM parameters for C, H, N, O, S, Br, Cl, F, and Zn, which had not previously been parameterized for this level of theory and basis set. The obtained EEM parameters were verified by a previously developed validation procedure and used for charge calculation on a different set of 116 organic molecules from the CSD. The calculated EEM charges are in very good agreement with the quantum mechanically obtained ab initio charges. 2008 Wiley Periodicals, Inc.
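For readers unfamiliar with EEM, charge calculation after parameterization reduces to a linear system in which all atomic electronegativities are equalized subject to charge conservation. The sketch below solves that system for a hypothetical three-atom case; the A, B, and kappa values and geometry are placeholders, not the parameters derived in this work.

```python
import numpy as np

def eem_charges(A, B, coords, kappa, total_charge=0.0):
    """Solve the EEM linear system: for every atom i,
         A_i + B_i*q_i + kappa * sum_{j != i} q_j / R_ij = chi_bar,
    together with sum_i q_i = total_charge. Returns (charges, chi_bar)."""
    n = len(A)
    R = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    M = np.zeros((n + 1, n + 1))
    rhs = np.zeros(n + 1)
    for i in range(n):
        M[i, :n] = kappa / np.where(R[i] > 0, R[i], 1.0)   # off-diagonal Coulomb terms
        M[i, i] = B[i]                                     # diagonal self term
        M[i, n] = -1.0                                     # -chi_bar unknown
        rhs[i] = -A[i]
    M[n, :n] = 1.0                                         # charge conservation row
    rhs[n] = total_charge
    sol = np.linalg.solve(M, rhs)
    return sol[:n], sol[n]

# Hypothetical three-atom molecule; units and parameter values illustrative only.
A = np.array([5.0, 7.0, 7.0])
B = np.array([9.0, 11.0, 11.0])
coords = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [-0.3, 0.95, 0.0]])
q, chi_bar = eem_charges(A, B, coords, kappa=0.5)
print(q, q.sum(), chi_bar)
```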
A review of group ICA for fMRI data and ICA for joint inference of imaging, genetic, and ERP data
Calhoun, Vince D.; Liu, Jingyu; Adalı, Tülay
2009-01-01
Independent component analysis (ICA) has become an increasingly utilized approach for analyzing brain imaging data. In contrast to the widely used general linear model (GLM) that requires the user to parameterize the data (e.g. the brain's response to stimuli), ICA, by relying upon a general assumption of independence, allows the user to be agnostic regarding the exact form of the response. In addition, ICA is intrinsically a multivariate approach, and hence each component provides a grouping of brain activity into regions that share the same response pattern, thus providing a natural measure of functional connectivity. A wide variety of ICA approaches have been proposed; in this paper we focus upon two distinct methods. The first part of this paper reviews the use of ICA for making group inferences from fMRI data. We provide an overview of current approaches for utilizing ICA to make group inferences, with a focus upon the group ICA approach implemented in the GIFT software. In the next part of this paper, we provide an overview of the use of ICA to combine or fuse multimodal data. ICA has proven particularly useful for data fusion of multiple tasks or data modalities such as single nucleotide polymorphism (SNP) data or event-related potentials. As demonstrated by a number of examples in this paper, ICA is a powerful and versatile data-driven approach for studying the brain. PMID:19059344
A Survey of Phase Variable Candidates of Human Locomotion
Villarreal, Dario J.; Gregg, Robert D.
2014-01-01
Studies show that the human nervous system is able to parameterize gait cycle phase using sensory feedback. In the field of bipedal robots, the concept of a phase variable has been successfully used to mimic this behavior by parameterizing the gait cycle in a time-independent manner. This approach has been applied to control a powered transfemoral prosthetic leg, but the proposed phase variable was limited to the stance period of the prosthesis only. In order to achieve a more robust controller, we attempt to find a new phase variable that fully parameterizes the gait cycle of a prosthetic leg. The angle with respect to a global reference frame at the hip is able to monotonically parameterize both the stance and swing periods of the gait cycle. This survey looks at multiple phase variable candidates involving the hip angle with respect to a global reference frame across multiple tasks including level-ground walking, running, and stair negotiation. In particular, we propose a novel phase variable candidate that monotonically parameterizes the whole gait cycle across all tasks, and does so particularly well across level-ground walking. In addition to furthering the design of robust robotic prosthetic leg controllers, this survey could help neuroscientists and physicians study human locomotion across tasks from a time-independent perspective. PMID:25570873
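A generic sketch of a phase-portrait-style phase variable built from a global thigh/hip angle, of the kind surveyed above: the angle and its scaled derivative define a polar angle that increases monotonically through the stride. This is an illustrative construction on a synthetic signal, not the specific candidate proposed in the survey.

```python
import numpy as np

def phase_variable(hip_angle, dt):
    """Candidate phase variable from a global hip angle: the unwrapped polar angle
    of the (centered angle, scaled angular velocity) phase portrait, rescaled to
    run from 0 to 1 over the stride."""
    theta = hip_angle - hip_angle.mean()
    dtheta = np.gradient(hip_angle, dt)
    dtheta *= theta.std() / dtheta.std()              # scale velocity to match amplitude
    phi = np.unwrap(np.arctan2(-dtheta, theta))
    phi -= phi[0]
    return phi / phi[-1]

t = np.linspace(0.0, 1.2, 121)                        # one synthetic 1.2 s stride
hip = 20.0 * np.sin(2 * np.pi * t / 1.2) + 10.0       # synthetic global hip angle (deg)
p = phase_variable(hip, dt=t[1] - t[0])
print(p[0], p[60], p[-1])                             # monotonic 0 -> ~0.5 -> 1
```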
Public goods games on adaptive coevolutionary networks
NASA Astrophysics Data System (ADS)
Pichler, Elgar; Shapiro, Avi M.
2017-07-01
Productive societies feature high levels of cooperation and strong connections between individuals. Public Goods Games (PGGs) are frequently used to study the development of social connections and cooperative behavior in model societies. In such games, contributions to the public good are made only by cooperators, while all players, including defectors, reap public goods benefits, which are shares of the contributions amplified by a synergy factor. Classic results of game theory show that mutual defection, as opposed to cooperation, is the Nash Equilibrium of PGGs in well-mixed populations, where each player interacts with all others. In this paper, we explore the coevolutionary dynamics of a low information public goods game on a complex network in which players adapt to their environment in order to increase individual payoffs relative to past payoffs parameterized by greediness. Players adapt by changing their strategies, either to cooperate or to defect, and by altering their social connections. We find that even if players do not know other players' strategies and connectivity, cooperation can arise and persist despite large short-term fluctuations.
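As a reference for the game described above, the sketch below computes networked PGG payoffs under the common convention that each player hosts one group consisting of itself and its neighbors; it illustrates the payoff structure only, not the adaptive rewiring or greediness dynamics, and the small network is invented.

```python
import numpy as np

def pgg_payoffs(adjacency, cooperator, contribution=1.0, synergy=3.0):
    """Networked Public Goods Game payoffs: cooperators pay `contribution` into
    every group they belong to; each group's pot is multiplied by `synergy` and
    shared equally among its members (focal player plus neighbors)."""
    n = adjacency.shape[0]
    members = adjacency + np.eye(n)                   # membership matrix of each focal group
    group_size = members.sum(axis=1)
    pot = synergy * contribution * (members @ cooperator)
    share = pot / group_size                          # payout per member of each group
    payoff = members.T @ share                        # sum shares over all groups joined
    cost = contribution * cooperator * group_size     # cooperators pay in every group joined
    return payoff - cost

adj = np.array([[0, 1, 1, 0],
                [1, 0, 1, 0],
                [1, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
coop = np.array([1.0, 1.0, 0.0, 1.0])                 # 1 = cooperator, 0 = defector
print(pgg_payoffs(adj, coop))
```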
Spielman, Stephanie J; Wilke, Claus O
2016-11-01
The mutation-selection model of coding sequence evolution has received renewed attention for its use in estimating site-specific amino acid propensities and selection coefficient distributions. Two computationally tractable mutation-selection inference frameworks have been introduced: One framework employs a fixed-effects, highly parameterized maximum likelihood approach, whereas the other employs a random-effects Bayesian Dirichlet Process approach. While both implementations follow the same model, they appear to make distinct predictions about the distribution of selection coefficients. The fixed-effects framework estimates a large proportion of highly deleterious substitutions, whereas the random-effects framework estimates that all substitutions are either nearly neutral or weakly deleterious. It remains unknown, however, how accurately each method infers evolutionary constraints at individual sites. Indeed, selection coefficient distributions pool all site-specific inferences, thereby obscuring a precise assessment of site-specific estimates. Therefore, in this study, we use a simulation-based strategy to determine how accurately each approach recapitulates the selective constraint at individual sites. We find that the fixed-effects approach, despite its extensive parameterization, consistently and accurately estimates site-specific evolutionary constraint. By contrast, the random-effects Bayesian approach systematically underestimates the strength of natural selection, particularly for slowly evolving sites. We also find that, despite the strong differences between their inferred selection coefficient distributions, the fixed- and random-effects approaches yield surprisingly similar inferences of site-specific selective constraint. We conclude that the fixed-effects mutation-selection framework provides the more reliable software platform for model application and future development. © The Author 2016. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
Modeling of the Wegener Bergeron Findeisen process—implications for aerosol indirect effects
NASA Astrophysics Data System (ADS)
Storelvmo, T.; Kristjánsson, J. E.; Lohmann, U.; Iversen, T.; Kirkevåg, A.; Seland, Ø.
2008-10-01
A new parameterization of the Wegener-Bergeron-Findeisen (WBF) process has been developed, and implemented in the general circulation model CAM-Oslo. The new parameterization scheme has important implications for the process of phase transition in mixed-phase clouds. The new treatment of the WBF process replaces a previous formulation, in which the onset of the WBF effect depended on a threshold value of the mixing ratio of cloud ice. As no observational guidance for such a threshold value exists, the previous treatment added uncertainty to estimates of aerosol effects on mixed-phase clouds. The new scheme takes subgrid variability into account when simulating the WBF process, allowing for smoother phase transitions in mixed-phase clouds compared to the previous approach. The new parameterization yields a model state which gives reasonable agreement with observed quantities, allowing for calculations of aerosol effects on mixed-phase clouds involving a reduced number of tunable parameters. Furthermore, we find a significant sensitivity to perturbations in ice nuclei concentrations with the new parameterization, which leads to a reversal of the traditional cloud lifetime effect.
A Thermal Infrared Radiation Parameterization for Atmospheric Studies
NASA Technical Reports Server (NTRS)
Chou, Ming-Dah; Suarez, Max J.; Liang, Xin-Zhong; Yan, Michael M.-H.; Cote, Charles (Technical Monitor)
2001-01-01
This technical memorandum documents the longwave radiation parameterization developed at the Climate and Radiation Branch, NASA Goddard Space Flight Center, for a wide variety of weather and climate applications. Based on the 1996 version of the Air Force Geophysical Laboratory HITRAN data, the parameterization includes the absorption due to the major gaseous absorbers (water vapor, CO2, O3) and most of the minor trace gases (N2O, CH4, CFCs), as well as clouds and aerosols. The thermal infrared spectrum is divided into nine bands. To achieve a high degree of accuracy and speed, various approaches to computing the transmission function are applied to different spectral bands and gases. The gaseous transmission function is computed using either the k-distribution method or the table look-up method. To include the effect of scattering due to clouds and aerosols, the optical thickness is scaled by the single-scattering albedo and asymmetry factor. The parameterization can accurately compute fluxes to within 1% of the high spectral-resolution line-by-line calculations. The cooling rate can be accurately computed in the region extending from the surface to the 0.01-hPa level.
Lu, Chunsong; Liu, Yangang; Zhang, Guang J.; ...
2016-02-01
This work examines the relationships of entrainment rate to vertical velocity, buoyancy, and turbulent dissipation rate by applying stepwise principal component regression to observational data from shallow cumulus clouds collected during the Routine AAF [Atmospheric Radiation Measurement (ARM) Aerial Facility] Clouds with Low Optical Water Depths (CLOWD) Optical Radiative Observations (RACORO) field campaign over the ARM Southern Great Plains (SGP) site near Lamont, Oklahoma. The cumulus clouds during the RACORO campaign simulated using a large eddy simulation (LES) model are also examined with the same approach. The analysis shows that a combination of multiple variables can better represent entrainment rate in both the observations and LES than any single-variable fitting. Three commonly used parameterizations are also tested on the individual cloud scale. A new parameterization is therefore presented that relates entrainment rate to vertical velocity, buoyancy and dissipation rate; the effects of treating clouds as ensembles and humid shells surrounding cumulus clouds on the new parameterization are discussed. Physical mechanisms underlying the relationships of entrainment rate to vertical velocity, buoyancy and dissipation rate are also explored.
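To illustrate the multi-variable fitting idea, with ordinary multiple regression standing in for the stepwise principal component regression used in the study, the sketch below fits log entrainment rate against vertical velocity, buoyancy, and dissipation rate for synthetic data; the "true" dependence and all numbers are invented and carry no physical authority.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 500

# Synthetic stand-ins for cloud samples: vertical velocity w (m/s), buoyancy B (m/s^2),
# and turbulent dissipation rate eps (m^2/s^3), in loosely plausible ranges.
w = rng.uniform(0.5, 5.0, n)
B = rng.uniform(-0.02, 0.05, n)
eps = rng.uniform(1e-4, 1e-2, n)

# Invented multi-variable "true" entrainment rate plus lognormal noise, used only
# to demonstrate fitting a combined relationship rather than a single-variable law.
lam = 1e-3 * eps**0.3 * w**-1.2 * np.exp(-5.0 * B) * rng.lognormal(0.0, 0.2, n)

# Multiple linear regression in log space: log(lam) ~ log(w) + B + log(eps).
X = np.column_stack([np.ones(n), np.log(w), B, np.log(eps)])
coef, *_ = np.linalg.lstsq(X, np.log(lam), rcond=None)
pred = X @ coef
r2 = 1.0 - np.var(np.log(lam) - pred) / np.var(np.log(lam))
print(coef, r2)
```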
NASA Astrophysics Data System (ADS)
Oh, D.; Noh, Y.; Hoffmann, F.; Raasch, S.
2017-12-01
The Lagrangian cloud model (LCM) is a fundamentally new approach to cloud simulation, in which the flow field is simulated by large eddy simulation and droplets are treated as Lagrangian particles undergoing cloud microphysics. The LCM enables us to investigate raindrop formation and examine the parameterization of cloud microphysics directly by tracking the history of individual Lagrangian droplets. Analysis of the magnitude of raindrop formation and the background physical conditions at the moment at which each Lagrangian droplet grows from a cloud droplet to a raindrop in a shallow cumulus cloud reveals how and under which conditions raindrops are formed. It also provides information on how autoconversion and accretion appear and evolve within a cloud, and how they are affected by various factors such as cloud water mixing ratio, rain water mixing ratio, aerosol concentration, drop size distribution, and dissipation rate. Based on these results, the parameterizations of autoconversion and accretion, such as those of Kessler (1969), Tripoli and Cotton (1980), Beheng (1994), and Khairoutdinov and Kogan (2000), are examined, and modifications to improve the parameterizations are proposed.
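For context, the oldest of the bulk warm-rain schemes examined above can be written down in a few lines; the sketch below evaluates Kessler-type autoconversion and accretion rates with commonly quoted default coefficients, which should be treated as illustrative rather than the tuned values of any specific model.

```python
def kessler_warm_rain(qc, qr, k1=1e-3, qc0=5e-4, k2=2.2):
    """Kessler (1969)-type warm-rain bulk parameterization:
       autoconversion = k1 * (qc - qc0)   for qc above the threshold qc0,
       accretion      = k2 * qc * qr**0.875.
    qc, qr are cloud- and rain-water mixing ratios (kg/kg); returns the two
    conversion rates (kg/kg/s). Coefficients are illustrative defaults."""
    autoconversion = k1 * max(qc - qc0, 0.0)
    accretion = k2 * qc * qr ** 0.875
    return autoconversion, accretion

print(kessler_warm_rain(qc=1.2e-3, qr=2.0e-4))
```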
NASA Astrophysics Data System (ADS)
Charles, T. K.; Paganin, D. M.; Dowd, R. T.
2016-08-01
Intrinsic emittance is often the limiting factor for brightness in fourth-generation light sources, so a good understanding of the factors affecting intrinsic emittance is essential for reducing it. Here we present a parameterization model describing the proportional increase in emittance induced by cathode surface roughness. One major benefit of the parameterization approach presented here is that it reduces the results of a complex Monte Carlo model to a straightforward empirical model. The resulting models describe the proportional increase in transverse momentum introduced by surface roughness, and are applicable to various metal types, photon wavelengths, applied electric fields, and cathode surface terrains. The analysis includes the increase in emittance due to changes in the electric field induced by roughness as well as the increase in transverse momentum resulting from the spatially varying surface normal. We also compare the results of the Parameterization Model to an Analytical Model, which employs various approximations to produce a more compact expression at the cost of reduced accuracy.
Unsupervised image matching based on manifold alignment.
Pei, Yuru; Huang, Fengchun; Shi, Fuhao; Zha, Hongbin
2012-08-01
This paper addresses the problem of automatic matching between two image sets with similar intrinsic structures and different appearances, especially when there is no prior correspondence. An unsupervised manifold alignment framework is proposed to establish correspondence between data sets by a mapping function in the mutual embedding space. We introduce a local similarity metric based on parameterized distance curves to represent the connection of one point with the rest of the manifold. A small set of valid feature pairs can be found without manual interaction by matching the distance curve of one manifold with the curve cluster of the other manifold. To avoid potential confusion in image matching, we propose an extended affine transformation to solve the nonrigid alignment in the embedding space. Comparatively tight alignment and structure preservation are obtained simultaneously. The point pairs with the minimum distance after alignment are taken as the matches. We apply manifold alignment to image set matching problems. The correspondence between image sets of different poses, illuminations, and identities can be established effectively by our approach.
Dommert, M; Reginatto, M; Zboril, M; Fiedler, F; Helmbrecht, S; Enghardt, W; Lutz, B
2017-11-28
Bonner sphere measurements are typically analyzed using unfolding codes. It is well known that reliable uncertainty estimates are difficult to obtain with standard unfolding procedures. An alternative approach is to analyze the data using Bayesian parameter estimation. This method provides reliable estimates of the uncertainties of neutron spectra, leading to rigorous estimates of the uncertainties of the dose. We extend previous Bayesian approaches and apply the method to stray neutrons in proton therapy environments by introducing a new parameterized model that describes the main features of the expected neutron spectra. The parameterization is based on information that is available from measurements and detailed Monte Carlo simulations. The approach was validated with results of an experiment using Bonner spheres carried out at the experimental hall of the OncoRay proton therapy facility in Dresden. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
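To make the workflow concrete, here is a minimal sketch of Bayesian estimation of a parameterized stray-neutron spectrum folded through a Bonner sphere response matrix. The two-bump spectral form, the response matrix, and all numerical values are placeholders invented for illustration; only the overall structure (parameterized spectrum, forward folding, posterior sampling with emcee) reflects the approach described above.

```python
import numpy as np
import emcee

# Placeholder energy grid, response matrix, and counting uncertainty.
log_e = np.linspace(-8.0, 2.0, 60)                       # log10(E/MeV) grid
rng = np.random.default_rng(1)
R = np.abs(rng.normal(1.0, 0.3, (10, log_e.size)))       # 10 spheres x 60 bins
sigma = 0.5

def spectrum(params, log_e):
    """Two-bump parameterized spectrum (thermal + high-energy component)."""
    a1, mu1, s1, a2, mu2, s2 = params
    return (a1 * np.exp(-0.5 * ((log_e - mu1) / s1) ** 2)
            + a2 * np.exp(-0.5 * ((log_e - mu2) / s2) ** 2))

# Synthetic "measurement" generated from known parameters, for illustration.
true_params = [1.0, -7.0, 0.5, 0.8, 0.0, 0.6]
counts = R @ spectrum(true_params, log_e) + rng.normal(0.0, sigma, R.shape[0])

def log_posterior(params, log_e, R, counts, sigma):
    amps_widths = np.asarray(params)[[0, 2, 3, 5]]
    if np.any(amps_widths <= 0.0):                       # flat prior with positivity
        return -np.inf
    predicted = R @ spectrum(params, log_e)              # fold spectrum with responses
    return -0.5 * np.sum(((counts - predicted) / sigma) ** 2)

ndim, nwalkers = 6, 32
p0 = np.array(true_params) + 0.01 * rng.normal(size=(nwalkers, ndim))
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_posterior,
                                args=(log_e, R, counts, sigma))
sampler.run_mcmc(p0, 3000)   # posterior samples -> spectrum and dose uncertainties
```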
NASA Astrophysics Data System (ADS)
Tomassini, Lorenzo; Field, Paul R.; Honnert, Rachel; Malardel, Sylvie; McTaggart-Cowan, Ron; Saitou, Kei; Noda, Akira T.; Seifert, Axel
2017-03-01
A stratocumulus-to-cumulus transition as observed in a cold air outbreak over the North Atlantic Ocean is compared in global climate and numerical weather prediction models and a large-eddy simulation model as part of the Working Group on Numerical Experimentation "Grey Zone" project. The focus of the project is to investigate to what degree current convection and boundary layer parameterizations behave in a scale-adaptive manner in situations where the model resolution approaches the scale of convection. Global model simulations were performed at a wide range of resolutions, with convective parameterizations turned on and off. The models successfully simulate the transition between the observed boundary layer structures, from a well-mixed stratocumulus to a deeper, partly decoupled cumulus boundary layer. There are indications that surface fluxes are generally underestimated. The amounts of cloud liquid water and cloud ice, and likely precipitation, are underpredicted, suggesting deficiencies in the strength of vertical mixing in shear-dominated boundary layers. Regulation by precipitation and mixed-phase cloud microphysical processes also plays an important role in this case. With convection parameterizations switched on, the profiles of atmospheric liquid water and cloud ice are essentially resolution-insensitive. This, however, does not imply that convection parameterizations are scale-aware. Even at the highest resolutions considered here, simulations with convective parameterizations do not converge toward the results of convection-off experiments. Convection and boundary layer parameterizations interact strongly, suggesting the need for a unified treatment of convective and turbulent mixing when addressing scale-adaptivity.
Shrinkage Degree in $L_{2}$ -Rescale Boosting for Regression.
Xu, Lin; Lin, Shaobo; Wang, Yao; Xu, Zongben
2017-08-01
L2-rescale boosting (L2-RBoosting) is a variant of L2-Boosting that can essentially improve the generalization performance of L2-Boosting. The key feature of L2-RBoosting lies in introducing a shrinkage degree to rescale the ensemble estimate in each iteration. Thus, the shrinkage degree determines the performance of L2-RBoosting. The aim of this paper is to develop a concrete analysis of how to determine the shrinkage degree in L2-RBoosting. We propose two feasible ways to select the shrinkage degree: the first is to parameterize the shrinkage degree, and the other is to develop a data-driven approach. After rigorously analyzing the importance of the shrinkage degree in L2-RBoosting, we compare the pros and cons of the proposed methods. We find that although these approaches can reach the same learning rates, the structure of the final estimator of the parameterized approach is better, which sometimes yields better generalization capability when the number of samples is finite. We therefore recommend parameterizing the shrinkage degree of L2-RBoosting. We also present an adaptive parameter-selection strategy for the shrinkage degree and verify its feasibility through both theoretical analysis and numerical verification. The obtained results enhance the understanding of L2-RBoosting and give guidance on how to use it for regression tasks.
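As a rough illustration of the role the shrinkage degree plays, the sketch below implements a generic rescale-boosting loop: at each iteration the current ensemble estimate is shrunk by a degree nu_k before a new weak learner, fit to the residuals, is added with a line-searched step size. The tree base learner, the nu_k = 2/(k+2) schedule, and all constants are assumptions made for the example, not the paper's prescription.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def l2_rboosting(X, y, n_iter=100, shrinkage=lambda k: 2.0 / (k + 2.0)):
    """Generic L2 rescale-boosting loop: rescale the ensemble estimate by the
    shrinkage degree nu_k, then add a residual-fitted weak learner with a
    line-searched step size (schedule and base learner are illustrative)."""
    f = np.zeros(len(y))
    ensemble = []
    for k in range(n_iter):
        nu = shrinkage(k)                                   # shrinkage degree
        f = (1.0 - nu) * f                                  # rescale current estimate
        g = DecisionTreeRegressor(max_depth=2).fit(X, y - f)
        step = g.predict(X)
        alpha = step @ (y - f) / max(step @ step, 1e-12)    # L2 line search
        f += alpha * step
        ensemble.append((nu, alpha, g))
    return f, ensemble

# Toy regression problem.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, (300, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=300)
fit, _ = l2_rboosting(X, y)
print(np.mean((fit - y) ** 2))   # training mean squared error
```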
Computation at a coordinate singularity
NASA Astrophysics Data System (ADS)
Prusa, Joseph M.
2018-05-01
Coordinate singularities are sometimes encountered in computational problems. An important example involves global atmospheric models used for climate and weather prediction. Classical spherical coordinates can be used to parameterize the manifold, that is, to generate a grid for the computational spherical shell domain. This particular parameterization offers significant benefits such as orthogonality and exact representation of curvature and connection (Christoffel) coefficients. But it also exhibits two polar singularities, and at or near these points typical continuity/integral constraints on dependent fields and their derivatives are generally inadequate and lead to poor model performance and erroneous results. Other parameterizations have been developed that eliminate polar singularities, but problems of weaker singularities and enhanced grid noise compared to spherical coordinates (away from the poles) persist. In this study, reparameterization invariance of geometric objects (scalars, vectors and the forms generated by their covariant derivatives) is utilized to generate asymptotic forms for dependent fields of interest valid in the neighborhood of a pole. The central concept is that such objects cannot be altered by the metric structure of a parameterization. The new boundary conditions enforce symmetries that are required for transformations of geometric objects. They are implemented in an implicit polar filter of a structured-grid, nonhydrostatic global atmospheric model that simulates idealized Held-Suarez flows. A series of test simulations using different configurations of the asymptotic boundary conditions is performed, along with control simulations that use the default model numerics with no absorber, at three different grid sizes. Typically the test simulations are ∼20% faster in wall clock time than the control, resulting from a decrease in noise at the poles in all cases. In the control simulations adverse numerical effects from the polar singularity are observed to increase with grid resolution. In contrast, test simulations demonstrate robust polar behavior independent of grid resolution.
Robustness of Hierarchical Modeling of Skill Association in Cognitive Diagnosis Models
ERIC Educational Resources Information Center
Templin, Jonathan L.; Henson, Robert A.; Templin, Sara E.; Roussos, Louis
2008-01-01
Several types of parameterizations of attribute correlations in cognitive diagnosis models use the reduced reparameterized unified model. The general approach presumes an unconstrained correlation matrix with K(K - 1)/2 parameters, whereas the higher order approach postulates K parameters, imposing a unidimensional structure on the correlation…
A Parameterization of Dry Thermals and Shallow Cumuli for Mesoscale Numerical Weather Prediction
NASA Astrophysics Data System (ADS)
Pergaud, Julien; Masson, Valéry; Malardel, Sylvie; Couvreux, Fleur
2009-07-01
For numerical weather prediction models and models resolving deep convection, shallow convective ascents are subgrid processes that are not parameterized by classical local turbulent schemes. The mass flux formulation of convective mixing is now largely accepted as an efficient approach for parameterizing the contribution of larger plumes in convective dry and cloudy boundary layers. We propose a new formulation of the EDMF scheme (Eddy Diffusivity-Mass Flux) based on a single updraft that improves the representation of dry thermals and shallow convective clouds and conserves a correct representation of stratocumulus in mesoscale models. The definition of entrainment and detrainment in the dry part of the updraft is original, and is specified as proportional to the ratio of buoyancy to vertical velocity. In the cloudy part of the updraft, the classical buoyancy sorting approach is chosen. The main closure of the scheme is based on the mass flux near the surface, which is proportional to the sub-cloud layer convective velocity scale w*. The link with the prognostic grid-scale cloud content and cloud cover and the projection on the non-conservative variables is handled by the cloud scheme. The validation of this new formulation using large-eddy simulations focused on showing the robustness of the scheme in representing three different boundary layer regimes. For dry convective cases, this parameterization enables a correct representation of the countergradient zone where the mass flux part represents the top entrainment (IHOP case). It can also handle the diurnal cycle of boundary-layer cumulus clouds (EUROCS/ARM) and conserve a realistic evolution of stratocumulus (EUROCS/FIRE).
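A minimal sketch of the stated proportionality for the dry part of the updraft follows. It only encodes "entrainment and detrainment proportional to the ratio of buoyancy to vertical velocity" as quoted above; the constants, the exact functional form (including how the scheme handles the sign of the buoyancy), and the units are placeholder assumptions, not the published scheme.

```python
import numpy as np

def dry_updraft_mixing(buoyancy, w_up, c_entr=0.5, c_detr=0.5, w_min=0.1):
    """Illustrative entrainment/detrainment rates for the dry updraft,
    proportional to buoyancy over vertical velocity as stated in the abstract.
    Positive buoyancy drives entrainment, negative buoyancy drives detrainment;
    c_entr, c_detr and w_min are placeholder values."""
    ratio = buoyancy / np.maximum(w_up, w_min)
    entrainment = c_entr * np.maximum(ratio, 0.0)
    detrainment = c_detr * np.maximum(-ratio, 0.0)
    return entrainment, detrainment

# Example: a slightly buoyant parcel rising at 1.5 m/s.
print(dry_updraft_mixing(buoyancy=0.02, w_up=1.5))
```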
Distributed parameterization of complex terrain
NASA Astrophysics Data System (ADS)
Band, Lawrence E.
1991-03-01
This paper addresses the incorporation of high-resolution topography, soils, and vegetation information into the simulation of land surface processes in atmospheric circulation models (ACMs). Recent work has concentrated on detailed representation of one-dimensional exchange processes, implicitly assuming surface homogeneity over the atmospheric grid cell. Two approaches that could be taken to incorporate heterogeneity are the integration of a surface model over distributed, discrete portions of the landscape, or over a distribution function of the model parameters. However, the computational burden and parameter-intensive nature of current land surface models in ACMs limit the number of independent model runs and parameterizations that are feasible for operational purposes. Therefore, simplifications in the representation of the vertical exchange processes may be necessary to incorporate the effects of landscape variability and horizontal divergence of energy and water. The strategy is then to trade off the detail and rigor of point exchange calculations for the ability to repeat those calculations over extensive, complex terrain. It is clear that the parameterization process for this approach must be automated, such that large spatial databases collected from remotely sensed images, digital terrain models, and digital maps can be efficiently summarized and transformed into the appropriate parameter sets. Ideally, the landscape should be partitioned into surface units that maximize between-unit variance while minimizing within-unit variance, although it is recognized that some level of surface heterogeneity will be retained at all scales. Therefore, the geographic data processing necessary to automate the distributed parameterization should be able to estimate or predict parameter distributional information within each surface unit.
NASA Astrophysics Data System (ADS)
Rosolem, R.; Rahman, M.; Kollet, S. J.; Wagener, T.
2017-12-01
Understanding the impacts of land cover and climate changes on terrestrial hydrometeorology is important across a range of spatial and temporal scales. Earth System Models (ESMs) provide a robust platform for evaluating these impacts. However, current ESMs generally lack the representation of key hydrological processes (e.g., preferential water flow and direct interactions with aquifers). The typical "free drainage" conceptualization of land models can misrepresent the magnitude of those interactions, consequently affecting the exchange of energy and water at the surface as well as estimates of groundwater recharge. Recent studies show the benefits of explicitly simulating the interactions between subsurface and surface processes in similar models. However, such parameterizations are often computationally demanding, resulting in limited application for large- and global-scale studies. Here, we take a different approach in developing a novel parameterization for groundwater dynamics. Instead of directly adding another complex process to an established land model, we examine a set of comprehensive experimental scenarios using a robust and established three-dimensional hydrological model to develop a simpler parameterization that represents aquifer-to-land-surface interactions. The main goal of our parameterization is to simultaneously maximize the computational gain (i.e., "efficiency") while minimizing simulation errors in comparison to the full 3D model (i.e., "robustness"), to allow for easy implementation in ESMs globally. Our study focuses primarily on the dynamics of groundwater recharge and discharge. Preliminary results show that our proposed approach significantly reduces the computational demand while deviations from the full 3D model remain small for these processes.
NASA Astrophysics Data System (ADS)
Pan, Wenyong; Innanen, Kristopher A.; Geng, Yu
2018-03-01
Seismic full-waveform inversion (FWI) methods hold strong potential to recover multiple subsurface elastic properties for hydrocarbon reservoir characterization. Simultaneously updating multiple physical parameters introduces the problem of interparameter tradeoff, arising from the covariance between different physical parameters, which increases nonlinearity and uncertainty of multiparameter FWI. The coupling effects of different physical parameters are significantly influenced by model parameterization and acquisition arrangement. An appropriate choice of model parameterization is critical to successful field data applications of multiparameter FWI. The objective of this paper is to examine the performance of various model parameterizations in isotropic-elastic FWI with a walk-away vertical seismic profile (W-VSP) dataset for unconventional heavy oil reservoir characterization. Six model parameterizations are considered: velocity-density (α, β and ρ′), modulus-density (κ, μ and ρ), Lamé-density (λ, μ′ and ρ‴), impedance-density (IP, IS and ρ″), velocity-impedance-I (α′, β′ and IP′), and velocity-impedance-II (α″, β″ and IS′). We begin analyzing the interparameter tradeoff by making use of scattering radiation patterns, which is a common strategy for qualitative parameter resolution analysis. In this paper, we discuss the advantages and limitations of the scattering radiation patterns and recommend that interparameter tradeoffs be evaluated using interparameter contamination kernels, which provide quantitative, second-order measurements of the interparameter contaminations and can be constructed efficiently with an adjoint-state approach. Synthetic W-VSP isotropic-elastic FWI experiments in the time domain verify our conclusions about interparameter tradeoffs for various model parameterizations. Density profiles are most strongly influenced by the interparameter contaminations; depending on model parameterization, the inverted density profile can be over-estimated, under-estimated or spatially distorted. Among the six cases, only the velocity-density parameterization provides stable and informative density features not included in the starting model. Field data applications of multicomponent W-VSP isotropic-elastic FWI in the time domain were also carried out. The heavy oil reservoir target zone, characterized by low α-to-β ratios and low Poisson's ratios, can be identified clearly with the inverted isotropic-elastic parameters.
NASA Astrophysics Data System (ADS)
Cariolle, D.; Caro, D.; Paoli, R.; Hauglustaine, D. A.; CuéNot, B.; Cozic, A.; Paugam, R.
2009-10-01
A method is presented to parameterize, for large-scale models, the impact of the nonlinear chemical reactions occurring in the plume generated by concentrated NOx sources. The resulting plume parameterization is implemented into global models and used to evaluate the impact of aircraft emissions on atmospheric chemistry. Compared to previous approaches that rely on corrected emissions or corrective factors to account for the nonlinear chemical effects, the present parameterization is based on the representation of the plume effects via a fuel tracer and a characteristic lifetime during which the nonlinear interactions between species are important; it operates via rates of conversion for the NOx species and an effective reaction rate for O3. The implementation of this parameterization ensures mass conservation and allows the model dynamics to transport emissions at high concentrations in plume form. Results from the model simulations of the impact of aircraft NOx emissions on atmospheric ozone are in rather good agreement with previous work. It is found that ozone production is decreased by 10 to 25% in the Northern Hemisphere, with the largest effects in the North Atlantic flight corridor, when the plume effects on the global-scale chemistry are taken into account. These figures are consistent with evaluations made with corrected emissions, but regional differences are noticeable owing to the possibility offered by this parameterization of transporting emitted species in plume form prior to their dilution at large scale. This method could be further improved by making the parameters used by the parameterization functions of the local temperature, humidity, and turbulence properties diagnosed by the large-scale model. Further extensions of the method can also be considered to account for multistep dilution regimes during plume dissipation. Furthermore, the present parameterization can be adapted to other types of point-source NOx emissions that have to be introduced in large-scale models, such as ship exhausts, provided that the plume life cycle, the type of emissions, and the major reactions involved in the nonlinear chemical systems can be determined with sufficient accuracy.
NASA Astrophysics Data System (ADS)
Basarab, B.; Fuchs, B.; Rutledge, S. A.
2013-12-01
Predicting lightning activity in thunderstorms is important in order to accurately quantify the production of nitrogen oxides (NOx = NO + NO2) by lightning (LNOx). Lightning is an important global source of NOx, and since NOx is a chemical precursor to ozone, the climatological impacts of LNOx could be significant. Many cloud-resolving models rely on parameterizations to predict lightning and LNOx since the processes leading to charge separation and lightning discharge are not yet fully understood. This study evaluates predicted flash rates based on existing lightning parameterizations against flash rates observed for Colorado storms during the Deep Convective Clouds and Chemistry Experiment (DC3). Evaluating lightning parameterizations against storm observations is a useful way to possibly improve the prediction of flash rates and LNOx in models. Additionally, since convective storms that form in the eastern plains of Colorado can be different thermodynamically and electrically from storms in other regions, it is useful to test existing parameterizations against observations from these storms. We present an analysis of the dynamics, microphysics, and lightning characteristics of two case studies, severe storms that developed on 6 and 7 June 2012. This analysis includes dual-Doppler derived horizontal and vertical velocities, a hydrometeor identification based on polarimetric radar variables using the CSU-CHILL radar, and insight into the charge structure using observations from the northern Colorado Lightning Mapping Array (LMA). Flash rates were inferred from the LMA data using a flash counting algorithm. We have calculated various microphysical and dynamical parameters for these storms that have been used in empirical flash rate parameterizations. In particular, maximum vertical velocity has been used to predict flash rates in some cloud-resolving chemistry simulations. We diagnose flash rates for the 6 and 7 June storms using this parameterization and compare to observed flash rates. For the 6 June storm, a preliminary analysis of aircraft observations of storm inflow and outflow is presented in order to place flash rates (and other lightning statistics) in the context of storm chemistry. An approach to a possibly improved LNOx parameterization scheme using different lightning metrics such as flash area will be discussed.
Dynamically consistent parameterization of mesoscale eddies. Part III: Deterministic approach
NASA Astrophysics Data System (ADS)
Berloff, Pavel
2018-07-01
This work continues the development of dynamically consistent parameterizations for representing mesoscale eddy effects in non-eddy-resolving and eddy-permitting ocean circulation models, and focuses on the classical double-gyre problem, in which the main dynamic eddy effects maintain the eastward jet extension of the western boundary currents and its adjacent recirculation zones via the eddy backscatter mechanism. Despite its fundamental importance, this mechanism remains poorly understood; in this paper we first study it and then propose and test a novel parameterization of it. We start by decomposing the reference eddy-resolving flow solution into large-scale and eddy components defined by spatial filtering, rather than by the Reynolds decomposition. Next, we find that the eastward jet and its recirculations are robustly present not only in the large-scale flow itself, but also in the rectified time-mean eddies and in the transient rectified eddy component, which consists of highly anisotropic ribbons of opposite-sign potential vorticity anomalies straddling the instantaneous eastward jet core and responsible for its continuous amplification. The transient rectified component is separated from the flow by a novel remapping method. We hypothesize that the above three components of the eastward jet are ultimately driven by the small-scale transient eddy forcing via the eddy backscatter mechanism, rather than by the mean eddy forcing and large-scale nonlinearities. We verify this hypothesis by progressively turning down the backscatter and observing the induced flow anomalies. The backscatter analysis leads us to formulate the key eddy parameterization hypothesis: in an eddy-permitting model, at least partially resolved eddy backscatter can be significantly amplified to improve the flow solution. Such amplification is a simple and novel eddy parameterization framework, implemented here in terms of local, deterministic flow roughening controlled by a single parameter. We test the parameterization skill in a hierarchy of non-eddy-resolving and eddy-permitting modifications of the original model and demonstrate that it can indeed be highly efficient for restoring the eastward jet extension and its adjacent recirculation zones. The new deterministic parameterization framework not only combines remarkable simplicity with good performance but is also dynamically transparent; therefore, it provides a powerful alternative to the common eddy diffusion and emerging stochastic parameterizations.
NASA Astrophysics Data System (ADS)
Grell, G. A.; Freitas, S. R.; Olson, J.; Bela, M.
2017-12-01
A summary of the latest cumulus parameterization modeling efforts at NOAA's Earth System Research Laboratory (ESRL) will be presented for both regional and global scales. The physics package includes a scale-aware parameterization of subgrid cloudiness feedback to radiation (coupled PBL, microphysics, radiation, shallow and congestus-type convection), the stochastic Grell-Freitas (GF) scale- and aerosol-aware convective parameterization, and an aerosol-aware microphysics package. GF is based on a stochastic approach originally implemented by Grell and Devenyi (2002) and described in more detail in Grell and Freitas (2014, ACP). It was expanded to include PDFs for vertical mass flux, as well as modifications to improve the diurnal cycle. This physics package will be used on different scales, spanning global to cloud resolving, to look at the impact on scalar transport and numerical weather prediction.
A linear-RBF multikernel SVM to classify big text corpora.
Romero, R; Iglesias, E L; Borrajo, L
2015-01-01
The support vector machine (SVM) is a powerful technique for classification. However, SVMs are not well suited to classification of large datasets or text corpora, because the training complexity of SVMs is highly dependent on the input size. Recent developments in the literature on SVMs and other kernel methods emphasize the need to consider multiple kernels or parameterizations of kernels because they provide greater flexibility. This paper presents a multikernel SVM for managing high-dimensional data, providing an automatic parameterization with low computational cost and improving results over SVMs parameterized by brute-force search. The model consists of splitting the dataset into cohesive term slices (clusters) to construct a defined structure (multikernel). The new approach is tested on different text corpora. Experimental results show that the new classifier has good accuracy compared with the classic SVM, while training is significantly faster than for several other SVM classifiers.
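As a rough illustration of the general idea (not the paper's slice construction), a combined linear-plus-RBF kernel can be supplied to an SVM as a precomputed Gram matrix; the mixing weight and gamma below are arbitrary placeholder values.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics.pairwise import linear_kernel, rbf_kernel

def multikernel(X, Y, w=0.5, gamma=0.1):
    """Convex combination of a linear and an RBF kernel (w and gamma are
    illustrative; the paper derives the structure from term-slice clusters)."""
    return w * linear_kernel(X, Y) + (1.0 - w) * rbf_kernel(X, Y, gamma=gamma)

# Toy stand-in for a document-term matrix.
rng = np.random.default_rng(0)
X_train, y_train = rng.random((200, 50)), rng.integers(0, 2, 200)
X_test = rng.random((20, 50))

clf = SVC(kernel="precomputed").fit(multikernel(X_train, X_train), y_train)
pred = clf.predict(multikernel(X_test, X_train))   # Gram matrix of test vs train
```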
Neutrons in proton pencil beam scanning: parameterization of energy, quality factors and RBE
NASA Astrophysics Data System (ADS)
Schneider, Uwe; Hälg, Roger A.; Baiocco, Giorgio; Lomax, Tony
2016-08-01
The biological effectiveness of neutrons produced during proton therapy in inducing cancer is unknown, but potentially large. In particular, since neutron biological effectiveness is energy dependent, it is necessary to estimate, besides the dose, also the energy spectra, in order to obtain quantities which could be a measure of the biological effectiveness and test current models and new approaches against epidemiological studies on cancer induction after proton therapy. For patients treated with proton pencil beam scanning, this work aims to predict the spatially localized neutron energies, the effective quality factor, the weighting factor according to ICRP, and two RBE values, the first obtained from the saturation corrected dose mean lineal energy and the second from DSB cluster induction. A proton pencil beam was Monte Carlo simulated using GEANT. Based on the simulated neutron spectra for three different proton beam energies a parameterization of energy, quality factors and RBE was calculated. The pencil beam algorithm used for treatment planning at PSI has been extended using the developed parameterizations in order to calculate the spatially localized neutron energy, quality factors and RBE for each treated patient. The parameterization represents the simple quantification of neutron energy in two energy bins and the quality factors and RBE with a satisfying precision up to 85 cm away from the proton pencil beam when compared to the results based on 3D Monte Carlo simulations. The root mean square error of the energy estimate between Monte Carlo simulation based results and the parameterization is 3.9%. For the quality factors and RBE estimates it is smaller than 0.9%. The model was successfully integrated into the PSI treatment planning system. It was found that the parameterizations for neutron energy, quality factors and RBE were independent of proton energy in the investigated energy range of interest for proton therapy. The pencil beam algorithm has been extended using the developed parameterizations in order to calculate the neutron energy, quality factor and RBE.
Saa, Pedro; Nielsen, Lars K.
2015-01-01
Kinetic models provide the means to understand and predict the dynamic behaviour of enzymes upon different perturbations. Despite their obvious advantages, classical parameterizations require large amounts of data to fit their parameters. Particularly, enzymes displaying complex reaction and regulatory (allosteric) mechanisms require a great number of parameters and are therefore often represented by approximate formulae, thereby facilitating the fitting but ignoring many real kinetic behaviours. Here, we show that full exploration of the plausible kinetic space for any enzyme can be achieved using sampling strategies provided a thermodynamically feasible parameterization is used. To this end, we developed a General Reaction Assembly and Sampling Platform (GRASP) capable of consistently parameterizing and sampling accurate kinetic models using minimal reference data. The former integrates the generalized MWC model and the elementary reaction formalism. By formulating the appropriate thermodynamic constraints, our framework enables parameterization of any oligomeric enzyme kinetics without sacrificing complexity or using simplifying assumptions. This thermodynamically safe parameterization relies on the definition of a reference state upon which feasible parameter sets can be efficiently sampled. Uniform sampling of the kinetics space enabled dissecting enzyme catalysis and revealing the impact of thermodynamics on reaction kinetics. Our analysis distinguished three reaction elasticity regions for common biochemical reactions: a steep linear region (0> ΔGr >-2 kJ/mol), a transition region (-2> ΔGr >-20 kJ/mol) and a constant elasticity region (ΔGr <-20 kJ/mol). We also applied this framework to model more complex kinetic behaviours such as the monomeric cooperativity of the mammalian glucokinase and the ultrasensitive response of the phosphoenolpyruvate carboxylase of Escherichia coli. In both cases, our approach described appropriately not only the kinetic behaviour of these enzymes, but it also provided insights about the particular features underpinning the observed kinetics. Overall, this framework will enable systematic parameterization and sampling of enzymatic reactions. PMID:25874556
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kuang, Zhiming; Gentine, Pierre
Over the duration of this project, we have made the following advances. 1) We have developed a novel approach to obtain a Lagrangian view of convection from high-resolution numerical models through Lagrangian tracking. This approach nicely complements the more traditionally used Eulerian statistics. We have applied this approach to a range of problems. 2) We have looked into improving and extending our parameterizations based on stochastically entraining parcels, developed previously for shallow convection. 3) This grant also supported our effort on a paper in which we compared cumulus parameterizations and cloud resolving models in terms of their linear response functions. This work will help the community better evaluate and develop cumulus parameterizations. 4) We have applied Lagrangian tracking to shallow convection and to deep convection with and without convective organization to better characterize their dynamics and the transition between them. 5) We have devised a novel way of using Lagrangian tracking to identify cold pools, an area identified as of great interest by the ASR community. Our algorithm has a number of advantages and in particular can handle merging cold pools more gracefully than existing techniques. 6) We demonstrated that we can, for the first time, correctly reproduce both the diurnal and seasonal cycles of the hydrologic cycle in the Amazon using a strategy that explicitly represents convection but parameterizes the large-scale circulation. In addition, we showed that the main cause of the wet season is the presence of early morning fog, which insulates the surface from top-of-the-atmosphere shortwave radiation. In essence, this fog makes the day shorter because radiation cannot penetrate to the surface in the early morning. This is why all fluxes are reduced in the wet season compared to the dry season. 7) We have investigated the life cycle of cold pools and the role of surface diabatic heating. We show that surface heating can kill cold pools and reduce the number of large cold pools and the organization of convection. The effect is quite dramatic over land, where the entire distribution of cold pools is modified, and the cold pools are much warmer and more humid with surface diabatic heating below the cold pools. The PI and the co-PI continue to work together on the parameterization of cold pools.
Parameterized data-driven fuzzy model based optimal control of a semi-batch reactor.
Kamesh, Reddi; Rani, K Yamuna
2016-09-01
A parameterized data-driven fuzzy (PDDF) model structure is proposed for semi-batch processes, and its application to optimal control is illustrated. The orthonormally parameterized input trajectories, initial states, and process parameters are the inputs to the model, which predicts the output trajectories in terms of Fourier coefficients. Fuzzy rules are formulated based on the signs of a linear data-driven model, while the defuzzification step incorporates a linear regression model to shift the domain from input to output. The fuzzy model is employed to formulate an optimal control problem for single-rate as well as multi-rate systems. A simulation study on a multivariable semi-batch reactor system reveals that the proposed PDDF modeling approach captures the nonlinear and time-varying behavior inherent in the semi-batch system fairly accurately. The results of operating-trajectory optimization using the proposed model are comparable to those obtained using the exact first-principles model, and are comparable to or better than optimization results based on a parameterized data-driven artificial neural network model. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
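The key idea of representing a control input by coefficients in an orthonormal basis, and then optimizing over those coefficients, can be sketched as follows. The Legendre basis, the toy surrogate model, and the Nelder-Mead optimizer are illustrative stand-ins; the paper uses its fuzzy model as the surrogate and its own parameterization of the trajectories.

```python
import numpy as np
from numpy.polynomial import legendre
from scipy.optimize import minimize

def input_trajectory(coeffs, t):
    """Input profile reconstructed from Legendre coefficients, an orthogonal
    basis on the (rescaled) batch horizon [0, 1]."""
    return legendre.legval(2.0 * t - 1.0, coeffs)

def objective(coeffs, t, surrogate):
    """Negative predicted end-of-batch quality; `surrogate` stands in for the
    data-driven model that maps an input trajectory to the prediction."""
    return -surrogate(input_trajectory(coeffs, t))

t = np.linspace(0.0, 1.0, 101)
toy_surrogate = lambda u: -np.mean((u - 0.7) ** 2)     # placeholder model, illustration only
res = minimize(objective, x0=np.zeros(5), args=(t, toy_surrogate), method="Nelder-Mead")
u_opt = input_trajectory(res.x, t)                     # optimized operating trajectory
```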
Parameterized reduced order models from a single mesh using hyper-dual numbers
NASA Astrophysics Data System (ADS)
Brake, M. R. W.; Fike, J. A.; Topping, S. D.
2016-06-01
In order to assess the predicted performance of a manufactured system, analysts must consider random variations (both geometric and material) in the development of a model, instead of a single deterministic model of an idealized geometry with idealized material properties. The incorporation of random geometric variations, however, could necessitate the development of thousands of nearly identical solid geometries that must be meshed and separately analyzed, which would require an impractical number of man-hours to complete. This research advances a recent approach to uncertainty quantification by developing parameterized reduced order models. These parameterizations are based upon Taylor series expansions of the system's matrices about the ideal geometry, and a component mode synthesis representation for each linear substructure is used to form an efficient basis with which to study the system. The numerical derivatives required for the Taylor series expansions are obtained via hyper-dual numbers, and are compared to parameterized models constructed with finite difference formulations. The advantage of using hyper-dual numbers is twofold: accuracy of the derivatives to machine precision, and the need to generate only a single mesh of the system of interest. The theory is applied to a stepped beam system in order to demonstrate proof of concept. The results demonstrate that the hyper-dual number multivariate parameterization of geometric variations, which is largely neglected in the literature, is accurate for both sensitivity and optimization studies. As model and mesh generation can constitute the greatest expense of time in analyzing a system, the foundation to create a parameterized reduced order model based on a single mesh is expected to reduce dramatically the time required to analyze multiple realizations of a component's possible geometry.
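To make the hyper-dual idea concrete: a hyper-dual number a + b*e1 + c*e2 + d*e1*e2 (with e1^2 = e2^2 = 0) propagated through a smooth function yields exact first and second derivatives, with no truncation or subtractive-cancellation error. The minimal arithmetic below is a generic illustration of that mechanism, not the authors' implementation.

```python
import math

class HyperDual:
    """Minimal hyper-dual number a + b*e1 + c*e2 + d*e1*e2 with e1^2 = e2^2 = 0."""
    def __init__(self, a, b=0.0, c=0.0, d=0.0):
        self.a, self.b, self.c, self.d = a, b, c, d
    def __add__(self, o):
        o = o if isinstance(o, HyperDual) else HyperDual(o)
        return HyperDual(self.a + o.a, self.b + o.b, self.c + o.c, self.d + o.d)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, HyperDual) else HyperDual(o)
        return HyperDual(self.a * o.a,
                         self.a * o.b + self.b * o.a,
                         self.a * o.c + self.c * o.a,
                         self.a * o.d + self.b * o.c + self.c * o.b + self.d * o.a)
    __rmul__ = __mul__
    def sin(self):
        # Second-order chain rule: the e1*e2 part picks up -sin(a)*b*c.
        return HyperDual(math.sin(self.a),
                         self.b * math.cos(self.a),
                         self.c * math.cos(self.a),
                         self.d * math.cos(self.a) - self.b * self.c * math.sin(self.a))

def f(x):                          # example scalar response function
    return x * x * x + x.sin()

x = HyperDual(1.3, 1.0, 1.0, 0.0)  # seed both dual parts with the perturbation
y = f(x)
print(y.b, y.d)                    # exact df/dx and d2f/dx2 at x = 1.3
```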
How do dispersal costs and habitat selection influence realized population connectivity?
Burgess, Scott C; Treml, Eric A; Marshall, Dustin J
2012-06-01
Despite the importance of dispersal for population connectivity, dispersal is often costly to the individual. A major impediment to understanding connectivity has been a lack of data combining the movement of individuals and their survival to reproduction in the new habitat (realized connectivity). Although mortality often occurs during dispersal (an immediate cost), in many organisms costs are paid after dispersal (deferred costs). It is unclear how such deferred costs influence the mismatch between dispersal and realized connectivity. Through a series of experiments in the field and laboratory, we estimated both direct and indirect deferred costs in a marine bryozoan (Bugula neritina). We then used the empirical data to parameterize a theoretical model in order to formalize predictions about how dispersal costs influence realized connectivity. Individuals were more likely to colonize poor-quality habitat after prolonged dispersal durations. Individuals that colonized poor-quality habitat performed poorly after colonization because of some property of the habitat (an indirect deferred cost) rather than from prolonged dispersal per se (a direct deferred cost). Our theoretical model predicted that indirect deferred costs could result in nonlinear mismatches between spatial patterns of potential and realized connectivity. The deferred costs of dispersal are likely to be crucial for determining how well patterns of dispersal reflect realized connectivity. Ignoring these deferred costs could lead to inaccurate predictions of spatial population dynamics.
Parameterization of single-scattering properties of snow
NASA Astrophysics Data System (ADS)
Räisänen, Petri; Kokhanovsky, Alexander; Guyot, Gwennole; Jourdan, Olivier; Nousiainen, Timo
2015-04-01
Snow consists of non-spherical ice grains of various shapes and sizes, which are surrounded by air and sometimes covered by films of liquid water. Still, in many studies, homogeneous spherical snow grains have been assumed in radiative transfer calculations, due to the convenience of using Mie theory. More recently, second-generation Koch fractals have been employed. While they produce a relatively flat scattering phase function typical of deformed non-spherical particles, this is still a rather ad-hoc choice. Here, angular scattering measurements for blowing snow conducted during the CLimate IMpacts of Short-Lived pollutants In the Polar region (CLIMSLIP) campaign at Ny Ålesund, Svalbard, are used to construct a reference phase function for snow. Based on this phase function, an optimized habit combination (OHC) consisting of severely rough (SR) droxtals, aggregates of SR plates and strongly distorted Koch fractals is selected. The single-scattering properties of snow are then computed for the OHC as a function of wavelength λ and snow grain volume-to-projected area equivalent radius rvp. Parameterization equations are developed for λ=0.199-2.7 μm and rvp = 10-2000 μm, which express the single-scattering co-albedo β, the asymmetry parameter g and the phase function as functions of the size parameter and the real and imaginary parts of the refractive index. Compared to the reference values computed for the OHC, the accuracy of the parameterization is very high for β and g. This is also true for the phase function parameterization, except for strongly absorbing cases (β > 0.3). Finally, we consider snow albedo and reflected radiances for the suggested snow optics parameterization, making comparisons with spheres and distorted Koch fractals. Further evaluation and validation of the proposed approach against (e.g.) bidirectional reflectance and polarization measurements for snow is planned. At any rate, it seems safe to assume that the OHC selected here provides a substantially better basis for representing the single-scattering properties of snow than spheres do. Moreover, the parameterizations developed here are analytic and simple to use, and they can also be applied to the treatment of dirty snow following (e.g.) the approach of Kokhanovsky (The Cryosphere, 7, 1325-1331, doi:10.5194/tc-7-1325-2013, 2013). This should make them an attractive option for use in radiative transfer applications involving snow.
Data-driven parameterization of the generalized Langevin equation
Lei, Huan; Baker, Nathan A.; Li, Xiantao
2016-11-29
We present a data-driven approach to determine the memory kernel and random noise of the generalized Langevin equation. To facilitate practical implementations, we parameterize the kernel function in the Laplace domain by a rational function, with coefficients directly linked to the equilibrium statistics of the coarse-grained variables. Further, we show that such an approximation can be constructed to arbitrarily high order. Within these approximations, the generalized Langevin dynamics can be embedded in an extended stochastic model without memory. We demonstrate how to introduce the stochastic noise so that the fluctuation-dissipation theorem is exactly satisfied.
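To illustrate what "embedded in an extended stochastic model without memory" means, the sketch below simulates the lowest-order case: a one-pole rational kernel K(s) = c/(s + lam), i.e. K(t) = c*exp(-lam*t), carried by a single auxiliary variable whose noise amplitude sqrt(2*lam*c*kT) enforces the fluctuation-dissipation theorem. The free-particle setting, the Euler-Maruyama integrator, and all parameter values are assumptions made for this example, not the paper's construction.

```python
import numpy as np

def simulate_gle_one_pole(n_steps=100_000, dt=1e-3, kT=1.0, m=1.0,
                          c=4.0, lam=2.0, seed=0):
    """Markovian embedding of a GLE with exponential memory kernel
    K(t) = c*exp(-lam*t): the auxiliary variable z carries the memory force,
    and its white-noise amplitude is chosen to satisfy fluctuation-dissipation."""
    rng = np.random.default_rng(seed)
    x = v = z = 0.0
    sigma = np.sqrt(2.0 * lam * c * kT)
    v_traj = np.empty(n_steps)
    for i in range(n_steps):
        v += (z / m) * dt                     # memory force acts on the velocity
        x += v * dt
        z += (-lam * z - c * v) * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        v_traj[i] = v
    return v_traj

v_traj = simulate_gle_one_pole()
print(np.var(v_traj[len(v_traj) // 2:]))      # should approach kT/m = 1 at equilibrium
```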
The Hunt Opinion Model—An Agent Based Approach to Recurring Fashion Cycles
Apriasz, Rafał; Krueger, Tyll; Marcjasz, Grzegorz; Sznajd-Weron, Katarzyna
2016-01-01
We study a simple agent-based model of recurring fashion cycles in a society that consists of two interacting communities: “snobs” and “followers” (or “opinion hunters”, hence the name of the model). Followers conform to all other individuals, whereas snobs conform only to their own group and anticonform to the other. The model allows one to examine the role of the social structure, i.e. the influence of the number of inter-links between the two communities, as well as the role of the stability of links. The latter is accomplished by considering two versions of the same model—quenched (parameterized by fraction L of fixed inter-links) and annealed (parameterized by probability p that a given inter-link exists). Using Monte Carlo simulations and analytical treatment (the latter only for the annealed model), we show that there is a critical fraction of inter-links, above which recurring cycles occur. For p ≤ 0.5 we derive a relation between parameters L and p that allows us to compare both models and show that the critical value of inter-connections, p*, is the same for both versions of the model (annealed and quenched) but the period of a fashion cycle is shorter for the quenched model. Near the critical point, the cycles are irregular and a change of fashion is difficult to predict. For the annealed model we also provide a deeper theoretical analysis. We conjecture on topological grounds that the so-called saddle node heteroclinic bifurcation appears at p*. For p ≥ 0.5 we show analytically the existence of the second critical value of p, for which the system undergoes Hopf’s bifurcation. PMID:27835679
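A compact Monte Carlo sketch of the annealed variant follows. The update rule (random sequential updating, followers copying a randomly chosen neighbour, snobs copying in-group and anticonforming out-of-group) is simplified relative to the paper, and the community sizes, p_inter, and number of sweeps are arbitrary illustrative choices.

```python
import numpy as np

def hunt_model(n_snobs=500, n_followers=500, p_inter=0.4, sweeps=200, seed=1):
    """Toy annealed snobs/followers dynamics: each agent holds a binary fashion
    state; the randomly chosen source agent belongs to the other community with
    probability p_inter (the annealed inter-link probability)."""
    rng = np.random.default_rng(seed)
    snobs = rng.integers(0, 2, n_snobs)
    followers = rng.integers(0, 2, n_followers)
    history = []
    n_total = n_snobs + n_followers
    for _ in range(sweeps):
        for _ in range(n_total):
            if rng.random() < n_snobs / n_total:            # update a random snob
                other_group = rng.random() < p_inter
                src = followers if other_group else snobs
                s = src[rng.integers(len(src))]
                snobs[rng.integers(n_snobs)] = (1 - s) if other_group else s
            else:                                            # update a random follower
                src = snobs if rng.random() < p_inter else followers
                followers[rng.integers(n_followers)] = src[rng.integers(len(src))]
        history.append(followers.mean())
    return np.array(history)   # fraction of followers adopting fashion 1 over time

print(hunt_model()[-5:])
```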
Breckheimer, Ian; Haddad, Nick M; Morris, William F; Trainor, Anne M; Fields, William R; Jobe, R Todd; Hudgens, Brian R; Moody, Aaron; Walters, Jeffrey R
2014-12-01
Conserving or restoring landscape connectivity between patches of breeding habitat is a common strategy to protect threatened species from habitat fragmentation. By managing connectivity for some species, usually charismatic vertebrates, it is often assumed that these species will serve as conservation umbrellas for other species. We tested this assumption by developing a quantitative method to measure overlap in dispersal habitat of three threatened species, a bird (the umbrella), a butterfly, and a frog, inhabiting the same fragmented landscape. Dispersal habitat was determined with Circuitscape, which was parameterized with movement data collected for each species. Despite differences in natural history and breeding habitat, we found substantial overlap in the spatial distributions of areas important for dispersal of this suite of taxa. However, the intuitive umbrella species (the bird) did not have the highest overlap with other species in terms of the areas that supported connectivity. Nevertheless, we contend that when there are no irreconcilable differences between the dispersal habitats of species that cohabitate on the landscape, managing for umbrella species can help conserve or restore connectivity simultaneously for multiple threatened species with different habitat requirements. © 2014 Society for Conservation Biology.
Mixing Efficiency in the Ocean.
Gregg, M C; D'Asaro, E A; Riley, J J; Kunze, E
2018-01-03
Mixing efficiency is the ratio of the net change in potential energy to the energy expended in producing the mixing. Parameterizations of efficiency and of related mixing coefficients are needed to estimate diapycnal diffusivity from measurements of the turbulent dissipation rate. Comparing diffusivities from microstructure profiling with those inferred from the thickening rate of four simultaneous tracer releases has verified, within observational accuracy, 0.2 as the mixing coefficient over a 30-fold range of diapycnal diffusivities. Although some mixing coefficients can be estimated from pycnocline measurements, at present mixing efficiency must be obtained from channel flows, laboratory experiments, and numerical simulations. Reviewing the different approaches demonstrates that estimates and parameterizations for mixing efficiency and coefficients are not converging beyond the at-sea comparisons with tracer releases, leading to recommendations for a community approach to address this important issue.
Efficient statistical mapping of avian count data
Royle, J. Andrew; Wikle, C.K.
2005-01-01
We develop a spatial modeling framework for count data that is efficient to implement in high-dimensional prediction problems. We consider spectral parameterizations for the spatially varying mean of a Poisson model. The spectral parameterization of the spatial process is very computationally efficient, enabling effective estimation and prediction in large problems using Markov chain Monte Carlo techniques. We apply this model to creating avian relative abundance maps from North American Breeding Bird Survey (BBS) data. Variation in the ability of observers to count birds is modeled as spatially independent noise, resulting in over-dispersion relative to the Poisson assumption. This approach represents an improvement over existing approaches used for spatial modeling of BBS data which are either inefficient for continental scale modeling and prediction or fail to accommodate important distributional features of count data thus leading to inaccurate accounting of prediction uncertainty.
DeepNAT: Deep convolutional neural network for segmenting neuroanatomy.
Wachinger, Christian; Reuter, Martin; Klein, Tassilo
2018-04-15
We introduce DeepNAT, a 3D deep convolutional neural network for the automatic segmentation of NeuroAnaTomy in T1-weighted magnetic resonance images. DeepNAT is an end-to-end learning-based approach to brain segmentation that jointly learns an abstract feature representation and a multi-class classification. We propose a 3D patch-based approach, where we predict not only the center voxel of the patch but also its neighbors, which is formulated as multi-task learning. To address a class imbalance problem, we arrange two networks hierarchically, where the first one separates foreground from background, and the second one identifies 25 brain structures on the foreground. Since patches lack spatial context, we augment them with coordinates. To this end, we introduce a novel intrinsic parameterization of the brain volume, formed by eigenfunctions of the Laplace-Beltrami operator. As network architecture, we use three convolutional layers with pooling, batch normalization, and non-linearities, followed by fully connected layers with dropout. The final segmentation is inferred from the probabilistic output of the network with a 3D fully connected conditional random field, which ensures label agreement between close voxels. The roughly 2.7 million parameters in the network are learned with stochastic gradient descent. Our results show that DeepNAT compares favorably to state-of-the-art methods. Finally, the purely learning-based method may have high potential for adaptation to young, old, or diseased brains by fine-tuning the pre-trained network with a small training sample on the target application, where the availability of larger datasets with manual annotations may boost the overall segmentation accuracy in the future. Copyright © 2017 Elsevier Inc. All rights reserved.
Two Approaches to Calibration in Metrology
DOE Office of Scientific and Technical Information (OSTI.GOV)
Campanelli, Mark
2014-04-01
Inferring mathematical relationships with quantified uncertainty from measurement data is common to computational science and metrology. Sufficient knowledge of measurement process noise enables Bayesian inference. Otherwise, an alternative approach is required, here termed compartmentalized inference, because collection of uncertain data and model inference occur independently. Bayesian parameterized model inference is compared to a Bayesian-compatible compartmentalized approach for ISO-GUM compliant calibration problems in renewable energy metrology. In either approach, model evidence can help reduce model discrepancy.
Large-scale DCMs for resting-state fMRI.
Razi, Adeel; Seghier, Mohamed L; Zhou, Yuan; McColgan, Peter; Zeidman, Peter; Park, Hae-Jeong; Sporns, Olaf; Rees, Geraint; Friston, Karl J
2017-01-01
This paper considers the identification of large directed graphs for resting-state brain networks based on biophysical models of distributed neuronal activity, that is, effective connectivity. This identification can be contrasted with functional connectivity methods based on symmetric correlations that are ubiquitous in resting-state functional MRI (fMRI). We use spectral dynamic causal modeling (DCM) to invert large graphs comprising dozens of nodes or regions. The ensuing graphs are directed and weighted, hence providing a neurobiologically plausible characterization of connectivity in terms of excitatory and inhibitory coupling. Furthermore, we show that the use of Bayesian model reduction to discover the most likely sparse graph (or model) from a parent (e.g., fully connected) graph eschews the arbitrary thresholding often applied to large symmetric (functional connectivity) graphs. Using empirical fMRI data, we show that spectral DCM furnishes connectivity estimates on large graphs that correlate strongly with the estimates provided by stochastic DCM. Furthermore, we increase the efficiency of model inversion using functional connectivity modes to place prior constraints on effective connectivity. In other words, we use a small number of modes to finesse the potentially redundant parameterization of large DCMs. We show that spectral DCM, with functional connectivity priors, is ideally suited for directed graph theoretic analyses of resting-state fMRI. We envision that directed graphs will prove useful in understanding the psychopathology and pathophysiology of neurodegenerative and neurodevelopmental disorders. We will demonstrate the utility of large directed graphs in clinical populations in subsequent reports, using the procedures described in this paper.
Bayesian analysis of physiologically based toxicokinetic and toxicodynamic models.
Hack, C Eric
2006-04-17
Physiologically based toxicokinetic (PBTK) and toxicodynamic (TD) models of bromate in animals and humans would improve our ability to accurately estimate the toxic doses in humans based on available animal studies. These mathematical models are often highly parameterized and must be calibrated in order for the model predictions of internal dose to adequately fit the experimentally measured doses. Highly parameterized models are difficult to calibrate and it is difficult to obtain accurate estimates of uncertainty or variability in model parameters with commonly used frequentist calibration methods, such as maximum likelihood estimation (MLE) or least squared error approaches. The Bayesian approach called Markov chain Monte Carlo (MCMC) analysis can be used to successfully calibrate these complex models. Prior knowledge about the biological system and associated model parameters is easily incorporated in this approach in the form of prior parameter distributions, and the distributions are refined or updated using experimental data to generate posterior distributions of parameter estimates. The goal of this paper is to give the non-mathematician a brief description of the Bayesian approach and Markov chain Monte Carlo analysis, how this technique is used in risk assessment, and the issues associated with this approach.
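To make the calibration idea concrete, here is a minimal hedged sketch of a Metropolis MCMC update for a single toxicokinetic parameter of a hypothetical one-compartment model; the model, synthetic data, prior, and the parameter name k_e are illustrative assumptions and do not reproduce the bromate PBTK/TD model discussed above.

```python
# Hedged sketch of Bayesian calibration by Markov chain Monte Carlo: a
# random-walk Metropolis sampler updates one hypothetical elimination rate
# "k_e" of a one-compartment model against noisy concentration data.
import numpy as np

rng = np.random.default_rng(0)
t = np.array([0.5, 1.0, 2.0, 4.0, 8.0])           # sampling times [h]
dose, volume = 100.0, 10.0                        # hypothetical dose [mg] and volume [L]
obs = dose / volume * np.exp(-0.35 * t) * rng.lognormal(0.0, 0.1, t.size)  # synthetic data

def log_posterior(k_e, sigma=0.1):
    if k_e <= 0.0:
        return -np.inf                            # prior support: k_e must be positive
    pred = dose / volume * np.exp(-k_e * t)       # one-compartment prediction
    log_lik = -0.5 * np.sum((np.log(obs) - np.log(pred)) ** 2) / sigma**2
    log_prior = -0.5 * (np.log(k_e) - np.log(0.5)) ** 2   # lognormal prior centered on 0.5/h
    return log_lik + log_prior

k_e, chain = 0.5, []
for _ in range(20000):
    proposal = k_e + rng.normal(0.0, 0.05)        # random-walk proposal
    if np.log(rng.random()) < log_posterior(proposal) - log_posterior(k_e):
        k_e = proposal                            # accept
    chain.append(k_e)

posterior = np.array(chain[5000:])                # discard burn-in
print(f"posterior k_e: {posterior.mean():.3f} +/- {posterior.std():.3f} per hour")
```

The posterior samples directly provide the uncertainty and variability estimates that, as noted above, are difficult to obtain with MLE or least-squares calibration.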
NASA Astrophysics Data System (ADS)
Christensen, H. M.; Moroz, I.; Palmer, T.
2015-12-01
It is now acknowledged that representing model uncertainty in atmospheric simulators is essential for the production of reliable probabilistic ensemble forecasts, and a number of different techniques have been proposed for this purpose. Stochastic convection parameterization schemes use random numbers to represent the difference between a deterministic parameterization scheme and the true atmosphere, accounting for the unresolved subgrid-scale variability associated with convective clouds. An alternative approach varies the values of poorly constrained physical parameters in the model to represent the uncertainty in these parameters. This study presents new perturbed parameter schemes for use in the European Centre for Medium-Range Weather Forecasts (ECMWF) convection scheme. Two types of scheme are developed and implemented. Both schemes represent the joint uncertainty in four of the parameters in the convection parameterization scheme, which was estimated using the Ensemble Prediction and Parameter Estimation System (EPPES). The first scheme developed is a fixed perturbed parameter scheme, where the values of uncertain parameters are changed between ensemble members, but held constant over the duration of the forecast. The second is a stochastically varying perturbed parameter scheme. The performance of these schemes was compared to the ECMWF operational stochastic scheme, Stochastically Perturbed Parametrisation Tendencies (SPPT), and to a model which does not represent uncertainty in convection. The skill of probabilistic forecasts made using the different models was evaluated. While the perturbed parameter schemes improve on the stochastic parameterization in some regards, the SPPT scheme outperforms the perturbed parameter approaches when considering forecast variables that are particularly sensitive to convection. Overall, SPPT schemes are the most skilful representations of model uncertainty due to convection parameterization. Reference: H. M. Christensen, I. M. Moroz, and T. N. Palmer, 2015: Stochastic and Perturbed Parameter Representations of Model Uncertainty in Convection Parameterization. J. Atmos. Sci., 72, 2525-2544.
Strategy for long-term 3D cloud-resolving simulations over the ARM SGP site and preliminary results
NASA Astrophysics Data System (ADS)
Lin, W.; Liu, Y.; Song, H.; Endo, S.
2011-12-01
Parametric representations of cloud/precipitation processes must continue to be adopted in climate simulations with increasingly higher spatial resolution or with emerging adaptive-mesh frameworks, and it is becoming ever more critical that such parameterizations be scale aware. Continuous cloud measurements at DOE's ARM sites have provided a strong observational basis for novel cloud parameterization research at various scales. Despite significant progress in our observational ability, there are important cloud-scale physical and dynamical quantities that are either not currently observable or insufficiently sampled. To complement the long-term ARM measurements, we have explored an optimal strategy to carry out long-term 3-D cloud-resolving simulations over the ARM SGP site using the Weather Research and Forecasting (WRF) model with multi-domain nesting. The factors considered to have important influences on the simulated cloud fields include domain size, spatial resolution, model top, forcing data set, model physics and the growth of model errors. Hydrometeor advection, which may play a significant role in hydrological processes within the observational domain but is often lacking, and the limitations imposed by the constraint of domain-wide uniform forcing in conventional cloud-system-resolving simulations, are at least partly accounted for in our approach. Conventional and probabilistic verification approaches are employed first for selected cases to optimize the model's capability of faithfully reproducing the observed mean and statistical distributions of cloud-scale quantities. This then forms the basis of our setup for long-term cloud-resolving simulations over the ARM SGP site. The model results will facilitate parameterization research, as well as understanding and dissecting parameterization deficiencies in climate models.
NASA Astrophysics Data System (ADS)
Garrett, T. J.; Alva, S.; Glenn, I. B.; Krueger, S. K.
2015-12-01
There are two possible approaches for parameterizing sub-grid cloud dynamics in a coarser grid model. The most common is to use a fine-scale model to explicitly resolve the mechanistic details of clouds to the best extent possible, and then to parameterize these behaviors as a cloud state for the coarser grid. A second is to invoke physical intuition and some very general theoretical principles from equilibrium statistical mechanics. This approach avoids any requirement to resolve time-dependent processes in order to arrive at a suitable solution. The second approach is widely used elsewhere in the atmospheric sciences: for example the Planck function for blackbody radiation is derived this way, where no mention is made of the complexities of modeling a large ensemble of time-dependent radiation-dipole interactions in order to obtain the "grid-scale" spectrum of thermal emission by the blackbody as a whole. We find that this statistical approach may be equally suitable for modeling convective clouds. Specifically, we make the physical argument that the dissipation of buoyant energy in convective clouds is done through mixing across a cloud perimeter. From thermodynamic reasoning, one might then anticipate that vertically stacked isentropic surfaces are characterized by a power law d ln N / d ln P = -1, where N(P) is the number of clouds with perimeter P. In a Giga-LES simulation of convective clouds within a 100 km square domain we find that such a power law does appear to characterize simulated cloud perimeters along isentropes, provided a sufficiently large cloud sample. The suggestion is that it may be possible to parameterize certain important aspects of cloud state without appealing to computationally expensive dynamic simulations.
Efficient Skeletonization of Volumetric Objects.
Zhou, Yong; Toga, Arthur W
1999-07-01
Skeletonization promises to become a powerful tool for compact shape description, path planning, and other applications. However, current techniques can seldom efficiently process real, complicated 3D data sets, such as MRI and CT data of human organs. In this paper, we present an efficient voxel-coding-based algorithm for skeletonization of 3D voxelized objects. The skeletons are interpreted as connected centerlines, consisting of sequences of medial points of consecutive clusters. These centerlines are initially extracted as paths of voxels, followed by medial point replacement, refinement, smoothing, and connection operations. The voxel-coding techniques have been proposed for each of these operations in a uniform and systematic fashion. In addition to preserving basic connectivity and centeredness, the algorithm is characterized by straightforward computation, no sensitivity to object boundary complexity, explicit extraction of ready-to-parameterize and branch-controlled skeletons, and efficient object hole detection. These issues are rarely discussed in traditional methods. A range of 3D medical MRI and CT data sets were used for testing the algorithm, demonstrating its utility.
Attitude Estimation or Quaternion Estimation?
NASA Technical Reports Server (NTRS)
Markley, F. Landis
2003-01-01
The attitude of spacecraft is represented by a 3x3 orthogonal matrix with unity determinant, which belongs to the three-dimensional special orthogonal group SO(3). The fact that all three-parameter representations of SO(3) are singular or discontinuous for certain attitudes has led to the use of higher-dimensional nonsingular parameterizations, especially the four-component quaternion. In attitude estimation, we are faced with the alternatives of using an attitude representation that is either singular or redundant. Estimation procedures fall into three broad classes. The first estimates a three-dimensional representation of attitude deviations from a reference attitude parameterized by a higher-dimensional nonsingular parameterization. The deviations from the reference are assumed to be small enough to avoid any singularity or discontinuity of the three-dimensional parameterization. The second class, which estimates a higher-dimensional representation subject to enough constraints to leave only three degrees of freedom, is difficult to formulate and apply consistently. The third class estimates a representation of SO(3) with more than three dimensions, treating the parameters as independent. We refer to the most common member of this class as quaternion estimation, to contrast it with attitude estimation. We analyze the first and third of these approaches in the context of an extended Kalman filter with simplified kinematics and measurement models.
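The first class of estimators can be sketched very compactly: a three-parameter attitude deviation is estimated about a reference attitude and folded multiplicatively into a reference quaternion, which is then renormalized, so the global singularity of three-parameter representations is never encountered. The snippet below is a hedged illustration of that bookkeeping (scalar-last convention, invented numbers), not the filter analyzed in the paper.

```python
# Illustrative sketch, assuming a scalar-last quaternion [x, y, z, w]: a small
# estimated rotation vector (the three-parameter deviation) updates the
# reference quaternion multiplicatively and the result is renormalized.
import numpy as np

def quat_multiply(q, p):
    """Hamilton product of two scalar-last quaternions."""
    qv, qw = q[:3], q[3]
    pv, pw = p[:3], p[3]
    vec = qw * pv + pw * qv + np.cross(qv, pv)
    return np.concatenate([vec, [qw * pw - qv @ pv]])

def apply_deviation(q_ref, delta_theta):
    """Fold a small estimated rotation vector [rad] into the reference attitude."""
    dq = np.concatenate([0.5 * delta_theta, [1.0]])    # small-angle quaternion
    q_new = quat_multiply(dq, q_ref)
    return q_new / np.linalg.norm(q_new)               # enforce the unit-norm constraint

q_ref = np.array([0.0, 0.0, 0.0, 1.0])                 # identity reference attitude
delta = np.array([0.01, -0.02, 0.005])                 # estimated deviation [rad]
print(apply_deviation(q_ref, delta))
```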
NASA Astrophysics Data System (ADS)
Balaykin, A. V.; Bezsonov, K. A.; Nekhoroshev, M. V.; Shulepov, A. P.
2018-01-01
This paper dwells upon a variance parameterization method. Variance, or dimensional, parameterization is based on sketching, with various parametric links superimposed on the sketch objects and user-imposed constraints in the form of an equation system that determines the parametric dependencies. This method is fully integrated in a top-down design methodology to enable the creation of multi-variant and flexible fixture assembly models, as all the modeling operations are hierarchically linked in the built tree. In this research the authors consider a parameterization method for machine tooling used to manufacture parts on multiaxial CNC machining centers in a real manufacturing process. The developed method significantly reduces tooling design time when changes are made to a part’s geometric parameters. The method can also reduce the time required for design and engineering preproduction, in particular for the development of control programs for CNC equipment and control and measuring machines, and it can automate the release of design and engineering documentation. Variance parameterization helps to optimize the construction of parts as well as machine tooling using integrated CAE systems. In the framework of this study, the authors demonstrate a comprehensive approach to parametric modeling of machine tooling in the CAD package used in the real manufacturing process of aircraft engines.
Building integral projection models: a user's guide
Rees, Mark; Childs, Dylan Z; Ellner, Stephen P; Coulson, Tim
2014-01-01
In order to understand how changes in individual performance (growth, survival or reproduction) influence population dynamics and evolution, ecologists are increasingly using parameterized mathematical models. For continuously structured populations, where some continuous measure of individual state influences growth, survival or reproduction, integral projection models (IPMs) are commonly used. We provide a detailed description of the steps involved in constructing an IPM, explaining how to: (i) translate your study system into an IPM; (ii) implement your IPM; and (iii) diagnose potential problems with your IPM. We emphasize how the study organism's life cycle, and the timing of censuses, together determine the structure of the IPM kernel and important aspects of the statistical analysis used to parameterize an IPM using data on marked individuals. An IPM based on population studies of Soay sheep is used to illustrate the complete process of constructing, implementing and evaluating an IPM fitted to sample data. We then look at very general approaches to parameterizing an IPM, using a wide range of statistical techniques (e.g. maximum likelihood methods, generalized additive models, nonparametric kernel density estimators). Methods for selecting models for parameterizing IPMs are briefly discussed. We conclude with key recommendations and a brief overview of applications that extend the basic model. The online Supporting Information provides commented R code for all our analyses. PMID:24219157
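The kernel-construction step described above can be sketched in a few lines: discretize the size axis with the midpoint rule and assemble K(z', z) = s(z) G(z', z) + f(z) C(z'). The vital-rate functions and constants below are illustrative stand-ins rather than the Soay sheep parameterization, and Python is used here in place of the R code provided in the paper's Supporting Information.

```python
# Hedged sketch of building a basic IPM kernel with the midpoint rule and
# extracting the asymptotic population growth rate as its dominant eigenvalue.
import numpy as np

def norm_pdf(x, mu, sd):
    return np.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * np.sqrt(2.0 * np.pi))

n_mesh, z_min, z_max = 100, 0.0, 10.0
h = (z_max - z_min) / n_mesh
z = z_min + h * (np.arange(n_mesh) + 0.5)              # midpoints of the size mesh

survival  = lambda z: 1.0 / (1.0 + np.exp(-(0.5 * z - 2.0)))   # logistic survival s(z)
growth    = lambda z1, z: norm_pdf(z1, 0.9 * z + 0.8, 0.5)     # growth kernel G(z', z)
fecundity = lambda z: np.exp(0.1 * z)                          # offspring number f(z)
offspring = lambda z1: norm_pdf(z1, 2.0, 0.4)                  # offspring size kernel C(z')

Z1, Z = np.meshgrid(z, z, indexing="ij")               # rows: next size z', cols: current size z
K = h * (survival(Z) * growth(Z1, Z) + fecundity(Z) * offspring(Z1))

lam = np.max(np.real(np.linalg.eigvals(K)))            # asymptotic population growth rate
print(f"population growth rate lambda = {lam:.3f}")
```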
Local Minima Free Parameterized Appearance Models
Nguyen, Minh Hoai; De la Torre, Fernando
2010-01-01
Parameterized Appearance Models (PAMs) (e.g. Eigentracking, Active Appearance Models, Morphable Models) are commonly used to model the appearance and shape variation of objects in images. While PAMs have numerous advantages relative to alternate approaches, they have at least two drawbacks. First, they are especially prone to local minima in the fitting process. Second, often few if any of the local minima of the cost function correspond to acceptable solutions. To solve these problems, this paper proposes a method to learn a cost function by explicitly requiring that its local minima occur at, and only at, the places corresponding to the correct fitting parameters. To the best of our knowledge, this is the first paper to address the problem of learning a cost function to explicitly model local properties of the error surface to fit PAMs. Synthetic and real examples show improvement in alignment performance in comparison with traditional approaches. PMID:21804750
NASA Technical Reports Server (NTRS)
Havelund, Klaus
2014-01-01
We present a form of automaton, referred to as data automata, suited for monitoring sequences of data-carrying events, for example emitted by an executing software system. This form of automaton allows states to be parameterized with data, forming named records, which are stored in an efficiently indexed data structure, a form of database. This very explicit approach differs from other automaton-based monitoring approaches. Data automata are also characterized by allowing transition conditions to refer to other parameterized states, and by allowing transition sequences. The presented automaton concept is inspired by rule-based systems, especially the Rete algorithm, which is one of the well-established algorithms for executing rule-based systems. We present an optimized external DSL for data automata, as well as a comparable unoptimized internal DSL (API) in the Scala programming language, in order to compare the two solutions. An evaluation compares these two solutions to several other monitoring systems.
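A toy version of the core idea, monitor states parameterized with data and kept in an indexed store, might look like the following; the monitored property, event names, and store layout are invented for illustration and are not the paper's DSL.

```python
# Illustrative sketch (not the paper's data-automata DSL): each "acquire" event
# creates a data-parameterized state record indexed by its resource, a matching
# "release" discharges it, and any records remaining at "end" signal a violation.
active = {}                                   # resource -> parameterized "Acquired" record

def monitor(event, resource=None):
    if event == "acquire":
        active[resource] = {"resource": resource}     # create a data-parameterized state
    elif event == "release":
        active.pop(resource, None)                    # matching record is discharged
    elif event == "end" and active:
        print("violation: unreleased resources:", sorted(active))

for ev in [("acquire", "r1"), ("acquire", "r2"), ("release", "r1"), ("end", None)]:
    monitor(*ev)
```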
Optimal Variable-Structure Control Tracking of Spacecraft Maneuvers
NASA Technical Reports Server (NTRS)
Crassidis, John L.; Vadali, Srinivas R.; Markley, F. Landis
1999-01-01
An optimal control approach using variable-structure (sliding-mode) tracking for large angle spacecraft maneuvers is presented. The approach expands upon a previously derived regulation result using a quaternion parameterization for the kinematic equations of motion. This parameterization is used since it is free of singularities. The main contribution of this paper is the utilization of a simple term in the control law that produces a maneuver to the reference attitude trajectory in the shortest distance. Also, a multiplicative error quaternion between the desired and actual attitude is used to derive the control law. Sliding-mode switching surfaces are derived using an optimal-control analysis. Control laws are given using either external torque commands or reaction wheel commands. Global asymptotic stability is shown for both cases using a Lyapunov analysis. Simulation results are shown which use the new control strategy to stabilize the motion of the Microwave Anisotropy Probe spacecraft.
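The multiplicative error quaternion on which such a control law is built can be sketched as below (scalar-last convention, invented attitudes); flipping the sign of the error when its scalar part is negative is one common way to realize the "shortest distance" behavior mentioned above, offered here as a hedged illustration rather than the paper's control law.

```python
# Hedged sketch of forming the multiplicative error quaternion between the
# actual and desired attitudes, with a sign flip that selects the shorter of
# the two possible rotations to the reference.
import numpy as np

def quat_conj(q):
    return np.array([-q[0], -q[1], -q[2], q[3]])

def quat_mult(q, p):
    qv, qw, pv, pw = q[:3], q[3], p[:3], p[3]
    return np.concatenate([qw * pv + pw * qv + np.cross(qv, pv), [qw * pw - qv @ pv]])

q_actual  = np.array([0.0, 0.0, np.sin(0.2), np.cos(0.2)])   # current attitude (illustrative)
q_desired = np.array([0.0, 0.0, 0.0, 1.0])                   # reference attitude

dq = quat_mult(q_actual, quat_conj(q_desired))               # multiplicative error quaternion
if dq[3] < 0.0:
    dq = -dq                                                 # command the shorter rotation
print("error quaternion:", dq)
```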
A volumetric conformal mapping approach for clustering white matter fibers in the brain
Gupta, Vikash; Prasad, Gautam; Thompson, Paul
2017-01-01
The human brain may be considered as a genus-0 shape, topologically equivalent to a sphere. Various methods have been used in the past to transform the brain surface to that of a sphere using harmonic energy minimization methods for cortical surface matching. However, very few methods have studied volumetric parameterization of the brain using a spherical embedding. Volumetric parameterization is typically used for complicated geometric problems like shape matching, morphing and isogeometric analysis. Using conformal mapping techniques, we can establish a bijective mapping between the brain and the topologically equivalent sphere. Our hypothesis is that shape analysis problems are simplified when the shape is defined in an intrinsic coordinate system. Our goal is to establish such a coordinate system for the brain. The efficacy of the method is demonstrated with a white matter clustering problem. Initial results show promise for future investigation of this parameterization technique and its application to other problems related to computational anatomy, such as registration and segmentation. PMID:29177252
Parameterized examination in econometrics
NASA Astrophysics Data System (ADS)
Malinova, Anna; Kyurkchiev, Vesselin; Spasov, Georgi
2018-01-01
The paper presents a parameterization of basic types of exam questions in Econometrics. This algorithm is used to automate and facilitate the process of examination, assessment and self-preparation of a large number of students. The proposed parameterization of testing questions reduces the time required to author tests and course assignments. It enables tutors to generate a large number of different but equivalent dynamic questions (with dynamic answers) on a certain topic, which are automatically assessed. The presented methods are implemented in DisPeL (Distributed Platform for e-Learning) and provide questions in the areas of filtering and smoothing of time-series data, forecasting, building and analysis of single-equation econometric models. Questions also cover elasticity, average and marginal characteristics, product and cost functions, measurement of monopoly power, supply, demand and equilibrium price, consumer and product surplus, etc. Several approaches are used to enable the required numerical computations in DisPeL - integration of third-party mathematical libraries, developing our own procedures from scratch, and wrapping our legacy math codes in order to modernize and reuse them.
On constraining pilot point calibration with regularization in PEST
Fienen, M.N.; Muffels, C.T.; Hunt, R.J.
2009-01-01
Ground water model calibration has made great advances in recent years with practical tools such as PEST being instrumental for making the latest techniques available to practitioners. As models and calibration tools get more sophisticated, however, the power of these tools can be misapplied, resulting in poor parameter estimates and/or nonoptimally calibrated models that do not suit their intended purpose. Here, we focus on an increasingly common technique for calibrating highly parameterized numerical models - pilot point parameterization with Tikhonov regularization. Pilot points are a popular method for spatially parameterizing complex hydrogeologic systems; however, additional flexibility offered by pilot points can become problematic if not constrained by Tikhonov regularization. The objective of this work is to explain and illustrate the specific roles played by control variables in the PEST software for Tikhonov regularization applied to pilot points. A recent study encountered difficulties implementing this approach, but through examination of that analysis, insight into underlying sources of potential misapplication can be gained and some guidelines for overcoming them developed. ?? 2009 National Ground Water Association.
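Conceptually, Tikhonov regularization adds a weighted "preferred condition" penalty to the measurement misfit, Phi = Phi_m + mu * Phi_r, and the control variables discussed in the paper govern how that trade-off is managed. The sketch below illustrates the composite objective with a stand-in model and invented numbers; it is not PEST and does not reproduce its control variables.

```python
# Conceptual sketch of a Tikhonov-regularized pilot-point objective: a
# measurement misfit plus a weighted penalty on departures of pilot-point
# values from a preferred (here, uniform) condition. All values are illustrative.
import numpy as np

obs_heads = np.array([10.2, 9.7, 9.1])               # observed heads [m]

def simulate_heads(log_k):
    # Stand-in for the groundwater model run: heads as a simple (hypothetical)
    # function of pilot-point log-conductivities.
    return 10.5 - 0.4 * log_k

def composite_objective(log_k, mu):
    phi_m = np.sum((simulate_heads(log_k) - obs_heads) ** 2)    # measurement misfit
    phi_r = np.sum(np.diff(log_k) ** 2)                         # preferred-homogeneity penalty
    return phi_m + mu * phi_r

log_k = np.array([0.3, 0.9, 1.6])                    # trial pilot-point values
for mu in (0.0, 1.0, 10.0):                          # regularization weight
    print(f"mu={mu:5.1f}  Phi={composite_objective(log_k, mu):.3f}")
```

Increasing mu pulls the solution toward the preferred condition, which is exactly the flexibility-constraining role the abstract attributes to Tikhonov regularization.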
NASA Astrophysics Data System (ADS)
Alzubadi, A. A.
2015-06-01
The nuclear many-body system is usually described by a mean field built upon a nucleon-nucleon effective interaction. In this work, we investigate ground-state properties of the sulfur isotopes covering a wide range from the line of stability up to the dripline region (30-44S). For this purpose, Hartree-Fock mean-field theory in coordinate space with the SkM* Skyrme parameterization has been utilized. In particular, we calculate the nuclear charge, neutron, proton, and mass densities, the associated radii, the neutron skin thickness, and the binding energy. The charge form factors have also been investigated using the SkM*, SkO, SkE, SLy4 and Skxs15 Skyrme parameterizations, and the results obtained using the theoretical approach are compared with the available experimental data. To investigate the potential energy surface as a function of the quadrupole deformation for the isotopic sulfur chain, Skyrme-Hartree-Fock-Bogoliubov theory has been adopted with the SLy4 parameterization.
NASA Astrophysics Data System (ADS)
Bourgeau-Chavez, L. L.; Miller, M. E.; Battaglia, M.; Banda, E.; Endres, S.; Currie, W. S.; Elgersma, K. J.; French, N. H. F.; Goldberg, D. E.; Hyndman, D. W.
2014-12-01
Spread of invasive plant species in the coastal wetlands of the Great Lakes is degrading wetland habitat, decreasing biodiversity, and decreasing ecosystem services. An understanding of the mechanisms of invasion is crucial to gaining control of this growing threat. To better understand the effects of land use and climatic drivers on the vulnerability of coastal zones to invasion, as well as to develop an understanding of the mechanisms of invasion, research is being conducted that integrates field studies, process-based ecosystem and hydrological models, and remote sensing. Spatial data from remote sensing is needed to parameterize the hydrological model and to test the outputs of the linked models. We will present several new remote sensing products that are providing important physiological, biochemical, and landscape information to parameterize and verify models. This includes a novel hybrid radar-optical technique to delineate stands of invasives, as well as natural wetland cover types; using radar to map seasonally inundated areas not hydrologically connected; and developing new algorithms to estimate leaf area index (LAI) using Landsat. A coastal map delineating wetland types including monocultures of the invaders (Typha spp. and Phragmites australis) was created using satellite radar (ALOS PALSAR, 20 m resolution) and optical data (Landsat 5, 30 m resolution) fusion from multiple dates in a Random Forests classifier. These maps provide verification of the integrated model showing areas at high risk of invasion. For parameterizing the hydrological model, maps of seasonal wetness are being developed using spring (wet) imagery and differencing that with summer (dry) imagery to detect the seasonally wet areas. Finally, development of high-resolution LAI remote sensing algorithms for uplands and wetlands is underway. LAI algorithms for wetlands have not been previously developed due to the difficulty of a water background. These products are being used to improve the hydrological model through higher resolution products and parameterization of variables that have previously been largely unknown.
A new conceptual framework for water and sediment connectivity
NASA Astrophysics Data System (ADS)
Keesstra, Saskia; Cerdà, Artemi; Parsons, Tony; Nunes, Joao Pedro; Saco, Patricia
2016-04-01
For many years scientists have tried to understand, describe and quantify sediment transport on multiple scales: from the geomorphological work triggered by a single thunderstorm to landscape evolution on geological time scales, and from particles and soil aggregates up to the continental scale. In the last two decades, a new concept called connectivity (Baartman et al., 2013; Bracken et al., 2013, 2015; Parsons et al., 2015) has been used by the scientific community to describe the connection between the different scales at which sediment redistribution along the watershed is studied: pedon, slope tram, slope, watershed, and basin. This concept is seen as a means to describe and quantify the results of processes influencing the transport of sediment on all these scales. Therefore the concept of connectivity and the way scales are used in the design of a measurement and monitoring scheme are interconnected (Cerdà et al., 2012), which shows that connectivity is not only a tool for process understanding, but also a tool to measure processes on multiple scales. This research aims to describe catchment system dynamics from a connectivity point of view. This conceptual framework can be helpful for looking at catchment systems and synthesizing which data need to be taken into account when measuring or modelling water and sediment transfer in catchment systems. Identifying common patterns and generalities will help discover physical reasons for differences in responses and interactions between these processes. We describe a conceptual framework which is meant to bring a better understanding of the system dynamics of a catchment in terms of water and sediment transfer by breaking the system dynamics apart into stocks (the system state at a given moment) and flows (the system fluxes). Breaking apart the internal system dynamics that determine the behaviour of the catchment system is in our opinion a way to bring better insight into the concepts of hydrological and sediment connectivity as described in previous research by Bracken et al. (2013, 2015). By looking at the individual parts of the system, it becomes more manageable and less conceptual, which is important because we have to indicate where research on connectivity should focus. With this approach, processes and feedbacks in the catchment system can be pulled apart and studied separately, making the system understandable and measurable, which will enable parameterization of models with actual measured data. The approach we took in describing water and sediment transfer is to first assess how they work in a system in dynamic equilibrium. After describing this, an assessment is made of how such dynamic equilibria can be taken out of balance by an external push. Baartman, J.E.M., Masselink, R.H., Keesstra, S.D., Temme, A.J.A.M., 2013. Linking landscape morphological complexity and sediment connectivity. Earth Surface Processes and Landforms 38: 1457-1471. Bracken, L.J., Wainwright, J., Ali, G.A., Tetzlaff, D., Smith, M.W., Reaney, S.M., and Roy, A.G. 2013. Concepts of hydrological connectivity: research approaches, pathways and future agendas. Earth Science Reviews, 119, 17-34. Bracken, L.J., Turnbull, L., Wainwright, J., and Boogart, P. Submitted. Sediment Connectivity: A Framework for Understanding Sediment Transfer at Multiple Scales. Earth Surface Processes and Landforms. Cerdà, A., Brazier, R., Nearing, M., and de Vente, J. 2012. Scales and erosion. Catena, 102, 1-2. doi:10.1016/j.catena.2011.09.006. Parsons, A.J., Bracken, L., Poeppl, R., Wainwright, J., Keesstra, S.D., 2015. Editorial: Introduction to special issue on connectivity in water and sediment dynamics. In press in Earth Surface Processes and Landforms. DOI: 10.1002/esp.3714
NASA Astrophysics Data System (ADS)
McFarquhar, G. M.; Finlon, J.; Um, J.; Nesbitt, S. W.; Borque, P.; Chase, R.; Wu, W.; Morrison, H.; Poellot, M.
2017-12-01
Parameterizations of fall speed-dimension (V-D), mass (m)-D and projected area (A)-D relationships are needed for the development of model parameterizations and remote sensing retrieval schemes. An approach for deriving such relations is discussed here that improves upon previously developed schemes in the following aspects: 1) surfaces are used to characterize uncertainties in derived coefficients; 2) all derived relations are internally consistent; and 3) multiple bulk measures are used to derive parameter coefficients. In this study, data collected by two-dimensional optical array probes (OAPs) installed on the University of North Dakota Citation aircraft during the Mid-Latitude Continental Convective Clouds Experiment (MC3E) and during the Olympic Mountains Experiment (OLYMPEX) are used in conjunction with data from a Nevzorov total water content (TWC) probe and ground-based S-band radar data to test a novel approach that determines m-D relationships for a variety of environments. A surface of equally realizable a and b coefficients, where m = aD^b, in (a, b) phase space is determined using a technique that minimizes the chi-squared difference between both the TWC and radar reflectivity Z derived from the size distributions measured by the OAPs and those directly measured by a TWC probe and radar, accepting as valid all coefficients within a specified tolerance of the minimum chi-squared difference. Because both A and perimeter P can be directly measured by OAPs, coefficients characterizing these relationships are derived using only one bulk parameter constraint derived from the appropriate images. Because terminal velocity parameterizations depend on both A and m, V-D relations can be derived from these self-consistent relations. Using this approach, changes in parameters associated with varying environmental conditions and varying aerosol amounts and compositions can be isolated from changes associated with statistical noise or measurement errors. The applicability of the derived coefficients for a stochastic framework that employs an observationally-constrained dataset to account for coefficient variability within microphysics parameterization schemes is discussed.
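A stripped-down version of the fitting idea, evaluating a chi-squared misfit of bulk TWC and a Rayleigh-type reflectivity proxy over a grid of (a, b) candidates and keeping every pair near the minimum, is sketched below. The size distribution, measured bulk values, tolerance, and units are illustrative assumptions, not MC3E/OLYMPEX data or the exact misfit used in the study.

```python
# Hedged sketch of finding a surface of equally realizable (a, b) coefficients
# in m = a * D**b by grid search over a chi-squared misfit of bulk quantities.
import numpy as np

D = np.linspace(0.02, 1.0, 50)                     # particle size bins [cm]
N = 1.0e3 * np.exp(-8.0 * D)                       # measured size distribution (illustrative)
dD = D[1] - D[0]
twc_meas, z_meas = 0.25, 5.0e-4                    # "measured" bulk TWC and reflectivity proxy

def bulk(a, b):
    m = a * D**b                                   # candidate mass-dimension relation
    twc = np.sum(N * m) * dD                       # bulk total water content
    z = np.sum(N * m**2) * dD                      # Rayleigh-regime reflectivity proxy
    return twc, z

a_grid, b_grid = np.meshgrid(np.linspace(1e-3, 1e-1, 80), np.linspace(1.5, 3.0, 80))
chi2 = np.zeros_like(a_grid)
for idx in np.ndindex(a_grid.shape):
    twc, z = bulk(a_grid[idx], b_grid[idx])
    chi2[idx] = ((twc - twc_meas) / twc_meas) ** 2 + ((z - z_meas) / z_meas) ** 2

accepted = chi2 <= chi2.min() * 1.5                # keep all pairs near the minimum misfit
print(f"{accepted.sum()} (a, b) pairs accepted out of {accepted.size}")
```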
Path-space variational inference for non-equilibrium coarse-grained systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harmandaris, Vagelis, E-mail: harman@uoc.gr; Institute of Applied and Computational Mathematics; Kalligiannaki, Evangelia, E-mail: ekalligian@tem.uoc.gr
In this paper we discuss information-theoretic tools for obtaining optimized coarse-grained molecular models for both equilibrium and non-equilibrium molecular simulations. The latter are ubiquitous in physicochemical and biological applications, where they are typically associated with coupling mechanisms, multi-physics and/or boundary conditions. In general the non-equilibrium steady states are not known explicitly as they do not necessarily have a Gibbs structure. The presented approach can compare microscopic behavior of molecular systems to parametric and non-parametric coarse-grained models using the relative entropy between distributions on the path space and setting up a corresponding path-space variational inference problem. The methods can become entirely data-driven when the microscopic dynamics are replaced with corresponding correlated data in the form of time series. Furthermore, we present connections and generalizations of force matching methods in coarse-graining with path-space information methods. We demonstrate the enhanced transferability of information-based parameterizations to different observables, at a specific thermodynamic point, due to information inequalities. We discuss methodological connections between information-based coarse-graining of molecular systems and variational inference methods primarily developed in the machine learning community. However, we note that the work presented here addresses variational inference for correlated time series due to the focus on dynamics. The applicability of the proposed methods is demonstrated on high-dimensional stochastic processes given by overdamped and driven Langevin dynamics of interacting particles.
Querying databases of trajectories of differential equations: Data structures for trajectories
NASA Technical Reports Server (NTRS)
Grossman, Robert
1989-01-01
One approach to qualitative reasoning about dynamical systems is to extract qualitative information by searching or making queries on databases containing very large numbers of trajectories. The efficiency of such queries depends crucially upon finding an appropriate data structure for trajectories of dynamical systems. Suppose that a large number of parameterized trajectories γ of a dynamical system evolving in R^N are stored in a database. Let η ⊂ R^N denote a parameterized path in Euclidean space, and let ‖·‖ denote a norm on the space of paths. A data structure is defined to represent trajectories of dynamical systems, and an algorithm is sketched which answers queries.
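A minimal illustration of the kind of query such a database must support, finding stored trajectories within a given distance of a reference path under a sup norm, is sketched below; the flat storage layout and sample trajectories are invented and far simpler than the data structure developed in the paper.

```python
# Illustrative sketch: trajectories sampled at common times are stored as arrays
# and a query returns those within a tolerance of a reference path under a
# discrete sup norm over the sample points.
import numpy as np

times = np.linspace(0.0, 10.0, 200)
database = [np.column_stack([np.cos(w * times), np.sin(w * times)])
            for w in np.linspace(0.5, 2.0, 50)]        # 50 stored planar trajectories

def query(eta, tol):
    """Return indices of stored trajectories within tol of the path eta."""
    dists = [np.max(np.linalg.norm(traj - eta, axis=1)) for traj in database]
    return [i for i, d in enumerate(dists) if d <= tol]

eta = np.column_stack([np.cos(1.0 * times), np.sin(1.0 * times)])  # query path
print("matching trajectories:", query(eta, tol=0.5))
```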
Parameterized post-Newtonian cosmology
NASA Astrophysics Data System (ADS)
Sanghai, Viraj A. A.; Clifton, Timothy
2017-03-01
Einstein’s theory of gravity has been extensively tested on solar system scales, and for isolated astrophysical systems, using the perturbative framework known as the parameterized post-Newtonian (PPN) formalism. This framework is designed for use in the weak-field and slow-motion limit of gravity, and can be used to constrain a large class of metric theories of gravity with data collected from the aforementioned systems. Given the potential of future surveys to probe cosmological scales to high precision, it is a topic of much contemporary interest to construct a similar framework to link Einstein’s theory of gravity and its alternatives to observations on cosmological scales. Our approach to this problem is to adapt and extend the existing PPN formalism for use in cosmology. We derive a set of equations that use the same parameters to consistently model both weak fields and cosmology. This allows us to parameterize a large class of modified theories of gravity and dark energy models on cosmological scales, using just four functions of time. These four functions can be directly linked to the background expansion of the universe, first-order cosmological perturbations, and the weak-field limit of the theory. They also reduce to the standard PPN parameters on solar system scales. We illustrate how dark energy models and scalar-tensor and vector-tensor theories of gravity fit into this framework, which we refer to as ‘parameterized post-Newtonian cosmology’ (PPNC).
Parameterizing the Spatial Markov Model from Breakthrough Curve Data Alone
NASA Astrophysics Data System (ADS)
Sherman, T.; Bolster, D.; Fakhari, A.; Miller, S.; Singha, K.
2017-12-01
The spatial Markov model (SMM) uses a correlated random walk and has been shown to effectively capture anomalous transport in porous media systems; in the SMM, particles' future trajectories are correlated to their current velocity. It is common practice to use a priori Lagrangian velocity statistics obtained from high-resolution simulations to determine a distribution of transition probabilities (correlation) between velocity classes that govern predicted transport behavior; however, this approach is computationally cumbersome. Here, we introduce a methodology to quantify velocity correlation from breakthrough curve (BTC) data alone; discretizing two measured BTCs into a set of arrival times and reverse engineering the rules of the SMM allows for prediction of velocity correlation, thereby enabling parameterization of the SMM in studies where Lagrangian velocity statistics are not available. The introduced methodology is applied to estimate velocity correlation from BTCs measured in high-resolution simulations, thus allowing for a comparison of estimated parameters with known simulated values. Results show 1) estimated transition probabilities agree with simulated values and 2) using the SMM with estimated parameterization accurately predicts BTCs downstream. Additionally, we include uncertainty measurements by calculating lower and upper estimates of velocity correlation, which allow for prediction of a range of BTCs. The simulated BTCs fall in the range of predicted BTCs. This research proposes a novel method to parameterize the SMM from BTC data alone, thereby reducing the SMM's computational costs and widening its applicability.
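For orientation, the forward model being parameterized can be sketched as a correlated random walk: each particle crosses fixed-length spatial steps at a class velocity, and the class of the next step is drawn from a transition matrix conditioned on the current class. The transition matrix, class velocities, and step length below are illustrative; the paper's contribution, estimating these quantities from BTC data alone, is not reproduced here.

```python
# Hedged forward sketch of a spatial Markov model: successive step velocities
# are correlated through a class-to-class transition matrix, and arrival times
# at a downstream plane form the predicted breakthrough curve.
import numpy as np

rng = np.random.default_rng(1)
v_class = np.array([0.1, 1.0, 10.0])                 # representative class velocities [m/day]
T = np.array([[0.7, 0.2, 0.1],                       # row i: transition probabilities out of class i
              [0.2, 0.6, 0.2],
              [0.1, 0.2, 0.7]])
dx, n_steps, n_particles = 0.5, 40, 5000             # step length [m], steps, particles

arrival = np.zeros(n_particles)
for p in range(n_particles):
    c = rng.integers(3)                              # initial class (flux-weighted in practice)
    t = 0.0
    for _ in range(n_steps):
        t += dx / v_class[c]                         # time to cross one spatial step
        c = rng.choice(3, p=T[c])                    # correlated transition to the next class
    arrival[p] = t

hist, edges = np.histogram(arrival, bins=50)         # breakthrough curve at x = n_steps * dx
peak = 0.5 * (edges[hist.argmax()] + edges[hist.argmax() + 1])
print(f"peak arrival time ~ {peak:.1f} days")
```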
NASA Astrophysics Data System (ADS)
Luo, Ning; Zhao, Zhanfeng; Illman, Walter A.; Berg, Steven J.
2017-11-01
Transient hydraulic tomography (THT) is a robust method of aquifer characterization to estimate the spatial distributions (or tomograms) of both hydraulic conductivity (K) and specific storage (Ss). However, the highly parameterized nature of the geostatistical inversion approach renders it computationally intensive for large-scale investigations. In addition, geostatistics-based THT may produce overly smooth tomograms when head data used to constrain the inversion is limited. Therefore, alternative model conceptualizations for THT need to be examined. To investigate this, we simultaneously calibrated different groundwater models with varying parameterizations and zonations using two cases of different pumping and monitoring data densities from a laboratory sandbox. Specifically, one effective parameter model, four geology-based zonation models with varying accuracy and resolution, and five geostatistical models with different prior information are calibrated. Model performance is quantitatively assessed by examining the calibration and validation results. Our study reveals that highly parameterized geostatistical models perform the best among the models compared, while the zonation model with excellent knowledge of stratigraphy also yields comparable results. When few pumping tests with sparse monitoring intervals are available, the incorporation of accurate or simplified geological information into geostatistical models reveals more details in heterogeneity and yields more robust validation results. However, results deteriorate when inaccurate geological information is incorporated. Finally, our study reveals that transient inversions are necessary to obtain reliable K and Ss estimates for making accurate predictions of transient drawdown events.
Explicit Global Simulation of Gravity Waves up to the Lower Thermosphere
NASA Astrophysics Data System (ADS)
Becker, E.
2016-12-01
At least for short-term simulations, middle atmosphere general circulation models (GCMs) can be run with sufficiently high resolution in order to describe a good part of the gravity wave spectrum explicitly. Nevertheless, the parameterization of unresolved dynamical scales remains an issue, especially when the scales of parameterized gravity waves (GWs) and resolved GWs become comparable. In addition, turbulent diffusion must always be parameterized along with other subgrid-scale dynamics. A practical solution to the combined closure problem for GWs and turbulent diffusion is to dispense with a parameterization of GWs, apply a high spatial resolution, and represent the unresolved scales by a macro-turbulent diffusion scheme that gives rise to wave damping in a self-consistent fashion. This is the approach of a few GCMs that extend from the surface to the lower thermosphere and simulate a realistic GW drag and summer-to-winter-pole residual circulation in the upper mesosphere. In this study we describe a new version of the Kuehlungsborn Mechanistic general Circulation Model (KMCM), which includes explicit (though idealized) computations of radiative transfer and the tropospheric moisture cycle. Particular emphasis is placed on 1) the turbulent diffusion scheme, 2) the attenuation of resolved GWs at critical levels, 3) the generation of GWs in the middle atmosphere from body forces, and 4) GW-tidal interactions (including the energy deposition of GWs and tides).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shen, Samuel S. P.
2013-09-01
The long-range goal of several past and current projects in our DOE-supported research has been the development of new and improved parameterizations of cloud-radiation effects and related processes, using ARM data, and the implementation and testing of these parameterizations in global models. The main objective of the present project being reported on here has been to develop and apply advanced statistical techniques, including Bayesian posterior estimates, to diagnose and evaluate features of both observed and simulated clouds. The research carried out under this project has been novel in two important ways. The first is that it is a key step in the development of practical stochastic cloud-radiation parameterizations, a new category of parameterizations that offers great promise for overcoming many shortcomings of conventional schemes. The second is that this work has brought powerful new tools to bear on the problem, because it has been an interdisciplinary collaboration between a meteorologist with long experience in ARM research (Somerville) and a mathematician who is an expert on a class of advanced statistical techniques that are well-suited for diagnosing model cloud simulations using ARM observations (Shen). The motivation and long-term goal underlying this work is the utilization of stochastic radiative transfer theory (Lane-Veron and Somerville, 2004; Lane et al., 2002) to develop a new class of parametric representations of cloud-radiation interactions and closely related processes for atmospheric models. The theoretical advantage of the stochastic approach is that it can accurately calculate the radiative heating rates through a broken cloud layer without requiring an exact description of the cloud geometry.
Measurement and partitioning of evapotranspiration for application to vadose zone studies
USDA-ARS?s Scientific Manuscript database
Partitioning evapotranspiration (ET) into its constituent components, soil evaporation (E) and plant transpiration (T), is important for vadose zone studies because E and T are often parameterized separately. However, partitioning ET is challenging, and many longstanding approaches have significant ...
Predictions of Bedforms in Tidal Inlets and River Mouths
2016-07-31
that community modeling environment. APPROACH: Bedforms are ubiquitous in unconsolidated sediments. They act as roughness elements, altering the ... flow and creating feedback between the bed and the flow and, in doing so, they are intimately tied to erosion, transport and deposition of sediments. ... With this approach, grain-scale sediment transport is parameterized with simple rules to drive bedform-scale dynamics. Gallagher (2011) developed a
Parameterized Micro-benchmarking: An Auto-tuning Approach for Complex Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ma, Wenjing; Krishnamoorthy, Sriram; Agrawal, Gagan
2012-05-15
Auto-tuning has emerged as an important practical method for creating highly optimized implementations of key computational kernels and applications. However, the growing complexity of architectures and applications is creating new challenges for auto-tuning. Complex applications can involve a prohibitively large search space that precludes empirical auto-tuning. Similarly, architectures are becoming increasingly complicated, making it hard to model performance. In this paper, we focus on the challenge to auto-tuning presented by applications with a large number of kernels and kernel instantiations. While these kernels may share a somewhat similar pattern, they differ considerably in problem sizes and the exact computation performed. We propose and evaluate a new approach to auto-tuning which we refer to as parameterized micro-benchmarking. It is an alternative to the two existing classes of approaches to auto-tuning: analytical model-based and empirical search-based. Particularly, we argue that the former may not be able to capture all the architectural features that impact performance, whereas the latter might be too expensive for an application that has several different kernels. In our approach, different expressions in the application, different possible implementations of each expression, and the key architectural features, are used to derive a simple micro-benchmark and a small parameter space. This allows us to learn the most significant features of the architecture that can impact the choice of implementation for each kernel. We have evaluated our approach in the context of GPU implementations of tensor contraction expressions encountered in excited state calculations in quantum chemistry. We have focused on two aspects of GPUs that affect tensor contraction execution: memory access patterns and kernel consolidation. Using our parameterized micro-benchmarking approach, we obtain a speedup of up to a factor of 2 over the version that used default optimizations, but no auto-tuning. We demonstrate that observations made from micro-benchmarks match the behavior seen from real expressions. In the process, we make important observations about the memory hierarchy of two of the most recent NVIDIA GPUs, which can be used in other optimization frameworks as well.
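The general auto-tuning pattern, timing candidate implementations of a kernel over a small parameter space and recording the winner per problem size, can be sketched as follows; the CPU/NumPy variants stand in for the paper's GPU tensor-contraction kernels and are purely illustrative.

```python
# Illustrative sketch of a parameterized micro-benchmark loop: candidate
# implementations of a contraction are timed over a small grid of problem
# sizes, and the fastest variant per size is recorded to guide later choices.
import time
import numpy as np

def contract_einsum(A, B):
    return np.einsum("ik,kj->ij", A, B)

def contract_matmul(A, B):
    return A @ B

variants = {"einsum": contract_einsum, "matmul": contract_matmul}
best = {}
for n in (64, 256, 1024):                         # the micro-benchmark parameter space
    A, B = np.random.rand(n, n), np.random.rand(n, n)
    timings = {}
    for name, fn in variants.items():
        start = time.perf_counter()
        for _ in range(5):
            fn(A, B)
        timings[name] = (time.perf_counter() - start) / 5
    best[n] = min(timings, key=timings.get)
print("fastest variant per size:", best)
```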
Parameterization of wind turbine impacts on hydrodynamics and sediment transport
NASA Astrophysics Data System (ADS)
Rivier, Aurélie; Bennis, Anne-Claire; Pinon, Grégory; Magar, Vanesa; Gross, Markus
2016-10-01
Monopile foundations of offshore wind turbines modify the hydrodynamics and sediment transport at local and regional scales. The aim of this work is to assess these modifications and to parameterize them in a regional model. In the present study, this is achieved through a regional circulation model, coupled with a sediment transport module, using two approaches. One approach is to explicitly model the monopiles in the mesh as dry cells, and the other is to parameterize them by adding a drag force term to the momentum and turbulence equations. Idealised cases are run using hydrodynamical conditions and sediment grain sizes typical of the area located off Courseulles-sur-Mer (Normandy, France), where an offshore windfarm is under planning, to assess the capacity of the model to reproduce the effect of the monopile on the environment. Then, the model is applied to a real configuration on an area including the future offshore windfarm of Courseulles-sur-Mer. Four monopiles are represented in the model using both approaches, and modifications of the hydrodynamics and sediment transport are assessed over a tidal cycle. In relation to local hydrodynamic effects, it is observed that currents increase at the side of the monopile and decrease in front of and downstream of the monopile. In relation to sediment transport effects, the results show that resuspension and erosion occur around the monopile in locations where the current speed increases due to the monopile presence, and sediments deposit downstream where the bed shear stress is lower. During the tidal cycle, wakes downstream of the monopile reach the following monopile and modify the velocity magnitude and suspended sediment concentration patterns around the second monopile.
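The parameterized (non-resolved) approach amounts to adding a quadratic drag sink to the momentum balance in the cells occupied by a monopile, roughly F = -1/2 C_d a |u| u with a the frontal area per unit volume. The sketch below illustrates that term with invented grid and flow values; it is a hedged simplification, not the coefficients or configuration of the Courseulles-sur-Mer model.

```python
# Minimal sketch of a monopile drag parameterization: a quadratic momentum sink
# applied in the grid cell containing the structure. All values are illustrative.
import numpy as np

C_d = 1.0                      # drag coefficient for a circular cylinder (order 1, assumed)
D = 6.0                        # monopile diameter [m]
dx = dy = 50.0                 # horizontal grid spacing of the regional model [m]
a = D / (dx * dy)              # frontal area per unit volume within the cell [1/m]

u = np.array([0.8, 0.3])       # depth-averaged velocity in the monopile cell [m/s]
F_drag = -0.5 * C_d * a * np.linalg.norm(u) * u    # momentum sink per unit mass [m/s^2]
print("drag acceleration added to the momentum balance:", F_drag)
```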
Ebel, Brian A.; Rengers, Francis K.; Tucker, Gregory E.
2016-01-01
Hydrologic response to extreme rainfall in disturbed landscapes is poorly understood because of the paucity of measurements. A unique opportunity presented itself when extreme rainfall in September 2013 fell on a headwater catchment (i.e., <1 ha) in Colorado, USA that had previously been burned by a wildfire in 2010. We compared measurements of soil-hydraulic properties, soil saturation from subsurface sensors, and estimated peak runoff during the extreme rainfall with numerical simulations of runoff generation and subsurface hydrologic response during this event. The simulations were used to explore differences in runoff generation between the wildfire-affected headwater catchment, a simulated unburned case, and for uniform versus spatially variable parameterizations of soil-hydraulic properties that affect infiltration and runoff generation in burned landscapes. Despite 3 years of elapsed time since the 2010 wildfire, observations and simulations pointed to substantial surface runoff generation in the wildfire-affected headwater catchment by the infiltration-excess mechanism while no surface runoff was generated in the unburned case. The surface runoff generation was the result of incomplete recovery of soil-hydraulic properties in the burned area, suggesting recovery takes longer than 3 years. Moreover, spatially variable soil-hydraulic property parameterizations produced longer duration but lower peak-flow infiltration-excess runoff, compared to uniform parameterization, which may have important hillslope sediment export and geomorphologic implications during long duration, extreme rainfall. The majority of the simulated surface runoff in the spatially variable cases came from connected near-channel contributing areas, which was a substantially smaller contributing area than the uniform simulations.
NASA Technical Reports Server (NTRS)
Liston, G. E.; Sud, Y. C.; Wood, E. F.
1994-01-01
To relate general circulation model (GCM) hydrologic output to readily available river hydrographic data, a runoff routing scheme that routes gridded runoffs through regional- or continental-scale river drainage basins is developed. By following the basin overland flow paths, the routing model generates river discharge hydrographs that can be compared to observed river discharges, thus allowing an analysis of the GCM representation of monthly, seasonal, and annual water balances over large regions. The runoff routing model consists of two linear reservoirs, a surface reservoir and a groundwater reservoir, which store and transport water. The water transport mechanisms operating within these two reservoirs are differentiated by their time scales; the groundwater reservoir transports water much more slowly than the surface reservoir. The groundwater reservoir feeds the corresponding surface store, and the surface stores are connected via the river network. The routing model is implemented over the Global Energy and Water Cycle Experiment (GEWEX) Continental-Scale International Project Mississippi River basin on a rectangular grid of 2 deg X 2.5 deg. Two land surface hydrology parameterizations provide the gridded runoff data required to run the runoff routing scheme: the variable infiltration capacity model and the soil moisture component of the simple biosphere model. These parameterizations are driven with 4 deg X 5 deg gridded climatological potential evapotranspiration and 1979 First Global Atmospheric Research Program (GARP) Global Experiment precipitation. These investigations have quantified the importance of physically realistic soil moisture holding capacities, evaporation parameters, and runoff mechanisms in land surface hydrology formulations.
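The two-linear-reservoir idea can be summarized in a few lines. The sketch below (assumed variable names, time stepping and reservoir time constants; not the authors' code) advances one grid cell by one step: the slow groundwater store drains into the fast surface store, whose outflow is what the river network passes downstream.

```python
def step_cell(runoff_surface, runoff_baseflow, S_surf, S_gw, dt,
              k_surf=2.0 * 86400.0, k_gw=30.0 * 86400.0):
    """One explicit time step of a two-linear-reservoir cell.

    runoff_*      : gridded runoff inputs from the land surface scheme [m^3/s]
    S_surf, S_gw  : surface and groundwater storages [m^3]
    dt            : time step [s]; k_* : linear-reservoir time constants [s]
    Returns updated storages and the discharge routed to the river network.
    """
    q_gw = S_gw / k_gw                       # slow drainage feeds the surface store
    S_gw += (runoff_baseflow - q_gw) * dt
    q_surf = S_surf / k_surf                 # fast transport toward the downstream cell
    S_surf += (runoff_surface + q_gw - q_surf) * dt
    return S_surf, S_gw, q_surf
```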
A Solar Radiation Parameterization for Atmospheric Studies. Volume 15
NASA Technical Reports Server (NTRS)
Chou, Ming-Dah; Suarez, Max J. (Editor)
1999-01-01
The solar radiation parameterization (CLIRAD-SW) developed at the Goddard Climate and Radiation Branch for application to atmospheric models is described. It includes the absorption by water vapor, O3, O2, CO2, clouds, and aerosols and the scattering by clouds, aerosols, and gases. Depending upon the nature of absorption, different approaches are applied to different absorbers. In the ultraviolet and visible regions, the spectrum is divided into 8 bands, and a single O3 absorption coefficient and a single Rayleigh scattering coefficient are used for each band. In the infrared, the spectrum is divided into 3 bands, and the k-distribution method is applied for water vapor absorption. The flux reduction due to O2 is derived from a simple function, while the flux reduction due to CO2 is derived from precomputed tables. Cloud single-scattering properties are parameterized, separately for liquid drops and ice, as functions of water amount and effective particle size. A maximum-random approximation is adopted for the overlapping of clouds at different heights. Fluxes are computed using the Delta-Eddington approximation.
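For readers unfamiliar with the Delta-Eddington approximation mentioned above, the standard scaling (Joseph et al., 1976) removes the strong forward-scattering peak from the optical properties before the two-stream flux calculation. The sketch below shows that scaling in generic form; it is illustrative and not code extracted from CLIRAD-SW.

```python
def delta_eddington_scale(tau, omega, g):
    """Delta-Eddington scaling of optical depth (tau), single-scattering
    albedo (omega) and asymmetry factor (g), with forward-peak fraction f = g^2."""
    f = g * g
    tau_p = (1.0 - omega * f) * tau
    omega_p = (1.0 - f) * omega / (1.0 - omega * f)
    g_p = (g - f) / (1.0 - f)
    return tau_p, omega_p, g_p
```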
Bayesian parameter estimation for nonlinear modelling of biological pathways.
Ghasemi, Omid; Lindsey, Merry L; Yang, Tianyi; Nguyen, Nguyen; Huang, Yufei; Jin, Yu-Fang
2011-01-01
The availability of temporal measurements on biological experiments has significantly promoted research areas in systems biology. To gain insight into the interaction and regulation of biological systems, mathematical frameworks such as ordinary differential equations have been widely applied to model biological pathways and interpret the temporal data. Hill equations are the preferred format to represent the reaction rate in differential equation frameworks, due to their simple structure and their capability for easy fitting to saturated experimental measurements. However, Hill equations are highly nonlinearly parameterized functions, and parameters in these functions cannot be measured easily. Additionally, because of their high nonlinearity, adaptive parameter estimation algorithms developed for linearly parameterized differential equations cannot be applied. Therefore, parameter estimation in nonlinearly parameterized differential equation models for biological pathways is both challenging and rewarding. In this study, we propose a Bayesian parameter estimation algorithm to estimate parameters in nonlinear mathematical models for biological pathways using time series data. We used the Runge-Kutta method to transform the differential equations into difference equations, assuming a known structure of the differential equations. This transformation allowed us to generate predictions dependent on previous states and to apply a Bayesian approach, namely, the Markov chain Monte Carlo (MCMC) method. We applied this approach to the biological pathways involved in the left ventricle (LV) response to myocardial infarction (MI) and verified our algorithm by estimating two parameters in a Hill equation embedded in the nonlinear model. We further evaluated our estimation performance with different parameter settings and signal-to-noise ratios. Our results demonstrated the effectiveness of the algorithm for both linearly and nonlinearly parameterized dynamic systems. Our proposed Bayesian algorithm successfully estimated parameters in nonlinear mathematical models for biological pathways. This method can be further extended to high-order systems and thus provides a useful tool to analyze biological dynamics and extract information from temporal data.
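To make the MCMC step concrete, the sketch below fits the two parameters (K, n) of a Hill function to noisy observations with a plain Metropolis sampler. The priors, noise level, proposal step and Hill form are illustrative assumptions; the paper's actual model embeds the Hill term in a pathway ODE discretized by Runge-Kutta.

```python
import numpy as np

def hill(x, K, n, vmax=1.0):
    return vmax * x**n / (K**n + x**n)

def metropolis_hill(x, y, n_iter=20000, sigma=0.05, step=0.05, seed=0):
    """Sample the posterior of (K, n) under a Gaussian noise model and
    flat positive priors (assumptions for illustration)."""
    rng = np.random.default_rng(seed)
    theta = np.array([1.0, 1.0])               # initial guess for (K, n)

    def log_post(t):
        K, n = t
        if K <= 0 or n <= 0:
            return -np.inf
        resid = y - hill(x, K, n)
        return -0.5 * np.sum(resid ** 2) / sigma ** 2

    lp = log_post(theta)
    samples = []
    for _ in range(n_iter):
        prop = theta + step * rng.standard_normal(2)
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:   # Metropolis acceptance
            theta, lp = prop, lp_prop
        samples.append(theta.copy())
    return np.array(samples)
```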
NASA Astrophysics Data System (ADS)
Raju, P. V. S.; Potty, Jayaraman; Mohanty, U. C.
2011-09-01
Comprehensive sensitivity analyses of the physical parameterization schemes of the Weather Research and Forecasting (WRF-ARW core) model have been carried out for the prediction of the track and intensity of tropical cyclones, taking the example of cyclone Nargis, which formed over the Bay of Bengal and hit Myanmar on 02 May 2008, causing widespread damage in terms of human and economic losses. The model performance is also evaluated with different initial conditions at 12 h intervals, from cyclogenesis to near landfall time. The initial and boundary conditions for all the model simulations are drawn from the global operational analysis and forecast products of the National Centers for Environmental Prediction (NCEP-GFS), publicly available at 1° lon/lat resolution. The results of the sensitivity analyses indicate that a combination of the non-local parabolic-type exchange coefficient PBL scheme of Yonsei University (YSU), the deep and shallow convection scheme with a mass flux approach for cumulus parameterization (Kain-Fritsch), and the NCEP operational cloud microphysics scheme with diagnostic mixed-phase processes (Ferrier) predicts the track and intensity better when compared with the Joint Typhoon Warning Center (JTWC) estimates. Further, the final choice of physical parameterization schemes selected from the above sensitivity experiments is used for model integrations with different initial conditions. The results reveal that the cyclone track, intensity, and time of landfall are well simulated by the model, with an average intensity error of about 8 hPa, a maximum wind error of 12 m s-1, and a track error of 77 km. The simulations also show that the landfall time error and intensity error decrease with delayed initial conditions, suggesting that the model forecast is more dependable as the cyclone approaches the coast. The distribution and intensity of rainfall are also well simulated by the model and are comparable with the TRMM estimates.
NASA Astrophysics Data System (ADS)
Dipankar, A.; Stevens, B. B.; Zängl, G.; Pondkule, M.; Brdar, S.
2014-12-01
The effect of clouds on large-scale dynamics is represented in climate models through the parameterization of various processes, of which the parameterizations of shallow and deep convection are particularly uncertain. The atmospheric boundary layer, which controls the coupling to the surface and which defines the scale of shallow convection, is typically 1 km in depth. Thus, simulations on an O(100 m) grid largely obviate the need for such parameterizations. By crossing this threshold of O(100 m) grid resolution one can begin thinking of large-eddy simulation (LES), wherein the sub-grid scale parameterizations have a sounder theoretical foundation. Substantial initiatives have been taken internationally to approach this threshold. For example, Miura et al., 2007 and Mirakawa et al., 2014 approach this threshold by doing global simulations with (gradually) decreasing grid spacing, to understand the effect of cloud-resolving scales on the general circulation. Our strategy, on the other hand, is to take a big leap forward by fixing the resolution at O(100 m) and gradually increasing the domain size. We believe that breaking this threshold would greatly help in improving the parameterization schemes and reducing the uncertainty in climate predictions. To take this forward, the German Federal Ministry of Education and Research has initiated the HD(CP)2 project, which aims for a limited-area LES at O(100 m) resolution using the new unified modeling system ICON (Zängl et al., 2014). In the talk, results will be shown from the HD(CP)2 evaluation simulation, which targets a high-resolution simulation over a small domain around Jülich, Germany. This site was chosen because the high-resolution HD(CP)2 Observational Prototype Experiment took place in this region from 1.04.2013 to 31.05.2013, in order to critically evaluate the model. The nesting capabilities of ICON are used to gradually increase the resolution from the outermost domain, which is forced from the COSMO-DE data, to the innermost and finest-resolution domain centered around Jülich (see Fig. 1, top panel). Furthermore, detailed analyses of the simulation results against the observation data will be presented. A representative figure showing the time series of column-integrated water vapor (IWV) for both model and observation on 24.04.2013 is shown in the bottom panel of Fig. 1.
An efficient approach to ARMA modeling of biological systems with multiple inputs and delays
NASA Technical Reports Server (NTRS)
Perrott, M. H.; Cohen, R. J.
1996-01-01
This paper presents a new approach to AutoRegressive Moving Average (ARMA or ARX) modeling which automatically seeks the best model order to represent investigated linear, time-invariant systems using their input/output data. The algorithm seeks the ARMA parameterization which accounts for variability in the output of the system due to input activity and contains the fewest number of parameters required to do so. The unique characteristics of the proposed system identification algorithm are its simplicity and efficiency in handling systems with delays and multiple inputs. We present results of applying the algorithm to simulated data and experimental biological data. In addition, a technique for assessing the error associated with the impulse responses calculated from estimated ARMA parameterizations is presented. The mapping from ARMA coefficients to impulse response estimates is nonlinear, which complicates any effort to construct confidence bounds for the obtained impulse responses. Here a method for obtaining a linearization of this mapping is derived, which leads to a simple procedure to approximate the confidence bounds.
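For a fixed model order and delay, the ARX parameterization reduces to a linear least-squares problem; the order search and multi-input handling described above sit on top of this step. The sketch below (single input, assumed orders na, nb and delay d) is illustrative, not the authors' order-selection algorithm.

```python
import numpy as np

def fit_arx(y, u, na, nb, d):
    """Least-squares fit of y[t] = sum_i a_i*y[t-i] + sum_j b_j*u[t-d-j] + e[t]."""
    start = max(na, d + nb)
    rows, targets = [], []
    for t in range(start, len(y)):
        past_y = [y[t - i] for i in range(1, na + 1)]
        past_u = [u[t - d - j] for j in range(nb)]
        rows.append(past_y + past_u)
        targets.append(y[t])
    theta, *_ = np.linalg.lstsq(np.asarray(rows), np.asarray(targets), rcond=None)
    return theta[:na], theta[na:]      # AR coefficients, input coefficients
```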
Loupa, G; Rapsomanikis, S; Trepekli, A; Kourtidis, K
2016-01-15
Energy flux parameterization was carried out for the city of Athens, Greece, by utilizing two approaches, the Local-Scale Urban Meteorological Parameterization Scheme (LUMPS) and the Bulk Approach (BA). In situ acquired data are used to validate the algorithms of these schemes and to derive coefficients applicable to the study area. Model results from these corrected algorithms are compared with literature results for coefficients applicable to other cities and their varying construction materials. Asphalt and concrete surfaces, canyons and anthropogenic heat releases were found to be the key characteristics of the city center that sustain the elevated surface and air temperatures under hot, sunny and dry weather during the Mediterranean summer. A relationship between storage heat flux plus anthropogenic energy flux and temperatures (surface and lower atmosphere) is presented, which aids understanding of the interplay between temperatures, anthropogenic energy releases and the city characteristics under Urban Heat Island conditions.
Zhang, Jie; Xiao, Wendong; Zhang, Sen; Huang, Shoudong
2017-04-17
Device-free localization (DFL) is becoming one of the new technologies in the wireless localization field, owing to the advantage that the target to be localized does not need to be attached to any electronic device. In the radio-frequency (RF) DFL system, radio transmitters (RTs) and radio receivers (RXs) are used to sense the target collaboratively, and the location of the target can be estimated by fusing the changes of the received signal strength (RSS) measurements associated with the wireless links. In this paper, we propose an extreme learning machine (ELM) approach for DFL, to improve the efficiency and the accuracy of the localization algorithm. Different from conventional machine learning approaches for wireless localization, in which the differential RSS measurements are trivially used as the only input features, we introduce a parameterized geometrical representation for an affected link, which consists of its geometrical intercepts and its differential RSS measurement. Parameterized geometrical feature extraction (PGFE) is performed for the affected links and the features are used as the inputs of the ELM. The proposed PGFE-ELM for DFL is trained in the offline phase and performed for real-time localization in the online phase, where the estimated location of the target is obtained through the created ELM. PGFE-ELM has the advantages that the affected links used by the ELM in the online phase can be different from those used for training in the offline phase, and that it is more robust in dealing with uncertain combinations of the detectable wireless links. Experimental results show that the proposed PGFE-ELM can improve the localization accuracy and learning speed significantly compared with a number of existing machine learning and DFL approaches, including the weighted K-nearest neighbor (WKNN), support vector machine (SVM), back propagation neural network (BPNN), as well as the well-known radio tomographic imaging (RTI) DFL approach.
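The core of an extreme learning machine is simple enough to show compactly: hidden-layer weights are drawn at random and never trained, and only the output weights are solved for by least squares. The regressor below is a schematic illustration, assuming the PGFE step has already produced feature vectors and 2-D location targets; the hidden-layer size and activation are not those of the paper.

```python
import numpy as np

class ELMRegressor:
    """Schematic single-hidden-layer ELM for regression."""

    def __init__(self, n_hidden=200, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def _hidden(self, X):
        return 1.0 / (1.0 + np.exp(-(X @ self.W + self.b)))   # sigmoid layer

    def fit(self, X, Y):
        # random, untrained input weights: the defining trick of ELM
        self.W = self.rng.standard_normal((X.shape[1], self.n_hidden))
        self.b = self.rng.standard_normal(self.n_hidden)
        H = self._hidden(X)
        # output weights from the Moore-Penrose pseudoinverse (least squares)
        self.beta = np.linalg.pinv(H) @ Y
        return self

    def predict(self, X):
        return self._hidden(X) @ self.beta
```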
NASA Astrophysics Data System (ADS)
Lee, Joong Gwang; Nietch, Christopher T.; Panguluri, Srinivas
2018-05-01
Urban stormwater runoff quantity and quality are strongly dependent upon catchment properties. Models are used to simulate the runoff characteristics, but the output from a stormwater management model is dependent on how the catchment area is subdivided and represented as spatial elements. For green infrastructure modeling, we suggest a discretization method that distinguishes directly connected impervious area (DCIA) from the total impervious area (TIA). Pervious buffers, which receive runoff from upgradient impervious areas, should also be identified as a separate subset of the entire pervious area (PA). This separation provides an improved model representation of the runoff process. With these criteria in mind, an approach to spatial discretization for projects using the US Environmental Protection Agency's Storm Water Management Model (SWMM) is demonstrated for the Shayler Crossing watershed (SHC), a well-monitored, residential suburban area occupying 100 ha, east of Cincinnati, Ohio. The model relies on a highly resolved spatial database of urban land cover, stormwater drainage features, and topography. To verify the spatial discretization approach, a hypothetical analysis was conducted. Six different representations of a common urbanscape that discharges runoff to a single storm inlet were evaluated with eight 24 h synthetic storms. This analysis allowed us to select a discretization scheme that balances complexity in model setup with presumed accuracy of the output with respect to the most complex discretization option considered. The balanced approach delineates the directly connected (DCIA) and indirectly connected (ICIA) impervious areas, the buffering pervious area (BPA) receiving impervious runoff, and the other pervious area within a SWMM subcatchment. It performed well at the watershed scale with minimal calibration effort (Nash-Sutcliffe coefficient = 0.852; R2 = 0.871). The approach accommodates the distribution of runoff contributions from different spatial components and flow pathways that would impact green infrastructure performance. A developed SWMM model using the discretization approach is calibrated by adjusting parameters per land cover component, instead of per subcatchment, and can therefore be applied to relatively large watersheds if the land cover components are relatively homogeneous and/or categorized appropriately in the GIS that supports the model parameterization. Finally, with a few model adjustments, we show how the simulated stream hydrograph can be separated into the relative contributions from different land cover types and subsurface sources, adding insight into the potential effectiveness of planned green infrastructure scenarios at the watershed scale.
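The goodness-of-fit statistic quoted above is straightforward to reproduce for any pair of observed and simulated hydrographs; the helper below is a generic sketch (array inputs are assumptions), not part of the SWMM workflow itself.

```python
import numpy as np

def nash_sutcliffe(observed, simulated):
    """Nash-Sutcliffe efficiency: 1 minus the model error variance over the
    variance of the observations (1 is perfect, <0 is worse than the mean)."""
    observed = np.asarray(observed, dtype=float)
    simulated = np.asarray(simulated, dtype=float)
    return 1.0 - np.sum((observed - simulated) ** 2) / np.sum(
        (observed - observed.mean()) ** 2)
```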
NASA Astrophysics Data System (ADS)
Erazo, Kalil; Nagarajaiah, Satish
2017-06-01
In this paper an offline approach for output-only Bayesian identification of stochastic nonlinear systems is presented. The approach is based on a re-parameterization of the joint posterior distribution of the parameters that define a postulated state-space stochastic model class. In the re-parameterization the state predictive distribution is included, marginalized, and estimated recursively in a state estimation step using an unscented Kalman filter, bypassing state augmentation as required by existing online methods. In applications expectations of functions of the parameters are of interest, which requires the evaluation of potentially high-dimensional integrals; Markov chain Monte Carlo is adopted to sample the posterior distribution and estimate the expectations. The proposed approach is suitable for nonlinear systems subjected to non-stationary inputs whose realization is unknown, and that are modeled as stochastic processes. Numerical verification and experimental validation examples illustrate the effectiveness and advantages of the approach, including: (i) an increased numerical stability with respect to augmented-state unscented Kalman filtering, avoiding divergence of the estimates when the forcing input is unmeasured; (ii) the ability to handle arbitrary prior and posterior distributions. The experimental validation of the approach is conducted using data from a large-scale structure tested on a shake table. It is shown that the approach is robust to inherent modeling errors in the description of the system and forcing input, providing accurate prediction of the dynamic response when the excitation history is unknown.
Impacts of Light Use Efficiency and fPAR Parameterization on Gross Primary Production Modeling
NASA Technical Reports Server (NTRS)
Cheng, Yen-Ben; Zhang, Qingyuan; Lyapustin, Alexei I.; Wang, Yujie; Middleton, Elizabeth M.
2014-01-01
This study examines the impact of the parameterization of two variables, light use efficiency (LUE) and the fraction of absorbed photosynthetically active radiation (fPAR or fAPAR), on gross primary production (GPP) modeling. Carbon sequestration by terrestrial plants is a key factor in a comprehensive understanding of the carbon budget at the global scale. In this context, accurate measurements and estimates of GPP will allow us to achieve improved carbon monitoring and to quantitatively assess impacts from climate change and human activities. Spaceborne remote sensing observations can provide a variety of land surface parameterizations for modeling photosynthetic activities at various spatial and temporal scales. This study utilizes a simple GPP model based on the LUE concept and different land surface parameterizations to evaluate the model and monitor GPP. Two maize-soybean rotation fields in Nebraska, USA, and the Bartlett Experimental Forest in New Hampshire, USA, were selected for study. Tower-based eddy-covariance carbon exchange and PAR measurements were collected from the FLUXNET Synthesis Dataset. For the model parameterization, we utilized different values of LUE and the fPAR derived from various algorithms. We adapted the approach and parameters from the MODIS MOD17 Biome Properties Look-Up Table (BPLUT) to derive LUE. We also used a site-specific analytic approach with tower-based Net Ecosystem Exchange (NEE) and PAR to estimate the maximum potential LUE (LUEmax) from which to derive LUE. For the fPAR parameter, the MODIS MOD15A2 fPAR product was used. We also utilized fAPAR_chl, a parameter accounting for the fAPAR linked to the chlorophyll-containing canopy fraction. fAPAR_chl was obtained by inversion of a radiative transfer model, which used the MODIS-based reflectances in bands 1-7 produced by the Multi-Angle Implementation of Atmospheric Correction (MAIAC) algorithm. fAPAR_chl exhibited seasonal dynamics more similar to the flux-tower-based GPP than MOD15A2 fPAR did, especially in the spring and fall at the agricultural sites. When using the MODIS MOD17-based parameters to estimate LUE, fAPAR_chl generated better agreement with GPP (r2 = 0.79-0.91) than MOD15A2 fPAR (r2 = 0.57-0.84). However, underestimations of GPP were also observed, especially for the crop fields. When applying the site-specific LUEmax value to estimate in situ LUE, the magnitude of estimated GPP was closer to in situ GPP; this method produced a slight overestimation for the MOD15A2 fPAR at the Bartlett forest. This study highlights the importance of accurate land surface parameterizations to achieve reliable carbon monitoring capabilities from remote sensing information.
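The LUE model underlying the comparison above multiplies incident PAR, the absorbed fraction (fPAR or fAPAR_chl), and a light use efficiency that may be down-regulated by environmental scalars. The sketch below is a generic illustration of that structure; the scalar factors and units are assumptions, not the MOD17 BPLUT values used in the study.

```python
import numpy as np

def gpp_lue(par, fpar, lue_max, t_scalar=1.0, vpd_scalar=1.0):
    """GPP = LUE * fPAR * PAR.

    par        : incident PAR [MJ m-2 d-1]
    fpar       : fraction of PAR absorbed (e.g., MOD15A2 fPAR or fAPAR_chl)
    lue_max    : maximum light use efficiency [g C MJ-1]
    t_scalar, vpd_scalar : 0-1 down-regulation factors (temperature, dryness)
    Returns GPP in g C m-2 d-1.
    """
    lue = lue_max * t_scalar * vpd_scalar
    return lue * np.asarray(fpar) * np.asarray(par)
```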
NASA Astrophysics Data System (ADS)
Serva, Federico; Cagnazzo, Chiara; Riccio, Angelo
2016-04-01
The effects of the propagation and breaking of atmospheric gravity waves have long been considered crucial for their impact on the circulation, especially in the stratosphere and mesosphere, between heights of 10 and 110 km. These waves, which in the Earth's atmosphere originate from surface orography (OGWs) or from transient (nonorographic) phenomena such as fronts and convective processes (NOGWs), have horizontal wavelengths between 10 and 1000 km, vertical wavelengths of several km, and frequencies spanning from minutes to hours. Orographic and nonorographic GWs must be accounted for in climate models to obtain a realistic simulation of the stratosphere in both hemispheres, since they can have a substantial impact on circulation and temperature, and hence an important role in ozone chemistry for chemistry-climate models. Several types of parameterization are currently employed in models, differing in their formulation and in the values assigned to parameters, but the common aim is to quantify the effect of wave breaking on large-scale wind and temperature patterns. In the last decade, both global observations from satellite-borne instruments and the outputs of very high resolution climate models have provided insight into the variability and properties of the gravity wave field, and these results can be used to constrain some of the empirical parameters present in most parameterization schemes. A feature of the NOGW forcing that clearly emerges is its intermittency, linked with the nature of the sources: this property is absent in the majority of models, in which NOGW parameterizations are uncoupled from other atmospheric phenomena, leading to results that display lower variability compared to observations. In this work, we analyze the climate simulated in AMIP runs of the MAECHAM5 model, which uses the Hines NOGW parameterization and a fine vertical resolution suitable to capture the effects of wave-mean flow interaction. We compare the results obtained with two versions of the model, the default and a new stochastic version, in which the value of the perturbation field at the launching level is not constant and uniform but extracted at each time step and grid point from a given PDF. With this approach we are trying to add further variability to the effects given by the deterministic NOGW parameterization: the impact on the simulated climate will be assessed by focusing on the Quasi-Biennial Oscillation of the equatorial stratosphere (known to be driven in part by gravity waves) and on the variability of the mid-to-high-latitude atmosphere. The different characteristics of the circulation will be compared with recent reanalysis products in order to determine the advantages of the stochastic approach over the traditional deterministic scheme.
NASA Technical Reports Server (NTRS)
Randall, David A.
1990-01-01
A bulk planetary boundary layer (PBL) model was developed with a simple internal vertical structure and a simple second-order closure, designed for use as a PBL parameterization in a large-scale model. The model allows the mean fields to vary with height within the PBL, and so must address the vertical profiles of the turbulent fluxes, going beyond the usual mixed-layer assumption that the fluxes of conservative variables are linear with height. This is accomplished using the same convective mass flux approach that has also been used in cumulus parameterizations. The purpose is to show that such a mass flux model can include, in a single framework, the compensating subsidence concept, downgradient mixing, and well-mixed layers.
New Approaches to Parameterizing Convection
NASA Technical Reports Server (NTRS)
Randall, David A.; Lappen, Cara-Lyn
1999-01-01
Many general circulation models (GCMs) currently use separate schemes for planetary boundary layer (PBL) processes, shallow and deep cumulus (Cu) convection, and stratiform clouds. The conventional distinctions among these processes are somewhat arbitrary. For example, in the stratocumulus-to-cumulus transition region, stratocumulus clouds break up into a combination of shallow cumulus and broken stratocumulus. Shallow cumulus clouds may be considered to reside completely within the PBL, or they may be regarded as starting in the PBL but terminating above it. Deeper cumulus clouds often originate within the PBL but can also originate aloft. To the extent that our models separately parameterize physical processes which interact strongly on small space and time scales, the currently fashionable practice of modularization may be doing more harm than good.
Working memory load modulation of parieto-frontal connections: evidence from dynamic causal modeling
Ma, Liangsuo; Steinberg, Joel L.; Hasan, Khader M.; Narayana, Ponnada A.; Kramer, Larry A.; Moeller, F. Gerard
2011-01-01
Previous neuroimaging studies have shown that working memory load has marked effects on regional neural activation. However, the mechanism through which working memory load modulates brain connectivity is still unclear. In this study, this issue was addressed using dynamic causal modeling (DCM) based on functional magnetic resonance imaging (fMRI) data. Eighteen normal healthy subjects were scanned while they performed a working memory task with variable memory load, as parameterized by two levels of memory delay and three levels of digit load (number of digits presented in each visual stimulus). Eight regions of interest, i.e., bilateral middle frontal gyrus (MFG), anterior cingulate cortex (ACC), inferior frontal cortex (IFC), and posterior parietal cortex (PPC), were chosen for DCM analyses. Analysis of the behavioral data during the fMRI scan revealed that accuracy decreased as digit load increased. Bayesian inference on model structure indicated that a bilinear DCM in which memory delay was the driving input to bilateral PPC and in which digit load modulated several parieto-frontal connections was the optimal model. Analysis of model parameters showed that higher digit load enhanced connection from L PPC to L IFC, and lower digit load inhibited connection from R PPC to L ACC. These findings suggest that working memory load modulates brain connectivity in a parieto-frontal network, and may reflect altered neuronal processes, e.g., information processing or error monitoring, with the change in working memory load. PMID:21692148
We present results from a study testing the new boundary layer parameterization method, the canopy drag approach (DA) which is designed to explicitly simulate the effects of buildings, street and tree canopies on the dynamic, thermodynamic structure and dispersion fields in urban...
Contrasting Causatives: A Minimalist Approach
ERIC Educational Resources Information Center
Tubino Blanco, Mercedes
2010-01-01
This dissertation explores the mechanisms behind the linguistic expression of causation in English, Hiaki (Uto-Aztecan) and Spanish. Pylkkanen's (2002, 2008) analysis of causatives as dependent on the parameterization of the functional head v[subscript CAUSE] is chosen as a point of departure. The studies conducted in this dissertation confirm…
Li, Xianfeng; Murthy, N. Sanjeeva; Becker, Matthew L.; Latour, Robert A.
2016-01-01
A multiscale modeling approach is presented for the efficient construction of an equilibrated all-atom model of a cross-linked poly(ethylene glycol) (PEG)-based hydrogel using the all-atom polymer consistent force field (PCFF). The final equilibrated all-atom model was built with a systematic simulation toolset consisting of three consecutive parts: (1) building a global cross-linked PEG-chain network at experimentally determined cross-link density using an on-lattice Monte Carlo method based on the bond fluctuation model, (2) recovering the local molecular structure of the network by transitioning from the lattice model to an off-lattice coarse-grained (CG) model parameterized from PCFF, followed by equilibration using high performance molecular dynamics methods, and (3) recovering the atomistic structure of the network by reverse mapping from the equilibrated CG structure, hydrating the structure with explicitly represented water, followed by final equilibration using PCFF parameterization. The developed three-stage modeling approach has application to a wide range of other complex macromolecular hydrogel systems, including the integration of peptide, protein, and/or drug molecules as side-chains within the hydrogel network for the incorporation of bioactivity for tissue engineering, regenerative medicine, and drug delivery applications. PMID:27013229
NASA Astrophysics Data System (ADS)
Sanyal, Tanmoy; Shell, M. Scott
2016-07-01
Bottom-up multiscale techniques are frequently used to develop coarse-grained (CG) models for simulations at extended length and time scales but are often limited by a compromise between computational efficiency and accuracy. The conventional approach to CG nonbonded interactions uses pair potentials which, while computationally efficient, can neglect the inherently multibody contributions of the local environment of a site to its energy, due to degrees of freedom that were coarse-grained out. This effect often causes the CG potential to depend strongly on the overall system density, composition, or other properties, which limits its transferability to states other than the one at which it was parameterized. Here, we propose to incorporate multibody effects into CG potentials through additional nonbonded terms, beyond pair interactions, that depend in a mean-field manner on local densities of different atomic species. This approach is analogous to embedded atom and bond-order models that seek to capture multibody electronic effects in metallic systems. We show that the relative entropy coarse-graining framework offers a systematic route to parameterizing such local density potentials. We then characterize this approach in the development of implicit solvation strategies for interactions between model hydrophobes in an aqueous environment.
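As a conceptual illustration of the local-density idea (not the potentials or the relative-entropy machinery of the paper), the sketch below accumulates a smoothed neighbour count around each coarse-grained site with an assumed cutoff function and evaluates an embedding-style energy as a polynomial of that count, analogous to embedded-atom models.

```python
import numpy as np

def indicator(r, r_inner=6.0, r_outer=8.0):
    """Smoothly switch from 1 to 0 between r_inner and r_outer (assumed cutoffs)."""
    x = np.clip((r - r_inner) / (r_outer - r_inner), 0.0, 1.0)
    return 1.0 - 3.0 * x**2 + 2.0 * x**3

def local_density_energy(positions, coeffs):
    """positions : (N, 3) CG site coordinates
    coeffs    : polynomial coefficients of the embedding function F(rho)
    Returns the total multibody (local-density) energy."""
    diff = positions[:, None, :] - positions[None, :, :]
    r = np.sqrt((diff ** 2).sum(-1))
    np.fill_diagonal(r, np.inf)            # exclude self-interaction
    rho = indicator(r).sum(axis=1)         # smoothed local density at each site
    return np.polyval(coeffs, rho).sum()   # F(rho) summed over sites
```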
Pion, Kaon, Proton and Antiproton Production in Proton-Proton Collisions
NASA Technical Reports Server (NTRS)
Norbury, John W.; Blattnig, Steve R.
2008-01-01
Inclusive pion, kaon, proton, and antiproton production from proton-proton collisions is studied at a variety of proton energies. Various available parameterizations of Lorentz-invariant differential cross sections as a function of transverse momentum and rapidity are compared with experimental data. The Badhwar and Alper parameterizations are moderately satisfactory for charged pion production. The Badhwar parameterization provides the best fit for charged kaon production. For proton production, the Alper parameterization is best, and for antiproton production the Carey parameterization works best. However, no parameterization is able to fully account for all the data.
NASA Astrophysics Data System (ADS)
Brodie, E.; King, E.; Molins, S.; Karaoz, U.; Johnson, J. N.; Bouskill, N.; Hug, L. A.; Thomas, B. C.; Castelle, C. J.; Beller, H. R.; Banfield, J. F.; Steefel, C. I.
2014-12-01
In soils and sediments microorganisms perform essential ecosystem services through their roles in regulating the stability of carbon and the flux of nutrients, and the purification of water. But these are complex systems with the physical, chemical and biological components all intimately connected. Components of this complexity are gradually being uncovered and our understanding of the extent of microbial functional diversity in particular has been enhanced greatly with the development of cultivation independent approaches. However we have not moved far beyond a descriptive and correlative use of this powerful resource. As the ability to reconstruct thousands of genomes from microbial populations using metagenomic techniques gains momentum, the challenge will be to develop an understanding of how these metabolic blueprints serve to influence the fitness of organisms within these complex systems and how populations emerge and impact the physical and chemical properties of their environment. In the presentation we will discuss the development of a trait-based model of microbial activity that simulates coupled guilds of microorganisms that are parameterized including traits extracted from large-scale metagenomic data. Using a reactive transport framework we simulate the thermodynamics of coupled electron donor and acceptor reactions to predict the energy available for respiration, biomass development and exo-enzyme production. Each group within a functional guild is parameterized with a unique combination of traits governing organism fitness under dynamic environmental conditions. This presentation will address our latest developments in the estimation of trait values related to growth rate and the identification and linkage of key fitness traits associated with respiratory and fermentative pathways, macromolecule depolymerization enzymes and nitrogen fixation from metagenomic data. We are testing model sensitivity to initial microbial composition and intra-guild trait variability amongst other parameters and are using this model to explore abiotic controls on community emergence and impact on rates of reactions that contribute to the cycling of carbon across biogeochemical gradients from the soil to the subsurface.
Hydraulic Conductivity Estimation using Bayesian Model Averaging and Generalized Parameterization
NASA Astrophysics Data System (ADS)
Tsai, F. T.; Li, X.
2006-12-01
Non-uniqueness in the parameterization scheme is an inherent problem in groundwater inverse modeling due to limited data. To cope with the non-uniqueness problem of parameterization, we introduce a Bayesian Model Averaging (BMA) method to integrate a set of selected parameterization methods. The estimation uncertainty in BMA includes the uncertainty in individual parameterization methods as the within-parameterization variance and the uncertainty from using different parameterization methods as the between-parameterization variance. Moreover, the generalized parameterization (GP) method is considered in the geostatistical framework in this study. The GP method aims at increasing the flexibility of parameterization through the combination of a zonation structure and an interpolation method. The use of BMA with GP avoids over-confidence in a single parameterization method. A normalized least-squares estimation (NLSE) is adopted to calculate the posterior probability for each GP. We employ the adjoint state method for the sensitivity analysis of the weighting coefficients in the GP method. The adjoint state method is also applied to the NLSE problem. The proposed methodology is applied to the Alamitos Barrier Project (ABP) in California, where the spatially distributed hydraulic conductivity is estimated. The optimal weighting coefficients embedded in GP are identified through maximum likelihood estimation (MLE), where the misfits between the observed and calculated groundwater heads are minimized. The conditional mean and conditional variance of the estimated hydraulic conductivity distribution using BMA are obtained to assess the estimation uncertainty.
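The BMA combination itself reduces to weighting each parameterization's conditional mean and variance by its posterior model probability; the total variance splits into the within- and between-parameterization parts mentioned above. The sketch below shows this bookkeeping with assumed array shapes; it is not the NLSE or adjoint-state machinery of the study.

```python
import numpy as np

def bma_combine(post_prob, means, variances):
    """post_prob : (K,) posterior probabilities of the K parameterizations (sum to 1)
    means, variances : (K, M) conditional mean/variance at M estimation points
    Returns the BMA mean and the total (within + between) variance."""
    post_prob = np.asarray(post_prob)[:, None]
    mean_bma = (post_prob * means).sum(axis=0)
    within = (post_prob * variances).sum(axis=0)
    between = (post_prob * (means - mean_bma) ** 2).sum(axis=0)
    return mean_bma, within + between
```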
Pedotransfer functions in Earth system science: challenges and perspectives
NASA Astrophysics Data System (ADS)
Van Looy, K.; Minasny, B.; Nemes, A.; Verhoef, A.; Weihermueller, L.; Vereecken, H.
2017-12-01
We make a strong case for a new generation of pedotransfer functions (PTFs) that is currently being developed in the different disciplines of Earth system science, offering strong prospects for the improvement of integrated process-based models, from local- to global-scale applications. PTFs are simple to complex knowledge rules that relate available soil information to the soil properties and variables needed to parameterize soil processes. To meet the methodological challenges for a successful application in Earth system modeling, we highlight how PTF development needs to go hand in hand with suitable extrapolation and upscaling techniques such that the PTFs correctly capture the spatial heterogeneity of soils. The most actively pursued recent developments are related to parameterizations of solute transport, heat exchange, soil respiration and organic carbon content, root density and vegetation water uptake. We present an outlook and a stepwise approach to the development of a comprehensive set of PTFs that can be applied throughout a wide range of disciplines of Earth system science, with emphasis on land surface models. Novel sensing techniques and soil information availability provide a true breakthrough for this, yet further improvements are necessary in three domains: 1) determining unknown relationships and dealing with uncertainty in Earth system modeling; 2) spatially deploying this knowledge, with PTF validation at regional to global scales; and 3) integrating and linking the complex model parameterizations (coupled parameterization). We will show that such integration is an achievable goal.
Parameterizing the Spatial Markov Model From Breakthrough Curve Data Alone
NASA Astrophysics Data System (ADS)
Sherman, Thomas; Fakhari, Abbas; Miller, Savannah; Singha, Kamini; Bolster, Diogo
2017-12-01
The spatial Markov model (SMM) is an upscaled Lagrangian model that effectively captures anomalous transport across a diverse range of hydrologic systems. The distinct feature of the SMM relative to other random walk models is that successive steps are correlated. To date, with some notable exceptions, the model has primarily been applied to data from high-resolution numerical simulations and correlation effects have been measured from simulated particle trajectories. In real systems such knowledge is practically unattainable and the best one might hope for is breakthrough curves (BTCs) at successive downstream locations. We introduce a novel methodology to quantify velocity correlation from BTC data alone. By discretizing two measured BTCs into a set of arrival times and developing an inverse model, we estimate velocity correlation, thereby enabling parameterization of the SMM in studies where detailed Lagrangian velocity statistics are unavailable. The proposed methodology is applied to two synthetic numerical problems, where we measure all details and thus test the veracity of the approach by comparison of estimated parameters with known simulated values. Our results suggest that our estimated transition probabilities agree with simulated values and using the SMM with this estimated parameterization accurately predicts BTCs downstream. Our methodology naturally allows for estimates of uncertainty by calculating lower and upper bounds of velocity correlation, enabling prediction of a range of BTCs. The measured BTCs fall within the range of predicted BTCs. This novel method to parameterize the SMM from BTC data alone is quite parsimonious, thereby widening the SMM's practical applicability.
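Once the transition probabilities have been estimated from the two breakthrough curves, predicting BTCs farther downstream amounts to a correlated random walk in which each particle's velocity class for the next spatial step is drawn from the row of the transition matrix indexed by its current class. The forward sketch below is illustrative (uniform initial classes, placeholder inputs), not the inverse estimation procedure of the paper.

```python
import numpy as np

def smm_breakthrough(trans, class_velocities, dx, n_steps,
                     n_particles=10000, seed=0):
    """trans            : (C, C) row-stochastic transition matrix between velocity classes
    class_velocities : (C,) representative velocity of each class [m/s]
    dx               : spatial step length [m]; n_steps : steps to the control plane
    Returns particle arrival times at x = n_steps * dx."""
    rng = np.random.default_rng(seed)
    state = rng.integers(len(class_velocities), size=n_particles)
    arrival = np.zeros(n_particles)
    for _ in range(n_steps):
        arrival += dx / class_velocities[state]
        # correlated step: sample the next class from the current class's row
        u = rng.random(n_particles)
        state = (np.cumsum(trans[state], axis=1) < u[:, None]).sum(axis=1)
    return arrival
```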
Modelling DNA origami self-assembly at the domain level.
Dannenberg, Frits; Dunn, Katherine E; Bath, Jonathan; Kwiatkowska, Marta; Turberfield, Andrew J; Ouldridge, Thomas E
2015-10-28
We present a modelling framework, and basic model parameterization, for the study of DNA origami folding at the level of DNA domains. Our approach is explicitly kinetic and does not assume a specific folding pathway. The binding of each staple is associated with a free-energy change that depends on staple sequence, the possibility of coaxial stacking with neighbouring domains, and the entropic cost of constraining the scaffold by inserting staple crossovers. A rigorous thermodynamic model is difficult to implement as a result of the complex, multiply connected geometry of the scaffold: we present a solution to this problem for planar origami. Coaxial stacking of helices and entropic terms, particularly when loop closure exponents are taken to be larger than those for ideal chains, introduce interactions between staples. These cooperative interactions lead to the prediction of sharp assembly transitions with notable hysteresis that are consistent with experimental observations. We show that the model reproduces the experimentally observed consequences of reducing staple concentration, accelerated cooling, and absent staples. We also present a simpler methodology that gives consistent results and can be used to study a wider range of systems including non-planar origami.
SMRT: A new, modular snow microwave radiative transfer model
NASA Astrophysics Data System (ADS)
Picard, Ghislain; Sandells, Melody; Löwe, Henning; Dumont, Marie; Essery, Richard; Floury, Nicolas; Kontu, Anna; Lemmetyinen, Juha; Maslanka, William; Mätzler, Christian; Morin, Samuel; Wiesmann, Andreas
2017-04-01
Forward models of radiative transfer processes are needed to interpret remote sensing data and derive measurements of snow properties such as snow mass. A key requirement and challenge for microwave emission and scattering models is an accurate description of the snow microstructure. The snow microwave radiative transfer model (SMRT) was designed to cater for potential future active and/or passive satellite missions and developed to improve understanding of how to parameterize snow microstructure. SMRT is implemented in Python and is modular to allow easy intercomparison of different theoretical approaches. Separate modules are included for the snow microstructure model, electromagnetic module, radiative transfer solver, substrate, interface reflectivities, atmosphere and permittivities. An object-oriented approach is used with carefully specified exchanges between modules to allow future extensibility, i.e., without constraining the parameter list requirements. This presentation illustrates the capabilities of SMRT. At present, five different snow microstructure models have been implemented, and direct insertion of the autocorrelation function from microtomography data is also foreseen with SMRT. Three electromagnetic modules are currently available. While the DMRT-QCA and Rayleigh models need specific microstructure models, the Improved Born Approximation may be used with any microstructure representation. A discrete ordinates approach with stream connection is used to solve the radiative transfer equations, although future inclusion of 6-flux and 2-flux solvers is envisioned. Wrappers have been included to allow existing microwave emission models (MEMLS, HUT, DMRT-QMS) to be run with the same inputs and minimal extra code (2 lines). Comparisons between theoretical approaches will be shown, along with evaluation against field experiments in the frequency range 5-150 GHz. SMRT is simple and elegant to use whilst providing a framework for future development within the community.
Legarra, Andres; Christensen, Ole F.; Vitezica, Zulma G.; Aguilar, Ignacio; Misztal, Ignacy
2015-01-01
Recent use of genomic (marker-based) relationships shows that relationships exist within and across base populations (breeds or lines). However, current treatment of pedigree relationships is unable to consider relationships within or across base populations, although such relationships must exist due to the finite size of the ancestral population and connections between populations. This complicates the conciliation of both approaches and, in particular, combining pedigree with genomic relationships. We present a coherent theoretical framework to consider base populations in pedigree relationships. We suggest a conceptual framework that considers each ancestral population as a finite-sized pool of gametes. This generates across-individual relationships and contrasts with the classical view in which each population is considered an infinite, unrelated pool. Several ancestral populations may be connected and therefore related. Each ancestral population can be represented as a "metafounder," a pseudo-individual included as a founder of the pedigree and similar to an "unknown parent group." Metafounders have self- and across relationships according to a set of parameters, which measure ancestral relationships, i.e., homozygosities within populations and relationships across populations. These parameters can be estimated from existing pedigree and marker genotypes using maximum likelihood or a method based on summary statistics, for arbitrarily complex pedigrees. Equivalences of genetic variance and variance components between the classical and this new parameterization are shown. Segregation variance on crosses of populations is modeled. Efficient algorithms for computation of relationship matrices, their inverses, and inbreeding coefficients are presented. Use of metafounders leads to compatibility of genomic and pedigree relationship matrices and to simple computing algorithms. Examples and code are given. PMID:25873631
Nuclear quantum effects on the structure and the dynamics of [H2O]8 at low temperatures.
Videla, Pablo E; Rossky, Peter J; Laria, D
2013-11-07
We use ring-polymer-molecular-dynamics (RPMD) techniques and the semi-empirical q-TIP4P/F water model to investigate the relationship between hydrogen bond connectivity and the characteristics of nuclear position fluctuations, including explicit incorporation of quantum effects, for the energetically low lying isomers of the prototype cluster [H2O]8 at T = 50 K and at 150 K. Our results reveal that tunneling and zero-point energy effects lead to sensible increments in the magnitudes of the fluctuations of intra and intermolecular distances. The degree of proton spatial delocalization is found to map logically with the hydrogen-bond connectivity pattern of the cluster. Dangling hydrogen bonds exhibit the largest extent of spatial delocalization and participate in shorter intramolecular O-H bonds. Combined effects from quantum and polarization fluctuations on the resulting individual dipole moments are also examined. From the dynamical side, we analyze the characteristics of the infrared absorption spectrum. The incorporation of nuclear quantum fluctuations promotes red shifts and sensible broadening relative to the classical profile, bringing the simulation results in much more satisfactory agreement with direct experimental information in the mid and high frequency range of the stretching band. While RPMD predictions overestimate the peak position of the low frequency shoulder, the overall agreement with that reported using an accurate, parameterized, many-body potential is reasonable, and far superior to that one obtains by implementing a partially adiabatic centroid molecular dynamics approach. Quantum effects on the collective dynamics, as reported by instantaneous normal modes, are also discussed.
Evaluation and intercomparison of five major dry deposition algorithms in North America
Dry deposition of various pollutants needs to be quantified in air quality monitoring networks as well as in chemical transport models. The inferential method is the most commonly used approach in which the dry deposition velocity (Vd) is empirically parameterized as a function o...
USDA-ARS?s Scientific Manuscript database
The complexity of the hydrologic system challenges the development of models. One issue faced during the model development stage is the uncertainty involved in model parameterization. Using a single optimized set of parameters (one snapshot) to represent baseline conditions of the system limits the ...
NASA Astrophysics Data System (ADS)
Lin, Shangfei; Sheng, Jinyu
2017-12-01
Depth-induced wave breaking is the primary dissipation mechanism for ocean surface waves in shallow waters. Different parameterizations have been developed for the depth-induced wave breaking process in ocean surface wave models. The performance of six commonly used parameterizations in simulating significant wave heights (SWHs) is assessed in this study. The main differences between these six parameterizations are the representations of the breaker index and of the fraction of breaking waves. Laboratory and field observations consisting of 882 cases from 14 sources of published observational data are used in the assessment. We demonstrate that the six parameterizations perform reasonably well in representing depth-induced wave breaking in shallow waters, but each has its own limitations and drawbacks. The widely used parameterization suggested by Battjes and Janssen (1978, BJ78) has the drawback of underpredicting the SWHs in locally generated wave conditions and overpredicting them in remotely generated wave conditions over flat bottoms. This drawback of BJ78 was addressed by a parameterization suggested by Salmon et al. (2015, SA15). SA15, however, had relatively larger errors in SWHs over sloping bottoms than BJ78. We follow SA15 and propose a new parameterization in which the breaker index depends on the normalized water depth in deep waters, as in SA15. In shallow waters, the breaker index of the new parameterization has a nonlinear dependence on the local bottom slope rather than the linear dependence used in SA15. Overall, this new parameterization has the best performance, with an average scatter index of ∼8.2%, in comparison with the three best-performing existing parameterizations, whose average scatter indices lie between 9.2% and 13.6%.
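Two of the ingredients being compared, the breaker index and the fraction of breaking waves, can be illustrated with the classic BJ78 closure: Hmax = gamma * d and the implicit relation (1 - Qb)/ln(Qb) = -(Hrms/Hmax)^2. The sketch below solves that relation by bisection; the default gamma and the root-finding details are illustrative choices, not the new parameterization proposed in the paper.

```python
import numpy as np

def fraction_breaking(h_rms, depth, gamma=0.73):
    """Return (Qb, Hmax) for root-mean-square wave height h_rms [m],
    local water depth [m] and breaker index gamma."""
    h_max = gamma * depth
    b = h_rms / h_max
    if b >= 1.0:
        return 1.0, h_max              # saturated surf zone: all waves breaking
    lo, hi = 1e-12, 1.0 - 1e-12        # bracket the physical root of Qb in (0, 1)
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if mid - np.exp((mid - 1.0) / b ** 2) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi), h_max
```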
Comparison of Four Mixed Layer Mesoscale Parameterizations and the Equation for an Arbitrary Tracer
NASA Technical Reports Server (NTRS)
Canuto, V. M.; Dubovikov, M. S.
2011-01-01
In this paper we discuss two issues, the intercomparison of four mixed layer mesoscale parameterizations and the search for the eddy-induced velocity for an arbitrary tracer. It must be stressed that our analysis is limited to mixed layer mesoscales, since we do not treat sub-mesoscales and small-scale turbulent mixing. As for the first item, since three of the four parameterizations are expressed in terms of a stream function and a residual flux in the RMT (residual mean theory) formalism, while the fourth is expressed in terms of vertical and horizontal fluxes, we needed a formalism to connect the two formulations. The standard RMT representation developed for the deep ocean cannot be extended to the mixed layer, since its stream function does not vanish at the ocean's surface. We develop a new RMT representation that satisfies the surface boundary condition. As for the general form of the eddy-induced velocity for an arbitrary tracer, thus far it has been assumed that there is only the one that originates from the curl of the stream function. This is because it was assumed that the tracer residual flux is purely diffusive. On the other hand, we show that in the case of an arbitrary tracer the residual flux also has a skew component that gives rise to an additional bolus velocity. Therefore, instead of only one bolus velocity, there are now two: one coming from the curl of the stream function and the other from the skew part of the residual flux. In the buoyancy case, only one bolus velocity contributes to the mean buoyancy equation, since the residual flux is indeed purely diffusive.
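For orientation, the generic decomposition behind the discussion above can be written schematically as follows (a textbook residual-mean/skew-flux form, not the authors' mixed layer expressions): the eddy flux of a tracer splits into a symmetric, diffusive part and an antisymmetric, skew part, and the skew part acts on the mean tracer exactly like advection by an eddy-induced (bolus) velocity.

```latex
\begin{align}
\overline{\mathbf{u}'c'} &= -\mathbf{K}\,\nabla\bar{c},
  \qquad \mathbf{K} = \mathbf{S} + \mathbf{A}, \\
-\mathbf{A}\,\nabla\bar{c} &= \boldsymbol{\psi}\times\nabla\bar{c},
  \qquad
  \nabla\cdot\!\left(\boldsymbol{\psi}\times\nabla\bar{c}\right)
  = \mathbf{u}^{*}\cdot\nabla\bar{c},
  \qquad \mathbf{u}^{*} = \nabla\times\boldsymbol{\psi}, \\
\partial_{t}\bar{c}
  + \left(\bar{\mathbf{u}} + \mathbf{u}^{*}\right)\cdot\nabla\bar{c}
  &= \nabla\cdot\!\left(\mathbf{S}\,\nabla\bar{c}\right).
\end{align}
```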
Young, April M.; Halgin, Daniel S.; DiClemente, Ralph J.; Sterk, Claire E.; Havens, Jennifer R.
2014-01-01
Background An HIV vaccine could substantially impact the epidemic. However, risk compensation (RC), or post-vaccination increase in risk behavior, could present a major challenge. The methodology used in previous studies of risk compensation has been almost exclusively individual-level in focus, and has not explored how increased risk behavior could affect the connectivity of risk networks. This study examined the impact of anticipated HIV vaccine-related RC on the structure of high-risk drug users' sexual and injection risk network. Methods A sample of 433 rural drug users in the US provided data on their risk relationships (i.e., those involving recent unprotected sex and/or injection equipment sharing). Dyad-specific data were collected on likelihood of increasing/initiating risk behavior if they, their partner, or they and their partner received an HIV vaccine. Using these data and social network analysis, a "post-vaccination network" was constructed and compared to the current network on measures relevant to HIV transmission, including network size, cohesiveness (e.g., diameter, component structure, density), and centrality. Results Participants reported 488 risk relationships. Few reported an intention to decrease condom use or increase equipment sharing (4% and 1%, respectively). RC intent was reported in 30 existing risk relationships and vaccination was anticipated to elicit the formation of five new relationships. RC resulted in a 5% increase in risk network size (n = 142 to n = 149) and a significant increase in network density. The initiation of risk relationships resulted in the connection of otherwise disconnected network components, with the largest doubling in size from five to ten. Conclusions This study demonstrates a new methodological approach to studying RC and reveals that behavior change following HIV vaccination could potentially impact risk network connectivity. These data will be valuable in parameterizing future network models that can determine if network-level change precipitated by RC would appreciably impact the vaccine's population-level effectiveness. PMID:24992659
NASA Astrophysics Data System (ADS)
Brodie, E.; King, E.; Molins, S.; Karaoz, U.; Steefel, C. I.; Banfield, J. F.; Beller, H. R.; Anantharaman, K.; Ligocki, T. J.; Trebotich, D.
2015-12-01
Pore-scale processes mediated by microorganisms underlie a range of critical ecosystem services, regulating carbon stability, nutrient flux, and the purification of water. Advances in cultivation-independent approaches now provide us with the ability to reconstruct thousands of genomes from microbial populations from which functional roles may be assigned. With this capability to reveal microbial metabolic potential, the next step is to put these microbes back where they belong to interact with their natural environment, i.e., the pore scale. At this scale, microorganisms communicate, cooperate, and compete across their fitness landscapes, with communities emerging that feed back on the physical and chemical properties of their environment, ultimately altering the fitness landscape and selecting for new microbial communities with new properties, and so on. We have developed a trait-based model of microbial activity that simulates coupled functional guilds that are parameterized with unique combinations of traits that govern fitness under dynamic conditions. Using a reactive transport framework, we simulate the thermodynamics of coupled electron donor-acceptor reactions to predict energy available for cellular maintenance, respiration, biomass development, and enzyme production. From metagenomics, we directly estimate some trait values related to growth and identify the linkage of key traits associated with respiration and fermentation, macromolecule depolymerizing enzymes, and other key functions such as nitrogen fixation. Our simulations were carried out to explore abiotic controls on community emergence, such as seasonally fluctuating water table regimes across floodplain organic matter hotspots. Simulations and metagenomic/metatranscriptomic observations highlighted the many dependencies connecting the relative fitness of functional guilds and the importance of chemolithoautotrophic lifestyles. Using an X-ray microCT-derived soil microaggregate physical model combined with genome-enabled reactive flow and transport, we simulated the importance of pore network properties, including connectivity, in regulating metabolic interdependencies between microbial functional guilds.
NASA Technical Reports Server (NTRS)
Petty, Grant W.; Katsaros, Kristina B.
1994-01-01
Based on a geometric optics model and the assumption of an isotropic Gaussian surface slope distribution, the component of ocean surface microwave emissivity variation due to large-scale surface roughness is parameterized for the frequencies and approximate viewing angle of the Special Sensor Microwave/Imager. Independent geophysical variables in the parameterization are the effective (microwave frequency dependent) slope variance and the sea surface temperature. Using the same physical model, the change in the effective zenith angle of reflected sky radiation arising from large-scale roughness is also parameterized. Independent geophysical variables in this parameterization are the effective slope variance and the atmospheric optical depth at the frequency in question. Both of the above model-based parameterizations are intended for use in conjunction with empirical parameterizations relating effective slope variance and foam coverage to near-surface wind speed. These empirical parameterizations are the subject of a separate paper.
Optimal lattice-structured materials
Messner, Mark C.
2016-07-09
This paper describes a method for optimizing the mesostructure of lattice-structured materials. These materials are periodic arrays of slender members resembling efficient, lightweight macroscale structures like bridges and frame buildings. Current additive manufacturing technologies can assemble lattice structures with length scales ranging from nanometers to millimeters. Previous work demonstrates that lattice materials have excellent stiffness- and strength-to-weight scaling, outperforming natural materials. However, there are currently no methods for producing optimal mesostructures that consider the full space of possible 3D lattice topologies. The inverse homogenization approach for optimizing the periodic structure of lattice materials requires a parameterized, homogenized material model describing the response of an arbitrary structure. This work develops such a model, starting with a method for describing the long-wavelength, macroscale deformation of an arbitrary lattice. The work combines the homogenized model with a parameterized description of the total design space to generate a parameterized model. Finally, the work describes an optimization method capable of producing optimal mesostructures. Several examples demonstrate the optimization method. One of these examples produces an elastically isotropic, maximally stiff structure, here called the isotruss, that arguably outperforms the anisotropic octet truss topology.
Parameterizing the Morse Potential for Coarse-Grained Modeling of Blood Plasma
Zhang, Na; Zhang, Peng; Kang, Wei; Bluestein, Danny; Deng, Yuefan
2014-01-01
Multiscale simulations of fluids such as blood represent a major computational challenge of coupling the disparate spatiotemporal scales between molecular and macroscopic transport phenomena characterizing such complex fluids. In this paper, a coarse-grained (CG) particle model is developed for simulating blood flow by modifying the Morse potential, traditionally used in Molecular Dynamics for modeling vibrating structures. The modified Morse potential is parameterized with effective mass scales for reproducing blood viscous flow properties, including density, pressure, viscosity, compressibility and characteristic flow dynamics of human blood plasma fluid. The parameterization follows a standard inverse-problem approach in which the optimal micro parameters are systematically searched, by gradually decoupling loosely correlated parameter spaces, to match the macro physical quantities of viscous blood flow. The predictions of this particle-based multiscale model compare favorably to classic viscous flow solutions such as Counter-Poiseuille and Couette flows. It demonstrates that such a coarse-grained particle model can be applied to replicate the dynamics of viscous blood flow, with the advantage of bridging the gap between macroscopic flow scales and the cellular scales characterizing blood flow that continuum-based models fail to handle adequately. PMID:24910470
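For orientation, the sketch below shows the standard Morse pair potential and the corresponding pair force that such a CG model starts from; the parameter values are illustrative placeholders, not the effective-mass-scaled values fitted for blood plasma in the paper.

```python
import numpy as np

def morse_potential(r, D_e=1.0, a=2.0, r0=1.0):
    """Standard Morse pair potential U(r) = D_e*(1 - exp(-a*(r - r0)))**2 - D_e.

    D_e: well depth, a: well stiffness, r0: equilibrium separation.
    Placeholder values only; not the fitted CG parameters of the study above.
    """
    x = np.exp(-a * (r - r0))
    return D_e * (1.0 - x) ** 2 - D_e

def morse_force(r, D_e=1.0, a=2.0, r0=1.0):
    """Pair force magnitude F(r) = -dU/dr."""
    x = np.exp(-a * (r - r0))
    return -2.0 * D_e * a * x * (1.0 - x)

if __name__ == "__main__":
    r = np.linspace(0.8, 3.0, 5)
    # Columns: separation, potential energy, force.
    print(np.column_stack([r, morse_potential(r), morse_force(r)]))
```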
Structural test of the parameterized-backbone method for protein design.
Plecs, Joseph J; Harbury, Pehr B; Kim, Peter S; Alber, Tom
2004-09-03
Designing new protein folds requires a method for simultaneously optimizing the conformation of the backbone and the side-chains. One approach to this problem is the use of a parameterized backbone, which allows the systematic exploration of families of structures. We report the crystal structure of RH3, a right-handed, three-helix coiled coil that was designed using a parameterized backbone and detailed modeling of core packing. This crystal structure was determined using another rationally designed feature, a metal-binding site that permitted experimental phasing of the X-ray data. RH3 adopted the intended fold, which has not been observed previously in biological proteins. Unanticipated structural asymmetry in the trimer was a principal source of variation within the RH3 structure. The sequence of RH3 differs from that of a previously characterized right-handed tetramer, RH4, at only one position in each 11 amino acid sequence repeat. This close similarity indicates that the design method is sensitive to the core packing interactions that specify the protein structure. Comparison of the structures of RH3 and RH4 indicates that both steric overlap and cavity formation provide strong driving forces for oligomer specificity.
Actual and Idealized Crystal Field Parameterizations for the Uranium Ions in UF4
NASA Astrophysics Data System (ADS)
Gajek, Z.; Mulak, J.; Krupa, J. C.
1993-12-01
The crystal field parameters for the actual coordination symmetries of the uranium ions in UF4, C2 and C1, and for their idealizations to D2, C2v, D4, D4d, and the Archimedean antiprism point symmetries are given. They have been calculated by means of both the perturbative ab initio model and the angular overlap model and are referenced to the recent results fitted by Carnall's group. The equivalency of some different sets of parameters has been verified with the standardization procedure. The adequacy of several idealized approaches has been tested by comparison of the corresponding splitting patterns of the 3H4 ground state. Our results support the parameterization given by Carnall. Furthermore, the parameterization of the crystal field potential and the splitting diagram for the symmetryless uranium ion U(C1) are given. Having at our disposal the crystal field splittings for the two kinds of uranium ions in UF4, U(C2) and U(C1), we calculate the model plots of the paramagnetic susceptibility χ(T) and the magnetic entropy associated with the Schottky anomaly ΔS(T) for UF4.
Model-driven harmonic parameterization of the cortical surface: HIP-HOP.
Auzias, G; Lefèvre, J; Le Troter, A; Fischer, C; Perrot, M; Régis, J; Coulon, O
2013-05-01
In the context of inter-subject brain surface matching, we present a parameterization of the cortical surface constrained by a model of cortical organization. The parameterization is defined via a harmonic mapping of each hemisphere surface to a rectangular planar domain that integrates a representation of the model. As opposed to previous landmark-based registration methods, we do not match folds between individuals but instead optimize the fit between cortical sulci and specific iso-coordinate axes in the model. This strategy overcomes some limitations of sulcus-based registration techniques, such as the topological variability of sulcal landmarks across subjects. Experiments on 62 subjects with manually traced sulci are presented and compared with the results of the FreeSurfer software. The evaluation involves a measure of the dispersion of sulci together with angular and area distortions. We show that the model-based strategy can lead to a natural, efficient and very fast (less than 5 min per hemisphere) method for defining inter-subject correspondences. We discuss how this approach also reduces the problems inherent to anatomically defined landmarks and opens the way to the investigation of cortical organization through the notion of orientation and alignment of structures across the cortex.
Prediction of convective activity using a system of parasitic-nested numerical models
NASA Technical Reports Server (NTRS)
Perkey, D. J.
1976-01-01
A limited-area, three-dimensional, moist, primitive equation (PE) model is developed to test the sensitivity of quantitative precipitation forecasts to the initial relative humidity distribution. Special emphasis is placed on the squall-line region. To accomplish the desired goal, time-dependent lateral boundaries and a general convective parameterization scheme suitable for mid-latitude systems were developed. The sequential plume convective parameterization scheme presented is designed to have the versatility necessary in mid-latitudes and to be applicable for short-range forecasts. The results indicate that the scheme is able to function in the frontally forced squall-line region, in the gently rising altostratus region ahead of the approaching low center, and in the over-riding region ahead of the warm front. Three experiments are discussed.
An RBF-based reparameterization method for constrained texture mapping.
Yu, Hongchuan; Lee, Tong-Yee; Yeh, I-Cheng; Yang, Xiaosong; Li, Wenxi; Zhang, Jian J
2012-07-01
Texture mapping has long been used in computer graphics to enhance the realism of virtual scenes. To match the 3D model feature points with the corresponding pixels in a texture image, the surface parameterization must satisfy specific positional constraints. However, despite numerous research efforts, the construction of a mathematically robust, foldover-free parameterization that is subject to positional constraints continues to be a challenge. In the present paper, this foldover problem is addressed by developing a radial basis function (RBF)-based reparameterization. Given an initial 2D embedding of a 3D surface, the proposed method can reparameterize the 2D embedding into a foldover-free 2D mesh satisfying a set of user-specified constraint points. In addition, this approach is mesh-free. Therefore, generating smooth texture mapping results is possible without extra smoothing optimization.
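To illustrate the general idea of an RBF-driven, mesh-free warp of a 2D embedding (this is a generic sketch using SciPy's thin-plate-spline RBF interpolator, not the authors' method, and it does not enforce the foldover-free guarantee of the paper; the coordinates are made up):

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Source UV positions of constrained vertices and the targets they must map to
# (hypothetical values for illustration only).
src = np.array([[0.2, 0.2], [0.8, 0.2], [0.5, 0.9], [0.5, 0.5]])
dst = np.array([[0.25, 0.18], [0.78, 0.22], [0.50, 0.95], [0.52, 0.48]])

# Interpolate the displacement field with a thin-plate-spline RBF: exact at the
# constraint points (smoothing=0 by default) and smooth everywhere else.
warp = RBFInterpolator(src, dst - src, kernel="thin_plate_spline")

def reparameterize(uv):
    """Apply the RBF warp to an (n, 2) array of 2D embedding coordinates."""
    return uv + warp(uv)

uv_mesh = np.random.default_rng(0).random((10, 2))   # stand-in for an initial 2D embedding
print(reparameterize(uv_mesh)[:3])
print(np.allclose(reparameterize(src), dst))          # constraints are met exactly
```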
Lifting SU(2) spin networks to projected spin networks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dupuis, Maïté; Livine, Etera R.
2010-09-15
Projected spin network states are the canonical basis of quantum states of geometry for the recent EPRL-FK spinfoam models for quantum gravity introduced by Engle-Pereira-Rovelli-Livine and Freidel-Krasnov. They are functionals of both the Lorentz connection and the time-normal field. We analyze in detail the map from these projected spin networks to the standard SU(2) spin networks of loop quantum gravity. We show that this map is not one to one and that the corresponding ambiguity is parameterized by the Immirzi parameter. We conclude with a comparison of the scalar products between projected spin networks and SU(2) spin network states.
Extending earthquakes' reach through cascading.
Marsan, David; Lengliné, Olivier
2008-02-22
Earthquakes, whatever their size, can trigger other earthquakes. Mainshocks cause aftershocks to occur, which in turn activate their own local aftershock sequences, resulting in a cascade of triggering that extends the reach of the initial mainshock. A long-lasting difficulty is to determine which earthquakes are connected, either directly or indirectly. Here we show that this causal structure can be found probabilistically, with no a priori model nor parameterization. Large regional earthquakes are found to have a short direct influence in comparison to the overall aftershock sequence duration. Relative to these large mainshocks, small earthquakes collectively have a greater effect on triggering. Hence, cascade triggering is a key component in earthquake interactions.
Transverse momentum dependent (TMD) parton distribution functions: Status and prospects*
Angeles-Martinez, R.; Bacchetta, A.; Balitsky, Ian I.; ...
2015-01-01
In this study, we review transverse momentum dependent (TMD) parton distribution functions, their application to topical issues in high-energy physics phenomenology, and their theoretical connections with QCD resummation, evolution and factorization theorems. We illustrate the use of TMDs via examples of multi-scale problems in hadronic collisions. These include transverse momentum q_T spectra of Higgs and vector bosons at low q_T, and azimuthal correlations in the production of multiple jets associated with heavy bosons at large jet masses. We discuss computational tools for TMDs, and present the application of a new tool, TMDlib, to parton density fits and parameterizations.
Watermarking on 3D mesh based on spherical wavelet transform.
Jin, Jian-Qiu; Dai, Min-Ya; Bao, Hu-Jun; Peng, Qun-Sheng
2004-03-01
In this paper we propose a robust watermarking algorithm for 3D mesh. The algorithm is based on spherical wavelet transform. Our basic idea is to decompose the original mesh into a series of details at different scales by using spherical wavelet transform; the watermark is then embedded into the different levels of details. The embedding process includes: global sphere parameterization, spherical uniform sampling, spherical wavelet forward transform, embedding watermark, spherical wavelet inverse transform, and at last resampling the mesh watermarked to recover the topological connectivity of the original model. Experiments showed that our algorithm can improve the capacity of the watermark and the robustness of watermarking against attacks.
Multimodel Uncertainty Changes in Simulated River Flows Induced by Human Impact Parameterizations
NASA Technical Reports Server (NTRS)
Liu, Xingcai; Tang, Qiuhong; Cui, Huijuan; Mu, Mengfei; Gerten, Dieter; Gosling, Simon; Masaki, Yoshimitsu; Satoh, Yusuke; Wada, Yoshihide
2017-01-01
Human impacts increasingly affect the global hydrological cycle and indeed dominate hydrological changes in some regions. Hydrologists have sought to identify the human-impact-induced hydrological variations by parameterizing anthropogenic water uses in global hydrological models (GHMs). The consequently increased model complexity is likely to introduce additional uncertainty among GHMs. Here, using four GHMs, between-model uncertainties are quantified in terms of the ratio of signal to noise (SNR) for average river flow during 1971-2000 simulated in two experiments, with representation of human impacts (VARSOC) and without (NOSOC). It is the first quantitative investigation of between-model uncertainty resulting from the inclusion of human impact parameterizations. Results show that the between-model uncertainties in terms of SNRs in the VARSOC annual flow are larger (about 2 for global and varied magnitude for different basins) than those in the NOSOC, which are particularly significant in most areas of Asia and areas north of the Mediterranean Sea. The SNR differences are mostly negative (-20 to 5, indicating higher uncertainty) for basin-averaged annual flow. The VARSOC high flow shows slightly lower uncertainties than NOSOC simulations, with SNR differences mostly ranging from -20 to 20. The uncertainty differences between the two experiments are significantly related to the fraction of irrigated area in each basin. The large additional uncertainties in VARSOC simulations introduced by the inclusion of parameterizations of human impacts raise an urgent need for GHM development based on a better understanding of human impacts. Differences in the parameterizations of irrigation, reservoir regulation and water withdrawals are discussed as potential directions of improvement for future GHM development. We also discuss the advantages of statistical approaches to reduce the between-model uncertainties, and the importance of calibrating GHMs not only for better performance in historical simulations but also for more robust and confident future projections of hydrological changes under a changing environment.
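A minimal sketch of the signal-to-noise ratio used above, assuming SNR is defined as the ensemble mean divided by the ensemble spread across models (the abstract does not spell out the exact definition, and the flow values below are hypothetical):

```python
import numpy as np

def snr(ensemble, axis=0):
    """Signal-to-noise ratio of a multi-model ensemble: |mean| / standard deviation.

    Larger SNR means the models agree better (lower between-model uncertainty).
    """
    ensemble = np.asarray(ensemble, dtype=float)
    return np.abs(ensemble.mean(axis=axis)) / ensemble.std(axis=axis, ddof=1)

# Hypothetical basin-average annual flows (m^3/s) from four GHMs, without (NOSOC)
# and with (VARSOC) human-impact parameterizations.
nosoc  = np.array([105.0, 98.0, 110.0, 101.0])
varsoc = np.array([ 88.0, 72.0, 104.0, 81.0])
# A negative SNR difference (VARSOC minus NOSOC) indicates added uncertainty.
print(snr(nosoc), snr(varsoc), snr(varsoc) - snr(nosoc))
```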
Modeling late rectal toxicities based on a parameterized representation of the 3D dose distribution
NASA Astrophysics Data System (ADS)
Buettner, Florian; Gulliford, Sarah L.; Webb, Steve; Partridge, Mike
2011-04-01
Many models exist for predicting toxicities based on dose-volume histograms (DVHs) or dose-surface histograms (DSHs). This approach has several drawbacks: first, the reduction of the dose distribution to a histogram results in the loss of spatial information; second, the bins of the histograms are highly correlated with each other. Furthermore, some of the complex nonlinear models proposed in the past lack a direct physical interpretation and the ability to predict probabilities rather than binary outcomes. We propose a parameterized representation of the 3D distribution of the dose to the rectal wall which explicitly includes geometrical information in the form of the eccentricity of the dose distribution as well as its lateral and longitudinal extent. We use a nonlinear kernel-based probabilistic model to predict late rectal toxicity based on the parameterized dose distribution and assess its predictive power using data from the MRC RT01 trial (ISRCTN 47772397). The endpoints under consideration were rectal bleeding, loose stools, and a global toxicity score. We extract simple rules identifying 3D dose patterns related to a specifically low risk of complication. Normal tissue complication probability (NTCP) models based on parameterized representations of geometrical and volumetric measures resulted in areas under the curve (AUCs) of 0.66, 0.63 and 0.67 for predicting rectal bleeding, loose stools and global toxicity, respectively. In comparison, NTCP models based on standard DVHs performed worse and resulted in AUCs of 0.59 for all three endpoints. In conclusion, we have presented low-dimensional, interpretable and nonlinear NTCP models based on the parameterized representation of the dose to the rectal wall. These models had a higher predictive power than models based on standard DVHs and their low dimensionality allowed for the identification of 3D dose patterns related to a low risk of complication.
Holistic versus monomeric strategies for hydrological modelling of human-modified hydrosystems
NASA Astrophysics Data System (ADS)
Nalbantis, I.; Efstratiadis, A.; Rozos, E.; Kopsiafti, M.; Koutsoyiannis, D.
2011-03-01
The modelling of human-modified basins that are inadequately measured constitutes a challenge for hydrological science. Often, models for such systems are detailed and hydraulics-based for only one part of the system, while for other parts oversimplified models or rough assumptions are used. This is typically a bottom-up approach, which seeks to exploit knowledge of hydrological processes at the micro-scale for some components of the system. Also, it is a monomeric approach in two ways: first, essential interactions among system components may be poorly represented or even omitted; second, differences in the level of detail of process representation can lead to uncontrolled errors. Additionally, the calibration procedure merely accounts for the reproduction of the observed responses using typical fitting criteria. The paper aims to raise some critical issues regarding the entire modelling approach for such hydrosystems. For this, two alternative modelling strategies are examined that reflect two modelling approaches or philosophies: a dominant bottom-up approach, which is also monomeric and, very often, based on output information, and a top-down and holistic approach based on generalized information. Critical options are examined, which codify the differences between the two strategies: the representation of surface, groundwater and water management processes, the schematization and parameterization concepts, and the parameter estimation methodology. The first strategy is based on stand-alone models for surface and groundwater processes and for water management, which are employed sequentially. For each model, a different (detailed or coarse) parameterization is used, which is dictated by the hydrosystem schematization. The second strategy involves model integration for all processes, parsimonious parameterization and hybrid manual-automatic parameter optimization based on multiple objectives. A test case is examined in a hydrosystem in Greece with high complexities, such as extended surface-groundwater interactions, ill-defined boundaries, sinks to the sea and anthropogenic intervention with unmeasured abstractions both from surface water and aquifers. Criteria for comparison are the physical consistency of parameters, the reproduction of runoff hydrographs at multiple sites within the studied basin, the likelihood of uncontrolled model outputs, the required amount of computational effort and the performance within a stochastic simulation setting. Our work allows for investigating the deterioration of model performance in cases where no balanced attention is paid to all components of human-modified hydrosystems and the related information. Also, sources of errors are identified and their combined effect is evaluated.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wong, May Wai San; Ovchinnikov, Mikhail; Wang, Minghuai
Potential ways of parameterizing vertical turbulent fluxes of hydrometeors are examined using a high-resolution cloud-resolving model. The cloud-resolving model uses the Morrison microphysics scheme, which contains prognostic variables for rain, graupel, ice, and snow. A benchmark simulation of a deep convection case with a horizontal grid spacing of 250 m was carried out to evaluate three different ways of parameterizing the turbulent vertical fluxes of hydrometeors: an eddy-diffusion approximation, a quadrant-based decomposition, and a scaling method that accounts for within-quadrant (subplume) correlations. Results show that the down-gradient nature of the eddy-diffusion approximation tends to transport mass away from concentrated regions, whereas the benchmark simulation indicates that the vertical transport tends to transport mass from below the level of maximum to aloft. Unlike the eddy-diffusion approach, the quadrant-based decomposition is able to capture the signs of the flux gradient but underestimates the magnitudes. The scaling approach is shown to perform the best by accounting for within-quadrant correlations, and improves the results for all hydrometeors except for snow. A sensitivity study is performed to examine how vertical transport may affect the microphysics of the hydrometeors. The vertical transport of each hydrometeor type is artificially suppressed in each test. Results from the sensitivity tests show that cloud-droplet-related processes are most sensitive to suppressed rain or graupel transport. In particular, suppressing rain or graupel transport has a strong impact on the production of snow and ice aloft. Lastly, a viable subgrid-scale hydrometeor transport scheme in an assumed probability density function parameterization is discussed.
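As a generic illustration of the quadrant idea (not the exact decomposition or scaling method of the study above), the resolved vertical flux of a hydrometeor can be split into contributions from the four sign combinations of the velocity and mixing-ratio perturbations, which sum back to the total; the synthetic fields below are placeholders:

```python
import numpy as np

def quadrant_flux(w, q):
    """Decompose a resolved vertical flux <w'q'> into four quadrant contributions.

    w, q: 1-D arrays of vertical velocity and hydrometeor mixing ratio sampled
    over one horizontal model level. Quadrants are defined by the signs of the
    perturbations (updraft/downdraft x positive/negative anomaly).
    """
    wp, qp = w - w.mean(), q - q.mean()
    total = np.mean(wp * qp)
    masks = {
        "up_moist":   (wp > 0) & (qp > 0),
        "up_dry":     (wp > 0) & (qp <= 0),
        "down_moist": (wp <= 0) & (qp > 0),
        "down_dry":   (wp <= 0) & (qp <= 0),
    }
    quadrants = {name: np.sum(wp[m] * qp[m]) / wp.size for name, m in masks.items()}
    return total, quadrants

rng = np.random.default_rng(1)
w = rng.normal(0.0, 1.0, 10000)
q = 0.5 * w + rng.normal(0.0, 1.0, 10000)   # synthetic, positively correlated fields
total, parts = quadrant_flux(w, q)
print(total, sum(parts.values()))            # the quadrant contributions sum to the total flux
```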
Multi-Level Adaptation in End-User Development of 3D Virtual Chemistry Experiments
ERIC Educational Resources Information Center
Liu, Chang; Zhong, Ying
2014-01-01
Multi-level adaptation in end-user development (EUD) is an effective way to enable non-technical end users such as educators to gradually introduce more functionality with increasing complexity to 3D virtual learning environments developed by themselves using EUD approaches. Parameterization, integration, and extension are three levels of…
The EPA/ORD National Exposure Research Lab's (NERL) UA/SA/PE research program addresses both tactical and strategic needs in direct support of ORD's client base. The design represents an integrated approach in achieving the highest levels of quality assurance in environmental de...
Hillslope threshold response to rainfall: (2) development and use of a macroscale model
Chris B. Graham; Jeffrey J. McDonnell
2010-01-01
Hillslope hydrological response to precipitation is extremely complex and poorly modeled. One possible approach for reducing the complexity of hillslope response and its mathematical parameterization is to look for macroscale hydrological behavior. Hillslope threshold response to storm precipitation is one such macroscale behavior observed at field sites across the...
Data-driven RBE parameterization for helium ion beams
NASA Astrophysics Data System (ADS)
Mairani, A.; Magro, G.; Dokic, I.; Valle, S. M.; Tessonnier, T.; Galm, R.; Ciocca, M.; Parodi, K.; Ferrari, A.; Jäkel, O.; Haberer, T.; Pedroni, P.; Böhlen, T. T.
2016-01-01
Helium ion beams are expected to be available again in the near future for clinical use. A suitable formalism to obtain relative biological effectiveness (RBE) values for treatment planning (TP) studies is needed. In this work we developed a data-driven RBE parameterization based on published in vitro experimental values. The RBE parameterization has been developed within the framework of the linear-quadratic (LQ) model as a function of the helium linear energy transfer (LET), dose and the tissue-specific parameter $$(\alpha/\beta)_{\mathrm{ph}}$$ of the LQ model for the reference radiation. Analytic expressions are provided, derived from the collected database, describing the ratios $$\mathrm{RBE}_{\alpha}=\alpha_{\mathrm{He}}/\alpha_{\mathrm{ph}}$$ and $$R_{\beta}=\beta_{\mathrm{He}}/\beta_{\mathrm{ph}}$$ as a function of LET. Calculated RBE values at 2 Gy photon dose and at 10% survival ($$\mathrm{RBE}_{10}$$) are compared with the experimental ones. Pearson's correlation coefficients were, respectively, 0.85 and 0.84, confirming the soundness of the introduced approach. Moreover, due to the lack of experimental data at low LET, clonogenic experiments have been performed irradiating the A549 cell line with $$(\alpha/\beta)_{\mathrm{ph}}=5.4$$ Gy at the entrance of a 56.4 MeV/u helium beam at the Heidelberg Ion Beam Therapy Center. The proposed parameterization reproduces the measured cell survival within the experimental uncertainties. An RBE formula, which depends only on dose, LET and $$(\alpha/\beta)_{\mathrm{ph}}$$ as input parameters, is proposed, allowing a straightforward implementation in a TP system.
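A minimal sketch of how an RBE at 10% survival follows from the LQ model: the survival curve S(D) = exp(-(alpha*D + beta*D^2)) is inverted for the dose giving S = 0.1 for each radiation, and the ratio of photon to helium dose is the RBE. The alpha and beta values below are placeholders, not values from the published database or the fitted parameterization.

```python
import numpy as np

def dose_at_survival(alpha, beta, survival=0.1):
    """Dose giving a target survival in the LQ model S = exp(-(alpha*D + beta*D^2))."""
    target = -np.log(survival)                  # alpha*D + beta*D^2 = -ln(S)
    return (-alpha + np.sqrt(alpha**2 + 4.0 * beta * target)) / (2.0 * beta)

def rbe_at_survival(alpha_ph, beta_ph, alpha_he, beta_he, survival=0.1):
    """RBE at a given survival level: photon dose divided by helium dose."""
    return dose_at_survival(alpha_ph, beta_ph, survival) / dose_at_survival(alpha_he, beta_he, survival)

# Placeholder LQ parameters (Gy^-1, Gy^-2), for illustration only.
print(rbe_at_survival(alpha_ph=0.15, beta_ph=0.05, alpha_he=0.30, beta_he=0.05))
```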
Multisite Evaluation of APEX for Water Quality: I. Best Professional Judgment Parameterization.
Baffaut, Claire; Nelson, Nathan O; Lory, John A; Senaviratne, G M M M Anomaa; Bhandari, Ammar B; Udawatta, Ranjith P; Sweeney, Daniel W; Helmers, Matt J; Van Liew, Mike W; Mallarino, Antonio P; Wortmann, Charles S
2017-11-01
The Agricultural Policy Environmental eXtender (APEX) model is capable of estimating edge-of-field water, nutrient, and sediment transport and is used to assess the environmental impacts of management practices. The current practice is to fully calibrate the model for each site simulation, a task that requires resources and data not always available. The objective of this study was to compare model performance for flow, sediment, and phosphorus transport under two parameterization schemes: a best professional judgment (BPJ) parameterization based on readily available data and a fully calibrated parameterization based on site-specific soil, weather, event flow, and water quality data. The analysis was conducted using 12 datasets at four locations representing poorly drained soils and row-crop production under different tillage systems. Model performance was based on the Nash-Sutcliffe efficiency (NSE), the coefficient of determination (R2), and the regression slope between simulated and measured annualized loads across all site years. Although the BPJ model performance for flow was acceptable (NSE = 0.7) at the annual time step, calibration improved it (NSE = 0.9). Acceptable simulation of sediment and total phosphorus transport (NSE = 0.5 and 0.9, respectively) was obtained only after full calibration at each site. Given the unacceptable performance of the BPJ approach, uncalibrated use of APEX for planning or management purposes may be misleading. Model calibration with water quality data prior to using APEX for simulating sediment and total phosphorus loss is essential.
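For reference, the Nash-Sutcliffe efficiency used above is a standard goodness-of-fit statistic; a minimal sketch of its computation (the load values are hypothetical):

```python
import numpy as np

def nash_sutcliffe(observed, simulated):
    """Nash-Sutcliffe efficiency: 1 - SSE / variance of the observations.

    NSE = 1 is a perfect fit; NSE <= 0 means the model performs no better than
    simply predicting the observed mean.
    """
    observed = np.asarray(observed, dtype=float)
    simulated = np.asarray(simulated, dtype=float)
    return 1.0 - np.sum((observed - simulated) ** 2) / np.sum((observed - observed.mean()) ** 2)

# Hypothetical annualized loads (e.g. kg/ha) across a few site-years.
obs = np.array([12.0, 3.5, 8.2, 20.1, 5.4])
sim = np.array([10.5, 4.0, 9.0, 18.0, 6.1])
print(nash_sutcliffe(obs, sim))
```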
On parameterization of the inverse problem for estimating aquifer properties using tracer data
NASA Astrophysics Data System (ADS)
Kowalsky, M. B.; Finsterle, S.; Williams, K. H.; Murray, C.; Commer, M.; Newcomer, D.; Englert, A.; Steefel, C. I.; Hubbard, S. S.
2012-06-01
In developing a reliable approach for inferring hydrological properties through inverse modeling of tracer data, decisions made on how to parameterize heterogeneity (i.e., how to represent a heterogeneous distribution using a limited number of parameters that are amenable to estimation) are of paramount importance, as errors in the model structure are partly compensated for by estimating biased property values during the inversion. These biased estimates, while potentially providing an improved fit to the calibration data, may lead to wrong interpretations and conclusions and reduce the ability of the model to make reliable predictions. We consider the estimation of spatial variations in permeability and several other parameters through inverse modeling of tracer data, specifically synthetic and actual field data associated with the 2007 Winchester experiment from the Department of Energy Rifle site. Characterization is challenging due to the real-world complexities associated with field experiments in such a dynamic groundwater system. Our aim is to highlight and quantify the impact on inversion results of various decisions related to parameterization, such as the positioning of pilot points in a geostatistical parameterization; the handling of up-gradient regions; the inclusion of zonal information derived from geophysical data or core logs; extension from 2-D to 3-D; assumptions regarding the gradient direction, porosity, and the semivariogram function; and deteriorating experimental conditions. This work adds to the relatively limited number of studies that offer guidance on the use of pilot points in complex real-world experiments involving tracer data (as opposed to hydraulic head data).
NASA Astrophysics Data System (ADS)
Cipriani, L.; Fantini, F.; Bertacchi, S.
2014-06-01
Image-based modelling tools based on SfM algorithms have gained great popularity, since several software houses have provided applications able to produce 3D textured models easily and automatically. The aim of this paper is to point out the importance of controlling the model parameterization process, considering that the automatic solutions included in these modelling tools can produce poor results in terms of texture utilization. In order to achieve better-quality textured models from image-based modelling applications, this research presents a series of practical strategies aimed at providing a better balance between the geometric resolution of models from passive sensors and their corresponding (u,v) map reference systems. This aspect is essential for achieving a high-quality 3D representation, since "apparent colour" is a fundamental aspect in the field of Cultural Heritage documentation. Complex meshes without a native parameterization have to be "flattened" or "unwrapped" in the (u,v) parameter space, with the main objective of mapping them with a single image. This result can be obtained using two different strategies: the former automatic and fast, the latter manual and time-consuming. Reverse modelling applications provide automatic solutions based on splitting the models by means of different algorithms, which produce a sort of "atlas" of the original model in the parameter space; in many instances this is not adequate and negatively affects the overall quality of the representation. Using different solutions in synergy, ranging from semantics-aware modelling techniques to quad-dominant meshes achieved using retopology tools, it is possible to obtain complete control of the parameterization process.
Thermodynamic properties for applications in chemical industry via classical force fields.
Guevara-Carrion, Gabriela; Hasse, Hans; Vrabec, Jadran
2012-01-01
Thermodynamic properties of fluids are of key importance for the chemical industry. Presently, the fluid property models used in process design and optimization are mostly equations of state or G^E models, which are parameterized using experimental data. Molecular modeling and simulation based on classical force fields is a promising alternative route, which in many cases reasonably complements the well-established methods. This chapter gives an introduction to the state-of-the-art in this field regarding molecular models, simulation methods, and tools. Attention is given to the way modeling and simulation on the scale of molecular force fields interact with other scales, which is mainly by parameter inheritance. Parameters for molecular force fields are determined both bottom-up from quantum chemistry and top-down from experimental data. Commonly used functional forms for describing the intra- and intermolecular interactions are presented. Several approaches for ab initio to empirical force field parameterization are discussed. Some transferable force field families, which are frequently used in chemical engineering applications, are described. Furthermore, some examples of force fields that were parameterized for specific molecules are given. Molecular dynamics and Monte Carlo methods for the calculation of transport properties and vapor-liquid equilibria are introduced. Two case studies are presented. First, using liquid ammonia as an example, the capabilities of semi-empirical force fields, parameterized on the basis of quantum chemical information and experimental data, are discussed with respect to thermodynamic properties that are relevant for the chemical industry. Second, the ability of molecular simulation methods to describe accurately vapor-liquid equilibrium properties of binary mixtures containing CO2 is shown.
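As a generic example of the kind of nonbonded functional form such force fields commonly use (a Lennard-Jones 12-6 term plus a Coulomb term; the parameter values below are placeholders, not a fitted force field):

```python
import numpy as np

# Coulomb prefactor 1/(4*pi*eps0) in kJ mol^-1 nm e^-2, a commonly used convention.
KE = 138.935458

def nonbonded_pair_energy(r, sigma, epsilon, q_i, q_j):
    """Generic nonbonded pair energy: Lennard-Jones 12-6 plus Coulomb.

    U(r) = 4*eps*((sigma/r)^12 - (sigma/r)^6) + KE*q_i*q_j/r
    with r and sigma in nm, epsilon in kJ/mol, charges in elementary charges.
    """
    sr6 = (sigma / r) ** 6
    return 4.0 * epsilon * (sr6 ** 2 - sr6) + KE * q_i * q_j / r

r = np.linspace(0.25, 1.2, 5)
# Placeholder parameters for an illustrative polar site pair.
print(nonbonded_pair_energy(r, sigma=0.32, epsilon=0.65, q_i=0.4, q_j=-0.4))
```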
Parameterizing deep convection using the assumed probability density function method
Storer, R. L.; Griffin, B. M.; Höft, J.; ...
2014-06-11
Due to their coarse horizontal resolution, present-day climate models must parameterize deep convection. This paper presents single-column simulations of deep convection using a probability density function (PDF) parameterization. The PDF parameterization predicts the PDF of subgrid variability of turbulence, clouds, and hydrometeors. That variability is interfaced to a prognostic microphysics scheme using a Monte Carlo sampling method. The PDF parameterization is used to simulate tropical deep convection, the transition from shallow to deep convection over land, and mid-latitude deep convection. These parameterized single-column simulations are compared with 3-D reference simulations. The agreement is satisfactory except when the convective forcing is weak. The same PDF parameterization is also used to simulate shallow cumulus and stratocumulus layers. The PDF method is sufficiently general to adequately simulate these five deep, shallow, and stratiform cloud cases with a single equation set. This raises hopes that it may be possible in the future, with further refinements at coarse time step and grid spacing, to parameterize all cloud types in a large-scale model in a unified way.
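To illustrate why Monte Carlo sampling of an assumed subgrid PDF matters when it is interfaced to nonlinear microphysics (a generic toy example, not the scheme of the paper: the PDF here is a single lognormal and the process rate is a simple Kessler-type threshold rule, with placeholder numbers):

```python
import numpy as np

def autoconversion(qc, qc_crit=5e-4, rate=1e-3):
    """Stand-in nonlinear process rate: proportional to cloud water above a threshold."""
    return rate * np.maximum(qc - qc_crit, 0.0)

rng = np.random.default_rng(0)
mean_qc, rel_std = 4e-4, 0.8                      # assumed grid-mean cloud water and subgrid variability
sigma = np.sqrt(np.log(1.0 + rel_std ** 2))       # lognormal shape parameter
mu = np.log(mean_qc) - 0.5 * sigma ** 2           # lognormal location parameter (preserves the mean)
samples = rng.lognormal(mu, sigma, 10000)         # Monte Carlo subcolumns drawn from the assumed PDF

print(autoconversion(mean_qc))                    # grid-mean-only rate: zero (mean is below the threshold)
print(autoconversion(samples).mean())             # PDF-sampled rate: nonzero, some subcolumns exceed the threshold
```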
NASA Astrophysics Data System (ADS)
Dörr, Dominik; Schirmaier, Fabian J.; Henning, Frank; Kärger, Luise
2017-10-01
Finite Element (FE) forming simulation offers the possibility of a detailed analysis of the deformation behavior of multilayered thermoplastic blanks during forming, considering material behavior and process conditions. Rate-dependent bending behavior is a material characteristic which has so far not been considered in FE forming simulation of pre-impregnated, continuously fiber-reinforced polymers (CFRPs). Therefore, an approach for modeling viscoelastic bending behavior in FE composite forming simulation is presented in this work. The presented approach accounts for the distinct rate-dependent bending behavior of, e.g., thermoplastic CFRPs at process conditions. The approach is based on a Voigt-Kelvin (VK) and a generalized Maxwell (GM) model, implemented in several user subroutines of the commercially available FE solver Abaqus within an FE forming simulation framework. The VK, GM, as well as purely elastic bending modeling approaches are parameterized according to dynamic bending characterization results for a PA6-CF UD-tape. It is found that only the GM approach is capable of representing the bending deformation characteristic for all of the considered bending deformation rates. The parameterized bending modeling approaches are applied to a hemisphere test and to a generic geometry. A comparison of the forming simulation results of the generic geometry to experimental tests shows good agreement between simulation and experiments. Furthermore, the simulation results reveal that especially a correct modeling of the initial bending stiffness is relevant for the prediction of wrinkling behavior, as a similar onset of wrinkles is observed for the GM, the VK and an elastic approach, fitted to the stiffness observed in the dynamic rheometer test for low curvatures. Hence, characterization and modeling of rate-dependent bending behavior is crucial for FE forming simulation of thermoplastic CFRPs.
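To show the rate dependence a generalized Maxwell (Prony series) model produces, the sketch below integrates each Maxwell branch exactly over a time step at constant strain rate; the stiffnesses and relaxation times are placeholders, not the PA6-CF parameters, and this is a 1-D illustration rather than the Abaqus user-subroutine implementation.

```python
import numpy as np

def gm_stress(strain_rate, t_end, dt=1e-3, E_inf=1.0, branches=((5.0, 0.01), (2.0, 0.1))):
    """Stress of a generalized Maxwell model under constant strain rate.

    branches: (E_i, tau_i) pairs for the Maxwell elements; E_inf is the
    equilibrium stiffness. Each branch obeys d(sigma_i)/dt = E_i*rate - sigma_i/tau_i
    and is integrated exactly over each step for a constant rate.
    """
    n = int(round(t_end / dt))
    sigma_branches = np.zeros(len(branches))
    stress = 0.0
    for k in range(1, n + 1):
        for i, (E_i, tau_i) in enumerate(branches):
            decay = np.exp(-dt / tau_i)
            sigma_branches[i] = sigma_branches[i] * decay + E_i * strain_rate * tau_i * (1.0 - decay)
        stress = E_inf * (k * dt * strain_rate) + sigma_branches.sum()
    return stress

# Same final strain (0.05) reached at two rates: the faster deformation is stiffer.
print(gm_stress(strain_rate=0.1, t_end=0.5))
print(gm_stress(strain_rate=1.0, t_end=0.05))
```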
NASA Astrophysics Data System (ADS)
Nunes, João Pedro; Catarina Simões Vieira, Diana; Keizer, Jan Jacob
2017-04-01
Fires impact soil hydrological properties, enhancing soil water repellency and therefore increasing the potential for surface runoff generation and soil erosion. In consequence, the successful application of hydrological models to post-fire conditions requires the appropriate simulation of the effects of soil water repellency on soil hydrology. This work compared three approaches to model soil water repellency impacts on soil hydrology in burnt eucalypt and pine forest slopes in central Portugal. 1) Daily approach: simulating repellency as a function of soil moisture, and influencing the maximum soil available water holding capacity. It is based on the Thornthwaite-Mather soil water modelling approach and is parameterized with the soil's wilting point and field capacity, and a parameter relating soil water repellency to water holding capacity. It was tested with soil moisture data from burnt and unburnt hillslopes. This approach was able to simulate post-fire soil moisture patterns, which the model without repellency was unable to do. However, model parameters differed between the burnt and unburnt slopes, indicating that more research is needed to derive standardized parameters from commonly measured soil and vegetation properties. 2) Seasonal approach: pre-determining repellency at the seasonal scale (3 months) in four classes (from none to extreme). It is based on the Morgan-Morgan-Finney (MMF) runoff and erosion model, applied at the seasonal scale, and is parameterized with a parameter relating repellency class to field capacity. It was tested with runoff and erosion data from several experimental plots, and led to important improvements in runoff prediction over an approach with constant field capacity for all seasons (calibrated for repellency effects), but only slight improvements in erosion predictions. In contrast with the daily approach, the parameters could be reproduced between different sites. 3) Constant approach: specifying values for soil water repellency for the three years after the fire, and keeping them constant throughout the year. It is based on a daily Curve Number (CN) approach, was incorporated directly in the Soil and Water Assessment Tool (SWAT) model, and was tested with erosion data from a burnt hillslope. This approach was able to successfully reproduce soil erosion. The results indicate that simplified approaches can be used to adapt existing models for post-fire simulation, taking repellency into account. Taking into account the seasonality of repellency seems more important for simulating surface runoff than erosion, possibly because simulating the larger runoff rates correctly is sufficient for erosion simulation. The constant approach can be applied directly in the parameterization of existing runoff and erosion models for soil loss and sediment yield prediction, while the seasonal approach can readily be developed as a next step, with further work needed to assess whether the approach and associated parameters can be applied in multiple post-fire environments.
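For reference, the daily Curve Number approach mentioned above computes event runoff from rainfall with the standard SCS-CN relation; the sketch below uses it with a simple "raise CN after fire" adjustment as an illustration of the idea (the CN values are placeholders, not the calibrated SWAT parameterization of the study):

```python
def scs_runoff(p_mm, cn):
    """Daily surface runoff (mm) from rainfall P via the standard SCS Curve Number relation.

    S  = 25400/CN - 254          potential retention (mm)
    Ia = 0.2*S                   initial abstraction
    Q  = (P - Ia)^2 / (P - Ia + S)   for P > Ia, else 0
    """
    s = 25400.0 / cn - 254.0
    ia = 0.2 * s
    if p_mm <= ia:
        return 0.0
    return (p_mm - ia) ** 2 / (p_mm - ia + s)

print(scs_runoff(40.0, cn=70))   # unburnt-like conditions (placeholder CN)
print(scs_runoff(40.0, cn=85))   # higher CN as a crude stand-in for repellency -> more runoff
```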
Toward Predictive Theories of Nuclear Reactions Across the Isotopic Chart: Web Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Escher, J. E.; Blackmon, J.; Elster, C.
Recent years have seen exciting new developments and progress in nuclear structure theory, reaction theory, and experimental techniques that allow us to move towards a description of exotic systems and environments, setting the stage for new discoveries. The purpose of the 5-week program was to bring together physicists from the low-energy nuclear structure and reaction communities to identify avenues for achieving reliable and predictive descriptions of reactions involving nuclei across the isotopic chart. The 4-day embedded workshop focused on connecting theory developments to experimental advances and data needs for astrophysics and other applications. Nuclear theory must address phenomena from laboratory experiments to stellar environments, from stable nuclei to weakly-bound and exotic isotopes. Expanding the reach of theory to these regimes requires a comprehensive understanding of the reaction mechanisms involved as well as detailed knowledge of nuclear structure. A recurring theme throughout the program was the desire to produce reliable predictions rooted in either ab initio or microscopic approaches. At the same time it was recognized that some applications involving heavy nuclei away from stability, e.g. those involving fission fragments, may need to rely on simple parameterizations of incomplete data for the foreseeable future. The goal here, however, is to subsequently improve and refine the descriptions, moving to phenomenological, then microscopic approaches. There was overarching consensus that future work should also focus on reliable estimates of errors in theoretical descriptions.
Gahm, Jin Kyu; Shi, Yonggang
2018-01-01
Surface mapping methods play an important role in various brain imaging studies from tracking the maturation of adolescent brains to mapping gray matter atrophy patterns in Alzheimer’s disease. Popular surface mapping approaches based on spherical registration, however, have inherent numerical limitations when severe metric distortions are present during the spherical parameterization step. In this paper, we propose a novel computational framework for intrinsic surface mapping in the Laplace-Beltrami (LB) embedding space based on Riemannian metric optimization on surfaces (RMOS). Given a diffeomorphism between two surfaces, an isometry can be defined using the pullback metric, which in turn results in identical LB embeddings from the two surfaces. The proposed RMOS approach builds upon this mathematical foundation and achieves general feature-driven surface mapping in the LB embedding space by iteratively optimizing the Riemannian metric defined on the edges of triangular meshes. At the core of our framework is an optimization engine that converts an energy function for surface mapping into a distance measure in the LB embedding space, which can be effectively optimized using gradients of the LB eigen-system with respect to the Riemannian metrics. In the experimental results, we compare the RMOS algorithm with spherical registration using large-scale brain imaging data, and show that RMOS achieves superior performance in the prediction of hippocampal subfields and cortical gyral labels, and the holistic mapping of striatal surfaces for the construction of a striatal connectivity atlas from substantia nigra. PMID:29574399
NASA Astrophysics Data System (ADS)
Brenner, F.; Hoffmann, P.; Marwan, N.
2016-12-01
Infectious diseases are a major threat to human health. The spreading of airborne diseases has become fast and hard to predict. Global air travel has created a network that allows a pathogen to migrate worldwide in only a few days. Pandemics of SARS (2002/03) and H1N1 (2009) have impressively shown the epidemiological danger in a strongly connected world. In this study we simulate the outbreak of an airborne infectious disease that is directly transmitted from human to human. We use a regular Susceptible-Infected-Recovered (SIR) model and a modified Susceptible-Exposed-Infected-Recovered (SEIR) compartmental approach on the basis of a complex network built from global air traffic data (from openflights.org). Local disease propagation is modeled with a global population dataset (from SEDAC and MaxMind) and parameterizations of human behavior regarding mobility, contacts and awareness. As a final component we combine the worldwide outbreak simulation with daily averaged climate data from WATCH-Forcing-Data-ERA-Interim (WFDEI) and the Coupled Model Intercomparison Project Phase 5 (CMIP5). Here we focus on influenza-like illnesses (ILI), whose transmission rate has a dependency on relative humidity and temperature. Even small changes in relative humidity are sufficient to trigger significant differences in the global outbreak behavior. Apart from the direct effect of climate change on the transmission of airborne diseases, there are indirect ramifications that alter spreading patterns. For example, seasonally changing human mobility is influenced by climate settings.
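A minimal single-population sketch of the SIR compartmental model with an illustrative humidity dependence of the transmission rate (the linear beta(RH) form and all numbers are placeholders, not the network model or climate coupling of the study):

```python
def sir_step(s, i, r, beta, gamma, dt):
    """One forward-Euler step of the SIR equations for population fractions."""
    new_inf = beta * s * i * dt
    new_rec = gamma * i * dt
    return s - new_inf, i + new_inf - new_rec, r + new_rec

def run_sir(beta=0.4, gamma=0.2, i0=1e-4, days=200, dt=0.1):
    """Return the peak infected fraction and the final epidemic size."""
    s, i, r = 1.0 - i0, i0, 0.0
    peak = 0.0
    for _ in range(int(days / dt)):
        s, i, r = sir_step(s, i, r, beta, gamma, dt)
        peak = max(peak, i)
    return peak, r

# Illustrative humidity dependence: drier air -> larger transmission rate (placeholder form).
for rh in (0.3, 0.6, 0.9):
    beta = 0.25 + 0.3 * (1.0 - rh)
    print(rh, run_sir(beta=beta))
```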
Yue, Xu; Mickley, Loretta J.; Logan, Jennifer A.; Kaplan, Jed O.
2013-01-01
We estimate future wildfire activity over the western United States during the mid-21st century (2046–2065), based on results from 15 climate models following the A1B scenario. We develop fire prediction models by regressing meteorological variables from the current and previous years together with fire indexes onto observed regional area burned. The regressions explain 0.25–0.60 of the variance in observed annual area burned during 1980–2004, depending on the ecoregion. We also parameterize daily area burned with temperature, precipitation, and relative humidity. This approach explains ~0.5 of the variance in observed area burned over forest ecoregions but shows no predictive capability in the semi-arid regions of Nevada and California. By applying the meteorological fields from 15 climate models to our fire prediction models, we quantify the robustness of our wildfire projections at mid-century. We calculate increases of 24–124% in area burned using regressions and 63–169% with the parameterization. Our projections are most robust in the southwestern desert, where all GCMs predict significant (p<0.05) meteorological changes. For forested ecoregions, more GCMs predict significant increases in future area burned with the parameterization than with the regressions, because the latter approach is sensitive to hydrological variables that show large inter-model variability in the climate projections. The parameterization predicts that the fire season lengthens by 23 days in the warmer and drier climate at mid-century. Using a chemical transport model, we find that wildfire emissions will increase summertime surface organic carbon aerosol over the western United States by 46–70% and black carbon by 20–27% at midcentury, relative to the present day. The pollution is most enhanced during extreme episodes: above the 84th percentile of concentrations, OC increases by ~90% and BC by ~50%, while visibility decreases from 130 km to 100 km in 32 Federal Class 1 areas in Rocky Mountains Forest. PMID:24015109
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sanyal, Tanmoy; Shell, M. Scott, E-mail: shell@engineering.ucsb.edu
Bottom-up multiscale techniques are frequently used to develop coarse-grained (CG) models for simulations at extended length and time scales but are often limited by a compromise between computational efficiency and accuracy. The conventional approach to CG nonbonded interactions uses pair potentials which, while computationally efficient, can neglect the inherently multibody contributions of the local environment of a site to its energy, due to degrees of freedom that were coarse-grained out. This effect often causes the CG potential to depend strongly on the overall system density, composition, or other properties, which limits its transferability to states other than the one at which it was parameterized. Here, we propose to incorporate multibody effects into CG potentials through additional nonbonded terms, beyond pair interactions, that depend in a mean-field manner on local densities of different atomic species. This approach is analogous to embedded atom and bond-order models that seek to capture multibody electronic effects in metallic systems. We show that the relative entropy coarse-graining framework offers a systematic route to parameterizing such local density potentials. We then characterize this approach in the development of implicit solvation strategies for interactions between model hydrophobes in an aqueous environment.
NASA Astrophysics Data System (ADS)
Farquharson, C.; Long, J.; Lu, X.; Lelievre, P. G.
2017-12-01
Real-life geology is complex, and so, even when allowing for the diffusive, low resolution nature of geophysical electromagnetic methods, we need Earth models that can accurately represent this complexity when modelling and inverting electromagnetic data. This is particularly the case for the scales, detail and conductivity contrasts involved in mineral and hydrocarbon exploration and development, but also for the larger scale of lithospheric studies. Unstructured tetrahedral meshes provide a flexible means of discretizing a general, arbitrary Earth model. This is important when wanting to integrate a geophysical Earth model with a geological Earth model parameterized in terms of surfaces. Finite-element and finite-volume methods can be derived for computing the electric and magnetic fields in a model parameterized using an unstructured tetrahedral mesh. A number of such variants have been proposed and have proven successful. However, the efficiency and accuracy of these methods can be affected by the "quality" of the tetrahedral discretization, that is, how many of the tetrahedral cells in the mesh are long, narrow and pointy. This is particularly the case if one wants to use an iterative technique to solve the resulting linear system of equations. One approach to deal with this issue is to develop sophisticated model and mesh building and manipulation capabilities in order to ensure that any mesh built from geological information is of sufficient quality for the electromagnetic modelling. Another approach is to investigate other methods of synthesizing the electromagnetic fields. One such example is a "meshfree" approach in which the electromagnetic fields are synthesized using a mesh that is distinct from the mesh used to parameterized the Earth model. There are then two meshes, one describing the Earth model and one used for the numerical mathematics of computing the fields. This means that there are no longer any quality requirements on the model mesh, which makes the process of building a geophysical Earth model from a geological model much simpler. In this presentation we will explore the issues that arise when working with realistic Earth models and when synthesizing geophysical electromagnetic data for them. We briefly consider meshfree methods as a possible means of alleviating some of these issues.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maurer, K. D.; Bohrer, G.; Kenny, W. T.
Surface roughness parameters, namely the roughness length and displacement height, are an integral input used to model surface fluxes. However, most models assume these parameters to be a fixed property of plant functional type and disregard the governing structural heterogeneity and dynamics. In this study, we use large-eddy simulations to explore, in silico, the effects of canopy-structure characteristics on surface roughness parameters. We performed a virtual experiment to test the sensitivity of resolved surface roughness to four axes of canopy structure: (1) leaf area index, (2) the vertical profile of leaf density, (3) canopy height, and (4) canopy gap fraction. We found roughness parameters to be highly variable, but uncovered positive relationships between displacement height and maximum canopy height, aerodynamic canopy height and maximum canopy height and leaf area index, and eddy-penetration depth and gap fraction. We also found negative relationships between aerodynamic canopy height and gap fraction, as well as between eddy-penetration depth and maximum canopy height and leaf area index. We generalized our model results into a virtual "biometric" parameterization that relates roughness length and displacement height to canopy height, leaf area index, and gap fraction. Using a decade of wind and canopy-structure observations at a site in Michigan, we tested the effectiveness of our model-driven biometric parameterization approach in predicting the friction velocity over heterogeneous and disturbed canopies. We compared the accuracy of these predictions with the friction-velocity predictions obtained from the common simple approximation related to canopy height, the values calculated with large-eddy simulations of the explicit canopy structure as measured by airborne and ground-based lidar, two other parameterization approaches that utilize varying canopy-structure inputs, and the annual and decadal means of the surface roughness parameters at the site from meteorological observations. We found that the classical representation of constant roughness parameters (in space and time) as a fraction of canopy height performed relatively well. Nonetheless, of the approaches we tested, most of the empirical approaches that incorporate seasonal and interannual variation of roughness length and displacement height as a function of the dynamics of canopy structure produced more precise and less biased estimates for friction velocity than models with temporally invariable parameters.
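A minimal sketch of the contrast the abstract draws, in Python: the classical constant-fraction rule for displacement height and roughness length versus a hypothetical structure-dependent form driven by canopy height, leaf area index, and gap fraction. The biometric function and its coefficients are placeholders, not the fitted parameterization of the study.

def classical_roughness(h_canopy):
    """Constant-fraction rule: d ~ 2/3 of canopy height, z0 ~ 1/10 (common textbook values)."""
    d = 0.67 * h_canopy    # displacement height [m]
    z0 = 0.10 * h_canopy   # roughness length [m]
    return d, z0

def biometric_roughness(h_canopy, lai, gap_fraction, a=0.65, b=0.05, c=0.3):
    """Hypothetical structure-dependent form: denser, more closed canopies raise d.
    Coefficients a, b, c are placeholders, not fitted values."""
    d = a * h_canopy * (1.0 - gap_fraction) * (lai / (1.0 + lai))
    z0 = b * h_canopy * (1.0 + c * gap_fraction)
    return d, z0

print(classical_roughness(20.0))
print(biometric_roughness(20.0, lai=4.0, gap_fraction=0.2))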
A stochastic parameterization for deep convection using cellular automata
NASA Astrophysics Data System (ADS)
Bengtsson, L.; Steinheimer, M.; Bechtold, P.; Geleyn, J.
2012-12-01
Cumulus parameterizations used in most operational weather and climate models today are based on the mass-flux concept which took form in the early 1970s. In such schemes it is assumed that a unique relationship exists between the ensemble average of the sub-grid convection and the instantaneous state of the atmosphere in a vertical grid box column. However, such a relationship is unlikely to be described by a simple deterministic function (Palmer, 2011). Thus, because of the statistical nature of the parameterization challenge, it has been recognized by the community that it is important to introduce stochastic elements to the parameterizations (for instance: Plant and Craig, 2008, Khouider et al. 2010, Frenkel et al. 2011, Bengtsson et al. 2011, but the list is far from exhaustive). There are undoubtedly many ways in which stochasticity can enter new developments. In this study we use a two-way interacting cellular automaton (CA), as its intrinsic nature possesses many qualities interesting for deep convection parameterization. In the one-dimensional entraining plume approach, there is no parameterization of horizontal transport of heat, moisture or momentum due to cumulus convection. In reality, mass transport due to gravity waves that propagate in the horizontal can trigger new convection, important for the organization of deep convection (Huang, 1988). The self-organizational characteristics of the CA allow for lateral communication between adjacent NWP model grid-boxes, and temporal memory. Thus the CA scheme used in this study contains three interesting components for the representation of cumulus convection which are not present in the traditional one-dimensional bulk entraining plume method: horizontal communication, memory and stochasticity. The scheme is implemented in the high-resolution regional NWP model ALARO, and simulations show enhanced organization of convective activity along squall lines. Probabilistic evaluation demonstrates an enhanced spread in large-scale variables in regions where convective activity is large. A two-month extended evaluation of the deterministic behaviour of the scheme indicates a neutral impact on forecast skill. References: Bengtsson, L., H. Körnich, E. Källén, and G. Svensson, 2011: Large-scale dynamical response to sub-grid scale organization provided by cellular automata. Journal of the Atmospheric Sciences, 68, 3132-3144. Frenkel, Y., A. Majda, and B. Khouider, 2011: Using the stochastic multicloud model to improve tropical convective parameterization: A paradigm example. Journal of the Atmospheric Sciences, doi: 10.1175/JAS-D-11-0148.1. Huang, X.-Y., 1988: The organization of moist convection by internal gravity waves. Tellus A, 42, 270-285. Khouider, B., J. Biello, and A. Majda, 2010: A stochastic multicloud model for tropical convection. Comm. Math. Sci., 8, 187-216. Palmer, T., 2011: Towards the Probabilistic Earth-System Simulator: A Vision for the Future of Climate and Weather Prediction. Quarterly Journal of the Royal Meteorological Society, 138, 841-861. Plant, R. and G. Craig, 2008: A stochastic parameterization for deep convection based on equilibrium statistics. J. Atmos. Sci., 65, 87-105.
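As an illustration of the cellular-automaton ingredient described above, the following Python sketch evolves a probabilistic CA in which cells are born next to active neighbours and decay randomly; the grid size and the birth/death probabilities are arbitrary choices, not the ALARO scheme.

import numpy as np

rng = np.random.default_rng(0)
n = 64
state = (rng.random((n, n)) < 0.02).astype(int)   # sparse initial "convective" cells

def step(state, p_birth=0.25, p_death=0.3):
    # count active neighbours on a periodic grid (8-cell Moore neighbourhood)
    nbrs = sum(np.roll(np.roll(state, i, 0), j, 1)
               for i in (-1, 0, 1) for j in (-1, 0, 1)) - state
    birth = (state == 0) & (rng.random(state.shape) < p_birth * nbrs / 8.0)
    death = (state == 1) & (rng.random(state.shape) < p_death)
    return np.where(birth, 1, np.where(death, 0, state))

for _ in range(50):
    state = step(state)
print("active cell fraction after 50 steps:", state.mean())

The neighbour-driven birth rule gives the lateral communication and memory discussed in the abstract, while the random birth/death draws supply the stochastic element.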
NASA Astrophysics Data System (ADS)
Ullrich, Romy; Hiranuma, Naruki; Hoose, Corinna; Möhler, Ottmar; Niemand, Monika; Steinke, Isabelle; Wagner, Robert
2014-05-01
Developing a new parameterization framework for the heterogeneous ice nucleation of atmospheric aerosol particles. Aerosols of different nature induce microphysical processes of importance for the Earth's atmosphere. They not only directly affect the radiative budget; more importantly, they essentially influence the formation and life cycles of clouds. Hence, aerosols and their ice nucleating ability are fundamental input parameters for weather and climate models. During the previous years, the AIDA (Aerosol Interactions and Dynamics in the Atmosphere) cloud chamber was used to extensively measure, under nearly realistic conditions, the ice nucleating properties of different aerosols. Numerous experiments were performed with a broad variety of aerosol types and under different freezing conditions. A reanalysis of these experiments offers the opportunity to develop a uniform parameterization framework of ice formation for many atmospherically relevant aerosols in a broad temperature and humidity range. The analysis includes both deposition nucleation and immersion freezing. The aim of this study is to develop this comprehensive parameterization for heterogeneous ice formation mainly by using the ice nucleation active site (INAS) approach. Niemand et al. (2012) already developed a temperature-dependent parameterization for the INAS density for immersion freezing on desert dust particles. In addition to a reanalysis of the ice nucleation behaviour of desert dust (Niemand et al., 2012), volcanic ash (Steinke et al., 2010) and organic particles (Wagner et al., 2010, 2011), this contribution will also show new results for the immersion freezing and deposition nucleation of soot aerosols. The next step will be the implementation of the parameterizations into the COSMO-ART model in order to test and demonstrate the usability of the framework. Hoose, C. and Möhler, O. (2012) Atmos. Chem. Phys. 12, 9817-9854. Niemand, M., Möhler, O., Vogel, B., Hoose, C., Connolly, P., Klein, H., Bingemer, H., DeMott, P.J., Skrotzki, J. and Leisner, T. (2012) J. Atmos. Sci. 69, 3077-3092. Steinke, I., Möhler, O., Kiselev, A., Niemand, M., Saathoff, H., Schnaiter, M., Skrotzki, J., Hoose, C. and Leisner, T. (2011) Atmos. Chem. Phys. 11, 12945-12958. Wagner, R., Möhler, O., Saathoff, H., Schnaiter, M. and Leisner, T. (2010) Atmos. Chem. Phys. 10, 7617-7641. Wagner, R., Möhler, O., Saathoff, H., Schnaiter, M. and Leisner, T. (2011) Atmos. Chem. Phys. 11, 2083-2110.
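For context, the INAS approach reduces to an exponential fit of the ice-nucleation-active-site density against temperature. The Python sketch below uses coefficient values commonly quoted for the Niemand et al. (2012) desert-dust immersion-freezing fit; treat them as illustrative and verify against the original paper before use.

import numpy as np

def inas_density(T_kelvin, a=-0.517, b=8.934):
    # INAS density n_s [m^-2]; a, b are the oft-quoted desert-dust immersion-freezing coefficients
    return np.exp(a * (T_kelvin - 273.15) + b)

def frozen_fraction(T_kelvin, surface_area_m2):
    # fraction of particles with surface area A expected to nucleate ice at temperature T
    return 1.0 - np.exp(-inas_density(T_kelvin) * surface_area_m2)

print(frozen_fraction(250.0, 1.3e-11))   # e.g. a roughly 2-micron dust particle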
Topology of the Relative Motion: Circular and Eccentric Reference Orbit Cases
NASA Technical Reports Server (NTRS)
Fontdecaba i Baig, Jordi; Metris, Gilles; Exertier, Pierre
2007-01-01
This paper deals with the topology of the relative trajectories in flight formations. The purpose is to study the different types of relative trajectories, their degrees of freedom, and to give an adapted parameterization. The paper also deals with the search for local circular motions. Even if they exist only when the reference orbit is circular, we extrapolate initial conditions to the eccentric reference orbit case. This alternative approach is complementary to traditional approaches in terms of Cartesian coordinates or differences of orbital elements.
2012-07-06
Fragmentary record: a two-step processing approach, demonstrated with airborne lidar measurements acquired over the Salinas Valley in central California, recovers additional range gates in the layer affected by ground interference; roughly four hours of data were collected between the surface and 3000 m MSL along a 40 km segment of the valley during the day analyzed.
Global model comparison of heterogeneous ice nucleation parameterizations in mixed phase clouds
NASA Astrophysics Data System (ADS)
Yun, Yuxing; Penner, Joyce E.
2012-04-01
A new aerosol-dependent mixed phase cloud parameterization for deposition/condensation/immersion (DCI) ice nucleation and one for contact freezing are compared to the original formulations in a coupled general circulation model and aerosol transport model. The present-day cloud liquid and ice water fields and cloud radiative forcing are analyzed and compared to observations. The new DCI freezing parameterization changes the spatial distribution of the cloud water field. Significant changes are found in the cloud ice water fraction and in the middle cloud fractions. The new DCI freezing parameterization predicts less ice water path (IWP) than the original formulation, especially in the Southern Hemisphere. The smaller IWP leads to a less efficient Bergeron-Findeisen process resulting in a larger liquid water path, shortwave cloud forcing, and longwave cloud forcing. It is found that contact freezing parameterizations have a greater impact on the cloud water field and radiative forcing than the two DCI freezing parameterizations that we compared. The net solar flux and net longwave flux at the top of the atmosphere change by up to 8.73 and 3.52 W m-2, respectively, due to the use of different DCI and contact freezing parameterizations in mixed phase clouds. The total climate forcing from anthropogenic black carbon/organic matter in mixed phase clouds is estimated to be 0.16-0.93 W m-2 using the aerosol-dependent parameterizations. A sensitivity test with the contact ice nuclei concentration in the original parameterization fit to that recommended by Young (1974) gives results that are closer to the new contact freezing parameterization.
Flow Charts: Visualization of Vector Fields on Arbitrary Surfaces
Li, Guo-Shi; Tricoche, Xavier; Weiskopf, Daniel; Hansen, Charles
2009-01-01
We introduce a novel flow visualization method called Flow Charts, which uses a texture atlas approach for the visualization of flows defined over curved surfaces. In this scheme, the surface and its associated flow are segmented into overlapping patches, which are then parameterized and packed in the texture domain. This scheme allows accurate particle advection across multiple charts in the texture domain, providing a flexible framework that supports various flow visualization techniques. The use of surface parameterization enables flow visualization techniques requiring the global view of the surface over long time spans, such as Unsteady Flow LIC (UFLIC), particle-based Unsteady Flow Advection Convolution (UFAC), or dye advection. It also prevents visual artifacts normally associated with view-dependent methods. Represented as textures, Flow Charts can be naturally integrated into hardware accelerated flow visualization techniques for interactive performance. PMID:18599918
A Nonlinear Interactions Approximation Model for Large-Eddy Simulation
NASA Astrophysics Data System (ADS)
Haliloglu, Mehmet U.; Akhavan, Rayhaneh
2003-11-01
A new approach to LES modelling is proposed based on direct approximation of the nonlinear terms $\overline{u_iu_j}$ in the filtered Navier-Stokes equations, instead of the subgrid-scale stress $\tau_{ij}$. The proposed model, which we call the Nonlinear Interactions Approximation (NIA) model, uses graded filters and deconvolution to parameterize the local interactions across the LES cutoff, and a Smagorinsky eddy viscosity term to parameterize the distant interactions. A dynamic procedure is used to determine the unknown eddy viscosity coefficient, rendering the model free of adjustable parameters. The proposed NIA model has been applied to LES of turbulent channel flows at Re_τ ≈ 210 and Re_τ ≈ 570. The results show good agreement with DNS not only for the mean and resolved second-order turbulence statistics but also for the full (resolved plus subgrid) Reynolds stress and turbulence intensities.
Evaluation of Warm-Rain Microphysical Parameterizations in Cloudy Boundary Layer Transitions
NASA Astrophysics Data System (ADS)
Nelson, K.; Mechem, D. B.
2014-12-01
Common warm-rain microphysical parameterizations used for marine boundary layer (MBL) clouds are either tuned for specific cloud types (e.g., the Khairoutdinov and Kogan 2000 parameterization, "KK2000") or are altogether ill-posed (Kessler 1969). An ideal microphysical parameterization should be "unified" in the sense of being suitable across MBL cloud regimes that include stratocumulus, cumulus rising into stratocumulus, and shallow trade cumulus. The recent parameterization of Kogan (2013, "K2013") was formulated for shallow cumulus but has been shown in a large-eddy simulation environment to work quite well for stratocumulus as well. We report on our efforts to implement and test this parameterization in a regional forecast model (NRL COAMPS). Results from K2013 and KK2000 are compared with the operational Kessler parameterization for a 5-day period of the VOCALS-REx field campaign, which took place over the southeast Pacific. We focus on both the relative performance of the three parameterizations and also on how they compare to the VOCALS-REx observations from the NOAA R/V Ronald H. Brown, in particular estimates of boundary-layer depth, liquid water path (LWP), cloud base, and area-mean precipitation rate obtained from C-band radar.
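As a reference point for the schemes being compared, the KK2000 autoconversion rate is a simple power law in cloud water mixing ratio and droplet number concentration. The sketch below follows the commonly quoted form and unit conventions; check the original paper before quantitative use.

def kk2000_autoconversion(qc, Nc):
    """Autoconversion rate dqr/dt [kg kg^-1 s^-1]; qc in kg/kg, Nc in cm^-3 (common convention)."""
    return 1350.0 * qc**2.47 * Nc**(-1.79)

print(kk2000_autoconversion(qc=5e-4, Nc=100.0))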
USDA-ARS's Scientific Manuscript database
Irrigation is a widely used water management practice that is often poorly parameterized in land surface and climate models. Previous studies have addressed this issue via use of irrigation area, applied water inventory data, or soil moisture content. These approaches have a variety of drawbacks i...
USDA-ARS's Scientific Manuscript database
Biochemical models of leaf photosynthesis, which are essential for understanding the response of photosynthesis to changing environments, depend on accurate parameterizations. The CO2 photocompensation point can be especially difficult to determine accurately but can be measured from the intersection ...
Slicing cluster mass functions with a Bayesian razor
NASA Astrophysics Data System (ADS)
Sealfon, C. D.
2010-08-01
We apply a Bayesian "razor" to forecast Bayes factors between different parameterizations of the galaxy cluster mass function. To demonstrate this approach, we calculate the minimum size N-body simulation needed for strong evidence favoring a two-parameter mass function over one-parameter mass functions and vice versa, as a function of the minimum cluster mass.
A modified force-restore approach to modeling snow-surface heat fluxes
Charles H. Luce; David G. Tarboton
2001-01-01
Accurate modeling of the energy balance of a snowpack requires good estimates of the snow surface temperature. The snow surface temperature allows a balance between atmospheric heat fluxes and the conductive flux into the snowpack. While the dependency of atmospheric fluxes on surface temperature is reasonably well understood and parameterized, conduction of heat from...
Abstraction Techniques for Parameterized Verification
2006-11-01
A common approach for applying model checking to unbounded systems is to extract finite-state models from them using conservative abstraction techniques. Applying model checking to complex pieces of code, such as device drivers, depends on the use of abstraction methods; an abstraction method extracts a small finite-state model from the system under analysis.
Large-scale tropospheric transport in the Chemistry-Climate Model Initiative (CCMI) simulations
NASA Astrophysics Data System (ADS)
Orbe, Clara; Yang, Huang; Waugh, Darryn W.; Zeng, Guang; Morgenstern, Olaf; Kinnison, Douglas E.; Lamarque, Jean-Francois; Tilmes, Simone; Plummer, David A.; Scinocca, John F.; Josse, Beatrice; Marecal, Virginie; Jöckel, Patrick; Oman, Luke D.; Strahan, Susan E.; Deushi, Makoto; Tanaka, Taichu Y.; Yoshida, Kohei; Akiyoshi, Hideharu; Yamashita, Yousuke; Stenke, Andreas; Revell, Laura; Sukhodolov, Timofei; Rozanov, Eugene; Pitari, Giovanni; Visioni, Daniele; Stone, Kane A.; Schofield, Robyn; Banerjee, Antara
2018-05-01
Understanding and modeling the large-scale transport of trace gases and aerosols is important for interpreting past (and projecting future) changes in atmospheric composition. Here we show that there are large differences in the global-scale atmospheric transport properties among the models participating in the IGAC SPARC Chemistry-Climate Model Initiative (CCMI). Specifically, we find up to 40 % differences in the transport timescales connecting the Northern Hemisphere (NH) midlatitude surface to the Arctic and to Southern Hemisphere high latitudes, where the mean age ranges between 1.7 and 2.6 years. We show that these differences are related to large differences in vertical transport among the simulations, in particular to differences in parameterized convection over the oceans. While stronger convection over NH midlatitudes is associated with slower transport to the Arctic, stronger convection in the tropics and subtropics is associated with faster interhemispheric transport. We also show that the differences among simulations constrained with fields derived from the same reanalysis products are as large as (and in some cases larger than) the differences among free-running simulations, most likely due to larger differences in parameterized convection. Our results indicate that care must be taken when using simulations constrained with analyzed winds to interpret the influence of meteorology on tropospheric composition.
Theory and praxis of map analysis in CHEF part 1: Linear normal form
DOE Office of Scientific and Technical Information (OSTI.GOV)
Michelotti, Leo; /Fermilab
2008-10-01
This memo begins a series which, put together, could comprise the 'CHEF Documentation Project' if there were such a thing. The first--and perhaps only--three will telegraphically describe theory, algorithms, implementation and usage of the normal form map analysis procedures encoded in CHEF's collection of libraries. [1] This one will begin the sequence by explaining the linear manipulations that connect the Jacobian matrix of a symplectic mapping to its normal form. It is a 'Reader's Digest' version of material I wrote in Intermediate Classical Dynamics (ICD) [2] and randomly scattered across technical memos, seminar viewgraphs, and lecture notes for the past quarter century. Much of its content is old, well known, and in some places borders on the trivial. Nevertheless, completeness requires its inclusion. The primary objective is the 'fundamental theorem' on normalization written on page 8. I plan to describe the nonlinear procedures in a subsequent memo and devote a third to laying out algorithms and lines of code, connecting them with equations written in the first two. Originally this was to be done in one short paper, but I jettisoned that approach after its first section exceeded a dozen pages. The organization of this document is as follows. A brief description of notation is followed by a section containing a general treatment of the linear problem. After the 'fundamental theorem' is proved, two further subsections discuss the generation of equilibrium distributions and the issue of 'phase'. The final major section reviews parameterizations--that is, lattice functions--in two and four dimensions with a passing glance at the six-dimensional version. Appearances to the contrary, for the most part I have tried to restrict consideration to matters needed to understand the code in CHEF's libraries.
Legarra, Andres; Christensen, Ole F; Vitezica, Zulma G; Aguilar, Ignacio; Misztal, Ignacy
2015-06-01
Recent use of genomic (marker-based) relationships shows that relationships exist within and across base populations (breeds or lines). However, current treatment of pedigree relationships is unable to consider relationships within or across base populations, although such relationships must exist due to the finite size of the ancestral population and connections between populations. This complicates the conciliation of both approaches and, in particular, combining pedigree with genomic relationships. We present a coherent theoretical framework to consider base populations in pedigree relationships. We suggest a conceptual framework that considers each ancestral population as a finite-sized pool of gametes. This generates across-individual relationships and contrasts with the classical view in which each population is considered an infinite, unrelated pool. Several ancestral populations may be connected and therefore related. Each ancestral population can be represented as a "metafounder," a pseudo-individual included as a founder of the pedigree and similar to an "unknown parent group." Metafounders have self- and across-population relationships according to a set of parameters that measure ancestral relationships, i.e., homozygosities within populations and relationships across populations. These parameters can be estimated from existing pedigree and marker genotypes using maximum likelihood or a method based on summary statistics, for arbitrarily complex pedigrees. Equivalences of genetic variance and variance components between the classical and this new parameterization are shown. Segregation variance on crosses of populations is modeled. Efficient algorithms for computation of relationship matrices, their inverses, and inbreeding coefficients are presented. Use of metafounders leads to compatibility of genomic and pedigree relationship matrices and to simple computing algorithms. Examples and code are given. Copyright © 2015 by the Genetics Society of America.
NASA Astrophysics Data System (ADS)
Samala, Ravi K.; Chan, Heang-Ping; Hadjiiski, Lubomir; Helvie, Mark A.; Richter, Caleb; Cha, Kenny
2018-02-01
Deep-learning models are highly parameterized, causing difficulty in inference and transfer learning. We propose a layered pathway evolution method to compress a deep convolutional neural network (DCNN) for classification of masses in DBT while maintaining the classification accuracy. Two-stage transfer learning was used to adapt the ImageNet-trained DCNN to mammography and then to DBT. In the first-stage transfer learning, transfer learning from the ImageNet-trained DCNN was performed using mammography data. In the second-stage transfer learning, the mammography-trained DCNN was trained on the DBT data using feature extraction from a fully connected layer, recursive feature elimination, and random forest classification. The layered pathway evolution encapsulates the feature extraction to the classification stages to compress the DCNN. A genetic algorithm was used in an iterative approach with tournament selection driven by count-preserving crossover and mutation to identify the necessary nodes in each convolution layer while eliminating the redundant nodes. The DCNN was reduced by 99% in the number of parameters and 95% in mathematical operations in the convolutional layers. The lesion-based area under the receiver operating characteristic curve on an independent DBT test set from the original and the compressed network resulted in 0.88+/-0.05 and 0.90+/-0.04, respectively. The difference did not reach statistical significance. We demonstrated a DCNN compression approach without additional fine-tuning or loss of performance for classification of masses in DBT. The approach can be extended to other DCNNs and transfer learning tasks. An ensemble of these smaller and focused DCNNs has the potential to be used in multi-target transfer learning.
Uncertainty Quantification and Sensitivity Analysis in the CICE v5.1 Sea Ice Model
NASA Astrophysics Data System (ADS)
Urrego-Blanco, J. R.; Urban, N. M.
2015-12-01
Changes in the high latitude climate system have the potential to affect global climate through feedbacks with the atmosphere and connections with mid latitudes. Sea ice and climate models used to understand these changes have uncertainties that need to be characterized and quantified. In this work we characterize parametric uncertainty in the Los Alamos Sea Ice Model (CICE) and quantify the sensitivity of sea ice area, extent and volume with respect to uncertainty in about 40 individual model parameters. Unlike common sensitivity analyses conducted in previous studies where parameters are varied one at a time, this study uses a global variance-based approach in which Sobol sequences are used to efficiently sample the full 40-dimensional parameter space. This approach requires a very large number of model evaluations, which are expensive to run. A more computationally efficient approach is implemented by training and cross-validating a surrogate (emulator) of the sea ice model with model output from 400 model runs. The emulator is used to make predictions of sea ice extent, area, and volume at several model configurations, which are then used to compute the Sobol sensitivity indices of the 40 parameters. A ranking based on the sensitivity indices indicates that model output is most sensitive to snow parameters such as conductivity and grain size, and the drainage of melt ponds. The main effects and interactions among the most influential parameters are also estimated by a non-parametric regression technique based on generalized additive models. It is recommended that research be prioritized toward more accurately determining the values of these most influential parameters through observational studies or by improving existing parameterizations in the sea ice model.
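A minimal sketch of the emulator-plus-Sobol workflow, assuming the SALib package and a toy analytic surrogate in place of the trained sea-ice emulator; parameter names, bounds, and the surrogate are illustrative only.

import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

# illustrative parameter names and bounds (not the actual CICE parameter list)
problem = {
    "num_vars": 3,
    "names": ["snow_conductivity", "snow_grain_radius", "meltpond_drainage"],
    "bounds": [[0.1, 0.5], [50.0, 500.0], [0.0, 1.0]],
}

X = saltelli.sample(problem, 1024)

def emulator(x):
    # toy analytic surrogate standing in for the trained sea-ice emulator
    k, r, d = x
    return 5.0 - 3.0 * k + 0.002 * r - 1.5 * d + 2.0 * k * d

Y = np.array([emulator(x) for x in X])
Si = sobol.analyze(problem, Y)
print("first-order indices:", dict(zip(problem["names"], Si["S1"].round(2))))
print("total-order indices:", dict(zip(problem["names"], Si["ST"].round(2))))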
NASA Astrophysics Data System (ADS)
Liou, K. N.; Takano, Y.; He, C.; Yang, P.; Leung, L. R.; Gu, Y.; Lee, W. L.
2014-06-01
A stochastic approach has been developed to model the positions of BC (black carbon)/dust internally mixed with two snow grain types: hexagonal plate/column (convex) and Koch snowflake (concave). Subsequently, light absorption and scattering analysis can be followed by means of an improved geometric-optics approach coupled with Monte Carlo photon tracing to determine BC/dust single-scattering properties. For a given shape (plate, Koch snowflake, spheroid, or sphere), the action of internal mixing absorbs substantially more light than external mixing. The snow grain shape effect on absorption is relatively small, but its effect on asymmetry factor is substantial. Due to a greater probability of intercepting photons, multiple inclusions of BC/dust exhibit a larger absorption than an equal-volume single inclusion. The spectral absorption (0.2-5 µm) for snow grains internally mixed with BC/dust is confined to wavelengths shorter than about 1.4 µm, beyond which ice absorption predominates. Based on the single-scattering properties determined from stochastic and light absorption parameterizations and using the adding/doubling method for spectral radiative transfer, we find that internal mixing reduces snow albedo substantially more than external mixing and that the snow grain shape plays a critical role in snow albedo calculations through its forward scattering strength. Also, multiple inclusion of BC/dust significantly reduces snow albedo as compared to an equal-volume single sphere. For application to land/snow models, we propose a two-layer spectral snow parameterization involving contaminated fresh snow on top of old snow for investigating and understanding the climatic impact of multiple BC/dust internal mixing associated with snow grain metamorphism, particularly over mountain/snow topography.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liou, K. N.; Takano, Y.; He, Cenlin
2014-06-27
A stochastic approach to model the positions of BC/dust internally mixed with two snow-grain types has been developed, including hexagonal plate/column (convex) and Koch snowflake (concave). Subsequently, light absorption and scattering analysis can be followed by means of an improved geometric-optics approach coupled with Monte Carlo photon tracing to determine their single-scattering properties. For a given shape (plate, Koch snowflake, spheroid, or sphere), internal mixing absorbs more light than external mixing. The snow-grain shape effect on absorption is relatively small, but its effect on the asymmetry factor is substantial. Due to a greater probability of intercepting photons, multiple inclusions of BC/dust exhibit a larger absorption than an equal-volume single inclusion. The spectral absorption (0.2-5 µm) for snow grains internally mixed with BC/dust is confined to wavelengths shorter than about 1.4 µm, beyond which ice absorption predominates. Based on the single-scattering properties determined from stochastic and light absorption parameterizations and using the adding/doubling method for spectral radiative transfer, we find that internal mixing reduces snow albedo more than external mixing and that the snow-grain shape plays a critical role in snow albedo calculations through the asymmetry factor. Also, snow albedo reduces more in the case of multiple inclusions of BC/dust compared to that of an equal-volume single sphere. For application to land/snow models, we propose a two-layer spectral snow parameterization containing contaminated fresh snow on top of old snow for investigating and understanding the climatic impact of multiple BC/dust internal mixing associated with snow grain metamorphism, particularly over mountain/snow topography.
NASA Astrophysics Data System (ADS)
Qian, Y.; Wang, C.; Huang, M.; Berg, L. K.; Duan, Q.; Feng, Z.; Shrivastava, M. B.; Shin, H. H.; Hong, S. Y.
2016-12-01
This study aims to quantify the relative importance and uncertainties of different physical processes and parameters in affecting simulated surface fluxes and land-atmosphere coupling strength over the Amazon region. We used two-legged coupling metrics, which include both terrestrial (soil moisture to surface fluxes) and atmospheric (surface fluxes to atmospheric state or precipitation) legs, to diagnose the land-atmosphere interaction and coupling strength. Observations made using the Department of Energy's Atmospheric Radiation Measurement (ARM) Mobile Facility during the GoAmazon field campaign together with satellite and reanalysis data are used to evaluate model performance. To quantify the uncertainty in physical parameterizations, we performed a 120-member ensemble of simulations with the WRF model using a stratified experimental design including 6 cloud microphysics, 3 convection, 6 PBL and surface layer, and 3 land surface schemes. A multi-way analysis of variance approach is used to quantitatively analyze the inter- and intra-group (scheme) means and variances. To quantify parameter sensitivity, we conducted an additional 256 WRF simulations in which an efficient sampling algorithm is used to explore the multi-dimensional parameter space. Three uncertainty quantification approaches are applied for sensitivity analysis (SA) of multiple variables of interest to 20 selected parameters in the YSU PBL and MM5 surface layer schemes. Results show consistent parameter sensitivity across different SA methods. We found that 5 out of 20 parameters contribute more than 90% of the total variance, and first-order effects dominate compared to the interaction effects. Results of this uncertainty quantification study serve as guidance for better understanding the roles of different physical processes in land-atmosphere interactions, quantifying model uncertainties from various sources such as physical processes, parameters and structural errors, and providing insights for improving the model physics parameterizations.
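The variance-attribution step can be illustrated with a small Python sketch: for a balanced ensemble, the fraction of total variance explained by each scheme group equals the variance of the group means divided by the total variance. The ensemble values below are synthetic stand-ins for simulated surface fluxes, and the group sizes are illustrative.

import itertools
import numpy as np

rng = np.random.default_rng(2)
n_micro, n_conv, n_pbl = 6, 3, 6                 # scheme counts per group (illustrative)
keys, values = [], []
for m, c, p in itertools.product(range(n_micro), range(n_conv), range(n_pbl)):
    keys.append((m, c, p))
    values.append(100.0 + 5.0 * m - 3.0 * c + 2.0 * p + rng.normal(0.0, 2.0))  # synthetic flux

keys = np.array(keys)
values = np.array(values)
total_var = values.var()

for axis, name in enumerate(["microphysics", "convection", "PBL/surface layer"]):
    group_means = np.array([values[keys[:, axis] == g].mean() for g in np.unique(keys[:, axis])])
    print(f"{name}: fraction of ensemble variance ~ {group_means.var() / total_var:.2f}")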
The Berkeley Out-of-Order Machine (BOOM): An Industry-Competitive, Synthesizable, Parameterized RISC-V Processor
Celio, Christopher; Patterson, David; Asanović, Krste
2015-06-13
BOOM is a synthesizable, parameterized, superscalar out-of-order RISC-V core designed to serve as the prototypical baseline processor.
Implementing a warm cloud microphysics parameterization for convective clouds in NCAR CESM
NASA Astrophysics Data System (ADS)
Shiu, C.; Chen, Y.; Chen, W.; Li, J. F.; Tsai, I.; Chen, J.; Hsu, H.
2013-12-01
Most cumulus convection schemes use simple empirical approaches to convert cloud liquid mass to rain water or cloud ice to snow, e.g., using a constant autoconversion rate and dividing cloud liquid mass into cloud water and ice as a function of air temperature (e.g., the Zhang and McFarlane scheme in the NCAR CAM model). Few studies have tried to use cloud microphysical schemes to better simulate such precipitation processes in the convective schemes of global models (e.g., Lohmann [2008] and Song, Zhang, and Li [2012]). A two-moment warm cloud parameterization (Chen and Liu [2004]) is implemented into the deep convection scheme of CAM5.2 of the CESM model for the treatment of conversion of cloud liquid water to rain water. Short-term AMIP-type global simulations are conducted to evaluate the possible impacts of the modification of this physical parameterization. Simulated results are further compared to observational results from the AMWG diagnostic package and CloudSat data sets. Several sensitivity tests regarding changes in cloud-top droplet concentration (as a rough test of aerosol indirect effects) and changes in the detrained cloud size of convective cloud ice are also carried out to understand their possible impacts on the cloud and precipitation simulations.
Enhanced representation of soil NO emissions in the ...
Modeling of soil nitric oxide (NO) emissions is highly uncertain and may misrepresent its spatial and temporal distribution. This study builds upon a recently introduced parameterization to improve the timing and spatial distribution of soil NO emission estimates in the Community Multiscale Air Quality (CMAQ) model. The parameterization considers soil parameters, meteorology, land use, and mineral nitrogen (N) availability to estimate NO emissions. We incorporate daily year-specific fertilizer data from the Environmental Policy Integrated Climate (EPIC) agricultural model to replace the annual generic data of the initial parameterization, and use a 12 km resolution soil biome map over the continental USA. CMAQ modeling for July 2011 shows slight differences in model performance in simulating fine particulate matter and ozone from Interagency Monitoring of Protected Visual Environments (IMPROVE) and Clean Air Status and Trends Network (CASTNET) sites and NO2 columns from Ozone Monitoring Instrument (OMI) satellite retrievals. We also simulate how the change in the soil NO emission scheme affects the expected O3 response to projected emissions reductions. The National Exposure Research Laboratory (NERL) Computational Exposure Division (CED) develops and evaluates data, decision-support tools, and models to be applied to media-specific or receptor-specific problem areas.
NASA Astrophysics Data System (ADS)
Hailegeorgis, Teklu T.; Alfredsen, Knut; Abdella, Yisak S.; Kolberg, Sjur
2015-03-01
Identification of proper parameterizations of spatial heterogeneity is required for precipitation-runoff models. However, relevant studies with a specific aim at hourly runoff simulation in boreal mountainous catchments are not common. We conducted calibration and evaluation of hourly runoff simulation in a boreal mountainous watershed based on six different parameterizations of the spatial heterogeneity of subsurface storage capacity for a semi-distributed (subcatchments hereafter called elements) and distributed (1 × 1 km2 grid) setup. We evaluated representation of element-to-element, grid-to-grid, and probabilistic subcatchment/subbasin, subelement and subgrid heterogeneities. The parameterization cases satisfactorily reproduced the streamflow hydrographs, with Nash-Sutcliffe efficiency values for the calibration and validation periods up to 0.84 and 0.86 respectively, and similarly for the log-transformed streamflow up to 0.85 and 0.90. The parameterizations reproduced the flow duration curves, but predictive reliability in terms of quantile-quantile (Q-Q) plots indicated marked over- and under-predictions. The simple and parsimonious parameterizations with no subelement or no subgrid heterogeneities provided equivalent simulation performance compared to the more complex cases. The results indicated that (i) identification of parameterizations requires measurements from denser precipitation stations than what is required for acceptable calibration of the precipitation-streamflow relationships, (ii) there are challenges in the identification of parameterizations based only on calibration to catchment-integrated streamflow observations and (iii) a potential preference for the simple and parsimonious parameterizations for operational forecasting, contingent on their equivalent simulation performance for the available input data. In addition, the effects of non-identifiability of parameters (interactions and equifinality) can contribute to the non-identifiability of the parameterizations.
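For reference, the skill scores quoted above follow the standard Nash-Sutcliffe definition; a short Python sketch with made-up streamflow values:

import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 - sum((sim-obs)^2) / sum((obs-mean(obs))^2)."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

obs = np.array([1.2, 3.4, 10.5, 7.8, 2.1, 1.0])   # made-up hourly streamflow
sim = np.array([1.0, 3.9, 9.8, 8.3, 2.5, 1.1])
print("NSE:", round(nse(obs, sim), 3))
print("NSE on log-transformed flow:", round(nse(np.log(obs), np.log(sim)), 3))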
NASA Astrophysics Data System (ADS)
Guo, Yamin; Cheng, Jie; Liang, Shunlin
2018-02-01
Surface downward longwave radiation (SDLR) is a key variable for calculating the Earth's surface radiation budget. In this study, we evaluated seven widely used clear-sky parameterization methods using ground measurements collected from 71 globally distributed fluxnet sites. The Bayesian model averaging (BMA) method was also introduced to obtain a multi-model ensemble estimate. As a whole, the parameterization method of Carmona et al. (2014) performs the best, with an average BIAS, RMSE, and R2 of -0.11 W/m2, 20.35 W/m2, and 0.92, respectively, followed by the parameterization methods of Idso (1981), Prata (Q J R Meteorol Soc 122:1127-1151, 1996), Brunt and Sc (Q J R Meteorol Soc 58:389-420, 1932), and Brutsaert (Water Resour Res 11:742-744, 1975). The accuracy of the BMA is close to that of the parameterization method of Carmona et al. (2014) and comparable to that of the parameterization method of Idso (1981). The advantage of the BMA is that it achieves balanced results compared to the integrated single parameterization methods. To fully assess the performance of the parameterization methods, the effects of climate type, land cover, and surface elevation were also investigated. The five parameterization methods and the BMA all failed over land with the tropical climate type, with high water vapor, and had poor results over forest, wetland, and ice. These methods achieved better results over desert, bare land, cropland, and grass and had acceptable accuracies for sites at different elevations, except for the parameterization method of Carmona et al. (2014) over high elevation sites. Thus, a method that can be successfully applied everywhere does not exist.
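As an example of the class of methods being compared, the sketch below implements the Brutsaert (1975) clear-sky formulation: an effective emissivity from screen-level vapour pressure and air temperature, followed by the Stefan-Boltzmann law. The input values are illustrative.

SIGMA = 5.670e-8   # Stefan-Boltzmann constant [W m^-2 K^-4]

def sdlr_brutsaert(e0_hpa, t_air_k):
    """Clear-sky SDLR [W m^-2] from vapour pressure [hPa] and air temperature [K]."""
    emissivity = 1.24 * (e0_hpa / t_air_k) ** (1.0 / 7.0)
    return emissivity * SIGMA * t_air_k ** 4

print(sdlr_brutsaert(e0_hpa=12.0, t_air_k=288.0))   # roughly 300 W m^-2 for these inputs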
Establishing the kinetics of ballistic-to-diffusive transition using directional statistics
NASA Astrophysics Data System (ADS)
Liu, Pai; Heinson, William R.; Sumlin, Benjamin J.; Shen, Kuan-Yu; Chakrabarty, Rajan K.
2018-04-01
We establish the kinetics of ballistic-to-diffusive (BD) transition observed in two-dimensional random walk using directional statistics. Directional correlation is parameterized using the walker's turning angle distribution, which follows the commonly adopted wrapped Cauchy distribution (WCD) function. During the BD transition, the concentration factor (ρ) governing the WCD shape is observed to decrease from its initial value. We next analytically derive the relationship between effective ρ and time, which essentially quantifies the BD transition rate. The prediction of our kinetic expression agrees well with the empirical datasets obtained from correlated random walk simulation. We further connect our formulation with the conventionally used scaling relationship between the walker's mean-square displacement and time.
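A minimal Python sketch of the correlated random walk described above, using scipy's wrapped Cauchy distribution for the turning angles; the concentration value and step count are arbitrary.

import numpy as np
from scipy import stats

rho, n_steps = 0.9, 10000      # concentration factor and number of steps (arbitrary)

# turning angles drawn from the wrapped Cauchy distribution, recentred on zero
turns = stats.wrapcauchy.rvs(rho, size=n_steps, random_state=3)
turns = np.mod(turns + np.pi, 2.0 * np.pi) - np.pi

headings = np.cumsum(turns)
positions = np.cumsum(np.stack([np.cos(headings), np.sin(headings)], axis=1), axis=0)
print("net squared displacement after", n_steps, "unit steps:",
      round(float(np.sum(positions[-1] ** 2)), 1))

Large rho keeps successive headings aligned (near-ballistic motion), while rho near zero recovers an ordinary diffusive walk, which is the transition the abstract quantifies.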
An Overview of Numerical Weather Prediction on Various Scales
NASA Astrophysics Data System (ADS)
Bao, J.-W.
2009-04-01
The increasing public need for detailed weather forecasts, along with advances in computer technology, has motivated many research institutes and national weather forecasting centers to develop and run global as well as regional numerical weather prediction (NWP) models at high resolutions (i.e., with horizontal resolutions of ~10 km or finer for global models and 1 km or finer for regional models, and with ~60 vertical levels or more). The need for running NWP models at high horizontal and vertical resolutions requires the implementation of a non-hydrostatic dynamic core with a choice of horizontal grid configurations and vertical coordinates that are appropriate for high resolutions. Development of advanced numerics will also be needed for high-resolution global and regional models, in particular when the models are applied to transport problems and air quality applications. In addition to the challenges in numerics, the NWP community is also facing the challenge of developing physics parameterizations that are well suited for high-resolution NWP models. For example, when NWP models are run at resolutions of ~5 km or finer, the use of much more detailed microphysics parameterizations than those currently used in NWP models will become important. Another example is that regional NWP models at ~1 km or finer only partially resolve the convective energy-containing eddies in the lower troposphere. Parameterizations to account for the subgrid diffusion associated with unresolved turbulence still need to be developed. Further, physically sound parameterizations for air-sea interaction will be a critical component for tropical NWP models, particularly for hurricane prediction models. In this review presentation, the above issues will be elaborated on and approaches to address them will be discussed.
A physically-based approach of treating dust-water cloud interactions in climate models
NASA Astrophysics Data System (ADS)
Kumar, P.; Karydis, V.; Barahona, D.; Sokolik, I. N.; Nenes, A.
2011-12-01
All aerosol-cloud-climate assessment studies to date assume that the ability of dust (and other insoluble species) to act as Cloud Condensation Nuclei (CCN) is determined solely by their dry size and amount of soluble material. Recent evidence however clearly shows that dust can act as efficient CCN (even if lacking appreciable amounts of soluble material) through adsorption of water vapor onto the surface of the particle. This "inherent" CCN activity is augmented as the dust accumulates soluble material through atmospheric aging. A comprehensive treatment of dust-cloud interactions therefore requires including both of these sources of CCN activity in atmospheric models. This study presents a "unified" theory of CCN activity that considers the effects of both adsorption and solute. The theory is corroborated and constrained with experiments of CCN activity of mineral aerosols generated from clays, calcite, quartz, dry lake beds and desert soil samples from Northern Africa, East Asia/China, and North America. The unified activation theory is then included within the mechanistic droplet activation parameterization of Kumar et al. (2009) (including the giant CCN correction of Barahona et al., 2010), for a comprehensive treatment of dust impacts on global CCN and cloud droplet number. The parameterization is demonstrated with the NASA Global Modeling Initiative (GMI) Chemical Transport Model using wind fields computed with the Goddard Institute for Space Studies (GISS) general circulation model. References: Barahona, D., et al. (2010) Comprehensively accounting for the effect of giant CCN in cloud activation parameterizations, Atmos. Chem. Phys., 10, 2467-2473. Kumar, P., I. N. Sokolik, and A. Nenes (2009), Parameterization of cloud droplet formation for global and regional models: including adsorption activation from insoluble CCN, Atmos. Chem. Phys., 9, 2517-2532.
Aerosol hygroscopic growth parameterization based on a solute specific coefficient
NASA Astrophysics Data System (ADS)
Metzger, S.; Steil, B.; Xu, L.; Penner, J. E.; Lelieveld, J.
2011-09-01
Water is a main component of atmospheric aerosols and its amount depends on the particle chemical composition. We introduce a new parameterization for the aerosol hygroscopic growth factor (HGF), based on an empirical relation between water activity (aw) and solute molality (μs) through a single solute-specific coefficient νi. Three main advantages are: (1) wide applicability, (2) simplicity and (3) analytical nature. (1) Our approach considers the Kelvin effect and covers ideal solutions at large relative humidity (RH), including CCN activation, as well as concentrated solutions with high ionic strength at low RH such as the relative humidity of deliquescence (RHD). (2) A single νi coefficient suffices to parameterize the HGF for a wide range of particle sizes, from nanometer nucleation mode to micrometer coarse mode particles. (3) In contrast to previous methods, our analytical aw parameterization does not depend only on a linear correction factor for the solute molality; instead, νi also appears in the exponent, in the form x · a^x. According to our findings, νi can be assumed constant for the entire aw range (0-1). Thus, the νi based method is computationally efficient. In this work we focus on single solute solutions, where νi is pre-determined with the bisection method from our analytical equations using RHD measurements and the saturation molality μs^sat. The computed aerosol HGF and supersaturation (Köhler theory) compare well with the results of the thermodynamic reference model E-AIM for the key compounds NaCl and (NH4)2SO4 relevant for CCN modeling and calibration studies. The equations introduced here provide the basis of our revised gas-liquid-solid partitioning model, i.e. version 4 of the EQuilibrium Simplified Aerosol Model (EQSAM4), described in a companion paper.
NASA Astrophysics Data System (ADS)
Breen, S. J.; Lochbuehler, T.; Detwiler, R. L.; Linde, N.
2013-12-01
Electrical resistivity tomography (ERT) is a well-established method for geophysical characterization and has shown potential for monitoring geologic CO2 sequestration, due to its sensitivity to electrical resistivity contrasts generated by liquid/gas saturation variability. In contrast to deterministic ERT inversion approaches, probabilistic inversion provides not only a single saturation model but a full posterior probability density function for each model parameter. Furthermore, the uncertainty inherent in the underlying petrophysics (e.g., Archie's Law) can be incorporated in a straightforward manner. In this study, the data are from bench-scale ERT experiments conducted during gas injection into a quasi-2D (1 cm thick), translucent, brine-saturated sand chamber with a packing that mimics a simple anticlinal geological reservoir. We estimate saturation fields by Markov chain Monte Carlo sampling with the MT-DREAM(ZS) algorithm and compare them quantitatively to independent saturation measurements from a light transmission technique, as well as to results from deterministic inversions. Different model parameterizations are evaluated in terms of the recovered saturation fields and petrophysical parameters. The saturation field is parameterized (1) in Cartesian coordinates, (2) by means of its discrete cosine transform coefficients, and (3) by fixed saturation values and gradients in structural elements defined by a Gaussian bell of arbitrary shape and location. Synthetic tests reveal that a priori knowledge about the expected geologic structures (as in parameterization (3)) markedly improves the parameter estimates. The number of degrees of freedom thus strongly affects the inversion results. In an additional step, we explore the effects of assuming that the total volume of injected gas is known a priori and that no gas has migrated away from the monitored region.
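Parameterization (2) can be sketched in a few lines: represent the saturation field by a truncated set of discrete cosine transform coefficients so that the sampler works in a much lower-dimensional space. The field, grid size, and truncation level below are illustrative, not the experimental setup.

import numpy as np
from scipy.fft import dctn, idctn

rng = np.random.default_rng(4)
ny, nx, keep = 32, 32, 6                      # grid and number of retained DCT modes (illustrative)

true_field = np.clip(rng.normal(0.3, 0.1, (ny, nx)), 0.0, 1.0)   # toy gas-saturation field
coeffs = dctn(true_field, norm="ortho")

truncated = np.zeros_like(coeffs)
truncated[:keep, :keep] = coeffs[:keep, :keep]   # these low-order coefficients become the sampled parameters
smooth_field = idctn(truncated, norm="ortho")

print("free parameters:", keep * keep, "instead of", ny * nx)
print("rms reconstruction error:", float(np.sqrt(np.mean((smooth_field - true_field) ** 2))))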
A cohesive-frictional force field (CFFF) for colloidal calcium-silicate-hydrates
NASA Astrophysics Data System (ADS)
Palkovic, Steven D.; Yip, Sidney; Büyüköztürk, Oral
2017-12-01
Calcium-silicate-hydrate (C-S-H) gel is a cohesive-frictional material that exhibits strength asymmetry in compression and tension and normal-stress dependency of the maximum shear strength. Experiments suggest the basic structural component of C-S-H is a colloidal particle with an internal layered structure. These colloids form heterogeneous assemblies with a complex pore network at the mesoscale. We propose a cohesive-frictional force field (CFFF) to describe the interactions in colloidal C-S-H materials that incorporates the strength anisotropy fundamental to the C-S-H molecular structure that has been omitted from recent mesoscale models. We parameterize the CFFF from reactive force field simulations of an internal interface that controls mechanical performance, describing the behavior of thousands of atoms through a single effective pair interaction. We apply the CFFF to study the mesoscale elastic and Mohr-Coulomb strength properties of C-S-H with varying polydispersity and packing density. Our results show that the consideration of cohesive-frictional interactions leads to an increase in stiffness, shear strength, and normal-stress dependency, while also changing the nature of local deformation processes. The CFFF and our coarse-graining approach provide an essential connection between nanoscale molecular interactions and macroscale continuum behavior for hydrated cementitious materials.
Design of the Protocol Processor for the ROBUS-2 Communication System
NASA Technical Reports Server (NTRS)
Torres-Pomales, Wilfredo; Malekpour, Mahyar R.; Miner, Paul S.
2005-01-01
The ROBUS-2 Protocol Processor (RPP) is a custom-designed hardware component implementing the functionality of the ROBUS-2 fault-tolerant communication system. The Reliable Optical Bus (ROBUS) is the core communication system of the Scalable Processor-Independent Design for Enhanced Reliability (SPIDER), a general-purpose fault tolerant integrated modular architecture currently under development at NASA Langley Research Center. ROBUS is a time-division multiple access (TDMA) broadcast communication system with medium access control by means of a time-indexed communication schedule. ROBUS-2 is a developmental version of the ROBUS providing guaranteed fault-tolerant services to the attached processing elements (PEs), in the presence of a bounded number of faults. These services include message broadcast (Byzantine Agreement), dynamic communication schedule update, time reference (clock synchronization), and distributed diagnosis (group membership). ROBUS also features fault-tolerant startup and restart capabilities. ROBUS-2 tolerates internal as well as PE faults, and incorporates a dynamic self-reconfiguration capability driven by the internal diagnostic system. ROBUS consists of RPPs connected to each other by a lower-level physical communication network. The RPP has a pipelined architecture and the design is parameterized in the behavioral and structural domains. The design of the RPP enables the bus to achieve a PE-message throughput that approaches the available bandwidth at the physical layer.
Kashefolgheta, Sadra; Vila Verde, Ana
2017-08-09
We present a set of Lennard-Jones parameters for classical, all-atom models of acetate and various alkylated and non-alkylated forms of sulfate, sulfonate and phosphate ions, optimized to reproduce their interactions with water and with the physiologically relevant sodium, ammonium and methylammonium cations. The parameters are internally consistent and are fully compatible with the Generalized Amber Force Field (GAFF), the AMBER force field for proteins, the accompanying TIP3P water model and the sodium model of Joung and Cheatham. The parameters were developed primarily relying on experimental information - hydration free energies and solution activity derivatives at 0.5 m concentration - with ab initio, gas phase calculations being used for the cases where experimental information is missing. The ab initio parameterization scheme presented here is distinct from other approaches because it explicitly connects gas phase binding energies to intermolecular interactions in solution. We demonstrate that the original GAFF/AMBER parameters often overestimate anion-cation interactions, leading to an excessive number of contact ion pairs in solutions of carboxylate ions, and to aggregation in solutions of divalent ions. GAFF/AMBER parameters lead to excessive numbers of salt bridges in proteins and of contact ion pairs between sodium and acidic protein groups, issues that are resolved by using the optimized parameters presented here.
INDIVIDUAL BASED MODELLING APPROACH TO THERMAL ...
Diadromous fish populations in the Pacific Northwest face challenges along their migratory routes from declining habitat quality, harvest, and barriers to longitudinal connectivity. Changes in river temperature regimes are producing an additional challenge for upstream migrating adult salmon and steelhead, species that are sensitive to absolute and cumulative thermal exposure. Adult salmon populations have been shown to utilize cold water patches along migration routes when mainstem river temperatures exceed thermal optimums. We are employing an individual based model (IBM) to explore the costs and benefits of spatially-distributed cold water refugia for adult migrating salmon. Our model, developed in the HexSim platform, is built around a mechanistic behavioral decision tree that drives individual interactions with their spatially explicit simulated environment. Population-scale responses to dynamic thermal regimes, coupled with other stressors such as disease and harvest, become emergent properties of the spatial IBM. Other model outputs include arrival times, species-specific survival rates, body energetic content, and reproductive fitness levels. Here, we discuss the challenges associated with parameterizing an individual based model of salmon and steelhead in a section of the Columbia River. Many rivers and streams in the Pacific Northwest are currently listed as impaired under the Clean Water Act as a result of high summer water temperatures.
Spatial prediction of near surface soil water retention functions using hydrogeophysics
NASA Astrophysics Data System (ADS)
Gibson, J. P.; Franz, T. E.
2017-12-01
The hydrological community often turns to widely available spatial datasets such as SSURGO to characterize the spatial variability of soil across a landscape of interest. This has served as a reasonable first approximation when lacking localized soil data. However, previous work has shown that information loss within land surface models primarily stems from parameterization. Localized soil sampling is both expensive and time intensive, and thus a need exists for connecting spatial datasets with ground observations. Given that hydrogeophysics is data-dense, rapid, and relatively easy to adopt, it is a promising technique to help dovetail localized soil sampling with larger spatial datasets. In this work, we utilize two geophysical techniques, the cosmic-ray neutron probe and electromagnetic induction, to identify temporally stable soil moisture patterns. This is achieved by measuring numerous times over a range of wet to dry field conditions in order to apply an empirical orthogonal function. We then present measured water retention functions of shallow cores extracted within each temporally stable zone. Lastly, we use soil moisture patterns as a covariate to predict soil hydraulic properties in areas without measurement and validate using a leave-one-out cross-validation analysis. Using these approaches to better constrain soil hydraulic property variability, we speculate that further research can better estimate hydrologic fluxes in areas of interest.
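The EOF step can be sketched as a singular value decomposition of the survey-by-location anomaly matrix; the soil-moisture surveys below are synthetic, and the ranking rule is only illustrative of how a leading mode might flag temporally stable zones.

import numpy as np

rng = np.random.default_rng(5)
n_surveys, n_locations = 12, 200
soil_moisture = 0.25 + 0.05 * rng.standard_normal((n_surveys, n_locations))   # synthetic surveys

anomalies = soil_moisture - soil_moisture.mean(axis=0)        # remove the per-location temporal mean
U, S, Vt = np.linalg.svd(anomalies, full_matrices=False)

explained = S ** 2 / np.sum(S ** 2)
leading_pattern = Vt[0]                                       # spatial weights of the first EOF
print("variance explained by EOF1:", round(float(explained[0]), 2))
print("index of the most persistent anomaly:", int(np.argmax(np.abs(leading_pattern))))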
NASA Technical Reports Server (NTRS)
Chao, Winston C.
2015-01-01
The excessive precipitation over steep and high mountains (EPSM) in GCMs and meso-scale models is due to a lack of parameterization of the thermal effects of the subgrid-scale topographic variation. These thermal effects drive subgrid-scale heated slope induced vertical circulations (SHVC). SHVC provide a ventilation effect of removing heat from the boundary layer of resolvable-scale mountain slopes and depositing it higher up. The lack of SHVC parameterization is the cause of EPSM. The author has previously proposed a method of parameterizing SHVC, here termed SHVC.1. Although this has been successful in avoiding EPSM, the drawback of SHVC.1 is that it suppresses convective type precipitation in the regions where it is applied. In this article we propose a new method of parameterizing SHVC, here termed SHVC.2. In SHVC.2 the potential temperature and mixing ratio of the boundary layer are changed when used as input to the cumulus parameterization scheme over mountainous regions. This allows the cumulus parameterization to assume the additional function of SHVC parameterization. SHVC.2 has been tested in NASA Goddard's GEOS-5 GCM. It achieves the primary goal of avoiding EPSM while also avoiding the suppression of convective-type precipitation in regions where it is applied.
Ruff, Kiersten M.; Harmon, Tyler S.; Pappu, Rohit V.
2015-01-01
We report the development and deployment of a coarse-graining method that is well suited for computer simulations of aggregation and phase separation of protein sequences with block-copolymeric architectures. Our algorithm, named CAMELOT for Coarse-grained simulations Aided by MachinE Learning Optimization and Training, leverages information from converged all-atom simulations to determine a suitable resolution and to parameterize the coarse-grained model. To parameterize a system-specific coarse-grained model, we use a combination of Boltzmann inversion, non-linear regression, and a Gaussian process Bayesian optimization approach. The accuracy of the coarse-grained model is demonstrated through direct comparisons to results from all-atom simulations. We demonstrate the utility of our coarse-graining approach using the block-copolymeric sequence from the exon 1 encoded sequence of the huntingtin protein. This sequence comprises 17 residues from the N-terminal end of huntingtin (N17) followed by a polyglutamine (polyQ) tract. Simulations based on the CAMELOT approach are used to show that the adsorption and unfolding of the wild-type N17 and its sequence variants on the surface of polyQ tracts engender a patchy colloid-like architecture that promotes the formation of linear aggregates. These results provide a plausible explanation for experimental observations, which show that N17 accelerates the formation of linear aggregates in block-copolymeric N17-polyQ sequences. The CAMELOT approach is versatile and is generalizable for simulating the aggregation and phase behavior of a range of block-copolymeric protein sequences. PMID:26723608
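For readers unfamiliar with the ingredients of such a parameterization, the following is a minimal sketch of direct Boltzmann inversion, one of the three techniques named above; the sampled bond lengths are synthetic stand-ins for converged all-atom data.

```python
# Minimal sketch of direct Boltzmann inversion: a coarse-grained bond potential is
# estimated from the distribution of that bond length sampled in all-atom simulations.
# The sampled distances below are synthetic stand-ins for real all-atom data.
import numpy as np

KB = 0.0019872041  # kcal/(mol*K)
T = 300.0          # K

rng = np.random.default_rng(1)
bond_samples = rng.normal(loc=3.8, scale=0.25, size=50_000)  # Angstrom, placeholder data

hist, edges = np.histogram(bond_samples, bins=80, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
mask = hist > 0

# U(r) = -kB*T * ln P(r), defined up to an additive constant
potential = -KB * T * np.log(hist[mask])
potential -= potential.min()
print(list(zip(centers[mask][:3].round(3), potential[:3].round(4))))
```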
Evaluation of scale-aware subgrid mesoscale eddy models in a global eddy-rich model
NASA Astrophysics Data System (ADS)
Pearson, Brodie; Fox-Kemper, Baylor; Bachman, Scott; Bryan, Frank
2017-07-01
Two parameterizations for horizontal mixing of momentum and tracers by subgrid mesoscale eddies are implemented in a high-resolution global ocean model. These parameterizations follow the techniques of large eddy simulation (LES). The theory underlying one parameterization (2D Leith, due to Leith, 1996) is that of enstrophy cascades in two-dimensional turbulence, while the other (QG Leith) is designed for potential enstrophy cascades in quasi-geostrophic turbulence. Simulations using each of these parameterizations are compared with a control simulation using standard biharmonic horizontal mixing. Simulations using the 2D Leith and QG Leith parameterizations are more realistic than those using biharmonic mixing. In particular, the 2D Leith and QG Leith simulations have more energy in resolved mesoscale eddies, have a spectral slope more consistent with turbulence theory (an inertial enstrophy or potential enstrophy cascade), have bottom drag and vertical viscosity as the primary sinks of energy instead of lateral friction, and have isoneutral parameterized mesoscale tracer transport. The parameterization choice also affects mass transports, but the impact varies regionally in magnitude and sign.
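A hedged sketch of the 2D Leith idea: the eddy viscosity is scaled by the local vorticity-gradient magnitude and the cube of the grid spacing. The O(1) constant and the discretization details vary between implementations; the form below is only illustrative.

```python
# Hedged sketch of a 2D Leith-type eddy viscosity: nu ~ (Lambda * dx / pi)**3 * |grad(zeta)|,
# where zeta is relative vorticity and Lambda is an O(1) tuning constant. The exact constant
# and discretization differ between implementations; this is only illustrative.
import numpy as np

def leith_viscosity(zeta, dx, lam=1.0):
    """2D Leith viscosity field from a vorticity array zeta on a uniform grid."""
    dzeta_dy, dzeta_dx = np.gradient(zeta, dx, dx)
    grad_mag = np.hypot(dzeta_dx, dzeta_dy)
    return (lam * dx / np.pi) ** 3 * grad_mag

zeta = np.random.default_rng(2).standard_normal((64, 64)) * 1e-5  # s^-1, synthetic
print(leith_viscosity(zeta, dx=10e3).mean())  # m^2/s, order-of-magnitude check
```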
NASA Astrophysics Data System (ADS)
Firl, G. J.; Randall, D. A.
2013-12-01
The so-called "assumed probability density function (PDF)" approach to subgrid-scale (SGS) parameterization has shown to be a promising method for more accurately representing boundary layer cloudiness under a wide range of conditions. A new parameterization has been developed, named the Two-and-a-Half ORder closure (THOR), that combines this approach with a higher-order turbulence closure. THOR predicts the time evolution of the turbulence kinetic energy components, the variance of ice-liquid water potential temperature (θil) and total non-precipitating water mixing ratio (qt) and the covariance between the two, and the vertical fluxes of horizontal momentum, θil, and qt. Ten corresponding third-order moments in addition to the skewnesses of θil and qt are calculated using diagnostic functions assuming negligible time tendencies. The statistical moments are used to define a trivariate double Gaussian PDF among vertical velocity, θil, and qt. The first three statistical moments of each variable are used to estimate the two Gaussian plume means, variances, and weights. Unlike previous similar models, plume variances are not assumed to be equal or zero. Instead, they are parameterized using the idea that the less dominant Gaussian plume (typically representing the updraft-containing portion of a grid cell) has greater variance than the dominant plume (typically representing the "environmental" or slowly subsiding portion of a grid cell). Correlations among the three variables are calculated using the appropriate covariance moments, and both plume correlations are assumed to be equal. The diagnosed PDF in each grid cell is used to calculate SGS condensation, SGS fluxes of cloud water species, SGS buoyancy terms, and to inform other physical parameterizations about SGS variability. SGS condensation is extended from previous similar models to include condensation over both liquid and ice substrates, dependent on the grid cell temperature. Implementations have been included in THOR to drive existing microphysical and radiation parameterizations with samples drawn from the trivariate PDF. THOR has been tested in a single-column model framework using standardized test cases spanning a range of large-scale conditions conducive to both shallow cumulus and stratocumulus clouds and the transition between the two states. The results were compared to published LES intercomparison results using the same cases, and the gross characteristics of both cloudiness and boundary layer turbulence produced by THOR were within the range of results from the respective LES ensembles. In addition, THOR was used in a single-column model framework to study low cloud feedbacks in the northeastern Pacific Ocean. Using initialization and forcings developed as part of the CGILS project, THOR was run at 8 points along a cross-section from the trade-wind cumulus region east of Hawaii to the coastal stratocumulus region off the coast of California for both the control climate and a climate perturbed by +2K SST. A neutral to weakly positive cloud feedback of 0-4 W m-2 K-1 was simulated along the cross-section. The physical mechanisms responsible appeared to be increased boundary layer entrainment and stratocumulus decoupling leading to reduced maximum cloud cover and liquid water path.
NASA Astrophysics Data System (ADS)
Zielke, Olaf; McDougall, Damon; Mai, Martin; Babuska, Ivo
2014-05-01
Seismic data, often augmented with geodetic data, are frequently used to invert for the spatio-temporal evolution of slip along a rupture plane. The resulting images of the slip evolution for a single event, inferred by different research teams, often vary distinctly, depending on the adopted inversion approach and rupture model parameterization. This observation raises the question of which of the provided kinematic source inversion solutions is most reliable and most robust, and, more generally, how accurate fault parameterization and solution predictions are. These issues are not addressed by "standard" source inversion approaches. Here, we present a statistical inversion approach to constrain kinematic rupture parameters from teleseismic body waves. The approach is based on (a) a forward-modeling scheme that computes synthetic (body-)waves for a given kinematic rupture model, and (b) the QUESO (Quantification of Uncertainty for Estimation, Simulation, and Optimization) library, which uses MCMC algorithms and Bayes' theorem for sample selection. We present Bayesian inversions for rupture parameters in synthetic earthquakes (i.e. for which the exact rupture history is known) in an attempt to identify the cross-over at which further model discretization (spatial and temporal resolution of the parameter space) is no longer accompanied by a decreasing misfit. Identification of this cross-over is of importance as it reveals the resolution power of the studied data set (i.e. teleseismic body waves), enabling one to constrain kinematic earthquake rupture histories of real earthquakes at a resolution that is supported by data. In addition, the Bayesian approach allows for mapping complete posterior probability density functions of the desired kinematic source parameters, thus enabling us to rigorously assess the uncertainties in earthquake source inversions.
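A generic sketch of the Bayesian sampling loop (a plain Metropolis random walk rather than the QUESO machinery, with a toy forward model standing in for the teleseismic waveform computation):

```python
# Generic sketch of the Bayesian sampling step: rupture parameters theta are proposed,
# synthetic waveforms are generated by a forward model, and samples are accepted according
# to a Gaussian likelihood. forward_model and the parameter set are placeholders, not the
# authors' actual code.
import numpy as np

rng = np.random.default_rng(3)

def forward_model(theta, t):
    """Toy synthetic 'waveform': amplitude and rupture-time shift (placeholder physics)."""
    amp, t0 = theta
    return amp * np.exp(-0.5 * ((t - t0) / 2.0) ** 2)

t = np.linspace(0.0, 30.0, 300)
observed = forward_model([1.5, 12.0], t) + 0.05 * rng.standard_normal(t.size)
sigma_obs = 0.05

def log_likelihood(theta):
    resid = observed - forward_model(theta, t)
    return -0.5 * np.sum((resid / sigma_obs) ** 2)

theta = np.array([1.0, 10.0])
samples = []
for _ in range(5000):
    proposal = theta + rng.normal(scale=[0.05, 0.2])
    if np.log(rng.uniform()) < log_likelihood(proposal) - log_likelihood(theta):
        theta = proposal
    samples.append(theta.copy())

print(np.mean(samples[1000:], axis=0))  # posterior means after burn-in
```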
Nowke, Christian; Diaz-Pier, Sandra; Weyers, Benjamin; Hentschel, Bernd; Morrison, Abigail; Kuhlen, Torsten W.; Peyser, Alexander
2018-01-01
Simulation models in many scientific fields can have non-unique solutions or unique solutions which can be difficult to find. Moreover, in evolving systems, unique final state solutions can be reached by multiple different trajectories. Neuroscience is no exception. Often, neural network models are subject to parameter fitting to obtain desirable output comparable to experimental data. Parameter fitting without sufficient constraints and a systematic exploration of the possible solution space can lead to conclusions valid only around local minima or around non-minima. To address this issue, we have developed an interactive tool for visualizing and steering parameters in neural network simulation models. In this work, we focus particularly on connectivity generation, since finding suitable connectivity configurations for neural network models constitutes a complex parameter search scenario. The development of the tool has been guided by several use cases: the tool allows researchers to steer the parameters of the connectivity generation during the simulation, thus quickly growing networks composed of multiple populations with a targeted mean activity. The flexibility of the software allows scientists to explore other connectivity and neuron variables apart from the ones presented as use cases. With this tool, we enable interactive exploration of parameter spaces, support a better understanding of neural network models, and help users grapple with the crucial problem of non-unique network solutions and trajectories. In addition, we observe a reduction in turnaround times for the assessment of these models, due to interactive visualization while the simulation is computed. PMID:29937723
A general science-based framework for dynamical spatio-temporal models
Wikle, C.K.; Hooten, M.B.
2010-01-01
Spatio-temporal statistical models are increasingly being used across a wide variety of scientific disciplines to describe and predict spatially-explicit processes that evolve over time. Correspondingly, in recent years there has been a significant amount of research on new statistical methodology for such models. Although descriptive models that approach the problem from the second-order (covariance) perspective are important, and innovative work is being done in this regard, many real-world processes are dynamic, and it can be more efficient in some cases to characterize the associated spatio-temporal dependence by the use of dynamical models. The chief challenge with the specification of such dynamical models has been related to the curse of dimensionality. Even in fairly simple linear, first-order Markovian, Gaussian error settings, statistical models are often overparameterized. Hierarchical models have proven invaluable in their ability to deal to some extent with this issue by allowing dependency among groups of parameters. In addition, this framework has allowed for the specification of science-based parameterizations (and associated prior distributions) in which classes of deterministic dynamical models (e.g., partial differential equations (PDEs), integro-difference equations (IDEs), matrix models, and agent-based models) are used to guide specific parameterizations. Most of the focus for the application of such models in statistics has been in the linear case. The problems mentioned above with linear dynamic models are compounded in the case of nonlinear models. In this sense, the need for coherent and sensible model parameterizations is not only helpful, it is essential. Here, we present an overview of a framework for incorporating scientific information to motivate dynamical spatio-temporal models. First, we illustrate the methodology with the linear case. We then develop a general nonlinear spatio-temporal framework that we call general quadratic nonlinearity and demonstrate that it accommodates many different classes of science-based parameterizations as special cases. The model is presented in a hierarchical Bayesian framework and is illustrated with examples from ecology and oceanography. © 2010 Sociedad de Estadística e Investigación Operativa.
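A minimal sketch of the linear, first-order Markovian case referred to above, with the propagator parameterized by a single science-based parameter (here a discretized 1-D diffusion rate) rather than a fully free matrix:

```python
# Minimal sketch of the linear, first-order Markovian dynamic spatio-temporal model the
# text refers to: y_t = M y_{t-1} + eta_t, with a science-based parameterization of M
# (here a discretized 1-D diffusion operator rather than a fully free matrix).
import numpy as np

n, T = 50, 100
alpha = 0.2                       # diffusion-rate parameter (the "scientific" parameter)
M = np.eye(n) * (1.0 - 2.0 * alpha) + np.eye(n, k=1) * alpha + np.eye(n, k=-1) * alpha

rng = np.random.default_rng(4)
y = np.zeros((T, n))
y[0] = rng.standard_normal(n)
for t in range(1, T):
    y[t] = M @ y[t - 1] + 0.1 * rng.standard_normal(n)   # process noise eta_t

print(y.shape, float(y[-1].std()))
```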
NASA Astrophysics Data System (ADS)
Serbin, S.; Walker, A. P.; Wu, J.; Ely, K.; Rogers, A.; Wolfe, B.
2017-12-01
Tropical forests play a key role in regulating the global carbon (C), water, and energy cycles and stores, as well as influence climate through the exchanges of mass and energy with the atmosphere. However, projected changes in temperature and precipitation patterns are expected to impact the tropics and the strength of the tropical C sink, likely resulting in significant climate feedbacks. Moreover, the impact of stronger, longer, and more extensive droughts is not well understood. Critical for the accurate modeling of the tropical C and water cycle in Earth System Models (ESMs) is the representation of the coupled photosynthetic and stomatal conductance processes and how these processes are impacted by environmental and other drivers. Furthermore, the parameterization and representation of these processes are an important consideration for ESM projections. We use a novel model framework, the Multi-Assumption Architecture and Testbed (MAAT), together with the open-source bioinformatics toolbox, the Predictive Ecosystem Analyzer (PEcAn), to explore the impact of the multiple mechanistic hypotheses of coupled photosynthesis and stomatal conductance as well as the additional uncertainty related to model parameterization. Our goal was to better understand how model choice and parameterization influence diurnal and seasonal modeling of leaf-level photosynthesis and stomatal conductance. We focused on the 2016 ENSO period; starting in February, monthly measurements of diurnal photosynthesis and conductance were made on 7-9 dominant species at the two Smithsonian canopy crane sites. This benchmark dataset was used to test different representations of stomatal conductance and photosynthetic parameterizations with the MAAT model, running within PEcAn. The MAAT model allows for the easy selection of competing hypotheses to test different photosynthetic modeling approaches while PEcAn provides the ability to explore the uncertainties introduced through parameterization. We found that the choice of stomatal conductance scheme can play a large role in model-data mismatch and observational constraints can be used to reduce simulated model spread, but can also result in large model disagreements with measurements. These results will be used to help inform the modeling of photosynthesis in tropical systems for the larger ESM community.
Objective calibration of regional climate models
NASA Astrophysics Data System (ADS)
Bellprat, O.; Kotlarski, S.; Lüthi, D.; SchäR, C.
2012-12-01
Climate models are subject to high parametric uncertainty induced by poorly confined model parameters of parameterized physical processes. Uncertain model parameters are typically calibrated in order to increase the agreement of the model with available observations. The common practice is to adjust uncertain model parameters manually, often referred to as expert tuning, which lacks objectivity and transparency in the use of observations. These shortcomings often obscure model intercomparisons and hinder the implementation of new model parameterizations. Methods that would allow model parameters to be calibrated systematically are unfortunately often not applicable to state-of-the-art climate models, owing to the computational cost imposed by the high dimensionality and non-linearity of the problem. Here we present an approach to objectively calibrate a regional climate model, using reanalysis-driven simulations and building upon a quadratic metamodel presented by Neelin et al. (2010) that serves as a computationally cheap surrogate of the model. Five model parameters originating from different parameterizations are selected for the optimization according to their influence on the model performance. The metamodel accurately estimates spatial averages of 2 m temperature, precipitation and total cloud cover, with an uncertainty of similar magnitude to the internal variability of the regional climate model. The non-linearities of the parameter perturbations are well captured, such that only a limited number of 20-50 simulations are needed to estimate optimal parameter settings. Parameter interactions are small, which allows the number of simulations to be reduced further. In comparison to an ensemble of the same model which has undergone expert tuning, the calibration yields similar optimal model configurations, but leads to an additional reduction of the model error. The performance range captured is much wider than that sampled with the expert-tuned ensemble, and the presented methodology is effective and objective. It is argued that objective calibration is an attractive tool and could become standard procedure after introducing new model implementations, or after a spatial transfer of a regional climate model. Objective calibration of parameterizations with regional models could also serve as a strategy toward improving parameterization packages of global climate models.
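The metamodel idea can be sketched as follows, assuming normalized parameters and a synthetic performance score standing in for real regional-climate-model runs: fit a quadratic response surface to a few dozen training simulations, then search it cheaply for the optimum.

```python
# Hedged sketch of a quadratic metamodel in the spirit of Neelin et al. (2010): a model
# performance score is approximated as a quadratic function of a few uncertain parameters,
# fitted to a small number of training simulations and then minimized cheaply.
# The "simulations" here are a synthetic stand-in for real regional-climate-model runs.
import numpy as np
from itertools import combinations_with_replacement

rng = np.random.default_rng(5)
n_params, n_runs = 5, 40
X = rng.uniform(-1.0, 1.0, size=(n_runs, n_params))       # normalized parameter settings
score = ((X - 0.3) ** 2).sum(axis=1) + 0.05 * rng.standard_normal(n_runs)

def quadratic_features(X):
    cols = [np.ones(len(X))] + [X[:, i] for i in range(X.shape[1])]
    cols += [X[:, i] * X[:, j] for i, j in combinations_with_replacement(range(X.shape[1]), 2)]
    return np.column_stack(cols)

coeffs, *_ = np.linalg.lstsq(quadratic_features(X), score, rcond=None)

# Evaluate the metamodel on a dense random sample and pick the best parameter setting
candidates = rng.uniform(-1.0, 1.0, size=(100_000, n_params))
predicted = quadratic_features(candidates) @ coeffs
print(candidates[np.argmin(predicted)].round(2))   # should lie near the true optimum (0.3, ...)
```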
NASA Astrophysics Data System (ADS)
Anderson, Ray; Skaggs, Todd; Alfieri, Joseph; Kustas, William; Wang, Dong; Ayars, James
2016-04-01
Partitioned land surface fluxes (e.g. evaporation, transpiration, photosynthesis, and ecosystem respiration) are needed as input, calibration, and validation data for numerous hydrological and land surface models. However, one of the most commonly used techniques for measuring land surface fluxes, Eddy Covariance (EC), can only directly measure net, combined water and carbon fluxes (evapotranspiration and net ecosystem exchange/productivity). Analysis of the correlation structure of high frequency EC time series (hereafter flux partitioning or FP) has been proposed to directly partition net EC fluxes into their constituent components using leaf-level water use efficiency (WUE) data to separate stomatal and non-stomatal transport processes. FP has significant logistical and spatial representativeness advantages over other partitioning approaches (e.g. isotopic fluxes, sap flow, microlysimeters), but the performance of the FP algorithm is reliant on the accuracy of the intercellular CO2 (ci) concentration used to parameterize WUE for each flux averaging interval. In this study, we tested several parameterizations for ci as a function of atmospheric CO2 (ca), including (1) a constant ci/ca ratio for C3 and C4 photosynthetic pathway plants, (2) species-specific ci/ca-Vapor Pressure Deficit (VPD) relationships (quadratic and linear), and (3) generalized C3 and C4 photosynthetic pathway ci/ca-VPD relationships. We tested these ci parameterizations at three agricultural EC towers from 2011-present in C4 and C3 crops (sugarcane - Saccharum officinarum L. and peach - Prunus persica), and validated against sap-flow sensors installed at the peach site. The peach results show that the FP algorithm driven by the species-specific parameterizations converged significantly more often (~20% more frequently) than with the constant ci/ca ratio or the generic C3 VPD relationship. The FP algorithm parameterizations with a generic VPD relationship also had slightly higher transpiration (a 5 W m⁻² difference) than the constant ci/ca ratio. However, photosynthesis and respiration fluxes over sugarcane were ~15% lower with a VPD-ci/ca relationship than a constant ci/ca ratio. The results illustrate the importance of combining leaf-level physiological observations with EC to improve the performance of the FP algorithm.
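The ci parameterizations being compared can be sketched as below; the constant ci/ca ratios and the linear VPD coefficients are illustrative placeholders, and the water-use-efficiency expression is the familiar (ca - ci)/(1.6 D) form up to unit conventions.

```python
# Hedged sketch of the ci parameterizations compared above: intercellular CO2 is either a
# fixed fraction of atmospheric CO2 or a (here linear) function of vapor pressure deficit,
# and feeds a leaf-level water-use-efficiency estimate of the form WUE ~ (ca - ci)/(1.6 D).
# Ratios and coefficients are illustrative placeholders only.
def ci_constant_ratio(ca_ppm, pathway="C3"):
    ratio = 0.70 if pathway == "C3" else 0.44        # commonly cited rough values
    return ratio * ca_ppm

def ci_linear_vpd(ca_ppm, vpd_kpa, a=0.85, b=-0.06):  # hypothetical species-specific fit
    return max(0.0, min(1.0, a + b * vpd_kpa)) * ca_ppm

def leaf_wue(ca_ppm, ci_ppm, vpd_kpa, p_kpa=101.325):
    """Approximate leaf WUE in mol CO2 per mol H2O (mole-fraction form of VPD)."""
    return (ca_ppm - ci_ppm) * 1e-6 / (1.6 * (vpd_kpa / p_kpa))

ca, vpd = 400.0, 2.0
for name, ci in [("constant C3 ratio", ci_constant_ratio(ca)),
                 ("linear VPD fit", ci_linear_vpd(ca, vpd))]:
    print(name, round(leaf_wue(ca, ci, vpd), 5))
```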
Summary and Findings of the ARL Dynamic Failure Forum
2016-09-29
short beam shear, quasi-static indentation, depth of penetration, and V50 limit velocity. Experimental technique suggestions for improvement included ... art in experimental, theoretical, and computational studies of dynamic failure. The forum also focused on identifying technologies and approaches ... Army-specific problems. Experimental exploration of material behavior and an improved ability to parameterize material models is essential to improving
Geometry modeling and grid generation using 3D NURBS control volume
NASA Technical Reports Server (NTRS)
Yu, Tzu-Yi; Soni, Bharat K.; Shih, Ming-Hsin
1995-01-01
The algorithms for volume grid generation using NURBS geometric representation are presented. The parameterization algorithm is enhanced to yield a desired physical distribution on the curve, surface and volume. This approach bridges the gap between CAD surface/volume definition and surface/volume grid generation. Computational examples associated with practical configurations have shown the utilization of these algorithms.
Alignment dynamics of diffusive scalar gradient in a two-dimensional model flow
NASA Astrophysics Data System (ADS)
Gonzalez, M.
2018-04-01
The Lagrangian two-dimensional approach of scalar gradient kinematics is revisited accounting for molecular diffusion. Numerical simulations are performed in an analytic, parameterized model flow, which enables considering different regimes of scalar gradient dynamics. Attention is especially focused on the influence of molecular diffusion on Lagrangian statistical orientations and on the dynamics of scalar gradient alignment.
2012-09-30
oscillation (SAO) and quasi-biennial oscillation (QBO) of stratospheric equatorial winds in long-term (10-year) nature runs. The ability of these new schemes ... to generate and maintain tropical SAO and QBO circulations in Navy models for the first time is an important breakthrough, since these circulations
Michael J. Falkowski; Andrew T. Hudak; Nicholas L. Crookston; Paul E. Gessler; Edward H. Uebler; Alistair M. S. Smith
2010-01-01
Sustainable forest management requires timely, detailed forest inventory data across large areas, which is difficult to obtain via traditional forest inventory techniques. This study evaluated k-nearest neighbor imputation models incorporating LiDAR data to predict tree-level inventory data (individual tree height, diameter at breast height, and...
NASA Astrophysics Data System (ADS)
Lai, Changliang; Wang, Junbiao; Liu, Chuang
2014-10-01
Six typical composite grid cylindrical shells are constructed by superimposing three basic types of ribs. The buckling behavior and structural efficiency of these shells are then analyzed under axial compression, pure bending, torsion and transverse bending by finite element (FE) models. The FE models are created by a parametric FE modeling approach that defines FE models with the original natural twisted geometry and orients cross-sections of beam elements exactly. The approach is parameterized and coded in the Patran Command Language (PCL). The demonstrations of FE modeling indicate the program enables efficient generation of FE models and facilitates parametric studies and design of grid shells. Using the program, the effects of helical angles on the buckling behavior of six typical grid cylindrical shells are determined. The results of these studies indicate that the triangle grid and rotated triangle grid cylindrical shells are more efficient than others under axial compression and pure bending, whereas under torsion and transverse bending, the hexagon grid cylindrical shell is most efficient. Additionally, buckling mode shapes are compared and provide an understanding of composite grid cylindrical shells that is useful in preliminary design of such structures.
Ferentinos, Konstantinos P
2005-09-01
Two neural network (NN) applications in the field of biological engineering are developed, designed, and parameterized by an evolutionary method based on genetic algorithms. The developed systems are a fault detection NN model and a predictive modeling NN system. An indirect or 'weak specification' representation was used for the encoding of NN topologies and training parameters into genes of the genetic algorithm (GA). Some a priori knowledge of the demands in network topology for specific application cases is required by this approach, so that the infinite search space of the problem is limited to some reasonable degree. Both one-hidden-layer and two-hidden-layer network architectures were explored by the GA. In addition to the network architecture, each gene of the GA also encoded the type of activation functions in both hidden and output nodes of the NN and the type of minimization algorithm that was used by the backpropagation algorithm for the training of the NN. Both models achieved satisfactory performance, while the GA system proved to be a powerful tool that can successfully replace the problematic trial-and-error approach that is usually used for these tasks.
NASA Astrophysics Data System (ADS)
Pathiraja, S. D.; van Leeuwen, P. J.
2017-12-01
Model Uncertainty Quantification remains one of the central challenges of effective Data Assimilation (DA) in complex partially observed non-linear systems. Stochastic parameterization methods have been proposed in recent years as a means of capturing the uncertainty associated with unresolved sub-grid scale processes. Such approaches generally require some knowledge of the true sub-grid scale process or rely on full observations of the larger scale resolved process. We present a methodology for estimating the statistics of sub-grid scale processes using only partial observations of the resolved process. It finds model error realisations over a training period by minimizing their conditional variance, constrained by available observations. A distinctive feature is that, during the minimization process, these realisations are binned conditionally on the previous model state, allowing complex error structures to be recovered. The efficacy of the approach is demonstrated through numerical experiments on the multi-scale Lorenz '96 model. We consider different parameterizations of the model with both small and large time scale separations between slow and fast variables. Results are compared to two existing methods for accounting for model uncertainty in DA and shown to provide improved analyses and forecasts.
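For reference, a sketch of the two-scale Lorenz '96 test bed in its standard form (the parameter values K, J, F, h, b, c are common defaults and may differ from the configurations used in the study):

```python
# Sketch of the two-scale Lorenz '96 system in its standard form: slow variables X are
# each coupled to J fast variables Y. Parameter values are commonly used defaults and
# may differ from the paper's setup.
import numpy as np

K, J = 8, 32
F, h, b, c = 20.0, 1.0, 10.0, 10.0

def l96_two_scale(state):
    X, Y = state[:K], state[K:].reshape(K, J)
    Yflat = Y.reshape(-1)
    dX = (np.roll(X, 1) * (np.roll(X, -1) - np.roll(X, 2))
          - X + F - (h * c / b) * Y.sum(axis=1))
    dY = (-c * b * np.roll(Yflat, -1) * (np.roll(Yflat, -2) - np.roll(Yflat, 1))
          - c * Yflat + (h * c / b) * np.repeat(X, J))
    return np.concatenate([dX, dY])

def rk4_step(f, state, dt):
    k1 = f(state); k2 = f(state + 0.5 * dt * k1)
    k3 = f(state + 0.5 * dt * k2); k4 = f(state + dt * k3)
    return state + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

state = np.random.default_rng(6).standard_normal(K + K * J) * 0.1
for _ in range(2000):
    state = rk4_step(l96_two_scale, state, dt=0.001)
print(state[:K].round(3))
```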
Ocean-Forced Ice-Shelf Thinning in a Synchronously Coupled Ice-Ocean Model
NASA Astrophysics Data System (ADS)
Jordan, James R.; Holland, Paul R.; Goldberg, Dan; Snow, Kate; Arthern, Robert; Campin, Jean-Michel; Heimbach, Patrick; Jenkins, Adrian
2018-02-01
The first fully synchronous, coupled ice shelf-ocean model with a fixed grounding line and imposed upstream ice velocity has been developed using the MITgcm (Massachusetts Institute of Technology general circulation model). Unlike previous, asynchronous approaches to coupled modeling, our approach is fully conservative of heat, salt, and mass. Synchronous coupling is achieved by continuously updating the ice-shelf thickness on the ocean time step. By simulating an idealized, warm-water ice shelf we show how raising the pycnocline leads to a reduction in both ice-shelf mass and back stress, and hence buttressing. Coupled runs show the formation of a western boundary channel in the ice-shelf base caused by increased melting on the western boundary from Coriolis-enhanced flow. Eastern boundary ice thickening is also observed. This is not the case when using a simple depth-dependent parameterized melt, as the ice shelf has relatively thinner sides and a thicker central "bulge" for a given ice-shelf mass. Ice-shelf geometry arising from the parameterized melt rate tends to underestimate back stress (and therefore buttressing) for a given ice-shelf mass due to a thinner ice shelf at the boundaries when compared to coupled model simulations.
A projected decrease in lightning under climate change
NASA Astrophysics Data System (ADS)
Finney, Declan L.; Doherty, Ruth M.; Wild, Oliver; Stevenson, David S.; MacKenzie, Ian A.; Blyth, Alan M.
2018-03-01
Lightning strongly influences atmospheric chemistry [1-3], and impacts the frequency of natural wildfires [4]. Most previous studies project an increase in global lightning with climate change over the coming century [1,5-7], but these typically use parameterizations of lightning that neglect cloud ice fluxes, a component generally considered to be fundamental to thunderstorm charging [8]. As such, the response of lightning to climate change is uncertain. Here, we compare lightning projections for 2100 using two parameterizations: the widely used cloud-top height (CTH) approach [9], and a new upward cloud ice flux (IFLUX) approach [10] that overcomes previous limitations. In contrast to the previously reported global increase in lightning based on CTH, we find a 15% decrease in total lightning flash rate with IFLUX in 2100 under a strong global warming scenario. Differences are largest in the tropics, where most lightning occurs, with implications for the estimation of future changes in tropospheric ozone and methane, as well as differences in their radiative forcings. These results suggest that lightning schemes more closely related to cloud ice and microphysical processes are needed to robustly estimate future changes in lightning and atmospheric composition.
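The CTH scheme referred to above is usually attributed to Price and Rind (1992), with flash rate scaling as a power of convective cloud-top height and different coefficients over land and ocean; the commonly quoted values are used in the sketch below, while an IFLUX-type scheme would instead scale with the upward cloud ice flux at a reference level.

```python
# Commonly quoted form of the cloud-top-height (CTH) lightning parameterization
# (Price & Rind, 1992): flash rate as a power law of convective cloud-top height H in km.
# Coefficients below are the widely cited values and are given here only for illustration.
def cth_flash_rate(cloud_top_km, land=True):
    """Flashes per minute per storm, CTH-style scaling."""
    if land:
        return 3.44e-5 * cloud_top_km ** 4.9
    return 6.40e-4 * cloud_top_km ** 1.73

for h in (8.0, 12.0, 16.0):
    print(h, round(cth_flash_rate(h, land=True), 2), round(cth_flash_rate(h, land=False), 3))
```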
A New Canopy Integration Factor
NASA Astrophysics Data System (ADS)
Badgley, G.; Anderegg, L. D. L.; Baker, I. T.; Berry, J. A.
2017-12-01
Ecosystem modelers have long debated how to best represent within-canopy heterogeneity. Can one big leaf represent the full range of canopy physiological responses? Or do you need two leaves, sun and shade, to get things right? Is it sufficient to treat the canopy as a diffuse medium? Or would it be better to explicitly represent separate canopy layers? These are open questions that have been the subject of an enormous amount of research and scrutiny. Yet regardless of how the canopy is represented, each model must grapple with correctly parameterizing its canopy in a way that properly translates leaf-level processes to the canopy and ecosystem scale. We present a new approach for integrating whole-canopy biochemistry by combining remote sensing with ecological theory. Using the Simple Biosphere model (SiB), we redefined how SiB scales photosynthetic processes from leaf to canopy as a function of satellite-derived measurements of solar-induced chlorophyll fluorescence (SIF). Across multiple long-term study sites, our approach improves the accuracy of daily modeled photosynthesis by as much as 25 percent. We share additional insights on how SIF might be more directly integrated into photosynthesis models, as well as present ideas for harnessing SIF to more accurately parameterize canopy biochemical variables.
Multiscale Cloud System Modeling
NASA Technical Reports Server (NTRS)
Tao, Wei-Kuo; Moncrieff, Mitchell W.
2009-01-01
The central theme of this paper is to describe how cloud system resolving models (CRMs) of grid spacing approximately 1 km have been applied to various important problems in atmospheric science across a wide range of spatial and temporal scales and how these applications relate to other modeling approaches. A long-standing problem concerns the representation of organized precipitating convective cloud systems in weather and climate models. Since CRMs resolve the mesoscale to large scales of motion (i.e., 10 km to global) they explicitly address the cloud system problem. By explicitly representing organized convection, CRMs bypass restrictive assumptions associated with convective parameterization such as the scale gap between cumulus and large-scale motion. Dynamical models provide insight into the physical mechanisms involved with scale interaction and convective organization. Multiscale CRMs simulate convective cloud systems in computational domains up to global and have been applied in place of contemporary convective parameterizations in global models. Multiscale CRMs pose a new challenge for model validation, which is met in an integrated approach involving CRMs, operational prediction systems, observational measurements, and dynamical models in a new international project: the Year of Tropical Convection, which has an emphasis on organized tropical convection and its global effects.
Diagnosing the impact of alternative calibration strategies on coupled hydrologic models
NASA Astrophysics Data System (ADS)
Smith, T. J.; Perera, C.; Corrigan, C.
2017-12-01
Hydrologic models represent a significant tool for understanding, predicting, and responding to the impacts of water on society and of society on water resources and, as such, are used extensively in water resources planning and management. Given this important role, the validity and fidelity of hydrologic models are imperative. While extensive attention has been paid to improving hydrologic models through better process representation, better parameter estimation, and better uncertainty quantification, significant challenges remain. In this study, we explore a number of competing model calibration scenarios for simple, coupled snowmelt-runoff models to better understand the sensitivity and variability of parameterizations and their impact on model performance, robustness, fidelity, and transferability. Our analysis highlights the sensitivity of coupled snowmelt-runoff model parameterizations to alterations in calibration approach, underscores the concept of information content in hydrologic modeling, and provides insight into potential strategies for improving model robustness and fidelity.
A frequentist approach to computer model calibration
Wong, Raymond K. W.; Storlie, Curtis Byron; Lee, Thomas C. M.
2016-05-05
The paper considers the computer model calibration problem and provides a general frequentist solution. Under the framework proposed, the data model is semiparametric with a non-parametric discrepancy function which accounts for any discrepancy between physical reality and the computer model. In an attempt to solve a fundamentally important (but often ignored) identifiability issue between the computer model parameters and the discrepancy function, the paper proposes a new and identifiable parameterization of the calibration problem. It also develops a two-step procedure for estimating all the relevant quantities under the new parameterization. This estimation procedure is shown to enjoy excellent rates of convergence and can be straightforwardly implemented with existing software. For uncertainty quantification, bootstrapping is adopted to construct confidence regions for the quantities of interest. Finally, the practical performance of the methodology is illustrated through simulation examples and an application to a computational fluid dynamics model.
The eddy transport of nonconserved trace species derived from satellite data
NASA Technical Reports Server (NTRS)
Smith, Anne K.; Lyjak, Lawrence V.; Gille, John C.
1988-01-01
Using the approach of the Garcia and Solomon (1983) model and data obtained by the LIMS instrument on Nimbus 7, the chemical eddy transport matrix for planetary waves was calculated, and the chemical eddy contribution to the components of the matrix obtained from the LIMS satellite observations was computed using specified photochemical damping time scales. The dominant components of the transport matrices for several winter months were obtained for ozone, nitric acid, and quasi-geostrophic potential vorticity (PV), and the parameterized transports of these were compared with the 'exact' transports, computed directly from the eddy LIMS data. The results indicate that the chemical eddy effect can account for most of the observed ozone transport in early winter, decreasing to less than half in late winter. The agreement between the parameterized and observed nitric acid and PV transports was not as good. Reasons for this are discussed.
Evaluation of Surface Flux Parameterizations with Long-Term ARM Observations
Liu, Gang; Liu, Yangang; Endo, Satoshi
2013-02-01
Surface momentum, sensible heat, and latent heat fluxes are critical for atmospheric processes such as clouds and precipitation, and are parameterized in a variety of models ranging from cloud-resolving models to large-scale weather and climate models. However, direct evaluation of the parameterization schemes for these surface fluxes is rare due to limited observations. This study takes advantage of the long-term observations of surface fluxes collected at the Southern Great Plains site by the Department of Energy Atmospheric Radiation Measurement program to evaluate the six surface flux parameterization schemes commonly used in the Weather Research and Forecasting (WRF) model and three U.S. general circulation models (GCMs). The unprecedented 7-yr-long measurements by the eddy correlation (EC) and energy balance Bowen ratio (EBBR) methods permit statistical evaluation of all six parameterizations under a variety of stability conditions, diurnal cycles, and seasonal variations. The statistical analyses show that the momentum flux parameterization agrees best with the EC observations, followed by latent heat flux, sensible heat flux, and evaporation ratio/Bowen ratio. The overall performance of the parameterizations depends on atmospheric stability, being best under neutral stratification and deteriorating toward both more stable and more unstable conditions. Further diagnostic analysis reveals that in addition to the parameterization schemes themselves, the discrepancies between observed and parameterized sensible and latent heat fluxes may stem from inadequate use of input variables such as surface temperature, moisture availability, and roughness length. The results demonstrate the need for improving the land surface models and measurements of surface properties, which would permit the evaluation of full land surface models.
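For orientation, the bulk-aerodynamic structure shared by such surface flux parameterizations can be sketched as follows (this is not any specific WRF or GCM scheme; the transfer coefficients are placeholders, whereas real schemes make them stability- and roughness-dependent):

```python
# Illustrative bulk-aerodynamic forms of parameterized surface fluxes: momentum,
# sensible-heat and latent-heat fluxes from mean wind speed, air-surface temperature and
# humidity differences, and bulk transfer coefficients. Coefficient values are placeholders.
RHO = 1.2        # air density, kg/m^3
CP = 1004.0      # specific heat of air, J/(kg K)
LV = 2.5e6       # latent heat of vaporization, J/kg

def bulk_fluxes(U, T_sfc, T_air, q_sfc, q_air, Cd=1.3e-3, Ch=1.1e-3, Ce=1.1e-3):
    tau = RHO * Cd * U * U                    # momentum flux (N/m^2)
    H = RHO * CP * Ch * U * (T_sfc - T_air)   # sensible heat flux (W/m^2)
    LE = RHO * LV * Ce * U * (q_sfc - q_air)  # latent heat flux (W/m^2)
    return tau, H, LE

print(bulk_fluxes(U=5.0, T_sfc=303.0, T_air=300.0, q_sfc=0.022, q_air=0.015))
```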
Endalamaw, Abraham; Bolton, W. Robert; Young-Robertson, Jessica M.; ...
2017-09-14
Modeling hydrological processes in the Alaskan sub-arctic is challenging because of the extreme spatial heterogeneity in soil properties and vegetation communities. Nevertheless, modeling and predicting hydrological processes is critical in this region due to its vulnerability to the effects of climate change. Coarse-spatial-resolution datasets used in land surface modeling pose a new challenge in simulating the spatially distributed and basin-integrated processes since these datasets do not adequately represent the small-scale hydrological, thermal, and ecological heterogeneity. The goal of this study is to improve the prediction capacity of mesoscale to large-scale hydrological models by introducing a small-scale parameterization scheme, which better represents the spatial heterogeneity of soil properties and vegetation cover in the Alaskan sub-arctic. The small-scale parameterization schemes are derived from observations and a sub-grid parameterization method in the two contrasting sub-basins of the Caribou Poker Creek Research Watershed (CPCRW) in Interior Alaska: one nearly permafrost-free (LowP) sub-basin and one permafrost-dominated (HighP) sub-basin. The sub-grid parameterization method used in the small-scale parameterization scheme is derived from the watershed topography. We found that observed soil thermal and hydraulic properties – including the distribution of permafrost and vegetation cover heterogeneity – are better represented in the sub-grid parameterization method than the coarse-resolution datasets. Parameters derived from the coarse-resolution datasets and from the sub-grid parameterization method are implemented into the variable infiltration capacity (VIC) mesoscale hydrological model to simulate runoff, evapotranspiration (ET), and soil moisture in the two sub-basins of the CPCRW. Simulated hydrographs based on the small-scale parameterization capture most of the peak and low flows, with similar accuracy in both sub-basins, compared to simulated hydrographs based on the coarse-resolution datasets. On average, the small-scale parameterization scheme improves the total runoff simulation by up to 50 % in the LowP sub-basin and by up to 10 % in the HighP sub-basin from the large-scale parameterization. This study shows that the proposed sub-grid parameterization method can be used to improve the performance of mesoscale hydrological models in the Alaskan sub-arctic watersheds.
Extensions and applications of a second-order landsurface parameterization
NASA Technical Reports Server (NTRS)
Andreou, S. A.; Eagleson, P. S.
1983-01-01
Extensions and applications of a second-order land surface parameterization proposed by Andreou and Eagleson are developed. Procedures for evaluating the near-surface storage depth used in one-cell land surface parameterizations are suggested and tested by using the model. A sensitivity analysis with respect to the key soil parameters is performed. A case study involving comparison with an "exact" numerical model and another simplified parameterization, under very dry climatic conditions and for two different soil types, is also incorporated.
NASA Technical Reports Server (NTRS)
Stauffer, David R.; Seaman, Nelson L.; Munoz, Ricardo C.
2000-01-01
The objective of this investigation was to study the role of shallow convection in the regional water cycle of the Mississippi and Little Washita Basins using a 3-D mesoscale model, the PSU/NCAR MM5. The underlying premise of the project was that current modeling of regional-scale climate and moisture cycles over the continents is deficient without adequate treatment of shallow convection. It was hypothesized that an improved treatment of the regional water cycle can be achieved by using a 3-D mesoscale numerical model having a detailed land-surface parameterization, an advanced boundary-layer parameterization, and a more complete shallow convection parameterization than are available in most current models. The methodology was based on the application in the MM5 of new or recently improved parameterizations covering these three physical processes. Therefore, the work plan focused on integrating, improving, and testing these parameterizations in the MM5 and applying them to study water-cycle processes over the Southern Great Plains (SGP): (1) the Parameterization for Land-Atmosphere-Cloud Exchange (PLACE) described by Wetzel and Boone; (2) the 1.5-order turbulent kinetic energy (TKE)-predicting scheme of Shafran et al.; and (3) the hybrid-closure sub-grid shallow convection parameterization of Deng. Each of these schemes has been tested extensively through this study and the latter two have been improved significantly to extend their capabilities.
NASA Astrophysics Data System (ADS)
Belmont, P.; Viparelli, E.; Parker, G.; Lauer, W.; Jennings, C.; Gran, K.; Wilcock, P.; Melesse, A.
2008-12-01
Modeling sediment fluxes and pathways in complex landscapes is limited by our inability to accurately measure and integrate heterogeneous, spatially distributed sources into a single coherent, predictive geomorphic transport law. In this study, we partition the complex landscape of the Le Sueur River watershed into five distributed primary source types: bluffs (including strath terrace caps), ravines, streambanks, tributaries, and flat, agriculture-dominated uplands. The sediment contribution of each source is quantified independently and parameterized for use in a sand and mud routing model. Rigorous modeling of the evolution of this landscape and sediment flux from each source type requires consideration of substrate characteristics, heterogeneity, and spatial connectivity. The subsurface architecture of the Le Sueur drainage basin is defined by a layer cake sequence of fine-grained tills, interbedded with fluvioglacial sands. Nearly instantaneous baselevel fall of 65 m occurred at 11.5 ka, as a result of the catastrophic draining of glacial Lake Agassiz through the Minnesota River, to which the Le Sueur is a tributary. The major knickpoint that was generated from that event has propagated 40 km into the Le Sueur network, initiating an incised river valley with tall, retreating bluffs and actively incising ravines. Loading estimates constrained by river gaging records that bound the knick zone indicate that bluffs connected to the river are retreating at an average rate of less than 2 cm per year and ravines are incising at an average rate of less than 0.8 mm per year, consistent with the Holocene average incision rate on the main stem of the river of less than 0.6 mm per year. Ongoing work with cosmogenic nuclide sediment tracers, ground-based LiDAR, historic aerial photos, and field mapping will be combined to represent the diversity of erosional environments and processes in a single coherent routing model.
A Simple Parameterization of 3 x 3 Magic Squares
ERIC Educational Resources Information Center
Trenkler, Gotz; Schmidt, Karsten; Trenkler, Dietrich
2012-01-01
In this article a new parameterization of magic squares of order three is presented. This parameterization permits an easy computation of their inverses, eigenvalues, eigenvectors and adjoints. Some attention is paid to the Luoshu, one of the oldest magic squares.
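A standard three-parameter description of 3x3 magic squares, which may differ in detail from the parameterization of the article, makes such computations easy to reproduce: every such square with centre value c can be written with two free off-centre parameters a and b so that all rows, columns, and diagonals sum to 3c by construction.

```python
# A standard three-parameter form of 3x3 magic squares (possibly differing from the
# article's exact parameterization):
#   [[c+a, c-a-b, c+b], [c-a+b, c, c+a-b], [c-b, c+a+b, c-a]]
# Every row, column and diagonal sums to 3c by construction.
import numpy as np

def magic_square(c, a, b):
    return np.array([[c + a,     c - a - b, c + b],
                     [c - a + b, c,         c + a - b],
                     [c - b,     c + a + b, c - a]])

M = magic_square(c=5, a=-1, b=-3)   # these values reproduce the classical Luoshu square
print(M)
print(M.sum(axis=0), M.sum(axis=1), np.trace(M), np.trace(np.fliplr(M)))
```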
NASA Astrophysics Data System (ADS)
Guo, X.; Yang, K.; Yang, W.; Li, S.; Long, Z.
2011-12-01
We present a field investigation over a melting valley glacier on the Tibetan Plateau. One particular aspect is that three melt phases are distinguished during the glacier's ablation season, which enables us to compare results over snow, bare-ice, and hummocky surfaces [with aerodynamic roughness lengths (z0M) varying on the order of 10⁻⁴ to 10⁻² m]. We address two issues of common concern in the study of glacio-meteorology and micrometeorology. First, we study turbulent energy flux estimation through a critical evaluation of three parameterizations of the scalar roughness lengths (z0T for temperature and z0q for humidity), viz. key factors for the accurate estimation of sensible heat and latent heat fluxes using the bulk aerodynamic method. The first approach (Andreas 1987, Boundary-Layer Meteorol 38:159-184) is based on surface-renewal models and has been very widely applied in glaciated areas; the second (Yang et al. 2002, Q J Roy Meteorol Soc 128:2073-2087) has never received application over an ice/snow surface, despite its validity in arid regions; the third approach (Smeets and van den Broeke 2008, Boundary-Layer Meteorol 128:339-355) is proposed for use specifically over rough ice defined as z0M > 10⁻³ m or so. This empirical z0M threshold value is deemed of general relevance to glaciated areas (e.g. ice sheet/cap and valley/outlet glaciers), above which the first approach gives underestimated z0T and z0q. The first and the third approaches tend to underestimate and overestimate turbulent heat/moisture exchange, respectively (relative errors often > 30%). Overall, the second approach produces fairly low errors in energy flux estimates; it thus emerges as a practically useful choice to parameterize z0T and z0q over an ice/snow surface. Our evaluation of z0T and z0q parameterizations hopefully serves as a useful source of reference for physically based modeling of land-ice surface energy budget and mass balance. Second, we explore how scalar turbulence behaves in the glacier winds, based on the turbulent fluctuations of temperature (T'), and water vapor (q') and CO2 (c') concentrations. This dataset is well suited to analyses of turbulent scalar similarity, because the source/sink distribution of scalars is uniform over an ice/snow surface. The new findings are: (1) T' and q' can be highly correlated, even when sensible heat and latent heat fluxes are in opposite directions. - The same direction of scalar fluxes is not a necessary condition for high scalar correlation. (2) The vertical transport efficiency of T' is always higher than that of q'. - The Bowen ratio (|β| > 1) is one factor underlying the T'-to-q' transport efficiency in stable conditions as well. (3) We provide confirmatory evidence of Detto and Katul's (Boundary-Layer Meteorol 122:205-216) original argument: density effect correction to q' and c' is necessary for eddy-covariance analyses of turbulence structure.
NASA Astrophysics Data System (ADS)
Berloff, P. S.
2016-12-01
This work aims at developing a framework for dynamically consistent parameterization of mesoscale eddy effects for use in non-eddy-resolving ocean circulation models. The proposed eddy parameterization framework is successfully tested on the classical, wind-driven double-gyre model, which is solved both with explicitly resolved vigorous eddy field and in the non-eddy-resolving configuration with the eddy parameterization replacing the eddy effects. The parameterization focuses on the effect of the stochastic part of the eddy forcing that backscatters and induces eastward jet extension of the western boundary currents and its adjacent recirculation zones. The parameterization locally approximates transient eddy flux divergence by spatially localized and temporally periodic forcing, referred to as the plunger, and focuses on the linear-dynamics flow solution induced by it. The nonlinear self-interaction of this solution, referred to as the footprint, characterizes and quantifies the induced eddy forcing exerted on the large-scale flow. We find that spatial pattern and amplitude of each footprint strongly depend on the underlying large-scale flow, and the corresponding relationships provide the basis for the eddy parameterization and its closure on the large-scale flow properties. Dependencies of the footprints on other important parameters of the problem are also systematically analyzed. The parameterization utilizes the local large-scale flow information, constructs and scales the corresponding footprints, and then sums them up over the gyres to produce the resulting eddy forcing field, which is interactively added to the model as an extra forcing. Thus, the assumed ensemble of plunger solutions can be viewed as a simple model for the cumulative effect of the stochastic eddy forcing. The parameterization framework is implemented in the simplest way, but it provides a systematic strategy for improving the implementation algorithm.
NASA Astrophysics Data System (ADS)
Vorobyov, E. I.
2010-01-01
We study numerically the applicability of the effective-viscosity approach for simulating the effect of gravitational instability (GI) in disks of young stellar objects with different disk-to-star mass ratios ξ. We adopt two α-parameterizations for the effective viscosity based on Lin and Pringle [Lin, D.N.C., Pringle, J.E., 1990. ApJ 358, 515] and Kratter et al. [Kratter, K.M., Matzner, Ch.D., Krumholz, M.R., 2008. ApJ 681, 375] and compare the resultant disk structure, disk and stellar masses, and mass accretion rates with those obtained directly from numerical simulations of self-gravitating disks around low-mass (M∗ ∼ 1.0 M⊙) protostars. We find that the effective viscosity can, in principle, simulate the effect of GI in stellar systems with ξ ≲ 0.2-0.3, thus corroborating a similar conclusion by Lodato and Rice [Lodato, G., Rice, W.K.M., 2004. MNRAS 351, 630] that was based on a different α-parameterization. In particular, the Kratter et al. α-parameterization has proven superior to that of Lin and Pringle, because the success of the latter depends crucially on the proper choice of the α-parameter. However, the α-parameterization generally fails in stellar systems with ξ ≳ 0.3, particularly in the Classes 0 and I phases of stellar evolution, yielding too small stellar masses and too large disk-to-star mass ratios. In addition, the time-averaged mass accretion rates onto the star are underestimated in the early disk evolution and greatly overestimated in the late evolution. The failure of the α-parameterization in the case of large ξ is caused by a growing strength of low-order spiral modes in massive disks. Only in the late Class II phase, when the magnitude of spiral modes diminishes and the mode-to-mode interaction ensues, may the effective viscosity be used to simulate the effect of GI in stellar systems with ξ ≳ 0.3. A simple modification of the effective viscosity that takes into account disk fragmentation can somewhat improve the performance of α-models in the case of large ξ and even approximately reproduce the mass accretion burst phenomenon, the latter being a signature of the early gravitationally unstable stage of stellar evolution [Vorobyov, E.I., Basu, S., 2006. ApJ 650, 956]. However, further numerical experiments are needed to explore this issue.
NASA Technical Reports Server (NTRS)
Stone, Peter H.; Yao, Mao-Sung
1990-01-01
A number of perpetual January simulations are carried out with a two-dimensional zonally averaged model employing various parameterizations of the eddy fluxes of heat (potential temperature) and moisture. The parameterizations are evaluated by comparing these results with the eddy fluxes calculated in a parallel simulation using a three-dimensional general circulation model with zonally symmetric forcing. The three-dimensional model's performance in turn is evaluated by comparing its results using realistic (nonsymmetric) boundary conditions with observations. Branscome's parameterization of the meridional eddy flux of heat and Leovy's parameterization of the meridional eddy flux of moisture simulate the seasonal and latitudinal variations of these fluxes reasonably well, while somewhat underestimating their magnitudes. New parameterizations of the vertical eddy fluxes are developed that take into account the enhancement of the eddy mixing slope in a growing baroclinic wave due to condensation, and also the effect of eddy fluctuations in relative humidity. The new parameterizations, when tested in the two-dimensional model, simulate the seasonal, latitudinal, and vertical variations of the vertical eddy fluxes quite well, when compared with the three-dimensional model, and only underestimate the magnitude of the fluxes by 10 to 20 percent.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xiao, Heng; Gustafson, Jr., William I.; Hagos, Samson M.
2015-04-18
To better understand the behavior of quasi-equilibrium-based convection parameterizations at higher resolution, we use a diagnostic framework to examine the resolution-dependence of subgrid-scale vertical transport of moist static energy as parameterized by the Zhang-McFarlane convection parameterization (ZM). Grid-scale input to ZM is supplied by coarsening output from cloud-resolving model (CRM) simulations onto subdomains ranging in size from 8 × 8 to 256 × 256 km².
A Numerical Optimization Approach for Tuning Fuzzy Logic Controllers
NASA Technical Reports Server (NTRS)
Woodard, Stanley E.; Garg, Devendra P.
1998-01-01
This paper develops a method to tune fuzzy controllers using numerical optimization. The main attribute of this approach is that it allows fuzzy logic controllers to be tuned to achieve global performance requirements. Furthermore, this approach allows design constraints to be implemented during the tuning process. The method tunes the controller by parameterizing the membership functions for error, change-in-error, and control output. The resulting parameters form a design vector that is iteratively adjusted to minimize an objective function; minimizing the objective function yields optimal system performance. Line-of-sight pointing control of a spacecraft-mounted science instrument is used to demonstrate the method.
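The tuning loop described above is, in outline, standard parametric optimization. The following is a minimal sketch under assumed simplifications (a Sugeno-style controller with triangular memberships, a toy first-order plant, and an integrated-squared-error objective), not the authors' implementation:

import numpy as np
from scipy.optimize import minimize

def tri(x, a, b, c):
    # Triangular membership function with feet a, c and peak b.
    return max(min((x - a) / (b - a + 1e-9), (c - x) / (c - b + 1e-9)), 0.0)

def controller(error, p):
    # Three rules (negative / zero / positive error) with tunable membership peaks p[0:3]
    # and tunable output singletons p[3:6]; weighted-average defuzzification.
    n, z, pos = p[0], p[1], p[2]
    w = np.array([tri(error, n - 1.0, n, z),
                  tri(error, n, z, pos),
                  tri(error, z, pos, pos + 1.0)])
    return float(np.dot(w, p[3:6]) / (w.sum() + 1e-9))

def objective(p):
    # Integrated squared tracking error of a toy first-order plant under the fuzzy controller.
    x, dt, cost = 0.0, 0.05, 0.0
    for _ in range(200):
        e = 1.0 - x                 # track a unit step
        u = controller(e, p)
        x += dt * (-x + u)          # toy plant: dx/dt = -x + u
        cost += dt * e ** 2
    return cost

p0 = np.array([-1.0, 0.0, 1.0, -1.0, 0.0, 1.0])       # initial design vector
res = minimize(objective, p0, method="Nelder-Mead")   # a real tuner would also enforce ordering constraints
print(res.x, res.fun)

Design constraints of the kind mentioned in the abstract could be handled by penalty terms in the objective or by a constrained optimizer.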
A preference-ordered discrete-gaming approach to air-combat analysis
NASA Technical Reports Server (NTRS)
Kelley, H. J.; Lefton, L.
1978-01-01
An approach to one-on-one air-combat analysis is described which employs discrete gaming of a parameterized model featuring choice between several closed-loop control policies. A preference-ordering formulation due to Falco is applied to rational choice between outcomes: win, loss, mutual capture, purposeful disengagement, draw. Approximate optimization is provided by an active-cell scheme similar to Falco's obtained by a 'backing up' process similar to that of Kopp. The approach is designed primarily for short-duration duels between craft with large-envelope weaponry. Some illustrative computations are presented for an example modeled using constant-speed vehicles and very rough estimation of energy shifts.
Uncertainty quantification and global sensitivity analysis of the Los Alamos sea ice model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Urrego-Blanco, Jorge Rolando; Urban, Nathan Mark; Hunke, Elizabeth Clare
Changes in the high-latitude climate system have the potential to affect global climate through feedbacks with the atmosphere and connections with midlatitudes. Sea ice and climate models used to understand these changes have uncertainties that need to be characterized and quantified. We present a quantitative way to assess uncertainty in complex computer models, which is a new approach in the analysis of sea ice models. We characterize parametric uncertainty in the Los Alamos sea ice model (CICE) in a standalone configuration and quantify the sensitivity of sea ice area, extent, and volume with respect to uncertainty in 39 individual model parameters. Unlike common sensitivity analyses conducted in previous studies where parameters are varied one at a time, this study uses a global variance-based approach in which Sobol' sequences are used to efficiently sample the full 39-dimensional parameter space. We implement a fast emulator of the sea ice model whose predictions of sea ice extent, area, and volume are used to compute the Sobol' sensitivity indices of the 39 parameters. Main effects and interactions among the most influential parameters are also estimated by a nonparametric regression technique based on generalized additive models. A ranking based on the sensitivity indices indicates that model predictions are most sensitive to snow parameters such as snow conductivity and grain size, and the drainage of melt ponds. Lastly, it is recommended that research be prioritized toward more accurately determining these most influential parameter values by observational studies or by improving parameterizations in the sea ice model.
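As a concrete illustration of the variance-based workflow described above (not the study's code: the 39 CICE parameters and the sea-ice emulator are replaced by three illustrative parameters and an analytic stand-in), Sobol' indices can be computed with the SALib package:

import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

problem = {
    "num_vars": 3,
    "names": ["snow_conductivity", "snow_grain_size", "meltpond_drainage"],  # illustrative names
    "bounds": [[0.1, 0.5], [50.0, 500.0], [0.0, 1.0]],
}

X = saltelli.sample(problem, 1024)            # Sobol'-sequence based sampling of the parameter space

def emulator(x):                              # stand-in for the fast sea-ice emulator
    return 2.0 * x[:, 0] + 0.01 * x[:, 1] + x[:, 0] * x[:, 2]

Y = emulator(X)
Si = sobol.analyze(problem, Y)
print(Si["S1"])    # first-order indices (main effects)
print(Si["ST"])    # total-order indices (main effects plus interactions)

First-order indices capture main effects, while total-order indices also include interactions among parameters.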
NASA Astrophysics Data System (ADS)
Peng, Bo; Blackman, Eric
2018-01-01
Closely interacting binary stars can incur Common Envelope Evolution (CEE) when at least one of the stars enters a giant phase. The extent to which CEE leads to envelope ejection, and how tight the binaries become after CEE as a function of the mass and type of the companion stars, has a broad range of phenomenological implications for both low-mass and high-mass binary stellar systems. Global simulations of CEE are emerging, but to understand the underlying physics of CEE and make connections with analytic formalisms, it is helpful to employ reduced numerical models. Here we present results and analyses from simulations of gravitational drag using a Cartesian approach. Using AstroBEAR, a parallelized hydrodynamic/MHD simulation code, we simulate a system in which a 0.1 MSun main sequence secondary star is embedded in gas characteristic of the envelope of a 3 MSun AGB star. The relative motion of the secondary star against the stationary envelope is represented by a supersonic wind that immerses a point particle, which is initially at rest but is gradually dragged by the wind. Our approach differs from previous related wind-tunnel work by MacLeod et al. (2015, 2017) in that we allow the particle to be displaced, offering a direct measurement of the drag force from its motion. We verify the validity of our method, extract the accretion rate of material in the wake via numerical integration, and compare the results between our method and previous work. We also use the results to help constrain the efficiency parameter in widely used analytic parameterizations of CEE.
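For orientation (standard analytic reference scalings, not results from the simulations above): the directly measured drag can be compared against Bondi-Hoyle-Lyttleton values, in which a secondary of mass M_2 moving at relative speed v through gas of density ρ has

R_a = \frac{2 G M_2}{v^2}, \qquad F_{\mathrm{drag}} \sim \pi R_a^2\, \rho\, v^2 = \frac{4 \pi (G M_2)^2 \rho}{v^2}, \qquad \dot{M} \sim \pi R_a^2\, \rho\, v,

with order-unity coefficients (and logarithmic corrections to the dynamical-friction force) that simulations such as these are designed to calibrate.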
Gahm, Jin Kyu; Shi, Yonggang
2018-05-01
Surface mapping methods play an important role in various brain imaging studies from tracking the maturation of adolescent brains to mapping gray matter atrophy patterns in Alzheimer's disease. Popular surface mapping approaches based on spherical registration, however, have inherent numerical limitations when severe metric distortions are present during the spherical parameterization step. In this paper, we propose a novel computational framework for intrinsic surface mapping in the Laplace-Beltrami (LB) embedding space based on Riemannian metric optimization on surfaces (RMOS). Given a diffeomorphism between two surfaces, an isometry can be defined using the pullback metric, which in turn results in identical LB embeddings from the two surfaces. The proposed RMOS approach builds upon this mathematical foundation and achieves general feature-driven surface mapping in the LB embedding space by iteratively optimizing the Riemannian metric defined on the edges of triangular meshes. At the core of our framework is an optimization engine that converts an energy function for surface mapping into a distance measure in the LB embedding space, which can be effectively optimized using gradients of the LB eigen-system with respect to the Riemannian metrics. In the experimental results, we compare the RMOS algorithm with spherical registration using large-scale brain imaging data, and show that RMOS achieves superior performance in the prediction of hippocampal subfields and cortical gyral labels, and the holistic mapping of striatal surfaces for the construction of a striatal connectivity atlas from substantia nigra. Copyright © 2018 Elsevier B.V. All rights reserved.
Uncertainty quantification and global sensitivity analysis of the Los Alamos sea ice model
Urrego-Blanco, Jorge Rolando; Urban, Nathan Mark; Hunke, Elizabeth Clare; ...
2016-04-01
Changes in the high-latitude climate system have the potential to affect global climate through feedbacks with the atmosphere and connections with midlatitudes. Sea ice and climate models used to understand these changes have uncertainties that need to be characterized and quantified. We present a quantitative way to assess uncertainty in complex computer models, which is a new approach in the analysis of sea ice models. We characterize parametric uncertainty in the Los Alamos sea ice model (CICE) in a standalone configuration and quantify the sensitivity of sea ice area, extent, and volume with respect to uncertainty in 39 individual modelmore » parameters. Unlike common sensitivity analyses conducted in previous studies where parameters are varied one at a time, this study uses a global variance-based approach in which Sobol' sequences are used to efficiently sample the full 39-dimensional parameter space. We implement a fast emulator of the sea ice model whose predictions of sea ice extent, area, and volume are used to compute the Sobol' sensitivity indices of the 39 parameters. Main effects and interactions among the most influential parameters are also estimated by a nonparametric regression technique based on generalized additive models. A ranking based on the sensitivity indices indicates that model predictions are most sensitive to snow parameters such as snow conductivity and grain size, and the drainage of melt ponds. Lastly, it is recommended that research be prioritized toward more accurately determining these most influential parameter values by observational studies or by improving parameterizations in the sea ice model.« less
Uncertainty quantification and global sensitivity analysis of the Los Alamos sea ice model
NASA Astrophysics Data System (ADS)
Urrego-Blanco, Jorge R.; Urban, Nathan M.; Hunke, Elizabeth C.; Turner, Adrian K.; Jeffery, Nicole
2016-04-01
Changes in the high-latitude climate system have the potential to affect global climate through feedbacks with the atmosphere and connections with midlatitudes. Sea ice and climate models used to understand these changes have uncertainties that need to be characterized and quantified. We present a quantitative way to assess uncertainty in complex computer models, which is a new approach in the analysis of sea ice models. We characterize parametric uncertainty in the Los Alamos sea ice model (CICE) in a standalone configuration and quantify the sensitivity of sea ice area, extent, and volume with respect to uncertainty in 39 individual model parameters. Unlike common sensitivity analyses conducted in previous studies where parameters are varied one at a time, this study uses a global variance-based approach in which Sobol' sequences are used to efficiently sample the full 39-dimensional parameter space. We implement a fast emulator of the sea ice model whose predictions of sea ice extent, area, and volume are used to compute the Sobol' sensitivity indices of the 39 parameters. Main effects and interactions among the most influential parameters are also estimated by a nonparametric regression technique based on generalized additive models. A ranking based on the sensitivity indices indicates that model predictions are most sensitive to snow parameters such as snow conductivity and grain size, and the drainage of melt ponds. It is recommended that research be prioritized toward more accurately determining these most influential parameter values by observational studies or by improving parameterizations in the sea ice model.
A test harness for accelerating physics parameterization advancements into operations
NASA Astrophysics Data System (ADS)
Firl, G. J.; Bernardet, L.; Harrold, M.; Henderson, J.; Wolff, J.; Zhang, M.
2017-12-01
The process of transitioning advances in parameterization of sub-grid scale processes from initial idea to implementation is often much quicker than the transition from implementation to use in an operational setting. After all, considerable work must be undertaken by operational centers to fully test, evaluate, and implement new physics. The process is complicated by the scarcity of like-to-like comparisons, availability of HPC resources, and the "tuning problem" whereby advances in physics schemes are difficult to properly evaluate without first undertaking the expensive and time-consuming process of tuning to other schemes within a suite. To address this process shortcoming, the Global Model TestBed (GMTB), supported by the NWS NGGPS project and undertaken by the Developmental Testbed Center, has developed a physics test harness. It implements the concept of hierarchical testing, where the same code can be tested in model configurations of varying complexity from single-column models (SCM) to fully coupled, cycled global simulations. Developers and users may choose at which level of complexity to engage. Several components of the physics test harness have been implemented, including an SCM and an end-to-end workflow that expands upon the one used at NOAA/EMC to run the GFS operationally, although the testbed components will necessarily morph to coincide with changes to the operational configuration (FV3-GFS). A standard, relatively user-friendly interface known as the Interoperable Physics Driver (IPD) is available for physics developers to connect their codes. This prerequisite exercise allows access to the testbed tools and removes a technical hurdle for potential inclusion into the Common Community Physics Package (CCPP). The testbed offers users the opportunity to conduct like-to-like comparisons between the operational physics suite and new development as well as among multiple developments. GMTB staff have demonstrated use of the testbed through a comparison between the 2017 operational GFS suite and one containing the Grell-Freitas convective parameterization. An overview of the physics test harness and its early use will be presented.
Parameterizing by the Number of Numbers
NASA Astrophysics Data System (ADS)
Fellows, Michael R.; Gaspers, Serge; Rosamond, Frances A.
The usefulness of parameterized algorithmics has often depended on what Niedermeier has called "the art of problem parameterization". In this paper we introduce and explore a novel but general form of parameterization: the number of numbers. Several classic numerical problems, such as Subset Sum, Partition, 3-Partition, Numerical 3-Dimensional Matching, and Numerical Matching with Target Sums, have multisets of integers as input. We initiate the study of parameterizing these problems by the number of distinct integers in the input. We rely on an FPT result for Integer Linear Programming Feasibility to show that all the above-mentioned problems are fixed-parameter tractable when parameterized in this way. In various applied settings, problem inputs often consist in part of multisets of integers or multisets of weighted objects (such as edges in a graph, or jobs to be scheduled). Such number-of-numbers parameterized problems often reduce to subproblems about transition systems of various kinds, parameterized by the size of the system description. We consider several core problems of this kind relevant to number-of-numbers parameterization. Our main hardness result considers the problem: given a non-deterministic Mealy machine M (a finite state automaton outputting a letter on each transition), an input word x, and a census requirement c for the output word specifying how many times each letter of the output alphabet should be written, decide whether there exists a computation of M reading x that outputs a word y that meets the requirement c. We show that this problem is hard for W[1]. If the question is whether there exists an input word x such that a computation of M on x outputs a word that meets c, the problem becomes fixed-parameter tractable.
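A toy illustration of the parameter itself (not the paper's algorithm, which obtains fixed-parameter tractability via Integer Linear Programming feasibility): Subset Sum with the input compressed into (value, multiplicity) pairs, so the search is over how many copies of each of the k distinct values to take.

from collections import Counter
from itertools import product

def subset_sum_by_distinct_values(multiset, target):
    # Compress the multiset into k distinct values with multiplicities, then search
    # over how many copies of each distinct value to include. Plain enumeration is
    # used only as a sketch; the FPT result in k goes through ILP feasibility instead.
    counts = Counter(multiset)
    values, mults = zip(*counts.items())
    for picks in product(*(range(m + 1) for m in mults)):
        if sum(v * c for v, c in zip(values, picks)) == target:
            return dict(zip(values, picks))    # how many copies of each value are taken
    return None

print(subset_sum_by_distinct_values([5, 5, 5, 3, 3, 7], 16))   # e.g. {5: 2, 3: 2, 7: 0}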
Accelerating the connection between experiments and models: The FACE-MDS experience
NASA Astrophysics Data System (ADS)
Norby, R. J.; Medlyn, B. E.; De Kauwe, M. G.; Zaehle, S.; Walker, A. P.
2014-12-01
The mandate is clear for improving communication between models and experiments to better evaluate terrestrial responses to atmospheric and climatic change. Unfortunately, progress in linking experimental and modeling approaches has been slow and sometimes frustrating. Recent successes in linking results from the Duke and Oak Ridge free-air CO2 enrichment (FACE) experiments with ecosystem and land surface models - the FACE Model-Data Synthesis (FACE-MDS) project - came only after a period of slow progress, but the experience points the way to future model-experiment interactions. As the FACE experiments were approaching their termination, the FACE research community made an explicit attempt to work together with the modeling community to synthesize and deliver experimental data to benchmark models and to use models to supply appropriate context for the experimental results. Initial problems that impeded progress were: measurement protocols were not consistent across different experiments; data were not well organized for model input; and parameterizing and spinning up models that were not designed for simulating a specific site was difficult. Once these problems were worked out, the FACE-MDS project has been very successful in using data from the Duke and ORNL FACE experiment to test critical assumptions in the models. The project showed, for example, that the stomatal conductance model most widely used in models was supported by experimental data, but models did not capture important responses such as increased leaf mass per unit area in elevated CO2, and did not appropriately represent foliar nitrogen allocation. We now have an opportunity to learn from this experience. New FACE experiments that have recently been initiated, or are about to be initiated, include a eucalyptus forest in Australia; the AmazonFACE experiment in a primary, tropical forest in Brazil; and a mature oak woodland in England. Cross-site science questions are being developed that will have a strong modeling framework, and modelers and experimentalists will work to establish common measurement protocols and data format. By starting the model-experiment connection early and learning from our past experiences, we expect to significantly shorten the time lags between advances in process-oriented studies and large-scale models.
Parameterized Cross Sections for Pion Production in Proton-Proton Collisions
NASA Technical Reports Server (NTRS)
Blattnig, Steve R.; Swaminathan, Sudha R.; Kruger, Adam T.; Ngom, Moussa; Norbury, John W.; Tripathi, R. K.
2000-01-01
An accurate knowledge of cross sections for pion production in proton-proton collisions finds wide application in particle physics, astrophysics, cosmic ray physics, and space radiation problems, especially in situations where an incident proton is transported through some medium and knowledge of the output particle spectrum is required when given the input spectrum. In these cases, accurate parameterizations of the cross sections are desired. In this paper much of the experimental data are reviewed and compared with a wide variety of different cross section parameterizations. Therefore, parameterizations of neutral and charged pion cross sections are provided that give a very accurate description of the experimental data. Lorentz invariant differential cross sections, spectral distributions, and total cross section parameterizations are presented.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Larson, Vincent; Gettelman, Andrew; Morrison, Hugh
In state-of-the-art climate models, each cloud type is treated using its own separate cloud parameterization and its own separate microphysics parameterization. This use of separate schemes for separate cloud regimes is undesirable because it is theoretically unfounded, it hampers interpretation of results, and it leads to the temptation to overtune parameters. In this grant, we are creating a climate model that contains a unified cloud parameterization and a unified microphysics parameterization. This model will be used to address the problems of excessive frequency of drizzle in climate models and excessively early onset of deep convection in the Tropics over land. The resulting model will be compared with ARM observations.
Mapping behavioral landscapes for animal movement: a finite mixture modeling approach
Tracey, Jeff A.; Zhu, Jun; Boydston, Erin E.; Lyren, Lisa M.; Fisher, Robert N.; Crooks, Kevin R.
2013-01-01
Because of its role in many ecological processes, movement of animals in response to landscape features is an important subject in ecology and conservation biology. In this paper, we develop models of animal movement in relation to objects or fields in a landscape. We take a finite mixture modeling approach in which the component densities are conceptually related to different choices for movement in response to a landscape feature, and the mixing proportions are related to the probability of selecting each response as a function of one or more covariates. We combine particle swarm optimization and an Expectation-Maximization (EM) algorithm to obtain maximum likelihood estimates of the model parameters. We use this approach to analyze data for movement of three bobcats in relation to urban areas in southern California, USA. A behavioral interpretation of the models revealed similarities and differences in bobcat movement response to urbanization. All three bobcats avoided urbanization by moving either parallel to urban boundaries or toward less urban areas as the proportion of urban land cover in the surrounding area increased. However, one bobcat, a male with a dispersal-like large-scale movement pattern, avoided urbanization at lower densities and responded strictly by moving parallel to the urban edge. The other two bobcats, which were both residents and occupied similar geographic areas, avoided urban areas using a combination of movements parallel to the urban edge and movement toward areas of less urbanization. However, the resident female appeared to exhibit greater repulsion at lower levels of urbanization than the resident male, consistent with empirical observations of bobcats in southern California. Using the parameterized finite mixture models, we mapped behavioral states to geographic space, creating a representation of a behavioral landscape. This approach can provide guidance for conservation planning based on analysis of animal movement data using statistical models, thereby linking connectivity evaluations to empirical data.
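The modelling idea can be sketched as follows (an illustrative simplification, not the authors' code: Gaussian components stand in for the movement-response densities, the mixing proportion depends on an urbanization covariate through a logistic link, and a plain EM loop replaces the particle swarm/EM hybrid used in the paper):

import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

rng = np.random.default_rng(0)
urban = rng.uniform(0, 1, 500)                                            # covariate: proportion urban
z = rng.uniform(0, 1, 500) < 1 / (1 + np.exp(-(-2 + 4 * urban)))          # latent response type
y = np.where(z, rng.normal(2.0, 0.5, 500), rng.normal(-1.0, 0.7, 500))    # observed movement metric

theta = dict(a=0.0, b=0.0, mu=[-0.5, 1.0], sd=[1.0, 1.0])
for _ in range(100):                                                      # EM iterations
    p1 = 1 / (1 + np.exp(-(theta["a"] + theta["b"] * urban)))             # P(component 1 | covariate)
    l0 = (1 - p1) * norm.pdf(y, theta["mu"][0], theta["sd"][0])
    l1 = p1 * norm.pdf(y, theta["mu"][1], theta["sd"][1])
    r = l1 / (l0 + l1)                                                    # E-step: responsibilities
    for k, w in ((0, 1 - r), (1, r)):                                     # M-step: weighted Gaussian updates
        theta["mu"][k] = np.sum(w * y) / np.sum(w)
        theta["sd"][k] = np.sqrt(np.sum(w * (y - theta["mu"][k])**2) / np.sum(w))
    def nll(ab):                                                          # M-step: logistic model for mixing
        eta = ab[0] + ab[1] * urban
        return -np.sum(r * eta - np.log1p(np.exp(eta)))
    theta["a"], theta["b"] = minimize(nll, [theta["a"], theta["b"]]).x
print(theta)

Mapping the fitted mixing proportions back onto geographic space is what produces the "behavioral landscape" described above.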
Ice-nucleating particle emissions from photochemically aged diesel and biodiesel exhaust
NASA Astrophysics Data System (ADS)
Schill, G. P.; Jathar, S. H.; Kodros, J. K.; Levin, E. J. T.; Galang, A. M.; Friedman, B.; Link, M. F.; Farmer, D. K.; Pierce, J. R.; Kreidenweis, S. M.; DeMott, P. J.
2016-05-01
Immersion-mode ice-nucleating particle (INP) concentrations from an off-road diesel engine were measured using a continuous-flow diffusion chamber at -30°C. Both petrodiesel and biodiesel were utilized, and the exhaust was aged up to 1.5 photochemically equivalent days using an oxidative flow reactor. We found that aged and unaged diesel exhaust of both fuels is not likely to contribute to atmospheric INP concentrations at mixed-phase cloud conditions. To explore this further, a new limit-of-detection parameterization for ice nucleation on diesel exhaust was developed. Using a global-chemical transport model, potential black carbon INP (INPBC) concentrations were determined using a current literature INPBC parameterization and the limit-of-detection parameterization. Model outputs indicate that the current literature parameterization likely overemphasizes INPBC concentrations, especially in the Northern Hemisphere. These results highlight the need to integrate new INPBC parameterizations into global climate models as generalized INPBC parameterizations are not valid for diesel exhaust.
Radiative flux and forcing parameterization error in aerosol-free clear skies
Pincus, Robert; Mlawer, Eli J.; Oreopoulos, Lazaros; ...
2015-07-03
This article reports on the accuracy in aerosol- and cloud-free conditions of the radiation parameterizations used in climate models. Accuracy is assessed relative to observationally validated reference models for fluxes under present-day conditions and forcing (flux changes) from quadrupled concentrations of carbon dioxide. Agreement among reference models is typically within 1 W/m², while parameterized calculations are roughly half as accurate in the longwave and even less accurate, and more variable, in the shortwave. Absorption of shortwave radiation is underestimated by most parameterizations in the present day and has relatively large errors in forcing. Error in present-day conditions is essentially unrelated to error in forcing calculations. Recent revisions to parameterizations have reduced error in most cases. A dependence on atmospheric conditions, including integrated water vapor, means that global estimates of parameterization error relevant for the radiative forcing of climate change will require much more ambitious calculations.
NASA Astrophysics Data System (ADS)
Kelly, R. E. J.; Saberi, N.; Li, Q.
2017-12-01
With moderate to high spatial resolution (<1 km) regional to global snow water equivalent (SWE) observation approaches yet to be fully scoped and developed, the long-term satellite passive microwave record remains an important tool for cryosphere-climate diagnostics. A new satellite microwave remote sensing approach is described for estimating snow depth (SD) and snow water equivalent (SWE). The algorithm, called the Satellite-based Microwave Snow Algorithm (SMSA), uses Advanced Microwave Scanning Radiometer - 2 (AMSR2) observations aboard the Global Change Observation Mission - Water mission launched by the Japan Aerospace Exploration Agency in 2012. The approach is unique since it leverages observed brightness temperatures (Tb) with static ancillary data to parameterize a physically-based retrieval without requiring parameter constraints from in situ snow depth observations or historical snow depth climatology. After screening snow from non-snow surface targets (water bodies [including freeze/thaw state], rainfall, high altitude plateau regions [e.g. Tibetan plateau]), moderate and shallow snow depths are estimated by minimizing the difference between Dense Media Radiative Transfer model estimates (Tsang et al., 2000; Picard et al., 2011) and AMSR2 Tb observations to retrieve SWE and SD. Parameterization of the model combines a parsimonious snow grain size and density approach originally developed by Kelly et al. (2003). Evaluation of the SMSA performance is achieved using in situ snow depth data from a variety of standard and experiment data sources. Results presented from winter seasons 2012-13 to 2016-17 illustrate the improved performance of the new approach in comparison with the baseline AMSR2 algorithm estimates and approach the performance of the model assimilation-based approach of GlobSnow. Given the variation in estimation power of SWE by different land surface/climate models and selected satellite-derived passive microwave approaches, SMSA provides SWE estimates that are independent of real or near real-time in situ and model data.
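Structurally, the retrieval step reduces to a small optimization per grid cell. The sketch below assumes a placeholder linear forward model and illustrative channel and brightness-temperature values; the actual algorithm evaluates the Dense Media Radiative Transfer model at the AMSR2 channels:

import numpy as np
from scipy.optimize import minimize_scalar

channels_ghz = np.array([18.7, 36.5])                  # AMSR2 channels commonly used for snow (assumed here)
tb_observed = np.array([245.0, 221.0])                 # example observed brightness temperatures (K), illustrative

def forward_tb(swe_mm):
    # Placeholder for the DMRT forward model: Tb decreases with SWE, more strongly at 36.5 GHz.
    return np.array([260.0, 255.0]) - np.array([0.05, 0.15]) * swe_mm

def cost(swe_mm):
    # Misfit between simulated and observed brightness temperatures.
    return np.sum((forward_tb(swe_mm) - tb_observed)**2)

result = minimize_scalar(cost, bounds=(0.0, 500.0), method="bounded")
print("retrieved SWE (mm):", result.x)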
Quantifying Groundwater Model Uncertainty
NASA Astrophysics Data System (ADS)
Hill, M. C.; Poeter, E.; Foglia, L.
2007-12-01
Groundwater models are characterized by the (a) processes simulated, (b) boundary conditions, (c) initial conditions, (d) method of solving the equation, (e) parameterization, and (f) parameter values. Models are related to the system of concern using data, some of which form the basis of observations used most directly, through objective functions, to estimate parameter values. Here we consider situations in which parameter values are determined by minimizing an objective function. Other methods of model development are not considered because their ad hoc nature generally prohibits clear quantification of uncertainty. Quantifying prediction uncertainty ideally includes contributions from (a) to (f). The parameter values of (f) tend to be continuous with respect to both the simulated equivalents of the observations and the predictions, while many aspects of (a) through (e) are discrete. This fundamental difference means that there are options for evaluating the uncertainty related to parameter values that generally do not exist for other aspects of a model. While the methods available for (a) to (e) can be used for the parameter values (f), the inferential methods uniquely available for (f) generally are less computationally intensive and often can be used to considerable advantage. However, inferential approaches require calculation of sensitivities. Whether the numerical accuracy and stability of the model solution required for accurate sensitivities is more broadly important to other model uses is an issue that needs to be addressed. Alternative global methods can require 100 or even 1,000 times the number of runs needed by inferential methods, though methods of reducing the number of needed runs are being developed and tested. Here we present three approaches for quantifying model uncertainty and investigate their strengths and weaknesses. (1) Represent more aspects as parameters so that the computationally efficient methods can be broadly applied. This approach is attainable through universal model analysis software such as UCODE-2005, PEST, and joint use of these programs, which allow many aspects of a model to be defined as parameters. (2) Use highly parameterized models to quantify aspects of (e). While promising, this approach implicitly includes parameterizations that may be considered unreasonable if investigated explicitly, so that resulting measures of uncertainty may be too large. (3) Use a combination of inferential and global methods that can be facilitated using the new software MMA (Multi-Model Analysis), which is constructed using the JUPITER API. Here we consider issues related to the model discrimination criteria calculated by MMA.
Maurer, K. D.; Bohrer, G.; Kenny, W. T.; ...
2015-04-30
Surface roughness parameters, namely the roughness length and displacement height, are an integral input used to model surface fluxes. However, most models assume these parameters to be a fixed property of plant functional type and disregard the governing structural heterogeneity and dynamics. In this study, we use large-eddy simulations to explore, in silico, the effects of canopy-structure characteristics on surface roughness parameters. We performed a virtual experiment to test the sensitivity of resolved surface roughness to four axes of canopy structure: (1) leaf area index, (2) the vertical profile of leaf density, (3) canopy height, and (4) canopy gap fraction. We found roughness parameters to be highly variable, but uncovered positive relationships between displacement height and maximum canopy height, aerodynamic canopy height and maximum canopy height and leaf area index, and eddy-penetration depth and gap fraction. We also found negative relationships between aerodynamic canopy height and gap fraction, as well as between eddy-penetration depth and maximum canopy height and leaf area index. We generalized our model results into a virtual "biometric" parameterization that relates roughness length and displacement height to canopy height, leaf area index, and gap fraction. Using a decade of wind and canopy-structure observations in a site in Michigan, we tested the effectiveness of our model-driven biometric parameterization approach in predicting the friction velocity over heterogeneous and disturbed canopies. We compared the accuracy of these predictions with the friction-velocity predictions obtained from the common simple approximation related to canopy height, the values calculated with large-eddy simulations of the explicit canopy structure as measured by airborne and ground-based lidar, two other parameterization approaches that utilize varying canopy-structure inputs, and the annual and decadal means of the surface roughness parameters at the site from meteorological observations. We found that the classical representation of constant roughness parameters (in space and time) as a fraction of canopy height performed relatively well. Nonetheless, of the approaches we tested, most of the empirical approaches that incorporate seasonal and interannual variation of roughness length and displacement height as a function of the dynamics of canopy structure produced more precise and less biased estimates for friction velocity than models with temporally invariable parameters.
NASA Astrophysics Data System (ADS)
Maurer, K. D.; Bohrer, G.; Kenny, W. T.; Ivanov, V. Y.
2015-04-01
Surface roughness parameters, namely the roughness length and displacement height, are an integral input used to model surface fluxes. However, most models assume these parameters to be a fixed property of plant functional type and disregard the governing structural heterogeneity and dynamics. In this study, we use large-eddy simulations to explore, in silico, the effects of canopy-structure characteristics on surface roughness parameters. We performed a virtual experiment to test the sensitivity of resolved surface roughness to four axes of canopy structure: (1) leaf area index, (2) the vertical profile of leaf density, (3) canopy height, and (4) canopy gap fraction. We found roughness parameters to be highly variable, but uncovered positive relationships between displacement height and maximum canopy height, aerodynamic canopy height and maximum canopy height and leaf area index, and eddy-penetration depth and gap fraction. We also found negative relationships between aerodynamic canopy height and gap fraction, as well as between eddy-penetration depth and maximum canopy height and leaf area index. We generalized our model results into a virtual "biometric" parameterization that relates roughness length and displacement height to canopy height, leaf area index, and gap fraction. Using a decade of wind and canopy-structure observations in a site in Michigan, we tested the effectiveness of our model-driven biometric parameterization approach in predicting the friction velocity over heterogeneous and disturbed canopies. We compared the accuracy of these predictions with the friction-velocity predictions obtained from the common simple approximation related to canopy height, the values calculated with large-eddy simulations of the explicit canopy structure as measured by airborne and ground-based lidar, two other parameterization approaches that utilize varying canopy-structure inputs, and the annual and decadal means of the surface roughness parameters at the site from meteorological observations. We found that the classical representation of constant roughness parameters (in space and time) as a fraction of canopy height performed relatively well. Nonetheless, of the approaches we tested, most of the empirical approaches that incorporate seasonal and interannual variation of roughness length and displacement height as a function of the dynamics of canopy structure produced more precise and less biased estimates for friction velocity than models with temporally invariable parameters.
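For reference, the "common simple approximation related to canopy height" mentioned above is usually of the following form (the fractions 0.1h and 0.67h are widely used rules of thumb, not necessarily the exact values adopted in the study), combined with the neutral log wind profile to predict friction velocity:

import numpy as np

KAPPA = 0.4                        # von Karman constant

def friction_velocity(U, z_meas, h_canopy, z0_frac=0.1, d_frac=0.67):
    # u* from mean wind speed U (m/s) measured at height z_meas (m) over a canopy of height h_canopy (m),
    # with roughness length and displacement height taken as fixed fractions of canopy height.
    z0 = z0_frac * h_canopy        # roughness length
    d = d_frac * h_canopy          # displacement height
    return KAPPA * U / np.log((z_meas - d) / z0)

print(friction_velocity(U=4.0, z_meas=34.0, h_canopy=20.0))   # example: tower wind above a 20 m canopy

The dynamic parameterizations evaluated in the study replace the fixed fractions with functions of leaf area index and gap fraction.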
[Formula: see text] regularity properties of singular parameterizations in isogeometric analysis.
Takacs, T; Jüttler, B
2012-11-01
Isogeometric analysis (IGA) is a numerical simulation method which is directly based on the NURBS-based representation of CAD models. It exploits the tensor-product structure of 2- or 3-dimensional NURBS objects to parameterize the physical domain. Hence the physical domain is parameterized with respect to a rectangle or to a cube. Consequently, singularly parameterized NURBS surfaces and NURBS volumes are needed in order to represent non-quadrangular or non-hexahedral domains without splitting, thereby producing a very compact and convenient representation. The Galerkin projection introduces finite-dimensional spaces of test functions in the weak formulation of partial differential equations. In particular, the test functions used in isogeometric analysis are obtained by composing the inverse of the domain parameterization with the NURBS basis functions. In the case of singular parameterizations, however, some of the resulting test functions do not necessarily fulfill the required regularity properties. Consequently, numerical methods for the solution of partial differential equations cannot be applied properly. We discuss the regularity properties of the test functions. For one- and two-dimensional domains we consider several important classes of singularities of NURBS parameterizations. For specific cases we derive additional conditions which guarantee the regularity of the test functions. In addition we present a modification scheme for the discretized function space in case of insufficient regularity. It is also shown how these results can be applied for computational domains in higher dimensions that can be parameterized via sweeping.
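As a point of reference (a hedged sketch in standard IGA notation, not taken verbatim from the paper): with the physical domain Ω = G(Ω̂) parameterized over the unit square or cube Ω̂ by a NURBS map G, and NURBS basis functions R_i defined on Ω̂, the isogeometric test functions are

\varphi_i = R_i \circ G^{-1}, \qquad \nabla \varphi_i = \big( (DG)^{-\mathsf{T}} \nabla R_i \big) \circ G^{-1},

so that, for a second-order problem, the Galerkin method needs φ_i ∈ H¹(Ω), i.e.

\int_{\hat{\Omega}} \big| (DG)^{-\mathsf{T}} \nabla R_i \big|^2 \, |\det DG| \, d\xi < \infty,

a condition that can fail near parameter points where det DG = 0, which is exactly the singular case analyzed above.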
Parameterization Interactions in Global Aquaplanet Simulations
NASA Astrophysics Data System (ADS)
Bhattacharya, Ritthik; Bordoni, Simona; Suselj, Kay; Teixeira, João.
2018-02-01
Global climate simulations rely on parameterizations of physical processes that have scales smaller than the resolved ones. In the atmosphere, these parameterizations represent moist convection, boundary layer turbulence and convection, cloud microphysics, longwave and shortwave radiation, and the interaction with the land and ocean surface. These parameterizations can generate different climates involving a wide range of interactions among parameterizations and between the parameterizations and the resolved dynamics. To gain a simplified understanding of a subset of these interactions, we perform aquaplanet simulations with the global version of the Weather Research and Forecasting (WRF) model employing a range (in terms of properties) of moist convection and boundary layer (BL) parameterizations. Significant differences are noted in the simulated precipitation amounts, its partitioning between convective and large-scale precipitation, as well as in the radiative impacts. These differences arise from the way the subcloud physics interacts with convection, both directly and through various pathways involving the large-scale dynamics and the boundary layer, convection, and clouds. A detailed analysis of the profiles of the different tendencies (from the different physical processes) for both potential temperature and water vapor is performed. While different combinations of convection and boundary layer parameterizations can lead to different climates, a key conclusion of this study is that similar climates can be simulated with model versions that are different in terms of the partitioning of the tendencies: the vertically distributed energy and water balances in the tropics can be obtained with significantly different profiles of large-scale, convection, and cloud microphysics tendencies.
Brain Surface Conformal Parameterization Using Riemann Surface Structure
Wang, Yalin; Lui, Lok Ming; Gu, Xianfeng; Hayashi, Kiralee M.; Chan, Tony F.; Toga, Arthur W.; Thompson, Paul M.; Yau, Shing-Tung
2011-01-01
In medical imaging, parameterized 3-D surface models are useful for anatomical modeling and visualization, statistical comparisons of anatomy, and surface-based registration and signal processing. Here we introduce a parameterization method based on Riemann surface structure, which uses a special curvilinear net structure (conformal net) to partition the surface into a set of patches that can each be conformally mapped to a parallelogram. The resulting surface subdivision and the parameterizations of the components are intrinsic and stable (their solutions tend to be smooth functions and the boundary conditions of the Dirichlet problem can be enforced). Conformal parameterization also helps transform partial differential equations (PDEs) that may be defined on 3-D brain surface manifolds to modified PDEs on a two-dimensional parameter domain. Since the Jacobian matrix of a conformal parameterization is diagonal, the modified PDE on the parameter domain is readily solved. To illustrate our techniques, we computed parameterizations for several types of anatomical surfaces in 3-D magnetic resonance imaging scans of the brain, including the cerebral cortex, hippocampi, and lateral ventricles. For surfaces that are topologically homeomorphic to each other and have similar geometrical structures, we show that the parameterization results are consistent and the subdivided surfaces can be matched to each other. Finally, we present an automatic sulcal landmark location algorithm by solving PDEs on cortical surfaces. The landmark detection results are used as constraints for building conformal maps between surfaces that also match explicitly defined landmarks. PMID:17679336
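A one-line worked statement of why the diagonal Jacobian helps (standard differential geometry, not specific to the paper): if φ: D ⊂ ℝ² → S is a conformal parameterization with induced metric g = λ(u,v)(du² + dv²), then the Laplace-Beltrami operator reduces to

\Delta_S f = \frac{1}{\lambda(u,v)} \left( \frac{\partial^2 f}{\partial u^2} + \frac{\partial^2 f}{\partial v^2} \right),

so a surface PDE such as Δ_S f = 0 becomes the planar Laplace equation f_uu + f_vv = 0 on the parameter domain, weighted only by the scalar conformal factor λ.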
Solar and chemical reaction-induced heating in the terrestrial mesosphere and lower thermosphere
NASA Technical Reports Server (NTRS)
Mlynczak, Martin G.
1992-01-01
Airglow and chemical processes in the terrestrial mesosphere and lower thermosphere are reviewed, and initial parameterizations of the processes applicable to multidimensional models are presented. The basic processes by which absorbed solar energy participates in middle atmosphere energetics for absorption events in which photolysis occurs are illustrated. An approach that permits the heating processes to be incorporated in numerical models is presented.
Image Processing for Planetary Limb/Terminator Extraction
NASA Technical Reports Server (NTRS)
Udomkesmalee, S.; Zhu, D. Q.; Chu, C.-C.
1995-01-01
A novel image segmentation technique for extracting the limb and terminator of planetary bodies is proposed. Conventional edge-based histogramming approaches are used to trace object boundaries. The limb and terminator bifurcation is achieved by locating the harmonized segment in the two equations representing the 2-D parameterized boundary curve. Real planetary images from Voyager 1 and 2 served as representative test cases to verify the proposed methodology.
NASA Technical Reports Server (NTRS)
Barber, Peter W.; Demerdash, Nabeel A. O.; Wang, R.; Hurysz, B.; Luo, Z.
1991-01-01
The goal is to analyze the potential effects of electromagnetic interference (EMI) originating from power system processing and transmission components for Space Station Freedom. The approach consists of four steps: (1) develop analytical tools (models and computer programs); (2) conduct parameterization studies; (3) predict the global space station EMI environment; and (4) provide a basis for modification of EMI standards.
Optical Characterization of Deep-Space Object Rotation States
2014-09-01
surface bi-directional reflectance distribution function (BRDF), and then estimate the asteroid's shape via a best-fit parameterized model. This hybrid ... approach can be used because asteroid BRDFs are relatively well studied, but their shapes are generally unknown [17]. Asteroid shape models range ... can be accomplished using a shape-dependent method that employs a model of the shape and reflectance characteristics of the object. Our analysis
NASA Astrophysics Data System (ADS)
Leckler, F.; Hanafin, J. A.; Ardhuin, F.; Filipot, J.; Anguelova, M. D.; Moat, B. I.; Yelland, M.; Prytherch, J.
2012-12-01
Whitecaps are the main sink of wave energy. Although the exact processes are still unknown, it is clear that they play a significant role in momentum exchange between atmosphere and ocean, and also influence gas and aerosol exchange. Recently, modeling of whitecap properties was implemented in the spectral wave model WAVEWATCH-III ®. This modeling takes place in the context of the Oceanflux-Greenhouse Gas project, to provide a climatology of breaking waves for gas transfer studies. We present here a validation study for two different wave breaking parameterizations implemented in the spectral wave model WAVEWATCH-III ®. The model parameterizations use different approaches related to the steepness of the carrying waves to estimate breaking wave probabilities. That of Ardhuin et al. (2010) is based on the hypothesis that breaking probabilities become significant when the saturation spectrum exceeds a threshold, and includes a modification to allow for greater breaking in the mean wave direction, to agree with observations. It also includes suppression of shorter waves by longer breaking waves. In the second, (Filipot and Ardhuin, 2012) breaking probabilities are defined at different scales using wave steepness, then the breaking wave height distribution is integrated over all scales. We also propose an adaptation of the latter to make it self-consistent. The breaking probabilities parameterized by Filipot and Ardhuin (2012) are much larger for dominant waves than those from the other parameterization, and show better agreement with modeled statistics of breaking crest lengths measured during the FAIRS experiment. This stronger breaking also has an impact on the shorter waves due to the parameterization of short wave damping associated with large breakers, and results in a different distribution of the breaking crest lengths. Converted to whitecap coverage using Reul and Chapron (2003), both parameterizations agree reasonably well with commonly-used empirical fits of whitecap coverage against wind speed (Monahan and Woolf, 1989) and with the global whitecap coverage of Anguelova and Webster (2006), derived from space-borne radiometry. This is mainly due to the fact that the breaking of larger waves in the parametrization by Filipot and Ardhuin (2012) is compensated for by the intense breaking of smaller waves in that of Ardhuin et al. (2010). Comparison with in situ data collected during research ship cruises in the North and South Atlantic (SEASAW, DOGEE and WAGES), and the Norwegian Sea (HiWASE) between 2006 and 2011 also shows good agreement. However, as large scale breakers produce a thicker foam layer, modeled mean foam thickness clearly depends on the scale of the breakers. Foam thickness is thus a more interesting parameter for calibrating and validating breaking wave parameterizations, as the differences in scale can be determined. With this in mind, we present the initial results of validation using an estimation of mean foam thickness using multiple radiometric bands from satellites SMOS and AMSR-E.
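Schematically, the threshold idea behind the first parameterization can be sketched as follows (the saturation-spectrum definition B(k) = k³ E(k), the stand-in spectrum, and the threshold value below are assumptions for illustration; the exact forms follow Ardhuin et al., 2010, not this sketch):

import numpy as np

k = np.linspace(0.01, 10.0, 500)           # wavenumber (rad/m)
E = 0.005 * k**-3 * np.exp(-(1.0 / k)**2)  # stand-in omnidirectional spectrum E(k)
B = k**3 * E                               # saturation spectrum (assumed definition)
B_r = 0.0045                               # illustrative threshold
breaking_scales = k[B > B_r]               # scales where breaking is taken to be significant
print("breaking at wavenumbers:", breaking_scales[:5], "..." if breaking_scales.size > 5 else "")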
Impact of Apex Model parameterization strategy on estimated benefit of conservation practices
USDA-ARS?s Scientific Manuscript database
Three parameterized Agriculture Policy Environmental eXtender (APEX) models for corn-soybean rotation on clay pan soils were developed with the objectives, 1. Evaluate model performance of three parameterization strategies on a validation watershed; and 2. Compare predictions of water quality benefi...
Single-Column Modeling, GCM Parameterizations and Atmospheric Radiation Measurement Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Somerville, R.C.J.; Iacobellis, S.F.
2005-03-18
Our overall goal is identical to that of the Atmospheric Radiation Measurement (ARM) Program: the development of new and improved parameterizations of cloud-radiation effects and related processes, using ARM data at all three ARM sites, and the implementation and testing of these parameterizations in global and regional models. To test recently developed prognostic parameterizations based on detailed cloud microphysics, we have first compared single-column model (SCM) output with ARM observations at the Southern Great Plains (SGP), North Slope of Alaska (NSA) and Tropical Western Pacific (TWP) sites. We focus on the predicted cloud amounts and on a suite of radiative quantities strongly dependent on clouds, such as downwelling surface shortwave radiation. Our results demonstrate the superiority of parameterizations based on comprehensive treatments of cloud microphysics and cloud-radiative interactions. At the SGP and NSA sites, the SCM results simulate the ARM measurements well and are demonstrably more realistic than typical parameterizations found in conventional operational forecasting models. At the TWP site, the model performance depends strongly on details of the scheme, and the results of our diagnostic tests suggest ways to develop improved parameterizations better suited to simulating cloud-radiation interactions in the tropics generally. These advances have made it possible to take the next step and build on this progress, by incorporating our parameterization schemes in state-of-the-art 3D atmospheric models, and diagnosing and evaluating the results using independent data. Because the improved cloud-radiation results have been obtained largely via implementing detailed and physically comprehensive cloud microphysics, we anticipate that improved predictions of hydrologic cycle components, and hence of precipitation, may also be achievable. We are currently testing the performance of our ARM-based parameterizations in state-of-the-art global and regional models. One fruitful strategy for evaluating advances in parameterizations has turned out to be using short-range numerical weather prediction as a test-bed within which to implement and improve parameterizations for modeling and predicting climate variability. The global models we have used to date are the CAM atmospheric component of the National Center for Atmospheric Research (NCAR) CCSM climate model as well as the National Centers for Environmental Prediction (NCEP) numerical weather prediction model, thus allowing testing in both climate simulation and numerical weather prediction modes. We present detailed results of these tests, demonstrating the sensitivity of model performance to changes in parameterizations.
NASA Technical Reports Server (NTRS)
Steffen, K.; Schweiger, A.; Maslanik, J.; Key, J.; Weaver, R.; Barry, R.
1990-01-01
The application of multi-spectral satellite data to estimate polar surface energy fluxes is addressed. The accuracy with which, and the geographic areas over which, large-scale energy budgets can be estimated are investigated based upon a combination of available remote sensing and climatological data sets. The general approach was to: (1) formulate parameterization schemes for the appropriate sea ice energy budget terms based upon the remotely sensed and/or in-situ data sets; (2) conduct sensitivity analyses using as input both natural variability (observed data in regional case studies) and theoretical variability based upon energy flux model concepts; (3) assess the applicability of these parameterization schemes to both regional and basin-wide energy balance estimates using remote sensing data sets; and (4) assemble multi-spectral, multi-sensor data sets for at least two regions of the Arctic Basin and possibly one region of the Antarctic. The type of data needed for a basin-wide assessment is described, and the temporal coverage of these data sets is determined by data availability and need as defined by the parameterization schemes. The topics addressed are as follows: (1) Heat flux calculations from SSM/I and LANDSAT data in the Bering Sea; (2) Energy flux estimation using passive microwave data; (3) Fetch and stability sensitivity estimates of turbulent heat flux; and (4) Surface temperature algorithm.
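Parameterizations of the turbulent heat flux of the kind listed in item (3) typically build on the bulk-aerodynamic form. A minimal sketch, with illustrative transfer-coefficient and density values rather than the study's calibrated, fetch- and stability-adjusted ones:

def sensible_heat_flux(wind_speed, t_surface, t_air, c_h=1.3e-3, rho_air=1.3, cp_air=1004.0):
    # Sensible heat flux (W/m^2), positive upward, from the bulk aerodynamic formula
    # Q_H = rho * c_p * C_H * U * (T_s - T_a); C_H is where fetch and stability corrections enter.
    return rho_air * cp_air * c_h * wind_speed * (t_surface - t_air)

print(sensible_heat_flux(wind_speed=8.0, t_surface=-2.0, t_air=-12.0))   # e.g. thin ice or lead conditions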
Longwave Radiative Flux Calculations in the TOVS Pathfinder Path A Data Set
NASA Technical Reports Server (NTRS)
Mehta, Amita; Susskind, Joel
1999-01-01
A radiative transfer model developed to calculate outgoing longwave radiation (OLR) and downwelling longwave surface flux (DSF) from the Television and Infrared Operational Satellite (TIROS) Operational Vertical Sounder (TOVS) Pathfinder Path A retrieval products is described. The model covers the spectral range of 2 to 2800 cm⁻¹ in 14 medium-width spectral bands. For each band, transmittances are parameterized as a function of temperature, water vapor, and ozone profiles. The form of the band transmittance parameterization is a modified version of the approach we use to model channel transmittances for the High Resolution Infrared Sounder 2 (HIRS2) instrument. We separately derive an effective zenith angle for each spectral band such that band-averaged radiance calculated at that angle best approximates directionally integrated radiance for that band. We develop the transmittance parameterization at these band-dependent effective zenith angles to incorporate the directional integration of radiances required in the calculations of OLR and DSF. The model calculations of OLR and DSF are accurate and differ by less than 1% from our line-by-line calculations. Also, the model results are within 1% of other line-by-line calculations provided by the Intercomparison of Radiation Codes in Climate Models (ICRCCM) project for clear-sky and cloudy conditions. The model is currently used to calculate global, multiyear (1985-1998) OLR and DSF from the TOVS Pathfinder Path A retrievals.
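Schematically, band-integrated fluxes of this kind are assembled as follows (a standard band-model form assuming a black surface; the paper's specific parameterization of the band transmittances in terms of temperature, water vapor, and ozone is not reproduced here):

\mathrm{OLR} = \sum_b \left[ \pi B_b(T_s)\, \mathcal{T}_b(p_s) + \int_{p_s}^{0} \pi B_b\big(T(p)\big)\, \frac{\partial \mathcal{T}_b(p)}{\partial p}\, dp \right],

where B_b is the Planck radiance integrated over band b and 𝒯_b(p) is the band flux transmittance from pressure level p to the top of the atmosphere, evaluated (per the approach above) at the band's effective zenith angle; an analogous expression with transmittances referenced to the surface gives the DSF.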
NASA Astrophysics Data System (ADS)
Mazoyer, M.; Roehrig, R.; Nuissier, O.; Duffourg, F.; Somot, S.
2017-12-01
Most regional climate system models (RCSMs) face difficulties in representing a reasonable precipitation probability density function in the Mediterranean area, and especially over land. Small amounts of rain are too frequent, preventing any realistic representation of droughts or heat waves, while the intensity of heavy precipitating events is underestimated and not well located by most state-of-the-art RCSMs using parameterized convection (resolution from 10 to 50 km). Convective parameterization is a key point for the representation of such events, and recently the new physics implemented in the CNRM-RCSM has been shown to remarkably improve it, even at a 50-km scale. The present study seeks to further analyse the representation of heavy precipitating events by this new version of CNRM-RCSM using a process-oriented approach. We focus on one particular event in the south-east of France, over the Cévennes. Two hindcast experiments with the CNRM-RCSM (12 and 50 km) are performed and compared with a simulation based on the convection-permitting model Meso-NH, which uses a very similar setup to the CNRM-RCSM hindcasts. The role of small-scale features of the regional topography and its interaction with the impinging large-scale flow in triggering the convective event are investigated. This study provides guidance in the ongoing implementation and use of a specific parameterization dedicated to accounting for subgrid-scale orography in the triggering and closure conditions of the CNRM-RCSM convection scheme.
Price of anarchy on heterogeneous traffic-flow networks
NASA Astrophysics Data System (ADS)
Rose, A.; O'Dea, R.; Hopcraft, K. I.
2016-09-01
The efficiency of routing traffic through a network, comprising nodes connected by links whose cost of traversal is either fixed or varies in proportion to volume of usage, can be measured by the "price of anarchy." This is the ratio of the cost incurred by agents who act to minimize their individual expenditure to the optimal cost borne by the entire system. As the total traffic load and the network variability—parameterized by the proportion of variable-cost links in the network—changes, the behaviors that the system presents can be understood with the introduction of a network of simpler structure. This is constructed from classes of nonoverlapping paths connecting source to destination nodes that are characterized by the number of variable-cost edges they contain. It is shown that localized peaks in the price of anarchy occur at critical traffic volumes at which it becomes beneficial to exploit ostensibly more expensive paths as the network becomes more congested. Simulation results verifying these findings are presented for the variation of the price of anarchy with the network's size, aspect ratio, variability, and traffic load.
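A worked toy instance of the ratio defined above, on the classic two-link network with one fixed-cost link and one variable-cost link (a standard textbook example, not one of the heterogeneous networks studied in the paper):

import numpy as np

def total_cost(x_variable, demand=1.0):
    # Total cost when a fraction x_variable of the demand uses the variable-cost link
    # (per-unit cost equal to its load) and the rest uses the fixed-cost link (per-unit cost 1).
    x = x_variable * demand
    return x * x + (demand - x) * 1.0

# Selfish (Nash) routing: every agent takes the variable link while its per-unit cost (its load)
# is at most 1, so the equilibrium puts all demand on it.
nash_cost = total_cost(1.0)

# System optimum: minimize total cost over all possible splits of the demand.
optimal_cost = min(total_cost(s) for s in np.linspace(0.0, 1.0, 10001))

print("price of anarchy:", nash_cost / optimal_cost)   # -> 4/3 for this network

Selfish routing puts all demand on the variable link (total cost 1), while the optimum splits the demand evenly (total cost 3/4), giving a price of anarchy of 4/3.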
Perkins, Casey; Muller, George
2015-10-08
The number of connections between physical and cyber security systems is rapidly increasing due to centralized control from automated and remotely connected means. As the number of interfaces between systems continues to grow, the interactions and interdependencies between them cannot be ignored. Historically, physical and cyber vulnerability assessments have been performed independently. This independent evaluation omits important aspects of the integrated system, where the impacts resulting from malicious or opportunistic attacks are not easily known or understood. Here, we describe a discrete event simulation model that uses information about integrated physical and cyber security systems, attacker characteristics and simple response rules to identify key safeguards that limit an attacker's likelihood of success. Key features of the proposed model include comprehensive data generation to support a variety of sophisticated analyses, and full parameterization of safeguard performance characteristics and attacker behaviours to evaluate a range of scenarios. Lastly, we also describe the core data requirements and the network of networks that serves as the underlying simulation structure.
Wagener, T.; Hogue, T.; Schaake, J.; Duan, Q.; Gupta, H.; Andreassian, V.; Hall, A.; Leavesley, G.
2006-01-01
The Model Parameter Estimation Experiment (MOPEX) is an international project aimed at developing enhanced techniques for the a priori estimation of parameters in hydrological models and in land surface parameterization schemes connected to atmospheric models. The MOPEX science strategy involves: database creation, a priori parameter estimation methodology development, parameter refinement or calibration, and the demonstration of parameter transferability. A comprehensive MOPEX database has been developed that contains historical hydrometeorological data and land surface characteristics data for many hydrological basins in the United States (US) and in other countries. This database is being continuously expanded to include basins from various hydroclimatic regimes throughout the world. MOPEX research has largely been driven by a series of international workshops that have brought interested hydrologists and land surface modellers together to exchange knowledge and experience in developing and applying parameter estimation techniques. With its focus on parameter estimation, MOPEX plays an important role in the international context of other initiatives such as GEWEX, HEPEX, PUB and PILPS. This paper outlines the MOPEX initiative, discusses its role in the scientific community, and briefly states future directions.
Status of CSR RL06 GRACE reprocessing and preliminary results
NASA Astrophysics Data System (ADS)
Save, H.
2017-12-01
The GRACE project plans to reprocess the GRACE mission data in order to be consistent with the first gravity products released by the GRACE-FO project. The RL06 reprocessing will harmonize the GRACE time series with the first release of GRACE-FO. This paper catalogues the changes in the upcoming RL06 release and discusses the quality improvements relative to the current RL05 release, along with the associated processing and parameterization changes. It also discusses the evolution of the quality of the GRACE solutions, characterizes the errors over the past few years, and considers the possible challenges associated with connecting the GRACE time series with that from GRACE-FO.
Improved parameterization for the vertical flux of dust aerosols emitted by an eroding soil
USDA-ARS?s Scientific Manuscript database
The representation of the dust cycle in atmospheric circulation models hinges on an accurate parameterization of the vertical dust flux at emission. However, existing parameterizations of the vertical dust flux vary substantially in their scaling with wind friction velocity, require input parameters...
Climate and the equilibrium state of land surface hydrology parameterizations
NASA Technical Reports Server (NTRS)
Entekhabi, Dara; Eagleson, Peter S.
1991-01-01
For given climatic rates of precipitation and potential evaporation, the land surface hydrology parameterizations of atmospheric general circulation models will maintain soil-water storage conditions that balance the moisture input and output. The surface relative soil saturation for such climatic conditions serves as a measure of the land surface parameterization state under a given forcing. The equilibrium value of this variable for alternate parameterizations of land surface hydrology is determined as a function of climate, and the sensitivity of the surface to shifts and changes in climatic forcing is estimated.
Cross-Section Parameterizations for Pion and Nucleon Production From Negative Pion-Proton Collisions
NASA Technical Reports Server (NTRS)
Norbury, John W.; Blattnig, Steve R.; Norman, Ryan; Tripathi, R. K.
2002-01-01
Ranft has provided parameterizations of Lorentz-invariant differential cross sections for pion and nucleon production in pion-proton collisions, which are compared to some recent data. The Ranft parameterizations are then numerically integrated to form spectral and total cross sections. These numerical integrations are further parameterized to provide formulas for spectral and total cross sections suitable for use in radiation transport codes. The reactions analyzed have charged pions in the initial state and both charged and neutral pions in the final state.
Anisotropic Shear Dispersion Parameterization for Mesoscale Eddy Transport
NASA Astrophysics Data System (ADS)
Reckinger, S. J.; Fox-Kemper, B.
2016-02-01
The effects of mesoscale eddies are universally treated isotropically in general circulation models. However, the processes that the parameterization approximates, such as shear dispersion, typically have strongly anisotropic characteristics. The Gent-McWilliams/Redi mesoscale eddy parameterization is extended for anisotropy and tested using 1-degree Community Earth System Model (CESM) simulations. The sensitivity of the model to anisotropy includes a reduction of temperature and salinity biases, a deepening of the Southern Ocean mixed-layer depth, and improved ventilation of biogeochemical tracers, particularly in oxygen minimum zones. The parameterization is further extended to include the effects of unresolved shear dispersion, which sets the strength and direction of anisotropy. The shear dispersion parameterization is similar to drifter observations in the spatial distribution of diffusivity and to high-resolution model diagnoses in the distribution of eddy flux orientation.
Controllers, observers, and applications thereof
NASA Technical Reports Server (NTRS)
Gao, Zhiqiang (Inventor); Zhou, Wankun (Inventor); Miklosovic, Robert (Inventor); Radke, Aaron (Inventor); Zheng, Qing (Inventor)
2011-01-01
Controller scaling and parameterization are described. Techniques that can be improved by employing the scaling and parameterization include, but are not limited to, controller design, tuning and optimization. The scaling and parameterization methods described here apply to transfer function based controllers, including PID controllers. The parameterization methods also apply to state feedback and state observer based controllers, as well as linear active disturbance rejection (ADRC) controllers. Parameterization simplifies the use of ADRC. A discrete extended state observer (DESO) and a generalized extended state observer (GESO) are described. They improve the performance of the ESO and therefore ADRC. A tracking control algorithm is also described that improves the performance of the ADRC controller. A general algorithm is described for applying ADRC to multi-input multi-output systems. Several specific applications of the control systems and processes are disclosed.
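To make the observer idea concrete, here is a minimal sketch (not the patented implementation) of a linear discrete extended state observer for a second-order plant, with the gains parameterized by a single observer-bandwidth value; the plant model, gain formula, and variable names are illustrative assumptions.

```python
import numpy as np

def make_eso(omega_o: float, b0: float, dt: float):
    """Discrete linear extended state observer (forward-Euler discretized).

    States: z1 ~ output, z2 ~ output derivative, z3 ~ total disturbance.
    Gains follow the common bandwidth parameterization [3w, 3w^2, w^3],
    which is an assumption here, not the patent's formulation.
    """
    beta = np.array([3 * omega_o, 3 * omega_o**2, omega_o**3])
    z = np.zeros(3)

    def step(y: float, u: float) -> np.ndarray:
        nonlocal z
        e = y - z[0]                       # output estimation error
        dz = np.array([z[1] + beta[0] * e,
                       z[2] + b0 * u + beta[1] * e,
                       beta[2] * e])
        z = z + dt * dz                    # forward-Euler update
        return z.copy()

    return step

# Usage: estimate states of a noisy double-integrator plant driven by u.
eso = make_eso(omega_o=20.0, b0=1.0, dt=0.001)
state = eso(y=0.05, u=1.0)   # returns [z1, z2, z3]
```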
NASA Astrophysics Data System (ADS)
Pincus, R.; Mlawer, E. J.
2017-12-01
Radiation is a key process in numerical models of the atmosphere. The problem is well understood, and the parameterization of radiation has seen relatively few conceptual advances in the past 15 years. It is nonetheless often the single most expensive component of all physical parameterizations, despite being computed less frequently than other terms. This combination of cost and maturity suggests value in a single radiation parameterization that could be shared across models; devoting effort to a single parameterization might allow for fine tuning for efficiency. The challenge lies in the coupling of this parameterization to many disparate representations of clouds and aerosols. This talk will describe RRTMGP, a new radiation parameterization that seeks to balance efficiency and flexibility. This balance is struck by isolating computational tasks in "kernels" that expose as much fine-grained parallelism as possible. These have simple interfaces and are interoperable across programming languages so that they might be replaced by alternative implementations in domain-specific languages. Coupling to the host model makes use of object-oriented features of Fortran 2003, minimizing branching within the kernels and the amount of data that must be transferred. We will show accuracy and efficiency results for a globally representative set of atmospheric profiles using a relatively high-resolution spectral discretization.
Spectral cumulus parameterization based on cloud-resolving model
NASA Astrophysics Data System (ADS)
Baba, Yuya
2018-02-01
We have developed a spectral cumulus parameterization using a cloud-resolving model. It includes a new parameterization of the entrainment rate, derived from analysis of the cloud properties obtained from the cloud-resolving model simulation and valid for both shallow and deep convection. The new scheme was examined in a single-column model experiment and compared with the existing parameterization of Gregory (2001, Q J R Meteorol Soc 127:53-72) (GR scheme). The results showed that the GR scheme simulated more shallow and diluted convection than the new scheme. To further validate the physical performance of the parameterizations, Atmospheric Model Intercomparison Project (AMIP) experiments were performed, and the results were compared with reanalysis data. The new scheme performed better than the GR scheme in terms of the mean state and variability of the atmospheric circulation: it reduced the positive precipitation bias in the western Pacific region and the positive bias of outgoing shortwave radiation over the ocean. The new scheme also better simulated features of convectively coupled equatorial waves and the Madden-Julian oscillation. These improvements were found to derive from the modified entrainment-rate parameterization, which suppressed an excessive increase of entrainment and thus an excessive increase of low-level clouds.
Modeling the formation and aging of secondary organic aerosols in Los Angeles during CalNex 2010
NASA Astrophysics Data System (ADS)
Hayes, P. L.; Carlton, A. G.; Baker, K. R.; Ahmadov, R.; Washenfelder, R. A.; Alvarez, S.; Rappenglück, B.; Gilman, J. B.; Kuster, W. C.; de Gouw, J. A.; Zotter, P.; Prévôt, A. S. H.; Szidat, S.; Kleindienst, T. E.; Offenberg, J. H.; Jimenez, J. L.
2014-12-01
Four different parameterizations for the formation and evolution of secondary organic aerosol (SOA) are evaluated using a 0-D box model representing the Los Angeles Metropolitan Region during the CalNex 2010 field campaign. We constrain the model predictions with measurements from several platforms and compare predictions with particle and gas-phase observations from the CalNex Pasadena ground site. That site provides a unique opportunity to study aerosol formation close to anthropogenic emission sources with limited recirculation. The model SOA formed only from the oxidation of VOCs (V-SOA) is insufficient to explain the observed SOA concentrations, even when using SOA parameterizations with multi-generation oxidation that produce much higher yields than have been observed in chamber experiments, or when increasing yields to their upper limit estimates accounting for recently reported losses of vapors to chamber walls. The Community Multiscale Air Quality (WRF-CMAQ) model (version 5.0.1) provides excellent predictions of secondary inorganic particle species but underestimates the observed SOA mass by a factor of 25 when an older VOC-only parameterization is used, which is consistent with many previous model-measurement comparisons for pre-2007 anthropogenic SOA modules in urban areas. Including SOA from primary semi-volatile and intermediate volatility organic compounds (P-S/IVOCs) following the parameterizations of Robinson et al. (2007), Grieshop et al. (2009), or Pye and Seinfeld (2010) improves model/measurement agreement for mass concentration. When comparing the three parameterizations, the Grieshop et al. (2009) parameterization more accurately reproduces both the SOA mass concentration and oxygen-to-carbon ratio inside the urban area. Our results strongly suggest that other precursors besides VOCs, such as P-S/IVOCs, are needed to explain the observed SOA concentrations in Pasadena. All the parameterizations over-predict urban SOA formation at long photochemical ages (≈ 3 days) compared to observations from multiple sites, which can lead to problems in regional and global modeling. Among the explicitly modeled VOCs, the precursor compounds that contribute the greatest SOA mass are methylbenzenes. Polycyclic aromatic hydrocarbons (PAHs) are less important precursors and contribute less than 4% of the SOA mass. The amounts of SOA mass from diesel vehicles, gasoline vehicles, and cooking emissions are estimated to be 16-27, 35-61, and 19-35%, respectively, depending on the parameterization used, which is consistent with the observed fossil fraction of urban SOA, 71 (±3) %. In-basin biogenic VOCs are predicted to contribute only a few percent to SOA. A regional SOA background of approximately 2.1 μg m-3 is also present due to the long distance transport of highly aged OA. The percentage of SOA from diesel vehicle emissions is the same, within the estimated uncertainty, as reported in previous work that analyzed the weekly cycles in OA concentrations (Bahreini et al., 2012; Hayes et al., 2013). However, the modeling work presented here suggests a strong anthropogenic source of modern carbon in SOA, due to cooking emissions, which was not accounted for in those previous studies. Lastly, this work adapts a simple two-parameter model to predict SOA concentration and O/C from urban emissions. This model successfully predicts SOA concentration, and the optimal parameter combination is very similar to that found for Mexico City. 
This approach provides a computationally inexpensive method for predicting urban SOA in global and climate models. We estimate pollution SOA to account for 26 Tg yr-1 of SOA globally, or 17% of global SOA, 1/3 of which is likely to be non-fossil.
NASA Astrophysics Data System (ADS)
White, Jeremy; Stengel, Victoria; Rendon, Samuel; Banta, John
2017-08-01
Computer models of hydrologic systems are frequently used to investigate the hydrologic response of land-cover change. If the modeling results are used to inform resource-management decisions, then providing robust estimates of uncertainty in the simulated response is an important consideration. Here we examine the importance of parameterization, a necessarily subjective process, on uncertainty estimates of the simulated hydrologic response of land-cover change. Specifically, we applied the soil water assessment tool (SWAT) model to a 1.4 km2 watershed in southern Texas to investigate the simulated hydrologic response of brush management (the mechanical removal of woody plants), a discrete land-cover change. The watershed was instrumented before and after brush-management activities were undertaken, and estimates of precipitation, streamflow, and evapotranspiration (ET) are available; these data were used to condition and verify the model. The role of parameterization in brush-management simulation was evaluated by constructing two models, one with 12 adjustable parameters (reduced parameterization) and one with 1305 adjustable parameters (full parameterization). Both models were subjected to global sensitivity analysis as well as Monte Carlo and generalized likelihood uncertainty estimation (GLUE) conditioning to identify important model inputs and to estimate uncertainty in several quantities of interest related to brush management. Many realizations from both parameterizations were identified as behavioral
in that they reproduce daily mean streamflow acceptably well according to Nash-Sutcliffe model efficiency coefficient, percent bias, and coefficient of determination. However, the total volumetric ET difference resulting from simulated brush management remains highly uncertain after conditioning to daily mean streamflow, indicating that streamflow data alone are not sufficient to inform the model inputs that influence the simulated outcomes of brush management the most. Additionally, the reduced-parameterization model grossly underestimates uncertainty in the total volumetric ET difference compared to the full-parameterization model; total volumetric ET difference is a primary metric for evaluating the outcomes of brush management. The failure of the reduced-parameterization model to provide robust uncertainty estimates demonstrates the importance of parameterization when attempting to quantify uncertainty in land-cover change simulations.
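As a hedged illustration of the behavioral-screening step described above (not the authors' workflow), the sketch below computes the Nash-Sutcliffe efficiency, percent bias, and coefficient of determination for an ensemble of simulated streamflow series and keeps the realizations that pass assumed thresholds; the threshold values and synthetic data are illustrative, not those used in the study.

```python
import numpy as np

def nse(sim, obs):
    """Nash-Sutcliffe model efficiency coefficient."""
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

def pbias(sim, obs):
    """Percent bias of total simulated volume relative to observed."""
    return 100.0 * (np.sum(sim) - np.sum(obs)) / np.sum(obs)

def r2(sim, obs):
    """Coefficient of determination (squared Pearson correlation)."""
    return np.corrcoef(sim, obs)[0, 1] ** 2

def behavioral(ensemble, obs, nse_min=0.5, pbias_max=25.0, r2_min=0.6):
    """Return indices of realizations judged behavioral (thresholds assumed)."""
    keep = []
    for i, sim in enumerate(ensemble):
        if nse(sim, obs) >= nse_min and abs(pbias(sim, obs)) <= pbias_max \
                and r2(sim, obs) >= r2_min:
            keep.append(i)
    return keep

# Usage with synthetic data: 100 Monte Carlo realizations of daily streamflow.
rng = np.random.default_rng(0)
obs = np.abs(rng.normal(1.0, 0.3, size=365))
ensemble = [obs * rng.uniform(0.7, 1.3) + rng.normal(0, 0.1, size=365)
            for _ in range(100)]
print(len(behavioral(ensemble, obs)), "behavioral realizations")
```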
FINAL REPORT (DE-FG02-97ER62338): Single-column modeling, GCM parameterizations, and ARM data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Richard C. J. Somerville
2009-02-27
Our overall goal is the development of new and improved parameterizations of cloud-radiation effects and related processes, using ARM data at all three ARM sites, and the implementation and testing of these parameterizations in global models. To test recently developed prognostic parameterizations based on detailed cloud microphysics, we have compared SCM (single-column model) output with ARM observations at the SGP, NSA and TWP sites. We focus on the predicted cloud amounts and on a suite of radiative quantities strongly dependent on clouds, such as downwelling surface shortwave radiation. Our results demonstrate the superiority of parameterizations based on comprehensive treatments of cloud microphysics and cloud-radiative interactions. At the SGP and NSA sites, the SCM results simulate the ARM measurements well and are demonstrably more realistic than typical parameterizations found in conventional operational forecasting models. At the TWP site, the model performance depends strongly on details of the scheme, and the results of our diagnostic tests suggest ways to develop improved parameterizations better suited to simulating cloud-radiation interactions in the tropics generally. These advances have made it possible to take the next step and build on this progress, by incorporating our parameterization schemes in state-of-the-art three-dimensional atmospheric models, and diagnosing and evaluating the results using independent data. Because the improved cloud-radiation results have been obtained largely via implementing detailed and physically comprehensive cloud microphysics, we anticipate that improved predictions of hydrologic cycle components, and hence of precipitation, may also be achievable.
NASA Astrophysics Data System (ADS)
Madi, Raneem; Huibert de Rooij, Gerrit; Mielenz, Henrike; Mai, Juliane
2018-02-01
Few parametric expressions for the soil water retention curve are suitable for dry conditions. Furthermore, expressions for the soil hydraulic conductivity curves associated with parametric retention functions can behave unrealistically near saturation. We developed a general criterion for water retention parameterizations that ensures physically plausible conductivity curves. Only 3 of the 18 tested parameterizations met this criterion without restrictions on the parameters of a popular conductivity curve parameterization. A fourth required one parameter to be fixed. We estimated parameters by shuffled complex evolution (SCE) with the objective function tailored to various observation methods used to obtain retention curve data. We fitted the four parameterizations with physically plausible conductivities as well as the most widely used parameterization. The performance of the resulting 12 combinations of retention and conductivity curves was assessed in a numerical study with 751 days of semiarid atmospheric forcing applied to unvegetated, uniform, 1 m freely draining columns for four textures. Choosing different parameterizations had a minor effect on evaporation, but cumulative bottom fluxes varied by up to an order of magnitude between them. This highlights the need for a careful selection of the soil hydraulic parameterization that ideally does not only rely on goodness of fit to static soil water retention data but also on hydraulic conductivity measurements. Parameter fits for 21 soils showed that extrapolations into the dry range of the retention curve often became physically more realistic when the parameterization had a logarithmic dry branch, particularly in fine-textured soils where high residual water contents would otherwise be fitted.
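For context, the sketch below evaluates the van Genuchten retention curve together with the Mualem conductivity model, the kind of widely used retention/conductivity pairing the study evaluates; it is a generic textbook form, not necessarily one of the 18 parameterizations tested, and the parameter values are illustrative.

```python
import numpy as np

def van_genuchten_theta(h, theta_r, theta_s, alpha, n):
    """Water content theta(h) for matric head h (van Genuchten, 1980)."""
    m = 1.0 - 1.0 / n
    Se = (1.0 + (alpha * np.abs(h)) ** n) ** (-m)   # effective saturation
    return theta_r + (theta_s - theta_r) * Se

def mualem_K(h, Ks, alpha, n, l=0.5):
    """Hydraulic conductivity K(h) from the Mualem (1976) model."""
    m = 1.0 - 1.0 / n
    Se = (1.0 + (alpha * np.abs(h)) ** n) ** (-m)
    return Ks * Se ** l * (1.0 - (1.0 - Se ** (1.0 / m)) ** m) ** 2

# Illustrative loam-like parameters (assumed, not fitted values from the paper).
h = -np.logspace(0, 6, 7)                  # matric head from -1 to -1e6 cm
theta = van_genuchten_theta(h, theta_r=0.05, theta_s=0.43, alpha=0.036, n=1.56)
K = mualem_K(h, Ks=25.0, alpha=0.036, n=1.56)   # cm per day
print(np.round(theta, 3), K)
```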
How certain are the process parameterizations in our models?
NASA Astrophysics Data System (ADS)
Gharari, Shervan; Hrachowitz, Markus; Fenicia, Fabrizio; Matgen, Patrick; Razavi, Saman; Savenije, Hubert; Gupta, Hoshin; Wheater, Howard
2016-04-01
Environmental models are abstract simplifications of real systems. As a result, the elements of these models, including the system architecture (structure), the process parameterization, and the parameters, inherit a high level of approximation and simplification. In a conventional model-building exercise, the parameter values are the only elements of a model that can vary, while the rest of the modeling elements are often fixed a priori and therefore not subjected to change. Once chosen, the process parameterization and model structure usually remain the same throughout the modeling process. The only flexibility comes from the changing parameter values, thereby enabling these models to reproduce the desired observations. This part of modeling practice, parameter identification and uncertainty, has attracted significant attention in the literature in recent years. What remains unexplored, in our view, is to what extent the process parameterization and system architecture (model structure) can support each other. In other words: does a specific form of process parameterization emerge for a specific model given its system architecture and data, when no or few assumptions have been made about the process parameterization itself? In this study we relax the assumption of a specific pre-determined form for the process parameterizations of a rainfall/runoff model and examine how varying the complexity of the system architecture can lead to different, or possibly contradictory, parameterization forms than what would have been decided otherwise. This comparison implicitly and explicitly provides an assessment of how uncertain our perception of model process parameterization is, relative to the extent to which the data can support it.
Sensitivity of Pacific Cold Tongue and Double-ITCZ Bias to Convective Parameterization
NASA Astrophysics Data System (ADS)
Woelfle, M.; Bretherton, C. S.; Pritchard, M. S.; Yu, S.
2016-12-01
Many global climate models struggle to accurately simulate annual mean precipitation and sea surface temperature (SST) fields in the tropical Pacific basin. Precipitation biases are dominated by the double intertropical convergence zone (ITCZ) bias, in which models exhibit precipitation maxima straddling the equator while only a single Northern Hemispheric maximum exists in observations. The major SST bias is the enhancement of the equatorial cold tongue. A series of coupled model simulations are used to investigate the sensitivity of the bias development to convective parameterization. Model components are initialized independently prior to coupling to allow analysis of the transient response of the system directly following coupling. These experiments show precipitation and SST patterns to be highly sensitive to convective parameterization. Simulations in which the deep convective parameterization is disabled, forcing all convection to be resolved by the shallow convection parameterization, showed a degradation in both the cold tongue and double-ITCZ biases as precipitation becomes focused into off-equatorial regions of local SST maxima. Simulations using superparameterization in place of traditional cloud parameterizations showed a reduced cold tongue bias at the expense of additional precipitation biases. The equatorial SST responses to changes in convective parameterization are driven by changes in near-equatorial zonal wind stress. The sensitivity of convection to SST is important in determining the precipitation and wind stress fields. However, differences in convective momentum transport also play a role. While no significant improvement is seen in these simulations of the double ITCZ, the system's sensitivity to these changes reaffirms that improved convective parameterizations may provide an avenue for improving simulations of tropical Pacific precipitation and SST.
USDA-ARS?s Scientific Manuscript database
Simulation models can be used to make management decisions when properly parameterized. This study aimed to parameterize the ALMANAC (Agricultural Land Management Alternatives with Numerical Assessment Criteria) crop simulation model for dry bean in the semi-arid temperate areas of Mexico. The par...
Midgley, S M
2004-01-21
A novel parameterization of x-ray interaction cross-sections is developed and employed to describe the x-ray linear attenuation coefficient and mass energy absorption coefficient for both elements and mixtures. The new parameterization scheme addresses the Z-dependence of elemental cross-sections (per electron) using a simple function of atomic number, Z. This obviates the need for a complicated mathematical formalism. Energy-dependent coefficients describe the Z-direction curvature of the cross-sections. The composition-dependent quantities are the electron density and statistical moments describing the elemental distribution. We show that it is possible to describe elemental cross-sections for the entire periodic table and at energies above the K-edge (from 6 keV to 125 MeV) with an accuracy of better than 2% using a parameterization containing not more than five coefficients. For the biologically important elements 1 ≤ Z ≤ 20 and the energy range 30-150 keV, the parameterization utilizes four coefficients. At higher energies, the parameterization uses fewer coefficients, with only two coefficients needed at megavoltage energies.
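As a hedged illustration of the kind of low-order fit the abstract describes (not the authors' actual functional form), the sketch below fits a per-electron cross-section at one photon energy to a polynomial in atomic number with five coefficients; the sample data are synthetic placeholders, and real coefficients would be fitted to tabulated cross-section data at each energy.

```python
import numpy as np

# Synthetic per-electron cross sections versus atomic number at one photon
# energy (placeholder values; real data would come from tabulated databases).
Z = np.arange(1, 21, dtype=float)
sigma_e = 0.58 + 0.012 * Z + 3.1e-4 * Z**3   # made-up smooth trend

# Five-coefficient polynomial fit in Z; repeating the fit at each photon
# energy yields the energy-dependent coefficients a_k(E).
coeffs = np.polyfit(Z, sigma_e, deg=4)
fit = np.polyval(coeffs, Z)

max_rel_err = np.max(np.abs(fit - sigma_e) / sigma_e)
print("coefficients:", np.round(coeffs, 6), "max relative error:", max_rel_err)
```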
A unified spectral parameterization for wave breaking: from the deep ocean to the surf zone
NASA Astrophysics Data System (ADS)
Filipot, J.
2010-12-01
A new wave-breaking dissipation parameterization designed for spectral wave models is presented. It combines basic physical quantities of wave breaking, namely the breaking probability and the dissipation rate per unit area. The energy lost by waves is first calculated in physical space before being distributed over the relevant spectral components. This parameterization allows a seamless numerical model from the deep ocean into the surf zone. The transition from deep to shallow water is made possible by a dissipation rate per unit area of breaking waves that varies with the wave height, wavelength, and water depth. The parameterization is further tested in the WAVEWATCH III code, from the global ocean to the beach scale. Model errors are smaller than with most specialized deep- or shallow-water parameterizations.
Senay, Gabriel B.; Bohms, Stefanie; Singh, Ramesh K.; Gowda, Prasanna H.; Velpuri, Naga Manohar; Alemu, Henok; Verdin, James P.
2013-01-01
The increasing availability of multi-scale remotely sensed data and global weather datasets is allowing the estimation of evapotranspiration (ET) at multiple scales. We present a simple but robust method that uses remotely sensed thermal data and model-assimilated weather fields to produce ET for the contiguous United States (CONUS) at monthly and seasonal time scales. The method is based on the Simplified Surface Energy Balance (SSEB) model, which is now parameterized for operational applications and renamed SSEBop. The innovative aspect of SSEBop is that it uses predefined boundary conditions that are unique to each pixel for the "hot" and "cold" reference conditions. The SSEBop model was used for computing ET for 12 years (2000-2011) using the MODIS and Global Data Assimilation System (GDAS) data streams. SSEBop ET results compared reasonably well with monthly eddy covariance ET data, explaining 64% of the observed variability across diverse ecosystems in the CONUS during 2005. Twelve annual ET anomalies (2000-2011) depicted the spatial extent and severity of the commonly known drought years in the CONUS. More research is required to improve the representation of the predefined boundary conditions in complex terrain at small spatial scales. The SSEBop model was found to be a promising approach for conducting water use studies in the CONUS, with a similar opportunity in other parts of the world. The approach can also be applied with other thermal sensors such as Landsat.
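As a hedged sketch of the per-pixel hot/cold boundary-condition idea described above (a simplified reading, not the operational SSEBop code), the ET fraction is scaled between a cold reference temperature, where ET is assumed maximal, and a hot reference temperature, where it is assumed negligible; the variable names, clipping choice, and reference values are assumptions.

```python
import numpy as np

def et_fraction(ts, t_cold, t_hot):
    """ET fraction per pixel from land surface temperature ts.

    ts near t_cold (well-watered reference) gives a fraction near 1;
    ts near t_hot (dry bare-soil reference) gives a fraction near 0.
    """
    frac = (t_hot - ts) / (t_hot - t_cold)
    return np.clip(frac, 0.0, 1.05)   # small overshoot tolerated, then capped

def actual_et(ts, t_cold, t_hot, eto, k=1.0):
    """Actual ET = ET fraction x scaling coefficient x reference ET (mm)."""
    return et_fraction(ts, t_cold, t_hot) * k * eto

# Usage with made-up pixel temperatures (K) and a monthly reference ET of 150 mm.
ts = np.array([295.0, 305.0, 312.0])
print(actual_et(ts, t_cold=293.0, t_hot=313.0, eto=150.0))
```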
Parameterising User Uptake in Economic Evaluations: The role of discrete choice experiments.
Terris-Prestholt, Fern; Quaife, Matthew; Vickerman, Peter
2016-02-01
Model-based economic evaluations of new interventions have shown that user behaviour (uptake) is a critical driver of overall impact achieved. However, early economic evaluations, prior to introduction, often rely on assumed levels of uptake based on expert opinion or uptake of similar interventions. In addition to the likely uncertainty surrounding these uptake assumptions, they also do not allow for uptake to be a function of product, intervention, or user characteristics. This letter proposes using uptake projections from discrete choice experiments (DCE) to better parameterize uptake and substitution in cost-effectiveness models. A simple impact model is developed and illustrated using an example from the HIV prevention field in South Africa. Comparison between the conventional approach and the DCE-based approach shows that, in our example, DCE-based impact predictions varied by up to 50% from conventional estimates and provided far more nuanced projections. In the absence of observed uptake data, and to model the effect of variations in intervention characteristics, DCE-based uptake predictions are likely to greatly improve on models that parameterize uptake solely on the basis of expert opinion. This is particularly important for global and national level decision making around introducing new and probably more expensive interventions, particularly where resources are most constrained.
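As a hedged sketch of how DCE results can feed an uptake parameter (a generic multinomial-logit share calculation, not the authors' model), predicted uptake for a product profile is obtained from estimated attribute coefficients; the attributes and coefficient values below are illustrative assumptions.

```python
import numpy as np

def predicted_shares(utilities):
    """Multinomial logit choice shares from deterministic utilities."""
    expu = np.exp(utilities - np.max(utilities))   # stabilized softmax
    return expu / expu.sum()

# Illustrative DCE coefficients: effect of price (per dollar) and of a product
# attribute such as discreet packaging (made-up values, not study estimates).
beta_price, beta_discreet, asc_optout = -0.08, 0.40, -0.50

def uptake(price, discreet):
    """Share choosing the new product versus opting out."""
    u_product = beta_price * price + beta_discreet * discreet
    u_optout = asc_optout
    return predicted_shares(np.array([u_product, u_optout]))[0]

# Feed the DCE-based uptake into a simple impact model instead of a fixed
# expert-opinion uptake assumption.
for price in (0.0, 5.0, 10.0):
    print(price, round(uptake(price, discreet=1), 3))
```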
NASA Astrophysics Data System (ADS)
Sauerteig, Daniel; Hanselmann, Nina; Arzberger, Arno; Reinshagen, Holger; Ivanov, Svetlozar; Bund, Andreas
2018-02-01
The intercalation- and aging-induced volume changes of lithium-ion battery electrodes lead to significant mechanical pressure or volume changes at the cell and module level. As the correlation between the electrochemical and mechanical performance of lithium-ion batteries at the nano and macro scale requires a comprehensive and multidisciplinary approach, physical modeling accounting for chemical and mechanical phenomena during operation is very useful for battery design. Since the introduced fully coupled physical model requires proper parameterization, this work also focuses on identifying an appropriate mathematical representation of compressibility as well as of the ionic transport in the porous electrodes and the separator. The ionic transport is characterized by electrochemical impedance spectroscopy (EIS) using symmetric pouch cells comprising a LiNi1/3Mn1/3Co1/3O2 (NMC) cathode, a graphite anode and a polyethylene separator. The EIS measurements are carried out at various mechanical loads. The observed decrease of the ionic conductivity reveals a significant transport limitation at high pressures. The experimentally obtained data are applied as input to the electrochemical-mechanical model of a prismatic 10 Ah cell. Our computational approach accounts for intercalation-induced electrode expansion, stress generation caused by mechanical boundaries, compression of the electrodes and the separator, outer expansion of the cell and, finally, the influence of the ionic transport within the electrolyte.
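As a hedged illustration of how compression can enter the ionic-transport parameterization (a generic porous-electrode treatment, not the authors' fitted model), the sketch below reduces porosity under mechanical strain and applies a Bruggeman-type correction to the electrolyte conductivity; the parameter values and the solid-incompressibility assumption are illustrative.

```python
def effective_conductivity(kappa_bulk, porosity, brug=1.5):
    """Bruggeman-type effective ionic conductivity of a porous layer."""
    return kappa_bulk * porosity ** brug

def compressed_porosity(porosity0, strain):
    """Porosity after compressive strain, assuming only the pores close.

    strain is the relative thickness reduction of the layer (0 to ~0.2).
    """
    return max((porosity0 - strain) / (1.0 - strain), 0.0)

# Usage: separator with initial porosity 0.45 and electrolyte conductivity
# of 1.0 S/m (illustrative numbers), compressed by 0, 5 and 10 percent.
for strain in (0.00, 0.05, 0.10):
    eps = compressed_porosity(0.45, strain)
    print(strain, round(eps, 3), round(effective_conductivity(1.0, eps), 3))
```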
Optimal Recursive Digital Filters for Active Bending Stabilization
NASA Technical Reports Server (NTRS)
Orr, Jeb S.
2013-01-01
In the design of flight control systems for large flexible boosters, it is common practice to utilize active feedback control of the first lateral structural bending mode so as to suppress transients and reduce gust loading. Typically, active stabilization or phase stabilization is achieved by carefully shaping the loop transfer function in the frequency domain via the use of compensating filters combined with the frequency response characteristics of the nozzle/actuator system. In this paper we present a new approach for parameterizing and determining optimal low-order recursive linear digital filters so as to satisfy phase shaping constraints for bending and sloshing dynamics while simultaneously maximizing attenuation in other frequency bands of interest, e.g. near higher frequency parasitic structural modes. By parameterizing the filter directly in the z-plane with certain restrictions, the search space of candidate filter designs that satisfy the constraints is restricted to stable, minimum phase recursive low-pass filters with well-conditioned coefficients. Combined with optimal output feedback blending from multiple rate gyros, the present approach enables rapid and robust parametrization of autopilot bending filters to attain flight control performance objectives. Numerical results are presented that illustrate the application of the present technique to the development of rate gyro filters for an exploration-class multi-engined space launch vehicle.
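As a hedged sketch of parameterizing a recursive filter directly in the z-plane (a generic construction, not the paper's optimization procedure), a second-order low-pass section is specified by its pole radius and pole angle, which keeps it stable whenever the radius is below one; the numeric frequencies are illustrative.

```python
import numpy as np

def second_order_lowpass(pole_radius, pole_angle):
    """Biquad with a complex pole pair at r*exp(+/-j*theta) and zeros at z = -1.

    Restricting pole_radius < 1 guarantees stability; the double zero on the
    unit circle at z = -1 gives strong attenuation near the Nyquist frequency.
    """
    assert 0.0 < pole_radius < 1.0, "poles must lie inside the unit circle"
    a = np.array([1.0,
                  -2.0 * pole_radius * np.cos(pole_angle),
                  pole_radius ** 2])            # denominator coefficients
    b = np.array([1.0, 2.0, 1.0])               # numerator: zeros at z = -1
    b *= np.sum(a) / np.sum(b)                  # normalize DC gain to 1
    return b, a

def apply_filter(b, a, x):
    """Direct-form difference equation y[n] = sum(b*x) - sum(a[1:]*y)."""
    y = np.zeros_like(x, dtype=float)
    for n in range(len(x)):
        acc = sum(b[k] * x[n - k] for k in range(len(b)) if n - k >= 0)
        acc -= sum(a[k] * y[n - k] for k in range(1, len(a)) if n - k >= 0)
        y[n] = acc / a[0]
    return y

# Usage: attenuate a 35 Hz "bending mode" riding on a slow command, sampled
# at 100 Hz (made-up frequencies for illustration only).
fs = 100.0
b, a = second_order_lowpass(pole_radius=0.7, pole_angle=2 * np.pi * 5.0 / fs)
t = np.arange(0, 2, 1 / fs)
x = np.sin(2 * np.pi * 0.5 * t) + 0.3 * np.sin(2 * np.pi * 35.0 * t)
y = apply_filter(b, a, x)
```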
Generative electronic background music system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mazurowski, Lukasz
In this short paper (extended abstract), a new approach to the generation of electronic background music is presented. The Generative Electronic Background Music System (GEBMS) is positioned among other related approaches within the musical algorithm positioning framework proposed by Woller et al. The music composition process is performed by a number of mini-models parameterized by the properties described further in the paper. The mini-models generate fragments of musical patterns used in the output composition. Musical pattern and output generation are controlled by a container for the mini-models, a host-model. The general mechanism is presented, including an example of the synthesized output compositions.
NASA Technical Reports Server (NTRS)
Choi, Hyun-Joo; Chun, Hye-Yeong; Gong, Jie; Wu, Dong L.
2012-01-01
The realism of ray-based spectral parameterization of convective gravity wave drag, which considers the updated moving speed of the convective source and multiple wave propagation directions, is tested against the Atmospheric Infrared Sounder (AIRS) onboard the Aqua satellite. Offline parameterization calculations are performed using the global reanalysis data for January and July 2005, and gravity wave temperature variances (GWTVs) are calculated at z = 2.5 hPa (unfiltered GWTV). AIRS-filtered GWTV, which is directly compared with AIRS, is calculated by applying the AIRS visibility function to the unfiltered GWTV. A comparison between the parameterization calculations and AIRS observations shows that the spatial distribution of the AIRS-filtered GWTV agrees well with that of the AIRS GWTV. However, the magnitude of the AIRS-filtered GWTV is smaller than that of the AIRS GWTV. When an additional cloud top gravity wave momentum flux spectrum with longer horizontal wavelength components that were obtained from the mesoscale simulations is included in the parameterization, both the magnitude and spatial distribution of the AIRS-filtered GWTVs from the parameterization are in good agreement with those of the AIRS GWTVs. The AIRS GWTV can be reproduced reasonably well by the parameterization not only with multiple wave propagation directions but also with two wave propagation directions of 45 degrees (northeast-southwest) and 135 degrees (northwest-southeast), which are optimally chosen for computational efficiency.
The Influence of Microphysical Cloud Parameterization on Microwave Brightness Temperatures
NASA Technical Reports Server (NTRS)
Skofronick-Jackson, Gail M.; Gasiewski, Albin J.; Wang, James R.; Zukor, Dorothy J. (Technical Monitor)
2000-01-01
The microphysical parameterization of clouds and rain cells plays a central role in atmospheric forward radiative transfer models used in calculating passive microwave brightness temperatures. The absorption and scattering properties of a hydrometeor-laden atmosphere are governed by particle phase, size distribution, aggregate density, shape, and dielectric constant. This study identifies the sensitivity of brightness temperatures with respect to the microphysical cloud parameterization. Cloud parameterizations for wideband (6-410 GHz) observations of baseline brightness temperatures were studied for four evolutionary stages of an oceanic convective storm using a five-phase hydrometeor model in a planar-stratified scattering-based radiative transfer model. Five other microphysical cloud parameterizations were compared to the baseline calculations to evaluate brightness temperature sensitivity to gross changes in the hydrometeor size distributions and the ice-air-water ratios in the frozen or partly frozen phase. The comparison shows that enlarging the raindrop size or adding water to the partly frozen hydrometeor mix warms brightness temperatures by up to 0.55 K at 6 GHz. The cooling signature caused by ice scattering intensifies with increasing ice concentrations and at higher frequencies. An additional comparison to measured Convection and Moisture Experiment (CAMEX-3) brightness temperatures shows that in general all but two parameterizations produce calculated T_B's that fall within the observed clear-air minima and maxima. The exceptions are parameterizations that enhance the scattering characteristics of frozen hydrometeors.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Daly, Christopher; Halbleib, Michael D.; Hannaway, David B.; ...
2017-12-22
Several crops have recently been identified as potential dedicated bioenergy feedstocks for the production of power, fuels, and bioproducts. Despite being identified as early as the 1980s, no systematic work has been undertaken to characterize the spatial distribution of their long-term production potentials in the United States. Such information is a starting point for planners and economic modelers, and there is a need for this spatial information to be developed in a consistent manner for a variety of crops, so that their production potentials can be intercompared to support crop selection decisions. As part of the Sun Grant Regional Feedstock Partnership (RFP), an approach to mapping these potential biomass resources was developed to take advantage of the informational synergy realized when bringing together coordinated field trials, close interaction with expert agronomists, and spatial modeling into a single, collaborative effort. A modeling and mapping system called PRISM-ELM was designed to answer a basic question: How do climate and soil characteristics affect the spatial distribution and long-term production patterns of a given crop? This empirical/mechanistic/biogeographical hybrid model employs a limiting factor approach, where productivity is determined by the most limiting of the factors addressed in submodels that simulate water balance, winter low-temperature response, summer high-temperature response, and soil pH, salinity, and drainage. Yield maps are developed through linear regressions relating soil and climate attributes to reported yield data. The model was parameterized and validated using grain yield data for winter wheat and maize, which served as benchmarks for parameterizing the model for upland and lowland switchgrass, CRP grasses, Miscanthus, biomass sorghum, energycane, willow, and poplar. The resulting maps served as potential production inputs to analyses comparing the viability of biomass crops under various economic scenarios. The modeling and parameterization framework can be expanded to include other biomass crops.
A moist aquaplanet variant of the Held–Suarez test for atmospheric model dynamical cores
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thatcher, Diana R.; Jablonowski, Christiane
2016-04-04
A moist idealized test case (MITC) for atmospheric model dynamical cores is presented. The MITC is based on the Held–Suarez (HS) test that was developed for dry simulations on “a flat Earth” and replaces the full physical parameterization package with a Newtonian temperature relaxation and Rayleigh damping of the low-level winds. This new variant of the HS test includes moisture and thereby sheds light on the nonlinear dynamics–physics moisture feedbacks without the complexity of full-physics parameterization packages. In particular, it adds simplified moist processes to the HS forcing to model large-scale condensation, boundary-layer mixing, and the exchange of latent and sensible heat between the atmospheric surface and an ocean-covered planet. Using a variety of dynamical cores of the National Center for Atmospheric Research (NCAR)'s Community Atmosphere Model (CAM), this paper demonstrates that the inclusion of the moist idealized physics package leads to climatic states that closely resemble aquaplanet simulations with complex physical parameterizations. This establishes that the MITC approach generates reasonable atmospheric circulations and can be used for a broad range of scientific investigations. This paper provides examples of two application areas. First, the test case reveals the characteristics of the physics–dynamics coupling technique and reproduces coupling issues seen in full-physics simulations. In particular, it is shown that sudden adjustments of the prognostic fields due to moist physics tendencies can trigger undesirable large-scale gravity waves, which can be remedied by a more gradual application of the physical forcing. Second, the moist idealized test case can be used to intercompare dynamical cores. These examples demonstrate the versatility of the MITC approach and suggestions are made for further application areas. Furthermore, the new moist variant of the HS test can be considered a test case of intermediate complexity.
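For orientation, the sketch below shows the dry Held-Suarez-style forcing terms that the MITC builds on: Newtonian relaxation of temperature toward a prescribed equilibrium profile and Rayleigh damping of low-level winds. The relaxation time scales and the equilibrium-temperature shape used here are illustrative simplifications, and the moist extensions (large-scale condensation, surface fluxes, boundary-layer mixing) are not included.

```python
import numpy as np

def newtonian_relaxation(T, T_eq, tau_days):
    """dT/dt = -(T - T_eq) / tau: temperature nudged toward equilibrium."""
    return -(T - T_eq) / (tau_days * 86400.0)

def rayleigh_damping(u, sigma, sigma_b=0.7, k_f_days=1.0):
    """du/dt = -k_v(sigma) * u, active only below sigma_b (near the surface)."""
    k_v = (1.0 / (k_f_days * 86400.0)) * np.maximum(0.0, (sigma - sigma_b) / (1.0 - sigma_b))
    return -k_v * u

# Usage on a single column: sigma levels from the model top to the surface.
sigma = np.linspace(0.1, 1.0, 10)
T = np.linspace(210.0, 288.0, 10)          # current temperatures (K)
T_eq = np.linspace(205.0, 295.0, 10)       # assumed equilibrium profile (K)
u = np.full(10, 10.0)                      # zonal wind (m/s)

dT_dt = newtonian_relaxation(T, T_eq, tau_days=40.0)
du_dt = rayleigh_damping(u, sigma)
```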
Efficient voxel navigation for proton therapy dose calculation in TOPAS and Geant4
NASA Astrophysics Data System (ADS)
Schümann, J.; Paganetti, H.; Shin, J.; Faddegon, B.; Perl, J.
2012-06-01
A key task within all Monte Carlo particle transport codes is ‘navigation’, the calculation to determine at each particle step what volume the particle may be leaving and what volume the particle may be entering. Navigation should be optimized to the specific geometry at hand. For patient dose calculation, this geometry generally involves voxelized computed tomography (CT) data. We investigated the efficiency of navigation algorithms on currently available voxel geometry parameterizations in the Monte Carlo simulation package Geant4: G4VPVParameterisation, G4VNestedParameterisation and G4PhantomParameterisation, the last with and without boundary skipping, a method where neighboring voxels with the same Hounsfield unit are combined into one larger voxel. A fourth parameterization approach (MGHParameterization), developed in-house before the latter two parameterizations became available in Geant4, was also included in this study. All simulations were performed using TOPAS, a tool for particle simulations layered on top of Geant4. Runtime comparisons were made on three distinct patient CT data sets: a head and neck, a liver and a prostate patient. We included an additional version of these three patients where all voxels, including the air voxels outside of the patient, were uniformly set to water in the runtime study. The G4VPVParameterisation offers two optimization options. One option results in a 60-150 times slower simulation speed. The other is comparable in speed but requires 15-19 times more memory than the other parameterizations. We found the average CPU time used for the simulation relative to G4VNestedParameterisation to be 1.014 for G4PhantomParameterisation without boundary skipping and 1.015 for MGHParameterization. The average runtime ratio for G4PhantomParameterisation with and without boundary skipping for our heterogeneous data was equal to 0.97:1. The calculated dose distributions agreed with the reference distribution for all but the G4PhantomParameterisation with boundary skipping for the head and neck patient. The maximum memory usage ranged from 0.8 to 1.8 GB depending on the CT volume, independent of the parameterization, except for the 15-19 times greater memory usage with the G4VPVParameterisation when using the option with a higher simulation speed. The G4VNestedParameterisation was selected as the preferred choice for the patient geometries and treatment plans studied.
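As a hedged illustration of the boundary-skipping idea mentioned above (a one-dimensional simplification, not the Geant4 implementation), neighboring voxels with the same Hounsfield unit along a row are merged into one larger voxel so the navigator crosses fewer boundaries; the sample HU values are arbitrary.

```python
def merge_equal_neighbors(hu_row):
    """Run-length merge of a row of voxel Hounsfield units.

    Returns (value, count) pairs: each pair is one merged 'larger voxel'
    covering `count` original voxels that share the same HU.
    """
    merged = []
    for hu in hu_row:
        if merged and merged[-1][0] == hu:
            merged[-1] = (hu, merged[-1][1] + 1)
        else:
            merged.append((hu, 1))
    return merged

# Usage: a row crossing air, soft tissue, and bone (illustrative HU values).
row = [-1000, -1000, -1000, 40, 40, 40, 40, 700, 700, -1000]
print(merge_equal_neighbors(row))
# [(-1000, 3), (40, 4), (700, 2), (-1000, 1)]: 4 merged voxels instead of 10
```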
NASA Astrophysics Data System (ADS)
Drozdov, Alexander; Shprits, Yuri; Aseev, Nikita; Kellerman, Adam; Reeves, Geoffrey
2017-04-01
Radial diffusion is one of the dominant physical mechanisms that drive acceleration and loss of radiation belt electrons, which makes it very important for space weather nowcasting and forecasting models. We investigate the sensitivity of long-term radiation belt modeling with the Versatile Electron Radiation Belt (VERB) code to the two radial diffusion parameterizations of Brautigam and Albert [2000] and Ozeke et al. [2014]. Following Brautigam and Albert [2000] and Ozeke et al. [2014], we first perform 1-D radial diffusion simulations. Comparison of the simulation results with observations shows that the difference between simulations with either radial diffusion parameterization is small. To take into account the effects of local acceleration and loss, we perform 3-D simulations, including pitch-angle, energy and mixed diffusion. We found that the results of 3-D simulations are even less sensitive to the choice of parameterization of radial diffusion rates than the results of 1-D simulations at various energies (from 0.59 to 1.80 MeV). This result demonstrates that the inclusion of local acceleration and pitch-angle diffusion can provide a negative feedback effect, such that the results of simulations conducted with different radial diffusion parameterizations are largely indistinguishable. We also perform a number of sensitivity tests by multiplying radial diffusion rates by constant factors and show that such an approach leads to unrealistic predictions of radiation belt dynamics. References: Brautigam, D. H., and J. M. Albert (2000), Radial diffusion analysis of outer radiation belt electrons during the October 9, 1990, magnetic storm, J. Geophys. Res., 105(A1), 291-309, doi:10.1029/1999ja900344. Ozeke, L. G., I. R. Mann, K. R. Murphy, I. Jonathan Rae, and D. K. Milling (2014), Analytic expressions for ULF wave radiation belt radial diffusion coefficients, J. Geophys. Res. [Space Phys.], 119(3), 1587-1605, doi:10.1002/2013JA019204.
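As a hedged sketch of the 1-D radial diffusion step described above (a minimal explicit solver, not the VERB code), the phase space density f(L) is diffused with the Kp-dependent electromagnetic coefficient commonly attributed to Brautigam and Albert [2000]; the grid, time step, and boundary treatment are illustrative, and the coefficient form should be checked against the original paper before any real use.

```python
import numpy as np

def d_ll_brautigam_albert(L, Kp):
    """Electromagnetic radial diffusion coefficient, in 1/day (B&A 2000 form)."""
    return 10.0 ** (0.506 * Kp - 9.325) * L ** 10

def step_radial_diffusion(f, L, Kp, dt_days):
    """One explicit step of df/dt = L^2 d/dL [ D_LL / L^2 * df/dL ]."""
    dL = L[1] - L[0]
    Lh = 0.5 * (L[1:] + L[:-1])                    # half-grid L values
    Dh = d_ll_brautigam_albert(Lh, Kp) / Lh ** 2   # D_LL / L^2 at half grid
    flux = Dh * np.diff(f) / dL                    # gradient-driven fluxes in L
    f_new = f.copy()
    f_new[1:-1] += dt_days * L[1:-1] ** 2 * np.diff(flux) / dL
    return f_new                                    # boundary values held fixed

# Usage: diffuse an initial bump in phase space density for one day at Kp = 4,
# using a small explicit time step for numerical stability.
L = np.linspace(3.0, 6.5, 71)
f = np.exp(-((L - 4.5) / 0.4) ** 2)
for _ in range(10000):                              # 10000 steps of 1e-4 day
    f = step_radial_diffusion(f, L, Kp=4.0, dt_days=1e-4)
```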
Towards a simple representation of chalk hydrology in land surface modelling
NASA Astrophysics Data System (ADS)
Rahman, Mostaquimur; Rosolem, Rafael
2017-01-01
Modelling and monitoring of hydrological processes in the unsaturated zone of chalk, a porous medium with fractures, is important to optimize water resource assessment and management practices in the United Kingdom (UK). However, incorporating the processes governing water movement through a chalk unsaturated zone in a numerical model is complicated mainly due to the fractured nature of chalk that creates high-velocity preferential flow paths in the subsurface. In general, flow through a chalk unsaturated zone is simulated using the dual-porosity concept, which often involves calibration of a relatively large number of model parameters, potentially undermining applications to large regions. In this study, a simplified parameterization, namely the Bulk Conductivity (BC) model, is proposed for simulating hydrology in a chalk unsaturated zone. This new parameterization introduces only two additional parameters (namely the macroporosity factor and the soil wetness threshold parameter for fracture flow activation) and uses the saturated hydraulic conductivity from the chalk matrix. The BC model is implemented in the Joint UK Land Environment Simulator (JULES) and applied to a study area encompassing the Kennet catchment in the southern UK. This parameterization is further calibrated at the point scale using soil moisture profile observations. The performance of the calibrated BC model in JULES is assessed and compared against the performance of both the default JULES parameterization and the uncalibrated version of the BC model implemented in JULES. Finally, the model performance at the catchment scale is evaluated against independent data sets (e.g. runoff and latent heat flux). The results demonstrate that the inclusion of the BC model in JULES improves simulated land surface mass and energy fluxes over the chalk-dominated Kennet catchment. Therefore, the simple approach described in this study may be used to incorporate the flow processes through a chalk unsaturated zone in large-scale land surface modelling applications.
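The structure of the BC parameterization, with only a macroporosity factor and a soil wetness threshold added on top of the chalk-matrix saturated conductivity, can be sketched as an effective conductivity function. The functional form, exponent, and numbers below are assumptions for illustration; they are not the actual JULES/BC-model equations.

```python
def bulk_conductivity(theta, theta_sat, k_sat_matrix,
                      macroporosity_factor, wetness_threshold,
                      b_exponent=6.0):
    """Illustrative effective hydraulic conductivity for a chalk unsaturated
    zone: a Brooks-Corey-like matrix term plus a fracture (macropore) term
    that switches on above a wetness threshold. Functional form and default
    exponent are assumptions, not the published BC-model equations."""
    wetness = theta / theta_sat
    k_matrix = k_sat_matrix * wetness ** b_exponent
    if wetness > wetness_threshold:
        # fracture flow activated: matrix K_sat scaled by the macroporosity factor
        k_fracture = (macroporosity_factor * k_sat_matrix
                      * (wetness - wetness_threshold) / (1.0 - wetness_threshold))
    else:
        k_fracture = 0.0
    return k_matrix + k_fracture

# Conductivity below and above the fracture-activation threshold (units m/s)
print(bulk_conductivity(0.20, theta_sat=0.40, k_sat_matrix=5e-7,
                        macroporosity_factor=100.0, wetness_threshold=0.9))
print(bulk_conductivity(0.38, theta_sat=0.40, k_sat_matrix=5e-7,
                        macroporosity_factor=100.0, wetness_threshold=0.9))
```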
A conceptual model for quantifying connectivity using graph theory and cellular (per-pixel) approach
NASA Astrophysics Data System (ADS)
Singh, Manudeo; Sinha, Rajiv; Tandon, Sampat K.
2016-04-01
The concept of connectivity is being increasingly used for understanding hydro-geomorphic processes at all spatio-temporal scales. Connectivity is defined as the potential for energy and material flux (water, sediments, nutrients, heat, etc.) to navigate within or between the landscape systems and has two components, structural connectivity and dynamic connectivity. Structural connectivity is defined by the spatially connected features (physical linkages) through which energy and materials flow. Dynamic connectivity is the process-defined component of connectivity. These two connectivity components also interact with each other by forming a feedback system. This study attempts to explore a method to quantify structural and dynamic connectivity. In fluvial transport systems, sediment and water can flow in either a diffused manner or in a channelized way. At all scales, hydrological and sediment fluxes can be tracked using a cellular (per-pixel) approach and can be quantified using a graph-based approach. The material flux, slope and LULC (Land Use Land Cover) weightage factors of a pixel together determine if it will contribute towards connectivity of the landscape/system. In the graph-based approach, each contributing pixel forms a node at its centroid, and this node is connected to the next 'down-node' via a directed edge along the 'least cost path'. The length of the edge will depend on the desired spatial scale and its path direction will depend on the traversed pixel's slope and the LULC (weightage) factors. The weightage factors lie between 0 and 1. This value approaches 1 for LULC factors which promote connectivity. For example, in terms of sediment connectivity, the weightage could be RUSLE (Revised Universal Soil Loss Equation) C-factors, with bare unconsolidated surfaces having values close to 1. This method is best suited for areas with low slopes, where LULC can be a deciding as well as a dominant factor. The degree of connectivity and its pathways will show changes under different LULC conditions even if the slope remains the same. The graph-based approach provides the statistics of connected and disconnected graph elements (edges, nodes) and graph components, thereby allowing the quantification of structural connectivity. This approach also quantifies dynamic connectivity by allowing the measurement of the fluxes (e.g. via hydrographs or sedimentographs) at any node as well as at any system outlet. The contribution of any sub-system can be understood by removing the remaining sub-systems, which can be conveniently achieved by masking the associated graph elements.
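A minimal sketch of the graph-based, per-pixel construction described above is given below using networkx: each contributing pixel becomes a node at its centroid and is linked to its lowest neighbour by a directed edge whose cost falls as the local drop and the LULC weightage (0-1) rise. The D8-style flow direction, the contribution threshold, and the cost formula are illustrative assumptions, not the authors' exact scheme.

```python
import numpy as np
import networkx as nx

def build_connectivity_graph(dem, lulc_weight, contrib_threshold=0.1):
    """Directed per-pixel connectivity graph: a contributing pixel is linked to
    its lowest (D8) neighbour by a directed edge; edge cost decreases as the
    local drop and the LULC weightage (e.g. a RUSLE C-factor) increase.
    Threshold and cost formula are placeholders."""
    ny, nx_ = dem.shape
    g = nx.DiGraph()
    for i in range(ny):
        for j in range(nx_):
            nbrs = [(i + di, j + dj) for di in (-1, 0, 1) for dj in (-1, 0, 1)
                    if (di, dj) != (0, 0)
                    and 0 <= i + di < ny and 0 <= j + dj < nx_]
            lowest = min(nbrs, key=lambda p: dem[p])
            if dem[lowest] >= dem[i, j]:
                continue                        # pit: no downslope neighbour
            drop = dem[i, j] - dem[lowest]      # local drop per pixel step
            weight = drop * lulc_weight[i, j]
            if weight < contrib_threshold:
                continue                        # pixel does not contribute
            g.add_edge((i, j), lowest, cost=1.0 / weight)
    return g

dem = (np.add.outer(np.linspace(10, 0, 20), np.linspace(5, 0, 20))
       + 0.1 * np.random.rand(20, 20))
lulc = np.random.rand(20, 20)                   # ~1 bare surface, ~0 dense cover
g = build_connectivity_graph(dem, lulc)
print(g.number_of_nodes(), g.number_of_edges(),
      nx.number_weakly_connected_components(g))
```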
A Flexible Parameterization for Shortwave Optical Properties of Ice Crystals
NASA Technical Reports Server (NTRS)
VanDiedenhoven, Bastiaan; Ackerman, Andrew S.; Cairns, Brian; Fridlind, Ann M.
2014-01-01
A parameterization is presented that provides extinction cross section σe, single-scattering albedo ω, and asymmetry parameter g of ice crystals for any combination of volume, projected area, aspect ratio, and crystal distortion at any wavelength in the shortwave. Similar to previous parameterizations, the scheme makes use of geometric optics approximations and the observation that optical properties of complex, aggregated ice crystals can be well approximated by those of single hexagonal crystals with varying size, aspect ratio, and distortion levels. In the standard geometric optics implementation used here, σe is always twice the particle projected area. It is shown that ω is largely determined by the newly defined absorption size parameter and the particle aspect ratio. These dependences are parameterized using a combination of exponential, lognormal, and polynomial functions. The variation of g with aspect ratio and crystal distortion is parameterized for one reference wavelength using a combination of several polynomials. The dependences of g on refractive index and ω are investigated and factors are determined to scale the parameterized g to provide values appropriate for other wavelengths. The parameterization scheme consists of only 88 coefficients. The scheme is tested for a large variety of hexagonal crystals in several wavelength bands from 0.2 to 4 micron, revealing absolute differences with reference calculations of ω and g that are both generally below 0.015. Over a large variety of cloud conditions, the resulting root-mean-squared differences with reference calculations of cloud reflectance, transmittance, and absorptance are 1.4%, 1.1%, and 3.4%, respectively. Some practical applications of the parameterization in atmospheric models are highlighted.
Domain-averaged snow depth over complex terrain from flat field measurements
NASA Astrophysics Data System (ADS)
Helbig, Nora; van Herwijnen, Alec
2017-04-01
Snow depth is an important parameter for a variety of coarse-scale models and applications, such as hydrological forecasting. Since high-resolution snow cover models are computationally expensive, simplified snow models are often used. Ground-measured snow depth at single stations provides an opportunity for snow depth data assimilation to improve coarse-scale model forecasts. Snow depth is, however, commonly recorded at so-called flat fields, often in large measurement networks. While these ground measurement networks provide a wealth of information, various studies have questioned the representativeness of such flat field snow depth measurements for the surrounding topography. We developed two parameterizations to compute domain-averaged snow depth for coarse model grid cells over complex topography using easy-to-derive topographic parameters. To derive the two parameterizations we performed a scale-dependent analysis for domain sizes ranging from 50 m to 3 km using highly resolved snow depth maps at the peak of winter from two distinct climatic regions in Switzerland and in the Spanish Pyrenees. The first, simpler parameterization uses a commonly applied linear lapse rate. For the second parameterization, we first removed the obvious elevation gradient in mean snow depth, which revealed an additional correlation with the subgrid sky view factor. We evaluated domain-averaged snow depth derived with both parameterizations from nearby flat field measurements against the domain-averaged highly resolved snow depth. This revealed an overall improved performance for the parameterization combining a power law elevation trend scaled with the subgrid parameterized sky view factor. We therefore suggest that this parameterization could be used to assimilate flat field snow depth into coarse-scale snow model frameworks in order to improve coarse-scale snow depth estimates over complex topography.
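The two parameterizations described above can be sketched schematically: one transfers the flat-field snow depth with a linear lapse rate, the other applies a power-law elevation trend scaled by the subgrid sky view factor. All coefficients below are placeholders, not the calibrated values from the study.

```python
def snow_depth_linear(hs_flat, z_flat, z_domain, lapse_rate=0.0005):
    """Parameterization 1 (schematic): transfer a flat-field snow depth to the
    domain-mean elevation with a linear lapse rate (m of snow per m of
    elevation). The lapse-rate value is a placeholder."""
    return hs_flat + lapse_rate * (z_domain - z_flat)

def snow_depth_svf(hs_flat, z_flat, z_domain, svf_domain,
                   power=1.5, svf_scale=1.0):
    """Parameterization 2 (schematic): a power-law elevation trend scaled by
    the subgrid sky view factor, mirroring the structure described above.
    Exponent and scaling are illustrative assumptions."""
    elev_trend = hs_flat * (z_domain / z_flat) ** power
    return elev_trend * (svf_scale * svf_domain)

# Flat-field station at 1500 m with 0.8 m of snow; domain mean at 2200 m
print(snow_depth_linear(0.8, 1500.0, 2200.0))
print(snow_depth_svf(0.8, 1500.0, 2200.0, svf_domain=0.92))
```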
González, Isaías; Calderón, Antonio José; Mejías, Andrés; Andújar, José Manuel
2016-10-31
In this paper the design and implementation of a network for integrating Programmable Logic Controllers (PLC), the Object-Linking and Embedding for Process Control protocol (OPC) and the open-source Easy Java Simulations (EJS) package is presented. A LabVIEW interface and the Java-Internet-LabVIEW (JIL) server complete the scheme for data exchange. This configuration allows the user to remotely interact with the PLC. Such integration can be considered a novelty in scientific literature for remote control and sensor data acquisition of industrial plants. An experimental application devoted to remote laboratories is developed to demonstrate the feasibility and benefits of the proposed approach. The experiment to be conducted is the parameterization and supervision of a fuzzy controller of a DC servomotor. The graphical user interface has been developed with EJS and the fuzzy control is carried out by our own PLC. In fact, the distinctive features of the proposed novel network application are the integration of the OPC protocol to share information with the PLC and the application under control. The user can perform the tuning of the controller parameters online and observe in real time the effect on the servomotor behavior. The target group is engineering remote users, specifically in control- and automation-related tasks. The proposed architecture system is described and experimental results are presented.
Integrated topology and shape optimization in structural design
NASA Technical Reports Server (NTRS)
Bremicker, M.; Chirehdast, M.; Kikuchi, N.; Papalambros, P. Y.
1990-01-01
Structural optimization procedures usually start from a given design topology and vary its proportions or boundary shapes to achieve optimality under various constraints. Two different categories of structural optimization are distinguished in the literature, namely sizing and shape optimization. A major restriction in both cases is that the design topology is considered fixed and given. Questions concerning the general layout of a design (such as whether a truss or a solid structure should be used) as well as more detailed topology features (e.g., the number and connectivities of bars in a truss or the number of holes in a solid) have to be resolved by design experience before formulating the structural optimization model. Design quality of an optimized structure still depends strongly on engineering intuition. This article presents a novel approach for initiating formal structural optimization at an earlier stage, where the design topology is rigorously generated in addition to selecting shape and size dimensions. A three-phase design process is discussed: an optimal initial topology is created by a homogenization method as a gray level image, which is then transformed to a realizable design using computer vision techniques; this design is then parameterized and treated in detail by sizing and shape optimization. A fully automated process is described for trusses. Optimization of two dimensional solid structures is also discussed. Several application-oriented examples illustrate the usefulness of the proposed methodology.
"Low Power Wireless Technologies: An Approach to Medical Applications"
NASA Astrophysics Data System (ADS)
Bellido O., Francisco J.; González R., Miguel; Moreno M., Antonio; de La Cruz F, José Luis
Wireless communication represented a great leap, both quantitative and qualitative, in the management of information, allowing it to be accessed and exchanged without the need for a physical cable connection. The wireless transmission of voice and data has evolved constantly, giving rise to new standards such as BluetoothTM, WibreeTM or ZigbeeTM, developed under the IEEE 802.15 standard. These newest wireless technologies are oriented toward short-to-medium-range communication systems and are optimized for low cost and low power consumption, becoming recognized as a flexible and reliable medium for data communications across a broad range of applications, owing to the potential of wireless networks to operate in demanding environments while providing clear advantages in cost, size, power, flexibility, and distributed intelligence. Regarding medical applications, remote health care or telecare (also called eHealth) is gaining ground among manufacturers and medical companies seeking to offer products for assisted living and remote monitoring of health parameters. At this point, the IEEE 1073 Personal Health Devices Working Group establishes the framework for these kinds of applications. In particular, the 1073.3.X documents describe the physical and transport layers, where the new ultra-low-power short-range wireless technologies can play a big role, providing solutions that allow the design of products particularly appropriate for monitoring people's health with interoperability requirements.
Module-based multiscale simulation of angiogenesis in skeletal muscle
2011-01-01
Background: Mathematical modeling of angiogenesis has been gaining momentum as a means to shed new light on the biological complexity underlying blood vessel growth. A variety of computational models have been developed, each focusing on different aspects of the angiogenesis process and occurring at different biological scales, ranging from the molecular to the tissue levels. Integration of models at different scales is a challenging and currently unsolved problem. Results: We present an object-oriented module-based computational integration strategy to build a multiscale model of angiogenesis that links currently available models. As an example case, we use this approach to integrate modules representing microvascular blood flow, oxygen transport, vascular endothelial growth factor transport and endothelial cell behavior (sensing, migration and proliferation). Modeling methodologies in these modules include algebraic equations, partial differential equations and agent-based models with complex logical rules. We apply this integrated model to simulate exercise-induced angiogenesis in skeletal muscle. The simulation results compare capillary growth patterns between different exercise conditions for a single bout of exercise. Results demonstrate how the computational infrastructure can effectively integrate multiple modules by coordinating their connectivity and data exchange. Model parameterization offers simulation flexibility and a platform for performing sensitivity analysis. Conclusions: This systems biology strategy can be applied to larger scale integration of computational models of angiogenesis in skeletal muscle, or other complex processes in other tissues under physiological and pathological conditions. PMID:21463529
Predictive Compensator Optimization for Head Tracking Lag in Virtual Environments
NASA Technical Reports Server (NTRS)
Adelstein, Barnard D.; Jung, Jae Y.; Ellis, Stephen R.
2001-01-01
We examined the perceptual impact of plant noise parameterization for Kalman Filter predictive compensation of time delays intrinsic to head tracked virtual environments (VEs). Subjects were tested in their ability to discriminate between the VE system's minimum latency and conditions in which artificially added latency was then predictively compensated back to the system minimum. Two head tracking predictors were parameterized off-line according to cost functions that minimized prediction errors in (1) rotation, and (2) rotation projected into translational displacement with emphasis on higher frequency human operator noise. These predictors were compared with a parameterization obtained from the VE literature for cost function (1). Results from 12 subjects showed that both parameterization type and amount of compensated latency affected discrimination. Analysis of the head motion used in the parameterizations and the subsequent discriminability results suggest that higher frequency predictor artifacts are contributory cues for discriminating the presence of predictive compensation.
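A minimal version of the kind of predictor studied here is a constant-velocity Kalman filter for one head-orientation axis, extrapolated ahead by the latency being compensated; the plant (process) noise variance is the parameter whose tuning the abstract discusses. The state model, noise values, and signal below are illustrative assumptions, not the study's actual parameterizations.

```python
import numpy as np

def kalman_predictor(z, dt, lead_time, q_plant, r_meas):
    """One-axis constant-velocity Kalman filter that predicts head orientation
    lead_time seconds ahead to compensate tracker latency. q_plant is the
    plant-noise variance whose perceptual impact the study examines; all
    numbers here are placeholders."""
    F = np.array([[1.0, dt], [0.0, 1.0]])            # state transition
    H = np.array([[1.0, 0.0]])                        # angle is measured
    Q = q_plant * np.array([[dt ** 3 / 3, dt ** 2 / 2],
                            [dt ** 2 / 2, dt]])       # white-acceleration model
    R = np.array([[r_meas]])
    x = np.zeros(2)
    P = np.eye(2)
    preds = []
    for zk in z:
        # predict to the current measurement time
        x = F @ x
        P = F @ P @ F.T + Q
        # update with the measurement
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + (K @ (np.array([zk]) - H @ x)).ravel()
        P = (np.eye(2) - K @ H) @ P
        # extrapolate ahead by the latency being compensated
        preds.append(x[0] + lead_time * x[1])
    return np.array(preds)

t = np.arange(0, 2, 0.01)                             # 100 Hz tracker
truth = 10 * np.sin(2 * np.pi * 1.0 * t)              # degrees
meas = truth + np.random.normal(0, 0.2, t.size)
pred = kalman_predictor(meas, dt=0.01, lead_time=0.05, q_plant=50.0, r_meas=0.04)
```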
NASA Astrophysics Data System (ADS)
Miner, Nadine Elizabeth
1998-09-01
This dissertation presents a new wavelet-based method for synthesizing perceptually convincing, dynamic sounds using parameterized sound models. The sound synthesis method is applicable to a variety of applications including Virtual Reality (VR), multi-media, entertainment, and the World Wide Web (WWW). A unique contribution of this research is the modeling of the stochastic, or non-pitched, sound components. This stochastic-based modeling approach leads to perceptually compelling sound synthesis. Two preliminary studies were conducted to provide data on multi-sensory interaction and audio-visual synchronization timing. These results contributed to the design of the new sound synthesis method. The method uses a four-phase development process, including analysis, parameterization, synthesis and validation, to create the wavelet-based sound models. A patent is pending for this dynamic sound synthesis method, which provides perceptually realistic, real-time sound generation. This dissertation also presents a battery of perceptual experiments developed to verify the sound synthesis results. These experiments are applicable for the validation of any sound synthesis technique.
A coarse grain model for protein-surface interactions
NASA Astrophysics Data System (ADS)
Wei, Shuai; Knotts, Thomas A.
2013-09-01
The interaction of proteins with surfaces is important in numerous applications in many fields—such as biotechnology, proteomics, sensors, and medicine—but fundamental understanding of how protein stability and structure are affected by surfaces remains incomplete. Over the last several years, molecular simulation using coarse grain models has yielded significant insights, but the formalisms used to represent the surface interactions have been rudimentary. We present a new model for protein surface interactions that incorporates the chemical specificity of both the surface and the residues comprising the protein in the context of a one-bead-per-residue, coarse grain approach that maintains computational efficiency. The model is parameterized against experimental adsorption energies for multiple model peptides on different types of surfaces. The validity of the model is established by its ability to quantitatively and qualitatively predict the free energy of adsorption and structural changes for multiple biologically-relevant proteins on different surfaces. The validation, done with proteins not used in parameterization, shows that the model produces remarkable agreement between simulation and experiment.
NASA Astrophysics Data System (ADS)
Argüeso, D.; Hidalgo-Muñoz, J. M.; Gámiz-Fortis, S. R.; Esteban-Parra, M. J.; Castro-Díez, Y.
2009-04-01
An evaluation of MM5 mesoscale model sensitivity to different parameterization schemes is presented in terms of temperature and precipitation for high-resolution integrations over Andalusia (South of Spain). ERA-40 Reanalysis data are used as initial and boundary conditions. Two domains were used: a coarse one of 55 by 60 grid points with 30 km spacing, and a nested domain of 48 by 72 grid points with 10 km spacing. The coarse domain fully covers the Iberian Peninsula, and Andalusia fits loosely within the finer one. In addition to the parameterization tests, two dynamical downscaling techniques have been applied in order to examine the influence of initial conditions on RCM long-term studies. Regional climate studies usually employ continuous integration for the period under survey, initializing atmospheric fields only at the starting point and feeding boundary conditions regularly. An alternative approach is based on frequent re-initialization of atmospheric fields; hence the simulation is divided into several independent integrations. Altogether, 20 simulations have been performed using varying physics options, of which 4 were carried out applying the re-initialization technique. Surface temperature and accumulated precipitation (daily and monthly scale) were analyzed for a 5-year period covering 1990 to 1994. Results have been compared with daily observational data series from 110 stations for temperature and 95 for precipitation. Both daily and monthly average temperatures are generally well represented by the model. Conversely, daily precipitation results present larger deviations from the observational data. However, noticeable accuracy is gained when comparing with monthly precipitation observations. There are some particularly problematic subregions where precipitation is poorly captured, such as the Southeast of the Iberian Peninsula, mainly due to its extremely convective nature. Regarding the performance of the parameterization schemes, every set provides very similar results for both temperature and precipitation, and no configuration seems to outperform the others for the whole region and for every season. Nevertheless, some marked differences between areas within the domain appear when analyzing certain physics options, particularly for precipitation. Some of the physics options, such as radiation, have little impact on model performance with respect to precipitation, and results do not vary when the scheme is modified. On the other hand, cumulus and boundary layer parameterizations are responsible for most of the differences obtained between configurations. Acknowledgements: The Spanish Ministry of Science and Innovation, with additional support from the European Community Funds (FEDER), project CGL2007-61151/CLI, and the Regional Government of Andalusia project P06-RNM-01622, have financed this study. The "Centro de Servicios de Informática y Redes de Comunicaciones" (CSIRC), Universidad de Granada, has provided the computing time. Key words: MM5 mesoscale model, parameterization schemes, temperature and precipitation, South of Spain.
Frederix, Gerardus W J; van Hasselt, Johan G C; Schellens, Jan H M; Hövels, Anke M; Raaijmakers, Jan A M; Huitema, Alwin D R; Severens, Johan L
2014-01-01
Structural uncertainty relates to differences in model structure and parameterization. For many published health economic analyses in oncology, substantial differences in model structure exist, leading to differences in analysis outcomes and potentially impacting decision-making processes. The objectives of this analysis were (1) to identify differences in model structure and parameterization for cost-effectiveness analyses (CEAs) comparing tamoxifen and anastrozole for adjuvant breast cancer (ABC) treatment; and (2) to quantify the impact of these differences on analysis outcome metrics. The analysis consisted of four steps: (1) review of the literature for identification of eligible CEAs; (2) definition and implementation of a base model structure, which included the core structural components for all identified CEAs; (3) definition and implementation of changes or additions in the base model structure or parameterization; and (4) quantification of the impact of changes in model structure or parameterizations on the analysis outcome metrics life-years gained (LYG), incremental costs (IC) and the incremental cost-effectiveness ratio (ICER). Eleven CEA analyses comparing anastrozole and tamoxifen as ABC treatment were identified. The base model consisted of the following health states: (1) on treatment; (2) off treatment; (3) local recurrence; (4) metastatic disease; (5) death due to breast cancer; and (6) death due to other causes. The base model estimates of anastrozole versus tamoxifen for the LYG, IC and ICER were 0.263 years, €3,647 and €13,868/LYG, respectively. In the published models that were evaluated, differences in model structure included the addition of different recurrence health states, and associated transition rates were identified. Differences in parameterization were related to the incidences of recurrence, local recurrence to metastatic disease, and metastatic disease to death. The separate impact of these model components on the LYG ranged from 0.207 to 0.356 years, while incremental costs ranged from €3,490 to €3,714 and ICERs ranged from €9,804/LYG to €17,966/LYG. When we re-analyzed the published CEAs in our framework by including their respective model properties, the LYG ranged from 0.207 to 0.383 years, IC ranged from €3,556 to €3,731 and ICERs ranged from €9,683/LYG to €17,570/LYG. Differences in model structure and parameterization lead to substantial differences in analysis outcome metrics. This analysis supports the need for more guidance regarding structural uncertainty and the use of standardized disease-specific models for health economic analyses of adjuvant endocrine breast cancer therapies. The developed approach in the current analysis could potentially serve as a template for further evaluations of structural uncertainty and development of disease-specific models.
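The incremental cost-effectiveness arithmetic behind the reported figures is simple to reproduce; the tiny sketch below recomputes the base-case ICER from the quoted incremental cost and LYG, and then shows, purely as an illustration, how strongly the ICER responds when a structural change lowers the LYG toward the bottom of the reported range.

```python
def icer(incremental_cost, life_years_gained):
    """Incremental cost-effectiveness ratio (cost per life-year gained)."""
    return incremental_cost / life_years_gained

# Base model from the abstract: 0.263 LYG at EUR 3,647 incremental cost
print(round(icer(3647, 0.263)))    # 13867, essentially the reported EUR 13,868/LYG

# Illustration of structural sensitivity: holding costs fixed, dropping the
# LYG to the low end of the reported range (0.207) raises the ICER substantially
print(round(icer(3647, 0.207)))    # ~17618 EUR/LYG
```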
Nano-volume drop patterning for rapid on-chip neuronal connect-ability assays.
Petrelli, Alessia; Marconi, Emanuele; Salerno, Marco; De Pietri Tonelli, Davide; Berdondini, Luca; Dante, Silvia
2013-11-21
The ability of neurons to extend projections and to form physical connections among them (i.e., "connect-ability") is altered in several neuropathologies. The quantification of these alterations is an important read-out to investigate pathogenic mechanisms and for research and development of neuropharmacological therapies, however current morphological analysis methods are very time-intensive. Here, we present and characterize a novel on-chip approach that we propose as a rapid assay. Our approach is based on the definition on a neuronal cell culture substrate of discrete patterns of adhesion protein spots (poly-d-lysine, 23 ± 5 μm in diameter) characterized by controlled inter-spot separations of increasing distance (from 40 μm to 100 μm), locally adsorbed in an adhesion-repulsive agarose layer. Under these conditions, the connect-ability of wild type primary neurons from rodents is shown to be strictly dependent on the inter-spot distance, and can be rapidly documented by simple optical read-outs. Moreover, we applied our approach to identify connect-ability defects in neurons from a mouse model of 22q11.2 deletion syndrome/DiGeorge syndrome, by comparative trials with wild type preparations. The presented results demonstrate the sensitivity and reliability of this novel on-chip-based connect-ability approach and validate the use of this method for the rapid assessment of neuronal connect-ability defects in neuropathologies.
Earl, Julia E.; Fuhlendorf, Samuel D.; Haukos, David A.; Tanner, Ashley M.; Elmore, Dwayne; Carleton, Scott A.
2016-01-01
Long-distance movements are important adaptive behaviors that contribute to population, community, and ecosystem connectivity. However, researchers have a poor understanding of the characteristics of long-distance movements for most species. Here, we examined long-distance movements for the lesser prairie-chicken (Tympanuchus pallidicinctus), a species of conservation concern. We addressed the following questions: (1) At what distances could populations be connected? (2) What are the characteristics and probability of dispersal movements? (3) Do lesser prairie-chickens display exploratory and round-trip movements? (4) Do the characteristics of long-distance movements vary by site? Movements were examined from populations using satellite GPS transmitters across the entire distribution of the species in New Mexico, Oklahoma, Kansas, and Colorado. Dispersal movements were recorded up to 71 km net displacement, much farther than hitherto recorded. These distances suggest that there may be greater potential connectivity among populations than previously thought. Dispersal movements were displayed primarily by females and had a northerly directional bias. Dispersal probabilities ranged from 0.08 to 0.43 movements per year for both sexes combined, although these movements averaged only 16 km net displacement. Lesser prairie-chickens displayed both exploratory foray loops and round-trip movements. Half of round-trip movements appeared seasonal, suggesting a partial migration in some populations. None of the long-distance movements varied by study site. Data presented here will be important in parameterizing models assessing population viability and informing conservation planning, although further work is needed to identify landscape features that may reduce connectivity among populations.
Cross section parameterizations for cosmic ray nuclei. 1: Single nucleon removal
NASA Technical Reports Server (NTRS)
Norbury, John W.; Townsend, Lawrence W.
1992-01-01
Parameterizations of single nucleon removal from electromagnetic and strong interactions of cosmic rays with nuclei are presented. These parameterizations are based upon the most accurate theoretical calculations available to date. They should be very suitable for use in cosmic ray propagation through interstellar space, the Earth's atmosphere, lunar samples, meteorites, spacecraft walls and lunar and martian habitats.
Improvement of the GEOS-5 AGCM upon Updating the Air-Sea Roughness Parameterization
NASA Technical Reports Server (NTRS)
Garfinkel, C. I.; Molod, A.; Oman, L. D.; Song, I.-S.
2011-01-01
The impact of an air-sea roughness parameterization over the ocean that more closely matches recent observations of air-sea exchange is examined in the NASA Goddard Earth Observing System, version 5 (GEOS-5) atmospheric general circulation model. Surface wind biases in the GEOS-5 AGCM are decreased by up to 1.2 m/s. The new parameterization also has implications aloft as improvements extend into the stratosphere. Many other GCMs (both for operational weather forecasting and climate) use a similar class of parameterization for their air-sea roughness scheme. We therefore expect that results from GEOS-5 are relevant to other models as well.
Structural and parametric uncertainty quantification in cloud microphysics parameterization schemes
NASA Astrophysics Data System (ADS)
van Lier-Walqui, M.; Morrison, H.; Kumjian, M. R.; Prat, O. P.; Martinkus, C.
2017-12-01
Atmospheric model parameterization schemes employ approximations to represent the effects of unresolved processes. These approximations are a source of error in forecasts, caused in part by considerable uncertainty about the optimal value of parameters within each scheme -- parametric uncertainty. Furthermore, there is uncertainty regarding the best choice of the overarching structure of the parameterization scheme -- structural uncertainty. Parameter estimation can constrain the first, but may struggle with the second because structural choices are typically discrete. We address this problem in the context of cloud microphysics parameterization schemes by creating a flexible framework wherein structural and parametric uncertainties can be simultaneously constrained. Our scheme makes no assumptions about drop size distribution shape or the functional form of parameterized process rate terms. Instead, these uncertainties are constrained by observations using a Markov Chain Monte Carlo sampler within a Bayesian inference framework. Our scheme, the Bayesian Observationally-constrained Statistical-physical Scheme (BOSS), has the flexibility to predict various sets of prognostic drop size distribution moments as well as varying complexity of process rate formulations. We compare idealized probabilistic forecasts from versions of BOSS with varying levels of structural complexity. This work has applications in ensemble forecasts with model physics uncertainty, data assimilation, and cloud microphysics process studies.
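The core idea of constraining process-rate parameters with observations via Markov Chain Monte Carlo can be illustrated with a generic Metropolis sampler fitting a single power-law rate term to synthetic data. This is a toy stand-in, not BOSS; the rate form, priors, proposal widths, and "observations" are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def process_rate(q, a, b):
    """Generic power-law process-rate term, rate = a * q**b, standing in for a
    parameterized microphysical process (purely illustrative)."""
    return a * q ** b

# Synthetic "observations" of the rate at several cloud water contents (g/kg)
q_obs = np.array([0.1, 0.2, 0.5, 1.0, 2.0])
true_a, true_b, sigma = 1.5e-3, 2.0, 2e-4
rate_obs = process_rate(q_obs, true_a, true_b) + rng.normal(0, sigma, q_obs.size)

def log_posterior(theta):
    a, b = theta
    if a <= 0 or not (0 < b < 5):                      # simple flat priors
        return -np.inf
    resid = rate_obs - process_rate(q_obs, a, b)
    return -0.5 * np.sum((resid / sigma) ** 2)

# Metropolis sampler over (a, b)
theta = np.array([1e-3, 1.0])
logp = log_posterior(theta)
chain = []
for _ in range(20000):
    prop = theta + rng.normal(0, [1e-4, 0.05])
    logp_prop = log_posterior(prop)
    if np.log(rng.uniform()) < logp_prop - logp:
        theta, logp = prop, logp_prop
    chain.append(theta)
chain = np.array(chain)[5000:]                         # discard burn-in
print(chain.mean(axis=0), chain.std(axis=0))           # posterior mean and spread
```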
NASA Astrophysics Data System (ADS)
Zhong, Shuixin; Chen, Zitong; Xu, Daosheng; Zhang, Yanxia
2018-06-01
Unresolved small-scale orographic (SSO) drags are parameterized in a regional model based on the Global/Regional Assimilation and Prediction System for the Tropical Mesoscale Model (GRAPES TMM). The SSO drags are represented by adding a sink term in the momentum equations. The maximum height of the mountain within the grid box is adopted in the SSO parameterization (SSOP) scheme as compensation for the drag. The effects of the unresolved topography are parameterized as the feedbacks to the momentum tendencies on the first model level in planetary boundary layer (PBL) parameterization. The SSOP scheme has been implemented and coupled with the PBL parameterization scheme within the model physics package. A monthly simulation is designed to examine the performance of the SSOP scheme over the complex terrain areas located in the southwest of Guangdong. The verification results show that the surface wind speed bias has been much alleviated by adopting the SSOP scheme, in addition to reduction of the wind bias in the lower troposphere. The target verification over Xinyi shows that the simulations with the SSOP scheme provide improved wind estimation over the complex regions in the southwest of Guangdong.
NASA Technical Reports Server (NTRS)
Plumb, R. A.
1985-01-01
Two dimensional modeling has become an established technique for the simulation of the global structure of trace constituents. Such models are simpler to formulate and cheaper to operate than three dimensional general circulation models, while avoiding some of the gross simplifications of one dimensional models. Nevertheless, the parameterization of eddy fluxes required in a 2-D model is not a trivial problem. This fact has apparently led some to interpret the shortcomings of existing 2-D models as indicating that the parameterization procedure is wrong in principle. There are grounds to believe that these shortcomings result primarily from incorrect implementations of the predictions of eddy transport theory and that a properly based parameterization may provide a good basis for atmospheric modeling. The existence of these GCM-derived coefficients affords an unprecedented opportunity to test the validity of the flux-gradient parameterization. To this end, a zonally averaged (2-D) model was developed, using these coefficients in the transport parameterization. Results from this model for a number of contrived tracer experiments were compared with the parent GCM. The generally good agreement substantially validates the flux-gradient parameterization, and thus the basic principle of 2-D modeling.
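The flux-gradient closure being tested writes the zonally averaged eddy tracer flux as a diffusivity times the mean gradient, F = -K ∂q/∂y. The sketch below applies a 1-D (latitude-only) version of that closure to mix a tracer poleward; the diffusivity profile, grid, and time step are placeholders rather than GCM-derived coefficients.

```python
import numpy as np

# Latitude grid (radians) and a placeholder meridional eddy diffusivity K(y)
ny = 91
lat = np.linspace(-np.pi / 2, np.pi / 2, ny)
a_earth = 6.371e6                                     # m
y = a_earth * lat                                     # meridional coordinate
K = 1.0e6 * (1.0 + np.cos(lat) ** 2)                  # m^2/s, placeholder profile

def eddy_tendency(q, y, K):
    """Flux-gradient closure: F = -K dq/dy; return -dF/dy, i.e. the eddy
    contribution to dq/dt in a zonally averaged model (1-D sketch)."""
    dq_dy = np.gradient(q, y)
    F = -K * dq_dy
    return -np.gradient(F, y)

# A tracer initially confined to the tropics, mixed poleward by the eddies
q = np.exp(-(lat / 0.3) ** 2)
dt = 6 * 3600.0                                        # s
for _ in range(400):                                   # ~100 days
    q += dt * eddy_tendency(q, y, K)
```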
NASA Astrophysics Data System (ADS)
Schwartz, M. Christian
2017-08-01
This paper addresses two straightforward questions. First, how similar are the statistics of cirrus particle size distribution (PSD) datasets collected using the Two-Dimensional Stereo (2D-S) probe to cirrus PSD datasets collected using older Particle Measuring Systems (PMS) 2-D Cloud (2DC) and 2-D Precipitation (2DP) probes? Second, how similar are the datasets when shatter-correcting post-processing is applied to the 2DC datasets? To answer these questions, a database of measured and parameterized cirrus PSDs - constructed from measurements taken during the Small Particles in Cirrus (SPARTICUS); Mid-latitude Airborne Cirrus Properties Experiment (MACPEX); and Tropical Composition, Cloud, and Climate Coupling (TC4) flight campaigns - is used. Bulk cloud quantities are computed from the 2D-S database in three ways: first, directly from the 2D-S data; second, by applying the 2D-S data to ice PSD parameterizations developed using sets of cirrus measurements collected using the older PMS probes; and third, by applying the 2D-S data to a similar parameterization developed using the 2D-S data themselves. This is done so that measurements of the same cloud volumes by parameterized versions of the 2DC and 2D-S can be compared with one another. It is thereby seen - given the same cloud field and given the same assumptions concerning ice crystal cross-sectional area, density, and radar cross section - that the parameterized 2D-S and the parameterized 2DC predict similar distributions of inferred shortwave extinction coefficient, ice water content, and 94 GHz radar reflectivity. However, the parameterization of the 2DC based on uncorrected data predicts a statistically significantly higher number of total ice crystals and a larger ratio of small ice crystals to large ice crystals than does the parameterized 2D-S. The 2DC parameterization based on shatter-corrected data also predicts statistically different numbers of ice crystals than does the parameterized 2D-S, but the comparison between the two is nevertheless more favorable. It is concluded that the older datasets continue to be useful for scientific purposes, with certain caveats, and that continuing field investigations of cirrus with more modern probes is desirable.
NASA Astrophysics Data System (ADS)
Khan, Tanvir R.; Perlinger, Judith A.
2017-10-01
Despite considerable effort to develop mechanistic dry particle deposition parameterizations for atmospheric transport models, current knowledge has been inadequate to propose quantitative measures of the relative performance of available parameterizations. In this study, we evaluated the performance of five dry particle deposition parameterizations developed by Zhang et al. (2001) (Z01), Petroff and Zhang (2010) (PZ10), Kouznetsov and Sofiev (2012) (KS12), Zhang and He (2014) (ZH14), and Zhang and Shao (2014) (ZS14), respectively. The evaluation was performed in three dimensions: model ability to reproduce observed deposition velocities, Vd (accuracy); the influence of imprecision in input parameter values on the modeled Vd (uncertainty); and identification of the most influential parameter(s) (sensitivity). The accuracy of the modeled Vd was evaluated using observations obtained from five land use categories (LUCs): grass, coniferous and deciduous forests, natural water, and ice/snow. To ascertain the uncertainty in modeled Vd, and quantify the influence of imprecision in key model input parameters, a Monte Carlo uncertainty analysis was performed. The Sobol' sensitivity analysis was conducted with the objective to determine the parameter ranking from the most to the least influential. Comparing the normalized mean bias factors (indicators of accuracy), we find that the ZH14 parameterization is the most accurate for all LUCs except for coniferous forest, for which it is second most accurate. From Monte Carlo simulations, the estimated mean normalized uncertainties in the modeled Vd obtained for seven particle sizes (ranging from 0.005 to 2.5 µm) for the five LUCs are 17, 12, 13, 16, and 27 % for the Z01, PZ10, KS12, ZH14, and ZS14 parameterizations, respectively. From the Sobol' sensitivity results, we suggest that the parameter rankings vary by particle size and LUC for a given parameterization. Overall, for dp = 0.001 to 1.0 µm, friction velocity was one of the three most influential parameters in all parameterizations. For giant particles (dp = 10 µm), relative humidity was the most influential parameter. Because it is the least complex of the five parameterizations, and it has the greatest accuracy and least uncertainty, we propose that the ZH14 parameterization is currently superior for incorporation into atmospheric transport models.
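The Monte Carlo normalized-uncertainty calculation described above (perturb imprecise inputs, then report the spread of the modeled Vd) can be sketched with a toy resistance-form deposition velocity. The resistance formula below is a deliberately crude stand-in for the Z01/PZ10/KS12/ZH14/ZS14 schemes, and all input distributions are assumptions chosen only to show the procedure.

```python
import numpy as np

rng = np.random.default_rng(1)

def deposition_velocity(u_star, ra, rs_coeff):
    """Toy resistance-form deposition velocity, Vd = 1/(Ra + Rs), with a
    hypothetical surface resistance Rs = 1/(rs_coeff * u*). This is only a
    stand-in for the far more detailed published schemes, used here to
    illustrate the Monte Carlo normalized-uncertainty calculation."""
    rs = 1.0 / (rs_coeff * u_star)
    return 1.0 / (ra + rs)

n = 10000
# Impose imprecision on the inputs (e.g. ~20% on friction velocity), mirroring
# the idea of perturbing key model input parameters
u_star = np.clip(rng.normal(0.4, 0.08, n), 0.05, None)   # m/s
ra = rng.normal(50.0, 10.0, n)                            # s/m, aerodynamic resistance
rs_coeff = rng.normal(0.01, 0.002, n)                     # hypothetical collection term

vd = deposition_velocity(u_star, ra, rs_coeff)
normalized_uncertainty = vd.std() / vd.mean()
print(f"mean Vd = {vd.mean():.4f} m/s, "
      f"normalized uncertainty = {100 * normalized_uncertainty:.1f} %")
```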
Dynamics of Intersubject Brain Networks during Anxious Anticipation
Najafi, Mahshid; Kinnison, Joshua; Pessoa, Luiz
2017-01-01
How do large-scale brain networks reorganize during the waxing and waning of anxious anticipation? Here, threat was dynamically modulated during human functional MRI as two circles slowly meandered on the screen; if they touched, an unpleasant shock was delivered. We employed intersubject correlation analysis, which allowed the investigation of network-level functional connectivity across brains, and sought to determine how network connectivity changed during periods of approach (circles moving closer) and periods of retreat (circles moving apart). Analysis of positive connection weights revealed that dynamic threat altered connectivity within and between the salience, executive, and task-negative networks. For example, dynamic functional connectivity increased within the salience network during approach and decreased during retreat. The opposite pattern was found for the functional connectivity between the salience and task-negative networks: decreases during approach and increases during retreat. Functional connections between subcortical regions and the salience network also changed dynamically during approach and retreat periods. Subcortical regions exhibiting such changes included the putative periaqueductal gray, putative habenula, and putative bed nucleus of the stria terminalis. Additional analysis of negative functional connections revealed dynamic changes, too. For example, negative weights within the salience network decreased during approach and increased during retreat, opposite what was found for positive weights. Together, our findings unraveled dynamic features of functional connectivity of large-scale networks and subcortical regions across participants while threat levels varied continuously, and demonstrate the potential of characterizing emotional processing at the level of dynamic networks. PMID:29209184
Adaptive Aft Signature Shaping of a Low-Boom Supersonic Aircraft Using Off-Body Pressures
NASA Technical Reports Server (NTRS)
Ordaz, Irian; Li, Wu
2012-01-01
The design and optimization of a low-boom supersonic aircraft using the state-of-the-art off-body aerodynamics and sonic boom analysis has long been a challenging problem. The focus of this paper is to demonstrate an effective geometry parameterization scheme and a numerical optimization approach for the aft shaping of a low-boom supersonic aircraft using off-body pressure calculations. A gradient-based numerical optimization algorithm that models the objective and constraints as response surface equations is used to drive the aft ground signature toward a ramp shape. The design objective is the minimization of the variation between the ground signature and the target signature subject to several geometric and signature constraints. The target signature is computed by using a least-squares regression of the aft portion of the ground signature. The parameterization and the deformation of the geometry are performed with a NASA in-house shaping tool. The optimization algorithm uses the shaping tool to drive the geometric deformation of a horizontal tail with a parameterization scheme that consists of seven camber design variables and an additional design variable that describes the spanwise location of the midspan section. The demonstration cases show that numerical optimization using the state-of-the-art off-body aerodynamic calculations is not only feasible and repeatable but also allows the exploration of complex design spaces for which a knowledge-based design method becomes less effective.
NASA Astrophysics Data System (ADS)
Paukert, M.; Hoose, C.; Simmel, M.
2017-03-01
In model studies of aerosol-dependent immersion freezing in clouds, a common assumption is that each ice nucleating aerosol particle corresponds to exactly one cloud droplet. In contrast, the immersion freezing of larger drops—"rain"—is usually represented by a liquid volume-dependent approach, making the parameterizations of rain freezing independent of specific aerosol types and concentrations. This may lead to inconsistencies when aerosol effects on clouds and precipitation shall be investigated, since raindrops consist of the cloud droplets—and corresponding aerosol particles—that have been involved in drop-drop-collisions. Here we introduce an extension to a two-moment microphysical scheme in order to account explicitly for particle accumulation in raindrops by tracking the rates of selfcollection, autoconversion, and accretion. This provides a direct link between ice nuclei and the primary formation of large precipitating ice particles. A new parameterization scheme of drop freezing is presented to consider multiple ice nuclei within one drop and effective drop cooling rates. In our test cases of deep convective clouds, we find that at altitudes which are most relevant for immersion freezing, the majority of potential ice nuclei have been converted from cloud droplets into raindrops. Compared to the standard treatment of freezing in our model, the less efficient mineral dust-based freezing results in higher rainwater contents in the convective core, affecting both rain and hail precipitation. The aerosol-dependent treatment of rain freezing can reverse the signs of simulated precipitation sensitivities to ice nuclei perturbations.
Griffin, Brian M.; Larson, Vincent E.
2016-11-25
Microphysical processes, such as the formation, growth, and evaporation of precipitation, interact with variability and covariances (e.g., fluxes) in moisture and heat content. For instance, evaporation of rain may produce cold pools, which in turn may trigger fresh convection and precipitation. These effects are usually omitted or else crudely parameterized at subgrid scales in weather and climate models. A more formal approach is pursued here, based on predictive, horizontally averaged equations for the variances, covariances, and fluxes of moisture and heat content. These higher-order moment equations contain microphysical source terms. The microphysics terms can be integrated analytically, given a suitably simple warm-rain microphysics scheme and an approximate assumption about the multivariate distribution of cloud-related and precipitation-related variables. Performing the integrations provides exact expressions within an idealized context. A large-eddy simulation (LES) of a shallow precipitating cumulus case is performed here, and it indicates that the microphysical effects on (co)variances and fluxes can be large. In some budgets and altitude ranges, they are dominant terms. The analytic expressions for the integrals are implemented in a single-column, higher-order closure model. Interactive single-column simulations agree qualitatively with the LES. The analytic integrations form a parameterization of microphysical effects in their own right, and they also serve as benchmark solutions that can be compared to non-analytic integration methods.
Adaptively Parameterized Tomography of the Western Hellenic Subduction Zone
NASA Astrophysics Data System (ADS)
Hansen, S. E.; Papadopoulos, G. A.
2017-12-01
The Hellenic subduction zone (HSZ) is the most seismically active region in Europe and plays a major role in the active tectonics of the eastern Mediterranean. This complicated environment has the potential to generate both large magnitude (M > 8) earthquakes and tsunamis. Situated above the western end of the HSZ, Greece faces a high risk from these geologic hazards, and characterizing this risk requires detailed understanding of the geodynamic processes occurring in this area. However, despite previous investigations, the kinematics of the HSZ are still controversial. Regional tomographic studies have yielded important information about the shallow seismic structure of the HSZ, but these models only image down to 150 km depth within small geographic areas. Deeper structure is constrained by global tomographic models but with coarser resolution (~200-300 km). Additionally, current tomographic models focused on the HSZ were generated with regularly-spaced gridding, and this type of parameterization often over-emphasizes poorly sampled regions of the model or under-represents small-scale structure. Therefore, we are developing a new, high-resolution image of the mantle structure beneath the western HSZ using an adaptively parameterized seismic tomography approach. By combining multiple, regional travel-time datasets in the context of a global model, with adaptable gridding based on the sampling density of high-frequency data, this method generates a composite model of mantle structure that is being used to better characterize geodynamic processes within the HSZ, thereby allowing for improved hazard assessment. Preliminary results will be shown.
Physics-based distributed snow models in the operational arena: Current and future challenges
NASA Astrophysics Data System (ADS)
Winstral, A. H.; Jonas, T.; Schirmer, M.; Helbig, N.
2017-12-01
The demand for modeling tools robust to climate change and weather extremes, along with coincident increases in computational capabilities, has led to an increase in the use of physics-based snow models in operational applications. Current operational applications include those of the WSL-SLF across Switzerland, the ASO in California, and the USDA-ARS in Idaho. While the physics-based approaches offer many advantages, there remain limitations and modeling challenges. The most evident limitation remains computation times that often limit forecasters to a single, deterministic model run. Other limitations, however, remain less conspicuous amid the assumption that these models, being founded on physical principles, require little to no calibration. Yet all energy balance snow models seemingly contain parameterizations or simplifications of processes where validation data are scarce or present understanding is limited. At the research-basin scale where many of these models were developed, these modeling elements may prove adequate. However, when applied over large areas, spatially invariable parameterizations of snow albedo, roughness lengths and atmospheric exchange coefficients - all vital to determining the snowcover energy balance - become problematic. Moreover, as we apply models over larger grid cells, the representation of sub-grid variability such as the snow-covered fraction adds to the challenges. Here, we will demonstrate some of the major sensitivities of distributed energy balance snow models to particular model constructs, highlight the need for advanced and spatially flexible methods and parameterizations, and invite the community to open dialogue and future collaborations to further modeling capabilities.
NASA Astrophysics Data System (ADS)
Parishani, H.; Pritchard, M. S.; Bretherton, C. S.; Wyant, M. C.; Khairoutdinov, M.; Singh, B.
2017-12-01
Biases and parameterization formulation uncertainties in the representation of boundary layer clouds remain a leading source of possible systematic error in climate projections. Here we show the first results of cloud feedback to +4K SST warming in a new experimental climate model, the "Ultra-Parameterized (UP)" Community Atmosphere Model, UPCAM. We have developed UPCAM as an unusually high-resolution implementation of cloud superparameterization (SP) in which a global set of cloud resolving arrays is embedded in a host global climate model. In UP, the cloud-resolving scale includes sufficient internal resolution to explicitly generate the turbulent eddies that form marine stratocumulus and trade cumulus clouds. This is computationally costly but complements other available approaches for studying low clouds and their climate interaction, by avoiding parameterization of the relevant scales. In a recent publication we have shown that UP, while not without its own complexity trade-offs, can produce encouraging improvements in low cloud climatology in multi-month simulations of the present climate and is a promising target for exascale computing (Parishani et al. 2017). Here we show results of its low cloud feedback to warming in multi-year simulations for the first time. References: Parishani, H., M. S. Pritchard, C. S. Bretherton, M. C. Wyant, and M. Khairoutdinov (2017), Toward low-cloud-permitting cloud superparameterization with explicit boundary layer turbulence, J. Adv. Model. Earth Syst., 9, doi:10.1002/2017MS000968.
NASA Astrophysics Data System (ADS)
Daras, Ilias; Pail, Roland
2017-09-01
Temporal aliasing effects have a large impact on the gravity field accuracy of current gravimetry missions and are also expected to dominate the error budget of Next Generation Gravimetry Missions (NGGMs). This paper focuses on aspects concerning their treatment in the context of Low-Low Satellite-to-Satellite Tracking NGGMs. Closed-loop full-scale simulations are performed for a two-pair Bender-type Satellite Formation Flight (SFF), by taking into account error models of new generation instrument technology. The enhanced spatial sampling and error isotropy enable a further reduction of temporal aliasing errors from the processing perspective. A parameterization technique is adopted where the functional model is augmented by low-resolution gravity field solutions coestimated at short time intervals, while the remaining higher-resolution gravity field solution is estimated at a longer time interval. Fine-tuning the parameterization choices leads to significant reduction of the temporal aliasing effects. The investigations reveal that the parameterization technique in case of a Bender-type SFF can successfully mitigate aliasing effects caused by undersampling of high-frequency atmospheric and oceanic signals, since their most significant variations can be captured by daily coestimated solutions. This amounts to a "self-dealiasing" method that differs significantly from the classical dealiasing approach used nowadays for Gravity Recovery and Climate Experiment processing, enabling NGGMs to retrieve the complete spectrum of Earth's nontidal geophysical processes, including, for the first time, high-frequency atmospheric and oceanic variations.
NASA Astrophysics Data System (ADS)
Jánský, Jaroslav; Lucas, Greg M.; Kalb, Christina; Bayona, Victor; Peterson, Michael J.; Deierling, Wiebke; Flyer, Natasha; Pasko, Victor P.
2017-12-01
This work analyzes different current source and conductivity parameterizations and their influence on the diurnal variation of the global electric circuit (GEC). The diurnal variations of the current source parameterizations obtained using electric field and conductivity measurements from plane overflights combined with global Tropical Rainfall Measuring Mission satellite data give generally good agreement with measured diurnal variation of the electric field at Vostok, Antarctica, where reference experimental measurements are performed. An approach employing 85 GHz passive microwave observations to infer currents within the GEC is compared and shows the best agreement in amplitude and phase with experimental measurements. To study the conductivity influence, GEC models solving the continuity equation in 3-D are used to calculate atmospheric resistance using yearly averaged conductivity obtained from the global circulation model Community Earth System Model (CESM). Then, using current source parameterization combining mean currents and global counts of electrified clouds, if the exponential conductivity is substituted by the conductivity from CESM, the peak to peak diurnal variation of the ionospheric potential of the GEC decreases from 24% to 20%. The main reason for the change is the presence of clouds while effects of 222Rn ionization, aerosols, and topography are less pronounced. The simulated peak to peak diurnal variation of the electric field at Vostok is increased from 15% to 18% from the diurnal variation of the global current in the GEC if conductivity from CESM is used.
NASA Astrophysics Data System (ADS)
Krzyścin, Janusz
1990-01-01
In this paper we analytically solve the wave kinematic equations and the wave energy transport equation for a basic long surface gravity wave in the coastal upwelling zone. Using Gent and Taylor's (1978) parameterization of the drag coefficient (which includes the interaction between long surface waves and the airflow), we find the variability of this coefficient due to wave amplification and refraction caused by the specific surface water current in the region. The drag coefficient grows toward the shore, and the growth is faster for a stronger current. When the angle between the waves and the current is less than 90°, the growth is mainly connected with the wave steepness; when the angle is larger, it is caused by the relative growth of the wave phase velocity.
Computational prediction of the pKas of small peptides through Conceptual DFT descriptors
NASA Astrophysics Data System (ADS)
Frau, Juan; Hernández-Haro, Noemí; Glossman-Mitnik, Daniel
2017-03-01
The experimental pKa values of a group of simple amines have been plotted against several Conceptual DFT descriptors calculated by means of different density functionals, basis sets, and solvation schemes. The best fits are those that relate the pKa of the amines to the global hardness η computed with the MN12SX density functional in combination with the Def2TZVP basis set and the SMD solvation model, using water as the solvent. The parameterized equation resulting from the linear regression analysis was then used to predict the pKa of small peptides of interest in the study of diabetes and Alzheimer's disease. The accuracy of the results is relatively good, with a MAD of 0.36 pKa units.
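A minimal sketch of the regression step described above: fitting pKa against the global hardness η and reusing the fitted line for prediction. The numerical values are placeholders, not data from the study.

```python
import numpy as np
from scipy import stats

# Hypothetical calibration data: global hardness (eta) and experimental pKa for simple amines.
eta = np.array([0.21, 0.19, 0.23, 0.18, 0.20])
pka_exp = np.array([10.6, 10.1, 11.0, 9.7, 10.3])

# Linear regression pKa = slope * eta + intercept.
fit = stats.linregress(eta, pka_exp)
print(f"pKa = {fit.slope:.2f} * eta + {fit.intercept:.2f}  (r^2 = {fit.rvalue**2:.3f})")

def predict_pka(eta_new):
    """Predict the pKa of a new species (e.g., a small peptide) from its hardness."""
    return fit.slope * eta_new + fit.intercept

print(predict_pka(0.22))
```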
Shortwave radiation parameterization scheme for subgrid topography
NASA Astrophysics Data System (ADS)
Helbig, N.; LöWe, H.
2012-02-01
Topography is well known to alter the shortwave radiation balance at the surface. A detailed radiation balance is therefore required in mountainous terrain. In order to maintain the computational performance of large-scale models while at the same time increasing grid resolutions, subgrid parameterizations are gaining more importance. A complete radiation parameterization scheme for subgrid topography accounting for shading, limited sky view, and terrain reflections is presented. Each radiative flux is parameterized individually as a function of sky view factor, slope and sun elevation angle, and albedo. We validated the parameterization with domain-averaged values computed from a distributed radiation model which includes a detailed shortwave radiation balance. Furthermore, we quantify the individual topographic impacts on the shortwave radiation balance. Rather than using a limited set of real topographies we used a large ensemble of simulated topographies with a wide range of typical terrain characteristics to study all topographic influences on the radiation balance. To this end slopes and partial derivatives of seven real topographies from Switzerland and the United States were analyzed and Gaussian statistics were found to best approximate real topographies. Parameterized direct beam radiation presented previously compared well with modeled values over the entire range of slope angles. The approximation of multiple, anisotropic terrain reflections with single, isotropic terrain reflections was confirmed as long as domain-averaged values are considered. The validation of all parameterized radiative fluxes showed that it is indeed not necessary to compute subgrid fluxes in order to account for all topographic influences in large grid sizes.
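The sketch below uses illustrative first-order forms only, not the paper's fitted functions, to show how domain-averaged shortwave components are commonly corrected for subgrid terrain using a sky view factor and albedo; the direct-beam factor is a placeholder for the parameterized function of slope and sun elevation mentioned in the abstract.

```python
def terrain_corrected_fluxes(direct_flat, diffuse_flat, albedo, sky_view,
                             direct_terrain_factor=0.95):
    """Return (direct, diffuse, terrain-reflected) fluxes in W m^-2 for one grid cell."""
    direct = direct_flat * direct_terrain_factor          # placeholder for f(slope, sun elevation)
    diffuse = diffuse_flat * sky_view                      # limited sky view reduces sky diffuse
    terrain = albedo * (1.0 - sky_view) * (direct_flat + diffuse_flat)  # single isotropic reflection
    return direct, diffuse, terrain

print(terrain_corrected_fluxes(700.0, 120.0, albedo=0.6, sky_view=0.92))
```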
NASA Astrophysics Data System (ADS)
Johnson, D. J.; Needham, J.; Xu, C.; Davies, S. J.; Bunyavejchewin, S.; Giardina, C. P.; Condit, R.; Cordell, S.; Litton, C. M.; Hubbell, S.; Kassim, A. R. B.; Shawn, L. K. Y.; Nasardin, M. B.; Ong, P.; Ostertag, R.; Sack, L.; Tan, S. K. S.; Yap, S.; McDowell, N. G.; McMahon, S.
2016-12-01
Terrestrial carbon cycling is a function of the growth and survival of trees. Current model representations of tree growth and survival at a global scale rely on coarse plant functional traits that are parameterized very generally. In view of the large biodiversity in the tropical forests, it is important that we account for the functional diversity in order to better predict tropical forest responses to future climate changes. Several next generation Earth System Models are moving towards a size-structured, trait-based approach to modelling vegetation globally, but the challenge of which and how many traits are necessary to capture forest complexity remains. Additionally, the challenge of collecting sufficient trait data to describe the vast species richness of tropical forests is enormous. We propose a more fundamental approach to these problems by characterizing forests by their patterns of survival. We expect our approach to distill real-world tree survival into a reasonable number of functional types. Using 10 large-area tropical forest plots that span geographic, edaphic and climatic gradients, we model tree survival as a function of tree size for hundreds of species. We found surprisingly few categories of size-survival functions emerge. This indicates some fundamental strategies at play across diverse forests to constrain the range of possible size-survival functions. Initial cluster analysis indicates that four to eight functional forms are necessary to describe variation in size-survival relations. Temporal variation in size-survival functions can be related to local environmental variation, allowing us to parameterize how demographically similar groups of species respond to perturbations in the ecosystem. We believe this methodology will yield a synthetic approach to classifying forest systems that will greatly reduce uncertainty and complexity in global vegetation models.
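A sketch of the general workflow described above: fit a size-survival curve for each species, then cluster the fitted parameters into a small number of candidate functional types. The logistic functional form, the synthetic data, and the choice of four clusters are illustrative assumptions, not the study's analysis.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.cluster.vq import kmeans2

rng = np.random.default_rng(0)

def survival(dbh, k, d50):
    """Annual survival probability as a logistic function of stem diameter (cm)."""
    return 1.0 / (1.0 + np.exp(-k * (dbh - d50)))

# Synthetic per-species data: diameters and observed survival fractions.
params = []
for _ in range(50):                                   # 50 hypothetical species
    k_true, d50_true = rng.uniform(0.05, 0.5), rng.uniform(5, 30)
    dbh = np.linspace(1, 80, 25)
    obs = survival(dbh, k_true, d50_true) + rng.normal(0, 0.02, dbh.size)
    p, _ = curve_fit(survival, dbh, obs, p0=[0.2, 15.0], maxfev=5000)
    params.append(p)

# Cluster species by their fitted (k, d50) parameters into candidate functional types.
centroids, labels = kmeans2(np.array(params), 4)
print(np.bincount(labels))
```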
The practice of prediction: What can ecologists learn from applied, ecology-related fields?
Pennekamp, Frank; Adamson, Matthew; Petchey, Owen L; Poggiale, Jean-Christophe; Aguiar, Maira; Kooi, Bob W.; Botkin, Daniel B.; DeAngelis, Donald L.
2017-01-01
The pervasive influence of human induced global environmental change affects biodiversity across the globe, and there is great uncertainty as to how the biosphere will react on short and longer time scales. To adapt to what the future holds and to manage the impacts of global change, scientists need to predict the expected effects with some confidence and communicate these predictions to policy makers. However, recent reviews found that we currently lack a clear understanding of how predictable ecology is, with views seeing it as mostly unpredictable to potentially predictable, at least over short time frames. However, in applied, ecology-related fields predictions are more commonly formulated and reported, as well as evaluated in hindsight, potentially allowing one to define baselines of predictive proficiency in these fields. We searched the literature for representative case studies in these fields and collected information about modeling approaches, target variables of prediction, predictive proficiency achieved, as well as the availability of data to parameterize predictive models. We find that some fields such as epidemiology achieve high predictive proficiency, but even in the more predictive fields proficiency is evaluated in different ways. Both phenomenological and mechanistic approaches are used in most fields, but differences are often small, with no clear superiority of one approach over the other. Data availability is limiting in most fields, with long-term studies being rare and detailed data for parameterizing mechanistic models being in short supply. We suggest that ecologists adopt a more rigorous approach to report and assess predictive proficiency, and embrace the challenges of real world decision making to strengthen the practice of prediction in ecology.
2008-07-29
minimization is performed. It is critical that all other force field parameters (for bonds, angles, charges, and Lennard-Jones interactions) be pre...and tailoring the parameterization accordingly may be critical. For Phase I, the above-described procedure was performed manually to obtain dihedral... critical that a reliable approach is available to guide experimental efforts and design. In addition, the automation of force field development will
NASA Technical Reports Server (NTRS)
Barber, Peter W.; Demerdash, Nabeel A. O.; Hurysz, B.; Luo, Z.; Denny, Hugh W.; Millard, David P.; Herkert, R.; Wang, R.
1992-01-01
The goal of this research project was to analyze the potential effects of electromagnetic interference (EMI) originating from power system processing and transmission components for Space Station Freedom. The approach consists of four steps: (1) developing analytical tools (models and computer programs); (2) conducting parameterization (what if?) studies; (3) predicting the global space station EMI environment; and (4) providing a basis for modification of EMI standards.
Bayesian Statistics and Uncertainty Quantification for Safety Boundary Analysis in Complex Systems
NASA Technical Reports Server (NTRS)
He, Yuning; Davies, Misty Dawn
2014-01-01
The analysis of a safety-critical system often requires detailed knowledge of safe regions and their high-dimensional nonlinear boundaries. We present a statistical approach to iteratively detect and characterize the boundaries, which are provided as parameterized shape candidates. Using methods from uncertainty quantification and active learning, we incrementally construct a statistical model from only a few simulation runs and obtain statistically sound estimates of the shape parameters for safety boundaries.
NASA Astrophysics Data System (ADS)
Schneider, F. D.; Leiterer, R.; Morsdorf, F.; Gastellu-Etchegorry, J.; Lauret, N.; Pfeifer, N.; Schaepman, M. E.
2013-12-01
Remote sensing offers unique potential to study forest ecosystems by providing spatially and temporally distributed information that can be linked with key biophysical and biochemical variables. The estimation of biochemical constituents of leaves from remotely sensed data is of high interest revealing insight on photosynthetic processes, plant health, plant functional types, and speciation. However, the scaling of observations at the canopy level to the leaf level or vice versa is not trivial due to the structural complexity of forests. Thus, a common solution for scaling spectral information is the use of physically-based radiative transfer models. The discrete anisotropic radiative transfer model (DART), being one of the most complete coupled canopy-atmosphere 3D radiative transfer models, was parameterized based on airborne and in-situ measurements. At-sensor radiances were simulated and compared with measurements from an airborne imaging spectrometer. The study was performed on the Laegern site, a temperate mixed forest characterized by steep slopes, a heterogeneous spectral background, and deciduous and coniferous trees at different development stages (dominated by beech trees; 47°28'42.0' N, 8°21'51.8' E, 682 m asl, Switzerland). It is one of the few studies conducted on an old-growth forest. Particularly the 3D modeling of the complex canopy architecture is crucial to model the interaction of photons with the vegetation canopy and its background. Thus, we developed two forest reconstruction approaches: 1) based on a voxel grid, and 2) based on individual tree detection. Both methods are transferable to various forest ecosystems and applicable at scales between plot and landscape. Our results show that the newly developed voxel grid approach is favorable over a parameterization based on individual trees. In comparison to the actual imaging spectrometer data, the simulated images exhibit very similar spatial patterns, whereas absolute radiance values are partially differing depending on the respective wavelength. We conclude that our proposed method provides a representation of the 3D radiative regime within old-growth forests that is suitable for simulating most spectral and spatial features of imaging spectrometer data. It indicates the potential of simulating future Earth observation missions, such as ESA's Sentinel-2. However, the high spectral variability of leaf optical properties among species has to be addressed in future radiative transfer modeling. The results further reveal that research emphasis has to be put on the accurate parameterization of small-scale structures, such as the clumping of needles into shoots or the distribution of leaf angles.
Meade Krosby; Ian Breckheimer; D. John Pierce; Peter H. Singleton; Sonia A. Hall; Karl C. Halupka; William L. Gaines; Robert A. Long; Brad H. McRae; Brian L. Cosentino; Joanne P. Schuett-Hames
2015-01-01
Context: The dual threats of habitat fragmentation and climate change have led to a proliferation of approaches for connectivity conservation planning. Corridor analyses have traditionally taken a focal species approach, but the landscape "naturalness" approach of modeling connectivity among areas of low human modification has gained popularity...
NASA Astrophysics Data System (ADS)
Johnson, E. S.; Rupper, S.; Steenburgh, W. J.; Strong, C.; Kochanski, A.
2017-12-01
Climate model outputs are often used as inputs to glacier energy and mass balance models, which are essential glaciological tools for testing glacier sensitivity, providing mass balance estimates in regions with little glaciological data, and providing a means to model future changes. Climate model outputs, however, are sensitive to the choice of physical parameterizations, such as those for cloud microphysics, land-surface schemes, surface layer options, etc. Furthermore, glacier mass balance (MB) estimates that use these climate model outputs as inputs are likely sensitive to the specific parameterization schemes, but this sensitivity has not been carefully assessed. Here we evaluate the sensitivity of glacier MB estimates across the Indus Basin to the selection of cloud microphysics parameterizations in the Weather Research and Forecasting Model (WRF). Cloud microphysics parameterizations differ in how they specify the size distributions of hydrometeors, the rate of graupel and snow production, their fall speed assumptions, the rates at which they convert from one hydrometeor type to the other, etc. While glacier MB estimates are likely sensitive to other parameterizations in WRF, our preliminary results suggest that glacier MB is highly sensitive to the timing, frequency, and amount of snowfall, which is influenced by the cloud microphysics parameterization. To this end, the Indus Basin is an ideal study site, as it has both westerly (winter) and monsoonal (summer) precipitation influences, is a data-sparse region (so models are critical), and still has lingering questions as to glacier importance for local and regional resources. WRF is run at a 4 km grid scale using two commonly used parameterizations: the Thompson scheme and the Goddard scheme. On average, these parameterizations result in minimal differences in annual precipitation. However, localized regions exhibit differences in precipitation of up to 3 m w.e. a-1. The different schemes also impact the radiative budgets over the glacierized areas. Our results show that glacier MB estimates can differ by up to 45% depending on the chosen cloud microphysics scheme. These findings highlight the need to better account for uncertainties in meteorological inputs into glacier energy and mass balance models.
Testing a common ice-ocean parameterization with laboratory experiments
NASA Astrophysics Data System (ADS)
McConnochie, C. D.; Kerr, R. C.
2017-07-01
Numerical models of ice-ocean interactions typically rely upon a parameterization for the transport of heat and salt to the ice face that has not been satisfactorily validated by observational or experimental data. We compare laboratory experiments of ice-saltwater interactions to a common numerical parameterization and find a significant disagreement in the dependence of the melt rate on the fluid velocity. We suggest a resolution to this disagreement based on a theoretical analysis of the boundary layer next to a vertical heated plate, which results in a threshold fluid velocity of approximately 4 cm/s at driving temperatures between 0.5 and 4°C, above which the form of the parameterization should be valid.
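The sketch below is a simplified version (not the authors' analysis) of the velocity-dependent bulk transfer form commonly used in ice-ocean models, in which the melt rate scales with the friction velocity and the thermal driving. The constants are nominal values chosen for illustration.

```python
import numpy as np

RHO_W, RHO_I = 1025.0, 917.0      # seawater / ice density (kg m^-3)
C_W, LATENT = 3974.0, 3.34e5      # seawater heat capacity (J kg^-1 K^-1), latent heat (J kg^-1)
CD, GAMMA_T = 2.5e-3, 1.1e-2      # drag coefficient, dimensionless thermal transfer coefficient

def melt_rate(u, thermal_driving):
    """Melt rate (m of ice per second) for free-stream speed u (m/s) and
    thermal driving T_water - T_freezing (K)."""
    u_star = np.sqrt(CD) * u                                        # friction velocity
    heat_flux = RHO_W * C_W * GAMMA_T * u_star * thermal_driving    # W m^-2 delivered to the ice face
    return heat_flux / (RHO_I * LATENT)

for u in [0.01, 0.04, 0.10]:      # below / near / above the ~4 cm/s threshold noted in the abstract
    m = melt_rate(u, thermal_driving=2.0)
    print(f"u = {u:.2f} m/s  ->  melt = {m * 86400 * 1e3:.2f} mm/day")
```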
Dmitriev, Egor V; Khomenko, Georges; Chami, Malik; Sokolov, Anton A; Churilova, Tatyana Y; Korotaev, Gennady K
2009-03-01
The absorption of sunlight by oceanic constituents significantly contributes to the spectral distribution of the water-leaving radiance. Here it is shown that current parameterizations of absorption coefficients do not apply to the optically complex waters of the Crimea Peninsula. Based on in situ measurements, parameterizations of phytoplankton, nonalgal, and total particulate absorption coefficients are proposed. Their performance is evaluated using a log-log regression combined with a low-pass filter and the nonlinear least-squares method. Statistical significance of the estimated parameters is verified using the bootstrap method. The parameterizations are relevant for chlorophyll a concentrations ranging from 0.45 up to 2 mg/m³.
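A minimal sketch of the log-log regression step referred to above: fitting a power law a_ph = A * Chl^B for the phytoplankton absorption coefficient at a single wavelength. The data values are placeholders, not the study's in situ measurements.

```python
import numpy as np

chl = np.array([0.45, 0.6, 0.8, 1.0, 1.3, 1.7, 2.0])                # chlorophyll a (mg/m^3)
a_ph = np.array([0.012, 0.015, 0.019, 0.023, 0.028, 0.034, 0.039])  # absorption coefficient (1/m)

# Linear fit in log-log space gives the exponent B and log(A).
B, logA = np.polyfit(np.log(chl), np.log(a_ph), 1)
A = np.exp(logA)
print(f"a_ph ~ {A:.4f} * Chl^{B:.3f}")
```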
A second-order Budyko-type parameterization of land surface hydrology
NASA Technical Reports Server (NTRS)
Andreou, S. A.; Eagleson, P. S.
1982-01-01
A simple, second-order parameterization of the water fluxes at a land surface was developed for use as the boundary condition in general circulation models of the global atmosphere. The derived parameterization incorporates the strong nonlinearities in the relationship between the near-surface soil moisture and the evaporation, runoff, and percolation fluxes. Based on the one-dimensional statistical-dynamical derivation of the annual water balance, it makes the transition to short-term prediction of the moisture fluxes through a Taylor expansion around the average annual soil moisture. The suggested parameterization is compared with other existing techniques and available measurements. A thermodynamic coupling is applied in order to obtain estimates of the surface ground temperature.
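A conceptual sketch of the second-order step described above: expand a strongly nonlinear flux around the mean annual soil moisture and compare with the exact curve. The power-law evaporation form and the numbers are illustrative assumptions, not the paper's formulation.

```python
def evap_flux(s, e_p=1200.0, n=3.0):
    """Assumed annual evaporation (mm/yr) as a nonlinear function of relative soil moisture s."""
    return e_p * s**n

s0 = 0.5                        # mean annual relative soil moisture
h = 1e-4
f0 = evap_flux(s0)
f1 = (evap_flux(s0 + h) - evap_flux(s0 - h)) / (2 * h)          # first derivative at s0
f2 = (evap_flux(s0 + h) - 2 * f0 + evap_flux(s0 - h)) / h**2    # second derivative at s0

def evap_taylor(s):
    """Second-order Taylor approximation of the flux about s0."""
    ds = s - s0
    return f0 + f1 * ds + 0.5 * f2 * ds**2

for s in [0.3, 0.5, 0.7]:
    print(s, round(evap_flux(s), 1), round(evap_taylor(s), 1))
```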
Improvement of fog predictability in a coupled system of PAFOG and WRF
NASA Astrophysics Data System (ADS)
Kim, Wonheung; Yum, Seong Soo; Kim, Chang Ki
2017-04-01
Fog is difficult to predict because of the multi-scale nature of its formation mechanism: not only the synoptic conditions but also the local meteorological conditions crucially influence fog formation. Coarse vertical resolution and parameterization errors in fog prediction models are also critical reasons for low predictability. In this study, we use a coupled model system of a 3D mesoscale model (WRF) and a single column model with a fine vertical resolution (PAFOG, PArameterized FOG) to simulate fogs formed over the southern coastal region of the Korean Peninsula, where National Center for Intensive Observation of Severe Weather (NCIO) is located. NCIO is unique in that it has a 300 m meteorological tower built at the location to measure basic meteorological variables (temperature, dew point temperature and winds) at eleven different altitudes, and comprehensive atmospheric physics measurements are made with the various remote sensing instruments such as visibility meter, cloud radar, wind profiler, microwave radiometer, and ceilometer. These measurement data are used as input data to the model system and for evaluating the results. Particularly the data for initial and external forcings, which are tightly connected to the predictability of coupled model system, are derived from the tower measurement. This study aims at finding out the most important factors that influence fog predictability of the coupled system for NCIO. Nudging of meteorological tower data and soil moisture variability are found to be critically influencing fog predictability. Detailed results will be discussed at the conference.
McClure, James E.; Berrill, Mark A.; Gray, William G.; ...
2016-09-02
Here, multiphase flow in porous medium systems is typically modeled using continuum mechanical representations at the macroscale in terms of averaged quantities. These models require closure relations to produce solvable forms. One of these required closure relations is an expression relating fluid pressures, fluid saturations, and, in some cases, the interfacial area between the fluid phases, and the Euler characteristic. An unresolved question is whether the inclusion of these additional morphological and topological measures can lead to a non-hysteretic closure relation compared to the hysteretic forms that are used in traditional models, which typically do not include interfacial areas or the Euler characteristic. We develop a lattice-Boltzmann (LB) simulation approach to investigate the equilibrium states of a two-fluid-phase porous medium system, which include disconnected non-wetting phase features. The proposed approach is applied to a synthetic medium consisting of 1,964 spheres arranged in a random, non-overlapping, close-packed manner, yielding a total of 42,908 different equilibrium points. This information is evaluated using a generalized additive modeling approach to determine if a unique function from this family exists, which can explain the data. The variance of various model estimates is computed, and we conclude that, except for the limiting behavior close to a single fluid regime, capillary pressure can be expressed as a deterministic and non-hysteretic function of fluid saturation, interfacial area between the fluid phases, and the Euler characteristic. This work is unique in the methods employed, the size of the data set, the resolution in space and time, the true equilibrium nature of the data, the parameterizations investigated, and the broad set of functions examined. The conclusion of essentially non-hysteretic behavior provides support for an evolving class of two-fluid-phase flow models for porous medium systems.
NASA Astrophysics Data System (ADS)
Shuler, C. K.; El-Kadi, A. I.; Dulai, H.; Glenn, C. R.; Mariner, M. K. E.; DeWees, R.; Schmaedick, M.; Gurr, I.; Comeros, M.; Bodell, T.
2017-12-01
In small-island developing communities, effective communication and collaboration with local stakeholders is imperative for successful implementation of hydrologic or other socially pertinent research. American Samoa's isolated location highlights the need for water resource sustainability, and effective scientific research is a key component to addressing critical challenges in water storage and management. Currently, aquifer degradation from salt-water-intrusion or surface-water contaminated groundwater adversely affects much of the islands' municipal water supply, necessitating an almost decade long Boil-Water-Advisory. This presentation will share the approach our research group, based at the University of Hawaii Water Resources Research Center, has taken for successfully implementing a collaboration-focused water research program in American Samoa. Instead of viewing research as a one-sided activity, our program seeks opportunities to build local capacity, develop relationships with key on-island stakeholders, and involve local community through forward-looking projects. This presentation will highlight three applications of collaborative research with water policy and management, water supply and sustainability, and science education stakeholders. Projects include: 1) working with the island's water utility to establish a long-term hydrological monitoring network, motivated by a need for data to parameterize numerical groundwater models, 2) collaboration with the American Samoa Environmental Protection Agency to better understand groundwater discharge and watershed scale land-use impacts for management of nearshore coral reef ecosystems, and 3) participation of local community college and high school students as research interns to increase involvement in, and exposure to socially pertinent water focused research. Through these innovative collaborative approaches we have utilized resources more effectively, and focused research efforts on more pertinent locally-driven research questions. Additionally, this approach has enhanced our ability to provide technical support and knowledge transfer for on-island scientific needs, and helped overcome data availability barriers faced by water managers, planners, and future investigators.
Ramalho, Cristina E; Ottewell, Kym M; Chambers, Brian K; Yates, Colin J; Wilson, Barbara A; Bencini, Roberta; Barrett, Geoff
2018-01-01
The rapid and large-scale urbanization of peri-urban areas poses major and complex challenges for wildlife conservation. We used population viability analysis (PVA) to evaluate the influence of urban encroachment, fire, and fauna crossing structures, with and without accounting for inbreeding effects, on the metapopulation viability of a medium-sized ground-dwelling mammal, the southern brown bandicoot (Isoodon obesulus), in the rapidly expanding city of Perth, Australia. We surveyed two metapopulations over one and a half years, and parameterized the PVA models using largely field-collected data. The models revealed that spatial isolation imposed by housing and road encroachment has major impacts on I. obesulus. Although the species is known to persist in small metapopulations at moderate levels of habitat fragmentation, the models indicate that these populations become highly vulnerable to demographic decline, genetic deterioration, and local extinction under increasing habitat connectivity loss. Isolated metapopulations were also predicted to be highly sensitive to fire, with large-scale fires having greater negative impacts on population abundance than small-scale ones. To reduce the risk of decline and local extirpation of I. obesulus and other small- to medium-sized ground-dwelling mammals in urbanizing, fire prone landscapes, we recommend that remnant vegetation and vegetated, structurally-complex corridors between habitat patches be retained. Well-designed road underpasses can be effective to connect habitat patches and reduce the probability of inbreeding and genetic differentiation; however, adjustment of fire management practices to limit the size of unplanned fires and ensure the retention of long unburnt vegetation will also be required to ensure persistence. Our study supports the evidence that in rapidly urbanizing landscapes, a pro-active conservation approach is required that manages species at the metapopulation level and that prioritizes metapopulations and habitat with greater long-term probability of persistence and conservation capacity, respectively. This strategy may help us prevent future declines and local extirpations, and currently relatively common species from becoming rare.
NASA Astrophysics Data System (ADS)
Gregory, A. E.; Benedict, K. K.; Zhang, S.; Savickas, J.
2017-12-01
Large-scale, high-severity wildfires in forests have become increasingly prevalent in the western United States due to fire exclusion. Although past work has focused on the immediate consequences of wildfire (i.e., runoff magnitude and debris flow), little has been done to understand the post-wildfire hydrologic consequences of vegetation regrowth. Furthermore, vegetation is often characterized by static parameterizations within hydrological models. In order to understand the temporal relationship between hydrologic processes and revegetation, we modularized and partially automated the hydrologic modeling process to increase connectivity between remotely sensed data, the Virtual Watershed Platform (a data management resource, called the VWP), input meteorological data, and the Precipitation-Runoff Modeling System (PRMS). This process was used to run PRMS simulations in the Valles Caldera of NM, an area impacted by the 2011 Las Conchas Fire, before and after the fire to evaluate hydrologic process changes. The modeling environment addressed some of the existing challenges faced by hydrological modelers. At present, modelers are somewhat limited in their ability to push the boundaries of hydrologic understanding. Specific issues faced by modelers include limited computational resources to model processes at large spatial and temporal scales, data storage capacity and accessibility from the modeling platform, computational and time constraints for experimental modeling, and the skills to integrate modeling software in ways that have not been explored. By taking an interdisciplinary approach, we were able to address some of these challenges by leveraging the skills of hydrologic, data, and computer scientists; and the technical capabilities provided by a combination of on-demand/high-performance computing, distributed data, and cloud services. The hydrologic modeling process was modularized to include options for distributing meteorological data, parameter space experimentation, data format transformation, looping, validation of models and containerization for enabling new analytic scenarios. The user interacts with the modules through Jupyter Notebooks which can be connected to an on-demand computing and HPC environment, and data services built as part of the VWP.
NASA Technical Reports Server (NTRS)
Pulliam, T. H.; Nemec, M.; Holst, T.; Zingg, D. W.; Kwak, Dochan (Technical Monitor)
2002-01-01
A comparison between an Evolutionary Algorithm (EA) and an Adjoint-Gradient (AG) Method applied to a two-dimensional Navier-Stokes code for airfoil design is presented. Both approaches use a common function evaluation code, the steady-state explicit part of the code ARC2D. The parameterization of the design space is a common B-spline approach for an airfoil surface, which, together with a common gridding approach, restricts the AG and EA to the same design space. Results are presented for a class of viscous transonic airfoils in which the optimization tradeoff between drag minimization as one objective and lift maximization as another produces the multi-objective design space. Comparisons are made for efficiency, accuracy, and design consistency.
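A small sketch of a B-spline surface parameterization for an airfoil, in the spirit of the common approach mentioned above: the control-point ordinates act as design variables and the surface is evaluated from the spline. The degree, number of control points, and coordinates are illustrative choices, not those of the paper.

```python
import numpy as np
from scipy.interpolate import BSpline

degree = 3
# Chordwise control-point abscissae and (hypothetical) upper-surface ordinates.
x_cp = np.array([0.0, 0.0, 0.05, 0.2, 0.45, 0.7, 1.0])
y_cp = np.array([0.0, 0.03, 0.07, 0.09, 0.07, 0.04, 0.0])   # design variables

# Clamped knot vector for the given degree and number of control points.
n_cp = len(x_cp)
knots = np.concatenate([np.zeros(degree + 1),
                        np.linspace(0, 1, n_cp - degree + 1)[1:-1],
                        np.ones(degree + 1)])

spline_x = BSpline(knots, x_cp, degree)
spline_y = BSpline(knots, y_cp, degree)

t = np.linspace(0, 1, 101)
surface = np.column_stack([spline_x(t), spline_y(t)])   # upper-surface coordinates
print(surface[:3])

# Perturbing one control point (one design variable) deforms the surface smoothly,
# which is what keeps the EA and AG methods in the same, well-behaved design space.
y_cp_perturbed = y_cp.copy()
y_cp_perturbed[3] += 0.01
```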
Electron Impact Ionization: A New Parameterization for 100 eV to 1 MeV Electrons
NASA Technical Reports Server (NTRS)
Fang, Xiaohua; Randall, Cora E.; Lummerzheim, Dirk; Solomon, Stanley C.; Mills, Michael J.; Marsh, Daniel; Jackman, Charles H.; Wang, Wenbin; Lu, Gang
2008-01-01
Low, medium and high energy electrons can penetrate to the thermosphere (90-400 km; 55-240 miles) and mesosphere (50-90 km; 30-55 miles). These precipitating electrons ionize that region of the atmosphere, creating positively charged atoms and molecules and knocking off other negatively charged electrons. The precipitating electrons also create nitrogen-containing compounds along with other constituents. Since the electron precipitation amounts change within minutes, it is necessary to have a rapid method of computing the ionization and production of nitrogen-containing compounds for inclusion in computationally-demanding global models. A new methodology has been developed, which has parameterized a more detailed model computation of the ionizing impact of precipitating electrons over the very large range of 100 eV up to 1,000,000 eV. This new parameterization method is more accurate than a previous parameterization scheme, when compared with the more detailed model computation. Global models at the National Center for Atmospheric Research will use this new parameterization method in the near future.
NASA Technical Reports Server (NTRS)
Chou, Ming-Dah; Lee, Kyu-Tae; Yang, Ping; Lau, William K. M. (Technical Monitor)
2002-01-01
Based on the single-scattering optical properties pre-computed with an improved geometric optics method, the bulk absorption coefficient, single-scattering albedo, and asymmetry factor of ice particles have been parameterized as functions of the effective particle size of a mixture of ice habits, the ice water amount, and the spectral band. The parameterization has been applied to computing fluxes for sample clouds with various particle size distributions and assumed mixtures of particle habits. It is found that flux calculations are not overly sensitive to the assumed particle habits if the definition of the effective particle size is consistent with the particle habits on which the parameterization is based. Otherwise, the error in the flux calculations could reach a magnitude unacceptable for climate studies. Unlike many previous studies, the parameterization requires only an effective particle size representing all ice habits in a cloud layer, not the effective size of individual ice habits.
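The sketch below illustrates the general functional form only: bulk optical properties expressed as simple functions of ice water content and a single effective size for the habit mixture. The coefficients are placeholders, not the paper's fitted values, and the units depend on the coefficient choice.

```python
def ice_optics(iwc_gm3, de_um, band_coeffs):
    """Return (extinction, single-scattering albedo, asymmetry factor) for one spectral band."""
    a0, a1, b0, b1, c0, c1 = band_coeffs
    beta = iwc_gm3 * (a0 + a1 / de_um)      # extinction scales with IWC and a function of 1/De
    omega = 1.0 - (b0 + b1 * de_um)         # co-albedo grows with particle size
    g = c0 + c1 * de_um                     # forward scattering grows with size
    return beta, omega, g

# Hypothetical coefficients for one band; D_e in micrometers, IWC in g/m^3.
coeffs = (0.003, 2.5, 0.02, 1.5e-4, 0.74, 8.0e-4)
print(ice_optics(iwc_gm3=0.02, de_um=60.0, band_coeffs=coeffs))
```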
Anisotropic shear dispersion parameterization for ocean eddy transport
NASA Astrophysics Data System (ADS)
Reckinger, Scott; Fox-Kemper, Baylor
2015-11-01
The effects of mesoscale eddies are universally treated isotropically in global ocean general circulation models. However, observations and simulations demonstrate that the mesoscale processes that the parameterization is intended to represent, such as shear dispersion, are typified by strong anisotropy. We extend the Gent-McWilliams/Redi mesoscale eddy parameterization to include anisotropy and test the effects of varying levels of anisotropy in 1-degree Community Earth System Model (CESM) simulations. Anisotropy has many effects on the simulated climate, including a reduction of temperature and salinity biases, a deepening of the southern ocean mixed-layer depth, impacts on the meridional overturning circulation and ocean energy and tracer uptake, and improved ventilation of biogeochemical tracers, particularly in oxygen minimum zones. A process-based parameterization to approximate the effects of unresolved shear dispersion is also used to set the strength and direction of anisotropy. The shear dispersion parameterization is similar to drifter observations in spatial distribution of diffusivity and high-resolution model diagnosis in the distribution of eddy flux orientation.
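A minimal sketch of the idea of replacing a scalar (isotropic) eddy diffusivity with an anisotropic tensor whose major axis is aligned with the local mean flow, mimicking shear dispersion. The scaling of the major-axis diffusivity with flow speed is an illustrative assumption, not the CESM implementation.

```python
import numpy as np

def anisotropic_kappa(u, v, kappa_minor=500.0, alpha=2.0e5):
    """Return a 2x2 horizontal diffusivity tensor (m^2/s).

    u, v        : depth-mean horizontal velocity components (m/s)
    kappa_minor : cross-flow (minor-axis) diffusivity
    alpha       : factor converting flow speed into along-flow enhancement
    """
    speed = np.hypot(u, v)
    if speed < 1e-12:
        return kappa_minor * np.eye(2)        # no preferred direction: fall back to isotropy
    e_major = np.array([u, v]) / speed        # unit vector along the flow
    kappa_major = kappa_minor + alpha * speed
    # K = kappa_minor * I + (kappa_major - kappa_minor) * e e^T
    return kappa_minor * np.eye(2) + (kappa_major - kappa_minor) * np.outer(e_major, e_major)

print(anisotropic_kappa(0.1, 0.05))
```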
A unified spectral parameterization for wave breaking: From the deep ocean to the surf zone
NASA Astrophysics Data System (ADS)
Filipot, J.-F.; Ardhuin, F.
2012-11-01
A new wave-breaking dissipation parameterization designed for phase-averaged spectral wave models is presented. It combines wave breaking basic physical quantities, namely, the breaking probability and the dissipation rate per unit area. The energy lost by waves is first explicitly calculated in physical space before being distributed over the relevant spectral components. The transition from deep to shallow water is made possible by using a dissipation rate per unit area of breaking waves that varies with the wave height, wavelength and water depth. This parameterization is implemented in the WAVEWATCH III modeling framework, which is applied to a wide range of conditions and scales, from the global ocean to the beach scale. Wave height, peak and mean periods, and spectral data are validated using in situ and remote sensing data. Model errors are comparable to those of other specialized deep or shallow water parameterizations. This work shows that it is possible to have a seamless parameterization from the deep ocean to the surf zone.
Parameterized spectral distributions for meson production in proton-proton collisions
NASA Technical Reports Server (NTRS)
Schneider, John P.; Norbury, John W.; Cucinotta, Francis A.
1995-01-01
Accurate semiempirical parameterizations of the energy-differential cross sections for charged pion and kaon production from proton-proton collisions are presented at energies relevant to cosmic rays. The parameterizations, which depend on both the outgoing meson parallel momentum and the incident proton kinetic energy, are able to be reduced to very simple analytical formulas suitable for cosmic ray transport through spacecraft walls, interstellar space, the atmosphere, and meteorites.
Implementation of a Parameterization Framework for Cybersecurity Laboratories
2017-03-01
designer of laboratory exercises with tools to parameterize labs for each student, and automate some aspects of the grading of laboratory exercises. A...is to provide the designer of laboratory exercises with tools to parameterize labs for each student, and automate some aspects of the grading of...support might assist the designer of laboratory exercises to achieve the following? 1. Verify that students performed lab exercises, with some
Govindan, R B; Kota, Srinivas; Al-Shargabi, Tareq; Massaro, An N; Chang, Taeun; du Plessis, Adre
2016-09-01
Electroencephalogram (EEG) signals are often contaminated by the electrocardiogram (ECG) interference, which affects quantitative characterization of EEG. We propose null-coherence, a frequency-based approach, to attenuate the ECG interference in EEG using simultaneously recorded ECG as a reference signal. After validating the proposed approach using numerically simulated data, we apply this approach to EEG recorded from six newborns receiving therapeutic hypothermia for neonatal encephalopathy. We compare our approach with an independent component analysis (ICA), a previously proposed approach to attenuate ECG artifacts in the EEG signal. The power spectrum and the cortico-cortical connectivity of the ECG attenuated EEG was compared against the power spectrum and the cortico-cortical connectivity of the raw EEG. The null-coherence approach attenuated the ECG contamination without leaving any residual of the ECG in the EEG. We show that the null-coherence approach performs better than ICA in attenuating the ECG contamination without enhancing cortico-cortical connectivity. Our analysis suggests that using ICA to remove ECG contamination from the EEG suffers from redistribution problems, whereas the null-coherence approach does not. We show that both the null-coherence and ICA approaches attenuate the ECG contamination. However, the EEG obtained after ICA cleaning displayed higher cortico-cortical connectivity compared with that obtained using the null-coherence approach. This suggests that null-coherence is superior to ICA in attenuating the ECG interference in EEG for cortico-cortical connectivity analysis. Copyright © 2016 Elsevier B.V. All rights reserved.
Relativistic three-dimensional Lippmann-Schwinger cross sections for space radiation applications
NASA Astrophysics Data System (ADS)
Werneth, C. M.; Xu, X.; Norman, R. B.; Maung, K. M.
2017-12-01
Radiation transport codes require accurate nuclear cross sections to compute particle fluences inside shielding materials. The Tripathi semi-empirical reaction cross section, which includes over 60 parameters tuned to nucleon-nucleus (NA) and nucleus-nucleus (AA) data, has been used in many of the world's best-known transport codes. Although this parameterization fits well to reaction cross section data, the predictive capability of any parameterization is questionable when it is used beyond the range of the data to which it was tuned. Using uncertainty analysis, it is shown that a relativistic three-dimensional Lippmann-Schwinger (LS3D) equation model based on Multiple Scattering Theory (MST), which uses 5 parameterizations (3 fundamental parameterizations to nucleon-nucleon (NN) data and 2 nuclear charge density parameterizations), predicts NA and AA reaction cross sections as well as the Tripathi cross section parameterization for reactions in which the kinetic energy of the projectile in the laboratory frame (TLab) is greater than 220 MeV/n. The relativistic LS3D model has the additional advantage of being able to predict highly accurate total and elastic cross sections. Consequently, it is recommended that the relativistic LS3D model be used for space radiation applications in which TLab > 220 MeV/n.
NASA Technical Reports Server (NTRS)
Betancourt, R. Morales; Lee, D.; Oreopoulos, L.; Sud, Y. C.; Barahona, D.; Nenes, A.
2012-01-01
The salient features of mixed-phase and ice clouds in a GCM cloud scheme are examined using the ice formation parameterizations of Liu and Penner (LP) and Barahona and Nenes (BN). The performance of the LP and BN ice nucleation parameterizations was assessed in the GEOS-5 AGCM using the McRAS-AC cloud microphysics framework in single-column mode. Four-dimensional assimilated data from the intensive observation period of the ARM TWP-ICE campaign were used to drive the fluxes and lateral forcing. Simulation experiments were established to test the impact of each parameterization on the resulting cloud fields. Three commonly used IN spectra were utilized in the BN parameterization to describe the availability of IN for heterogeneous ice nucleation. The results show large similarities in the cirrus cloud regime between all the schemes tested, in which ice crystal concentrations were within a factor of 10 regardless of the parameterization used. In mixed-phase clouds there are some persistent differences in cloud particle number concentration and size, as well as in cloud fraction, ice water mixing ratio, and ice water path. Contact freezing in the simulated mixed-phase clouds contributed to transferring liquid to ice efficiently, so that on average the clouds were fully glaciated at T approximately 260 K, irrespective of the ice nucleation parameterization used. Comparison of simulated ice water path to available satellite-derived observations was also performed, finding that all the schemes tested with the BN parameterization predicted average values of IWP within plus or minus 15% of the observations.
NASA Astrophysics Data System (ADS)
Lim, Kyo-Sun Sunny; Lim, Jong-Myoung; Shin, Hyeyum Hailey; Hong, Jinkyu; Ji, Young-Yong; Lee, Wanno
2018-06-01
A substantial over-prediction bias at low-to-moderate wind speeds in the Weather Research and Forecasting (WRF) model has been reported in the previous studies. Low-level wind fields play an important role in dispersion of air pollutants, including radionuclides, in a high-resolution WRF framework. By implementing two subgrid-scale orography parameterizations (Jimenez and Dudhia in J Appl Meteorol Climatol 51:300-316, 2012; Mass and Ovens in WRF model physics: problems, solutions and a new paradigm for progress. Preprints, 2010 WRF Users' Workshop, NCAR, Boulder, Colo. http://www.mmm.ucar.edu/wrf/users/workshops/WS2010/presentations/session%204/4-1_WRFworkshop2010Final.pdf, 2010), we tried to compare the performance of parameterizations and to enhance the forecast skill of low-level wind fields over the central western part of South Korea. Even though both subgrid-scale orography parameterizations significantly alleviated the positive bias at 10-m wind speed, the parameterization by Jimenez and Dudhia revealed a better forecast skill in wind speed under our modeling configuration. Implementation of the subgrid-scale orography parameterizations in the model did not affect the forecast skills in other meteorological fields including 10-m wind direction. Our study also brought up the problem of discrepancy in the definition of "10-m" wind between model physics parameterizations and observations, which can cause overestimated winds in model simulations. The overestimation was larger in stable conditions than in unstable conditions, indicating that the weak diurnal cycle in the model could be attributed to the representation error.
NASA Astrophysics Data System (ADS)
Pasquet, Simon; Bouruet-Aubertot, Pascale; Reverdin, Gilles; Turnherr, Andreas; Laurent, Lou St.
2016-06-01
The relevance of finescale parameterizations of dissipation rate of turbulent kinetic energy is addressed using finescale and microstructure measurements collected in the Lucky Strike segment of the Mid-Atlantic Ridge (MAR). There, high amplitude internal tides and a strongly sheared mean flow sustain a high level of dissipation rate and turbulent mixing. Two sets of parameterizations are considered: the first ones (Gregg, 1989; Kunze et al., 2006) were derived to estimate dissipation rate of turbulent kinetic energy induced by internal wave breaking, while the second one aimed to estimate dissipation induced by shear instability of a strongly sheared mean flow and is a function of the Richardson number (Kunze et al., 1990; Polzin, 1996). The latter parameterization has low skill in reproducing the observed dissipation rate when shear unstable events are resolved presumably because there is no scale separation between the duration of unstable events and the inverse growth rate of unstable billows. Instead GM based parameterizations were found to be relevant although slight biases were observed. Part of these biases result from the small value of the upper vertical wavenumber integration limit in the computation of shear variance in Kunze et al. (2006) parameterization that does not take into account internal wave signal of high vertical wavenumbers. We showed that significant improvement is obtained when the upper integration limit is set using a signal to noise ratio criterion and that the spatial structure of dissipation rates is reproduced with this parameterization.
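A minimal sketch of the GM-based finescale scaling referred to above: dissipation estimated from the ratio of observed finescale shear variance to the Garrett-Munk (GM) level, scaled by stratification. The reference constant is a nominal value, and the shear/strain-ratio and latitude correction terms used in the full parameterizations are omitted here for brevity.

```python
EPS0 = 7.0e-10       # nominal reference dissipation (W/kg) at the GM level and N = N0 (assumed value)
N0 = 5.24e-3         # GM reference buoyancy frequency (rad/s, ~3 cph)

def finescale_epsilon(shear_variance, gm_shear_variance, n_buoyancy):
    """Dissipation rate (W/kg) from the buoyancy-frequency-scaled shear variance ratio."""
    return EPS0 * (n_buoyancy**2 / N0**2) * (shear_variance / gm_shear_variance)**2

# Example: shear variance twice the GM level in stratification N = 2e-3 rad/s.
print(finescale_epsilon(shear_variance=2.0, gm_shear_variance=1.0, n_buoyancy=2.0e-3))
```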
NASA Technical Reports Server (NTRS)
Iguchi, Takamichi; Tao, Wei-Kuo; Wu, Di; Peters-Lidard, Christa; Santanello, Joseph A.; Kemp, Eric; Tian, Yudong; Case, Jonathan; Wang, Weile; Ferraro, Robert;
2017-01-01
This study investigates the sensitivity of daily rainfall rates in regional seasonal simulations over the contiguous United States (CONUS) to different cumulus parameterization schemes. Daily rainfall fields were simulated at 24-km resolution using the NASA-Unified Weather Research and Forecasting (NU-WRF) Model for June-August 2000. Four cumulus parameterization schemes and two options for shallow cumulus components in a specific scheme were tested. The spread in the domain-mean rainfall rates across the parameterization schemes was generally consistent between the entire CONUS and most subregions. The selection of the shallow cumulus component in a specific scheme had more impact than that of the four cumulus parameterization schemes. Regional variability in the performance of each scheme was assessed by calculating optimally weighted ensembles that minimize full root-mean-square errors against reference datasets. The spatial pattern of the seasonally averaged rainfall was insensitive to the selection of cumulus parameterization over mountainous regions because of the topographical pattern constraint, so that the simulation errors were mostly attributed to the overall bias there. In contrast, the spatial patterns over the Great Plains regions as well as the temporal variation over most parts of the CONUS were relatively sensitive to cumulus parameterization selection. Overall, adopting a single simulation result was preferable to generating a better ensemble for the seasonally averaged daily rainfall simulation, as long as their overall biases had the same positive or negative sign. However, an ensemble of multiple simulation results was more effective in reducing errors in the case of also considering temporal variation.
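A sketch of computing optimally weighted ensemble coefficients that minimize the root-mean-square error of a weighted combination of member simulations against a reference field. The data here are synthetic, and any weighting constraints used in the study are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)
n_points, n_members = 500, 4
reference = rng.normal(3.0, 1.0, n_points)                                    # reference daily rainfall
members = reference[:, None] + rng.normal(0.5, 1.0, (n_points, n_members))    # biased ensemble members

# Least-squares weights minimizing || members @ w - reference ||_2, i.e., the RMSE.
weights, *_ = np.linalg.lstsq(members, reference, rcond=None)
rmse = np.sqrt(np.mean((members @ weights - reference) ** 2))
print("weights:", np.round(weights, 3), " RMSE:", round(rmse, 3))

# For comparison: the best single member.
single_rmse = np.sqrt(np.mean((members - reference[:, None]) ** 2, axis=0))
print("best single-member RMSE:", round(single_rmse.min(), 3))
```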
Spiral swimming of an artificial micro-swimmer
NASA Astrophysics Data System (ADS)
Keaveny, Eric E.; Maxey, Martin R.
A device constructed from a filament of paramagnetic beads connected to a human red blood cell will swim when subject to an oscillating magnetic field. Bending waves propagate from the tip of the tail toward the red blood cell in a fashion analogous to flagellum beating, making the artificial swimmer a candidate for studying what has been referred to as micro-swimming. In this study, we demonstrate that under the influence of a rotating field the artificial swimmer will perform spiral-type swimming. We conduct numerical simulations of the swimmer in which the paramagnetic tail is represented as a series of rigid spheres connected by flexible but inextensible links. An optimal range of parameters governing the relative strength of viscous, elastic, and magnetic forces is identified for swimming speed. A parameterization of the motion is extracted and examined as a function of the driving frequency. With a continuous elastica/resistive force model, we obtain an expression for the swimming speed in the low-frequency limit. Using this expression we explore further the effects of the applied field, the ratio of the transverse field to the constant field, and the ratio of the radius of the sphere to the length of the filament tail on the resulting dynamics.
Ma, H. -Y.; Chuang, C. C.; Klein, S. A.; ...
2015-11-06
Here, we present an improved procedure of generating initial conditions (ICs) for climate model hindcast experiments with specified sea surface temperature and sea ice. The motivation is to minimize errors in the ICs and lead to a better evaluation of atmospheric parameterizations' performance in the hindcast mode. We apply state variables (horizontal velocities, temperature and specific humidity) from the operational analysis/reanalysis for the atmospheric initial states. Without a data assimilation system, we apply a two-step process to obtain other necessary variables to initialize both the atmospheric (e.g., aerosols and clouds) and land models (e.g., soil moisture). First, we nudge only the model horizontal velocities towards operational analysis/reanalysis values, given a 6-hour relaxation time scale, to obtain all necessary variables. Compared to the original strategy in which horizontal velocities, temperature and specific humidity are nudged, the revised approach produces a better representation of initial aerosols and cloud fields which are more consistent and closer to observations and model's preferred climatology. Second, we obtain land ICs from an offline land model simulation forced with observed precipitation, winds, and surface fluxes. This approach produces more realistic soil moisture in the land ICs. With this refined procedure, the simulated precipitation, clouds, radiation, and surface air temperature over land are improved in the Day 2 mean hindcasts. Following this procedure, we propose a “Core” integration suite which provides an easily repeatable test allowing model developers to rapidly assess the impacts of various parameterization changes on the fidelity of modelled cloud-associated processes relative to observations.
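A minimal sketch of the first step described above: relax (nudge) only the model's horizontal winds toward analysis values with a 6-hour time scale, leaving temperature and humidity free. The time stepping below is schematic, not the model's actual integration.

```python
import numpy as np

TAU = 6.0 * 3600.0     # relaxation time scale (s)
DT = 1800.0            # model time step (s)

def nudge_winds(u_model, v_model, u_analysis, v_analysis):
    """Apply one time step of Newtonian relaxation to the horizontal winds only."""
    u_new = u_model + DT / TAU * (u_analysis - u_model)
    v_new = v_model + DT / TAU * (v_analysis - v_model)
    return u_new, v_new

u, v = np.array([12.0, -3.0]), np.array([1.5, 0.5])       # model winds at two points
ua, va = np.array([10.0, -2.0]), np.array([2.0, 0.0])     # analysis winds
print(nudge_winds(u, v, ua, va))
```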
NASA Astrophysics Data System (ADS)
Ma, H.-Y.; Chuang, C. C.; Klein, S. A.; Lo, M.-H.; Zhang, Y.; Xie, S.; Zheng, X.; Ma, P.-L.; Zhang, Y.; Phillips, T. J.
2015-12-01
We present an improved procedure of generating initial conditions (ICs) for climate model hindcast experiments with specified sea surface temperature and sea ice. The motivation is to minimize errors in the ICs and lead to a better evaluation of atmospheric parameterizations' performance in the hindcast mode. We apply state variables (horizontal velocities, temperature, and specific humidity) from the operational analysis/reanalysis for the atmospheric initial states. Without a data assimilation system, we apply a two-step process to obtain other necessary variables to initialize both the atmospheric (e.g., aerosols and clouds) and land models (e.g., soil moisture). First, we nudge only the model horizontal velocities toward operational analysis/reanalysis values, given a 6 h relaxation time scale, to obtain all necessary variables. Compared to the original strategy in which horizontal velocities, temperature, and specific humidity are nudged, the revised approach produces a better representation of initial aerosols and cloud fields which are more consistent and closer to observations and model's preferred climatology. Second, we obtain land ICs from an off-line land model simulation forced with observed precipitation, winds, and surface fluxes. This approach produces more realistic soil moisture in the land ICs. With this refined procedure, the simulated precipitation, clouds, radiation, and surface air temperature over land are improved in the Day 2 mean hindcasts. Following this procedure, we propose a "Core" integration suite which provides an easily repeatable test allowing model developers to rapidly assess the impacts of various parameterization changes on the fidelity of modeled cloud-associated processes relative to observations.
Oppugning the assumptions of spatial averaging of segment and joint orientations.
Pierrynowski, Michael Raymond; Ball, Kevin Arthur
2009-02-09
Movement scientists frequently calculate "arithmetic averages" when examining body segment or joint orientations. Such calculations appear routinely, yet are fundamentally flawed. Three-dimensional orientation data are computed as matrices, yet three ordered Euler/Cardan/Bryant angle parameters are frequently used for interpretation. These parameters are not geometrically independent; thus, the conventional process of averaging each parameter is incorrect. The process of arithmetic averaging also assumes that the distances between data are linear (Euclidean); however, for orientation data these distances are geodesically curved (Riemannian). Therefore we question (oppugn) whether the conventional averaging approach is an appropriate statistic. Fortunately, exact methods of averaging orientation data have been developed which both circumvent the parameterization issue and explicitly acknowledge the Euclidean or Riemannian distance measures. The details of these matrix-based averaging methods are presented and their theoretical advantages discussed. The Euclidean and Riemannian approaches offer appealing advantages over the conventional technique. With respect to practical biomechanical relevancy, examinations of simulated data suggest that for sets of orientation data with low dispersion, an isotropic distribution, and second and third angle parameters below 30 degrees, discrepancies with the conventional approach are less than 1.1 degrees. However, beyond these limits, arithmetic averaging can have substantive non-linear inaccuracies in all three parameterized angles. The biomechanics community is encouraged to recognize that limitations exist with the use of the conventional method of averaging orientations. Investigations requiring more robust spatial averaging over a broader range of orientations may benefit from the use of matrix-based Euclidean or Riemannian calculations.
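A minimal illustration of the matrix-based (Euclidean, or chordal) averaging the authors advocate is sketched below, contrasted with naive arithmetic averaging of Euler angles. The SVD projection onto SO(3) is a standard way to compute the chordal mean; the example angles are arbitrary, and the Riemannian (geodesic) mean, which iterates in the tangent space of the rotation group, is not shown.

```python
import numpy as np
from scipy.spatial.transform import Rotation

def chordal_mean(rot_mats):
    """Euclidean (chordal) mean of rotation matrices.

    Averages the matrices element-wise and projects the result back onto
    SO(3) via SVD (the closest rotation in the Frobenius norm).
    """
    M = np.mean(rot_mats, axis=0)
    U, _, Vt = np.linalg.svd(M)
    R = U @ Vt
    if np.linalg.det(R) < 0:            # enforce a proper rotation
        U[:, -1] *= -1
        R = U @ Vt
    return R

# Compare with the flawed practice of averaging Euler angles directly
euler_deg = np.array([[10, 25, 40], [12, 28, 35], [8, 22, 45]], float)
mats = Rotation.from_euler("xyz", euler_deg, degrees=True).as_matrix()
mean_rot = chordal_mean(mats)
print("matrix-based mean (as Euler angles):",
      Rotation.from_matrix(mean_rot).as_euler("xyz", degrees=True))
print("naive arithmetic mean of Euler angles:", euler_deg.mean(axis=0))
```

For low-dispersion sets like this one the two results are nearly identical, consistent with the abstract's finding; the discrepancy grows for widely spread orientations.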
NASA Technical Reports Server (NTRS)
Taylor, Arthur C., III; Newman, James C., III; Barnwell, Richard W.
1997-01-01
A three-dimensional unstructured grid approach to aerodynamic shape sensitivity analysis and design optimization has been developed and is extended to model geometrically complex configurations. The advantage of unstructured grids (when compared with a structured-grid approach) is their inherent ability to discretize irregularly shaped domains with greater efficiency and less effort. Hence, this approach is ideally suited for geometrically complex configurations of practical interest. In this work the nonlinear Euler equations are solved using an upwind, cell-centered, finite-volume scheme. The discrete, linearized systems which result from this scheme are solved iteratively by a preconditioned conjugate-gradient-like algorithm known as GMRES for the two-dimensional geometry and a Gauss-Seidel algorithm for the three-dimensional; similar procedures are used to solve the accompanying linear aerodynamic sensitivity equations in incremental iterative form. As shown, this particular form of the sensitivity equation makes large-scale gradient-based aerodynamic optimization possible by taking advantage of memory efficient methods to construct exact Jacobian matrix-vector products. Simple parameterization techniques are utilized for demonstrative purposes. Once the surface has been deformed, the unstructured grid is adapted by considering the mesh as a system of interconnected springs. Grid sensitivities are obtained by differentiating the surface parameterization and the grid adaptation algorithms with ADIFOR (which is an advanced automatic-differentiation software tool). To demonstrate the ability of this procedure to analyze and design complex configurations of practical interest, the sensitivity analysis and shape optimization has been performed for a two-dimensional high-lift multielement airfoil and for a three-dimensional Boeing 747-200 aircraft.
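The grid-adaptation step, in which the mesh is treated as a system of interconnected springs, can be sketched as follows. With equal stiffness on every edge, each interior node relaxes to the average of its neighbours once the surface nodes have been displaced; the data layout and the Jacobi iteration below are illustrative assumptions, not the scheme used in the paper.

```python
import numpy as np

def spring_deform(coords, neighbors, is_boundary, boundary_disp, n_iter=200):
    """Propagate a surface deformation into a mesh via the spring analogy.

    Each edge is a linear spring of equal stiffness, so at equilibrium every
    interior node sits at the average of its neighbours.  `boundary_disp`
    holds the prescribed displacements (zero for interior nodes);
    `neighbors` is a list of neighbour-index lists per node.
    """
    disp = np.array(boundary_disp, dtype=float)      # (n_nodes, 2)
    for _ in range(n_iter):
        new_disp = disp.copy()
        for i, nbrs in enumerate(neighbors):
            if not is_boundary[i]:
                new_disp[i] = disp[nbrs].mean(axis=0)  # Jacobi relaxation
        disp = new_disp
    return np.asarray(coords, dtype=float) + disp
```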
NASA Astrophysics Data System (ADS)
Basu, Nandita B.; Fure, Adrian D.; Jawitz, James W.
2008-07-01
Simulations of nonpartitioning and partitioning tracer tests were used to parameterize the equilibrium stream tube model (ESM) that predicts the dissolution dynamics of dense nonaqueous phase liquids (DNAPLs) as a function of the Lagrangian properties of DNAPL source zones. Lagrangian, or stream-tube-based, approaches characterize source zones with as few as two trajectory-integrated parameters, in contrast to the potentially thousands of parameters required to describe the point-by-point variability in permeability and DNAPL in traditional Eulerian modeling approaches. The spill and subsequent dissolution of DNAPLs were simulated in two-dimensional domains having different hydrologic characteristics (variance of the log conductivity field = 0.2, 1, and 3) using the multiphase flow and transport simulator UTCHEM. Nonpartitioning and partitioning tracers were used to characterize the Lagrangian properties (travel time and trajectory-integrated DNAPL content statistics) of DNAPL source zones, which were in turn shown to be sufficient for accurate prediction of source dissolution behavior using the ESM throughout the relatively broad range of hydraulic conductivity variances tested here. The results were found to be relatively insensitive to travel time variability, suggesting that dissolution could be accurately predicted even if the travel time variance was only coarsely estimated. Estimation of the ESM parameters was also demonstrated using an approximate technique based on Eulerian data in the absence of tracer data; however, determining the minimum amount of such data required remains for future work. Finally, the stream tube model was shown to be a more unique predictor of dissolution behavior than approaches based on the ganglia-to-pool model for source zone characterization.
NASA Astrophysics Data System (ADS)
Harrington, J. Y.
2017-12-01
Parameterizing the growth of ice particles in numerical models is at an interesting crossroads. Most parameterizations developed in the past, including some that I have developed, parse model ice into numerous categories based primarily on the growth mode of the particle. Models routinely carry small ice, snow, aggregates, graupel, and hail. The snow and ice categories in some models are further split into subcategories to account for the various shapes of ice. There has been a relatively recent shift towards a new class of microphysical models that predict the properties of ice particles instead of using multiple categories and subcategories. Particle property models predict the physical characteristics of ice, such as aspect ratio, maximum dimension, effective density, rime density, effective area, and so forth. These models are attractive in the sense that particle characteristics evolve naturally in time and space without the need for numerous (and somewhat artificial) transitions among pre-defined classes. However, particle property models often require fundamental parameters that are typically derived from laboratory measurements. For instance, the evolution of particle shape during vapor depositional growth requires knowledge of the growth efficiencies for the various axes of the crystals, which in turn depend on surface parameters that can only be determined in the laboratory. The evolution of particle shapes and density during riming, aggregation, and melting requires data on the redistribution of mass across a crystal's axes as that crystal collects water drops, collects ice crystals, or melts. Predicting the evolution of particle properties based on laboratory-determined parameters has a substantial influence on the evolution of some cloud systems. Radiatively-driven cirrus clouds show a broader range of competition between heterogeneous nucleation and homogeneous freezing when ice crystal properties are predicted. Even strongly convective squall lines show substantial sensitivity to the predicted particle properties: the more natural evolution of ice crystals during riming produces graupel-like particles with the sizes and fall speeds required for the formation of a classic transition zone and an extended stratiform precipitation region.
2014-09-30
...through downscaling future projection simulations. APPROACH: To address the scientific objectives, we plan to develop, implement, and validate a...ITD and FSD at the same time. The development of MIZMAS will be based on systematic model parameterization, calibration, and validation, and data
NASA Astrophysics Data System (ADS)
Agishev, Ravil; Comerón, Adolfo
2018-04-01
As an application of the dimensionless parameterization concept proposed earlier for the characterization of lidar systems, the universal assessment of lidar capabilities in day and night conditions is considered. The dimensionless parameters encapsulate the atmospheric conditions, the lidar optical and optoelectronic characteristics, including the photodetector internal noise, and the sky background radiation. Approaches to ensure immunity of the lidar system to external background radiation are discussed.
Welter, David E.; White, Jeremy T.; Hunt, Randall J.; Doherty, John E.
2015-09-18
The PEST++ Version 3 software suite can be compiled for Microsoft Windows® and Linux® operating systems; the source code is available in a Microsoft Visual Studio® 2013 solution; Linux Makefiles are also provided. PEST++ Version 3 continues to build a foundation for an open-source framework capable of producing robust and efficient parameter estimation tools for large environmental models.
The novel high-performance 3-D MT inverse solver
NASA Astrophysics Data System (ADS)
Kruglyakov, Mikhail; Geraskin, Alexey; Kuvshinov, Alexey
2016-04-01
We present a novel, robust, scalable, and fast 3-D magnetotelluric (MT) inverse solver. The solver is written in a multi-language paradigm to make it as efficient, readable, and maintainable as possible. Separation-of-concerns and single-responsibility concepts run through the implementation of the solver. As the forward modelling engine, a modern scalable solver, extrEMe, based on the contracting integral equation approach, is used. An iterative gradient-type (quasi-Newton) optimization scheme is invoked to search for the (regularized) inverse problem solution, and an adjoint source approach is used to efficiently calculate the gradient of the misfit. The inverse solver is able to deal with highly detailed and contrasting models, allows for working (separately or jointly) with any type of MT responses, and supports massive parallelization. Moreover, different parallelization strategies implemented in the code allow optimal usage of available computational resources for a given problem statement. To parameterize the inverse domain, the so-called mask parameterization is implemented, which means that one can merge any subset of forward modelling cells in order to account for the (usually) irregular distribution of observation sites. We report results of 3-D numerical experiments aimed at analysing the robustness, performance, and scalability of the code. In particular, our computational experiments, carried out on platforms ranging from modern laptops to the Piz Daint HPC system (the sixth-ranked supercomputer in the world), demonstrate practically linear scalability of the code up to thousands of nodes.
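The optimization loop described above pairs a quasi-Newton search with adjoint-computed gradients. The sketch below uses a toy linear forward operator so that the misfit gradient is exact and the script runs as written; in the real solver the forward map is the integral-equation engine and the gradient comes from an adjoint-source computation. The operator G, the regularization weight, and the use of SciPy's L-BFGS-B routine are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

# Toy stand-in for the forward problem: d = G @ m.
rng = np.random.default_rng(1)
G = rng.normal(size=(40, 15))
m_true = rng.normal(size=15)
d_obs = G @ m_true + 0.01 * rng.normal(size=40)
lam = 1e-2                                  # Tikhonov regularization weight

def misfit_and_gradient(m):
    """Regularized least-squares misfit and its gradient (exact here;
    adjoint-computed in the actual solver)."""
    r = G @ m - d_obs
    phi = 0.5 * r @ r + 0.5 * lam * m @ m
    grad = G.T @ r + lam * m               # adjoint of the linear forward map
    return phi, grad

result = minimize(misfit_and_gradient, x0=np.zeros(15),
                  jac=True, method="L-BFGS-B")
print("converged:", result.success, "final misfit:", result.fun)
```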
A Fast Vector Radiative Transfer Model for Atmospheric and Oceanic Remote Sensing
NASA Astrophysics Data System (ADS)
Ding, J.; Yang, P.; King, M. D.; Platnick, S. E.; Meyer, K.
2017-12-01
A fast vector radiative transfer model is developed in support of atmospheric and oceanic remote sensing. This model is capable of simulating the Stokes vector observed at the top of the atmosphere (TOA) and the terrestrial surface by considering absorption, scattering, and emission. The gas absorption is parameterized in terms of atmospheric gas concentrations, temperature, and pressure. The parameterization scheme combines a regression method with the correlated-K distribution method, and can be easily integrated with multiple-scattering computations. The approach is more than four orders of magnitude faster than a line-by-line radiative transfer model, with transmissivity errors of less than 0.5%. A two-component approach is utilized to solve the vector radiative transfer equation (VRTE). The VRTE solver separates the phase matrices of aerosol and cloud into forward and diffuse parts, so the solution is likewise separated. The forward solution can be expressed by a semi-analytical equation based on the small-angle approximation, and serves as the source for the diffuse part. The diffuse part is solved by the adding-doubling method. The adding-doubling implementation is computationally efficient because the diffuse component requires far fewer spherical-function expansion terms. The simulated Stokes vectors at both the TOA and the surface have accuracy comparable to counterparts based on numerically rigorous methods.
Multi-scale Modeling of Arctic Clouds
NASA Astrophysics Data System (ADS)
Hillman, B. R.; Roesler, E. L.; Dexheimer, D.
2017-12-01
The presence and properties of clouds are critically important to the radiative budget in the Arctic, but clouds are notoriously difficult to represent in global climate models (GCMs). The challenge stems partly from a disconnect between the scales at which these models are formulated and the scales of the physical processes important to the formation of clouds (e.g., convection and turbulence). Because of this, these processes are parameterized in large-scale models. Over the past decades, new approaches have been explored in which a cloud system resolving model (CSRM), or in the extreme a large eddy simulation (LES), is embedded into each gridcell of a traditional GCM to replace the cloud and convective parameterizations and explicitly simulate more of these important processes. This approach is attractive in that it allows for more explicit simulation of small-scale processes while also allowing for interaction between the small- and large-scale processes. The goal of this study is to quantify the performance of this framework in simulating Arctic clouds relative to a traditional global model, and to explore the limitations of such a framework using coordinated high-resolution (eddy-resolving) simulations. Simulations from the global model are compared with satellite retrievals of cloud fraction partitioned by cloud phase from CALIPSO, and limited-area LES simulations are compared with ground-based and tethered-balloon measurements from the ARM Barrow and Oliktok Point measurement facilities.
NASA Astrophysics Data System (ADS)
Poulter, B.; Ciais, P.; Joetzjer, E.; Maignan, F.; Luyssaert, S.; Barichivich, J.
2015-12-01
Accurately estimating forest biomass and forest carbon dynamics requires new integrated remote sensing, forest inventory, and carbon cycle modeling approaches. Presently, there is an increasing and urgent need to reduce forest biomass uncertainty in order to meet the requirements of carbon mitigation treaties, such as Reducing Emissions from Deforestation and forest Degradation (REDD+). Here we describe a new parameterization and assimilation methodology used to estimate tropical forest biomass using the ORCHIDEE-CAN dynamic global vegetation model. ORCHIDEE-CAN simulates carbon uptake and allocation to individual trees using a mechanistic representation of photosynthesis, respiration and other first-order processes. The model is first parameterized using forest inventory data to constrain background mortality rates, i.e., self-thinning, and productivity. Satellite remote sensing data for forest structure, i.e., canopy height, is used to constrain simulated forest stand conditions using a look-up table approach to match canopy height distributions. The resulting forest biomass estimates are provided for spatial grids that match REDD+ project boundaries and aim to provide carbon estimates for the criteria described in the IPCC Good Practice Guidelines Tier 3 category. With the increasing availability of forest structure variables derived from high-resolution LIDAR, RADAR, and optical imagery, new methodologies and applications with process-based carbon cycle models are becoming more readily available to inform land management.
Flexible climate modeling systems: Lessons from Snowball Earth, Titan and Mars
NASA Astrophysics Data System (ADS)
Pierrehumbert, R. T.
2007-12-01
Climate models are only useful to the extent that real understanding can be extracted from them. Most leading-edge problems in climate change, paleoclimate and planetary climate require a high degree of flexibility in terms of incorporating model physics -- for example in allowing methane or CO2 to be a condensible substance instead of water vapor. This puts a premium on model design that allows easy modification, and on physical parameterizations that are close to fundamentals with as little empirical ad-hoc formulation as possible. I will provide examples from two approaches to this problem we have been using at the University of Chicago. The first is the FOAM general circulation model, which is a clean single-executable Fortran-77/C code supported by auxiliary applications in Python and Java. The second is a new approach based on using Python as a shell for assembling compiled-code building blocks into full models. Applications to Snowball Earth, Titan and Mars, as well as pedagogical uses, will be discussed. One painful lesson we have learned is that Fortran-95 is a major impediment to portability and cross-language interoperability; in this light the trend toward Fortran-95 in major modelling groups is seen as a significant step backwards. In this talk, I will focus on modeling projects employing a full representation of atmospheric fluid dynamics, rather than "intermediate complexity" models in which the associated transports are parameterized.
NASA Astrophysics Data System (ADS)
Laloy, Eric; Hérault, Romain; Lee, John; Jacques, Diederik; Linde, Niklas
2017-12-01
Efficient and high-fidelity prior sampling and inversion for complex geological media is still a largely unsolved challenge. Here, we use a deep neural network of the variational autoencoder type to construct a parametric low-dimensional base model parameterization of complex binary geological media. For inversion purposes, it has the attractive feature that random draws from an uncorrelated standard normal distribution yield model realizations with spatial characteristics that are in agreement with the training set. In comparison with the most commonly used parametric representations in probabilistic inversion, we find that our dimensionality reduction (DR) approach outperforms principal component analysis (PCA), optimization-PCA (OPCA) and discrete cosine transform (DCT) DR techniques for unconditional geostatistical simulation of a channelized prior model. For the considered examples, substantial compression ratios (200-500) are achieved. Given that the construction of our parameterization requires a training set of several tens of thousands of prior model realizations, our DR approach is more suited for probabilistic (or deterministic) inversion than for unconditional (or point-conditioned) geostatistical simulation. Probabilistic inversions of 2D steady-state and 3D transient hydraulic tomography data are used to demonstrate the DR-based inversion. For the 2D case study, the performance is superior compared to current state-of-the-art multiple-point statistics inversion by sequential geostatistical resampling (SGR). Inversion results for the 3D application are also encouraging.
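For context, the baseline dimensionality-reduction workflow that the deep-learning parameterization is compared against can be sketched with PCA: fit a linear parameterization on a training ensemble, then map standard normal draws in the reduced space back to model space. Everything below (the random "training" fields and the number of components) is an illustrative stand-in, not the authors' setup; the VAE replaces the linear mapping with a learned nonlinear decoder.

```python
import numpy as np

def fit_pca(ensemble, n_comp):
    """PCA parameterization of a training ensemble (one of the baseline
    dimensionality-reduction schemes the VAE is compared against)."""
    mean = ensemble.mean(axis=0)
    X = ensemble - mean
    # SVD of the centred ensemble; rows of Vt are the principal patterns
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    comps = Vt[:n_comp]                           # (n_comp, n_cells)
    scales = s[:n_comp] / np.sqrt(len(ensemble) - 1)
    return mean, comps, scales

def sample_pca(mean, comps, scales, n_real, rng):
    """Map uncorrelated standard normal draws from the reduced space back to
    model space -- the same workflow used with the VAE decoder, only with a
    linear (and here unthresholded) mapping."""
    z = rng.standard_normal((n_real, len(scales)))
    return mean + (z * scales) @ comps

rng = np.random.default_rng(0)
training = (rng.random((2000, 50 * 50)) < 0.3).astype(float)  # dummy binary "facies"
mean, comps, scales = fit_pca(training, n_comp=40)
realizations = sample_pca(mean, comps, scales, n_real=5, rng=rng)
```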
Effect of planning for connectivity on linear reserve networks.
Lentini, Pia E; Gibbons, Philip; Carwardine, Josie; Fischer, Joern; Drielsma, Michael; Martin, Tara G
2013-08-01
Although the concept of connectivity is decades old, it remains poorly understood and defined, and some argue that habitat quality and area should take precedence in conservation planning instead. However, fragmented landscapes are often characterized by linear features that are inherently connected, such as streams and hedgerows. For these, both representation and connectivity targets may be met with little effect on the cost, area, or quality of the reserve network. We assessed how connectivity approaches affect planning outcomes for linear habitat networks by using the stock-route network of Australia as a case study. With the objective of representing vegetation communities across the network at a minimal cost, we ran scenarios with a range of representation targets (10%, 30%, 50%, and 70%) and used 3 approaches to account for connectivity (boundary length modifier, Euclidean distance, and landscape-value [LV]). We found that decisions regarding the target and connectivity approach used affected the spatial allocation of reserve systems. At targets ≥50%, networks designed with the Euclidean distance and LV approaches consisted of a greater number of small reserves. Hence, by maximizing both representation and connectivity, these networks compromised on larger contiguous areas. However, targets this high are rarely used in real-world conservation planning. Approaches for incorporating connectivity into the planning of linear reserve networks that account for both the spatial arrangement of reserves and the characteristics of the intervening matrix highlight important sections that link the landscape and that may otherwise be overlooked. © 2013 Society for Conservation Biology.
NASA Astrophysics Data System (ADS)
Han, Xiaobao; Li, Huacong; Jia, Qiusheng
2017-12-01
For the dynamic decoupling of polynomial linear parameter-varying (PLPV) systems, a robust diagonal dominance pre-compensator design method is given. The parameterized pre-compensator design problem is converted into an optimization problem constrained by parameterized linear matrix inequalities (PLMIs) by using the concept of a parameterized Lyapunov function (PLF). To solve the PLMI-constrained optimization problem, the pre-compensator design problem is reduced to a standard convex optimization problem with ordinary linear matrix inequality (LMI) constraints on a newly constructed convex polyhedron. Moreover, a parameter-scheduled pre-compensator is obtained which satisfies both robust performance and decoupling requirements. Finally, the feasibility and validity of the robust diagonal dominance pre-compensator design method are verified by numerical simulation on a turbofan engine PLPV model.
Deriving Flood-Mediated Connectivity between River Channels and Floodplains: Data-Driven Approaches
NASA Astrophysics Data System (ADS)
Zhao, Tongtiegang; Shao, Quanxi; Zhang, Yongyong
2017-03-01
The flood-mediated connectivity between river channels and floodplains plays a fundamental role in flood hazard mapping and exerts profound ecological effects. The classic nearest neighbor search (NNS) fails to derive this connectivity because of spatial heterogeneity and continuity. We develop two novel data-driven connectivity-deriving approaches, namely, progressive nearest neighbor search (PNNS) and progressive iterative nearest neighbor search (PiNNS). These approaches are illustrated through a case study in Northern Australia. First, PNNS and PiNNS are employed to identify flood pathways on floodplains through forward tracking. That is, a progressive search is performed to associate newly inundated cells in each time step with previously inundated cells. In particular, iterations in PiNNS ensure that the connectivity is continuous - the connection between any two cells along the pathway is built through intermediate inundated cells. Second, inundated floodplain cells are collectively connected to river channel cells through backward tracing. Certain river channel sections are identified that connect to a large number of inundated floodplain cells; that is, the floodwater from these sections causes widespread floodplain inundation. Our proposed approaches take advantage of spatio-temporal data. They can be applied to derive connectivity from hydrodynamic and remote sensing data and to assist in river basin planning and management.
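A sketch of the PNNS forward-tracking step is given below: at each time step, every newly inundated cell is linked to its nearest previously inundated cell. The data layout (a cumulative list of inundated-cell coordinates per time step) and the use of a k-d tree are assumptions made for illustration; the iterative refinement that distinguishes PiNNS is omitted.

```python
import numpy as np
from scipy.spatial import cKDTree

def progressive_nns(inundation):
    """Forward-tracking sketch of progressive nearest neighbour search.

    `inundation` is a list of (n_i, 2) coordinate arrays of cells inundated
    at each time step (cumulative).  Each newly inundated cell is linked to
    its nearest cell from the previous step, building flood pathways step by
    step.  PiNNS-style iterations (forcing links to pass through intermediate
    inundated cells) are omitted here.
    """
    parents = []                                   # one link table per step
    for t in range(1, len(inundation)):
        prev, curr = inundation[t - 1], inundation[t]
        prev_set = {tuple(c) for c in prev}        # cells already wet
        new_cells = np.array([c for c in curr if tuple(c) not in prev_set])
        if len(new_cells) == 0:
            parents.append(np.empty((0,), int))
            continue
        tree = cKDTree(prev)
        _, idx = tree.query(new_cells)             # nearest previously wet cell
        parents.append(idx)
    return parents
```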
Modeling particle nucleation and growth over northern California during the 2010 CARES campaign
NASA Astrophysics Data System (ADS)
Lupascu, A.; Easter, R.; Zaveri, R.; Shrivastava, M.; Pekour, M.; Tomlinson, J.; Yang, Q.; Matsui, H.; Hodzic, A.; Zhang, Q.; Fast, J. D.
2015-11-01
Accurate representation of the aerosol lifecycle requires adequate modeling of the particle number concentration and size distribution in addition to their mass, which is often the focus of aerosol modeling studies. This paper compares particle number concentrations and size distributions as predicted by three empirical nucleation parameterizations in the Weather Research and Forecast coupled with chemistry (WRF-Chem) regional model using 20 discrete size bins ranging from 1 nm to 10 μm. Two of the parameterizations are based on H2SO4, while one is based on both H2SO4 and organic vapors. Budget diagnostic terms for transport, dry deposition, emissions, condensational growth, nucleation, and coagulation of aerosol particles have been added to the model and are used to analyze the differences in how the new particle formation parameterizations influence the evolving aerosol size distribution. The simulations are evaluated using measurements collected at surface sites and from a research aircraft during the Carbonaceous Aerosol and Radiative Effects Study (CARES) conducted in the vicinity of Sacramento, California. While all three parameterizations captured the temporal variation of the size distribution during observed nucleation events as well as the spatial variability in aerosol number, all overestimated by up to a factor of 2.5 the total particle number concentration for particle diameters greater than 10 nm. Using the budget diagnostic terms, we demonstrate that the combined H2SO4 and low-volatility organic vapor parameterization leads to a different diurnal variability of new particle formation and growth to larger sizes compared to the parameterizations based on only H2SO4. At the CARES urban ground site, peak nucleation rates are predicted to occur around 12:00 Pacific (local) standard time (PST) for the H2SO4 parameterizations, whereas the highest rates were predicted at 08:00 and 16:00 PST when low-volatility organic gases are included in the parameterization. This can be explained by higher anthropogenic emissions of organic vapors at these times as well as lower boundary-layer heights that reduce vertical mixing. The higher nucleation rates in the H2SO4-organic parameterization at these times were largely offset by losses due to coagulation. Despite the different budget terms for ultrafine particles, the 10-40 nm diameter particle number concentrations from all three parameterizations increased from 10:00 to 14:00 PST and then decreased later in the afternoon, consistent with changes in the observed size and number distribution. We found that newly formed particles could explain up to 20-30 % of predicted cloud condensation nuclei at 0.5 % supersaturation, depending on location and the specific nucleation parameterization. A sensitivity simulation using 12 discrete size bins ranging from 1 nm to 10 μm diameter gave a reasonable estimate of particle number and size distribution compared to the 20 size bin simulation, while reducing the associated computational cost by ~ 36 %.
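For orientation, the empirical nucleation parameterizations compared in studies like this one are typically simple functions of precursor vapour concentrations: an activation-type rate linear in H2SO4, a kinetic-type rate quadratic in H2SO4, and a combined H2SO4-organics form. The functional forms below follow that common pattern, but the rate prefactors are order-of-magnitude placeholders, not the coefficients used in WRF-Chem or in this study.

```python
def activation_nucleation(h2so4, A=2.0e-6):
    """Activation-type rate J = A * [H2SO4] (cm^-3 s^-1); the prefactor is a
    typical order of magnitude only, not the value used in the model."""
    return A * h2so4

def kinetic_nucleation(h2so4, K=1.0e-12):
    """Kinetic-type rate J = K * [H2SO4]^2; prefactor again illustrative."""
    return K * h2so4 ** 2

def org_h2so4_nucleation(h2so4, org, A1=1.0e-6, A2=1.0e-7):
    """Combined sulphuric-acid / low-volatility-organic form,
    J = A1*[H2SO4] + A2*[org], sketching how organic vapours can shift the
    diurnal timing of new-particle formation discussed above."""
    return A1 * h2so4 + A2 * org

# e.g. midday H2SO4 of 5e6 molecules cm^-3, organics of 1e7 cm^-3
print(activation_nucleation(5e6), kinetic_nucleation(5e6),
      org_h2so4_nucleation(5e6, 1e7))
```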
NASA Astrophysics Data System (ADS)
Fangohr, Susanne; Woolf, David K.
2007-06-01
One of the dominant sources of uncertainty in the calculation of air-sea flux of carbon dioxide on a global scale originates from the various parameterizations of the gas transfer velocity, k, that are in use. Whilst it is undisputed that most of these parameterizations have shortcomings and neglect processes which influence air-sea gas exchange and do not scale with wind speed alone, there is no general agreement about their relative accuracy. The most widely used parameterizations are based on non-linear functions of wind speed and, to a lesser extent, on sea surface temperature and salinity. Processes such as surface film damping and whitecapping are known to have an effect on air-sea exchange. More recently published parameterizations use friction velocity, sea surface roughness, and significant wave height. These new parameters can account to some extent for processes such as film damping and whitecapping and could potentially explain the spread of wind-speed based transfer velocities published in the literature. We combine some of the principles of two recently published k parameterizations [Glover, D.M., Frew, N.M., McCue, S.J. and Bock, E.J., 2002. A multiyear time series of global gas transfer velocity from the TOPEX dual frequency, normalized radar backscatter algorithm. In: Donelan, M.A., Drennan, W.M., Saltzman, E.S., and Wanninkhof, R. (Eds.), Gas Transfer at Water Surfaces, Geophys. Monograph 127. AGU,Washington, DC, 325-331; Woolf, D.K., 2005. Parameterization of gas transfer velocities and sea-state dependent wave breaking. Tellus, 57B: 87-94] to calculate k as the sum of a linear function of total mean square slope of the sea surface and a wave breaking parameter. This separates contributions from direct and bubble-mediated gas transfer as suggested by Woolf [Woolf, D.K., 2005. Parameterization of gas transfer velocities and sea-state dependent wave breaking. Tellus, 57B: 87-94] and allows us to quantify contributions from these two processes independently. We then apply our parameterization to a monthly TOPEX altimeter gridded 1.5° × 1.5° data set and compare our results to transfer velocities calculated using the popular wind-based k parameterizations by Wanninkhof [Wanninkhof, R., 1992. Relationship between wind speed and gas exchange over the ocean. J. Geophys. Res., 97: 7373-7382.] and Wanninkhof and McGillis [Wanninkhof, R. and McGillis, W., 1999. A cubic relationship between air-sea CO2 exchange and wind speed. Geophys. Res. Lett., 26(13): 1889-1892]. We show that despite good agreement of the globally averaged transfer velocities, global and regional fluxes differ by up to 100%. These discrepancies are a result of different spatio-temporal distributions of the processes involved in the parameterizations of k, indicating the importance of wave field parameters and a need for further validation.
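The contrast drawn above can be made concrete. The Wanninkhof (1992) relation is the quadratic wind-speed form cited in the text, while the two-component form splits k into a direct (interfacial) term linear in total mean square slope and a bubble-mediated wave-breaking term. The coefficients in the two-component sketch are placeholders, not the values fitted by Glover et al. (2002) or Woolf (2005).

```python
def k_wanninkhof92(u10, schmidt=660.0):
    """Wanninkhof (1992) quadratic wind-speed parameterization,
    k = 0.31 * U10^2 * (Sc/660)^-0.5, with k in cm/h and U10 in m/s."""
    return 0.31 * u10 ** 2 * (schmidt / 660.0) ** -0.5

def k_two_component(mss, whitecap_fraction, a=500.0, b=850.0, schmidt=660.0):
    """Sketch of the hybrid form described above: a direct term linear in
    total mean square slope plus a bubble-mediated term tied to wave
    breaking.  Coefficients a and b are placeholders, not fitted values."""
    k_direct = a * mss * (schmidt / 660.0) ** -0.5
    k_bubble = b * whitecap_fraction
    return k_direct + k_bubble

print(k_wanninkhof92(7.0))                    # ~15 cm/h at 7 m/s
print(k_two_component(mss=0.04, whitecap_fraction=0.01))
```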
NASA Astrophysics Data System (ADS)
Xu, Kuan-Man; Cheng, Anning
2014-05-01
A high-resolution cloud-resolving model (CRM) embedded in a general circulation model (GCM) is an attractive alternative for climate modeling because it replaces all traditional cloud parameterizations and explicitly simulates cloud physical processes in each grid column of the GCM. Such an approach is called the "Multiscale Modeling Framework" (MMF). MMF still needs to parameterize the subgrid-scale (SGS) processes associated with clouds and large turbulent eddies because circulations associated with the planetary boundary layer (PBL) and in-cloud turbulence are unresolved by CRMs with horizontal grid sizes on the order of a few kilometers. A third-order turbulence closure (IPHOC) has been implemented in the CRM component of the super-parameterized Community Atmosphere Model (SPCAM). IPHOC is used to predict (or diagnose) fractional cloudiness and the variability of temperature and water vapor at scales that are not resolved on the CRM's grid. This model has produced promising results, especially for low-level cloud climatology and its seasonal and diurnal variations (Cheng and Xu 2011, 2013a, b; Xu and Cheng 2013a, b). Because of the enormous computational cost of SPCAM-IPHOC, which is about 400 times that of the conventional CAM, we decided to bypass the CRM and implement IPHOC directly in CAM version 5 (CAM5). IPHOC replaces the PBL/stratocumulus, shallow convection, and cloud macrophysics parameterizations in CAM5. Since there are large discrepancies in the spatial and temporal scales between the CRM and CAM5, the IPHOC used in CAM5 has to be modified from that used in SPCAM. In particular, we diagnose all second- and third-order moments except for the fluxes. These prognostic and diagnostic moments are used to select a double-Gaussian probability density function to describe the SGS variability. We also incorporate a diagnostic PBL height parameterization to represent the strong inversion above the PBL. The goal of this study is to compare the simulated climatology from these three models (CAM5, CAM5-IPHOC and SPCAM-IPHOC), with emphasis on low-level clouds and precipitation. Detailed comparisons of scatter diagrams among the monthly-mean low-level cloudiness, PBL height, surface relative humidity and lower tropospheric stability (LTS) reveal the relative strengths and weaknesses of the three models for five coastal low-cloud regions. Observations from CloudSat and CALIPSO and the ECMWF Interim reanalysis are used as the truth for the comparisons. We found that the standard CAM5 underestimates cloudiness and produces small cloud fractions at low PBL heights that contradict observations. CAM5-IPHOC tends to overestimate low clouds, but its ranges of LTS and PBL height variations are the most realistic. SPCAM-IPHOC appears to produce the most realistic results, with relative consistency from one region to another. Further comparisons with other atmospheric environmental variables will be helpful to reveal the causes of model deficiencies so that SPCAM-IPHOC results can provide guidance to the other two models.
Measured and parameterized energy fluxes estimated for Atlantic transects of RV Polarstern
NASA Astrophysics Data System (ADS)
Bumke, Karl; Macke, Andreas; Kalisch, John; Kleta, Henry
2013-04-01
Even today, energy fluxes over the oceans are difficult to assess. As an example, the relative paucity of evaporation observations and the uncertainties of currently employed empirical approaches lead to large uncertainties in evaporation products over the ocean (e.g. Large and Yeager, 2009). Within the framework of OCEANET (Macke et al., 2010) we performed such measurements on Atlantic transects between Bremerhaven (Germany) and Cape Town (South Africa) or Punta Arenas (Chile) onboard RV Polarstern in recent years. The basic measurements for the sensible and latent heat fluxes are inertial-dissipation flux estimates (e.g. Dupuis et al., 1997) and measurements of the bulk variables. Turbulence measurements included a sonic anemometer and an infrared hygrometer, both mounted on the crow's nest. Mean meteorological sensors were those of the ship's operational measurement system. The global radiation and the downwelling terrestrial radiation were measured on the OCEANET container placed on the monkey island. About 1000 time series of 1 h length were analyzed to derive bulk transfer coefficients for the fluxes of sensible and latent heat. The bulk transfer coefficients were applied to the ship's meteorological data to derive the heat fluxes at the sea surface. The reflected solar radiation was estimated from the measured global radiation. The upwelling terrestrial radiation was derived from the skin temperature according to the Stefan-Boltzmann law. Parameterized heat fluxes were compared to the widely used COARE parameterization (Fairall et al., 2003); the agreement is excellent. Measured and parameterized heat and radiation fluxes gave the total energy budget at the air-sea interface. As expected, the mean total flux is positive, but there are also areas where it is negative, indicating an energy loss of the ocean. It could be shown that the variations in the energy budget are mainly due to insolation and evaporation. A comparison between the mean values of measured and parameterized sensible and latent heat fluxes shows that the data are suitable to validate satellite-derived fluxes at the sea surface and re-analysis data. References: Dupuis, H., P. K. Taylor, A. Weill, and K. Katsaros, 1997: Inertial dissipation method applied to derive turbulent fluxes over the ocean during the surface of the ocean. J. Geophys. Res., 102 (C9), 21 115-21 129. Fairall, C. W., E. F. Bradley, J. E. Hare, A. A. Grachev, and J. B. Edson, 2003: Bulk Parameterization of Air-Sea Fluxes: Updates and Verification for the COARE Algorithm. J. Climate, 16, 571-591. Large, W. G., and S. G. Yeager, 2009: The global climatology of an interannually varying air-sea flux data set. Climate Dynamics, 33, 341-364. Macke, A., Kalisch, J., Zoll, Y., and Bumke, K., 2010: Radiative effects of the cloudy atmosphere from ground and satellite based observations. EPJ Web of Conferences, 5 9, 83-94.
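The parameterized surface fluxes referred to above follow standard bulk formulas, and the upwelling terrestrial radiation follows the Stefan-Boltzmann law. The sketch below shows these relations with typical open-ocean transfer coefficients; the coefficients are illustrative values, not the ones derived from the Polarstern inertial-dissipation data.

```python
RHO_AIR = 1.2        # air density, kg m^-3
CP_AIR = 1004.0      # specific heat of air, J kg^-1 K^-1
LV = 2.5e6           # latent heat of vaporization, J kg^-1
SIGMA = 5.67e-8      # Stefan-Boltzmann constant, W m^-2 K^-4

def sensible_heat_flux(u, t_sea, t_air, c_h=1.1e-3):
    """Bulk formula H = rho * cp * C_H * U * (T_sea - T_air); the transfer
    coefficient is a typical open-ocean value, not a fitted one."""
    return RHO_AIR * CP_AIR * c_h * u * (t_sea - t_air)

def latent_heat_flux(u, q_sea, q_air, c_e=1.2e-3):
    """Bulk formula LE = rho * Lv * C_E * U * (q_sea - q_air)."""
    return RHO_AIR * LV * c_e * u * (q_sea - q_air)

def upwelling_longwave(t_skin, emissivity=0.98):
    """Stefan-Boltzmann estimate of upwelling terrestrial radiation."""
    return emissivity * SIGMA * t_skin ** 4

print(sensible_heat_flux(8.0, 293.0, 291.0),
      latent_heat_flux(8.0, 0.015, 0.012),
      upwelling_longwave(293.0))
```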
Mixing parametrizations for ocean climate modelling
NASA Astrophysics Data System (ADS)
Gusev, Anatoly; Moshonkin, Sergey; Diansky, Nikolay; Zalesny, Vladimir
2016-04-01
An algorithm is presented for splitting the evolution equations for the turbulence kinetic energy (TKE) and turbulence dissipation frequency (TDF), which are used to parameterize the viscosity and diffusion coefficients in ocean circulation models. The turbulence model equations are split into transport-diffusion and generation-dissipation stages. For the generation-dissipation stage, the following schemes are implemented: an explicit-implicit numerical scheme, an analytical solution, and the asymptotic behavior of the analytical solution. Experiments were performed with different mixing parameterizations for modelling decadal climate variability in the Arctic and the Atlantic with the eddy-permitting circulation model INMOM (Institute of Numerical Mathematics Ocean Model), using vertical grid refinement in the zone of fully developed turbulence. The proposed model with split equations for the turbulence characteristics is similar in its physical formulation to contemporary differential turbulence models, while its algorithm is computationally efficient. Parameterizations based on the split turbulence model make it possible to obtain a more adequate temperature and salinity structure at decadal timescales than the simpler Pacanowski-Philander (PP) turbulence parameterization. Using the analytical solution or the numerical scheme at the generation-dissipation step of the turbulence model leads to a better representation of ocean climate than the faster parameterization based on the asymptotic behavior of the analytical solution, while the computational cost remains almost unchanged relative to the simple PP parameterization. Using the PP parameterization in the circulation model leads to a realistic simulation of density and circulation but violates the T-S relationships; this error is largely avoided with the proposed parameterizations based on the split turbulence model. A high sensitivity of the eddy-permitting circulation model to the treatment of mixing is revealed, associated with significant changes of the density field in the upper baroclinic ocean layer over the whole considered area. For instance, using the turbulence parameterization instead of the PP algorithm increases the circulation velocity in the Gulf Stream and North Atlantic Current, and the subpolar cyclonic gyre in the North Atlantic and the Beaufort Gyre in the Arctic basin are reproduced more realistically. Treating the Prandtl number as a function of the Richardson number significantly increases the modelling quality. The research was supported by the Russian Foundation for Basic Research (grant № 16-05-00534) and the Council on the Russian Federation President Grants (grant № MK-3241.2015.5)
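The generation-dissipation stage of the split scheme can be illustrated with a strongly simplified case: if the production P and the dissipation frequency ω are held fixed over the substep, the TKE equation dk/dt = P − ωk has an exact exponential solution. This is only a sketch of the splitting idea; the actual scheme also advances the dissipation-frequency equation and uses the coupled analytical solution described by the authors.

```python
import numpy as np

def generation_dissipation_step(k, production, omega, dt):
    """Analytical update for a simplified generation-dissipation substep.

    Assumes production P and dissipation frequency omega are frozen over
    the substep, so dk/dt = P - omega * k integrates exactly.
    """
    k_eq = production / omega                       # equilibrium TKE
    return k_eq + (k - k_eq) * np.exp(-omega * dt)

# One 10-minute substep for a column of TKE values
k = np.array([1e-4, 5e-4, 1e-3])
print(generation_dissipation_step(k, production=2e-7, omega=1e-3, dt=600.0))
```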
Constraints to Dark Energy Using PADE Parameterizations
NASA Astrophysics Data System (ADS)
Rezaei, M.; Malekjani, M.; Basilakos, S.; Mehrabi, A.; Mota, D. F.
2017-07-01
We put constraints on dark energy (DE) properties using the Padé parameterization, and compare them to the same constraints using the Chevallier-Polarski-Linder (CPL) and ΛCDM models, at both the background and the perturbation levels. The DE equation-of-state parameter of the models is derived following the mathematical treatment of the Padé expansion. Unlike the CPL parameterization, the Padé approximation provides different forms of the equation-of-state parameter that avoid a divergence in the far future. Initially we perform a likelihood analysis in order to put constraints on the model parameters using solely background expansion data, and we find that all parameterizations are consistent with each other. Then, combining the expansion and growth rate data, we test the viability of the Padé parameterizations and compare them with the CPL and ΛCDM models. Specifically, we find that the growth rate of the current Padé parameterizations is lower than that of the ΛCDM model at low redshifts, while the differences among the models are negligible at high redshifts. In this context, we provide for the first time the growth index of linear matter perturbations in Padé cosmologies. Considering that DE is homogeneous, we recover the well-known asymptotic value of the growth index, namely $\gamma_\infty = \frac{3(w_\infty - 1)}{6 w_\infty - 5}$, while in the case of clustered DE we obtain $\gamma_\infty \simeq \frac{3 w_\infty (3 w_\infty - 5)}{(6 w_\infty - 5)(3 w_\infty - 1)}$. Finally, we generalize the growth index analysis to the case where γ is allowed to vary with redshift, and we find that the form of γ(z) in the Padé parameterization extends those of the CPL and ΛCDM cosmologies.
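For reference, the equation-of-state parameterizations being compared can be written explicitly. The CPL form is standard; the Padé form shown is one common low-order rational variant and may not match the exact orders adopted by the authors. The asymptotic growth indices restate the expressions quoted in the abstract.

$$ w_{\mathrm{CPL}}(z) = w_0 + w_a\,\frac{z}{1+z}, \qquad w_{\mathrm{Pad\acute{e}}}(z) = \frac{w_0 + w_1\,z}{1 + w_2\,z}, $$

$$ \gamma_\infty = \frac{3\,(w_\infty - 1)}{6\,w_\infty - 5} \;\;\text{(homogeneous DE)}, \qquad \gamma_\infty \simeq \frac{3\,w_\infty\,(3\,w_\infty - 5)}{(6\,w_\infty - 5)\,(3\,w_\infty - 1)} \;\;\text{(clustered DE)}. $$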
Local Interactions of Hydrometeors by Diffusion in Mixed-Phase Clouds
NASA Astrophysics Data System (ADS)
Baumgartner, Manuel; Spichtinger, Peter
2017-04-01
Mixed-phase clouds, containing both ice particles and liquid droplets, are important for the Earth-Atmosphere system. They modulate the radiation budget by a combination of albedo and greenhouse effects. In contrast to liquid water clouds, the radiative impact of clouds containing ice particles is still uncertain. Scattering and absorption depend strongly on the microphysical properties of ice crystals, e.g. size and shape. In addition, most precipitation on Earth forms via the ice phase. Thus, a better understanding of ice processes, as well as of their representation in models, is required. A key process for determining the shape and size of ice crystals is diffusional growth. Diffusion processes in mixed-phase clouds are highly uncertain; in addition, they are usually highly simplified in cloud models, especially in bulk microphysics parameterizations. The direct interaction between cloud droplets and ice particles, due to spatial inhomogeneities, is ignored; the particles can only interact via their environmental conditions. Local effects such as the supply of supersaturation by clusters of droplets around ice particles are usually not represented, although they form the physical basis of the Wegener-Bergeron-Findeisen process. We present direct numerical simulations of the interaction of single ice particles and droplets, especially their local competition for the available water vapor. In addition, we show an approach to parameterize local interactions by diffusion. The suggested parameterization uses local steady-state solutions of the diffusion equations for water vapor for an ice particle as well as a droplet. The individual solutions are coupled together to obtain the desired interaction. We show some results of the scheme as implemented in a parcel model.
Controls on Turbulent Mixing in a Strongly Stratified and Sheared Tidal River Plume
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jurisa, Joseph T.; Nash, Jonathan D.; Moum, James N.
Considerable effort has been made to parameterize turbulent kinetic energy (TKE) dissipation rate ε and mixing in buoyant plumes and stratified shear flows. Here, a parameterization based on Kunze et al. is examined, which estimates ε as the amount of energy contained in an unstable shear layer (Ri < Ri_c) that must be dissipated to increase the Richardson number Ri = N²/S² to a critical value Ri_c within a turbulent decay time scale. Observations from the tidal Columbia River plume are used to quantitatively assess the relevant parameters controlling ε over a range of tidal and river discharge forcings. Observed ε is found to be characterized by Kunze et al.'s form within a factor of 2, while exhibiting slightly decreased skill near Ri = Ri_c. Observed dissipation rates are compared to estimates from a constant interfacial drag formulation that neglects the direct effects of stratification. This is found to be appropriate in energetic regimes when the bulk-averaged Richardson number Ri_b is less than Ri_c/4. However, when Ri_b > Ri_c/4, the effects of stratification must be included. Similarly, ε scaled by the bulk velocity and density differences over the plume displays a clear dependence on Ri_b, decreasing as Ri_b approaches Ri_c. The Kunze et al. ε parameterization is modified to form an expression for the nondimensional dissipation rate that is solely a function of Ri_b, displaying good agreement with the observations. It is suggested that this formulation is broadly applicable for unstable to marginally unstable stratified shear flows.
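The diagnostics underlying this parameterization are the buoyancy frequency, shear, and gradient Richardson number computed from observed profiles. The sketch below shows those quantities and flags layers below a critical Richardson number of 0.25; it does not reproduce the Kunze et al. dissipation estimate itself, which additionally requires the unstable-layer thickness and a turbulent decay time scale. The profile values are invented for illustration.

```python
import numpy as np

G = 9.81
RHO0 = 1025.0        # reference sea-water density, kg m^-3

def richardson_number(z, u, v, rho):
    """Gradient Richardson number Ri = N^2 / S^2 from velocity and density
    profiles (z increasing upward); purely an illustrative diagnostic."""
    du, dv, drho, dz = np.diff(u), np.diff(v), np.diff(rho), np.diff(z)
    n2 = -G / RHO0 * drho / dz                   # buoyancy frequency squared
    s2 = (du / dz) ** 2 + (dv / dz) ** 2         # shear squared
    return n2 / np.maximum(s2, 1e-12)

z = np.linspace(-10.0, 0.0, 11)
u = np.linspace(0.0, 2.0, 11)                    # strongly sheared plume layer
v = np.zeros(11)
rho = np.linspace(1024.0, 1018.0, 11)            # fresher, lighter water on top
ri = richardson_number(z, u, v, rho)
print("layers with Ri < 0.25:", int((ri < 0.25).sum()), "of", ri.size)
```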
NASA Astrophysics Data System (ADS)
Zhang, Liang; Tinsley, Brian A.
2018-03-01
Simulations and parameterization of collision rate coefficients for aerosol particles with 3 μm radius droplets have been extended to a range of particle densities up to 2,000 kg m⁻³ for midtropospheric (∼5 km) conditions (540 hPa, -17°C). The increasing weight has no effect on collisions for particle radii less than 0.2 μm, but for greater radii the weight effect becomes significant and usually decreases the collision rate coefficient. When the increasing size and density of particles make the fall speed of the particle relative to undisturbed air approach that of the droplet, the effect of the particle falling away in the stagnation region ahead of the droplet becomes important, and the probability of frontside collisions can decrease to zero. Collisions on the rear side of the droplet can be enhanced as particle weight increases, and for these the weight effect tends to increase the rate coefficients. For charges on the droplet and for large particles with density ρ < 1,000 kg m⁻³, the predominant effect is an increase in the rate coefficient due to the short-range attractive image electric force. With density ρ above about 1,000 kg m⁻³, the stagnation region prevents particles from moving close to the droplet and reduces the effect of these short-range forces. Together with previous work, it is now possible to obtain collision rate coefficients for realistic combinations of droplet charge, particle charge, droplet radius, particle radius, particle density, and relative humidity in clouds. The parameterization allows rapid access to these values for use in cloud models.
Paukert, M.; Hoose, C.; Simmel, M.
2017-02-21
In model studies of aerosol-dependent immersion freezing in clouds, a common assumption is that each ice nucleating aerosol particle corresponds to exactly one cloud droplet. Conversely, the immersion freezing of larger drops—“rain”—is usually represented by a liquid volume-dependent approach, making the parameterizations of rain freezing independent of specific aerosol types and concentrations. This may lead to inconsistencies when aerosol effects on clouds and precipitation shall be investigated, since raindrops consist of the cloud droplets—and corresponding aerosol particles—that have been involved in drop-drop collisions. We introduce an extension to a two-moment microphysical scheme in order to account explicitly for particle accumulation in raindrops by tracking the rates of selfcollection, autoconversion, and accretion. This also provides a direct link between ice nuclei and the primary formation of large precipitating ice particles. A new parameterization scheme of drop freezing is presented to consider multiple ice nuclei within one drop and effective drop cooling rates. In our test cases of deep convective clouds, we find that at altitudes which are most relevant for immersion freezing, the majority of potential ice nuclei have been converted from cloud droplets into raindrops. Compared to the standard treatment of freezing in our model, the less efficient mineral dust-based freezing results in higher rainwater contents in the convective core, affecting both rain and hail precipitation. The aerosol-dependent treatment of rain freezing can reverse the signs of simulated precipitation sensitivities to ice nuclei perturbations.
Characteristics of Mesoscale Organization in WRF Simulations of Convection during TWP-ICE
NASA Technical Reports Server (NTRS)
Del Genio, Anthony D.; Wu, Jingbo; Chen, Yonghua
2013-01-01
Compared to satellite-derived heating profiles, the Goddard Institute for Space Studies general circulation model (GCM) convective heating is too deep and its stratiform upper-level heating is too weak. This deficiency highlights the need for GCMs to parameterize the mesoscale organization of convection. Cloud-resolving model simulations of convection near Darwin, Australia, in weak wind shear environments of different humidities are used to characterize mesoscale organization processes and to provide parameterization guidance. Downdraft cold pools appear to stimulate further deep convection both through their effect on eddy size and vertical velocity. Anomalously humid air surrounds updrafts, reducing the efficacy of entrainment. Recovery of cold pool properties to ambient conditions over 5-6 h proceeds differently over land and ocean. Over ocean increased surface fluxes restore the cold pool to prestorm conditions. Over land surface fluxes are suppressed in the cold pool region; temperature decreases and humidity increases, and both then remain nearly constant, while the undisturbed environment cools diurnally. The upper-troposphere stratiform rain region area lags convection by 5-6 h under humid active monsoon conditions but by only 1-2 h during drier break periods, suggesting that mesoscale organization is more readily sustained in a humid environment. Stratiform region hydrometeor mixing ratio lags convection by 0-2 h, suggesting that it is strongly influenced by detrainment from convective updrafts. Small stratiform region temperature anomalies suggest that a mesoscale updraft parameterization initialized with properties of buoyant detrained air and evolving to a balance between diabatic heating and adiabatic cooling might be a plausible approach for GCMs.
RUIZ-RAMOS, MARGARITA; MÍNGUEZ, M. INÉS
2006-01-01
• Background Plant structural (i.e. architectural) models explicitly describe plant morphology by providing detailed descriptions of the display of leaf and stem surfaces within heterogeneous canopies and thus provide the opportunity for modelling the functioning of plant organs in their microenvironments. The outcome is a class of structural–functional crop models that combines advantages of current structural and process approaches to crop modelling. ALAMEDA is such a model. • Methods The formalism of Lindenmayer systems (L-systems) was chosen for the development of a structural model of the faba bean canopy, providing both numerical and dynamic graphical outputs. It was parameterized according to the results obtained through detailed morphological and phenological descriptions that capture the detailed geometry and topology of the crop. The analysis distinguishes between relationships of general application for all sowing dates and stem ranks and others valid only for all stems of a single crop cycle. • Results and Conclusions The results reveal that in faba bean, structural parameterization valid for the entire plant may be drawn from a single stem. ALAMEDA was formed by linking the structural model to the growth model ‘Simulation d'Allongement des Feuilles’ (SAF) with the ability to simulate approx. 3500 crop organs and components of a group of nine plants. Model performance was verified for organ length, plant height and leaf area. The L-system formalism was able to capture the complex architecture of canopy leaf area of this indeterminate crop and, with the growth relationships, generate a 3D dynamic crop simulation. Future development and improvement of the model are discussed. PMID:16390842
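The L-system formalism underlying ALAMEDA is a parallel string-rewriting scheme: at each step, every symbol is replaced according to a production rule, and the resulting string is interpreted geometrically. The toy example below illustrates the mechanism only; the actual symbols, production rules, and growth relationships for faba bean are those parameterized by the authors.

```python
def lsystem(axiom, rules, n_iter):
    """Generic parallel-rewriting L-system expansion.

    Symbols without a production rule are copied unchanged; the rules here
    are toy placeholders, not ALAMEDA's faba bean grammar.
    """
    s = axiom
    for _ in range(n_iter):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

# Toy rule: an apex (A) produces an internode (I), a leaf (L) and a new apex
toy_rules = {"A": "I[L]A"}
print(lsystem("A", toy_rules, n_iter=3))   # -> I[L]I[L]I[L]A
```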
NASA Astrophysics Data System (ADS)
La Vigna, Francesco; Hill, Mary C.; Rossetto, Rudy; Mazza, Roberto
2016-09-01
With respect to model parameterization and sensitivity analysis, this work uses a practical example to suggest that methods that start with simple models and use computationally frugal model analysis methods remain valuable in any toolbox of model development methods. In this work, groundwater model calibration starts with a simple parameterization that evolves into a moderately complex model. The model is developed for a water management study of the Tivoli-Guidonia basin (Rome, Italy) where surface mining has been conducted in conjunction with substantial dewatering. The approach to model development used in this work employs repeated analysis using sensitivity and inverse methods, including use of a new observation-stacked parameter importance graph. The methods are highly parallelizable and require few model runs, which make the repeated analyses and attendant insights possible. The success of a model development design can be measured by insights attained and demonstrated model accuracy relevant to predictions. Example insights were obtained: (1) A long-held belief that, except for a few distinct fractures, the travertine is homogeneous was found to be inadequate, and (2) The dewatering pumping rate is more critical to model accuracy than expected. The latter insight motivated additional data collection and improved pumpage estimates. Validation tests using three other recharge and pumpage conditions suggest good accuracy for the predictions considered. The model was used to evaluate management scenarios and showed that similar dewatering results could be achieved using 20 % less pumped water, but would require installing newly positioned wells and cooperation between mine owners.
NASA Astrophysics Data System (ADS)
Keating, Elizabeth H.; Doherty, John; Vrugt, Jasper A.; Kang, Qinjun
2010-10-01
Highly parameterized and CPU-intensive groundwater models are increasingly being used to understand and predict flow and transport through aquifers. Despite their frequent use, these models pose significant challenges for parameter estimation and predictive uncertainty analysis algorithms, particularly global methods which usually require very large numbers of forward runs. Here we present a general methodology for parameter estimation and uncertainty analysis that can be utilized in these situations. Our proposed method includes extraction of a surrogate model that mimics key characteristics of a full process model, followed by testing and implementation of a pragmatic uncertainty analysis technique, called null-space Monte Carlo (NSMC), that merges the strengths of gradient-based search and parameter dimensionality reduction. As part of the surrogate model analysis, the results of NSMC are compared with a formal Bayesian approach using the DiffeRential Evolution Adaptive Metropolis (DREAM) algorithm. Such a comparison has never been accomplished before, especially in the context of high parameter dimensionality. Despite the highly nonlinear nature of the inverse problem, the existence of multiple local minima, and the relatively large parameter dimensionality, both methods performed well and results compare favorably with each other. Experiences gained from the surrogate model analysis are then transferred to calibrate the full highly parameterized and CPU intensive groundwater model and to explore predictive uncertainty of predictions made by that model. The methodology presented here is generally applicable to any highly parameterized and CPU-intensive environmental model, where efficient methods such as NSMC provide the only practical means for conducting predictive uncertainty analysis.
Cloud Simulations in Response to Turbulence Parameterizations in the GISS Model E GCM
NASA Technical Reports Server (NTRS)
Yao, Mao-Sung; Cheng, Ye
2013-01-01
The response of cloud simulations to turbulence parameterizations is studied systematically using the GISS general circulation model (GCM) E2 employed in the Intergovernmental Panel on Climate Change (IPCC) Fifth Assessment Report (AR5). Without the turbulence parameterization, the relative humidity (RH) and the low cloud cover peak unrealistically close to the surface; with the dry convection or only the local turbulence parameterization, the vertical structures of these two quantities improve, but the vertical transport of water vapor remains weak in the planetary boundary layer (PBL); with both local and nonlocal turbulence parameterizations, the RH and low cloud cover have better vertical structures at all latitudes owing to more significant vertical transport of water vapor in the PBL. The study also compares the cloud and radiation climatologies obtained from an experiment using a newer version of the turbulence parameterization being developed at GISS with those obtained from the AR5 version. This newer scheme differs from the AR5 version in its computation of nonlocal transports, turbulent length scale, and PBL height, and it shows significant improvements in cloud and radiation simulations, especially over the subtropical eastern oceans and the southern oceans. The diagnosed PBL heights appear to correlate well with the low cloud distribution over the oceans. This suggests that a cloud-producing scheme should be constructed in a framework that also takes turbulence into consideration.
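As a generic illustration of the difference between local and nonlocal (countergradient) turbulence closures discussed above, the sketch below evaluates a heat flux of the assumed form -K_h (dθ/dz - γ); the GISS schemes' actual formulations of nonlocal transport, turbulent length scale, and PBL height are not reproduced here, and the profile values are hypothetical.

```python
# Illustrative only: a countergradient turbulent heat-flux closure of the generic
# form w'theta' = -K_h * (dtheta/dz - gamma); gamma = 0 recovers a purely local closure.
import numpy as np

def nonlocal_heat_flux(theta, z, K_h, gamma):
    """Turbulent heat flux on layer interfaces with a countergradient correction gamma."""
    dtheta_dz = np.diff(theta) / np.diff(z)
    return -K_h * (dtheta_dz - gamma)

# Hypothetical profile: slightly stable potential-temperature gradient in the PBL
z = np.linspace(0.0, 1000.0, 11)   # height, m
theta = 300.0 + 0.002 * z          # potential temperature, K
flux_local = nonlocal_heat_flux(theta, z, K_h=50.0, gamma=0.0)
flux_nonlocal = nonlocal_heat_flux(theta, z, K_h=50.0, gamma=0.005)
print(flux_local[:3], flux_nonlocal[:3])  # the nonlocal term can drive upgradient transport
```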
Latency Hiding in Dynamic Partitioning and Load Balancing of Grid Computing Applications
NASA Technical Reports Server (NTRS)
Das, Sajal K.; Harvey, Daniel J.; Biswas, Rupak
2001-01-01
The Information Power Grid (IPG) concept developed by NASA aims to provide a metacomputing platform for large-scale distributed computations by hiding the intricacies of a highly heterogeneous environment while maintaining adequate security. In this paper, we propose a latency-tolerant partitioning scheme that dynamically balances processor workloads on the IPG and minimizes data movement and runtime communication. By simulating an unsteady adaptive mesh application on a wide area network, we study the performance of our load balancer under the Globus environment. The number of IPG nodes, the number of processors per node, and the interconnect speeds are parameterized to derive conditions under which the IPG would be suitable for parallel distributed processing of such applications. Experimental results demonstrate that effective solutions are achieved when the IPG nodes are connected by a high-speed asynchronous interconnection network.
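The sketch below is a hypothetical, greatly simplified illustration of latency-aware partitioning of the kind described above, not the authors' load balancer: mesh partitions are assigned greedily to grid nodes while penalizing communication over slow wide-area links more heavily than intra-node communication, so heavily communicating partitions tend to be co-located.

```python
# Minimal sketch (assumed cost model): greedy assignment of mesh partitions to nodes,
# balancing compute load while penalizing wide-area (inter-node) communication.
import numpy as np

def assign_partitions(work, comm, n_nodes, wan_penalty=10.0):
    """work[i]: compute weight of partition i; comm[i][j]: communication volume between i and j."""
    n = len(work)
    node_load = np.zeros(n_nodes)
    assignment = -np.ones(n, dtype=int)
    for i in np.argsort(work)[::-1]:          # place the heaviest partitions first
        costs = []
        for node in range(n_nodes):
            remote = sum(comm[i][j] for j in range(n)
                         if assignment[j] >= 0 and assignment[j] != node)
            costs.append(node_load[node] + work[i] + wan_penalty * remote)
        best = int(np.argmin(costs))
        assignment[i] = best
        node_load[best] += work[i]
    return assignment

work = np.array([5.0, 3.0, 8.0, 2.0])
comm = np.array([[0, 4, 1, 0],
                 [4, 0, 0, 2],
                 [1, 0, 0, 3],
                 [0, 2, 3, 0]], dtype=float)
print(assign_partitions(work, comm, n_nodes=2))
```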
Precomputing upscaled hydraulic conductivity for complex geological structures
NASA Astrophysics Data System (ADS)
Mariethoz, G.; Jha, S. K.; George, M.; Maheswarajah, S.; John, V.; De Re, D.; Smith, M.
2013-12-01
3D geological models are built to capture geological heterogeneity at a fine scale. However, groundwater modellers are often interested in hydraulic conductivity (K) values at a much coarser scale to reduce the numerical burden. Upscaling is used to assign conductivity to large volumes, which necessarily causes a loss of information. Recent literature has shown that connectivity in channelized structures is an important feature that needs to be taken into account for accurate upscaling. In this work we study the effect of channel parameters (e.g., width, sinuosity, and connectivity) on the upscaled values of hydraulic conductivity and the associated uncertainty. We devise a methodology that derives correspondences between a lithological description and the equivalent hydraulic conductivity at a larger scale. The method uses multiple-point geostatistics (MPS) simulations and parameterizes the 3D structures by introducing continuous rotation and affinity parameters. Additional statistical characterization is obtained through transition probabilities and connectivity measures. Equivalent hydraulic conductivity is then estimated by solving a steady-state flow problem over the entire heterogeneous domain in both horizontal and vertical directions. This is performed systematically for many random realisations of the small-scale structures to obtain a probability distribution of the equivalent upscaled hydraulic conductivity. This process yields systematic relationships between a given depositional environment and precomputed equivalent parameters. A modeller can then exploit prior knowledge of the depositional environment and expected geological heterogeneity to bypass the step of generating small-scale models and work directly with upscaled values.
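The sketch below is a drastic 1-D simplification of the 3-D MPS workflow described above, offered only to illustrate the Monte Carlo step: for many random realisations of a small-scale K field, an equivalent conductivity is computed and accumulated into a probability distribution. For 1-D series flow the flow solve reduces to the harmonic mean; in 2-D or 3-D a full steady-state flow problem would be solved instead. All distribution parameters are hypothetical.

```python
# Minimal sketch: distribution of the equivalent (upscaled) K over random realisations.
import numpy as np

rng = np.random.default_rng(42)

def equivalent_k_series(k_cells):
    """Equivalent K for flow through cells in series (harmonic mean)."""
    return len(k_cells) / np.sum(1.0 / k_cells)

n_realisations, n_cells = 1000, 50
k_eq = np.array([
    equivalent_k_series(rng.lognormal(mean=0.0, sigma=1.0, size=n_cells))
    for _ in range(n_realisations)
])
print("median K_eq:", np.median(k_eq), "| 5-95% range:", np.percentile(k_eq, [5, 95]))
```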
Evaluation of Aerosol-cloud Interaction in the GISS Model E Using ARM Observations
NASA Technical Reports Server (NTRS)
DeBoer, G.; Bauer, S. E.; Toto, T.; Menon, Surabi; Vogelmann, A. M.
2013-01-01
Observations from the US Department of Energy's Atmospheric Radiation Measurement (ARM) program are used to evaluate the ability of the NASA GISS ModelE global climate model to reproduce observed interactions between aerosols and clouds. The evaluation includes comparisons of basic meteorology and aerosol properties, droplet activation, effective radius parameterizations, and surface-based estimates of aerosol-cloud interactions (ACI). Differences between the simulated and observed ACI are generally large, but these differences may result partly from the vertical distribution of aerosol in the model rather than from the representation of the physical processes governing the interactions between aerosols and clouds. Compared with the observations, ModelE often features elevated droplet concentrations for a given aerosol concentration, indicating that the activation parameterizations used may be too aggressive. Additionally, effective radius parameterizations commonly used in models were tested against ARM observations, and no clearly superior parameterization emerged for the cases reviewed here. This lack of consensus is shown to produce potentially large, statistically significant differences in surface radiative budgets, should one parameterization be chosen over another.
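As an illustration of the family of effective-radius parameterizations referred to above, the sketch below evaluates one widely used form, in which r_eff is diagnosed from liquid water content and droplet number with a spectral shape factor k; this particular form and the input values are assumptions for illustration, not necessarily the specific schemes tested in the paper.

```python
# Illustrative effective-radius parameterization: r_eff from LWC and droplet number.
import numpy as np

RHO_W = 1000.0  # density of liquid water, kg m^-3

def effective_radius(lwc, n_drop, k=0.8):
    """r_eff in metres; lwc in kg m^-3, n_drop in m^-3, k a spectral shape factor."""
    return (3.0 * lwc / (4.0 * np.pi * RHO_W * k * n_drop)) ** (1.0 / 3.0)

# Hypothetical stratocumulus-like values
lwc = 0.3e-3      # kg m^-3
n_drop = 100e6    # m^-3 (about 100 cm^-3)
print(effective_radius(lwc, n_drop) * 1e6, "micrometres")  # roughly 10 micrometres
```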
Twofold symmetries of the pure gravity action
Cheung, Clifford; Remmen, Grant N.
2017-01-25
Here, we recast the action of pure gravity into a form that is invariant under a twofold Lorentz symmetry. To derive this representation, we construct a general parameterization of all theories equivalent to the Einstein-Hilbert action up to a local field redefinition and gauge fixing. We then exploit this freedom to eliminate all interactions except those exhibiting two sets of independently contracted Lorentz indices. The resulting action is local, remarkably simple, and naturally expressed in a field basis analogous to the exponential parameterization of the nonlinear sigma model. The space of twofold Lorentz invariant field redefinitions then generates an infinite class of equivalent representations. By construction, all off-shell Feynman diagrams are twofold Lorentz invariant while all on-shell tree amplitudes are automatically twofold gauge invariant. We extend our results to curved spacetime and calculate the analogue of the Einstein equations. Finally, while these twofold invariances are hidden in the canonical approach of graviton perturbation theory, they are naturally expected given the double copy relations for scattering amplitudes in gauge theory and gravity.
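For orientation only, the block below spells out the analogy invoked in the abstract: the exponential parameterization of the nonlinear sigma model and a generic exponential graviton basis. The graviton expression is an assumed illustrative form, not a quotation of the paper's exact field definition.

```latex
% Nonlinear sigma model: Goldstone fields enter through a matrix exponential.
U = \exp\!\left(\tfrac{i}{f}\,\pi^a T^a\right)
% Analogous exponential graviton basis (illustrative form; the paper's precise
% field definition may differ):
g_{\mu\nu} = \eta_{\mu\alpha}\,\bigl(e^{h}\bigr)^{\alpha}{}_{\nu}
           = \eta_{\mu\nu} + h_{\mu\nu} + \tfrac{1}{2}\,h_{\mu}{}^{\alpha} h_{\alpha\nu} + \cdots ,
% in contrast to the canonical linear split  g_{\mu\nu} = \eta_{\mu\nu} + h_{\mu\nu}.
```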