Approaches to highly parameterized inversion: A guide to using PEST for groundwater-model calibration
Doherty, John E.; Hunt, Randall J.
2010-01-01
Highly parameterized groundwater models can create calibration difficulties. Regularized inversion, the combined use of large numbers of parameters with mathematical approaches for stable parameter estimation, is becoming a common approach to address these difficulties and enhance the transfer of information contained in field measurements to the parameters used to model the system. Though commonly used in other industries, regularized inversion is somewhat imperfectly understood in the groundwater field. There is concern that this unfamiliarity can lead to underuse, and misuse, of the methodology. This document is constructed to facilitate the appropriate use of regularized inversion for calibrating highly parameterized groundwater models. The presentation is directed at an intermediate- to advanced-level modeler, and it focuses on the PEST software suite, a frequently used tool for highly parameterized model calibration and one that is widely supported by commercial graphical user interfaces. A brief overview of the regularized inversion approach is provided, and techniques for mathematical regularization offered by PEST are outlined, including Tikhonov, subspace, and hybrid schemes. Guidelines for applying regularized inversion techniques are presented following a logical progression of steps for building suitable PEST input. The discussion starts with the use of pilot points as a parameterization device and the processing/grouping of observations to form multicomponent objective functions. A description of potential parameter solution methodologies and of resources available through the PEST software and its supporting utility programs follows. Directing the parameter-estimation process through PEST control variables is then discussed, including guidance for monitoring and optimizing the performance of PEST. Comprehensive listings of PEST control variables, and of the roles performed by PEST utility support programs, are presented in the appendixes.
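As a concrete illustration of the Tikhonov scheme mentioned above, the sketch below solves a small, hypothetical underdetermined linear inverse problem with preferred-value regularization (all operators and values are synthetic stand-ins, not PEST itself):

```python
import numpy as np

# Illustrative sketch of Tikhonov-regularized inversion: minimize
# ||G m - d||^2 + beta^2 ||R (m - m0)||^2, the kind of penalty a
# Tikhonov mode adds to keep a highly parameterized problem stable.
rng = np.random.default_rng(0)
n_obs, n_par = 20, 50                       # more parameters than observations
G = rng.standard_normal((n_obs, n_par))     # linearized forward operator (synthetic)
m_true = np.ones(n_par)
d = G @ m_true + 0.01 * rng.standard_normal(n_obs)

beta = 1.0                                  # regularization weight
m0 = np.zeros(n_par)                        # preferred (prior) parameter values
R = np.eye(n_par)                           # identity: preferred-value regularization

# Normal equations of the regularized least-squares problem
A = G.T @ G + beta**2 * (R.T @ R)
b = G.T @ d + beta**2 * (R.T @ R) @ m0
m_est = np.linalg.solve(A, b)
```

Without the beta-weighted penalty the 20-by-50 system would have infinitely many solutions; the regularization selects the one closest (in the R-norm) to the preferred values m0.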
Fienen, Michael N.; D'Oria, Marco; Doherty, John E.; Hunt, Randall J.
2013-01-01
The application bgaPEST is a highly parameterized inversion software package implementing the Bayesian Geostatistical Approach in a framework compatible with the parameter estimation suite PEST. Highly parameterized inversion refers to cases in which parameters are distributed in space or time and are correlated with one another. The Bayesian aspect of bgaPEST is related to Bayesian probability theory, in which prior information about parameters is formally revised on the basis of the calibration dataset used for the inversion. Conceptually, this approach formalizes the conditionality of estimated parameters on the specific data and model available. The geostatistical component of the method refers to the way in which prior information about the parameters is used. A geostatistical autocorrelation function is used to enforce structure on the parameters to avoid overfitting and unrealistic results. The Bayesian Geostatistical Approach is designed to provide the smoothest solution that is consistent with the data. Optionally, users can specify a level of fit or estimate a balance between fit and model complexity informed by the data. Groundwater and surface-water applications are used as examples in this text, but the possible uses of bgaPEST extend to any distributed parameter applications.
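A minimal sketch of the geostatistical prior idea described above (not bgaPEST itself, and with an assumed exponential autocovariance and illustrative structural parameters): the covariance matrix built from an autocorrelation function is what enforces spatial structure on distributed parameters.

```python
import numpy as np

# Exponential autocovariance on a 1-D parameter grid; Q plays the role of
# the geostatistical prior covariance that regularizes the inversion.
x = np.linspace(0.0, 10.0, 60)              # 1-D parameter locations
variance, corr_len = 1.0, 2.0               # assumed structural parameters
Q = variance * np.exp(-np.abs(x[:, None] - x[None, :]) / corr_len)

# Correlated prior realizations: Q^(1/2) applied to white noise
L = np.linalg.cholesky(Q + 1e-10 * np.eye(x.size))  # jitter for numerical stability
rng = np.random.default_rng(1)
realization = L @ rng.standard_normal(x.size)
```

Draws from this prior are smooth at the scale of corr_len, which is how the approach yields "the smoothest solution consistent with the data" rather than an overfit one.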
NASA Astrophysics Data System (ADS)
Luo, Ning; Zhao, Zhanfeng; Illman, Walter A.; Berg, Steven J.
2017-11-01
Transient hydraulic tomography (THT) is a robust method of aquifer characterization to estimate the spatial distributions (or tomograms) of both hydraulic conductivity (K) and specific storage (Ss). However, the highly-parameterized nature of the geostatistical inversion approach renders it computationally intensive for large-scale investigations. In addition, geostatistics-based THT may produce overly smooth tomograms when head data used to constrain the inversion are limited. Therefore, alternative model conceptualizations for THT need to be examined. To investigate this, we simultaneously calibrated different groundwater models with varying parameterizations and zonations using two cases of different pumping and monitoring data densities from a laboratory sandbox. Specifically, one effective parameter model, four geology-based zonation models with varying accuracy and resolution, and five geostatistical models with different prior information are calibrated. Model performance is quantitatively assessed by examining the calibration and validation results. Our study reveals that highly parameterized geostatistical models perform the best among the models compared, while the zonation model with excellent knowledge of stratigraphy also yields comparable results. When few pumping tests with sparse monitoring intervals are available, the incorporation of accurate or simplified geological information into geostatistical models reveals more details in heterogeneity and yields more robust validation results. However, results deteriorate when inaccurate geological information is incorporated. Finally, our study reveals that transient inversions are necessary to obtain reliable K and Ss estimates for making accurate predictions of transient drawdown events.
Non-perturbational surface-wave inversion: A Dix-type relation for surface waves
Haney, Matt; Tsai, Victor C.
2015-01-01
We extend the approach underlying the well-known Dix equation in reflection seismology to surface waves. Within the context of surface wave inversion, the Dix-type relation we derive for surface waves allows accurate depth profiles of shear-wave velocity to be constructed directly from phase velocity data, in contrast to perturbational methods. The depth profiles can subsequently be used as an initial model for nonlinear inversion. We provide examples of the Dix-type relation for under-parameterized and over-parameterized cases. In the under-parameterized case, we use the theory to estimate crustal thickness, crustal shear-wave velocity, and mantle shear-wave velocity across the Western U.S. from phase velocity maps measured at 8-, 20-, and 40-s periods. By adopting a thin-layer formalism and an over-parameterized model, we show how a regularized inversion based on the Dix-type relation yields smooth depth profiles of shear-wave velocity. In the process, we quantitatively demonstrate the depth sensitivity of surface-wave phase velocity as a function of frequency and the accuracy of the Dix-type relation. We apply the over-parameterized approach to a near-surface data set within the frequency band from 5 to 40 Hz and find overall agreement between the inverted model and the result of full nonlinear inversion.
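For readers unfamiliar with the starting point of this extension, the classic Dix relation of reflection seismology recovers an interval velocity from RMS velocities measured at two two-way travel times; the sketch below uses illustrative values, not data from the study:

```python
import numpy as np

# Classic Dix equation: v_int^2 = (t2*v2^2 - t1*v1^2) / (t2 - t1),
# the relation the paper generalizes from reflections to surface waves.
def dix_interval_velocity(v_rms1, t1, v_rms2, t2):
    """Interval velocity between two-way times t1 and t2 (t2 > t1)."""
    return np.sqrt((t2 * v_rms2**2 - t1 * v_rms1**2) / (t2 - t1))

v_int = dix_interval_velocity(2000.0, 1.0, 2200.0, 1.5)  # m/s and s, illustrative
```

The appeal mirrored in the surface-wave version is the same: a depth (or interval) property is obtained directly from measured dispersion-like quantities, with no perturbational linearization about a starting model.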
NASA Astrophysics Data System (ADS)
Gao, C.; Lekic, V.
2016-12-01
When constraining the structure of the Earth's continental lithosphere, multiple seismic observables are often combined due to their complementary sensitivities. The transdimensional Bayesian (TB) approach in seismic inversion allows model parameter uncertainties and trade-offs to be quantified with few assumptions. TB sampling yields an adaptive parameterization that enables simultaneous inversion for different model parameters (Vp, Vs, density, radial anisotropy), without the need for strong prior information or regularization. We use a reversible jump Markov chain Monte Carlo (rjMcMC) algorithm to incorporate different seismic observables - surface wave dispersion (SWD), Rayleigh wave ellipticity (ZH ratio), and receiver functions - into the inversion for the profiles of shear velocity (Vs), compressional velocity (Vp), density (ρ), and radial anisotropy (ξ) beneath a seismic station. By analyzing all three data types individually and together, we show that TB sampling can eliminate the need for a fixed parameterization based on prior information, and reduce trade-offs in model estimates. We then explore the effect of different types of misfit functions for receiver function inversion, which is a highly non-unique problem. We compare the synthetic inversion results obtained with L2-norm, cross-correlation-type, and integral-type misfit functions in terms of their convergence rates and retrieved seismic structures. In inversions in which only one type of model parameter (Vs for the case of SWD) is inverted, assumed scaling relationships are often applied to account for sensitivity to other model parameters (e.g. Vp, ρ, ξ). Here we show that under a TB framework, we can eliminate scaling assumptions, while simultaneously constraining multiple model parameters to varying degrees. Furthermore, we compare the performance of TB inversion when different types of model parameters either share the same or use independent parameterizations.
We show that different parameterizations can lead to differences in retrieved model parameters, consistent with limited data constraints. We then quantitatively examine the model parameter trade-offs and find that trade-offs between Vp and radial anisotropy might limit our ability to constrain shallow-layer radial anisotropy using current seismic observables.
NASA Astrophysics Data System (ADS)
Laloy, Eric; Hérault, Romain; Lee, John; Jacques, Diederik; Linde, Niklas
2017-12-01
Efficient and high-fidelity prior sampling and inversion for complex geological media is still a largely unsolved challenge. Here, we use a deep neural network of the variational autoencoder type to construct a parametric low-dimensional base model parameterization of complex binary geological media. For inversion purposes, it has the attractive feature that random draws from an uncorrelated standard normal distribution yield model realizations with spatial characteristics that are in agreement with the training set. In comparison with the most commonly used parametric representations in probabilistic inversion, we find that our dimensionality reduction (DR) approach outperforms principal component analysis (PCA), optimization-PCA (OPCA) and discrete cosine transform (DCT) DR techniques for unconditional geostatistical simulation of a channelized prior model. For the considered examples, substantial compression ratios (200-500) are achieved. Given that the construction of our parameterization requires a training set of several tens of thousands of prior model realizations, our DR approach is more suited for probabilistic (or deterministic) inversion than for unconditional (or point-conditioned) geostatistical simulation. Probabilistic inversions of 2D steady-state and 3D transient hydraulic tomography data are used to demonstrate the DR-based inversion. For the 2D case study, the performance is superior compared to current state-of-the-art multiple-point statistics inversion by sequential geostatistical resampling (SGR). Inversion results for the 3D application are also encouraging.
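The PCA baseline that the deep-learning parameterization is compared against can be sketched in a few lines: compress an ensemble of model realizations with a truncated SVD and measure the reconstruction error. The ensemble below is a synthetic stand-in (random-walk fields), not the channelized training set of the study.

```python
import numpy as np

# PCA-style dimensionality reduction of a model ensemble via truncated SVD.
rng = np.random.default_rng(2)
n_models, n_cells = 200, 400
ensemble = rng.standard_normal((n_models, n_cells)).cumsum(axis=1)  # smooth-ish fields

mean = ensemble.mean(axis=0)
U, s, Vt = np.linalg.svd(ensemble - mean, full_matrices=False)
k = 20                                       # retained components (compression 400 -> 20)
coeffs = U[:, :k] * s[:k]                    # low-dimensional representation
recon = mean + coeffs @ Vt[:k]               # back-projection onto the grid

rel_err = np.linalg.norm(recon - ensemble) / np.linalg.norm(ensemble)
```

The limitation motivating the paper is visible in this linear construction: PCA reconstructions are weighted sums of smooth basis vectors, so sharp binary channel boundaries are blurred, whereas the variational autoencoder learns a nonlinear mapping that can preserve them.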
A Simple Parameterization of 3 x 3 Magic Squares
ERIC Educational Resources Information Center
Trenkler, Gotz; Schmidt, Karsten; Trenkler, Dietrich
2012-01-01
In this article a new parameterization of magic squares of order three is presented. This parameterization permits an easy computation of their inverses, eigenvalues, eigenvectors and adjoints. Some attention is paid to the Luoshu, one of the oldest magic squares.
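One common three-parameter form of a 3x3 magic square (a sketch; the article's own parameterization may use different symbols) makes the constant-sum property easy to verify, and a particular choice of parameters recovers the Luoshu:

```python
import numpy as np

# Three-parameter magic square: every row, column, and diagonal sums to 3c.
# With (a, b, c) = (-1, -3, 5) this reproduces the Luoshu square.
def magic_square(a, b, c):
    return np.array([[c + a,     c - a - b, c + b],
                     [c - a + b, c,         c + a - b],
                     [c - b,     c + a + b, c - a]])

luoshu = magic_square(-1, -3, 5)
```

Because each entry is linear in (a, b, c), properties such as eigenvalues and inverses of the square reduce to algebra in three symbols, which is the computational convenience the article exploits.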
NASA Astrophysics Data System (ADS)
Lin, Youzuo; O'Malley, Daniel; Vesselinov, Velimir V.
2016-09-01
Inverse modeling seeks model parameters given a set of observations. However, for practical problems because the number of measurements is often large and the model parameters are also numerous, conventional methods for inverse modeling can be computationally expensive. We have developed a new, computationally efficient parallel Levenberg-Marquardt method for solving inverse modeling problems with a highly parameterized model space. Levenberg-Marquardt methods require the solution of a linear system of equations which can be prohibitively expensive to compute for moderate to large-scale problems. Our novel method projects the original linear problem down to a Krylov subspace such that the dimensionality of the problem can be significantly reduced. Furthermore, we store the Krylov subspace computed when using the first damping parameter and recycle the subspace for the subsequent damping parameters. The efficiency of our new inverse modeling algorithm is significantly improved using these computational techniques. We apply this new inverse modeling method to invert for random transmissivity fields in 2-D and a random hydraulic conductivity field in 3-D. Our algorithm is fast enough to solve for the distributed model parameters (transmissivity) in the model domain. The algorithm is coded in Julia and implemented in the MADS computational framework (http://mads.lanl.gov). By comparing with Levenberg-Marquardt methods using standard linear inversion techniques such as QR or SVD methods, our Levenberg-Marquardt method yields a speed-up ratio on the order of ~10¹ to ~10² in a multicore computational environment. Therefore, our new inverse modeling method is a powerful tool for characterizing subsurface heterogeneity for moderate to large-scale problems.
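A much-simplified sketch of one Levenberg-Marquardt step, with the damped normal equations solved iteratively by conjugate gradients to echo the Krylov-subspace idea (the subspace recycling across damping parameters, and the parallelism, are omitted; all matrices are synthetic):

```python
import numpy as np

# One LM update: solve (J^T J + lam I) delta = J^T r with conjugate gradients.
def cg(A, b, n_iter=50, tol=1e-10):
    """Conjugate gradient solve of A x = b for symmetric positive definite A."""
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rs = r @ r
    for _ in range(n_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

rng = np.random.default_rng(3)
J = rng.standard_normal((30, 10))        # Jacobian of the residuals (synthetic)
res = rng.standard_normal(30)            # current residual vector
lam = 0.1                                # LM damping parameter
A = J.T @ J + lam * np.eye(10)
delta = cg(A, J.T @ res)                 # LM update direction
```

Because CG only touches A through matrix-vector products, the cost per iteration stays low even when the parameter space is large, which is the property the paper's Krylov projection exploits.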
The novel high-performance 3-D MT inverse solver
NASA Astrophysics Data System (ADS)
Kruglyakov, Mikhail; Geraskin, Alexey; Kuvshinov, Alexey
2016-04-01
We present a novel, robust, scalable, and fast 3-D magnetotelluric (MT) inverse solver. The solver is written in a multi-language paradigm to make it as efficient, readable and maintainable as possible. Separation-of-concerns and single-responsibility principles guide the implementation of the solver. As a forward modelling engine, the modern scalable solver extrEMe, based on a contracting integral equation approach, is used. An iterative gradient-type (quasi-Newton) optimization scheme is invoked to search for the (regularized) inverse problem solution, and an adjoint source approach is used to calculate efficiently the gradient of the misfit. The inverse solver is able to deal with highly detailed and contrasting models, allows for working (separately or jointly) with any type of MT responses, and supports massive parallelization. Moreover, different parallelization strategies implemented in the code allow optimal usage of available computational resources for a given problem statement. To parameterize an inverse domain the so-called mask parameterization is implemented, which means that one can merge any subset of forward modelling cells in order to account for the (usually) irregular distribution of observation sites. We report results of 3-D numerical experiments aimed at analysing the robustness, performance and scalability of the code. In particular, our computational experiments carried out on different platforms ranging from modern laptops to the HPC system Piz Daint (the world's 6th-ranked supercomputer) demonstrate practically linear scalability of the code up to thousands of nodes.
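The mask parameterization described above can be sketched with an index array: each forward-modelling cell stores the id of the inversion parameter it belongs to, so arbitrary subsets of cells merge into one unknown (the indices and values below are illustrative, not from the solver):

```python
import numpy as np

# Mask parameterization: cell -> parameter-id lookup merges forward cells
# into a smaller set of inversion unknowns.
n_cells = 12
mask = np.array([0, 0, 1, 1, 1, 2, 2, 0, 3, 3, 3, 3])   # cell i uses parameter mask[i]
params = np.array([10.0, 250.0, 40.0, 1.0])             # e.g. resistivities per unknown

cell_values = params[mask]               # expand the unknowns onto the forward grid
```

Gradients flow the other way: sensitivities computed per forward cell are summed over each masked group (e.g. with np.add.at) to give the gradient with respect to the merged parameter.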
On the joint inversion of geophysical data for models of the coupled core-mantle system
NASA Technical Reports Server (NTRS)
Voorhies, Coerte V.
1991-01-01
Joint inversion of magnetic, earth rotation, geoid, and seismic data for a unified model of the coupled core-mantle system is proposed and shown to be possible. A sample objective function is offered and simplified by targeting results from independent inversions and summary travel time residuals instead of original observations. These data are parameterized in terms of a very simple, closed model of the topographically coupled core-mantle system. Minimization of the simplified objective function leads to a nonlinear inverse problem; an iterative method for solution is presented. Parameterization and method are emphasized; numerical results are not presented.
Parameterizations for ensemble Kalman inversion
NASA Astrophysics Data System (ADS)
Chada, Neil K.; Iglesias, Marco A.; Roininen, Lassi; Stuart, Andrew M.
2018-05-01
The use of ensemble methods to solve inverse problems is attractive because it is a derivative-free methodology which is also well-adapted to parallelization. In its basic iterative form the method produces an ensemble of solutions which lie in the linear span of the initial ensemble. Choice of the parameterization of the unknown field is thus a key component of the success of the method. We demonstrate how both geometric ideas and hierarchical ideas can be used to design effective parameterizations for a number of applied inverse problems arising in electrical impedance tomography, groundwater flow and source inversion. In particular we show how geometric ideas, including the level set method, can be used to reconstruct piecewise continuous fields, and we show how hierarchical methods can be used to learn key parameters in continuous fields, such as length-scales, resulting in improved reconstructions. Geometric and hierarchical ideas are combined in the level set method to find piecewise constant reconstructions with interfaces of unknown topology.
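The level set idea used above for piecewise continuous fields admits a very short sketch: a smooth scalar field defines a piecewise constant model by thresholding at zero, so the interface geometry is controlled by a continuous unknown (the field and facies values here are illustrative):

```python
import numpy as np

# Level set parameterization: phi > 0 marks one facies, phi <= 0 the other,
# so the unknown interface is the zero contour of a smooth field.
n = 64
x, y = np.meshgrid(np.linspace(-1, 1, n), np.linspace(-1, 1, n))
phi = 0.5 - np.sqrt(x**2 + y**2)          # signed-distance-like level set field

k_inside, k_outside = 1e-3, 1e-6          # two facies values (e.g. conductivities)
model = np.where(phi > 0, k_inside, k_outside)
```

Updating phi (rather than the discontinuous model itself) is what lets ensemble Kalman iterations reconstruct interfaces of unknown topology: the zero contour can split, merge, or disappear as phi evolves.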
On parameterization of the inverse problem for estimating aquifer properties using tracer data
NASA Astrophysics Data System (ADS)
Kowalsky, M. B.; Finsterle, S.; Williams, K. H.; Murray, C.; Commer, M.; Newcomer, D.; Englert, A.; Steefel, C. I.; Hubbard, S. S.
2012-06-01
In developing a reliable approach for inferring hydrological properties through inverse modeling of tracer data, decisions made on how to parameterize heterogeneity (i.e., how to represent a heterogeneous distribution using a limited number of parameters that are amenable to estimation) are of paramount importance, as errors in the model structure are partly compensated for by estimating biased property values during the inversion. These biased estimates, while potentially providing an improved fit to the calibration data, may lead to wrong interpretations and conclusions and reduce the ability of the model to make reliable predictions. We consider the estimation of spatial variations in permeability and several other parameters through inverse modeling of tracer data, specifically synthetic and actual field data associated with the 2007 Winchester experiment from the Department of Energy Rifle site. Characterization is challenging due to the real-world complexities associated with field experiments in such a dynamic groundwater system. Our aim is to highlight and quantify the impact on inversion results of various decisions related to parameterization, such as the positioning of pilot points in a geostatistical parameterization; the handling of up-gradient regions; the inclusion of zonal information derived from geophysical data or core logs; extension from 2-D to 3-D; assumptions regarding the gradient direction, porosity, and the semivariogram function; and deteriorating experimental conditions. This work adds to the relatively limited number of studies that offer guidance on the use of pilot points in complex real-world experiments involving tracer data (as opposed to hydraulic head data).
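The pilot-point device discussed above estimates property values at a handful of locations and spreads them to the full model grid by spatial interpolation. The sketch below uses inverse-distance weighting for brevity; real studies, including this one, would krige with a calibrated semivariogram instead, and all coordinates and values here are illustrative.

```python
import numpy as np

# Pilot points: estimate log-permeability at a few locations, interpolate
# to the grid so the inversion has few unknowns but a distributed field.
def idw_interpolate(xy_pilots, values, xy_grid, power=2.0):
    d = np.linalg.norm(xy_grid[:, None, :] - xy_pilots[None, :, :], axis=2)
    d = np.maximum(d, 1e-12)              # avoid division by zero at pilot locations
    w = 1.0 / d**power
    return (w * values).sum(axis=1) / w.sum(axis=1)

pilots = np.array([[0.2, 0.2], [0.8, 0.3], [0.5, 0.8]])   # pilot-point locations
logk_at_pilots = np.array([-12.0, -10.0, -11.0])          # estimated log10(k)

gx, gy = np.meshgrid(np.linspace(0, 1, 20), np.linspace(0, 1, 20))
grid = np.column_stack([gx.ravel(), gy.ravel()])
logk_field = idw_interpolate(pilots, logk_at_pilots, grid)
```

The positioning question the paper studies is visible even here: the interpolated field can only express heterogeneity at scales the pilot-point spacing resolves.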
NASA Astrophysics Data System (ADS)
Pasquet, Simon; Bouruet-Aubertot, Pascale; Reverdin, Gilles; Thurnherr, Andreas; St. Laurent, Louis
2016-06-01
The relevance of finescale parameterizations of the dissipation rate of turbulent kinetic energy is addressed using finescale and microstructure measurements collected in the Lucky Strike segment of the Mid-Atlantic Ridge (MAR). There, high-amplitude internal tides and a strongly sheared mean flow sustain a high level of dissipation rate and turbulent mixing. Two sets of parameterizations are considered: the first (Gregg, 1989; Kunze et al., 2006) was derived to estimate the dissipation rate of turbulent kinetic energy induced by internal wave breaking, while the second, a function of the Richardson number, aims to estimate dissipation induced by shear instability of a strongly sheared mean flow (Kunze et al., 1990; Polzin, 1996). The latter parameterization has low skill in reproducing the observed dissipation rate when shear-unstable events are resolved, presumably because there is no scale separation between the duration of unstable events and the inverse growth rate of unstable billows. Instead, Garrett-Munk (GM) based parameterizations were found to be relevant, although slight biases were observed. Part of these biases results from the small value of the upper vertical wavenumber integration limit in the computation of shear variance in the Kunze et al. (2006) parameterization, which does not take into account internal wave signal at high vertical wavenumbers. We showed that significant improvement is obtained when the upper integration limit is set using a signal-to-noise ratio criterion and that the spatial structure of dissipation rates is reproduced with this parameterization.
Sensitivity analyses of acoustic impedance inversion with full-waveform inversion
NASA Astrophysics Data System (ADS)
Yao, Gang; da Silva, Nuno V.; Wu, Di
2018-04-01
Acoustic impedance estimation is of significant importance to seismic exploration. In this paper, we use full-waveform inversion to recover the impedance from seismic data, and analyze the sensitivity of the acoustic impedance with respect to the source-receiver offset of seismic data and to the initial velocity model. We parameterize the acoustic wave equation with velocity and impedance, and demonstrate three key aspects of acoustic impedance inversion. First, short-offset data are most suitable for acoustic impedance inversion. Second, acoustic impedance inversion is more compatible with the data generated by density contrasts than velocity contrasts. Finally, acoustic impedance inversion requires the starting velocity model to be very accurate for achieving a high-quality inversion. Based upon these observations, we propose a workflow for acoustic impedance inversion as: (1) building a background velocity model with travel-time tomography or reflection waveform inversion; (2) recovering the intermediate wavelength components of the velocity model with full-waveform inversion constrained by Gardner's relation; (3) inverting the high-resolution acoustic impedance model with short-offset data through full-waveform inversion. We verify this workflow by synthetic tests based on the Marmousi model.
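Gardner's relation, invoked in step (2) of the workflow above, ties density to P-velocity empirically; together with Z = rho * Vp it links the two parameterizations of the acoustic wave equation. The coefficients below are the commonly quoted ones (a ≈ 0.31 for Vp in m/s and rho in g/cm³), not values fitted in the paper.

```python
import numpy as np

# Gardner's empirical relation rho = a * Vp^0.25 and acoustic impedance Z = rho * Vp.
def gardner_density(vp, a=0.31, exponent=0.25):
    """Density in g/cm^3 from P-wave velocity in m/s (empirical relation)."""
    return a * vp**exponent

vp = 3000.0                                # m/s, illustrative
rho = gardner_density(vp)                  # roughly 2.3 g/cm^3
impedance = rho * vp
```

Constraining FWI with this relation reduces the velocity-density ambiguity: a single velocity update implies a consistent density (and hence impedance) update instead of two independent unknowns.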
NASA Astrophysics Data System (ADS)
Breen, S. J.; Lochbuehler, T.; Detwiler, R. L.; Linde, N.
2013-12-01
Electrical resistivity tomography (ERT) is a well-established method for geophysical characterization and has shown potential for monitoring geologic CO2 sequestration, due to its sensitivity to electrical resistivity contrasts generated by liquid/gas saturation variability. In contrast to deterministic ERT inversion approaches, probabilistic inversion provides not only a single saturation model but a full posterior probability density function for each model parameter. Furthermore, the uncertainty inherent in the underlying petrophysics (e.g., Archie's Law) can be incorporated in a straightforward manner. In this study, the data are from bench-scale ERT experiments conducted during gas injection into a quasi-2D (1 cm thick), translucent, brine-saturated sand chamber with a packing that mimics a simple anticlinal geological reservoir. We estimate saturation fields by Markov chain Monte Carlo sampling with the MT-DREAM(ZS) algorithm and compare them quantitatively to independent saturation measurements from a light transmission technique, as well as results from deterministic inversions. Different model parameterizations are evaluated in terms of the recovered saturation fields and petrophysical parameters. The saturation field is parameterized (1) in Cartesian coordinates, (2) by means of its discrete cosine transform coefficients, and (3) by fixed saturation values and gradients in structural elements defined by a Gaussian bell of arbitrary shape and location. Synthetic tests reveal that a priori knowledge about the expected geologic structures (as in parameterization (3)) markedly improves the parameter estimates. The number of degrees of freedom thus strongly affects the inversion results. In an additional step, we explore the effects of assuming that the total volume of injected gas is known a priori and that no gas has migrated away from the monitored region.
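Archie's law, the petrophysical link between measured resistivity and saturation whose uncertainty the probabilistic inversion can absorb, is short enough to sketch directly; the parameter values below are textbook-style illustrations, not those calibrated for the sandbox.

```python
import numpy as np

# Archie's law: rho_t = a * rho_w * phi^(-m) * Sw^(-n), inverted here for
# water saturation Sw from bulk resistivity rho_t.
def archie_saturation(rho_t, rho_w, phi, a=1.0, m=2.0, n=2.0):
    """Water saturation from bulk resistivity (ohm-m), brine resistivity, porosity."""
    return (a * rho_w / (phi**m * rho_t))**(1.0 / n)

sw = archie_saturation(rho_t=50.0, rho_w=0.5, phi=0.35)
```

Treating a, m, and n as uncertain hyperparameters, rather than fixed constants, is what lets the MCMC posterior reflect petrophysical uncertainty as well as measurement noise.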
DOE Office of Scientific and Technical Information (OSTI.GOV)
Albrecht, Bruce; Fang, Ming; Ghate, Virendra
2016-02-01
Observations from an upward-pointing Doppler cloud radar are used to examine cloud-top entrainment processes and parameterizations in a non-precipitating continental stratocumulus cloud deck maintained by time-varying surface buoyancy fluxes and cloud-top radiative cooling. Radar and ancillary observations of unbroken, non-precipitating stratocumulus clouds were made at the Atmospheric Radiation Measurement (ARM) Southern Great Plains (SGP) site located near Lamont, Oklahoma, for a 14-hour period starting at 0900 Central Standard Time on 25 March 2005. The vertical velocity variance and energy dissipation rate (EDR) terms in a parameterized turbulence kinetic energy (TKE) budget of the entrainment zone are estimated using the radar vertical velocity and the radar spectrum width observations from the upward-pointing millimeter cloud radar (MMCR) operating at the SGP site. Hourly averages of the vertical velocity variance term in the TKE entrainment formulation correlate strongly (r=0.72) with the dissipation rate term in the entrainment zone. However, the ratio of the variance term to the dissipation decreases at night due to decoupling of the boundary layer. When the night-time decoupling is accounted for, the correlation between the variance and the EDR terms increases (r=0.92). To obtain bulk coefficients for the entrainment parameterizations derived from the TKE budget, independent estimates of entrainment were obtained from an inversion-height budget using ARM SGP observations of the local time derivative and the horizontal advection of the cloud-top height. The large-scale vertical velocity at the inversion needed for this budget was taken from ECMWF reanalysis. This budget gives a mean entrainment rate for the observing period of 0.76±0.15 cm/s. This mean value is applied to the TKE budget parameterizations to obtain the bulk coefficients needed in these parameterizations.
These bulk coefficients are compared with those from previous studies and are used in the parameterizations to give hourly estimates of the entrainment rates from the radar-derived vertical velocity variance and dissipation rates. Hourly entrainment rates were also estimated from a convective velocity (w*) parameterization that depends on the local surface buoyancy fluxes and the calculated radiative flux divergence, using a bulk coefficient obtained from the mean inversion-height budget. The hourly rates from the cloud turbulence estimates and the w* parameterization, which is independent of the radar observations, are compared with the hourly entrainment values from the budget. All show rough agreement with each other and capture the entrainment variability associated with substantial changes in the surface flux and radiative divergence at cloud top. Major uncertainties in the hourly estimates from the height budget and w* are discussed. The results indicate a strong potential for making entrainment rate estimates directly from the radar vertical velocity variance and the EDR measurements, a technique that has distinct advantages over other methods for estimating entrainment rates. Calculations based on the EDR alone can provide high temporal resolution (for averaging intervals as small as 10 minutes) of the entrainment processes and do not require an estimate of the boundary layer depth, which can be difficult to define when the boundary layer is decoupled.
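The convective velocity scale behind the w* parameterization mentioned above follows the standard Deardorff scaling; the sketch below uses typical daytime values for illustration, not the SGP observations.

```python
import numpy as np

# Deardorff convective velocity scale: w* = (g/theta0 * <w'theta'>_s * zi)^(1/3),
# built from the surface kinematic heat flux and the boundary-layer depth zi.
def w_star(heat_flux, zi, theta0=290.0, g=9.81):
    """Convective velocity scale (m/s); heat_flux in K m/s, zi in m."""
    return (g / theta0 * heat_flux * zi)**(1.0 / 3.0)

ws = w_star(heat_flux=0.1, zi=1000.0)      # typical daytime values -> about 1.5 m/s
```

Entrainment parameterizations of this family then take w_e proportional to w*³ divided by the inversion buoyancy jump, with the bulk coefficient fixed from an independent budget, as in the study.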
NASA Astrophysics Data System (ADS)
Galewsky, J.
2017-12-01
Understanding the processes that govern the relationships between lower tropospheric stability and low-cloud cover is crucial for improved constraints on low-cloud feedbacks and for improving the parameterizations of low-cloud cover used in climate models. The stable isotopic composition of atmospheric water vapor is a sensitive recorder of the balance of moistening and drying processes that set the humidity of the lower troposphere and may thus provide a useful framework for improving our understanding low-cloud processes. In-situ measurements of water vapor isotopic composition collected at the NOAA Mauna Loa Observatory in Hawaii, along with twice-daily soundings from Hilo and remote sensing of cloud cover, show a clear inverse relationship between the estimated inversion strength (EIS) and the mixing ratios and water vapor δ -values, and a positive relationship between EIS, deuterium excess, and Δ δ D, defined as the difference between an observation and a reference Rayleigh distillation curve. These relationships are consistent with reduced moistening and an enhanced upper-tropospheric contribution above the trade inversion under high EIS conditions and stronger moistening under weaker EIS conditions. The cloud fraction, cloud liquid water path, and cloud-top pressure were all found to be higher under low EIS conditions. Inverse modeling of the isotopic data for the highest and lowest terciles of EIS conditions provide quantitative constraints on the cold-point temperatures and mixing fractions that govern the humidity above the trade inversion. The modeling shows the moistening fraction between moist boundary layer air and dry middle tropospheric air 24±1.5% under low EIS conditions is and 6±1.5% under high EIS conditions. A cold-point (last-saturation) temperature of -30C can match the observations for both low and high EIS conditions. 
The isotopic composition of the moistening source as derived from the inversion (-114±10‰) requires moderate fractionation from a pure marine source, indicating a link between inversion strength and moistening of the lower troposphere from the outflow of shallow convection. This approach can be applied in other settings, and the results can be used to test parameterizations in climate models.
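The two-endmember vapor mixing that underlies such a retrieval can be sketched numerically. This is a minimal illustration of the mass-balance arithmetic only; the endmember mixing ratios and δD values below are invented for the example, not those retrieved in the study:

```python
def mixing_fraction(q_obs, q_moist, q_dry):
    """Fraction of moist boundary-layer air in a two-endmember mixture,
    from conservation of the water-vapor mixing ratio q."""
    return (q_obs - q_dry) / (q_moist - q_dry)

def mixed_delta(f, q_moist, d_moist, q_dry, d_dry):
    """delta-value of the mixture: isotope amounts (q * delta) mix linearly."""
    q_mix = f * q_moist + (1.0 - f) * q_dry
    return (f * q_moist * d_moist + (1.0 - f) * q_dry * d_dry) / q_mix

# Illustrative endmembers (q in g/kg, deltas in per mil)
q_moist, d_moist = 15.0, -114.0   # moist boundary-layer source
q_dry,   d_dry   =  1.0, -350.0   # dry middle-tropospheric air
f = mixing_fraction(q_obs=4.5, q_moist=q_moist, q_dry=q_dry)
delta = mixed_delta(f, q_moist, d_moist, q_dry, d_dry)
print(f, delta)
```

Because isotope amounts rather than δ-values mix linearly, the mixture's δD is weighted by the endmember mixing ratios, which is why a small moistening fraction can still dominate the isotopic signal.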
Acoustic and elastic waveform inversion best practices
NASA Astrophysics Data System (ADS)
Modrak, Ryan T.
Reaching the global minimum of a waveform misfit function requires careful choices about the nonlinear optimization, preconditioning and regularization methods underlying an inversion. Because waveform inversion problems are susceptible to erratic convergence, one or two test cases are not enough to reliably inform such decisions. We identify best practices instead using two global, one regional and four near-surface acoustic test problems. To obtain meaningful quantitative comparisons, we carry out hundreds of acoustic inversions, varying one aspect of the implementation at a time. Comparing nonlinear optimization algorithms, we find that L-BFGS provides computational savings over nonlinear conjugate gradient methods in a wide variety of test cases. Comparing preconditioners, we show that a new diagonal scaling derived from the adjoint of the forward operator provides better performance than two conventional preconditioning schemes. Comparing regularization strategies, we find that projection, convolution, Tikhonov regularization, and total variation regularization are effective in different contexts. Beyond these choices, reliability and efficiency in waveform inversion depend on close numerical attention and care: implementation details have a strong effect on computational cost, regardless of the chosen material parameterization or nonlinear optimization algorithm. Building on the acoustic inversion results, we carry out elastic experiments with four test problems, three objective functions, and four material parameterizations. The choice of parameterization for isotropic elastic media is found to be more complicated than previous studies suggest, with "wavespeed-like" parameters performing well with phase-based objective functions and Lamé parameters performing well with amplitude-based objective functions.
Reliability and efficiency can be even harder to achieve in transversely isotropic elastic inversions because the rotation angle parameters describing the fast-axis direction are difficult to recover. Using Voigt or Chen-Tromp parameters avoids the need to include rotation angles explicitly and provides an effective strategy for anisotropic inversion. The need for flexible and portable workflow management tools for seismic inversion also poses a major challenge. In a final chapter, the software used to carry out the above experiments is described and instructions for reproducing the experimental results are given.
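The L-BFGS versus nonlinear conjugate gradient comparison can be reproduced in miniature on a toy misfit. Here the 2D Rosenbrock function stands in for a waveform misfit, and SciPy's optimizers stand in for the thesis's own software; the comparison metric (number of misfit evaluations) is the same proxy for simulation cost:

```python
import numpy as np
from scipy.optimize import minimize, rosen, rosen_der

x0 = np.array([-1.2, 1.0])  # standard starting point for the 2D Rosenbrock test
res_lbfgs = minimize(rosen, x0, jac=rosen_der, method="L-BFGS-B")
res_cg    = minimize(rosen, x0, jac=rosen_der, method="CG")

for name, res in [("L-BFGS", res_lbfgs), ("NLCG", res_cg)]:
    # nfev counts misfit evaluations, a stand-in for wave-simulation cost in FWI
    print(f"{name}: misfit={res.fun:.2e}, evaluations={res.nfev}")
```

In full waveform inversion each misfit evaluation is a wave simulation, so the evaluation count is the quantity of practical interest, not wall-clock time on a toy problem.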
Antarctic Ocean Tides from GRACE Intersatellite Tracking Data and Hydrodynamic Assimilation
NASA Astrophysics Data System (ADS)
Erofeeva, S.; Han, S.; Ray, R.; Egbert, G.; Luthcke, S.
2007-12-01
Long-wavelength components of the oceanic tides surrounding Antarctica are estimated from over three years of GRACE satellite-to-satellite ranging measurements. An inversion is performed for the major constituents M2, O1, and S2, parameterized as localized average mass anomalies relative to a prior tidal model. Satellite state adjustments are made simultaneously. These long-wavelength anomalies are then assimilated into a high-resolution regional hydrodynamic tidal model. Comparisons to independent "ground truth" data, previously collected by King and Padman, show that assimilation of the GRACE inversions results in improved accuracy, for all three constituents.
Multi-parameter Full-waveform Inversion for Acoustic VTI Medium with Surface Seismic Data
NASA Astrophysics Data System (ADS)
Cheng, X.; Jiao, K.; Sun, D.; Huang, W.; Vigh, D.
2013-12-01
Full-waveform inversion (FWI) has recently attracted wide attention in the oil and gas industry as a promising new tool for high-resolution subsurface velocity model building. While traditional common-image-point-gather-based tomography aims to focus post-migrated data in the depth domain, FWI aims to directly fit the observed seismic waveform in either the time or frequency domain. The inversion is performed iteratively by updating the velocity fields to reduce the difference between the observed and the simulated data. It has been shown that the inversion is very sensitive to the starting velocity fields, and data with long offsets and low frequencies are crucial for the success of FWI in overcoming this sensitivity. Given the importance of long-offset, low-frequency data, anisotropy is an unavoidable topic for FWI in most geologic environments, especially at long offsets, since anisotropy tends to have more pronounced effects on waves that have traveled a great distance. In a VTI medium, this means more horizontal velocity is registered in middle-to-long offset data, while more vertical velocity is registered in near-to-middle offset data. To date, most real-world applications of FWI remain in isotropic media, and only a few studies have accounted for anisotropy. Most of those studies account for anisotropy in the waveform simulation but do not invert for the anisotropy fields. Multi-parameter inversion for anisotropy fields, even in a VTI medium, remains a hot topic in the field. In this study, we develop a strategy for multi-parameter FWI for an acoustic VTI medium with surface seismic data. Because surface seismic data are insensitive to the delta fields, we hold the delta fields unchanged during the inversion and invert only for the vertical velocity and epsilon fields.
Through parameterization analysis and synthetic tests, we find it more feasible to invert for the parameterization in terms of vertical and horizontal velocities than for the parameterization in terms of vertical velocity and epsilon fields. We develop a hierarchical approach that inverts for vertical velocity first while holding epsilon unchanged, and switches to simultaneous inversion only when the vertical velocity inversion is approaching convergence. During simultaneous inversion, we observe significant acceleration in convergence when second-order information and preconditioning are incorporated into the inversion. We demonstrate the success of our strategy for VTI FWI using synthetic and real data examples from the Gulf of Mexico. Our results show that incorporating VTI FWI improves migration of large-offset acquisition data and produces better-focused migration images to be used in the exploration, production and development of oil fields.
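The hierarchical schedule (velocity-only updates first, then simultaneous updates once the velocity gradient is small) can be sketched on a toy coupled objective. The quadratic misfit, step size, and switching threshold below are all illustrative stand-ins, not the actual FWI objective:

```python
def misfit_grad(v, eps):
    # toy coupled quadratic misfit standing in for the FWI objective;
    # the cross term mimics velocity/epsilon tradeoff
    gv = 2.0 * (v - 2.0) + 0.2 * (eps - 0.3)
    ge = 0.4 * (eps - 0.3) + 0.2 * (v - 2.0)
    return gv, ge

v, eps, step = 0.0, 0.0, 0.4
for _ in range(200):
    gv, ge = misfit_grad(v, eps)
    v -= step * gv                 # velocity is always updated
    if gv * gv < 1e-4:             # near convergence in velocity:
        eps -= step * ge           # switch on the simultaneous update
print(round(v, 3), round(eps, 3))
```

Holding the weakly constrained parameter fixed until the dominant parameter stabilizes is what keeps the coupled update from trading velocity error into epsilon early in the inversion.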
A reversible-jump Markov chain Monte Carlo algorithm for 1D inversion of magnetotelluric data
NASA Astrophysics Data System (ADS)
Mandolesi, Eric; Ogaya, Xenia; Campanyà, Joan; Piana Agostinetti, Nicola
2018-04-01
This paper presents a new computer code developed to solve the 1D magnetotelluric (MT) inverse problem using a Bayesian trans-dimensional Markov chain Monte Carlo algorithm. MT data are sensitive to the depth-distribution of rock electric conductivity (or its reciprocal, resistivity). The solution provided is a probability distribution - the so-called posterior probability distribution (PPD) - for the conductivity at depth, together with the PPD of the interface depths. The PPD is sampled via a reversible-jump Markov chain Monte Carlo (rjMcMC) algorithm, using a modified Metropolis-Hastings (MH) rule to accept or discard candidate models along the chains. As the optimal parameterization for the inversion process is generally unknown, a trans-dimensional approach is used to allow the dataset itself to indicate the most probable number of parameters needed to sample the PPD. The algorithm is tested against two simulated datasets and a set of MT data acquired in the Clare Basin (County Clare, Ireland). For the simulated datasets the correct number of conductive layers at depth and the associated electrical conductivity values are retrieved, together with reasonable estimates of the uncertainties on the investigated parameters. Results from the inversion of field measurements are compared with results obtained using a deterministic method and with well-log data from a nearby borehole. The PPD is in good agreement with the well-log data, showing as a main structure a highly conductive layer associated with the Clare Shale formation. In this study, we demonstrate that our new code goes beyond algorithms developed using a linear inversion scheme, as it can be used: (1) to bypass subjective choices in the 1D parameterization, i.e. the number of horizontal layers, and (2) to estimate realistic uncertainties on the retrieved parameters.
The algorithm is implemented using a simple MPI approach, where independent chains run on isolated CPUs to take full advantage of parallel computer architectures. In the case of a large number of data, a master/slave approach can be used, where the master CPU samples the parameter space and the slave CPUs compute forward solutions.
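The independent-chain parallelization maps naturally onto a pool of Metropolis-Hastings samplers. In the sketch below the MPI ranks are stood in for by a sequential loop, and the one-parameter Gaussian posterior is purely illustrative (a real MT chain would evaluate a 1D forward solver inside `log_post`):

```python
import math
import random
import statistics

def log_post(m):
    # illustrative one-parameter posterior: Gaussian, mean 2.0, std 0.5
    return -0.5 * ((m - 2.0) / 0.5) ** 2

def run_chain(seed, n=20000, step=0.8):
    rng = random.Random(seed)
    m, lp = 0.0, log_post(0.0)
    samples = []
    for _ in range(n):
        cand = m + rng.gauss(0.0, step)   # symmetric random-walk proposal
        lp_c = log_post(cand)
        # Metropolis-Hastings accept/reject rule
        if lp_c > lp or rng.random() < math.exp(lp_c - lp):
            m, lp = cand, lp_c
        samples.append(m)
    return samples[n // 2:]               # discard burn-in

# each MPI rank would run one chain; here the "ranks" run sequentially
pooled = [s for seed in range(4) for s in run_chain(seed)]
print(statistics.mean(pooled), statistics.stdev(pooled))
```

Because the chains never communicate, this scheme scales linearly with the number of ranks; the pooled samples approximate the PPD provided each chain has converged past its burn-in.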
NASA Astrophysics Data System (ADS)
Kim, Seongryong; Tkalčić, Hrvoje; Mustać, Marija; Rhie, Junkee; Ford, Sean
2016-04-01
A framework is presented within which we provide rigorous estimations of seismic sources and structures in Northeast Asia. We use Bayesian inversion methods, which enable statistical estimation of models and their uncertainties based on the information in the data. Ambiguities in error statistics and model parameterizations are addressed by hierarchical and trans-dimensional (trans-D) techniques, which can be inherently implemented in the Bayesian inversions. Hence reliable estimation of model parameters and their uncertainties is possible, avoiding arbitrary regularizations and parameterizations. Hierarchical and trans-D inversions are performed to develop a three-dimensional velocity model using ambient noise data. To further improve the model, we perform joint inversions with receiver function data using a newly developed Bayesian method. For the source estimation, a novel moment tensor inversion method is presented and applied to regional waveform data of the North Korean nuclear explosion tests. The combination of new Bayesian techniques and the structural model, coupled with meaningful uncertainties related to each of the processes, makes more quantitative monitoring and discrimination of seismic events possible.
NASA Technical Reports Server (NTRS)
Molthan, Andrew L.; Petersen, Walter A.; Case, Jonathan L.; Dembek, Scott R.; Jedlovec, Gary J.
2009-01-01
Increases in computational resources have allowed operational forecast centers to pursue experimental, high resolution simulations that resolve the microphysical characteristics of clouds and precipitation. These experiments are motivated by a desire to improve the representation of weather and climate, but will also benefit current and future satellite campaigns, which often use forecast model output to guide the retrieval process. Aircraft, surface and radar data from the Canadian CloudSat/CALIPSO Validation Project are used to check the validity of size distribution and density characteristics for snowfall simulated by the NASA Goddard six-class, single-moment bulk water microphysics scheme, currently available within the Weather Research and Forecasting (WRF) Model. Widespread snowfall developed across the region on January 22, 2007, forced by the passage of a midlatitude cyclone, and was observed by the dual-polarimetric C-band radar at King City, Ontario, as well as the NASA 94 GHz CloudSat Cloud Profiling Radar. Combined, these data sets provide key metrics for validating model output: estimates of size distribution parameters fit to the inverse-exponential equations prescribed within the model, bulk density and crystal habit characteristics sampled by the aircraft, and representation of size characteristics as inferred by the radar reflectivity at C- and W-band. Specified constants for the distribution intercept and density differ significantly from observations throughout much of the cloud depth. Alternate parameterizations are explored, using column-integrated values of vapor excess to avoid problems encountered with temperature-based parameterizations in an environment where inversions and isothermal layers are present.
Simulation of CloudSat reflectivity is performed by adopting the discrete-dipole parameterizations and databases provided in the literature, and demonstrates an improved capability in simulating radar reflectivity at W-band versus Mie scattering assumptions.
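The inverse-exponential size distribution N(D) = N0 exp(-λD) assumed by single-moment schemes can be fit to binned particle counts by linear regression in log space. The synthetic counts and parameter values below are illustrative (chosen only to be of plausible order for snow), not observations from the campaign:

```python
import numpy as np

# synthetic binned concentrations following N(D) = N0 * exp(-lam * D)
N0_true, lam_true = 8.0e6, 2.5e3     # illustrative intercept (m^-4) and slope (m^-1)
D = np.linspace(2e-4, 4e-3, 20)      # particle diameters, m
N = N0_true * np.exp(-lam_true * D)

# a straight-line fit through log N(D) recovers both parameters:
# log N = log N0 - lam * D
slope, intercept = np.polyfit(D, np.log(N), 1)
lam_fit, N0_fit = -slope, np.exp(intercept)
print(N0_fit, lam_fit)
```

With real aircraft probe data the counts are noisy and the same regression yields the distribution intercept and slope that the validation compares against the scheme's specified constants.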
ERIC Educational Resources Information Center
Alku, Paavo; Vilkman, Erkki; Laukkanen, Anne-Maria
1998-01-01
A new method is presented for the parameterization of glottal volume velocity waveforms that have been estimated by inverse filtering acoustic speech pressure signals. The new technique combines two features of voice production: the AC value and the spectral decay of the glottal flow. Testing found that the new parameter correlates strongly with the…
Regularized wave equation migration for imaging and data reconstruction
NASA Astrophysics Data System (ADS)
Kaplan, Sam T.
The reflection seismic experiment results in a measurement (reflection seismic data) of the seismic wavefield. The linear Born approximation to the seismic wavefield leads to a forward modelling operator that we use to approximate reflection seismic data in terms of a scattering potential. We consider approximations to the scattering potential using two methods: the adjoint of the forward modelling operator (migration), and regularized numerical inversion using the forward and adjoint operators. We implement two parameterizations of the forward modelling and migration operators: source-receiver and shot-profile. For both parameterizations, we find the requisite Green's functions using the split-step approximation. We first develop the forward modelling operator, and then find the adjoint (migration) operator by recognizing a Fredholm integral equation of the first kind. The resulting numerical system is generally under-determined, requiring prior information to find a solution. In source-receiver migration, the parameterization of the scattering potential is understood using the migration imaging condition, and this encourages us to apply sparse prior models to the scattering potential. To that end, we use both a Cauchy prior and a mixed Cauchy-Gaussian prior, finding better resolved estimates of the scattering potential than are given by the adjoint. In shot-profile migration, the parameterization of the scattering potential has its redundancy in multiple active energy sources (i.e. shots). We find that a smallest-model regularized inverse representation of the scattering potential gives a more resolved picture of the earth, as compared to the simpler adjoint representation. The shot-profile parameterization allows us to introduce a joint inversion to further improve the estimate of the scattering potential. Moreover, it allows us to introduce a novel data reconstruction algorithm so that limited data can be interpolated/extrapolated.
The linearized operators are expensive, encouraging their parallel implementation. For the source-receiver parameterization of the scattering potential this parallelization is non-trivial. Seismic data is typically corrupted by various types of noise. Sparse coding can be used to suppress noise prior to migration. It is a method that stems from information theory and that we apply to noise suppression in seismic data.
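The contrast between the adjoint image and a smallest-model regularized inverse can be seen on a toy under-determined linear system. The random matrix below stands in for the Born forward modelling operator, and the damping weight is illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 80))       # under-determined forward operator
m_true = np.zeros(80)
m_true[[10, 40, 65]] = [1.0, -0.5, 0.8] # sparse "scattering potential"
d = A @ m_true                          # noise-free data

m_adj = A.T @ d                         # migration: the adjoint, not an inverse
mu = 1e-3                               # damping weight (smallest-model prior)
m_reg = A.T @ np.linalg.solve(A @ A.T + mu * np.eye(30), d)

for name, m in [("adjoint", m_adj), ("regularized", m_reg)]:
    print(name, np.linalg.norm(A @ m - d) / np.linalg.norm(d))
```

The adjoint image correlates with the true model but does not fit the data, while the damped least-squares solution honors the data while keeping the smallest model, which is the sense in which regularized inversion "better resolves" the scattering potential.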
NASA Astrophysics Data System (ADS)
Christensen, N. K.; Christensen, S.; Ferre, T. P. A.
2015-09-01
Although geophysical methods are used increasingly, it is still unclear how and when the integration of geophysical data improves the construction and predictive capability of groundwater models. This paper therefore presents a newly developed HYdrogeophysical TEst-Bench (HYTEB), a collection of geological, groundwater and geophysical modeling and inversion software wrapped into a platform for generating and considering multi-modal data for objective hydrologic analysis. It is intentionally flexible to allow for simple or sophisticated treatments of geophysical responses, hydrologic processes, parameterization, and inversion approaches. It can also be used to discover potential errors that can be introduced through petrophysical models and approaches to correlating geophysical and hydrologic parameters. With HYTEB we study alternative uses of electromagnetic (EM) data for groundwater modeling in a hydrogeological environment consisting of various types of glacial deposits, with typical hydraulic conductivities and electrical resistivities, covering impermeable bedrock with low resistivity. We investigate to what extent groundwater model calibration and, often more importantly, model predictions can be improved by including in the calibration process electrical resistivity estimates obtained from TEM data. In all calibration cases, the hydraulic conductivity field is highly parameterized and the estimation is stabilized by regularization. For purely hydrologic inversion (HI, using only hydrologic data) we used Tikhonov regularization combined with singular value decomposition. For joint hydrogeophysical inversion (JHI) and sequential hydrogeophysical inversion (SHI) the resistivity estimates from TEM are used together with a petrophysical relationship to formulate the regularization term. In all cases the regularization stabilizes the inversion, but neither the HI nor the JHI objective function could be minimized uniquely.
SHI or JHI with regularization based on the use of TEM data produced estimated hydraulic conductivity fields that bear more resemblance to the reference fields than when using HI with Tikhonov regularization. However, for the studied system the resistivities estimated by SHI or JHI must be used with caution as estimators of hydraulic conductivity or as regularization means for subsequent hydrological inversion. Much of the lack of value of the geophysical data arises from a mistaken faith in the power of the petrophysical model in combination with geophysical data of low sensitivity, thereby propagating geophysical estimation errors into the hydrologic model parameters. With respect to reducing model prediction error, whether including geophysical data in the model calibration has value depends on the type of prediction. It is found that all calibrated models are good predictors of hydraulic head. When the stress situation is changed from that of the hydrologic calibration data, all models make biased predictions of head change. All calibrated models turn out to be very poor predictors of the pumping well's recharge area and groundwater age. The reason for this is that distributed recharge is parameterized as depending on the estimated hydraulic conductivity of the upper model layer, which tends to be underestimated. Another important insight from the HYTEB analysis is thus that either recharge should be parameterized and estimated in a different way, or other types of data should be added to better constrain the recharge estimates.
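The Tikhonov-plus-SVD stabilization used for the purely hydrologic inversion amounts to filtering and truncating small singular values of the sensitivity matrix. A minimal sketch on a generic ill-conditioned linear system (not HYTEB itself; the spectrum, noise level, damping weight and truncation level are all illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
J = rng.standard_normal((50, 50))
U, s, Vt = np.linalg.svd(J)
s = s * np.logspace(0, -8, 50)          # impose a rapidly decaying spectrum
J = U @ np.diag(s) @ Vt                 # ill-conditioned "Jacobian"
p_true = rng.standard_normal(50)
d = J @ p_true + 1e-6 * rng.standard_normal(50)   # noisy "observations"

def tikhonov_svd(J, d, mu, k):
    U, s, Vt = np.linalg.svd(J, full_matrices=False)
    f = s**2 / (s**2 + mu**2)           # Tikhonov filter factors
    f[k:] = 0.0                         # truncate the smallest singular values
    return Vt.T @ (f / s * (U.T @ d))

p_naive = np.linalg.solve(J, d)         # unregularized: noise is amplified
p_reg = tikhonov_svd(J, d, mu=1e-5, k=40)
print(np.linalg.norm(p_reg - p_true), np.linalg.norm(p_naive - p_true))
```

The filter factors damp directions where the data carry little sensitivity, trading a small bias for a large reduction in noise amplification, which is what stabilizes a highly parameterized calibration.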
Resolution analysis of marine seismic full waveform data by Bayesian inversion
NASA Astrophysics Data System (ADS)
Ray, A.; Sekar, A.; Hoversten, G. M.; Albertin, U.
2015-12-01
The Bayesian posterior density function (PDF) of earth models that fit full waveform seismic data conveys information on the uncertainty with which the elastic model parameters are resolved. In this work, we apply the trans-dimensional reversible-jump Markov chain Monte Carlo (RJ-MCMC) method to the 1D inversion of noisy synthetic full-waveform seismic data in the frequency-wavenumber domain. While seismic full waveform inversion (FWI) is a powerful method for characterizing subsurface elastic parameters, the uncertainty in the inverted models has remained poorly known, if known at all, and is highly dependent on the initial model. The Bayesian method we use is trans-dimensional in that the number of model layers is not fixed, and flexible in that the layer boundaries are free to move around. The resulting parameterization does not require regularization to stabilize the inversion. Depth resolution is traded off against the number of layers, providing an estimate of uncertainty in the elastic parameters (compressional and shear velocities Vp and Vs, as well as density) with depth. We find that in the absence of additional constraints, Bayesian inversion can result in a wide range of posterior PDFs on Vp, Vs and density. These PDFs range from being clustered around the true model to containing little resolution of any features other than those in the near surface, depending on the particular data and target geometry. We present results for a suite of different frequencies and offset ranges, examining the differences in the posterior model densities thus derived. Though these results are for a 1D earth, they are applicable to areas with simple, layered geology and provide valuable insight into the resolving capabilities of FWI, as well as highlighting the challenges in solving a highly non-linear problem.
The RJ-MCMC method also presents a tantalizing possibility for extension to 2D and 3D Bayesian inversion of full waveform seismic data in the future, as it objectively tackles the problem of model selection (i.e., the number of layers or cells for parameterization), which could ease the computational burden of evaluating forward models with many parameters.
Generalized ocean color inversion model for retrieving marine inherent optical properties.
Werdell, P Jeremy; Franz, Bryan A; Bailey, Sean W; Feldman, Gene C; Boss, Emmanuel; Brando, Vittorio E; Dowell, Mark; Hirata, Takafumi; Lavender, Samantha J; Lee, ZhongPing; Loisel, Hubert; Maritorena, Stéphane; Mélin, Fréderic; Moore, Timothy S; Smyth, Timothy J; Antoine, David; Devred, Emmanuel; d'Andon, Odile Hembise Fanton; Mangin, Antoine
2013-04-01
Ocean color measured from satellites provides daily, global estimates of marine inherent optical properties (IOPs). Semi-analytical algorithms (SAAs) provide one mechanism for inverting the color of the water observed by the satellite into IOPs. While numerous SAAs exist, most are similarly constructed and few are appropriately parameterized for all water masses for all seasons. To initiate community-wide discussion of these limitations, NASA organized two workshops that deconstructed SAAs to identify similarities and uniqueness and to progress toward consensus on a unified SAA. This effort resulted in the development of the generalized IOP (GIOP) model software that allows for the construction of different SAAs at runtime by selection from an assortment of model parameterizations. As such, GIOP permits isolation and evaluation of specific modeling assumptions, construction of SAAs, development of regionally tuned SAAs, and execution of ensemble inversion modeling. Working groups associated with the workshops proposed a preliminary default configuration for GIOP (GIOP-DC), with alternative model parameterizations and features defined for subsequent evaluation. In this paper, we: (1) describe the theoretical basis of GIOP; (2) present GIOP-DC and verify its comparable performance to other popular SAAs using both in situ and synthetic data sets; and, (3) quantify the sensitivities of their output to their parameterization. We use the latter to develop a hierarchical sensitivity of SAAs to various model parameterizations, to identify components of SAAs that merit focus in future research, and to provide material for discussion on algorithm uncertainties and future ensemble applications.
Middle East emissions of VOCs estimated using OMI HCHO observations and the MAGRITTE regional model
NASA Astrophysics Data System (ADS)
Müller, Jean-Francois; Stavrakou, Trisevgeni; Bauwens, Maite; De Smedt, Isabelle; Van Roozendael, Michel
2017-04-01
Air quality in the Middle East has deteriorated considerably in recent decades. In particular, tropospheric ozone reaches very high levels during summer due to the combination of high solar irradiances with very high and rapidly evolving anthropogenic emissions of NOx and VOCs associated with oil/gas exploitation and fast urbanisation. In addition, high biogenic VOC emissions are expected in non-desert areas, particularly during summer, due to scorching temperatures and high solar irradiances. Both anthropogenic and biogenic VOC emissions are poorly known, however, due to the near-absence of experimental constraints on emission factors for local vegetation and for industrial and extraction processes. Furthermore, the dependence of emissions on environmental conditions (e.g. soil moisture in the case of biogenic isoprene emissions) is only very crudely parameterized in emission models. Here we use spaceborne (OMI) observations of formaldehyde, a known product of anthropogenic and biogenic VOC oxidation, as a constraint in an inversion framework built on a regional model, MAGRITTE (Model of Atmospheric composition at Global and Regional scales using Inversion Techniques for Trace Gas Emissions). MAGRITTE is run at 0.5x0.5 degree resolution, with lateral boundary conditions provided by the global CTM IMAGESv2 (Bauwens et al., 2016). The global and regional models share essentially the same chemistry and physical parameterizations. Emission inversion with MAGRITTE is performed using an adjoint-based iterative procedure, similar to previous inversions using IMAGES. Biogenic VOC emissions are calculated using MEGAN (Muller et al., 2008; Stavrakou et al., 2015), whereas the HTAPv2 emission dataset is used for anthropogenic emissions, with several adjustments for oil/gas exploitation and traffic emissions. The OMI data are regridded onto the model resolution and averaged seasonally in order to reduce noise.
Preliminary results indicate that biogenic isoprene emissions are a major VOC source in summertime throughout the "Fertile Crescent" from the Nile Valley to Iraq. Anthropogenic emissions from many large cities (e.g. Baghdad and Cairo) as well as from known oil extraction/refining/handling sites are well detected, while other cities (such as Riyadh) are elusive.
Hydraulic Conductivity Estimation using Bayesian Model Averaging and Generalized Parameterization
NASA Astrophysics Data System (ADS)
Tsai, F. T.; Li, X.
2006-12-01
Non-uniqueness of the parameterization scheme is an inherent problem in groundwater inverse modeling due to limited data. To cope with this non-uniqueness, we introduce a Bayesian Model Averaging (BMA) method to integrate a set of selected parameterization methods. The estimation uncertainty in BMA includes the uncertainty in individual parameterization methods as the within-parameterization variance and the uncertainty from using different parameterization methods as the between-parameterization variance. Moreover, the generalized parameterization (GP) method is considered in the geostatistical framework in this study. The GP method aims at increasing the flexibility of parameterization through the combination of a zonation structure and an interpolation method. The use of BMA with GP avoids over-confidence in a single parameterization method. A normalized least-squares estimation (NLSE) is adopted to calculate the posterior probability for each GP. We employ the adjoint state method for the sensitivity analysis on the weighting coefficients in the GP method. The adjoint state method is also applied to the NLSE problem. The proposed methodology is applied to the Alamitos Barrier Project (ABP) in California, where the spatially distributed hydraulic conductivity is estimated. The optimal weighting coefficients embedded in GP are identified through maximum likelihood estimation (MLE), where the misfits between the observed and calculated groundwater heads are minimized. The conditional mean and conditional variance of the estimated hydraulic conductivity distribution using BMA are obtained to assess the estimation uncertainty.
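The within- and between-parameterization variance decomposition of BMA can be written out directly. The three "parameterizations" below are illustrative Gaussian estimators with invented means, variances and posterior weights, not the GP variants or NLSE weights of the study:

```python
# each parameterization p yields a conditional estimate (mean_p, var_p)
# and a posterior model weight w_p (weights sum to 1)
means   = [2.1, 1.8, 2.4]     # conditional means of ln K (illustrative)
vars_   = [0.04, 0.09, 0.05]  # within-parameterization variances
weights = [0.5, 0.2, 0.3]     # posterior model probabilities

bma_mean = sum(w * m for w, m in zip(weights, means))
within   = sum(w * v for w, v in zip(weights, vars_))
between  = sum(w * (m - bma_mean) ** 2 for w, m in zip(weights, means))
bma_var  = within + between   # total BMA estimation variance
print(bma_mean, within, between, bma_var)
```

The between-parameterization term is what a single-parameterization analysis omits, which is why relying on one scheme understates the estimation uncertainty.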
Novel Scalable 3-D MT Inverse Solver
NASA Astrophysics Data System (ADS)
Kuvshinov, A. V.; Kruglyakov, M.; Geraskin, A.
2016-12-01
We present a new, robust and fast, three-dimensional (3-D) magnetotelluric (MT) inverse solver. As a forward modelling engine, the highly scalable solver extrEMe [1] is used. The (regularized) inversion is based on an iterative gradient-type optimization (quasi-Newton method) and exploits the adjoint-source approach for fast calculation of the gradient of the misfit. The inverse solver is able to deal with highly detailed and contrasting models, allows for working (separately or jointly) with any type of MT response (single-site and/or inter-site), and supports massive parallelization. Different parallelization strategies implemented in the code allow for optimal usage of available computational resources for a given problem setup. To parameterize the inverse domain, a mask approach is implemented, meaning that any subset of forward modelling cells can be merged in order to account for the (usually) irregular distribution of observation sites. We report results of 3-D numerical experiments aimed at analysing the robustness, performance and scalability of the code. In particular, our computational experiments, carried out on platforms ranging from modern laptops to high-performance clusters, demonstrate practically linear scalability of the code up to thousands of nodes. 1. Kruglyakov, M., A. Geraskin, A. Kuvshinov, 2016. Novel accurate and scalable 3-D MT forward solver based on a contracting integral equation method, Computers and Geosciences, in press.
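The adjoint-source calculation of the misfit gradient is easiest to see for a linear forward problem, where the gradient of φ(m) = ½‖Fm - d‖² is the adjoint operator applied to the data residual (the "adjoint source"). The operator and data below are random stand-ins, and the finite-difference comparison is the standard sanity check for any adjoint implementation:

```python
import numpy as np

rng = np.random.default_rng(2)
F = rng.standard_normal((40, 25))       # linear forward operator (stand-in)
d = rng.standard_normal(40)             # "observed" responses
m = rng.standard_normal(25)             # current model

def misfit(m):
    return 0.5 * np.sum((F @ m - d) ** 2)

# gradient via the adjoint: apply F^T to the residual (the adjoint source)
grad_adj = F.T @ (F @ m - d)

# finite-difference check of one gradient component
e = np.zeros(25)
e[7] = 1.0
h = 1e-6
fd = (misfit(m + h * e) - misfit(m - h * e)) / (2 * h)
print(abs(fd - grad_adj[7]))
```

One forward and one adjoint application yield the full gradient, whereas finite differences would need a forward solve per model parameter; that cost difference is what makes adjoint-based quasi-Newton inversion feasible in 3-D.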
NASA Astrophysics Data System (ADS)
Pan, Wenyong; Innanen, Kristopher A.; Geng, Yu
2018-03-01
Seismic full-waveform inversion (FWI) methods hold strong potential to recover multiple subsurface elastic properties for hydrocarbon reservoir characterization. Simultaneously updating multiple physical parameters introduces the problem of interparameter tradeoff, arising from the covariance between different physical parameters, which increases the nonlinearity and uncertainty of multiparameter FWI. The coupling effects of different physical parameters are significantly influenced by model parameterization and acquisition arrangement. An appropriate choice of model parameterization is critical to successful field data applications of multiparameter FWI. The objective of this paper is to examine the performance of various model parameterizations in isotropic-elastic FWI with a walk-away vertical seismic profile (W-VSP) dataset for unconventional heavy oil reservoir characterization. Six model parameterizations are considered: velocity-density (α, β and ρ′), modulus-density (κ, μ and ρ), Lamé-density (λ, μ′ and ρ‴), impedance-density (I_P, I_S and ρ″), velocity-impedance-I (α′, β′ and I_P′), and velocity-impedance-II (α″, β″ and I_S′). We begin analyzing the interparameter tradeoff by making use of scattering radiation patterns, which is a common strategy for qualitative parameter resolution analysis. In this paper, we discuss the advantages and limitations of the scattering radiation patterns and recommend that interparameter tradeoffs be evaluated using interparameter contamination kernels, which provide quantitative, second-order measurements of the interparameter contaminations and can be constructed efficiently with an adjoint-state approach. Synthetic W-VSP isotropic-elastic FWI experiments in the time domain verify our conclusions about interparameter tradeoffs for various model parameterizations.
Density profiles are most strongly influenced by the interparameter contaminations; depending on model parameterization, the inverted density profile can be over-estimated, under-estimated or spatially distorted. Among the six cases, only the velocity-density parameterization provides stable and informative density features not included in the starting model. Field data applications of multicomponent W-VSP isotropic-elastic FWI in the time domain were also carried out. The heavy oil reservoir target zone, characterized by low α-to-β ratios and low Poisson's ratios, can be identified clearly with the inverted isotropic-elastic parameters.
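The six parameterizations above are algebraic re-expressions of the same isotropic-elastic medium; a minimal sketch of the standard conversions from the velocity-density set (the sample values are illustrative, not taken from the paper):

```python
import numpy as np

def velocity_density_to_others(alpha, beta, rho):
    """Map (Vp, Vs, density) to the other isotropic-elastic parameter
    sets discussed above: moduli, Lamé parameters, and impedances."""
    mu = rho * beta ** 2                        # shear modulus
    kappa = rho * alpha ** 2 - 4.0 * mu / 3.0   # bulk modulus
    lam = rho * alpha ** 2 - 2.0 * mu           # first Lamé parameter
    return {"kappa": kappa, "mu": mu, "lam": lam,
            "Ip": rho * alpha,                  # P-impedance
            "Is": rho * beta}                   # S-impedance

# Illustrative values: Vp (m/s), Vs (m/s), density (kg/m^3).
p = velocity_density_to_others(alpha=3000.0, beta=1500.0, rho=2200.0)
```

Because every alternative set mixes velocity and density multiplicatively, errors in one recovered parameter leak into the others, which is the interparameter contamination the kernels quantify.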
Hartzell, Stephen; Langer, Charley
1993-01-01
The spatial and temporal slip distributions for the October 3, 1974 (Mw = 8.0), Peru subduction zone earthquake and its largest aftershock on November 9 (Ms = 7.1) are calculated and analyzed in terms of the inversion parameterization and tectonic significance. Teleseismic long-period World-Wide Standardized Seismograph Network (WWSSN) P and SH waveforms are inverted to obtain the rupture histories. We demonstrate that erroneous results are obtained if a parameterization is used that does not allow for a sufficiently complex source, involving spatial variation in slip amplitude, risetime, and rupture time. The inversion method utilizes a parameterization of the fault that allows for a discretized source risetime and rupture time. Well-located aftershocks recorded on a local network have the same general pattern as teleseismically determined hypocenters and help to constrain the geometry of the subduction zone. For the main shock a hinged fault is preferred, having a shallow plane with a dip of 11° and a deeper, landward plane with a dip of 30°. The preferred nucleation depth lies between 11 and 15 km. A bilateral rupture is obtained with two major concentrations of slip, one 60 to 70 km to the northwest of the epicenter and a second 80 to 100 km to the south and southeast of the epicenter. For these source regions, risetimes vary from 6 to 18 s. Our estimates of risetimes are consistent with the time for the rupture to traverse the dominant local asperity. The slip distribution for the November 9 aftershock falls within a conspicuous hole in the main shock rupture pattern, near the hypocenter of the main shock. The November 9 event has a simple risetime function with a duration of 2 s. Aftershocks recorded by the local network are shown to cluster near the hypocenter of the impending November 9 event and downdip from the largest main shock source region.
Slip during the main shock is concentrated at shallow depths above 15 km and extends updip from the hypocenter to near the plate boundary at the trench axis. The large amount of slip at shallow depths is attributed to the absence of any significant accretionary wedge of sediments, and the relatively young age and high convergence rate of the subducted plate, which results in good seismic coupling near the trench axis.
NASA Astrophysics Data System (ADS)
La Vigna, Francesco; Hill, Mary C.; Rossetto, Rudy; Mazza, Roberto
2016-09-01
With respect to model parameterization and sensitivity analysis, this work uses a practical example to suggest that methods that start with simple models and use computationally frugal model analysis methods remain valuable in any toolbox of model development methods. In this work, groundwater model calibration starts with a simple parameterization that evolves into a moderately complex model. The model is developed for a water management study of the Tivoli-Guidonia basin (Rome, Italy) where surface mining has been conducted in conjunction with substantial dewatering. The approach to model development used in this work employs repeated analysis using sensitivity and inverse methods, including use of a new observation-stacked parameter importance graph. The methods are highly parallelizable and require few model runs, which make the repeated analyses and attendant insights possible. The success of a model development design can be measured by insights attained and demonstrated model accuracy relevant to predictions. Example insights were obtained: (1) A long-held belief that, except for a few distinct fractures, the travertine is homogeneous was found to be inadequate, and (2) The dewatering pumping rate is more critical to model accuracy than expected. The latter insight motivated additional data collection and improved pumpage estimates. Validation tests using three other recharge and pumpage conditions suggest good accuracy for the predictions considered. The model was used to evaluate management scenarios and showed that similar dewatering results could be achieved using 20 % less pumped water, but would require installing newly positioned wells and cooperation between mine owners.
NASA Technical Reports Server (NTRS)
Molthan, Andrew L.; Petersen, Walter A.; Case, Jonathan L.; Dembek, Scott R.
2009-01-01
Increases in computational resources have allowed operational forecast centers to pursue experimental, high resolution simulations that resolve the microphysical characteristics of clouds and precipitation. These experiments are motivated by a desire to improve the representation of weather and climate, but will also benefit current and future satellite campaigns, which often use forecast model output to guide the retrieval process. The combination of reliable cloud microphysics and radar reflectivity may constrain radiative transfer models used in satellite simulators during future missions, including EarthCARE and the NASA Global Precipitation Measurement. Aircraft, surface and radar data from the Canadian CloudSat/CALIPSO Validation Project are used to check the validity of size distribution and density characteristics for snowfall simulated by the NASA Goddard six-class, single-moment bulk water microphysics scheme, currently available within the Weather Research and Forecasting (WRF) Model. Widespread snowfall developed across the region on January 22, 2007, forced by the passage of a midlatitude cyclone, and was observed by the dual-polarimetric C-band radar at King City, Ontario, as well as the NASA 94 GHz CloudSat Cloud Profiling Radar. Combined, these data sets provide key metrics for validating model output: estimates of size distribution parameters fit to the inverse-exponential equations prescribed within the model, bulk density and crystal habit characteristics sampled by the aircraft, and representation of size characteristics as inferred by the radar reflectivity at C- and W-band. Specified constants for distribution intercept and density differ significantly from observations throughout much of the cloud depth. Alternate parameterizations are explored, using column-integrated values of vapor excess to avoid problems encountered with temperature-based parameterizations in an environment where inversions and isothermal layers are present.
Simulation of CloudSat reflectivity is performed by adopting the discrete-dipole parameterizations and databases provided in the literature, and demonstrates an improved capability in simulating radar reflectivity at W-band versus Mie scattering assumptions.
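The inverse-exponential size distribution mentioned above, N(D) = N0 exp(-λD), can be fit to an observed spectrum by log-linear least squares; the intercept and slope values below are assumed for illustration, not taken from the validation data.

```python
import numpy as np

# Synthetic "observed" spectrum following the scheme's inverse-exponential
# form N(D) = N0 * exp(-lambda * D); parameter values are assumed.
N0_true, lam_true = 1.0e7, 2000.0      # intercept (m^-4), slope (m^-1)
D = np.linspace(0.2e-3, 3.0e-3, 30)    # particle diameters (m)
N = N0_true * np.exp(-lam_true * D)

# Log-linear least squares recovers the distribution parameters:
# log N(D) = log N0 - lambda * D is a straight line in D.
slope, intercept = np.polyfit(D, np.log(N), 1)
lam_fit, N0_fit = -slope, np.exp(intercept)
```

Comparing parameters fit this way against a scheme's specified constants is one way to quantify the intercept/density discrepancies the abstract describes.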
NASA Astrophysics Data System (ADS)
Engquist, Björn; Frederick, Christina; Huynh, Quyen; Zhou, Haomin
2017-06-01
We present a multiscale approach for identifying features in ocean beds by solving inverse problems in high frequency seafloor acoustics. The setting is based on Sound Navigation And Ranging (SONAR) imaging used in scientific, commercial, and military applications. The forward model incorporates multiscale simulations, by coupling Helmholtz equations and geometrical optics for a wide range of spatial scales in the seafloor geometry. This allows for detailed recovery of seafloor parameters including material type. Simulated backscattered data is generated using numerical microlocal analysis techniques. In order to lower the computational cost of the large-scale simulations in the inversion process, we take advantage of a pre-computed library of representative acoustic responses from various seafloor parameterizations.
Efficient hierarchical trans-dimensional Bayesian inversion of magnetotelluric data
NASA Astrophysics Data System (ADS)
Xiang, Enming; Guo, Rongwen; Dosso, Stan E.; Liu, Jianxin; Dong, Hao; Ren, Zhengyong
2018-06-01
This paper develops an efficient hierarchical trans-dimensional (trans-D) Bayesian algorithm to invert magnetotelluric (MT) data for subsurface geoelectrical structure, with unknown geophysical model parameterization (the number of conductivity-layer interfaces) and data-error models parameterized by an auto-regressive (AR) process to account for potential error correlations. The reversible-jump Markov-chain Monte Carlo algorithm, which adds/removes interfaces and AR parameters in birth/death steps, is applied to sample the trans-D posterior probability density for model parameterization, model parameters, error variance and AR parameters, accounting for the uncertainties of model dimension and data-error statistics in the uncertainty estimates of the conductivity profile. To provide efficient sampling over the multiple subspaces of different dimensions, advanced proposal schemes are applied. Parameter perturbations are carried out in principal-component space, defined by eigen-decomposition of the unit-lag model covariance matrix, to minimize the effect of inter-parameter correlations and provide effective perturbation directions and length scales. Parameters of new layers in birth steps are proposed from the prior, instead of focused distributions centred at existing values, to improve birth acceptance rates. Parallel tempering, based on a series of parallel interacting Markov chains with successively relaxed likelihoods, is applied to improve chain mixing over model dimensions. The trans-D inversion is applied in a simulation study to examine the resolution of model structure according to the data information content. The inversion is also applied to a measured MT data set from south-central Australia.
NASA Astrophysics Data System (ADS)
Simeonov, J.; Czapiga, M. J.; Holland, K. T.
2017-12-01
We developed an inversion model for river bathymetry estimation using measurements of surface currents, water surface elevation slope and shoreline position. The inversion scheme is based on explicit velocity-depth and velocity-slope relationships derived from the along-channel momentum balance and mass conservation. The velocity-depth relationship requires the discharge value to quantitatively relate the depth to the measured velocity field. The ratio of the discharge to the bottom friction enters as a coefficient in the velocity-slope relationship and is determined by minimizing the difference between the predicted and the measured streamwise variation of the total head. Completing the inversion requires an estimate of the bulk friction, which in the case of sand bed rivers is a strong function of the size of dune bedforms. We explored the accuracy of existing and new empirical closures that relate the bulk roughness to parameters such as the median grain size, the ratio of shear velocity to sediment fall velocity, or the Froude number. For a given roughness parameterization, the inversion solution is determined iteratively, since the hydraulic roughness depends on the unknown depth. We first test the new hydraulic roughness parameterization using estimates of the Manning roughness in sand bed rivers based on field measurements. The coupled inversion and roughness model is then tested using in situ and remote sensing measurements of the Kootenai River east of Bonners Ferry, ID.
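The iterative structure described above (roughness depends on the unknown depth) can be sketched as a fixed-point iteration; the Manning-type closure and the roughness functions below are illustrative assumptions, not the paper's actual parameterization.

```python
import numpy as np

def invert_depth(u, S, n_of_h, h0=1.0, tol=1e-10, itmax=200):
    """Fixed-point iteration for depth h satisfying a Manning-type closure
    u = h**(2/3) * sqrt(S) / n(h), where the hydraulic roughness n may
    itself depend on the unknown depth (closure form is illustrative)."""
    h = h0
    for _ in range(itmax):
        h_new = (u * n_of_h(h) / np.sqrt(S)) ** 1.5
        if abs(h_new - h) < tol:
            return h_new
        h = h_new
    return h

# Constant roughness: closed form h = (u * n / sqrt(S))**1.5.
h = invert_depth(u=1.0, S=1.0e-4, n_of_h=lambda h: 0.03)
```

With constant n = 0.03, velocity 1 m/s and slope 1e-4, the iteration converges immediately to the closed-form depth of about 5.196 m; with a depth-dependent n the same loop converges in a handful of iterations.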
Retrieving rupture history using waveform inversions in time sequence
NASA Astrophysics Data System (ADS)
Yi, L.; Xu, C.; Zhang, X.
2017-12-01
The rupture history of large earthquakes is generally reconstructed by waveform inversion of seismological waveform records. In the waveform inversion, based on the superposition principle, the rupture process is linearly parameterized. After discretizing the fault plane into sub-faults, the local source time function of each sub-fault is usually parameterized using the multi-time-window method, e.g., mutually overlapping triangular functions. The forward waveform of each sub-fault is then synthesized by convolving its source time function with its Green function. According to the superposition principle, the forward waveforms generated across the fault plane are summed, after alignment of arrival times, into the recorded waveforms, and the slip history is retrieved by inverting this superposition against each corresponding seismological record. Beyond the isolation of the forward waveforms generated by individual sub-faults, we also recognize that these waveforms are superimposed on the recorded waveforms gradually and sequentially. We therefore propose the idea that the rupture model may be separable into sequential rupture times. According to the constrained-waveform-length method emphasized in our previous work, the length of the waveforms used in the inversion is objectively constrained by the rupture velocity and rise time. One essential prior condition is the predetermined fault plane, which limits the rupture duration; that is, the waveform inversion is restricted to a preset rupture duration. We therefore propose a strategy to invert the rupture process sequentially, using progressively shifted rupture times as the rupture front expands across the fault plane. We designed a synthetic inversion to test the feasibility of the method. The test result shows the promise of this idea, which requires further investigation.
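The multi-time-window forward synthesis described above can be sketched as follows; the rise time, rupture delay, and delta-like Green function are illustrative assumptions chosen so the result is easy to check.

```python
import numpy as np

def triangle_stf(duration, dt):
    """Unit-area triangular source time function."""
    t = np.arange(0.0, duration + dt / 2, dt)
    stf = np.minimum(t, duration - t).clip(min=0.0)
    return stf / (stf.sum() * dt)

def synth_waveform(slips, greens, delays, dt, nt, rise=2.0):
    """Superpose sub-fault contributions: each sub-fault's slip-scaled STF
    is convolved with its Green function and shifted by the rupture-front
    arrival time, then all contributions are summed."""
    out = np.zeros(nt)
    stf = triangle_stf(rise, dt)
    for s, g, tau in zip(slips, greens, delays):
        w = s * np.convolve(stf, g)[:nt] * dt
        k = int(round(tau / dt))
        out[k:] += w[:nt - k]
    return out

# Example: one sub-fault, delta-like Green function, 0.5 s rupture delay.
dt, nt = 0.1, 100
g = np.zeros(nt)
g[0] = 1.0 / dt
seis = synth_waveform(slips=[1.0], greens=[g], delays=[0.5], dt=dt, nt=nt)
```

Because each sub-fault contributes only after its rupture-front delay, the superposition builds up sequentially in time, which is the property the proposed sequential inversion exploits.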
NASA Astrophysics Data System (ADS)
Yan, Peng; Zhang, Yangming
2018-06-01
High performance scanning of nano-manipulators is widely deployed in various precision engineering applications such as SPM (scanning probe microscopy), where trajectory tracking of sophisticated reference signals is a challenging control problem. The situation is further complicated when the rate-dependent hysteresis of the piezoelectric actuators and the stress-stiffening-induced nonlinear stiffness of the flexure mechanism are considered. In this paper, a novel control framework is proposed to achieve high precision tracking of a piezoelectric nano-manipulator subjected to hysteresis and stiffness nonlinearities. An adaptive parameterized rate-dependent Prandtl-Ishlinskii model is constructed and the corresponding adaptive inverse-model-based online compensation is derived. Meanwhile, a robust adaptive control architecture is further introduced to improve the tracking accuracy and robustness of the compensated system, where the parametric uncertainties of the nonlinear dynamics can be well eliminated by on-line estimation. Comparative experimental studies of the proposed control algorithm are conducted on a PZT-actuated nano-manipulating stage, where hysteresis modeling accuracy and excellent tracking performance are demonstrated in real-time implementations, with significant improvement over existing results.
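A (rate-independent) Prandtl-Ishlinskii model is a weighted superposition of play operators; a minimal discrete sketch follows, with assumed radii and weights — the paper's adaptive, rate-dependent version additionally makes these weights functions of the input rate.

```python
import numpy as np

def play(x, r, y0=0.0):
    """Discrete play (backlash) operator with radius r:
    y[k] = max(x[k] - r, min(x[k] + r, y[k-1]))."""
    y = np.empty_like(x)
    y_prev = y0
    for k, xk in enumerate(x):
        y_prev = max(xk - r, min(xk + r, y_prev))
        y[k] = y_prev
    return y

def pi_model(x, radii, weights):
    """Prandtl-Ishlinskii output: weighted superposition of play operators."""
    return sum(w * play(x, r) for w, r in zip(weights, radii))

# Sinusoidal drive through the hysteretic map (radii/weights assumed).
t = np.linspace(0.0, 2.0 * np.pi, 200)
u = np.sin(t)
y = pi_model(u, radii=[0.0, 0.1, 0.2], weights=[1.0, 0.5, 0.25])
```

Plotting y against u traces the familiar hysteresis loop; inverse compensation amounts to constructing the inverse PI map and pre-filtering the reference through it.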
Changing basal conditions during the speed-up of Jakobshavn Isbræ, Greenland
NASA Astrophysics Data System (ADS)
Habermann, M.; Truffer, M.; Maxwell, D.
2013-06-01
Ice-sheet outlet glaciers can undergo dynamic changes such as the rapid speed-up of Jakobshavn Isbræ following the disintegration of its floating ice tongue. These changes are associated with stress changes on the boundary of the ice mass. We investigate the basal conditions throughout a well-observed period of rapid change and evaluate parameterizations currently used in ice-sheet models. A Tikhonov inverse method with a Shallow Shelf Approximation forward model is used for diagnostic inversions for the years 1985, 2000, 2005, 2006 and 2008. Our ice softness, model norm, and regularization parameter choices are justified using the data-model misfit metric and the L-curve method. The sensitivity of the inversion results to these parameter choices is explored. We find a lowering of basal yield stress in the first 7 km of the 2008 grounding line and no significant changes higher upstream. The temporal evolution in the fast flow area is in broad agreement with a Mohr-Coulomb parameterization of basal shear stress, but with a till friction angle much lower than has been measured for till samples. The lowering of basal yield stress is significant within the uncertainties of the inversion, but it cannot be ruled out that there are other significant contributors to the acceleration of the glacier.
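The L-curve method invoked above trades data misfit against model norm across regularization weights; a minimal sketch with a hypothetical linear forward operator (the glaciological inversion itself is nonlinear, so this only illustrates the principle).

```python
import numpy as np

rng = np.random.default_rng(1)
G = rng.normal(size=(40, 10))                  # hypothetical forward operator
m_true = rng.normal(size=10)
d = G @ m_true + 0.05 * rng.normal(size=40)    # noisy synthetic data

# Scan the regularization weight and record the L-curve coordinates:
# log data misfit versus log model norm for each Tikhonov solution.
lams = np.logspace(-4, 2, 30)
misfit, mnorm = [], []
for lam in lams:
    m = np.linalg.solve(G.T @ G + lam * np.eye(10), G.T @ d)
    misfit.append(np.log(np.linalg.norm(G @ m - d)))
    mnorm.append(np.log(np.linalg.norm(m)))
# The preferred weight sits at the corner of the (misfit, mnorm) curve,
# picked visually or by maximum curvature in log-log space.
```

Increasing the weight monotonically raises the misfit and lowers the model norm; the corner marks the point beyond which extra smoothing costs data fit without meaningfully simplifying the model.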
NASA Astrophysics Data System (ADS)
Keating, Elizabeth H.; Doherty, John; Vrugt, Jasper A.; Kang, Qinjun
2010-10-01
Highly parameterized and CPU-intensive groundwater models are increasingly being used to understand and predict flow and transport through aquifers. Despite their frequent use, these models pose significant challenges for parameter estimation and predictive uncertainty analysis algorithms, particularly global methods, which usually require very large numbers of forward runs. Here we present a general methodology for parameter estimation and uncertainty analysis that can be utilized in these situations. Our proposed method includes extraction of a surrogate model that mimics key characteristics of a full process model, followed by testing and implementation of a pragmatic uncertainty analysis technique, called null-space Monte Carlo (NSMC), that merges the strengths of gradient-based search and parameter dimensionality reduction. As part of the surrogate model analysis, the results of NSMC are compared with a formal Bayesian approach using the DiffeRential Evolution Adaptive Metropolis (DREAM) algorithm. Such a comparison has never been accomplished before, especially in the context of high parameter dimensionality. Despite the highly nonlinear nature of the inverse problem, the existence of multiple local minima, and the relatively large parameter dimensionality, both methods performed well, and their results compare favorably with each other. Experiences gained from the surrogate model analysis are then transferred to calibrate the full, highly parameterized and CPU-intensive groundwater model and to explore the uncertainty of predictions made by that model. The methodology presented here is generally applicable to any highly parameterized and CPU-intensive environmental model, where efficient methods such as NSMC provide the only practical means for conducting predictive uncertainty analysis.
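In a linearized setting, the null-space Monte Carlo idea can be sketched in a few lines: an SVD of a (hypothetical) Jacobian separates parameter combinations constrained by the data from those that are not, and random excursions within the null space preserve the fit.

```python
import numpy as np

rng = np.random.default_rng(2)
J = rng.normal(size=(5, 12))     # hypothetical Jacobian: 5 obs, 12 parameters
m_cal = rng.normal(size=12)      # a "calibrated" parameter set
d = J @ m_cal                    # observations it reproduces (linearized)

# The SVD separates data-constrained directions from the null space.
U, s, Vt = np.linalg.svd(J)
V_null = Vt[5:].T                # null-space basis, shape (12, 7)

# Random parameter fields that leave the (linearized) fit unchanged.
samples = m_cal[:, None] + V_null @ rng.normal(size=(7, 100))
residuals = J @ samples - d[:, None]
```

In the real nonlinear problem, each null-space sample is re-checked against the objective function and, if needed, polished with a few gradient-based iterations, which is what keeps NSMC cheap relative to fully global sampling.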
NASA Astrophysics Data System (ADS)
Zielke, Olaf; McDougall, Damon; Mai, Martin; Babuska, Ivo
2014-05-01
Seismic data, often augmented with geodetic data, are frequently used to invert for the spatio-temporal evolution of slip along a rupture plane. The resulting images of the slip evolution for a single event, inferred by different research teams, often vary distinctly, depending on the adopted inversion approach and rupture model parameterization. This observation raises the question of which of the provided kinematic source inversion solutions is most reliable and most robust, and, more generally, how accurate fault parameterization and solution predictions are. These issues are not addressed in "standard" source inversion approaches. Here, we present a statistical inversion approach to constrain kinematic rupture parameters from teleseismic body waves. The approach is based (a) on a forward-modeling scheme that computes synthetic (body-)waves for a given kinematic rupture model, and (b) on the QUESO (Quantification of Uncertainty for Estimation, Simulation, and Optimization) library, which uses MCMC algorithms and Bayes' theorem for sample selection. We present Bayesian inversions for rupture parameters in synthetic earthquakes (i.e., for which the exact rupture history is known) in an attempt to identify the cross-over at which further model discretization (spatial and temporal resolution of the parameter space) no longer yields a decreasing misfit. Identification of this cross-over is of importance because it reveals the resolution power of the studied data set (i.e., teleseismic body waves), enabling one to constrain kinematic earthquake rupture histories of real earthquakes at a resolution that is supported by the data. In addition, the Bayesian approach allows for mapping complete posterior probability density functions of the desired kinematic source parameters, thus enabling us to rigorously assess the uncertainties in earthquake source inversions.
[Formula: see text] regularity properties of singular parameterizations in isogeometric analysis.
Takacs, T; Jüttler, B
2012-11-01
Isogeometric analysis (IGA) is a numerical simulation method which is directly based on the NURBS-based representation of CAD models. It exploits the tensor-product structure of 2- or 3-dimensional NURBS objects to parameterize the physical domain. Hence the physical domain is parameterized with respect to a rectangle or to a cube. Consequently, singularly parameterized NURBS surfaces and NURBS volumes are needed in order to represent non-quadrangular or non-hexahedral domains without splitting, thereby producing a very compact and convenient representation. The Galerkin projection introduces finite-dimensional spaces of test functions in the weak formulation of partial differential equations. In particular, the test functions used in isogeometric analysis are obtained by composing the inverse of the domain parameterization with the NURBS basis functions. In the case of singular parameterizations, however, some of the resulting test functions do not necessarily fulfill the required regularity properties. Consequently, numerical methods for the solution of partial differential equations cannot be applied properly. We discuss the regularity properties of the test functions. For one- and two-dimensional domains we consider several important classes of singularities of NURBS parameterizations. For specific cases we derive additional conditions which guarantee the regularity of the test functions. In addition we present a modification scheme for the discretized function space in case of insufficient regularity. It is also shown how these results can be applied for computational domains in higher dimensions that can be parameterized via sweeping.
Changing basal conditions during the speed-up of Jakobshavn Isbræ, Greenland
NASA Astrophysics Data System (ADS)
Habermann, M.; Truffer, M.; Maxwell, D.
2013-11-01
Ice-sheet outlet glaciers can undergo dynamic changes such as the rapid speed-up of Jakobshavn Isbræ following the disintegration of its floating ice tongue. These changes are associated with stress changes on the boundary of the ice mass. We invert for basal conditions from surface velocity data throughout a well-observed period of rapid change and evaluate parameterizations currently used in ice-sheet models. A Tikhonov inverse method with a shallow-shelf approximation forward model is used for diagnostic inversions for the years 1985, 2000, 2005, 2006 and 2008. Our ice-softness, model-norm, and regularization-parameter choices are justified using the data-model misfit metric and the L-curve method. The sensitivity of the inversion results to these parameter choices is explored. We find a lowering of effective basal yield stress in the first 7 km upstream from the 2008 grounding line and no significant changes higher upstream. The temporal evolution in the fast flow area is in broad agreement with a Mohr-Coulomb parameterization of basal shear stress, but with a till friction angle much lower than has been measured for till samples. The lowering of effective basal yield stress is significant within the uncertainties of the inversion, but it cannot be ruled out that there are other significant contributors to the acceleration of the glacier.
Parameterizing the Spatial Markov Model From Breakthrough Curve Data Alone
NASA Astrophysics Data System (ADS)
Sherman, Thomas; Fakhari, Abbas; Miller, Savannah; Singha, Kamini; Bolster, Diogo
2017-12-01
The spatial Markov model (SMM) is an upscaled Lagrangian model that effectively captures anomalous transport across a diverse range of hydrologic systems. The distinct feature of the SMM relative to other random walk models is that successive steps are correlated. To date, with some notable exceptions, the model has primarily been applied to data from high-resolution numerical simulations and correlation effects have been measured from simulated particle trajectories. In real systems such knowledge is practically unattainable and the best one might hope for is breakthrough curves (BTCs) at successive downstream locations. We introduce a novel methodology to quantify velocity correlation from BTC data alone. By discretizing two measured BTCs into a set of arrival times and developing an inverse model, we estimate velocity correlation, thereby enabling parameterization of the SMM in studies where detailed Lagrangian velocity statistics are unavailable. The proposed methodology is applied to two synthetic numerical problems, where we measure all details and thus test the veracity of the approach by comparison of estimated parameters with known simulated values. Our results suggest that our estimated transition probabilities agree with simulated values and using the SMM with this estimated parameterization accurately predicts BTCs downstream. Our methodology naturally allows for estimates of uncertainty by calculating lower and upper bounds of velocity correlation, enabling prediction of a range of BTCs. The measured BTCs fall within the range of predicted BTCs. This novel method to parameterize the SMM from BTC data alone is quite parsimonious, thereby widening the SMM's practical applicability.
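The correlated-step structure of the SMM can be sketched with a two-velocity-class random walk; the transition matrix and per-step travel times below are assumed for illustration, and the resulting arrival-time distribution is the downstream breakthrough curve.

```python
import numpy as np

# Two-class spatial Markov walk: a particle's velocity class persists
# between successive steps according to a transition matrix (assumed).
P = np.array([[0.8, 0.2],          # row i: next-class probabilities after class i
              [0.2, 0.8]])
dt_class = np.array([10.0, 1.0])   # travel time per step: slow, fast (assumed)

rng = np.random.default_rng(3)
n_part, n_steps = 5000, 20
states = rng.integers(0, 2, size=n_part)
arrival = np.zeros(n_part)
for _ in range(n_steps):
    arrival += dt_class[states]
    states = np.where(rng.random(n_part) < P[states, 0], 0, 1)

# "arrival" now holds the downstream breakthrough times of all particles.
```

With no correlation (all transition probabilities 0.5) the arrival distribution narrows toward the uncorrelated random-walk limit; the diagonal-heavy matrix above produces the heavier tails that characterize anomalous transport, which is the behavior the proposed BTC-based estimation recovers.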
NASA Astrophysics Data System (ADS)
Revil, A.; Jardani, A.; Dupont, J.
2012-12-01
The assessment of hydraulic conductivity of heterogeneous aquifers is a difficult task using traditional hydrogeological methods (e.g., steady state or transient pumping tests) due to their low spatial resolution associated with a low density of available piezometers. Geophysical measurements performed at the ground surface and in boreholes provide additional information for increasing the resolution and accuracy of the inverted hydraulic conductivity. We use a stochastic joint inversion of Direct Current (DC) resistivity and Self-Potential (SP) data plus in situ measurement of the salinity in a downstream well during a synthetic salt tracer experiment to reconstruct the hydraulic conductivity field of a heterogeneous aquifer. The pilot point parameterization is used to avoid over-parameterization of the inverse problem. Bounds on the model parameters are used to promote a consistent Markov chain Monte Carlo sampling of the hydrogeological parameters of the model. To evaluate the effectiveness of the inversion process, we compare several scenarios in which the geophysical data are coupled or not to the hydrogeological data to map the hydraulic conductivity. We first test the effectiveness of the inversion of each type of data alone, and then we combine the methods two by two. We finally combine all the information together to show the value of each type of geophysical data in the joint inversion process because of their different sensitivity maps. The results of the inversion reveal that the self-potential data improve the estimate of hydraulic conductivity, especially when the self-potential data are combined with the salt concentration measurement in the second well or with the time-lapse electrical resistivity data. Various tests are also performed to quantify the uncertainty in the inversion when, for instance, the semi-variogram is not known and its parameters must be inverted as well.
Run-to-Run Optimization Control Within Exact Inverse Framework for Scan Tracking.
Yeoh, Ivan L; Reinhall, Per G; Berg, Martin C; Chizeck, Howard J; Seibel, Eric J
2017-09-01
A run-to-run optimization controller uses a reduced set of measurement parameters, in comparison to more general feedback controllers, to converge to the best control point for a repetitive process. A new run-to-run optimization controller is presented for the scanning fiber device used for image acquisition and display. This controller utilizes very sparse measurements to estimate a system energy measure and updates the input parameterizations iteratively within a feedforward, exact-inversion framework. Analysis, simulation, and experimental investigations on the scanning fiber device demonstrate improved scan accuracy over previous methods and automatic controller adaptation to changing operating temperature. A specific application example and quantitative error analyses are provided for a scanning fiber endoscope that maintains high image quality continuously across a 20 °C temperature rise without interruption of the 56 Hz video.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Yu; Gao, Kai; Huang, Lianjie
Accurate imaging and characterization of fracture zones is crucial for geothermal energy exploration. Aligned fractures within fracture zones behave as anisotropic media for seismic-wave propagation. The anisotropic properties in fracture zones introduce extra difficulties for seismic imaging and waveform inversion. We have recently developed a new anisotropic elastic-waveform inversion method using a modified total-variation regularization scheme and a wave-energy-based preconditioning technique. Our new inversion method uses the parameterization of elasticity constants to describe anisotropic media, and hence it can properly handle arbitrary anisotropy. We apply our new inversion method to 2D seismic data acquired along a line at Eleven-Mile Canyon, located in the southern Dixie Valley in Nevada, for geothermal energy exploration. Our inversion results show that anisotropic elastic-waveform inversion has the potential to reconstruct subsurface anisotropic elastic parameters for imaging and characterization of fracture zones.
A python framework for environmental model uncertainty analysis
White, Jeremy; Fienen, Michael N.; Doherty, John E.
2016-01-01
We have developed pyEMU, a python framework for Environmental Modeling Uncertainty analyses, an open-source tool that is non-intrusive, easy-to-use, computationally efficient, and scalable to highly-parameterized inverse problems. The framework implements several types of linear (first-order second-moment, or FOSM) and non-linear uncertainty analyses. The FOSM-based analyses can also be completed prior to parameter estimation to help inform important modeling decisions, such as parameterization and objective function formulation. Complete workflows for several types of FOSM-based and non-linear analyses are documented in example notebooks implemented using Jupyter that are available in the online pyEMU repository. Example workflows include basic parameter and forecast analyses, data worth analyses, and error-variance analyses, as well as usage of parameter ensemble generation and management capabilities. These workflows document the necessary steps and provide insight into the results, with the goal of educating users not only in how to apply pyEMU, but also in the underlying theory of applied uncertainty quantification.
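The linear FOSM analysis pyEMU implements can be sketched directly in numpy. This is a generic FOSM posterior-covariance calculation under a Gaussian-linear assumption, not pyEMU's API; the Jacobian, prior, and noise values below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

# Jacobian of n observations with respect to p parameters (the quantity
# PEST stores in a JCO file); values here are random stand-ins.
n, p = 30, 10
J = rng.normal(size=(n, p))
C_prior = np.eye(p) * 1.0          # prior parameter covariance
R_obs = np.eye(n) * 0.1            # observation noise covariance

# FOSM (first-order second-moment) posterior parameter covariance:
# Schur-complement form (J^T R^-1 J + C_prior^-1)^-1.
C_post = np.linalg.inv(J.T @ np.linalg.inv(R_obs) @ J + np.linalg.inv(C_prior))

# Forecast uncertainty: y is the sensitivity of some prediction to the
# parameters; its variance can only shrink when data are assimilated.
y = rng.normal(size=p)
var_prior = y @ C_prior @ y
var_post = y @ C_post @ y
```

Because only the Jacobian appears, this analysis can indeed be run before any parameter estimation, which is the basis for the data-worth workflows mentioned above.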
NASA Astrophysics Data System (ADS)
Jardani, A.; Revil, A.; Dupont, J. P.
2013-02-01
The assessment of hydraulic conductivity of heterogeneous aquifers is a difficult task using traditional hydrogeological methods (e.g., steady state or transient pumping tests) due to their low spatial resolution. Geophysical measurements performed at the ground surface and in boreholes provide additional information for increasing the resolution and accuracy of the inverted hydraulic conductivity field. We used a stochastic joint inversion of Direct Current (DC) resistivity and self-potential (SP) data plus in situ measurement of the salinity in a downstream well during a synthetic salt tracer experiment to reconstruct the hydraulic conductivity field between two wells. The pilot point parameterization was used to avoid over-parameterization of the inverse problem. Bounds on the model parameters were used to promote a consistent Markov chain Monte Carlo sampling of the model parameters. To evaluate the effectiveness of the joint inversion process, we compared eight cases in which the geophysical data are coupled or not to the in situ sampling of the salinity to map the hydraulic conductivity. We first tested the effectiveness of the inversion of each type of data alone (concentration sampling, self-potential, and DC resistivity), and then we combined the data two by two. We finally combined all the data together to show the value of each type of geophysical data in the joint inversion process because of their different sensitivity maps. We also investigated a case in which the data were contaminated with noise and the variogram was unknown and inverted stochastically. The results of the inversion revealed that incorporating the self-potential data improves the estimate of the hydraulic conductivity field, especially when the self-potential data were combined with the salt concentration measurement in the second well or with the time-lapse cross-well electrical resistivity data. 
Various tests were also performed to quantify the uncertainty in the inverted hydraulic conductivity field.
Transdimensional Bayesian tomography of the lowermost mantle from shear waves
NASA Astrophysics Data System (ADS)
Richardson, C.; Mousavi, S. S.; Tkalcic, H.; Masters, G.
2017-12-01
The lowermost layer of the mantle, known as D'', is a complex region that contains significant heterogeneities on different spatial scales and a wide range of physical and chemical features such as partial melting, seismic anisotropy, and variations in thermal and chemical composition. The most powerful tools we have to probe this region are seismic waves and corresponding imaging techniques such as tomography. Recently, we developed compressional velocity tomograms of D'' using a transdimensional Bayesian inversion, where the model parameterization is not explicit and regularization is not required. This has produced a far more nuanced P-wave velocity model of D'' than that from traditional S-wave tomography. We also note that P-wave models of D'' vary much more significantly among various research groups than the corresponding S-wave models. This study therefore seeks to develop a new S-wave velocity model of D'' underneath Australia by using predominantly ScS-S differential travel times measured through waveform correlation and Bayesian transdimensional inversion to further understand and characterize heterogeneities in D''. We used events at depths of over 200 km and with magnitudes between 6.0 and 6.7, at epicentral distances between 45 and 75 degrees from stations in Australia. Because of globally incomplete coverage of station and earthquake locations, a major limitation of deep earth tomography has been the explicit parameterization of the region of interest. Explicit parameterization has been foundational in most studies, but faces inherent problems of either over-smoothing the data or allowing for too much noise. To avoid this, we use spherical Voronoi polygons, which allow for a high level of flexibility as the polygons can grow, shrink, or be altogether deleted throughout a sequence of iterations. Our technique also yields highly desired model parameter uncertainties. 
While there is little doubt that D'' is heterogeneous, there is still much that is unclear about the extent and spatial distribution of different heterogeneous domains, as there are open questions about their dynamics and chemical interactions in the context of the surrounding mantle and outer core. In this context, our goal is also to quantify and understand the differences between S-wave and P-wave velocity tomographic models.
Training-Image Based Geostatistical Inversion Using a Spatial Generative Adversarial Neural Network
NASA Astrophysics Data System (ADS)
Laloy, Eric; Hérault, Romain; Jacques, Diederik; Linde, Niklas
2018-01-01
Probabilistic inversion within a multiple-point statistics framework is often computationally prohibitive for high-dimensional problems. To partly address this, we introduce and evaluate a new training-image based inversion approach for complex geologic media. Our approach relies on a deep neural network of the generative adversarial network (GAN) type. After training using a training image (TI), our proposed spatial GAN (SGAN) can quickly generate 2-D and 3-D unconditional realizations. A key characteristic of our SGAN is that it defines a (very) low-dimensional parameterization, thereby allowing for efficient probabilistic inversion using state-of-the-art Markov chain Monte Carlo (MCMC) methods. In addition, available direct conditioning data can be incorporated within the inversion. Several 2-D and 3-D categorical TIs are first used to analyze the performance of our SGAN for unconditional geostatistical simulation. Training our deep network can take several hours. After training, realizations containing a few million pixels/voxels can be produced in a matter of seconds. This makes it especially useful for simulating many thousands of realizations (e.g., for MCMC inversion) as the relative cost of the training per realization diminishes with the considered number of realizations. Synthetic inversion case studies involving 2-D steady state flow and 3-D transient hydraulic tomography with and without direct conditioning data are used to illustrate the effectiveness of our proposed SGAN-based inversion. For the 2-D case, the inversion rapidly explores the posterior model distribution. For the 3-D case, the inversion recovers model realizations that fit the data close to the target level and visually resemble the true model well.
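The benefit of a low-dimensional parameterization for MCMC can be sketched with a random-walk Metropolis sampler over a small latent vector. Here a linear map stands in for the trained SGAN generator and another for the forward model; all operators, dimensions, and the noise level are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

# Stand-ins: a "generator" G mapping a low-dimensional latent z to a model
# field, and a forward operator F mapping that field to data. Both are
# linear here purely for illustration; in the paper G is the trained SGAN.
k = 5                                        # latent dimension
G = rng.normal(size=(50, k))                 # generator stand-in
F = rng.normal(size=(8, 50)) / np.sqrt(50)   # forward-model stand-in
sigma = 1.0                                  # assumed data noise level
z_true = rng.normal(size=k)
d_obs = F @ G @ z_true + sigma * rng.normal(size=8)

def log_post(z):
    r = F @ G @ z - d_obs
    return -0.5 * r @ r / sigma**2 - 0.5 * z @ z   # Gaussian prior on z

# Random-walk Metropolis in the low-dimensional latent space.
z, lp = np.zeros(k), log_post(np.zeros(k))
samples = []
for _ in range(6000):
    z_prop = z + 0.3 * rng.normal(size=k)
    lp_prop = log_post(z_prop)
    if np.log(rng.random()) < lp_prop - lp:
        z, lp = z_prop, lp_prop
    samples.append(z.copy())
samples = np.array(samples[1000:])           # discard burn-in
z_mean = samples.mean(axis=0)
```

With only k latent parameters the chain mixes quickly; sampling the full 50-dimensional field directly would require far longer chains, which is the point of the SGAN parameterization.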
NASA Astrophysics Data System (ADS)
Lambrakos, S. G.
2018-04-01
Inverse thermal analysis of Ti-6Al-4V friction stir welds is presented that demonstrates application of a methodology using numerical-analytical basis functions and temperature-field constraint conditions. This analysis provides parametric representation of friction-stir-weld temperature histories that can be adopted as input data to computational procedures for prediction of solid-state phase transformations and mechanical response. These parameterized temperature histories can be used for inverse thermal analysis of friction stir welds having process conditions similar to those considered here. Case studies are presented for inverse thermal analysis of friction stir welds that use three-dimensional constraint conditions on calculated temperature fields, which are associated with experimentally measured transformation boundaries and weld-stir-zone cross sections.
The Collaborative Seismic Earth Model: Generation 1
NASA Astrophysics Data System (ADS)
Fichtner, Andreas; van Herwaarden, Dirk-Philip; Afanasiev, Michael; Simutė, Saulė; Krischer, Lion; Çubuk-Sabuncu, Yeşim; Taymaz, Tuncay; Colli, Lorenzo; Saygin, Erdinc; Villaseñor, Antonio; Trampert, Jeannot; Cupillard, Paul; Bunge, Hans-Peter; Igel, Heiner
2018-05-01
We present a general concept for evolutionary, collaborative, multiscale inversion of geophysical data, specifically applied to the construction of a first-generation Collaborative Seismic Earth Model. This is intended to address the limited resources of individual researchers and the often limited use of previously accumulated knowledge. Model evolution rests on a Bayesian updating scheme, simplified into a deterministic method that honors today's computational restrictions. The scheme is able to harness distributed human and computing power. It furthermore handles conflicting updates, as well as variable parameterizations of different model refinements or different inversion techniques. The first-generation Collaborative Seismic Earth Model comprises 12 refinements from full seismic waveform inversion, ranging from regional crustal- to continental-scale models. A global full-waveform inversion ensures that regional refinements translate into whole-Earth structure.
Linear Approximation to Optimal Control Allocation for Rocket Nozzles with Elliptical Constraints
NASA Technical Reports Server (NTRS)
Orr, Jeb S.; Wall, John W.
2011-01-01
In this paper we present a straightforward technique for assessing and realizing the maximum control moment effectiveness for a launch vehicle with multiple constrained rocket nozzles, where elliptical deflection limits in gimbal axes are expressed as an ensemble of independent quadratic constraints. A direct method of determining an approximating ellipsoid that inscribes the set of attainable angular accelerations is derived. In the case of a parameterized linear generalized inverse, the geometry of the attainable set is computationally expensive to obtain but can be approximated to a high degree of accuracy with the proposed method. A linear inverse can then be optimized to maximize the volume of the true attainable set by maximizing the volume of the approximating ellipsoid. The use of a linear inverse does not preclude the use of linear methods for stability analysis and control design, preferred in practice for assessing the stability characteristics of the inertial and servoelastic coupling appearing in large boosters. The present techniques are demonstrated via application to the control allocation scheme for a concept heavy-lift launch vehicle.
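A minimal version of constrained allocation through a linear generalized inverse can be sketched as follows. This is not the paper's ellipsoid-volume optimization: it uses the plain Moore-Penrose pseudoinverse and a simple radial scale-back onto each nozzle's elliptical limit, and the effectiveness matrix and limits are illustrative assumptions.

```python
import numpy as np

# Effectiveness matrix B: maps four gimbal-deflection commands (two nozzles,
# two axes each) to three body angular accelerations. Values illustrative.
B = np.array([[1.0, 0.3, 0.9, 0.2],
              [0.2, 1.1, 0.1, 1.0],
              [0.5, 0.5, -0.5, -0.5]])

B_pinv = np.linalg.pinv(B)     # minimum-norm linear generalized inverse

def allocate(a_des, limits):
    """Allocate desired acceleration a_des, then radially scale each
    nozzle's command back onto its elliptical deflection limit if the
    quadratic constraint (u_x/a)^2 + (u_y/b)^2 <= 1 is violated."""
    u = B_pinv @ a_des
    for i in range(0, len(u), 2):          # (u[i], u[i+1]) = one nozzle
        ax, ay = limits[i // 2]
        r = (u[i] / ax) ** 2 + (u[i + 1] / ay) ** 2
        if r > 1.0:
            u[i:i + 2] /= np.sqrt(r)       # scale back onto the ellipse
    return u

a_des = np.array([0.5, -0.2, 0.1])
u = allocate(a_des, limits=[(0.1, 0.15), (0.1, 0.15)])
```

Note that scaling back sacrifices exactness of the achieved moment, which is why the paper instead optimizes the linear inverse itself to enlarge the attainable set.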
NASA Astrophysics Data System (ADS)
Lay, T.; Ammon, C. J.
2010-12-01
An unusually large number of widely distributed great earthquakes have occurred in the past six years, with extensive data sets of teleseismic broadband seismic recordings being available in near-real time for each event. Numerous research groups have implemented finite-fault inversions that utilize the rapidly accessible teleseismic recordings, and slip models are regularly determined and posted on websites for all major events. The source inversion validation project has already demonstrated that for events of all sizes there is often significant variability in models for a given earthquake. Some of these differences can be attributed to variations in data sets and procedures used for including signals with very different bandwidth and signal characteristics into joint inversions. Some differences can also be attributed to choice of velocity structure and data weighting. However, our experience is that some of the primary causes of solution variability involve rupture model parameterization and imposed kinematic constraints such as rupture velocity and subfault source time function description. In some cases it is viable to rapidly perform separate procedures such as teleseismic array back-projection or surface wave directivity analysis to reduce the uncertainties associated with rupture velocity, and it is possible to explore a range of subfault source parameterizations to place some constraints on which model features are robust. In general, many such tests are performed but not fully described, with single model solutions posted or published and limited insight into solution confidence conveyed. Using signals from recent great earthquakes in the Kuril Islands, Solomon Islands, Peru, Chile and Samoa, we explore issues of uncertainty and robustness of solutions that can be rapidly obtained by inversion of teleseismic signals. Formalizing uncertainty estimates remains a formidable undertaking and some aspects of that challenge will be addressed.
Testing earthquake source inversion methodologies
Page, M.; Mai, P.M.; Schorlemmer, D.
2011-01-01
Source Inversion Validation Workshop; Palm Springs, California, 11-12 September 2010; Nowadays earthquake source inversions are routinely performed after large earthquakes and represent a key connection between recorded seismic and geodetic data and the complex rupture process at depth. The resulting earthquake source models quantify the spatiotemporal evolution of ruptures. They are also used to provide a rapid assessment of the severity of an earthquake and to estimate losses. However, because of uncertainties in the data, assumed fault geometry and velocity structure, and chosen rupture parameterization, it is not clear which features of these source models are robust. Improved understanding of the uncertainty and reliability of earthquake source inversions will allow the scientific community to use the robust features of kinematic inversions to more thoroughly investigate the complexity of the rupture process and to better constrain other earthquake-related computations, such as ground motion simulations and static stress change calculations.
Satellite Imagery Analysis for Nighttime Temperature Inversion Clouds
NASA Technical Reports Server (NTRS)
Kawamoto, K.; Minnis, P.; Arduini, R.; Smith, W., Jr.
2001-01-01
Clouds play important roles in the climate system. Their optical and microphysical properties, which largely determine their radiative properties, need to be investigated. Among several measurement approaches, satellite remote sensing seems the most promising. Since most of the cloud algorithms proposed so far are intended for daytime use and rely on solar radiation, Minnis et al. (1998) developed a nighttime algorithm using the 3.7-, 11-, and 12-micron channels. Their algorithm, however, has the drawback that it cannot treat temperature inversion cases. We update their algorithm by incorporating a new parameterization by Arduini et al. (1999) that is valid for temperature inversion cases. The updated algorithm has been applied to GOES satellite data, and reasonable retrieval results were obtained.
Hartzell, S.; Liu, P.
1996-01-01
A method is presented for the simultaneous calculation of slip amplitudes and rupture times for a finite fault using a hybrid global search algorithm. The method we use combines simulated annealing with the downhill simplex method to produce a more efficient search algorithm than either of the two constituent parts. This formulation has advantages over traditional iterative or linearized approaches to the problem because it is able to escape local minima in its search through model space for the global optimum. We apply this global search method to the calculation of the rupture history for the Landers, California, earthquake. The rupture is modeled using three separate finite-fault planes to represent the three main fault segments that failed during this earthquake. Both the slip amplitude and the time of slip are calculated for a gridwork of subfaults. The data used consist of digital, teleseismic P and SH body waves. Long-period, broadband, and short-period records are utilized to obtain a wideband characterization of the source. The results of the global search inversion are compared with a more traditional linear-least-squares inversion for only slip amplitudes. We use a multi-time-window linear analysis to relax the constraints on rupture time and rise time in the least-squares inversion. Both inversions produce similar slip distributions, although the linear-least-squares solution has a 10% larger moment (7.3 × 10^26 dyne-cm compared with 6.6 × 10^26 dyne-cm). Both inversions fit the data equally well and point out the importance of (1) using a parameterization with sufficient spatial and temporal flexibility to encompass likely complexities in the rupture process, (2) including suitable physically based constraints on the inversion to reduce instabilities in the solution, and (3) focusing on those robust rupture characteristics that rise above the details of the parameterization and data set.
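The hybrid of simulated annealing with downhill-simplex polishing can be sketched on a deliberately multimodal test function. The misfit function, jump sizes, and cooling schedule below are illustrative assumptions standing in for the waveform-misfit problem.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)

def misfit(x):
    """Multimodal stand-in for a waveform-misfit surface (Rastrigin)."""
    return 10 * len(x) + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

# Hybrid search: simulated-annealing acceptance of random jumps, with a
# downhill-simplex (Nelder-Mead) polish applied to each proposal.
x = rng.uniform(-4, 4, size=2)
f = misfit(x)
best_x, best_f = x.copy(), f
T = 5.0
for _ in range(200):
    x_prop = x + rng.normal(scale=0.8, size=2)
    x_prop = minimize(misfit, x_prop, method="Nelder-Mead").x  # simplex polish
    f_prop = misfit(x_prop)
    if f_prop < f or rng.random() < np.exp((f - f_prop) / T):
        x, f = x_prop, f_prop
        if f < best_f:
            best_x, best_f = x.copy(), f
    T *= 0.97                          # geometric cooling
```

The annealing step lets the walker hop between basins, while the simplex polish quickly finds each basin's floor, which is the efficiency argument the abstract makes for the hybrid.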
NASA Astrophysics Data System (ADS)
Delay, Frederick; Badri, Hamid; Fahs, Marwan; Ackerer, Philippe
2017-12-01
Dual porosity models are increasingly used for simulating groundwater flow at the large scale in fractured porous media. In this context, model inversions aimed at retrieving the system heterogeneity frequently face huge parameterizations for which descent methods of inversion, assisted by adjoint state calculations, are well suited. We compare the performance of discrete and continuous forms of adjoint states associated with the flow equations in a dual porosity system. The discrete form inherits from previous works by some of the authors, while the continuous form is completely new and here fully differentiated for handling all types of model parameters. Adjoint states assist descent methods by calculating the gradient components of the objective function, these being a key to good convergence of inverse solutions. Our comparison on the basis of synthetic exercises shows that both discrete and continuous adjoint states can provide very similar solutions close to reference. For highly heterogeneous systems, the calculation grid of the continuous form cannot be too coarse, otherwise the method may show lack of convergence. This notwithstanding, the continuous adjoint state is the most versatile form as its non-intrusive character allows for plugging an inversion toolbox quasi-independent from the code employed for solving the forward problem.
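The discrete adjoint-state idea, one extra linear solve yields the entire objective-function gradient, can be sketched on a toy linear forward model. The operator A(p) = A0 + diag(p) and all values below are illustrative stand-ins, not the dual-porosity equations.

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy "forward model": A(p) u = b with A(p) = A0 + diag(p), a stand-in
# for a discretized flow operator parameterized by p.
n = 12
M = rng.normal(size=(n, n))
A0 = M @ M.T + n * np.eye(n)        # symmetric positive definite base
b = rng.normal(size=n)
d = rng.normal(size=n)              # "observed" heads

def forward(p):
    return np.linalg.solve(A0 + np.diag(p), b)

def objective(p):
    r = forward(p) - d
    return 0.5 * r @ r

def gradient_adjoint(p):
    """Discrete adjoint state: one adjoint solve gives the full gradient.
    With dA/dp_i = e_i e_i^T, the result is dJ/dp_i = -lam_i * u_i."""
    A = A0 + np.diag(p)
    u = np.linalg.solve(A, b)
    lam = np.linalg.solve(A.T, u - d)      # adjoint solve
    return -lam * u

p = np.abs(rng.normal(size=n))
g = gradient_adjoint(p)

# Finite-difference check of one gradient component.
eps = 1e-6
e0 = np.zeros(n)
e0[0] = eps
g_fd = (objective(p + e0) - objective(p - e0)) / (2 * eps)
```

Two solves per gradient, regardless of the number of parameters, is what makes adjoint-assisted descent viable for the huge parameterizations the abstract describes.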
Characterize kinematic rupture history of large earthquakes with Multiple Haskell sources
NASA Astrophysics Data System (ADS)
Jia, Z.; Zhan, Z.
2017-12-01
Earthquakes are often regarded as continuous rupture along a single fault, but the occurrence of complex large events involving multiple faults and dynamic triggering challenges this view. Such rupture complexities cause difficulties in existing finite fault inversion algorithms, because they rely on specific parameterizations and regularizations to obtain physically meaningful solutions. Furthermore, it is difficult to assess reliability and uncertainty of obtained rupture models. Here we develop a Multi-Haskell Source (MHS) method to estimate the rupture process of large earthquakes as a series of sub-events of varying location, timing and directivity. Each sub-event is characterized by a Haskell rupture model with uniform dislocation and constant unilateral rupture velocity. This flexible yet simple source parameterization allows us to constrain first-order rupture complexity of large earthquakes robustly. Additionally, the relatively small number of parameters in the inverse problem yields improved uncertainty analysis based on Markov chain Monte Carlo sampling in a Bayesian framework. Synthetic tests and application of the MHS method to real earthquakes show that our method can capture major features of large earthquake rupture processes, and provide information for more detailed rupture history analysis.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pasyanos, M; Gok, R; Zor, E
We investigate the crustal and upper mantle structure of eastern Turkey where the Anatolian, Arabian and Eurasian Plates meet and form a complex tectonic structure. The Bitlis suture is a continental collision zone between the Anatolian plateau and the Arabian plate. Broadband data available through the Eastern Turkey Seismic Experiment (ETSE) provided a unique opportunity for studying the high-resolution velocity structure. Zor et al. found an average 46 km thick crust in the Anatolian plateau using a six-layered grid search inversion of the ETSE receiver functions. Receiver functions are sensitive to the velocity contrast of interfaces and the relative travel time of converted and reverberated waves between those interfaces. The interpretation of receiver functions alone with a many-layered parameterization may result in an apparent depth-velocity tradeoff. In order to improve the previous velocity model, we employed the joint inversion method with the many-layered parameterization of Julia et al. (2000) on the ETSE receiver functions. In this technique, the receiver function and surface-wave observations are combined into a single algebraic equation and each data set is weighted by an estimate of the uncertainty in the observations. We consider azimuthal changes of receiver functions and have stacked them into different groups. We calculated the receiver functions using an iterative time-domain deconvolution technique and surface wave group velocity dispersion curves between 10-100 sec. We are making surface wave dispersion measurements at the ETSE stations and have incorporated them into a regional group velocity model. Preliminary results indicate a strong trend in the long period group velocity in the northeast. This indicates slow upper mantle velocities in the region consistent with Pn, Sn and receiver function results. 
We started with both the 1-D model obtained from the 12-ton dam-explosion shot data recorded by the ETSE network and the existing receiver function inversion results. In fact, we observe that the inversion results are independent of the starting model and converge well to the same final model. We do not observe a significant change in the first-order discontinuities of the model (e.g., Moho depth), but we obtain better defined depths to low velocity layers.
NASA Astrophysics Data System (ADS)
Shimelevich, M. I.; Obornev, E. A.; Obornev, I. E.; Rodionov, E. A.
2017-07-01
The iterative approximation neural network method for solving conditionally well-posed nonlinear inverse problems of geophysics is presented. The method is based on the neural network approximation of the inverse operator. The inverse problem is solved in the class of grid (block) models of the medium on a regularized parameterization grid. The construction principle of this grid relies on using the calculated values of the continuity modulus of the inverse operator and its modifications determining the degree of ambiguity of the solutions. The method provides approximate solutions of inverse problems with the maximal degree of detail given the specified degree of ambiguity with the total number of the sought parameters n × 10^3 of the medium. The a priori and a posteriori estimates of the degree of ambiguity of the approximated solutions are calculated. The work of the method is illustrated by the example of the three-dimensional (3D) inversion of the synthesized 2D areal geoelectrical (audio magnetotelluric sounding, AMTS) data corresponding to the schematic model of a kimberlite pipe.
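The core idea of approximating the inverse operator from simulated forward runs can be sketched with ridge regression standing in for the neural network. The forward model, training distribution, and regularization below are illustrative assumptions, not the paper's AMTS operator or network.

```python
import numpy as np

rng = np.random.default_rng(7)

p, n = 6, 20
W = rng.normal(size=(n, p))

def forward(m):
    """Mildly nonlinear forward operator standing in for the AMTS problem."""
    return np.tanh(W @ m)

# Approximate the inverse operator from simulated (model, data) pairs --
# ridge-regularized linear regression plays the role of the network here.
M_train = rng.uniform(-0.5, 0.5, size=(2000, p))
D_train = np.array([forward(m) for m in M_train])
lam = 1e-3
A = np.linalg.solve(D_train.T @ D_train + lam * np.eye(n), D_train.T @ M_train)

# Apply the learned inverse operator to new data.
m_true = rng.uniform(-0.5, 0.5, size=p)
m_rec = forward(m_true) @ A
```

Once the map is trained, inversion of new data is a single matrix-vector product, which is why such approximated inverse operators are attractive for repeated areal inversions.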
Energy functions for regularization algorithms
NASA Technical Reports Server (NTRS)
Delingette, H.; Hebert, M.; Ikeuchi, K.
1991-01-01
Regularization techniques are widely used for inverse problem solving in computer vision such as surface reconstruction, edge detection, or optical flow estimation. Energy functions used for regularization algorithms measure how smooth a curve or surface is, and to render acceptable solutions these energies must satisfy certain properties such as invariance with Euclidean transformations or invariance with parameterization. The notion of smoothness energy is extended here to the notion of a differential stabilizer, and it is shown that to avoid the systematic underestimation of curvature in planar curve fitting, it is necessary that circles be the curves of maximum smoothness. A set of stabilizers is proposed that meet this condition as well as invariance with rotation and parameterization.
NASA Astrophysics Data System (ADS)
Christensen, H. M.; Berner, J.; Sardeshmukh, P. D.
2017-12-01
Stochastic parameterizations have been used for more than a decade in atmospheric models. They provide a way to represent model uncertainty through representing the variability of unresolved sub-grid processes, and have been shown to have a beneficial effect on the spread and mean state for medium- and extended-range forecasts. There is increasing evidence that stochastic parameterization of unresolved processes can improve the bias in mean and variability, e.g. by introducing a noise-induced drift (nonlinear rectification), and by changing the residence time and structure of flow regimes. We present results showing the impact of including the Stochastically Perturbed Parameterization Tendencies scheme (SPPT) in coupled runs of the National Center for Atmospheric Research (NCAR) Community Atmosphere Model, version 4 (CAM4) with historical forcing. SPPT results in a significant improvement in the representation of the El Niño-Southern Oscillation in CAM4, improving the power spectrum, as well as both the inter- and intra-annual variability of tropical Pacific sea surface temperatures. We use a Linear Inverse Modelling framework to gain insight into the mechanisms by which SPPT has improved ENSO-variability.
NASA Astrophysics Data System (ADS)
Ying, Zhang; Zhengqiang, Li; Yan, Wang
2014-03-01
Anthropogenic aerosols are released into the atmosphere, where they scatter and absorb incoming solar radiation, thus exerting a direct radiative forcing on the climate system. Anthropogenic Aerosol Optical Depth (AOD) calculations are important in the research of climate change. Accumulation-Mode Fractions (AMFs), an anthropogenic aerosol parameter defined as the fraction of total AOD contributed by particulates with diameters smaller than 1 μm, can be calculated by an AOD spectral deconvolution algorithm, and the anthropogenic AODs are then obtained using the AMFs. In this study, we present a parameterization method coupled with an AOD spectral deconvolution algorithm to calculate AMFs in Beijing over 2011. All data are derived from the AErosol RObotic NETwork (AERONET) website. The parameterization method improves the accuracy of the AMFs compared with the constant-truncation-radius method. We find a good correlation using the parameterization method, with a squared correlation coefficient of 0.96 and a mean AMF deviation of 0.028. The parameterization method also effectively corrects the AMF underestimation in winter. It is suggested that variations of the Angstrom index in the coarse mode have significant impacts on AMF inversions.
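Spectral deconvolution algorithms of this kind build on the Angstrom power law for AOD versus wavelength, which is linear in log-log space. The sketch below shows only that underlying fit, not the fine/coarse-mode separation itself; the wavelengths and aerosol values are illustrative.

```python
import numpy as np

# Synthetic AODs at typical AERONET wavelengths (micrometers).
wav = np.array([0.380, 0.440, 0.500, 0.675, 0.870])
alpha_true, beta_true = 1.4, 0.21          # Angstrom exponent and turbidity
aod = beta_true * wav ** (-alpha_true)

# Angstrom's law AOD = beta * lambda^(-alpha) becomes a straight line in
# log-log space, so alpha and beta follow from a linear fit.
slope, intercept = np.polyfit(np.log(wav), np.log(aod), 1)
alpha_fit, beta_fit = -slope, np.exp(intercept)
```

The spectral deconvolution used for AMFs goes one step further, exploiting the curvature of the log-log spectrum to split fine-mode (mostly anthropogenic) from coarse-mode AOD.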
A regional high-resolution carbon flux inversion of North America for 2004
NASA Astrophysics Data System (ADS)
Schuh, A. E.; Denning, A. S.; Corbin, K. D.; Baker, I. T.; Uliasz, M.; Parazoo, N.; Andrews, A. E.; Worthy, D. E. J.
2010-05-01
Resolving the discrepancies between NEE estimates based upon (1) ground studies and (2) atmospheric inversion results demands increasingly sophisticated techniques. In this paper we present a high-resolution inversion based upon a regional meteorology model (RAMS) and an underlying biosphere (SiB3) model, both running on an identical 40 km grid over most of North America. Current operational systems like CarbonTracker, as well as many previous global inversions including the Transcom suite of inversions, have utilized inversion regions formed by collapsing biome-similar grid cells into larger aggregated regions. An extreme example of this might be where corrections to NEE imposed on forested regions on the east coast of the United States are the same as those imposed on forests on the west coast, while, in reality, there likely exist subtle differences between the two areas, both natural and anthropogenic. Our inversion framework combines previously employed inversion techniques while allowing carbon flux corrections to be biome independent. Temporally and spatially high-resolution results utilizing biome-independent corrections provide insight into carbon dynamics in North America. In particular, we analyze hourly CO2 mixing ratio data from a sparse network of eight towers in North America for 2004. A prior estimate of carbon fluxes due to Gross Primary Productivity (GPP) and Ecosystem Respiration (ER) is constructed from the SiB3 biosphere model on a 40 km grid. A combination of transport from the RAMS and the Parameterized Chemical Transport Model (PCTM) models is used to forge a connection between upwind biosphere fluxes and downwind observed CO2 mixing ratio data. A Kalman filter procedure is used to estimate weekly corrections to biosphere fluxes based upon observed CO2. RMSE-weighted annual NEE estimates, over an ensemble of potential inversion parameter sets, show a mean estimate of a 0.57 Pg/yr sink in North America.
We perform the inversion with two independently derived boundary inflow conditions and calculate jackknife-based statistics to test the robustness of the model results. We then compare final results to estimates obtained from the CarbonTracker inversion system and at the Southern Great Plains flux site. Results are promising, showing the ability to correct carbon fluxes from the biosphere models over annual and seasonal time scales, as well as over the different GPP and ER components. Additionally, the correlation of an estimated sink of carbon in the South Central United States with regional anomalously high precipitation in an area of managed agricultural and forest lands provides interesting hypotheses for future work.
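The weekly Kalman-filter correction of biosphere fluxes described in this record can be sketched as a standard Kalman analysis step. The state vector, transport operator `H`, and covariances below are hypothetical stand-ins for illustration, not the study's actual configuration:

```python
import numpy as np

def kalman_update(x_prior, P_prior, H, y_obs, R):
    """One Kalman analysis step: correct prior flux factors given observations.

    x_prior : (n,) prior flux-correction factors (one per region/grid cell)
    P_prior : (n, n) prior error covariance
    H       : (m, n) linearized transport (fluxes -> mixing ratios at towers)
    y_obs   : (m,) observed CO2 mixing-ratio anomalies
    R       : (m, m) observation-error covariance
    """
    S = H @ P_prior @ H.T + R                      # innovation covariance
    K = P_prior @ H.T @ np.linalg.inv(S)           # Kalman gain
    x_post = x_prior + K @ (y_obs - H @ x_prior)   # analysis (posterior) state
    P_post = (np.eye(len(x_prior)) - K @ H) @ P_prior
    return x_post, P_post
```

Run weekly, the analysis state becomes the next week's prior; the gain weights the tower data against the prior according to the two error covariances.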
NASA Astrophysics Data System (ADS)
Sarris, Theo S.; Close, Murray; Abraham, Phillip
2018-03-01
A test using Rhodamine WT and heat as tracers, conducted over a 78 day period in a strongly heterogeneous alluvial aquifer, was used to evaluate the utility of the combined observation dataset for aquifer characterization. A highly parameterized model was inverted, with concentration and temperature time-series as calibration targets. Groundwater heads recorded during the experiment were boundary dependent and were ignored during the inversion process. The inverted model produced a high-resolution depiction of the hydraulic conductivity and porosity fields. Statistical properties of these fields are in very good agreement with estimates from previous studies at the site. Spatially distributed sensitivity analysis suggests that both solute and heat transport were most sensitive to the hydraulic conductivity and porosity fields and less sensitive to dispersivity and thermal distribution factor, with the sensitivity to porosity greatly reduced outside the monitored area. The issues of model over-parameterization and non-uniqueness are addressed through identifiability analysis. Longitudinal dispersivity and thermal distribution factor are highly identifiable; however, spatially distributed parameters are only identifiable near the injection point. Temperature-related density effects became observable for both heat and solute as the temperature anomaly increased above 12 degrees centigrade, and affected down-gradient propagation. Finally, we demonstrate that high-frequency and spatially dense temperature data cannot inform a dual porosity model in the absence of frequent solute concentration measurements.
Towards Linking 3D SAR and Lidar Models with a Spatially Explicit Individual Based Forest Model
NASA Astrophysics Data System (ADS)
Osmanoglu, B.; Ranson, J.; Sun, G.; Armstrong, A. H.; Fischer, R.; Huth, A.
2017-12-01
In this study, we present a parameterization of the FORMIND individual-based gap model (IBGM) for old-growth Atlantic lowland rainforest in La Selva, Costa Rica, for the purpose of informing multisensor remote sensing techniques for aboveground biomass estimation. The model was successfully parameterized and calibrated for the study site; results show that the simulated forest reproduces the structural complexity of Costa Rican rainforest based on comparisons with CARBONO inventory plot data. Though the simulated stem numbers (378) slightly underestimated the plot data (418), particularly for canopy-dominant intermediate shade tolerant trees and shade tolerant understory trees, overall there was only a 9.7% difference. Aboveground biomass (kg/ha) showed a 0.1% difference between the simulated forest and the inventory plot dataset. The Costa Rica FORMIND simulation was then used to parameterize spatially explicit (3-D) SAR and lidar backscatter models. The simulated forest stands were used to generate a Look Up Table (LUT) as a tractable means to estimate aboveground forest biomass for these complex forests. Various combinations of lidar and radar variables were evaluated in the LUT inversion. To test the capability of future data for estimation of forest height and biomass, we considered: 1) L- (or P-) band polarimetric data (backscattering coefficients of HH, HV and VV); 2) L-band dual-pol repeat-pass InSAR data (HH/HV backscattering coefficients and coherences, height of scattering phase center at HH and HV using a DEM or surface height from lidar data as reference); 3) P-band polarimetric InSAR data (canopy height from inversion of PolInSAR data, or the coherences and height of scattering phase center at HH, HV and VV); 4) various height indices from waveform lidar data; and 5) surface and canopy-top height from photon-counting lidar data. The methods for parameterizing the remote sensing models with the IBGM and developing Look Up Tables will be discussed.
Results from various remote sensing scenarios will also be presented.
NASA Astrophysics Data System (ADS)
Zhou, Jianmei; Wang, Jianxun; Shang, Qinglong; Wang, Hongnian; Yin, Changchun
2014-04-01
We present an algorithm for inverting controlled source audio-frequency magnetotelluric (CSAMT) data in horizontally layered transversely isotropic (TI) media. Popular inversion methods (e.g. Occam's inversion) parameterize the media into a large number of layers of fixed thickness and reconstruct only the conductivities, which does not enable recovery of the sharp interfaces between layers. In this paper, we simultaneously reconstruct all the model parameters, including both the horizontal and vertical conductivities and the layer depths. Applying the perturbation principle and the dyadic Green's function in TI media, we derive analytic expressions for the Fréchet derivatives of the CSAMT responses with respect to all the model parameters in the form of Sommerfeld integrals. A regularized iterative inversion method is established to simultaneously reconstruct all the model parameters. Numerical results show that including the depths of the layer interfaces in the inversion significantly improves the results: it not only reconstructs the sharp interfaces between layers but also recovers conductivities close to the true values.
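A regularized iterative inversion of this general kind (and the kind central to the PEST methodology this guide covers) can be sketched, under simplifying assumptions, as Gauss-Newton iteration with a Tikhonov penalty toward a reference model. The forward model and Jacobian below are generic placeholders, not the Sommerfeld-integral CSAMT kernels of the paper:

```python
import numpy as np

def gauss_newton_tikhonov(forward, jacobian, d_obs, m0, beta=1e-2, n_iter=25):
    """Iteratively minimize ||forward(m) - d_obs||^2 + beta * ||m - m0||^2.

    forward  : callable, model parameters -> predicted data
    jacobian : callable, model parameters -> Frechet-derivative matrix
    m0       : starting model, also used as the Tikhonov reference model
    beta     : regularization weight trading data fit against departure from m0
    """
    m = m0.astype(float).copy()
    for _ in range(n_iter):
        r = d_obs - forward(m)                  # data residual
        J = jacobian(m)
        A = J.T @ J + beta * np.eye(len(m))     # regularized normal matrix
        dm = np.linalg.solve(A, J.T @ r + beta * (m0 - m))
        m = m + dm
    return m
```

For a linear forward model this reduces to ridge regression; in the CSAMT setting the Jacobian would hold the Fréchet derivatives with respect to the conductivities and layer depths.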
Welter, David E.; Doherty, John E.; Hunt, Randall J.; Muffels, Christopher T.; Tonkin, Matthew J.; Schreuder, Willem A.
2012-01-01
An object-oriented parameter estimation code was developed to incorporate benefits of object-oriented programming techniques for solving large parameter estimation modeling problems. The code is written in C++ and is a formulation and expansion of the algorithms included in PEST, a widely used parameter estimation code written in Fortran. The new code is called PEST++ and is designed to lower the barriers of entry for users and developers while providing efficient algorithms that can accommodate large, highly parameterized problems. This effort has focused on (1) implementing the most popular features of PEST in a fashion that is easy for novice or experienced modelers to use and (2) creating a software design that is easy to extend; that is, this effort provides a documented object-oriented framework designed from the ground up to be modular and extensible. In addition, all PEST++ source code and its associated libraries, as well as the general run manager source code, have been integrated in the Microsoft Visual Studio® 2010 integrated development environment. The PEST++ code is designed to provide a foundation for an open-source development environment capable of producing robust and efficient parameter estimation tools for the environmental modeling community into the future.
NASA Astrophysics Data System (ADS)
Simeonov, J.; Holland, K. T.
2016-12-01
We investigated the fidelity of a hierarchy of inverse models that estimate river bathymetry and discharge using measurements of surface currents and water surface elevation. Our most comprehensive depth inversion was based on the Shiono and Knight (1991) model that considers the depth-averaged along-channel momentum balance between the downstream pressure gradient due to gravity, the bottom drag and the lateral stresses induced by turbulence. The discharge was determined by minimizing the difference between the predicted and the measured streamwise variation of the total head. The bottom friction coefficient was assumed to be known or determined by alternative means. We also considered simplifications of the comprehensive inversion model that exclude the lateral mixing term from the momentum balance and assessed the effect of neglecting this term on the depth and discharge estimates for idealized in-bank flow in symmetric trapezoidal channels with width/depth ratio of 40 and different side-wall slopes. For these simple gravity-friction models, we used two different bottom-friction parameterizations: a constant Darcy-Weisbach local friction factor, and a depth-dependent friction based on the local depth and a constant Manning (roughness) coefficient. Our results indicated that the Manning gravity-friction model provides accurate estimates of the depth and the discharge that are within 1% of the assumed values for channels with side-wall slopes between 1/2 and 1/17. On the other hand, the constant Darcy-Weisbach friction model underpredicted the true depth and discharge by 7% and 9%, respectively, for the channel with side-wall slope of 1/17. These idealized modeling results suggest that a depth-dependent parameterization of the bottom friction is important for accurate inversion of depth and discharge and that the lateral turbulent mixing is not important.
We also tested the comprehensive and the simplified inversion models for the Kootenai River near Bonners Ferry (Idaho) using in situ and remote sensing measurements of surface currents and water surface elevation obtained during a 2010 field experiment.
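The two gravity-friction balances compared in this record can each be inverted for depth in closed form for locally uniform flow. This is a minimal sketch with illustrative coefficient values (n = 0.03, f = 0.02), not the study's calibrated ones:

```python
import math

def depth_manning(u, slope, n=0.03):
    """Invert u = (1/n) * h**(2/3) * slope**0.5 for depth h (Manning friction)."""
    return (n * u / math.sqrt(slope)) ** 1.5

def depth_darcy_weisbach(u, slope, f=0.02, g=9.81):
    """Invert g*h*slope = (f/8) * u**2 for depth h (constant Darcy-Weisbach f)."""
    return f * u ** 2 / (8.0 * g * slope)
```

In the Manning form the effective friction coefficient decreases with depth (as h**(-1/3)), which is the depth dependence the study's results suggest matters for accurate depth and discharge inversion.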
DOE Office of Scientific and Technical Information (OSTI.GOV)
Line, Michael R.; Stevenson, Kevin B.; Bean, Jacob
The nature of the thermal structure of hot Jupiter atmospheres is one of the key questions raised by the characterization of transiting exoplanets over the past decade. There have been claims that many hot Jupiters exhibit atmospheric thermal inversions. However, these claims have been based on broadband photometry rather than the unambiguous identification of emission features with spectroscopy, and the chemical species that could cause the thermal inversions by absorbing stellar irradiation at high altitudes have not been identified despite extensive theoretical and observational effort. Here we present high-precision Hubble Space Telescope WFC3 observations of the dayside thermal emission spectrum of the hot Jupiter HD 209458b, which was the first exoplanet suggested to have a thermal inversion. In contrast to previous results for this planet, our observations detect water in absorption at 6.2 σ confidence. When combined with Spitzer photometry, the data are indicative of a monotonically decreasing temperature with pressure over the range of 1–0.001 bars at 7.7 σ confidence. We test the robustness of our results by exploring a variety of model assumptions, including the temperature profile parameterization, presence of a cloud, and choice of Spitzer data reduction. We also introduce a new analysis method to determine the elemental abundances from the spectrally retrieved mixing ratios with thermochemical self-consistency and find plausible abundances consistent with solar metallicity (0.06–10 × solar) and carbon-to-oxygen ratios less than unity. This work suggests that high-precision spectrophotometric results are required to robustly infer thermal structures and compositions of extrasolar planet atmospheres and to perform comparative exoplanetology.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lawrence, Chris C.; Flaska, Marek; Pozzi, Sara A.
2016-08-14
Verification of future warhead-dismantlement treaties will require detection of certain warhead attributes without the disclosure of sensitive design information, and this presents an unusual measurement challenge. Neutron spectroscopy—commonly eschewed as an ill-posed inverse problem—may hold special advantages for warhead verification by virtue of its insensitivity to certain neutron-source parameters like plutonium isotopics. In this article, we investigate the usefulness of unfolded neutron spectra obtained from organic-scintillator data for verifying a particular treaty-relevant warhead attribute: the presence of high-explosive and neutron-reflecting materials. Toward this end, several improvements on current unfolding capabilities are demonstrated: deuterated detectors are shown to have superior response-matrix condition to that of standard hydrogen-based scintillators; a novel data-discretization scheme is proposed which removes important detector nonlinearities; and a technique is described for re-parameterizing the unfolding problem in order to constrain the parameter space of solutions sought, sidestepping the inverse problem altogether. These improvements are demonstrated with trial measurements and verified using accelerator-based time-of-flight calculation of reference spectra. Then, a demonstration is presented in which the elemental compositions of low-Z neutron-attenuating materials are estimated to within 10%. These techniques could have direct application in verifying the presence of high-explosive materials in a neutron-emitting test item, as well as for other treaty-verification challenges.
NASA Astrophysics Data System (ADS)
Lawrence, Chris C.; Febbraro, Michael; Flaska, Marek; Pozzi, Sara A.; Becchetti, F. D.
2016-08-01
Verification of future warhead-dismantlement treaties will require detection of certain warhead attributes without the disclosure of sensitive design information, and this presents an unusual measurement challenge. Neutron spectroscopy—commonly eschewed as an ill-posed inverse problem—may hold special advantages for warhead verification by virtue of its insensitivity to certain neutron-source parameters like plutonium isotopics. In this article, we investigate the usefulness of unfolded neutron spectra obtained from organic-scintillator data for verifying a particular treaty-relevant warhead attribute: the presence of high-explosive and neutron-reflecting materials. Toward this end, several improvements on current unfolding capabilities are demonstrated: deuterated detectors are shown to have superior response-matrix condition to that of standard hydrogen-based scintillators; a novel data-discretization scheme is proposed which removes important detector nonlinearities; and a technique is described for re-parameterizing the unfolding problem in order to constrain the parameter space of solutions sought, sidestepping the inverse problem altogether. These improvements are demonstrated with trial measurements and verified using accelerator-based time-of-flight calculation of reference spectra. Then, a demonstration is presented in which the elemental compositions of low-Z neutron-attenuating materials are estimated to within 10%. These techniques could have direct application in verifying the presence of high-explosive materials in a neutron-emitting test item, as well as for other treaty-verification challenges.
Trans-Dimensional Bayesian Imaging of 3-D Crustal and Upper Mantle Structure in Northeast Asia
NASA Astrophysics Data System (ADS)
Kim, S.; Tkalcic, H.; Rhie, J.; Chen, Y.
2016-12-01
Imaging 3-D structures using stepwise inversions of ambient noise and receiver function data is now routine. Here, we carry out the inversion in a trans-dimensional and hierarchical extension of the Bayesian framework to obtain rigorous estimates of uncertainty and high-resolution images of crustal and upper mantle structures beneath Northeast (NE) Asia. The methods inherently account for data sensitivities by using adaptive parameterizations and treating data noise as a free parameter. The resulting parsimonious models balance model complexity against data fit, which fully exploits the information in the data, prevents over- or underfitting, and increases model resolution. In addition, the reliability of results is more rigorously checked through the use of Bayesian uncertainties. Various synthetic recovery tests show that complex and spatially variable features are well resolved in our resulting images of NE Asia. Rayleigh wave phase and group velocity tomograms (8-70 s), a 3-D shear-wave velocity model from depth inversions of the estimated dispersion maps, and regional 3-D models (NE China, the Korean Peninsula, and the Japanese islands) from joint inversions with receiver function data of dense networks are presented. The high-resolution models are characterized by a number of tectonically meaningful features. We focus our interpretation on complex patterns of sub-lithospheric low-velocity structures that extend from back-arc regions to continental margins. We interpret the anomalies in conjunction with distal and distributed intraplate volcanoes in NE Asia. Further discussion of other imaged features will be presented.
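The "hierarchical" element of such schemes, treating the data-noise level as an unknown, can be illustrated with the Gaussian log-likelihood that results. This is a one-parameter toy, not the tomographic implementation:

```python
import math

def log_likelihood(residuals, sigma):
    """Gaussian log-likelihood with the noise level sigma as a free parameter.

    The -n*log(sigma) term penalizes inflating sigma to hide misfit, so a
    sampler over (model, sigma) settles on a noise level consistent with the
    residuals, automatically balancing data fit against model complexity.
    """
    n = len(residuals)
    ss = sum(r * r for r in residuals)
    return -n * math.log(sigma) - ss / (2.0 * sigma ** 2)
```

For fixed residuals this likelihood is maximized at sigma = sqrt(ss/n), i.e. the noise estimate tracks the achieved misfit rather than being imposed a priori.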
Earthquake Source Inversion Blindtest: Initial Results and Further Developments
NASA Astrophysics Data System (ADS)
Mai, P.; Burjanek, J.; Delouis, B.; Festa, G.; Francois-Holden, C.; Monelli, D.; Uchide, T.; Zahradnik, J.
2007-12-01
Images of earthquake ruptures, obtained from modelling/inverting seismic and/or geodetic data, exhibit a high degree of spatial complexity. This earthquake source heterogeneity controls seismic radiation, and is determined by the details of the dynamic rupture process. In turn, such rupture models are used for studying source dynamics and for ground-motion prediction. But how reliable and trustworthy are these earthquake source inversions? Rupture models for a given earthquake, obtained by different research teams, often display striking disparities (see http://www.seismo.ethz.ch/srcmod). However, well-resolved, robust, and hence reliable source-rupture models are integral to better understanding earthquake source physics and improving seismic hazard assessment. It is therefore timely to conduct a large-scale validation exercise comparing the methods, parameterization, and data handling in earthquake source inversions. We recently started a blind test in which several research groups derive a kinematic rupture model from synthetic seismograms calculated for an input model unknown to the source modelers. The first results, for an input rupture model with heterogeneous slip but constant rise time and rupture velocity, reveal large differences between the input and inverted model in some cases, while a few studies achieve high correlation between the input and inferred model. Here we report on the statistical assessment of the set of inverted rupture models to quantitatively investigate their degree of (dis-)similarity. We briefly discuss the different inversion approaches, their possible strengths and weaknesses, and the use of appropriate misfit criteria. Finally we present new blind-test models, with increasing source complexity and ambient noise on the synthetics.
The goal is to attract a large group of source modelers to join this source-inversion blind test in order to conduct a large-scale validation exercise to rigorously assess the performance and reliability of current inversion methods and to discuss future developments.
NASA Astrophysics Data System (ADS)
Samrock, F.; Grayver, A.; Eysteinsson, H.; Saar, M. O.
2017-12-01
In search for geothermal resources, especially in exploration for high-enthalpy systems found in regions with active volcanism, the magnetotelluric (MT) method has proven to be an efficient tool. Electrical conductivity of the subsurface, imaged by MT, is used for detecting layers of electrically highly conductive clays which form around the surrounding strata of hot circulating fluids and for delineating magmatic heat sources such as zones with partial melting. We present a case study using a novel 3-D inverse solver, based on adaptive local mesh refinement techniques, applied to decoupled forward and inverse mesh parameterizations. The flexible meshing allows accurate representation of surface topography, while keeping computational costs at a reasonable level. The MT data set we analyze was measured at 112 sites, covering an area of 18 by 11 km at a geothermal prospect in the Main Ethiopian Rift. For inverse modelling, we tested a series of different settings to ensure that the recovered structures are supported by the data. Specifically, we tested different starting models, regularization functionals, sets of transfer functions, with and without inclusion of topography. Several robust subsurface structures were revealed. These are prominent features of a high-enthalpy geothermal system: A highly conductive shallow clay cap occurs in an area with high fumarolic activity, and is underlain by a more resistive zone, which is commonly interpreted as a propylitic reservoir and is the main geothermal target for drilling. An interesting discovery is the existence of a channel-like conductor connecting the geothermal field at the surface with an off-rift conductive zone, whose existence was proposed earlier as being related to an off-rift volcanic belt along the western shoulder of the Main Ethiopian Rift. The electrical conductivity model is interpreted together with results from other geoscientific studies and outcomes from satellite remote sensing techniques.
New Features in the Computational Infrastructure for Nuclear Astrophysics
NASA Astrophysics Data System (ADS)
Smith, M. S.; Lingerfelt, E. J.; Scott, J. P.; Hix, W. R.; Nesaraja, C. D.; Koura, H.; Roberts, L. F.
2006-04-01
The Computational Infrastructure for Nuclear Astrophysics is a suite of computer codes online at nucastrodata.org that streamlines the incorporation of recent nuclear physics results into astrophysical simulations. The freely available, cross-platform suite enables users to upload cross sections and S-factors, convert them into reaction rates, parameterize the rates, store the rates in customizable libraries, set up and run custom post-processing element synthesis calculations, and visualize the results. New features include the ability for users to comment on rates or libraries using an email-type interface, a nuclear mass model evaluator, enhanced techniques for rate parameterization, better treatment of rate inverses, and creation and exporting of custom animations of simulation results. We also have online animations of r-process, rp-process, and neutrino-p process element synthesis occurring in stellar explosions.
NASA Astrophysics Data System (ADS)
Pflügl, Christian; Hoehn, Philipp; Hofmann, Thilo
2017-04-01
Irrespective of the availability of various field measurement and modeling approaches, the quantification of interactions between surface water and groundwater systems remains associated with high uncertainty. Such uncertainties in stream-aquifer interaction can lead to significant misinterpretation of the local water budget and water quality. Because stream discharge rates typically vary considerably in time, it is desirable to reduce both the duration and the uncertainty of streamflow measurements. Streamflow measurements according to the velocity-area method were performed along reaches of a losing-disconnected, subalpine headwater stream using a 2-dimensional, wading-rod-mounted acoustic Doppler current profiler (ADCP). The method was chosen, with the stream morphology not allowing for boat-mounted setups, to reduce uncertainty compared with conventional, single-point streamflow measurements of similar duration. Reach-averaged stream loss rates were subsequently quantified between 12 cross sections. They enabled the delineation of strongly infiltrating stream reaches and their differentiation from insignificantly infiltrating reaches. Furthermore, a total of 10 near-stream observation wells were constructed and/or equipped with pressure and temperature loggers. The time series of near-stream groundwater temperature were cross-correlated with stream temperature time series to yield supportive qualitative information on the delineation of infiltrating reaches. Subsequently, as a reference parameterization, the hydraulic conductivity and specific yield of a numerical, steady-state model of groundwater flow in the unconfined glaciofluvial aquifer adjacent to the stream were inversely determined, incorporating the inferred stream loss rates.
The same inversion procedure was then run applying synthetic sets of infiltration rates that resemble increasing levels of uncertainty associated with single-point streamflow measurements of comparable duration. The volume-weighted mean of the respective parameter distribution within 200 m of the stream periphery deviated increasingly from the reference parameterization as the deviation of the infiltration rates increased.
NASA Astrophysics Data System (ADS)
Greenway, D. P.; Hackett, E.
2017-12-01
Under certain atmospheric refractivity conditions, propagated electromagnetic (EM) waves can become trapped between the surface and the bottom of the atmosphere's mixed layer, which is referred to as surface duct propagation. The ability to predict the presence of these surface ducts can greatly benefit users and developers of sensing technologies and communication systems, because ducts significantly influence the performance of these systems. However, directly measuring or modeling a surface ducting layer is challenging because of the high spatial resolution and large spatial coverage needed to make accurate refractivity estimates for EM propagation; thus, inverse methods have become an increasingly popular way of determining atmospheric refractivity. This study uses data from the Coupled Ocean/Atmosphere Mesoscale Prediction System developed by the Naval Research Laboratory and instrumented helicopter (helo) measurements taken during the Wallops Island Field Experiment to evaluate the use of ensemble forecasts in refractivity inversions. Helo measurements and ensemble forecasts are fit to a parametric refractivity model, and three experiments are performed to evaluate whether incorporating ensemble forecast data yields more timely and accurate inverse solutions using genetic algorithms. The results suggest that using optimized ensemble members as an initial population for the genetic algorithms generally enhances the accuracy and speed of the inverse solution; however, using the ensemble data to restrict the parameter search space yields mixed results. Inaccurate results are related to the parameterization of the ensemble members' refractivity profiles and the subsequent extraction of the parameter ranges used to limit the search space.
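The ensemble-seeding idea can be sketched with a toy elitist genetic algorithm: starting the population from optimized ensemble members preserves their fit while the search refines them. The 2-parameter vectors and quadratic misfit below are fabricated stand-ins, not a refractivity model:

```python
import random

def evolve(misfit, population, n_gen=100, sigma=0.1):
    """Elitist GA: keep the better half each generation, mutate it to refill.

    Because parents survive unchanged, the best individual (e.g. the best
    ensemble-forecast seed) is never lost; mutation only refines the solution.
    """
    for _ in range(n_gen):
        population.sort(key=misfit)
        parents = population[: len(population) // 2]
        children = [[x + random.gauss(0.0, sigma) for x in p] for p in parents]
        population = parents + children
    return min(population, key=misfit)
```

Seeding the initial population with ensemble members plus a few random individuals mirrors the study's setup: the seeds give the search a head start, and the random members keep some diversity.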
VALDRIFT 1.0: A valley atmospheric dispersion model with deposition
DOE Office of Scientific and Technical Information (OSTI.GOV)
Allwine, K.J.; Bian, X.; Whiteman, C.D.
1995-05-01
VALDRIFT version 1.0 is an atmospheric transport and diffusion model for use in well-defined mountain valleys. It is designed to determine the extent of drift from aerial pesticide spraying activities, but can also be applied to estimate the transport and diffusion of various air pollutants in valleys. The model is phenomenological -- that is, the dominant meteorological processes governing the behavior of the valley atmosphere are formulated explicitly in the model, albeit in a highly parameterized fashion. The key meteorological processes treated are: (1) nonsteady and nonhomogeneous along-valley winds and turbulent diffusivities, (2) convective boundary layer growth, (3) inversion descent, (4) nocturnal temperature inversion breakup, and (5) subsidence. The model is applicable under relatively cloud-free, undisturbed synoptic conditions and is configured to operate through one diurnal cycle for a single valley. The inputs required are the valley topographical characteristics, pesticide release rate as a function of time and space, along-valley wind speed as a function of time and space, temperature inversion characteristics at sunrise, and sensible heat flux as a function of time following sunrise. Default values are provided for certain inputs in the absence of detailed observations. The outputs are three-dimensional air concentration and ground-level deposition fields as a function of time.
Surface wave tomography of the European crust and upper mantle from ambient seismic noise
NASA Astrophysics Data System (ADS)
LU, Y.; Stehly, L.; Paul, A.
2017-12-01
We present a high-resolution 3-D shear-wave velocity model of the European crust and upper mantle derived from ambient seismic noise tomography. In this study, we collect 4 years of continuous vertical-component seismic recordings from 1293 broadband stations across Europe (10W-35E, 30N-75N). We analyze group velocity dispersion from 5 s to 150 s for cross-correlations of more than 0.8 million virtual source-receiver pairs. 2-D group velocity maps are estimated using an adaptive parameterization to accommodate the strong heterogeneity of the path coverage. The 3-D velocity model is obtained by merging 1-D models inverted at each pixel through a two-step data-driven inversion algorithm: a non-linear Bayesian Monte Carlo inversion, followed by a linearized inversion. The resulting S-wave velocity model and Moho depth are compared with previous geophysical studies: 1) the crustal model and Moho depth show striking agreement with active seismic imaging results, and even provide new information such as a strong difference in the European Moho along two seismic profiles in the Western Alps (Cifalps and ECORS-CROP); 2) the upper mantle model displays strong similarities with published models even at 150 km depth, which is usually imaged using earthquake records.
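The core of the ambient-noise step above is the cross-correlation of long simultaneous noise records at two stations, whose peak emerges at the inter-station travel time (approximating the Green's function). This sketch uses synthetic signals rather than real seismograms:

```python
import numpy as np

def cross_correlate(a, b):
    """Zero-padded frequency-domain cross-correlation of equal-length records.

    Returns an array of length 2*len(a) with zero lag at index len(a);
    a peak at index len(a) - k indicates that b lags a by k samples.
    """
    n = len(a)
    A = np.fft.rfft(a, 2 * n)              # zero-pad to avoid wrap-around
    B = np.fft.rfft(b, 2 * n)
    cc = np.fft.irfft(A * np.conj(B), 2 * n)
    return np.roll(cc, n)                  # shift so zero lag sits at index n
```

In practice one stacks such correlations over months or years of noise before measuring group-velocity dispersion on the emergent surface-wave pulse.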
NASA Astrophysics Data System (ADS)
Reichstein, M.; Dinh, N.; Running, S.; Seufert, G.; Tenhunen, J.; Valentini, R.
2003-04-01
Here we present spatially distributed bottom-up estimates of European carbon balance components for the year 2001 that stem from a newly built modeling system integrating CARBOEUROPE eddy covariance CO_2 exchange data, remotely sensed vegetation properties via the MODIS-Terra sensor, European-wide soils data, and a suite of carbon balance models of different complexity. These estimates are able to better constrain top-down atmospheric-inversion carbon balance estimates within the dual-constraint approach for estimating continental carbon balances. The models used to calculate gross primary production (GPP) include a detailed layered canopy model with Farquhar-type photosynthesis (PROXELNEE), sun-shade big-leaf formulations operating at a daily time-step, and a simple radiation-use efficiency model. These models are parameterized from eddy covariance data through inverse estimation techniques. For the estimation of soil and ecosystem respiration (Rsoil, Reco), we also draw on a large data set of eddy covariance and soil chamber measurements, which enables us to parameterize and validate a recently developed semi-empirical model that includes a variable temperature sensitivity of respiration. As the outcome of the modeling system we present the most likely daily to annual numbers of carbon balance components (GPP, Reco, Rsoil), but we also provide a thorough analysis of biases and uncertainties in carbon balance estimates that are introduced through errors in the meteorological and remote sensing input data and through uncertainties in the model parameterization. In particular, we analyze 1) the effect of cloud contamination of the MODIS data, 2) the sensitivity to the land-use classification (Corine versus MODIS), 3) the effect of different soil parameterizations as derived from new continental-scale soil maps, and 4) the necessity of including soil drought effects in models of GPP and respiration.
While the models describe the eddy covariance data quite well, with r^2 values always greater than 0.7, there are still uncertainties in the European carbon balance estimate that exceed 0.3 PgC/yr. In northern (boreal) regions the carbon balance estimate is strongly contingent on high-quality filling of cloud-contaminated remote sensing data, while in the southern (Mediterranean) regions a correct description of the soil water holding capacity is crucial. A major source of uncertainty is still the estimation of heterotrophic respiration at continental scales; consequently, more spatial surveys of soil carbon stocks, turnover, and history are needed. The study demonstrates that the inclusion of considerable geo-biological variability in a carbon balance modeling system, high-quality cloud screening and gap-filling of the MODIS remote sensing data, and a correct description of soil drought effects are all mandatory for realistic bottom-up estimates of European carbon balance components.
Lutter, William J.; Tréhu, Anne M.; Nowack, Robert L.
1993-01-01
The inversion technique of Nowack and Lutter (1988a) and Lutter et al. (1990) has been applied to first arrival seismic refraction data collected along Line A of the 1986 Lake Superior GLIMPCE experiment, permitting comparison of the inversion image with an independently derived forward model (Trehu et al., 1991; Shay and Trehu, in press). For this study, the inversion method was expanded to allow variable grid spacing for the bicubic spline parameterization of velocity. The variable grid spacing improved model delineation and data fit by permitting model parameters to be clustered at features of interest. Over 800 first-arrival travel-times were fit with a final RMS error of 0.045 s. The inversion model images a low velocity central graben and smaller flanking half-grabens of the Midcontinent Rift, and higher velocity regions (+0.5 to +0.75 km/s) associated with the Isle Royale and Keweenaw faults, which bound the central graben. Although the forward modeling interpretation gives finer details associated with the near surface expression of the two faults because of the inclusion of secondary reflections and refractions that were not included in the inversion, the inversion model reproduces the primary features of the forward model.
NASA Technical Reports Server (NTRS)
Mckinna, Lachlan I. W.; Werdell, P. Jeremy; Fearns, Peter R. C.; Weeks, Scarla J.; Reichstetter, Martina; Franz, Bryan A.; Shea, Donald M.; Feldman, Gene C.
2015-01-01
A semianalytical ocean color inversion algorithm was developed for improving retrievals of inherent optical properties (IOPs) in optically shallow waters. In clear, geometrically shallow waters, light reflected off the seafloor can contribute to the water-leaving radiance signal. This can have a confounding effect on ocean color algorithms developed for optically deep waters, leading to an overestimation of IOPs. The algorithm described here, the Shallow Water Inversion Model (SWIM), uses pre-existing knowledge of bathymetry and benthic substrate brightness to account for optically shallow effects. SWIM was incorporated into the NASA Ocean Biology Processing Group's L2GEN code and tested in waters of the Great Barrier Reef, Australia, using the Moderate Resolution Imaging Spectroradiometer (MODIS) Aqua time series (2002-2013). SWIM-derived values of the total non-water absorption coefficient at 443 nm, at(443), the particulate backscattering coefficient at 443 nm, bbp(443), and the diffuse attenuation coefficient at 488 nm, Kd(488), were compared with values derived using the Generalized Inherent Optical Properties algorithm (GIOP) and the Quasi-Analytical Algorithm (QAA). The results indicated that in clear, optically shallow waters SWIM-derived values of at(443), bbp(443), and Kd(488) were realistically lower than values derived using GIOP and QAA, in agreement with radiative transfer modeling. This signified that the benthic reflectance correction was performing as expected. However, in more optically complex waters, SWIM had difficulty converging to a solution, a likely consequence of internal IOP parameterizations. Whilst a comprehensive study of the SWIM algorithm's behavior was conducted, further work is needed to validate the algorithm using in situ data.
Surface wave tomography of Europe from ambient seismic noise
NASA Astrophysics Data System (ADS)
Lu, Yang; Stehly, Laurent; Paul, Anne
2017-04-01
We present a European-scale high-resolution 3-D shear wave velocity model derived from ambient seismic noise tomography. In this study, we collect 4 years of continuous seismic recordings from 1293 stations across much of the European region (10˚W-35˚E, 30˚N-75˚N), which yields more than 0.8 million virtual station pairs. This data set compiles records from 67 seismic networks, both permanent and temporary, from the EIDA (European Integrated Data Archive). Rayleigh wave group velocities are measured at each station pair using the multiple-filter analysis technique. Group velocity maps are estimated through a linearized tomographic inversion algorithm at periods from 5 s to 100 s. Adaptive parameterization is used to accommodate heterogeneity in data coverage. We then apply a two-step data-driven inversion method to obtain the shear wave velocity model: a Monte Carlo inversion to build the starting model, followed by a linearized inversion for further improvement. Finally, Moho depth and its uncertainty are determined over most of our study region by identifying and analysing sharp velocity discontinuities and their sharpness. The resulting velocity model shows good agreement with main geological features and previous geophysical studies. Moho depth coincides well with that obtained from active seismic experiments. A focus on the Greater Alpine region (covered by the AlpArray seismic network) displays a clear crustal thinning that follows the arcuate shape of the Alps from the southern French Massif Central to southern Germany.
Reflectance of micron-sized dust particles retrieved with the Umov law
NASA Astrophysics Data System (ADS)
Zubko, Evgenij; Videen, Gorden; Zubko, Nataliya; Shkuratov, Yuriy
2017-03-01
The maximum positive polarization Pmax that initially unpolarized light acquires when scattered from a particulate surface inversely correlates with its geometric albedo A. In the literature, this phenomenon is known as the Umov law. We investigate the Umov law in application to single-scattering submicron and micron-sized agglomerated debris particles, model particles that have highly irregular morphology. We find that if the complex refractive index m is constrained to Re(m)=1.4-1.7 and Im(m)=0-0.15, model particles of a given size distribution have a linear inverse correlation between log(Pmax) and log(A). This correlation resembles what is measured in particulate surfaces, suggesting a similar mechanism governing the Umov law in both systems. We parameterize the dependence of log(A) on log(Pmax) of single-scattering particles and analyze the airborne polarimetric measurements of atmospheric aerosols reported by Dolgos & Martins in [1]. We conclude that Pmax ≈ 50% measured by Dolgos & Martins corresponds to very dark aerosols having geometric albedo A=0.019 ± 0.005.
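The Umov law described above amounts to a linear relation between log(Pmax) and log(A). A minimal sketch of how such a parameterization is used is given below; the (Pmax, A) pairs are invented for illustration and are not the coefficients fitted in the study, though they are chosen so that Pmax = 50% maps to a dark albedo of roughly the reported order.

```python
import numpy as np

# Hypothetical (Pmax, albedo) pairs illustrating the inverse linear Umov
# relation log10(A) = c0 + c1 * log10(Pmax); values are illustrative only.
pmax = np.array([0.10, 0.20, 0.35, 0.50])      # maximum positive polarization
albedo = np.array([0.30, 0.12, 0.045, 0.019])  # geometric albedo

# Fit the relation in log-log space; c1 < 0 encodes the inverse correlation
# (darker particles polarize more strongly).
c1, c0 = np.polyfit(np.log10(pmax), np.log10(albedo), 1)

# Apply the fitted relation: predicted geometric albedo for Pmax = 50 %.
a_pred = 10 ** (c0 + c1 * np.log10(0.5))
```

With a calibrated (c0, c1) pair, a single polarimetric measurement of Pmax yields an albedo estimate, which is how the aerosol darkness in the study is inferred.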
NASA Astrophysics Data System (ADS)
Zhang, Y.; Wen, J.; Xiao, Q.; You, D.
2016-12-01
Operational algorithms for land surface BRDF/Albedo products are mainly developed from kernel-driven model, combining atmospherically corrected, multidate, multiband surface reflectance to extract BRDF parameters. The Angular and Spectral Kernel Driven model (ASK model), which incorporates the component spectra as a priori knowledge, provides a potential way to make full use of the multi-sensor data with multispectral information and accumulated observations. However, the ASK model is still not feasible for global BRDF/Albedo inversions due to the lack of sufficient field measurements of component spectra at the large scale. This research outlines a parameterization scheme on the component spectra for global scale BRDF/Albedo inversions in the frame of ASK. The parameter γ(λ) can be derived from the ratio of the leaf reflectance and soil reflectance, supported by globally distributed soil spectral library, ANGERS and LOPEX leaf optical properties database. To consider the intrinsic variability in both the land cover and spectral dimension, the mean and standard deviation of γ(λ) for 28 soil units and 4 leaf types in seven MODIS bands were calculated, with a world soil map used for global BRDF/Albedo products retrieval. Compared to the retrievals from BRF datasets simulated by the PROSAIL model, ASK model shows an acceptable accuracy on the parameterization strategy, with the RMSE 0.007 higher at most than inversion by true component spectra. The results indicate that the classification on ratio contributed to capture the spectral characteristics in BBRDF/Albedo retrieval, whereas the ratio range should be controlled within 8% in each band. Ground-based measurements in Heihe river basin were used to validate the accuracy of the improved ASK model, and the generated broadband albedo products shows good agreement with in situ data, which suggests that the improvement of the component spectra on the ASK model has potential for global scale BRDF/Albedo inversions.
Almost but not quite 2D, Non-linear Bayesian Inversion of CSEM Data
NASA Astrophysics Data System (ADS)
Ray, A.; Key, K.; Bodin, T.
2013-12-01
The geophysical inverse problem can be elegantly stated in a Bayesian framework where a probability distribution can be viewed as a statement of information regarding a random variable. After all, the goal of geophysical inversion is to provide information on the random variables of interest - physical properties of the earth's subsurface. However, though it may be simple to postulate, a practical difficulty of fully non-linear Bayesian inversion is the computer time required to adequately sample the model space and extract the information we seek. As a consequence, in geophysical problems where evaluation of a full 2D/3D forward model is computationally expensive, such as marine controlled source electromagnetic (CSEM) mapping of the resistivity of seafloor oil and gas reservoirs, Bayesian studies have largely been conducted with 1D forward models. While the 1D approximation is indeed appropriate for exploration targets with planar geometry and geological stratification, it only provides a limited, site-specific idea of uncertainty in resistivity with depth. In this work, we extend our fully non-linear 1D Bayesian inversion to a 2D model framework, without requiring the usual regularization of model resistivities in the horizontal or vertical directions used to stabilize quasi-2D inversions. In our approach, we use the reversible jump Markov-chain Monte-Carlo (RJ-MCMC) or trans-dimensional method and parameterize the subsurface in a 2D plane with Voronoi cells. The method is trans-dimensional in that the number of cells required to parameterize the subsurface is variable, and the cells dynamically move around and multiply or combine as demanded by the data being inverted. This approach allows us to expand our uncertainty analysis of resistivity at depth to more than a single site location, allowing for interactions between model resistivities at different horizontal locations along a traverse over an exploration target. 
While the model is parameterized in 2D, we efficiently evaluate the forward response using 1D profiles extracted from the model at the common-midpoints of the EM source-receiver pairs. Since the 1D approximation is locally valid at different midpoint locations, the computation time is far lower than is required by a full 2D or 3D simulation. We have applied this method to both synthetic and real CSEM survey data from the Scarborough gas field on the Northwest shelf of Australia, resulting in a spatially variable quantification of resistivity and its uncertainty in 2D. This Bayesian approach results in a large database of 2D models that comprise a posterior probability distribution, which we can subset to test various hypotheses about the range of model structures compatible with the data. For example, we can subset the model distributions to examine the hypothesis that a resistive reservoir extends over a certain spatial extent. Depending on how this conditions other parts of the model space, light can be shed on the geological viability of the hypothesis. Since tackling spatially variable uncertainty and trade-offs in 2D and 3D is a challenging research problem, the insights gained from this work may prove valuable for subsequent full 2D and 3D Bayesian inversions.
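The two ingredients described above, a Voronoi-cell parameterization of the 2-D plane and extraction of local 1-D profiles at common midpoints, can be sketched compactly: a model is just a set of cell nuclei with attached resistivities, and a 1-D profile is read off by nearest-nucleus lookup. The nuclei positions and values below are invented for illustration.

```python
import numpy as np

# Hypothetical Voronoi parameterization of a 2-D (x, z) resistivity model:
# each cell is a nucleus position plus a log10-resistivity value. In the
# trans-dimensional scheme, cells are added, removed, and moved by the sampler.
nuclei = np.array([[0.0, 0.5], [2.0, 1.5], [4.0, 0.8], [2.5, 3.0]])  # (x, z) km
log_rho = np.array([0.0, 2.0, 0.3, 1.0])  # log10 ohm-m, one value per cell

def profile_at(x_mid, depths):
    """1-D log-resistivity profile under common-midpoint x_mid, obtained by
    assigning each depth the value of the nearest Voronoi nucleus."""
    pts = np.column_stack([np.full_like(depths, x_mid), depths])
    d2 = ((pts[:, None, :] - nuclei[None, :, :]) ** 2).sum(axis=2)
    return log_rho[d2.argmin(axis=1)]

depths = np.linspace(0.0, 4.0, 9)
prof = profile_at(2.0, depths)  # profile handed to the local 1-D forward solver
```

Each such profile feeds a cheap 1-D CSEM forward computation, which is what makes the Bayesian sampling tractable compared with full 2-D/3-D simulation.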
Doherty, John E.; Hunt, Randall J.; Tonkin, Matthew J.
2010-01-01
Analysis of the uncertainty associated with parameters used by a numerical model, and with predictions that depend on those parameters, is fundamental to the use of modeling in support of decisionmaking. Unfortunately, predictive uncertainty analysis with regard to models can be very computationally demanding, due in part to complex constraints on parameters that arise from expert knowledge of system properties on the one hand (knowledge constraints) and from the necessity for the model parameters to assume values that allow the model to reproduce historical system behavior on the other hand (calibration constraints). Enforcement of knowledge and calibration constraints on parameters used by a model does not eliminate the uncertainty in those parameters. In fact, in many cases, enforcement of calibration constraints simply reduces the uncertainties associated with a number of broad-scale combinations of model parameters that collectively describe spatially averaged system properties. The uncertainties associated with other combinations of parameters, especially those that pertain to small-scale parameter heterogeneity, may not be reduced through the calibration process. To the extent that a prediction depends on system-property detail, its postcalibration variability may be reduced very little, if at all, by applying calibration constraints; knowledge constraints remain the only limits on the variability of predictions that depend on such detail. Regrettably, in many common modeling applications, these constraints are weak. Though the PEST software suite was initially developed as a tool for model calibration, recent developments have focused on the evaluation of model-parameter and predictive uncertainty. 
As a complement to functionality that it provides for highly parameterized inversion (calibration) by means of formal mathematical regularization techniques, the PEST suite provides utilities for linear and nonlinear error-variance and uncertainty analysis in these highly parameterized modeling contexts. Availability of these utilities is particularly important because, in many cases, a significant proportion of the uncertainty associated with model parameters-and the predictions that depend on them-arises from differences between the complex properties of the real world and the simplified representation of those properties that is expressed by the calibrated model. This report is intended to guide intermediate to advanced modelers in the use of capabilities available with the PEST suite of programs for evaluating model predictive error and uncertainty. A brief theoretical background is presented on sources of parameter and predictive uncertainty and on the means for evaluating this uncertainty. Applications of PEST tools are then discussed for overdetermined and underdetermined problems, both linear and nonlinear. PEST tools for calculating contributions to model predictive uncertainty, as well as optimization of data acquisition for reducing parameter and predictive uncertainty, are presented. The appendixes list the relevant PEST variables, files, and utilities required for the analyses described in the document.
Model Parameterization and P-wave AVA Direct Inversion for Young's Impedance
NASA Astrophysics Data System (ADS)
Zong, Zhaoyun; Yin, Xingyao
2017-05-01
AVA inversion is an important tool for elastic parameter estimation to guide the lithology prediction and "sweet spot" identification of hydrocarbon reservoirs. The product of the Young's modulus and density (termed the Young's impedance in this study) is known as an effective lithology and brittleness indicator of unconventional hydrocarbon reservoirs. Density is difficult to predict from seismic data, which renders the estimation of the Young's impedance inaccurate in conventional approaches. In this study, a pragmatic seismic AVA inversion approach with only P-wave pre-stack seismic data is proposed to estimate the Young's impedance and avoid the uncertainty introduced by density. First, based on the linearized P-wave approximate reflectivity equation in terms of P-wave and S-wave moduli, the P-wave approximate reflectivity equation in terms of the Young's impedance is derived according to the relationship between P-wave modulus, S-wave modulus, Young's modulus, and Poisson ratio. This equation is further compared to the exact Zoeppritz equation and the linearized P-wave approximate reflectivity equation in terms of P- and S-wave velocities and density, which illustrates that it is accurate enough to be used for AVA inversion when the incident angle is within the critical angle. Parameter sensitivity analysis illustrates that the high correlation between the Young's impedance and density renders the estimation of the Young's impedance difficult. Therefore, a de-correlation scheme is used in the pragmatic AVA inversion with Bayesian inference to estimate the Young's impedance only with pre-stack P-wave seismic data. Synthetic examples demonstrate that the proposed approach is able to predict the Young's impedance stably even with moderate noise, and the field data examples verify the effectiveness of the proposed approach in Young's impedance estimation and "sweet spot" evaluation.
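For an isotropic medium, the Young's modulus follows from Vp, Vs, and density via standard elasticity, and the Young's impedance is then simply its product with density. The sketch below shows that computation; the well-log values are hypothetical, and this is the textbook relation, not the reflectivity equation derived in the study.

```python
# Young's modulus from isotropic Vp, Vs and density (standard elasticity),
# then the Young's impedance E * rho used as a brittleness indicator.
# The log values below are hypothetical.
vp, vs, rho = 3800.0, 2200.0, 2400.0  # m/s, m/s, kg/m^3

mu = rho * vs**2                                     # shear modulus (Pa)
E = mu * (3 * vp**2 - 4 * vs**2) / (vp**2 - vs**2)   # Young's modulus (Pa)
young_impedance = E * rho                            # Pa * kg/m^3
```

The same E can be cross-checked through the bulk modulus, E = 9*K*mu / (3*K + mu) with K = rho*(vp^2 - 4/3*vs^2), which is a useful sanity test when implementing the conversion.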
NASA Astrophysics Data System (ADS)
Mustac, M.; Kim, S.; Tkalcic, H.; Rhie, J.; Chen, Y.; Ford, S. R.; Sebastian, N.
2015-12-01
Conventional approaches to inverse problems suffer from non-linearity and non-uniqueness in estimations of seismic structures and source properties. Estimated results and associated uncertainties are often biased by applied regularizations and additional constraints, which are commonly introduced to solve such problems. Bayesian methods, however, provide statistically meaningful estimations of models and their uncertainties constrained by data information. In addition, hierarchical and trans-dimensional (trans-D) techniques are inherently implemented in the Bayesian framework to account for involved error statistics and model parameterizations, and, in turn, allow more rigorous estimations of the same. Here, we apply Bayesian methods throughout the entire inference process to estimate seismic structures and source properties in Northeast Asia including east China, the Korean peninsula, and the Japanese islands. Ambient noise analysis is first performed to obtain a base three-dimensional (3-D) heterogeneity model using continuous broadband waveforms from more than 300 stations. As for the tomography of surface wave group and phase velocities in the 5-70 s band, we adopt a hierarchical and trans-D Bayesian inversion method using Voronoi partition. The 3-D heterogeneity model is further improved by joint inversions of teleseismic receiver functions and dispersion data using a newly developed high-efficiency Bayesian technique. The obtained model is subsequently used to prepare 3-D structural Green's functions for the source characterization. A hierarchical Bayesian method for point source inversion using regional complete waveform data is applied to selected events from the region. The seismic structure and source characteristics with rigorously estimated uncertainties from the novel Bayesian methods provide enhanced monitoring and discrimination of seismic events in northeast Asia.
Finite frequency shear wave splitting tomography: a model space search approach
NASA Astrophysics Data System (ADS)
Mondal, P.; Long, M. D.
2017-12-01
Observations of seismic anisotropy provide key constraints on past and present mantle deformation. A common method for characterizing upper mantle anisotropy is to measure shear wave splitting parameters (delay time and fast direction). However, the interpretation is not straightforward, because splitting measurements represent an integration of structure along the ray path. A tomographic approach that allows for localization of anisotropy is desirable; however, tomographic inversion for anisotropic structure is a daunting task, since 21 parameters are needed to describe general anisotropy. Such a large parameter space does not allow a straightforward application of tomographic inversion. Building on previous work on finite frequency shear wave splitting tomography, this study aims to develop a framework for SKS splitting tomography with a new parameterization of anisotropy and a model space search approach. We reparameterize the full elastic tensor, reducing the number of parameters to three (a measure of strength based on symmetry considerations for olivine, plus the dip and azimuth of the fast symmetry axis). We compute Born-approximation finite frequency sensitivity kernels relating model perturbations to splitting intensity observations. The strong dependence of the sensitivity kernels on the starting anisotropic model, and thus the strong non-linearity of the inverse problem, makes a linearized inversion infeasible. Therefore, we implement a Markov Chain Monte Carlo technique in the inversion procedure. We have performed tests with synthetic data sets to evaluate computational costs and infer the resolving power of our algorithm for synthetic models with multiple anisotropic layers. Our technique can resolve anisotropic parameters on length scales of ˜50 km for realistic station and event configurations for dense broadband experiments. We are proceeding towards applications to real data sets, with an initial focus on the High Lava Plains of Oregon.
Xueri Dang; Chun-Ta Lai; David Y. Hollinger; Andrew J. Schauer; Jingfeng Xiao; J. William Munger; Clenton Owensby; James R. Ehleringer
2011-01-01
We evaluated an idealized boundary layer (BL) model with simple parameterizations using vertical transport information from community model outputs (NCAR/NCEP Reanalysis and ECMWF Interim Analysis) to estimate regional-scale net CO2 fluxes from 2002 to 2007 at three forest and one grassland flux sites in the United States. The BL modeling...
Optimal lattice-structured materials
Messner, Mark C.
2016-07-09
This paper describes a method for optimizing the mesostructure of lattice-structured materials. These materials are periodic arrays of slender members resembling efficient, lightweight macroscale structures like bridges and frame buildings. Current additive manufacturing technologies can assemble lattice structures with length scales ranging from nanometers to millimeters. Previous work demonstrates that lattice materials have excellent stiffness- and strength-to-weight scaling, outperforming natural materials. However, there are currently no methods for producing optimal mesostructures that consider the full space of possible 3D lattice topologies. The inverse homogenization approach for optimizing the periodic structure of lattice materials requires a parameterized, homogenized material model describing the response of an arbitrary structure. This work develops such a model, starting with a method for describing the long-wavelength, macroscale deformation of an arbitrary lattice. The work combines the homogenized model with a parameterized description of the total design space to generate a parameterized model. Finally, the work describes an optimization method capable of producing optimal mesostructures. Several examples demonstrate the optimization method. One of these examples produces an elastically isotropic, maximally stiff structure, here called the isotruss, that arguably outperforms the anisotropic octet truss topology.
Parameterizing the Morse Potential for Coarse-Grained Modeling of Blood Plasma
Zhang, Na; Zhang, Peng; Kang, Wei; Bluestein, Danny; Deng, Yuefan
2014-01-01
Multiscale simulations of fluids such as blood represent a major computational challenge of coupling the disparate spatiotemporal scales between molecular and macroscopic transport phenomena characterizing such complex fluids. In this paper, a coarse-grained (CG) particle model is developed for simulating blood flow by modifying the Morse potential, traditionally used in Molecular Dynamics for modeling vibrating structures. The modified Morse potential is parameterized with effective mass scales for reproducing blood viscous flow properties, including density, pressure, viscosity, compressibility, and characteristic flow dynamics of human blood plasma fluid. The parameterization follows a standard inverse-problem approach in which the optimal micro parameters are systematically searched, by gradually decoupling loosely correlated parameter spaces, to match the macro physical quantities of viscous blood flow. The predictions of this particle-based multiscale model compare favorably to classic viscous flow solutions such as Counter-Poiseuille and Couette flows. It demonstrates that such a coarse-grained particle model can be applied to replicate the dynamics of viscous blood flow, with the advantage of bridging the gap between the macroscopic flow scales and the cellular scales characterizing blood flow that continuum-based models fail to handle adequately. PMID:24910470
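The standard Morse pair potential that the study starts from has the form U(r) = De*(exp(-2*alpha*(r - r0)) - 2*exp(-alpha*(r - r0))): a repulsive core, a well of depth De at equilibrium separation r0, and a width controlled by alpha. A minimal sketch follows; the parameter values are illustrative, not those fitted to blood plasma in the study.

```python
import numpy as np

# Standard Morse pair potential between CG particles. De, alpha, r0 are the
# parameters that would be tuned against the target fluid properties; the
# defaults here are illustrative only.
def morse(r, De=1.0, alpha=2.0, r0=1.0):
    """Pair potential with a repulsive core and an attractive well of
    depth De at separation r0."""
    x = np.exp(-alpha * (r - r0))
    return De * (x * x - 2.0 * x)

r = np.linspace(0.6, 3.0, 241)
u = morse(r)  # potential curve: steep wall below r0, well at r0, decay beyond
```

Parameterization then amounts to adjusting (De, alpha, r0) and the effective mass scale until bulk observables (density, viscosity, compressibility) from the CG simulation match the macroscopic targets.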
Qu, Zhiyu; Qu, Fuxin; Hou, Changbo; Jing, Fulong
2018-05-19
In an inverse synthetic aperture radar (ISAR) imaging system for targets with complex motion, the azimuth echo signals of the target are always modeled as multicomponent quadratic frequency modulation (QFM) signals. The chirp rate (CR) and quadratic chirp rate (QCR) estimation of QFM signals is very important to solve the ISAR image defocus problem. For multicomponent QFM (multi-QFM) signals, the conventional CR and QCR estimation algorithms suffer from cross-terms and poor anti-noise ability. This paper proposes a novel estimation algorithm called the two-dimensional product modified parameterized chirp rate-quadratic chirp rate distribution (2D-PMPCRD) for QFM signal parameter estimation. The 2D-PMPCRD employs a multi-scale parametric symmetric self-correlation function and a modified nonuniform fast Fourier transform-fast Fourier transform to transform the signals into the chirp rate-quadratic chirp rate (CR-QCR) domains. It can greatly suppress the cross-terms while strengthening the auto-terms by multiplying different CR-QCR domains with different scale factors. Compared with the high-order ambiguity function-integrated cubic phase function and the modified Lv's distribution, the simulation results verify that the 2D-PMPCRD acquires higher anti-noise performance and better cross-term suppression for multi-QFM signals with reasonable computation cost.
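A single QFM component has phase 2*pi*(f0*t + (CR/2)*t^2 + (QCR/6)*t^3), so its instantaneous frequency f0 + CR*t + (QCR/2)*t^2 has slope set by the chirp rate and curvature set by the quadratic chirp rate. The sketch below generates one such component with invented parameters and checks this relation numerically; it illustrates the signal model only, not the 2D-PMPCRD estimator itself.

```python
import numpy as np

# One QFM component of the ISAR azimuth echo model. Sampling rate and the
# (f0, CR, QCR) values are illustrative.
fs = 1024.0
t = np.arange(1024) / fs
f0, cr, qcr = 50.0, 120.0, 400.0  # Hz, Hz/s, Hz/s^2

phase = 2.0 * np.pi * (f0 * t + 0.5 * cr * t**2 + qcr * t**3 / 6.0)
signal = np.exp(1j * phase)

# Instantaneous frequency is the phase derivative / (2*pi):
# f(t) = f0 + cr*t + (qcr/2)*t^2, i.e. CR is the slope and QCR the curvature.
inst_freq = np.gradient(phase, t) / (2.0 * np.pi)
```

A multi-QFM echo is a sum of such components; bilinear estimators applied to the sum produce the cross-terms that the proposed distribution is designed to suppress.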
Heat and mass transport during a groundwater replenishment trial in a highly heterogeneous aquifer
NASA Astrophysics Data System (ADS)
Seibert, Simone; Prommer, Henning; Siade, Adam; Harris, Brett; Trefry, Mike; Martin, Michael
2014-12-01
Changes in subsurface temperature distribution resulting from the injection of fluids into aquifers may impact physicochemical and microbial processes as well as basin resource management strategies. We have completed a 2 year field trial in a hydrogeologically and geochemically heterogeneous aquifer below Perth, Western Australia, in which highly treated wastewater was injected for large-scale groundwater replenishment. During the trial, chloride and temperature data were collected from conventional monitoring wells and by time-lapse temperature logging. We used a joint inversion of these solute tracer and temperature data to parameterize a numerical flow and multispecies transport model and to analyze the solute and heat propagation characteristics that prevailed during the trial. The simulation results illustrate that while solute transport is largely confined to the most permeable lithological units, heat transport was also affected by heat exchange with lithological units that have a much lower hydraulic conductivity. Heat transfer by conduction was found to significantly influence the complex temporal and spatial temperature distribution, especially with growing radial distance and in aquifer sequences with a heterogeneous hydraulic conductivity distribution. We attempted to estimate spatially varying thermal transport parameters during the data inversion to illustrate the anticipated correlations of these parameters with lithological heterogeneities, but estimates could not be uniquely determined on the basis of the collected data.
Proxies of oceanic Lithosphere/Asthenosphere Boundary from Global Seismic Anisotropy Tomography
NASA Astrophysics Data System (ADS)
Burgos, Gael; Montagner, Jean-Paul; Beucler, Eric; Trampert, Jeannot; Capdeville, Yann
2013-04-01
Surface waves provide essential information on the global structure of the upper mantle despite their low lateral resolution. This study, based on surface wave data, presents the development of a new anisotropic tomographic model of the upper mantle, a simplified isotropic model, and the consequences of these results for the Lithosphere/Asthenosphere Boundary (LAB). As a first step, a large number of data are collected, merged, and regionalized in order to derive maps of phase and group velocity for the fundamental mode of Rayleigh and Love waves and their azimuthal dependence (phase velocity maps are also obtained for the first six overtones). As a second step, a crustal a posteriori model is developed from the Monte Carlo inversion of the shorter periods of the dataset, in order to take into account the effect of the shallow layers on the upper mantle. With the crustal model, a first Monte Carlo inversion for the upper mantle structure is performed with a simplified isotropic parameterization to highlight the influence of the LAB properties on the surface wave data. Still using the crustal model, a first-order perturbation theory inversion is performed in a fully anisotropic parameterization to build a 3-D tomographic model of the upper mantle (an extended model down to the transition zone is also obtained by using the overtone data). Estimates of the LAB depth are derived from the upper mantle models and compared with the predictions of oceanic lithosphere cooling models. Seismic events are simulated using the Spectral Element Method in order to validate the ability of the anisotropic tomographic model of the upper mantle to reproduce observed seismograms.
NASA Astrophysics Data System (ADS)
Zhao, Zhanfeng; Illman, Walter A.
2018-04-01
Previous studies have shown that geostatistics-based transient hydraulic tomography (THT) is robust for subsurface heterogeneity characterization through the joint inverse modeling of multiple pumping tests. However, the hydraulic conductivity (K) and specific storage (Ss) estimates can be smooth or even erroneous in areas where pumping/observation densities are low. This makes it difficult to image the interlayer and intralayer heterogeneity of highly contrasting materials, including their unit boundaries. In this study, we further test the performance of THT by utilizing existing and newly collected pumping test data of longer durations that showed drawdown responses in both aquifer and aquitard units at a field site underlain by a highly heterogeneous glaciofluvial deposit. The robust performance of THT is highlighted through the comparison of different degrees of model parameterization, including: (1) the effective parameter approach; (2) the geological zonation approach relying on borehole logs; and (3) the geostatistical inversion approach considering different prior information (with/without geological data). Results reveal that the simultaneous analysis of eight pumping tests with the geostatistical inverse model yields the best results in terms of model calibration and validation. We also find that the joint interpretation of long-term drawdown data from aquifer and aquitard units is necessary for mapping their full heterogeneous patterns, including intralayer variabilities. Moreover, as geological data are included as prior information in the geostatistics-based THT analysis, the estimated K values increasingly reflect the vertical distribution patterns of permeameter-estimated K in both aquifer and aquitard units. Finally, the comparison of various THT approaches reveals that differences in the estimated K and Ss tomograms result in significantly different transient drawdown predictions at observation ports.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, Sun Ung, E-mail: sunung@umich.edu; Monroe, Charles W., E-mail: cwmonroe@umich.edu
The inverse problem of parameterizing intermolecular potentials given macroscopic transport and thermodynamic data is addressed. Procedures are developed to create arbitrary-precision algorithms for transport collision integrals, using the Lennard-Jones (12-6) potential as an example. Interpolation formulas are produced that compute these collision integrals to four-digit accuracy over the reduced-temperature range 0.3 ≤ T* ≤ 400, allowing very fast computation. Lennard-Jones parameters for neon, argon, and krypton are determined by simultaneously fitting the observed temperature dependences of their viscosities and second virial coefficients, one of the first times that a thermodynamic and a dynamic property have been used simultaneously for Lennard-Jones parameterization. In addition to matching viscosities and second virial coefficients within the bounds of experimental error, the determined Lennard-Jones parameters are also found to predict the thermal conductivity and self-diffusion coefficient accurately, supporting the value of the Lennard-Jones (12-6) potential for noble-gas transport-property correlation.
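The second virial coefficient used in the fit above is a straightforward integral over the pair potential, so it can be sketched directly. Below is a rough pure-Python quadrature for the reduced Lennard-Jones (12-6) second virial coefficient B2*(T*), not the arbitrary-precision collision-integral algorithms the abstract describes; the integration limits and step count are pragmatic choices:

```python
import math

def b_star(t_red, r_min=0.01, r_max=20.0, n=20000):
    """Reduced LJ (12-6) second virial coefficient via the trapezoid rule:
    B2* = -3 * integral of (exp(-u*(r)/T*) - 1) * r^2 dr, r in units of sigma.
    For r < r_min the integrand is effectively -r^2, a negligible contribution."""
    h = (r_max - r_min) / n
    total = 0.0
    for i in range(n + 1):
        r = r_min + i * h
        u = 4.0 * (r ** -12 - r ** -6)          # reduced LJ pair potential
        f = (math.exp(-u / t_red) - 1.0) * r * r
        total += f * (0.5 if i in (0, n) else 1.0)
    return -3.0 * h * total
```

B2* is negative at low reduced temperature (attraction dominates), positive at high temperature, and crosses zero near the Lennard-Jones Boyle temperature T* ≈ 3.42.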
Water Quality Monitoring for Lake Constance with a Physically Based Algorithm for MERIS Data.
Odermatt, Daniel; Heege, Thomas; Nieke, Jens; Kneubühler, Mathias; Itten, Klaus
2008-08-05
A physically based algorithm is used for automatic processing of MERIS level 1B full-resolution data. The algorithm was originally used with input variables for optimization with different sensors (i.e., channel recalibration and weighting), aquatic regions (i.e., specific inherent optical properties), or atmospheric conditions (i.e., aerosol models). For operational use, however, a lake-specific parameterization is required, representing an approximation of the spatio-temporal variation in atmospheric and hydro-optical conditions and accounting for sensor properties. The algorithm performs atmospheric correction with a LUT for at-sensor radiance, and a downhill simplex inversion of chl-a, sm, and y from subsurface irradiance reflectance. These outputs are enhanced by a selective filter, which makes use of the retrieval residuals. Regular chl-a sampling measurements by the lake's protection authority coinciding with MERIS acquisitions were used for parameterization, training, and validation.
Westenbroek, Stephen M.; Doherty, John; Walker, John F.; Kelson, Victor A.; Hunt, Randall J.; Cera, Timothy B.
2012-01-01
The TSPROC (Time Series PROCessor) computer software uses a simple scripting language to process and analyze time series. It was developed primarily to assist in the calibration of environmental models. The software is designed to perform calculations on time-series data commonly associated with surface-water models, including calculation of flow volumes, transformation by means of basic arithmetic operations, and generation of seasonal and annual statistics and hydrologic indices. TSPROC can also be used to generate some of the key input files required to perform parameter optimization by means of the PEST (Parameter ESTimation) computer software. Through the use of TSPROC, the objective function for use in the model-calibration process can be focused on specific components of a hydrograph.
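TSPROC's seasonal and annual statistics boil down to grouping a time series and reducing each group. A minimal Python analogue of that idea (not TSPROC's own scripting syntax; the flow records are invented) might look like:

```python
from collections import defaultdict
from datetime import date

def seasonal_means(records):
    """Group (date, flow) records into climatological seasons
    (DJF, MAM, JJA, SON) and return the mean flow of each season present."""
    season_of = {12: "DJF", 1: "DJF", 2: "DJF", 3: "MAM", 4: "MAM", 5: "MAM",
                 6: "JJA", 7: "JJA", 8: "JJA", 9: "SON", 10: "SON", 11: "SON"}
    sums, counts = defaultdict(float), defaultdict(int)
    for d, q in records:
        s = season_of[d.month]
        sums[s] += q
        counts[s] += 1
    return {s: sums[s] / counts[s] for s in sums}

flows = [(date(2011, 1, 15), 10.0), (date(2011, 2, 15), 14.0),
         (date(2011, 7, 15), 2.0)]
stats = seasonal_means(flows)
```

Statistics such as these, written to files in the format PEST expects, become the "observations" of the calibration objective function.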
NASA Astrophysics Data System (ADS)
Reiter, D. T.; Rodi, W. L.
2015-12-01
Constructing 3D Earth models through the joint inversion of large geophysical data sets presents numerous theoretical and practical challenges, especially when diverse types of data and model parameters are involved. Among the challenges are the computational complexity associated with large data and model vectors and the need to unify differing model parameterizations, forward modeling methods, and regularization schemes within a common inversion framework. The challenges can be addressed in part by decomposing the inverse problem into smaller, simpler inverse problems that can be solved separately, provided one knows how to merge the separate inversion results into an optimal solution of the full problem. We have formulated an approach to the decomposition of large inverse problems based on the augmented Lagrangian technique from optimization theory. As commonly done, we define a solution to the full inverse problem as the Earth model minimizing an objective function motivated, for example, by a Bayesian inference formulation. Our decomposition approach recasts the minimization problem equivalently as the minimization of component objective functions, corresponding to specified data subsets, subject to the constraints that the minimizing models be equal. A standard optimization algorithm solves the resulting constrained minimization problems by alternating between the separate solution of the component problems and the updating of Lagrange multipliers that serve to steer the individual solution models toward a common model solving the full problem. We are applying our inversion method to the reconstruction of the crust and upper-mantle seismic velocity structure across Eurasia. Data for the inversion comprise a large set of P and S body-wave travel times and fundamental- and first-higher-mode Rayleigh-wave group velocities.
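The alternating scheme described in this abstract can be illustrated on a toy consensus problem. The sketch below is a scalar, two-component analogue (not the authors' seismic implementation): each component objective is minimized separately, a consensus variable is formed, and multiplier updates steer the components toward agreement:

```python
def consensus_solve(targets, rho=1.0, iters=200):
    """Minimize sum_i 0.5*(x_i - a_i)^2 subject to x_1 = ... = x_n via an
    augmented Lagrangian with multipliers y_i (consensus-style ADMM)."""
    n = len(targets)
    y = [0.0] * n
    z = 0.0
    for _ in range(iters):
        # Component subproblems have a closed-form minimizer here.
        x = [(a - yi + rho * z) / (1.0 + rho) for a, yi in zip(targets, y)]
        # Consensus variable: average of the shifted component solutions.
        z = sum(xi + yi / rho for xi, yi in zip(x, y)) / n
        # Multiplier (dual) update steers the components toward agreement.
        y = [yi + rho * (xi - z) for xi, yi in zip(x, y)]
    return z

z = consensus_solve([1.0, 3.0])   # joint optimum of the two components is 2.0
```

Here the full problem's minimizer is the average of the component targets; the alternating iteration recovers it without ever minimizing the summed objective directly.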
NASA Astrophysics Data System (ADS)
Zielke, O.; McDougall, D.; Mai, P. M.; Babuska, I.
2014-12-01
One fundamental aspect of seismic hazard mitigation is gaining a better understanding of the rupture process. Because direct observation of the relevant parameters and properties is not possible, other means such as kinematic source inversions are used instead. By constraining the spatial and temporal evolution of fault slip during an earthquake, these inversion approaches may enable valuable insights into the physics of the rupture process. However, due to the underdetermined nature of this inversion problem (i.e., inverting a kinematic source model for an extended fault based on seismic data), the provided solutions are generally non-unique. Here we present a statistical (Bayesian) inversion approach based on an open-source library for uncertainty quantification (UQ) called QUESO that was developed at ICES (UT Austin). The approach has advantages over deterministic inversion approaches in that it provides not only a single (non-unique) solution but also the corresponding uncertainty bounds. These uncertainty bounds help to judge, qualitatively and quantitatively, how well constrained an inversion solution is and how much rupture complexity the data reliably resolve. The presented inversion scheme uses only tele-seismically recorded body waves, but future developments may lead us toward joint inversion schemes. After giving an insight into the inversion scheme itself (based on delayed rejection adaptive Metropolis, DRAM), we explore the method's resolution potential. For that, we synthetically generate tele-seismic data, add different levels of noise and/or change the fault-plane parameterization, and then apply our inversion scheme in an attempt to recover the (known) kinematic rupture model. We conclude by inverting real tele-seismic data from a recent large earthquake and comparing the results with deterministically derived kinematic source models provided by other research groups.
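The DRAM sampler named above builds on plain random-walk Metropolis, which is compact enough to sketch. The toy below samples a one-parameter Gaussian "posterior"; the delayed-rejection and adaptive-proposal refinements of DRAM are omitted:

```python
import math
import random

def metropolis(logpost, x0, step, n, seed=0):
    """Basic random-walk Metropolis chain. DRAM adds delayed rejection and
    adaptive scaling of the proposal on top of exactly this loop."""
    rng = random.Random(seed)
    x, lp = x0, logpost(x0)
    chain = []
    for _ in range(n):
        xp = x + rng.gauss(0.0, step)          # symmetric proposal
        lpp = logpost(xp)
        if math.log(rng.random()) < lpp - lp:  # accept with prob min(1, ratio)
            x, lp = xp, lpp
        chain.append(x)
    return chain

# Toy posterior: Gaussian with mean 2 and unit variance.
chain = metropolis(lambda x: -0.5 * (x - 2.0) ** 2, 0.0, 1.0, 20000)
mean = sum(chain[5000:]) / len(chain[5000:])   # discard burn-in, then average
```

The spread of the post-burn-in samples is the uncertainty bound the abstract refers to; a deterministic optimizer would return only a single point.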
A sensitivity study of the coupled simulation of the Northeast Brazil rainfall variability
NASA Astrophysics Data System (ADS)
Misra, Vasubandhu
2007-06-01
Two long-term coupled ocean-land-atmosphere simulations with slightly different parameterizations of the diagnostic shallow inversion clouds in the atmospheric general circulation model (AGCM) of the Center for Ocean-Land-Atmosphere Studies (COLA) coupled climate model are compared for their annual cycle and interannual variability of northeast Brazil (NEB) rainfall. It is seen that the solar insolation, affected by the changes to the shallow inversion clouds, results in large-scale changes to the gradients of the SST and the surface pressure. The latter in turn modulates the surface convergence and the associated Atlantic ITCZ precipitation and the NEB annual rainfall variability. In contrast, the differences in NEB interannual rainfall variability between the two coupled simulations are attributed to their different remote ENSO forcing.
NASA Astrophysics Data System (ADS)
Foolad, Foad; Franz, Trenton E.; Wang, Tiejun; Gibson, Justin; Kilic, Ayse; Allen, Richard G.; Suyker, Andrew
2017-03-01
In this study, the feasibility of using inverse vadose zone modeling for estimating field-scale actual evapotranspiration (ETa) was explored at a long-term agricultural monitoring site in eastern Nebraska. Data from both point-scale soil water content (SWC) sensors and the area-average technique of cosmic-ray neutron probes were evaluated against independent ETa estimates from a co-located eddy covariance tower. While this methodology has been successfully used for estimates of groundwater recharge, it was essential to assess its performance for other components of the water balance such as ETa. In light of recent evaluations of land surface models (LSMs), independent estimates of hydrologic state variables and fluxes are critically needed benchmarks. The results here indicate reasonable estimates of daily and annual ETa from the point sensors, but with highly varied soil hydraulic function parameterizations due to local soil texture variability. That multiple soil hydraulic parameterizations lead to equally good ETa estimates is consistent with the hydrological principle of equifinality. While this study focused on one particular site, the framework can be easily applied to other SWC monitoring networks across the globe. The value-added products of groundwater recharge and ETa flux from the SWC monitoring networks will provide additional and more robust benchmarks for the validation of LSMs as they continue to improve their forecast skill. In addition, the value-added products of groundwater recharge and ETa often have more direct impacts on societal decision-making than SWC alone. Water flux impacts human decision-making from policies on the long-term management of groundwater resources (recharge), to yield forecasts (ETa), and to optimal irrigation scheduling (ETa). Illustrating the societal benefits of SWC monitoring is critical to ensure the continued operation and expansion of these public datasets.
Data error and highly parameterized groundwater models
Hill, M.C.
2008-01-01
Strengths and weaknesses of highly parameterized models, in which the number of parameters exceeds the number of observations, are demonstrated using a synthetic test case. Results suggest that the approach can yield close matches to observations but also serious errors in system representation. It is proposed that avoiding the difficulties of highly parameterized models requires close evaluation of: (1) model fit, (2) performance of the regression, and (3) estimated parameter distributions. Comparisons to hydrogeologic information are expected to be critical to obtaining credible models. Copyright © 2008 IAHS Press.
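The core warning here, that a model can match every observation while misrepresenting the system, is easy to reproduce with a toy in which the parameter count equals the observation count. The sketch below uses a polynomial interpolant (Runge's classic example), not a groundwater model, purely as an illustration:

```python
def lagrange_fit(xs, ys):
    """Exact interpolant through all observations: with as many parameters
    as data points the fit is perfect, but predictions between points need not be."""
    def p(x):
        total = 0.0
        for i, (xi, yi) in enumerate(zip(xs, ys)):
            term = yi
            for j, xj in enumerate(xs):
                if j != i:
                    term *= (x - xj) / (xi - xj)
            total += term
        return total
    return p

# "Truth" is smooth; sample it at 9 equispaced points and interpolate.
truth = lambda x: 1.0 / (1.0 + 25.0 * x * x)
xs = [-1.0 + i * 0.25 for i in range(9)]
p = lagrange_fit(xs, [truth(x) for x in xs])
fit_err = max(abs(p(x) - truth(x)) for x in xs)              # zero at the data
pred_err = max(abs(p(x) - truth(x)) for x in (-0.95, 0.95))  # large between points
```

The fit to the observations is exact, yet the model's behavior between observations is badly wrong, which is precisely why the abstract recommends scrutinizing more than model fit alone.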
Non-traditional Physics-based Inverse Approaches for Determining a Buried Object’s Location
2008-09-01
parameterization of its time-decay curve) in dipole models (Pasion and Oldenburg, 2001) or the amplitudes of responding magnetic sources in the NSMS ... commonly in use. According to the simple dipole model (Pasion and Oldenburg, 2001), the secondary magnetic field due to the dipole m is B = (μ0/(4πr³))[3(m·r̂)r̂ − m] ... Forum, St. Louis, MO. L. R. Pasion and D. W. Oldenburg (2001), "A discrimination algorithm for UXO using time domain electromagnetics." J. Environ
NASA Technical Reports Server (NTRS)
Chao, Winston C.
2015-01-01
The excessive precipitation over steep and high mountains (EPSM) in GCMs and meso-scale models is due to a lack of parameterization of the thermal effects of the subgrid-scale topographic variation. These thermal effects drive subgrid-scale heated slope induced vertical circulations (SHVC). SHVC provide a ventilation effect of removing heat from the boundary layer of resolvable-scale mountain slopes and depositing it higher up. The lack of SHVC parameterization is the cause of EPSM. The author has previously proposed a method of parameterizing SHVC, here termed SHVC.1. Although this has been successful in avoiding EPSM, the drawback of SHVC.1 is that it suppresses convective type precipitation in the regions where it is applied. In this article we propose a new method of parameterizing SHVC, here termed SHVC.2. In SHVC.2 the potential temperature and mixing ratio of the boundary layer are changed when used as input to the cumulus parameterization scheme over mountainous regions. This allows the cumulus parameterization to assume the additional function of SHVC parameterization. SHVC.2 has been tested in NASA Goddard's GEOS-5 GCM. It achieves the primary goal of avoiding EPSM while also avoiding the suppression of convective-type precipitation in regions where it is applied.
NASA Astrophysics Data System (ADS)
Connor, C.; Connor, L.; White, J.
2015-12-01
Explosive volcanic eruptions are often classified by deposit mass and eruption column height. How well are these eruption parameters determined in older deposits, and how well can we reduce uncertainty using robust numerical and statistical methods? We describe an efficient and effective inversion and uncertainty quantification approach for estimating eruption parameters given a dataset of tephra deposit thickness and granulometry. The inversion and uncertainty quantification are implemented using the open-source PEST++ code. Inversion with PEST++ can be used with a variety of forward models and is applied here using Tephra2, a code that simulates advective and dispersive tephra transport and deposition. The Levenberg-Marquardt algorithm is combined with formal Tikhonov and subspace regularization to invert eruption parameters; a linear equation for conditional uncertainty propagation is used to estimate posterior parameter uncertainty. Both the inversion and the uncertainty analysis support simultaneous analysis of the full eruption and wind-field parameterization. The combined inversion/uncertainty-quantification approach is applied to the 1992 eruption of Cerro Negro (Nicaragua), the 2011 Kirishima-Shinmoedake (Japan) eruption, and the 1913 Colima (Mexico) eruption. These examples show that although eruption mass uncertainty is reduced by inversion against tephra isomass data, considerable uncertainty remains for many eruption and wind-field parameters, such as eruption column height. Supplementing the inversion dataset with tephra granulometry data is shown to further reduce the uncertainty of most eruption and wind-field parameters. We think the use of such robust models provides a better understanding of uncertainty in eruption parameters, and hence eruption classification, than is possible with more qualitative methods that are widely used.
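For a linear problem, the Tikhonov-regularized step at the heart of such an inversion reduces to damped normal equations. A minimal two-parameter sketch follows; the sensitivity matrix, data, and preferred values are toy inputs, not Tephra2 quantities:

```python
def tikhonov_solve(G, d, lam, m_pref):
    """Solve min ||G m - d||^2 + lam * ||m - m_pref||^2 for a two-parameter
    problem via the normal equations (2x2 system solved in closed form)."""
    # A = G^T G + lam*I and b = G^T d + lam*m_pref
    a11 = sum(g[0] * g[0] for g in G) + lam
    a12 = sum(g[0] * g[1] for g in G)
    a22 = sum(g[1] * g[1] for g in G) + lam
    b1 = sum(g[0] * di for g, di in zip(G, d)) + lam * m_pref[0]
    b2 = sum(g[1] * di for g, di in zip(G, d)) + lam * m_pref[1]
    det = a11 * a22 - a12 * a12
    return [(b1 * a22 - b2 * a12) / det, (a11 * b2 - a12 * b1) / det]

G = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]   # toy sensitivity matrix
d = [1.0, 2.0, 3.0]                        # data consistent with m = [1, 2]
m = tikhonov_solve(G, d, 1e-6, [0.0, 0.0])
```

With a small regularization weight the data dominate and the true parameters are recovered; as lam grows, the estimate is pulled toward the preferred values, which is the stabilizing trade-off Tikhonov regularization provides.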
Parameterization of aerosol scavenging due to atmospheric ionization under varying relative humidity
NASA Astrophysics Data System (ADS)
Zhang, Liang; Tinsley, Brian A.
2017-05-01
Simulations and parameterizations of the modulation of aerosol scavenging by electric charges on particles and droplets for different relative humidities have been made for 3 μm radii droplets and a wide range of particle radii. For droplets and particles with opposite-sign charges, the attractive Coulomb force increases the collision rate coefficients above values due to other forces. With same-sign charges, the repulsive Coulomb force decreases the rate coefficients, and the short-range attractive image forces become important. The phoretic forces are attractive for relative humidity less than 100% and repulsive for relative humidity greater than 100% and have increasing overall effect for particle radii up to about 1 μm. There is an analytic solution for rate coefficients if only inverse square forces are present, but due to the presence of image forces, and for larger particles the intercept, weight, and the flow around the particle affecting the droplet trajectory, the simulated results usually depart far from the analytic solution. We give simple empirical parameterization formulas for some cases and more complex parameterizations for more exact fits to the simulated results. The results can be used in cloud models with growing droplets, as in updrafts, as well as with evaporating droplets in downdrafts. There is considered to be little scavenging of uncharged ice-forming nuclei in updrafts, but with charged ice-forming nuclei it is possible for scavenging in updrafts in cold clouds to produce contact ice nucleation. Scavenging in updrafts below the freezing level produces immersion nuclei that promote enhanced freezing as droplets rise above it.
NASA Astrophysics Data System (ADS)
Yoon, H.; McKenna, S. A.; Hart, D. B.
2010-12-01
Heterogeneity plays an important role in groundwater flow and contaminant transport in natural systems. Since it is impossible to directly measure the spatial variability of hydraulic conductivity, predictions of solute transport based on mathematical models are always uncertain. While in most cases groundwater flow and tracer transport problems are investigated in two-dimensional (2D) systems, it is important to study more realistic and well-controlled 3D systems to fully evaluate inverse parameter estimation techniques and the uncertainty in the resulting estimates. We used tracer concentration breakthrough curves (BTCs) obtained from a magnetic resonance imaging (MRI) technique in a small flow cell (14 x 8 x 8 cm) that was packed with a known pattern of five different sands (i.e., zones) having cm-scale variability. In contrast to typical inversion systems with head, conductivity, and concentration measurements at limited points, the MRI data included BTCs measured at a voxel scale (~0.2 cm in each dimension) over 13 x 8 x 8 cm with a well-controlled boundary condition, but did not include direct measurements of head and conductivity. Hydraulic conductivity and porosity were conceptualized as spatial random fields and estimated using pilot points along layers of the 3D medium. The steady-state water flow and solute transport were solved using MODFLOW and MODPATH. The inversion problem was solved with a nonlinear parameter estimation package, PEST. Two approaches to parameterization of the spatial fields are evaluated: (1) the detailed zone information was used as prior information to constrain the spatial impact of the pilot points and reduce the number of parameters; and (2) highly parameterized inversion at cm scale (e.g., 1664 parameters) using singular value decomposition (SVD) methodology to significantly reduce the run-time demands. Both results will be compared to measured BTCs.
With MRI, it is easy to change the averaging scale of the observed concentration from point to cross-section. This comparison allows us to evaluate which method best matches experimental results at different scales. To evaluate the uncertainty in parameter estimation, the null space Monte Carlo method will be used to reduce computational burden of the development of calibration-constrained Monte Carlo based parameter fields. This study will illustrate how accurately a well-calibrated model can predict contaminant transport. This material is based upon work supported as part of the Center for Frontiers of Subsurface Energy Security (CFSES), an Energy Frontier Research Center funded by the U.S. Department of Energy, Office of Science, Office of Basic Energy Sciences under Award Number DE-SC0001114. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
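The null space Monte Carlo method mentioned above can be caricatured with a two-parameter model and a single observation: any step along the Jacobian's null space changes the parameter field without changing the fit. (Real implementations obtain the null space from an SVD of the Jacobian; the model and numbers below are purely illustrative.)

```python
import random

def forward(p):
    """Toy model: the single observation senses only the sum of the parameters."""
    return p[0] + p[1]

obs = 2.0
p_cal = [1.0, 1.0]        # one calibrated parameter set that fits the data
null_dir = (1.0, -1.0)    # direction invisible to the observation

rng = random.Random(1)
fields = []
for _ in range(5):
    a = rng.uniform(-1.0, 1.0)   # random step along the null space
    fields.append([p_cal[0] + a * null_dir[0], p_cal[1] + a * null_dir[1]])

misfits = [abs(forward(p) - obs) for p in fields]   # all remain essentially zero
```

Every generated field remains calibrated, yet the individual parameter values vary widely, which is exactly the calibration-constrained parameter uncertainty the method is designed to explore.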
Fourier transformation microwave spectroscopy of the methyl glycolate-H2O complex
NASA Astrophysics Data System (ADS)
Fujitake, Masaharu; Tanaka, Toshihiro; Ohashi, Nobukimi
2018-01-01
The rotational spectrum of one conformer of the methyl glycolate-H2O complex has been measured by means of a pulsed-jet Fourier transform microwave spectrometer. The observed a- and b-type transitions exhibit doublet splittings due to the internal rotation of the methyl group. On the other hand, most of the c-type transitions exhibit quartet splittings arising from the methyl internal rotation and the inversion motion between two equivalent conformations. The spectrum was analyzed using parameterized expressions of the Hamiltonian matrix elements derived by applying the tunneling matrix formalism. Based on the results obtained from ab initio calculation, the observed methyl glycolate-H2O complex was assigned to the most stable conformer of the insertion complex, in which a non-planar seven-membered-ring structure is formed by the intermolecular hydrogen bonds between the methyl glycolate and H2O subunits. The inversion motion observed in the c-type transitions is therefore a kind of ring-inversion motion between two equivalent conformations. Conformational flexibility, corresponding to the ring inversion between two equivalent conformations and to the isomerization between the two possible conformers of the insertion complex, was investigated with the help of the ab initio calculation.
NASA Technical Reports Server (NTRS)
Truong-Loi, My-Linh; Saatchi, Sassan; Jaruwatanadilok, Sermsak
2012-01-01
A semi-empirical algorithm for the retrieval of soil moisture, root-mean-square (RMS) height, and biomass from polarimetric SAR data is explained and analyzed in this paper. The algorithm is a simplification of the distorted Born model. It takes into account the physical scattering phenomenon and has three major components: volume, double-bounce, and surface. This simplified model uses the three backscattering coefficients (σHH, σHV, and σVV) at low frequency (P-band). The inversion process uses the Levenberg-Marquardt non-linear least-squares method to estimate the structural parameters. The estimation process is explained in full, from initialization of the unknowns to the retrievals. A sensitivity analysis is also performed in which the initial values in the inversion process are varied randomly. The results show that the inversion process is not very sensitive to the initial values: the majority of the retrievals have a root-mean-square error lower than 5% for soil moisture, 24 Mg/ha for biomass, and 0.49 cm for roughness, for a soil moisture of 40%, roughness equal to 3 cm, and biomass varying from 0 to 500 Mg/ha with a mean of 161 Mg/ha.
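Levenberg-Marquardt itself is a damped Gauss-Newton iteration: the damping is raised when a step fails and lowered when it succeeds. A minimal scalar version can be sketched with a toy exponential model (not the distorted Born model of the abstract):

```python
import math

def levmar_1d(f, dfdp, xs, ys, p0, lam=1e-3, iters=50):
    """Minimal scalar Levenberg-Marquardt: damped Gauss-Newton steps,
    with the damping factor adapted based on whether the misfit improved."""
    p = p0
    sse = sum((f(x, p) - y) ** 2 for x, y in zip(xs, ys))
    for _ in range(iters):
        jtj = sum(dfdp(x, p) ** 2 for x in xs)
        jtr = sum(dfdp(x, p) * (y - f(x, p)) for x, y in zip(xs, ys))
        step = jtr / (jtj * (1.0 + lam))      # damped normal-equations step
        trial = p + step
        sse_t = sum((f(x, trial) - y) ** 2 for x, y in zip(xs, ys))
        if sse_t < sse:
            p, sse, lam = trial, sse_t, lam * 0.5   # success: relax damping
        else:
            lam *= 10.0                              # failure: damp harder
    return p

f = lambda x, k: math.exp(-k * x)
dfdp = lambda x, k: -x * math.exp(-k * x)
xs = [0.5, 1.0, 2.0]
ys = [f(x, 1.3) for x in xs]              # synthetic "observations", true k = 1.3
k = levmar_1d(f, dfdp, xs, ys, p0=0.1)
```

Starting far from the truth, the damped iteration still walks the parameter to the generating value, which is the robustness-to-initialization property the sensitivity analysis above reports.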
Speeding up the learning of robot kinematics through function decomposition.
Ruiz de Angulo, Vicente; Torras, Carme
2005-11-01
The main drawback of using neural networks or other example-based learning procedures to approximate the inverse kinematics (IK) of robot arms is the high number of training samples (i.e., robot movements) required to attain an acceptable precision. We propose here a trick, valid for most industrial robots, that greatly reduces the number of movements needed to learn or relearn the IK to a given accuracy. This trick consists in expressing the IK as a composition of learnable functions, each having half the dimensionality of the original mapping. Off-line and on-line training schemes to learn these component functions are also proposed. Experimental results obtained by using nearest neighbors and parameterized self-organizing map, with and without the decomposition, show that the time savings granted by the proposed scheme grow polynomially with the precision required.
Estimating the Earthquake Source Time Function by Markov Chain Monte Carlo Sampling
NASA Astrophysics Data System (ADS)
Dȩbski, Wojciech
2008-07-01
Many aspects of earthquake source dynamics, like dynamic stress drop, rupture velocity and directivity, etc., are currently inferred from the source time functions obtained by a deconvolution of the propagation and recording effects from seismograms. The question of the accuracy of the obtained results remains open. In this paper we address this issue by considering two aspects of the source time function deconvolution. First, we propose a new pseudo-spectral parameterization of the sought function which explicitly takes into account the physical constraints imposed on the sought functions. Such a parameterization automatically excludes non-physical solutions and so improves the stability and uniqueness of the deconvolution. Secondly, we demonstrate that the Bayesian approach to the inverse problem at hand, combined with an efficient Markov Chain Monte Carlo sampling technique, is a method which allows efficient estimation of the source time function uncertainties. The key point of the approach is the description of the solution of the inverse problem by the a posteriori probability density function constructed according to the Bayesian (probabilistic) theory. Next, the Markov Chain Monte Carlo sampling technique is used to sample this function so the statistical estimator of a posteriori errors can be easily obtained with minimal additional computational effort with respect to modern inversion (optimization) algorithms. The methodological considerations are illustrated by a case study of the mining-induced seismic event of magnitude ML ≈ 3.1 that occurred at the Rudna (Poland) copper mine. The seismic P-wave records were inverted for the source time functions, using the proposed algorithm and the empirical Green function technique to approximate Green functions. The obtained solutions seem to suggest some complexity of the rupture process, with double pulses of energy release.
However, the error analysis shows that the hypothesis of source complexity is not justified at the 95% confidence level. On the basis of the analyzed event we also show that the separation of the source inversion into two steps introduces limitations on the completeness of the a posteriori error analysis.
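The idea of building physical constraints directly into the parameterization can be illustrated generically: writing the source time function as the square of a truncated series makes nonnegativity (and zero slip rate at the ends of the rupture) automatic. The basis and coefficients below are illustrative only, not the authors' pseudo-spectral parameterization:

```python
import math

def stf(t, coeffs, duration):
    """Source time function parameterized as the square of a truncated
    sine series, so s(t) >= 0 and s(0) = s(duration) = 0 by construction."""
    s = sum(c * math.sin((k + 1) * math.pi * t / duration)
            for k, c in enumerate(coeffs))
    return s * s

coeffs = [1.0, 0.4, -0.2]          # hypothetical spectral coefficients
dur = 1.0
samples = [stf(i * dur / 100.0, coeffs, dur) for i in range(101)]
```

Whatever values the inversion assigns to the coefficients, the resulting function is physically admissible, so the sampler never wastes effort on (or gets biased by) negative-slip-rate solutions.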
Use of Cloud Computing to Calibrate a Highly Parameterized Model
NASA Astrophysics Data System (ADS)
Hayley, K. H.; Schumacher, J.; MacMillan, G.; Boutin, L.
2012-12-01
We present a case study using cloud computing to facilitate the calibration of a complex and highly parameterized model of regional groundwater flow. The calibration dataset consisted of many (~1500) measurements or estimates of static hydraulic head; a high-resolution time series of groundwater extraction and disposal rates at 42 locations; pressure monitoring at 147 locations, with a total of more than one million raw measurements collected over a ten-year pumping history; and base flow estimates at 5 surface water monitoring locations. This modeling project was undertaken to assess the sustainability of groundwater withdrawal and disposal plans for in situ heavy oil extraction in northeast Alberta, Canada. The geological interpretations used for model construction were based on more than 5,000 wireline logs collected throughout the 30,865 km2 regional study area (RSA), and resulted in a model with 28 slices and 28 hydrostratigraphic units (average model thickness of 700 m, with aquifers ranging from a depth of 50 to 500 m below ground surface). The finite element FEFLOW model constructed on this geological interpretation had 331,408 nodes and required 265 time steps to simulate the ten-year transient calibration period. This numerical model of groundwater flow required 3 hours to run on a server with two 2.8 GHz processors and 16 GB of RAM. Calibration was completed using PEST. Horizontal and vertical hydraulic conductivity as well as specific storage for each unit were independent parameters. For the recharge and the horizontal hydraulic conductivity in the three aquifers with the most transient groundwater use, a pilot point parameterization was adopted. A 7 x 7 grid of pilot points defined over the RSA described a spatially variable horizontal hydraulic conductivity or recharge field. A 7 x 7 grid of multiplier pilot points that perturbed the more regional field was then superimposed over the 3,600 km2 local study area (LSA).
The pilot point multipliers were implemented so that a higher resolution of spatial variability could be obtained where there was a higher density of observation data. Five geologic boundaries were modeled with a specified flux boundary condition, and the transfer rate was used as an adjustable parameter for each of these boundaries. This parameterization resulted in 448 parameters for calibration. In the project planning stage it was estimated that the calibration might require as much as 15,000 hours (1.7 years) of computing. In an effort to complete the calibration in a timely manner, the inversion was parallelized and implemented on as many as 250 computing nodes located on Amazon's EC2 servers. The results of the calibration provided a better fit to the data than previous efforts with homogeneous parameters, and the highly parameterized approach facilitated subspace Monte Carlo analysis for predictive uncertainty. This scale of cloud computing is relatively new for the hydrogeology community, and at the time of implementation it was believed to be the first implementation of a FEFLOW model at this scale. While the experience presented several challenges, the implementation was successful and provides some valuable lessons for future efforts.
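The parallelization described here exploits the fact that the trial model runs within one PEST iteration are independent, so they can be dispatched concurrently to separate workers. A minimal sketch with a stand-in forward model follows (the real runs were 3-hour FEFLOW simulations farmed out to cloud nodes; the parameter sets and observations are invented):

```python
from concurrent.futures import ThreadPoolExecutor

def run_model(params):
    """Stand-in for one forward-model run; returns a sum-of-squares
    objective for the given parameter set against fixed observations."""
    obs = [1.0, 2.0]
    sim = [params[0], params[0] + params[1]]
    return sum((s - o) ** 2 for s, o in zip(sim, obs))

# The trial parameter sets of one iteration are independent of one another,
# so they can be evaluated concurrently, one worker (or cloud node) each.
trials = [[1.0, 1.0], [1.1, 0.9], [0.9, 1.2]]
with ThreadPoolExecutor(max_workers=3) as pool:
    phis = list(pool.map(run_model, trials))
best = trials[phis.index(min(phis))]
```

With a 3-hour forward run, the wall-clock saving scales almost linearly with the number of workers until the worker count reaches the number of runs per iteration, which is why hundreds of nodes were worthwhile here.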
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ray, Jaideep; Lee, Jina; Lefantzi, Sophia
2013-09-01
The estimation of fossil-fuel CO2 emissions (ffCO2) from limited ground-based and satellite measurements of CO2 concentrations will form a key component of the monitoring of treaties aimed at the abatement of greenhouse gas emissions. The limited nature of the measured data leads to a severely underdetermined estimation problem. If the estimation is performed at fine spatial resolutions, it can also be computationally expensive. In order to enable such estimations, advances are needed in the spatial representation of ffCO2 emissions, scalable inversion algorithms, and the identification of observables to measure. To that end, we investigate parsimonious spatial parameterizations of ffCO2 emissions which can be used in atmospheric inversions. We devise and test three random field models, based on wavelets, Gaussian kernels, and covariance structures derived from easily observed proxies of human activity. In doing so, we constructed a novel inversion algorithm, based on compressive sensing and sparse reconstruction, to perform the estimation. We also address scalable ensemble Kalman filters as an inversion mechanism and quantify the impact of the Gaussian assumptions inherent in them. We find that the assumption does not appreciably impact the estimates of mean ffCO2 source strengths, but a comparison with Markov chain Monte Carlo estimates shows significant differences in the variance of the source strengths. Finally, we study whether the very different spatial natures of biogenic and ffCO2 emissions can be used to estimate them, in a disaggregated fashion, solely from CO2 concentration measurements, without extra information from products of incomplete combustion, e.g., CO. We find that this is possible during the winter months, though the errors can be as large as 50%.
Shear velocity structure of central Eurasia from inversion of surface wave velocities
NASA Astrophysics Data System (ADS)
Villaseñor, A.; Ritzwoller, M. H.; Levshin, A. L.; Barmin, M. P.; Engdahl, E. R.; Spakman, W.; Trampert, J.
2001-04-01
We present a shear velocity model of the crust and upper mantle beneath central Eurasia obtained by simultaneous inversion of broadband group and phase velocity maps of fundamental-mode Love and Rayleigh waves. The model is parameterized in terms of velocity-depth profiles on a discrete 2°×2° grid. The model is isotropic in the crust and in the upper mantle below 220 km but, to fit long-period Love and Rayleigh waves simultaneously, is transversely isotropic in the uppermost mantle, from the Moho discontinuity to 220 km depth. We have used newly available a priori models of the crust and sedimentary cover as starting models for the inversion. Consequently, the crustal part of the estimated model shows good correlation with known surface features such as sedimentary basins and mountain ranges. The velocity anomalies in the upper mantle are related to differences between tectonic and stable regions. Old, stable regions such as the East European, Siberian, and Indian cratons are characterized by high upper-mantle shear velocities. Other large high-velocity anomalies occur beneath the Persian Gulf and the Tarim block. Slow shear velocity anomalies are related to regions of current extension (Red Sea and Andaman ridges) and are also found beneath the Tibetan and Turkish-Iranian Plateaus, structures that originated by continent-continent collision. A large low-velocity anomaly beneath western Mongolia corresponds to the location of a hypothesized mantle plume. A clear low-velocity zone in vSH between the Moho and 220 km exists across most of Eurasia, but is absent for vSV. The character and magnitude of anisotropy in the model are on average similar to those of PREM, with the most prominent anisotropic region occurring beneath the Tibetan Plateau.
NASA Astrophysics Data System (ADS)
Guo, Yamin; Cheng, Jie; Liang, Shunlin
2018-02-01
Surface downward longwave radiation (SDLR) is a key variable for calculating the earth's surface radiation budget. In this study, we evaluated seven widely used clear-sky parameterization methods using ground measurements collected from 71 globally distributed fluxnet sites. The Bayesian model averaging (BMA) method was also introduced to obtain a multi-model ensemble estimate. As a whole, the parameterization method of Carmona et al. (2014) performs the best, with an average BIAS, RMSE, and R² of -0.11 W/m², 20.35 W/m², and 0.92, respectively, followed by the parameterization methods of Idso (1981), Prata (Q J R Meteorol Soc 122:1127-1151, 1996), Brunt (Q J R Meteorol Soc 58:389-420, 1932), and Brutsaert (Water Resour Res 11:742-744, 1975). The accuracy of the BMA is close to that of the parameterization method of Carmona et al. (2014) and comparable to that of the parameterization method of Idso (1981). The advantage of the BMA is that it achieves balanced results compared to the individual parameterization methods. To fully assess the performance of the parameterization methods, the effects of climate type, land cover, and surface elevation were also investigated. The five parameterization methods and the BMA all failed over land with a tropical climate type, where water vapor is high, and performed poorly over forest, wetland, and ice. These methods achieved better results over desert, bare land, cropland, and grass, and had acceptable accuracies for sites at different elevations, except for the parameterization method of Carmona et al. (2014) over high-elevation sites. Thus, a method that can be successfully applied everywhere does not exist.
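Bayesian model averaging as used here weights each parameterization's prediction by how well it matches the ground measurements. A toy sketch of the weighting step, assuming Gaussian likelihoods and equal model priors (the numbers are invented, not the paper's):

```python
import numpy as np

def bma_weights(predictions, observed, sigma=1.0):
    """Posterior model weights from Gaussian likelihoods with equal priors."""
    preds = np.asarray(predictions)            # shape (n_models, n_obs)
    sse = ((preds - observed) ** 2).sum(axis=1)
    loglik = -0.5 * sse / sigma**2
    w = np.exp(loglik - loglik.max())          # stabilized softmax over models
    return w / w.sum()

obs = np.array([300.0, 310.0, 305.0])          # e.g. SDLR observations, W/m^2
models = [np.array([298.0, 311.0, 304.0]),     # close to the observations
          np.array([280.0, 290.0, 285.0])]     # biased low
w = bma_weights(models, obs, sigma=5.0)
ensemble = w @ np.vstack(models)               # BMA ensemble estimate
```

The ensemble collapses toward the better-fitting model when one candidate clearly dominates, and blends candidates when their fits are comparable — the "balanced results" behavior noted in the abstract.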
NASA Technical Reports Server (NTRS)
Jasinski, Michael F.; Eagleson, Peter S.
1989-01-01
A stochastic-geometric landsurface reflectance model is formulated and tested for the parameterization of spatially variable vegetation and soil at subpixel scales using satellite multispectral images without ground truth. Landscapes are conceptualized as 3-D Lambertian reflecting surfaces consisting of plant canopies, represented by solid geometric figures, superposed on a flat soil background. A computer simulation program is developed to investigate image characteristics at various spatial aggregations representative of satellite observational scales, or pixels. The evolution of the shape and structure of the red-infrared space, or scattergram, of typical semivegetated scenes is investigated by sequentially introducing model variables into the simulation. The analytical moments of the total pixel reflectance, including the mean, variance, spatial covariance, and cross-spectral covariance, are derived in terms of the moments of the individual fractional cover and reflectance components. The moments are applied to the solution of the inverse problem: The estimation of subpixel landscape properties on a pixel-by-pixel basis, given only one multispectral image and limited assumptions on the structure of the landscape. The landsurface reflectance model and inversion technique are tested using actual aerial radiometric data collected over regularly spaced pecan trees, and using both aerial and LANDSAT Thematic Mapper data obtained over discontinuous, randomly spaced conifer canopies in a natural forested watershed. Different amounts of solar backscattered diffuse radiation are assumed and the sensitivity of the estimated landsurface parameters to those amounts is examined.
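The linear-mixture view of a semivegetated pixel underlies the analytical moments described above. A minimal illustrative form for the simplest case only (deterministic fractional cover, independent canopy and soil reflectances — not the paper's full derivation, which also treats spatial and cross-spectral covariances):

```python
def pixel_reflectance_moments(f, mu_canopy, mu_soil, var_canopy, var_soil):
    """First two moments of total pixel reflectance under the linear mixture
    R = f*Rc + (1 - f)*Rs, with deterministic fractional cover f and
    independent canopy/soil reflectances (illustrative form only)."""
    mu = f * mu_canopy + (1 - f) * mu_soil
    var = f**2 * var_canopy + (1 - f)**2 * var_soil
    return mu, var

# Red-band example: dark canopy over a brighter soil, 40% canopy cover
mu, var = pixel_reflectance_moments(0.4, 0.05, 0.20, 1e-4, 4e-4)
```

Inverting this relation pixel-by-pixel — solving for f given observed moments and assumed component reflectances — is the essence of the subpixel estimation the abstract describes.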
Bayesian seismic tomography by parallel interacting Markov chains
NASA Astrophysics Data System (ADS)
Gesret, Alexandrine; Bottero, Alexis; Romary, Thomas; Noble, Mark; Desassis, Nicolas
2014-05-01
The velocity field estimated by first-arrival traveltime tomography is commonly used as a starting point for further seismological, mineralogical, tectonic, or similar analysis. In order to interpret the results quantitatively, the tomography uncertainty values as well as their spatial distribution are required. The estimated velocity model is obtained through inverse modeling by minimizing an objective function that compares observed and computed traveltimes. This step is often performed by gradient-based optimization algorithms. The major drawback of such local optimization schemes, beyond the possibility of being trapped in a local minimum, is that they do not account for the multiple possible solutions of the inverse problem. They are therefore unable to assess the uncertainties linked to the solution. Within a Bayesian (probabilistic) framework, solving the tomography inverse problem aims at estimating the posterior probability density function of the velocity model using a global sampling algorithm. Markov chain Monte Carlo (MCMC) methods are known to produce samples of virtually any distribution. In such a Bayesian inversion, the total number of simulations we can afford is closely tied to the computational cost of the forward model. Although fast algorithms have recently been developed for computing first-arrival traveltimes of seismic waves, completely exploring the posterior distribution of the velocity model is rarely feasible, especially when it is high dimensional and/or multimodal. In the latter case, the chain may even stay stuck in one of the modes. In order to improve the mixing properties of a classical single MCMC chain, we propose to make several Markov chains at different temperatures interact. This method can make efficient use of large CPU clusters without increasing the global computational cost with respect to classical MCMC and is therefore particularly suited for Bayesian inversion.
The exchanges between the chains allow precise sampling of the high-probability zones of the model space while preventing the chains from getting stuck in a probability maximum. This approach thus supplies a robust way to analyze tomography imaging uncertainties. The interacting MCMC approach is illustrated on two synthetic examples of tomography of calibration shots such as those encountered in induced microseismic studies. In the second application, a wavelet-based model parameterization is presented that significantly reduces the dimension of the problem, making the algorithm efficient even for a complex velocity model.
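Interacting chains at different temperatures can be sketched as standard parallel tempering: each chain samples the posterior raised to 1/T, and neighboring chains occasionally swap states so mode-hopping by hot chains propagates down to the cold one. A toy 1-D bimodal example (not the authors' tomography code; the temperature ladder and step sizes are arbitrary choices):

```python
import numpy as np

def parallel_tempering(logpost, n_chains=4, n_steps=5000, step=0.5, seed=1):
    """Metropolis chains at temperatures T_k with pairwise neighbor swaps."""
    rng = np.random.default_rng(seed)
    temps = 2.0 ** np.arange(n_chains)         # T = 1, 2, 4, 8
    x = rng.standard_normal(n_chains)
    lp = np.array([logpost(xi) for xi in x])
    samples = []
    for _ in range(n_steps):
        for k in range(n_chains):              # within-chain Metropolis move
            prop = x[k] + step * np.sqrt(temps[k]) * rng.standard_normal()
            lp_prop = logpost(prop)
            if np.log(rng.random()) < (lp_prop - lp[k]) / temps[k]:
                x[k], lp[k] = prop, lp_prop
        k = rng.integers(n_chains - 1)         # propose swapping a neighbor pair
        d = (1.0 / temps[k] - 1.0 / temps[k + 1]) * (lp[k + 1] - lp[k])
        if np.log(rng.random()) < d:
            x[[k, k + 1]] = x[[k + 1, k]]
            lp[[k, k + 1]] = lp[[k + 1, k]]
        samples.append(x[0])                   # keep only the T=1 (cold) chain
    return np.array(samples)

# Bimodal target with modes at -3 and +3: hard for a single cold chain
logpost = lambda v: np.logaddexp(-0.5 * (v - 3) ** 2, -0.5 * (v + 3) ** 2)
s = parallel_tempering(logpost)
```

The swap acceptance rule preserves each chain's stationary distribution, so the cold chain's samples remain valid posterior draws while inheriting the hot chains' mobility between modes.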
NASA Astrophysics Data System (ADS)
Zhang, M.; Nunes, V. D.; Burbey, T. J.; Borggaard, J.
2012-12-01
More than 1.5 m of subsidence has been observed in Las Vegas Valley since 1935 as a result of groundwater pumping that commenced in 1905 (Bell, 2002). The compaction of the aquifer system has led to several large subsidence bowls and deleterious earth fissures. The highly heterogeneous aquifer system, with its variably thick interbeds, makes predicting the magnitude and location of subsidence extremely difficult. Several numerical groundwater flow models of the Las Vegas basin have been developed previously; however, none of them has been able to accurately simulate the observed subsidence patterns or magnitudes because of inadequate parameterization. To better manage groundwater resources and predict future subsidence, we have updated and developed a more accurate groundwater management model for Las Vegas Valley by developing a new adjoint parameter estimation (APE) package that is used in conjunction with UCODE along with MODFLOW and the SUB (subsidence) and HFB (horizontal flow barrier) packages. The APE package is used with UCODE to automatically identify suitable parameter zonations and inversely calculate parameter values from hydraulic head and subsidence measurements, which are highly sensitive to both elastic (Ske) and inelastic (Skv) storage coefficients. With the advent of InSAR (interferometric synthetic aperture radar), distributed spatial and temporal subsidence measurements can be obtained, which greatly enhance the accuracy of parameter estimation. This automated process can remove user bias and provide a far more accurate and robust parameter zonation distribution. The outcome of this work is a more accurate and powerful tool for managing groundwater resources in Las Vegas Valley.
Lithological and Surface Geometry Joint Inversions Using Multi-Objective Global Optimization Methods
NASA Astrophysics Data System (ADS)
Lelièvre, Peter; Bijani, Rodrigo; Farquharson, Colin
2016-04-01
Geologists' interpretations about the Earth typically involve distinct rock units with contacts (interfaces) between them. In contrast, standard minimum-structure geophysical inversions are performed on meshes of space-filling cells (typically prisms or tetrahedra) and recover smoothly varying physical property distributions that are inconsistent with typical geological interpretations. There are several approaches through which mesh-based minimum-structure geophysical inversion can help recover models with some of the desired characteristics. However, a more effective strategy may be to consider two fundamentally different types of inversions: lithological and surface geometry inversions. A major advantage of these two inversion approaches is that joint inversion of multiple types of geophysical data is greatly simplified. In a lithological inversion, the subsurface is discretized into a mesh and each cell contains a particular rock type. A lithological model must be translated to a physical property model before geophysical data simulation. Each lithology may map to discrete property values or there may be some a priori probability density function associated with the mapping. Through this mapping, lithological inverse problems limit the parameter domain and consequently reduce the non-uniqueness from that presented by standard mesh-based inversions that allow physical property values on continuous ranges. Furthermore, joint inversion is greatly simplified because no additional mathematical coupling measure is required in the objective function to link multiple physical property models. In a surface geometry inversion, the model comprises wireframe surfaces representing contacts between rock units. This parameterization is then fully consistent with Earth models built by geologists, which in 3D typically comprise wireframe contact surfaces of tessellated triangles. 
As for the lithological case, the physical properties of the units lying between the contact surfaces are set to a priori values. The inversion is tasked with calculating the geometry of the contact surfaces instead of some piecewise distribution of properties in a mesh. Again, no coupling measure is required and joint inversion is simplified. Both of these inverse problems involve high nonlinearity and discontinuous or non-obtainable derivatives. They can also involve the existence of multiple minima. Hence, one cannot apply the standard descent-based local minimization methods used to solve typical minimum-structure inversions. Instead, we are applying Pareto multi-objective global optimization (PMOGO) methods, which generate a suite of solutions that minimize multiple objectives (e.g. data misfits and regularization terms) in a Pareto-optimal sense. Providing a suite of models, as opposed to a single model that minimizes a weighted sum of objectives, allows a more complete assessment of the possibilities and avoids the often difficult choice of how to weight each objective. While there are definite advantages to PMOGO joint inversion approaches, the methods come with significantly increased computational requirements. We are researching various strategies to ameliorate these computational issues, including parallelization and problem dimension reduction.
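The Pareto-optimal suite that PMOGO methods return is the set of non-dominated models: no other candidate is at least as good in every objective and strictly better in one. A brute-force sketch of that selection for two objectives, using hypothetical misfit/roughness values:

```python
def pareto_front(points):
    """Return indices of non-dominated points (all objectives minimized)."""
    front = []
    for i, p in enumerate(points):
        dominated = any(
            all(q[j] <= p[j] for j in range(len(p))) and q != p
            for q in points
        )
        if not dominated:
            front.append(i)
    return front

# Each point: (data misfit, model roughness) for one candidate model
models = [(1.0, 5.0), (2.0, 2.0), (4.0, 1.0), (3.0, 3.0), (5.0, 5.0)]
front = pareto_front(models)   # the trade-off suite shown to the interpreter
```

Presenting the whole front, rather than one weighted-sum minimizer, is what lets the interpreter assess the misfit-versus-regularization trade-off after the fact.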
NASA Technical Reports Server (NTRS)
Taconet, O.; Carlson, T.; Bernard, R.; Vidal-Madjar, D.
1986-01-01
Ground measurements of surface-sensible heat flux and soil moisture for a wheat-growing area of Beauce in France were compared with the values derived by inverting two boundary layer models with a surface/vegetation formulation using surface temperature measurements made from NOAA-AVHRR. The results indicated that the trends in the surface heat fluxes and soil moisture observed during the 5 days of the field experiment were effectively captured by the inversion method using the remotely measured radiative temperatures and either of the two boundary layer methods, both of which contain nearly identical vegetation parameterizations described by Taconet et al. (1986). The sensitivity of the results to errors in the initial sounding values or measured surface temperature was tested by varying the initial sounding temperature, dewpoint, and wind speed and the measured surface temperature by amounts corresponding to typical measurement error. In general, the vegetation component was more sensitive to error than the bare soil model.
Manifestation of remote response over the equatorial Pacific in a climate model
NASA Astrophysics Data System (ADS)
Misra, Vasubandhu; Marx, L.
2007-10-01
In this paper we examine the simulations over the tropical Pacific Ocean from long-term simulations of two versions of the Center for Ocean-Land-Atmosphere Studies (COLA) coupled climate model that have different global distributions of inversion clouds. We find that subtle changes made to the numerics of an empirical parameterization of the inversion clouds can result in a significant change in the coupled climate of the equatorial Pacific Ocean. In one coupled simulation of this study we enforce a simple linear spatial filtering of the diagnostic inversion clouds to ameliorate their spatial incoherency (a result of the Gibbs effect), while in the other we conduct no such filtering. Comparison of these two simulations shows that changing the distribution of the shallow inversion clouds prevalent in the subsidence region of the subtropical high over the eastern oceans in this manner has a direct bearing on the surface wind stress through surface pressure modifications. The SST in the warm pool region responds to this modulation of the wind stress, thus affecting the convective activity over the warm pool region and also the large-scale Walker and Hadley circulations. The interannual variability of SST in the eastern equatorial Pacific Ocean is also modulated by this change to the inversion clouds. Consequently, this sensitivity has a bearing on the midlatitude height response. The same set of two experiments was conducted with the respective versions of the atmosphere general circulation model uncoupled from the ocean general circulation model but forced with observed SST, to demonstrate that this sensitivity of the mean climate of the equatorial Pacific Ocean is unique to the coupled climate model, where atmosphere, ocean, and land interact. A strong case is therefore made for developing climate models in a coupled ocean-land-atmosphere framework, as opposed to the usual practice of developing component models independently of each other.
Simulation of low clouds in the Southeast Pacific by the NCEP GFS: sensitivity to vertical mixing
NASA Astrophysics Data System (ADS)
Sun, R.; Moorthi, S.; Xiao, H.; Mechoso, C. R.
2010-12-01
The NCEP Global Forecast System (GFS) model has an important systematic error shared by many other models: stratocumuli are missed over the subtropical eastern oceans. It is shown that this error can be alleviated in the GFS by introducing a consideration of the low-level inversion and making two modifications in the model's representation of vertical mixing. The modifications consist of (a) the elimination of background vertical diffusion above the inversion and (b) the incorporation of a stability parameter based on the cloud-top entrainment instability (CTEI) criterion, which limits the strength of shallow convective mixing across the inversion. A control simulation and three experiments are performed in order to examine both the individual and combined effects of the modifications on the generation of the stratocumulus clouds. Individually, both modifications result in enhanced cloudiness in the Southeast Pacific (SEP) region, although the cloudiness is still low compared to the ISCCP climatology. If the modifications are applied together, however, the total cloudiness produced in the southeast Pacific has realistic values. This nonlinearity arises as the effects of both modifications reinforce each other in reducing the leakage of moisture across the inversion. More moisture is trapped below the inversion than in the control run without modifications, leading to an increase in cloud amount and cloud-top radiative cooling. Then a positive feedback due to enhanced turbulent mixing in the planetary boundary layer by cloud-top radiative cooling leads to and maintains the stratocumulus cover. Although the amount of total cloudiness obtained with both modifications has realistic values, the relative contributions of low, middle, and high layers tend to differ from the observations.
These results demonstrate that it is possible to simulate realistic marine boundary clouds in large-scale models by implementing direct and physically based improvements in the model parameterizations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lundquist, K A
Mesoscale models, such as the Weather Research and Forecasting (WRF) model, are increasingly used for high resolution simulations, particularly in complex terrain, but errors associated with terrain-following coordinates degrade the accuracy of the solution. Use of an alternative Cartesian gridding technique, known as an immersed boundary method (IBM), alleviates coordinate transformation errors and eliminates restrictions on terrain slope which currently limit mesoscale models to slowly varying terrain. In this dissertation, an immersed boundary method is developed for use in numerical weather prediction. Use of the method facilitates explicit resolution of complex terrain, even urban terrain, in the WRF mesoscale model. First, the errors that arise in the WRF model when complex terrain is present are presented. This is accomplished using a scalar advection test case, and comparing the numerical solution to the analytical solution. Results are presented for different orders of advection schemes, grid resolutions and aspect ratios, as well as various degrees of terrain slope. For comparison, results from the same simulation are presented using the IBM. Both two-dimensional and three-dimensional immersed boundary methods are then described, along with details that are specific to the implementation of IBM in the WRF code. Our IBM is capable of imposing both Dirichlet and Neumann boundary conditions. Additionally, a method for coupling atmospheric physics parameterizations at the immersed boundary is presented, making IB methods much more functional in the context of numerical weather prediction models. The two-dimensional IB method is verified through comparisons of solutions for gentle terrain slopes when using IBM and terrain-following grids. The canonical case of flow over a Witch of Agnesi hill provides validation of the basic no-slip and zero gradient boundary conditions.
Specified diurnal heating in a valley, producing anabatic winds, is used to validate the use of flux (non-zero) boundary conditions. This anabatic flow set-up is further coupled to atmospheric physics parameterizations, which calculate surface fluxes, demonstrating that the IBM can be coupled to various land-surface parameterizations in atmospheric models. Additionally, the IB method is extended to three dimensions, using both trilinear and inverse distance weighted interpolations. Results are presented for geostrophic flow over a three-dimensional hill. It is found that while the IB method using trilinear interpolation works well for simple three-dimensional geometries, a more flexible and robust method is needed for extremely complex geometries, as found in three-dimensional urban environments. A second, more flexible, immersed boundary method is devised using inverse distance weighting, and results are compared to the first IBM approach. Additionally, the functionality to nest a domain with resolved complex geometry inside of a parent domain without resolved complex geometry is described. The new IBM approach is used to model urban terrain from Oklahoma City in a one-way nested configuration, where lateral boundary conditions are provided by the parent domain. Finally, the IB method is extended to include wall model parameterizations for rough surfaces. Two possible implementations are presented, one which uses the log law to reconstruct velocities exterior to the solid domain, and one which reconstructs shear stress at the immersed boundary, rather than velocity. These methods are tested on the three-dimensional canonical case of neutral atmospheric boundary layer flow over flat terrain.
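One of the two wall-model implementations reconstructs velocities from the rough-wall log law. A schematic sketch of that idea (the heights, roughness length, and reconstruction pattern below are illustrative assumptions, not the dissertation's exact scheme): a friction velocity is diagnosed from a resolved "image" point above the boundary, then used to set the velocity at a reconstruction point so the discrete profile follows the log law.

```python
import math

def loglaw_velocity(u_star, z, z0, kappa=0.4):
    """Rough-wall log law: u(z) = (u*/kappa) * ln(z / z0)."""
    return (u_star / kappa) * math.log(z / z0)

def reconstruct_velocity(u_image, z_image, z_recon, z0, kappa=0.4):
    """Diagnose u* from a resolved image point, then reconstruct the velocity
    at another height so the discrete profile is log-law consistent."""
    u_star = kappa * u_image / math.log(z_image / z0)  # friction velocity
    return loglaw_velocity(u_star, z_recon, z0, kappa)

# 8 m/s resolved at 10 m above a surface with z0 = 0.1 m; reconstruct at 2 m
u_r = reconstruct_velocity(u_image=8.0, z_image=10.0, z_recon=2.0, z0=0.1)
```

Imposing the log-law profile rather than a no-slip condition is what lets the immersed boundary represent an unresolved rough surface layer.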
Domain-averaged snow depth over complex terrain from flat field measurements
NASA Astrophysics Data System (ADS)
Helbig, Nora; van Herwijnen, Alec
2017-04-01
Snow depth is an important parameter for a variety of coarse-scale models and applications, such as hydrological forecasting. Since high-resolution snow cover models are computationally expensive, simplified snow models are often used. Ground-measured snow depth at single stations provides an opportunity for snow depth data assimilation to improve coarse-scale model forecasts. Snow depth is, however, commonly recorded at so-called flat fields, often in large measurement networks. While these ground measurement networks provide a wealth of information, various studies have questioned the representativity of such flat field snow depth measurements for the surrounding topography. We developed two parameterizations to compute domain-averaged snow depth for coarse model grid cells over complex topography using easy-to-derive topographic parameters. To derive the two parameterizations we performed a scale-dependent analysis for domain sizes ranging from 50 m to 3 km using highly resolved snow depth maps at the peak of winter from two distinct climatic regions in Switzerland and in the Spanish Pyrenees. The first, simpler parameterization uses a commonly applied linear lapse rate. For the second parameterization, we first removed the obvious elevation gradient in mean snow depth, which revealed an additional correlation with the subgrid sky view factor. We evaluated domain-averaged snow depth derived with both parameterizations from nearby flat field measurements against the domain-averaged highly resolved snow depth. This revealed an overall improved performance for the parameterization combining a power-law elevation trend scaled with the subgrid parameterized sky view factor. We therefore suggest the parameterization could be used to assimilate flat field snow depth into coarse-scale snow model frameworks in order to improve coarse-scale snow depth estimates over complex topography.
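The preferred parameterization combines a power-law elevation trend with the subgrid sky view factor. The published coefficients and exact functional form are not reproduced here, so the following is a purely illustrative stand-in with made-up exponents, showing only the shape of such a scaling:

```python
def domain_mean_snow_depth(hs_flat, z_domain, z_flat, svf, a=1.0, b=0.8):
    """Illustrative form (NOT the published fit): scale a flat-field snow
    depth by a power-law elevation trend and the subgrid sky view factor.
    hs_flat  -- measured flat-field snow depth [m]
    z_domain -- mean elevation of the coarse grid cell [m]
    z_flat   -- elevation of the flat-field station [m]
    svf      -- subgrid-parameterized sky view factor in (0, 1]
    a, b     -- hypothetical tuning exponents"""
    return hs_flat * (z_domain / z_flat) ** b * svf ** a

# 1.2 m measured at a 1600 m station; cell at 2400 m with sky view factor 0.9
hs_mean = domain_mean_snow_depth(1.2, 2400.0, 1600.0, 0.9)
```

The structure captures the two effects the abstract identifies: mean snow depth increases with elevation, and more sheltered terrain (lower sky view factor) reduces the domain average relative to an open flat field.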
Selecting an Informative/Discriminating Multivariate Response for Inverse Prediction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thomas, Edward V.; Lewis, John R.; Anderson-Cook, Christine Michaela
2017-07-01
Inverse prediction is important in a variety of scientific and engineering applications, such as predicting the properties/characteristics of an object from multiple measurements obtained from it. Inverse prediction can be accomplished by inverting parameterized forward models that relate the measurements (responses) to the properties/characteristics of interest. Sometimes forward models are computational/science based; but often, forward models are empirically based response surface models obtained by using the results of controlled experimentation. For empirical models, it is important that the experiments provide a sound basis to develop accurate forward models in terms of the properties/characteristics (factors). And while nature dictates the causal relationships between factors and responses, experimenters can control the complexity, accuracy, and precision of forward models constructed via selection of factors, factor levels, and the set of trials that are performed. Recognition of the uncertainty in the estimated forward models leads to an errors-in-variables approach for inverse prediction. The forward models (estimated by experiments or science based) can also be used to analyze how well candidate responses complement one another for inverse prediction over the range of the factor space of interest. Furthermore, one may find that some responses are complementary, redundant, or noninformative. Simple analysis and examples illustrate how an informative and discriminating subset of responses could be selected among candidates in cases where the number of responses that can be acquired during inverse prediction is limited by difficulty, expense, and/or availability of material.
NASA Astrophysics Data System (ADS)
Köpke, Corinna; Irving, James; Elsheikh, Ahmed H.
2018-06-01
Bayesian solutions to geophysical and hydrological inverse problems depend upon a forward model linking subsurface physical properties to measured data, which is typically assumed to be perfectly known in the inversion procedure. However, to make the stochastic solution of the inverse problem computationally tractable using methods such as Markov chain Monte Carlo (MCMC), fast approximations of the forward model are commonly employed. This gives rise to model error, which has the potential to significantly bias posterior statistics if not properly accounted for. Here, we present a new methodology for dealing with the model error arising from the use of approximate forward solvers in Bayesian solutions to hydrogeophysical inverse problems. Our approach is geared towards the common case where this error cannot be (i) effectively characterized through some parametric statistical distribution, or (ii) estimated by interpolating between a small number of computed model-error realizations. To this end, we focus on identification and removal of the model-error component of the residual during MCMC using a projection-based approach, whereby the orthogonal basis employed for the projection is derived in each iteration from the K nearest-neighboring entries in a model-error dictionary. The latter is constructed during the inversion and grows at a specified rate as the iterations proceed. We demonstrate the performance of our technique on the inversion of synthetic crosshole ground-penetrating radar travel-time data considering three different subsurface parameterizations of varying complexity. Synthetic data are generated using the eikonal equation, whereas a straight-ray forward model is assumed for their inversion. In each case, our developed approach enables us to remove posterior bias and obtain a more realistic characterization of uncertainty.
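The model-error removal step projects the MCMC residual onto a basis built from the K nearest dictionary entries and keeps only the orthogonal remainder. A toy sketch of that projection on synthetic 6-D "model errors" (not the authors' implementation; the dictionary here is fabricated so the error direction is known):

```python
import numpy as np

def remove_model_error(residual, dictionary, k=3):
    """Subtract the component of the residual lying in the span of the
    k nearest model-error dictionary entries (projection-based removal)."""
    D = np.asarray(dictionary)
    dist = np.linalg.norm(D - residual, axis=1)
    nearest = D[np.argsort(dist)[:k]]          # k-nearest stored realizations
    Q, _ = np.linalg.qr(nearest.T)             # orthonormal basis for their span
    return residual - Q @ (Q.T @ residual)     # keep only the orthogonal part

# Synthetic setup: model error points along [1, 1, 0, 0, 0, 0]
rng = np.random.default_rng(3)
err_dir = np.array([1.0, 1.0, 0.0, 0.0, 0.0, 0.0])
dictionary = [s * err_dir + 0.01 * rng.standard_normal(6)
              for s in rng.uniform(0.5, 2.5, size=12)]
residual = 2.0 * err_dir + np.array([0.0, 0.0, 0.1, -0.05, 0.0, 0.0])
cleaned = remove_model_error(residual, dictionary, k=3)
```

The large component along the model-error direction is removed, while the part of the residual orthogonal to the dictionary span — interpreted as genuine data misfit — survives to drive the MCMC acceptance.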
Quantifying uncertainties of seismic Bayesian inversion of Northern Great Plains
NASA Astrophysics Data System (ADS)
Gao, C.; Lekic, V.
2017-12-01
Elastic waves excited by earthquakes are the fundamental observations of seismological studies. Seismologists measure information such as travel time, amplitude, and polarization to infer the properties of the earthquake source, seismic wave propagation, and subsurface structure. Across numerous applications, seismic imaging has been able to take advantage of complementary seismic observables to constrain profiles and lateral variations of Earth's elastic properties. Moreover, seismic imaging plays a unique role in multidisciplinary studies of geoscience by providing direct constraints on the unreachable interior of the Earth. Accurate quantification of the uncertainties of inferences made from seismic observations is of paramount importance for interpreting seismic images and testing geological hypotheses. However, such quantification remains challenging and subjective due to the non-linearity and non-uniqueness of the geophysical inverse problem. In this project, we apply a reversible-jump Markov chain Monte Carlo (rjMcMC) algorithm for a transdimensional Bayesian inversion of continental lithosphere structure. Such an inversion allows us to quantify the uncertainties of inversion results by inverting for an ensemble solution. It also yields an adaptive parameterization that enables simultaneous inversion of different elastic properties without imposing strong prior information on the relationship between them. We present retrieved profiles of shear velocity (Vs) and radial anisotropy in the Northern Great Plains using measurements from USArray stations. We use both seismic surface wave dispersion and receiver function data due to their complementary constraints on lithosphere structure. Furthermore, we analyze the uncertainties of both individual and joint inversion of those two data types to quantify the benefit of joint inversion. As an application, we infer the variation of Moho depths and crustal layering across the northern Great Plains.
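A heavily simplified transdimensional sampler in the spirit of rjMcMC is sketched below on a toy 1-D piecewise-constant profile. The birth move draws new layer values from the prior, and the acceptance ratio is reduced to the likelihood ratio; this is a common simplification for illustration, not the full rjMcMC acceptance term.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic 1-D "profile" data: a single step observed with Gaussian noise.
depths = np.linspace(0.0, 1.0, 40)
truth = np.where(depths < 0.5, 1.0, 2.0)
sigma = 0.05
data = truth + rng.normal(scale=sigma, size=depths.size)

K_MAX, V_LO, V_HI = 10, 0.0, 3.0     # prior bounds on interface count and layer values

def predict(interfaces, values):
    """Piecewise-constant profile; interfaces sorted, len(values) == len(interfaces) + 1."""
    return np.asarray(values)[np.searchsorted(interfaces, depths)]

def neg_log_like(interfaces, values):
    r = data - predict(interfaces, values)
    return 0.5 * np.sum((r / sigma) ** 2)

ifaces, vals = [], [1.5]             # start from a single uniform layer
phi = neg_log_like(ifaces, vals)
phi0 = phi
k_samples, pred_sum, n_kept = [], np.zeros_like(depths), 0

n_iter = 4000
for it in range(n_iter):
    new_if, new_v = list(ifaces), list(vals)
    move = rng.choice(["perturb", "birth", "death"])
    if move == "perturb":
        new_v[rng.integers(len(new_v))] += rng.normal(scale=0.1)
    elif move == "birth" and len(new_if) < K_MAX:
        d = rng.uniform(0.0, 1.0)
        pos = int(np.searchsorted(new_if, d))
        new_if.insert(pos, d)
        new_v.insert(pos + 1, rng.uniform(V_LO, V_HI))   # new layer value from the prior
    elif move == "death" and new_if:
        pos = int(rng.integers(len(new_if)))
        new_if.pop(pos)
        new_v.pop(pos + 1)
    new_phi = neg_log_like(new_if, new_v)
    # Simplified acceptance: only the likelihood ratio is kept in this sketch.
    if np.log(rng.uniform()) < phi - new_phi:
        ifaces, vals, phi = new_if, new_v, new_phi
    k_samples.append(len(ifaces))
    if it >= n_iter // 2:            # crude burn-in: average over the second half
        pred_sum += predict(ifaces, vals)
        n_kept += 1

posterior_mean = pred_sum / n_kept
```

The ensemble of sampled models, rather than any single best-fit model, is what supplies the uncertainty estimate; `k_samples` records how the number of interfaces varies across the chain.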
Efficient 3D inversions using the Richards equation
NASA Astrophysics Data System (ADS)
Cockett, Rowan; Heagy, Lindsey J.; Haber, Eldad
2018-07-01
Fluid flow in the vadose zone is governed by the Richards equation; it is parameterized by hydraulic conductivity, which is a nonlinear function of pressure head. Investigations in the vadose zone typically require characterizing distributed hydraulic properties. Water content or pressure head data may include direct measurements made from boreholes. Increasingly, proxy measurements from hydrogeophysics are being used to supply more spatially and temporally dense data sets. Inferring hydraulic parameters from such datasets requires the ability to efficiently solve and optimize the nonlinear, time-domain Richards equation. This is particularly important as the number of parameters to be estimated in a vadose zone inversion continues to grow. In this paper, we describe an efficient technique to invert for distributed hydraulic properties in 1D, 2D, and 3D. Our technique does not store the Jacobian matrix, but rather computes its product with a vector. Existing approaches to Richards equation inversion explicitly calculate the sensitivity matrix using finite differences or automatic differentiation; however, for large-scale problems these methods are constrained by computation and/or memory. Using an implicit sensitivity algorithm enables large-scale inversion problems for any distributed hydraulic parameters in the Richards equation to become tractable on modest computational resources. We provide an open source implementation of our technique based on the SimPEG framework, and show it in practice for a 3D inversion of saturated hydraulic conductivity using water content data through time.
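The matrix-free idea, computing Jacobian-vector products rather than storing the Jacobian, can be sketched with SciPy's LinearOperator. The forward map below is a cheap invented stand-in for a Richards-equation solve; in a real implementation the transpose product would come from an adjoint solve rather than the finite-difference loop used here.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, lsmr

rng = np.random.default_rng(3)
n_param, n_data = 30, 60

# Cheap nonlinear stand-in for the forward solve d = F(m); only the structure matters.
G = rng.normal(size=(n_data, n_param)) / np.sqrt(n_param)
def forward(m):
    return G @ np.tanh(m)

m_star = 0.1 * rng.normal(size=n_param)    # "true" parameters
d_obs = forward(m_star)
m0 = 0.1 * rng.normal(size=n_param)        # starting model

def jacobian_operator(m, eps=1e-6):
    """Matrix-free Jacobian of `forward` at m: products J v and J^T w, no stored J."""
    f0 = forward(m)
    def matvec(v):                         # J v by a directional finite difference
        return (forward(m + eps * v) - f0) / eps
    def rmatvec(w):                        # J^T w; in practice an adjoint solve,
        return np.array([(forward(m + eps * e) - f0) @ w / eps   # here a cheap stand-in
                         for e in np.eye(m.size)])
    return LinearOperator((n_data, m.size), matvec=matvec, rmatvec=rmatvec)

# One Gauss-Newton step: min_dm || J dm - r || solved iteratively with LSMR.
r0 = d_obs - forward(m0)
dm = lsmr(jacobian_operator(m0), r0, atol=1e-12, btol=1e-12)[0]
m1 = m0 + dm
```

The Krylov solver only ever asks the operator for products with vectors, so the memory cost stays at a few vectors regardless of how many distributed parameters are estimated.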
Shi, Jie; Thompson, Paul M.; Gutman, Boris; Wang, Yalin
2013-01-01
In this paper, we develop a new automated surface registration system based on surface conformal parameterization by holomorphic 1-forms, inverse consistent surface fluid registration, and multivariate tensor-based morphometry (mTBM). First, we conformally map a surface onto a planar rectangle space with holomorphic 1-forms. Second, we compute a surface conformal representation by combining its local conformal factor and mean curvature and linearly scale the dynamic range of the conformal representation to form the feature image of the surface. Third, we align the feature image with a chosen template image via the fluid image registration algorithm, which has been extended into curvilinear coordinates to adjust for the distortion introduced by surface parameterization. The inverse consistent image registration algorithm is also incorporated in the system to jointly estimate the forward and inverse transformations between the study and template images. This alignment induces a corresponding deformation on the surface. We tested the system on the Alzheimer's Disease Neuroimaging Initiative (ADNI) baseline dataset to study AD symptoms in the hippocampus. In our system, by modeling a hippocampus as a 3D parametric surface, we nonlinearly registered each surface with a selected template surface. Then we used mTBM to analyze the morphometry difference between diagnostic groups. Experimental results show that the new system has better performance than two publicly available subcortical surface registration tools: FIRST and SPHARM. We also analyzed the genetic influence of the Apolipoprotein E ε4 allele (ApoE4), which is considered the most prevalent risk factor for AD. Our work successfully detected statistically significant differences between ApoE4 carriers and non-carriers in both patients with mild cognitive impairment (MCI) and healthy control subjects.
The results show evidence that the ApoE genotype may be associated with accelerated brain atrophy, so our work provides a new MRI analysis tool that may help presymptomatic AD research. PMID:23587689
NASA Astrophysics Data System (ADS)
Gallovic, Frantisek; Cirella, Antonella; Plicka, Vladimir; Piatanesi, Alessio
2013-04-01
On 14 June 2008, at 23:43 UTC, the border of Iwate and Miyagi prefectures was hit by an Mw 7 reverse-fault crustal earthquake. The event is known for the largest ground acceleration observed to date (~4g), which was recorded at station IWTH25. We analyze observed strong motion data with the objective of imaging the event's rupture process and the associated uncertainties. Two different slip inversion approaches are used, the difference between the two methods lying only in the parameterization of the source model. To minimize mismodeling of the propagation effects, we use a crustal model obtained by full waveform inversion of aftershock records in the frequency range 0.05-0.3 Hz. In the first method, based on a linear formulation, the parameters are represented by samples of slip velocity functions along the (finely discretized) fault in a time window spanning the whole rupture duration. Such a source description is very general, with no prior constraint on the nucleation point, rupture velocity, or shape of the slip velocity function. Thus the inversion can resolve very general (unexpected) features of the rupture evolution, such as multiple rupturing, rupture-propagation reversals, etc. On the other hand, due to the relatively large number of model parameters, the inversion result is highly non-unique, with the possibility of obtaining a biased solution. The second method is a non-linear global inversion technique, in which each point on the fault can slip only once, following a prescribed functional form of the source time function. We invert simultaneously for peak slip velocity, slip angle, rise time, and rupture time, allowing a given range of variability for each kinematic model parameter. For this reason, unlike the linear inversion approach, the rupture process is retrieved with a smaller number of parameters and is more constrained, with proper control on the allowed range of parameter values.
In order to test the resolution and reliability of the retrieved models, we present a thorough analysis of the performance of the two inversion approaches. In fact, depending on the inversion strategy and the intrinsic 'non-uniqueness' of the inverse problem, the final slip maps and distribution of rupture onset times are generally different, sometimes even incompatible with each other. Great emphasis is devoted to the uncertainty estimate of both techniques. Thus we do not compare only the best fitting models, but their 'compatibility' in terms of the uncertainty limits.
NASA Astrophysics Data System (ADS)
Shankar, Praveen
The performance of nonlinear control algorithms such as feedback linearization and dynamic inversion is heavily dependent on the fidelity of the dynamic model being inverted. Incomplete or incorrect knowledge of the dynamics results in reduced performance and may lead to instability. Augmenting the baseline controller with approximators that utilize a parameterization structure adapted online reduces the effect of this error between the design model and the actual dynamics. However, currently existing parameterizations employ a fixed set of basis functions that do not guarantee arbitrary tracking error performance. To address this problem, we develop a self-organizing parameterization structure that is proven to be stable and can guarantee arbitrary tracking error performance. The training algorithm to grow the network and adapt the parameters is derived from Lyapunov theory. In addition to growing the network of basis functions, a pruning strategy is incorporated to keep the size of the network as small as possible. This algorithm is implemented on a high-performance flight vehicle, the F-15 military aircraft. The baseline dynamic inversion controller is augmented with a Self-Organizing Radial Basis Function Network (SORBFN) to minimize the effect of the inversion error, which may arise from imperfect modeling, approximate inversion, or sudden changes in aircraft dynamics. The dynamic inversion controller is simulated for different situations, including control surface failures, modeling errors, and external disturbances, with and without the adaptive network. A performance measure of maximum tracking error is specified for both controllers a priori. Excellent tracking error minimization to a pre-specified level was achieved with the adaptive approximation based controller, while the baseline dynamic inversion controller failed to meet this performance specification.
The performance of the SORBFN-based controller is also compared to a fixed RBF network based adaptive controller. While the fixed RBF network controller, tuned to compensate for control surface failures, fails to achieve the same performance under modeling uncertainty and disturbances, the SORBFN achieves good tracking convergence under all error conditions.
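A minimal grow-and-prune RBF approximator conveys the self-organizing idea; it uses a simple LMS weight update rather than the Lyapunov-derived training law of the thesis, and all thresholds and widths below are illustrative.

```python
import numpy as np

class SelfOrganizingRBF:
    """Minimal grow-and-prune Gaussian RBF approximator (illustrative sketch only)."""

    def __init__(self, width=0.3, err_tol=0.05, min_dist=0.2, w_tol=1e-3):
        self.centers, self.weights = [], []
        self.width, self.err_tol = width, err_tol
        self.min_dist, self.w_tol = min_dist, w_tol

    def _phi(self, x):
        return np.exp(-((x - np.array(self.centers)) ** 2) / self.width ** 2)

    def predict(self, x):
        return float(np.dot(self.weights, self._phi(x))) if self.centers else 0.0

    def update(self, x, y, lr=0.2):
        err = y - self.predict(x)
        nearest = min((abs(x - c) for c in self.centers), default=np.inf)
        if abs(err) > self.err_tol and nearest > self.min_dist:
            # Grow: place a new unit at x whose weight absorbs the current error.
            self.centers.append(x)
            self.weights.append(err)
        elif self.centers:
            # Adapt existing weights (LMS), then prune negligible units.
            w = np.array(self.weights) + lr * err * self._phi(x)
            keep = np.abs(w) > self.w_tol
            self.centers = [c for c, k in zip(self.centers, keep) if k]
            self.weights = list(w[keep])

net = SelfOrganizingRBF()
rng = np.random.default_rng(6)
target = lambda x: np.sin(3.0 * x)
for _ in range(5000):
    xi = rng.uniform(-2.0, 2.0)
    net.update(xi, target(xi))

xs = np.linspace(-2.0, 2.0, 81)
rmse = float(np.sqrt(np.mean([(net.predict(x) - target(x)) ** 2 for x in xs])))
```

Growth is gated by both the tracking error and the distance to the nearest existing center, so the network stops adding units once the input space is covered; pruning keeps it from retaining units whose weights have decayed to insignificance.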
NASA Astrophysics Data System (ADS)
You, Y.; Wang, S.; Yang, Q.; Shen, M.; Chen, G.
2017-12-01
The alpine river water environment on the plateau (such as the Tibetan Plateau, China) is a key indicator of water security and environmental security in China. Due to the complex terrain and varied surface eco-environments, it is very difficult to monitor the water environment over the complex land surface of the plateau. The increasing availability of remote sensing techniques with appropriate spatiotemporal resolutions, broad coverage, and low costs allows for effective monitoring of the river water environment on the plateau, particularly in remote and inaccessible areas that lack in situ observations. In this study, we propose a remote sensing-based monitoring model that uses multi-platform remote sensing data to monitor the alpine river environment. Parameterization methodologies based on satellite remote sensing data and field observations are proposed for monitoring water environmental parameters (including chlorophyll-a concentration (Chl-a), water turbidity (WT) or water clarity (SD), total nitrogen (TN), total phosphorus (TP), and total organic carbon (TOC)) over China's southwest highland rivers, such as the Brahmaputra. First, because most sensors do not collect multiple observations of a target in a single pass, data from multiple orbits or acquisition times may be used, and varying atmospheric and irradiance effects must be reconciled; we therefore developed multi-sensor data correction and atmospheric correction techniques for the various types of satellite data. Second, we built an inversion spectral database derived from long-term remote sensing data and field sampling data. Then we developed a high-precision inversion model for the southwest highland rivers, backed by the inversion spectral database, using techniques of multi-sensor remote sensing information optimization and collaboration.
Third, taking the middle reaches of the Brahmaputra River as the study area, we validated the key water environmental parameters and further improved the inversion model. The results indicate that the proposed model can retrieve alpine water environmental parameters well and can improve the monitoring and early-warning capability for the alpine river water environment in the future.
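Empirical retrieval models of the kind described are often simple regressions calibrated against field match-ups; a hypothetical log-log Chl-a retrieval from a blue/green band ratio might be built like this (all coefficients and data are synthetic, not values from the study):

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical in-situ match-ups: measured Chl-a vs. a blue/green reflectance band ratio.
chl_true = 10 ** rng.uniform(-1.0, 1.0, size=30)            # 0.1 - 10 mg m^-3
ratio = 1.8 * chl_true ** -0.35 * np.exp(rng.normal(scale=0.03, size=30))

# Fit the common empirical form log10(Chl-a) = a + b * log10(band ratio).
b_fit, a_fit = np.polyfit(np.log10(ratio), np.log10(chl_true), 1)

def retrieve_chl(band_ratio):
    """Empirical retrieval of Chl-a (mg m^-3) from a blue/green band ratio."""
    return 10 ** (a_fit + b_fit * np.log10(band_ratio))
```

In practice such regressions are fit per sensor after the atmospheric and multi-sensor corrections described above, since the band ratio must be computed from surface (not top-of-atmosphere) reflectance.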
NASA Astrophysics Data System (ADS)
Grosvenor, D. P.; Wood, R.
2012-12-01
As part of one of the Climate Process Teams (CPTs) we have been testing the implementation of a new cloud parameterization into the CAM5 and AM3 GCMs. The CLUBB parameterization replaces all but the deep convection cloud scheme and uses an innovative PDF based approach to diagnose cloud water content and turbulence. We have evaluated the base models and the CLUBB parameterization in the SE Pacific stratocumulus region using a suite of satellite observation metrics including: Liquid Water Path (LWP) measurements from AMSRE; cloud fractions from CloudSat/CALIPSO; droplet concentrations (Nd) and Cloud Top Temperatures from MODIS; CloudSat precipitation; and relationships between Estimated Inversion Strength (calculated from AMSRE SSTs, Cloud Top Temperatures from MODIS and ECMWF re-analysis fields) and cloud fraction. This region has the advantage of an abundance of in-situ aircraft observations taken during the VOCALS campaign, which is facilitating the diagnosis of the model problems highlighted by the model evaluation. These data have also recently been used to demonstrate the reliability of MODIS Nd estimates. The satellite data need to be filtered to ensure accurate retrievals, and we have been careful to apply the same screenings to the model fields. For example, scenes with high cloud fractions and with output times near to the satellite overpass times can be extracted from the model for a fair comparison with MODIS Nd estimates. To facilitate this we have been supplied with instantaneous model output, since screening would not be possible based on time-averaged data. We also have COSP satellite simulator output, which allows a fairer comparison between satellite and model. For example, COSP cloud fraction is based upon the detection threshold of the satellite instrument in question. These COSP fields are also used for the model output filtering just described.
The results have revealed problems with both the base models and the versions with the CLUBB parameterization. The CAM5 model produces realistic near-coast cloud cover, but too little further west in the stratocumulus to cumulus regions. The implementation of CLUBB has vastly improved this situation with cloud cover that is very similar to that observed. CLUBB also improves the Nd field in CAM5 by producing realistic near-coast increases and by removing high Nd values associated with the detrainment of droplets by cumulus clouds. AM3 has a lack of stratocumulus cloud near the South American coast and has much lower droplet concentrations than observed. VOCALS measurements showed that sulfate mass loadings were generally too high in both base models, whereas CCN concentrations were too low. This suggests a problem with the mass distribution partitioning of sulfate that is being investigated. Diurnal and seasonal comparisons have been very illuminating. CLUBB produces very little diurnal variation in LWP, but large variations in precipitation rates. This is likely to point to problems that are now being addressed by the modeling part of the CPT team, creating an iterative workflow process between the model developers and the model testers, which should facilitate efficient parameterization improvement. We will report on the latest developments of this process.
NASA Technical Reports Server (NTRS)
Oreopoulos, Lazaros; Lee, Dongmin; Norris, Peter; Yuan, Tianle
2011-01-01
It has been shown that the details of how cloud fraction overlap is treated in GCMs have a substantial impact on shortwave and longwave fluxes. Because cloud condensate is also horizontally heterogeneous at GCM grid scales, another aspect of cloud overlap should in principle also be assessed, namely the vertical overlap of hydrometeor distributions. This type of overlap is usually examined in terms of rank correlations, i.e., linear correlations between hydrometeor amount ranks of the overlapping parts of cloud layers at specific separation distances. The cloud fraction overlap parameter and the rank correlation of hydrometeor amounts can both be expressed as inverse exponential functions of separation distance characterized by their respective decorrelation lengths (e-folding distances). Larger decorrelation lengths mean that hydrometeor fractions and probability distribution functions have high levels of vertical alignment. An analysis of CloudSat and CALIPSO data reveals that the two aspects of cloud overlap are related and their respective decorrelation lengths have a distinct dependence on latitude that can be parameterized and included in a GCM. In our presentation we will contrast the Cloud Radiative Effect (CRE) of the GEOS-5 atmospheric GCM (AGCM) when the observationally-based parameterization of decorrelation lengths is used to represent overlap versus the simpler cases of maximum-random overlap and globally constant decorrelation lengths. The effects of specific overlap representations will be examined for both diagnostic and interactive radiation runs in GEOS-5 and comparisons will be made with observed CREs from CERES and CloudSat (2B-FLXHR product). Since the radiative effects of overlap depend on the cloud property distributions of the AGCM, the availability of two different cloud schemes in GEOS-5 will give us the opportunity to assess a wide range of potential cloud overlap consequences on the model's climate.
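The inverse-exponential overlap parameterization can be written down directly: two layers are blended between maximum and random overlap with a weight exp(-Δz/L), where L is the decorrelation length. The decorrelation length and cloud fractions below are illustrative values, not figures from the study.

```python
import numpy as np

def overlap_alpha(dz_km, decorrelation_length_km):
    """Inverse-exponential overlap parameter: 1 = maximum overlap, 0 = random overlap."""
    return np.exp(-dz_km / decorrelation_length_km)

def combined_cloud_fraction(c1, c2, dz_km, L_km):
    """Total projected cloud fraction of two layers, blending maximum and random overlap."""
    c_max = max(c1, c2)                 # maximum overlap
    c_rand = c1 + c2 - c1 * c2          # random overlap
    a = overlap_alpha(dz_km, L_km)
    return a * c_max + (1.0 - a) * c_rand

# Nearby layers stay aligned (near-maximum overlap); distant layers overlap randomly.
near = combined_cloud_fraction(0.3, 0.4, dz_km=0.25, L_km=2.0)
far = combined_cloud_fraction(0.3, 0.4, dz_km=10.0, L_km=2.0)
```

The same e-folding form applies to the rank correlation of hydrometeor amounts, with its own (generally different) decorrelation length.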
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Guang; Fan, Jiwen; Xu, Kuan-Man
2015-06-01
Arakawa and Wu (2013, hereafter referred to as AW13) recently developed a formal approach to a unified parameterization of atmospheric convection for high-resolution numerical models. The work is based on ideas formulated by Arakawa et al. (2011). It lays the foundation for a new parameterization pathway in the era of high-resolution numerical modeling of the atmosphere. The key parameter in this approach is the convective cloud fraction σ. In conventional parameterization, it is assumed that σ << 1. This assumption is no longer valid when the horizontal resolution of numerical models approaches a few to a few tens of kilometers, since in such situations the convective cloud fraction can be comparable to unity. Therefore, they argue that the conventional approach to parameterizing convective transport must include a factor 1 - σ in order to unify the parameterization for the full range of model resolutions so that it is scale-aware and valid for large convective cloud fractions. While AW13's approach provides important guidance for future convective parameterization development, in this note we intend to show that the conventional approach already has this scale-awareness factor 1 - σ built in, although not recognized for the last forty years. Therefore, it should work well even in situations of large convective cloud fractions in high-resolution numerical models.
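The algebra behind the note's claim can be checked numerically: for a top-hat cloud/environment decomposition, h_c - h_bar = (1 - σ)(h_c - h_env), so a mass-flux expression written against the grid mean already carries the 1 - σ factor. The flux form and numbers below are a schematic illustration with quiescent environmental air, not AW13's full derivation.

```python
def flux_conventional(sigma, w_c, h_c, h_env):
    """Conventional mass-flux form: sigma * w_c * (h_c - h_bar), h_bar the grid mean."""
    h_bar = sigma * h_c + (1.0 - sigma) * h_env
    return sigma * w_c * (h_c - h_bar)

def flux_explicit(sigma, w_c, h_c, h_env):
    """The same flux with the 1 - sigma scale-awareness factor written out explicitly."""
    return sigma * (1.0 - sigma) * w_c * (h_c - h_env)
```

The two expressions agree for any σ in [0, 1), which is the note's point: the conventional formulation does not need an extra 1 - σ appended, because the use of the grid mean supplies it implicitly.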
NASA Astrophysics Data System (ADS)
Xu, Kuan-Man; Cheng, Anning
2014-05-01
A high-resolution cloud-resolving model (CRM) embedded in a general circulation model (GCM) is an attractive alternative for climate modeling because it replaces all traditional cloud parameterizations and explicitly simulates cloud physical processes in each grid column of the GCM. Such an approach is called the "Multiscale Modeling Framework" (MMF). MMF still needs to parameterize the subgrid-scale (SGS) processes associated with clouds and large turbulent eddies, because circulations associated with the planetary boundary layer (PBL) and in-cloud turbulence are unresolved by CRMs with horizontal grid sizes on the order of a few kilometers. A third-order turbulence closure (IPHOC) has been implemented in the CRM component of the super-parameterized Community Atmosphere Model (SPCAM). IPHOC is used to predict (or diagnose) fractional cloudiness and the variability of temperature and water vapor at scales that are not resolved on the CRM's grid. This model has produced promising results, especially for low-level cloud climatology, seasonal variations, and diurnal variations (Cheng and Xu 2011, 2013a, b; Xu and Cheng 2013a, b). Because of the enormous computational cost of SPCAM-IPHOC, which is about 400 times that of the conventional CAM, we decided to bypass the CRM and implement IPHOC directly in CAM version 5 (CAM5). IPHOC replaces the PBL/stratocumulus, shallow convection, and cloud macrophysics parameterizations in CAM5. Since there are large discrepancies in the spatial and temporal scales between the CRM and CAM5, the IPHOC used in CAM5 has to be modified from that used in SPCAM. In particular, we diagnose all second- and third-order moments except for the fluxes. These prognostic and diagnostic moments are used to select a double-Gaussian probability density function to describe the SGS variability. We also incorporate a diagnostic PBL height parameterization to represent the strong inversion above the PBL.
The goal of this study is to compare the simulated climatology from these three models (CAM5, CAM5-IPHOC, and SPCAM-IPHOC), with emphasis on low-level clouds and precipitation. Detailed comparisons of scatter diagrams among the monthly-mean low-level cloudiness, PBL height, surface relative humidity, and lower-tropospheric stability (LTS) reveal the relative strengths and weaknesses of the three models for five coastal low-cloud regions. Observations from CloudSat and CALIPSO and the ECMWF Interim reanalysis are used as the reference for the comparisons. We found that the standard CAM5 underestimates cloudiness and produces small cloud fractions at low PBL heights, which contradicts observations. CAM5-IPHOC tends to overestimate low clouds, but its ranges of LTS and PBL height variations are the most realistic. SPCAM-IPHOC produces the most realistic results, relatively consistent from one region to another. Further comparisons with other atmospheric environmental variables will help reveal the causes of model deficiencies, so that SPCAM-IPHOC results can provide guidance to the other two models.
Real-Time Minimization of Tracking Error for Aircraft Systems
NASA Technical Reports Server (NTRS)
Garud, Sumedha; Kaneshige, John T.; Krishnakumar, Kalmanje S.; Kulkarni, Nilesh V.; Burken, John
2013-01-01
This technology presents a novel, stable, discrete-time adaptive law for flight control in a direct adaptive control (DAC) framework. In the absence of errors, the original control design is tuned for optimal performance. Adaptive control works toward achieving nominal performance whenever the design has modeling uncertainties/errors or when the vehicle suffers a substantial flight configuration change. The baseline controller uses dynamic inversion with proportional-integral augmentation. On-line adaptation of this control law is achieved by providing a parameterized augmentation signal to the dynamic inversion block. The parameters of this augmentation signal are updated to achieve the nominal desired error dynamics. If the system senses that at least one aircraft component is experiencing an excursion and the return of this component value toward its reference value is not proceeding according to the expected controller characteristics, then the neural network (NN) modeling of aircraft operation may be changed.
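The structure, dynamic inversion of a design model plus a parameterized adaptive augmentation with an online update, can be sketched on a toy scalar plant. The dynamics, basis functions, and gains below are invented, and the update is a generic Lyapunov-motivated gradient rule, not the specific discrete-time law of the work described.

```python
import numpy as np

# Toy 1-D plant x_dot = f(x) + u; the controller only knows an approximate model of f.
f_true = lambda x: -x + 0.8 * np.sin(2.0 * x)      # actual dynamics (unknown mismatch)
f_model = lambda x: -x                             # design model used for inversion

dt, K, gamma = 0.01, 4.0, 20.0
basis = lambda x: np.array([1.0, x, np.sin(2.0 * x)])  # basis for the augmentation signal
theta = np.zeros(3)                                    # adapted parameters

x, e_hist = 0.0, []
for k in range(4000):
    t = k * dt
    x_ref, xdot_ref = np.sin(t), np.cos(t)         # command and its derivative
    e = x_ref - x
    v = xdot_ref + K * e                           # desired error dynamics: e_dot = -K e
    u = v - f_model(x) - theta @ basis(x)          # dynamic inversion + augmentation
    theta -= dt * gamma * e * basis(x)             # Lyapunov-motivated gradient update
    x += dt * (f_true(x) + u)                      # Euler step of the plant
    e_hist.append(abs(e))

final_err = float(np.mean(e_hist[-500:]))
```

With V = e^2/2 + (theta - theta*)^T (theta - theta*)/(2*gamma), this update gives V_dot = -K e^2 in continuous time, so the augmentation absorbs the f_true - f_model mismatch and the tracking error decays.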
NASA Astrophysics Data System (ADS)
Eliçabe, Guillermo E.
2013-09-01
In this work, an exact scattering model for a system of clusters of spherical particles, based on the Rayleigh-Gans approximation, is parameterized in such a way that it can be solved in inverse form using Tikhonov regularization to obtain the morphological parameters of the clusters: the average number of particles per cluster, the size of the primary spherical units that form the cluster, and the discrete distance distribution function, from which the z-average square radius of gyration of the system of clusters is obtained. The methodology is validated through a series of simulated and experimental examples of x-ray and light scattering, which show that it works satisfactorily in non-ideal situations: error in the measurements, error in the model, and several types of non-idealities present in the experimental cases.
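Tikhonov regularization of an ill-conditioned linear model can be sketched via the standard augmented least-squares system; the smooth kernel and "distribution" below are synthetic stand-ins for the actual Rayleigh-Gans model, chosen only to make the conditioning problem visible.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 40

# Ill-conditioned linear problem b = A x + noise, with a smooth (nearly rank-deficient) kernel.
s = np.linspace(0.1, 4.0, n)
A = np.exp(-np.outer(s, np.linspace(0.5, 3.0, n)))
x_true = np.exp(-((np.linspace(0, 1, n) - 0.5) ** 2) / 0.02)   # smooth "distribution"
b = A @ x_true + rng.normal(scale=1e-4, size=n)

def tikhonov(A, b, lam):
    """Solve min ||A x - b||^2 + lam^2 ||x||^2 via the augmented least-squares system."""
    m = A.shape[1]
    A_aug = np.vstack([A, lam * np.eye(m)])
    b_aug = np.concatenate([b, np.zeros(m)])
    return np.linalg.lstsq(A_aug, b_aug, rcond=None)[0]

x_naive = np.linalg.lstsq(A, b, rcond=None)[0]    # unregularized: noise-amplified
x_reg = tikhonov(A, b, lam=1e-3)
err_naive = np.linalg.norm(x_naive - x_true)
err_reg = np.linalg.norm(x_reg - x_true)
```

The damping term lam^2 ||x||^2 suppresses the small-singular-value components that otherwise amplify measurement noise, which is exactly why the inversion tolerates error in both the measurements and the model.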
X-38 Application of Dynamic Inversion Flight Control
NASA Technical Reports Server (NTRS)
Wacker, Roger; Munday, Steve; Merkle, Scott
2001-01-01
This paper summarizes the application of a nonlinear dynamic inversion (DI) flight control system (FCS) to an autonomous flight test vehicle in NASA's X-38 Project, a predecessor to the International Space Station (ISS) Crew Return Vehicle (CRV). Honeywell's Multi-Application Control-H (MACH) is a parameterized FCS design architecture including both model-based DI rate-compensation and classical P+I command-tracking. MACH was adopted by X-38 in order to shorten the design cycle time for different vehicle shapes and flight envelopes and evolving aerodynamic databases. Specific design issues and analysis results are presented for the application of MACH to the 3rd free flight (FF3) of X-38 Vehicle 132 (V132). This B-52 drop test, occurring on March 30, 2000, represents the first flight test of MACH and one of the first few known applications of DI in the primary FCS of an autonomous flight test vehicle.
An inverse approach to determining spatially varying arterial compliance using ultrasound imaging
NASA Astrophysics Data System (ADS)
Mcgarry, Matthew; Li, Ronny; Apostolakis, Iason; Nauleau, Pierre; Konofagou, Elisa E.
2016-08-01
The mechanical properties of arteries are implicated in a wide variety of cardiovascular diseases, many of which are expected to involve a strong spatial variation in properties that can be depicted by diagnostic imaging. A pulse wave inverse problem (PWIP) is presented, which can produce spatially resolved estimates of vessel compliance from ultrasound measurements of the vessel wall displacements. The 1D equations governing pulse wave propagation in a flexible tube are parameterized by the spatially varying properties, discrete cosine transform components of the inlet pressure boundary conditions, a viscous loss constant and a resistance outlet boundary condition. Gradient descent optimization is used to fit displacements from the model to the measured data by updating the model parameters. Inversion of simulated data showed that the PWIP can accurately recover the correct compliance distribution and inlet pressure under realistic conditions, even under high simulated measurement noise conditions. Silicone phantoms with known compliance contrast were imaged with a clinical ultrasound system. The PWIP produced spatially and quantitatively accurate maps of the phantom compliance compared to independent static property estimates, and the known locations of stiff inclusions (which were as small as 7 mm). The PWIP is necessary for these phantom experiments as the spatiotemporal resolution, measurement noise, and compliance contrast do not allow accurate tracking of the pulse wave velocity using traditional approaches (e.g. 50% upstroke markers). Results from simulations indicate reflections generated from material interfaces may negatively affect wave velocity estimates, whereas these reflections are accounted for in the PWIP and do not cause problems.
A general rough-surface inversion algorithm: Theory and application to SAR data
NASA Technical Reports Server (NTRS)
Moghaddam, M.
1993-01-01
Rough-surface inversion has significant applications in interpretation of SAR data obtained over bare soil surfaces and agricultural lands. Due to the sparsity of data and the large pixel size in SAR applications, it is not feasible to carry out inversions based on numerical scattering models. The alternative is to use parameter estimation techniques based on approximate analytical or empirical models. Hence, there are two issues to be addressed, namely, what model to choose and what estimation algorithm to apply. Here, a small perturbation model (SPM) is used to express the backscattering coefficients of the rough surface in terms of three surface parameters. The algorithm used to estimate these parameters is based on a nonlinear least-squares criterion. Least-squares optimization methods are widely used in estimation theory, but the distinguishing factor for SAR applications is incorporating the stochastic nature of both the unknown parameters and the data into the formulation, which will be discussed in detail. The algorithm is tested with synthetic data, and several Newton-type least-squares minimization methods are discussed to compare their convergence characteristics. Finally, the algorithm is applied to multifrequency polarimetric SAR data obtained over bare soil and agricultural fields. Results will be shown and compared to ground-truth measurements obtained from these areas. The strength of this general approach to inversion of SAR data is that it can be easily modified for use with any scattering model without changing any of the inversion steps. Note also that, for the same reason, it is not limited to inversion of rough surfaces and can be applied to any parameterized scattering process.
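The estimation step, nonlinear least squares over a few surface parameters, can be sketched with SciPy. The functional form below is a hypothetical smooth stand-in rather than the actual SPM backscatter expressions, and the frequencies, bounds, and starting point are invented for the example.

```python
import numpy as np
from scipy.optimize import least_squares

# Toy stand-in for a surface scattering model: backscatter (dB) at three radar
# frequencies as a nonlinear function of p = (rms height, correlation length, moisture).
freqs = np.array([1.25, 5.3, 9.6])   # GHz (roughly L-, C-, X-band)

def backscatter_model(p, f):
    h, l, mv = p
    # Hypothetical smooth dependence, NOT the actual SPM expressions.
    return 10.0 * np.log10(mv * (f * h) ** 2 / (1.0 + (f * l) ** 2))

rng = np.random.default_rng(5)
p_true = np.array([0.8, 2.0, 0.25])
data = backscatter_model(p_true, freqs) + rng.normal(scale=0.2, size=freqs.size)

def residuals(p):
    return backscatter_model(p, freqs) - data

p0 = np.array([0.5, 1.0, 0.1])
fit = least_squares(residuals, x0=p0,
                    bounds=([0.01, 0.1, 0.01], [5.0, 10.0, 0.6]),
                    method="trf")    # a trust-region (Gauss-Newton-type) method
```

Because the inversion steps only ever call `residuals`, swapping in a different scattering model means replacing `backscatter_model` and nothing else, which mirrors the modularity the abstract emphasizes.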
Endalamaw, Abraham; Bolton, W. Robert; Young-Robertson, Jessica M.; ...
2017-09-14
Modeling hydrological processes in the Alaskan sub-arctic is challenging because of the extreme spatial heterogeneity in soil properties and vegetation communities. Nevertheless, modeling and predicting hydrological processes is critical in this region due to its vulnerability to the effects of climate change. Coarse-spatial-resolution datasets used in land surface modeling pose a new challenge in simulating the spatially distributed and basin-integrated processes since these datasets do not adequately represent the small-scale hydrological, thermal, and ecological heterogeneity. The goal of this study is to improve the prediction capacity of mesoscale to large-scale hydrological models by introducing a small-scale parameterization scheme, which better represents the spatial heterogeneity of soil properties and vegetation cover in the Alaskan sub-arctic. The small-scale parameterization schemes are derived from observations and a sub-grid parameterization method in the two contrasting sub-basins of the Caribou Poker Creek Research Watershed (CPCRW) in Interior Alaska: one nearly permafrost-free (LowP) sub-basin and one permafrost-dominated (HighP) sub-basin. The sub-grid parameterization method used in the small-scale parameterization scheme is derived from the watershed topography. We found that observed soil thermal and hydraulic properties – including the distribution of permafrost and vegetation cover heterogeneity – are better represented in the sub-grid parameterization method than the coarse-resolution datasets. Parameters derived from the coarse-resolution datasets and from the sub-grid parameterization method are implemented into the variable infiltration capacity (VIC) mesoscale hydrological model to simulate runoff, evapotranspiration (ET), and soil moisture in the two sub-basins of the CPCRW. 
Simulated hydrographs based on the small-scale parameterization capture most of the peak and low flows, with similar accuracy in both sub-basins, compared to simulated hydrographs based on the coarse-resolution datasets. On average, the small-scale parameterization scheme improves the total runoff simulation by up to 50 % in the LowP sub-basin and by up to 10 % in the HighP sub-basin relative to the large-scale parameterization. This study shows that the proposed sub-grid parameterization method can be used to improve the performance of mesoscale hydrological models in the Alaskan sub-arctic watersheds.
NASA Astrophysics Data System (ADS)
Müller, Silvia; Brockmann, Jan Martin; Schuh, Wolf-Dieter
2015-04-01
The ocean's dynamic topography as the difference between the sea surface and the geoid reflects many characteristics of the general ocean circulation. Consequently, it provides valuable information for evaluating or tuning ocean circulation models. The sea surface is directly observed by satellite radar altimetry while the geoid cannot be observed directly. The satellite-based gravity field determination requires different measurement principles (satellite-to-satellite tracking (e.g. GRACE), satellite-gravity-gradiometry (GOCE)). In addition, hydrographic measurements (salinity, temperature and pressure; near-surface velocities) provide information on the dynamic topography. The observation types have different representations and spatial as well as temporal resolutions. Therefore, the determination of the dynamic topography is not straightforward. Furthermore, the integration of the dynamic topography into ocean circulation models requires not only the dynamic topography itself but also its inverse covariance matrix on the ocean model grid. We developed a rigorous combination method in which the dynamic topography is parameterized in space as well as in time. The altimetric sea surface heights are expressed as a sum of geoid heights represented in terms of spherical harmonics and the dynamic topography parameterized by a finite element method which can be directly related to the particular ocean model grid. Besides the difficult task of combining altimetry data with a gravity field model, a major aspect is the consistent combination of satellite data and in-situ observations. The particular characteristics and the signal content of the different observations must be adequately considered requiring the introduction of auxiliary parameters. Within our model the individual observation groups are combined in terms of normal equations considering their full covariance information; i.e. 
a rigorous variance/covariance propagation from the original measurements to the final product is accomplished. In conclusion, the developed integrated approach allows for estimating the dynamic topography and its inverse covariance matrix on arbitrary grids in space and time. The inverse covariance matrix contains the appropriate weights for model-data misfits in least-squares ocean model inversions. The focus of this study is on the North Atlantic Ocean. We will present the conceptual design and dynamic topography estimates based on time variable data from seven satellite altimeter missions (Jason-1, Jason-2, Topex/Poseidon, Envisat, ERS-2, GFO, Cryosat2) in combination with the latest GOCE gravity field model and in-situ data from the Argo floats and near-surface drifting buoys.
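The combination of observation groups "in terms of normal equations considering their full covariance information" can be sketched generically: each group contributes A_i^T C_i^-1 A_i to the combined normal matrix N, and N itself is the inverse covariance of the estimate. The matrices below are hypothetical toy data, not the altimetry/gravity/in-situ system:

```python
import numpy as np

def combine_normal_equations(groups):
    """Stack observation groups (A_i, y_i, C_i) into combined normal
    equations; returns the estimate x and its inverse covariance
    N = sum_i A_i^T C_i^-1 A_i (a generic least-squares sketch)."""
    N, n = 0, 0
    for A, y, C in groups:
        W = np.linalg.inv(C)        # weight matrix = inverse covariance
        N = N + A.T @ W @ A
        n = n + A.T @ W @ y
    return np.linalg.solve(N, n), N

# two hypothetical groups observing the same scalar with unequal accuracy
groups = [(np.array([[1.0]]), np.array([3.0]), np.array([[1.0]])),
          (np.array([[1.0]]), np.array([5.0]), np.array([[4.0]]))]
x_hat, N = combine_normal_equations(groups)
```

The returned N is exactly the quantity the abstract highlights: the appropriate weight for model-data misfits in a downstream least-squares ocean model inversion.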
A test of the chromosomal theory of ecotypic speciation in Anopheles gambiae
Manoukis, Nicholas C.; Powell, Jeffrey R.; Touré, Mahamoudou B.; Sacko, Adama; Edillo, Frances E.; Coulibaly, Mamadou B.; Traoré, Sekou F.; Taylor, Charles E.; Besansky, Nora J.
2008-01-01
The role of chromosomal inversions in speciation has long been of interest to evolutionists. Recent quantitative modeling has stimulated reconsideration of previous conceptual models for chromosomal speciation. Anopheles gambiae, the most important vector of human malaria, carries abundant chromosomal inversion polymorphism nonrandomly associated with ecotypes that mate assortatively. Here, we consider the potential role of paracentric inversions in promoting speciation in A. gambiae via “ecotypification,” a term that refers to differentiation arising from local adaptation. In particular, we focus on the Bamako form, an ecotype characterized by low inversion polymorphism and fixation of an inversion, 2Rj, that is very rare or absent in all other forms of A. gambiae. The Bamako form has a restricted distribution by the upper Niger River and its tributaries that is associated with a distinctive type of larval habitat, laterite rock pools, hypothesized to be its optimal breeding site. We first present computer simulations to investigate whether the population dynamics of A. gambiae are consistent with chromosomal speciation by ecotypification. The models are parameterized using field observations on the various forms of A. gambiae that exist in Mali, West Africa. We then report on the distribution of larvae of this species collected from rock pools and more characteristic breeding sites nearby. Both the simulations and field observations support the thesis that speciation by ecotypification is occurring, or has occurred, prompting consideration of Bamako as an independent species. PMID:18287019
Evaluation of scale-aware subgrid mesoscale eddy models in a global eddy-rich model
NASA Astrophysics Data System (ADS)
Pearson, Brodie; Fox-Kemper, Baylor; Bachman, Scott; Bryan, Frank
2017-07-01
Two parameterizations for horizontal mixing of momentum and tracers by subgrid mesoscale eddies are implemented in a high-resolution global ocean model. These parameterizations follow the techniques of large-eddy simulation (LES). The theory underlying one parameterization (2D Leith due to Leith, 1996) is that of enstrophy cascades in two-dimensional turbulence, while the other (QG Leith) is designed for potential enstrophy cascades in quasi-geostrophic turbulence. Simulations using each of these parameterizations are compared with a control simulation using standard biharmonic horizontal mixing. Simulations using the 2D Leith and QG Leith parameterizations are more realistic than those using biharmonic mixing. In particular, the 2D Leith and QG Leith simulations have more energy in resolved mesoscale eddies, have a spectral slope more consistent with turbulence theory (an inertial enstrophy or potential enstrophy cascade), have bottom drag and vertical viscosity as the primary sinks of energy instead of lateral friction, and have isoneutral parameterized mesoscale tracer transport. The parameterization choice also affects mass transports, but the impact varies regionally in magnitude and sign.
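The 2D Leith idea can be sketched in a few lines: the eddy viscosity scales with the magnitude of the vorticity gradient and the cube of the grid spacing. The coefficient and normalization below are illustrative assumptions, not those of the ocean-model implementation:

```python
import numpy as np

def leith_viscosity(omega, dx, c_leith=1.0):
    """2D Leith eddy viscosity, nu = (c * dx)**3 * |grad(omega)|, where
    omega is relative vorticity on a uniform grid of spacing dx.
    Coefficient and normalization are illustrative."""
    d_dy, d_dx = np.gradient(omega, dx)          # vorticity gradient
    return (c_leith * dx) ** 3 * np.hypot(d_dx, d_dy)
```

By construction the viscosity vanishes where the vorticity field is uniform and grows where resolved gradients steepen, which is what makes the scheme scale-aware.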
Anisotropic Shear Dispersion Parameterization for Mesoscale Eddy Transport
NASA Astrophysics Data System (ADS)
Reckinger, S. J.; Fox-Kemper, B.
2016-02-01
The effects of mesoscale eddies are universally treated isotropically in general circulation models. However, the processes that the parameterization approximates, such as shear dispersion, typically have strongly anisotropic characteristics. The Gent-McWilliams/Redi mesoscale eddy parameterization is extended for anisotropy and tested using 1-degree Community Earth System Model (CESM) simulations. The sensitivity of the model to anisotropy includes a reduction of temperature and salinity biases, a deepening of the Southern Ocean mixed-layer depth, and improved ventilation of biogeochemical tracers, particularly in oxygen minimum zones. The parameterization is further extended to include the effects of unresolved shear dispersion, which sets the strength and direction of anisotropy. The shear dispersion parameterization agrees with drifter observations in the spatial distribution of diffusivity, and with high-resolution model diagnostics in the distribution of eddy flux orientation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Na; Zhang, Peng; Kang, Wei
Multiscale simulations of fluids such as blood represent a major computational challenge of coupling the disparate spatiotemporal scales between molecular and macroscopic transport phenomena characterizing such complex fluids. In this paper, a coarse-grained (CG) particle model is developed for simulating blood flow by modifying the Morse potential, traditionally used in Molecular Dynamics for modeling vibrating structures. The modified Morse potential is parameterized with effective mass scales for reproducing blood viscous flow properties, including density, pressure, viscosity, compressibility and characteristic flow dynamics of human blood plasma fluid. The parameterization follows a standard inverse-problem approach in which the optimal micro parameters are systematically searched, by gradually decoupling loosely correlated parameter spaces, to match the macro physical quantities of viscous blood flow. The predictions of this particle-based multiscale model compare favorably to classic viscous flow solutions such as Counter-Poiseuille and Couette flows. It demonstrates that such a coarse-grained particle model can be applied to replicate the dynamics of viscous blood flow, with the advantage of bridging the gap between macroscopic flow scales and the cellular scales characterizing blood flow that continuum-based models fail to handle adequately.
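For reference, the standard (unmodified) Morse pair potential mentioned above has a simple closed form; the paper's modified, mass-rescaled version and its fitted coefficients are not reproduced here:

```python
import math

def morse_potential(r, d_e=1.0, a=1.0, r0=1.0):
    """Standard Morse pair potential
    U(r) = D_e * (1 - exp(-a * (r - r0)))**2 - D_e,
    with well depth D_e, stiffness a, and equilibrium separation r0.
    Default coefficients are placeholders, not fitted blood-flow values."""
    return d_e * (1.0 - math.exp(-a * (r - r0))) ** 2 - d_e
```

The well depth, stiffness, and equilibrium distance are the kinds of micro parameters that an inverse-problem search would tune against macroscopic flow targets.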
Raney Distributions and Random Matrix Theory
NASA Astrophysics Data System (ADS)
Forrester, Peter J.; Liu, Dang-Zheng
2015-03-01
Recent works have shown that the family of probability distributions with moments given by the Fuss-Catalan numbers permit a simple parameterized form for their density. We extend this result to the Raney distribution which by definition has its moments given by a generalization of the Fuss-Catalan numbers. Such computations begin with an algebraic equation satisfied by the Stieltjes transform, which we show can be derived from the linear differential equation satisfied by the characteristic polynomial of random matrix realizations of the Raney distribution. For the Fuss-Catalan distribution, an equilibrium problem characterizing the density is identified. The Stieltjes transform for the limiting spectral density of the singular values squared of the matrix product formed from inverse standard Gaussian matrices, and standard Gaussian matrices, is shown to satisfy a variant of the algebraic equation relating to the Raney distribution. Supported on , we show that it too permits a simple functional form upon the introduction of an appropriate choice of parameterization. As an application, the leading asymptotic form of the density as the endpoints of the support are approached is computed, and is shown to have some universal features.
NASA Astrophysics Data System (ADS)
Xu, Gang; Li, Ming; Mourrain, Bernard; Rabczuk, Timon; Xu, Jinlan; Bordas, Stéphane P. A.
2018-01-01
In this paper, we propose a general framework for constructing IGA-suitable planar B-spline parameterizations from given complex CAD boundaries consisting of a set of B-spline curves. Instead of forming the computational domain by a simple boundary, planar domains with high genus and more complex boundary curves are considered. Firstly, some pre-processing operations including Bézier extraction and subdivision are performed on each boundary curve in order to generate a high-quality planar parameterization; then a robust planar domain partition framework is proposed to construct high-quality patch-meshing results with few singularities from the discrete boundary formed by connecting the end points of the resulting boundary segments. After the topology information generation of quadrilateral decomposition, the optimal placement of interior Bézier curves corresponding to the interior edges of the quadrangulation is constructed by a global optimization method to achieve a patch-partition with high quality. Finally, after the imposition of C1=G1-continuity constraints on the interface of neighboring Bézier patches with respect to each quad in the quadrangulation, the high-quality Bézier patch parameterization is obtained by a C1-constrained local optimization method to achieve uniform and orthogonal iso-parametric structures while keeping the continuity conditions between patches. The efficiency and robustness of the proposed method are demonstrated by several examples which are compared to results obtained by the skeleton-based parameterization approach.
Evaluation of the site effect with Heuristic Methods
NASA Astrophysics Data System (ADS)
Torres, N. N.; Ortiz-Aleman, C.
2017-12-01
The seismic site response in an area depends mainly on the local geological and topographical conditions. Estimation of variations in ground motion can lead to significant contributions to seismic hazard assessment, in order to reduce human and economic losses. Site response estimation can be posed as a parameterized inversion problem, which allows separating source and path effects. The generalized inversion (Field and Jacob, 1995) represents one of the alternative methods to estimate the local seismic response, which involves solving a strongly non-linear multiparametric problem. In this work, local seismic response was estimated using global optimization methods (Genetic Algorithms and Simulated Annealing), which allowed us to increase the range of explored solutions in a nonlinear search, as compared to other conventional linear methods. Using the VEOX Network velocity records, collected from August 2007 to March 2009, the source, path and site parameters corresponding to the S-wave amplitude spectra of the velocity records are estimated. The inverted parameters resulting from this simultaneous inversion approach show excellent agreement, not only in terms of fit between observed and calculated spectra, but also when compared to previous work by several authors.
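A minimal simulated-annealing loop of the kind used for such nonlinear spectral fits can be sketched as follows; the toy attenuated-spectrum model, move size, and cooling schedule are illustrative assumptions, not the study's actual parameterization:

```python
import math
import random

def simulated_annealing(cost, x0, step, t0=1.0, cooling=0.95,
                        n_iter=2000, seed=1):
    """Minimal simulated annealing with geometric cooling; keeps the best
    state visited. Move scale and schedule are illustrative."""
    rng = random.Random(seed)
    x, fx = list(x0), cost(x0)
    best, fbest = list(x), fx
    t = t0
    for _ in range(n_iter):
        cand = [xi + step * rng.uniform(-1, 1) for xi in x]
        fc = cost(cand)
        # accept downhill moves always, uphill with Boltzmann probability
        if fc < fx or rng.random() < math.exp(-(fc - fx) / max(t, 1e-12)):
            x, fx = cand, fc
            if fc < fbest:
                best, fbest = list(cand), fc
        t *= cooling
    return best

# toy attenuated spectrum A(f) = s0 * exp(-pi * f * t_star); recover (s0, t_star)
freqs = [0.5 * k for k in range(1, 20)]
obs = [3.0 * math.exp(-math.pi * f * 0.05) for f in freqs]
cost = lambda p: sum((p[0] * math.exp(-math.pi * f * p[1]) - o) ** 2
                     for f, o in zip(freqs, obs))
est = simulated_annealing(cost, [1.0, 0.2], step=0.1)
```

The attraction of such global searches, as the abstract notes, is that they explore a wider range of candidate solutions than linearized inversion.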
Design by Dragging: An Interface for Creative Forward and Inverse Design with Simulation Ensembles
Coffey, Dane; Lin, Chi-Lun; Erdman, Arthur G.; Keefe, Daniel F.
2014-01-01
We present an interface for exploring large design spaces as encountered in simulation-based engineering, design of visual effects, and other tasks that require tuning parameters of computationally-intensive simulations and visually evaluating results. The goal is to enable a style of design with simulations that feels as-direct-as-possible so users can concentrate on creative design tasks. The approach integrates forward design via direct manipulation of simulation inputs (e.g., geometric properties, applied forces) in the same visual space with inverse design via “tugging” and reshaping simulation outputs (e.g., scalar fields from finite element analysis (FEA) or computational fluid dynamics (CFD)). The interface includes algorithms for interpreting the intent of users’ drag operations relative to parameterized models, morphing arbitrary scalar fields output from FEA and CFD simulations, and in-place interactive ensemble visualization. The inverse design strategy can be extended to use multi-touch input in combination with an as-rigid-as-possible shape manipulation to support rich visual queries. The potential of this new design approach is confirmed via two applications: medical device engineering of a vacuum-assisted biopsy device and visual effects design using a physically based flame simulation. PMID:24051845
NASA Astrophysics Data System (ADS)
Janik, Tomasz; Środa, Piotr; Czuba, Wojciech; Lysynchuk, Dmytro
2016-12-01
The interpretation of seismic refraction and wide angle reflection data usually involves the creation of a velocity model based on an inverse or forward modelling of the travel times of crustal and mantle phases using the ray theory approach. The modelling codes differ in terms of model parameterization, data used for modelling, regularization of the result, etc. It is helpful to know the capabilities, advantages and limitations of the code used compared to others. This work compares some popular 2D seismic modelling codes using the dataset collected along the seismic wide-angle profile DOBRE-4, where quite peculiar/uncommon reflected phases were observed in the wavefield. The 505 km long profile was realized in southern Ukraine in 2009, using 13 shot points and 230 recording stations. Double PmP phases with a different reduced time (7.5-11 s) and a different apparent velocity, intersecting each other, are observed in the seismic wavefield. This is the most striking feature of the data. They are interpreted as reflections from strongly dipping Moho segments with an opposite dip. Two steps were used for the modelling. In the previous work by Starostenko et al. (2013), the trial-and-error forward model based on refracted and reflected phases (SEIS83 code) was published. The interesting feature is the high-amplitude (8-17 km) variability of the Moho depth in the form of downward and upward bends. This model is compared with results from other seismic inversion methods: the first arrivals tomography package FAST based on first arrivals; the JIVE3D code, which can also use later refracted arrivals and reflections; and the forward and inversion code RAYINVR using both refracted and reflected phases. Modelling with all the codes tested showed substantial variability of the Moho depth along the DOBRE-4 profile. However, SEIS83 and RAYINVR packages seem to give the most coincident results.
NASA Astrophysics Data System (ADS)
Irving, J.; Koepke, C.; Elsheikh, A. H.
2017-12-01
Bayesian solutions to geophysical and hydrological inverse problems are dependent upon a forward process model linking subsurface parameters to measured data, which is typically assumed to be known perfectly in the inversion procedure. However, in order to make the stochastic solution of the inverse problem computationally tractable using, for example, Markov-chain-Monte-Carlo (MCMC) methods, fast approximations of the forward model are commonly employed. This introduces model error into the problem, which has the potential to significantly bias posterior statistics and hamper data integration efforts if not properly accounted for. Here, we present a new methodology for addressing the issue of model error in Bayesian solutions to hydrogeophysical inverse problems that is geared towards the common case where these errors cannot be effectively characterized globally through some parametric statistical distribution or locally based on interpolation between a small number of computed realizations. Rather than focusing on the construction of a global or local error model, we instead work towards identification of the model-error component of the residual through a projection-based approach. In this regard, pairs of approximate and detailed model runs are stored in a dictionary that grows at a specified rate during the MCMC inversion procedure. At each iteration, a local model-error basis is constructed for the current test set of model parameters using the K-nearest neighbour entries in the dictionary, which is then used to separate the model error from the other error sources before computing the likelihood of the proposed set of model parameters. We demonstrate the performance of our technique on the inversion of synthetic crosshole ground-penetrating radar traveltime data for three different subsurface parameterizations of varying complexity. 
The synthetic data are generated using the eikonal equation, whereas a straight-ray forward model is assumed in the inversion procedure. In each case, the developed model-error approach makes it possible to remove posterior bias and to obtain a more realistic characterization of uncertainty.
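The projection-based separation of model error can be sketched as follows: pairs of approximate and detailed runs provide local error vectors, the K nearest dictionary entries in parameter space form a basis, and the component of the residual lying in that basis is subtracted before the likelihood is evaluated. The function below is a generic sketch of that idea, not the authors' implementation:

```python
import numpy as np

def remove_model_error(residual, params, dictionary, k=3):
    """Subtract the model-error component of a residual using a local basis
    built from the k nearest dictionary entries in parameter space.

    dictionary entries are (theta, error) pairs, where
    error = detailed_model(theta) - approximate_model(theta)."""
    thetas = np.array([t for t, _ in dictionary])
    errors = np.array([e for _, e in dictionary])
    dist = np.linalg.norm(thetas - np.asarray(params), axis=1)
    idx = np.argsort(dist)[:k]                 # K nearest neighbours
    Q, _ = np.linalg.qr(errors[idx].T)         # orthonormal local error basis
    return residual - Q @ (Q.T @ residual)     # project out model-error part
```

Any residual component outside the local error basis (e.g. measurement noise) passes through unchanged, so only the identified model-error part is removed from the likelihood computation.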
Lane, John W.; Day-Lewis, Frederick D.; Versteeg, Roelof J.; Casey, Clifton C.
2004-01-01
Crosswell radar methods can be used to dynamically image ground-water flow and mass transport associated with tracer tests, hydraulic tests, and natural physical processes, for improved characterization of preferential flow paths and complex aquifer heterogeneity. Unfortunately, because the raypath coverage of the interwell region is limited by the borehole geometry, the tomographic inverse problem is typically underdetermined, and tomograms may contain artifacts such as spurious blurring or streaking that confuse interpretation. We implement object-based inversion (using a constrained, non-linear, least-squares algorithm) to improve results from pixel-based inversion approaches that utilize regularization criteria, such as damping or smoothness. Our approach requires pre- and post-injection travel-time data. Parameterization of the image plane comprises a small number of objects rather than a large number of pixels, resulting in an overdetermined problem that reduces the need for prior information. The nature and geometry of the objects are based on hydrologic insight into aquifer characteristics, the nature of the experiment, and the planned use of the geophysical results. The object-based inversion is demonstrated using synthetic and crosswell radar field data acquired during vegetable-oil injection experiments at a site in Fridley, Minnesota. The region where oil has displaced ground water is discretized as a stack of rectangles of variable horizontal extents. The inversion provides the geometry of the affected region and an estimate of the radar slowness change for each rectangle. Applying petrophysical models to these results and porosity from neutron logs, we estimate the vegetable-oil emulsion saturation in various layers. Using synthetic- and field-data examples, object-based inversion is shown to be an effective strategy for inverting crosswell radar tomography data acquired to monitor the emplacement of vegetable-oil emulsions.
A principal advantage of object-based inversion is that it yields images that hydrologists and engineers can easily interpret and use for model calibration.
NASA Astrophysics Data System (ADS)
Lim, Kyo-Sun Sunny; Lim, Jong-Myoung; Shin, Hyeyum Hailey; Hong, Jinkyu; Ji, Young-Yong; Lee, Wanno
2018-06-01
A substantial over-prediction bias at low-to-moderate wind speeds in the Weather Research and Forecasting (WRF) model has been reported in previous studies. Low-level wind fields play an important role in dispersion of air pollutants, including radionuclides, in a high-resolution WRF framework. By implementing two subgrid-scale orography parameterizations (Jimenez and Dudhia in J Appl Meteorol Climatol 51:300-316, 2012; Mass and Ovens in WRF model physics: problems, solutions and a new paradigm for progress. Preprints, 2010 WRF Users' Workshop, NCAR, Boulder, Colo. http://www.mmm.ucar.edu/wrf/users/workshops/WS2010/presentations/session%204/4-1_WRFworkshop2010Final.pdf, 2010), we compared the performance of the two parameterizations and sought to enhance the forecast skill of low-level wind fields over the central western part of South Korea. Even though both subgrid-scale orography parameterizations significantly alleviated the positive bias in 10-m wind speed, the parameterization by Jimenez and Dudhia revealed a better forecast skill in wind speed under our modeling configuration. Implementation of the subgrid-scale orography parameterizations in the model did not affect the forecast skills in other meteorological fields including 10-m wind direction. Our study also highlights a discrepancy in the definition of the "10-m" wind between model physics parameterizations and observations, which can cause overestimated winds in model simulations. The overestimation was larger in stable conditions than in unstable conditions, indicating that the weak diurnal cycle in the model could be attributed to the representation error.
NASA Astrophysics Data System (ADS)
Xie, Xin
Microphysics and convection parameterizations are two key components in a climate model to simulate realistic climatology and variability of cloud distribution and the cycles of energy and water. When a model has varying grid size or simulations have to be run with different resolutions, scale-aware parameterization is desirable so that we do not have to tune model parameters tailored to a particular grid size. The subgrid variability of cloud hydrometeors is known to impact microphysics processes in climate models and is found to depend strongly on spatial scale. A scale-aware liquid cloud subgrid variability parameterization is derived and implemented in the Community Earth System Model (CESM) in this study using long-term radar-based ground measurements from the Atmospheric Radiation Measurement (ARM) program. When used in the default CESM1 with the finite-volume dynamic core where a constant liquid inhomogeneity parameter was assumed, the newly developed parameterization reduces the cloud inhomogeneity in high latitudes and increases it in low latitudes. This is due both to the smaller grid size in high latitudes and the larger grid size in low latitudes in the longitude-latitude grid setting of CESM, as well as to the variation of the stability of the atmosphere. The single column model and general circulation model (GCM) sensitivity experiments show that the new parameterization increases the cloud liquid water path in polar regions and decreases it in low latitudes. The current CESM1 simulation suffers from both the Pacific double-ITCZ precipitation bias and a weak Madden-Julian oscillation (MJO). Previous studies show that convective parameterization with multiple plumes may have the capability to alleviate such biases in a more uniform and physical way. A multiple-plume mass flux convective parameterization is used in the Community Atmospheric Model (CAM) to investigate the sensitivity of MJO simulations.
We show that MJO simulation is sensitive to entrainment rate specification. We found that shallow plumes can generate and sustain the MJO propagation in the model.
NASA Astrophysics Data System (ADS)
Lu, Yang; Stehly, Laurent; Paul, Anne; AlpArray Working Group
2018-05-01
Taking advantage of the large number of seismic stations installed in Europe, in particular in the greater Alpine region with the AlpArray experiment, we derive a new high-resolution 3-D shear-wave velocity model of the European crust and uppermost mantle from ambient noise tomography. The correlation of up to four years of continuous vertical-component seismic recordings from 1293 broadband stations (10° W-35° E, 30° N-75° N) provides Rayleigh wave group velocity dispersion data in the period band 5-150 s at more than 0.8 million virtual source-receiver pairs. Two-dimensional Rayleigh wave group velocity maps are estimated using adaptive parameterization to accommodate the strong heterogeneity of path coverage. A probabilistic 3-D shear-wave velocity model, including probability densities for the depth of layer boundaries and S-wave velocity values, is obtained by non-linear Bayesian inversion. A weighted average of the probabilistic model is then used as starting model for the linear inversion step, providing the final Vs model. The resulting S-wave velocity model and Moho depth are validated by comparison with previous geophysical studies. Although surface-wave tomography is weakly sensitive to layer boundaries, vertical cross-sections through our Vs model and the associated probability of presence of interfaces display striking similarities with reference controlled-source (CSS) and receiver-function sections across the Alpine belt. Our model even provides new structural information such as a ˜8 km Moho jump along the CSS ECORS-CROP profile that was not imaged by reflection data due to poor penetration across a heterogeneous upper crust. Our probabilistic and final shear wave velocity models have the potential to become new reference models of the European crust, both for crustal structure probing and geophysical studies including waveform modeling or full waveform inversion.
Bayesian Inversion of 2D Models from Airborne Transient EM Data
NASA Astrophysics Data System (ADS)
Blatter, D. B.; Key, K.; Ray, A.
2016-12-01
The inherent non-uniqueness in most geophysical inverse problems leads to an infinite number of Earth models that fit observed data to within an adequate tolerance. To resolve this ambiguity, traditional inversion methods based on optimization techniques such as the Gauss-Newton and conjugate gradient methods rely on an additional regularization constraint on the properties that an acceptable model can possess, such as having minimal roughness. While allowing such an inversion scheme to converge on a solution, regularization makes it difficult to estimate the uncertainty associated with the model parameters. This is because regularization biases the inversion process toward certain models that satisfy the regularization constraint and away from others that don't, even when both may suitably fit the data. By contrast, a Bayesian inversion framework aims to produce not a single `most acceptable' model but an estimate of the posterior likelihood of the model parameters, given the observed data. In this work, we develop a 2D Bayesian framework for the inversion of transient electromagnetic (TEM) data. Our method relies on a reversible-jump Markov Chain Monte Carlo (RJ-MCMC) Bayesian inverse method with parallel tempering. Previous gradient-based inversion work in this area used a spatially constrained scheme wherein individual (1D) soundings were inverted together and non-uniqueness was tackled by using lateral and vertical smoothness constraints. By contrast, our work uses a 2D model space of Voronoi cells whose parameterization (including number of cells) is fully data-driven. To make the problem work practically, we approximate the forward solution for each TEM sounding using a local 1D approximation where the model is obtained from the 2D model by retrieving a vertical profile through the Voronoi cells. 
The implicit parsimony of the Bayesian inversion process leads to the simplest models that adequately explain the data, obviating the need for explicit smoothness constraints. In addition, credible intervals in model space are directly obtained, resolving some of the uncertainty introduced by regularization. An example application shows how the method can be used to quantify the uncertainty in airborne EM soundings for imaging subglacial brine channels and groundwater systems.
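As a much-reduced illustration of the Bayesian sampling machinery described above, the sketch below runs a fixed-dimension Metropolis sampler on a single hypothetical half-space parameter. The forward function, noise level, and prior bounds are invented for the example; the actual method is trans-dimensional (RJ-MCMC over Voronoi cells) with parallel tempering.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical forward model: a decay curve controlled by one resistivity rho.
# This is an illustrative stand-in, not a real TEM kernel.
def forward(rho, t):
    return rho ** -1.5 * t ** -0.5

t = np.linspace(0.1, 1.0, 20)
rho_true = 50.0
sigma = 0.01 * forward(rho_true, t).mean()          # synthetic noise level
d_obs = forward(rho_true, t) + rng.normal(0.0, sigma, t.size)

def log_likelihood(rho):
    r = d_obs - forward(rho, t)
    return -0.5 * np.sum((r / sigma) ** 2)

# Metropolis sampling of m = log10(rho), uniform prior on [0, 3].
n_steps, step = 5000, 0.02
m = np.log10(rho_true) + 0.3                        # start away from the truth
logL = log_likelihood(10 ** m)
chain = np.empty(n_steps)
for i in range(n_steps):
    m_prop = m + rng.normal(0.0, step)
    if 0.0 <= m_prop <= 3.0:
        logL_prop = log_likelihood(10 ** m_prop)
        if np.log(rng.uniform()) < logL_prop - logL:
            m, logL = m_prop, logL_prop
    chain[i] = m

rho_post = 10 ** chain[1000:]                       # discard burn-in
print(rho_post.mean(), rho_post.std())
```

The posterior ensemble, rather than a single optimized model, is what supplies the credible intervals mentioned in the abstract.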
NASA Astrophysics Data System (ADS)
Imvitthaya, Chomchid; Honda, Kiyoshi; Lertlum, Surat; Tangtham, Nipon
2011-01-01
In this paper, we present the results of net primary production (NPP) modeling of teak (Tectona grandis Lin F.), an important species in tropical deciduous forests. The biome-biogeochemical cycles (Biome-BGC) model was calibrated to estimate NPP through an inverse modeling approach. A genetic algorithm (GA) was linked with Biome-BGC to determine the optimal ecophysiological model parameters. Biome-BGC was calibrated by adjusting the ecophysiological parameters until the simulated LAI fit the satellite LAI (SPOT-Vegetation), and the quality of the best fit confirmed the accuracy of the GA-derived ecophysiological parameters. The modeled NPP, using the GA-optimized parameters as input, was evaluated against daily NPP derived from the MODIS satellite and against annual field data in northern Thailand. The results showed that NPP obtained using the optimized ecophysiological parameters was more accurate than that obtained using default literature parameterization, mainly because the optimized parameters reduced the model's systematic underestimation. These Biome-BGC results can be effectively applied to teak forests in tropical areas. The study proposes a more effective method of using a GA to determine ecophysiological parameters at the site level and represents a first step toward analyzing the carbon budget of teak plantations at the regional scale.
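The GA calibration loop described above can be sketched with a toy real-valued genetic algorithm. The two-parameter canopy model and the synthetic "satellite" LAI series below are invented stand-ins for Biome-BGC and the SPOT-Vegetation product:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical 2-parameter canopy model: growth rate and maximum LAI.
def simulate_lai(params, t):
    rate, lai_max = params
    return lai_max * (1.0 - np.exp(-rate * t))

t = np.linspace(0.0, 10.0, 40)
lai_sat = simulate_lai((0.6, 4.0), t) + rng.normal(0.0, 0.05, t.size)  # synthetic "satellite" LAI

def fitness(params):
    return -np.mean((simulate_lai(params, t) - lai_sat) ** 2)  # higher is better

lo, hi = np.array([0.05, 1.0]), np.array([2.0, 8.0])           # search bounds
pop = rng.uniform(lo, hi, size=(40, 2))
for gen in range(60):
    f = np.array([fitness(p) for p in pop])
    parents = pop[np.argsort(f)[::-1][:20]]                    # truncation selection
    kids = []
    for _ in range(20):
        a, b = parents[rng.integers(20, size=2)]
        w = rng.uniform(size=2)
        child = w * a + (1 - w) * b                            # blend crossover
        child += rng.normal(0.0, 0.02, 2) * (hi - lo)          # mutation
        kids.append(np.clip(child, lo, hi))
    pop = np.vstack([parents, kids])                           # elitist replacement

best = pop[np.argmax([fitness(p) for p in pop])]
print(best)
```

The best individual recovers parameters close to those used to generate the synthetic observations, mirroring the "best fitness" check reported in the abstract.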
Multiscale 2D Inversions of Active-source First-arrival Times in Taiwan
NASA Astrophysics Data System (ADS)
Lin, Y. P.; Zhao, L.; Hung, S. H.
2015-12-01
In this study, we make use of the active-source records collected by the TAIGER (TAiwan Integrated GEodynamics Research) project in 2008 at nearly 1400 locations on the island of Taiwan and the surrounding ocean bottom. We manually picked the first-arrival times from the waveform records to obtain a set of highly accurate P-wave traveltimes. Among the 1400 receivers, more than 1000 were deployed along four almost linear cross-island profiles with inter-seismometer spacing down to 200 m. This ground-truth dataset provides strong constraints on the structure between the exactly known active sources and the densely distributed receivers, which can be used to calibrate the seismic structure of the upper crust in Taiwan. In this study, we use this dataset to image the two-dimensional P-wave structure along the four linear profiles. A wavelet parameterization of the model is adopted to achieve an objective and data-adaptive multiscale resolution of the 2D structures. Rigorous estimates of resolution length were also computed to quantify the spatial resolution of the tomographic inversions. The resulting 2D models yield first-arrival time predictions that are in excellent agreement with the observations. The seismic structures along the 2D profiles display strong lateral variations (up to 80% relative to the regional average), with more realistic amplitudes of velocity perturbations and spatial patterns consistent with the geological zonation of Taiwan.
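The core of any first-arrival inversion is a linear relation between slowness and traveltime. A toy version, assuming straight rays through a handful of cells (the study itself uses a wavelet parameterization and far larger systems), shows the damped least-squares step:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy crossing-ray tomography: slowness in 4 cells, 6 rays with known path lengths.
L = rng.uniform(0.5, 2.0, size=(6, 4))           # path length of ray i in cell j (km)
s_true = np.array([0.20, 0.25, 0.18, 0.22])      # slowness (s/km)
t_obs = L @ s_true + rng.normal(0.0, 1e-3, 6)    # first-arrival picks with noise

# Damped least squares: solve (L^T L + eps^2 I) s = L^T t.
eps = 1e-2
s_est = np.linalg.solve(L.T @ L + eps**2 * np.eye(4), L.T @ t_obs)
print(s_est)
```

With well-crossed rays and small pick errors, the recovered slownesses match the true cell values closely; resolution-length estimates like those in the abstract quantify where this stops being true.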
NASA Astrophysics Data System (ADS)
Goodlet, Brent R.; Mills, Leah; Bales, Ben; Charpagne, Marie-Agathe; Murray, Sean P.; Lenthe, William C.; Petzold, Linda; Pollock, Tresa M.
2018-06-01
Bayesian inference is employed to precisely evaluate single-crystal elastic properties of novel γ-γ' Co- and CoNi-based superalloys from simple and non-destructive resonant ultrasound spectroscopy (RUS) measurements. Nine alloys from three Co-, CoNi-, and Ni-based alloy classes were evaluated in the fully aged condition, with one alloy per class also evaluated in the solution heat-treated condition. Comparisons are made between the elastic properties of the three alloy classes and among the alloys of a single class, with the following trends observed. A monotonic rise in the c_{44} (shear) elastic constant by a total of 12 pct is observed between the three alloy classes as Co is substituted for Ni. Elastic anisotropy (A) is also increased, with a large majority of the nearly 13 pct increase occurring after Co becomes the dominant constituent. Together the five CoNi alloys, with Co:Ni ratios from 1:1 to 1.5:1, exhibited remarkably similar properties, with an average A 1.8 pct greater than that of the Ni-based alloy CMSX-4. Custom code demonstrating a substantial advance over previously reported methods for RUS inversion is also reported here for the first time. CmdStan-RUS is built upon the open-source probabilistic programming language Stan and formulates the inverse problem using Bayesian methods. Bayesian posterior distributions are efficiently computed with Hamiltonian Monte Carlo (HMC), while the initial parameterization is randomly generated from weakly informative prior distributions. Remarkably robust convergence behavior is demonstrated across multiple independent HMC chains even when the initial parameterization is very far from the actual parameter values. Experimental procedures are substantially simplified by allowing an arbitrary misorientation between the specimen and crystal axes, as elastic properties and misorientation are estimated simultaneously.
NASA Astrophysics Data System (ADS)
Farzamian, Mohammad; Monteiro Santos, Fernando A.; Khalil, Mohamed A.
2017-12-01
The coupled hydrogeophysical approach has proved to be a valuable tool for improving the use of geoelectrical data for hydrological model parameterization. In the coupled approach, hydrological parameters are directly inferred from geoelectrical measurements in a forward manner to eliminate the uncertainty connected to the independent inversion of electrical resistivity data. Several numerical studies have been conducted to demonstrate the advantages of a coupled approach; however, only a few attempts have been made to apply the coupled approach to actual field data. In this study, we developed a 1D coupled hydrogeophysical code to estimate the van Genuchten-Mualem model parameters, Ks, n, θr and α, from time-lapse vertical electrical sounding data collected during a constant-inflow infiltration experiment. van Genuchten-Mualem parameters were sampled using the Latin hypercube sampling method to provide full coverage of the range of each parameter from their distributions. By applying the coupled approach, vertical electrical sounding data were coupled to hydrological models inferred from the van Genuchten-Mualem parameter samples to investigate the feasibility of constraining the hydrological model. The key approaches taken in the study are to (1) integrate electrical resistivity and hydrological data and avoid data inversion, (2) estimate the total water mass recovery of the electrical resistivity data and consider it when evaluating the van Genuchten-Mualem parameters and (3) correct for the influence of subsurface temperature fluctuations on the electrical resistivity data during the infiltration experiment. The results of the study revealed that the coupled hydrogeophysical approach can improve the value of geophysical measurements in hydrological model parameterization. However, the approach cannot overcome the technical limitations of the geoelectrical method associated with resolution and water mass recovery.
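The Latin hypercube sampling step can be sketched with SciPy's `qmc` module. The parameter bounds below are illustrative placeholders, not the ranges used in the study:

```python
import numpy as np
from scipy.stats import qmc

# Latin hypercube sample of the four van Genuchten-Mualem parameters.
# Bounds are illustrative only.
names = ["Ks (m/d)", "n (-)", "theta_r (-)", "alpha (1/m)"]
l_bounds = [0.01, 1.1, 0.01, 0.5]
u_bounds = [5.00, 3.0, 0.15, 15.0]

sampler = qmc.LatinHypercube(d=4, seed=42)
unit = sampler.random(n=100)                   # 100 samples in [0, 1)^4
samples = qmc.scale(unit, l_bounds, u_bounds)  # rescale to physical ranges
print(samples.shape)                           # (100, 4)
```

Each parameter axis is divided into 100 equal strata with exactly one sample per stratum, which is what gives the "full coverage" of each marginal distribution mentioned above.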
NASA Astrophysics Data System (ADS)
Rinaldi, Antonio P.; Rutqvist, Jonny; Finsterle, Stefan; Liu, Hui-Hai
2017-11-01
Ground deformation, commonly observed in storage projects, carries useful information about processes occurring in the injection formation. The Krechba gas field at In Salah (Algeria) is one of the best-known sites for studying ground surface deformation during geological carbon storage. At this first industrial-scale on-shore CO2 demonstration project, satellite-based ground-deformation monitoring data of high quality are available and used to study the large-scale hydrological and geomechanical response of the system to injection. In this work, we carry out coupled fluid flow and geomechanical simulations to understand the uplift at three different CO2 injection wells (KB-501, KB-502, KB-503). Previous numerical studies focused on the KB-502 injection well, where a double-lobe uplift pattern has been observed in the ground-deformation data. The observed uplift patterns at KB-501 and KB-503 have single-lobe patterns, but they can also indicate a deep fracture zone mechanical response to the injection. The current study improves the previous modeling approach by introducing an injection reservoir and a fracture zone, both responding to a Mohr-Coulomb failure criterion. In addition, we model a stress-dependent permeability and bulk modulus, according to a dual continuum model. Mechanical and hydraulic properties are determined through inverse modeling by matching the simulated spatial and temporal evolution of uplift to InSAR observations as well as by matching simulated and measured pressures. The numerical simulations are in agreement with both spatial and temporal observations. The estimated values for the parameterized mechanical and hydraulic properties are in good agreement with previous numerical results. In addition, the formal joint inversion of hydrogeological and geomechanical data provides measures of the estimation uncertainty.
Advances in Global Full Waveform Inversion
NASA Astrophysics Data System (ADS)
Tromp, J.; Bozdag, E.; Lei, W.; Ruan, Y.; Lefebvre, M. P.; Modrak, R. T.; Orsvuran, R.; Smith, J. A.; Komatitsch, D.; Peter, D. B.
2017-12-01
Information about Earth's interior comes from seismograms recorded at its surface. Seismic imaging based on spectral-element and adjoint methods has enabled the assimilation of this information for the construction of 3D (an)elastic Earth models. These methods account for the physics of wave excitation and propagation by numerically solving the equations of motion, and require the execution of complex computational procedures that challenge the most advanced high-performance computing systems. Current research is petascale; future research will require exascale capabilities. The inverse problem consists of reconstructing the characteristics of the medium from (often noisy) observations. A nonlinear functional is minimized, which involves both the misfit to the measurements and a Tikhonov-type regularization term to tackle the inherent ill-posedness. Achieving scalability for the inversion process on tens of thousands of multicore processors is a task that offers many research challenges. We initiated global "adjoint tomography" using 253 earthquakes and produced the first-generation model named GLAD-M15, with a transversely isotropic model parameterization. We are currently running iterations for a second-generation anisotropic model based on the same 253 events. In parallel, we continue iterations for a transversely isotropic model with a larger dataset of 1,040 events to determine higher-resolution plume and slab images. A significant part of our research has focused on eliminating I/O bottlenecks in the adjoint tomography workflow. This has led to the development of a new Adaptable Seismic Data Format based on HDF5, and post-processing tools based on the ADIOS library developed by Oak Ridge National Laboratory. We use the Ensemble Toolkit for workflow stabilization and management to automate the workflow with minimal human interaction.
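The misfit-plus-Tikhonov objective mentioned above can be illustrated on a small linear problem. The kernel, noise level, and regularization weight are invented for the demonstration; the point is how a first-difference roughness penalty stabilizes an otherwise ill-posed least-squares solution:

```python
import numpy as np

rng = np.random.default_rng(3)

# Ill-posed linear problem: a smooth (Gaussian) kernel blurs the model.
n = 40
x = np.linspace(0, 1, n)
G = np.exp(-((x[:, None] - x[None, :]) ** 2) / 0.01)   # smoothing kernel
m_true = np.sin(3 * np.pi * x)
d = G @ m_true + rng.normal(0, 0.01, n)

# Minimize ||G m - d||^2 + lam^2 ||L m||^2, L = first-difference roughness.
L = np.diff(np.eye(n), axis=0)
def tikhonov(lam):
    return np.linalg.solve(G.T @ G + lam**2 * (L.T @ L), G.T @ d)

m_reg = tikhonov(1e-2)                                 # regularized solution
m_unreg = np.linalg.lstsq(G, d, rcond=None)[0]         # unregularized solution
print(np.linalg.norm(m_reg - m_true), np.linalg.norm(m_unreg - m_true))
```

The unregularized solution amplifies the data noise through the tiny singular values of G, while the regularized one stays close to the true model, which is the role the Tikhonov term plays in the nonlinear tomographic functional.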
Rinaldi, Antonio P.; Rutqvist, Jonny; Finsterle, Stefan; ...
2016-10-24
Ground deformation, commonly seen in storage projects, carries useful information about processes occurring in the injection formation. The Krechba gas field at In Salah (Algeria) is one of the best-known sites for studying ground surface deformation during geological carbon storage. At this first industrial-scale on-shore CO2 demonstration project, satellite-based ground-deformation monitoring data of high quality are available and used to study the large-scale hydrological and geomechanical response of the system to injection. In this work, we carry out coupled fluid flow and geomechanical simulations to understand the uplift at three different CO2 injection wells (KB-501, KB-502, KB-503). Previous numerical studies focused on the KB-502 injection well, where a double-lobe uplift pattern has been observed in the ground-deformation data. The observed uplift patterns at KB-501 and KB-503 have single-lobe patterns, but they can also indicate a deep fracture zone mechanical response to the injection. The current study improves the previous modeling approach by introducing an injection reservoir and a fracture zone, both responding to a Mohr-Coulomb failure criterion. In addition, we model a stress-dependent permeability and bulk modulus, according to a dual continuum model. Mechanical and hydraulic properties are determined through inverse modeling by matching the simulated spatial and temporal evolution of uplift to InSAR observations as well as by matching simulated and measured pressures. The numerical simulations are in agreement with both spatial and temporal observations. The estimated values for the parameterized mechanical and hydraulic properties are in good agreement with previous numerical results. In addition, the formal joint inversion of hydrogeological and geomechanical data provides measures of the estimation uncertainty.
Electrostatic point charge fitting as an inverse problem: Revealing the underlying ill-conditioning
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ivanov, Maxim V.; Talipov, Marat R.; Timerghazin, Qadir K., E-mail: qadir.timerghazin@marquette.edu
2015-10-07
The atom-centered point charge (PC) model of molecular electrostatics (a major workhorse of atomistic biomolecular simulations) is usually parameterized by least-squares (LS) fitting of the point charge values to a reference electrostatic potential, a procedure that suffers from numerical instabilities due to the ill-conditioned nature of the LS problem. To reveal the origins of this ill-conditioning, we start with a general treatment of the point charge fitting problem as an inverse problem and construct an analytical model with the point charges spherically arranged according to a Lebedev quadrature, which is naturally suited for the inverse electrostatic problem. This analytical model is contrasted with the atom-centered point-charge model, which can be viewed as an irregular quadrature poorly suited for the problem. This analysis shows that the numerical problems of point charge fitting are due to the decay of the curvatures corresponding to the eigenvectors of the LS sum Hessian matrix. In part, this ill-conditioning is intrinsic to the problem and is related to the decreasing electrostatic contribution of the higher multipole moments, which, in the case of the Lebedev grid model, are directly associated with the Hessian eigenvectors. For the atom-centered model, this association breaks down beyond the first few eigenvectors related to the high-curvature monopole and dipole terms; this leads to an even wider spread of the Hessian curvature values. Using these insights, it is possible to alleviate the ill-conditioning of the LS point-charge fitting without introducing external restraints and/or constraints. Also, as the analytical Lebedev grid PC model proposed here can reproduce multipole moments up to a given rank, it may provide a promising alternative to including explicit multipole terms in a force field.
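The contrast drawn above can be caricatured numerically: build the least-squares design matrix of inverse distances from candidate charge sites to a potential grid, and compare the singular-value spread (condition number) for tightly clustered sites versus sites spread on a sphere. The geometry and sizes below are invented for the illustration:

```python
import numpy as np

rng = np.random.default_rng(4)

# ESP fitting as linear least squares: A[i, j] = 1/|r_grid_i - r_site_j|.
grid = rng.normal(size=(500, 3))
grid = 4.0 * grid / np.linalg.norm(grid, axis=1, keepdims=True)  # potential shell, R = 4

def design(sites):
    d = np.linalg.norm(grid[:, None, :] - sites[None, :, :], axis=2)
    return 1.0 / d

clustered = 0.3 * rng.normal(size=(12, 3))               # atom-centered-like cluster
sphere = rng.normal(size=(12, 3))
sphere /= np.linalg.norm(sphere, axis=1, keepdims=True)  # spread on unit sphere

conds = {}
for name, sites in [("clustered", clustered), ("spherical", sphere)]:
    s = np.linalg.svd(design(sites), compute_uv=False)
    conds[name] = s[0] / s[-1]
print(conds)  # the clustered arrangement is far worse conditioned
```

The small singular values correspond to charge patterns whose higher multipole moments barely reach the grid, matching the paper's explanation of the curvature decay.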
A second-order Budyko-type parameterization of land-surface hydrology
NASA Technical Reports Server (NTRS)
Andreou, S. A.; Eagleson, P. S.
1982-01-01
A simple, second-order parameterization of the water fluxes at a land surface was developed for use as the appropriate boundary condition in general circulation models of the global atmosphere. The derived parameterization incorporates the strong nonlinearities in the relationship between near-surface soil moisture and the evaporation, runoff and percolation fluxes. Based on the one-dimensional statistical-dynamical derivation of the annual water balance, it makes the transition to short-term prediction of the moisture fluxes through a Taylor expansion around the average annual soil moisture. The suggested parameterization is compared with other existing techniques and with available measurements. A thermodynamic coupling is applied in order to obtain estimates of the ground surface temperature.
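The value of keeping the second-order term in such a Taylor expansion can be shown on a toy flux curve. The quartic law below is an invented stand-in for the strongly nonlinear dependence of the fluxes on soil moisture:

```python
import numpy as np

# Illustrative nonlinear flux curve: beta(s) = s**4 stands in for the strongly
# nonlinear dependence of evaporation/runoff fluxes on relative soil moisture s.
def beta(s):
    return s ** 4

s_bar = 0.5                          # long-term (annual) mean soil moisture
b0 = beta(s_bar)
b1 = 4 * s_bar ** 3                  # first derivative at s_bar
b2 = 12 * s_bar ** 2                 # second derivative at s_bar

s = np.linspace(0.3, 0.7, 41)
first_order = b0 + b1 * (s - s_bar)
second_order = first_order + 0.5 * b2 * (s - s_bar) ** 2

err1 = np.max(np.abs(first_order - beta(s)))
err2 = np.max(np.abs(second_order - beta(s)))
print(err1, err2)                    # the second-order expansion is markedly closer
```

Over realistic seasonal excursions of soil moisture away from its annual mean, the curvature term captures most of the nonlinearity that a linearization misses.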
NASA Technical Reports Server (NTRS)
Kratz, David P.; Chou, Ming-Dah; Yan, Michael M.-H.
1993-01-01
Fast and accurate parameterizations have been developed for the transmission functions of the CO2 9.4- and 10.4-micron bands, as well as the CFC-11, CFC-12, and CFC-22 bands located in the 8-12-micron region. The parameterizations are based on line-by-line calculations of transmission functions for the CO2 bands and on high spectral resolution laboratory measurements of the absorption coefficients for the CFC bands. Also developed are the parameterizations for the H2O transmission functions for the corresponding spectral bands. Compared to the high-resolution calculations, fluxes at the tropopause computed with the parameterizations are accurate to within 10 percent when overlapping of gas absorptions within a band is taken into account. For individual gas absorption, the accuracy is of order 0-2 percent. The climatic effects of these trace gases have been studied using a zonally averaged multilayer energy balance model, which includes seasonal cycles and a simplified deep ocean. With the trace gas abundances taken to follow the Intergovernmental Panel on Climate Change Low Emissions 'B' scenario, the transient response of the surface temperature is simulated for the period 1900-2060.
NASA Astrophysics Data System (ADS)
Langeveld, Willem G. J.
The most widely used technology for the non-intrusive active inspection of cargo containers and trucks is x-ray radiography at high energies (4-9 MeV). Technologies such as dual-energy imaging, spectroscopy, and statistical waveform analysis can be used to estimate the effective atomic number (Zeff) of the cargo from the x-ray transmission data, because the mass attenuation coefficient depends on energy as well as atomic number Z. The estimated effective atomic number, Zeff, of the cargo then leads to improved detection capability of contraband and threats, including special nuclear materials (SNM) and shielding. In this context, the exact meaning of effective atomic number (for mixtures and compounds) is generally not well-defined. Physics-based parameterizations of the mass attenuation coefficient have been given in the past, but usually for a limited low-energy range. Definitions of Zeff have been based, in part, on such parameterizations. Here, we give an improved parameterization at low energies (20-1000 keV) which leads to a well-defined Zeff. We then extend this parameterization up to energies relevant for cargo inspection (10 MeV), and examine what happens to the Zeff definition at these higher energies.
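The notion of Zeff for mixtures and compounds admits many definitions, as noted above. One classical low-energy mixture rule (power-law averaging over electron fractions, with exponent m ≈ 2.94 in the photoelectric-dominated regime) can be written in a few lines; this is the textbook rule, not the improved parameterization derived in this work:

```python
# Classical mixture rule: Zeff = (sum_i a_i * Z_i**m) ** (1/m), where a_i is the
# electron fraction of element i and m ~ 2.94 at photoelectric-dominated energies.
def z_eff(elements, m=2.94):
    # elements: list of (Z, number of atoms) pairs for the compound
    electrons = [(z, z * n) for z, n in elements]
    total = sum(e for _, e in electrons)
    return sum((e / total) * z ** m for z, e in electrons) ** (1.0 / m)

water = [(1, 2), (8, 1)]        # H2O: two hydrogens, one oxygen
print(round(z_eff(water), 2))   # close to the commonly quoted ~7.4 for water
```

Because the exponent itself depends on which interaction dominates, rules of this kind break down at the multi-MeV energies relevant to cargo inspection, which is precisely the gap the paper's extended parameterization addresses.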
NASA Astrophysics Data System (ADS)
Kusznir, Nick; Gozzard, Simon; Alvey, Andy
2016-04-01
The distribution of ocean crust and lithosphere within the South China Sea (SCS) is controversial. Sea-floor spreading re-orientation and ridge jumps during the Oligocene-Miocene formation of the South China Sea led to the present complex distribution of oceanic crust, thinned continental crust, micro-continents and volcanic ridges. We determine Moho depth, crustal thickness and continental lithosphere thinning (1 - 1/beta) for the South China Sea using a gravity inversion method which incorporates a lithosphere thermal gravity anomaly correction (Chappell & Kusznir, 2008). The gravity inversion method provides a prediction of ocean-continent transition structure and continent-ocean boundary location which is independent of ocean isochron information. A correction is required for the lithosphere thermal gravity anomaly in order to determine Moho depth accurately from gravity inversion; the elevated lithosphere geotherm of the young oceanic and rifted continental margin lithosphere of the South China Sea produces a large lithosphere thermal gravity anomaly which in places exceeds -150 mGal. The gravity anomaly inversion is carried out in the 3D spectral domain (using Parker, 1972) to determine 3D Moho geometry, and invokes Smith's uniqueness theorem. The gravity anomaly contribution from sediments assumes a compaction-controlled sediment density increase with depth. The gravity inversion includes a parameterization of the decompression melting model of White & McKenzie (1999) to predict the volcanic addition generated during continental breakup lithosphere thinning and seafloor spreading. Public-domain free-air gravity anomaly, bathymetry and sediment thickness data are used in this gravity inversion. Using crustal thickness and continental lithosphere thinning factor maps with superimposed shaded-relief free-air gravity anomaly, we improve the determination of pre-breakup rifted margin conjugacy, rift orientation and sea-floor spreading trajectory.
The SCS conjugate margins are highly asymmetric and have several striking features such as the Macclesfield Bank, Xisha Trough, Reed Bank and Dangerous Grounds. Thin continental crust is predicted extending westwards from thin oceanic crust north of Macclesfield Bank into the Quiondongnan (QDN) basin and is interpreted as being generated ahead of westward-propagating sea-floor spreading, mostly in the Oligocene. Further south, highly thinned continental crust or possibly serpentinised exhumed mantle is predicted in the Phu Khanh Basin. Ahead of the failed propagating tip of seafloor spreading, offshore southern Vietnam, thinned continental crust is predicted for the Cuu Long and Nam Con Son Basins. Crustal thicknesses from gravity inversion confirm that the southern margin of the SCS consists of fragmented blocks of thinned continental crust separated by thinner regions of continental crust that have undergone higher degrees of stretching and thinning. The Reed Bank is predicted to have a crustal thickness of 20 to 25 km, similar to that of Macclesfield Bank. The Dangerous Grounds, west of the Reed Bank, are also predicted to consist of continental crust. This region has been thinned to a higher degree than the Reed Bank, with continental crustal thickness ranging between 10 and 20 km.
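The spectral-domain scheme rests on Parker's (1972) relation between interface relief and its gravity anomaly. A first-order 1D sketch, with an invented density contrast and depth, can be checked against the infinite-slab (Bouguer) formula:

```python
import numpy as np

G = 6.674e-11   # gravitational constant (SI units)

# First-order Parker (1972) forward relation: for an interface h(x) with density
# contrast drho at mean depth z0, F[dg] ~ 2*pi*G*drho * exp(-|k| z0) * F[h].
def parker_first_order(h, dx, drho, z0):
    k = 2 * np.pi * np.fft.fftfreq(h.size, d=dx)   # wavenumbers (rad/m)
    H = np.fft.fft(h)
    return np.real(np.fft.ifft(2 * np.pi * G * drho * np.exp(-np.abs(k) * z0) * H))

# Sanity check: flat relief reduces to the infinite-slab formula 2*pi*G*drho*h.
h = np.full(256, 1000.0)                            # 1 km of relief everywhere
dg = parker_first_order(h, dx=1000.0, drho=400.0, z0=30e3)
print(dg[0], 2 * np.pi * G * 400.0 * 1000.0)        # both ~1.68e-4 m/s^2 (~16.8 mGal)
```

The exp(-|k| z0) factor is also why deep Moho relief is smoothed in the data, motivating the lithosphere thermal correction and regularized inversion described in the abstract.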
NASA Astrophysics Data System (ADS)
Zakšek, Klemen; Schroedter-Homscheidt, Marion
Some applications, e.g. from traffic or energy management, require air temperature data at high spatial and temporal resolution at two metres height above the ground (T2m), sometimes in near-real-time. Thus, a parameterization based on boundary-layer physical principles was developed that determines the air temperature from remote sensing data (SEVIRI data aboard MSG and MODIS data aboard the Terra and Aqua satellites). The method consists of two parts. First, a downscaling procedure from the SEVIRI pixel resolution of several kilometres to a one-kilometre spatial resolution is performed using a regression analysis between the land surface temperature (LST) and the normalized differential vegetation index (NDVI) acquired by the MODIS instrument. Second, the lapse rate between the LST and T2m is removed using an empirical parameterization that requires albedo, down-welling surface short-wave flux, relief characteristics and NDVI data. The method was successfully tested for Slovenia, the French region Franche-Comté and southern Germany for the period from May to December 2005, indicating that the parameterization is valid for Central Europe. The parameterization yields a root-mean-square deviation (RMSD) of 2.0 K during the daytime with a bias of -0.01 K and a correlation coefficient of 0.95. This is promising, especially considering the high temporal (30 min) and spatial (1000 m) resolution of the results.
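The first (downscaling) step can be sketched on synthetic data, assuming a simple linear LST-NDVI relationship: fit the regression at the coarse scale, apply it to fine-scale NDVI, and re-add the coarse residual so that coarse-cell means are preserved. All numbers below are invented:

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic fine-scale NDVI and LST: 8 coarse (SEVIRI-like) cells, 64 subpixels each.
ndvi_fine = rng.uniform(0.1, 0.8, size=(8, 64))
lst_fine_true = 310.0 - 15.0 * ndvi_fine + rng.normal(0, 0.3, ndvi_fine.shape)

ndvi_coarse = ndvi_fine.mean(axis=1)
lst_coarse = lst_fine_true.mean(axis=1)          # what the coarse sensor observes

slope, intercept = np.polyfit(ndvi_coarse, lst_coarse, 1)   # coarse-scale regression
lst_hat = intercept + slope * ndvi_fine                     # apply at fine scale
residual = lst_coarse - (intercept + slope * ndvi_coarse)
lst_down = lst_hat + residual[:, None]                      # re-add coarse residual

rmse = np.sqrt(np.mean((lst_down - lst_fine_true) ** 2))
print(rmse)   # close to the synthetic noise level
```

Re-adding the residual guarantees the downscaled field aggregates back to exactly the coarse observation, a standard safeguard in regression-based sharpening.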
Modeling the Absorbing Aerosol Index
NASA Technical Reports Server (NTRS)
Penner, Joyce; Zhang, Sophia
2003-01-01
We propose a scheme to model the absorbing aerosol index and improve biomass carbon inventories by optimizing the difference between the TOMS aerosol index (AI) and modeled AI with an inverse model. Two absorbing aerosol types are considered: biomass carbon and mineral dust. The a priori biomass carbon source was generated by Liousse et al. [1996]. Mineral dust emission is parameterized according to surface wind and soil moisture using the method developed by Ginoux [2000]. In this initial study, the coupled CCM1 and GRANTOUR model was used to determine the aerosol spatial and temporal distribution. With modeled aerosol concentrations and optical properties, we calculate the radiance at the top of the atmosphere at 340 nm and 380 nm with a radiative transfer model. The contrast of the radiance at these two wavelengths is used to calculate AI. This paper reports our initial modeling of AI and its comparison with TOMS Nimbus 7 AI. For our follow-on project, we will model the global AI with the aerosol spatial and temporal distribution recomputed from the IMPACT model and DAO GEOS-1 meteorology fields. We will then build an inverse model that applies a Bayesian inverse technique to optimize the agreement between model and observational data. The inverse model will tune the biomass burning source strength to reduce the difference between modeled AI and TOMS AI. Further simulations with a posteriori biomass carbon sources from the inverse model will be carried out, and results will be compared to available observations such as surface concentration and aerosol optical depth.
NASA Astrophysics Data System (ADS)
Poirier, Vincent
Mesh deformation schemes play an important role in numerical aerodynamic optimization. As the aerodynamic shape changes, the computational mesh must adapt to conform to the deformed geometry. In this work, an extension to an existing fast and robust Radial Basis Function (RBF) mesh movement scheme is presented. Using a reduced set of surface points to define the mesh deformation increases the efficiency of the RBF method, but at the cost of introducing errors into the parameterization by not recovering the exact displacement of all surface points. A secondary mesh movement is implemented, within an adjoint-based optimization framework, to eliminate these errors. The proposed scheme is tested within a 3D Euler flow by reducing the pressure drag while maintaining the lift of a wing-body-configured Boeing-747 and an Onera-M6 wing. In addition, an inverse pressure design is executed on the Onera-M6 wing, and an inverse span-loading case is presented for a wing-body-configured DLR-F6 aircraft.
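The basic RBF mesh-movement step, stripped of all the refinements above, interpolates known boundary-node displacements smoothly into the volume. The sketch below uses a global cubic kernel on a tiny invented 2D mesh; production schemes use compactly supported RBFs and reduced surface point sets:

```python
import numpy as np

# RBF mesh-deformation sketch: solve for kernel weights that reproduce the
# surface displacements, then evaluate the interpolant at volume nodes.
def rbf_kernel(r):
    return r ** 3                       # cubic RBF

def rbf_deform(surf_pts, surf_disp, vol_pts):
    n = surf_pts.shape[0]
    r = np.linalg.norm(surf_pts[:, None] - surf_pts[None, :], axis=2)
    weights = np.linalg.solve(rbf_kernel(r) + 1e-9 * np.eye(n), surf_disp)
    r_vol = np.linalg.norm(vol_pts[:, None] - surf_pts[None, :], axis=2)
    return rbf_kernel(r_vol) @ weights

surf = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])   # boundary nodes
disp = np.array([[0.0, 0.1], [0.0, -0.1], [0.0, 0.1], [0.0, -0.1]]) # their motion
vol = np.array([[0.5, 0.5], [0.25, 0.75]])                          # interior nodes
print(rbf_deform(surf, disp, vol))
```

Reducing the set of surface points shrinks the linear solve, which is exactly the efficiency/accuracy trade-off the secondary mesh movement in the abstract is designed to repair.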
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wannamaker, Philip E.
We have developed an algorithm for the inversion of magnetotelluric (MT) data to a 3D earth resistivity model based upon the finite element method. Hexahedral edge finite elements are implemented to accommodate discontinuities in the electric field across resistivity boundaries and to accurately simulate topographic variations. All matrices are reduced and solved using direct solution modules (principally PARDISO for the finite element system and PLASMA for the parameter step estimate), which avoids the ill-conditioning endemic to iterative solvers such as conjugate gradients. Large model parameterizations can be handled by transforming the Gauss-Newton estimator to data-space form. The accuracy of the forward problem and Jacobians has been checked by comparison to integral equation results and by limiting asymptotes. Inverse accuracy and performance have been verified against the public Dublin Secret Test Model 2 and the well-known Mount St Helens 3D MT data set. We believe this algorithm is the most capable yet for forming 3D images of earth resistivity structure and their implications for geothermal fluids and pathways.
Non-invasive flow path characterization in a mining-impacted wetland
Bethune, James; Randell, Jackie; Runkel, Robert L.; Singha, Kamini
2015-01-01
Time-lapse electrical resistivity (ER) was used to capture the dilution of a seasonal pulse of acid mine drainage (AMD) contamination in the subsurface of a wetland downgradient of the abandoned Pennsylvania mine workings in central Colorado. Data were collected monthly from mid-July to late October of 2013, with an additional dataset collected in June of 2014. Inversion of the ER data shows the development through time of multiple resistive anomalies in the subsurface, which corroborating data suggest are driven by changes in total dissolved solids (TDS) localized in preferential flow pathways. Sensitivity analyses on a synthetic model of the site suggest that the anomalies would need to be at least several meters in diameter to be adequately resolved by the inversions. The existence of preferential flow paths would have a critical impact on the extent of attenuation mechanisms at the site, and their further characterization could be used to parameterize reactive transport models in developing quantitative predictions of remediation strategies.
Approaches in highly parameterized inversion - GENIE, a general model-independent TCP/IP run manager
Muffels, Christopher T.; Schreuder, Willem A.; Doherty, John E.; Karanovic, Marinko; Tonkin, Matthew J.; Hunt, Randall J.; Welter, David E.
2012-01-01
GENIE is a model-independent suite of programs that can be used to distribute, manage, and execute multiple model runs via the TCP/IP infrastructure. The suite consists of a file distribution interface, a run manager, a run executor, and a routine that can be compiled as part of a program and used to exchange model runs with the run manager. Because communication is via a standard protocol (TCP/IP), any computer connected to the Internet can serve in any of the capacities offered by this suite. Model independence is consistent with the existing template and instruction file protocols of the widely used PEST parameter estimation program. This report describes (1) the problem addressed; (2) the approach used by GENIE to queue, distribute, and retrieve model runs; and (3) user instructions, classes, and functions developed. It also includes (4) an example to illustrate the linking of GENIE with Parallel PEST using the interface routine.
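A minimal sketch of the kind of TCP/IP run exchange a manager/executor pair performs, assuming nothing about GENIE's actual wire protocol: a manager thread queues hypothetical parameter sets, and workers connect over a socket, execute a stand-in "model run", and return results as JSON lines.

```python
import json
import socket
import threading

# Toy run-manager exchange: the manager hands one queued run to each worker
# that connects, then reads back the worker's result.
HOST = "127.0.0.1"

def manager(server_sock, runs, results):
    while runs:
        conn, _ = server_sock.accept()
        with conn:
            conn.sendall((json.dumps(runs.pop(0)) + "\n").encode())
            results.append(json.loads(conn.makefile().readline()))

def worker(port):
    with socket.create_connection((HOST, port)) as s:
        params = json.loads(s.makefile().readline())
        misfit = sum((p - 1.0) ** 2 for p in params["values"])  # stand-in "model run"
        s.sendall((json.dumps({"run": params["run"], "misfit": misfit}) + "\n").encode())

server = socket.socket()
server.bind((HOST, 0))            # bind to any free port
server.listen()
port = server.getsockname()[1]

runs = [{"run": i, "values": [0.5 * i, 1.5]} for i in range(3)]
results = []
t = threading.Thread(target=manager, args=(server, runs, results))
t.start()
for _ in range(3):
    worker(port)
t.join()
server.close()
print(sorted(r["run"] for r in results))
```

Because the transport is plain TCP, the "workers" could equally well be processes on remote machines, which is the property GENIE exploits for parallel PEST runs.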
NASA Astrophysics Data System (ADS)
Pincus, R.; Mlawer, E. J.
2017-12-01
Radiation is a key process in numerical models of the atmosphere. The problem is well understood, and the parameterization of radiation has seen relatively few conceptual advances in the past 15 years. It is nonetheless often the single most expensive component of all physical parameterizations, despite being computed less frequently than other terms. This combination of cost and maturity suggests value in a single radiation parameterization that could be shared across models; devoting effort to a single parameterization might allow for fine-tuning for efficiency. The challenge lies in the coupling of this parameterization to many disparate representations of clouds and aerosols. This talk will describe RRTMGP, a new radiation parameterization that seeks to balance efficiency and flexibility. This balance is struck by isolating computational tasks in "kernels" that expose as much fine-grained parallelism as possible. These have simple interfaces and are interoperable across programming languages so that they might be replaced by alternative implementations in domain-specific languages. Coupling to the host model makes use of object-oriented features of Fortran 2003, minimizing branching within the kernels and the amount of data that must be transferred. We will show accuracy and efficiency results for a globally representative set of atmospheric profiles using a relatively high-resolution spectral discretization.
Parameter estimation uncertainty: Comparing apples and apples?
NASA Astrophysics Data System (ADS)
Hart, D.; Yoon, H.; McKenna, S. A.
2012-12-01
Given a highly parameterized ground water model in which the conceptual model of the heterogeneity is stochastic, an ensemble of inverse calibrations from multiple starting points (MSP) provides an ensemble of calibrated parameters and follow-on transport predictions. However, the multiple calibrations are computationally expensive. Parameter estimation uncertainty can also be modeled by decomposing the parameterization into a solution space and a null space. From a single calibration (single starting point) a single set of parameters defining the solution space can be extracted. The solution space is held constant while Monte Carlo sampling of the parameter set covering the null space creates an ensemble of the null space parameter set. A recently developed null-space Monte Carlo (NSMC) method combines the calibration solution space parameters with the ensemble of null space parameters, creating sets of calibration-constrained parameters for input to the follow-on transport predictions. Here, we examine the consistency between probabilistic ensembles of parameter estimates and predictions using the MSP calibration and the NSMC approaches. A highly parameterized model of the Culebra dolomite previously developed for the WIPP project in New Mexico is used as the test case. A total of 100 estimated fields are retained from the MSP approach and the ensemble of results defining the model fit to the data, the reproduction of the variogram model and prediction of an advective travel time are compared to the same results obtained using NSMC. We demonstrate that the NSMC fields based on a single calibration model can be significantly constrained by the calibrated solution space and the resulting distribution of advective travel times is biased toward the travel time from the single calibrated field. To overcome this, newly proposed strategies to employ a multiple calibration-constrained NSMC approach (M-NSMC) are evaluated. 
Comparison of the M-NSMC and MSP methods suggests that M-NSMC can provide a computationally efficient and practical solution for predictive uncertainty analysis in highly nonlinear and complex subsurface flow and transport models. This material is based upon work supported as part of the Center for Frontiers of Subsurface Energy Security, an Energy Frontier Research Center funded by the U.S. Department of Energy, Office of Science, Office of Basic Energy Sciences under Award Number DE-SC0001114. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
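The null-space sampling idea at the heart of NSMC can be sketched with a linearized toy problem. This is only an illustration of the projection step: the Jacobian, dimensions, and ensemble size below are made up, and the full NSMC method described by the abstract additionally re-calibrates each perturbed field and works with highly parameterized nonlinear models.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical linearized problem: J maps 8 parameters to 3 observations,
# so the null space is 5-dimensional.
J = rng.normal(size=(3, 8))          # Jacobian at the calibrated solution
p_cal = rng.normal(size=8)           # calibrated ("solution-space") parameter set

# SVD: rows of Vt beyond the rank of J span the null space.
U, s, Vt = np.linalg.svd(J)
rank = int(np.sum(s > 1e-10 * s[0]))
V_null = Vt[rank:].T                 # (8, 5) basis of the null space

# NSMC-style sampling: perturb only along null-space directions, so the
# simulated observations are (to first order) unchanged.
ensemble = [p_cal + V_null @ rng.normal(size=V_null.shape[1]) for _ in range(100)]

# First-order check: every member reproduces the calibrated-model outputs.
max_dev = max(np.linalg.norm(J @ (p - p_cal)) for p in ensemble)
print(max_dev)   # ~1e-15, numerically zero
```

The bias the abstract reports follows directly from this construction: every member is anchored to the single calibrated solution-space vector `p_cal`, which motivates the multiple-calibration M-NSMC variant.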
Stochastic Convection Parameterizations
NASA Technical Reports Server (NTRS)
Teixeira, Joao; Reynolds, Carolyn; Suselj, Kay; Matheou, Georgios
2012-01-01
computational fluid dynamics, radiation, clouds, turbulence, convection, gravity waves, surface interaction, radiation interaction, cloud and aerosol microphysics, complexity (vegetation, biogeochemistry), radiation versus turbulence/convection stochastic approach, non-linearities, Monte Carlo, high resolutions, large-eddy simulations, cloud structure, plumes, saturation in tropics, forecasting, parameterizations, stochastic, radiation-cloud interaction, hurricane forecasts
NASA Astrophysics Data System (ADS)
Nasta, Paolo; Romano, Nunzio
2016-01-01
This study explores the feasibility of identifying the effective soil hydraulic parameterization of a layered soil profile by using a conventional unsteady drainage experiment leading to field capacity. The flux-based field capacity criterion is attained by subjecting the soil profile to a synthetic drainage process implemented numerically in the Soil-Water-Atmosphere-Plant (SWAP) model. The effective hydraulic parameterization is associated with either aggregated or equivalent parameters, the former determined by geometrical scaling theory and the latter obtained through inverse modeling. Outcomes from both methods depend on information that is sometimes difficult to retrieve at the local scale and rather challenging, or virtually impossible, to obtain at larger scales. Knowledge of only the topsoil hydraulic properties, as retrieved for example by a near-surface field campaign or a data assimilation technique, is often exploited as a proxy for the effective soil hydraulic parameterization at the largest spatial scales. The effective soil hydraulic characterizations provided by these three methods are compared by discussing the implications of their use and accounting for the trade-offs between the required input information and model output reliability. To better highlight the epistemic errors associated with the different effective soil hydraulic properties and to provide more practical guidance, the layered soil profiles are then grouped using the FAO textural classes. For the moderately heterogeneous soil profiles available, all three approaches guarantee generally good predictability of the actual field capacity values and provide adequate identification of the effective hydraulic parameters. Conversely, worse performance is encountered for highly variable vertical heterogeneity, especially when resorting to the "topsoil-only" information.
In general, the best performance is achieved by the equivalent parameters, which might be considered a reference for comparisons with other techniques. As might be expected, the information content of the soil hydraulic properties pertaining only to the uppermost soil horizon is rather limited and is not capable of mapping out the hydrologic behavior of the real vertical soil heterogeneity, since the drainage process is significantly affected by profile layering in almost all cases.
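A flux-based field capacity criterion of the kind used above can be illustrated with a toy unit-gradient drainage calculation. This is not the SWAP model: the power-law conductivity function, the parameter values, and the 0.1 cm/day threshold below are all assumed for illustration; the idea is only that "field capacity" is defined as the water content at which the drainage flux drops below a prescribed threshold.

```python
# Illustrative parameters (not from the study)
theta_s, theta_r = 0.45, 0.05     # saturated / residual water content (-)
Ks, b = 50.0, 8.0                 # cm/day, pore-interaction exponent
flux_threshold = 0.1              # cm/day drainage flux defining "field capacity"

def K(theta):
    """Toy power-law unsaturated conductivity."""
    Se = (theta - theta_r) / (theta_s - theta_r)
    return Ks * Se ** b

# Drain a 100 cm column under unit-gradient (free) drainage: dtheta/dt = -K/L
theta, L, dt = theta_s, 100.0, 0.01
while K(theta) > flux_threshold:
    theta -= K(theta) / L * dt

theta_fc = theta
print(round(theta_fc, 3))
```

The loop simply integrates the drainage ODE forward in time and stops at the flux criterion; the final `theta_fc` is the flux-based field capacity of this toy profile.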
Uncertainty analysis in seismic tomography
NASA Astrophysics Data System (ADS)
Owoc, Bartosz; Majdański, Mariusz
2017-04-01
The velocity field from seismic travel-time tomography depends on several factors, such as regularization, inversion path, and model parameterization. The result also depends strongly on the initial velocity model and the precision of travel-time picking. In this research we test the dependence on the starting model in layered tomography and compare it with the effect of picking precision. Moreover, in our analysis the uncertainty distribution for manual travel-time picking is asymmetric, which shifts the results toward faster velocities. For the calculations we use the JIVE3D travel-time tomographic code. We used data from geo-engineering and industrial-scale investigations, which were collected by our team from IG PAS.
Watermarking on 3D mesh based on spherical wavelet transform.
Jin, Jian-Qiu; Dai, Min-Ya; Bao, Hu-Jun; Peng, Qun-Sheng
2004-03-01
In this paper we propose a robust watermarking algorithm for 3D meshes. The algorithm is based on the spherical wavelet transform. Our basic idea is to decompose the original mesh into a series of details at different scales by using the spherical wavelet transform; the watermark is then embedded into the different levels of detail. The embedding process includes: global sphere parameterization, spherical uniform sampling, the spherical wavelet forward transform, embedding the watermark, the spherical wavelet inverse transform, and finally resampling the watermarked mesh to recover the topological connectivity of the original model. Experiments show that our algorithm can improve the capacity of the watermark and the robustness of watermarking against attacks.
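The "embed into detail coefficients" step can be shown with a deliberately simplified 1-D analogue. This is not a spherical wavelet and not the paper's algorithm: a one-level Haar transform stands in for the spherical wavelet decomposition, a random vector stands in for the mesh geometry, and extraction here is informed (it uses the original coefficients), whereas the sampling and resampling steps of the real scheme are omitted.

```python
import numpy as np

rng = np.random.default_rng(1)

def haar_forward(x):
    # One-level Haar transform: averages (approximation) and differences (detail).
    avg = (x[0::2] + x[1::2]) / 2.0
    det = (x[0::2] - x[1::2]) / 2.0
    return avg, det

def haar_inverse(avg, det):
    x = np.empty(2 * avg.size)
    x[0::2] = avg + det
    x[1::2] = avg - det
    return x

signal = rng.normal(size=64)                  # stand-in for mesh geometry
bits = rng.integers(0, 2, size=32)            # watermark payload
alpha = 0.05                                  # embedding strength

avg, det = haar_forward(signal)
det_marked = det + alpha * (2 * bits - 1)     # embed +/-alpha into detail coeffs
marked = haar_inverse(avg, det_marked)

# Informed extraction: compare the marked details against the originals.
_, det_rec = haar_forward(marked)
recovered = (det_rec - det > 0).astype(int)
print(np.array_equal(recovered, bits))        # True
```

Embedding in the detail (high-frequency) bands rather than the approximation is what keeps the perturbation visually small while surviving reconstruction, which is the design choice the abstract relies on.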
A new approach to the convective parameterization of the regional atmospheric model BRAMS
NASA Astrophysics Data System (ADS)
Dos Santos, A. F.; Freitas, S. R.; de Campos Velho, H. F.; Luz, E. F.; Gan, M. A.; de Mattos, J. Z.; Grell, G. A.
2013-05-01
A simulation of the summer characteristics of January 2010 was performed using the Brazilian developments on the Regional Atmospheric Modeling System (BRAMS) atmospheric model. The convective parameterization scheme of Grell and Dévényi was used to represent clouds and their interaction with the large-scale environment. As a result, the precipitation forecasts can be combined in several ways, generating a numerical representation of precipitation and of atmospheric heating and moistening rates. The purpose of this study was to generate a set of weights to compute the best combination of the hypotheses of the convective scheme. This is an inverse problem of parameter estimation, and it is solved as an optimization problem. To minimize the difference between observed data and forecasted precipitation, the objective function was computed as the quadratic difference between five simulated precipitation fields and the observation. The precipitation field estimated by the Tropical Rainfall Measuring Mission satellite was used as the observed data. Weights were obtained using the firefly algorithm, and the mass fluxes of each closure of the convective scheme were weighted to generate a new set of mass fluxes. The results indicated improved model skill with the new methodology compared with the original ensemble-mean calculation.
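The quadratic objective behind the weight estimation can be written down directly. The study minimizes it with the firefly metaheuristic; as a minimal sketch of the same objective, the example below solves it with ordinary least squares instead, on synthetic fields (the field shapes, noise level, and "true" weights are all invented).

```python
import numpy as np

rng = np.random.default_rng(2)

# Five simulated precipitation fields, one per closure hypothesis (flattened grids).
n_closures, n_cells = 5, 400
P = rng.gamma(2.0, 1.5, size=(n_closures, n_cells))

# Synthetic "observed" field: a known combination of the closures plus noise.
w_true = np.array([0.4, 0.25, 0.2, 0.1, 0.05])
P_obs = w_true @ P + rng.normal(0.0, 0.05, size=n_cells)

# Quadratic objective: minimize ||w @ P - P_obs||^2 over the weights w.
w_fit, *_ = np.linalg.lstsq(P.T, P_obs, rcond=None)
print(np.round(w_fit, 2))
```

A metaheuristic such as the firefly algorithm becomes attractive when, unlike here, the objective requires a model run per evaluation or constraints (e.g., non-negative weights) make the problem non-quadratic.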
NASA Astrophysics Data System (ADS)
Park, Jun; Hwang, Seung-On
2017-11-01
The impact of a spectral nudging technique on the dynamical downscaling of the summer surface air temperature in a high-resolution regional atmospheric model is assessed. The performance of this technique is measured by comparing 16 analysis-driven simulation sets of physical parameterization combinations of two shortwave radiation and four land surface model schemes, which are known to be crucial for the simulation of the surface air temperature. It is found that the application of spectral nudging to the outermost domain has a greater impact on the regional climate than any combination of shortwave radiation and land surface model physics schemes. The optimal choice of the two model physics parameterizations is helpful for obtaining more realistic spatiotemporal distributions of land surface variables such as the surface air temperature, precipitation, and surface fluxes. However, employing spectral nudging adds more value to the results; the improvement is greater than that from using sophisticated shortwave radiation and land surface model physical parameterizations. This result indicates that spectral nudging applied to the outermost domain provides a more accurate lateral boundary condition to the innermost domain when forced by analysis data, by ensuring consistency with the large-scale forcing over the regional domain. This in turn helps the two physical parameterizations to produce small-scale features closer to the observed values, leading to a better representation of the surface air temperature in the high-resolution downscaled climate.
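The essence of spectral nudging, relaxing only the large-scale (low-wavenumber) part of a model field toward the driving analysis while leaving the small scales free, can be sketched in one dimension. This is a schematic illustration, not the implementation in the model above: the cutoff wavenumber, relaxation strength, and synthetic fields are assumptions.

```python
import numpy as np

def spectral_nudge(field, driver, k_cut, strength):
    """One nudging step: relax wavenumbers |k| <= k_cut toward the driving field."""
    f_hat, d_hat = np.fft.fft(field), np.fft.fft(driver)
    k = np.fft.fftfreq(field.size, d=1.0 / field.size)   # integer wavenumbers
    mask = np.abs(k) <= k_cut
    f_hat[mask] += strength * (d_hat[mask] - f_hat[mask])
    return np.fft.ifft(f_hat).real

x = np.linspace(0, 2 * np.pi, 128, endpoint=False)
driver = np.sin(x)                              # large-scale driving analysis
field = 0.3 * np.sin(x) + 0.5 * np.sin(20 * x)  # drifted large scale + small scale

nudged = spectral_nudge(field, driver, k_cut=3, strength=0.5)

a1 = 2 / x.size * np.sum(nudged * np.sin(x))        # large-scale amplitude after nudging
a20 = 2 / x.size * np.sum(nudged * np.sin(20 * x))  # small-scale amplitude, untouched
print(round(a1, 3), round(a20, 3))   # ~0.65 and 0.5
```

The large-scale amplitude is pulled halfway from 0.3 toward the driver's 1.0, while the wavenumber-20 component is unchanged; that selectivity is why nudging improves the boundary forcing without suppressing the downscaled small-scale features.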
Forest productivity varies with soil moisture more than temperature in a small montane watershed
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wei, Liang; Zhou, Hang; Link, Timothy E
2018-05-16
Mountainous terrain creates variability in microclimate, including nocturnal cold air drainage and resultant temperature inversions. Driven by the elevational temperature gradient, vapor pressure deficit (VPD) also varies with elevation. Soil depth and moisture availability often increase from ridgetop to valley bottom. These variations complicate predictions of forest productivity and other biological responses. We analyzed spatiotemporal air temperature (T) and VPD variations in a forested, 27-km² catchment that ranged from 1000 to 1650 m in elevation. Temperature inversions occurred on 76% of mornings in the growing season. The inversion had a clear upper boundary at midslope (~1370 m a.s.l.). Vapor pressure was relatively constant across elevations; therefore VPD was mainly controlled by T in the watershed. We assessed the impact of microclimate and soil moisture on tree height, forest productivity, and carbon stable isotopes (δ13C) using a physiological forest growth model (3-PG). Simulated productivity and tree height were tested against observations derived from lidar data. The effects on photosynthetic gas exchange of the dramatic elevational variations in T and VPD largely cancelled, as higher temperature (increasing productivity) accompanies higher VPD (reducing productivity). Although soil moisture was not measured, the simulations suggested that realistic elevational variations in soil moisture predicted the observed decline in productivity with elevation. Therefore, in this watershed, the model parameterization should have emphasized soil moisture rather than precise descriptions of temperature inversions.
An algorithm for deriving core magnetic field models from the Swarm data set
NASA Astrophysics Data System (ADS)
Rother, Martin; Lesur, Vincent; Schachtschneider, Reyko
2013-11-01
In view of an optimal exploitation of the Swarm data set, we have prepared and tested software dedicated to the determination of accurate core magnetic field models and of the Euler angles between the magnetic sensors and the satellite reference frame. The dedicated core field model estimation is derived directly from the GFZ Reference Internal Magnetic Model (GRIMM) inversion and modeling family. The data selection techniques and the model parameterizations are similar to those used for the derivation of the second (Lesur et al., 2010) and third versions of GRIMM, although the use of observatory data is not planned in the framework of the application to Swarm. The regularization technique applied during the inversion process smooths the magnetic field model in time. The algorithm to estimate the Euler angles is also derived from the CHAMP studies. The inversion scheme includes Euler angle determination with a quaternion representation for describing the rotations. It has been built to handle possible weak time variations of these angles. The modeling approach and software have been initially validated on a simple, noise-free, synthetic data set and on CHAMP vector magnetic field measurements. We present results of test runs applied to the synthetic Swarm test data set.
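The quaternion representation of rotations mentioned above avoids the singularities of Euler angles. As a minimal sketch (this is the standard quaternion-to-matrix conversion, not the GRIMM/Swarm code), a unit quaternion q = (w, x, y, z) maps to a rotation matrix as follows:

```python
import numpy as np

def quat_to_matrix(q):
    """Rotation matrix from a unit quaternion q = (w, x, y, z)."""
    w, x, y, z = q / np.linalg.norm(q)
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

# 90-degree rotation about the z axis: q = (cos(theta/2), 0, 0, sin(theta/2))
theta = np.pi / 2
q = np.array([np.cos(theta / 2), 0.0, 0.0, np.sin(theta / 2)])
R = quat_to_matrix(q)
print(np.round(R @ np.array([1.0, 0.0, 0.0]), 6))   # ~ (0, 1, 0): x axis maps to y axis
```

Because a quaternion has four smoothly varying components, weak time variations of the sensor-to-spacecraft rotation can be parameterized without the gimbal-lock discontinuities that an Euler-angle time series can suffer.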
Electron Impact Ionization: A New Parameterization for 100 eV to 1 MeV Electrons
NASA Technical Reports Server (NTRS)
Fang, Xiaohua; Randall, Cora E.; Lummerzheim, Dirk; Solomon, Stanley C.; Mills, Michael J.; Marsh, Daniel; Jackman, Charles H.; Wang, Wenbin; Lu, Gang
2008-01-01
Low, medium and high energy electrons can penetrate to the thermosphere (90-400 km; 55-240 miles) and mesosphere (50-90 km; 30-55 miles). These precipitating electrons ionize that region of the atmosphere, creating positively charged atoms and molecules and knocking off other negatively charged electrons. The precipitating electrons also create nitrogen-containing compounds along with other constituents. Since the electron precipitation amounts change within minutes, it is necessary to have a rapid method of computing the ionization and production of nitrogen-containing compounds for inclusion in computationally-demanding global models. A new methodology has been developed, which has parameterized a more detailed model computation of the ionizing impact of precipitating electrons over the very large range of 100 eV up to 1,000,000 eV. This new parameterization method is more accurate than a previous parameterization scheme, when compared with the more detailed model computation. Global models at the National Center for Atmospheric Research will use this new parameterization method in the near future.
Anisotropic shear dispersion parameterization for ocean eddy transport
NASA Astrophysics Data System (ADS)
Reckinger, Scott; Fox-Kemper, Baylor
2015-11-01
The effects of mesoscale eddies are universally treated isotropically in global ocean general circulation models. However, observations and simulations demonstrate that the mesoscale processes that the parameterization is intended to represent, such as shear dispersion, are typified by strong anisotropy. We extend the Gent-McWilliams/Redi mesoscale eddy parameterization to include anisotropy and test the effects of varying levels of anisotropy in 1-degree Community Earth System Model (CESM) simulations. Anisotropy has many effects on the simulated climate, including a reduction of temperature and salinity biases, a deepening of the southern ocean mixed-layer depth, impacts on the meridional overturning circulation and ocean energy and tracer uptake, and improved ventilation of biogeochemical tracers, particularly in oxygen minimum zones. A process-based parameterization to approximate the effects of unresolved shear dispersion is also used to set the strength and direction of anisotropy. The shear dispersion parameterization is similar to drifter observations in spatial distribution of diffusivity and high-resolution model diagnosis in the distribution of eddy flux orientation.
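The anisotropic extension described above amounts to replacing a scalar eddy diffusivity with a tensor whose major axis is set by a preferred direction such as the flow. A minimal 2-D numpy sketch follows; the function name and the diffusivity magnitudes are invented, and the actual CESM GM/Redi implementation is of course far more involved.

```python
import numpy as np

def anisotropic_diffusivity(u, kappa_major, kappa_minor):
    """2x2 eddy diffusivity tensor with its major axis along the velocity u."""
    e = u / np.linalg.norm(u)                 # unit vector along the flow
    P_par = np.outer(e, e)                    # projector along the flow
    P_perp = np.eye(2) - P_par                # projector across the flow
    return kappa_major * P_par + kappa_minor * P_perp

# Example: flow direction (3, 4), strong mixing along it, weak across it (m^2/s)
K = anisotropic_diffusivity(np.array([3.0, 4.0]),
                            kappa_major=2000.0, kappa_minor=500.0)
evals = np.linalg.eigvalsh(K)
print(evals)   # eigenvalues are kappa_minor and kappa_major: 500 and 2000
```

Setting `kappa_major = kappa_minor` recovers the usual isotropic case, so a shear-dispersion closure only needs to supply the ratio and the orientation.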
NASA Astrophysics Data System (ADS)
Verrelst, J.; Rivera, J. P.; Leonenko, G.; Alonso, L.; Moreno, J.
2012-04-01
Radiative transfer (RT) modeling plays a key role in earth observation (EO) because it is needed to design EO instruments and to develop and test inversion algorithms. The inversion of an RT model is considered a successful approach for the retrieval of biophysical parameters because it is physically based and generally applicable. However, the broader community considers this approach laborious because of its many processing steps, and expert knowledge is required to realize precise model parameterization. We have recently developed the radiative transfer toolbox ARTMO (Automated Radiative Transfer Models Operator) with the purpose of providing, in a graphical user interface (GUI), the essential models and tools required for terrestrial EO applications such as model inversion. In short, the toolbox allows the user: i) to choose between various plant leaf and canopy RT models (e.g. models from the PROSPECT and SAIL family, FLIGHT), ii) to choose between spectral band settings of various air- and space-borne sensors or to define custom sensor settings, iii) to simulate a massive number of spectra based on a look-up table (LUT) approach and store them in a relational database, iv) to plot spectra of multiple models and compare them with measured spectra, and finally, v) to run model inversion against optical imagery given several cost options and accuracy estimates. In this work ARTMO was used to tackle some well-known problems related to model inversion. According to the Hadamard conditions, mathematical models of physical phenomena are invertible if the solution of the inverse problem exists, is unique, and depends continuously on the data. This assumption is not always met because of the large number of unknowns, and different strategies have been proposed to overcome this problem. Several of these strategies have been implemented in ARTMO and were analyzed here to optimize inversion performance.
Data came from the SPARC-2003 dataset, which was acquired at the agricultural test site of Barrax, Spain. LUTs were created using the 4SAIL and FLIGHT models and were inverted against CHRIS data in order to retrieve maps of chlorophyll content (Chl) and leaf area index (LAI). The following inversion steps were optimized: 1. Cost function. The performances of about 50 different cost functions (i.e. minimum distance functions) were compared. Remarkably, in none of the studied cases did the widely used root mean square error (RMSE) lead to the most accurate results. Depending on the retrieved parameter, more successful functions were: 'Sharma and Mittal', 'Shannon's entropy', 'Hellinger distance', and 'Pearson's chi-square'. 2. Gaussian noise. Earth observation data typically encompass a certain degree of noise due to errors in radiometric and geometric processing. In all cases, adding 5% Gaussian noise to the simulated spectra led to more accurate retrievals than adding none. 3. Average of multiple best solutions. Because multiple parameter combinations may lead to the same spectra, a way to overcome this problem is to search not for the single best match but for a percentage of best matches. Optimized retrievals were encountered when averaging the 7% (Chl) to 10% (LAI) top best matches. 4. Integration of estimates. The option is provided to integrate estimates of biochemical contents at the canopy level (e.g., total chlorophyll: Chl × LAI, or water: Cw × LAI), which can lead to increased robustness and accuracy. 5. Class-based inversion. This option is probably ARTMO's most powerful feature, as it allows model parameterization depending on the image's land cover classes (e.g. different soil or vegetation types). Class-based inversion can lead to considerably improved accuracies compared to one generic class. Results suggest that 4SAIL and FLIGHT performed alike for Chl but not for LAI.
While both models rely on the leaf model PROSPECT for Chl retrieval, their different nature (e.g. numerical vs. ray tracing) may cause retrievals of structural parameters such as LAI to differ. Finally, it should be noted that the whole analysis can be performed intuitively within the toolbox. ARTMO is freely available to the EO community for further development. Expressions of interest are welcome and should be directed to the corresponding author.
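Steps 2 and 3 of the inversion recipe above (noise-regularized LUT matching and averaging a percentage of best matches) can be sketched end to end. Everything below is illustrative: the two-parameter `toy_rt` forward model stands in for a real RT model such as 4SAIL, and the grid, noise level, and truth values are assumptions; only the mechanics of the LUT inversion follow the abstract.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy forward model standing in for an RT model: reflectance from (chl, lai).
wavelengths = np.linspace(400, 900, 50)
def toy_rt(chl, lai):
    return (np.exp(-chl / 60.0) * 0.3
            + 0.4 * (1 - np.exp(-0.5 * lai)) * (wavelengths / 900.0))

# Build the look-up table over a parameter grid.
chl_grid = np.linspace(10, 80, 40)
lai_grid = np.linspace(0.5, 6.0, 40)
params = np.array([(c, l) for c in chl_grid for l in lai_grid])
lut = np.array([toy_rt(c, l) for c, l in params])

# Step 2: add Gaussian noise to the simulated LUT spectra (~5% per the abstract).
lut_noisy = lut * (1 + 0.05 * rng.normal(size=lut.shape))

# "Measurement" to invert, and an RMSE cost over all LUT entries.
obs = toy_rt(45.0, 3.0)
rmse = np.sqrt(np.mean((lut_noisy - obs) ** 2, axis=1))

# Step 3: average the top 7% best matches instead of taking the single minimum.
k = int(0.07 * len(params))
best = np.argsort(rmse)[:k]
chl_est, lai_est = params[best].mean(axis=0)
print(round(chl_est, 1), round(lai_est, 2))
```

Swapping the RMSE line for another distance (e.g. Hellinger) is exactly the "cost function" axis the study explores; the rest of the pipeline is unchanged.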
NASA Astrophysics Data System (ADS)
Zhou, Bing; Greenhalgh, S. A.
2011-10-01
2.5-D modeling and inversion techniques are much closer to reality than the simple and traditional 2-D seismic wave modeling and inversion. The sensitivity kernels required in full waveform seismic tomographic inversion are the Fréchet derivatives of the displacement vector with respect to the independent anisotropic model parameters of the subsurface. They give the sensitivity of the seismograms to changes in the model parameters. This paper applies two methods, called `the perturbation method' and `the matrix method', to derive the sensitivity kernels for 2.5-D seismic waveform inversion. We show that the two methods yield the same explicit expressions for the Fréchet derivatives using a constant-block model parameterization, and are available for both the line-source (2-D) and the point-source (2.5-D) cases. The method involves two Green's function vectors and their gradients, as well as the derivatives of the elastic modulus tensor with respect to the independent model parameters. The two Green's function vectors are the responses of the displacement vector to the two directed unit vectors located at the source and geophone positions, respectively; they can be generally obtained by numerical methods. The gradients of the Green's function vectors may be approximated in the same manner as the differential computations in the forward modeling. The derivatives of the elastic modulus tensor with respect to the independent model parameters can be obtained analytically, dependent on the class of medium anisotropy. Explicit expressions are given for two special cases—isotropic and tilted transversely isotropic (TTI) media. Numerical examples are given for the latter case, which involves five independent elastic moduli (or Thomsen parameters) plus one angle defining the symmetry axis.
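Schematically, sensitivity kernels of this kind take the standard Born/adjoint form built from the two Green's function vectors. In the abstract's notation, with C the elastic modulus tensor and m_alpha an independent model parameter, a hedged rendering (the exact sign and weighting conventions are those of the paper, not reproduced here) is:

```latex
\frac{\partial u_i(\mathbf{x}_g)}{\partial m_\alpha}
  \;=\; -\int_V \nabla \mathbf{G}^{g}_{i}(\mathbf{x};\mathbf{x}_g)
        : \frac{\partial \mathbf{C}(\mathbf{x})}{\partial m_\alpha}
        : \nabla \mathbf{G}^{s}(\mathbf{x};\mathbf{x}_s)\, \mathrm{d}V
```

Here G^s and G^g are the responses to unit sources at the source and geophone positions, and the derivative of C with respect to m_alpha is the analytically known factor that depends on the class of anisotropy (isotropic, TTI, etc.).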
2014-10-26
From the parameterization results, we extract adaptive and anisotropic T-meshes for further T-spline surface construction. Finally, a gradient flow-based method [7, 12] is used to generate adaptive and anisotropic quadrilateral meshes, which can serve as the control mesh for high-order T-spline surface construction.
Mixing parametrizations for ocean climate modelling
NASA Astrophysics Data System (ADS)
Gusev, Anatoly; Moshonkin, Sergey; Diansky, Nikolay; Zalesny, Vladimir
2016-04-01
An algorithm is presented for splitting the total evolutionary equations for the turbulence kinetic energy (TKE) and turbulence dissipation frequency (TDF), which are used to parameterize the viscosity and diffusion coefficients in ocean circulation models. The turbulence model equations are split into transport-diffusion and generation-dissipation stages. For the generation-dissipation stage, the following schemes are implemented: an explicit-implicit numerical scheme, the analytical solution, and the asymptotic behavior of the analytical solution. Experiments were performed with different mixing parameterizations for modelling Arctic and Atlantic climate decadal variability with the eddy-permitting circulation model INMOM (Institute of Numerical Mathematics Ocean Model), using vertical grid refinement in the zone of fully developed turbulence. The proposed model with split equations for the turbulence characteristics is similar to contemporary differential turbulence models in its physical formulation, while its algorithm is computationally efficient. Parameterizations using the split turbulence model make it possible to obtain a more adequate structure of temperature and salinity at decadal timescales than the simpler Pacanowski-Philander (PP) turbulence parameterization. Parameterizations using the analytical solution or the numerical scheme at the generation-dissipation step lead to a better representation of the ocean climate than the faster parameterization using the asymptotic behavior of the analytical solution, while the computational efficiency remains almost unchanged relative to the simple PP parameterization. Use of the PP parameterization in the circulation model leads to realistic simulation of density and circulation but with violation of T,S-relationships. This error is largely avoided with the proposed parameterizations containing the split turbulence model.
The high sensitivity of the eddy-permitting circulation model to the definition of mixing is revealed, which is associated with significant changes of the density fields in the upper baroclinic ocean layer over the whole considered area. For instance, use of the turbulence parameterization instead of the PP algorithm increases the circulation velocity in the Gulf Stream and North Atlantic Current, and the subpolar cyclonic gyre in the North Atlantic and the Beaufort Gyre in the Arctic basin are reproduced more realistically. Treating the Prandtl number as a function of the Richardson number significantly improves the modelling quality. The research was supported by the Russian Foundation for Basic Research (grant № 16-05-00534) and the Council on the Russian Federation President Grants (grant № MK-3241.2015.5).
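The three options for the generation-dissipation stage (explicit-implicit scheme, analytical solution, asymptotic limit) can be contrasted on a deliberately simplified stage equation. This toy linearizes the stage as dk/dt = G - L·k with production G and dissipation rate L frozen over the step; the numbers are invented and the INMOM equations are of course coupled and nonlinear.

```python
import numpy as np

# Toy generation-dissipation stage: dk/dt = G - L*k, with G (production)
# and L (dissipation rate) held fixed over one time step.
G, L, k0, dt = 4.0e-4, 2.0e-3, 1.0e-2, 600.0

# (a) analytical solution of the linearized stage
k_exact = G / L + (k0 - G / L) * np.exp(-L * dt)

# (b) explicit-implicit (semi-implicit) scheme: production treated explicitly,
#     dissipation implicitly, which keeps k positive for any dt
k_semi = (k0 + dt * G) / (1.0 + dt * L)

# (c) asymptotic limit L*dt >> 1: k relaxes to the equilibrium value G/L
k_asym = G / L

print(k_exact, k_semi, k_asym)
```

For this step (L·dt = 1.2) the semi-implicit value undershoots the analytical one, and both sit below the asymptotic equilibrium; as L·dt grows, all three converge, which is why the cheap asymptotic variant is usable in strongly turbulent zones.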
NASA Astrophysics Data System (ADS)
Madi, Raneem; Huibert de Rooij, Gerrit; Mielenz, Henrike; Mai, Juliane
2018-02-01
Few parametric expressions for the soil water retention curve are suitable for dry conditions. Furthermore, expressions for the soil hydraulic conductivity curves associated with parametric retention functions can behave unrealistically near saturation. We developed a general criterion for water retention parameterizations that ensures physically plausible conductivity curves. Only 3 of the 18 tested parameterizations met this criterion without restrictions on the parameters of a popular conductivity curve parameterization; a fourth required one parameter to be fixed. We estimated parameters by shuffled complex evolution (SCE), with the objective function tailored to the various observation methods used to obtain retention curve data. We fitted the four parameterizations with physically plausible conductivities, as well as the most widely used parameterization. The performance of the resulting 12 combinations of retention and conductivity curves was assessed in a numerical study with 751 days of semiarid atmospheric forcing applied to unvegetated, uniform, 1 m freely draining columns for four textures. The choice of parameterization had a minor effect on evaporation, but cumulative bottom fluxes varied between parameterizations by up to an order of magnitude. This highlights the need for a careful selection of the soil hydraulic parameterization, one that ideally relies not only on goodness of fit to static soil water retention data but also on hydraulic conductivity measurements. Parameter fits for 21 soils showed that extrapolations into the dry range of the retention curve often became physically more realistic when the parameterization had a logarithmic dry branch, particularly in fine-textured soils where high residual water contents would otherwise be fitted.
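The abstract's criterion itself is not stated here, but the kind of conductivity curve it concerns can be illustrated with the most common retention-conductivity pair: the closed-form Mualem conductivity derived from the van Genuchten retention curve. The checks below only look at K(Se); the known pathology in this family, an unbounded slope dK/dh near saturation for small n, is not visible in K(Se) alone, which is why a criterion on the retention parameterization is needed.

```python
import numpy as np

def vgm_kr(Se, n):
    """Closed-form Mualem relative conductivity for the van Genuchten
    retention curve (tortuosity exponent 0.5, m = 1 - 1/n)."""
    m = 1.0 - 1.0 / n
    return np.sqrt(Se) * (1.0 - (1.0 - Se ** (1.0 / m)) ** m) ** 2

Se = np.linspace(1e-6, 1.0, 1000)
for n in (1.2, 2.0, 4.0):
    kr = vgm_kr(Se, n)
    # Basic plausibility in saturation space: bounded by 1 and monotone.
    assert kr[-1] <= 1.0 + 1e-12
    assert np.all(np.diff(kr) >= -1e-12)

print(vgm_kr(np.array([1.0]), 2.0))   # → [1.]
```

Fitting such a curve against measured conductivities, not just retention data, is exactly the selection practice the study argues for.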
How certain are the process parameterizations in our models?
NASA Astrophysics Data System (ADS)
Gharari, Shervan; Hrachowitz, Markus; Fenicia, Fabrizio; Matgen, Patrick; Razavi, Saman; Savenije, Hubert; Gupta, Hoshin; Wheater, Howard
2016-04-01
Environmental models are abstract simplifications of real systems. As a result, the elements of these models, including the system architecture (structure), process parameterizations, and parameters, inherit a high level of approximation and simplification. In a conventional model-building exercise the parameter values are the only elements of a model that can vary, while the remaining modeling elements are fixed a priori and therefore not subject to change. Once chosen, the process parameterization and model structure usually remain the same throughout the modeling process; the only flexibility comes from the changing parameter values, which enable these models to reproduce the desired observations. This part of modeling practice, parameter identification and uncertainty, has attracted significant attention in the literature in recent years. What remains unexplored, in our view, is to what extent the process parameterization and system architecture (model structure) can support each other. In other words: does a specific form of process parameterization emerge for a specific model, given its system architecture and data, when little or no assumption is made about the process parameterization itself? In this study we relax the assumption of a specific pre-determined form for the process parameterizations of a rainfall/runoff model and examine how varying the complexity of the system architecture can lead to different, or possibly contradictory, parameterization forms than would have been decided otherwise. This comparison implicitly and explicitly provides an assessment of how uncertain our perception of model process parameterization is, relative to the extent to which the data can support it.
Sensitivity of Pacific Cold Tongue and Double-ITCZ Bias to Convective Parameterization
NASA Astrophysics Data System (ADS)
Woelfle, M.; Bretherton, C. S.; Pritchard, M. S.; Yu, S.
2016-12-01
Many global climate models struggle to accurately simulate annual mean precipitation and sea surface temperature (SST) fields in the tropical Pacific basin. Precipitation biases are dominated by the double intertropical convergence zone (ITCZ) bias, in which models exhibit precipitation maxima straddling the equator while only a single Northern Hemispheric maximum exists in observations. The major SST bias is the enhancement of the equatorial cold tongue. A series of coupled model simulations are used to investigate the sensitivity of the bias development to convective parameterization. Model components are initialized independently prior to coupling to allow analysis of the transient response of the system directly following coupling. These experiments show precipitation and SST patterns to be highly sensitive to convective parameterization. Simulations in which the deep convective parameterization is disabled, forcing all convection to be resolved by the shallow convection parameterization, showed a degradation in both the cold tongue and double-ITCZ biases as precipitation becomes focused into off-equatorial regions of local SST maxima. Simulations using superparameterization in place of traditional cloud parameterizations showed a reduced cold tongue bias at the expense of additional precipitation biases. The equatorial SST responses to changes in convective parameterization are driven by changes in near-equatorial zonal wind stress. The sensitivity of convection to SST is important in determining the precipitation and wind stress fields; however, differences in convective momentum transport also play a role. While no significant improvement in the double ITCZ is seen in these simulations, the system's sensitivity to these changes reaffirms that improved convective parameterizations may provide an avenue for improving simulations of tropical Pacific precipitation and SST.
Dynamically consistent parameterization of mesoscale eddies. Part III: Deterministic approach
NASA Astrophysics Data System (ADS)
Berloff, Pavel
2018-07-01
This work continues the development of dynamically consistent parameterizations for representing mesoscale eddy effects in non-eddy-resolving and eddy-permitting ocean circulation models. It focuses on the classical double-gyre problem, in which the main dynamic eddy effects maintain the eastward jet extension of the western boundary currents and its adjacent recirculation zones via the eddy backscatter mechanism. Despite its fundamental importance, this mechanism remains poorly understood; in this paper we first study it and then propose and test a novel parameterization of it. We start by decomposing the reference eddy-resolving flow solution into large-scale and eddy components defined by spatial filtering, rather than by the Reynolds decomposition. Next, we find that the eastward jet and its recirculations are robustly present not only in the large-scale flow itself, but also in the rectified time-mean eddies and in the transient rectified eddy component, which consists of highly anisotropic ribbons of opposite-sign potential vorticity anomalies straddling the instantaneous eastward jet core and responsible for its continuous amplification. The transient rectified component is separated from the flow by a novel remapping method. We hypothesize that the above three components of the eastward jet are ultimately driven by the small-scale transient eddy forcing via the eddy backscatter mechanism, rather than by the mean eddy forcing and large-scale nonlinearities. We verify this hypothesis by progressively turning down the backscatter and observing the induced flow anomalies. The backscatter analysis leads us to formulate the key eddy parameterization hypothesis: in an eddy-permitting model, the at least partially resolved eddy backscatter can be significantly amplified to improve the flow solution.
Such amplification is a simple and novel eddy parameterization framework, implemented here in terms of local, deterministic flow roughening controlled by a single parameter. We test the parameterization skill in a hierarchy of non-eddy-resolving and eddy-permitting modifications of the original model and demonstrate that it can indeed be highly efficient for restoring the eastward jet extension and its adjacent recirculation zones. The new deterministic parameterization framework not only combines remarkable simplicity with good performance but is also dynamically transparent; it therefore provides a powerful alternative to the common eddy diffusion and emerging stochastic parameterizations.
High Resolution Electro-Optical Aerosol Phase Function Database PFNDAT2006
2006-08-01
Excerpt: snow models use the gamma distribution with m = 0. The most widely used analytical parameterization for the raindrop size distribution is that of Uijlenhoet and Stricker, obtained from an analytical derivation based on a theoretical parameterization of the raindrop size distribution. The report also documents other particle size distribution models.
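The gamma drop-size parameterization referenced in this record can be sketched numerically. The parameter values below (intercept N0, shape m, slope lambda) are illustrative defaults, not values taken from the report:

```python
import math

def gamma_dsd(d_mm, n0=8000.0, shape_m=0.0, lam=2.0):
    """Gamma drop-size distribution N(D) = N0 * D^m * exp(-lambda * D).

    With shape_m = 0 this reduces to the exponential (Marshall-Palmer-type)
    form, as in the snow model mentioned above. d_mm is the drop diameter
    in mm; the return value is a concentration density (m^-3 mm^-1).
    """
    return n0 * d_mm**shape_m * math.exp(-lam * d_mm)

def total_concentration(n0=8000.0, lam=2.0, shape_m=0.0):
    """Total number concentration, integrating N(D) over all diameters:
    N0 * Gamma(m + 1) / lambda^(m + 1); for m = 0 this is N0 / lambda."""
    return n0 * math.gamma(shape_m + 1.0) / lam**(shape_m + 1.0)
```

For the default (exponential) case, the total concentration is simply N0/lambda, which makes a quick sanity check easy.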
A Thermal Infrared Radiation Parameterization for Atmospheric Studies
NASA Technical Reports Server (NTRS)
Chou, Ming-Dah; Suarez, Max J.; Liang, Xin-Zhong; Yan, Michael M.-H.; Cote, Charles (Technical Monitor)
2001-01-01
This technical memorandum documents the longwave radiation parameterization developed at the Climate and Radiation Branch, NASA Goddard Space Flight Center, for a wide variety of weather and climate applications. Based on the 1996 version of the Air Force Geophysical Laboratory HITRAN data, the parameterization includes the absorption due to the major gaseous absorbers (water vapor, CO2, O3) and most of the minor trace gases (N2O, CH4, CFCs), as well as clouds and aerosols. The thermal infrared spectrum is divided into nine bands. To achieve a high degree of accuracy and speed, various approaches to computing the transmission function are applied to different spectral bands and gases. The gaseous transmission function is computed using either the k-distribution method or a table look-up method. To include the effect of scattering by clouds and aerosols, the optical thickness is scaled by the single-scattering albedo and asymmetry factor. The parameterization can compute fluxes to within 1% of high-spectral-resolution line-by-line calculations. The cooling rate can be accurately computed in the region extending from the surface to the 0.01-hPa level.
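The k-distribution method mentioned above replaces a line-by-line spectral integral with a short quadrature sum over representative absorption coefficients. A minimal sketch, with hypothetical k points and weights (not the memorandum's values):

```python
import math

def band_transmittance(absorber_amount, k_points, weights):
    """k-distribution band transmittance: T(u) = sum_i w_i * exp(-k_i * u).

    k_points are representative absorption coefficients for one band and
    weights are quadrature weights summing to 1. Instead of integrating
    over thousands of spectral lines, a handful of (k, w) pairs suffices.
    """
    return sum(w * math.exp(-k * absorber_amount)
               for k, w in zip(k_points, weights))

k_points = [0.1, 1.0, 10.0]   # hypothetical k values for one band
weights = [0.6, 0.3, 0.1]     # hypothetical quadrature weights
```

With zero absorber the transmittance is exactly 1, and it decays monotonically as the absorber amount grows, which is the behavior a band model must reproduce.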
Seafloor age dependence of Rayleigh wave phase velocities in the Indian Ocean
NASA Astrophysics Data System (ADS)
Godfrey, Karen E.; Dalton, Colleen A.; Ritsema, Jeroen
2017-05-01
Variations in the phase velocity of fundamental-mode Rayleigh waves across the Indian Ocean are determined using two inversion approaches. First, variations in phase velocity as a function of seafloor age are estimated using a pure-path age-dependent inversion method. Second, a two-dimensional parameterization is used to solve for phase velocity within 1.25° × 1.25° grid cells. Rayleigh wave travel time delays have been measured between periods of 38 and 200 s. The number of measurements in the study area ranges between 4139 paths at a period of 200 s and 22,272 paths at a period of 40 s. At periods < 100 s, the phase velocity variations are strongly controlled by seafloor age and are shown to be consistent with temperature variations predicted by the half-space-cooling model for a mantle potential temperature of 1400°C. The inferred thermal structure beneath the Indian Ocean is most similar to the structure of the Pacific upper mantle, where phase velocities can also be explained by a half-space-cooling model. The thermal structure is not consistent with that of the Atlantic upper mantle, which is best fit by a plate-cooling model and requires a thin plate. Removing age-dependent phase velocity from the 2-D maps of the Indian Ocean highlights anomalously high velocities at the Rodriguez Triple Junction and the Australian-Antarctic Discordance and anomalously low velocities immediately to the west of the Central Indian Ridge.
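The half-space-cooling model invoked above has a simple closed form, T(z, t) = Ts + (Tm - Ts) erf(z / (2 sqrt(kappa t))). A sketch assuming the study's 1400°C mantle potential temperature and a typical thermal diffusivity of 1e-6 m²/s (the diffusivity is an assumption, not a value from the paper):

```python
import math

def halfspace_temperature(z_m, age_s, t_surface=0.0, t_mantle=1400.0,
                          kappa=1e-6):
    """Half-space-cooling temperature (deg C) at depth z (m) beneath
    seafloor of age t (s): T = Ts + (Tm - Ts) * erf(z / (2*sqrt(kappa*t)))."""
    return t_surface + (t_mantle - t_surface) * math.erf(
        z_m / (2.0 * math.sqrt(kappa * age_s)))

SECONDS_PER_MYR = 3.156e13

# e.g. temperature at 50 km depth beneath 50-Myr-old seafloor
t_50 = halfspace_temperature(50e3, 50 * SECONDS_PER_MYR)
```

At fixed depth, older seafloor is colder, which is exactly the age dependence the pure-path inversion exploits.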
On the importance of geological data for hydraulic tomography analysis: Laboratory sandbox study
NASA Astrophysics Data System (ADS)
Zhao, Zhanfeng; Illman, Walter A.; Berg, Steven J.
2016-11-01
This paper investigates the importance of geological data in hydraulic tomography (HT) through sandbox experiments. In particular, four groundwater models with homogeneous geological units constructed from borehole data of varying accuracy are jointly calibrated with multiple pumping-test data at two different pumping and observation densities. The results are compared to those from a geostatistical inverse model. Model calibration and validation performances are quantitatively assessed using drawdown scatterplots. We find that both accurate and inaccurate geological models can be well calibrated, although the estimated K values for the poor geological models are quite different from the actual values. Model validation results reveal that inaccurate geological models yield poor drawdown predictions, but using more calibration data improves their predictive capability. Moreover, comparisons between a highly parameterized geostatistical model and layer-based geological models show that (1) as the number of pumping tests and monitoring locations is reduced, the performance gap between the approaches decreases, and (2) a simplified geological model with fewer layers is more reliable than one based on a wrong description of stratigraphy. Finally, using a geological model as prior information in geostatistical inverse models results in the preservation of geological features, especially in areas where drawdown data are not available. Overall, our sandbox results emphasize the importance of incorporating geological data in HT surveys when data from pumping tests are sparse. These findings have important implications for field applications of HT where well distances are large.
Doherty, John E.; Fienen, Michael N.; Hunt, Randall J.
2011-01-01
Pilot points have been used in geophysics and hydrogeology for at least 30 years as a means to bridge the gap between estimating a parameter value in every cell of a model and subdividing models into a small number of homogeneous zones. Pilot points serve as surrogate parameters at which values are estimated in the inverse-modeling process, and their values are interpolated onto the modeling domain in such a way that heterogeneity can be represented at a much lower computational cost than trying to estimate parameters in every cell of a model. Although the use of pilot points is increasingly common, there are few works documenting the mathematical implications of their use and even fewer sources of guidelines for their implementation in hydrogeologic modeling studies. This report describes the mathematics of pilot-point use, provides guidelines for their use in the parameter-estimation software suite (PEST), and outlines several research directions. Two key attributes for pilot-point definitions are highlighted. First, the difference between the information contained in the every-cell parameter field and the surrogate parameter field created using pilot points should be in the realm of parameters which are not informed by the observed data (the null space). Second, the interpolation scheme for projecting pilot-point values onto model cells ideally should be orthogonal. These attributes are informed by the mathematics and have important ramifications for both the guidelines and suggestions for future research.
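The projection of pilot-point values onto model cells can be illustrated with a simple interpolator. Inverse-distance weighting is used here purely as a stand-in (PEST workflows commonly use kriging, and the report argues for interpolation schemes that are as close to orthogonal as possible); all names and values are illustrative:

```python
def idw_interpolate(grid_xy, pilot_xy, pilot_vals, power=2.0):
    """Project pilot-point values onto model cells by inverse-distance
    weighting. Each cell value is a weighted average of the pilot-point
    values, with weights 1/d^power; a cell that coincides with a pilot
    point takes that point's value exactly."""
    field = []
    for gx, gy in grid_xy:
        num = den = 0.0
        for (px, py), v in zip(pilot_xy, pilot_vals):
            d2 = (gx - px) ** 2 + (gy - py) ** 2
            if d2 == 0.0:            # cell sits on a pilot point
                num, den = v, 1.0
                break
            w = 1.0 / d2 ** (power / 2.0)
            num += w * v
            den += w
        field.append(num / den)
    return field

pilots = [(0.0, 0.0), (10.0, 0.0)]
vals = [1.0, 3.0]                    # e.g. log10(K) at each pilot point
cells = [(0.0, 0.0), (5.0, 0.0), (10.0, 0.0)]
field = idw_interpolate(cells, pilots, vals)
```

In an inversion, only the pilot-point values are estimated; the interpolated field is what the groundwater model actually sees, which is how heterogeneity is represented at far lower cost than estimating every cell.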
Impacts of Light Use Efficiency and fPAR Parameterization on Gross Primary Production Modeling
NASA Technical Reports Server (NTRS)
Cheng, Yen-Ben; Zhang, Qingyuan; Lyapustin, Alexei I.; Wang, Yujie; Middleton, Elizabeth M.
2014-01-01
This study examines the impact of the parameterization of two variables, light use efficiency (LUE) and the fraction of absorbed photosynthetically active radiation (fPAR or fAPAR), on gross primary production (GPP) modeling. Carbon sequestration by terrestrial plants is a key factor in a comprehensive understanding of the carbon budget at the global scale. In this context, accurate measurements and estimates of GPP will allow us to achieve improved carbon monitoring and to quantitatively assess impacts from climate changes and human activities. Spaceborne remote sensing observations can provide a variety of land surface parameterizations for modeling photosynthetic activities at various spatial and temporal scales. This study utilizes a simple GPP model based on the LUE concept, together with different land surface parameterizations, to evaluate the model and monitor GPP. Two maize-soybean rotation fields in Nebraska, USA and the Bartlett Experimental Forest in New Hampshire, USA were selected for study. Tower-based eddy-covariance carbon exchange and PAR measurements were collected from the FLUXNET Synthesis Dataset. For the model parameterization, we utilized different values of LUE and fPAR derived from various algorithms. We adapted the approach and parameters from the MODIS MOD17 Biome Properties Look-Up Table (BPLUT) to derive LUE. We also used a site-specific analytic approach with tower-based Net Ecosystem Exchange (NEE) and PAR to estimate maximum potential LUE (LUEmax) to derive LUE. For the fPAR parameter, the MODIS MOD15A2 fPAR product was used. We also utilized fAPARchl, a parameter accounting for the fAPAR linked to the chlorophyll-containing canopy fraction. fAPARchl was obtained by inversion of a radiative transfer model, which used the MODIS-based reflectances in bands 1-7 produced by the Multi-Angle Implementation of Atmospheric Correction (MAIAC) algorithm.
fAPARchl exhibited seasonal dynamics more similar to the flux-tower-based GPP than MOD15A2 fPAR, especially in the spring and fall at the agricultural sites. When using the MODIS MOD17-based parameters to estimate LUE, fAPARchl produced better agreement with GPP (r2 = 0.79-0.91) than MOD15A2 fPAR (r2 = 0.57-0.84). However, underestimations of GPP were also observed, especially for the crop fields. When applying the site-specific LUEmax value to estimate in situ LUE, the magnitude of estimated GPP was closer to in situ GPP; this method produced a slight overestimation for the MOD15A2 fPAR at the Bartlett forest. This study highlights the importance of accurate land surface parameterizations for achieving reliable carbon monitoring capabilities from remote sensing information.
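The LUE-based GPP model evaluated above reduces to a product of a few terms. A minimal sketch in the style of MODIS MOD17, where LUEmax is down-regulated by temperature and vapor-pressure-deficit scalars; the parameter values are illustrative, not the study's calibrated values:

```python
def mod17_style_gpp(par, fpar, lue_max, t_scalar=1.0, vpd_scalar=1.0):
    """Light-use-efficiency GPP model in the style of MODIS MOD17:
    GPP = LUEmax * f(T) * f(VPD) * fPAR * PAR.

    t_scalar and vpd_scalar lie in [0, 1] and down-regulate LUEmax under
    cold or dry conditions. Units here: PAR in MJ m-2 d-1, LUEmax in
    g C MJ-1, giving GPP in g C m-2 d-1 (illustrative choices).
    """
    lue = lue_max * t_scalar * vpd_scalar
    return lue * fpar * par

# e.g. PAR = 10 MJ m-2 d-1, fPAR = 0.6, LUEmax = 1.8 g C MJ-1
gpp = mod17_style_gpp(par=10.0, fpar=0.6, lue_max=1.8)
```

Swapping MOD15A2 fPAR for fAPARchl in the `fpar` argument is exactly the comparison the study performs: the model structure stays fixed and only the land surface parameterization changes.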
NASA Technical Reports Server (NTRS)
Chao, Winston C.; Chen, Baode; Einaudi, Franco (Technical Monitor)
2000-01-01
Chao's numerical and theoretical work on multiple quasi-equilibria of the intertropical convergence zone (ITCZ) and the origin of monsoon onset is extended to solve two additional puzzles. One is the highly nonlinear dependence on latitude of the "force" acting on the ITCZ due to Earth's rotation, which makes the multiple quasi-equilibria of the ITCZ and monsoon onset possible. The other is the dramatic difference in this dependence when different cumulus parameterization schemes are used in a model. Such a difference can lead to a switch between a single ITCZ at the equator and a double ITCZ when a different cumulus parameterization scheme is used. Sometimes one branch of the double ITCZ can diminish and only the other remains, but this can still mean different latitudinal locations for the single ITCZ. A single idea, based on two off-equator attractors for the ITCZ that are due to Earth's rotation and symmetric with respect to the equator, together with the dependence of the strength and size of these attractors on the cumulus parameterization scheme, solves both puzzles. The origin of these rotational attractors, explained in Part I, is further discussed. The "force" acting on the ITCZ due to Earth's rotation is the sum of the "forces" of the two attractors. Each attractor exerts on the ITCZ a "force" of simple shape in latitude, but the sum gives a shape that varies strongly with latitude. The strength and the domain of influence of each attractor also vary when the cumulus parameterization is changed. This gives rise to the high sensitivity of the "force" shape to the cumulus parameterization. Numerical results from experiments using Goddard's GEOS general circulation model that support this idea are presented. It is also found that the model results are sensitive to changes outside of the cumulus parameterization. The significance of this study for El Niño forecasting, and for tropical forecasting in general, is discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ren, Huiying; Hou, Zhangshuan; Huang, Maoyi
The Community Land Model (CLM) represents physical, chemical, and biological processes of the terrestrial ecosystems that interact with climate across a range of spatial and temporal scales. As CLM includes numerous sub-models and associated parameters, the high-dimensional parameter space presents a formidable challenge for quantifying uncertainty and improving the Earth system predictions needed to assess environmental changes and risks. This study aims to evaluate the potential of transferring hydrologic model parameters in CLM through sensitivity analyses and classification across watersheds from the Model Parameter Estimation Experiment (MOPEX) in the United States. The sensitivity of CLM-simulated water and energy fluxes to hydrological parameters across 431 MOPEX basins is first examined using an efficient stochastic sampling-based sensitivity analysis approach. Linear, interaction, and high-order nonlinear impacts are all identified via statistical tests and stepwise backward-removal parameter screening. The basins are then classified according to their parameter sensitivity patterns (internal attributes) and, separately, according to their hydrologic indices/attributes (external hydrologic factors), using a principal component analysis (PCA) and expectation-maximization (EM)-based clustering approach. Similarities and differences among the parameter sensitivity-based classification system (S-Class), the hydrologic indices-based classification (H-Class), and the Köppen climate classification system (K-Class) are discussed. Within each S-Class, which groups basins with similar parameter sensitivity characteristics, similar inversion modeling setups can be used for parameter calibration, and the parameters and their contribution or significance to water and energy cycling may also be more transferable. This classification study provides guidance on identifiable parameters and on parameterization and inverse model design for CLM, but the methodology is applicable to other models.
Inverting parameters at representative sites belonging to the same class can significantly reduce parameter calibration efforts.
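The classification step described above (PCA followed by clustering of sensitivity patterns) can be sketched in pure Python. Power iteration stands in for a full PCA, and a two-cluster k-means stands in for the EM-based clustering used in the study; the data and all names are illustrative:

```python
import math

def first_pc(data):
    """First principal component of mean-centered data via power iteration
    (a lightweight stand-in for the PCA step). Returns the component
    direction and the score of each row along it."""
    n, p = len(data), len(data[0])
    means = [sum(row[j] for row in data) / n for j in range(p)]
    x = [[row[j] - means[j] for j in range(p)] for row in data]
    # sample covariance matrix
    c = [[sum(x[i][a] * x[i][b] for i in range(n)) / (n - 1)
          for b in range(p)] for a in range(p)]
    v = [1.0] * p
    for _ in range(200):                     # power iteration
        w = [sum(c[a][b] * v[b] for b in range(p)) for a in range(p)]
        norm = math.sqrt(sum(t * t for t in w))
        v = [t / norm for t in w]
    scores = [sum(x[i][j] * v[j] for j in range(p)) for i in range(n)]
    return v, scores

def two_means_1d(scores, iters=50):
    """Cluster 1-D PC scores into two classes; k-means here is a simple
    stand-in for the EM-based clustering used in the study."""
    c0, c1 = min(scores), max(scores)
    for _ in range(iters):
        g0 = [s for s in scores if abs(s - c0) <= abs(s - c1)]
        g1 = [s for s in scores if abs(s - c0) > abs(s - c1)]
        c0, c1 = sum(g0) / len(g0), sum(g1) / len(g1)
    return [0 if abs(s - c0) <= abs(s - c1) else 1 for s in scores]
```

Each row of `data` would hold one basin's parameter sensitivity indices; basins assigned to the same cluster would then share an inversion setup, which is the transferability argument the abstract makes.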
NASA Astrophysics Data System (ADS)
Dalla Valle, Nicolas; Wutzler, Thomas; Meyer, Stefanie; Potthast, Karin; Michalzik, Beate
2017-04-01
Dual-permeability type models are widely used to simulate water fluxes and solute transport in structured soils. These models contain two spatially overlapping flow domains with different parameterizations or even entirely different conceptual descriptions of flow processes. They are usually able to capture preferential flow phenomena, but a large set of parameters is needed, which are very laborious to obtain or cannot be measured at all. Therefore, model inversions are often used to derive the necessary parameters. Although these require sufficient input data themselves, they can use measurements of state variables instead, which are often easier to obtain and can be monitored by automated measurement systems. In this work we show a method to estimate soil hydraulic parameters from high frequency soil moisture time series data gathered at two different measurement depths by inversion of a simple one dimensional dual-permeability model. The model uses an advection equation based on the kinematic wave theory to describe the flow in the fracture domain and a Richards equation for the flow in the matrix domain. The soil moisture time series data were measured in mesocosms during sprinkling experiments. The inversion consists of three consecutive steps: First, the parameters of the water retention function were assessed using vertical soil moisture profiles in hydraulic equilibrium. This was done using two different exponential retention functions and the Campbell function. Second, the soil sorptivity and diffusivity functions were estimated from Boltzmann-transformed soil moisture data, which allowed the calculation of the hydraulic conductivity function. Third, the parameters governing flow in the fracture domain were determined using the whole soil moisture time series. The resulting retention functions were within the range of values predicted by pedotransfer functions apart from very dry conditions, where all retention functions predicted lower matrix potentials. 
The diffusivity function predicted values in a range similar to those shown in other studies. Overall, the model was able to emulate the soil moisture time series at shallow measurement depths, but deviated increasingly at larger depths. This indicates that some of the model parameters are not constant throughout the profile. However, overall seepage fluxes were still predicted correctly. In the near future we will apply the inversion method to lower-frequency soil moisture data from different sites to evaluate the model's ability to predict preferential-flow seepage fluxes at the field scale.
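The first inversion step above, fitting a retention function to hydraulic-equilibrium profile data, can be sketched for the Campbell function psi = psi_e * (theta/theta_s)^(-b), which becomes linear in log-log form and can be fit by ordinary least squares. The function and routine names are hypothetical:

```python
import math

def campbell_psi(theta, theta_s, psi_e, b):
    """Campbell water-retention function:
    psi = psi_e * (theta / theta_s)^(-b),
    with psi in the same units as psi_e (sign convention omitted)."""
    return psi_e * (theta / theta_s) ** (-b)

def fit_campbell(thetas, psis, theta_s):
    """Estimate psi_e and b by least squares on the log-log form
    log(psi) = log(psi_e) - b * log(theta / theta_s)."""
    xs = [math.log(t / theta_s) for t in thetas]
    ys = [math.log(p) for p in psis]
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
             / sum((x - xbar) ** 2 for x in xs))
    b = -slope
    psi_e = math.exp(ybar + b * xbar)
    return psi_e, b
```

Because equilibrium profiles give matched (theta, psi) pairs directly from the two measurement depths, this step needs no forward model run, which is why the authors can stage it before the diffusivity and fracture-domain steps.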
NASA Astrophysics Data System (ADS)
Wang, He; Otsu, Hideaki; Sakurai, Hiroyoshi; Ahn, DeukSoon; Aikawa, Masayuki; Ando, Takashi; Araki, Shouhei; Chen, Sidong; Chiga, Nobuyuki; Doornenbal, Pieter; Fukuda, Naoki; Isobe, Tadaaki; Kawakami, Shunsuke; Kawase, Shoichiro; Kin, Tadahiro; Kondo, Yosuke; Koyama, Shupei; Kubono, Shigeru; Maeda, Yukie; Makinaga, Ayano; Matsushita, Masafumi; Matsuzaki, Teiichiro; Michimasa, Shinichiro; Momiyama, Satoru; Nagamine, Shunsuke; Nakamura, Takashi; Nakano, Keita; Niikura, Megumi; Ozaki, Tomoyuki; Saito, Atsumi; Saito, Takeshi; Shiga, Yoshiaki; Shikata, Mizuki; Shimizu, Yohei; Shimoura, Susumu; Sumikama, Toshiyuki; Söderström, Pär-Anders; Suzuki, Hiroshi; Takeda, Hiroyuki; Takeuchi, Satoshi; Taniuchi, Ryo; Togano, Yasuhiro; Tsubota, Junichi; Uesaka, Meiko; Watanabe, Yasushi; Watanabe, Yukinobu; Wimmer, Kathrin; Yamamoto, Tatsuya; Yoshida, Koichi
2017-09-01
Spallation reactions for the long-lived fission products 137Cs, 90Sr and 107Pd have been studied for the purpose of nuclear waste transmutation. The cross sections for proton- and deuteron-induced spallation were obtained in inverse kinematics at the RIKEN Radioactive Isotope Beam Factory. Both the target and energy dependences of the cross sections have been investigated systematically, and the cross-section differences between the proton and deuteron are found to be larger for lighter fragments. The experimental data are compared with the SPACS semi-empirical parameterization and with PHITS calculations including both the intra-nuclear cascade and evaporation processes.
pyres: a Python wrapper for electrical resistivity modeling with R2
NASA Astrophysics Data System (ADS)
Befus, Kevin M.
2018-04-01
A Python package, pyres, was written to handle common as well as specialized input and output tasks for the R2 electrical resistivity (ER) modeling program. Input steps, including handling field data, creating quadrilateral or triangular meshes, and filtering data, allow repeatable and flexible ER modeling within a programming environment. pyres includes non-trivial routines and functions for locating and constraining specific known or separately parameterized regions in both quadrilateral and triangular meshes. Three basic examples of how to run forward and inverse models with pyres are provided. The importance of testing mesh convergence and model sensitivity is also addressed with higher-level examples that show how pyres can facilitate future research-grade ER analyses.
NASA Astrophysics Data System (ADS)
Tran, Trang; Tran, Huy; Mansfield, Marc; Lyman, Seth; Crosman, Erik
2018-03-01
Four-dimensional data assimilation (FDDA) was applied in WRF-CMAQ model sensitivity tests to study the impact of observational and analysis nudging on model performance in simulating inversion layers and O3 concentration distributions within the Uintah Basin, Utah, U.S.A. in winter 2013. Observational nudging substantially improved WRF model performance in simulating surface wind fields, correcting a 10 °C warm surface temperature bias, correcting overestimation of the planetary boundary layer height (PBLH) and correcting underestimation of inversion strengths produced by regular WRF model physics without nudging. However, the combined effects of poor performance of WRF meteorological model physical parameterization schemes in simulating low clouds, and warm and moist biases in the temperature and moisture initialization and subsequent simulation fields, likely amplified the overestimation of warm clouds during inversion days when observational nudging was applied, impacting the resulting O3 photochemical formation in the chemistry model. To reduce the impact of a moist bias in the simulations on warm cloud formation, nudging with the analysis water mixing ratio above the planetary boundary layer (PBL) was applied. However, due to poor analysis vertical temperature profiles, applying analysis nudging also increased the errors in the modeled inversion layer vertical structure compared to observational nudging. Combining both observational and analysis nudging methods resulted in unrealistically extreme stratified stability that trapped pollutants at the lowest elevations at the center of the Uintah Basin and yielded the worst WRF performance in simulating inversion layer structure among the four sensitivity tests. 
The results of this study illustrate the importance of carefully considering the representativeness and quality of the observational and model analysis data sets when applying nudging techniques within stable PBLs, and the need to evaluate model results on a basin-wide scale.
Tuning Fractures With Dynamic Data
NASA Astrophysics Data System (ADS)
Yao, Mengbi; Chang, Haibin; Li, Xiang; Zhang, Dongxiao
2018-02-01
Flow in fractured porous media is crucial for production of oil/gas reservoirs and exploitation of geothermal energy. Flow behaviors in such media are mainly dictated by the distribution of fractures. Measuring and inferring the distribution of fractures is subject to large uncertainty, which, in turn, leads to great uncertainty in the prediction of flow behaviors. Inverse modeling with dynamic data may assist in constraining fracture distributions, thus reducing the uncertainty of flow prediction. However, inverse modeling for flow in fractured reservoirs is challenging, owing to the discrete and non-Gaussian distribution of fractures, as well as strong nonlinearity in the relationship between flow responses and model parameters. In this work, building upon a series of recent advances, an inverse modeling approach is proposed to efficiently update the flow model to match the dynamic data while retaining geological realism in the distribution of fractures. In the approach, the Hough-transform method is employed to parameterize non-Gaussian fracture fields with continuous parameter fields, thus providing the desirable properties required by many inverse modeling methods. In addition, a recently developed forward simulation method, the embedded discrete fracture method (EDFM), is utilized to model the fractures. The EDFM maintains computational efficiency while preserving the ability to capture the geometrical details of fractures, because the matrix is discretized as a structured grid while the fractures, handled as planes, are inserted into the matrix grid. The combination of the Hough representation of fractures with the EDFM makes it possible to tune the fractures (by updating their existence, location, orientation, length, and other properties) without requiring either unstructured grids or regridding during updating.
Such a treatment is amenable to numerous inverse modeling approaches, such as the iterative inverse modeling method employed in this study, which is capable of dealing with strongly nonlinear problems. A series of numerical case studies with increasing complexity are set up to examine the performance of the proposed approach.
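The idea of describing a discrete fracture with continuous Hough-style parameters can be sketched as follows. This is a simplified 2-D illustration (line offset rho, orientation theta, along-line center s, half-length); the paper's exact construction may differ:

```python
import math

def hough_to_fracture(rho, theta, s, half_len):
    """Map continuous Hough-style parameters to a 2-D fracture segment.

    The fracture lies on the line x*cos(theta) + y*sin(theta) = rho;
    s locates its center along the line, half_len sets its extent.
    Because (rho, theta, s, half_len) vary continuously, gradient- or
    ensemble-based inversion can update a discrete fracture smoothly.
    """
    # foot of the perpendicular from the origin to the line
    fx, fy = rho * math.cos(theta), rho * math.sin(theta)
    # unit vector along the line
    ux, uy = -math.sin(theta), math.cos(theta)
    cx, cy = fx + s * ux, fy + s * uy          # fracture center
    p1 = (cx - half_len * ux, cy - half_len * uy)
    p2 = (cx + half_len * ux, cy + half_len * uy)
    return p1, p2

# e.g. a vertical fracture on the line x = 2, centered at y = 1
p1, p2 = hough_to_fracture(rho=2.0, theta=0.0, s=1.0, half_len=0.5)
```

Both endpoints satisfy the line equation by construction, so updating rho or theta rotates and translates the whole fracture without any regridding, which is the property the abstract emphasizes.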
Relativistic three-dimensional Lippmann-Schwinger cross sections for space radiation applications
NASA Astrophysics Data System (ADS)
Werneth, C. M.; Xu, X.; Norman, R. B.; Maung, K. M.
2017-12-01
Radiation transport codes require accurate nuclear cross sections to compute particle fluences inside shielding materials. The Tripathi semi-empirical reaction cross section, which includes over 60 parameters tuned to nucleon-nucleus (NA) and nucleus-nucleus (AA) data, has been used in many of the world's best-known transport codes. Although this parameterization fits reaction cross section data well, the predictive capability of any parameterization is questionable when it is used beyond the range of the data to which it was tuned. Using uncertainty analysis, it is shown that a relativistic three-dimensional Lippmann-Schwinger (LS3D) equation model based on Multiple Scattering Theory (MST), which uses five parameterizations (three fundamental parameterizations to nucleon-nucleon (NN) data and two nuclear charge density parameterizations), predicts NA and AA reaction cross sections as well as the Tripathi cross section parameterization for reactions in which the kinetic energy of the projectile in the laboratory frame (TLab) is greater than 220 MeV/n. The relativistic LS3D model has the additional advantage of being able to predict highly accurate total and elastic cross sections. Consequently, it is recommended that the relativistic LS3D model be used for space radiation applications in which TLab > 220 MeV/n.
Technical Note: Approximate Bayesian parameterization of a process-based tropical forest model
NASA Astrophysics Data System (ADS)
Hartig, F.; Dislich, C.; Wiegand, T.; Huth, A.
2014-02-01
Inverse parameter estimation of process-based models is a long-standing problem in many scientific disciplines. A key question for inverse parameter estimation is how to define the metric that quantifies how well model predictions fit to the data. This metric can be expressed by general cost or objective functions, but statistical inversion methods require a particular metric, the probability of observing the data given the model parameters, known as the likelihood. For technical and computational reasons, likelihoods for process-based stochastic models are usually based on general assumptions about variability in the observed data, and not on the stochasticity generated by the model. Only in recent years have new methods become available that allow the generation of likelihoods directly from stochastic simulations. Previous applications of these approximate Bayesian methods have concentrated on relatively simple models. Here, we report on the application of a simulation-based likelihood approximation for FORMIND, a parameter-rich individual-based model of tropical forest dynamics. We show that approximate Bayesian inference, based on a parametric likelihood approximation placed in a conventional Markov chain Monte Carlo (MCMC) sampler, performs well in retrieving known parameter values from virtual inventory data generated by the forest model. We analyze the results of the parameter estimation, examine its sensitivity to the choice and aggregation of model outputs and observed data (summary statistics), and demonstrate the application of this method by fitting the FORMIND model to field data from an Ecuadorian tropical forest. Finally, we discuss how this approach differs from approximate Bayesian computation (ABC), another method commonly used to generate simulation-based likelihood approximations. 
Our results demonstrate that simulation-based inference, which offers considerable conceptual advantages over more traditional methods for inverse parameter estimation, can be successfully applied to process-based models of high complexity. The methodology is particularly suitable for heterogeneous and complex data structures and can easily be adjusted to other model types, including most stochastic population and individual-based models. Our study therefore provides a blueprint for a fairly general approach to parameter estimation of stochastic process-based models.
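The core of a parametric (synthetic) likelihood approximation can be sketched in a few lines: at each proposed parameter value, run the stochastic simulator many times, fit a Gaussian to the simulated summary statistics, evaluate the observed summary under that fit, and use the result inside an ordinary Metropolis sampler. The simulator, the single summary statistic, and all constants below are illustrative stand-ins, not the FORMIND model:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_summary(theta, n_sim=200):
    """Stand-in stochastic simulator: each run yields one summary
    statistic whose distribution depends on the parameter theta."""
    return theta + rng.normal(0.0, 1.0, size=n_sim)

def synthetic_loglik(theta, obs_summary):
    # Parametric likelihood approximation: fit a Gaussian to the
    # simulated summaries and evaluate the observed summary under it.
    sims = simulate_summary(theta)
    mu, sd = sims.mean(), sims.std(ddof=1)
    return -0.5 * np.log(2.0 * np.pi * sd**2) - (obs_summary - mu)**2 / (2.0 * sd**2)

def metropolis(obs_summary, n_iter=3000, step=0.5):
    """Plain Metropolis sampler with a flat prior over theta."""
    theta = 0.0
    loglik = synthetic_loglik(theta, obs_summary)
    chain = np.empty(n_iter)
    for i in range(n_iter):
        proposal = theta + rng.normal(0.0, step)
        loglik_prop = synthetic_loglik(proposal, obs_summary)
        if np.log(rng.uniform()) < loglik_prop - loglik:
            theta, loglik = proposal, loglik_prop
        chain[i] = theta
    return chain

chain = metropolis(obs_summary=3.0)
posterior_mean = chain[1000:].mean()   # discard burn-in
```

Because the log-likelihood is itself re-estimated from fresh simulations at every evaluation, the sampler targets an approximation of the posterior; increasing `n_sim` tightens that approximation at higher computational cost.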
Technical Note: Approximate Bayesian parameterization of a complex tropical forest model
NASA Astrophysics Data System (ADS)
Hartig, F.; Dislich, C.; Wiegand, T.; Huth, A.
2013-08-01
Inverse parameter estimation of process-based models is a long-standing problem in ecology and evolution. A key problem of inverse parameter estimation is to define a metric that quantifies how well model predictions fit the data. Such a metric can be expressed by general cost or objective functions, but statistical inversion approaches are based on a particular metric, the probability of observing the data given the model, known as the likelihood. Deriving likelihoods for dynamic models requires making assumptions about the probability that observations deviate from mean model predictions. For technical reasons, these assumptions are usually derived without explicit consideration of the processes in the simulation. Only in recent years have new methods become available that allow generating likelihoods directly from stochastic simulations. Previous applications of these approximate Bayesian methods have concentrated on relatively simple models. Here, we report on the application of a simulation-based likelihood approximation for FORMIND, a parameter-rich individual-based model of tropical forest dynamics. We show that approximate Bayesian inference, based on a parametric likelihood approximation placed in a conventional MCMC sampler, performs well in retrieving known parameter values from virtual field data generated by the forest model. We analyze the results of the parameter estimation, examine the sensitivity of the results to the choice and aggregation of model outputs and observed data (summary statistics), and show results from using this method to fit the FORMIND model to field data from an Ecuadorian tropical forest. Finally, we discuss how this approach differs from approximate Bayesian computation (ABC), another method commonly used to generate simulation-based likelihood approximations.
Our results demonstrate that simulation-based inference, which offers considerable conceptual advantages over more traditional methods for inverse parameter estimation, can successfully be applied to process-based models of high complexity. The methodology is particularly suited to heterogeneous and complex data structures and can easily be adjusted to other model types, including most stochastic population and individual-based models. Our study therefore provides a blueprint for a fairly general approach to parameter estimation of stochastic process-based models in ecology and evolution.
Seismic velocity and crustal thickness inversions: Moon and Mars
NASA Astrophysics Data System (ADS)
Drilleau, Melanie; Blanchette-Guertin, Jean-François; Kawamura, Taichi; Lognonné, Philippe; Wieczorek, Mark
2017-04-01
We present results from new inversions of seismic data arrival times acquired by the Apollo active and passive experiments. Markov chain Monte Carlo inversions are used to constrain (i) 1-D lunar crustal and upper mantle velocity models and (ii) 3-D lateral crustal thickness models under the Apollo stations and the artificial and natural impact sites. A full 3-D model of the lunar crustal thickness is then obtained using the GRAIL gravimetric data, anchored by the crustal thicknesses under each Apollo station and impact site. To avoid the use of any seismic reference model, a Bayesian inversion technique is implemented. The advantage of such an approach is that it yields robust probability density functions of interior structure parameters governed by uncertainties on the seismic data arrival times. 1-D seismic velocities are parameterized using C1-Bézier curves, which allow the exploration of both smoothly varying models and first-order discontinuities. The parameters of the inversion include the seismic velocities of P and S waves as a function of depth and the thickness of the crust under each Apollo station and impact epicentre. The forward problem consists of a ray tracing method enabling both the relocation of the natural impact epicentres and the computation of time corrections associated with the surface topography and the crustal thickness variations under the stations and impact sites. The results show geology-related differences between the different sites, which are due to contrasts in megaregolith thickness and to shallow subsurface composition and structure. Some of the finer structural elements might be difficult to constrain and might fall within the uncertainties of the dataset. However, we use the more precise LROC-located epicentral locations for the lunar modules and Saturn-IV upper stage artificial impacts, reducing some of the uncertainties observed in past studies.
In the framework of the NASA InSight/SEIS mission to Mars, the method developed in this study will be used to constrain the Martian crustal thickness as soon as the first data become available (late 2018). For InSight, impacts will be located by differential analysis of MRO data, which provides a known location enabling the direct inversion of all differential travel times with respect to the P arrival time. We have performed resolution tests to investigate to what extent impact events might help us constrain the Martian crustal thickness. Owing to the high flexibility of the Bayesian algorithm, the interior model will be refined each time a new event is detected.
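The Bézier-curve parameterization idea can be illustrated with a single cubic segment: four control points define a smooth velocity-depth curve, and first-order discontinuities would be modeled by chaining such segments. The control-point values below are made-up illustrative numbers, not the study's lunar models:

```python
import numpy as np

def bezier_velocity_profile(depths, control_points, n_samples=400):
    """Evaluate a cubic Bezier curve through 4 (depth, velocity) control
    points and interpolate velocity at the requested depths."""
    P = np.asarray(control_points, dtype=float)
    t = np.linspace(0.0, 1.0, n_samples)
    # Bernstein polynomial basis for a cubic curve
    B = np.column_stack([(1 - t)**3, 3*t*(1 - t)**2, 3*t**2*(1 - t), t**3])
    curve = B @ P                        # (n_samples, 2): depth, velocity
    # depth along the curve is monotone here, so plain interpolation works
    return np.interp(depths, curve[:, 0], curve[:, 1])

# hypothetical crustal profile: velocity grows from 4 km/s at the surface
# to 8 km/s at 60 km depth (illustrative numbers only)
ctrl = [(0.0, 4.0), (10.0, 5.5), (30.0, 6.5), (60.0, 8.0)]
v = bezier_velocity_profile(np.array([0.0, 20.0, 60.0]), ctrl)
```

Because only the control points are sampled in the inversion, a handful of parameters describes an entire smoothly varying profile.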
A Testbed for Model Development
NASA Astrophysics Data System (ADS)
Berry, J. A.; Van der Tol, C.; Kornfeld, A.
2014-12-01
Carbon cycle and land-surface models used in global simulations need to be computationally efficient and have a high standard of software engineering. These models also make a number of scaling assumptions to simplify the representation of complex biochemical and structural properties of ecosystems. This makes it difficult to use these models to test new ideas for parameterizations or to evaluate scaling assumptions. The stripped-down nature of these models also makes it difficult to connect with current disciplinary research, which tends to be focused on much more nuanced topics than can be included in the models. In our opinion and experience, this indicates the need for another type of model that can more faithfully represent the complexity of ecosystems and that has the flexibility to change or interchange parameterizations and to run optimization codes for calibration. We have used the SCOPE (Soil Canopy Observation, Photochemistry and Energy fluxes) model in this way to develop, calibrate, and test canopy-scale parameterizations of solar-induced chlorophyll fluorescence, OCS exchange, and stomatal behavior. Examples of the data sets and procedures used to develop and test new parameterizations are presented.
Ocean color modeling: Parameterization and interpretation
NASA Astrophysics Data System (ADS)
Feng, Hui
The ocean color as observed near the water surface is determined mainly by dissolved and particulate substances, known as "optically-active constituents," in the upper water column. The goal of ocean color modeling is to interpret an ocean color spectrum quantitatively to estimate the suite of optically-active constituents near the surface. In recent years, ocean color modeling efforts have centered on three major optically-active constituents: chlorophyll concentration, colored dissolved organic matter, and scattering particulates. Many challenges remain in this arena. This thesis addresses several critical issues in ocean color modeling. In chapter one, an extensive literature survey on ocean color modeling is given. A general ocean color model is presented to identify critical candidate uncertainty sources in modeling the ocean color. The goal of this thesis study and its specific objectives are then defined. Finally, a general overview of the dissertation is given, indicating which objectives each subsequent chapter targets. In chapter two, a general approach is presented to quantify constituent concentration retrieval errors induced by uncertainties in inherent optical property (IOP) submodels of a semi-analytical forward model. Chlorophyll concentrations are retrieved by inverting a forward model with nonlinear IOPs. The study demonstrates how uncertainties in individual IOP submodels influence the accuracy of the chlorophyll concentration retrieval at different chlorophyll concentration levels. The key finding of this study is that precise knowledge of the spectral shapes of IOP submodels is critical for accurate chlorophyll retrieval, suggesting that improved retrieval accuracy requires precise spectral IOP measurements.
In chapter three, three distinct inversion techniques, namely, nonlinear optimization (NLO), principal component analysis (PCA) and artificial neural network (ANN) are compared to assess their inversion performances to retrieve optically-active constituents for a complex nonlinear bio-optical system simulated by a semi-analytical ocean color model. A well-designed simulation scheme was implemented to simulate waters of different bio-optical complexity, and then the three inversion methods were applied to these simulated datasets for performance evaluation. In chapter four, an approach is presented for optimally parameterizing an irradiance reflectance model on the basis of a bio-optical dataset made at 45 stations in the Tokyo Bay and nearby regions between 1982 and 1984. (Abstract shortened by UMI.)
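The nonlinear-optimization style of inversion can be sketched with a toy semi-analytical reflectance model of the form R ~ b_b / (a + b_b), in which absorption rises with chlorophyll. All band choices and coefficients below are illustrative stand-ins, and a simple golden-section search stands in for a general nonlinear optimizer:

```python
import numpy as np

WL = np.array([443.0, 490.0, 510.0, 555.0])   # nm; typical ocean-color bands

def forward_reflectance(chl, wl=WL):
    """Toy semi-analytical model: R ~ b_b / (a + b_b).
    All coefficients are illustrative only."""
    a_w = 0.01 + 2e-4 * (wl - 443.0)                         # water absorption
    a_ph = 0.05 * chl**0.65 * np.exp(-0.012 * (wl - 443.0))  # pigment absorption
    b_b = 0.003 + 0.001 * (443.0 / wl)                       # backscattering
    return 0.33 * b_b / (a_w + a_ph + b_b)

def invert_chl(observed, lo=0.01, hi=100.0, iters=80):
    """Nonlinear-optimization inversion: golden-section search on the
    least-squares spectral misfit (the misfit is unimodal in chl here
    because reflectance decreases monotonically with chl in every band)."""
    phi = (np.sqrt(5.0) - 1.0) / 2.0
    misfit = lambda c: np.sum((forward_reflectance(c) - observed) ** 2)
    a, b = lo, hi
    for _ in range(iters):
        c1, c2 = b - phi * (b - a), a + phi * (b - a)
        if misfit(c1) < misfit(c2):
            b = c2
        else:
            a = c1
    return 0.5 * (a + b)

observed = forward_reflectance(1.5)     # synthetic "measurement", chl = 1.5
chl_hat = invert_chl(observed)
```

On this noise-free synthetic spectrum the search recovers the true chlorophyll concentration; with real data the misfit floor is set by measurement and IOP-submodel error, which is exactly the sensitivity examined in chapter two.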
Effects of multiple scattering and surface albedo on the photochemistry of the troposphere
NASA Technical Reports Server (NTRS)
Augustsson, T. R.; Tiwari, S. N.
1981-01-01
The effect of the treatment of incoming solar radiation on the photochemistry of the troposphere is discussed. A one-dimensional photochemical model of the troposphere containing species of the nitrogen, oxygen, carbon, hydrogen, and sulfur families was developed. The vertical flux is simulated by use of parameterized eddy diffusion coefficients. The photochemical model is coupled to a radiative transfer model that calculates the radiation field due to the incoming solar radiation, which initiates much of the photochemistry of the troposphere. Vertical profiles of tropospheric species computed with the Leighton approximation were compared with those from the radiative transfer, matrix inversion model. The radiative transfer code includes the effects of multiple scattering due to molecules and aerosols, pure absorption, and surface albedo on the transfer of incoming solar radiation. Significant differences exist for several key photolysis frequencies and species number density profiles between the Leighton approximation and the profiles generated with the radiative transfer, matrix inversion technique. Most species show enhanced vertical profiles when the more realistic treatment of the incoming solar radiation field is included.
NASA Astrophysics Data System (ADS)
White, Jeremy; Stengel, Victoria; Rendon, Samuel; Banta, John
2017-08-01
Computer models of hydrologic systems are frequently used to investigate the hydrologic response of land-cover change. If the modeling results are used to inform resource-management decisions, then providing robust estimates of uncertainty in the simulated response is an important consideration. Here we examine the importance of parameterization, a necessarily subjective process, on uncertainty estimates of the simulated hydrologic response of land-cover change. Specifically, we applied the soil water assessment tool (SWAT) model to a 1.4 km2 watershed in southern Texas to investigate the simulated hydrologic response of brush management (the mechanical removal of woody plants), a discrete land-cover change. The watershed was instrumented before and after brush-management activities were undertaken, and estimates of precipitation, streamflow, and evapotranspiration (ET) are available; these data were used to condition and verify the model. The role of parameterization in brush-management simulation was evaluated by constructing two models, one with 12 adjustable parameters (reduced parameterization) and one with 1305 adjustable parameters (full parameterization). Both models were subjected to global sensitivity analysis as well as Monte Carlo and generalized likelihood uncertainty estimation (GLUE) conditioning to identify important model inputs and to estimate uncertainty in several quantities of interest related to brush management. Many realizations from both parameterizations were identified as behavioral in that they reproduce daily mean streamflow acceptably well according to the Nash-Sutcliffe model efficiency coefficient, percent bias, and coefficient of determination.
However, the total volumetric ET difference resulting from simulated brush management remains highly uncertain after conditioning to daily mean streamflow, indicating that streamflow data alone are not sufficient to inform the model inputs that most influence the simulated outcomes of brush management. Additionally, the reduced-parameterization model grossly underestimates uncertainty in the total volumetric ET difference compared to the full-parameterization model; total volumetric ET difference is a primary metric for evaluating the outcomes of brush management. The failure of the reduced-parameterization model to provide robust uncertainty estimates demonstrates the importance of parameterization when attempting to quantify uncertainty in land-cover change simulations.
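The GLUE-style behavioral screening described above can be illustrated with the Nash-Sutcliffe criterion alone, using synthetic data in place of the SWAT realizations; the 0.5 threshold is a common but arbitrary choice:

```python
import numpy as np

def nash_sutcliffe(sim, obs):
    """Nash-Sutcliffe model efficiency: 1 is a perfect fit; values <= 0
    mean the simulation is no better than predicting the observed mean."""
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

rng = np.random.default_rng(1)
obs = np.sin(np.linspace(0.0, 6.0, 100)) + 2.0   # stand-in for daily streamflow

# Monte Carlo "realizations": the observed signal plus increasing error
realizations = [obs + rng.normal(0.0, s, obs.size) for s in (0.05, 0.2, 1.5)]

# GLUE-style screening: keep only realizations above a behavioral threshold
behavioral = [r for r in realizations if nash_sutcliffe(r, obs) > 0.5]
```

In the full GLUE procedure, each behavioral realization would also carry a likelihood weight for building uncertainty bounds on quantities of interest, which is where the ET-difference spread in the study comes from.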
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bachan, John
Chisel is a new open-source hardware construction language developed at UC Berkeley that supports advanced hardware design using highly parameterized generators and layered domain-specific hardware languages. Chisel is embedded in the Scala programming language, which raises the level of hardware design abstraction by providing concepts including object orientation, functional programming, parameterized types, and type inference. From the same source, Chisel can generate a high-speed C++-based cycle-accurate software simulator, or low-level Verilog designed to pass on to standard ASIC or FPGA tools for synthesis and place and route.
Selecting an Informative/Discriminating Multivariate Response for Inverse Prediction
Thomas, Edward V.; Lewis, John R.; Anderson-Cook, Christine M.; ...
2017-11-21
Inverse prediction is important in a wide variety of scientific and engineering contexts. One might use inverse prediction to predict fundamental properties/characteristics of an object using measurements obtained from it. This can be accomplished by "inverting" parameterized forward models that relate the measurements (responses) to the properties/characteristics of interest. Sometimes forward models are science based; often, however, forward models are empirically based, using the results of experimentation. For empirically based forward models, it is important that the experiments provide a sound basis to develop accurate forward models in terms of the properties/characteristics (factors). While nature dictates the causal relationship between factors and responses, experimenters can control the type, accuracy, and precision of forward models that can be constructed via selection of factors, factor levels, and the set of trials that are performed. Whether the forward models are based on science, experiments, or both, researchers can influence the ability to perform inverse prediction by selecting informative response variables. By using an errors-in-variables framework for inverse prediction, this paper shows via simple analysis and examples how the capability of a multivariate response (with respect to being informative and discriminating) can vary depending on how well the various responses complement one another over the range of the factor space of interest. Insights derived from this analysis could be useful for selecting a set of response variables among candidates in cases where the number of response variables that can be acquired is limited by difficulty, expense, and/or availability of material.
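The "invert a fitted forward model" step can be sketched for a single factor and two responses. All numbers are hypothetical, and the simple least-squares inversion below ignores the response-error weighting that a full errors-in-variables treatment would add:

```python
import numpy as np

rng = np.random.default_rng(2)

# calibration experiment: two responses, each linear in the single factor x
x_cal = np.linspace(0.0, 10.0, 20)
Y_cal = np.column_stack([1.0 + 2.0 * x_cal, 5.0 - 0.8 * x_cal])
Y_cal += rng.normal(0.0, 0.05, Y_cal.shape)   # measurement noise

# fit the empirical forward models y_j = b0_j + b1_j * x
A = np.column_stack([np.ones_like(x_cal), x_cal])
coef, *_ = np.linalg.lstsq(A, Y_cal, rcond=None)   # rows: intercepts, slopes

def inverse_predict(y_new):
    """Least-squares inversion of the fitted forward models: the x that
    best reproduces the observed multivariate response y_new."""
    b0, b1 = coef[0], coef[1]
    return np.sum(b1 * (y_new - b0)) / np.sum(b1 ** 2)

# new object measured at (unknown) x = 4: responses from both channels
x_hat = inverse_predict(np.array([1.0 + 2.0 * 4.0, 5.0 - 0.8 * 4.0]))
```

Responses with slopes of opposite sign complement one another here: each one alone determines x, but together they tighten the inverse prediction, which is the kind of complementarity the paper quantifies.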
DOE Office of Scientific and Technical Information (OSTI.GOV)
Larson, Vincent
2016-11-25
The Multiscale Modeling Framework (MMF) embeds a cloud-resolving model in each grid column of a General Circulation Model (GCM). An MMF model does not need to use a deep convective parameterization, and thereby dispenses with the uncertainties in such parameterizations. However, MMF models grossly under-resolve shallow boundary-layer clouds, and hence those clouds may still benefit from parameterization. Under this grant, we successfully created a climate model that embeds a cloud parameterization (CLUBB) within an MMF model. This involved interfacing CLUBB's clouds with microphysics and reducing computational cost. We have evaluated the resulting simulated clouds and precipitation with satellite observations. The chief benefit of the project is to provide an MMF model that has an improved representation of clouds and that provides improved simulations of precipitation.
A High Resolution Hydrometer Phase Classifier Based on Analysis of Cloud Radar Doppler Spectra.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Luke,E.; Kollias, P.
2007-08-06
The lifecycle and radiative properties of clouds are highly sensitive to the phase of their hydrometeors (i.e., liquid or ice). Knowledge of cloud phase is essential for specifying the optical properties of clouds; otherwise, large errors can be introduced in the calculation of cloud radiative fluxes. Current parameterizations that partition cloud water into liquid and ice based on temperature are characterized by large uncertainty (Curry et al., 1996; Hobbs and Rangno, 1998; Intrieri et al., 2002). This is particularly important at high latitudes and in temperature ranges where both liquid droplets and ice crystals can exist (mixed-phase cloud). The mixture of phases has a large effect on cloud radiative properties, and the parameterization of mixed-phase clouds has a large impact on climate simulations (e.g., Gregory and Morris, 1996). Furthermore, the presence of both ice and liquid affects the macroscopic properties of clouds, including their propensity to precipitate. Despite their importance, mixed-phase clouds are severely understudied compared to the arguably simpler single-phase clouds. In-situ measurements in mixed-phase clouds are hindered by aircraft icing, difficulties distinguishing hydrometeor phase, and discrepancies in methods for deriving physical quantities (Wendisch et al. 1996, Lawson et al. 2001). Satellite-based retrievals of cloud phase at high latitudes are often hindered by the highly reflective ice-covered ground and persistent temperature inversions. From the ground, the retrieval of mixed-phase cloud properties has been the subject of extensive research over the past 20 years using polarization lidars (e.g., Sassen et al. 1990), dual radar wavelengths (e.g., Gosset and Sauvageot 1992; Sekelsky and McIntosh, 1996), and, recently, radar Doppler spectra (Shupe et al. 2004).
Millimeter-wavelength radars have substantially improved our ability to observe non-precipitating clouds (Kollias et al., 2007) due to their excellent sensitivity, which enables the detection of thin cloud layers, and their ability to penetrate several non-precipitating cloud layers. However, in mixed-phase cloud conditions, the observed Doppler moments are dominated by the highly reflecting ice crystals and thus cannot be used to identify cloud phase. This limits our ability to identify the spatial distribution of cloud phase and the conditions under which mixed-phase clouds form.
NASA Astrophysics Data System (ADS)
Awatey, M. T.; Irving, J.; Oware, E. K.
2016-12-01
Markov chain Monte Carlo (McMC) inversion frameworks are becoming increasingly popular in geophysics due to their ability to recover multiple equally plausible geologic features that honor the limited noisy measurements. Standard McMC methods, however, become computationally intractable with increasing dimensionality of the problem, for example, when working with spatially distributed geophysical parameter fields. We present a McMC approach based on a sparse proper orthogonal decomposition (POD) model parameterization that implicitly incorporates the physics of the underlying process. First, we generate training images (TIs) via Monte Carlo simulations of the target process constrained to a conceptual model. We then apply POD to construct basis vectors from the TIs. A small number of basis vectors can represent most of the variability in the TIs, leading to dimensionality reduction. A projection of the starting model into the reduced basis space generates the starting POD coefficients. At each iteration, only coefficients within a specified sampling window are resimulated assuming a Gaussian prior. The sampling window grows at a specified rate as the iterations progress, starting from the coefficients of the highest-ranked basis vectors and extending to those of the least informative ones. We found this gradual growth of the sampling window to be more stable than resampling all the coefficients from the first iteration. We demonstrate the performance of the algorithm with both synthetic and lab-scale electrical resistivity imaging of saline tracer experiments, employing the same set of basis vectors for all inversions. We consider two scenarios of unimodal and bimodal plumes. The unimodal plume is consistent with the hypothesis underlying the generation of the TIs, whereas bimodality in plume morphology was not theorized.
We show that uncertainty quantification using McMC can proceed in the reduced dimensionality space while accounting for the physics of the underlying process.
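The POD construction itself reduces to a singular value decomposition of the training-image ensemble. A minimal sketch, with a synthetic 1-D "plume" ensemble standing in for the physics-based Monte Carlo training images:

```python
import numpy as np

rng = np.random.default_rng(3)

# training images: Monte Carlo realizations of a 1-D Gaussian "plume"
# with random center, flattened into rows of a matrix
n_cells, n_ti = 100, 500
centers = rng.uniform(20.0, 80.0, n_ti)
x = np.arange(n_cells)
TIs = np.exp(-0.5 * ((x[None, :] - centers[:, None]) / 8.0) ** 2)

# POD: SVD of the mean-removed ensemble yields ranked basis vectors
mean_ti = TIs.mean(axis=0)
U, s, Vt = np.linalg.svd(TIs - mean_ti, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
k = int(np.searchsorted(energy, 0.95)) + 1   # bases for 95% of variability

def project(model):
    """Reduced-space coefficients; the McMC perturbs these, not the cells."""
    return Vt[:k] @ (model - mean_ti)

def reconstruct(coeffs):
    """Map reduced-space coefficients back to a full model."""
    return mean_ti + Vt[:k].T @ coeffs
```

Because `k` is far smaller than the number of cells, the sampler explores a low-dimensional coefficient space while every reconstructed model remains a plausible combination of physics-constrained training images.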
NASA Astrophysics Data System (ADS)
Olugboji, T. M.; Lekic, V.; McDonough, W.
2017-07-01
We present a new approach for evaluating existing crustal models, and their associated uncertainties, using ambient noise data sets. We use a transdimensional hierarchical Bayesian approach to invert ambient noise surface wave phase dispersion maps for Love and Rayleigh waves, using measurements obtained from Ekström (2014). Spatiospectral analysis shows that our results are comparable to those of a linear least squares inverse approach (except at higher harmonic degrees), but the procedure has additional advantages: (1) it yields an autoadaptive parameterization that follows Earth structure without making restrictive assumptions on model resolution (regularization or damping) and data errors; (2) it can recover non-Gaussian phase velocity probability distributions while quantifying the sources of uncertainties in the data measurements and modeling procedure; and (3) it enables statistical assessments of different crustal models (e.g., CRUST1.0, LITHO1.0, and NACr14) using variable-resolution residual and standard deviation maps estimated from the ensemble. These assessments show that in the stable old crust of the Archean, the misfits are statistically negligible, requiring no significant update to crustal models from the ambient noise data set. In other regions of the U.S., significant updates to regionalization and crustal structure are expected, especially in shallow sedimentary basins and tectonically active regions, where the differences between model predictions and data are statistically significant.
Inverting dedevelopment: geometric singularity theory in embryology
NASA Astrophysics Data System (ADS)
Bookstein, Fred L.; Smith, Bradley R.
2000-10-01
The diffeomorphism model so useful in the biomathematics of normal morphological variability and disease is inappropriate for applications in embryogenesis, where whole coordinate patches are created out of single points. For this application we need a suitable algebra for the creation of something from nothing in a carefully organized geometry: a formalism for parameterizing discrete nondifferentiabilities of invertible functions on R^k, k > 1. One easy way to begin is via the inverse of the development map - call it the dedevelopment map, the deformation backwards in time. Extrapolated, this map will inevitably have singularities at which its derivative is zero. When the dedevelopment map is inverted to face forward in time, the singularities become appropriately isolated infinities of derivative. We have recently introduced growth visualizations via extrapolations to the isolated singularities at which only one directional derivative is zero. Maps inverse to these create new coordinate patches directionally rather than radially. The most generic singularity that suits this purpose is the crease f(x, y) = (x, x^2 y + y^3), which has already been applied in morphometrics for the description of focal morphogenetic phenomena. We apply it to embryogenesis in the form of its analytic inverse, and demonstrate its power using a priceless new data set of mouse embryos imaged in 3D by micro-MR with voxels smaller than 100 μm^3.
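The crease normal form and its degenerate derivative can be checked numerically; the snippet below is a direct transcription of f(x, y) = (x, x^2 y + y^3) and its Jacobian determinant:

```python
import numpy as np

def crease(x, y):
    """The generic crease singularity f(x, y) = (x, x^2*y + y^3)."""
    return np.array([x, x**2 * y + y**3])

def jacobian_det(x, y):
    # J = [[1, 0], [2*x*y, x^2 + 3*y^2]], so det J = x^2 + 3*y^2.
    # The determinant is nonnegative and vanishes only at the origin,
    # where the y-directional derivative of the second component is zero.
    return x**2 + 3 * y**2
```

Away from the origin the determinant is strictly positive, so the map is locally invertible everywhere except at the single singular point where the new coordinate patch is created.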
NASA Astrophysics Data System (ADS)
Huang, Mong-Han; Fielding, Eric J.; Dickinson, Haylee; Sun, Jianbao; Gonzalez-Ortega, J. Alejandro; Freed, Andrew M.; Bürgmann, Roland
2017-01-01
The 4 April 2010 Mw 7.2 El Mayor-Cucapah (EMC) earthquake in Baja California and Sonora, Mexico, had primarily right-lateral strike-slip motion and a minor normal-slip component. The surface rupture extended about 120 km in a NW-SE direction, west of the Cerro Prieto fault. Here we use geodetic measurements, including near- to far-field GPS, interferometric synthetic aperture radar (InSAR), and subpixel offset measurements of radar and optical images, to characterize the fault slip during the EMC event. We use dislocation inversion methods to determine an optimal nine-segment fault geometry, as well as a subfault slip distribution, from the geodetic measurements. By systematically perturbing the fault dip angles, randomly removing one geodetic data constraint, or using different data combinations, we are able to explore the robustness of the inferred slip distribution along fault strike and depth. The model fitting residuals imply contributions of early postseismic deformation to the InSAR measurements as well as lateral heterogeneity in the crustal elastic structure between the Peninsular Ranges and the Salton Trough. We also find that with the incorporation of near-field geodetic data, finer fault patch sizes, and reduced smoothing, the shallow slip deficit in the EMC event is reduced. These results show that the outcomes of coseismic inversions can vary greatly depending on model parameterization and methodology.
An Amphibious Magnetotelluric Investigation of the Cascadian Seismogenic and ETS zones.
NASA Astrophysics Data System (ADS)
Parris, B. A.; Livelybrooks, D.; Bedrosian, P.; Egbert, G. D.; Key, K.; Schultz, A.; Cook, A.; Kant, M.; Wogan, N.; Zeryck, A.
2015-12-01
The amphibious Magnetotelluric Observations of Cascadia using a Huge Array (MOCHA) experiment seeks to address unresolved questions about the seismogenic locked zone and the down-dip transition zone where episodic tremor and slip (ETS) originates. The presence of free fluids is thought to be one of the primary controls on ETS behavior within the Cascadia margin. Since the bulk electrical conductivity of the crust and mantle can be greatly increased by fluids, magnetotelluric (MT) observations can offer unique insights into the fluid distribution and its relation to observed ETS behavior. Here we present preliminary results from the 146 MT stations collected for the MOCHA project. MOCHA is unique in that it is the first amphibious array of MT stations occupied to provide for 3-D interpretation of the conductivity structure of a subduction zone. The MOCHA data set comprises 75 onshore stations and 71 offshore stations, accumulated over a two-year period and located on an approximately 25 km grid, spanning from the trench to the eastern Willamette Valley and from central Oregon into middle Washington. We present the results of a series of east-west (cross-strike) oriented two-dimensional inversions, created using the MARE2DEM software, that provide an initial picture of the conductivity structure of the locked and ETS zones and its along-strike variations. Our models can be used to identify correlations between ETS occurrence rates and inferred fluid concentrations. Our modeling explores the impact of various parameterizations on 2-D inversion results, including a reduction of the smoothness penalty along the inferred slab interface. This series of 2-D inversions can then be used collectively to help construct and guide an a priori 3-D inversion. In addition, we will present a preliminary 3-D inversion of the onshore stations created using the ModEM software. We are currently working on modifying ModEM to support inversion of offshore data.
The more computationally intensive 3-D inversion of the full amphibious data set will address questions regarding along-strike heterogeneity in fluid distributions within the locked and ETS-originating zones.
NASA Technical Reports Server (NTRS)
Dominguez, Anthony; Kleissl, Jan P.; Luvall, Jeffrey C.
2011-01-01
Large-eddy Simulation (LES) was used to study convective boundary layer (CBL) flow through suburban regions with both large and small scale heterogeneities in surface temperature. Constant remotely sensed surface temperatures were applied at the surface boundary at resolutions of 10 m, 90 m, 200 m, and 1 km. Increasing the surface resolution from 1 km to 200 m had the most significant impact on the mean and turbulent flow characteristics as the larger scale heterogeneities became resolved. While previous studies concluded that scales of heterogeneity much smaller than the CBL inversion height have little impact on the CBL characteristics, we found that further increasing the surface resolution (resolving smaller scale heterogeneities) results in an increase in mean surface heat flux, thermal blending height, and potential temperature profile. The results of this study will help to better inform sub-grid parameterization for meso-scale meteorological models. The simulation tool developed through this study (combining LES and high resolution remotely sensed surface conditions) is a significant step towards future studies on the micro-scale meteorology in urban areas.
Large-scale Density Structures in Magneto-rotational Disk Turbulence
NASA Astrophysics Data System (ADS)
Youdin, Andrew; Johansen, A.; Klahr, H.
2009-01-01
Turbulence generated by the magneto-rotational instability (MRI) is a strong candidate to drive accretion flows in disks, including sufficiently ionized regions of protoplanetary disks. The MRI is often studied in local shearing boxes, which model a small section of the disk at high resolution. I will present simulations of large, stratified shearing boxes which extend up to 10 gas scale heights across. These simulations are a useful bridge to fully global disk simulations. We find that MRI turbulence produces large-scale, axisymmetric density perturbations. These structures are part of a zonal flow, analogous to the banded flow in Jupiter's atmosphere, which survives in near-geostrophic balance for tens of orbits. The launching mechanism is large-scale magnetic tension generated by an inverse cascade. We demonstrate the robustness of these results through careful study of various box sizes, grid resolutions, and microscopic diffusion parameterizations. These gas structures can trap solid material (in the form of large dust or ice particles), with important implications for planet formation. Resolved disk images at mm wavelengths (e.g., from ALMA) will verify or constrain the existence of these structures.
Uncertainty Assessment of Space-Borne Passive Soil Moisture Retrievals
NASA Technical Reports Server (NTRS)
Quets, Jan; De Lannoy, Gabrielle; Reichle, Rolf; Cosh, Michael; van der Schalie, Robin; Wigneron, Jean-Pierre
2017-01-01
The uncertainty associated with passive soil moisture retrieval is hard to quantify and is known to be underlain by various, diverse, and complex causes. Factors affecting space-borne soil moisture retrieval include: (i) the optimization or inversion method applied to the radiative transfer model (RTM), such as the Single Channel Algorithm (SCA) or the Land Parameter Retrieval Model (LPRM), (ii) the selection of the observed brightness temperatures (Tbs), e.g. polarization and incidence angle, (iii) the definition of the cost function and the impact of prior information in it, and (iv) the RTM parameterization (e.g. the parameterizations officially used by the SMOS L2 and SMAP L2 retrieval products, the ECMWF-based SMOS assimilation product, the SMAP L4 assimilation product, and perturbations from those configurations). This study aims at disentangling the relative importance of the above-mentioned sources of uncertainty by carrying out soil moisture retrieval experiments using SMOS Tb observations in different settings, some of which are mentioned above. The ensemble uncertainties are evaluated at 11 reference CalVal sites over a time period of more than 5 years. These experimental retrievals were inter-compared and further confronted with in situ soil moisture measurements and operational SMOS L2 retrievals, using commonly used skill metrics to quantify the temporal uncertainty in the retrievals.
NASA Astrophysics Data System (ADS)
Sommer, Philipp; Kaplan, Jed
2016-04-01
Accurate modelling of large-scale vegetation dynamics, hydrology, and other environmental processes requires meteorological forcing on daily timescales. While meteorological data with high temporal resolution are becoming increasingly available, simulations of the future or distant past are limited by a lack of data and the poor performance of climate models, e.g., in simulating daily precipitation. To overcome these limitations, monthly summary data can be temporally downscaled to a daily time step using a weather generator. Parameterization of such statistical models has traditionally been based on a limited number of observations. Recent developments in the archiving, distribution, and analysis of "big data" datasets provide new opportunities for the parameterization of a temporal downscaling model that is applicable over a wide range of climates. Here we parameterize a WGEN-type weather generator using more than 50 million individual daily meteorological observations from over 10,000 stations covering all continents, based on the Global Historical Climatology Network (GHCN) and Synoptic Cloud Reports (EECRA) databases. Using the resulting "universal" parameterization, driven by monthly summaries, we downscale mean temperature (minimum and maximum), cloud cover, and total precipitation to daily estimates. We apply a hybrid gamma-generalized Pareto distribution to calculate daily precipitation amounts, which overcomes much of the inability of earlier weather generators to simulate high daily precipitation amounts. Our globally parameterized weather generator has numerous applications, including vegetation and crop modelling for paleoenvironmental studies.
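The hybrid gamma-generalized Pareto idea for daily precipitation can be sketched as follows. This is a minimal illustration, not the authors' parameterization: the wet-day probability, gamma and Pareto parameters, and the 20 mm threshold are all invented for the example.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

def sample_daily_precip(n, p_wet=0.3, g_shape=0.7, g_scale=5.0,
                        threshold=20.0, gp_shape=0.15, gp_scale=8.0):
    """Illustrative hybrid sampler: wet days draw amounts from a gamma
    body; draws above a threshold are replaced by a generalized Pareto
    tail.  All parameter values here are invented for the sketch."""
    wet = rng.random(n) < p_wet
    amounts = stats.gamma.rvs(g_shape, scale=g_scale, size=n, random_state=rng)
    heavy = amounts > threshold
    amounts[heavy] = threshold + stats.genpareto.rvs(
        gp_shape, scale=gp_scale, size=int(heavy.sum()), random_state=rng)
    return np.where(wet, amounts, 0.0)

precip = sample_daily_precip(10_000)
```

Replacing the gamma tail with a generalized Pareto tail is what lets such a generator produce the heavy daily totals that a pure gamma body tends to underestimate.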
Estimating the Contrail Impact on Climate Using the UK Met Office Model
NASA Astrophysics Data System (ADS)
Rap, A.; Forster, P. M.
2008-12-01
With air travel predicted to increase over the coming century, the emissions associated with air traffic are expected to have a significant warming effect on climate. According to current best estimates, an important contribution comes from contrails. However, as reported by the IPCC Fourth Assessment Report, these best estimates still carry high uncertainty. The development and validation of contrail parameterizations in global climate models is therefore very important. This study develops a contrail parameterization within the UK Met Office climate model. Using this new parameterization, we estimate that for 2002 air traffic the global mean annual contrail coverage is approximately 0.11%, a value in good agreement with several other estimates. The corresponding contrail radiative forcing (RF) is calculated to be approximately 4 and 6 mW m-2 in all-sky and clear-sky conditions, respectively. These values lie within the lower end of the RF range reported by the latest IPCC assessment. The relatively strong cloud masking effect on contrails in our parameterization, compared with other studies, is investigated, and a possible cause for this difference is suggested. The effect of the diurnal variation of air traffic on both contrail coverage and contrail RF is also investigated. The new parameterization is also employed in thirty-year slab-ocean model runs to give one of the first insights into contrail effects on the daily temperature range and the climate impact of contrails.
Variable pixel size ionospheric tomography
NASA Astrophysics Data System (ADS)
Zheng, Dunyong; Zheng, Hongwei; Wang, Yanjun; Nie, Wenfeng; Li, Chaokui; Ao, Minsi; Hu, Wusheng; Zhou, Wei
2017-06-01
A novel ionospheric tomography technique based on variable pixel size was developed for the tomographic reconstruction of the ionospheric electron density (IED) distribution. In the variable pixel size computerized ionospheric tomography (VPSCIT) model, the IED distribution is parameterized by a decomposition of the lower and upper ionosphere with different pixel sizes; the lower and upper IED distributions may therefore be determined very differently by the available data. The variable and constant pixel size tomographies are similar in most other respects, with two main differences: first, in the inversion, the segments of the GPS signal path must be assigned to the different kinds of pixels; second, the smoothness constraint factor must be modified appropriately where the pixels change in size. For a real dataset, the variable pixel size method distinguishes different electron density distribution zones better than the constant pixel size method, especially when effort is spent identifying the regions of the model with the best data coverage. The variable pixel size method can not only greatly improve the efficiency of the inversion, but also produce IED images whose fidelity matches that of a uniform pixel size method. In addition, variable pixel size tomography can mitigate underdetermination in the ill-posed inverse problem when data coverage is irregular or sparse, by adjusting the quantitative proportion of pixels of different sizes. In comparison with constant pixel size tomography models, the variable pixel size technique achieved relatively good results in a numerical simulation, and a careful validation of its reliability and superiority was performed.
Finally, according to the results of the statistical analysis and quantitative comparison, the proposed method offers an improvement of 8% compared with conventional constant pixel size tomography models in the forward modeling.
A linear-RBF multikernel SVM to classify big text corpora.
Romero, R; Iglesias, E L; Borrajo, L
2015-01-01
The support vector machine (SVM) is a powerful technique for classification. However, the SVM is not well suited to classifying large datasets or text corpora, because the training complexity of SVMs is highly dependent on the input size. Recent developments in the literature on the SVM and other kernel methods emphasize the need to consider multiple kernels, or parameterizations of kernels, because they provide greater flexibility. This paper presents a multikernel SVM to manage highly dimensional data, providing an automatic parameterization with low computational cost and improving results over SVMs parameterized by a brute-force search. The model consists of splitting the dataset into cohesive term slices (clusters) to construct a defined structure (multikernel). The new approach is tested on different text corpora. Experimental results show that the new classifier has good accuracy compared with the classic SVM, while the training is significantly faster than for several other SVM classifiers.
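A minimal sketch of the linear-RBF multikernel idea, assuming a simple fixed-weight sum of the two kernels; the paper's cohesive term slices and automatic weighting are not reproduced here. The construction is legitimate for an SVM because any positively weighted sum of kernels is itself a valid kernel.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.metrics.pairwise import linear_kernel, rbf_kernel
from sklearn.svm import SVC

# Toy stand-in for a high-dimensional text matrix (hypothetical data).
X, y = make_classification(n_samples=200, n_features=50, random_state=0)

def multi_kernel(A, B, w=0.5, gamma=0.01):
    """Fixed-weight sum of a linear and an RBF kernel; a positively
    weighted sum of positive-definite kernels is itself a kernel."""
    return w * linear_kernel(A, B) + (1.0 - w) * rbf_kernel(A, B, gamma=gamma)

clf = SVC(kernel="precomputed")
clf.fit(multi_kernel(X, X), y)              # train on the combined Gram matrix
train_acc = clf.score(multi_kernel(X, X), y)
```

In practice the kernel weight and the RBF width would be tuned, which is precisely the brute-force search the paper's automatic parameterization is designed to avoid.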
A ubiquitous ice size bias in simulations of tropical deep convection
NASA Astrophysics Data System (ADS)
Stanford, McKenna W.; Varble, Adam; Zipser, Ed; Strapp, J. Walter; Leroy, Delphine; Schwarzenboeck, Alfons; Potts, Rodney; Protat, Alain
2017-08-01
The High Altitude Ice Crystals - High Ice Water Content (HAIC-HIWC) joint field campaign produced aircraft retrievals of total condensed water content (TWC), hydrometeor particle size distributions (PSDs), and vertical velocity (w) in high ice water content regions of mature and decaying tropical mesoscale convective systems (MCSs). The resulting dataset is used here to explore causes of the commonly documented high bias in radar reflectivity within cloud-resolving simulations of deep convection. This bias has been linked to overly strong simulated convective updrafts lofting excessive condensate mass but is also modulated by parameterizations of hydrometeor size distributions, single particle properties, species separation, and microphysical processes. Observations are compared with three Weather Research and Forecasting model simulations of an observed MCS using different microphysics parameterizations while controlling for w, TWC, and temperature. Two popular bulk microphysics schemes (Thompson and Morrison) and one bin microphysics scheme (fast spectral bin microphysics) are compared. For temperatures between -10 and -40 °C and TWC > 1 g m-3, all microphysics schemes produce median mass diameters (MMDs) that are generally larger than observed, and the precipitating ice species that controls this size bias varies by scheme, temperature, and w. Despite a much greater number of samples, all simulations fail to reproduce observed high-TWC conditions ( > 2 g m-3) between -20 and -40 °C in which only a small fraction of condensate mass is found in relatively large particle sizes greater than 1 mm in diameter. Although more mass is distributed to large particle sizes relative to those observed across all schemes when controlling for temperature, w, and TWC, differences with observations are significantly variable between the schemes tested. 
As a result, this bias is hypothesized to partly result from errors in parameterized hydrometeor PSD and single particle properties, but because it is present in all schemes, it may also partly result from errors in parameterized microphysical processes present in all schemes. Because of these ubiquitous ice size biases, the frequently used microphysical parameterizations evaluated in this study inherently produce a high bias in convective reflectivity for a wide range of temperatures, vertical velocities, and TWCs.
Parameterization of photon beam dosimetry for a linear accelerator
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lebron, Sharon; Barraclough, Brendan; Lu, Bo
2016-02-15
Purpose: In radiation therapy, accurate data acquisition of photon beam dosimetric quantities is important for (1) beam modeling data input into a treatment planning system (TPS), (2) comparing measured and TPS-modeled data, (3) the quality assurance process for a linear accelerator's (Linac) beam characteristics, and (4) the establishment of a standard data set for comparison with other data. Parameterization of the photon beam dosimetry creates a data set that is portable and easy to implement for different applications such as those previously mentioned. The aim of this study is to develop methods to parameterize photon beam dosimetric quantities, including percentage depth doses (PDDs), profiles, and total scatter output factors (S_cp). Methods: S_cp, PDDs, and profiles for different field sizes, depths, and energies were measured for a Linac using a cylindrical 3D water scanning system. All data were smoothed for the analysis, and profile data were also centered, symmetrized, and geometrically scaled. The S_cp data were analyzed using an exponential function. The inverse square factor was removed from the PDD data before modeling, and the data were subsequently analyzed using exponential functions. For profile modeling, one half-side of the profile was divided into three regions described by exponential, sigmoid, and Gaussian equations. All of the analytical functions are specific to field size, energy, depth, and, in the case of profiles, scan direction. The model's parameters were determined using the minimal amount of measured data necessary. The model's accuracy was evaluated via the calculation of absolute differences between the measured (processed) and calculated data in low-gradient regions and distance-to-agreement analysis in high-gradient regions. Finally, the results of dosimetric quantities obtained by the fitted models for a different machine were also assessed.
Results: All of the differences in the PDDs' buildup and the profiles' penumbra regions were less than 2 and 0.5 mm, respectively. The differences in the low-gradient regions were 0.20% ± 0.20% (<1% for all) and 0.50% ± 0.35% (<1% for all) for PDDs and profiles, respectively. For the S_cp data, all of the absolute differences were less than 0.5%. Conclusions: This novel analytical model with minimal measurement requirements was proven to accurately calculate PDDs, profiles, and S_cp for different field sizes, depths, and energies.
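As a hedged illustration of the parameterization idea, the sketch below fits a saturating-exponential model of output factor versus square field size to synthetic data. The functional form, the field sizes, and every parameter value are assumptions for the example, not the paper's fitted model.

```python
import numpy as np
from scipy.optimize import curve_fit

def scp_model(s, a, b, c):
    """Assumed saturating-exponential output-factor model:
    S_cp(s) = a - b * exp(-c * s), with s the square field size in cm."""
    return a - b * np.exp(-c * s)

# Synthetic "measurements" generated from known parameters (hypothetical).
field_sizes = np.array([3.0, 5.0, 7.0, 10.0, 15.0, 20.0, 30.0, 40.0])
measured = scp_model(field_sizes, 1.08, 0.25, 0.08)

# Least-squares fit recovers the generating parameters.
popt, _ = curve_fit(scp_model, field_sizes, measured, p0=[1.0, 0.2, 0.1])
```

The appeal of such a parameterization is exactly what the abstract describes: a handful of fitted coefficients per energy and scan direction replaces a large table of measured points, making the data set portable between systems.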
NASA Astrophysics Data System (ADS)
Gebler, S.; Hendricks Franssen, H.-J.; Kollet, S. J.; Qu, W.; Vereecken, H.
2017-04-01
The prediction of the spatial and temporal variability of land surface states and fluxes with land surface models at high spatial resolution is still a challenge. This study compares simulation results from TerrSysMP, which couples a 3D variably saturated groundwater flow model (ParFlow) to the Community Land Model (CLM), for a 38 ha managed grassland headwater catchment in the Eifel (Germany), with soil water content (SWC) measurements from a wireless sensor network, actual evapotranspiration recorded by lysimeters and eddy covariance stations, and discharge observations. TerrSysMP was discretized with a 10 × 10 m lateral resolution, variable vertical resolution (0.025-0.575 m), and the following parameterization strategies for the subsurface soil hydraulic parameters: (i) completely homogeneous, (ii) homogeneous parameters for different soil horizons, (iii) different parameters for each soil unit and soil horizon, and (iv) heterogeneous stochastic realizations. Hydraulic conductivity and Mualem-Van Genuchten parameters in these simulations were sampled from probability density functions constructed from either (i) soil texture measurements and Rosetta pedotransfer functions (ROS) or (ii) soil hydraulic parameters estimated by 1D inverse modelling using shuffled complex evolution (SCE). The results indicate that the spatial variability of SWC at the scale of a small headwater catchment is dominated by topography and spatially heterogeneous soil hydraulic parameters. The spatial variability of the soil water content thereby increases as a function of the heterogeneity of the soil hydraulic parameters. For lower levels of complexity, the spatial variability of the SWC was underrepresented, in particular for the ROS simulations.
Whereas all model simulations were able to reproduce the seasonal evapotranspiration variability, the poor discharge simulations with high model bias are likely related to short-term ET dynamics and the lack of information about bedrock characteristics and an on-site drainage system in the uncalibrated model. In general, simulation performance was better for the SCE setups. The SCE-simulations had a higher inverse air entry parameter resulting in SWC dynamics in better correspondence with data than the ROS simulations during dry periods. This illustrates that small scale measurements of soil hydraulic parameters cannot be transferred to the larger scale and that interpolated 1D inverse parameter estimates result in an acceptable performance for the catchment.
Mihailovic, Dragutin T; Alapaty, Kiran; Podrascanin, Zorica
2009-03-01
This work improves the parameterization of processes in the atmospheric boundary layer (ABL) and surface layer in air quality and chemical transport models. To do so, an asymmetrical, convective, non-local scheme with varying upward mixing rates is combined with a non-local, turbulent kinetic energy scheme for vertical diffusion (COM). For its design, an empirically derived function of the dimensionless height in the ABL raised to the fourth power is suggested. We also suggest a new method for calculating the in-canopy resistance for dry deposition over a vegetated surface. The upward mixing rate forming the surface layer is parameterized using the sensible heat flux and the friction and convective velocities. Upward mixing rates varying with height are scaled with the amount of turbulent kinetic energy in each layer, while the downward mixing rates are derived from mass conservation. The vertical eddy diffusivity is parameterized using the mean turbulent velocity scale obtained by vertical integration within the ABL. In-canopy resistance is calculated by integrating the inverse turbulent transfer coefficient inside the canopy from the effective ground roughness length to the canopy source height and, further, from there to the canopy height. This combination of schemes provides a less rapid mass transport out of the surface layer into other layers, during both convective and non-convective periods, than other local and non-local schemes parameterizing mixing processes in the ABL. The suggested method for calculating the in-canopy resistance for dry deposition over a vegetated surface differs remarkably from the commonly used one, particularly over forest vegetation.
In this paper, we studied the performance of a non-local, turbulent, kinetic energy scheme for vertical diffusion combined with a non-local, convective mixing scheme with varying upward mixing in the atmospheric boundary layer (COM) and its impact on the concentration of pollutants calculated with chemical and air-quality models. In addition, this scheme was also compared with a commonly used, local, eddy-diffusivity scheme. Simulated concentrations of NO2 by the COM scheme and new parameterization of the in-canopy resistance are closer to the observations when compared to those obtained from using the local eddy-diffusivity scheme. Concentrations calculated with the COM scheme and new parameterization of in-canopy resistance, are in general higher and closer to the observations than those obtained by the local, eddy-diffusivity scheme (on the order of 15-22%). To examine the performance of the scheme, simulated and measured concentrations of a pollutant (NO2) were compared for the years 1999 and 2002. The comparison was made for the entire domain used in simulations performed by the chemical European Monitoring and Evaluation Program Unified model (version UNI-ACID, rv2.0) where schemes were incorporated.
A new parameterization of the post-fire snow albedo effect
NASA Astrophysics Data System (ADS)
Gleason, K. E.; Nolin, A. W.
2013-12-01
Mountain snowpack serves as an important natural reservoir of water: recharging aquifers, sustaining streams, and providing important ecosystem services. Reduced snowpacks and earlier snowmelt have been shown to affect fire size, frequency, and severity in the western United States. In turn, wildfire disturbance affects patterns of snow accumulation and ablation by reducing canopy interception, increasing turbulent fluxes, and modifying the surface radiation balance. Recent work shows that after a high-severity forest fire, approximately 60% more solar radiation reaches the snow surface due to the reduction in canopy density. Also, significant amounts of pyrogenic carbon particles and larger burned woody debris (BWD) are shed from standing charred trees, which concentrate on the snowpack, darken its surface, and reduce snow albedo by 50% during ablation. Although the post-fire forest environment drives a substantial increase in net shortwave radiation at the snowpack surface, driving earlier and more rapid melt, hydrologic models do not explicitly incorporate forest fire disturbance effects on snowpack dynamics. The objective of this study was to parameterize the post-fire snow albedo effect due to BWD deposition on snow to better represent forest fire disturbance in modeling of snow-dominated hydrologic regimes. Based on empirical results from winter experiments, in-situ snow monitoring, and remote sensing data from a recent forest fire in the Oregon High Cascades, we characterized the post-fire snow albedo effect and developed a simple parameterization of snowpack albedo decay in the post-fire forest environment. We modified the recession coefficient in the algorithm α = α₀ + K exp(−n·r), where α = snowpack albedo, α₀ = minimum snowpack albedo (≈0.4), K = a constant (≈0.44), n = number of days since the last major snowfall, and r = recession coefficient [Rohrer and Braun, 1994].
Our parameterization quantified BWD deposition and snow albedo decay rates and related these forest disturbance effects to radiative heating and snow melt rates. We validated our parameterization of the post-fire snow albedo effect at the plot scale using a physically-based, spatially-distributed snow accumulation and melt model, and in-situ eddy covariance and snow monitoring data. This research quantified wildfire impacts to snow dynamics in the Oregon High Cascades, and provided a new parameterization of post-fire drivers to changes in high elevation winter water storage.
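The albedo-decay algorithm above can be written down directly. The recession coefficient value used below is an illustrative placeholder, since the paper's fitted post-fire value is not given in this summary.

```python
import math

def snow_albedo(n_days, alpha_min=0.4, K=0.44, r=0.1):
    """Snowpack albedo decay, alpha = alpha_min + K * exp(-n * r)
    [Rohrer and Braun, 1994], with n_days the number of days since
    the last major snowfall.  r = 0.1 is an illustrative placeholder,
    not the study's fitted post-fire recession coefficient."""
    return alpha_min + K * math.exp(-n_days * r)

# Fresh snow (n = 0) starts at alpha_min + K = 0.84 and decays
# toward alpha_min as snow-free days accumulate.
fresh = snow_albedo(0)
aged = snow_albedo(30)
```

The parameterization's effect enters through r: a larger post-fire recession coefficient drives the albedo toward its dark minimum faster, increasing absorbed shortwave radiation and accelerating melt.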
NASA Astrophysics Data System (ADS)
Johnson, E. S.; Rupper, S.; Steenburgh, W. J.; Strong, C.; Kochanski, A.
2017-12-01
Climate model outputs are often used as inputs to glacier energy and mass balance models, which are essential glaciological tools for testing glacier sensitivity, providing mass balance estimates in regions with little glaciological data, and providing a means to model future changes. Climate model outputs, however, are sensitive to the choice of physical parameterizations, such as those for cloud microphysics, land-surface schemes, surface layer options, etc. Furthermore, glacier mass balance (MB) estimates that use these climate model outputs as inputs are likely sensitive to the specific parameterization schemes, but this sensitivity has not been carefully assessed. Here we evaluate the sensitivity of glacier MB estimates across the Indus Basin to the selection of cloud microphysics parameterizations in the Weather Research and Forecasting Model (WRF). Cloud microphysics parameterizations differ in how they specify the size distributions of hydrometeors, the rate of graupel and snow production, their fall speed assumptions, the rates at which they convert from one hydrometeor type to the other, etc. While glacier MB estimates are likely sensitive to other parameterizations in WRF, our preliminary results suggest that glacier MB is highly sensitive to the timing, frequency, and amount of snowfall, which is influenced by the cloud microphysics parameterization. To this end, the Indus Basin is an ideal study site, as it has both westerly (winter) and monsoonal (summer) precipitation influences, is a data-sparse region (so models are critical), and still has lingering questions as to glacier importance for local and regional resources. WRF is run at a 4 km grid scale using two commonly used parameterizations: the Thompson scheme and the Goddard scheme. On average, these parameterizations result in minimal differences in annual precipitation. However, localized regions exhibit differences in precipitation of up to 3 m w.e. a-1. 
The different schemes also impact the radiative budgets over the glacierized areas. Our results show that glacier MB estimates can differ by up to 45% depending on the chosen cloud microphysics scheme. These findings highlight the need to better account for uncertainties in meteorological inputs into glacier energy and mass balance models.
Engelmann Spruce Site Index Models: A Comparison of Model Functions and Parameterizations
Nigh, Gordon
2015-01-01
Engelmann spruce (Picea engelmannii Parry ex Engelm.) is a high-elevation species found in western Canada and western USA. As this species becomes increasingly targeted for harvesting, better height growth information is required for good management of this species. This project was initiated to fill this need. The objective of the project was threefold: develop a site index model for Engelmann spruce; compare the fits and modelling and application issues between three model formulations and four parameterizations; and more closely examine the grounded-Generalized Algebraic Difference Approach (g-GADA) model parameterization. The model fitting data consisted of 84 stem analyzed Engelmann spruce site trees sampled across the Engelmann Spruce – Subalpine Fir biogeoclimatic zone. The fitted models were based on the Chapman-Richards function, a modified Hossfeld IV function, and the Schumacher function. The model parameterizations that were tested are indicator variables, mixed-effects, GADA, and g-GADA. Model evaluation was based on the finite-sample corrected version of Akaike’s Information Criteria and the estimated variance. Model parameterization had more of an influence on the fit than did model formulation, with the indicator variable method providing the best fit, followed by the mixed-effects modelling (9% increase in the variance for the Chapman-Richards and Schumacher formulations over the indicator variable parameterization), g-GADA (optimal approach) (335% increase in the variance), and the GADA/g-GADA (with the GADA parameterization) (346% increase in the variance). Factors related to the application of the model must be considered when selecting the model for use as the best fitting methods have the most barriers in their application in terms of data and software requirements. PMID:25853472
Constraints to Dark Energy Using PADE Parameterizations
NASA Astrophysics Data System (ADS)
Rezaei, M.; Malekjani, M.; Basilakos, S.; Mehrabi, A.; Mota, D. F.
2017-07-01
We put constraints on dark energy (DE) properties using the Padé parameterization and compare them to the same constraints using the Chevallier-Polarski-Linder (CPL) and ΛCDM parameterizations, at both the background and the perturbation levels. The DE equation-of-state parameter of the models is derived following the mathematical treatment of the Padé expansion. Unlike the CPL parameterization, the Padé approximation provides forms of the equation-of-state parameter that avoid divergence in the far future. Initially we perform a likelihood analysis in order to put constraints on the model parameters using solely background expansion data, and we find that all parameterizations are consistent with each other. Then, combining the expansion and the growth rate data, we test the viability of the Padé parameterizations and compare them with the CPL and ΛCDM models, respectively. Specifically, we find that the growth rate of the current Padé parameterizations is lower than that of the ΛCDM model at low redshifts, while the differences among the models are negligible at high redshifts. In this context, we provide for the first time a growth index of linear matter perturbations in Padé cosmologies. Considering that DE is homogeneous, we recover the well-known asymptotic value of the growth index, namely γ∞ = 3(w∞ − 1)/(6w∞ − 5), while in the case of clustered DE we obtain γ∞ ≃ 3w∞(3w∞ − 5)/[(6w∞ − 5)(3w∞ − 1)]. Finally, we generalize the growth index analysis to the case where γ is allowed to vary with redshift, and we find that the form of γ(z) in the Padé parameterization extends that of the CPL and ΛCDM cosmologies, respectively.
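The two asymptotic growth-index expressions can be checked numerically; for instance, both reduce to the familiar γ∞ = 6/11 ≈ 0.545 in the cosmological-constant limit w∞ = −1.

```python
def gamma_inf_homogeneous(w):
    """Asymptotic growth index for homogeneous dark energy:
    gamma_inf = 3(w - 1) / (6w - 5)."""
    return 3.0 * (w - 1.0) / (6.0 * w - 5.0)

def gamma_inf_clustered(w):
    """Asymptotic growth index for clustered dark energy:
    gamma_inf = 3w(3w - 5) / ((6w - 5)(3w - 1))."""
    return 3.0 * w * (3.0 * w - 5.0) / ((6.0 * w - 5.0) * (3.0 * w - 1.0))

# Cosmological-constant limit w = -1: both formulas give 6/11.
g_hom = gamma_inf_homogeneous(-1.0)
g_clu = gamma_inf_clustered(-1.0)
```

That the clustered expression also collapses to 6/11 at w∞ = −1 is expected: when the DE equation of state is exactly −1 it has no perturbations to cluster, so the two cases must agree.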
NASA Astrophysics Data System (ADS)
Kosovic, B.; Jimenez, P. A.; Haupt, S. E.; Martilli, A.; Olson, J.; Bao, J. W.
2017-12-01
At present, the planetary boundary layer (PBL) parameterizations available in most numerical weather prediction (NWP) models are one-dimensional. One-dimensional parameterizations are based on the assumption of horizontal homogeneity. This assumption is appropriate for grid cell sizes greater than 10 km. However, for mesoscale simulations of flows in complex terrain with grid cell sizes below 1 km, the assumption of horizontal homogeneity is violated. Applying a one-dimensional PBL parameterization to high-resolution mesoscale simulations in complex terrain could result in significant errors. For high-resolution mesoscale simulations of flows in complex terrain, we have therefore developed and implemented a three-dimensional (3D) PBL parameterization in the Weather Research and Forecasting (WRF) model. The implementation of the 3D PBL scheme is based on the developments outlined by Mellor and Yamada (1974, 1982) and uses a purely algebraic model (level 2) to diagnose the turbulent fluxes. To evaluate the performance of the 3D PBL model, we use observations from the Wind Forecast Improvement Project 2 (WFIP2). The WFIP2 field study took place in the Columbia River Gorge area from 2015 to 2017. We focus on selected cases when physical phenomena of significance for wind energy applications, such as mountain waves, topographic wakes, and gap flows, were observed. Our assessment of the 3D PBL parameterization also considers a large-eddy simulation (LES). We carried out a nested LES with grid cell sizes of 30 m and 10 m covering a large fraction of the WFIP2 study area. Both LES domains were discretized using 6000 x 3000 x 200 grid cells in the zonal, meridional, and vertical directions, respectively. The LES results are used to assess the relative magnitude of horizontal gradients of turbulent stresses and fluxes in comparison to vertical gradients.
The presentation will highlight the advantages of the 3D PBL scheme in regions of complex terrain.
Modelling storm development and the impact when introducing waves, sea spray and heat fluxes
NASA Astrophysics Data System (ADS)
Wu, Lichuan; Rutgersson, Anna; Sahlée, Erik
2015-04-01
In high wind speed conditions, sea spray generated by intense wave breaking has a large influence on the wind stress and heat fluxes. Measurements show that the drag coefficient decreases at high wind speeds. The sea spray generation function (SSGF), an important term in wind stress parameterizations at high wind speeds, is usually treated as a function of wind speed or friction velocity. In this study, we introduce a wave-state-dependent SSGF and a wave-age-dependent Charnock number into a high-wind-speed wind stress parameterization (Kudryavtsev et al., 2011, 2012). The proposed wind stress parameterization and the sea spray heat flux parameterization of Andreas et al. (2014) were applied in an atmosphere-wave coupled model and tested on four storm cases. Compared with measurements from the FINO1 platform in the North Sea, the new wind stress parameterization reduces wind forecast errors in the high wind speed range, but not at low wind speeds. When only the sea spray impact on wind stress is included, the storms intensify (lower minimum sea level pressure and higher maximum wind speed) and the air temperature is lowered (increasing the errors). When only the sea spray impact on the heat fluxes is included, model performance on storm tracks and air temperature improves, but storm intensity changes little. When the sea spray impacts on both the wind stress and the heat fluxes are taken into account, the model shows the best performance across all experiments for minimum sea level pressure, maximum wind speed, and air temperature. Andreas, E. L., Mahrt, L., and Vickers, D. (2014). An improved bulk air-sea surface flux algorithm, including spray-mediated transfer. Quarterly Journal of the Royal Meteorological Society. Kudryavtsev, V. and Makin, V. (2011). Impact of ocean spray on the dynamics of the marine atmospheric boundary layer. Boundary-Layer Meteorology, 140(3):383-410. Kudryavtsev, V., Makin, V., and S, Z. (2012). On the sea-surface drag and heat/mass transfer at strong winds.
Technical report, Royal Netherlands Meteorological Institute.
A Novel Shape Parameterization Approach
NASA Technical Reports Server (NTRS)
Samareh, Jamshid A.
1999-01-01
This paper presents a novel parameterization approach for complex shapes suitable for a multidisciplinary design optimization application. The approach consists of two basic concepts: (1) parameterizing the shape perturbations rather than the geometry itself and (2) performing the shape deformation by means of the soft objects animation algorithms used in computer graphics. Because the formulation presented in this paper is independent of grid topology, we can treat computational fluid dynamics and finite element grids in a similar manner. The proposed approach is simple, compact, and efficient. Also, the analytical sensitivity derivatives are easily computed for use in a gradient-based optimization. This algorithm is suitable for low-fidelity (e.g., linear aerodynamics and equivalent laminated plate structures) and high-fidelity analysis tools (e.g., nonlinear computational fluid dynamics and detailed finite element modeling). This paper contains the implementation details of parameterizing for planform, twist, dihedral, thickness, and camber. The results are presented for a multidisciplinary design optimization application consisting of nonlinear computational fluid dynamics, detailed computational structural mechanics, performance, and a simple propulsion module.
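The two central ideas, parameterizing shape perturbations rather than the geometry itself and deforming grid points with smooth soft-object influence functions, can be illustrated with a toy sketch. This is not Samareh's implementation; the Gaussian influence kernel and all names are illustrative assumptions:

```python
import numpy as np

def deform(points, centers, amplitudes, radius):
    """Perturb grid points with smooth, overlapping 'soft object' influences.

    points:     (N, 3) grid node coordinates (CFD or FEM, topology-free)
    centers:    (M, 3) locations of the shape design variables
    amplitudes: (M, 3) perturbation vectors (the design parameters)
    radius:     influence radius of each control point (assumed Gaussian)
    """
    out = points.astype(float).copy()
    for c, a in zip(centers, amplitudes):
        d2 = np.sum((points - c) ** 2, axis=1)
        w = np.exp(-d2 / radius**2)        # smooth falloff weight
        out += w[:, None] * a              # deformation is linear in amplitudes,
    return out                             # so sensitivities are analytic

# Five nodes along x; lift the node at the origin in z.
grid = np.zeros((5, 3))
grid[:, 0] = np.arange(5.0)
new = deform(grid,
             centers=np.array([[0.0, 0.0, 0.0]]),
             amplitudes=np.array([[0.0, 0.0, 1.0]]),
             radius=1.0)
```

Because the deformation is linear in the amplitudes, the sensitivity of any node coordinate with respect to a design variable is simply the precomputed weight, which is the property that makes such schemes attractive for gradient-based optimization.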
Development of the WRF-CO2 4D-Var assimilation system v1.0
NASA Astrophysics Data System (ADS)
Zheng, Tao; French, Nancy H. F.; Baxter, Martin
2018-05-01
Regional atmospheric CO2 inversions commonly use Lagrangian particle trajectory model simulations to calculate the required influence function, which quantifies the sensitivity of a receptor to flux sources. In this paper, an adjoint-based four-dimensional variational (4D-Var) assimilation system, WRF-CO2 4D-Var, is developed to provide an alternative approach. This system is developed based on the Weather Research and Forecasting (WRF) modeling system, including the system coupled to chemistry (WRF-Chem), with tangent linear and adjoint codes (WRFPLUS), and with data assimilation (WRFDA), all in version 3.6. In WRF-CO2 4D-Var, CO2 is modeled as a tracer and its feedback to meteorology is ignored. This configuration allows most WRF physical parameterizations to be used in the assimilation system without incurring a large amount of code development. WRF-CO2 4D-Var solves for the optimized CO2 flux scaling factors in a Bayesian framework. Two variational optimization schemes are implemented for the system: the first uses the limited memory Broyden-Fletcher-Goldfarb-Shanno (BFGS) minimization algorithm (L-BFGS-B) and the second uses the Lanczos conjugate gradient (CG) in an incremental approach. WRFPLUS forward, tangent linear, and adjoint models are modified to include the physical and dynamical processes involved in the atmospheric transport of CO2. The system is tested by simulations over a domain covering the continental United States at 48 km × 48 km grid spacing. The accuracy of the tangent linear and adjoint models is assessed by comparing against finite difference sensitivity. The system's effectiveness for CO2 inverse modeling is tested using pseudo-observation data. The results of the sensitivity and inverse modeling tests demonstrate the potential usefulness of WRF-CO2 4D-Var for regional CO2 inversions.
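As a schematic of the variational optimization described above (not the WRF-CO2 4D-Var code itself), the following toy example minimizes a Bayesian 4D-Var-style cost function for flux scaling factors with SciPy's L-BFGS-B, the same minimization algorithm family named in the abstract; the linear observation operator and covariances are invented stand-ins for the transport model:

```python
import numpy as np
from scipy.optimize import minimize

# Toy linear observation operator mapping 4 flux scaling factors to 20
# CO2 observations (a stand-in for the WRF transport model).
rng = np.random.default_rng(0)
H = rng.normal(size=(20, 4))
x_true = np.array([1.2, 0.8, 1.5, 1.0])
y = H @ x_true + rng.normal(scale=0.01, size=20)

x_prior = np.ones(4)           # prior flux scaling factors
B_inv = np.eye(4) / 0.5**2     # inverse prior (background) covariance
R_inv = np.eye(20) / 0.01**2   # inverse observation covariance

def cost_and_grad(x):
    """Bayesian 4D-Var cost J(x) and its gradient (the adjoint step)."""
    dx, dy = x - x_prior, H @ x - y
    J = 0.5 * dx @ B_inv @ dx + 0.5 * dy @ R_inv @ dy
    g = B_inv @ dx + H.T @ (R_inv @ dy)
    return J, g

res = minimize(cost_and_grad, x_prior, jac=True, method="L-BFGS-B")
```

In the real system the gradient is supplied by the WRFPLUS adjoint model rather than an explicit matrix transpose, but the structure of the minimization is the same.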
Rafique, Rashad; Fienen, Michael N.; Parkin, Timothy B.; Anex, Robert P.
2013-01-01
DayCent is a biogeochemical model of intermediate complexity widely used to simulate greenhouse gases (GHG), soil organic carbon, and nutrients in crop, grassland, forest, and savannah ecosystems. Although this model has been applied to a wide range of ecosystems, it is still typically parameterized through a traditional "trial and error" approach and has not been calibrated using statistical inverse modelling (i.e. algorithmic parameter estimation). The aim of this study is to establish and demonstrate a procedure for calibration of DayCent to improve estimation of GHG emissions. We coupled DayCent with the parameter estimation (PEST) software for inverse modelling. The PEST software can be used for calibration through regularized inversion as well as for model sensitivity and uncertainty analysis. The DayCent model was analysed and calibrated using N2O flux data collected over 2 years at the Iowa State University Agronomy and Agricultural Engineering Research Farms, Boone, IA. Crop year 2003 data were used for model calibration and 2004 data were used for validation. The optimization of DayCent model parameters using PEST significantly reduced model residuals relative to the default DayCent parameter values. Parameter estimation improved the model performance by reducing the sum of weighted squared residuals between measured and modelled outputs by up to 67%. For the calibration period, simulation with the default model parameter values underestimated the mean daily N2O flux by 98%. After parameter estimation, the model underestimated the mean daily fluxes by 35%. During the validation period, the calibrated model reduced the sum of weighted squared residuals by 20% relative to the default simulation. The sensitivity analysis performed provides important insights into the model structure, offering guidance for model improvement.
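The underlying estimation problem, minimizing a weighted sum of squared residuals over model parameters, which PEST addresses with a Gauss-Levenberg-Marquardt scheme, can be sketched as follows. The toy flux model and data are invented; the actual DayCent-PEST coupling is file-based and far more involved:

```python
import numpy as np
from scipy.optimize import least_squares

t = np.linspace(0.0, 10.0, 40)

def model(p, t):
    """Toy emission-pulse model standing in for a DayCent output time series."""
    return p[0] * t * np.exp(-p[1] * t)

p_true = np.array([2.0, 0.7])
obs = model(p_true, t)                 # synthetic "measured" N2O fluxes
weights = np.ones_like(t)              # observation weights (1/sigma)

def weighted_residuals(p):
    return weights * (model(p, t) - obs)

# Levenberg-Marquardt, the same family of solver PEST uses internally.
fit = least_squares(weighted_residuals, x0=[1.5, 0.5], method="lm")
phi = float(np.sum(fit.fun**2))        # PEST-style objective function
```

The weights play the same role as PEST observation weights: observations with smaller assumed error variance pull harder on the fitted parameters.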
Stochastic parameterization of shallow cumulus convection estimated from high-resolution model data
NASA Astrophysics Data System (ADS)
Dorrestijn, Jesse; Crommelin, Daan T.; Siebesma, A. Pier.; Jonker, Harm J. J.
2013-02-01
In this paper, we report on the development of a methodology for stochastic parameterization of convective transport by shallow cumulus convection in weather and climate models. We construct a parameterization based on Large-Eddy Simulation (LES) data. These simulations resolve the turbulent fluxes of heat and moisture and are based on a typical case of non-precipitating shallow cumulus convection above sea in the trade-wind region. Using clustering, we determine a finite number of turbulent flux pairs for heat and moisture that are representative for the pairs of flux profiles observed in these simulations. In the stochastic parameterization scheme proposed here, the convection scheme jumps randomly between these pre-computed pairs of turbulent flux profiles. The transition probabilities are estimated from the LES data, and they are conditioned on the resolved-scale state in the model column. Hence, the stochastic parameterization is formulated as a data-inferred conditional Markov chain (CMC), where each state of the Markov chain corresponds to a pair of turbulent heat and moisture fluxes. The CMC parameterization is designed to emulate, in a statistical sense, the convective behaviour observed in the LES data. The CMC is tested in single-column model (SCM) experiments. The SCM is able to reproduce the ensemble spread of the temperature and humidity that was observed in the LES data. Furthermore, there is a good similarity between time series of the fractions of the discretized fluxes produced by SCM and observed in LES.
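The conditional Markov chain idea can be sketched in a few lines: the pre-computed flux pairs act as Markov states, and the transition matrix depends on the discretized resolved-scale state. The two-state transition probabilities and flux values below are invented for illustration, not taken from the LES data:

```python
import numpy as np

# Representative (heat, moisture) turbulent flux pairs obtained by
# clustering LES profiles; two invented scalar pairs for illustration.
flux_pairs = [(5.0, 1.0), (50.0, 8.0)]     # e.g. quiescent vs convective

# Transition probabilities conditioned on the discretized resolved-scale
# column state (0 = stable, 1 = unstable); each row sums to one.
P = {
    0: np.array([[0.9, 0.1],
                 [0.6, 0.4]]),
    1: np.array([[0.3, 0.7],
                 [0.1, 0.9]]),
}

def step(state, resolved, rng):
    """Jump randomly to the next flux pair, conditioned on the resolved state."""
    return rng.choice(len(flux_pairs), p=P[resolved][state])

rng = np.random.default_rng(1)
state = 0
trajectory = []
for resolved in [0, 0, 1, 1, 1, 0]:        # resolved-scale forcing sequence
    state = step(state, resolved, rng)
    trajectory.append(flux_pairs[state])
```

In the full scheme the states are whole flux profiles rather than scalars, and the conditioning variable is a discretization of the model column's resolved thermodynamic state.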
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barnes, Hannah C.; Houze, Robert A.
To equitably compare the spatial pattern of ice microphysical processes produced by three microphysical parameterizations with each other, observations, and theory, simulations of tropical oceanic mesoscale convective systems (MCSs) in the Weather Research and Forecasting (WRF) model were forced to develop the same mesoscale circulations as observations by assimilating radial velocity data from a Doppler radar. The same general layering of microphysical processes was found in observations and simulations, with deposition anywhere above the 0°C level, aggregation at and above the 0°C level, melting at and below the 0°C level, and riming near the 0°C level. Thus, this study is consistent with the layered ice microphysical pattern portrayed in previous conceptual models and indicated by dual-polarization radar data. Spatial variability of riming in the simulations suggests that riming in the midlevel inflow is related to convective-scale vertical velocity perturbations. Finally, this study sheds light on limitations of current generally available bulk microphysical parameterizations. In each parameterization, the layers in which aggregation and riming took place were generally too thick and the frequency of riming was generally too high compared to the observations and theory. Additionally, none of the parameterizations produced similar details in every microphysical spatial pattern. Discrepancies in the patterns of microphysical processes between parameterizations likely factor into creating substantial differences in model reflectivity patterns. It is concluded that improved parameterizations of ice-phase microphysics will be essential to obtain reliable, consistent model simulations of tropical oceanic MCSs.
NASA Astrophysics Data System (ADS)
Hayley, Kevin; Schumacher, J.; MacMillan, G. J.; Boutin, L. C.
2014-05-01
Expanding groundwater datasets collected by automated sensors, and improved groundwater databases, have caused a rapid increase in the calibration data available for groundwater modeling projects. Improved methods of subsurface characterization have increased the need for model complexity to represent geological and hydrogeological interpretations. The larger calibration datasets and the need for meaningful predictive uncertainty analysis have both increased the degree of parameterization necessary during model calibration. Due to these competing demands, modern groundwater modeling efforts require a massive degree of parallelization in order to remain computationally tractable. A methodology for the calibration of highly parameterized, computationally expensive models using the Amazon EC2 cloud computing service is presented. The calibration of a regional-scale model of groundwater flow in Alberta, Canada, is provided as an example. The model covers a 30,865-km2 domain and includes 28 hydrostratigraphic units. Aquifer properties were calibrated to more than 1,500 static hydraulic head measurements and 10 years of measurements during industrial groundwater use. Three regionally extensive aquifers were parameterized with spatially variable hydraulic conductivity fields, as was the areal recharge boundary condition, leading to 450 adjustable parameters in total. The PEST-based model calibration was parallelized on up to 250 computing nodes located on Amazon's EC2 servers.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, H; Dong, P; Xing, L
Purpose: Traditional radiotherapy inverse planning relies on weighting factors to phenomenologically balance the conflicting criteria for different structures. The resulting manual trial-and-error determination of the weights has long been recognized as the most time-consuming part of treatment planning. The purpose of this work is to develop an inverse planning framework that parameterizes the inter-structural dosimetric tradeoff with physically more meaningful quantities to simplify the search for a clinically sensible plan. Methods: A permissible dosimetric uncertainty is introduced for each of the structures to balance their conflicting dosimetric requirements. The inverse planning is then formulated as a convex feasibility problem, which aims to generate plans with acceptable dosimetric uncertainties. A sequential procedure (SP) is derived to decompose the model into three submodels that constrain the uncertainty in the planning target volume (PTV), the critical structures, and all other structures to spare, sequentially. The proposed technique is applied to plan a liver case and a head-and-neck case and compared with a conventional approach. Results: Our results show that the strategy is able to generate clinically sensible plans with little trial-and-error. In the liver IMRT case, the fractional volumes of liver and heart above 20 Gy are found to be 22% and 10%, respectively, which are 15.1% and 33.3% lower than those of the counterpart conventional plan while maintaining the same PTV coverage. The planning of the head-and-neck IMRT case shows the same level of success, with the DVHs for all organs at risk and the PTV very competitive with those of a counterpart plan. Conclusion: A new inverse planning framework has been established.
With physically more meaningful modeling of the inter-structural tradeoff, the technique enables us to substantially reduce the need for trial-and-error adjustment of the model parameters and opens new opportunities for incorporating prior knowledge to facilitate the treatment planning process.
An Overview of Numerical Weather Prediction on Various Scales
NASA Astrophysics Data System (ADS)
Bao, J.-W.
2009-04-01
The increasing public need for detailed weather forecasts, along with advances in computer technology, has motivated many research institutes and national weather forecasting centers to develop and run global as well as regional numerical weather prediction (NWP) models at high resolutions (i.e., with horizontal resolutions of ~10 km or finer for global models and 1 km or finer for regional models, and with ~60 vertical levels or more). The need to run NWP models at high horizontal and vertical resolutions requires the implementation of a non-hydrostatic dynamic core with a choice of horizontal grid configurations and vertical coordinates appropriate for high resolutions. Development of advanced numerics will also be needed for high-resolution global and regional models, in particular when the models are applied to transport problems and air quality applications. In addition to the challenges in numerics, the NWP community is also facing the challenge of developing physics parameterizations that are well suited for high-resolution NWP models. For example, when NWP models are run at resolutions of ~5 km or finer, the use of much more detailed microphysics parameterizations than those currently used in NWP models will become important. Another example is that regional NWP models at ~1 km or finer only partially resolve the convective energy-containing eddies in the lower troposphere. Parameterizations to account for the subgrid diffusion associated with unresolved turbulence still need to be developed. Further, physically sound parameterizations of air-sea interaction will be a critical component for tropical NWP models, particularly for hurricane prediction models. In this review presentation, the above issues will be elaborated on and approaches to address them will be discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marjanovic, Nikola; Mirocha, Jeffrey D.; Kosović, Branko
A generalized actuator line (GAL) wind turbine parameterization is implemented within the Weather Research and Forecasting model to enable high-fidelity large-eddy simulations of wind turbine interactions with boundary layer flows under realistic atmospheric forcing conditions. Numerical simulations using the GAL parameterization are evaluated against both an already implemented generalized actuator disk (GAD) wind turbine parameterization and two field campaigns that measured the inflow and near-wake regions of a single turbine. The representation of wake wind speed, variance, and vorticity distributions is examined by comparing fine-resolution GAL and GAD simulations and GAD simulations at both fine and coarse resolutions. The higher-resolution simulations show slightly larger and more persistent velocity deficits in the wake and substantially increased variance and vorticity when compared to the coarse-resolution GAD. The GAL generates distinct tip and root vortices that maintain coherence as helical tubes for approximately one rotor diameter downstream. Coarse-resolution simulations using the GAD produce aggregated wake characteristics similar to both fine-scale GAD and GAL simulations at a fraction of the computational cost. The GAL parameterization provides the capability to resolve near-wake physics, including vorticity shedding and wake expansion.
NASA Technical Reports Server (NTRS)
Chou, Ming-Dah; Lee, Kyu-Tae; Yang, Ping; Lau, William K. M. (Technical Monitor)
2002-01-01
Based on single-scattering optical properties pre-computed using an improved geometric optics method, the bulk mass absorption coefficient, single-scattering albedo, and asymmetry factor of ice particles have been parameterized as functions of the mean effective particle size of a mixture of ice habits. The parameterization has been applied to compute fluxes for sample clouds with various particle size distributions and assumed mixtures of particle habits. Compared to the parameterization for a single habit of hexagonal columns, the solar heating of clouds computed with the parameterization for a mixture of habits is smaller due to a smaller co-single-scattering albedo, whereas the net downward fluxes at the TOA and surface are larger due to a larger asymmetry factor. The maximum difference in the cloud heating rate is approx. 0.2 C per day, which occurs in clouds with an optical thickness greater than 3 and a solar zenith angle less than 45 degrees. The flux difference is less than 10 W per square meter for optical thicknesses ranging from 0.6 to 10 and the entire range of solar zenith angles. The maximum flux difference is approximately 3%, which occurs around an optical thickness of 1 and at high solar zenith angles.
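Parameterizations of this kind typically reduce pre-computed single-scattering properties to low-order fits in the mean effective particle size. A hedged sketch of that reduction step (the tabulated values and polynomial order are invented, not the paper's coefficients):

```python
import numpy as np

# Illustrative pre-computed single-scattering albedo versus mean effective
# particle size for a habit mixture (values invented, not the paper's).
d_eff = np.array([10.0, 20.0, 40.0, 60.0, 80.0, 100.0])   # micrometers
ssa = np.array([0.9995, 0.9980, 0.9940, 0.9890, 0.9840, 0.9790])

# Reduce the table to a low-order polynomial in effective size, the usual
# form such bulk parameterizations take inside radiation codes.
coeffs = np.polyfit(d_eff, ssa, 2)
ssa_at_50 = float(np.polyval(coeffs, 50.0))   # evaluate for d_eff = 50 um
```

A radiation scheme then evaluates the polynomial once per layer instead of interpolating in the full scattering database.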
NASA Astrophysics Data System (ADS)
Pasquier, B.; Holzer, M.; Frants, M.
2016-02-01
We construct a data-constrained mechanistic inverse model of the ocean's coupled phosphorus and iron cycles. The nutrient cycling is embedded in a data-assimilated steady global circulation. Biological nutrient uptake is parameterized in terms of nutrient, light, and temperature limitations on growth for two classes of phytoplankton that are not transported explicitly. A matrix formulation of the discretized nutrient tracer equations allows for efficient numerical solutions, which facilitates the objective optimization of the key biogeochemical parameters. The optimization minimizes the misfit between the modelled and observed nutrient fields of the current climate. We systematically assess the nonlinear response of the biological pump to changes in the aeolian iron supply for a variety of scenarios. Specifically, Green-function techniques are employed to quantify in detail the pathways and timescales with which those perturbations are propagated throughout the world oceans, determining the global teleconnections that mediate the response of the global ocean ecosystem. We confirm previous findings from idealized studies that increased iron fertilization decreases biological production in the subtropical gyres and we quantify the counterintuitive and asymmetric response of global productivity to increases and decreases in the aeolian iron supply.
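The advantage of the matrix formulation mentioned above is that the steady nutrient tracer equations become a single sparse linear solve. A minimal one-dimensional analogue (periodic advection-diffusion with linear uptake; all values invented and far simpler than the data-assimilated global circulation used in the study):

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import spsolve

# 1-D periodic advection-diffusion with linear uptake:
#   -kappa * d2c/dx2 + u * dc/dx + lam * c = source,  i.e.  A @ c = source
n = 50
dx, kappa, u, lam = 1.0, 0.5, 0.2, 0.05
source = np.zeros(n)
source[10] = 1.0                                  # localized nutrient supply

main = np.full(n, 2.0 * kappa / dx**2 + lam)
upper = np.full(n - 1, -kappa / dx**2 + u / (2.0 * dx))
lower = np.full(n - 1, -kappa / dx**2 - u / (2.0 * dx))
A = diags([main, upper, lower], [0, 1, -1]).tolil()
A[0, n - 1] = -kappa / dx**2 - u / (2.0 * dx)     # periodic wrap-around
A[n - 1, 0] = -kappa / dx**2 + u / (2.0 * dx)

c = spsolve(A.tocsr(), source)                    # steady tracer field
```

Because the solve is cheap and repeatable, the biogeochemical parameters multiplying such operators can be optimized efficiently against observed nutrient fields, which is the strategy the abstract describes.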
NASA Astrophysics Data System (ADS)
Hedelius, J.; Wennberg, P. O.; Wunch, D.; Roehl, C. M.; Podolske, J. R.; Hillyard, P.; Iraci, L. T.
2017-12-01
Greenhouse gas (GHG) emissions from California's South Coast Air Basin (SoCAB) have been studied extensively using a variety of tower, aircraft, remote sensing, emission inventory, and modeling studies. It is impractical to survey GHG fluxes from all urban areas and hot spots to the extent the SoCAB has been studied, but it can serve as a test location for scaling methods globally. We use a combination of remote sensing measurements from ground-based (Total Carbon Column Observing Network, TCCON) and space-based (Orbiting Carbon Observatory-2, OCO-2) sensors in an inversion to obtain the carbon dioxide flux from the SoCAB. We also perform a variety of sensitivity tests to see how the inversion performs under different model parameterizations. Fluxes do not depend significantly on the mixed layer depth, but are sensitive to the model surface layers (<5 m). Carbon dioxide fluxes are larger than those from bottom-up inventories by about 20% and, together with CO, show a significant weekend:weekday effect. Methane fluxes show little weekend change. Results also include flux estimates from sub-regions of the SoCAB. Larger top-down than bottom-up fluxes highlight the need for additional work on both approaches. Higher top-down fluxes could arise from sampling bias or model bias, or may indicate that bottom-up values underestimate sources. Lessons learned here may help in scaling up inversions to hundreds of urban systems using space-based observations.
NASA Astrophysics Data System (ADS)
Reinisch, E. C.; Feigl, K. L.; Cardiff, M. A.; Morency, C.; Kreemer, C.; Akerley, J.
2017-12-01
Time-dependent deformation has been observed at Brady Hot Springs using data from the Global Positioning System (GPS) and interferometric synthetic aperture radar (InSAR) [e.g., Ali et al. 2016, http://dx.doi.org/10.1016/j.geothermics.2016.01.008]. We seek to determine the geophysical process governing the observed subsidence. As two end-member hypotheses, we consider thermal contraction and a decrease in pore fluid pressure. A decrease in temperature would cause contraction in the subsurface and subsidence at the surface. A decrease in pore fluid pressure would allow the volume of pores to shrink and also produce subsidence. To simulate these processes, we use a dislocation model that assumes uniform elastic properties in a half space [Okada, 1985]. The parameterization consists of many cubic volume elements (voxels), each of which contracts by closing its three mutually orthogonal bisecting square surfaces. Then we use linear inversion to solve for volumetric strain in each voxel given a measurement of range change. To differentiate between the two possible hypotheses, we use a Bayesian framework with geostatistical prior information. We perform inversion using each prior to decide if one leads to a more geophysically reasonable interpretation than the other. This work is part of a project entitled "Poroelastic Tomography by Adjoint Inverse Modeling of Data from Seismology, Geodesy, and Hydrology" and is supported by the Geothermal Technology Office of the U.S. Department of Energy [DE-EE0006760].
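Because each voxel's contribution to range change is linear in its volumetric strain, the forward problem has the form d = G m and the inversion reduces to regularized linear least squares. A hedged sketch with an invented sensitivity matrix standing in for the Okada-derived Green's functions (the study's geostatistical Bayesian priors would replace the simple damping term used here):

```python
import numpy as np

rng = np.random.default_rng(2)
n_obs, n_vox = 60, 20

# Invented sensitivity matrix: in the study each column would hold the
# Okada-model range change produced by unit strain in one voxel.
G = rng.normal(size=(n_obs, n_vox))
m_true = np.zeros(n_vox)
m_true[8:12] = -1.0e-4                 # a patch of contracting voxels
d = G @ m_true + rng.normal(scale=1.0e-5, size=n_obs)   # noisy range change

# Damped (Tikhonov) linear inversion: minimize ||G m - d||^2 + alpha ||m||^2.
alpha = 1.0e-2
m_est = np.linalg.solve(G.T @ G + alpha * np.eye(n_vox), G.T @ d)
```

Swapping the damping matrix for the inverse of a geostatistical prior covariance turns this into the Bayesian estimate the abstract describes, allowing the two hypotheses to be compared through their respective priors.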
Sea breeze: Induced mesoscale systems and severe weather
NASA Technical Reports Server (NTRS)
Nicholls, M. E.; Pielke, R. A.; Cotton, W. R.
1990-01-01
Sea-breeze-deep convective interactions over the Florida peninsula were investigated using a cloud/mesoscale numerical model. The objective was to gain a better understanding of sea-breeze and deep convective interactions over the Florida peninsula using a high resolution convectively explicit model and to use these results to evaluate convective parameterization schemes. A 3-D numerical investigation of Florida convection was completed. The Kuo and Fritsch-Chappell parameterization schemes are summarized and evaluated.
Improved Climate Simulations through a Stochastic Parameterization of Ocean Eddies
NASA Astrophysics Data System (ADS)
Williams, Paul; Howe, Nicola; Gregory, Jonathan; Smith, Robin; Joshi, Manoj
2016-04-01
In climate simulations, the impacts of the sub-grid scales on the resolved scales are conventionally represented using deterministic closure schemes, which assume that the impacts are uniquely determined by the resolved scales. Stochastic parameterization relaxes this assumption, by sampling the sub-grid variability in a computationally inexpensive manner. This presentation shows that the simulated climatological state of the ocean is improved in many respects by implementing a simple stochastic parameterization of ocean eddies into a coupled atmosphere-ocean general circulation model. Simulations from a high-resolution, eddy-permitting ocean model are used to calculate the eddy statistics needed to inject realistic stochastic noise into a low-resolution, non-eddy-permitting version of the same model. A suite of four stochastic experiments is then run to test the sensitivity of the simulated climate to the noise definition, by varying the noise amplitude and decorrelation time within reasonable limits. The addition of zero-mean noise to the ocean temperature tendency is found to have a non-zero effect on the mean climate. Specifically, in terms of the ocean temperature and salinity fields both at the surface and at depth, the noise reduces many of the biases in the low-resolution model and causes it to more closely resemble the high-resolution model. The variability of the strength of the global ocean thermohaline circulation is also improved. It is concluded that stochastic ocean perturbations can yield reductions in climate model error that are comparable to those obtained by refining the resolution, but without the increased computational cost. Therefore, stochastic parameterizations of ocean eddies have the potential to significantly improve climate simulations. Reference PD Williams, NJ Howe, JM Gregory, RS Smith, and MM Joshi (2016) Improved Climate Simulations through a Stochastic Parameterization of Ocean Eddies. Journal of Climate, under revision.
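A minimal sketch of the kind of stochastic perturbation described, zero-mean red noise with a prescribed amplitude and decorrelation time added to the ocean temperature tendency, using a first-order autoregressive process (all numbers illustrative, not the study's values):

```python
import numpy as np

dt = 3600.0        # model time step [s]
tau = 86400.0      # noise decorrelation time [s]
sigma = 0.1        # stationary noise amplitude (illustrative units)

phi = np.exp(-dt / tau)                      # AR(1) memory coefficient
rng = np.random.default_rng(3)

eta, series = 0.0, []
for _ in range(10000):
    # zero-mean red noise with stationary standard deviation sigma;
    # in the model this term would be added to the temperature tendency.
    eta = phi * eta + sigma * np.sqrt(1.0 - phi**2) * rng.normal()
    series.append(eta)
series = np.asarray(series)
```

Varying sigma and tau within reasonable limits is exactly the sensitivity test the suite of four stochastic experiments performs.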
Bayesian evidence computation for model selection in non-linear geoacoustic inference problems.
Dettmer, Jan; Dosso, Stan E; Osler, John C
2010-12-01
This paper applies a general Bayesian inference approach, based on Bayesian evidence computation, to geoacoustic inversion of interface-wave dispersion data. Quantitative model selection is carried out by computing the evidence (normalizing constants) for several model parameterizations using annealed importance sampling. The resulting posterior probability density estimate is compared to estimates obtained from Metropolis-Hastings sampling to ensure consistent results. The approach is applied to invert interface-wave dispersion data collected on the Scotian Shelf, off the east coast of Canada for the sediment shear-wave velocity profile. Results are consistent with previous work on these data but extend the analysis to a rigorous approach including model selection and uncertainty analysis. The results are also consistent with core samples and seismic reflection measurements carried out in the area.
Closing the gap between regional and global travel time tomography
Bijwaard, H.; Spakman, W.; Engdahl, E.R.
1998-01-01
Recent global travel time tomography studies by Zhou [1996] and van der Hilst et al. [1997] have been performed with cell parameterizations of the order of those frequently used in regional tomography studies (i.e., with cell sizes of 1°-2°). These new global models constitute a considerable improvement over previous results that were obtained with rather coarse parameterizations (5° cells). The inferred structures are, however, of larger scale than is usually obtained in regional models, and it is not clear where and if individual cells are actually resolved. This study aims at resolving lateral heterogeneity on scales as small as 0.6° in the upper mantle and 1.2°-3° in the lower mantle. This allows for the adequate mapping of expected small-scale structures induced by, for example, lithosphere subduction, deep mantle upwellings, and mid-ocean ridges. There are three major contributions that allow for this advancement. First, we employ an irregular grid of nonoverlapping cells adapted to the heterogeneous sampling of the Earth's mantle by seismic waves [Spakman and Bijwaard, 1998]. Second, we exploit the global data set of Engdahl et al. [1998], which is a reprocessed version of the global data set of the International Seismological Centre. Their reprocessing included hypocenter redetermination and phase reidentification. Finally, we combine all data used (P, pP, and pwP phases) into nearly 5 million ray bundles with a limited spatial extent such that averaging over large mantle volumes is prevented while the signal-to-noise ratio is improved. In the approximate solution of the huge inverse problem we obtain a variance reduction of 57.1%. Synthetic sensitivity tests indicate horizontal resolution on the scale of the smallest cells (0.6° or 1.2°) in the shallow parts of subduction zones, decreasing to approximately 2°-3° resolution in well-sampled regions in the lower mantle.
Vertical resolution can be worse (up to several hundreds of kilometers) in subduction zones with rays predominantly pointing along dip. Important features of the solution are as follows: 100-200 km thick high-velocity slabs beneath all major subduction zones, sometimes flattening in the transition zone and sometimes directly penetrating into the lower mantle; large high-velocity anomalies in the lower mantle that have been attributed to subduction of the Tethys ocean and the Farallon plate; and low-velocity anomalies continuing across the 660 km discontinuity to hotspots at the surface under Iceland, east Africa, the Canary Islands, Yellowstone, and the Society Islands. Our findings corroborate that the 660 km boundary may resist but not prevent (present day) large-scale mass transfer from upper to lower mantle or vice versa. This observation confirms the results of previous, global mantle studies that employed coarser parameterizations. Copyright 1998 by the American Geophysical Union.
NASA Astrophysics Data System (ADS)
Gherboudj, Imen; Beegum, S. Naseema; Marticorena, Beatrice; Ghedira, Hosni
2015-10-01
The mineral dust emissions from arid/semiarid soils were simulated over the MENA (Middle East and North Africa) region using the dust parameterization scheme proposed by Alfaro and Gomes (2001), to quantify the effect of soil moisture and clay fraction on the emissions. For this purpose, an extensive data set of Soil Moisture and Ocean Salinity soil moisture, European Centre for Medium-Range Weather Forecasts wind speed at 10 m height, Food and Agriculture Organization soil texture maps, MODIS (Moderate Resolution Imaging Spectroradiometer) Normalized Difference Vegetation Index, and erodibility of the soil surface was collected for a period of 3 years, from 2010 to 2013. Though the data sets considered have different temporal and spatial resolutions, efforts have been made to make them consistent in time and space. At first, the simulated sandblasting flux over the region was validated qualitatively using MODIS Deep Blue aerosol optical depth and the EUMETSAT MSG (Meteosat Second Generation) dust product from SEVIRI (Spinning Enhanced Visible and Infrared Imager), and quantitatively against the available ground-based measurements of near-surface particulate mass concentrations (PM10) collected at four stations in the MENA region. Sensitivity analyses were performed to investigate the effect of soil moisture and clay fraction on the emission flux. The results showed that soil moisture and soil texture have significant roles in the dust emissions over the MENA region, particularly over the Arabian Peninsula. An inversely proportional dependency is observed between the soil moisture and the sandblasting flux, where a steep reduction in flux is observed at low friction velocity and a gradual reduction is observed at high friction velocity.
Conversely, a directly proportional dependency is observed between the soil clay fraction and the sandblasting flux, where a steep increase in flux is observed at low friction velocity and a gradual increase at high friction velocity. The magnitude of the percentage reduction/increase in the sandblasting flux decreases with increasing friction velocity for both soil moisture and soil clay fraction. Furthermore, these variables are interdependent, leading to a gradual decrease in the percentage increase in the sandblasting flux at higher soil moisture values.
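The moisture inhibition described above is commonly represented, in schemes of this family, by the Fécan et al. (1999) correction that raises the threshold friction velocity once soil moisture exceeds a clay-dependent residual value. Whether the constants below match the exact implementation used in this study is an assumption; the sketch is illustrative only:

```python
import math

def gravimetric_threshold(clay_pct):
    """w' (% gravimetric): maximum soil moisture that the clay fraction can
    retain before capillary forces start raising the erosion threshold
    (empirical fit of Fecan et al., 1999)."""
    return 0.0014 * clay_pct ** 2 + 0.17 * clay_pct

def threshold_moisture_factor(w_pct, clay_pct):
    """Multiplier on the dry threshold friction velocity u*_t: 1 for soil
    drier than w', growing with moisture above it, which suppresses the
    sandblasting flux."""
    w_prime = gravimetric_threshold(clay_pct)
    if w_pct <= w_prime:
        return 1.0
    return math.sqrt(1.0 + 1.21 * (w_pct - w_prime) ** 0.68)
```

Because the flux depends on the excess of the friction velocity over this threshold, the same moisture increment removes a larger share of the flux at low friction velocity than at high, consistent with the dependency reported above.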
NASA Astrophysics Data System (ADS)
Cariolle, D.; Caro, D.; Paoli, R.; Hauglustaine, D. A.; CuéNot, B.; Cozic, A.; Paugam, R.
2009-10-01
A method is presented to parameterize, in large-scale models, the impact of the nonlinear chemical reactions occurring in the plume generated by concentrated NOx sources. The resulting plume parameterization is implemented into global models and used to evaluate the impact of aircraft emissions on atmospheric chemistry. Compared to previous approaches that rely on corrected emissions or corrective factors to account for the nonlinear chemical effects, the present parameterization is based on the representation of the plume effects via a fuel tracer and a characteristic lifetime during which the nonlinear interactions between species are important, operating via rates of conversion for the NOx species and an effective reaction rate for O3. The implementation of this parameterization ensures mass conservation and allows the transport of emissions at high concentrations in plume form by the model dynamics. Results from the model simulations of the impact of aircraft NOx emissions on atmospheric ozone are in rather good agreement with previous work. It is found that ozone production is decreased by 10 to 25% in the Northern Hemisphere, with the largest effects in the North Atlantic flight corridor, when the plume effects on the global-scale chemistry are taken into account. These figures are consistent with evaluations made with corrected emissions, but regional differences are noticeable owing to the possibility offered by this parameterization of transporting emitted species in plume form prior to their dilution at large scale. This method could be further improved by making the parameters used by the parameterization functions of the local temperature, humidity, and turbulence properties diagnosed by the large-scale model. Further extensions of the method can also be considered to account for multistep dilution regimes during plume dissipation.
Furthermore, the present parameterization can be adapted to other types of point-source NOx emissions that have to be introduced in large-scale models, such as ship exhausts, provided that the plume life cycle, the type of emissions, and the major reactions involved in the nonlinear chemical systems can be determined with sufficient accuracy.
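The fuel-tracer bookkeeping described above can be sketched as a single mass-conserving budget step. The function and parameter names below are illustrative (not from the model code), and the conversion of released mass into individual NOx species and the effective O3 reaction are omitted:

```python
def step_plume_tracer(P, E_fuel, tau, dt):
    """One explicit Euler step for a plume fuel tracer P.

    Fresh emissions E_fuel enter the tracer; the tracer decays with the
    characteristic plume lifetime tau, and the decayed mass is handed to
    the grid-scale chemistry (where conversion rates for NOx and an
    effective O3 reaction rate would apply). Mass is conserved: what
    leaves P is exactly what is released to the grid scale."""
    release = (P / tau) * dt        # mass diluted to the grid scale
    P_new = P + E_fuel * dt - release
    return P_new, release
```

Because the tracer is advected by the model dynamics before its content is released, emissions travel in plume form rather than being instantly diluted into the grid cell, which is the key difference from corrected-emission approaches noted above.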
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shi, Xiangjun; Liu, Xiaohong; Zhang, Kai
2015-02-11
In order to improve the treatment of ice nucleation in a more realistic manner in the Community Atmosphere Model version 5.3 (CAM5.3), the effects of pre-existing ice crystals on ice nucleation in cirrus clouds are considered. In addition, by considering the in-cloud variability in ice saturation ratio, homogeneous nucleation takes place spatially only in a portion of the cirrus cloud rather than in the whole area of the cirrus cloud. Compared to observations, the ice number concentrations and the probability distributions of ice number concentration are both improved with the updated treatment. The pre-existing ice crystals significantly reduce ice number concentrations in cirrus clouds, especially at mid- to high latitudes in the upper troposphere (by a factor of ~10). Furthermore, the contribution of heterogeneous ice nucleation to cirrus ice crystal number increases considerably. Besides the default ice nucleation parameterization of Liu and Penner (2005, hereafter LP) in CAM5.3, two other ice nucleation parameterizations of Barahona and Nenes (2009, hereafter BN) and Kärcher et al. (2006, hereafter KL) are implemented in CAM5.3 for the comparison. In-cloud ice crystal number concentration, percentage contribution from heterogeneous ice nucleation to total ice crystal number, and pre-existing ice effects simulated by the three ice nucleation parameterizations have similar patterns in the simulations with present-day aerosol emissions. However, the change (present-day minus pre-industrial times) in global annual mean column ice number concentration from the KL parameterization (3.24 × 10⁶ m⁻²) is less than that from the LP (8.46 × 10⁶ m⁻²) and BN (5.62 × 10⁶ m⁻²) parameterizations. As a result, the experiment using the KL parameterization predicts a much smaller anthropogenic aerosol long-wave indirect forcing (0.24 W m⁻²) than that using the LP (0.46 W m⁻²) and BN (0.39 W m⁻²) parameterizations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lin, W; McGraw, R; Liu, Y
Metric for Quarter 4: Report results of the implementation of the composite parameterization in a single-column model (SCM) to explore the dependency of drizzle formation on aerosol properties. To better represent VOCALS conditions during a test flight, the Liu-Daum-McGraw (LDM) drizzle parameterization is implemented in the high-resolution Weather Research and Forecasting (WRF) model, as well as in the single-column Community Atmosphere Model (CAM), to explore this dependency.
Remote Sensing Protocols for Parameterizing an Individual, Tree-Based, Forest Growth and Yield Model
2014-09-01
ERDC/CERL TR-14-18, Base Facilities Environmental Quality: Remote Sensing Protocols for Parameterizing an Individual, Tree-Based, Forest Growth and Yield Model. Cited works include "Leaf-Off Tree Crowns in Small Footprint, High Sampling Density LIDAR Data from Eastern Deciduous Forests in North America" (Remote Sensing of...) and Bechtold, William A., 2003, "Crown-Diameter Prediction Models for 87 Species of Stand-Grown Trees in the Eastern United States" (Southern Journal of Applied...).
Parameterizing the Spatial Markov Model from Breakthrough Curve Data Alone
NASA Astrophysics Data System (ADS)
Sherman, T.; Bolster, D.; Fakhari, A.; Miller, S.; Singha, K.
2017-12-01
The spatial Markov model (SMM) uses a correlated random walk and has been shown to effectively capture anomalous transport in porous media systems; in the SMM, particles' future trajectories are correlated to their current velocity. It is common practice to use a priori Lagrangian velocity statistics obtained from high-resolution simulations to determine a distribution of transition probabilities (correlation) between velocity classes that govern predicted transport behavior; however, this approach is computationally cumbersome. Here, we introduce a methodology to quantify velocity correlation from breakthrough curve (BTC) data alone; discretizing two measured BTCs into a set of arrival times and reverse engineering the rules of the SMM allows for prediction of velocity correlation, thereby enabling parameterization of the SMM in studies where Lagrangian velocity statistics are not available. The introduced methodology is applied to estimate velocity correlation from BTCs measured in high-resolution simulations, thus allowing for a comparison of estimated parameters with known simulated values. Results show that (1) estimated transition probabilities agree with simulated values and (2) using the SMM with the estimated parameterization accurately predicts BTCs downstream. Additionally, we include uncertainty measurements by calculating lower and upper estimates of velocity correlation, which allow for prediction of a range of BTCs. The simulated BTCs fall in the range of predicted BTCs. This research proposes a novel method to parameterize the SMM from BTC data alone, thereby reducing the SMM's computational costs and widening its applicability.
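The correlated random walk underlying the SMM can be sketched in a few lines; the two-class discretization and transition matrix below are illustrative toys, not the authors' parameterization:

```python
import random

def simulate_smm(transition, velocities, dx, n_steps, n_particles, seed=0):
    """Minimal spatial Markov model: after each spatial jump of length dx,
    a particle's velocity class is drawn from the row of the transition
    matrix belonging to its current class, so successive velocities are
    correlated. Returns arrival times at distance n_steps*dx, i.e.
    samples of a breakthrough curve."""
    rng = random.Random(seed)
    arrivals = []
    for _ in range(n_particles):
        c = rng.randrange(len(velocities))  # initial class (uniform here)
        t = 0.0
        for _ in range(n_steps):
            t += dx / velocities[c]
            u, acc = rng.random(), 0.0
            for j, p in enumerate(transition[c]):
                acc += p
                if p > 0.0 and u <= acc:
                    c = j
                    break
        arrivals.append(t)
    return arrivals
```

In the forward direction this maps a transition matrix to a BTC; the method of the abstract works the other way, inferring the transition probabilities from two measured BTCs at successive distances.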
Improved parametrization of the growth index for dark energy and DGP models
NASA Astrophysics Data System (ADS)
Jing, Jiliang; Chen, Songbai
2010-03-01
We propose two improved parameterized forms for the growth index of the linear matter perturbations: (I) γ(z) = γ0 + (γ∞ - γ0)z/(1+z) and (II) γ(z) = γ0 + γ1 z/(1+z) + (γ∞ - γ1 - γ0)[z/(1+z)]^α. With these forms of γ(z), we analyze the accuracy of approximating the growth factor f by Ωm^γ(z) for both the wCDM model and the DGP model. For the first improved parameterized form, we find that the approximation accuracy is enhanced at high redshifts for both kinds of models, but not at low redshifts. For the second improved parameterized form, it is found that Ωm^γ(z) approximates the growth factor f very well at all redshifts. For a suitably chosen α, the relative error is below 0.003% for the ΛCDM model and 0.028% for the DGP model when Ωm = 0.27. Thus, the second improved parameterized form of γ(z) should be useful for high-precision constraints on the growth index of different models with the observational data. Moreover, we also show that α depends on the equation of state w and the fractional energy density of matter Ωm0, which may help us learn more about dark energy and DGP models.
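Form (II) above is straightforward to evaluate; the numerical values of γ0, γ1, γ∞ and α below are placeholders, since the fitted values are model-dependent:

```python
def gamma_form2(z, g0, g1, g_inf, alpha):
    """Form (II) of the improved growth-index parameterization:
    gamma(z) = g0 + g1*x + (g_inf - g1 - g0)*x**alpha with x = z/(1+z).
    It equals g0 at z = 0 and tends to g_inf as z -> infinity."""
    x = z / (1.0 + z)
    return g0 + g1 * x + (g_inf - g1 - g0) * x ** alpha

def growth_factor_approx(Om_z, gamma):
    """The approximation f ~ Omega_m(z)**gamma analyzed in the abstract."""
    return Om_z ** gamma
```

The variable x = z/(1+z) is what makes the form well behaved over the whole redshift range: it compresses z ∈ [0, ∞) into x ∈ [0, 1), so γ interpolates smoothly between its low- and high-redshift limits.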
Explicit Global Simulation of Gravity Waves up to the Lower Thermosphere
NASA Astrophysics Data System (ADS)
Becker, E.
2016-12-01
At least for short-term simulations, middle atmosphere general circulation models (GCMs) can be run with sufficiently high resolution to describe a good part of the gravity wave spectrum explicitly. Nevertheless, the parameterization of unresolved dynamical scales remains an issue, especially when the scales of parameterized gravity waves (GWs) and resolved GWs become comparable. In addition, turbulent diffusion must always be parameterized along with other subgrid-scale dynamics. A practical solution to the combined closure problem for GWs and turbulent diffusion is to dispense with a parameterization of GWs, apply a high spatial resolution, and represent the unresolved scales by a macro-turbulent diffusion scheme that gives rise to wave damping in a self-consistent fashion. This is the approach of a few GCMs that extend from the surface to the lower thermosphere and simulate a realistic GW drag and summer-to-winter-pole residual circulation in the upper mesosphere. In this study we describe a new version of the Kuehlungsborn Mechanistic general Circulation Model (KMCM), which includes explicit (though idealized) computations of radiative transfer and the tropospheric moisture cycle. Particular emphasis is placed on (1) the turbulent diffusion scheme, (2) the attenuation of resolved GWs at critical levels, (3) the generation of GWs in the middle atmosphere from body forces, and (4) GW-tidal interactions (including the energy deposition of GWs and tides).
On the physical air-sea fluxes for climate modeling
NASA Astrophysics Data System (ADS)
Bonekamp, J. G.
2001-02-01
At the sea surface, the atmosphere and the ocean exchange momentum, heat and freshwater. Mechanisms for the exchange are wind stress, turbulent mixing, radiation, evaporation and precipitation. These surface fluxes are characterized by a large spatial and temporal variability and play an important role not only in the mean atmospheric and oceanic circulation, but also in the generation and sustainment of coupled climate fluctuations such as the El Niño/La Niña phenomenon. Therefore, a good knowledge of air-sea fluxes is required for the understanding and prediction of climate changes. As part of long-term comprehensive atmospheric reanalyses with `Numerical Weather Prediction/Data assimilation' systems, data sets of global air-sea fluxes are generated. A good example is the 15-year atmospheric reanalysis of the European Centre for Medium-Range Weather Forecasts (ECMWF). Air-sea flux data sets from these reanalyses are very beneficial for climate research, because they combine a good spatial and temporal coverage with a homogeneous and consistent method of calculation. However, atmospheric reanalyses are still imperfect sources of flux information due to shortcomings in model variables, model parameterizations, assimilation methods, sampling of observations, and quality of observations. Therefore, assessments of the errors and the usefulness of air-sea flux data sets from atmospheric (re-)analyses are relevant contributions to the quantitative study of climate variability. Currently, much research is aimed at assessing the quality and usefulness of the reanalysed air-sea fluxes. Work in this thesis intends to contribute to this assessment. In particular, it attempts to answer three relevant questions. The first question is: What is the best parameterization of the momentum flux? A comparison is made of the wind stress parameterizations of the ERA15 reanalysis and the currently generated ERA40 reanalysis with wind stress measurements over the open ocean.
The comparison reveals some clear differences in the mean drag coefficient. In addition, this study has indicated that progress has been made from the ERA15 to the ERA40 reanalyses by replacing the constant-Charnock-parameter formulation with one that depends on the sea state. The second research question is whether comparison of the response of an ocean model with ocean observations can be exploited to assess the quality of air-sea fluxes of the ERA15 reanalysis. To answer this question in a systematic way, an inverse modeling approach is adopted using a four-dimensional variational data assimilation (4DVAR) scheme. Firstly, the functioning of the 4DVAR system is demonstrated with identical twin experiments. These experiments reveal that in the equatorial Pacific, a large reduction in wind-stress and upper-ocean temperature misfits can be achieved using an assimilation time window of eight weeks. It is concluded that the usefulness of the inverse ocean-modeling technique for global surface flux assessment is limited. The main merit of the developed ocean 4DVAR scheme will be to diagnose errors in the analyses of the ocean model. The last research question is: are the ERA15 fluxes useful for the study of regional patterns of climate variability? The climate mode of consideration is the Antarctic Circumpolar Wave. This study stresses the importance of having the right climatological forcing conditions to assess time scales of climate variability, and it confirms the usefulness of ERA15 air-sea fluxes as ocean model forcing fields to study climate variability on the interannual time scale.
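The Charnock closure mentioned above ties the aerodynamic roughness of the sea surface to the friction velocity, so the drag coefficient must be found iteratively. A minimal neutral-stability sketch follows; the constant 0.018 and the viscous smooth-flow term are typical textbook values, not necessarily those used in ERA15 or ERA40:

```python
import math

def neutral_drag_coefficient(U10, charnock=0.018, g=9.81, kappa=0.4,
                             nu=1.5e-5, n_iter=30):
    """Neutral 10 m drag coefficient with a Charnock roughness length
    z0 = charnock*u*^2/g + 0.11*nu/u* (the viscous term keeps z0 finite
    at low wind), solved by fixed-point iteration on u*."""
    ustar = 0.035 * U10                      # first guess
    cd = 1.0e-3
    for _ in range(n_iter):
        z0 = charnock * ustar ** 2 / g + 0.11 * nu / max(ustar, 1e-6)
        cd = (kappa / math.log(10.0 / z0)) ** 2
        ustar = math.sqrt(cd) * U10
    return cd
```

A sea-state-dependent scheme of the kind adopted in ERA40 would replace the constant `charnock` with a function of wave age supplied by a wave model, which changes the mean drag coefficient exactly as the comparison above discusses.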
NASA Astrophysics Data System (ADS)
Themens, David R.; Jayachandran, P. T.; Bilitza, Dieter; Erickson, Philip J.; Häggström, Ingemar; Lyashenko, Mykhaylo V.; Reid, Benjamin; Varney, Roger H.; Pustovalova, Ljubov
2018-02-01
In this study, we present a topside model representation to be used by the Empirical Canadian High Arctic Ionospheric Model (E-CHAIM). In the process, we also present a comprehensive evaluation of the NeQuick (and, by extension, International Reference Ionosphere) topside electron density model for middle and high latitudes in the Northern Hemisphere. Using data gathered from all available incoherent scatter radars, topside sounders, and Global Navigation Satellite System radio occultation satellites, we show that the current NeQuick parameterization suboptimally represents the shape of the topside electron density profile at these latitudes and performs poorly in the representation of seasonal and solar cycle variations of the topside scale thickness. Despite this, the simple, one-variable NeQuick model is a powerful tool for modeling the topside ionosphere. By refitting the parameters that define the maximum topside scale thickness and the rate of increase of the scale height within the NeQuick topside model function, r and g, respectively, and refitting the model's parameterization of the scale height at the F region peak, H0, we find considerable improvement in the NeQuick's ability to represent the topside shape and behavior. Building on these results, we present a new topside model extension of E-CHAIM based on the revised NeQuick function. Overall, root-mean-square errors in topside electron density are improved over the traditional International Reference Ionosphere/NeQuick topside by 31% for the new NeQuick parameterization and by 36% for the newly proposed E-CHAIM topside.
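The one-variable NeQuick topside discussed above is a semi-Epstein layer whose scale height grows with altitude under the control of H0, g, and r. A sketch follows, assuming the classical functional form and the classical default constants (g = 0.125, r = 100), which are exactly the quantities the study refits:

```python
import math

def nequick_topside(h, NmF2, hmF2, H0, g=0.125, r=100.0):
    """Semi-Epstein topside layer with a height-dependent scale height
    H(h) = H0 * (1 + r*g*dh / (r*H0 + g*dh)), dh = h - hmF2: H equals H0
    at the F2 peak and initially increases with altitude at rate g, with
    r controlling the asymptotic thickness. Returns electron density,
    with Ne(hmF2) = NmF2."""
    dh = h - hmF2
    H = H0 * (1.0 + (r * g * dh) / (r * H0 + g * dh))
    z = dh / H
    ez = math.exp(z)
    return 4.0 * NmF2 * ez / (1.0 + ez) ** 2
```

Because the whole topside shape hangs off the single peak scale height H0 once g and r are fixed, refitting those two shape constants and the H0 parameterization is enough to change the seasonal and solar-cycle behavior of the profile.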
A Physical Parameterization of Snow Albedo for Use in Climate Models
NASA Astrophysics Data System (ADS)
Marshall, Susan Elaine
The albedo of a natural snowcover is highly variable, ranging from 90 percent for clean, new snow to 30 percent for old, dirty snow. This range in albedo represents a difference in surface energy absorption of 10 to 70 percent of incident solar radiation. Most general circulation models (GCMs) fail to calculate the surface snow albedo accurately, yet the results of these models are sensitive to the assumed value of the snow albedo. This study replaces the current simple empirical parameterizations of snow albedo with a physically based parameterization which is accurate (within +/- 3% of theoretical estimates) yet efficient to compute. The parameterization is designed as a FORTRAN subroutine (called SNOALB) which can be easily implemented into model code. The subroutine requires less than 0.02 seconds of computer time (CRAY X-MP) per call and adds only one new parameter to the model calculations, the snow grain size. The snow grain size can be calculated according to one of the two methods offered in this thesis. All other input variables to the subroutine are available from a climate model. The subroutine calculates a visible, near-infrared and solar (0.2-5 μm) snow albedo and offers a choice of two wavelengths (0.7 and 0.9 μm) at which the solar spectrum is separated into the visible and near-infrared components. The parameterization is incorporated into the National Center for Atmospheric Research (NCAR) Community Climate Model, version 1 (CCM1), and the results of a five-year, seasonal cycle, fixed hydrology experiment are compared to the current model snow albedo parameterization. The results show the SNOALB albedos to be comparable to the old CCM1 snow albedos for current climate conditions, with generally higher visible and lower near-infrared snow albedos using the new subroutine.
However, this parameterization offers a greater predictability for climate change experiments outside the range of current snow conditions because it is physically-based and not tuned to current empirical results.
NASA Astrophysics Data System (ADS)
Davis, A. D.; Heimbach, P.; Marzouk, Y.
2017-12-01
We develop a Bayesian inverse modeling framework for predicting future ice sheet volume with associated formal uncertainty estimates. Marine ice sheets are drained by fast-flowing ice streams, which we simulate using a flowline model. Flowline models depend on geometric parameters (e.g., basal topography), parameterized physical processes (e.g., calving laws and basal sliding), and climate parameters (e.g., surface mass balance), most of which are unknown or uncertain. Given observations of ice surface velocity and thickness, we define a Bayesian posterior distribution over static parameters, such as basal topography. We also define a parameterized distribution over variable parameters, such as future surface mass balance, which we assume are not informed by the data. Hyperparameters are used to represent climate change scenarios, and sampling their distributions mimics internal variation. For example, a warming climate corresponds to increasing mean surface mass balance but an individual sample may have periods of increasing or decreasing surface mass balance. We characterize the predictive distribution of ice volume by evaluating the flowline model given samples from the posterior distribution and the distribution over variable parameters. Finally, we determine the effect of climate change on future ice sheet volume by investigating how changing the hyperparameters affects the predictive distribution. We use state-of-the-art Bayesian computation to address computational feasibility. Characterizing the posterior distribution (using Markov chain Monte Carlo), sampling the full range of variable parameters and evaluating the predictive model is prohibitively expensive. Furthermore, the required resolution of the inferred basal topography may be very high, which is often challenging for sampling methods. 
Instead, we leverage regularity in the predictive distribution to build a computationally cheaper surrogate over the low dimensional quantity of interest (future ice sheet volume). Continual surrogate refinement guarantees asymptotic sampling from the predictive distribution. Directly characterizing the predictive distribution in this way allows us to assess the ice sheet's sensitivity to climate variability and change.
Ultrasonic multi-skip tomography for pipe inspection
NASA Astrophysics Data System (ADS)
Volker, Arno; Vos, Rik; Hunter, Alan; Lorenz, Maarten
2012-05-01
The inspection of wall loss corrosion is difficult at pipe support locations due to limited accessibility. However, the recently developed ultrasonic Multi-Skip screening technique is suitable for this problem. The method employs ultrasonic transducers in a pitch-catch geometry positioned on opposite sides of the pipe support. Shear waves are transmitted in the axial direction within the pipe wall, reflecting multiple times between the inner and outer surfaces before reaching the receivers. Along this path, the signals accumulate information on the integral wall thickness (e.g., via variations in travel time). The method is very sensitive in detecting the presence of wall loss, but it is difficult to quantify both the extent and depth of the loss. If the extent is unknown, then only a conservative estimate of the depth can be made due to the cumulative nature of the travel time variations. Multi-Skip tomography is an extension of Multi-Skip screening and has shown promise as a complementary follow-up inspection technique. In recent work, we have developed the technique and demonstrated its use for reconstructing high-resolution estimates of pipe wall thickness profiles. The method operates via a model-based full wave field inversion; this consists of a forward model for predicting the measured wave field and an iterative process that compares the predicted and measured wave fields and minimizes the differences with respect to the model parameters (i.e., the wall thickness profile). This paper presents our recent developments in Multi-Skip tomographic inversion, focusing on the initial localization of corrosion regions for efficient parameterization of the surface profile model and on the use of signal phase information for improving resolution.
NASA Astrophysics Data System (ADS)
Kunkel, Daniel; Wirth, Volkmar; Hoor, Peter
2014-05-01
Recent simulations of baroclinic wave life cycles revealed that the tropopause inversion layer (TIL), commonly situated just above the thermal tropopause, is evident in such experiments and emerges after the onset of wave breaking. Furthermore, bidirectional stratosphere-troposphere exchange (STE) occurs during this non-linear stage of the wave evolution and might be affected by the appearance of the TIL. We study the evolution and the impact of the TIL on STE by using the COSMO model in an idealized mid-latitude channel geometry configuration without physical sub-grid scale parameterizations. We initialize the model with a geostrophically balanced upper level jet stream which is disturbed by an anomaly of potential vorticity to trigger the evolution of the baroclinic waves. Moreover, we use passive tracers of tropospheric or stratospheric origin to identify regions of potential STE. Our results show that the static stability is low in regions of stratosphere to troposphere exchange (STT), while it is high in regions dominated by exchange in the opposite direction (TST). Furthermore, inertia gravity waves, originating from regions with strong ageostrophic wind components, modulate the static stability as well as the vertical shear of the horizontal wind near and above the tropopause. While propagating away from their source, the inertia gravity waves lead to large values of the squared Brunt-Väisälä frequency in regions which are simultaneously characterized by low bulk Richardson numbers. Thus, these regions are statically stable and turbulent at the same time and might be crucial for TST, thereby explaining tropospheric mixing ratio changes of, e.g., CO across the tropopause, which commonly change from tropospheric to stratospheric values a few hundred meters above the local thermal tropopause.
Adjoint tomography and centroid-moment tensor inversion of the Kanto region, Japan
NASA Astrophysics Data System (ADS)
Miyoshi, T.
2017-12-01
A three-dimensional seismic wave speed model of the Kanto region of Japan was developed using adjoint tomography based on large-scale computing. Starting from a model based on previous travel time tomographic results, we inverted the waveforms recorded at broadband seismic stations from 140 local earthquakes in the Kanto region to obtain the P- and S-wave speeds Vp and Vs. The synthetic displacements were calculated using the spectral element method (SEM; e.g., Komatitsch and Tromp 1999; Peter et al. 2011), in which the Kanto region was parameterized using 16 million grid points. The model parameters Vp and Vs were updated iteratively by Newton's method using the misfit and Hessian kernels until the misfit between the observed and synthetic waveforms was minimized. The proposed model reveals several anomalous areas with extremely low Vs values in comparison with those of the initial model. For the selected earthquakes, the synthetic waveforms obtained using the newly proposed model fit the observed waveforms better than those from the initial model in different period ranges within 5-30 s. In the present study, all centroid times of the source solutions were determined by time shifts based on cross correlation before the structural inversion, to limit the computational cost. Additionally, the parameters of the centroid-moment solutions were fully determined using the SEM assuming the 3D structure (e.g., Liu et al. 2004). As a preliminary result, the new solutions were essentially the same as their initial solutions, which may indicate that the 3D structure has little influence on the source estimation. Acknowledgements: This study was supported by JSPS KAKENHI Grant Number 16K21699.
The Role of Synthetic Reconstruction Tests in Seismic Tomography
NASA Astrophysics Data System (ADS)
Rawlinson, N.; Spakman, W.
2015-12-01
Synthetic reconstruction tests are widely used in seismic tomography as a means of assessing the robustness of solutions produced by linear or iterative non-linear inversion schemes. The most common test is the so-called checkerboard resolution test, which uses an alternating pattern of high and low wavespeeds (or some other seismic property, such as attenuation). However, checkerboard tests have a number of limitations, including that they (1) only provide indirect evidence of quantitative measures of reliability such as resolution and uncertainty; (2) give a potentially misleading impression of the range of scale-lengths that can be resolved; (3) do not give a true picture of the structural distortion or smearing caused by the data coverage; and (4) result in an inverse problem that is biased towards an accurate reconstruction. The widespread use of synthetic reconstruction tests in seismic tomography is likely to continue for some time yet, so it is important to implement best practice where possible. The goal here is to provide a general set of guidelines, derived from the underlying theory and illustrated by a series of numerical experiments, for their implementation in seismic tomography. In particular, we recommend (1) using a sparse distribution of spikes rather than the more conventional tightly spaced checkerboard; (2) using for the synthetic model the identical data coverage (e.g., geometric rays) that was computed for the observation-based model; (3) carrying out multiple tests using anomalies of different scale lengths; (4) exercising caution when analysing synthetic recovery tests that use anomaly patterns closely mimicking the observation-based model; (5) investigating the trade-off between data noise levels and the minimum wavelength of recovered structure; and (6) testing, where possible, the extent to which preconditioning (e.g., identical parameterization for input and output models) influences the recovery of anomalies.
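A spike test of the kind recommended above can be illustrated with a toy linear tomography problem; the Kaczmarz (ART) solver and the 2×2 cell geometry below are illustrative choices, not taken from this work:

```python
def kaczmarz(G, d, n_sweeps=500):
    """Kaczmarz (ART) row-action solver for G m = d, a classic iterative
    scheme in travel-time tomography; m starts from zero and each ray row
    in turn projects the model onto its data constraint."""
    m = [0.0] * len(G[0])
    for _ in range(n_sweeps):
        for row, di in zip(G, d):
            norm = sum(a * a for a in row)
            if norm == 0.0:
                continue
            step = (di - sum(a * x for a, x in zip(row, m))) / norm
            for j, a in enumerate(row):
                m[j] += step * a
    return m

# Spike test: a 2x2 cell model crossed by four straight rays. The synthetic
# data d are generated with the SAME ray matrix G that the real inversion
# would use, per recommendation (2) above.
G = [[1.0, 1.0, 0.0, 0.0],   # ray along the top row of cells
     [0.0, 0.0, 1.0, 1.0],   # bottom row
     [1.0, 0.0, 1.0, 0.0],   # left column
     [1.0, 0.0, 0.0, 1.0]]   # diagonal
m_true = [0.0, 0.0, 1.0, 0.0]            # a single sparse spike
d = [sum(a * x for a, x in zip(row, m_true)) for row in G]
m_rec = kaczmarz(G, d)
```

Comparing m_rec with m_true cell by cell shows where the ray geometry smears the spike; repeating the test with the spike moved to other cells maps the spatial variation in recoverability, which is the information a dense checkerboard tends to obscure.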
Pion, Kaon, Proton and Antiproton Production in Proton-Proton Collisions
NASA Technical Reports Server (NTRS)
Norbury, John W.; Blattnig, Steve R.
2008-01-01
Inclusive pion, kaon, proton, and antiproton production from proton-proton collisions is studied at a variety of proton energies. Various available parameterizations of Lorentz-invariant differential cross sections as a function of transverse momentum and rapidity are compared with experimental data. The Badhwar and Alper parameterizations are moderately satisfactory for charged pion production. The Badhwar parameterization provides the best fit for charged kaon production. For proton production, the Alper parameterization is best, and for antiproton production the Carey parameterization works best. However, no parameterization is able to fully account for all the data.
Impact of Parameterized Lee Wave Drag on the Energy Budget of an Eddying Global Ocean Model
2013-08-26
Trossman, David S.; Arbic, Brian K.; Stephen T... Examines the input and output terms in the total mechanical energy budget of a hybrid coordinate high-resolution global ocean general circulation model forced by winds.
Parameterization of single-scattering properties of snow
NASA Astrophysics Data System (ADS)
Räisänen, P.; Kokhanovsky, A.; Guyot, G.; Jourdan, O.; Nousiainen, T.
2015-02-01
Snow consists of non-spherical grains of various shapes and sizes. Still, in many radiative transfer applications, single-scattering properties of snow have been based on the assumption of spherical grains. More recently, second-generation Koch fractals have been employed. While they produce a relatively flat phase function typical of deformed non-spherical particles, this is still a rather ad-hoc choice. Here, angular scattering measurements for blowing snow conducted during the CLimate IMpacts of Short-Lived pollutants In the Polar region (CLIMSLIP) campaign at Ny Ålesund, Svalbard, are used to construct a reference phase function for snow. Based on this phase function, an optimized habit combination (OHC) consisting of severely rough (SR) droxtals, aggregates of SR plates and strongly distorted Koch fractals is selected. The single-scattering properties of snow are then computed for the OHC as a function of wavelength λ and snow grain volume-to-projected area equivalent radius rvp. Parameterization equations are developed for λ = 0.199-2.7 μm and rvp = 10-2000 μm, which express the single-scattering co-albedo β, the asymmetry parameter g and the phase function P11 as functions of the size parameter and the real and imaginary parts of the refractive index. The parameterizations are analytic and simple to use in radiative transfer models. Compared to the reference values computed for the OHC, the accuracy of the parameterization is very high for β and g. This is also true for the phase function parameterization, except for strongly absorbing cases (β > 0.3). Finally, we consider snow albedo and reflected radiances for the suggested snow optics parameterization, making comparisons to spheres and distorted Koch fractals.
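The two quantities the parameterization is indexed by can be computed directly from particle geometry. As a sketch, assuming the common definition r_vp = 3V/(4A), the volume-to-projected-area equivalent radius reduces to the geometric radius for a sphere, and the size parameter follows from the wavelength:

```python
import math

def rvp(volume, proj_area):
    """Volume-to-projected-area equivalent radius: r_vp = 3V / (4A).
    For a sphere this reduces to the geometric radius."""
    return 3.0 * volume / (4.0 * proj_area)

def size_parameter(r, wavelength):
    """Mie size parameter x = 2*pi*r / lambda."""
    return 2.0 * math.pi * r / wavelength

# sanity check with a 50-micron sphere at 500 nm
r = 50e-6
V = 4.0 / 3.0 * math.pi * r ** 3
A = math.pi * r ** 2
x = size_parameter(rvp(V, A), 0.5e-6)
```

For non-spherical habits such as droxtals or Koch fractals, V and A would come from the habit geometry rather than the sphere formulas used here.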
Numerical Study of the Role of Shallow Convection in Moisture Transport and Climate
NASA Technical Reports Server (NTRS)
Seaman, Nelson L.; Stauffer, David R.; Munoz, Ricardo C.
2001-01-01
The objective of this investigation was to study the role of shallow convection on the regional water cycle of the Mississippi and Little Washita Basins of the Southern Great Plains (SGP) using a 3-D mesoscale model, the PSU/NCAR MM5. The underlying premise of the project was that current modeling of regional-scale climate and moisture cycles over the continents is deficient without adequate treatment of shallow convection. At the beginning of the study, it was hypothesized that an improved treatment of the regional water cycle can be achieved by using a 3-D mesoscale numerical model having high-quality parameterizations for the key physical processes controlling the water cycle. These included a detailed land-surface parameterization (the Parameterization for Land-Atmosphere-Cloud Exchange (PLACE) sub-model of Wetzel and Boone), an advanced boundary-layer parameterization (the 1.5-order turbulent kinetic energy (TKE) predictive scheme of Shafran et al.), and a more complete shallow convection parameterization (the hybrid-closure scheme of Deng et al.) than are available in most current models. PLACE is a product of researchers working at NASA's Goddard Space Flight Center in Greenbelt, MD. The TKE and shallow-convection schemes are the result of model development at Penn State. The long-range goal is to develop an integrated suite of physical sub-models that can be used for regional and perhaps global climate studies of the water budget. Therefore, the work plan focused on integrating, improving, and testing these parameterizations in the MM5 and applying them to study water-cycle processes over the SGP. These schemes have been tested extensively through the course of this study and the latter two have been improved significantly as a consequence.
NASA Technical Reports Server (NTRS)
Elsaesser, Greg; Del Genio, Anthony
2015-01-01
The CMIP5 configurations of the GISS Model-E2 GCM simulated a mid- and high-latitude ice IWP that decreased by ~50% relative to that simulated for CMIP3 (Jiang et al. 2012; JGR). Tropical IWP increased by ~15% in CMIP5. While the tropical IWP was still within the published upper-bounds of IWP uncertainty derived using NASA A-Train satellite observations, it was found that the upper troposphere (~200 mb) ice water content (IWC) exceeded the published upper-bound by a factor of ~2. This was largely driven by IWC in deep-convecting regions of the tropics. Recent advances in the model-E2 convective parameterization have been found to have a substantial impact on tropical IWC. These advances include the development of both a cold pool parameterization (Del Genio et al. 2015) and a new convective ice parameterization. In this presentation, we focus on the new parameterization of convective cloud ice that was developed using data from the NASA TC4 Mission. Ice particle terminal velocity formulations now include information from a number of NASA field campaigns. The new parameterization predicts both an ice water mass weighted-average particle diameter and a particle cross sectional area weighted-average size diameter as a function of temperature and ice water content. By assuming a gamma-distribution functional form for the particle size distribution, these two diameter estimates are all that are needed to explicitly predict the distribution of ice particles as a function of particle diameter. GCM simulations with the improved convective parameterization yield a ~50% decrease in upper tropospheric IWC, bringing the tropical and global mean IWP climatologies into even closer agreement with the A-Train satellite observation best estimates.
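The claim that two weighted-average diameters pin down a gamma size distribution can be sketched analytically. Assuming n(D) ∝ D^μ exp(−λD) with mass ∝ D³ and projected area ∝ D², the mass-weighted mean diameter is the moment ratio M4/M3 = (μ+4)/λ and the area-weighted mean is M3/M2 = (μ+3)/λ, so the pair determines (μ, λ). This illustrates the general idea only, not the paper's fitted temperature/IWC relations:

```python
def gamma_psd_params(d_mass, d_area):
    """Recover gamma PSD shape (mu) and slope (lam) from a mass-weighted
    mean diameter (M4/M3) and an area-weighted mean diameter (M3/M2).
    Assumes mass ~ D^3 and projected area ~ D^2 (illustrative sketch)."""
    r = d_mass / d_area                 # equals (mu+4)/(mu+3), so r > 1
    mu = (4.0 - 3.0 * r) / (r - 1.0)
    lam = (mu + 4.0) / d_mass
    return mu, lam

# hypothetical diameters (m): 300 um mass-weighted, 250 um area-weighted
mu, lam = gamma_psd_params(300e-6, 250e-6)
```

With μ and λ fixed, the full distribution of ice particles versus diameter follows up to the normalizing intercept N0, which would be set by the predicted ice water content.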
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ma, Po-Lun; Rasch, Philip J.; Fast, Jerome D.
A suite of physical parameterizations (deep and shallow convection, turbulent boundary layer, aerosols, cloud microphysics, and cloud fraction) from the global climate model Community Atmosphere Model version 5.1 (CAM5) has been implemented in the regional model Weather Research and Forecasting with chemistry (WRF-Chem). A downscaling modeling framework with consistent physics has also been established in which both global and regional simulations use the same emissions and surface fluxes. The WRF-Chem model with the CAM5 physics suite is run at multiple horizontal resolutions over a domain encompassing the northern Pacific Ocean, northeast Asia, and northwest North America for April 2008, when the ARCTAS, ARCPAC, and ISDAC field campaigns took place. These simulations are evaluated against field campaign measurements, satellite retrievals, and ground-based observations, and are compared with simulations that use a set of common WRF-Chem parameterizations. This manuscript describes the implementation of the CAM5 physics suite in WRF-Chem, provides an overview of the modeling framework and an initial evaluation of the simulated meteorology, clouds, and aerosols, and quantifies the resolution dependence of the cloud and aerosol parameterizations. We demonstrate that some of the CAM5 biases, such as high estimates of cloud susceptibility to aerosols and the underestimation of aerosol concentrations in the Arctic, can be reduced simply by increasing horizontal resolution. We also show that the CAM5 physics suite performs similarly to a set of parameterizations commonly used in WRF-Chem, but produces higher ice and liquid water condensate amounts and near-surface black carbon concentrations. Further evaluations that use other mesoscale model parameterizations and perform other case studies are needed to infer whether one parameterization consistently produces results more consistent with observations than another.
Liu, Ping; Li, Guodong; Liu, Xinggao
2015-09-01
Control vector parameterization (CVP) is an important approach to engineering optimization for industrial dynamic processes. However, its major defect, the low optimization efficiency caused by repeatedly solving the differential equations in the generated nonlinear programming (NLP) problem, limits its wide application. A novel, highly effective control parameterization approach, fast-CVP, is proposed to improve optimization efficiency for industrial dynamic processes; it employs the costate gradient formula and a fast approximate scheme for solving the differential equations in dynamic process simulation. Three well-known engineering optimization benchmark problems for industrial dynamic processes are used as illustrations. The results show that the proposed approach performs well: at least 90% of the computation time can be saved relative to the traditional CVP method, demonstrating the effectiveness of the proposed fast engineering optimization approach for industrial dynamic processes.
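The CVP discretization itself (not the fast-CVP gradient scheme) can be sketched in a few lines: the control is parameterized as piecewise-constant values, turning the dynamic optimization into a finite-dimensional NLP whose objective requires a forward simulation. The toy dynamics x' = −x + u and tracking objective below are invented for illustration:

```python
import numpy as np

def simulate(u_pieces, t_final=5.0, n_steps=500):
    """Explicit-Euler rollout of x' = -x + u(t) with piecewise-constant
    control (the CVP discretization); returns final state and cost."""
    dt = t_final / n_steps
    n_pieces = len(u_pieces)
    x, cost = 0.0, 0.0
    for k in range(n_steps):
        t = k * dt
        u = u_pieces[min(int(t / t_final * n_pieces), n_pieces - 1)]
        cost += dt * (x - 1.0) ** 2     # penalize deviation from setpoint 1
        x += dt * (-x + u)
    return x, cost

# the resulting NLP over the control parameters; here just a crude
# single-piece grid search to show the objective being evaluated
best_u = min(np.linspace(0.0, 2.0, 21), key=lambda u: simulate([u])[1])
```

Each objective evaluation requires integrating the ODE, which is exactly the repeated-simulation cost the fast-CVP approach targets.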
NASA Astrophysics Data System (ADS)
Díaz-Mojica, J. J.; Cruz-Atienza, V. M.; Madariaga, R.; Singh, S. K.; Iglesias, A.
2013-05-01
We introduce a novel approach for imaging earthquake dynamics from ground motion records based on a parallel genetic algorithm (GA). The method follows the elliptical dynamic-rupture-patch approach introduced by Di Carli et al. (2010) and has been carefully verified through different numerical tests (Díaz-Mojica et al., 2012). Apart from the five model parameters defining the patch geometry, our dynamic source description has four more parameters: the stress drop inside the nucleation and the elliptical patches, and two friction parameters, the slip-weakening distance and the change of the friction coefficient. These parameters are constant within the rupture surface. The forward dynamic source problem, involved in the GA inverse method, uses a highly accurate computational solver, namely the staggered-grid split-node method. The synthetic inversion presented here shows that the source model parameterization is suitable for the GA, and that short-scale source dynamic features are well resolved in spite of low-pass filtering of the data for periods comparable to the source duration. Since there is always uncertainty in the propagation medium as well as in the source location and the focal mechanism, we have introduced a statistical approach to generate a set of solution models so that the envelope of the corresponding synthetic waveforms explains the observed data as much as possible. We applied the method to the 2012 Mw 6.5 intraslab Zumpango, Mexico earthquake and determined several fundamental source parameters that are in accordance with different and completely independent estimates for Mexican and worldwide earthquakes. Our weighted-average final model satisfactorily explains the eastward rupture directivity observed in the recorded data. 
Parameters found for the Zumpango earthquake include Δτ = 30.2 ± 6.2 MPa, Er = 0.68 ± 0.36 × 10^15 J, G = 1.74 ± 0.44 × 10^15 J, η = 0.27 ± 0.11, Vr/Vs = 0.52 ± 0.09 and Mw = 6.64 ± 0.07 for the stress drop, radiated energy, fracture energy, radiation efficiency, rupture velocity and moment magnitude, respectively.
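A minimal real-coded genetic algorithm conveys the flavor of GA-based inversion: tournament selection, blend crossover, Gaussian mutation, and elitism. The quadratic misfit and all settings below are illustrative stand-ins, not the parallel GA or the dynamic-rupture misfit of the study:

```python
import numpy as np

rng = np.random.default_rng(42)

def ga_minimize(misfit, bounds, pop=40, gens=80):
    """Minimal real-coded GA: tournament selection, blend crossover,
    Gaussian mutation, and elitism on the best model found so far."""
    lo, hi = np.array(bounds, float).T
    x = rng.uniform(lo, hi, size=(pop, len(lo)))
    best, best_f = None, np.inf
    for _ in range(gens):
        f = np.array([misfit(ind) for ind in x])
        k = int(np.argmin(f))
        if f[k] < best_f:
            best, best_f = x[k].copy(), float(f[k])
        i, j = rng.integers(0, pop, (2, pop))           # tournament pairs
        winners = np.where((f[i] < f[j])[:, None], x[i], x[j])
        partners = winners[rng.permutation(pop)]
        alpha = rng.uniform(size=x.shape)               # blend crossover
        x = alpha * winners + (1.0 - alpha) * partners
        x += rng.normal(0.0, 0.02 * (hi - lo), size=x.shape)  # mutation
        x = np.clip(x, lo, hi)
        x[0] = best                                     # elitism
    return best, best_f

# toy 2-parameter "inversion" with a known optimum at (1, -2)
b, bf = ga_minimize(lambda p: (p[0] - 1.0) ** 2 + (p[1] + 2.0) ** 2,
                    [(-5.0, 5.0), (-5.0, 5.0)])
```

In a real source inversion each misfit evaluation would be a full forward dynamic-rupture simulation, which is why the population is evaluated in parallel.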
NASA Astrophysics Data System (ADS)
Morandage, Shehan; Schnepf, Andrea; Vanderborght, Jan; Javaux, Mathieu; Leitner, Daniel; Laloy, Eric; Vereecken, Harry
2017-04-01
Root traits are increasingly important in the breeding of new crop varieties; for example, longer and fewer lateral roots are suggested to improve the drought resistance of wheat. Detailed root architectural parameters are therefore important. However, classical field sampling of roots only provides more aggregated information such as root length density (coring), root counts per area (trenches) or root arrival curves at certain depths (rhizotubes). We investigate the possibility of obtaining information about the root system architecture of plants from field-based classical root sampling schemes, using sensitivity analysis and inverse parameter estimation. This methodology was developed on a virtual experiment in which a root architectural model, parameterized for winter wheat, was used to simulate root system development in a field. This provided the ground truth, which is normally unknown in a real field experiment. The three sampling schemes (coring, trenching, and rhizotubes) were virtually applied and the aggregated information computed. The Morris OAT global sensitivity analysis method was then performed to determine the most sensitive parameters of the root architecture model for the three different sampling methods. The estimated means and standard deviations of the elementary effects of a total of 37 parameters were evaluated. Upper and lower bounds of the parameters were obtained from the literature and published data on winter wheat root architectural parameters. Root length density profiles from coring, arrival curve characteristics observed in rhizotubes, and root counts in grids of the trench profile method were evaluated statistically to investigate the influence of each parameter using five different error functions. The number of branches, insertion angle, inter-nodal distance, and elongation rates are the most sensitive parameters, and parameter sensitivity varies slightly with depth. 
Most parameters and their interactions with other parameters have highly nonlinear effects on the model output. The most sensitive parameters will be subject to inverse estimation from the virtual field sampling data using the DREAMzs algorithm. The estimated parameters can then be compared with the ground truth in order to determine the suitability of the sampling schemes for identifying specific traits or parameters of the root growth model.
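The elementary-effects idea behind Morris OAT screening can be sketched in a few lines: parameters are perturbed one at a time along a trajectory, and each finite-difference effect is recorded. The toy two-parameter model is invented to show a strong versus a weak parameter, not the 37-parameter root model:

```python
def elementary_effects(f, x0, delta=0.1):
    """One Morris OAT trajectory: perturb each parameter in turn from the
    current point and record the elementary effect (finite difference)."""
    x = list(map(float, x0))
    effects = []
    base = f(x)
    for i in range(len(x)):
        x[i] += delta
        effects.append((f(x) - base) / delta)
        base = f(x)          # next effect is measured from the updated point
    return effects

# toy model: output depends strongly on p0, weakly on p1
f = lambda p: 10.0 * p[0] + 0.1 * p[1]
ee = elementary_effects(f, [0.5, 0.5])
```

In the full method, many such trajectories are drawn from random base points, and the mean and standard deviation of the effects per parameter measure overall influence and nonlinearity/interaction, respectively.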
NASA Astrophysics Data System (ADS)
Liu, Y.; Pau, G. S. H.; Finsterle, S.
2015-12-01
Parameter inversion involves inferring the model parameter values based on sparse observations of some observables. To infer the posterior probability distributions of the parameters, Markov chain Monte Carlo (MCMC) methods are typically used. However, the large number of forward simulations needed and limited computational resources limit the complexity of the hydrological model we can use in these methods. In view of this, we studied the implicit sampling (IS) method, an efficient importance sampling technique that generates samples in the high-probability region of the posterior distribution and thus reduces the number of forward simulations that we need to run. For a pilot-point inversion of a heterogeneous permeability field based on a synthetic ponded infiltration experiment simulated with TOUGH2 (a subsurface modeling code), we showed that IS with linear map provides an accurate Bayesian description of the parameterized permeability field at the pilot points with just approximately 500 forward simulations. We further studied the use of surrogate models to improve the computational efficiency of parameter inversion. We implemented two reduced-order models (ROMs) for the TOUGH2 forward model. One is based on polynomial chaos expansion (PCE), of which the coefficients are obtained using the sparse Bayesian learning technique to mitigate the "curse of dimensionality" of the PCE terms. The other model is Gaussian process regression (GPR) for which different covariance, likelihood and inference models are considered. Preliminary results indicate that ROMs constructed based on the prior parameter space perform poorly. It is thus impractical to replace this hydrological model by a ROM directly in a MCMC method. However, the IS method can work with a ROM constructed for parameters in the close vicinity of the maximum a posteriori probability (MAP) estimate. 
We will discuss the accuracy and computational efficiency of using ROMs in the implicit sampling procedure for the hydrological problem considered. This work was supported, in part, by the U.S. Dept. of Energy under Contract No. DE-AC02-05CH11231
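The core of the implicit-sampling idea, concentrating samples near the MAP point and correcting with importance weights, can be sketched in one dimension. The toy Gaussian posterior below is illustrative, not the TOUGH2 pilot-point inversion:

```python
import numpy as np

rng = np.random.default_rng(0)

def neg_log_post(x):
    """Toy unnormalized negative log-posterior: Gaussian, mean 2, sd 0.5."""
    return 0.5 * (x - 2.0) ** 2 / 0.25

x_map, sigma_prop = 2.0, 0.7        # propose around the MAP estimate
xs = rng.normal(x_map, sigma_prop, 5000)
# importance weights: posterior density over proposal density (in log form,
# shifted by the max for numerical stability)
log_w = -neg_log_post(xs) + 0.5 * (xs - x_map) ** 2 / sigma_prop ** 2
w = np.exp(log_w - log_w.max())
w /= w.sum()
post_mean = float(np.sum(w * xs))
```

Because all samples already sit in the high-probability region, far fewer forward evaluations of `neg_log_post` are needed than in a random-walk MCMC exploring the whole prior range, which is also why a ROM built only near the MAP point can suffice.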
NASA Astrophysics Data System (ADS)
Lin, Shangfei; Sheng, Jinyu
2017-12-01
Depth-induced wave breaking is the primary dissipation mechanism for ocean surface waves in shallow waters. Various parameterizations have been developed to represent the depth-induced wave breaking process in ocean surface wave models. The performance of six commonly used parameterizations in simulating significant wave heights (SWHs) is assessed in this study. The main differences between these six parameterizations are representations of the breaker index and the fraction of breaking waves. Laboratory and field observations consisting of 882 cases from 14 sources of published observational data are used in the assessment. We demonstrate that the six parameterizations have reasonable performance in parameterizing depth-induced wave breaking in shallow waters, but with their own limitations and drawbacks. The widely-used parameterization suggested by Battjes and Janssen (1978, BJ78) has a drawback of underpredicting the SWHs in the locally-generated wave conditions and overpredicting in the remotely-generated wave conditions over flat bottoms. The drawback of BJ78 was addressed by a parameterization suggested by Salmon et al. (2015, SA15). But SA15 had relatively larger errors in SWHs over sloping bottoms than BJ78. We follow SA15 and propose a new parameterization with a dependence of the breaker index on the normalized water depth in deep waters similar to SA15. In shallow waters, the breaker index of the new parameterization has a nonlinear dependence on the local bottom slope rather than the linear dependence used in SA15. Overall, this new parameterization has the best performance with an average scatter index of ∼8.2% in comparison with the three best performing existing parameterizations with the average scatter index between 9.2% and 13.6%.
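As a sketch of the machinery involved: in the Battjes-Janssen (1978) formulation, the breaker index γ limits wave heights to Hmax = γh, and the fraction of breaking waves Qb follows from an implicit relation. The relation coded below is the commonly quoted form and γ = 0.73 is a typical default; both are stated here as assumptions rather than the paper's new parameterization:

```python
import math

def breaking_fraction(h_rms, h_max):
    """Solve (1 - Qb)/ln(Qb) = -(Hrms/Hmax)^2 for the fraction of
    breaking waves Qb, by bisection."""
    b2 = (h_rms / h_max) ** 2
    if b2 >= 1.0:
        return 1.0                      # saturated: all waves breaking
    lo, hi = 1e-12, 1.0 - 1e-12
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        # g(Q) = (1-Q)/ln(Q) + b2 decreases from b2 (Q->0) to b2-1 (Q->1)
        if (1.0 - mid) / math.log(mid) + b2 > 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

gamma, depth, h_rms = 0.73, 2.0, 1.0    # illustrative values
qb = breaking_fraction(h_rms, gamma * depth)
```

The new parameterization described in the abstract would replace the constant γ with a function of normalized water depth and (nonlinearly) of the local bottom slope.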
Optimisation of an idealised primitive equation ocean model using stochastic parameterization
NASA Astrophysics Data System (ADS)
Cooper, Fenwick C.
2017-05-01
Using a simple parameterization, an idealised low resolution (biharmonic viscosity coefficient of 5 × 10^12 m^4 s^-1, 128 × 128 grid) primitive equation baroclinic ocean gyre model is optimised to have a much more accurate climatological mean, variance and response to forcing, in all model variables, with respect to a high resolution (biharmonic viscosity coefficient of 8 × 10^10 m^4 s^-1, 512 × 512 grid) equivalent. For example, the change in the climatological mean due to a small change in the boundary conditions is more accurate in the model with parameterization. Both the low resolution and high resolution models are strongly chaotic. We also find that long timescales in the model temperature auto-correlation at depth are controlled by the vertical temperature diffusion parameter and time mean vertical advection and are caused by short timescale random forcing near the surface. This paper extends earlier work that considered a shallow water barotropic gyre. Here the analysis is extended to a more turbulent multi-layer primitive equation model that includes temperature as a prognostic variable. The parameterization consists of a constant forcing, applied to the velocity and temperature equations at each grid point, which is optimised to obtain a model with an accurate climatological mean, and a linear stochastic forcing, that is optimised to also obtain an accurate climatological variance and 5 day lag auto-covariance. A linear relaxation (nudging) is not used. Conservation of energy and momentum is discussed in an appendix.
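The variance-and-lag-covariance matching for the stochastic forcing can be sketched with a scalar AR(1) process, for which the stationary variance is σ²/(1−φ²) and the lag-k autocovariance is φᵏ times the variance. This is a one-dimensional illustration of the idea, not the model's actual optimization:

```python
import math

def fit_ar1(var_target, lag_cov_target, lag=5):
    """Fit AR(1) noise x[t+1] = phi*x[t] + sigma*eps so its stationary
    variance and lag-`lag` autocovariance match the targets."""
    phi = (lag_cov_target / var_target) ** (1.0 / lag)
    sigma = math.sqrt(var_target * (1.0 - phi ** 2))
    return phi, sigma

# hypothetical targets: unit variance, 5-step autocovariance of 0.5
phi, sigma = fit_ar1(1.0, 0.5, lag=5)
```

In the paper's setting the analogue is a multivariate linear stochastic forcing per grid point, tuned so the forced model reproduces the high-resolution climatological variance and 5-day lag auto-covariance.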
The Super Tuesday Outbreak: Forecast Sensitivities to Single-Moment Microphysics Schemes
NASA Technical Reports Server (NTRS)
Molthan, Andrew L.; Case, Jonathan L.; Dembek, Scott R.; Jedlovec, Gary J.; Lapenta, William M.
2008-01-01
Forecast precipitation and radar characteristics are used by operational centers to guide the issuance of advisory products. As operational numerical weather prediction is performed at increasingly finer spatial resolution, convective precipitation traditionally represented by sub-grid scale parameterization schemes is now being determined explicitly through single- or multi-moment bulk water microphysics routines. Gains in forecasting skill are expected through improved simulation of clouds and their microphysical processes. High resolution model grids and advanced parameterizations are now available through steady increases in computer resources. As with any parameterization, their reliability must be measured through performance metrics, with errors noted and targeted for improvement. Furthermore, the use of these schemes within an operational framework requires an understanding of limitations and an estimate of biases so that forecasters and model development teams can be aware of potential errors. The National Severe Storms Laboratory (NSSL) Spring Experiments have produced daily, high resolution forecasts used to evaluate forecast skill among an ensemble with varied physical parameterizations and data assimilation techniques. In this research, high resolution forecasts of the 5-6 February 2008 Super Tuesday Outbreak are replicated using the NSSL configuration in order to evaluate two components of simulated convection on a large domain: sensitivities of quantitative precipitation forecasts to assumptions within a single-moment bulk water microphysics scheme, and to determine if these schemes accurately depict the reflectivity characteristics of well-simulated, organized, cold frontal convection. As radar returns are sensitive to the amount of hydrometeor mass and the distribution of mass among variably sized targets, radar comparisons may guide potential improvements to a single-moment scheme. 
In addition, object-based verification metrics are evaluated for their utility in gauging model performance and QPF variability.
NASA Astrophysics Data System (ADS)
Bonan, G. B.
2016-12-01
Soil moisture stress is a key regulator of canopy transpiration, the surface energy budget, and land-atmosphere coupling. Many land surface models used in Earth system models have an ad-hoc parameterization of soil moisture stress that decreases stomatal conductance with soil drying. Parameterization of soil moisture stress from more fundamental principles of plant hydrodynamics is a key research frontier for land surface models. While the biophysical and physiological foundations of such parameterizations are well-known, their best implementation in land surface models is less clear. Land surface models utilize a big-leaf canopy parameterization (or two big-leaves to represent the sunlit and shaded canopy) without vertical gradients in the canopy. However, there are strong biometeorological and physiological gradients in plant canopies. Are these gradients necessary to resolve? Here, I describe a vertically-resolved, multilayer canopy model that calculates leaf temperature and energy fluxes, photosynthesis, stomatal conductance, and leaf water potential at each level in the canopy. In this model, midday leaf water stress manifests in the upper canopy layers, which receive high amounts of solar radiation, have high leaf nitrogen and photosynthetic capacity, and have high stomatal conductance and transpiration rates (in the absence of leaf water stress). Lower levels in the canopy become water stressed in response to longer-term soil moisture drying. I examine the role of vertical gradients in the canopy microclimate (solar radiation, air temperature, vapor pressure, wind speed), structure (leaf area density), and physiology (leaf nitrogen, photosynthetic capacity, stomatal conductance) in determining above canopy fluxes and gradients of transpiration and leaf water potential within the canopy.
Cheng, Meng -Dawn; Kabela, Erik D.
2016-04-30
The Potential Source Contribution Function (PSCF) model has been successfully used for identifying emission source regions at long distances. The PSCF model relies on backward trajectories calculated by the Hybrid Single-Particle Lagrangian Integrated Trajectory (HYSPLIT) model. In this study, we investigated the impacts of grid resolution and Planetary Boundary Layer (PBL) parameterization (e.g., turbulent transport of pollutants) on the PSCF analysis. The Mellor-Yamada-Janjic (MYJ) and Yonsei University (YSU) parameterization schemes were selected to model the turbulent transport in the PBL within the Weather Research and Forecasting (WRF version 3.6) model. Two separate domain grid sizes (83 and 27 km) were chosen in the WRF downscaling in generating the wind data for driving the HYSPLIT calculation. The effects of grid size and PBL parameterization are important in incorporating the influence of regional and local meteorological processes such as jet streaks, blocking patterns, Rossby waves, and terrain-induced convection on the transport of pollutants by a wind trajectory. We found that the high-resolution PSCF did discover and locate source areas more precisely than that with lower-resolution meteorological inputs. The lack of anticipated improvement could also be because the PBL scheme chosen to produce the WRF data was only a local parameterization and unable to faithfully duplicate the real atmosphere on a global scale. The MYJ scheme was able to replicate PSCF source identification by those using the Reanalysis data and to discover additional source areas that were not identified by the Reanalysis data. In conclusion, a potential benefit of using high-resolution wind data in PSCF modeling is that it can discover new source locations in addition to those identified by using the Reanalysis data input.
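The PSCF statistic itself is a simple conditional frequency: for each grid cell, the number of back-trajectory endpoints m_ij associated with "polluted" arrivals divided by the total number of endpoints n_ij in that cell. A minimal sketch, with made-up coordinates and pollution flags:

```python
from collections import defaultdict

def pscf(endpoints, polluted_flags, cell=1.0):
    """PSCF_ij = m_ij / n_ij: fraction of back-trajectory endpoints in
    grid cell (i, j) that belong to trajectories arriving on polluted days."""
    n = defaultdict(int)   # all endpoints per cell
    m = defaultdict(int)   # polluted endpoints per cell
    for (lon, lat), bad in zip(endpoints, polluted_flags):
        key = (int(lon // cell), int(lat // cell))
        n[key] += 1
        m[key] += bad
    return {k: m[k] / n[k] for k in n}

# two endpoints fall in cell (10, 50), one of them polluted; one
# polluted endpoint falls in cell (12, 48)
field = pscf([(10.2, 50.1), (10.4, 50.3), (12.1, 48.9)], [1, 0, 1])
```

Because the endpoints come from HYSPLIT trajectories driven by the WRF winds, any bias in grid resolution or PBL scheme propagates directly into which cells accumulate polluted counts.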
NASA Technical Reports Server (NTRS)
Petty, Grant W.; Katsaros, Kristina B.
1994-01-01
Based on a geometric optics model and the assumption of an isotropic Gaussian surface slope distribution, the component of ocean surface microwave emissivity variation due to large-scale surface roughness is parameterized for the frequencies and approximate viewing angle of the Special Sensor Microwave/Imager. Independent geophysical variables in the parameterization are the effective (microwave frequency dependent) slope variance and the sea surface temperature. Using the same physical model, the change in the effective zenith angle of reflected sky radiation arising from large-scale roughness is also parameterized. Independent geophysical variables in this parameterization are the effective slope variance and the atmospheric optical depth at the frequency in question. Both of the above model-based parameterizations are intended for use in conjunction with empirical parameterizations relating effective slope variance and foam coverage to near-surface wind speed. These empirical parameterizations are the subject of a separate paper.
Multiple Scattering Principal Component-based Radiative Transfer Model (PCRTM) from Far IR to UV-Vis
NASA Astrophysics Data System (ADS)
Liu, X.; Wu, W.; Yang, Q.
2017-12-01
Modern hyperspectral satellite remote sensors such as AIRS, CrIS, IASI, and CLARREO all require accurate and fast radiative transfer models that can deal with multiple scattering by clouds and aerosols in order to explore their information content. However, full radiative transfer calculations using multiple-stream methods such as discrete ordinates (DISORT), adding-doubling (AD), or successive order of scattering (SOS) are very time consuming. We have developed a principal component-based radiative transfer model (PCRTM) that reduces the computational burden by orders of magnitude while maintaining high accuracy. By exploiting spectral correlations, the PCRTM reduces the number of radiative transfer calculations in the frequency domain. It further uses a hybrid stream method to decrease the number of calls to the computationally expensive multiple scattering calculations with high stream numbers. Other fast parameterizations have been used in the infrared spectral region to reduce the computational time to milliseconds for an AIRS forward simulation (2378 spectral channels). The PCRTM has been developed to cover the spectral range from the far IR to the UV-Vis. The PCRTM model has been used for satellite data inversions, proxy data generation, inter-satellite calibrations, spectral fingerprinting, and climate OSSEs. We will show examples of applying the PCRTM to single-field-of-view cloudy retrievals of atmospheric temperature, moisture, trace gases, clouds, and surface parameters. We will also show how the PCRTM is used for the NASA CLARREO project.
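The spectral-correlation idea behind a PC-based model can be illustrated in a few lines: training spectra are decomposed into a mean plus a handful of leading principal components, so a hyperspectral radiance vector is represented by a few scores instead of thousands of channels. This sketch shows only the compression/reconstruction step, not the PCRTM's hybrid-stream or predictor machinery; the function names are illustrative:

```python
import numpy as np

def fit_pcs(train_spectra, n_pc):
    """train_spectra: (n_samples, n_channels).  Returns the mean spectrum
    and the n_pc leading principal components (one PC per row)."""
    mean = train_spectra.mean(axis=0)
    _, _, vt = np.linalg.svd(train_spectra - mean, full_matrices=False)
    return mean, vt[:n_pc]

def compress(spectrum, mean, pcs):
    # project a full-channel spectrum onto the PC basis -> n_pc scores
    return (spectrum - mean) @ pcs.T

def reconstruct(scores, mean, pcs):
    # recover the full-channel spectrum from the scores
    return mean + scores @ pcs
```

The speed-up in a PC-based RT model comes from predicting the scores directly (e.g. from monochromatic calculations at a reduced channel set) rather than running line-by-line radiative transfer for every channel.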
Solar activity and oscillation frequency splittings
NASA Technical Reports Server (NTRS)
Woodard, M. F.; Libbrecht, K. G.
1993-01-01
Solar p-mode frequency splittings, parameterized by the coefficients through order N = 12 of a Legendre polynomial expansion of the mode frequencies as a function of m/L, were obtained from an analysis of helioseismology data taken at Big Bear Solar Observatory during the 4 years 1986 and 1988-1990 (approximately solar minimum to maximum). Inversion of the even-index splitting coefficients confirms that there is a significant contribution to the frequency splittings originating near the solar poles. The strength of the polar contribution is anticorrelated with the overall level of solar activity in the active latitudes, suggesting a relation to polar faculae. From an analysis of the odd-index splitting coefficients we infer an upper limit to changes in the solar equatorial near-surface rotational velocity of less than 1.9 m/s (3 sigma limit) between solar minimum and maximum.
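Concretely, the splitting of an (n, l) multiplet is expanded as ν_nlm − ν_nl = Σ_{i=1}^{N} a_i P_i(m/L). A minimal evaluation, taking L = l for simplicity (some analyses instead use L = sqrt(l(l+1)), and normalization conventions for the a_i vary between groups):

```python
import numpy as np
from numpy.polynomial import legendre

def frequency_offset(m, ell, a):
    """Frequency splitting (same units as the a_i) of azimuthal order m
    for a mode of degree ell, given coefficients a = [a1, ..., aN].
    a0 is fixed to zero so the multiplet's mean frequency is unchanged."""
    coeffs = np.concatenate(([0.0], np.asarray(a, dtype=float)))
    return legendre.legval(m / ell, coeffs)
```

The odd-index coefficients (a1, a3, ...) capture rotation, since rotational splitting is antisymmetric in m; the even-index coefficients capture north-south symmetric effects such as activity-related asphericity.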
Research on ionospheric tomography based on variable pixel height
NASA Astrophysics Data System (ADS)
Zheng, Dunyong; Li, Peiqing; He, Jie; Hu, Wusheng; Li, Chaokui
2016-05-01
A novel ionospheric tomography technique based on variable pixel height was developed for the tomographic reconstruction of the ionospheric electron density distribution. The method considers the height of each pixel as an unknown variable, which is retrieved during the inversion process together with the electron density values. In contrast to conventional computerized ionospheric tomography (CIT), which parameterizes the model with a fixed pixel height, the variable-pixel-height computerized ionospheric tomography (VHCIT) model applies a disturbance to the height of each pixel. In comparison with conventional CIT models, the VHCIT technique achieved superior results in a numerical simulation. A careful validation of the reliability and superiority of VHCIT was performed. According to the statistical analysis of the average root mean square errors, the proposed model offers an improvement of 15% compared with conventional CIT models.
Implications of summertime marine stratocumulus on the North American climate
NASA Technical Reports Server (NTRS)
Clark, John H. E.
1994-01-01
This study focuses on the effects of summertime stratocumulus over the eastern Pacific. This cloud is linked to the semi-permanent subtropical highs that dominate the low-level circulation over the Pacific and Atlantic. Subsidence on the eastern flank of these highs creates an inversion based about 800 m above sea level that caps moist air near the surface. This air overlies cool waters driven by upwelling along the coastal regions of North America. Strong surface north-westerlies mix the boundary layer enough to saturate the air just below the capping inversion. Widespread stratocumulus is thus formed. All calculations were carried out using the GENESIS general circulation model that was run at MSFC. Among the more important properties of the model is that it includes radiative forcing due to absorption of solar radiation and the emission of infrared radiation, interactive clouds (both stratocumulus and cumulus types), and exchanges of heat and moisture with the lower boundary. Clouds are interactive in the sense that they impact the circulation by modifying the fields of radiative heating and turbulent fluxes of heat and moisture in the boundary layer. In turn, clouds are modified by the winds through the advection of moisture. In order to isolate the effects of mid- and high-latitude stratocumulus, two runs were made with the model: one with and the other without stratocumulus. The runs were made for a year, but with perpetual July conditions, i.e., solar forcing was fixed; the diurnal solar cycle, however, was allowed for. The sea surface temperature distribution was fixed in both runs to represent climatological July conditions. All dependent variables were represented at 12 surfaces of constant sigma = p/p0, where p is pressure and p0 is surface pressure. To facilitate analysis, model output was transformed to constant pressure surfaces. Structures no smaller in size than 7.5 degrees in longitude and 4.5 degrees in latitude were resolved.
Smaller features of the circulation were parameterized. The model thus captures synoptic- and planetary-scale circulation features.
NASA Astrophysics Data System (ADS)
Silvers, L. G.; Stevens, B. B.; Mauritsen, T.; Marco, G. A.
2015-12-01
The characteristics of clouds in General Circulation Models (GCMs) need to be constrained in a manner consistent with theory, observations, and high resolution models (HRMs). One way forward is to base improvements of parameterizations on high resolution studies, which resolve more of the important dynamical motions and require less parameterization. This is difficult because of the numerous differences between GCMs and HRMs, both technical and theoretical. Century-long simulations at resolutions of 20-250 km on a global domain are typical of GCMs, while HRMs often simulate hours at resolutions of 0.1-5 km on domains the size of a single GCM grid cell. The recently developed model ICON provides a flexible framework which allows many of these difficulties to be overcome. This study uses the ICON model to compute SST perturbation simulations on multiple domains in a state of Radiative Convective Equilibrium (RCE) with parameterized convection. The domains used range from roughly the size of Texas to nearly half of Earth's surface area. All simulations use a doubly periodic domain with an effective distance between cell centers of 13 km and are integrated to a state of statistical stationarity. The primary analysis examines the mean characteristics of the cloud-related fields and the feedback parameter of the simulations. It is shown that the simulated atmosphere of a GCM in RCE is sufficiently similar across a range of domain sizes to justify the use of RCE to study both a GCM and an HRM on the same domain, with the goal of improved constraints on the parameterized clouds. The simulated atmospheres are comparable to what could be expected at midday in a typical region of Earth's tropics under calm conditions. In particular, the differences between the domains are smaller than the differences which result from choosing different physics schemes. Significant convective organization is present on all domain sizes, with a relatively high subsidence fraction.
Notwithstanding the overall qualitative similarities of the simulations, quantitative differences lead to a surprisingly large sensitivity of the feedback parameter. This range of the feedback parameter is more than a factor of two and is similar to the range of feedbacks which were obtained by the CMIP5 models.
Reconstruction of structural damage based on reflection intensity spectra of fiber Bragg gratings
NASA Astrophysics Data System (ADS)
Huang, Guojun; Wei, Changben; Chen, Shiyuan; Yang, Guowei
2014-12-01
We present an approach for structural damage reconstruction based on the reflection intensity spectra of fiber Bragg gratings (FBGs). Our approach incorporates the finite element method, transfer matrix (T-matrix), and genetic algorithm to solve the inverse photo-elastic problem of damage reconstruction, i.e. to identify the location, size, and shape of a defect. By introducing a parameterized characterization of the damage information, the inverse photo-elastic problem is reduced to an optimization problem, and a relevant computational scheme was developed. The scheme iteratively searches for the solution to the corresponding direct photo-elastic problem until the simulated and measured (or target) reflection intensity spectra of the FBGs near the defect coincide within a prescribed error. Proof-of-concept validations of our approach were performed numerically and experimentally using both holed and cracked plate samples as typical cases of plane-stress problems. The damage identifiability was simulated by changing the deployment of the FBG sensors, including the total number of sensors and their distance to the defect. Both the numerical and experimental results demonstrate that our approach is effective and promising. It provides us with a photo-elastic method for developing a remote, automatic damage-imaging technique that substantially improves damage identification for structural health monitoring.
Watts, Seth; Tortorelli, Daniel A.
2017-04-13
Topology optimization is a methodology for assigning material or void to each point in a design domain in a way that extremizes some objective function, such as the compliance of a structure under given loads, subject to various imposed constraints, such as an upper bound on the mass of the structure. Geometry projection is a means to parameterize the topology optimization problem, by describing the design in a way that is independent of the mesh used for analysis of the design's performance; it results in many fewer design parameters, necessarily resolves the ill-posed nature of the topology optimization problem, and provides sharp descriptions of the material interfaces. We extend previous geometric projection work to 3 dimensions and design unit cells for lattice materials using inverse homogenization. We perform a sensitivity analysis of the geometric projection and show it has smooth derivatives, making it suitable for use with gradient-based optimization algorithms. The technique is demonstrated by designing unit cells comprised of a single constituent material plus void space to obtain light, stiff materials with cubic and isotropic material symmetry. Here, we also design a single-constituent isotropic material with negative Poisson's ratio and a light, stiff material comprised of 2 constituent solids plus void space.
Liu, Dan; Cai, Wenwen; Xia, Jiangzhou; Dong, Wenjie; Zhou, Guangsheng; Chen, Yang; Zhang, Haicheng; Yuan, Wenping
2014-01-01
Gross Primary Production (GPP) is the largest flux in the global carbon cycle. However, large uncertainties persist in current global estimates. In this study, we examined the performance of a process-based model (Integrated BIosphere Simulator, IBIS) at 62 eddy covariance sites around the world. Our results indicated that the IBIS model explained 60% of the observed variation in daily GPP at all validation sites. Comparison with a satellite-based vegetation model (Eddy Covariance-Light Use Efficiency, EC-LUE) revealed that the IBIS simulations yielded GPP results comparable to those of the EC-LUE model. Global mean GPP estimated by the IBIS model was 107.50±1.37 Pg C year(-1) (mean value ± standard deviation) across the vegetated area for the period 2000-2006, consistent with the results of the EC-LUE model (109.39±1.48 Pg C year(-1)). To evaluate the uncertainty introduced by the parameter Vcmax, which represents the maximum photosynthetic capacity, we inverted Vcmax using Markov Chain Monte Carlo (MCMC) procedures. Using the inverted Vcmax values, the simulated global GPP increased by 16.5 Pg C year(-1), indicating that the IBIS model is sensitive to Vcmax and that large uncertainty exists in model parameterization.
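An MCMC inversion of a single parameter such as Vcmax can be sketched with a random-walk Metropolis sampler fit against flux observations. The toy light-response model, the flat prior bounds, and the fixed observation noise below are illustrative assumptions, not the IBIS formulation:

```python
import numpy as np

rng = np.random.default_rng(0)

def model(vcmax, par):
    # toy GPP model: saturating light response scaled by Vcmax
    return vcmax * par / (par + 200.0)

def metropolis(obs, par, n_iter=5000, step=2.0, prior=(10.0, 200.0)):
    """Random-walk Metropolis sampling of one parameter under a flat
    prior and a Gaussian likelihood with known sigma."""
    sigma = 1.0
    def log_post(v):
        if not (prior[0] < v < prior[1]):
            return -np.inf  # outside prior support
        return -0.5 * np.sum((obs - model(v, par)) ** 2) / sigma**2
    v = 50.0                # arbitrary starting value
    lp = log_post(v)
    chain = []
    for _ in range(n_iter):
        v_new = v + step * rng.normal()
        lp_new = log_post(v_new)
        # accept with probability min(1, posterior ratio)
        if np.log(rng.uniform()) < lp_new - lp:
            v, lp = v_new, lp_new
        chain.append(v)
    return np.array(chain)
```

The posterior mean of the chain (after discarding burn-in) plays the role of the "inverted" parameter value, and the chain spread quantifies the parameter uncertainty.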
Approaches for Subgrid Parameterization: Does Scaling Help?
NASA Astrophysics Data System (ADS)
Yano, Jun-Ichi
2016-04-01
Arguably, scaling behavior is a well-established fact in many geophysical systems, and there are already many theoretical studies elucidating this issue. However, the scaling law has been slow to be introduced into "operational" geophysical modelling, notably weather forecast and climate projection models. The main purpose of this presentation is to ask why, and to try to answer this question. As a reference point, the presentation reviews the three major approaches to traditional subgrid parameterization: moment, PDF (probability density function), and mode decomposition. The moment expansion is a standard method for describing subgrid-scale turbulent flows both in the atmosphere and the oceans. The PDF approach is intuitively appealing as it deals with the distribution of variables on the subgrid scale in a more direct manner. The third category, originally proposed by Aubry et al. (1988) in the context of wall boundary-layer turbulence, is specifically designed to represent coherencies in a compact manner by a low-dimensional dynamical system. Their original proposal adopts the proper orthogonal decomposition (POD, or empirical orthogonal functions, EOF) as the mode-decomposition basis, but the methodology can easily be generalized to any decomposition basis. The mass-flux formulation currently adopted in the majority of atmospheric models for parameterizing convection can also be considered a special case of mode decomposition, adopting segmentally-constant modes for the expansion basis. The mode decomposition can, furthermore, be re-interpreted as a type of Galerkin approach to numerically modelling subgrid-scale processes. Simple extrapolation of this re-interpretation further suggests that the subgrid parameterization problem may be re-interpreted as a type of mesh-refinement problem in numerical modelling. We furthermore see a link between the subgrid parameterization and downscaling problems along this line.
The mode decomposition approach would also be the best framework for linking the traditional parameterizations with the scaling perspectives. However, by seeing the link more clearly, we also see the strengths and weaknesses of introducing scaling perspectives into parameterizations. Any diagnosis under a mode decomposition would immediately reveal the power-law nature of the spectrum; exploiting this knowledge in an operational parameterization, however, is a different story. It is symbolic that POD studies have focused on representing the largest-scale coherency within a grid box under a high truncation, and this problem is already hard enough. Looked at differently, the scaling law is a very concise way of characterizing many subgrid-scale variabilities in systems. We may even argue that the scaling law can provide almost complete subgrid-scale information for constructing a parameterization, but with a major missing link: its amplitude must be specified by an additional condition. This condition is called "closure" in the parameterization problem and is known to be a tough problem. We should also realize that studies of scaling behavior tend to be statistical in the sense that they hardly provide complete information for constructing a parameterization: can we specify the coefficients of all the decomposition modes perfectly from a scaling law when the first few leading modes are specified? Arguably, the renormalization group (RNG) is a very powerful tool for reducing a system with scaling behavior to a low dimension, say, under an appropriate mode decomposition procedure. However, the RNG is an analytical tool: it is extremely hard to apply to real, complex geophysical systems. It appears that we still have a long way to go before we can begin to exploit the scaling law to construct operational subgrid parameterizations in an effective manner.
NASA Astrophysics Data System (ADS)
Gilson, Gaëlle; Jiskoot, Hester
2017-04-01
Arctic sea fog has not been extensively studied despite its environmental importance, for example its impact on traffic safety and on glacier ablation in coastal Arctic regions. Understanding fog processes can improve nowcasting of environmental impacts in such remote regions, where few observational data exist. To understand fog's physical, macrophysical, and radiative properties, it is important to determine an accurate Arctic fog climatology. Our previous study suggested that fog peaks in July over East Greenland and is associated with sea ice break-up and a sea breeze with wind speeds between 1 and 4 m/s. The goal of this study is to understand the macrophysical properties of Arctic coastal fog and to quantify its vertical extent. Radiosonde profiles for 1980-2012 were extracted from the Integrated Global Radiosonde Archive (IGRA), coincident with manual and automated fog observations at three synoptic weather stations along the coast of East Greenland. A new method using air mass saturation ratio and thermodynamic stability was developed to derive fog top height from IGRA radiosonde profiles. Soundings were classified into nine categories, based on surface and low-level saturation ratio, inversion type, and the fog top height relative to the inversion base. Results show that Arctic coastal fog mainly occurs under thermodynamically stable conditions characterized by deep and strong low-level inversions. Fog thickness is commonly about 100-400 m, often reaching the top of the boundary layer. Fog top height is greater at northern stations, where daily fog duration is also longer and fog often lasts throughout the day. Fog thickness is likely correlated with sea ice concentration during sea ice break-up. Overall, it is hypothesized that our sounding classes represent development or dissipation stages of advection fog, or stratus lowering and fog lifting processes.
With a new automated method, it is planned to retrieve fog height from IGRA data over Arctic terrain around the entire North Atlantic region. These results will serve as a basis for the incorporation of fog and temperature inversions into glacier surface energy balance models and can aid in improving the parameterization of fog for nowcasting methods for aviation applications.
New Approaches to Quantifying Transport Model Error in Atmospheric CO2 Simulations
NASA Technical Reports Server (NTRS)
Ott, L.; Pawson, S.; Zhu, Z.; Nielsen, J. E.; Collatz, G. J.; Gregg, W. W.
2012-01-01
In recent years, much progress has been made in observing CO2 distributions from space. However, the use of these observations to infer source/sink distributions in inversion studies continues to be complicated by difficulty in quantifying atmospheric transport model errors. We will present results from several different experiments designed to quantify different aspects of transport error using the Goddard Earth Observing System, Version 5 (GEOS-5) Atmospheric General Circulation Model (AGCM). In the first set of experiments, an ensemble of simulations is constructed using perturbations to parameters in the model's moist physics and turbulence parameterizations that control sub-grid scale transport of trace gases. Analysis of the ensemble spread and scales of temporal and spatial variability among the simulations allows insight into how parameterized, small-scale transport processes influence simulated CO2 distributions. In the second set of experiments, atmospheric tracers representing model error are constructed using observation-minus-analysis statistics from NASA's Modern-Era Retrospective Analysis for Research and Applications (MERRA). The goal of these simulations is to understand how errors in large-scale dynamics are distributed, and how they propagate in space and time, affecting trace gas distributions. These simulations will also be compared to results from NASA's Carbon Monitoring System Flux Pilot Project, which quantified the impact of uncertainty in satellite-constrained CO2 flux estimates on atmospheric mixing ratios, to assess the major factors governing uncertainty in global and regional trace gas distributions.
Uniting statistical and individual-based approaches for animal movement modelling.
Latombe, Guillaume; Parrott, Lael; Basille, Mathieu; Fortin, Daniel
2014-01-01
The dynamic nature of their internal states and the environment directly shapes animals' spatial behaviours and gives rise to emergent properties at broader scales in natural systems. However, integrating these dynamic features into habitat selection studies remains challenging: field work to access internal states is practically impossible, and current statistical models cannot produce dynamic outputs. To address these issues, we developed a robust method that combines statistical and individual-based modelling. Using a statistical technique for forward modelling of the IBM has the advantage of being faster to parameterize than a pure inverse modelling technique and allows for robust selection of parameters. Using GPS locations from caribou monitored in Québec, we modelled caribou movements based on generative mechanisms accounting for dynamic variables at a low level of emergence. These variables were accessed by replicating real individuals' movements in parallel sub-models, and movement parameters were then empirically parameterized using Step Selection Functions. The final IBM was validated using both k-fold cross-validation and emergent-pattern validation, and was tested for two scenarios with varying hardwood encroachment. Our results highlighted a functional response in habitat selection, which suggests that our method was able to capture the complexity of the natural system and adequately provide projections of future possible states of the system in response to different management plans. This is especially relevant for testing the long-term impact of scenarios corresponding to environmental configurations that have yet to be observed in real systems.
Improved Climate Simulations through a Stochastic Parameterization of Ocean Eddies
NASA Astrophysics Data System (ADS)
Williams, Paul; Howe, Nicola; Gregory, Jonathan; Smith, Robin; Joshi, Manoj
2017-04-01
In climate simulations, the impacts of the subgrid scales on the resolved scales are conventionally represented using deterministic closure schemes, which assume that the impacts are uniquely determined by the resolved scales. Stochastic parameterization relaxes this assumption, by sampling the subgrid variability in a computationally inexpensive manner. This study shows that the simulated climatological state of the ocean is improved in many respects by implementing a simple stochastic parameterization of ocean eddies into a coupled atmosphere-ocean general circulation model. Simulations from a high-resolution, eddy-permitting ocean model are used to calculate the eddy statistics needed to inject realistic stochastic noise into a low-resolution, non-eddy-permitting version of the same model. A suite of four stochastic experiments is then run to test the sensitivity of the simulated climate to the noise definition by varying the noise amplitude and decorrelation time within reasonable limits. The addition of zero-mean noise to the ocean temperature tendency is found to have a nonzero effect on the mean climate. Specifically, in terms of the ocean temperature and salinity fields both at the surface and at depth, the noise reduces many of the biases in the low-resolution model and causes it to more closely resemble the high-resolution model. The variability of the strength of the global ocean thermohaline circulation is also improved. It is concluded that stochastic ocean perturbations can yield reductions in climate model error that are comparable to those obtained by refining the resolution, but without the increased computational cost. Therefore, stochastic parameterizations of ocean eddies have the potential to significantly improve climate simulations. Reference Williams PD, Howe NJ, Gregory JM, Smith RS, and Joshi MM (2016) Improved Climate Simulations through a Stochastic Parameterization of Ocean Eddies. Journal of Climate, 29, 8763-8781. 
http://dx.doi.org/10.1175/JCLI-D-15-0746.1
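The zero-mean noise used in schemes of this kind is typically red in time, i.e. first-order autoregressive, with a prescribed amplitude and decorrelation time. A generic sketch of generating such a perturbation field to add to a temperature tendency (the amplitude and timescale here are placeholders, not the paper's values):

```python
import numpy as np

def ar1_noise(n_steps, shape, amp, tau, dt, rng):
    """Zero-mean AR(1) (red) noise field with standard deviation `amp`
    and decorrelation time `tau`; returns an array (n_steps, *shape)."""
    phi = np.exp(-dt / tau)                 # lag-1 autocorrelation
    eps = amp * np.sqrt(1.0 - phi**2)       # innovation std, keeps variance stationary
    out = np.empty((n_steps,) + shape)
    out[0] = amp * rng.standard_normal(shape)
    for t in range(1, n_steps):
        out[t] = phi * out[t - 1] + eps * rng.standard_normal(shape)
    return out
```

At each model time step one such field, scaled appropriately, would be added to the resolved temperature tendency; because the noise has zero mean, any change in the mean climate reflects a genuine rectified (nonlinear) response.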
NASA Astrophysics Data System (ADS)
Churilova, T.; Moiseeva, N.; Efimova, T.; Suslin, V.; Krivenko, O.; Zemlianskaia, E.
2017-11-01
Bio-optical studies were carried out in coastal waters around the Crimea peninsula in different seasons of 2016. It was shown that the variability of chlorophyll a concentration (Chl-a) and of light absorption by suspended particles (ap(λ)), phytoplankton pigments (aph(λ)), non-algal particles (aNAP(λ)), and colored dissolved organic matter (aCDOM(λ)) in the Crimea coastal waters was high (about an order of magnitude) in all seasons of 2016. Relationships between ap(440), aph(440), and Chl-a were obtained and their seasonal differences were analyzed. The spectral distributions of aNAP(λ) and aCDOM(λ) were parameterized. Seasonality was revealed in the aCDOM(λ) parameterization but not in the aNAP(λ) parameterization. The budget of light absorption by aph(λ), aNAP(λ), and aCDOM(λ) at 440 nm was assessed and its seasonal dynamics analyzed.
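The standard parameterization for both aCDOM(λ) and aNAP(λ) is an exponential decrease with wavelength away from a reference band, a(λ) = a(λ0) · exp(−S(λ − λ0)), so the seasonality reported here amounts to seasonal differences in the reference absorption a(440) and the slope S. A minimal sketch (the slope in the usage note is a typical literature magnitude, not a result of this study):

```python
import numpy as np

def a_exp(wavelength_nm, a_ref, slope, ref_nm=440.0):
    """Exponential absorption model a(lambda) = a_ref * exp(-S (lambda - ref)),
    with a_ref the absorption at the reference wavelength and S in nm^-1."""
    lam = np.asarray(wavelength_nm, dtype=float)
    return a_ref * np.exp(-slope * (lam - ref_nm))
```

For example, with a440 = 0.1 m⁻¹ and a typical CDOM slope S ≈ 0.018 nm⁻¹, `a_exp(540, 0.1, 0.018)` gives the absorption 100 nm to the red of the reference band.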
Application of a planetary wave breaking parameterization to stratospheric circulation statistics
NASA Technical Reports Server (NTRS)
Randel, William J.; Garcia, Rolando R.
1994-01-01
The planetary wave parameterization scheme developed recently by Garcia is applied to stratospheric circulation statistics derived from 12 years of National Meteorological Center operational stratospheric analyses. From the data a planetary wave breaking criterion (based on the ratio of the eddy to zonal mean meridional potential vorticity (PV) gradients), a wave damping rate, and a meridional diffusion coefficient are calculated. The equatorward flank of the polar night jet during winter is identified as a wave breaking region from the observed PV gradients; the region moves poleward with season, covering all high latitudes in spring. Derived damping rates maximize in the subtropical upper stratosphere (the 'surf zone'), with damping time scales of 3-4 days. Maximum diffusion coefficients follow the spatial patterns of the wave breaking criterion, with magnitudes comparable to prior published estimates. Overall, the observed results agree well with the parameterized calculations of Garcia.
Rogers, Alistair; Serbin, Shawn P; Ely, Kim S; Sloan, Victoria L; Wullschleger, Stan D
2017-12-01
Terrestrial biosphere models (TBMs) are highly sensitive to the model representation of photosynthesis, in particular the parameters maximum carboxylation rate and maximum electron transport rate at 25°C (Vc,max.25 and Jmax.25, respectively). Many TBMs do not include representation of Arctic plants, and those that do rely on understanding and parameterization from temperate species. We measured photosynthetic CO2 response curves and leaf nitrogen (N) content in species representing the dominant vascular plant functional types found on the coastal tundra near Barrow, Alaska. The activation energies associated with the temperature response functions of Vc,max and Jmax were 17% lower than commonly used values. When scaled to 25°C, Vc,max.25 and Jmax.25 were two- to five-fold higher than the values used to parameterize current TBMs. This high photosynthetic capacity was attributable to a high leaf N content and the high fraction of N invested in Rubisco. Leaf-level modeling demonstrated that the current parameterization of TBMs results in a two-fold underestimation of the capacity for leaf-level CO2 assimilation in Arctic vegetation. This study highlights the poor representation of Arctic photosynthesis in TBMs, and provides the critical data necessary to improve our ability to project the response of the Arctic to global environmental change.
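The temperature response functions mentioned here typically take an Arrhenius form, k(T) = k25 · exp[Ha(T − 298.15)/(298.15 · R · T)], so a lower activation energy Ha gives a flatter temperature response and a smaller correction when scaling field measurements to 25°C. A minimal sketch (the Ha value in the usage note is a placeholder, not the paper's fitted value):

```python
import math

R = 8.314  # universal gas constant, J mol-1 K-1

def arrhenius(k25, ha, t_leaf_c):
    """Scale a rate (e.g. Vc,max) from its value at 25 degC to leaf
    temperature t_leaf_c using activation energy ha (J mol-1)."""
    tk = t_leaf_c + 273.15
    return k25 * math.exp(ha * (tk - 298.15) / (298.15 * R * tk))
```

For example, comparing `arrhenius(100.0, 65000.0, 15.0)` with the same call at a 17% lower Ha shows that the reduced activation energy leaves more of the 25°C capacity intact at cold Arctic leaf temperatures.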
NASA Astrophysics Data System (ADS)
Hughes, J. D.; White, J.; Doherty, J.
2011-12-01
Linear prediction uncertainty analysis in a Bayesian framework was applied to guide the conditioning of an integrated surface water/groundwater model that will be used to predict the effects of groundwater withdrawals on surface-water and groundwater flows. Linear prediction uncertainty analysis is an effective approach for identifying (1) raw and processed data most effective for model conditioning prior to inversion, (2) specific observations and periods of time critically sensitive to specific predictions, and (3) additional observation data that would reduce model uncertainty relative to specific predictions. We present results for a two-dimensional groundwater model of a 2,186 km2 area of the Biscayne aquifer in south Florida implicitly coupled to a surface-water routing model of the actively managed canal system. The model domain includes 5 municipal well fields withdrawing more than 1 Mm3/day and 17 operable surface-water control structures that control freshwater releases from the Everglades and freshwater discharges to Biscayne Bay. More than 10 years of daily observation data from 35 groundwater wells and 24 surface water gages are available to condition model parameters. A dense parameterization was used to fully characterize the contribution of the inversion null space to predictive uncertainty and included bias-correction parameters. This approach allows better resolution of the boundary between the inversion null space and solution space. Bias-correction parameters (e.g., rainfall, potential evapotranspiration, and structure flow multipliers) absorb information that is present in structural noise that may otherwise contaminate the estimation of more physically-based model parameters. This allows greater precision in predictions that are entirely solution-space dependent, and reduces the propensity for bias in predictions that are not. 
Results show that application of this analysis is an effective means of identifying those surface-water and groundwater data, both raw and processed, that minimize predictive uncertainty, while simultaneously identifying the maximum solution-space dimensionality of the inverse problem supported by the data.
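The linear prediction uncertainty analysis described above can be sketched as a prior prediction variance minus the reduction obtained by conditioning on observations; a generic first-order second-moment sketch in the spirit of PEST's PREDUNC utility, with made-up matrices:

```python
import numpy as np

def predictive_variance(J, y, Cp, Ce):
    # Linear Bayesian (FOSM) predictive uncertainty: the prior
    # prediction variance y' Cp y minus the reduction obtained by
    # conditioning on observations with sensitivity J and noise Ce.
    JCp = J @ Cp
    S = JCp @ J.T + Ce                       # observation covariance
    reduction = (y @ Cp @ J.T) @ np.linalg.solve(S, JCp @ y)
    prior = y @ Cp @ y
    return prior, prior - reduction

rng = np.random.default_rng(0)
J = rng.standard_normal((5, 3))              # 5 obs, 3 parameters
y = np.array([1.0, -0.5, 2.0])               # prediction sensitivity
Cp = np.eye(3)                               # prior parameter covariance
Ce = 0.1 * np.eye(5)                         # observation noise covariance
prior, posterior = predictive_variance(J, y, Cp, Ce)
# conditioning can only reduce the linear predictive variance
```

Repeating the calculation with candidate observations added to J is the mechanism behind "data worth" rankings of the kind the study uses to identify observations that most reduce predictive uncertainty.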
Parameterizing deep convection using the assumed probability density function method
Storer, R. L.; Griffin, B. M.; Höft, J.; ...
2014-06-11
Due to their coarse horizontal resolution, present-day climate models must parameterize deep convection. This paper presents single-column simulations of deep convection using a probability density function (PDF) parameterization. The PDF parameterization predicts the PDF of subgrid variability of turbulence, clouds, and hydrometeors. That variability is interfaced to a prognostic microphysics scheme using a Monte Carlo sampling method. The PDF parameterization is used to simulate tropical deep convection, the transition from shallow to deep convection over land, and mid-latitude deep convection. These parameterized single-column simulations are compared with 3-D reference simulations. The agreement is satisfactory except when the convective forcing is weak. The same PDF parameterization is also used to simulate shallow cumulus and stratocumulus layers. The PDF method is sufficiently general to adequately simulate these five deep, shallow, and stratiform cloud cases with a single equation set. This raises hopes that it may be possible in the future, with further refinements at coarse time step and grid spacing, to parameterize all cloud types in a large-scale model in a unified way.
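The assumed-PDF idea (sample the subgrid PDF, pass each sample through the microphysics, average the tendencies) can be sketched with a toy two-Gaussian mixture and a toy threshold process; all distributions and constants are illustrative, not from the cited scheme:

```python
import numpy as np

rng = np.random.default_rng(42)

def subgrid_mc_autoconversion(n=10000):
    # subgrid total-water variability as a two-Gaussian mixture
    # (30% 'cloudy' component); numbers are illustrative only
    cloudy = rng.random(n) < 0.3
    qt = np.where(cloudy,
                  rng.normal(9.0, 0.8, n),   # g/kg, cloudy component
                  rng.normal(6.0, 0.5, n))   # g/kg, clear component
    qsat = 7.5
    ql = np.maximum(qt - qsat, 0.0)          # condensate per sample
    rate = 1e-3 * ql ** 2                    # toy nonlinear process
    return ql.mean(), rate.mean(), 1e-3 * ql.mean() ** 2

ql_bar, rate_mc, rate_at_mean = subgrid_mc_autoconversion()
# for a convex process rate, sampling the PDF gives a larger mean
# tendency than evaluating the rate at the grid-mean condensate
```

The comparison at the end is the essential point of interfacing subgrid PDFs to microphysics: nonlinear process rates evaluated at grid means systematically misestimate the grid-mean tendency.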
NASA Astrophysics Data System (ADS)
Eggers, G. L.; Lewis, K. W.; Simons, F. J.; Olhede, S.
2013-12-01
Venus does not possess a plate-tectonic system like that observed on Earth, and many surface features--such as tesserae and coronae--lack terrestrial equivalents. To understand Venus' tectonics is to understand its lithosphere, requiring a study of topography and gravity, and how they relate. Past studies of topography dealt with mapping and classification of visually observed features, and studies of gravity dealt with inverting the relation between topography and gravity anomalies to recover surface density and elastic thickness in either the space (correlation) or the spectral (admittance, coherence) domain. In the former case, geological features could be delineated but not classified quantitatively. In the latter case, rectangular or circular data windows were used, lacking geological definition. While the estimates of lithospheric strength on this basis were quantitative, they lacked robust error estimates. Here, we remapped the surface into 77 regions visually and qualitatively defined from a combination of Magellan topography, gravity, and radar images. We parameterize the spectral covariance of the observed topography, treating it as a Gaussian process assumed to be stationary over the mapped regions, using a three-parameter isotropic Matérn model, and perform maximum-likelihood-based inversions for the parameters. We discuss the parameter distribution across the Venusian surface and across terrain types such as coronae, dorsae, tesserae, and their relation with mean elevation and latitudinal position. We find that the three-parameter model, while mathematically established and applicable to Venus topography, is overparameterized, and thus reduce the results to a two-parameter description of the peak spectral variance and the range-to-half-peak variance (as a function of wavenumber). With this reduction, the clustering of geological region types in two-parameter space becomes promising. 
Finally, we perform inversions for the joint spectral variance of topography and gravity, in which the initial loading by topography retains the Matérn form but the final topography and gravity are the result of flexural compensation. In our modeling, we pay explicit attention to finite-field spectral estimation effects (and their remedy via tapering), and to the implementation of statistical tests (for anisotropy, for initial-loading process correlation, and to ascertain the proper density contrasts and interface depth in a two-layer model), robustness assessment and uncertainty quantification, as well as to algorithmic intricacies related to low-dimensional but poorly scaled maximum-likelihood inversions. We conclude that Venusian geomorphic terrains are well described by their 2-D topographic and gravity (cross-)power spectra, and that the spectral properties of distinct geologic provinces on Venus are worth quantifying via maximum-likelihood-based methods under idealized three-parameter Matérn models. Analysis of fitted parameters and the fitted-data residuals reveals natural variability in the (sub)surface properties of Venus, as well as some directional anisotropy. Geologic regions tend to cluster according to terrain type in our parameter space, which we analyze to confirm their shared geologic histories and utilize for guidance in ongoing mapping efforts of Venus and other terrestrial bodies.
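The Matérn covariance family used above has a convenient closed form at smoothness ν = 3/2; a dependency-free sketch fixing that smoothness, with generic parameter names rather than the authors' notation (the study fits the full three-parameter family, including smoothness):

```python
import numpy as np

def matern32_cov(r, sigma2=1.0, rho=1.0):
    # Matern covariance at smoothness nu = 3/2, where a closed form
    # exists: C(r) = sigma2 * (1 + sqrt(3) r / rho) * exp(-sqrt(3) r / rho).
    # sigma2 is the variance and rho the range parameter.
    r = np.asarray(r, dtype=float)
    s = np.sqrt(3.0) * r / rho
    return sigma2 * (1.0 + s) * np.exp(-s)

r = np.array([0.0, 0.5, 1.0, 2.0])
c = matern32_cov(r)
# variance sigma2 at zero lag, monotone decay with distance
```

In a maximum-likelihood setting this covariance (or its spectral counterpart) parameterizes the Gaussian-process likelihood of the observed topography, and the parameters are found by numerical optimization.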
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liou, Kuo-Nan
2016-02-09
Under the support of the aforementioned DOE Grant, we have made two fundamental contributions to atmospheric and climate sciences: (1) development of an efficient 3-D radiative transfer parameterization for application to intense and intricate inhomogeneous mountain/snow regions, and (2) innovation of a stochastic parameterization for light absorption by internally mixed black carbon and dust particles in snow grains, providing understanding and physical insight into snow albedo reduction in climate models. With reference to item (1), we divided solar fluxes reaching mountain surfaces into five components: direct and diffuse fluxes, direct- and diffuse-reflected fluxes, and coupled mountain-mountain flux. "Exact" 3-D Monte Carlo photon tracing computations can then be performed for these solar flux components to compare with those calculated from the conventional plane-parallel (PP) radiative transfer program readily available in climate models. Subsequently, parameterizations of the deviations of 3-D from PP results for the five flux components were carried out by means of multiple linear regression analysis associated with topographic information, including elevation, solar incident angle, sky view factor, and terrain configuration factor. We derived five regression equations with high statistical correlations for the flux deviations and successfully incorporated this efficient parameterization into the WRF model, which was used as the testbed in connection with the Fu-Liou-Gu PP radiation scheme that has been included in the WRF physics package. Incorporating this 3-D parameterization program, we conducted simulations with WRF and CCSM4 to understand and evaluate the mountain/snow effect on snow albedo reduction during seasonal transition and the interannual variability of snowmelt, cloud cover, and precipitation over the Western United States, as presented in the final report. 
With reference to item (2), we developed in our previous research a geometric-optics surface-wave approach (GOS) for the computation of light absorption and scattering by complex and inhomogeneous particles for application to aggregates and snow grains with external and internal mixing structures. We demonstrated that a small black carbon (BC) particle on the order of 1 μm internally mixed with snow grains could effectively reduce visible snow albedo by as much as 5-10%. Following this work and within the context of DOE support, we have made two key accomplishments presented in the attached final report.
NASA Technical Reports Server (NTRS)
Krizmanic, John F.
2013-01-01
We have been assessing the effects of background radiation in low-Earth orbit for the next generation of X-ray and cosmic-ray experiments, in particular for the International Space Station orbit. Outside the areas of high fluxes of trapped radiation, we have been using parameterizations developed by the Fermi team to quantify the high-energy induced background. For the low-energy background, we have been using the AE8 and AP8 SPENVIS models to determine the orbit fractions where the fluxes of trapped particles are too high to allow for useful operation of the experiment. One area we are investigating is how well the SPENVIS flux predictions at higher energies match the fluxes at the low-energy end of our parameterizations. I will summarize our methodology for background determination from the various sources of cosmogenic and terrestrial radiation and how these compare to SPENVIS predictions in overlapping energy ranges.
NASA Astrophysics Data System (ADS)
Schneider, Tapio; Lan, Shiwei; Stuart, Andrew; Teixeira, João
2017-12-01
Climate projections continue to be marred by large uncertainties, which originate in processes that need to be parameterized, such as clouds, convection, and ecosystems. But rapid progress is now within reach. New computational tools and methods from data assimilation and machine learning make it possible to integrate global observations and local high-resolution simulations in an Earth system model (ESM) that systematically learns from both and quantifies uncertainties. Here we propose a blueprint for such an ESM. We outline how parameterization schemes can learn from global observations and targeted high-resolution simulations, for example, of clouds and convection, through matching low-order statistics between ESMs, observations, and high-resolution simulations. We illustrate learning algorithms for ESMs with a simple dynamical system that shares characteristics of the climate system; and we discuss the opportunities the proposed framework presents and the challenges that remain to realize it.
Spectral bidirectional reflectance of Antarctic snow: Measurements and parameterization
NASA Astrophysics Data System (ADS)
Hudson, Stephen R.; Warren, Stephen G.; Brandt, Richard E.; Grenfell, Thomas C.; Six, Delphine
2006-09-01
The bidirectional reflectance distribution function (BRDF) of snow was measured from a 32-m tower at Dome C, at latitude 75°S on the East Antarctic Plateau. These measurements were made at 96 solar zenith angles between 51° and 87° and cover wavelengths 350-2400 nm, with 3- to 30-nm resolution, over the full range of viewing geometry. The BRDF at 900 nm had previously been measured at the South Pole; the Dome C measurement at that wavelength is similar. At both locations the natural roughness of the snow surface causes the anisotropy of the BRDF to be less than that of flat snow. The inherent BRDF of the snow is nearly constant in the high-albedo part of the spectrum (350-900 nm), but the angular distribution of reflected radiance becomes more isotropic at the shorter wavelengths because of atmospheric Rayleigh scattering. Parameterizations were developed for the anisotropic reflectance factor using a small number of empirical orthogonal functions. Because the reflectance is more anisotropic at wavelengths at which ice is more absorptive, albedo rather than wavelength is used as a predictor in the near infrared. The parameterizations cover nearly all viewing angles and are applicable to the high parts of the Antarctic Plateau that have small surface roughness and, at viewing zenith angles less than 55°, elsewhere on the plateau, where larger surface roughness affects the BRDF at larger viewing angles. The root-mean-squared error of the parameterized reflectances is between 2% and 4% at wavelengths less than 1400 nm and between 5% and 8% at longer wavelengths.
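The EOF-based parameterization of the anisotropic reflectance factor amounts to truncating an SVD of reflectance-by-geometry matrices; a sketch on synthetic stand-in data, not the Dome C measurements:

```python
import numpy as np

rng = np.random.default_rng(1)

# rows: spectral / solar-zenith cases; columns: viewing geometries.
# the matrix is a synthetic stand-in, not the measured BRDF data
cases, geoms = 40, 60
signal = np.outer(np.linspace(1.0, 2.0, cases),
                  np.sin(np.linspace(0.0, np.pi, geoms)))
R = signal + 0.01 * rng.standard_normal((cases, geoms))

# EOFs = right singular vectors of the mean-removed matrix
mean = R.mean(axis=0)
U, s, Vt = np.linalg.svd(R - mean, full_matrices=False)
k = 2                                     # small number of EOFs
R_hat = mean + (U[:, :k] * s[:k]) @ Vt[:k]
rmse = np.sqrt(np.mean((R - R_hat) ** 2))
```

Keeping only a few leading EOFs compresses the full angular dependence into a handful of expansion coefficients, which can then be regressed against a predictor such as albedo, as the study does in the near infrared.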
Parameterized and resolved Southern Ocean eddy compensation
NASA Astrophysics Data System (ADS)
Poulsen, Mads B.; Jochum, Markus; Nuterman, Roman
2018-04-01
The ability to parameterize Southern Ocean eddy effects in a forced coarse resolution ocean general circulation model is assessed. The transient model response to a suite of different Southern Ocean wind stress forcing perturbations is presented and compared to identical experiments performed with the same model in 0.1° eddy-resolving resolution. With forcing of present-day wind stress magnitude and a thickness diffusivity formulated in terms of the local stratification, it is shown that the Southern Ocean residual meridional overturning circulation in the two models is different in structure and magnitude. It is found that the difference in the upper overturning cell is primarily explained by an overly strong subsurface flow in the parameterized eddy-induced circulation while the difference in the lower cell is mainly ascribed to the mean-flow overturning. With a zonally constant decrease of the zonal wind stress by 50% we show that the absolute decrease in the overturning circulation is insensitive to model resolution, and that the meridional isopycnal slope is relaxed in both models. The agreement between the models is not reproduced by a 50% wind stress increase, where the high resolution overturning decreases by 20%, but increases by 100% in the coarse resolution model. It is demonstrated that this difference is explained by changes in surface buoyancy forcing due to a reduced Antarctic sea ice cover, which strongly modulate the overturning response and ocean stratification. We conclude that the parameterized eddies are able to mimic the transient response to altered wind stress in the high resolution model, but partly misrepresent the unperturbed Southern Ocean meridional overturning circulation and associated heat transports.
Numerical optimization of Ignition and Growth reactive flow modeling for PAX2A
NASA Astrophysics Data System (ADS)
Baker, E. L.; Schimel, B.; Grantham, W. J.
1996-05-01
Variable metric nonlinear optimization has been successfully applied to the parameterization of unreacted and reacted-products thermodynamic equations of state and to reactive flow modeling of the HMX-based high explosive PAX2A. The NLQPEB nonlinear optimization program has recently been coupled to the LLNL-developed two-dimensional high-rate continuum modeling programs DYNA2D and CALE. The resulting program has the ability to optimize initial modeling parameters. This new optimization capability was used to optimally parameterize the Ignition and Growth reactive flow model against experimental manganin gauge records. The optimization varied the Ignition and Growth reaction rate model parameters in order to minimize the difference between the calculated pressure histories and the experimental pressure histories.
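The optimization loop described (adjust rate-model parameters to minimize the misfit between calculated and measured pressure histories) can be sketched with a toy pressure model standing in for the Ignition and Growth rate law; the model form, parameter values, and brute-force scan below are all illustrative assumptions:

```python
import numpy as np

def model_pressure(t, a, b):
    # toy saturating pressure history; a stand-in for the Ignition
    # and Growth rate law, which this sketch does not implement
    return a * (1.0 - np.exp(-b * t))

# synthetic 'experimental' gauge record with known parameters
t = np.linspace(0.0, 5.0, 50)
p_exp = model_pressure(t, 3.0, 1.2) + 0.01 * np.sin(7.0 * t)

# brute-force least-squares scan over the two rate parameters,
# minimizing the calculated-vs-measured pressure-history misfit
A = np.linspace(1.0, 5.0, 81)
B = np.linspace(0.2, 3.0, 81)
sse = np.array([[np.sum((model_pressure(t, a, b) - p_exp) ** 2)
                 for b in B] for a in A])
ia, ib = np.unravel_index(np.argmin(sse), sse.shape)
a_fit, b_fit = A[ia], B[ib]
```

A variable-metric (quasi-Newton) optimizer such as the one in NLQPEB replaces this grid scan with gradient-based descent on the same misfit surface, which is essential once each model evaluation is an expensive hydrocode run.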
Improved Overpressure Recording and Modeling for Near-Surface Explosion Forensics
NASA Astrophysics Data System (ADS)
Kim, K.; Schnurr, J.; Garces, M. A.; Rodgers, A. J.
2017-12-01
The accurate recording and analysis of air-blast acoustic waveforms is a key component of the forensic analysis of explosive events. Smartphone apps can enhance traditional technologies by providing scalable, cost-effective, ubiquitous sensor solutions for monitoring blasts, undeclared activities, and inaccessible facilities. During a series of near-surface chemical high explosive tests, iPhone 6 devices running the RedVox infrasound recorder app were co-located with high-fidelity Hyperion overpressure sensors, allowing for direct comparison of the resolution and frequency content of the devices. Data from the traditional sensors are used to characterize blast signatures and to determine relative iPhone microphone amplitude and phase responses. A Wiener-filter-based source deconvolution method is applied, using a parameterized source function estimated from traditional overpressure sensor data, to estimate system responses. In addition, progress on a new parameterized air-blast model is presented. The model is based on the analysis of a large set of overpressure waveforms from several surface explosion test series. An appropriate functional form, with parameters determined empirically from modern air-blast and acoustic data, will allow for better parameterization of signals and improved characterization of explosive sources.
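Wiener-filter deconvolution of the kind mentioned above can be sketched in the frequency domain; the constant-SNR regularization and toy impulse response here are illustrative assumptions, not the paper's filter:

```python
import numpy as np

def wiener_deconvolve(recorded, impulse_response, snr=100.0):
    # frequency-domain Wiener deconvolution with a constant
    # signal-to-noise ratio regularizing the inverse filter
    n = len(recorded)
    H = np.fft.rfft(impulse_response, n)
    Y = np.fft.rfft(recorded, n)
    W = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)
    return np.fft.irfft(W * Y, n)

# toy test: blur an impulse-like source with a decaying system
# response, then recover the source shape
src = np.zeros(128)
src[10], src[11] = 1.0, 0.5
h = np.exp(-np.arange(128) / 4.0)
h /= h.sum()
rec = np.convolve(src, h)[:128]
est = wiener_deconvolve(rec, h)
```

With a parameterized source function known (rather than the system response), the same algebra can be turned around to estimate the unknown instrument response from a recording, which is the direction the study applies it in.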
Preserving Lagrangian Structure in Nonlinear Model Reduction with Application to Structural Dynamics
Carlberg, Kevin; Tuminaro, Ray; Boggs, Paul
2015-03-11
Our work proposes a model-reduction methodology that preserves Lagrangian structure and achieves computational efficiency in the presence of high-order nonlinearities and arbitrary parameter dependence. As such, the resulting reduced-order model retains key properties such as energy conservation and symplectic time-evolution maps. We focus on parameterized simple mechanical systems subjected to Rayleigh damping and external forces, and consider an application to nonlinear structural dynamics. To preserve structure, the method first approximates the system's "Lagrangian ingredients"---the Riemannian metric, the potential-energy function, the dissipation function, and the external force---and subsequently derives reduced-order equations of motion by applying the (forced) Euler--Lagrange equation with these quantities. Moreover, from the algebraic perspective, key contributions include two efficient techniques for approximating parameterized reduced matrices while preserving symmetry and positive definiteness: matrix gappy proper orthogonal decomposition and reduced-basis sparsification. Our results for a parameterized truss-structure problem demonstrate the practical importance of preserving Lagrangian structure and illustrate the proposed method's merits: it reduces computation time while maintaining high accuracy and stability, in contrast to existing nonlinear model-reduction techniques that do not preserve structure.
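The symmetry- and positive-definiteness-preserving approximation of parameterized reduced matrices rests on the fact that a Galerkin congruence transform preserves both properties; a minimal sketch with a synthetic parameterized stiffness matrix, not the paper's gappy-POD or sparsification machinery:

```python
import numpy as np

rng = np.random.default_rng(3)

def reduced_spd(K, V):
    # Galerkin congruence transform V.T @ K @ V: preserves symmetry
    # and positive definiteness of K for any full-column-rank V --
    # the property further approximations must also retain
    Kr = V.T @ K @ V
    return 0.5 * (Kr + Kr.T)   # symmetrize against round-off

# synthetic parameterized SPD 'stiffness': K(mu) = K0 + mu * K1
A0 = rng.standard_normal((20, 20))
K0 = A0 @ A0.T + 20.0 * np.eye(20)
A1 = rng.standard_normal((20, 20))
K1 = A1 @ A1.T
V, _ = np.linalg.qr(rng.standard_normal((20, 4)))  # orthonormal basis

Kr = reduced_spd(K0 + 0.7 * K1, V)
eigs = np.linalg.eigvalsh(Kr)
```

Losing positive definiteness in the reduced operators is exactly what destabilizes non-structure-preserving reductions, which is why the paper's approximation techniques are constructed to keep it.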
NASA Astrophysics Data System (ADS)
Sofiev, Mikhail; Soares, Joana; Kouznetsov, Rostislav; Vira, Julius; Prank, Marje
2016-04-01
Top-down emission estimation via inverse dispersion modelling is used for various problems where bottom-up approaches are difficult or highly uncertain. One such area is the estimation of emission from wild-land fires. In combination with dispersion modelling, satellite and/or in-situ observations can, in principle, be used to efficiently constrain the emission values. This is the main strength of the approach: the a priori values of the emission factors (based on laboratory studies) are refined for real-life situations using the inverse-modelling technique. However, the approach also has major uncertainties, which are illustrated here with a few examples from the Integrated System for wild-land Fires (IS4FIRES). IS4FIRES generates the smoke emission and injection profile from MODIS and SEVIRI active-fire radiative energy observations. The emission calculation includes two steps: (i) initial top-down calibration of emission factors via inverse dispersion problem solution, made once using a training dataset from the past, and (ii) application of the obtained emission coefficients to individual-fire radiative energy observations, thus leading to a bottom-up emission compilation. For such a procedure, the major classes of uncertainties include: (i) imperfect information on fires, (ii) simplifications in the fire description, (iii) inaccuracies in the smoke observations and modelling, and (iv) inaccuracies of the inverse problem solution. Using examples from the fire seasons of 2010 in Russia, 2012 in Eurasia, and 2007 in Australia, among others, it is pointed out that top-down system calibration performed for a limited number of comparatively moderate cases (often the best-observed ones) may lead to errors in application to extreme events. For instance, the total emission of the 2010 Russian fires is likely to be over-estimated by up to 50% if the calibration is based on the season 2006 and the fire description is simplified. 
A longer calibration period and a more sophisticated parameterization (including the smoke injection model and distinguishing all relevant vegetation types) can improve the predictions. The other significant parameter, so far weakly addressed in fire emission inventories, is the size spectrum of the emitted aerosols. Direct size-resolving measurements showed, for instance, that smoke from smouldering fires has smaller particles as compared with smoke from flaming fires. Owing to the dependence of the smoke optical thickness on the size distribution, such variability can lead to significant changes in the top-down calibration step. Experiments with the IS4FIRES-SILAM system showed up to a factor-of-two difference in AOD, depending on the assumed particle spectrum.
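The top-down calibration step can be sketched as a Tikhonov-regularized linear inversion that adjusts laboratory-prior emission factors to match observations; a generic sketch with synthetic data and hypothetical vegetation classes, not the IS4FIRES formulation:

```python
import numpy as np

rng = np.random.default_rng(7)

def calibrate_emission_factors(G, obs, prior_ef, alpha=1.0):
    # Tikhonov-regularized least squares pulled toward laboratory
    # priors: minimize ||G ef - obs||^2 + alpha ||ef - prior_ef||^2
    n = G.shape[1]
    lhs = G.T @ G + alpha * np.eye(n)
    rhs = G.T @ obs + alpha * prior_ef
    return np.linalg.solve(lhs, rhs)

# columns of G: modelled concentration response per unit emission
# factor for three hypothetical vegetation classes
G = rng.random((200, 3))
true_ef = np.array([0.8, 1.5, 0.4])
obs = G @ true_ef + 0.01 * rng.standard_normal(200)
ef = calibrate_emission_factors(G, obs, prior_ef=np.ones(3), alpha=0.1)
```

The calibration-transfer risk discussed above corresponds to building G and obs from moderate seasons only: the fitted factors then carry that regime's biases into extreme-event applications.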
Global model comparison of heterogeneous ice nucleation parameterizations in mixed phase clouds
NASA Astrophysics Data System (ADS)
Yun, Yuxing; Penner, Joyce E.
2012-04-01
A new aerosol-dependent mixed phase cloud parameterization for deposition/condensation/immersion (DCI) ice nucleation and one for contact freezing are compared to the original formulations in a coupled general circulation model and aerosol transport model. The present-day cloud liquid and ice water fields and cloud radiative forcing are analyzed and compared to observations. The new DCI freezing parameterization changes the spatial distribution of the cloud water field. Significant changes are found in the cloud ice water fraction and in the middle cloud fractions. The new DCI freezing parameterization predicts less ice water path (IWP) than the original formulation, especially in the Southern Hemisphere. The smaller IWP leads to a less efficient Bergeron-Findeisen process, resulting in a larger liquid water path, shortwave cloud forcing, and longwave cloud forcing. It is found that contact freezing parameterizations have a greater impact on the cloud water field and radiative forcing than the two DCI freezing parameterizations that we compared. The net solar flux and net longwave flux at the top of the atmosphere change by up to 8.73 and 3.52 W m-2, respectively, due to the use of different DCI and contact freezing parameterizations in mixed phase clouds. The total climate forcing from anthropogenic black carbon/organic matter in mixed phase clouds is estimated to be 0.16-0.93 W m-2 using the aerosol-dependent parameterizations. A sensitivity test with the contact ice nuclei concentration in the original parameterization fit to that recommended by Young (1974) gives results that are closer to the new contact freezing parameterization.
Modeling global yield growth of major crops under multiple socioeconomic pathways
NASA Astrophysics Data System (ADS)
Iizumi, T.; Kim, W.; Zhihong, S.; Nishimori, M.
2016-12-01
Global gridded crop models (GGCMs) are a key tool in deriving global food security scenarios under climate change. However, it is difficult for GGCMs to reproduce the reported yield growth patterns—rapid growth, yield stagnation and yield collapse. Here, we propose a set of parameterizations for GGCMs to capture the contributions to yield from technological improvements at the national and multi-decadal scales. These include country annual per capita gross domestic product (GDP)-based parameterizations for the nitrogen application rate and crop tolerance to stresses associated with high temperature, low temperature, water deficit and water excess. Using a GGCM combined with the parameterizations, we present global 140-year (1961-2100) yield growth simulations for maize, soybean, rice and wheat under multiple shared socioeconomic pathways (SSPs) and no climate change. The model reproduces the major characteristics of reported global and country yield growth patterns over the 1961-2013 period. Under the most rapid developmental pathway SSP5, the simulated global yields for 2091-2100, relative to 2001-2010, are the highest (1.21-1.82 times as high, with variations across the crops), followed by SSP1 (1.14-1.56 times as high), SSP2 (1.12-1.49 times as high), SSP4 (1.08-1.38 times as high) and SSP3 (1.08-1.36 times as high). Future country yield growth varies substantially by income level as well as by crop and by SSP. These yield pathways offer a new baseline for addressing the interdisciplinary questions related to global agricultural development, food security and climate change.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Zhaoqing; Taraphdar, Sourav; Wang, Taiping
This paper presents a modeling study conducted to evaluate the uncertainty of a regional model in simulating hurricane wind and pressure fields, and the feasibility of driving coastal storm surge simulation using an ensemble of regional model outputs produced by 18 combinations of three convection schemes and six microphysics parameterizations, using Hurricane Katrina as a test case. Simulated wind and pressure fields were compared to observed H*Wind data for Hurricane Katrina, and simulated storm surge was compared to observed high-water marks on the northern coast of the Gulf of Mexico. The ensemble modeling analysis demonstrated that the regional model was able to reproduce the characteristics of Hurricane Katrina with reasonable accuracy and can be used to drive the coastal ocean model for simulating coastal storm surge. Results indicated that the regional model is sensitive to both convection and microphysics parameterizations, which simulate the moist processes closely linked to the tropical cyclone dynamics that influence hurricane development and intensification. Among the three convection and six microphysics parameterizations, the Zhang and McFarlane (ZM) convection scheme and the Lim and Hong (WDM6) microphysics parameterization are the most skillful in simulating Hurricane Katrina's maximum wind speed and central pressure. Error statistics of simulated maximum water levels were calculated for a baseline simulation with H*Wind forcing and the 18 ensemble simulations driven by the regional model outputs. The storm surge model produced the overall best results in simulating the maximum water levels using wind and pressure fields generated with the ZM convection scheme and the WDM6 microphysics parameterization.
NASA Astrophysics Data System (ADS)
Madhulatha, A.; Rajeevan, M.
2018-02-01
The main objective of the present paper is to examine the role of various parameterization schemes in simulating the evolution of a mesoscale convective system (MCS) that occurred over south-east India. Using the Weather Research and Forecasting (WRF) model, numerical experiments are conducted considering various planetary boundary layer, microphysics, and cumulus parameterization schemes. The performance of the different schemes is evaluated by examining the boundary layer, reflectivity, and precipitation features of the MCS using ground-based and satellite observations. Among the various physical parameterization schemes, the Mellor-Yamada-Janjic (MYJ) boundary layer scheme is able to produce a deep boundary layer by simulating the warm temperatures necessary for storm initiation; the Thompson (THM) microphysics scheme is able to simulate the reflectivity through a reasonable distribution of the different hydrometeors during the various stages of the system; and the Betts-Miller-Janjic (BMJ) cumulus scheme is able to capture the precipitation through proper representation of the convective instability associated with the MCS. The present analysis suggests that MYJ, a local turbulent kinetic energy boundary layer scheme that accounts for strong vertical mixing; THM, a six-class hybrid moment microphysics scheme that considers number concentration along with the mixing ratio of rain hydrometeors; and BMJ, a closure cumulus scheme that adjusts thermodynamic profiles based on climatological profiles, might have contributed to the better performance of the respective model simulations. A numerical simulation carried out using the above combination of schemes is able to capture storm initiation, propagation, surface variations, thermodynamic structure, and precipitation features reasonably well. This study clearly demonstrates that the simulation of MCS characteristics is highly sensitive to the choice of parameterization schemes.
NASA Astrophysics Data System (ADS)
Cariolle, D.; Teyssèdre, H.
2007-01-01
This article describes the validation of a linear parameterization of the ozone photochemistry for use in upper tropospheric and stratospheric studies. The present work extends a previously developed scheme by improving the 2D model used to derive the coefficients of the parameterization. The chemical reaction rates are updated from a compilation that includes recent laboratory work. Furthermore, the polar ozone destruction due to heterogeneous reactions at the surface of polar stratospheric clouds is taken into account as a function of the stratospheric temperature and the total chlorine content. Two versions of the parameterization are tested. The first only requires the resolution of a continuity equation for the time evolution of the ozone mixing ratio; the second uses one additional equation for a cold tracer. The parameterization has been introduced into the chemical transport model MOCAGE. The model is integrated with wind and temperature fields from the ECMWF operational analyses over the period 2000-2004. Overall, the results show very good agreement between the modelled ozone distribution and the Total Ozone Mapping Spectrometer (TOMS) satellite data and the "in-situ" vertical soundings. During the course of the integration the model does not show any drift and the biases are generally small. The model also reproduces the polar ozone variability fairly well, notably the formation of "ozone holes" in the southern hemisphere with amplitudes and seasonal evolutions that follow the dynamics and time evolution of the polar vortex. The introduction of the cold tracer further improves the model simulation by allowing additional ozone destruction inside air masses exported from the high to the mid-latitudes, and by maintaining low ozone contents inside the polar vortex of the southern hemisphere over longer periods in springtime.
It is concluded that for the study of climatic scenarios or the assimilation of ozone data, the present parameterization gives an interesting alternative to the introduction of detailed and computationally costly chemical schemes into general circulation models.
NASA Astrophysics Data System (ADS)
Endalamaw, A. M.; Bolton, W. R.; Young, J. M.; Morton, D.; Hinzman, L. D.
2013-12-01
The sub-arctic environment can be characterized as being located in the zone of discontinuous permafrost. Although the distribution of permafrost is site specific, it dominates many of the hydrologic and ecologic responses and functions, including vegetation distribution, stream flow, soil moisture, and storage processes. In this region, the boundaries that separate the major ecosystem types (deciduous-dominated and coniferous-dominated ecosystems) as well as permafrost (permafrost versus non-permafrost) occur over very short spatial scales. One of the goals of this research project is to improve parameterizations of meso-scale hydrologic models in this environment. Using the Caribou-Poker Creeks Research Watershed (CPCRW) as the test area, simulations of headwater catchments with varying permafrost and vegetation distributions were performed. CPCRW, located approximately 50 km northeast of Fairbanks, Alaska, lies within the zone of discontinuous permafrost and the boreal forest ecosystem. The Variable Infiltration Capacity (VIC) model was selected as the hydrologic model. In CPCRW, permafrost and coniferous vegetation are generally found on north-facing slopes and valley bottoms, while permafrost-free soils and deciduous vegetation are generally found on south-facing slopes. In this study, hydrologic simulations using fine-scale vegetation and soil parameterizations - based upon slope and aspect analysis at a 50 meter resolution - were conducted. Simulations were also conducted using downscaled vegetation data from the Scenarios Network for Alaska and Arctic Planning (SNAP) (1 km resolution) and soil data sets from the Food and Agriculture Organization (FAO) (approximately 9 km resolution).
Preliminary simulation results show that soil and vegetation parameterizations based upon fine-scale slope/aspect analysis increase the R2 values (0.5 to 0.65 in the high-permafrost (53%) basin; 0.43 to 0.56 in the low-permafrost (2%) basin) relative to parameterizations based on coarse-scale data. These results suggest that fine-resolution parameterizations can improve meso-scale hydrological modeling in this region.
Evaluation of Warm-Rain Microphysical Parameterizations in Cloudy Boundary Layer Transitions
NASA Astrophysics Data System (ADS)
Nelson, K.; Mechem, D. B.
2014-12-01
Common warm-rain microphysical parameterizations used for marine boundary layer (MBL) clouds are either tuned for specific cloud types (e.g., the Khairoutdinov and Kogan 2000 parameterization, "KK2000") or are altogether ill-posed (Kessler 1969). An ideal microphysical parameterization should be "unified" in the sense of being suitable across MBL cloud regimes that include stratocumulus, cumulus rising into stratocumulus, and shallow trade cumulus. The recent parameterization of Kogan (2013, "K2013") was formulated for shallow cumulus but has been shown in a large-eddy simulation environment to work quite well for stratocumulus as well. We report on our efforts to implement and test this parameterization in a regional forecast model (NRL COAMPS). Results from K2013 and KK2000 are compared with the operational Kessler parameterization for a 5-day period of the VOCALS-REx field campaign, which took place over the southeast Pacific. We focus both on the relative performance of the three parameterizations and on how they compare to the VOCALS-REx observations from the NOAA R/V Ronald H. Brown, in particular estimates of boundary-layer depth, liquid water path (LWP), cloud base, and area-mean precipitation rate obtained from C-band radar.
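The KK2000 scheme discussed above represents warm-rain process rates as power laws in cloud water and droplet number. As a rough illustration of the kind of expression involved, here is a sketch of its widely cited autoconversion rate; the coefficients follow the published Khairoutdinov and Kogan (2000) fit, but this is only a sketch, not the COAMPS implementation:

```python
def kk2000_autoconversion(qc, nc):
    """Autoconversion rate of cloud water to rain (kg/kg/s).

    qc : cloud liquid water mixing ratio (kg/kg)
    nc : cloud droplet number concentration (cm^-3)
    Power-law fit from Khairoutdinov and Kogan (2000).
    """
    return 1350.0 * qc**2.47 * nc**(-1.79)

# Example: typical stratocumulus values (illustrative)
rate = kk2000_autoconversion(qc=5e-4, nc=100.0)
```

The inverse dependence on droplet number is what makes such schemes aerosol-aware: more numerous, smaller droplets suppress rain formation.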
Thayer-Calder, K.; Gettelman, A.; Craig, C.; ...
2015-06-30
Most global climate models parameterize separate cloud types using separate parameterizations. This approach has several disadvantages, including obscure interactions between parameterizations and inaccurate triggering of cumulus parameterizations. Alternatively, a unified cloud parameterization uses one equation set to represent all cloud types. Such cloud types include stratiform liquid and ice cloud, shallow convective cloud, and deep convective cloud. Vital to the success of a unified parameterization is a general interface between clouds and microphysics. One such interface involves drawing Monte Carlo samples of subgrid variability of temperature, water vapor, cloud liquid, and cloud ice, and feeding the sample points into a microphysics scheme. This study evaluates a unified cloud parameterization and a Monte Carlo microphysics interface that has been implemented in the Community Atmosphere Model (CAM) version 5.3. Results describing the mean climate and tropical variability from global simulations are presented. The new model shows a degradation in precipitation skill but improvements in short-wave cloud forcing, liquid water path, long-wave cloud forcing, precipitable water, and tropical wave simulation. Also presented are estimations of computational expense and an investigation of sensitivity to the number of subcolumns.
NASA Astrophysics Data System (ADS)
Li, Jiming; Lv, Qiaoyi; Jian, Bida; Zhang, Min; Zhao, Chuanfeng; Fu, Qiang; Kawamoto, Kazuaki; Zhang, Hua
2018-05-01
Studies have shown that changes in cloud cover are responsible for the rapid climate warming over the Tibetan Plateau (TP) in the past 3 decades. To simulate the total cloud cover, atmospheric models have to reasonably represent the characteristics of vertical overlap between cloud layers. Until now, however, this subject has received little attention due to the limited availability of observations, especially over the TP. The main aim of this study is therefore to examine the properties of cloud overlap over the TP region and to build an empirical relationship between cloud overlap properties and large-scale atmospheric dynamics, using 4 years (2007-2010) of data from the CloudSat cloud product and collocated ERA-Interim reanalysis data. To do this, the cloud overlap parameter α, which is an inverse exponential function of the cloud layer separation D and the decorrelation length scale L, is calculated from CloudSat data and discussed. The parameters α and L are both widely used to characterize the transition from the maximum to the random overlap assumption with increasing layer separation. For those non-adjacent layers without clear sky between them (that is, contiguous cloud layers), it is found that the overlap parameter α is sensitive to the unique thermodynamic and dynamic environment over the TP, i.e., the unstable atmospheric stratification and corresponding weak wind shear, which leads to maximum overlap (that is, greater α values). This finding agrees well with previous studies. Finally, we parameterize the decorrelation length scale L as a function of wind shear and atmospheric stability based on a multiple linear regression. Compared with previous parameterizations, this new scheme improves the simulation of total cloud cover over the TP when the separations between cloud layers are greater than 1 km.
This study thus suggests that the effects of both wind shear and atmospheric stability on cloud overlap should be taken into account in the parameterization of decorrelation length scale L in order to further improve the calculation of the radiative budget and the prediction of climate change over the TP in the atmospheric models.
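The overlap parameter described above is conventionally written as an inverse exponential of the layer separation, α = exp(-D/L), and used to blend the maximum and random overlap limits for total cloud cover. A minimal sketch of this standard formulation (illustrative only; the study's regression for L in terms of wind shear and stability is not reproduced here):

```python
import math

def overlap_alpha(separation_km, decorr_length_km):
    """Overlap parameter: alpha = exp(-D/L).
    alpha -> 1 gives maximum overlap, alpha -> 0 gives random overlap."""
    return math.exp(-separation_km / decorr_length_km)

def combined_cover(c1, c2, alpha):
    """Combined cover of two layers, blending the two overlap limits."""
    c_max = max(c1, c2)              # maximum overlap limit
    c_rand = c1 + c2 - c1 * c2       # random overlap limit
    return alpha * c_max + (1.0 - alpha) * c_rand

# Two half-covered layers 2 km apart with L = 2 km
cover = combined_cover(0.5, 0.5, overlap_alpha(2.0, 2.0))
```

A larger L (e.g., under the weak wind shear reported over the TP) keeps α closer to 1 at a given separation, pushing the combined cover toward the maximum overlap limit.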
Integrating wildfire plume rises within atmospheric transport models
NASA Astrophysics Data System (ADS)
Mallia, D. V.; Kochanski, A.; Wu, D.; Urbanski, S. P.; Krueger, S. K.; Lin, J. C.
2016-12-01
Wildfires can generate significant pyro-convection that is responsible for releasing pollutants, greenhouse gases, and trace species into the free troposphere, which are then transported a significant distance downwind from the fire. Oftentimes, atmospheric transport and chemistry models have a difficult time resolving the transport of smoke from these wildfires, primarily due to deficiencies in estimating the plume injection height, which has been highlighted in previous work as the most important aspect of simulating wildfire plume transport. As a result of the uncertainties associated with modeled wildfire plume rise, researchers face difficulties modeling the impacts of wildfire smoke on air quality and constraining fire emissions using inverse modeling techniques. Currently, several plume rise parameterizations exist that are able to determine the injection height of fire emissions; however, the success of these parameterizations has been mixed. With the advent of WRF-SFIRE, the wildfire plume rise and injection height can now be explicitly calculated using a fire spread model (SFIRE) that is dynamically linked with the atmosphere simulated by WRF. However, this model has only been tested on a limited basis due to computational costs. Here, we will test the performance of WRF-SFIRE in addition to several commonly adopted plume parameterizations (Freitas, Sofiev, and Briggs) for the 2013 Patch Springs (Utah) and 2012 Baker Canyon (Washington) fires, for both of which observations of plume rise heights are available. These plume rise techniques will then be incorporated within a Lagrangian atmospheric transport model (STILT) in order to simulate CO and CO2 concentrations during NASA's CARVE Earth Science Airborne Program over Alaska during the summer of 2012. Initial model results showed that STILT model simulations were unable to reproduce enhanced CO concentrations produced by Alaskan fires observed during 2012. 
Near-surface concentrations were drastically overestimated while free tropospheric concentrations of CO were underestimated, likely a result of STILT injecting the fire emissions strictly into the PBL. We show in this study to what degree coupling the STILT model with an external plume rise model can help mitigate these problems.
NASA Astrophysics Data System (ADS)
Neggers, R.
2017-12-01
Recent advances in supercomputing have introduced a "grey zone" in the representation of cumulus convection in general circulation models, in which this process is partially resolved. Cumulus parameterizations need to be made scale-aware and scale-adaptive to deal with this situation both conceptually and practically. A potential way forward is schemes formulated in terms of discretized Cloud Size Densities, or CSDs. Advantages include i) the introduction of scale-awareness at the foundation of the scheme, and ii) the possibility of size-filtering parameterized convective transport and clouds. The CSD is a new variable that requires closure; this concerns its shape and its range, but also variability in cloud number that can appear due to i) subsampling effects and ii) organization in a cloud field. The goal of this study is to gain insight by means of sub-domain analyses of various large-domain LES realizations of cumulus cloud populations. For a series of three-dimensional snapshots, each with a different degree of organization, the cloud size distribution is calculated in all subdomains, for a range of subdomain sizes. The standard deviation of the number of clouds of a certain size is found to decrease with subdomain size, following a power-law scaling corresponding to an inverse-linear dependence. Cloud number variability also increases with cloud size; this reflects that subsampling affects the largest clouds first, due to their typically larger neighbor spacing. Rewriting this dependence in terms of two dimensionless groups, by dividing by cloud number and cloud size respectively, yields a data collapse. Organization in the cloud field is found to act on top of this primary dependence, by enhancing the cloud number variability at the smaller sizes.
This behavior reflects that small clouds start to "live" on top of larger structures such as cold pools, which favor or inhibit their formation (as illustrated by the attached figure of the cloud mask). Power-law scaling is still evident, but with a reduced exponent, suggesting that this behavior could be parameterized.
Pyrolysis of reinforced polymer composites: Parameterizing a model for multiple compositions
NASA Astrophysics Data System (ADS)
Martin, Geraldine E.
A single set of material properties was developed to describe the pyrolysis of fiberglass-reinforced polyester composites at multiple composition ratios. Milligram-scale testing was performed on the unsaturated polyester (UP) resin using thermogravimetric analysis (TGA) coupled with differential scanning calorimetry (DSC) to establish and characterize an effective semi-global reaction mechanism of three consecutive first-order reactions. Radiation-driven gasification experiments were conducted on UP resin and the fiberglass composites at compositions ranging from 41 to 54 wt% resin at external heat fluxes from 30 to 70 kW m-2. The back surface temperature was recorded with an infrared camera and used as the target for inverse analysis to determine the thermal conductivity of the systematically isolated constituent species. Manual iterations were performed in a comprehensive pyrolysis model, ThermaKin. The complete set of properties was validated for its ability to reproduce the mass loss rate during gasification testing.
Advancing X-ray scattering metrology using inverse genetic algorithms.
Hannon, Adam F; Sunday, Daniel F; Windover, Donald; Kline, R Joseph
2016-01-01
We compare the speed and effectiveness of two genetic optimization algorithms to the results of statistical sampling via a Markov chain Monte Carlo algorithm to find which is the most robust method for determining real space structure in periodic gratings measured using critical dimension small angle X-ray scattering. Both a covariance matrix adaptation evolutionary strategy and differential evolution algorithm are implemented and compared using various objective functions. The algorithms and objective functions are used to minimize differences between diffraction simulations and measured diffraction data. These simulations are parameterized with an electron density model known to roughly correspond to the real space structure of our nanogratings. The study shows that for X-ray scattering data, the covariance matrix adaptation coupled with a mean-absolute error log objective function is the most efficient combination of algorithm and goodness of fit criterion for finding structures with little foreknowledge about the underlying fine scale structure features of the nanograting.
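As an illustration of the differential evolution strategy and the mean-absolute-error log objective compared above, the following is a minimal, self-contained sketch; the `model` function here is a hypothetical stand-in for the diffraction simulator, not the actual scattering code, and the control parameters (population size, F, CR) are generic defaults:

```python
import math
import random

def mae_log(sim, obs):
    """Mean absolute error of log intensities (goodness-of-fit)."""
    return sum(abs(math.log(s) - math.log(o)) for s, o in zip(sim, obs)) / len(obs)

def model(params, xs):
    """Hypothetical stand-in for a simulated intensity curve."""
    a, b = params
    return [a * math.exp(-b * x) for x in xs]

def differential_evolution(obj, bounds, pop_size=20, gens=100, f=0.8, cr=0.9, seed=1):
    """Classic DE/rand/1/bin with box constraints."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    costs = [obj(ind) for ind in pop]
    for _ in range(gens):
        for i in range(pop_size):
            r1, r2, r3 = rng.sample([j for j in range(pop_size) if j != i], 3)
            trial = []
            for d in range(dim):
                if rng.random() < cr:  # binomial crossover with mutant vector
                    v = pop[r1][d] + f * (pop[r2][d] - pop[r3][d])
                else:
                    v = pop[i][d]
                lo, hi = bounds[d]
                trial.append(min(max(v, lo), hi))  # clip to bounds
            c = obj(trial)
            if c < costs[i]:  # greedy selection
                pop[i], costs[i] = trial, c
    best = min(range(pop_size), key=costs.__getitem__)
    return pop[best], costs[best]

# Recover the parameters of synthetic "measured" data
xs = [0.1 * i for i in range(1, 21)]
obs = model([2.0, 0.5], xs)
best_params, best_cost = differential_evolution(
    lambda p: mae_log(model(p, xs), obs), [(0.1, 5.0), (0.01, 2.0)])
```

The log in the objective weights the low-intensity tails of the data comparably to the bright peaks, which is one reason such a criterion can be attractive for scattering intensities spanning orders of magnitude.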
Quantitative three-dimensional ice roughness from scanning electron microscopy
NASA Astrophysics Data System (ADS)
Butterfield, Nicholas; Rowe, Penny M.; Stewart, Emily; Roesel, David; Neshyba, Steven
2017-03-01
We present a method for inferring surface morphology of ice from scanning electron microscope images. We first develop a novel functional form for the backscattered electron intensity as a function of ice facet orientation; this form is parameterized using smooth ice facets of known orientation. Three-dimensional representations of rough surfaces are retrieved at approximately micrometer resolution using Gauss-Newton inversion within a Bayesian framework. Statistical analysis of the resulting data sets permits characterization of ice surface roughness with a much higher statistical confidence than previously possible. A survey of results in the range -39°C to -29°C shows that characteristics of the roughness (e.g., Weibull parameters) are sensitive not only to the degree of roughening but also to the symmetry of the roughening. These results suggest that roughening characteristics obtained by remote sensing and in situ measurements of atmospheric ice clouds can potentially provide more facet-specific information than has previously been appreciated.
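The Gauss-Newton inversion mentioned above iteratively linearizes a forward model and solves the resulting normal equations. A minimal sketch for a two-parameter fit with a numerical Jacobian; the `facet_intensity` form is a hypothetical stand-in for the backscattered-intensity model, not the parameterization developed in the paper:

```python
import math

def gauss_newton_2param(model, xs, obs, p0, iters=30):
    """Gauss-Newton least squares for a two-parameter model.
    Forward-difference Jacobian; the 2x2 normal equations
    (J^T J) dp = -J^T r are solved by Cramer's rule."""
    a, b = p0
    h = 1e-6
    for _ in range(iters):
        r = [model(x, a, b) - y for x, y in zip(xs, obs)]          # residuals
        Ja = [(model(x, a + h, b) - model(x, a, b)) / h for x in xs]
        Jb = [(model(x, a, b + h) - model(x, a, b)) / h for x in xs]
        aa = sum(j * j for j in Ja)
        ab = sum(p * q for p, q in zip(Ja, Jb))
        bb = sum(j * j for j in Jb)
        ga = -sum(j * ri for j, ri in zip(Ja, r))
        gb = -sum(j * ri for j, ri in zip(Jb, r))
        det = aa * bb - ab * ab
        if abs(det) < 1e-30:
            break
        a += (ga * bb - ab * gb) / det
        b += (aa * gb - ab * ga) / det
    return a, b

def facet_intensity(theta, a, b):
    """Hypothetical backscatter model: amplitude * cos(theta)**b."""
    return a * math.cos(theta) ** b

# Recover parameters from synthetic, noise-free "image" data
xs = [0.1 * i for i in range(1, 13)]
obs = [facet_intensity(x, 2.0, 1.5) for x in xs]
a_fit, b_fit = gauss_newton_2param(facet_intensity, xs, obs, (1.0, 1.0))
```

In a Bayesian setting, as in the paper, a prior term would be added to the normal equations; the sketch above shows only the unregularized Gauss-Newton core.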
Computer simulations of austenite decomposition of microalloyed 700 MPa steel during cooling
NASA Astrophysics Data System (ADS)
Pohjonen, Aarne; Paananen, Joni; Mourujärvi, Juho; Manninen, Timo; Larkiola, Jari; Porter, David
2018-05-01
We present computer simulations of austenite decomposition to ferrite and bainite during cooling. The phase transformation model is based on Johnson-Mehl-Avrami-Kolmogorov type equations. The model is parameterized by numerical fitting to continuous cooling data obtained with a Gleeble thermo-mechanical simulator, and it can be used to calculate the transformation behavior occurring along any cooling path. The phase transformation model has been coupled with heat conduction simulations. The model includes separate parameters to account for the incubation stage and for the kinetics after the transformation has started. The incubation time is calculated by inversion of the CCT transformation start time. For the heat conduction simulations we employed our own parallelized 2-dimensional finite difference code. In addition, the transformation model was implemented as a subroutine in the commercial finite-element software Abaqus, which allows the model to be used in various engineering applications.
Full-wave multiscale anisotropy tomography in Southern California
NASA Astrophysics Data System (ADS)
Lin, Yu-Pin; Zhao, Li; Hung, Shu-Huei
2014-12-01
Understanding the spatial variation of anisotropy in the upper mantle is important for characterizing the lithospheric deformation and mantle flow dynamics. In this study, we apply a full-wave approach to image the upper-mantle anisotropy in Southern California using 5954 SKS splitting data. Three-dimensional sensitivity kernels combined with a wavelet-based model parameterization are adopted in a multiscale inversion. Spatial resolution lengths are estimated based on a statistical resolution matrix approach, showing a finest resolution length of ~25 km in regions with densely distributed stations. The anisotropic model displays structural fabric in relation to surface geologic features such as the Salton Trough, the Transverse Ranges, and the San Andreas Fault. The depth variation of anisotropy does not suggest a lithosphere-asthenosphere decoupling. At long wavelengths, the fast directions of anisotropy are aligned with the absolute plate motion inside the Pacific and North American plates.
Localization of extended brain sources from EEG/MEG: the ExSo-MUSIC approach.
Birot, Gwénaël; Albera, Laurent; Wendling, Fabrice; Merlet, Isabelle
2011-05-01
We propose a new MUSIC-like method, called 2q-ExSo-MUSIC (q ≥ 1). This method is an extension of the 2q-MUSIC (q ≥ 1) approach for solving the EEG/MEG inverse problem when spatially extended neocortical sources ("ExSo") are considered. It introduces a novel ExSo-MUSIC principle. The novelty is two-fold: i) a parameterization of the spatial source distribution that leads to an appropriate metric in the context of distributed brain sources, and ii) an original, efficient, and low-cost way of optimizing this metric. In 2q-ExSo-MUSIC, the possible use of higher-order statistics (q ≥ 2) offers better robustness with respect to Gaussian noise of unknown spatial coherence and to modeling errors. As a result, we reduced the penalizing effects both of the background cerebral activity, which can be seen as Gaussian, spatially correlated noise, and of the modeling errors induced by the non-exact resolution of the forward problem. Computer results on simulated EEG signals obtained with physiologically relevant models of both the sources and the volume conductor show a highly increased performance of our 2q-ExSo-MUSIC method as compared to the classical 2q-MUSIC algorithms.
NASA Astrophysics Data System (ADS)
Fienen, M.; Hunt, R.; Krabbenhoft, D.; Clemo, T.
2009-08-01
Flow path delineation is a valuable tool for interpreting the subsurface hydrogeochemical environment. Different types of data, such as groundwater flow and transport, inform different aspects of hydrogeologic parameter values (hydraulic conductivity in this case) which, in turn, determine flow paths. This work combines flow and transport information to estimate a unified set of hydrogeologic parameters using the Bayesian geostatistical inverse approach. Parameter flexibility is allowed by using a highly parameterized approach with the level of complexity informed by the data. Despite the effort to adhere to the ideal of minimal a priori structure imposed on the problem, extreme contrasts in parameters can result in the need to censor correlation across hydrostratigraphic bounding surfaces. These partitions segregate parameters into facies associations. With an iterative approach in which partitions are based on inspection of initial estimates, flow path interpretation is progressively refined through the inclusion of more types of data. Head observations, stable oxygen isotopes (18O/16O ratios), and tritium are all used to progressively refine flow path delineation on an isthmus between two lakes in the Trout Lake watershed, northern Wisconsin, United States. Despite allowing significant parameter freedom by estimating many distributed parameter values, a smooth field is obtained.
Atmospheric parameterization schemes for satellite cloud property retrieval during FIRE IFO 2
NASA Technical Reports Server (NTRS)
Titlow, James; Baum, Bryan A.
1993-01-01
Satellite cloud retrieval algorithms generally require atmospheric temperature and humidity profiles to determine such cloud properties as pressure and height. For instance, the CO2 slicing technique called the ratio method requires the calculation of theoretical upwelling radiances both at the surface and at a prescribed number (40) of atmospheric levels. This technique has been applied to data from, for example, the High Resolution Infrared Radiometer Sounder (HIRS/2, henceforth HIRS) flown aboard the NOAA series of polar orbiting satellites and the High Resolution Interferometer Sounder (HIS). In this particular study, four NOAA-11 HIRS channels in the 15-μm region are used. The ratio method may be applied to various channel combinations to estimate cloud top heights using channels in the 15-μm region. Presently, the multispectral, multiresolution (MSMR) scheme uses 4 HIRS channel combination estimates for mid- to high-level cloud pressure retrieval and Advanced Very High Resolution Radiometer (AVHRR) data for low-level (> 700 mb) cloud retrieval. In order to determine theoretical upwelling radiances, atmospheric temperature and water vapor profiles must be provided, as well as profiles of other radiatively important gas absorber constituents such as CO2, O3, and CH4. The assumed temperature and humidity profiles have a large effect on transmittance and radiance profiles, which in turn are used with HIRS data to calculate cloud pressure, and thus cloud height and temperature. For large spatial scale satellite data analysis, atmospheric parameterization schemes for cloud retrieval algorithms are usually based on a gridded product such as that provided by the European Centre for Medium-Range Weather Forecasts (ECMWF) or the National Meteorological Center (NMC). These global, gridded products prescribe temperature and humidity profiles for a limited number of pressure levels (up to 14) in a vertical atmospheric column.
The FIRE IFO 2 experiment provides an opportunity to investigate current atmospheric profile parameterization schemes, compare satellite cloud height results using both gridded products (ECMWF) and high vertical resolution sonde data from the National Weather Service (NWS) and Cross Chain Loran Atmospheric Sounding System (CLASS), and suggest modifications in atmospheric parameterization schemes based on these results.
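The ratio-method retrieval described above can be sketched as a table lookup: the measured two-channel radiance ratio is matched against theoretical ratios computed at each atmospheric level. The profiles below are synthetic stand-ins, not HIRS radiances:

```python
import numpy as np

# Hypothetical theoretical cloud-signal profiles (clear-minus-cloudy
# radiance) for two CO2 channels at 40 atmospheric levels. The shapes are
# synthetic; only the lookup logic is illustrated.
levels = np.arange(40)                  # level index, 0 = surface
signal_ch1 = np.exp(-0.05 * levels)     # channel 1 cloud signal vs. level
signal_ch2 = np.exp(-0.12 * levels)     # channel 2 (more opaque CO2 channel)

true_level = 25
# Simulated measured two-channel ratio for a cloud at `true_level`
measured_ratio = signal_ch2[true_level] / signal_ch1[true_level]

# Retrieval: pick the level whose theoretical ratio best matches the measurement
theoretical_ratio = signal_ch2 / signal_ch1
retrieved_level = int(np.argmin(np.abs(theoretical_ratio - measured_ratio)))
print(retrieved_level)  # → 25
```

Because the theoretical ratio is monotone in level for these profiles, the match is unique; with real radiances the same search is done against ratios computed from the assumed temperature and humidity profiles.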
Carbon balance of South Asia constrained by passenger aircraft CO2 measurements
NASA Astrophysics Data System (ADS)
Patra, P. K.; Niwa, Y.; Schuck, T. J.; Brenninkmeijer, C. A.; Machida, T.; Matsueda, H.; Sawa, Y.
2011-12-01
Quantifying the fluxes of carbon dioxide (CO2) between the atmosphere and terrestrial ecosystems in all their diversity, across the continents, is important and urgent for implementing effective mitigation policies. Whereas much is known for Europe and North America, for instance, South Asia, with 1.6 billion inhabitants and considerable CO2 fluxes, has in comparison remained terra incognita in this respect. The sole measurement site, at Cape Rama, does not constrain CO2 fluxes during the summer monsoon season. We use regional measurements of atmospheric CO2 aboard a Lufthansa passenger aircraft between Frankfurt (Germany) and Chennai (India) at cruise altitude, in addition to the existing network sites for 2008, to estimate monthly fluxes for 64 regions using Bayesian inversion and ACTM transport model simulations. The applicability of the model's transport parameterization is confirmed using multi-tracer (SF6, CH4, N2O) simulations for the CARIBIC datasets. The annual carbon flux obtained by including the aircraft data is twice as large as the flux simulated by a terrestrial ecosystem model that was applied to prescribe the fluxes used in the inversions. It is shown that South Asia sequestered carbon at a rate of 0.37±0.20 Pg C yr-1 for the years 2007 and 2008, primarily during the summer monsoon season, when the water limitation for this tropical ecosystem is relaxed. The seasonality and strength of the calculated monthly fluxes are successfully validated using independent measurements of vertical CO2 profiles over Delhi and spatial variations at cruising altitude by the CONTRAIL program over Asia aboard Japan Airlines passenger aircraft (Patra et al., 2011). A major remaining challenge is the verification of the inverse-model flux seasonality and annual totals by bottom-up estimates using field measurements and terrestrial ecosystem models.
NASA Astrophysics Data System (ADS)
Gutowitz, Howard
1991-08-01
Cellular automata, dynamic systems in which space and time are discrete, are yielding interesting applications in both the physical and natural sciences. The thirty-four contributions in this book cover many aspects of contemporary studies on cellular automata and include reviews, research reports, and guides to recent literature and available software. Chapters cover mathematical analysis; the structure of the space of cellular automata; learning rules with specified properties; cellular automata in biology, physics, chemistry, and computation theory; and generalizations of cellular automata in neural nets, Boolean nets, and coupled map lattices. Current work on cellular automata may be viewed as revolving around two central and closely related problems: the forward problem and the inverse problem. The forward problem concerns the description of properties of given cellular automata. Properties considered include reversibility, invariants, criticality, fractal dimension, and computational power. The role of cellular automata in computation theory is seen as a particularly exciting venue for exploring parallel computers as theoretical and practical tools in mathematical physics. The inverse problem, an area of study gaining prominence particularly in the natural sciences, involves designing rules that possess specified properties or perform specified tasks. A long-term goal is to develop a set of techniques that can find a rule or set of rules that can reproduce quantitative observations of a physical system. Studies of the inverse problem take up the organization and structure of the set of automata, in particular the parameterization of the space of cellular automata. Optimization and learning techniques, such as the genetic algorithm and adaptive stochastic cellular automata, are applied to find cellular automaton rules that model such physical phenomena as crystal growth or perform such adaptive-learning tasks as balancing an inverted pole.
Howard Gutowitz is a collaborator in the Service de Physique du Solide et de Résonance Magnétique, Commissariat à l'Énergie Atomique, Saclay, France.
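The "forward problem" described above, computing the evolution of a given rule, fits in a few lines for one-dimensional elementary cellular automata; Rule 30 is used here purely as an example:

```python
# Forward problem for a 1-D elementary cellular automaton: the rule number's
# binary digits give the next state for each 3-cell neighborhood value.
def step(cells, rule=30):
    n = len(cells)
    return [
        (rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

row = [0] * 7
row[3] = 1                  # single live cell in the middle (periodic boundary)
for _ in range(3):
    row = step(row)
    print(row)
```

The inverse problem discussed in the book runs this machinery in reverse: searching the space of rule numbers for one whose evolution reproduces a target behavior.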
CO2 Flux Estimation Errors Associated with Moist Atmospheric Processes
NASA Technical Reports Server (NTRS)
Parazoo, N. C.; Denning, A. S.; Kawa, S. R.; Pawson, S.; Lokupitiya, R.
2012-01-01
Vertical transport by moist sub-grid scale processes such as deep convection is a well-known source of uncertainty in CO2 source/sink inversion. However, a dynamical link between vertical transport, satellite based retrievals of column mole fractions of CO2, and source/sink inversion has not yet been established. By using the same offline transport model with meteorological fields from slightly different data assimilation systems, we examine sensitivity of frontal CO2 transport and retrieved fluxes to different parameterizations of sub-grid vertical transport. We find that frontal transport feeds off background vertical CO2 gradients, which are modulated by sub-grid vertical transport. The implication for source/sink estimation is two-fold. First, CO2 variations contained in moist poleward moving air masses are systematically different from variations in dry equatorward moving air. Moist poleward transport is hidden from orbital sensors on satellites, causing a sampling bias, which leads directly to small but systematic flux retrieval errors in northern mid-latitudes. Second, differences in the representation of moist sub-grid vertical transport in GEOS-4 and GEOS-5 meteorological fields cause differences in vertical gradients of CO2, which leads to systematic differences in moist poleward and dry equatorward CO2 transport and therefore the fraction of CO2 variations hidden in moist air from satellites. As a result, sampling biases are amplified and regional scale flux errors enhanced, most notably in Europe (0.43+/-0.35 PgC /yr). These results, cast from the perspective of moist frontal transport processes, support previous arguments that the vertical gradient of CO2 is a major source of uncertainty in source/sink inversion.
2015-06-13
The Berkeley Out-of-Order Machine (BOOM): An Industry-Competitive, Synthesizable, Parameterized RISC-V Processor
Celio, Christopher; Patterson, David; Asanović, Krste
University of California, Berkeley, California 94720
BOOM is a synthesizable, parameterized, superscalar out-of-order RISC-V core designed to serve as the prototypical baseline processor
Parameterized hardware description as object oriented hardware model implementation
NASA Astrophysics Data System (ADS)
Drabik, Pawel K.
2010-09-01
The paper introduces a novel model for the design, visualization, and management of complex, highly adaptive hardware systems. The model establishes a component-oriented environment for both hardware modules and software applications. It is developed from research on parameterized hardware description. The establishment of a stable link between hardware and software, the purpose of the designed and realized work, is presented. A novel programming framework model for the environment, named Graphic-Functional-Components, is presented. The purpose of the paper is to present object-oriented hardware modeling with the mentioned features. A possible model implementation in FPGA chips and its management by object-oriented software in Java is described.
Data-driven parameterization of the generalized Langevin equation
Lei, Huan; Baker, Nathan A.; Li, Xiantao
2016-11-29
We present a data-driven approach to determine the memory kernel and random noise of the generalized Langevin equation. To facilitate practical implementations, we parameterize the kernel function in the Laplace domain by a rational function, with coefficients directly linked to the equilibrium statistics of the coarse-grained variables. Further, we show that such an approximation can be constructed to arbitrarily high order. Within these approximations, the generalized Langevin dynamics can be embedded in an extended stochastic model without memory. We demonstrate how to introduce the stochastic noise so that the fluctuation-dissipation theorem is exactly satisfied.
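The embedding idea can be demonstrated for the simplest rational kernel, a single exponential whose Laplace transform is c/(s + gamma); the sketch below compares the deterministic memory integral with its memoryless extended-variable equivalent, leaving out the stochastic noise term:

```python
import numpy as np

# Deterministic part of a GLE with the simplest rational kernel: an
# exponential memory kernel K(t) = c*exp(-gamma*t), Laplace transform
# c/(s + gamma). The memory integral can be replaced by one auxiliary
# variable z, giving a memoryless (Markovian) extended system:
#   v' = -z,    z' = c*v - gamma*z,    z(0) = 0
c, gamma, dt, n = 2.0, 1.0, 2e-3, 2000
t = np.arange(n) * dt

# Direct Euler integration of v'(t) = -Int_0^t K(t-s) v(s) ds
v_direct = np.empty(n)
v_direct[0] = 1.0
for k in range(n - 1):
    mem = dt * np.sum(c * np.exp(-gamma * (t[k] - t[: k + 1])) * v_direct[: k + 1])
    v_direct[k + 1] = v_direct[k] - dt * mem

# Equivalent extended system with no memory integral
v_ext = np.empty(n)
v_ext[0], z = 1.0, 0.0
for k in range(n - 1):
    v_ext[k + 1] = v_ext[k] - dt * z
    z += dt * (c * v_ext[k] - gamma * z)

print(np.max(np.abs(v_direct - v_ext)))  # small discretization mismatch
```

Higher-order rational approximations add more auxiliary variables in the same way; the paper's full method additionally chooses the noise on the auxiliary variables so the fluctuation-dissipation theorem holds exactly.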
NASA Astrophysics Data System (ADS)
Hailegeorgis, Teklu T.; Alfredsen, Knut; Abdella, Yisak S.; Kolberg, Sjur
2015-03-01
Identification of proper parameterizations of spatial heterogeneity is required for precipitation-runoff models. However, relevant studies aimed specifically at hourly runoff simulation in boreal mountainous catchments are not common. We conducted calibration and evaluation of hourly runoff simulation in a boreal mountainous watershed based on six different parameterizations of the spatial heterogeneity of subsurface storage capacity for a semi-distributed (subcatchments, hereafter called elements) and distributed (1 × 1 km2 grid) setup. We evaluated representation of element-to-element, grid-to-grid, and probabilistic subcatchment/subbasin, subelement, and subgrid heterogeneities. The parameterization cases satisfactorily reproduced the streamflow hydrographs, with Nash-Sutcliffe efficiency values for the calibration and validation periods up to 0.84 and 0.86, respectively, and similarly up to 0.85 and 0.90 for the log-transformed streamflow. The parameterizations reproduced the flow duration curves, but predictive reliability in terms of quantile-quantile (Q-Q) plots indicated marked over- and underpredictions. The simple and parsimonious parameterizations with no subelement or subgrid heterogeneities provided simulation performance equivalent to the more complex cases. The results indicated that (i) identification of parameterizations requires measurements from denser precipitation stations than are required for acceptable calibration of the precipitation-streamflow relationships, (ii) there are challenges in the identification of parameterizations based only on calibration to catchment-integrated streamflow observations, and (iii) there is a potential preference for the simple and parsimonious parameterizations for operational forecasting, contingent on their equivalent simulation performance for the available input data.
In addition, the effects of non-identifiability of parameters (interactions and equifinality) can contribute to the non-identifiability of the parameterizations.
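For reference, the Nash-Sutcliffe efficiency used as the skill score above compares model error against the variance of the observations:

```python
import numpy as np

# Nash-Sutcliffe efficiency:
#   NSE = 1 - sum((sim - obs)^2) / sum((obs - mean(obs))^2)
# NSE = 1 is a perfect fit; NSE = 0 means the model is no better a
# predictor than the mean of the observations.
def nse(sim, obs):
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

obs = np.array([1.0, 3.0, 2.0, 5.0, 4.0])        # made-up hourly flows
print(nse(obs, obs))                              # → 1.0
print(nse(np.full(5, obs.mean()), obs))           # → 0.0
```

Applying the same score to log-transformed streamflow, as the study does, weights low-flow periods more heavily.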
Evaluation of wave runup predictions from numerical and parametric models
Stockdon, Hilary F.; Thompson, David M.; Plant, Nathaniel G.; Long, Joseph W.
2014-01-01
Wave runup during storms is a primary driver of coastal evolution, including shoreline and dune erosion and barrier island overwash. Runup and its components, setup and swash, can be predicted from a parameterized model that was developed by comparing runup observations to offshore wave height, wave period, and local beach slope. Because observations during extreme storms are often unavailable, a numerical model is used to simulate the storm-driven runup to compare to the parameterized model and then develop an approach to improve the accuracy of the parameterization. Numerically simulated and parameterized runup were compared to observations to evaluate model accuracies. The analysis demonstrated that setup was accurately predicted by both the parameterized model and numerical simulations. Infragravity swash heights were most accurately predicted by the parameterized model. The numerical model suffered from bias and gain errors that depended on whether a one-dimensional or two-dimensional spatial domain was used. Nonetheless, all of the predictions were significantly correlated to the observations, implying that the systematic errors can be corrected. The numerical simulations did not resolve the incident-band swash motions, as expected, and the parameterized model performed best at predicting incident-band swash heights. An assimilated prediction using a weighted average of the parameterized model and the numerical simulations resulted in a reduction in prediction error variance. Finally, the numerical simulations were extended to include storm conditions that have not been previously observed. These results indicated that the parameterized predictions of setup may need modification for extreme conditions; numerical simulations can be used to extend the validity of the parameterized predictions of infragravity swash; and numerical simulations systematically underpredict incident swash, which is relatively unimportant under extreme conditions.
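The variance-reduction claim for the assimilated prediction follows from inverse-variance weighting of two unbiased predictors; a synthetic check, with made-up error variances standing in for the parameterized and numerical models:

```python
import numpy as np

# Combining two unbiased predictions with independent errors by an
# inverse-variance weighted average; the combined error variance
# 1/(1/v1 + 1/v2) is never larger than either individual variance.
rng = np.random.default_rng(0)
truth = np.zeros(100_000)
pred_param = truth + rng.normal(0.0, 1.0, truth.size)   # "parameterized" model
pred_num = truth + rng.normal(0.0, 2.0, truth.size)     # "numerical" model

v1, v2 = 1.0**2, 2.0**2
w1, w2 = (1 / v1) / (1 / v1 + 1 / v2), (1 / v2) / (1 / v1 + 1 / v2)
combined = w1 * pred_param + w2 * pred_num

print(np.var(pred_param - truth), np.var(combined - truth))
```

With these variances the theoretical combined error variance is 1/(1/1 + 1/4) = 0.8, below the better model's 1.0; the study's weighting is fit to observed skill rather than assumed variances.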
NASA Astrophysics Data System (ADS)
Dipankar, A.; Stevens, B. B.; Zängl, G.; Pondkule, M.; Brdar, S.
2014-12-01
The effect of clouds on large-scale dynamics is represented in climate models through parameterization of various processes, of which the parameterizations of shallow and deep convection are particularly uncertain. The atmospheric boundary layer, which controls the coupling to the surface and defines the scale of shallow convection, is typically 1 km in depth. Thus, simulations on an O(100 m) grid largely obviate the need for such parameterizations. By crossing this threshold of O(100 m) grid resolution, one can begin thinking of large-eddy simulation (LES), wherein the sub-grid scale parameterizations have a sounder theoretical foundation. Substantial initiatives have been undertaken internationally to approach this threshold. For example, Miura et al. (2007) and Miyakawa et al. (2014) approach this threshold through global simulations with (gradually) decreasing grid spacing, to understand the effect of cloud-resolving scales on the general circulation. Our strategy, on the other hand, is to take a big leap forward by fixing the resolution at O(100 m) and gradually increasing the domain size. We believe that breaking this threshold will greatly help in improving parameterization schemes and reducing the uncertainty in climate predictions. To take this forward, the German Federal Ministry of Education and Research has initiated the project HD(CP)2, which aims for a limited-area LES at O(100 m) resolution using the new unified modeling system ICON (Zängl et al., 2014). In the talk, results from the HD(CP)2 evaluation simulation will be shown, targeting high-resolution simulation over a small domain around Jülich, Germany. This site was chosen because the high-resolution HD(CP)2 Observational Prototype Experiment took place in this region from 1 April 2013 to 31 May 2013, in order to critically evaluate the model.
The nesting capabilities of ICON are used to gradually increase the resolution from the outermost domain, which is forced with COSMO-DE data, to the innermost and finest-resolution domain centered around Jülich (see Fig. 1, top panel). Furthermore, detailed analyses of the simulation results against the observation data will be presented. A representative figure showing time series of column-integrated water vapor (IWV) for both model and observations on 24 April 2013 is shown in the bottom panel of Fig. 1.
An Earth Outgoing Longwave Radiation Climate Model
NASA Astrophysics Data System (ADS)
Yang, Shi-Keng
An Earth outgoing longwave radiation (OLWR) climate model has been constructed for radiation budget studies. The model consists of the upward radiative transfer parameterization of Thompson and Warren (1982), the cloud cover model of Sherr et al. (1968), and a monthly average climatology defined by the data from Crutcher and Meserve (1971) and Taljaard et al. (1969). Additional required information is provided by the empirical 100-mb water vapor mixing ratio equation of Harries (1976) and the mixing ratio interpolation scheme of Briegleb and Ramanathan (1982). Cloud top temperature is adjusted so that the calculation agrees with NOAA scanning radiometer measurements. Both clear-sky and cloudy-sky cases are calculated and discussed for global average, zonal average, and world-wide distributed cases. The results agree well with the satellite observations. The clear-sky case shows that the OLWR field is highly modulated by water vapor, especially in the tropics. The strongest longitudinal variation occurs in the tropics. This variation can mostly be explained by the strong water vapor gradient. Although in the zonal average case the tropics have a minimum in OLWR, the minimum is essentially contributed by a few very low flux regions, such as the Amazon, Indonesia, and the Congo. There are regions in the tropics whose OLWR is as large as that of the subtropics. In the high latitudes, where cold air contains less water vapor, OLWR is basically modulated by the surface temperature. Thus, the topographical heat capacity becomes a dominant factor in determining the distribution. Clouds enhance the water vapor modulation of OLWR. Tropical clouds have the coldest cloud top temperatures. This again increases the longitudinal variation in the region. However, in the polar regions, where temperature inversions are prominent, cloud top temperature is warmer than the surface. Hence, clouds have the effect of increasing OLWR.
The implication of this cloud mechanism is that the latitudinal gradient of net radiation is thus further increased, and the forcing of the general atmospheric circulation is substantially different due to the increased additional available energy. The analysis of the results also suggests that to improve the performance of the Budyko-Sellers type energy balance climate model in the tropical region, the parameterization of the longwave cooling should include a water vapor absorbing term.
Surface wave tomography applied to the North American upper mantle
NASA Astrophysics Data System (ADS)
van der Lee, Suzan; Frederiksen, Andrew
Tomographic techniques that invert seismic surface waves for 3-D Earth structure differ in their definitions of data and the forward problem as well as in the parameterization of the tomographic model. However, all such techniques have in common that the tomographic inverse problem involves solving a large and mixed-determined set of linear equations. Consequently these inverse problems have multiple solutions and inherently undefinable accuracy. Smoother and rougher tomographic models are found with rougher (confined to great circle path) and smoother (finite-width) sensitivity kernels, respectively. A powerful, well-tested method of surface wave tomography (Partitioned Waveform Inversion) is based on inverting the waveforms of wave trains comprising regional S and surface waves from at least hundreds of seismograms for 3-D variations in S wave velocity. We apply this method to nearly 1400 seismograms recorded by digital broadband seismic stations in North America. The new 3-D S-velocity model, NA04, is consistent with previous findings that are based on separate, overlapping data sets. The merging of US and Canadian data sets, adding Canadian recordings of Mexican earthquakes, and combining fundamental-mode with higher-mode waveforms provides superior resolution, in particular in the US-Canada border region and the deep upper mantle. 
NA04 shows that 1) the Atlantic upper mantle is seismically faster than the Pacific upper mantle, 2) the uppermost mantle beneath Precambrian North America could be one and a half times as rigid as the upper mantle beneath Meso- and Cenozoic North America, with the upper mantle beneath Paleozoic North America being intermediate in seismic rigidity, 3) upper-mantle structure varies laterally within these geologic-age domains, and 4) the distribution of high-velocity anomalies in the deep upper mantle aligns with lower-mantle images of the subducted Farallon and Kula plates and indicates that trailing fragments of these subducted oceanic plates still reside in the transition zone. The high-velocity layer beneath Precambrian North America is estimated to be 250±70 km thick. On a smaller scale, NA04 shows 1) high velocities associated with subduction of the Pacific plate beneath the Aleutian arc, 2) the absence of expected high velocities in the upper mantle beneath the Wyoming craton, 3) a V-shaped dent below 150 km in the high-velocity cratonic lithosphere beneath New England, 4) the cratonic lithosphere beneath Precambrian North America being confined southwest of Baffin Bay, west of the Appalachians, north of the Ouachitas, east of the Rocky Mountains, and south of the Arctic Ocean, 5) the cratonic lithosphere beneath the Canadian shield having higher S-velocities than that beneath Precambrian basement covered with Phanerozoic sediments, and 6) the lowest S velocities being concentrated beneath the Gulf of California, northern Mexico, and the Basin and Range Province.
Lateral and Time Distributions of Extensive Air Showers for CHICOS
NASA Astrophysics Data System (ADS)
Jillings, C. J.; Wells, D.; Chan, K. C.; Hill, J.; Falkowski, B.; Sepikas, J.
2005-04-01
We report results of a series of detailed Monte-Carlo calculations to determine the density and arrival-time distribution of charged particles in extensive air showers. We have parameterized both distributions as a function of distance from the shower axis, energy of the primary cosmic-ray proton, and incident zenith angle. Muons and electrons are parameterized separately. These parameterizations can be easily used in maximum-likelihood reconstruction of air showers. Calculations were performed for primary energies between 10^18 and 10^21 eV and zenith angles out to approximately 50°. The calculations are appropriate for the California High School Cosmic Ray Observatory: a 400 km^2 array of scintillation detectors in Los Angeles County. The average elevation of the array is approximately 250 meters above sea level. Currently 64 of 90 sites are operational. The array will be completed this year. We thank the NSF, the CURE program at the Jet Propulsion Laboratory, the SURF program at Caltech, and the Chinese University of Hong Kong.
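A lateral-distribution parameterization of the general kind described can be written as an NKG-type function; the coefficients below are illustrative placeholders, not the fitted values from these Monte-Carlo calculations:

```python
import numpy as np

# Generic NKG-type lateral distribution of particle density vs. core
# distance r (illustrative form and numbers only):
#   rho(r) = C * (r/rM)^(s-2) * (1 + r/rM)^(s-4.5)
def lateral_density(r, n_particles=1e9, r_moliere=79.0, age=1.2):
    x = r / r_moliere
    c = n_particles / (2 * np.pi * r_moliere**2)   # normalization, schematic
    return c * x ** (age - 2.0) * (1.0 + x) ** (age - 4.5)

r = np.array([50.0, 100.0, 500.0, 1000.0])   # core distance in meters
rho = lateral_density(r)
print(rho)   # density falls steeply away from the shower axis
```

In a maximum-likelihood reconstruction, a function of this kind (with the fitted energy and zenith-angle dependence) supplies the expected particle count at each detector for candidate core positions and energies.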
Separation of Intercepted Multi-Radar Signals Based on Parameterized Time-Frequency Analysis
NASA Astrophysics Data System (ADS)
Lu, W. L.; Xie, J. W.; Wang, H. M.; Sheng, C.
2016-09-01
Modern radars use complex waveforms to obtain high detection performance and low probabilities of interception and identification. Signals intercepted from multiple radars overlap considerably in both the time and frequency domains and are difficult to separate with primary time parameters. Time-frequency analysis (TFA), as a key signal-processing tool, can provide better insight into the signal than conventional methods. In particular, among the various types of TFA, parameterized time-frequency analysis (PTFA) has shown great potential to investigate the time-frequency features of such non-stationary signals. In this paper, we propose a procedure for PTFA to separate overlapped radar signals; it includes five steps: initiation, parameterized time-frequency analysis, demodulating the signal of interest, adaptive filtering and recovering the signal. The effectiveness of the method was verified with simulated data and an intercepted radar signal received in a microwave laboratory. The results show that the proposed method has good performance and has potential in electronic reconnaissance applications, such as electronic intelligence, electronic warfare support measures, and radar warning.
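Steps 3 through 5 of the procedure can be sketched for two linear-FM pulses, assuming the phase parameters of the signal of interest have already been estimated in the PTFA step (here they are simply taken as known):

```python
import numpy as np

# Two overlapping linear-FM (chirp) pulses; separate chirp 1 by
# demodulating with its (assumed estimated) phase, low-pass filtering,
# and remodulating, then subtract to recover chirp 2.
fs, n = 1000.0, 1000
t = np.arange(n) / fs
phase1 = 2 * np.pi * (50 * t + 25 * t**2)     # chirp 1: 50 -> 100 Hz
phase2 = 2 * np.pi * (200 * t + 50 * t**2)    # chirp 2: 200 -> 300 Hz
s1, s2 = np.exp(1j * phase1), np.exp(1j * phase2)
mixture = s1 + s2

# Step 3: demodulate; chirp 1 collapses to DC, chirp 2 stays far from DC
demod = mixture * np.exp(-1j * phase1)

# Step 4: adaptive filtering, done here as a fixed frequency-domain
# low-pass (keep |f| < 50 Hz)
spec = np.fft.fft(demod)
freqs = np.fft.fftfreq(n, 1 / fs)
spec[np.abs(freqs) >= 50] = 0.0

# Step 5: remodulate to recover chirp 1; subtract to expose chirp 2
s1_hat = np.fft.ifft(spec) * np.exp(1j * phase1)
s2_hat = mixture - s1_hat
print(np.linalg.norm(s1_hat - s1) / np.linalg.norm(s1))  # small residual
```

Real intercepted pulses require the PTFA step itself to estimate the phase model, plus adaptive rather than fixed filtering, but the demodulate-filter-remodulate loop is the core of the separation.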
On constraining pilot point calibration with regularization in PEST
Fienen, M.N.; Muffels, C.T.; Hunt, R.J.
2009-01-01
Ground water model calibration has made great advances in recent years, with practical tools such as PEST being instrumental in making the latest techniques available to practitioners. As models and calibration tools become more sophisticated, however, the power of these tools can be misapplied, resulting in poor parameter estimates and/or nonoptimally calibrated models that do not suit their intended purpose. Here, we focus on an increasingly common technique for calibrating highly parameterized numerical models - pilot point parameterization with Tikhonov regularization. Pilot points are a popular method for spatially parameterizing complex hydrogeologic systems; however, the additional flexibility offered by pilot points can become problematic if not constrained by Tikhonov regularization. The objective of this work is to explain and illustrate the specific roles played by control variables in the PEST software for Tikhonov regularization applied to pilot points. A recent study encountered difficulties implementing this approach, but through examination of that analysis, insight into underlying sources of potential misapplication can be gained and some guidelines for overcoming them developed. © 2009 National Ground Water Association.
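The interplay the abstract describes can be illustrated with a toy version of Tikhonov regularization: a smoothness penalty on pilot-point values makes an otherwise underdetermined estimation problem uniquely solvable. This sketch is generic linear algebra, not PEST's implementation:

```python
import numpy as np

# Toy Tikhonov-regularized estimation of 10 pilot-point values p from 4
# noisy "observations" d = J p + noise, with a first-difference matrix L
# penalizing roughness between neighboring pilot points:
#   minimize ||J p - d||^2 + lam * ||L p||^2
rng = np.random.default_rng(1)
n_pp, n_obs = 10, 4
J = rng.normal(size=(n_obs, n_pp))            # stand-in sensitivity matrix
p_true = np.linspace(1.0, 2.0, n_pp)          # smoothly varying property
d = J @ p_true + rng.normal(0.0, 0.01, n_obs)

L = np.eye(n_pp)[:-1] - np.eye(n_pp, k=1)[:-1]   # first differences, 9 x 10
lam = 1.0
p_hat = np.linalg.solve(J.T @ J + lam * L.T @ L, J.T @ d)

# Without the penalty the 4 x 10 system is underdetermined (infinitely many
# exact fits); the smoothness constraint selects one well-behaved solution.
print(np.round(p_hat, 2))
```

PEST's Tikhonov mode adds machinery on top of this picture, notably an adaptive regularization weight driven by a target measurement objective function, which is exactly where the control variables the paper discusses come in.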
Multidisciplinary Aerodynamic-Structural Shape Optimization Using Deformation (MASSOUD)
NASA Technical Reports Server (NTRS)
Samareh, Jamshid A.
2000-01-01
This paper presents a multidisciplinary shape parameterization approach. The approach consists of two basic concepts: (1) parameterizing the shape perturbations rather than the geometry itself and (2) performing the shape deformation by means of the soft object animation algorithms used in computer graphics. Because the formulation presented in this paper is independent of grid topology, we can treat computational fluid dynamics and finite element grids in the same manner. The proposed approach is simple, compact, and efficient. Also, the analytical sensitivity derivatives are easily computed for use in a gradient-based optimization. This algorithm is suitable for low-fidelity (e.g., linear aerodynamics and equivalent laminate plate structures) and high-fidelity (e.g., nonlinear computational fluid dynamics and detailed finite element modeling) analysis tools. This paper contains the implementation details of parameterizing for planform, twist, dihedral, thickness, camber, and free-form surface. Results are presented for a multidisciplinary application consisting of nonlinear computational fluid dynamics, detailed computational structural mechanics, and a simple performance module.
Multidisciplinary Aerodynamic-Structural Shape Optimization Using Deformation (MASSOUD)
NASA Technical Reports Server (NTRS)
Samareh, Jamshid A.
2000-01-01
This paper presents a multidisciplinary shape parameterization approach. The approach consists of two basic concepts: (1) parameterizing the shape perturbations rather than the geometry itself and (2) performing the shape deformation by means of the soft object animation algorithms used in computer graphics. Because the formulation presented in this paper is independent of grid topology, we can treat computational fluid dynamics and finite element grids in a similar manner. The proposed approach is simple, compact, and efficient. Also, the analytical sensitivity derivatives are easily computed for use in a gradient-based optimization. This algorithm is suitable for low-fidelity (e.g., linear aerodynamics and equivalent laminated plate structures) and high-fidelity (e.g., nonlinear computational fluid dynamics and detailed finite element modeling) analysis tools. This paper contains the implementation details of parameterizing for planform, twist, dihedral, thickness, camber, and free-form surface. Results are presented for a multidisciplinary design optimization application consisting of nonlinear computational fluid dynamics, detailed computational structural mechanics, and a simple performance module.
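The first concept, parameterizing perturbations rather than the geometry itself, can be sketched with a linear basis of smooth bumps; Hicks-Henne functions are used here as an illustrative basis, not MASSOUD's soft-object-animation algorithm:

```python
import numpy as np

# Parameterize shape *perturbations*: the deformed shape is the fixed
# baseline plus a weighted sum of smooth basis perturbations, so the
# design variables are the weights, not the surface points themselves.
def hicks_henne(x, peak, width=3.0):
    # Smooth bump on [0, 1] whose maximum sits at `peak`
    return np.sin(np.pi * x ** (np.log(0.5) / np.log(peak))) ** width

x = np.linspace(1e-6, 1.0, 101)            # chord-wise stations
baseline = 0.05 * np.sin(np.pi * x)        # synthetic baseline camber line
basis = np.stack([hicks_henne(x, p) for p in (0.25, 0.5, 0.75)])

coeffs = np.zeros(3)
assert np.allclose(baseline + coeffs @ basis, baseline)  # zero perturbation

coeffs = np.array([0.002, -0.001, 0.003])
deformed = baseline + coeffs @ basis       # same grid, perturbed shape
print(float(deformed.max()))
```

Because the deformed shape is linear in the coefficients, the sensitivity derivative with respect to each design variable is simply the corresponding basis function, which is what makes analytical sensitivities cheap in this style of parameterization.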
Application and evaluation of high-resolution WRF-CMAQ with simple urban parameterization.
The 2-way coupled WRF-CMAQ meteorology and air quality modeling system is evaluated for high-resolution applications by comparing to a regional air quality field study (Discover-AQ). The model was modified to better account for the effects of urban environments. High-resolution...
Age of high redshift objects—a litmus test for the dark energy models
NASA Astrophysics Data System (ADS)
Jain, Deepak; Dev, Abha
2006-02-01
The discovery of the quasar APM 08279+5255 at z=3.91, whose age is 2-3 Gyr, has once again led to an “age crisis”. The noticeable fact about this object is that it cannot be accommodated in a universe with Ωm=0.27, the currently accepted value of the matter density parameter, and w=const. In this work, we explore the concordance of various dark energy parameterizations (w(z) models) with the age estimates of old high-redshift objects. It is alarming to note that the quasar cannot be accommodated in any dark energy model even for Ωm=0.23, which corresponds to a 1σ deviation below the best-fit value provided by WMAP. There is a need to look for alternative cosmologies or other dark energy parameterizations that allow the existence of the high-redshift objects.
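The “age crisis” can be checked directly: for a flat universe with constant w, the age at redshift z is an elementary integral. The sketch below assumes H0 = 70 km/s/Mpc:

```python
import numpy as np

# Age of the universe at redshift z for a flat model with a constant dark
# energy equation of state w:
#   t(z) = (1/H0) * Int_z^inf dz' / [(1+z') E(z')]
#   E(z) = sqrt(Om*(1+z)^3 + (1-Om)*(1+z)^(3*(1+w)))
def age_at_z(z, om=0.27, w=-1.0, h0=70.0):
    h0_inv_gyr = 977.8 / h0                   # 1/H0 in Gyr for H0 in km/s/Mpc
    zp = np.linspace(z, 1000.0, 200_000)      # truncate the improper integral
    e = np.sqrt(om * (1 + zp) ** 3 + (1 - om) * (1 + zp) ** (3 * (1 + w)))
    f = 1.0 / ((1 + zp) * e)
    return h0_inv_gyr * 0.5 * np.sum((f[1:] + f[:-1]) * np.diff(zp))  # trapezoid

# At z = 3.91 a flat LCDM universe with Om = 0.27 is only ~1.6 Gyr old,
# which is why a quasar of age 2-3 Gyr at that redshift is problematic.
print(round(age_at_z(3.91), 2))
```

Alternative w(z) parameterizations change E(z) and hence this integral, which is exactly the degree of freedom the paper explores.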
Clein, Joy S.; McGuire, A.D.; Zhang, X.; Kicklighter, D.W.; Melillo, J.M.; Wofsy, S.C.; Jarvis, P.G.; Massheder, J.M.
2002-01-01
The role of carbon (C) and nitrogen (N) interactions in the sequestration of atmospheric CO2 in black spruce ecosystems across North America was evaluated with the Terrestrial Ecosystem Model (TEM) by applying parameterizations of the model in which C-N dynamics were either coupled or uncoupled. First, the performance of the parameterizations, which were developed for the dynamics of black spruce ecosystems at the Bonanza Creek Long-Term Ecological Research site in Alaska, was evaluated by simulating C dynamics at eddy correlation tower sites in the Boreal Ecosystem Atmosphere Study (BOREAS) for black spruce ecosystems in the northern study area (northern site) and the southern study area (southern site) with local climate data. We compared simulated monthly growing season (May to September) estimates of gross primary production (GPP), total ecosystem respiration (RESP), and net ecosystem production (NEP) from 1994 to 1997 to available field-based estimates at both sites. At the northern site, monthly growing season estimates of GPP and RESP for the coupled and uncoupled simulations were highly correlated with the field-based estimates (coupled: R2 = 0.77, 0.88 for GPP and RESP; uncoupled: R2 = 0.67, 0.92 for GPP and RESP). Although the simulated seasonal pattern of NEP generally matched the field-based data, the correlations between field-based and simulated monthly growing season NEP were lower (R2 = 0.40, 0.00 for the coupled and uncoupled simulations, respectively) than the correlations between field-based and simulated GPP and RESP. The annual NEP simulated by the coupled parameterization fell within the uncertainty of field-based estimates in two of three years. On the other hand, annual NEP simulated by the uncoupled parameterization fell within the field-based uncertainty in only one of three years.
At the southern site, simulated NEP generally matched field-based NEP estimates, and the correlation between monthly growing season field-based and simulated NEP (R2 = 0.36, 0.20 for coupled and uncoupled simulations, respectively) was similar to the correlations at the northern site. To evaluate the role of N dynamics in C balance of black spruce ecosystems across North America, we simulated historical and projected C dynamics from 1900 to 2100 with a global-based climatology at 0.5° resolution (latitude × longitude) with both the coupled and uncoupled parameterizations of TEM. From analyses at the northern site, several consistent patterns emerge. There was greater inter-annual variability in net primary production (NPP) simulated by the uncoupled parameterization as compared to the coupled parameterization, which led to substantial differences in inter-annual variability in NEP between the parameterizations. The divergence between NPP and heterotrophic respiration was greater in the uncoupled simulation, resulting in more C sequestration during the projected period. These responses were the result of fundamentally different responses of the coupled and uncoupled parameterizations to changes in CO2 and climate. Across North American black spruce ecosystems, the range of simulated decadal changes in C storage was substantially greater for the uncoupled parameterization than for the coupled parameterization. Analysis of the spatial variability in decadal responses of C dynamics revealed that C fluxes simulated by the coupled and uncoupled parameterizations have different sensitivities to climate and that the climate sensitivities of the fluxes change over the temporal scope of the simulations.
The results of this study suggest that uncertainties can be reduced through (1) factorial studies focused on elucidating the role of C and N interactions in the response of mature black spruce ecosystems to manipulations of atmospheric CO2 and climate, (2) establishment of a network of continuous, long-term measurements of C dynamics across the range of mature black spruce ecosystems in North America, and (3) ancillary measurements.
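The model-data comparisons above rest on coefficient-of-determination statistics (e.g., R2 = 0.77 for GPP at the northern site). A minimal sketch of that computation, using hypothetical monthly GPP values rather than the TEM or field data:

```python
def r_squared(obs, sim):
    """Coefficient of determination (R^2) of sim against obs,
    computed as the squared Pearson correlation, as is common when
    comparing simulated and field-based flux estimates."""
    n = len(obs)
    mo = sum(obs) / n
    ms = sum(sim) / n
    cov = sum((o - mo) * (s - ms) for o, s in zip(obs, sim))
    vo = sum((o - mo) ** 2 for o in obs)
    vs = sum((s - ms) ** 2 for s in sim)
    return cov * cov / (vo * vs)

# Hypothetical monthly growing-season GPP (g C m^-2 month^-1)
field = [50, 120, 180, 150, 80]
model = [55, 110, 170, 160, 70]
print(round(r_squared(field, model), 3))
```

Note that a high R2 only indicates that the seasonal pattern is captured; as the abstract shows for NEP, annual totals can still fall outside the field-based uncertainty.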
Bayesian parameter estimation for nonlinear modelling of biological pathways.
Ghasemi, Omid; Lindsey, Merry L; Yang, Tianyi; Nguyen, Nguyen; Huang, Yufei; Jin, Yu-Fang
2011-01-01
The availability of temporal measurements on biological experiments has significantly promoted research areas in systems biology. To gain insight into the interaction and regulation of biological systems, mathematical frameworks such as ordinary differential equations have been widely applied to model biological pathways and interpret the temporal data. Hill equations are the preferred formats to represent the reaction rate in differential equation frameworks, due to their simple structures and their capabilities for easy fitting to saturated experimental measurements. However, Hill equations are highly nonlinearly parameterized functions, and parameters in these functions cannot be measured easily. Additionally, because of this high nonlinearity, adaptive parameter estimation algorithms developed for linearly parameterized differential equations cannot be applied. Therefore, parameter estimation in nonlinearly parameterized differential equation models for biological pathways is both challenging and rewarding. In this study, we propose a Bayesian parameter estimation algorithm to estimate parameters in nonlinear mathematical models for biological pathways using time series data. We used the Runge-Kutta method to transform differential equations to difference equations assuming a known structure of the differential equations. This transformation allowed us to generate predictions dependent on previous states and to apply a Bayesian approach, namely, the Markov chain Monte Carlo (MCMC) method. We applied this approach to the biological pathways involved in the left ventricle (LV) response to myocardial infarction (MI) and verified our algorithm by estimating two parameters in a Hill equation embedded in the nonlinear model. We further evaluated our estimation performance with different parameter settings and signal-to-noise ratios. Our results demonstrated the effectiveness of the algorithm for both linearly and nonlinearly parameterized dynamic systems.
Our proposed Bayesian algorithm successfully estimated parameters in nonlinear mathematical models for biological pathways. This method can be further extended to high order systems and thus provides a useful tool to analyze biological dynamics and extract information using temporal data.
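The strategy described above (discretize the ODE so that each state depends on the previous one, then sample the posterior with MCMC) can be sketched in a few lines. The pathway model, parameter values, noise level, and the use of a forward-Euler step in place of the paper's Runge-Kutta discretization are all illustrative assumptions, not the authors' setup:

```python
import math, random

random.seed(0)

def hill(u, vmax, K, n):
    """Hill-equation reaction rate."""
    return vmax * u**n / (K**n + u**n)

def simulate(K, n, dt=0.1, steps=100, vmax=1.0, d=0.2):
    """Forward-Euler discretization (a simple stand-in for the
    Runge-Kutta step) turning dx/dt = hill(u) - d*x into a
    difference equation whose states depend on previous states."""
    x, u, out = 0.0, 1.0, []
    for _ in range(steps):
        x += dt * (hill(u, vmax, K, n) - d * x)
        out.append(x)
    return out

# Synthetic "measurements" from true parameters K=0.5, n=2
data = [y + random.gauss(0, 0.01) for y in simulate(0.5, 2.0)]

def log_post(K, n):
    """Gaussian log-likelihood with a flat prior on K, n > 0."""
    if K <= 0 or n <= 0:
        return -1e18
    pred = simulate(K, n)
    sse = sum((a - b) ** 2 for a, b in zip(pred, data))
    return -sse / (2 * 0.01**2)

# Random-walk Metropolis, a basic MCMC sampler
K, n = 1.0, 1.0
lp = log_post(K, n)
samples = []
for _ in range(2000):
    Kp, np_ = K + random.gauss(0, 0.05), n + random.gauss(0, 0.05)
    lpp = log_post(Kp, np_)
    if math.log(random.random()) < lpp - lp:  # accept/reject step
        K, n, lp = Kp, np_, lpp
    samples.append((K, n))
K_hat = sum(s[0] for s in samples[1000:]) / 1000  # posterior means
n_hat = sum(s[1] for s in samples[1000:]) / 1000
print(K_hat, n_hat)
```

Discarding the first half of the chain as burn-in, the posterior means should recover parameters close to the true values used to generate the synthetic data.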
Ruff, Kiersten M.; Harmon, Tyler S.; Pappu, Rohit V.
2015-01-01
We report the development and deployment of a coarse-graining method that is well suited for computer simulations of aggregation and phase separation of protein sequences with block-copolymeric architectures. Our algorithm, named CAMELOT for Coarse-grained simulations Aided by MachinE Learning Optimization and Training, leverages information from converged all-atom simulations to determine a suitable resolution and parameterize the coarse-grained model. To parameterize a system-specific coarse-grained model, we use a combination of Boltzmann inversion, non-linear regression, and a Gaussian process Bayesian optimization approach. The accuracy of the coarse-grained model is demonstrated through direct comparisons to results from all-atom simulations. We demonstrate the utility of our coarse-graining approach using the block-copolymeric sequence from the exon 1 encoded sequence of the huntingtin protein. This sequence comprises 17 residues from the N-terminal end of huntingtin (N17) followed by a polyglutamine (polyQ) tract. Simulations based on the CAMELOT approach are used to show that the adsorption and unfolding of the wild type N17 and its sequence variants on the surface of polyQ tracts engender a patchy colloid like architecture that promotes the formation of linear aggregates. These results provide a plausible explanation for experimental observations, which show that N17 accelerates the formation of linear aggregates in block-copolymeric N17-polyQ sequences. The CAMELOT approach is versatile and is generalizable for simulating the aggregation and phase behavior of a range of block-copolymeric protein sequences.
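Boltzmann inversion, one ingredient of the CAMELOT parameterization pipeline, maps a distribution observed in all-atom simulations to an effective potential via U(r) = -kB T ln g(r). A minimal sketch with a hypothetical radial distribution function (the numbers are illustrative only, not huntingtin data):

```python
import math

KB = 0.0019872041  # Boltzmann constant, kcal/(mol*K)

def boltzmann_invert(g_of_r, temperature):
    """Map a radial distribution function g(r) to an effective
    potential U(r) = -kB*T*ln g(r); undefined where g(r) == 0."""
    kT = KB * temperature
    return {r: -kT * math.log(g) for r, g in g_of_r.items() if g > 0}

# Hypothetical g(r) with a peak at r = 5 A (a preferred separation)
g = {4.0: 0.2, 5.0: 2.5, 6.0: 1.0, 7.0: 0.9}
u = boltzmann_invert(g, 300.0)
# The most probable separation maps to the potential minimum
print(min(u, key=u.get))  # → 5.0
```

In practice the inverted potential is only a starting point; CAMELOT refines it further with non-linear regression and Gaussian process Bayesian optimization against the all-atom reference.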
Maiti, Saumen; Erram, V C; Gupta, Gautam; Tiwari, Ram Krishna; Kulkarni, U D; Sangpal, R R
2013-04-01
Deplorable quality of groundwater arising from saltwater intrusion, natural leaching and anthropogenic activities is one of the major concerns for the society. Assessment of groundwater quality is, therefore, a primary objective of scientific research. Here, we propose an artificial neural network-based method set in a Bayesian neural network (BNN) framework and employ it to assess groundwater quality. The approach is based on analyzing 36 water samples and inverting up to 85 Schlumberger vertical electrical sounding data. We constructed an a priori model by suitably parameterizing geochemical and geophysical data collected from the western part of India. The posterior model (post-inversion) was estimated using the BNN learning procedure and a global hybrid Monte Carlo/Markov Chain Monte Carlo optimization scheme. By suitable parameterization of geochemical and geophysical parameters, we simulated 1,500 training samples, of which 50 % were used for training and the remaining 50 % for validation and testing. We show that the trained model is able to classify validation and test samples with 85 % and 80 % accuracy, respectively. Based on cross-correlation analysis and the Gibbs diagram of geochemical attributes, the groundwater qualities of the study area were classified into the following three categories: "Very good", "Good", and "Unsuitable". The BNN model-based results suggest that groundwater quality falls mostly in the range of "Good" to "Very good" except for some places near the Arabian Sea. The new modeling results powered by uncertainty and statistical analyses would provide useful constraints that could be utilized in monitoring and assessment of the groundwater quality.
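The 50/50 training/validation split and accuracy scoring described above can be illustrated with a deliberately simple classifier; a nearest-centroid model on synthetic two-dimensional "geochemical" features stands in for the paper's Bayesian neural network (the features, class geometry, and sample counts here are all hypothetical):

```python
import math, random

random.seed(2)

def nearest_centroid_train(samples):
    """Per-class mean of feature vectors -- a deliberately simple
    stand-in for a trained BNN classifier."""
    sums, counts = {}, {}
    for x, label in samples:
        s = sums.setdefault(label, [0.0] * len(x))
        for i, v in enumerate(x):
            s[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {k: [v / counts[k] for v in s] for k, s in sums.items()}

def classify(centroids, x):
    """Assign x to the class with the nearest centroid."""
    return min(centroids, key=lambda k: math.dist(x, centroids[k]))

def sample(label, mu):
    """One noisy 2-D feature vector around a class center."""
    return ([random.gauss(m, 0.3) for m in mu], label)

# Hypothetical feature clusters for the three quality classes
data = [sample("very_good", (0, 0)) for _ in range(100)] + \
       [sample("good", (2, 0)) for _ in range(100)] + \
       [sample("unsuitable", (4, 2)) for _ in range(100)]
random.shuffle(data)
train, valid = data[:150], data[150:]  # the paper's 50/50 split
cents = nearest_centroid_train(train)
acc = sum(classify(cents, x) == y for x, y in valid) / len(valid)
print(acc)
```

The held-out accuracy computed this way is directly comparable to the 85 % / 80 % validation and test figures quoted in the abstract, though the BNN additionally provides posterior uncertainty that a centroid classifier cannot.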
Sun, Deyong; Hu, Chuanmin; Qiu, Zhongfeng; Wang, Shengqiang
2015-06-01
A new scheme has been proposed by Lee et al. (2014) to reconstruct hyperspectral (400 - 700 nm, 5 nm resolution) remote sensing reflectance (Rrs(λ), sr-1) of representative global waters using measurements at 15 spectral bands. This study tested its applicability to optically complex turbid inland waters in China, where Rrs(λ) are typically much higher than those used in Lee et al. (2014). Strong interdependence of Rrs(λ) between neighboring bands (≤ 10 nm interval) was confirmed, with Pearson correlation coefficient (PCC) mostly above 0.98. The scheme of Lee et al. (2014) for Rrs(λ) reconstruction with its original global parameterization worked well with this data set, while new parameterization showed improvement in reducing uncertainties in the reconstructed Rrs(λ). Mean absolute error (MAERrs(λi)) in the reconstructed Rrs(λ) was mostly < 0.0002 sr-1 between 400 and 700 nm, and mean relative error (MRERrs(λi)) was < 1% when the comparison was made between reconstructed and measured Rrs(λ) spectra. When Rrs(λ) at the MODIS bands were used to reconstruct the hyperspectral Rrs(λ), MAERrs(λi) was < 0.001 sr-1 and MRERrs(λi) was < 3%. When Rrs(λ) at the MERIS bands were used, MAERrs(λi) in the reconstructed hyperspectral Rrs(λ) was < 0.0004 sr-1 and MRERrs(λi) was < 1%. These results have significant implications for inversion algorithms to retrieve concentrations of phytoplankton pigments (e.g., chlorophyll-a or Chla, and phycocyanin or PC) and total suspended materials (TSM) as well as absorption coefficient of colored dissolved organic matter (CDOM), as some of the algorithms were developed from in situ Rrs(λ) data using spectral bands that may not exist on satellite sensors.
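The error metrics quoted above (MAE in sr-1, MRE as a percentage) compare a reconstructed hyperspectral curve against the measured one on the 5 nm grid. A minimal sketch, using piecewise-linear interpolation as a stand-in for the Lee et al. (2014) band-correlation scheme and a synthetic smooth spectrum in place of field data:

```python
import math

def interp(x, xs, ys):
    """Piecewise-linear interpolation between known bands -- a
    simple stand-in for the band-correlation reconstruction."""
    if x <= xs[0]:
        return ys[0]
    for i in range(1, len(xs)):
        if x <= xs[i]:
            w = (x - xs[i - 1]) / (xs[i] - xs[i - 1])
            return ys[i - 1] + w * (ys[i] - ys[i - 1])
    return ys[-1]

def mae_mre(measured, reconstructed):
    """Mean absolute error (sr^-1) and mean relative error of a
    reconstructed spectrum against the measured one."""
    n = len(measured)
    mae = sum(abs(m - r) for m, r in zip(measured, reconstructed)) / n
    mre = sum(abs(m - r) / m for m, r in zip(measured, reconstructed)) / n
    return mae, mre

# Hypothetical smooth "measured" Rrs on the 5 nm grid, 400-700 nm
wl = list(range(400, 705, 5))
measured = [0.002 + 0.001 * math.sin((w - 400) / 60.0) for w in wl]

# Subsample to 11 bands (30 nm spacing) and reconstruct the grid
bands = wl[::6]
band_vals = [measured[wl.index(b)] for b in bands]
recon = [interp(w, bands, band_vals) for w in wl]
mae, mre = mae_mre(measured, recon)
print(mae, mre)
```

For a smooth spectrum even this naive reconstruction yields small errors; the point of the Lee et al. scheme is that the strong inter-band correlation of real Rrs(λ) makes comparably small errors achievable from only 15 well-chosen bands.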
Scale dependency of regional climate modeling of current and future climate extremes in Germany
NASA Astrophysics Data System (ADS)
Tölle, Merja H.; Schefczyk, Lukas; Gutjahr, Oliver
2017-11-01
A warmer climate is projected for mid-Europe, with less precipitation in summer, but with intensified extremes of precipitation and near-surface temperature. However, the extent and magnitude of such changes are associated with considerable uncertainty because of the limitations of model resolution and parameterizations. Here, we present the results of convection-permitting regional climate model simulations for Germany integrated with the COSMO-CLM using a horizontal grid spacing of 1.3 km, and additional 4.5- and 7-km simulations with convection parameterized. Of particular interest is how the temperature and precipitation fields and their extremes depend on the horizontal resolution for current and future climate conditions. The spatial variability of precipitation increases with resolution because of more realistic orography and physical parameterizations, but values are overestimated in summer and over mountain ridges in all simulations compared to observations. The spatial variability of temperature is improved at a resolution of 1.3 km, but the results are cold-biased, especially in summer. The increase in resolution from 7/4.5 km to 1.3 km is accompanied by less future warming in summer by 1 °C. Modeled future precipitation extremes will be more severe, and temperature extremes will not exclusively increase with higher resolution. Although the differences between the resolutions considered (7/4.5 km and 1.3 km) are small, we find that the differences in the changes in extremes are large. High-resolution simulations require further studies, with effective parameterizations and tunings for different topographic regions. Impact models and assessment studies may benefit from such high-resolution model results, but should account for the impact of model resolution on model processes and climate change.
Objective calibration of regional climate models
NASA Astrophysics Data System (ADS)
Bellprat, O.; Kotlarski, S.; Lüthi, D.; SchäR, C.
2012-12-01
Climate models are subject to high parametric uncertainty induced by poorly confined model parameters of parameterized physical processes. Uncertain model parameters are typically calibrated in order to increase the agreement of the model with available observations. The common practice is to adjust uncertain model parameters manually, often referred to as expert tuning, which lacks objectivity and transparency in the use of observations. These shortcomings often hamper model inter-comparisons and hinder the implementation of new model parameterizations. Methods that would allow model parameters to be calibrated systematically are unfortunately often not applicable to state-of-the-art climate models, owing to the computational cost imposed by the high dimensionality and non-linearity of the problem. Here we present an approach to objectively calibrate a regional climate model, using reanalysis driven simulations and building upon a quadratic metamodel presented by Neelin et al. (2010) that serves as a computationally cheap surrogate of the model. Five model parameters originating from different parameterizations are selected for the optimization according to their influence on the model performance. The metamodel accurately estimates spatial averages of 2 m temperature, precipitation and total cloud cover, with an uncertainty of similar magnitude as the internal variability of the regional climate model. The non-linearities of the parameter perturbations are well captured, such that only a limited number of 20-50 simulations are needed to estimate optimal parameter settings. Parameter interactions are small, which allows the number of simulations to be reduced further. In comparison to an ensemble of the same model which has undergone expert tuning, the calibration yields similar optimal model configurations, while yielding an additional reduction of the model error.
The performance range captured is much wider than that sampled with the expert-tuned ensemble, and the presented methodology is effective and objective. It is argued that objective calibration is an attractive tool and could become standard procedure after introducing new model implementations, or after a spatial transfer of a regional climate model. Objective calibration of parameterizations with regional models could also serve as a strategy toward improving parameterization packages of global climate models.
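The metamodel idea above (fit a cheap quadratic surrogate to a handful of model-error evaluations, then optimize the surrogate instead of the climate model) can be sketched in one parameter. Neelin et al. (2010) use a multi-parameter version with interaction terms; the parameter values and "model error" numbers here are hypothetical:

```python
def fit_quadratic(ps, errs):
    """Least-squares fit of err ~ a + b*p + c*p^2 via the normal
    equations -- a one-parameter analogue of the quadratic
    metamodel used as a cheap surrogate for the full model."""
    rows = [(1.0, p, p * p) for p in ps]
    xtx = [[sum(r[i] * r[j] for r in rows) for j in range(3)]
           for i in range(3)]
    xty = [sum(r[i] * e for r, e in zip(rows, errs)) for i in range(3)]
    # Solve the 3x3 system by Gaussian elimination with pivoting
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(xtx[r][col]))
        xtx[col], xtx[piv] = xtx[piv], xtx[col]
        xty[col], xty[piv] = xty[piv], xty[col]
        for r in range(col + 1, 3):
            f = xtx[r][col] / xtx[col][col]
            for cc in range(col, 3):
                xtx[r][cc] -= f * xtx[col][cc]
            xty[r] -= f * xty[col]
    coef = [0.0, 0.0, 0.0]
    for r in (2, 1, 0):  # back substitution
        coef[r] = (xty[r] - sum(xtx[r][cc] * coef[cc]
                                for cc in (1, 2) if cc > r)) / xtx[r][r]
    return coef  # a, b, c

# Hypothetical "model error" at a handful of parameter values
ps = [0.0, 0.5, 1.0, 1.5, 2.0]
errs = [2.0, 1.2, 1.0, 1.3, 2.1]  # roughly quadratic, minimum near 1
a, b, c = fit_quadratic(ps, errs)
p_opt = -b / (2 * c)  # vertex of the fitted parabola
print(round(p_opt, 2))
```

Only the five error evaluations require running the expensive model; the surrogate's minimum is then found analytically, which is why 20-50 simulations suffice for five parameters in the paper.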
Kooperman, Gabriel J.; Pritchard, Michael S.; O'Brien, Travis A.; ...
2018-04-01
Deficiencies in the parameterizations of convection used in global climate models often lead to a distorted representation of the simulated rainfall intensity distribution (i.e., too much rainfall from weak rain rates). While encouraging improvements in high percentile rainfall intensity have been found as the horizontal resolution of the Community Atmosphere Model is increased to ~25 km, we demonstrate no corresponding improvement in the moderate rain rates that generate the majority of accumulated rainfall. Using a statistical framework designed to emphasize links between precipitation intensity and accumulated rainfall beyond just the frequency distribution, we show that CAM cannot realistically simulate moderate rain rates, and cannot capture their intensification with climate change, even as resolution is increased. However, by separating the parameterized convective and large-scale resolved contributions to total rainfall, we find that the intensity, geographic pattern, and climate change response of CAM's large-scale rain rates are more consistent with observations (TRMM 3B42), superparameterization, and theoretical expectations, despite issues with parameterized convection. Increasing CAM's horizontal resolution does improve the representation of total rainfall intensity, but not due to changes in the intensity of large-scale rain rates, which are surprisingly insensitive to horizontal resolution. Rather, improvements occur through an increase in the relative contribution of the large-scale component to the total amount of accumulated rainfall. Analysis of sensitivities to convective timescale and entrainment rate confirms the importance of these parameters in the possible development of scale-aware parameterizations, but also reveals unrecognized trade-offs from the entanglement of precipitation frequency and total amount.
NASA Technical Reports Server (NTRS)
Considine, David B.; Douglass, Anne R.; Jackman, Charles H.
1994-01-01
A parameterization of Type 1 and 2 polar stratospheric cloud (PSC) formation is presented which is appropriate for use in two-dimensional (2-D) photochemical models of the stratosphere. The calculation of PSC frequency of occurrence and surface area density uses climatological temperature probability distributions obtained from National Meteorological Center data to avoid using zonal mean temperatures, which are not good predictors of PSC behavior. The parameterization does not attempt to model the microphysics of PSCs. The parameterization predicts changes in PSC formation and heterogeneous processing due to perturbations of stratospheric trace constituents. It is therefore useful in assessing the potential effects of a fleet of stratospheric aircraft (high-speed civil transports, or HSCTs) on stratospheric composition. The model-calculated frequency of PSC occurrence agrees well with a climatology based on stratospheric aerosol measurement (SAM) 2 observations. PSCs are predicted to occur in the tropics. Their vertical range is narrow, however, and their impact on model O3 fields is small. When PSC and sulfate aerosol heterogeneous processes are included in the model calculations, the O3 change for 1980 - 1990 is in substantially better agreement with the total ozone mapping spectrometer (TOMS)-derived O3 trend than otherwise. The overall changes in model O3 response to standard HSCT perturbation scenarios produced by the parameterization are small and tend to decrease the model sensitivity to the HSCT perturbation. However, in the southern hemisphere spring a significant increase in O3 sensitivity to HSCT perturbations is found. At this location and time, increased PSC formation leads to increased levels of active chlorine, which produce the O3 decreases.
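The core of such a parameterization (using a temperature probability distribution rather than the zonal mean to estimate PSC occurrence) reduces to evaluating the probability that temperature falls below a formation threshold. A sketch assuming a Gaussian climatological PDF, with illustrative polar winter values that are not taken from the paper:

```python
import math

def psc_frequency(t_mean, t_sigma, t_threshold):
    """Probability that temperature falls below a PSC formation
    threshold, assuming a Gaussian climatological temperature PDF.
    This captures why zonal-mean temperature alone is a poor
    predictor: PSCs can still form when the mean is above
    threshold, because the cold tail of the PDF dips below it."""
    z = (t_threshold - t_mean) / (t_sigma * math.sqrt(2.0))
    return 0.5 * (1.0 + math.erf(z))  # Gaussian CDF at threshold

# Hypothetical values: mean 198 K, sigma 4 K, and a Type 1 (NAT)
# condensation threshold of 195 K
freq = psc_frequency(198.0, 4.0, 195.0)
print(round(freq, 3))  # nonzero although the mean exceeds threshold
```

A zonal-mean scheme would predict zero PSC occurrence in this case; the PDF-based estimate yields a frequency of roughly a quarter of the time, which can then be mapped to surface area density for the heterogeneous chemistry.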
NASA Astrophysics Data System (ADS)
Savre, J.; Ekman, A. M. L.
2015-05-01
A new parameterization for heterogeneous ice nucleation constrained by laboratory data and based on classical nucleation theory is introduced. Key features of the parameterization include the following: a consistent and modular modeling framework for treating condensation/immersion and deposition freezing, the possibility to consider various potential ice nucleating particle types (e.g., dust, black carbon, and bacteria), and the possibility to account for an aerosol size distribution. The ice nucleating ability of each aerosol type is described using a contact angle (θ) probability density function (PDF). A new modeling strategy is described to allow the θ PDF to evolve in time so that the most efficient ice nuclei (associated with the lowest θ values) are progressively removed as they nucleate ice. A computationally efficient quasi Monte Carlo method is used to integrate the computed ice nucleation rates over both size and contact angle distributions. The parameterization is employed in a parcel model, forced by an ensemble of Lagrangian trajectories extracted from a three-dimensional simulation of a springtime low-level Arctic mixed-phase cloud, in order to evaluate the accuracy and convergence of the method using different settings. The same model setup is then employed to examine the importance of various parameters for the simulated ice production. Modeling the time evolution of the θ PDF is found to be particularly crucial; assuming a time-independent θ PDF significantly overestimates the ice nucleation rates. It is stressed that the capacity of black carbon (BC) to form ice in the condensation/immersion freezing mode is highly uncertain, in particular at temperatures warmer than -20°C. In its current version, the parameterization most likely overestimates ice initiation by BC.
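The integration of nucleation rates over a contact-angle PDF can be sketched as follows. The rate expression and its constants are illustrative, and plain Monte Carlo replaces the paper's more efficient quasi Monte Carlo scheme; only the geometric factor f(θ) is the standard form from classical nucleation theory:

```python
import math, random

random.seed(1)

def shape_factor(theta):
    """Heterogeneous-nucleation geometric factor from classical
    nucleation theory: f(theta) = (2+cos)(1-cos)^2/4. f -> 0 for
    efficient nuclei (theta -> 0) and f -> 1 as theta -> 180 deg."""
    c = math.cos(theta)
    return (2.0 + c) * (1.0 - c) ** 2 / 4.0

def nucleation_rate(theta, j0=1e6, dg_hom=40.0):
    """Schematic CNT rate J = j0 * exp(-dg_hom * f(theta));
    j0 and dg_hom are illustrative, not laboratory-derived."""
    return j0 * math.exp(-dg_hom * shape_factor(theta))

def mean_rate(mu, sigma, n=20000):
    """Average J over a Gaussian contact-angle PDF (radians),
    integrated by plain Monte Carlo sampling."""
    total = 0.0
    for _ in range(n):
        theta = min(max(random.gauss(mu, sigma), 1e-6), math.pi)
        total += nucleation_rate(theta)
    return total / n

# A broad PDF (containing low-theta, efficient nuclei) versus a
# narrow one centered at the same mean contact angle
broad = mean_rate(math.radians(90), math.radians(30))
narrow = mean_rate(math.radians(90), math.radians(5))
print(broad > narrow)  # → True: the low-theta tail dominates
```

The dominance of the low-θ tail is exactly why the paper's time evolution of the θ PDF matters: once the most efficient nuclei have frozen and been removed, a fixed PDF keeps re-sampling them and overestimates the rate.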
Noble, Erik; Druyan, Leonard M; Fulakeza, Matthew
2016-01-01
This paper evaluates the performance of the Weather Research and Forecasting (WRF) model as a regional-atmospheric model over West Africa. It tests WRF sensitivity to 64 configurations of alternative parameterizations in a series of 104 twelve-day September simulations during eleven consecutive years, 2000-2010. The 64 configurations combine WRF parameterizations of cumulus convection, radiation, surface-hydrology, and PBL. Simulated daily and total precipitation results are validated against Global Precipitation Climatology Project (GPCP) and Tropical Rainfall Measuring Mission (TRMM) data. Particular attention is given to westward-propagating precipitation maxima associated with African Easterly Waves (AEWs). A wide range of daily precipitation validation scores demonstrates the influence of alternative parameterizations. The best WRF performers achieve time-longitude correlations (against GPCP) of between 0.35 and 0.42 and spatiotemporal variability amplitudes only slightly higher than observed estimates. A parallel simulation by the benchmark Regional Model-v.3 achieves a higher correlation (0.52) and realistic spatiotemporal variability amplitudes. The largest favorable impact on WRF precipitation validation is achieved by selecting the Grell-Devenyi convection scheme, resulting in higher correlations against observations than using the Kain-Fritsch convection scheme. Other parameterizations have less obvious impact. Validation statistics for optimized WRF configurations simulating the parallel period during 2000-2010 are more favorable for 2005, 2006, and 2008 than for other years. The selection of some of the same WRF configurations as high scorers in both circulation and precipitation validations supports the notion that simulations of West African daily precipitation benefit from skillful simulations of associated AEW vorticity centers and that simulations of AEWs would benefit from skillful simulations of convective precipitation.
NASA Astrophysics Data System (ADS)
Popova, E. E.; Coward, A. C.; Nurser, G. A.; de Cuevas, B.; Fasham, M. J. R.; Anderson, T. R.
2006-12-01
A global general circulation model coupled to a simple six-compartment ecosystem model is used to study the extent to which global variability in primary and export production can be realistically predicted on the basis of advanced parameterizations of upper mixed layer physics, without recourse to introducing extra complexity in model biology. The "K profile parameterization" (KPP) scheme employed, combined with 6-hourly external forcing, is able to capture short-term periodic and episodic events such as diurnal cycling and storm-induced deepening. The model realistically reproduces various features of global ecosystem dynamics that have been problematic in previous global modelling studies, using a single generic parameter set. The realistic simulation of deep convection in the North Atlantic, and lack of it in the North Pacific and Southern Oceans, leads to good predictions of chlorophyll and primary production in these contrasting areas. Realistic levels of primary production are predicted in the oligotrophic gyres due to high frequency external forcing of the upper mixed layer (accompanying paper Popova et al., 2006) and novel parameterizations of zooplankton excretion. Good agreement is shown between model and observations at various JGOFS time series sites: BATS, KERFIX, Papa and HOT. One exception is the northern North Atlantic where lower grazing rates are needed, perhaps related to the dominance of mesozooplankton there. The model is therefore not globally robust in the sense that additional parameterizations are needed to realistically simulate ecosystem dynamics in the North Atlantic. Nevertheless, the work emphasises the need to pay particular attention to the parameterization of mixed layer physics in global ocean ecosystem modelling as a prerequisite to increasing the complexity of ecosystem models.
NASA Astrophysics Data System (ADS)
Astitha, M.; Abdel Kader, M.; Pozzer, A.; Lelieveld, J.
2012-04-01
Atmospheric particulate matter, and more specifically desert dust, has been the topic of numerous research studies in the past due to the wide range of impacts on the environment and climate and the uncertainty in characterizing and quantifying these impacts on a global scale. In this work we present two physical parameterizations of desert dust production that have been incorporated in the atmospheric chemistry general circulation model EMAC (ECHAM5/MESSy2.41 Atmospheric Chemistry). The aim of this work is to assess the impact of the two physical parameterizations on the global distribution of desert dust and highlight the advantages and disadvantages of using either technique. The dust concentration and deposition have been evaluated using the AEROCOM dust dataset for the year 2000, and data from the MODIS and MISR satellites as well as sun-photometer data from the AERONET network were used to compare the modelled aerosol optical depth with observations. The implementation of the two parameterizations and the simulations using relatively high spatial resolution (T106, ~1.1°) has highlighted the large spatial heterogeneity of the dust emission sources as well as the importance of the input parameters (soil size and texture, vegetation, surface wind speed). Also, sensitivity simulations with the nudging option using reanalysis data from ECMWF and without nudging have shown remarkable differences for some areas. Both parameterizations have revealed the difficulty of simulating all arid regions with the same assumptions and mechanisms. Depending on the arid region, each emission scheme performs more or less satisfactorily, which leads to the necessity of treating each desert differently. Even though this is a difficult task to accomplish in a global model, some recommendations are given and ideas for future improvements.
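Dust emission schemes of the kind compared here typically start from a threshold-friction-velocity saltation flux, which is where the surface wind speed, soil texture, and vegetation inputs mentioned above enter. A sketch of one widely used form (after White, 1979), with illustrative constants rather than either scheme's actual tuning:

```python
RHO_AIR = 1.23   # air density, kg m^-3
G = 9.81         # gravitational acceleration, m s^-2

def saltation_flux(ustar, ustar_t, c=2.61):
    """Horizontal saltation flux (kg m^-1 s^-1) after White (1979):
    G = c * (rho/g) * u*^3 * (1 + u*t/u*) * (1 - u*t^2/u*^2),
    zero below threshold. Soil texture and vegetation enter through
    the threshold friction velocity u*t."""
    if ustar <= ustar_t:
        return 0.0
    r = ustar_t / ustar
    return c * RHO_AIR / G * ustar**3 * (1.0 + r) * (1.0 - r * r)

# Emission is strongly nonlinear in wind: doubling u* from just
# above threshold raises the flux far more than the u*^3 factor alone
low = saltation_flux(0.45, 0.40)
high = saltation_flux(0.90, 0.40)
print(high / low)
```

This cubic-plus-threshold behavior is why the high-frequency surface wind field, and hence the nudging choice, matters so much in the sensitivity simulations: small differences near the threshold translate into large differences in emitted dust.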
The Role of Moist Processes in the Intrinsic Predictability of Indian Ocean Cyclones
DOE Office of Scientific and Technical Information (OSTI.GOV)
Taraphdar, Sourav; Mukhopadhyay, P.; Leung, Lai-Yung R.
The role of moist processes and the possibility of error cascade from cloud-scale processes affecting the intrinsic predictable time scale of a high-resolution convection-permitting model within the environment of tropical cyclones (TCs) over the Indian region are investigated. Consistent with past studies of extra-tropical cyclones, it is demonstrated that moist processes play a major role in forecast error growth, which may ultimately limit the intrinsic predictability of the TCs. Small errors in the initial conditions may grow rapidly and cascade from smaller scales to the larger scales through strong diabatic heating and nonlinearities associated with moist convection. Results from a suite of twin perturbation experiments for four tropical cyclones suggest that the error growth is significantly higher in cloud-permitting simulations at 3.3 km resolution than in simulations at 3.3 km and 10 km resolution with parameterized convection. Convective parameterizations with prescribed convective time scales typically longer than the model time step allow the effects of microphysical tendencies to average out, so convection responds to a smoother dynamical forcing. Without convective parameterizations, the finer-scale instabilities resolved at 3.3 km resolution and the stronger vertical motion that results from the cloud microphysical parameterizations removing super-saturation at each model time step can ultimately feed the error growth in convection-permitting simulations. This implies that careful considerations and/or improvements in cloud parameterizations are needed if numerical predictions are to be improved through increased model resolution. Rapid upscale error growth from convective scales may ultimately limit the intrinsic mesoscale predictability of the TCs, which further supports the need for probabilistic forecasts of these events, even at the mesoscales.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kooperman, Gabriel J.; Pritchard, Michael S.; O'Brien, Travis A.
Majda, Andrew J; Abramov, Rafail; Gershgorin, Boris
2010-01-12
Climate change science focuses on predicting the coarse-grained, planetary-scale, long-time changes in the climate system due to either changes in external forcing or internal variability, such as the impact of increased carbon dioxide. The predictions of climate change science are carried out through comprehensive computational atmospheric and oceanic simulation models, which necessarily parameterize physical features such as clouds, sea ice cover, etc. Recently, it has been suggested that there is irreducible imprecision in such climate models that manifests itself as structural instability in climate statistics and that can significantly hamper the skill of computer models for climate change. A systematic approach to dealing with this irreducible imprecision is advocated through algorithms based on the Fluctuation Dissipation Theorem (FDT). There are important practical and computational advantages for climate change science when a skillful FDT algorithm is established. The FDT response operator can be utilized directly for multiple climate change scenarios, multiple changes in forcing, and other parameters, such as damping, and for inverse modeling, without the need to run the complex climate model in each individual case. The high skill of FDT in predicting climate change, despite structural instability, is developed in an unambiguous fashion using mathematical theory as guidelines in three different test models: a generic class of analytical models mimicking the dynamical core of the computer climate models, reduced stochastic models for low-frequency variability, and models with a significant new type of irreducible imprecision involving many fast, unstable modes.
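For orientation, the FDT response operator mentioned above has, in the quasi-Gaussian approximation, a standard textbook form (notation is ours, not taken from this abstract): the change in any climate statistic under a small forcing perturbation is estimated from equilibrium correlation functions alone.

```latex
\delta\langle A\rangle \;=\; \mathcal{R}\,\delta f,
\qquad
\mathcal{R} \;=\; \int_0^\infty
\big\langle A\big(u(t)\big)\, u(0)^{\mathsf T} \big\rangle\, C^{-1}\, dt,
\qquad
C \;=\; \big\langle u\, u^{\mathsf T} \big\rangle ,
```

where $u$ is the model state and $C$ its equilibrium covariance. The practical advantage cited in the abstract follows directly: $\mathcal{R}$ is computed once from the unperturbed climate and then reused for many forcing scenarios.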
NASA Technical Reports Server (NTRS)
Andrews, Arlyn; Kawa, Randy; Zhu, Zhengxin; Burris, John; Abshire, Jim
2004-01-01
A detailed mechanistic understanding of the sources and sinks of CO2 will be required to reliably predict future CO2 levels and climate. A commonly used technique for deriving information about CO2 exchange with surface reservoirs is to solve an 'inverse problem', where CO2 observations are used with an atmospheric transport model to find the optimal distribution of sources and sinks. Synthesis inversion methods are powerful tools for addressing this question, but the results are disturbingly sensitive to the details of the calculation. Studies done using different atmospheric transport models and combinations of surface station data have produced substantially different distributions of surface fluxes. Adjoint methods are now being developed that will more effectively incorporate diverse datasets in estimates of surface fluxes of CO2. In an adjoint framework, it will be possible to combine CO2 concentration data from long-term surface and aircraft monitoring stations with data from intensive field campaigns and with proposed future satellite observations. We have recently developed an adjoint for the GSFC 3-D Parameterized Chemistry and Transport Model (PCTM). Here, we will present results from a PCTM Adjoint study comparing the sampling footprints of tall tower, aircraft and potential future lidar observations of CO2. The vertical resolution and extent of the profiles and the observation frequency will be considered for several sites in North America.
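The synthesis-inversion step described above amounts to a Bayesian least-squares problem: find fluxes that best reconcile observed concentrations with a linear transport operator and a prior. A toy version (the transport matrix, covariances, and flux values are invented for illustration):

```python
import numpy as np

# y = H s + noise: concentrations y at 3 stations, 2 unknown surface fluxes s.
H = np.array([[1.0, 0.2],
              [0.3, 1.0],
              [0.5, 0.5]])        # toy transport operator
s_true = np.array([2.0, -1.0])    # a source and a sink
y = H @ s_true                    # noise-free data for the illustration

s_prior = np.zeros(2)
B_inv = np.eye(2) * 0.01          # weak prior (inverse covariance)
R_inv = np.eye(3) * 100.0         # accurate observations (inverse covariance)

# Standard Bayesian least-squares (synthesis inversion) solution:
# s_hat = (H^T R^-1 H + B^-1)^-1 (H^T R^-1 y + B^-1 s_prior)
A = H.T @ R_inv @ H + B_inv
s_hat = np.linalg.solve(A, H.T @ R_inv @ y + B_inv @ s_prior)
```

The sensitivity to "details of the calculation" noted in the abstract enters through the choices of H, B, and R, each of which shifts s_hat.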
Inverting Monotonic Nonlinearities by Entropy Maximization
López-de-Ipiña Pena, Karmele; Caiafa, Cesar F.
2016-01-01
This paper proposes a new method for blind inversion of a monotonic nonlinear map applied to a sum of random variables. Such mixtures of random variables are found in source separation and Wiener system inversion problems, for example. The importance of the proposed method lies in the fact that it permits decoupling the estimation of the nonlinear part (nonlinear compensation) from the estimation of the linear one (source separation matrix or deconvolution filter), which can then be solved by applying any convenient linear algorithm. Our new nonlinear compensation algorithm, the MaxEnt algorithm, generalizes the idea of Gaussianization of the observation by maximizing its entropy instead. We developed two versions of the algorithm, based on either a polynomial or a neural-network parameterization of the nonlinear function. We provide a sufficient condition on the nonlinear function and the probability distribution that guarantees that the MaxEnt method succeeds in compensating the distortion. Through an extensive set of simulations, MaxEnt is compared with existing algorithms for blind approximation of nonlinear maps. Experiments show that MaxEnt is able to successfully compensate monotonic distortions, outperforming other methods in terms of the obtained signal-to-noise ratio in many important cases, for example when the number of variables in a mixture is small. Besides its ability to compensate nonlinearities, MaxEnt is very robust, i.e., it shows small variability in the results. PMID:27780261
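The Gaussianization idea that MaxEnt generalizes can be reduced to a toy: a monotonic distortion preserves ranks, so mapping the observation's empirical quantiles onto Gaussian quantiles undoes the distortion up to a monotone, nearly affine factor. This is our illustration of the baseline idea, not the authors' MaxEnt code.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
s = rng.standard_normal(n)               # "sum of sources" (Gaussian toy)
x = np.tanh(0.8 * s)                     # unknown monotonic distortion

# Quantile matching: replace each observation by the Gaussian quantile
# with the same rank.  Monotonicity means rank(x) == rank(s).
ranks = np.argsort(np.argsort(x))        # rank of each observation
ref = np.sort(rng.standard_normal(n))    # reference Gaussian quantiles
s_hat = ref[ranks]                       # Gaussianized (compensated) signal

r = np.corrcoef(s, s_hat)[0, 1]          # close to 1 if compensation worked
```

MaxEnt replaces the Gaussian target with an entropy-maximization criterion, which is what lets it handle non-Gaussian mixtures of few variables.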
Inverting Monotonic Nonlinearities by Entropy Maximization.
Solé-Casals, Jordi; López-de-Ipiña Pena, Karmele; Caiafa, Cesar F
2016-01-01
Inter-Annual Variability of Soil Moisture Stress Function in the Wheat Field
NASA Astrophysics Data System (ADS)
Akuraju, V. R.; Ryu, D.; George, B.; Ryu, Y.; Dassanayake, K. B.
2014-12-01
Root-zone soil moisture content is a key variable that controls the exchange of water and energy fluxes between land and atmosphere. In soil-vegetation-atmosphere transfer (SVAT) schemes, the influence of root-zone soil moisture on evapotranspiration (ET) is parameterized by the soil moisture stress function (SSF). The dependence of the ratio of actual to potential ET (fPET), or of the evaporative fraction, on root-zone soil moisture via the SSF can also be used inversely to estimate root-zone soil moisture when fPET is estimated from remotely sensed land surface states. In this work we present fPET versus available soil water (ASW) in the root zone observed at experimental farm sites in Victoria, Australia in 2012-2013. At the wheat field site, fPET vs. ASW exhibited distinct features for different soil depths, net radiation, and crop growth stages. Interestingly, the SSF in the wheat field presented contrasting shapes for the two cropping years of 2012 and 2013. We argue that the different temporal patterns of rainfall (and resulting soil moisture) during the growing seasons of 2012 and 2013 are responsible for the distinctive SSFs. The SSF of the wheat field was simulated by the Agricultural Production Systems sIMulator (APSIM), which was able to reproduce the observed fPET vs. ASW. We discuss the implications of our findings for existing modeling and (inverse) remote sensing approaches relying on the SSF, and for alternative growth-stage-dependent SSFs.
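A common form of the SSF in SVAT schemes is a simple piecewise-linear ramp: fPET rises linearly with available soil water and saturates at 1. A minimal sketch (the critical ASW value below is invented, purely for illustration):

```python
def soil_stress_function(asw, asw_crit):
    """Piecewise-linear soil moisture stress function: fPET rises linearly
    with available soil water (ASW) and saturates at 1 above asw_crit."""
    return min(1.0, max(0.0, asw / asw_crit))

# fPET (= actual ET / potential ET) for a few soil-water levels
curve = [soil_stress_function(a, asw_crit=0.6) for a in (0.0, 0.3, 0.6, 0.9)]
```

The inter-annual contrast reported above amounts to this curve changing shape (e.g., a different effective asw_crit) between the two cropping years.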
NASA Technical Reports Server (NTRS)
Reichardt, J.; Reichardt, S.; Yang, P.; McGee, T. J.; Bhartia, P. K. (Technical Monitor)
2001-01-01
A retrieval algorithm has been developed for the microphysical analysis of polar stratospheric cloud (PSC) optical data obtained using lidar instrumentation. The parameterization scheme of the PSC microphysical properties allows for the coexistence of up to three different particle types with size-dependent shapes. The finite difference time domain (FDTD) method has been used to calculate optical properties of particles with maximum dimensions equal to or less than 2 μm and with shapes that can be considered more representative of PSCs on the scale of individual crystals than the commonly assumed spheroids; specifically, these are irregular and hexagonal crystals. Selection of the optical parameters that are input to the inversion algorithm is based on a potential data set such as that gathered by two of the lidars on board the NASA DC-8 during the Stratospheric Aerosol and Gas Experiment (SAGE) III Ozone Loss and Validation Experiment (SOLVE) campaign in winter 1999/2000: the Airborne Raman Ozone and Temperature Lidar (AROTEL) and the NASA Langley Differential Absorption Lidar (DIAL). The microphysical retrieval algorithm has been applied to study how particle shape assumptions affect the inversion of lidar data measured in lee-wave PSCs. The model simulations show that under the assumption of spheroidal particle shapes, PSC surface and volume density are systematically smaller than the FDTD-based values by, respectively, approximately 10-30% and approximately 5-23%.
Evaluation of Surface Flux Parameterizations with Long-Term ARM Observations
Liu, Gang; Liu, Yangang; Endo, Satoshi
2013-02-01
Surface momentum, sensible heat, and latent heat fluxes are critical for atmospheric processes such as clouds and precipitation, and are parameterized in a variety of models ranging from cloud-resolving models to large-scale weather and climate models. However, direct evaluation of the parameterization schemes for these surface fluxes is rare due to limited observations. This study takes advantage of the long-term observations of surface fluxes collected at the Southern Great Plains site by the Department of Energy Atmospheric Radiation Measurement program to evaluate the six surface flux parameterization schemes commonly used in the Weather Research and Forecasting (WRF) model and three U.S. general circulation models (GCMs). The unprecedented 7-yr-long measurements by the eddy correlation (EC) and energy balance Bowen ratio (EBBR) methods permit statistical evaluation of all six parameterizations under a variety of stability conditions, diurnal cycles, and seasonal variations. The statistical analyses show that the momentum flux parameterization agrees best with the EC observations, followed by latent heat flux, sensible heat flux, and evaporation ratio/Bowen ratio. The overall performance of the parameterizations depends on atmospheric stability, being best under neutral stratification and deteriorating toward both more stable and more unstable conditions. Further diagnostic analysis reveals that in addition to the parameterization schemes themselves, the discrepancies between observed and parameterized sensible and latent heat fluxes may stem from inadequate use of input variables such as surface temperature, moisture availability, and roughness length. The results demonstrate the need for improving the land surface models and measurements of surface properties, which would permit the evaluation of full land surface models.
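The schemes being evaluated all build on bulk-aerodynamic formulas that relate a surface flux to a transfer coefficient, wind speed, and a surface-air gradient. A minimal example for the sensible heat flux (the coefficient and meteorological values below are ours, not from the ARM analysis):

```python
def sensible_heat_flux(rho, cp, ch, wind, t_sfc, t_air):
    """Bulk-aerodynamic sensible heat flux, H = rho * cp * C_H * U * (Ts - Ta),
    in W m-2.  C_H is the dimensionless bulk transfer coefficient, which in
    real schemes depends on stability and roughness length."""
    return rho * cp * ch * wind * (t_sfc - t_air)

# Unstable daytime case: surface 5 K warmer than the air
H = sensible_heat_flux(rho=1.2, cp=1004.0, ch=1.5e-3, wind=5.0,
                       t_sfc=303.0, t_air=298.0)
# H is positive (upward) here
```

The diagnostic point in the abstract maps directly onto this formula: errors in the inputs (t_sfc, moisture availability, roughness length behind ch) degrade the parameterized flux even when the scheme itself is sound.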
Ionosphere Profile Estimation Using Ionosonde & GPS Data in an Inverse Refraction Calculation
NASA Astrophysics Data System (ADS)
Psiaki, M. L.
2014-12-01
A method has been developed to assimilate ionosonde virtual heights and GPS slant TEC data to estimate the parameters of a local ionosphere model, including estimates of the topside and of latitude and longitude variations. This effort seeks to better assimilate a variety of remote sensing data in order to characterize local (and eventually regional and global) ionosphere electron density profiles. The core calculations involve a forward refractive ray-tracing solution and a nonlinear optimal estimation algorithm that inverts the forward model. The ray-tracing calculations solve a nonlinear two-point boundary value problem for the curved ionosonde or GPS ray path through a parameterized electron density profile. It implements a full 3D solution that can handle the case of a tilted ionosphere. These calculations use Hamiltonian equivalents of the Appleton-Hartree magneto-plasma refraction index model. The current ionosphere parameterization is a modified Booker profile. It has been augmented to include latitude and longitude dependencies. The forward ray-tracing solution yields a given signal's group delay and beat carrier phase observables. An auxiliary set of boundary value problem solutions determines the sensitivities of the ray paths and observables with respect to the parameters of the augmented Booker profile. The nonlinear estimation algorithm compares the measured ionosonde virtual-altitude observables and GPS slant-TEC observables to the corresponding values from the forward refraction model. It uses the parameter sensitivities of the model to iteratively improve its parameter estimates in a way that reduces the residual errors between the measurements and their modeled values. This method has been applied to data from HAARP in Gakona, AK and has produced good TEC and virtual height fits.
It has been extended to characterize electron density perturbations caused by HAARP heating experiments through the use of GPS slant TEC data for an LOS through the heated zone. The next planned extension of the method is to estimate the parameters of a regional ionosphere profile. The input observables will be slant TEC from an array of GPS receivers and group delay and carrier phase observables from an array of high-frequency beacons. The beacon array will function as a sort of multi-static ionosonde.
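The estimation loop described above (compare modeled observables to measurements, use parameter sensitivities to improve the estimate) is a Gauss-Newton iteration. A toy version with an invented forward model standing in for the ray-tracing solution (all names and values are illustrative):

```python
import numpy as np

def forward(p):
    """Toy forward model: modeled observables for parameters p = (a, b)."""
    a, b = p
    x = np.linspace(0.0, 1.0, 20)
    return a * np.exp(b * x)

def sensitivities(p, eps=1e-6):
    """Finite-difference Jacobian of the observables w.r.t. the parameters
    (the role played by the auxiliary boundary-value solutions above)."""
    f0 = forward(p)
    J = np.empty((f0.size, len(p)))
    for j in range(len(p)):
        dp = np.array(p, dtype=float)
        dp[j] += eps
        J[:, j] = (forward(dp) - f0) / eps
    return J

p_true = np.array([2.0, -1.5])
y = forward(p_true)                    # "measured" observables (noise-free toy)
p = np.array([1.8, -1.2])              # initial parameter estimate
for _ in range(15):
    r = y - forward(p)                 # residuals: measured minus modeled
    J = sensitivities(p)
    p = p + np.linalg.lstsq(J, r, rcond=None)[0]   # Gauss-Newton update
```

Each pass reduces the residual norm; with noise-free data the iterate converges to the true parameters.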
NASA Astrophysics Data System (ADS)
Gunn de Rosas, C. L.
2013-12-01
The Soufrière Hills Volcano, Montserrat (SHV) is an active, mainly andesitic and well-studied stratovolcano situated at the northern end of the Lesser Antilles Arc subduction zone in the Caribbean Sea. The goal of our research is to create a high-resolution 3D subsurface model of the shallow and deeper aspects of the magma storage and plumbing system at SHV. Our model will integrate inversions using continuous and campaign geodetic observations at SHV from 1995 to the present, as well as local seismic records taken at various unrest intervals, to construct a best-fit geometry, point-source pressure, and inflation rate and magnitude. We will also incorporate heterogeneous media in the crust and use the most contemporary understanding of deep crustal- or even mantle-depth 'hot-zone' genesis and chemical evolution of silicic and intermediate magmas to inform the character of the deep edifice influx. Our heat transfer model will be constructed with a modified 'thin shell' enveloping the magma chamber to simulate the insulating or conducting influence of heat-altered chamber boundary conditions. The final forward model should elucidate observational data preceding and following unrest events, the behavioral suite of magma transport in the subsurface environment, and the feedback mechanisms that may contribute to eruption triggering. Preliminary hypotheses suggest that wet, low-viscosity residual melts derived from 'hot zones' will ascend rapidly to shallower stall-points and that their products (eventually erupted lavas as well as stalled plutonic masses) will experience and display two discrete periods of shallow evolution: a rapid depressurization-driven crystallization event followed by a slower, conduction-controlled heat transfer and cooling crystallization. These events have particular implications for shallow magma behaviors, notably inflation, compressibility and pressure values. Visualization of the model with its inversion constraints will be carried out with COMSOL.
Conclusions about the subsurface behavioral suite at SHV will have high applicability to other silicic and intermediate volcanic edifices and may aid in the hazard mitigation associated with volcanic unrest.
Extensions and applications of a second-order landsurface parameterization
NASA Technical Reports Server (NTRS)
Andreou, S. A.; Eagleson, P. S.
1983-01-01
Extensions and applications of a second-order land surface parameterization, proposed by Andreou and Eagleson, are developed. Procedures for evaluating the near-surface storage depth used in one-cell land surface parameterizations are suggested and tested using the model. Sensitivity analysis with respect to the key soil parameters is performed. A case study involving comparison with an "exact" numerical model and another simplified parameterization, under very dry climatic conditions and for two different soil types, is also incorporated.
NASA Technical Reports Server (NTRS)
Stauffer, David R.; Seaman, Nelson L.; Munoz, Ricardo C.
2000-01-01
The objective of this investigation was to study the role of shallow convection in the regional water cycle of the Mississippi and Little Washita Basins using a 3-D mesoscale model, the PSU/NCAR MM5. The underlying premise of the project was that current modeling of regional-scale climate and moisture cycles over the continents is deficient without adequate treatment of shallow convection. It was hypothesized that an improved treatment of the regional water cycle can be achieved by using a 3-D mesoscale numerical model having a detailed land-surface parameterization, an advanced boundary-layer parameterization, and a more complete shallow convection parameterization than are available in most current models. The methodology was based on the application in the MM5 of new or recently improved parameterizations covering these three physical processes. Therefore, the work plan focused on integrating, improving, and testing these parameterizations in the MM5 and applying them to study water-cycle processes over the Southern Great Plains (SGP): (1) the Parameterization for Land-Atmosphere-Cloud Exchange (PLACE) described by Wetzel and Boone; (2) the 1.5-order turbulent kinetic energy (TKE)-predicting scheme of Shafran et al.; and (3) the hybrid-closure sub-grid shallow convection parameterization of Deng. Each of these schemes has been tested extensively through this study, and the latter two have been improved significantly to extend their capabilities.
NASA Technical Reports Server (NTRS)
Elsaesser, Gregory
2015-01-01
Cold pools are increasingly being recognized as important players in the evolution of both shallow and deep convection; hence the incorporation of cold pool processes into a number of recently developed convective parameterizations. Unfortunately, observations serving to inform cold pool parameterization development are limited to select field programs and limited radar domains. However, a number of recent studies have noted that cold pools are often associated with arcs (lines of shallow clouds traversing 10-100 km in visible satellite imagery). Boundary layer thermodynamic perturbations are plausible at such scales, coincident with such mesoscale features. Atmospheric signatures of features at these spatial scales are potentially observable from satellites. In this presentation, we discuss recent work that uses multi-sensor, high-resolution satellite products for observing mesoscale wind vector fluctuations and boundary layer temperature depressions attributed to cold pools produced by antecedent convection. The relationship to subsequent convection, as well as to convective system longevity, is discussed. As improvements in satellite technology occur and efforts to reduce noise in high-resolution orbital products progress, satellite pixel-level (10 km) thermodynamic and dynamic (e.g. mesoscale convergence) parameters can increasingly serve as useful benchmarks for constraining convective parameterization development, including for regimes where organized convection contributes substantially to the cloud and rainfall climatology.
NASA Astrophysics Data System (ADS)
Gornostyrev, Yu. N.; Katsnelson, M. I.; Mryasov, Oleg N.; Freeman, A. J.; Trefilov, M. V.
1998-03-01
Theoretical analyses of the fracture behaviour of fcc Au, Ir and Al have been performed within various brittle/ductile criteria (BDC) with ab initio, embedded atom (EAM), and pseudopotential parameterizations. We systematically examined several important aspects of fracture behaviour: (i) dislocation structure, (ii) energetics of cleavage decohesion and (iii) the character of the interatomic interactions. Unit dislocation structures were analyzed within a two-dimensional generalization of the Peierls-Nabarro model with restoring forces determined from ab initio total energy calculations, and were found to be split, with well-defined, highly mobile partials for all the metals considered. We find from ab initio and pseudopotential calculations that, in contrast with most fcc metals, the cleavage decohesion curve for Al differs appreciably from the UBER relation. Finally, using ab initio, EAM and pseudopotential parameterizations, we demonstrate that (i) Au (as a typical example of a ductile metal) is well described within existing BDCs, (ii) the anomalous cleavage-like crack propagation of Ir is driven predominantly by its high elastic modulus and (iii) Al is not described within BDC due to its long-range interatomic interactions (and hence requires adjustments of the brittle/ductile criteria).
Gunalan, Kabilar; Chaturvedi, Ashutosh; Howell, Bryan; Duchin, Yuval; Lempka, Scott F; Patriat, Remi; Sapiro, Guillermo; Harel, Noam; McIntyre, Cameron C
2017-01-01
Deep brain stimulation (DBS) is an established clinical therapy, and computational models have played an important role in advancing the technology. Patient-specific DBS models are now common tools in both academic and industrial research, as well as clinical software systems. However, the exact methodology for creating patient-specific DBS models can vary substantially, and important technical details are often missing from published reports. Here we provide a detailed description of the assembly workflow and parameterization of a patient-specific DBS pathway-activation model (PAM) and predict the response of the hyperdirect pathway to clinical stimulation. Integration of multiple software tools (e.g. COMSOL, MATLAB, FSL, NEURON, Python) enables the creation and visualization of a DBS PAM. An example DBS PAM was developed using 7T magnetic resonance imaging data from a single unilaterally implanted patient with Parkinson's disease (PD). This detailed description implements our best computational practices and most elaborate parameterization steps, as defined over a decade of technical evolution. Pathway recruitment curves and strength-duration relationships highlight the non-linear response of axons to changes in the DBS parameter settings. Parameterization of patient-specific DBS models can be highly detailed and constrained, thereby providing confidence in the simulation predictions, but at the expense of time-demanding technical implementation steps. DBS PAMs represent new tools for investigating possible correlations between brain pathway activation patterns and clinical symptom modulation.
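The strength-duration behavior mentioned above is classically summarized by the Weiss relationship, in which the threshold current rises as the pulse width shrinks. A minimal sketch (the rheobase and chronaxie values are invented for illustration, not outputs of the patient-specific model):

```python
def threshold_current(pw_us, rheobase_ma=1.0, chronaxie_us=150.0):
    """Classical Weiss strength-duration relationship:
    I_th = I_rheobase * (1 + chronaxie / PW).
    pw_us is the stimulus pulse width in microseconds."""
    return rheobase_ma * (1.0 + chronaxie_us / pw_us)

# Shorter pulses need larger currents to activate the same axons
ith_60 = threshold_current(60.0)    # short clinical pulse width
ith_450 = threshold_current(450.0)  # long pulse width, approaching rheobase
```

This non-linear trade-off between amplitude and pulse width is exactly what the pathway recruitment curves in such models quantify per axon population.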
Parameterized post-Newtonian cosmology
NASA Astrophysics Data System (ADS)
Sanghai, Viraj A. A.; Clifton, Timothy
2017-03-01
Einstein’s theory of gravity has been extensively tested on solar system scales, and for isolated astrophysical systems, using the perturbative framework known as the parameterized post-Newtonian (PPN) formalism. This framework is designed for use in the weak-field and slow-motion limit of gravity, and can be used to constrain a large class of metric theories of gravity with data collected from the aforementioned systems. Given the potential of future surveys to probe cosmological scales to high precision, it is a topic of much contemporary interest to construct a similar framework to link Einstein’s theory of gravity and its alternatives to observations on cosmological scales. Our approach to this problem is to adapt and extend the existing PPN formalism for use in cosmology. We derive a set of equations that use the same parameters to consistently model both weak fields and cosmology. This allows us to parameterize a large class of modified theories of gravity and dark energy models on cosmological scales, using just four functions of time. These four functions can be directly linked to the background expansion of the universe, first-order cosmological perturbations, and the weak-field limit of the theory. They also reduce to the standard PPN parameters on solar system scales. We illustrate how dark energy models and scalar-tensor and vector-tensor theories of gravity fit into this framework, which we refer to as ‘parameterized post-Newtonian cosmology’ (PPNC).
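For orientation, the weak-field metric in which the two best-known PPN parameters $\gamma$ and $\beta$ appear takes the standard textbook form (not quoted from this abstract):

```latex
ds^2 \;=\; -\left(1 - 2U + 2\beta U^2\right)dt^2
\;+\; \left(1 + 2\gamma U\right)\delta_{ij}\,dx^i\,dx^j ,
```

where $U$ is the Newtonian potential; general relativity corresponds to $\gamma = \beta = 1$. The cosmological extension described above promotes such constant parameters to four functions of time that simultaneously govern the background expansion, first-order perturbations, and the weak-field limit.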
NASA Astrophysics Data System (ADS)
Berloff, P. S.
2016-12-01
This work aims at developing a framework for dynamically consistent parameterization of mesoscale eddy effects for use in non-eddy-resolving ocean circulation models. The proposed eddy parameterization framework is successfully tested on the classical, wind-driven double-gyre model, which is solved both with explicitly resolved vigorous eddy field and in the non-eddy-resolving configuration with the eddy parameterization replacing the eddy effects. The parameterization focuses on the effect of the stochastic part of the eddy forcing that backscatters and induces eastward jet extension of the western boundary currents and its adjacent recirculation zones. The parameterization locally approximates transient eddy flux divergence by spatially localized and temporally periodic forcing, referred to as the plunger, and focuses on the linear-dynamics flow solution induced by it. The nonlinear self-interaction of this solution, referred to as the footprint, characterizes and quantifies the induced eddy forcing exerted on the large-scale flow. We find that spatial pattern and amplitude of each footprint strongly depend on the underlying large-scale flow, and the corresponding relationships provide the basis for the eddy parameterization and its closure on the large-scale flow properties. Dependencies of the footprints on other important parameters of the problem are also systematically analyzed. The parameterization utilizes the local large-scale flow information, constructs and scales the corresponding footprints, and then sums them up over the gyres to produce the resulting eddy forcing field, which is interactively added to the model as an extra forcing. Thus, the assumed ensemble of plunger solutions can be viewed as a simple model for the cumulative effect of the stochastic eddy forcing. The parameterization framework is implemented in the simplest way, but it provides a systematic strategy for improving the implementation algorithm.
NASA Technical Reports Server (NTRS)
Stone, Peter H.; Yao, Mao-Sung
1990-01-01
A number of perpetual January simulations are carried out with a two-dimensional zonally averaged model employing various parameterizations of the eddy fluxes of heat (potential temperature) and moisture. The parameterizations are evaluated by comparing these results with the eddy fluxes calculated in a parallel simulation using a three-dimensional general circulation model with zonally symmetric forcing. The three-dimensional model's performance in turn is evaluated by comparing its results using realistic (nonsymmetric) boundary conditions with observations. Branscome's parameterization of the meridional eddy flux of heat and Leovy's parameterization of the meridional eddy flux of moisture simulate the seasonal and latitudinal variations of these fluxes reasonably well, while somewhat underestimating their magnitudes. New parameterizations of the vertical eddy fluxes are developed that take into account the enhancement of the eddy mixing slope in a growing baroclinic wave due to condensation, and also the effect of eddy fluctuations in relative humidity. The new parameterizations, when tested in the two-dimensional model, simulate the seasonal, latitudinal, and vertical variations of the vertical eddy fluxes quite well, when compared with the three-dimensional model, and only underestimate the magnitude of the fluxes by 10 to 20 percent.
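Eddy-flux parameterizations of the kind evaluated above typically rest on a down-gradient (diffusive) closure, in which the meridional eddy heat flux is proportional to minus the meridional temperature gradient. A minimal sketch with an invented zonal-mean temperature profile and eddy diffusivity (neither is from the paper):

```python
import numpy as np

lat = np.linspace(-80.0, 80.0, 33)               # latitude, degrees
y = lat * 111e3                                  # meters (~111 km per degree)
T = 300.0 - 40.0 * np.sin(np.deg2rad(lat))**2    # zonal-mean temperature (K)
K = 1.0e6                                        # eddy diffusivity (m2/s)

# Down-gradient closure: parameterized meridional eddy heat flux v'T' = -K dT/dy
flux = -K * np.gradient(T, y)                    # K m/s
# The flux is poleward in both hemispheres: positive (northward) in the NH,
# negative (southward) in the SH, and zero at the equator by symmetry.
```

Parameterizations such as Branscome's refine this picture by making the effective diffusivity depend on baroclinicity and height rather than taking K constant.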
NASA Technical Reports Server (NTRS)
Jones, John H.
2010-01-01
Longhi et al. [1] have used the D(Ni) vs. D(Mg) parameterizations of Jones [2, 3] in attempting to explain the Ni systematics of lunar differentiation. A key element of the Jones parameterization and the Longhi et al. models is that, at very high temperatures, Ni may become incompatible in olivine. Unfortunately, there is no actual experimental evidence that this is ever the case [1]. To date, all experiments designed to demonstrate such incompatibility have failed. Here I will investigate the thermodynamic foundations of the D vs. D(Mg) trends for olivine/liquid discovered by [2].
CCPP-ARM Parameterization Testbed Model Forecast Data
Klein, Stephen
2008-01-15
Dataset contains the NCAR CAM3 (Collins et al., 2004) and GFDL AM2 (GFDL GAMDT, 2004) forecast data at locations close to the ARM research sites. These data are generated from a series of multi-day forecasts in which both CAM3 and AM2 are initialized at 00Z every day with the ECMWF reanalysis data (ERA-40), for the year 1997 and 2000 and initialized with both the NASA DAO Reanalyses and the NCEP GDAS data for the year 2004. The DOE CCPP-ARM Parameterization Testbed (CAPT) project assesses climate models using numerical weather prediction techniques in conjunction with high quality field measurements (e.g. ARM data).
NASA Astrophysics Data System (ADS)
Monicke, A.; Katajisto, H.; Leroy, M.; Petermann, N.; Kere, P.; Perillo, M.
2012-07-01
For many years, layered composites have proven essential for the successful design of high-performance space structures, such as launchers or satellites. A generic cylindrical composite structure for a launcher application was optimized with respect to objectives and constraints typical for space applications. The studies included the structural stability, laminate load response and failure analyses. Several types of cylinders (with and without stiffeners) were considered and optimized using different lay-up parameterizations. Results for the best designs are presented and discussed. The simulation tools, ESAComp [1] and modeFRONTIER [2], employed in the optimization loop are elucidated and their value for the optimization process is explained.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xiao, Heng; Gustafson, Jr., William I.; Hagos, Samson M.
2015-04-18
To better understand the behavior of quasi-equilibrium-based convection parameterizations at higher resolution, this study uses a diagnostic framework to examine the resolution dependence of subgrid-scale vertical transport of moist static energy as parameterized by the Zhang-McFarlane convection parameterization (ZM). Grid-scale input to ZM is supplied by coarsening output from cloud-resolving model (CRM) simulations onto subdomains ranging in size from 8 × 8 to 256 × 256 km².
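To make the coarsening step concrete, a minimal sketch of block-averaging a fine-grid field onto coarse subdomains follows; the grid sizes and data are illustrative, not the study's actual CRM output.

```python
import numpy as np

def coarsen(field, block):
    """Block-average a 2-D fine-grid field onto coarse subdomains.

    Hypothetical helper illustrating the kind of coarsening described
    above. `block` is the number of fine-grid cells per coarse cell
    in each direction.
    """
    ny, nx = field.shape
    assert ny % block == 0 and nx % block == 0, "grid must tile evenly"
    return field.reshape(ny // block, block, nx // block, block).mean(axis=(1, 3))

# Example: a 16x16 "CRM" field coarsened onto 4x4 subdomains.
fine = np.arange(256, dtype=float).reshape(16, 16)
coarse = coarsen(fine, 4)
print(coarse.shape)  # (4, 4)
```

Block averaging conserves the domain mean, which is the property that makes the coarsened fields usable as grid-scale input to a parameterization.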
Karanovic, Marinko; Muffels, Christopher T.; Tonkin, Matthew J.; Hunt, Randall J.
2012-01-01
Models of environmental systems have become increasingly complex, incorporating increasingly large numbers of parameters in an effort to represent physical processes on a scale approaching that at which they occur in nature. Consequently, the inverse problem of parameter estimation (specifically, model calibration) and subsequent uncertainty analysis have become increasingly computation-intensive endeavors. Fortunately, advances in computing have made computational power equivalent to that of dozens to hundreds of desktop computers accessible through a variety of alternate means: modelers have various possibilities, ranging from traditional Local Area Networks (LANs) to cloud computing. Commonly used parameter estimation software is well suited to take advantage of the availability of such increased computing power. Unfortunately, logistical issues become increasingly important as an increasing number and variety of computers are brought to bear on the inverse problem. To facilitate efficient access to disparate computer resources, the PESTCommander program documented herein has been developed to provide a Graphical User Interface (GUI) that facilitates the management of model files ("file management") and remote launching and termination of "slave" computers across a distributed network of computers ("run management"). In version 1.0 described here, PESTCommander can access and ascertain resources across traditional Windows LANs; however, the architecture of PESTCommander has been developed with the intent that future releases will be able to access computing resources (1) via trusted domains established in Wide Area Networks (WANs) in multiple remote locations and (2) via heterogeneous networks of Windows- and Unix-based operating systems. The design of PESTCommander also makes it suitable for extension to other computational resources, such as those that are available via cloud computing.
Version 1.0 of PESTCommander was developed primarily to work with the parameter estimation software PEST; the discussion presented in this report focuses on the use of the PESTCommander together with Parallel PEST. However, PESTCommander can be used with a wide variety of programs and models that require management, distribution, and cleanup of files before or after model execution. In addition to its use with the Parallel PEST program suite, discussion is also included in this report regarding the use of PESTCommander with the Global Run Manager GENIE, which was developed simultaneously with PESTCommander.
Parameterization of daily solar global ultraviolet irradiation.
Feister, U; Jäkel, E; Gericke, K
2002-09-01
Daily values of solar global ultraviolet (UV) B and UVA irradiation as well as erythemal irradiation have been parameterized to be estimated from pyranometer measurements of daily global and diffuse irradiation as well as from atmospheric column ozone. Data recorded at the Meteorological Observatory Potsdam (52 degrees N, 107 m asl) in Germany over the time period 1997-2000 have been used to derive sets of regression coefficients. The validation of the method against independent data sets of measured UV irradiation shows that the parameterization provides a gain of information for UVB, UVA and erythemal irradiation referring to their averages. A comparison between parameterized daily UV irradiation and independent values of UV irradiation measured at a mountain station in southern Germany (Meteorological Observatory Hohenpeissenberg at 48 degrees N, 977 m asl) indicates that the parameterization also holds even under completely different climatic conditions. On a long-term average (1953-2000), parameterized annual UV irradiation values are 15% and 21% higher for UVA and UVB, respectively, at Hohenpeissenberg than they are at Potsdam. Daily global and diffuse irradiation measured at 28 weather stations of the Deutscher Wetterdienst German Radiation Network and grid values of column ozone from the EP-TOMS satellite experiment served as inputs to calculate the estimates of the spatial distribution of daily and annual values of UV irradiation across Germany. Using daily values of global and diffuse irradiation recorded at Potsdam since 1937 as well as atmospheric column ozone measured since 1964 at the same site, estimates of daily and annual UV irradiation have been derived for this site over the period from 1937 through 2000, which include the effects of changes in cloudiness, in aerosols and, at least for the period of ozone measurements from 1964 to 2000, in atmospheric ozone. It is shown that the extremely low ozone values observed mainly after the eruption of Mt. Pinatubo in 1991 have substantially enhanced UVB irradiation in the first half of the 1990s. According to the measurements and calculations, the nonlinear long-term changes observed between 1968 and 2000 amount to +4%, ..., +5% for annual global irradiation and UVA irradiation mainly because of changing cloudiness and +14%, ..., +15% for UVB and erythemal irradiation because of both changing cloudiness and decreasing column ozone. At the mountain site, Hohenpeissenberg, measured global irradiation and parameterized UVA irradiation decreased during the same time period by -3%, ..., -4%, probably because of the enhanced occurrence and increasing optical thickness of clouds, whereas UVB and erythemal irradiation derived by the parameterization have increased by +3%, ..., +4% because of the combined effect of clouds and decreasing ozone. The parameterizations described here should be applicable to other regions with similar atmospheric and geographic conditions, whereas for regions with significantly different climatic conditions, such as high mountainous areas and arctic or tropical regions, the representativeness of the regression coefficients would have to be verified. It is emphasized here that parameterizations, such as the one described in this article, cannot replace measurements of solar UV radiation, but they can use existing measurements of solar global and diffuse radiation as well as data on atmospheric ozone to provide estimates of UV irradiation in regions and over time periods for which UV measurements are not available.
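The regression idea above can be sketched as a log-linear fit of UV irradiation against global irradiation and column ozone. The functional form, coefficients, and synthetic data below are illustrative assumptions, not the published parameterization.

```python
import numpy as np

# Minimal sketch: estimate daily UVB irradiation from global
# irradiation G and column ozone O3 via a log-linear regression.
rng = np.random.default_rng(0)
G = rng.uniform(5, 30, 200)        # daily global irradiation, MJ/m^2 (synthetic)
O3 = rng.uniform(280, 420, 200)    # column ozone, DU (synthetic)
# Assumed "true" relation used to generate the synthetic UVB values:
UVB = 0.004 * G**1.1 * (O3 / 300.0)**-1.2 * rng.lognormal(0.0, 0.02, 200)

# Fit log(UVB) = log(a) + b*log(G) + c*log(O3/300) by least squares.
X = np.column_stack([np.ones_like(G), np.log(G), np.log(O3 / 300.0)])
coef, *_ = np.linalg.lstsq(X, np.log(UVB), rcond=None)
a, b, c = np.exp(coef[0]), coef[1], coef[2]
print(b, c)  # should recover roughly 1.1 and -1.2
```

Validation against independent stations, as done in the article, amounts to evaluating this fitted relation on data not used to derive the coefficients.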
Parameterizing by the Number of Numbers
NASA Astrophysics Data System (ADS)
Fellows, Michael R.; Gaspers, Serge; Rosamond, Frances A.
The usefulness of parameterized algorithmics has often depended on what Niedermeier has called "the art of problem parameterization". In this paper we introduce and explore a novel but general form of parameterization: the number of numbers. Several classic numerical problems, such as Subset Sum, Partition, 3-Partition, Numerical 3-Dimensional Matching, and Numerical Matching with Target Sums, have multisets of integers as input. We initiate the study of parameterizing these problems by the number of distinct integers in the input. We rely on an FPT result for Integer Linear Programming Feasibility to show that all the above-mentioned problems are fixed-parameter tractable when parameterized in this way. In various applied settings, problem inputs often consist in part of multisets of integers or multisets of weighted objects (such as edges in a graph, or jobs to be scheduled). Such number-of-numbers parameterized problems often reduce to subproblems about transition systems of various kinds, parameterized by the size of the system description. We consider several core problems of this kind relevant to number-of-numbers parameterization. Our main hardness result considers the problem: given a non-deterministic Mealy machine M (a finite state automaton outputting a letter on each transition), an input word x, and a census requirement c for the output word specifying how many times each letter of the output alphabet should be written, decide whether there exists a computation of M reading x that outputs a word y that meets the requirement c. We show that this problem is hard for W[1]. If the question is whether there exists an input word x such that a computation of M on x outputs a word that meets c, the problem becomes fixed-parameter tractable.
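The compression behind number-of-numbers parameterization can be shown directly: a multiset input is summarized by (value, multiplicity) pairs, so the parameter is the count of distinct integers rather than the input length. The example data are illustrative.

```python
from collections import Counter

def compact(multiset):
    """Compress a multiset of integers to sorted (value, multiplicity) pairs.

    The number of pairs is the 'number of numbers' parameter k; FPT
    algorithms (e.g. via Integer Linear Programming Feasibility) work
    over this compact representation instead of the raw input.
    """
    return sorted(Counter(multiset).items())

instance = [7, 3, 7, 7, 3, 5, 5, 3, 3]   # raw input of length 9
pairs = compact(instance)
k = len(pairs)                            # the parameter
print(pairs, k)  # [(3, 4), (5, 2), (7, 3)] 3
```

Problems like Partition or 3-Partition then become questions about how to split the multiplicities, with instance size governed by k rather than by the length of the original input.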
NASA Astrophysics Data System (ADS)
Prasad, K.; Lopez-Coto, I.; Ghosh, S.; Mueller, K.; Whetstone, J. R.
2015-12-01
The North-East Corridor project aims to use a top-down inversion methodology to quantify sources of Greenhouse Gas (GHG) emissions over urban domains such as Washington DC / Baltimore with high spatial and temporal resolution. Simulations of atmospheric transport of tracer gases from an emission source to a tower-mounted receptor are usually conducted using the Weather Research and Forecasting (WRF) model. For such simulations, WRF employs a parameterized turbulence model and does not resolve the fine-scale dynamics generated by the flow around buildings and communities comprising a large city. The NIST Fire Dynamics Simulator (FDS) is a computational fluid dynamics model that utilizes large eddy simulation methods to model flow around buildings at length scales much smaller than is practical with WRF. FDS has the potential to evaluate the impact of complex urban topography on near-field dispersion and mixing that are difficult to simulate with a mesoscale atmospheric model. Such capabilities may be important in determining urban GHG emissions using atmospheric measurements. A methodology has been developed to run FDS as a sub-grid scale model within a WRF simulation. The coupling is based on nudging the FDS flow field towards that computed by WRF, and is currently limited to one-way coupling performed in an off-line mode. Using the coupled WRF / FDS model, NIST will investigate the effects of the urban canopy at horizontal resolutions of 10-20 m in a domain of 12 × 12 km. The coupled WRF-FDS simulations will be used to calculate the dispersion of tracer gases in the North-East Corridor and to evaluate the upwind areas that contribute to tower observations, referred to in the inversion community as influence functions. Results of this study will provide guidance regarding the importance of explicit simulations of urban atmospheric turbulence in obtaining accurate estimates of greenhouse gas emissions and transport.
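One-way nudging of the kind described above can be sketched as relaxing the fine-scale field toward the coarse-model value over a prescribed timescale. The scalar setup, timestep, and relaxation time below are illustrative assumptions, not the NIST implementation.

```python
def nudge(u_fine, u_coarse, dt, tau):
    """One Newtonian-relaxation (nudging) step: pull the fine-scale
    value toward the coarse-model value with timescale tau."""
    return u_fine + (dt / tau) * (u_coarse - u_fine)

u = 2.0        # fine-scale (FDS-like) wind component, m/s (illustrative)
u_wrf = 5.0    # coarse (WRF-like) value interpolated to the same point
for _ in range(100):
    u = nudge(u, u_wrf, dt=1.0, tau=20.0)
print(u)  # relaxes toward 5.0
```

Because the coupling is one-way and off-line, the coarse value is read from stored output and is never fed back, so the nudging term acts purely as a large-scale constraint on the fine-scale flow.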
An Empirically-Derived Index of High School Academic Rigor. ACT Working Paper 2017-5
ERIC Educational Resources Information Center
Allen, Jeff; Ndum, Edwin; Mattern, Krista
2017-01-01
We derived an index of high school academic rigor by optimizing the prediction of first-year college GPA based on high school courses taken, grades, and indicators of advanced coursework. Using a large data set (n~108,000) and nominal parameterization of high school course outcomes, the high school academic rigor (HSAR) index capitalizes on…
Parameterized Cross Sections for Pion Production in Proton-Proton Collisions
NASA Technical Reports Server (NTRS)
Blattnig, Steve R.; Swaminathan, Sudha R.; Kruger, Adam T.; Ngom, Moussa; Norbury, John W.; Tripathi, R. K.
2000-01-01
An accurate knowledge of cross sections for pion production in proton-proton collisions finds wide application in particle physics, astrophysics, cosmic ray physics, and space radiation problems, especially in situations where an incident proton is transported through some medium and knowledge of the output particle spectrum is required when given the input spectrum. In these cases, accurate parameterizations of the cross sections are desired. In this paper much of the experimental data are reviewed and compared with a wide variety of different cross section parameterizations. Therefore, parameterizations of neutral and charged pion cross sections are provided that give a very accurate description of the experimental data. Lorentz invariant differential cross sections, spectral distributions, and total cross section parameterizations are presented.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Larson, Vincent; Gettelman, Andrew; Morrison, Hugh
In state-of-the-art climate models, each cloud type is treated using its own separate cloud parameterization and its own separate microphysics parameterization. This use of separate schemes for separate cloud regimes is undesirable because it is theoretically unfounded, it hampers interpretation of results, and it leads to the temptation to overtune parameters. In this grant, we are creating a climate model that contains a unified cloud parameterization and a unified microphysics parameterization. This model will be used to address the problems of excessive frequency of drizzle in climate models and excessively early onset of deep convection in the Tropics over land. The resulting model will be compared with ARM observations.
NASA Astrophysics Data System (ADS)
Cariolle, D.; Teyssèdre, H.
2007-05-01
This article describes the validation of a linear parameterization of the ozone photochemistry for use in upper tropospheric and stratospheric studies. The present work extends a previously developed scheme by improving the 2-D model used to derive the coefficients of the parameterization. The chemical reaction rates are updated from a compilation that includes recent laboratory work. Furthermore, the polar ozone destruction due to heterogeneous reactions at the surface of the polar stratospheric clouds is taken into account as a function of the stratospheric temperature and the total chlorine content. Two versions of the parameterization are tested. The first one only requires the solution of a continuity equation for the time evolution of the ozone mixing ratio, the second one uses one additional equation for a cold tracer. The parameterization has been introduced into the chemical transport model MOCAGE. The model is integrated with wind and temperature fields from the ECMWF operational analyses over the period 2000-2004. Overall, the results from the two versions show a very good agreement between the modelled ozone distribution and the Total Ozone Mapping Spectrometer (TOMS) satellite data and the "in-situ" vertical soundings. During the course of the integration the model does not show any drift and the biases are generally small, of the order of 10%. The model also reproduces fairly well the polar ozone variability, notably the formation of "ozone holes" in the Southern Hemisphere with amplitudes and a seasonal evolution that follow the dynamics and time evolution of the polar vortex. The introduction of the cold tracer further improves the model simulation by allowing additional ozone destruction inside air masses exported from the high to the mid-latitudes, and by maintaining low ozone content inside the polar vortex of the Southern Hemisphere over longer periods in spring time. 
It is concluded that for the study of climate scenarios or the assimilation of ozone data, the present parameterization gives a valuable alternative to the introduction of detailed and computationally costly chemical schemes into general circulation models.
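Linear ozone schemes of this family are typically first-order expansions of the photochemical tendency about a reference state; a generic form (notation illustrative, not the paper's exact coefficient set) is

```latex
\frac{\partial r}{\partial t} \;\approx\; (P-L)_0
+ \left.\frac{\partial (P-L)}{\partial r}\right|_0 (r - r_0)
+ \left.\frac{\partial (P-L)}{\partial T}\right|_0 (T - T_0)
+ \left.\frac{\partial (P-L)}{\partial \Sigma}\right|_0 (\Sigma - \Sigma_0)
```

where $r$ is the ozone mixing ratio, $T$ the temperature, $\Sigma$ the overhead ozone column, and $P-L$ the net photochemical production. This is why the first version of the parameterization requires only the continuity equation for $r$, with the second version adding one further equation for the cold tracer.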
Ice-nucleating particle emissions from photochemically aged diesel and biodiesel exhaust
NASA Astrophysics Data System (ADS)
Schill, G. P.; Jathar, S. H.; Kodros, J. K.; Levin, E. J. T.; Galang, A. M.; Friedman, B.; Link, M. F.; Farmer, D. K.; Pierce, J. R.; Kreidenweis, S. M.; DeMott, P. J.
2016-05-01
Immersion-mode ice-nucleating particle (INP) concentrations from an off-road diesel engine were measured using a continuous-flow diffusion chamber at -30°C. Both petrodiesel and biodiesel were utilized, and the exhaust was aged up to 1.5 photochemically equivalent days using an oxidative flow reactor. We found that aged and unaged diesel exhaust of both fuels is not likely to contribute to atmospheric INP concentrations at mixed-phase cloud conditions. To explore this further, a new limit-of-detection parameterization for ice nucleation on diesel exhaust was developed. Using a global-chemical transport model, potential black carbon INP (INPBC) concentrations were determined using a current literature INPBC parameterization and the limit-of-detection parameterization. Model outputs indicate that the current literature parameterization likely overemphasizes INPBC concentrations, especially in the Northern Hemisphere. These results highlight the need to integrate new INPBC parameterizations into global climate models as generalized INPBC parameterizations are not valid for diesel exhaust.
Radiative flux and forcing parameterization error in aerosol-free clear skies
Pincus, Robert; Mlawer, Eli J.; Oreopoulos, Lazaros; ...
2015-07-03
This article reports on the accuracy in aerosol- and cloud-free conditions of the radiation parameterizations used in climate models. Accuracy is assessed relative to observationally validated reference models for fluxes under present-day conditions and forcing (flux changes) from quadrupled concentrations of carbon dioxide. Agreement among reference models is typically within 1 W/m², while parameterized calculations are roughly half as accurate in the longwave and even less accurate, and more variable, in the shortwave. Absorption of shortwave radiation is underestimated by most parameterizations in the present day and has relatively large errors in forcing. Error in present-day conditions is essentially unrelated to error in forcing calculations. Recent revisions to parameterizations have reduced error in most cases. As a result, a dependence on atmospheric conditions, including integrated water vapor, means that global estimates of parameterization error relevant for the radiative forcing of climate change will require much more ambitious calculations.
Adaptively Parameterized Tomography of the Western Hellenic Subduction Zone
NASA Astrophysics Data System (ADS)
Hansen, S. E.; Papadopoulos, G. A.
2017-12-01
The Hellenic subduction zone (HSZ) is the most seismically active region in Europe and plays a major role in the active tectonics of the eastern Mediterranean. This complicated environment has the potential to generate both large magnitude (M > 8) earthquakes and tsunamis. Situated above the western end of the HSZ, Greece faces a high risk from these geologic hazards, and characterizing this risk requires detailed understanding of the geodynamic processes occurring in this area. However, despite previous investigations, the kinematics of the HSZ are still controversial. Regional tomographic studies have yielded important information about the shallow seismic structure of the HSZ, but these models only image down to 150 km depth within small geographic areas. Deeper structure is constrained by global tomographic models but with coarser resolution (~200-300 km). Additionally, current tomographic models focused on the HSZ were generated with regularly-spaced gridding, and this type of parameterization often over-emphasizes poorly sampled regions of the model or under-represents small-scale structure. Therefore, we are developing a new, high-resolution image of the mantle structure beneath the western HSZ using an adaptively parameterized seismic tomography approach. By combining multiple, regional travel-time datasets in the context of a global model, with adaptable gridding based on the sampling density of high-frequency data, this method generates a composite model of mantle structure that is being used to better characterize geodynamic processes within the HSZ, thereby allowing for improved hazard assessment. Preliminary results will be shown.
Parameterization Interactions in Global Aquaplanet Simulations
NASA Astrophysics Data System (ADS)
Bhattacharya, Ritthik; Bordoni, Simona; Suselj, Kay; Teixeira, João
2018-02-01
Global climate simulations rely on parameterizations of physical processes that have scales smaller than the resolved ones. In the atmosphere, these parameterizations represent moist convection, boundary layer turbulence and convection, cloud microphysics, longwave and shortwave radiation, and the interaction with the land and ocean surface. These parameterizations can generate different climates involving a wide range of interactions among parameterizations and between the parameterizations and the resolved dynamics. To gain a simplified understanding of a subset of these interactions, we perform aquaplanet simulations with the global version of the Weather Research and Forecasting (WRF) model employing a range (in terms of properties) of moist convection and boundary layer (BL) parameterizations. Significant differences are noted in the simulated precipitation amounts, its partitioning between convective and large-scale precipitation, as well as in the radiative impacts. These differences arise from the way the subcloud physics interacts with convection, both directly and through various pathways involving the large-scale dynamics and the boundary layer, convection, and clouds. A detailed analysis of the profiles of the different tendencies (from the different physical processes) for both potential temperature and water vapor is performed. While different combinations of convection and boundary layer parameterizations can lead to different climates, a key conclusion of this study is that similar climates can be simulated with model versions that are different in terms of the partitioning of the tendencies: the vertically distributed energy and water balances in the tropics can be obtained with significantly different profiles of large-scale, convection, and cloud microphysics tendencies.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kao, C.Y.J.; Bossert, J.E.; Winterkamp, J.
1993-10-01
One of the objectives of the DOE ARM Program is to improve the parameterization of clouds in general circulation models (GCMs). The approach taken in this research is twofold. We first examine the behavior of cumulus parameterization schemes by comparing their performance against the results from explicit cloud simulations with state-of-the-art microphysics. This is conducted in a two-dimensional (2-D) configuration of an idealized convective system. We then apply the cumulus parameterization schemes to realistic three-dimensional (3-D) simulations over the western US for a case with an enormous amount of convection in an extended period of five days. In the 2-D idealized tests, cloud effects are parameterized in the "parameterization cases" with a coarse resolution, whereas each cloud is explicitly resolved by the "microphysics cases" with a much finer resolution. Thus, the capability of the parameterization schemes in reproducing the growth and life cycle of a convective system can then be evaluated. These 2-D tests will form the basis for further 3-D realistic simulations which have the model resolution equivalent to that of the next generation of GCMs. Two cumulus parameterizations are used in this research: the Arakawa-Schubert (A-S) scheme (Arakawa and Schubert, 1974) used in Kao and Ogura (1987) and the Kuo scheme (Kuo, 1974) used in Tremback (1990). The numerical model used in this research is the Regional Atmospheric Modeling System (RAMS) developed at Colorado State University (CSU).
Brain Surface Conformal Parameterization Using Riemann Surface Structure
Wang, Yalin; Lui, Lok Ming; Gu, Xianfeng; Hayashi, Kiralee M.; Chan, Tony F.; Toga, Arthur W.; Thompson, Paul M.; Yau, Shing-Tung
2011-01-01
In medical imaging, parameterized 3-D surface models are useful for anatomical modeling and visualization, statistical comparisons of anatomy, and surface-based registration and signal processing. Here we introduce a parameterization method based on Riemann surface structure, which uses a special curvilinear net structure (conformal net) to partition the surface into a set of patches that can each be conformally mapped to a parallelogram. The resulting surface subdivision and the parameterizations of the components are intrinsic and stable (their solutions tend to be smooth functions and the boundary conditions of the Dirichlet problem can be enforced). Conformal parameterization also helps transform partial differential equations (PDEs) that may be defined on 3-D brain surface manifolds to modified PDEs on a two-dimensional parameter domain. Since the Jacobian matrix of a conformal parameterization is diagonal, the modified PDE on the parameter domain is readily solved. To illustrate our techniques, we computed parameterizations for several types of anatomical surfaces in 3-D magnetic resonance imaging scans of the brain, including the cerebral cortex, hippocampi, and lateral ventricles. For surfaces that are topologically homeomorphic to each other and have similar geometrical structures, we show that the parameterization results are consistent and the subdivided surfaces can be matched to each other. Finally, we present an automatic sulcal landmark location algorithm by solving PDEs on cortical surfaces. The landmark detection results are used as constraints for building conformal maps between surfaces that also match explicitly defined landmarks. PMID:17679336
Impact of Apex Model parameterization strategy on estimated benefit of conservation practices
USDA-ARS?s Scientific Manuscript database
Three parameterized Agriculture Policy Environmental eXtender (APEX) models for corn-soybean rotation on clay pan soils were developed with two objectives: (1) evaluate model performance of three parameterization strategies on a validation watershed; and (2) compare predictions of water quality benefi...
Single-Column Modeling, GCM Parameterizations and Atmospheric Radiation Measurement Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Somerville, R.C.J.; Iacobellis, S.F.
2005-03-18
Our overall goal is identical to that of the Atmospheric Radiation Measurement (ARM) Program: the development of new and improved parameterizations of cloud-radiation effects and related processes, using ARM data at all three ARM sites, and the implementation and testing of these parameterizations in global and regional models. To test recently developed prognostic parameterizations based on detailed cloud microphysics, we have first compared single-column model (SCM) output with ARM observations at the Southern Great Plains (SGP), North Slope of Alaska (NSA) and Tropical Western Pacific (TWP) sites. We focus on the predicted cloud amounts and on a suite of radiative quantities strongly dependent on clouds, such as downwelling surface shortwave radiation. Our results demonstrate the superiority of parameterizations based on comprehensive treatments of cloud microphysics and cloud-radiative interactions. At the SGP and NSA sites, the SCM results simulate the ARM measurements well and are demonstrably more realistic than typical parameterizations found in conventional operational forecasting models. At the TWP site, the model performance depends strongly on details of the scheme, and the results of our diagnostic tests suggest ways to develop improved parameterizations better suited to simulating cloud-radiation interactions in the tropics generally. These advances have made it possible to take the next step and build on this progress, by incorporating our parameterization schemes in state-of-the-art 3D atmospheric models, and diagnosing and evaluating the results using independent data. Because the improved cloud-radiation results have been obtained largely via implementing detailed and physically comprehensive cloud microphysics, we anticipate that improved predictions of hydrologic cycle components, and hence of precipitation, may also be achievable.
We are currently testing the performance of our ARM-based parameterizations in state-of-the-art global and regional models. One fruitful strategy for evaluating advances in parameterizations has turned out to be using short-range numerical weather prediction as a test-bed within which to implement and improve parameterizations for modeling and predicting climate variability. The global models we have used to date are the CAM atmospheric component of the National Center for Atmospheric Research (NCAR) CCSM climate model as well as the National Centers for Environmental Prediction (NCEP) numerical weather prediction model, thus allowing testing in both climate simulation and numerical weather prediction modes. We present detailed results of these tests, demonstrating the sensitivity of model performance to changes in parameterizations.
NASA Astrophysics Data System (ADS)
Davis, K. J.; Baier, B.; Baker, D.; Barkley, Z.; Bell, E.; Bowman, K. W.; Browell, E. V.; Campbell, J.; Chen, H. W.; Choi, Y.; DiGangi, J. P.; Dobler, J. T.; Erxleben, W. H.; Fan, T. F.; Feng, S.; Fried, A.; Gaudet, B. J.; Jacobson, A. R.; Keller, K.; Kooi, S. A.; Lauvaux, T.; Lin, B.; McGill, M. J.; McGregor, D.; Michalak, A.; Obland, M. D.; O'Dell, C.; Pal, S.; Parazoo, N.; Pauly, R.; Randazzo, N. A.; Samaddar, A.; Schuh, A. E.; Sweeney, C.; Wesloh, D.; Williams, C. A.; Zhang, F.; Zhou, Y.
2017-12-01
The Atmospheric Carbon and Transport (ACT) - America mission aims to improve our understanding of transport and fluxes of greenhouse gases (GHGs) via airborne campaigns spanning a range of mid-latitude weather conditions, and thus to improve the accuracy and precision of regional inverse flux estimates of GHGs. ACT-America has conducted three field campaigns with two aircraft across three regions of the eastern United States during summer 2016, winter 2017 and fall 2017. Simulations of atmospheric GHGs have been conducted for a subset of these campaigns. We present progress from these campaigns. Mid-summer observations suggest a net biological source of CO2 to the atmosphere in the Gulf Coast states. These results contradict those terrestrial biosphere models that show net uptake of CO2 in this region in summer. Methane observations downwind of major sources in the Mid-Atlantic suggest that these sources are represented fairly well by existing emissions inventories. Flux estimation in other regions is underway. Spatially coherent differences in GHGs extending throughout the depth of the troposphere are observed at frontal boundaries in summer and winter. These spatial structures are captured in global and mesoscale simulations, though the simulated GHG mole fractions are sometimes biased with respect to observations, suggesting potential biases in synoptic transport. Mesoscale simulations overestimate spatial differences in ABL CO2 mole fractions in fair weather conditions as compared to observations and the CarbonTracker global inverse modeling system. ABL depths are simulated fairly well by both mesoscale and global modeling systems, suggesting that either weather-scale flux amplitudes are overestimated by CarbonTracker, or the mesoscale model lacks parameterized transport above the ABL. Measurements of OCS, 14CO2, and CO are being used to attribute CO2 variability to biogenic and anthropogenic processes and to expand the evaluation of GHG simulation systems.
Cross-evaluation of OCO-2 and airborne lidar XCO2 observations against in situ measurements is defining the regional precision and accuracy of these observations. These findings are moving us toward improved regional GHG inverse flux estimates via better understanding of prior fluxes, atmospheric transport, and satellite CO2 observations.
Model Reduction via Principal Component Analysis and Markov Chain Monte Carlo (MCMC) Methods
NASA Astrophysics Data System (ADS)
Gong, R.; Chen, J.; Hoversten, M. G.; Luo, J.
2011-12-01
Geophysical and hydrogeological inverse problems often include a large number of unknown parameters, ranging from hundreds to millions, depending on the parameterization and the problem being undertaken. This makes inverse estimation and uncertainty quantification very challenging, especially for problems in two- or three-dimensional spatial domains. Model reduction techniques have the potential to mitigate the curse of dimensionality by reducing the total number of unknowns while still describing the complex subsurface system adequately. In this study, we explore the use of principal component analysis (PCA) and Markov chain Monte Carlo (MCMC) sampling methods for model reduction through the use of synthetic datasets. We compare the performance of three different but closely related model reduction approaches: (1) PCA with geometric sampling (referred to as 'Method 1'), (2) PCA with MCMC sampling (referred to as 'Method 2'), and (3) PCA with MCMC sampling and the inclusion of random effects (referred to as 'Method 3'). We consider a simple convolution model with five unknown parameters, as our goal is to understand and visualize the advantages and disadvantages of each method by comparing their inversion results with the corresponding analytical solutions. We generated synthetic data with added noise and inverted them under two different situations: (1) the noisy data and the covariance matrix used for the PCA analysis are consistent (referred to as the unbiased case), and (2) the noisy data and the covariance matrix are inconsistent (referred to as the biased case). In the unbiased case, comparison between the analytical solutions and the inversion results shows that all three methods provide good estimates of the true values, and Method 1 is computationally more efficient.
In terms of uncertainty quantification, Method 1 performs poorly because of the relatively small number of samples obtained, Method 2 performs best, and Method 3 overestimates uncertainty due to the inclusion of random effects. However, in the biased case, only Method 3 correctly estimates all the unknown parameters; Methods 1 and 2 both provide incorrect values for the biased parameters. The synthetic case study demonstrates that if the covariance matrix used for the PCA analysis is inconsistent with the true model, the PCA methods with geometric or MCMC sampling will produce incorrect estimates.
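The reduced-basis workflow the abstract describes — project the model onto the leading principal components of a prior covariance, then sample only the few coefficients — can be sketched as follows. Everything here is a hypothetical toy setup, not the authors' experiment: the Gaussian covariance, dimensions, noise level, boxcar convolution kernel, and the simple Metropolis sampler are all assumptions chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Toy setup (illustrative assumptions throughout) ---
n = 50                                     # full model dimension
x = np.arange(n)
# Assumed prior: Gaussian covariance with correlation length 5
C = np.exp(-((x[:, None] - x[None, :]) ** 2) / (2 * 5.0 ** 2))

# PCA: keep the k leading eigenvectors as the reduced basis
eigval, eigvec = np.linalg.eigh(C)
order = np.argsort(eigval)[::-1]
k = 5                                      # five unknowns, as in the abstract
basis = eigvec[:, order[:k]]               # n x k reduced basis
scales = np.sqrt(eigval[order[:k]])        # prior std of each coefficient

def forward(m):
    """Simple convolution forward model (assumed boxcar kernel)."""
    return np.convolve(m, np.ones(7) / 7.0, mode="same")

m_true = basis @ (scales * rng.standard_normal(k))
sigma = 0.01                               # observation noise level
d_obs = forward(m_true) + sigma * rng.standard_normal(n)

# --- Metropolis sampling in the k-dimensional coefficient space ---
def log_post(c):
    r = d_obs - forward(basis @ c)
    return -0.5 * np.sum(r ** 2) / sigma ** 2 - 0.5 * np.sum((c / scales) ** 2)

c = np.zeros(k)
lp = log_post(c)
samples = []
for it in range(5000):
    c_new = c + 0.05 * scales * rng.standard_normal(k)
    lp_new = log_post(c_new)
    if np.log(rng.random()) < lp_new - lp:   # accept/reject
        c, lp = c_new, lp_new
    samples.append(c.copy())

post_mean = basis @ np.mean(samples[1000:], axis=0)   # discard burn-in
```

The key saving is that the chain explores a 5-dimensional space instead of a 50-dimensional one; the spread of `samples` (after burn-in) gives the uncertainty estimate discussed in the abstract.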
A Joint Method of Envelope Inversion Combined with Hybrid-domain Full Waveform Inversion
NASA Astrophysics Data System (ADS)
CUI, C.; Hou, W.
2017-12-01
Full waveform inversion (FWI) aims to construct high-precision subsurface models by fully using the information in seismic records, including amplitude, travel time, phase and so on. However, high non-linearity and the absence of low-frequency information in seismic data lead to the well-known cycle-skipping problem and make the inversion prone to falling into local minima. In addition, 3D inversion methods based on the acoustic approximation ignore elastic effects in the real seismic wavefield, making inversion harder still. As a result, the accuracy of the final inversion result relies heavily on the quality of the initial model. To improve the stability and quality of inversion results, multi-scale inversion, which reconstructs the subsurface model from low to high frequencies, is applied. But the absence of very low frequencies (< 3 Hz) in field data remains a bottleneck for FWI. By extracting ultra-low-frequency information from field data with a demodulation operator (the envelope operator), envelope inversion is able to recover a low-wavenumber model even though such low-frequency data do not actually exist in the field records. To improve the efficiency and viability of the inversion, in this study we propose a joint method of envelope inversion combined with hybrid-domain FWI. First, we developed a 3D elastic envelope inversion and derived the misfit function and the corresponding gradient operator. We then performed hybrid-domain FWI using the envelope inversion result as the initial model, which provides the low-wavenumber component of the model. Here, forward modeling is implemented in the time domain and inversion in the frequency domain. To accelerate the inversion, we adopted CPU/GPU heterogeneous computing techniques with two levels of parallelism. In the first level, the inversion tasks are decomposed and assigned to each computation node by shot number.
In the second level, GPU multithreaded programming is used for the computation tasks in each node, including forward modeling, envelope extraction, DFT (discrete Fourier transform) calculation and gradient calculation. Numerical tests demonstrated that the combined envelope inversion + hybrid-domain FWI obtains a more faithful and accurate result than conventional hybrid-domain FWI, and that the CPU/GPU heterogeneous parallel computation improves computational speed.
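The demodulation (envelope) operator at the heart of envelope inversion can be illustrated with the analytic signal. This is a minimal sketch assuming an FFT-based discrete Hilbert transform and a synthetic amplitude-modulated trace; it is not the authors' 3D elastic implementation.

```python
import numpy as np

def envelope(trace):
    """Signal envelope via the analytic signal (discrete Hilbert transform).

    Demodulation strips the carrier oscillation, leaving the slowly varying
    amplitude — low-wavenumber information recoverable even when the
    band-limited trace itself has no energy below ~3 Hz.
    """
    n = len(trace)
    spec = np.fft.fft(trace)
    h = np.zeros(n)                      # one-sided spectrum weights
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    analytic = np.fft.ifft(spec * h)
    return np.abs(analytic)

# Example: a 20 Hz carrier modulated by a slow Gaussian envelope
dt = 0.002
t = np.arange(0, 1, dt)
slow = np.exp(-((t - 0.5) ** 2) / (2 * 0.1 ** 2))   # low-frequency amplitude
trace = slow * np.cos(2 * np.pi * 20 * t)            # band-limited signal
env = envelope(trace)                                # recovers `slow`
```

In a real envelope-inversion workflow the misfit would be built from `env` rather than from `trace` itself, which is what makes the objective function smoother at low wavenumbers.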
Improved parameterization for the vertical flux of dust aerosols emitted by an eroding soil
USDA-ARS?s Scientific Manuscript database
The representation of the dust cycle in atmospheric circulation models hinges on an accurate parameterization of the vertical dust flux at emission. However, existing parameterizations of the vertical dust flux vary substantially in their scaling with wind friction velocity, require input parameters...
Climate and the equilibrium state of land surface hydrology parameterizations
NASA Technical Reports Server (NTRS)
Entekhabi, Dara; Eagleson, Peter S.
1991-01-01
For given climatic rates of precipitation and potential evaporation, the land surface hydrology parameterizations of atmospheric general circulation models will maintain soil-water storage conditions that balance the moisture input and output. The surface relative soil saturation for such climatic conditions serves as a measure of the land surface parameterization state under a given forcing. The equilibrium value of this variable for alternate parameterizations of land surface hydrology is determined as a function of climate, and the sensitivity of the surface to shifts and changes in climatic forcing is estimated.
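The equilibrium described above — soil saturation adjusting until moisture input balances output — can be sketched with a minimal bucket model. The evaporation and runoff functional forms below are illustrative assumptions, not any specific GCM land surface scheme.

```python
def equilibrium_saturation(P, Ep, runoff_exp=4.0, tol=1e-10):
    """Equilibrium relative soil saturation s* of a minimal bucket model.

    Balance (illustrative functional forms only):
        precipitation P = evaporation Ep*s + runoff P*s**runoff_exp
    f(s) is monotone decreasing with f(0) = P > 0 and f(1) = -Ep < 0,
    so a unique root in (0, 1) is found by bisection.
    """
    f = lambda s: P - Ep * s - P * s ** runoff_exp
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# A wetter climate (P >> Ep) equilibrates at higher saturation than a drier one,
# which is the sense in which s* serves as a measure of the scheme's state.
s_wet = equilibrium_saturation(P=5.0, Ep=2.0)
s_dry = equilibrium_saturation(P=1.0, Ep=2.0)
```

Comparing `s*` across alternate functional forms under the same (P, Ep) forcing is the kind of sensitivity analysis the abstract describes.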
Cross-Section Parameterizations for Pion and Nucleon Production From Negative Pion-Proton Collisions
NASA Technical Reports Server (NTRS)
Norbury, John W.; Blattnig, Steve R.; Norman, Ryan; Tripathi, R. K.
2002-01-01
Ranft has provided parameterizations of Lorentz invariant differential cross sections for pion and nucleon production in pion-proton collisions that are compared to some recent data. The Ranft parameterizations are then numerically integrated to form spectral and total cross sections. These numerical integrations are further parameterized to provide formulas for spectral and total cross sections suitable for use in radiation transport codes. The reactions analyzed are for charged pions in the initial state and both charged and neutral pions in the final state.
Controllers, observers, and applications thereof
NASA Technical Reports Server (NTRS)
Gao, Zhiqiang (Inventor); Zhou, Wankun (Inventor); Miklosovic, Robert (Inventor); Radke, Aaron (Inventor); Zheng, Qing (Inventor)
2011-01-01
Controller scaling and parameterization are described. Techniques that can be improved by employing the scaling and parameterization include, but are not limited to, controller design, tuning and optimization. The scaling and parameterization methods described here apply to transfer function based controllers, including PID controllers. The parameterization methods also apply to state feedback and state observer based controllers, as well as linear active disturbance rejection (ADRC) controllers. Parameterization simplifies the use of ADRC. A discrete extended state observer (DESO) and a generalized extended state observer (GESO) are described. They improve the performance of the ESO and therefore ADRC. A tracking control algorithm is also described that improves the performance of the ADRC controller. A general algorithm is described for applying ADRC to multi-input multi-output systems. Several specific applications of the control systems and processes are disclosed.
Vařeková, Radka Svobodová; Jiroušková, Zuzana; Vaněk, Jakub; Suchomel, Šimon; Koča, Jaroslav
2007-01-01
The Electronegativity Equalization Method (EEM) is a fast approach for charge calculation. A challenging part of the EEM is the parameterization, which is performed using ab initio charges obtained for a set of molecules. The goal of our work was to perform the EEM parameterization for selected sets of organic, organohalogen and organometal molecules. We have performed the most robust parameterization published so far. The EEM parameterization was based on 12 training sets selected from a database of predicted 3D structures (NCI DIS) and from a database of crystallographic structures (CSD). Each set contained from 2000 to 6000 molecules. We have shown that the number of molecules in the training set is very important for the quality of the parameters. We have improved the EEM parameters (STO-3G MPA charges) for elements that were already parameterized, specifically C, O, N, H, S, F and Cl. The new parameters provide more accurate charges than those published previously. We have also developed new parameters for elements that had not yet been parameterized, specifically Br, I, Fe and Zn. Finally, we performed crossover validation of all obtained parameters using all training sets that included the relevant elements and confirmed that the calculated parameters provide accurate charges.
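Once EEM parameters are in hand, charge calculation reduces to solving one linear system per molecule: equalize the effective electronegativities subject to a total-charge constraint. A minimal sketch, with purely illustrative parameter values (A, B, kappa) rather than the fitted parameters from the paper:

```python
import numpy as np

def eem_charges(A, B, R, kappa=0.529, Q_total=0.0):
    """Solve the EEM linear system for atomic charges.

    A, B  : per-atom EEM parameters (electronegativity and hardness terms);
            values passed below are illustrative, not fitted parameters.
    R     : n x n interatomic distance matrix (diagonal ignored).
    Solves  A_i + B_i*q_i + kappa * sum_{j != i} q_j / R_ij = chi_bar
    for all atoms i, subject to sum_i q_i = Q_total.
    """
    n = len(A)
    M = np.zeros((n + 1, n + 1))
    rhs = np.zeros(n + 1)
    for i in range(n):
        M[i, i] = B[i]
        for j in range(n):
            if j != i:
                M[i, j] = kappa / R[i, j]
        M[i, n] = -1.0            # unknown equalized electronegativity chi_bar
        rhs[i] = -A[i]
    M[n, :n] = 1.0                # total-charge constraint row
    rhs[n] = Q_total
    sol = np.linalg.solve(M, rhs)
    return sol[:n], sol[n]        # charges, chi_bar

# Two-atom example: atom 1 is less electronegative (A=2.0) than atom 2 (A=3.0),
# so charge should flow from atom 1 (q > 0) to atom 2 (q < 0).
q, chi_bar = eem_charges(np.array([2.0, 3.0]), np.array([1.0, 1.0]),
                         np.array([[0.0, 1.3], [1.3, 0.0]]))
```

Because the system is linear, the cost scales as a single (n+1)-dimensional solve per molecule, which is what makes EEM fast compared with ab initio charge calculation.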
Spectral cumulus parameterization based on cloud-resolving model
NASA Astrophysics Data System (ADS)
Baba, Yuya
2018-02-01
We have developed a spectral cumulus parameterization using a cloud-resolving model. It includes a new parameterization of the entrainment rate, derived from analysis of the cloud properties obtained from the cloud-resolving model simulation and valid for both shallow and deep convection. The new scheme was examined in a single-column model experiment and compared with the existing parameterization of Gregory (2001, Q J R Meteorol Soc 127:53-72) (GR scheme). The results showed that the GR scheme simulated shallower and more dilute convection than the new scheme. To further validate the physical performance of the parameterizations, Atmospheric Model Intercomparison Project (AMIP) experiments were performed, and the results were compared with reanalysis data. The new scheme performed better than the GR scheme in terms of the mean state and variability of the atmospheric circulation: it reduced the positive precipitation bias in the western Pacific region and the positive bias of outgoing shortwave radiation over the ocean. The new scheme also better simulated features of convectively coupled equatorial waves and the Madden-Julian oscillation. These improvements derive from the modified parameterization of the entrainment rate, which suppresses an excessive increase of entrainment and, in turn, an excessive increase of low-level clouds.
NASA Astrophysics Data System (ADS)
Zandomeneghi, D.; Aster, R. C.; Barclay, A. H.; Chaput, J. A.; Kyle, P. R.
2011-12-01
Erebus volcano (Ross Island), the most active volcano in Antarctica, is characterized by a persistent phonolitic lava lake at its summit and a wide range of seismic signals associated with its underlying long-lived magmatic system. The magmatic structure in a 3 by 3 km area around the summit has been imaged using high-quality data from a seismic tomographic experiment carried out during the 2008-2009 austral field season (Zandomeneghi et al., 2010). An array of 78 short period, 14 broadband, and 4 permanent Mount Erebus Volcano Observatory seismic stations and a program of 12 shots were used to model the velocity structure in the uppermost kilometer over the volcano conduit. P-wave travel times were inverted for the 3-D velocity structure using the shortest-time ray tracing (50-m grid spacing) and LSQR inversion (100-m node spacing) of a tomography code (Toomey et al., 1994) that allows for the inclusion of topography. Regularization is controlled by damping and smoothing weights and smoothing lengths, and addresses complications that are inherent in a strongly heterogeneous medium featuring rough topography and a dense parameterization and distribution of receivers/sources. The tomography reveals a composite distribution of very high and low P-wave velocity anomalies (i.e., exceeding 20% in some regions), indicating a complex sub-lava-lake magmatic geometry immediately beneath the summit region and in surrounding areas, as well as the presence of significant high velocity shallow regions. The strongest and broadest low velocity zone is located W-NW of the crater rim, indicating the presence of an off-axis shallow magma body. This feature spatially corresponds to the inferred centroid source of VLP signals associated with Strombolian eruptions and lava lake refill (Aster et al., 2008). Other resolved structures correlate with the Side Crater and with lineaments of ice cave thermal anomalies extending NE and SW of the rim. 
High velocities in the summit area possibly constitute the seismic image of an older caldera, solidified intrusions or massive lava flows. REFERENCES: Aster et al., (2008) Moment tensor inversion of very long period seismic signals from Strombolian eruptions of Erebus volcano. J. Volcanol. Geotherm. Res., 177, 635-647. Toomey et al., (1994), Tomographic imaging of the shallow crustal structure of the East Pacific Rise at 9°30'N. J. Geophys. Res., 99 (B12), 24,135-24,157. Zandomeneghi et al., (2010), Seismic Tomography of Erebus Volcano, Antarctica, Eos, 91, 6, 53-55.
NASA Astrophysics Data System (ADS)
Schiavon, Mario; Mazzola, Mauro; Lupi, Angelo; Drofa, Oxana; Tampieri, Francesco; Pelliccioni, Armando; Choi, Taejin; Vitale, Vito; Viola, Angelo P.
2017-04-01
At high latitudes, the Atmospheric Boundary Layer (ABL) is often characterized by extremely stable vertical stratification, since surface radiative cooling produces temperature inversions, especially during the polar night over land, ice and snow surfaces. Improvements are required in the theoretical understanding of the turbulent behavior of the high-latitude ABL, and the parameterizations of surface-atmosphere exchanges employed in numerical weather prediction and climate models also need to be tested in the Arctic area. Moreover, the boundary layer structure and dynamics influence the vertical distribution of aerosol. The main issue is related to the height of the ABL: the question is whether some decoupling occurs between the surface layer and the atmosphere aloft when the ABL is shallow, or whether the mechanical mixing due to the synoptic circulation provides overall vertical homogeneity of the aerosol concentration irrespective of the stability conditions. To this end, this work investigates the features of the high-latitude ABL with particular attention to its vertical structure, the relationships among the main turbulent statistics (in a similarity approach) and their variation with the ABL state. The data refer to measurements collected from 2012 to 2016 by slow- and fast-response sensors deployed at the 34 m high Amundsen-Nobile Climate Change Tower (CCT) installed at Ny-Ålesund, Svalbard. Data from four conventional Young anemometers and Vaisala thermo-hygrometers at 2, 4.8, 10.3 and 33.4 m a.g.l., interleaved with three aligned sonic anemometers at 3.7, 7.5 and 21 m a.g.l., are used in the analysis. The results highlight that the performance of commonly adopted ABL similarity schemes (e.g. flux-gradient relationships and parameterizations for the stable ABL height) depends on the ABL state, determined mainly by the wind speed and the shape of the profiles of the second-order moments (the two being related).
For neutral or stable stratification, strong wind and second-order moments monotonically decreasing with height (the traditional stable ABL), classical similarity schemes perform well in the Arctic ABL as well. Critical conditions, for which the classical similarity approach is not satisfactory, instead occur for low wind and profiles of second-order moments that deviate from the traditional case, e.g. the upside-down ABL. Numerical experiments with the atmospheric model Bolam have been performed in hindcast mode for the whole period April-August 2013, on a domain covering the area of the observations, in order to assess the capability of an atmospheric numerical model to reproduce the observed vertical profiles in the ABL under different synoptic situations.
NASA Technical Reports Server (NTRS)
Rasool, Quazi Z.; Zhang, Rui; Lash, Benjamin; Cohan, Daniel S.; Cooter, Ellen J.; Bash, Jesse O.; Lamsal, Lok N.
2016-01-01
Modeling of soil nitric oxide (NO) emissions is highly uncertain and may misrepresent its spatial and temporal distribution. This study builds upon a recently introduced parameterization to improve the timing and spatial distribution of soil NO emission estimates in the Community Multiscale Air Quality (CMAQ) model. The parameterization considers soil parameters, meteorology, land use, and mineral nitrogen (N) availability to estimate NO emissions. We incorporate daily year-specific fertilizer data from the Environmental Policy Integrated Climate (EPIC) agricultural model to replace the annual generic data of the initial parameterization, and use a 12km resolution soil biome map over the continental USA. CMAQ modeling for July 2011 shows slight differences in model performance in simulating fine particulate matter and ozone from Interagency Monitoring of Protected Visual Environments (IMPROVE) and Clean Air Status and Trends Network (CASTNET) sites and NO2 columns from Ozone Monitoring Instrument (OMI) satellite retrievals. We also simulate how the change in soil NO emissions scheme affects the expected O3 response to projected emissions reductions.
Improving microphysics in a convective parameterization: possibilities and limitations
NASA Astrophysics Data System (ADS)
Labbouz, Laurent; Heikenfeld, Max; Stier, Philip; Morrison, Hugh; Milbrandt, Jason; Protat, Alain; Kipling, Zak
2017-04-01
The convective cloud field model (CCFM) is a convective parameterization implemented in the climate model ECHAM6.1-HAM2.2. It represents a population of clouds within each ECHAM-HAM model column, simulating up to 10 different convective cloud types with individual radius, vertical velocities and microphysical properties. Comparisons between CCFM and radar data at Darwin, Australia, show that in order to reproduce both the convective cloud top height distribution and the vertical velocity profile, the effect of aerodynamic drag on the rising parcel has to be considered, along with a reduced entrainment parameter. A new double-moment microphysics (the Predicted Particle Properties scheme, P3) has been implemented in the latest version of CCFM and is compared to the standard single-moment microphysics and the radar retrievals at Darwin. The microphysical process rates (autoconversion, accretion, deposition, freezing, …) and their response to changes in CDNC are investigated and compared to high resolution CRM WRF simulations over the Amazon region. The results shed light on the possibilities and limitations of microphysics improvements in the framework of CCFM and in convective parameterizations in general.
Anisotropic mesoscale eddy transport in ocean general circulation models
NASA Astrophysics Data System (ADS)
Reckinger, Scott; Fox-Kemper, Baylor; Bachman, Scott; Bryan, Frank; Dennis, John; Danabasoglu, Gokhan
2014-11-01
In modern climate models, the effects of oceanic mesoscale eddies are introduced by relating subgrid eddy fluxes to the resolved gradients of buoyancy or other tracers, where the proportionality is, in general, governed by an eddy transport tensor. The symmetric part of the tensor, which represents the diffusive effects of mesoscale eddies, is universally treated isotropically. However, the diffusive processes that the parameterization approximates, such as shear dispersion and potential vorticity barriers, typically have strongly anisotropic characteristics. Generalizing the eddy diffusivity tensor for anisotropy extends the number of parameters from one to three: major diffusivity, minor diffusivity, and alignment. The Community Earth System Model (CESM) with the anisotropic eddy parameterization is used to test various choices for the parameters, which are motivated by observations and the eddy transport tensor diagnosed from high resolution simulations. Simply setting the ratio of major to minor diffusivities to a value of five globally, while aligning the major axis along the flow direction, improves biogeochemical tracer ventilation and reduces temperature and salinity biases. These effects can be improved by parameterizing the oceanic anisotropic transport mechanisms.
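Generalizing the eddy diffusivity tensor for anisotropy, as described above, amounts to rotating a diagonal tensor of major and minor diffusivities into the chosen alignment frame. A minimal 2D sketch, assuming a flow-aligned major axis and the ratio-of-five example from the abstract:

```python
import numpy as np

def anisotropic_diffusivity(kappa_major, ratio, u, v):
    """Symmetric 2x2 eddy diffusivity tensor with the major axis aligned
    along the local flow direction (u, v).

    kappa_major : along-flow (major) diffusivity, m^2/s
    ratio       : major/minor diffusivity ratio (e.g. 5, as in the text)
    """
    theta = np.arctan2(v, u)                  # alignment angle of the flow
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])           # rotation from flow frame to x-y
    D = np.diag([kappa_major, kappa_major / ratio])
    return R @ D @ R.T                        # symmetric tensor in x-y frame

# Major axis along the (1, 1) flow direction, major/minor ratio of five
K = anisotropic_diffusivity(1000.0, 5.0, u=1.0, v=1.0)
```

The three parameters named in the abstract map directly onto `kappa_major`, `ratio`, and `theta`; the isotropic case is recovered with `ratio = 1`, for which `K` collapses to a scalar times the identity.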
Liu, Ping; Li, Guodong; Liu, Xinggao; Xiao, Long; Wang, Yalin; Yang, Chunhua; Gui, Weihua
2018-02-01
A high-quality control method is essential for the implementation of an aircraft autopilot system. An optimal control problem model accounting for the safe aerodynamic envelope is therefore established to improve the control quality of aircraft flight-level tracking. A novel non-uniform control vector parameterization (CVP) method with time grid refinement is then proposed for solving the optimal control problem. By introducing Hilbert-Huang transform (HHT) analysis, an efficient time grid refinement approach is presented and an adaptive time grid is obtained automatically. With this refinement, the proposed method needs fewer optimization parameters to achieve better control quality than the uniform-refinement CVP method, at lower computational cost. Two well-known flight-level altitude tracking problems and one minimum-time cost problem are tested as illustrations, with the uniform-refinement CVP method adopted as the comparative baseline. Numerical results show that the proposed method achieves better performance in terms of optimization accuracy and computational cost while efficiently improving control quality.
Sensitivity of liquid clouds to homogenous freezing parameterizations.
Herbert, Ross J; Murray, Benjamin J; Dobbie, Steven J; Koop, Thomas
2015-03-16
Water droplets in some clouds can supercool to temperatures where homogeneous ice nucleation becomes the dominant freezing mechanism. In many cloud-resolving and mesoscale models, it is assumed that homogeneous ice nucleation in water droplets only occurs below some threshold temperature, typically set at -40°C. However, laboratory measurements show that there is a finite rate of nucleation at warmer temperatures. In this study we use a parcel model with detailed microphysics to show that cloud properties can be sensitive to homogeneous ice nucleation as warm as -30°C. Thus, homogeneous ice nucleation may be more important for cloud development, precipitation rates, and key cloud radiative parameters than is often assumed. Furthermore, we show that cloud development is particularly sensitive to the temperature dependence of the nucleation rate. In order to better constrain the parameterization of homogeneous ice nucleation, laboratory measurements are needed at both high (>-35°C) and low (<-38°C) temperatures. Key points: homogeneous freezing may be significant as warm as -30°C; homogeneous freezing should not be represented by a threshold approximation; an improved parameterization of homogeneous ice nucleation is needed.
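The contrast between a hard threshold and a finite nucleation rate can be made concrete with the stochastic freezing probability P = 1 - exp(-J V Δt). The nucleation-rate expression below is a purely illustrative stand-in, not the laboratory-derived parameterization discussed in the abstract:

```python
import numpy as np

def frozen_fraction(T_c, volume_m3, dt_s, log10_J_slope=4.0):
    """Frozen droplet fraction from a rate-based treatment: 1 - exp(-J V dt).

    The rate J here is an invented stand-in (log10 J rising 4 decades per
    kelvin of cooling below -30 degC), used only to contrast the two
    treatments; it is not a fit to laboratory data.
    """
    log10_J = log10_J_slope * (-30.0 - T_c)   # J in m^-3 s^-1 (illustrative)
    J = 10.0 ** log10_J
    return 1.0 - np.exp(-J * volume_m3 * dt_s)

def frozen_fraction_threshold(T_c, T_thresh=-40.0):
    """Hard-threshold treatment: all droplets freeze below T_thresh."""
    return 1.0 if T_c < T_thresh else 0.0

V = 4.0 / 3.0 * np.pi * (10e-6) ** 3          # 10-micron-radius droplet, m^3
# At -33 degC over 100 s the rate-based fraction is already non-negligible,
# while the threshold scheme still predicts no freezing at all.
f_rate = frozen_fraction(-33.0, V, 100.0)
f_thresh = frozen_fraction_threshold(-33.0)
```

The strong dependence of `f_rate` on `log10_J_slope` is the code-level analogue of the abstract's point that cloud development is particularly sensitive to the temperature dependence of the nucleation rate.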
FINAL REPORT (DE-FG02-97ER62338): Single-column modeling, GCM parameterizations, and ARM data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Richard C. J. Somerville
2009-02-27
Our overall goal is the development of new and improved parameterizations of cloud-radiation effects and related processes, using ARM data at all three ARM sites, and the implementation and testing of these parameterizations in global models. To test recently developed prognostic parameterizations based on detailed cloud microphysics, we have compared SCM (single-column model) output with ARM observations at the SGP, NSA and TWP sites. We focus on the predicted cloud amounts and on a suite of radiative quantities strongly dependent on clouds, such as downwelling surface shortwave radiation. Our results demonstrate the superiority of parameterizations based on comprehensive treatments of cloud microphysics and cloud-radiative interactions. At the SGP and NSA sites, the SCM results simulate the ARM measurements well and are demonstrably more realistic than typical parameterizations found in conventional operational forecasting models. At the TWP site, the model performance depends strongly on details of the scheme, and the results of our diagnostic tests suggest ways to develop improved parameterizations better suited to simulating cloud-radiation interactions in the tropics generally. These advances have made it possible to take the next step and build on this progress, by incorporating our parameterization schemes in state-of-the-art three-dimensional atmospheric models, and diagnosing and evaluating the results using independent data. Because the improved cloud-radiation results have been obtained largely via implementing detailed and physically comprehensive cloud microphysics, we anticipate that improved predictions of hydrologic cycle components, and hence of precipitation, may also be achievable.
NASA Astrophysics Data System (ADS)
Määttänen, Anni; Merikanto, Joonas; Henschel, Henning; Duplissy, Jonathan; Makkonen, Risto; Ortega, Ismael K.; Vehkamäki, Hanna
2018-01-01
We have developed new parameterizations of electrically neutral homogeneous and ion-induced sulfuric acid-water particle formation for large ranges of environmental conditions, based on an improved model that has been validated against a particle formation rate data set produced by Cosmics Leaving OUtdoor Droplets (CLOUD) experiments at the European Organization for Nuclear Research (CERN). The model uses a thermodynamically consistent version of the Classical Nucleation Theory normalized using quantum chemical data. Unlike the earlier parameterizations for H2SO4-H2O nucleation, the model is applicable to extremely dry conditions where the one-component sulfuric acid limit is approached. Parameterizations are presented for the critical cluster sulfuric acid mole fraction, the critical cluster radius, the total number of molecules in the critical cluster, and the particle formation rate. If the critical cluster contains only one sulfuric acid molecule, a simple formula for kinetic particle formation can be used; this threshold has also been parameterized. The parameterization for electrically neutral particle formation is valid for the following ranges: temperatures 165-400 K, sulfuric acid concentrations 10^4-10^13 cm^-3, and relative humidities 0.001-100%. The ion-induced particle formation parameterization is valid for temperatures 195-400 K, sulfuric acid concentrations 10^4-10^16 cm^-3, and relative humidities 10^-5-100%. The new parameterizations are thus applicable for the full range of conditions in the Earth's atmosphere relevant for binary sulfuric acid-water particle formation, including both tropospheric and stratospheric conditions. They are also suitable for describing particle formation in the atmosphere of Venus.
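A practical consequence of publishing explicit validity ranges is that callers can guard their inputs before evaluating the parameterization. A trivial sketch encoding the ranges quoted above (the function name and interface are hypothetical, not part of the published scheme):

```python
def binary_nucleation_inputs_valid(temp_k, h2so4_cm3, rh_percent,
                                   ion_induced=False):
    """Check inputs against the parameterization's stated validity ranges.

    Ranges are those quoted in the abstract; the function itself is a
    hypothetical guard, not part of the published parameterization.
    """
    if ion_induced:
        # Ion-induced: 195-400 K, 10^4-10^16 cm^-3, RH 10^-5-100%
        return (195.0 <= temp_k <= 400.0
                and 1e4 <= h2so4_cm3 <= 1e16
                and 1e-5 <= rh_percent <= 100.0)
    # Electrically neutral: 165-400 K, 10^4-10^13 cm^-3, RH 0.001-100%
    return (165.0 <= temp_k <= 400.0
            and 1e4 <= h2so4_cm3 <= 1e13
            and 0.001 <= rh_percent <= 100.0)
```

Such a guard makes it explicit that, for example, a stratospheric-Venus-like dry case may fall inside the ion-induced ranges while lying outside the neutral ones.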
USDA-ARS?s Scientific Manuscript database
Simulation models can be used to make management decisions when properly parameterized. This study aimed to parameterize the ALMANAC (Agricultural Land Management Alternatives with Numerical Assessment Criteria) crop simulation model for dry bean in the semi-arid temperate areas of Mexico. The par...
Midgley, S M
2004-01-21
A novel parameterization of x-ray interaction cross-sections is developed and employed to describe the x-ray linear attenuation coefficient and mass energy absorption coefficient for both elements and mixtures. The new parameterization scheme addresses the Z-dependence of elemental cross-sections (per electron) using a simple function of atomic number, Z. This obviates the need for a complicated mathematical formalism. Energy-dependent coefficients describe the Z-direction curvature of the cross-sections. The composition-dependent quantities are the electron density and statistical moments describing the elemental distribution. We show that it is possible to describe elemental cross-sections for the entire periodic table and at energies above the K-edge (from 6 keV to 125 MeV) with an accuracy of better than 2% using a parameterization containing no more than five coefficients. For the biologically important elements 1 ≤ Z ≤ 20 and the energy range 30-150 keV, the parameterization uses four coefficients. At higher energies, the parameterization uses fewer coefficients, with only two needed at megavoltage energies.
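The idea of a few-coefficient description in Z can be illustrated with synthetic data: fit a five-coefficient (degree-4) polynomial to the logarithm of a per-electron cross-section across the periodic table. The functional form and noise level below are invented for the demonstration and are not the paper's actual scheme:

```python
import numpy as np

rng = np.random.default_rng(1)
Z = np.arange(1.0, 93.0)                     # atomic numbers 1..92
# Synthetic "per-electron cross-section": a smooth (invented) trend in Z
# plus 1% multiplicative noise standing in for tabulation scatter.
sigma_e = np.exp(0.5 + 0.03 * Z - 1e-4 * Z ** 2) \
          * (1.0 + 0.01 * rng.standard_normal(Z.size))

# Five coefficients describe the whole periodic table at this "energy";
# in the actual scheme the coefficients would be energy dependent.
coeffs = np.polyfit(Z, np.log(sigma_e), deg=4)
fit = np.exp(np.polyval(coeffs, Z))
max_rel_err = np.max(np.abs(fit / sigma_e - 1.0))
```

Repeating such a fit at each tabulated energy, and then describing how the five coefficients vary with energy, captures the spirit of the parameterization the abstract describes.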
Simulations of isoprene: Ozone reactions for a general circulation/chemical transport model
NASA Technical Reports Server (NTRS)
Makar, P. A.; Mcconnell, J. C.
1994-01-01
A parameterized reaction mechanism has been created to examine the interactions between isoprene and other tropospheric gas-phase chemicals. Tests of the parameterization have shown that its results match those of a more complex reaction set to a high degree of accuracy. Comparisons between test runs have shown that the presence of isoprene at the start of a six day interval can enhance later ozone concentrations by as much as twenty-nine percent. The test cases used no input fluxes beyond the initial time, implying that a single input of a biogenic hydrocarbon to an airmass can alter its ozone chemistry over a time scale on the order of a week.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Yang; Leung, L. Ruby; Fan, Jiwen
This is a collaborative project among North Carolina State University, Pacific Northwest National Laboratory, and Scripps Institution of Oceanography, University of California at San Diego to address the critical need for an accurate representation of the aerosol indirect effect in climate and Earth system models. In this project, we propose to develop and improve parameterizations of aerosol-cloud-precipitation feedbacks in climate models and apply them to study the effect of aerosols and clouds on radiation and the hydrologic cycle. Our overall objective is to develop, improve, and evaluate parameterizations to enable more accurate simulations of these feedbacks in high-resolution regional and global climate models.
An empirical approach for estimating stress-coupling lengths for marine-terminating glaciers
Enderlin, Ellyn; Hamilton, Gordon S.; O'Neel, Shad; Bartholomaus, Timothy C.; Morlighem, Mathieu; Holt, John W.
2016-01-01
Here we present a new empirical method to estimate the stress-coupling length (SCL) for marine-terminating glaciers using high-resolution observations. We use the empirically determined periodicity in resistive stress oscillations as a proxy for the SCL. Application of our empirical method to two well-studied tidewater glaciers (Helheim Glacier, SE Greenland, and Columbia Glacier, Alaska, USA) demonstrates that SCL estimates obtained using this approach are consistent with theory (i.e., can be parameterized as a function of the ice thickness) and with prior, independent SCL estimates. In order to accurately resolve stress variations, we suggest that similar empirical stress-coupling parameterizations be employed in future analyses of glacier dynamics.
NASA Astrophysics Data System (ADS)
Brill, Nicolai; Wirtz, Mathias; Merhof, Dorit; Tingart, Markus; Jahr, Holger; Truhn, Daniel; Schmitt, Robert; Nebelung, Sven
2016-07-01
Polarization-sensitive optical coherence tomography (PS-OCT) is a light-based, high-resolution, real-time, noninvasive, and nondestructive imaging modality yielding quasimicroscopic cross-sectional images of cartilage. As yet, comprehensive parameterization and quantification of birefringence and tissue properties have not been performed on human cartilage. PS-OCT and algorithm-based image analysis were used to objectively grade human cartilage degeneration in terms of surface irregularity, tissue homogeneity, signal attenuation, as well as birefringence coefficient and band width, height, depth, and number. Degeneration-dependent changes were noted for the former three parameters exclusively, thereby questioning the diagnostic value of PS-OCT in the assessment of human cartilage degeneration.
Gunalan, Kabilar; Chaturvedi, Ashutosh; Howell, Bryan; Duchin, Yuval; Lempka, Scott F.; Patriat, Remi; Sapiro, Guillermo; Harel, Noam; McIntyre, Cameron C.
2017-01-01
Background: Deep brain stimulation (DBS) is an established clinical therapy and computational models have played an important role in advancing the technology. Patient-specific DBS models are now common tools in both academic and industrial research, as well as clinical software systems. However, the exact methodology for creating patient-specific DBS models can vary substantially and important technical details are often missing from published reports. Objective: Provide a detailed description of the assembly workflow and parameterization of a patient-specific DBS pathway-activation model (PAM) and predict the response of the hyperdirect pathway to clinical stimulation. Methods: Integration of multiple software tools (e.g., COMSOL, MATLAB, FSL, NEURON, Python) enables the creation and visualization of a DBS PAM. An example DBS PAM was developed using 7T magnetic resonance imaging data from a single unilaterally implanted patient with Parkinson's disease (PD). This detailed description implements our best computational practices and most elaborate parameterization steps, as defined from over a decade of technical evolution. Results: Pathway recruitment curves and strength-duration relationships highlight the non-linear response of axons to changes in the DBS parameter settings. Conclusion: Parameterization of patient-specific DBS models can be highly detailed and constrained, thereby providing confidence in the simulation predictions, but at the expense of time-demanding technical implementation steps. DBS PAMs represent new tools for investigating possible correlations between brain pathway activation patterns and clinical symptom modulation.
Anisotropic Mesoscale Eddy Transport in Ocean General Circulation Models
NASA Astrophysics Data System (ADS)
Reckinger, S. J.; Fox-Kemper, B.; Bachman, S.; Bryan, F.; Dennis, J.; Danabasoglu, G.
2014-12-01
Modern climate models are limited to coarse-resolution representations of large-scale ocean circulation that rely on parameterizations for mesoscale eddies. The effects of eddies are typically introduced by relating subgrid eddy fluxes to the resolved gradients of buoyancy or other tracers, where the proportionality is, in general, governed by an eddy transport tensor. The symmetric part of the tensor, which represents the diffusive effects of mesoscale eddies, is universally treated isotropically in general circulation models. Thus, only a single parameter, namely the eddy diffusivity, is used at each spatial and temporal location to impart the influence of mesoscale eddies on the resolved flow. However, the diffusive processes that the parameterization approximates, such as shear dispersion, potential vorticity barriers, oceanic turbulence, and instabilities, typically have strongly anisotropic characteristics. Generalizing the eddy diffusivity tensor for anisotropy extends the number of parameters to three: a major diffusivity, a minor diffusivity, and the principal axis of alignment. The Community Earth System Model (CESM) with the anisotropic eddy parameterization is used to test various choices for the newly introduced parameters, which are motivated by observations and the eddy transport tensor diagnosed from high resolution simulations. Simply setting the ratio of major to minor diffusivities to a value of five globally, while aligning the major axis along the flow direction, improves biogeochemical tracer ventilation and reduces global temperature and salinity biases. These effects can be improved even further by parameterizing the anisotropic transport mechanisms in the ocean.
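The three-parameter anisotropic form described above can be made concrete with a short sketch: the function below builds the symmetric eddy diffusivity tensor from a major diffusivity, a minor diffusivity, and a principal-axis alignment angle. The numerical values are illustrative only, not the CESM settings.

```python
import numpy as np

def anisotropic_diffusivity(kappa_major, kappa_minor, theta):
    """Build a 2-D symmetric eddy diffusivity tensor from a major
    diffusivity, a minor diffusivity, and a principal-axis angle theta
    (radians): K = R(theta) @ diag(kappa_major, kappa_minor) @ R(theta).T"""
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])
    return R @ np.diag([kappa_major, kappa_minor]) @ R.T

# A major-to-minor ratio of five, as tested in the text, with the major
# axis aligned along a hypothetical flow direction of 30 degrees.
K = anisotropic_diffusivity(1000.0, 200.0, np.radians(30.0))
# K is symmetric; its eigenvalues recover the two diffusivities, and its
# eigenvectors give the alignment of the principal axes.
```

Setting `kappa_major == kappa_minor` recovers the isotropic single-parameter case used universally in the models the abstract critiques.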
A unified spectral parameterization for wave breaking: from the deep ocean to the surf zone
NASA Astrophysics Data System (ADS)
Filipot, J.
2010-12-01
A new wave-breaking dissipation parameterization designed for spectral wave models is presented. It combines basic physical quantities of wave breaking, namely the breaking probability and the dissipation rate per unit area. The energy lost by waves is first calculated in the physical space before being distributed over the relevant spectral components. This parameterization allows a seamless numerical model from the deep ocean into the surf zone. This transition from deep to shallow water is made possible by a dissipation rate per unit area of breaking waves that varies with the wave height, wavelength, and water depth. The parameterization is further tested in the WAVEWATCH III code, from the global ocean to the beach scale. Model errors are smaller than with most specialized deep- or shallow-water parameterizations.
Methods of testing parameterizations: Vertical ocean mixing
NASA Technical Reports Server (NTRS)
Tziperman, Eli
1992-01-01
The ocean's velocity field is characterized by an exceptional variety of scales. While the small-scale oceanic turbulence responsible for vertical mixing in the ocean occurs at scales of a few centimeters and smaller, the oceanic general circulation is characterized by horizontal scales of thousands of kilometers. In oceanic general circulation models that are typically run today, the vertical structure of the ocean is represented by a few tens of discrete grid points. Such models cannot explicitly model the small-scale mixing processes, and must, therefore, find ways to parameterize them in terms of the larger-scale fields. Finding a parameterization that is both reliable and plausible to use in ocean models is not a simple task. Vertical mixing in the ocean is the combined result of many complex processes, and, in fact, mixing is one of the less known and less understood aspects of the oceanic circulation. In present models of the oceanic circulation, the many complex processes responsible for vertical mixing are often parameterized in an oversimplified manner. Yet, finding an adequate parameterization of vertical ocean mixing is crucial to the successful application of ocean models to climate studies. The results of general circulation models for quantities that are of particular interest to climate studies, such as the meridional heat flux carried by the ocean, are quite sensitive to the strength of the vertical mixing. We examine the difficulties in choosing an appropriate vertical mixing parameterization, and the methods that are available for validating different parameterizations by comparing model results to oceanographic data. First, some of the physical processes responsible for vertically mixing the ocean are briefly mentioned, and some possible approaches to the parameterization of these processes in oceanographic general circulation models are described in the following section.
We then discuss the role of the vertical mixing in the physics of the large-scale ocean circulation, and examine methods of validating mixing parameterizations using large-scale ocean models.
NASA Astrophysics Data System (ADS)
Roy, Dipankar; Marianski, Mateusz; Maitra, Neepa T.; Dannenberg, J. J.
2012-10-01
We compare dispersion and induction interactions for noble gas dimers and for Ne, methane, and 2-butyne with HF and LiF using a variety of functionals (including some specifically parameterized to evaluate dispersion interactions) with ab initio methods including CCSD(T) and MP2. We see that inductive interactions tend to enhance dispersion and may be accompanied by charge-transfer. We show that the functionals do not generally follow the expected trends in interaction energies, basis set superposition errors (BSSE), and interaction distances as a function of basis set size. The functionals parameterized to treat dispersion interactions often overestimate these interactions, sometimes by quite a lot, when compared to higher level calculations. Which functionals work best depends upon the examples chosen. The B3LYP and X3LYP functionals, which do not describe pure dispersion interactions, appear to describe dispersion mixed with induction about as accurately as those parameterized to treat dispersion. We observed significant differences in high-level wavefunction calculations in a basis set larger than those used to generate the structures in many of the databases. We discuss the implications for highly parameterized functionals based on these databases, as well as the use of simple potential energies for fitting the parameters rather than experimentally determinable thermodynamic state functions that involve consideration of vibrational states.
Multiresolution Iterative Reconstruction in High-Resolution Extremity Cone-Beam CT
Cao, Qian; Zbijewski, Wojciech; Sisniega, Alejandro; Yorkston, John; Siewerdsen, Jeffrey H; Stayman, J Webster
2016-01-01
Application of model-based iterative reconstruction (MBIR) to high-resolution cone-beam CT (CBCT) is computationally challenging because of the very fine discretization (voxel size <100 µm) of the reconstructed volume. Moreover, standard MBIR techniques require that the complete transaxial support for the acquired projections is reconstructed, thus precluding acceleration by restricting the reconstruction to a region-of-interest. To reduce the computational burden of high-resolution MBIR, we propose a multiresolution Penalized-Weighted Least Squares (PWLS) algorithm, where the volume is parameterized as a union of fine and coarse voxel grids, together with selective binning of detector pixels. We introduce a penalty function designed to regularize across the boundaries between the two grids. The algorithm was evaluated in simulation studies emulating an extremity CBCT system and in a physical study on a test-bench. Artifacts arising from the mismatched discretization of the fine and coarse sub-volumes were investigated. The fine grid region was parameterized using 0.15 mm voxels and the voxel size in the coarse grid region was varied by changing a downsampling factor. No significant artifacts were found in either of the regions for downsampling factors of up to 4×. For a typical extremity CBCT volume size, this downsampling makes the reconstruction more than five times faster than a brute-force solution that applies the fine voxel parameterization to the entire volume. For certain configurations of the coarse and fine grid regions, in particular when the boundary between the regions does not cross high attenuation gradients, downsampling factors as high as 10× can be used without introducing artifacts, yielding a ~50× speedup in PWLS.
The proposed multiresolution algorithm significantly reduces the computational burden of high resolution iterative CBCT reconstruction and can be extended to other applications of MBIR where computationally expensive, high-fidelity forward models are applied only to a sub-region of the field-of-view.
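The quoted accelerations follow largely from voxel-count arithmetic: the coarse background shrinks the number of unknowns by the cube of the downsampling factor. A sketch under assumed, illustrative grid dimensions (not the dimensions used in the study); the unknown count is only a rough proxy for reconstruction cost.

```python
def multires_voxel_count(shape, roi_shape, downsample):
    """Approximate unknown counts for a union of a fine region-of-interest
    grid and a coarse background grid. `shape` and `roi_shape` are
    (nx, ny, nz) in fine-grid voxels; the background is downsampled
    isotropically by `downsample` in each dimension."""
    fine = roi_shape[0] * roi_shape[1] * roi_shape[2]
    total = shape[0] * shape[1] * shape[2]
    coarse = (total - fine) // downsample**3
    return fine + coarse

# Hypothetical 512^3 fine-voxel volume with a 128^3 region of interest.
full = multires_voxel_count((512,) * 3, (512,) * 3, 1)   # brute-force fine grid
mixed = multires_voxel_count((512,) * 3, (128,) * 3, 4)  # 4x coarse background
speedup_proxy = full / mixed  # upper bound; real speedup is smaller
```

With these made-up dimensions the unknown count drops by roughly 30×; the more modest >5× speedup reported above reflects the additional costs (projection, penalty evaluation) that do not scale purely with unknown count.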
NASA Astrophysics Data System (ADS)
Waring, Michael S.
2016-11-01
Terpene ozonolysis reactions can be a strong source of secondary organic aerosol (SOA) indoors. SOA formation can be parameterized and predicted using the aerosol mass fraction (AMF), also known as the SOA yield, which quantifies the mass ratio of generated SOA to oxidized terpene. Limonene is a monoterpene that is at sufficient concentrations such that it reacts meaningfully with ozone indoors. It has two unsaturated bonds, and the magnitude of the limonene ozonolysis AMF varies by a factor of ∼4 depending on whether one or both of its unsaturated bonds are ozonated, which depends on whether ozone is in excess compared to limonene as well as the available time for reactions indoors. Hence, this study developed a framework to predict the limonene AMF as a function of the ozone [O3] and limonene [lim] concentrations and the air exchange rate (AER, h-1), which is the inverse of the residence time. Empirical AMF data were used to calculate a mixing coefficient, β, that would yield a 'resultant AMF' as the combination of the AMFs due to ozonolysis of one or both of limonene's unsaturated bonds, within the volatility basis set (VBS) organic aerosol framework. Then, β was regressed against predictors of log10([O3]/[lim]) and AER (R2 = 0.74). The β increased as the log10([O3]/[lim]) increased and as AER decreased, having the physical meaning of driving the resultant AMF to the upper AMF condition when both unsaturated bonds of limonene are ozonated. Modeling demonstrates that using the correct resultant AMF to simulate SOA formation owing to limonene ozonolysis is crucial for accurate indoor prediction.
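One way to read the 'resultant AMF' above is as a β-weighted combination of the two limiting AMFs. The sketch below assumes a linear mixing form and placeholder regression coefficients; neither the functional form nor the coefficient values are taken from the study.

```python
import math

def resultant_amf(beta, amf_one_bond, amf_both_bonds):
    """Resultant aerosol mass fraction as a beta-weighted combination of
    the single-bond and double-bond ozonolysis AMFs (linear form assumed)."""
    return (1.0 - beta) * amf_one_bond + beta * amf_both_bonds

def beta_estimate(o3, limonene, aer, b0=0.5, b1=0.2, b2=-0.1):
    """Illustrative regression of beta on log10([O3]/[lim]) and the air
    exchange rate AER (h^-1); b0..b2 are placeholders, not fitted values.
    Consistent with the text, beta rises with log10([O3]/[lim]) (b1 > 0)
    and falls with AER (b2 < 0)."""
    raw = b0 + b1 * math.log10(o3 / limonene) + b2 * aer
    return min(1.0, max(0.0, raw))  # beta is a mixing fraction in [0, 1]

# Ozone in excess and a long residence time push beta toward 1, i.e. toward
# the upper AMF condition where both unsaturated bonds are ozonated.
beta = beta_estimate(o3=20.0, limonene=5.0, aer=0.5)
amf = resultant_amf(beta, amf_one_bond=0.1, amf_both_bonds=0.4)
```

The factor-of-~4 spread between `amf_one_bond` and `amf_both_bonds` mirrors the variation quoted in the abstract; the concentrations and AER are hypothetical inputs.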
Frembgen-Kesner, Tamara; Andrews, Casey T.; Li, Shuxiang; Ngo, Nguyet Anh; Shubert, Scott A.; Jain, Aakash; Olayiwola, Oluwatoni; Weishaar, Mitch R.; Elcock, Adrian H.
2015-01-01
Recently, we reported the parameterization of a set of coarse-grained (CG) nonbonded potential functions, derived from all-atom explicit-solvent molecular dynamics (MD) simulations of amino acid pairs, and designed for use in (implicit-solvent) Brownian dynamics (BD) simulations of proteins; this force field was named COFFDROP (COarse-grained Force Field for Dynamic Representations Of Proteins). Here, we describe the extension of COFFDROP to include bonded backbone terms derived from fitting to results of explicit-solvent MD simulations of all possible two-residue peptides containing the 20 standard amino acids, with histidine modeled in both its protonated and neutral forms. The iterative Boltzmann inversion (IBI) method was used to optimize new CG potential functions for backbone-related terms by attempting to reproduce angle, dihedral and distance probability distributions generated by the MD simulations. In a simple test of the transferability of the extended force field, the angle, dihedral and distance probability distributions obtained from BD simulations of 56 three-residue peptides were compared to results from corresponding explicit-solvent MD simulations. In a more challenging test of the COFFDROP force field, it was used to simulate eight intrinsically disordered proteins and was shown to quite accurately reproduce the experimental hydrodynamic radii (Rhydro), provided that the favorable nonbonded interactions of the force field were uniformly scaled downwards in magnitude. Overall, the results indicate that the COFFDROP force field is likely to find use in modeling the conformational behavior of intrinsically disordered proteins and multi-domain proteins connected by flexible linkers.
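The iterative Boltzmann inversion step used to fit the backbone terms can be sketched generically; this is the textbook IBI update on a tabulated potential, not the COFFDROP implementation.

```python
import numpy as np

def ibi_update(potential, p_current, p_target, kT=1.0, alpha=1.0, floor=1e-12):
    """One iterative Boltzmann inversion step: nudge a tabulated CG potential
    so the simulated distribution p_current approaches the target p_target:
        V_{n+1}(x) = V_n(x) + alpha * kT * ln(p_current(x) / p_target(x))
    The floor avoids log(0) in empty histogram bins; alpha damps the update."""
    p_cur = np.clip(p_current, floor, None)
    p_tgt = np.clip(p_target, floor, None)
    return potential + alpha * kT * np.log(p_cur / p_tgt)

# Where the simulated distribution overshoots the target, the potential is
# raised, pushing probability out of that bin on the next iteration; where it
# undershoots, the potential is lowered.
V = np.zeros(3)
V_new = ibi_update(V, p_current=np.array([0.5, 0.3, 0.2]),
                   p_target=np.array([0.4, 0.4, 0.2]))
```

In practice the loop alternates this update with a fresh CG simulation until the angle, dihedral, and distance histograms converge to the all-atom targets.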
Finite-fault source inversion using teleseismic P waves: Simple parameterization and rapid analysis
Mendoza, C.; Hartzell, S.
2013-01-01
We examine the ability of teleseismic P waves to provide a timely image of the rupture history for large earthquakes using a simple, 2D finite‐fault source parameterization. We analyze the broadband displacement waveforms recorded for the 2010 Mw∼7 Darfield (New Zealand) and El Mayor‐Cucapah (Baja California) earthquakes using a single planar fault with a fixed rake. Both of these earthquakes were observed to have complicated fault geometries following detailed source studies conducted by other investigators using various data types. Our kinematic, finite‐fault analysis of the events yields rupture models that similarly identify the principal areas of large coseismic slip along the fault. The results also indicate that the amount of stabilization required to spatially smooth the slip across the fault and minimize the seismic moment is related to the amplitudes of the observed P waveforms and can be estimated from the absolute values of the elements of the coefficient matrix. This empirical relationship persists for earthquakes of different magnitudes and is consistent with the stabilization constraint obtained from the L‐curve in Tikhonov regularization. We use the relation to estimate the smoothing parameters for the 2011 Mw 7.1 East Turkey, 2012 Mw 8.6 Northern Sumatra, and 2011 Mw 9.0 Tohoku, Japan, earthquakes and invert the teleseismic P waves in a single step to recover timely, preliminary slip models that identify the principal source features observed in finite‐fault solutions obtained by the U.S. Geological Survey National Earthquake Information Center (USGS/NEIC) from the analysis of body‐ and surface‐wave data. These results indicate that smoothing constraints can be estimated a priori to derive a preliminary, first‐order image of the coseismic slip using teleseismic records.
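The stabilized slip inversion described above has the standard Tikhonov form, min ||Ax - b||² + λ²||Lx||², with L a spatial smoothing operator and λ the smoothing parameter set a priori. A minimal dense-matrix sketch with synthetic data (not the teleseismic setup; the operator and λ are illustrative):

```python
import numpy as np

def tikhonov_solve(A, b, L, lam):
    """Solve min ||A x - b||^2 + lam^2 ||L x||^2 via the normal equations;
    L is a roughening (e.g. first-difference) operator, lam the smoothing."""
    lhs = A.T @ A + lam**2 * (L.T @ L)
    return np.linalg.solve(lhs, A.T @ b)

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 10))        # stand-in coefficient matrix
x_true = np.linspace(0.0, 1.0, 10)       # smooth synthetic "slip" profile
b = A @ x_true + 0.01 * rng.standard_normal(30)
# First differences penalize roughness between adjacent fault cells.
L = np.diff(np.eye(10), axis=0)
x = tikhonov_solve(A, b, L, lam=0.5)
```

The abstract's empirical contribution is precisely that λ can be tied to the absolute values of the elements of A, replacing the usual L-curve search and allowing the inversion to run in a single step.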
The correction of infrasound signals for upper atmospheric winds
NASA Technical Reports Server (NTRS)
Mutschlecner, J. Paul; Whitaker, Rodney W.
1990-01-01
Infrasound waves propagate in the atmosphere by a well-known mechanism produced by refraction of the waves, return to earth, and reflection at the surface back into the atmosphere for subsequent bounces. A figure illustrates this phenomenon with results from a ray trace model. In this instance three rays are returned to earth from a region centered at about 50 kilometers in altitude and two from a region near 110 kilometers in altitude. The control of the wave refraction is largely dominated by the temperature-height profile and inversions; however, a major influence is also produced by the atmospheric wind profile. Another figure illustrates the considerable ray differences for rays moving in the wind direction (to the right) and in the counter direction (to the left). It can therefore be expected that infrasonic signal amplitudes will be greatly influenced by the winds in the atmosphere. The seasonal variation of the high altitude atmospheric winds is well documented. A third figure illustrates this with average statistics on the observed zonal wind in the region of 50 plus or minus 5 kilometers in altitude. The results are based upon a survey by Webb; Webb terms this parameterization the Stratospheric Circulation Index (SCI). The very strong seasonal variation has the ability to exert a major seasonal influence on infrasonic signals. The purpose here is to obtain a method for the correction of this effect.
Aris-Brosou, Stéphane; Bielawski, Joseph P
2006-08-15
A popular approach to examine the roles of mutation and selection in the evolution of genomes has been to consider the relationship between codon bias and synonymous rates of molecular evolution. A significant relationship between these two quantities is taken to indicate the action of weak selection on substitutions among synonymous codons. The neutral theory predicts that the rate of evolution is inversely related to the level of functional constraint. Therefore, selection against the use of non-preferred codons among those coding for the same amino acid should result in lower rates of synonymous substitution as compared with sites not subject to such selection pressures. However, reliably measuring the extent of such a relationship is problematic, as estimates of synonymous rates are sensitive to our assumptions about the process of molecular evolution. Previous studies showed the importance of accounting for unequal codon frequencies, in particular when synonymous codon usage is highly biased. Yet, unequal codon frequencies can be modeled in different ways, making different assumptions about the mutation process. Here we conduct a simulation study to evaluate two different ways of modeling uneven codon frequencies and show that both model parameterizations can have a dramatic impact on rate estimates and affect biological conclusions about genome evolution. We reanalyze three large data sets to demonstrate the relevance of our results to empirical data analysis.
Saloranta, Tuomo M; Andersen, Tom; Naes, Kristoffer
2006-01-01
Rate constant bioaccumulation models are applied to simulate the flow of polychlorinated dibenzo-p-dioxins and dibenzofurans (PCDD/Fs) in the coastal marine food web of Frierfjorden, a contaminated fjord in southern Norway. We apply two different ways to parameterize the rate constants in the model, global sensitivity analysis of the models using the Extended Fourier Amplitude Sensitivity Test (Extended FAST) method, as well as results from general linear system theory, in order to obtain a more thorough insight into the system's behavior and the flow pathways of the PCDD/Fs. We calibrate our models against observed body concentrations of PCDD/Fs in the food web of Frierfjorden. Differences between the predictions from the two models (using the same forcing and parameter values) are of the same magnitude as their individual deviations from observations, and the models can be said to perform about equally well in our case. Sensitivity analysis indicates that the success or failure of the models in predicting the PCDD/F concentrations in the food web organisms depends highly on the adequate estimation of the truly dissolved concentrations in water and sediment pore water. We discuss the pros and cons of such models in understanding and estimating the present and future concentrations and bioaccumulation of persistent organic pollutants in aquatic food webs.
Mantle P wave travel time tomography of Eastern and Southern Africa: New images of mantle upwellings
NASA Astrophysics Data System (ADS)
Benoit, M. H.; Li, C.; van der Hilst, R.
2006-12-01
Much of Eastern Africa, including Ethiopia, Kenya, and Tanzania, has undergone extensive tectonism, including rifting, uplift, and volcanism during the Cenozoic. The cause of this tectonism is often attributed to the presence of one or more mantle upwellings, including starting thermal plumes and superplumes. Previous regional seismic studies and global tomographic models show conflicting results regarding the spatial and thermal characteristics of these upwellings. Additionally, there are questions concerning the extent to which the Archean and Proterozoic lithosphere has been altered by possible thermal upwellings in the mantle. To further constrain the mantle structure beneath Southern and Eastern Africa and to investigate the origin of the tectonism in Eastern Africa, we present preliminary results of a large-scale P wave travel time tomographic study of the region. We invert travel time measurements from the EHB database together with travel time measurements taken from regional PASSCAL datasets, including the Ethiopia Broadband Seismic Experiment (2000-2002); Kenya Broadband Seismic Experiment (2000-2002); Southern Africa Seismic Experiment (1997-1999); Tanzania Broadband Seismic Experiment (1995-1997); and the Saudi Arabia PASSCAL Experiment (1995-1997). The tomographic inversion uses 3-D sensitivity kernels to combine the different datasets and is parameterized with an irregular grid so that high spatial resolution can be obtained in areas of dense data coverage. The inversion is solved in an adaptive least-squares framework using the LSQR method with norm and gradient damping.
NASA Astrophysics Data System (ADS)
Argüeso, D.; Hidalgo-Muñoz, J. M.; Gámiz-Fortis, S. R.; Esteban-Parra, M. J.; Castro-Díez, Y.
2009-04-01
An evaluation of MM5 mesoscale model sensitivity to different parameterization schemes is presented in terms of temperature and precipitation for high-resolution integrations over Andalusia (South of Spain). ERA-40 Reanalysis data are used as initial and boundary conditions. Two domains were used: a coarse one of 55 by 60 grid points with 30 km spacing, and a nested domain of 48 by 72 grid points with 10 km spacing. The coarse domain fully covers the Iberian Peninsula, and Andalusia fits loosely in the finer one. In addition to the parameterization tests, two dynamical downscaling techniques have been applied in order to examine the influence of initial conditions on RCM long-term studies. Regional climate studies usually employ continuous integration for the period under survey, initializing atmospheric fields only at the starting point and feeding boundary conditions regularly. An alternative approach is based on frequent re-initialization of atmospheric fields; hence the simulation is divided into several independent integrations. Altogether, 20 simulations have been performed using varying physics options, of which 4 were carried out applying the re-initialization technique. Surface temperature and accumulated precipitation (daily and monthly scale) were analyzed for a 5-year period covering 1990 to 1994. Results have been compared with daily observational data series from 110 stations for temperature and 95 for precipitation. Both daily and monthly average temperatures are generally well represented by the model. Conversely, daily precipitation results present larger deviations from observational data. However, noticeable accuracy is gained when comparing with monthly precipitation observations. There are some especially problematic subregions where precipitation is poorly captured, such as the Southeast of the Iberian Peninsula, mainly due to its extremely convective nature.
Regarding the performance of the parameterization schemes, every set provides very similar results for both temperature and precipitation, and no configuration seems to outperform the others, either for the whole region or for any season. Nevertheless, some marked differences between areas within the domain appear when analyzing certain physics options, particularly for precipitation. Some of the physics options, such as radiation, have little impact on model performance with respect to precipitation, and results do not vary when the scheme is modified. On the other hand, cumulus and boundary layer parameterizations are responsible for most of the differences obtained between configurations. Acknowledgements: The Spanish Ministry of Science and Innovation, with additional support from the European Community Funds (FEDER), project CGL2007-61151/CLI, and the Regional Government of Andalusia project P06-RNM-01622, have financed this study. The "Centro de Servicios de Informática y Redes de Comunicaciones" (CSIRC), Universidad de Granada, has provided the computing time. Key words: MM5 mesoscale model, parameterization schemes, temperature and precipitation, South of Spain.
NASA Astrophysics Data System (ADS)
Serva, Federico; Cagnazzo, Chiara; Riccio, Angelo
2016-04-01
The effects of the propagation and breaking of atmospheric gravity waves have long been considered crucial for their impact on the circulation, especially in the stratosphere and mesosphere, between heights of 10 and 110 km. These waves, which in the Earth's atmosphere originate from surface orography (OGWs) or from transient (nonorographic) phenomena such as fronts and convective processes (NOGWs), have horizontal wavelengths between 10 and 1000 km, vertical wavelengths of several km, and frequencies spanning from minutes to hours. Orographic and nonorographic GWs must be accounted for in climate models to obtain a realistic simulation of the stratosphere in both hemispheres, since they can have a substantial impact on circulation and temperature, and hence an important role in ozone chemistry for chemistry-climate models. Several types of parameterization are currently employed in models, differing in formulation and in the values assigned to parameters, but the common aim is to quantify the effect of wave breaking on large-scale wind and temperature patterns. In the last decade, both global observations from satellite-borne instruments and the outputs of very high resolution climate models have provided insight into the variability and properties of the gravity wave field, and these results can be used to constrain some of the empirical parameters present in most parameterization schemes. A feature of the NOGW forcing that clearly emerges is its intermittency, linked with the nature of the sources: this property is absent in the majority of models, in which NOGW parameterizations are uncoupled from other atmospheric phenomena, leading to results which display lower variability compared to observations. In this work, we analyze the climate simulated in AMIP runs of the MAECHAM5 model, which uses the Hines NOGW parameterization and a vertical resolution fine enough to capture the effects of wave-mean flow interaction.
We compare the results obtained with two versions of the model: the default, and a new stochastic version in which the value of the perturbation field at the launching level is not constant and uniform, but is instead drawn at each time step and grid point from a given PDF. With this approach we aim to add further variability to the effects given by the deterministic NOGW parameterization: the impact on the simulated climate will be assessed focusing on the Quasi-Biennial Oscillation of the equatorial stratosphere (known to be driven in part by gravity waves) and on the variability of the mid-to-high-latitude atmosphere. The different characteristics of the circulation will be compared with recent reanalysis products in order to determine the advantages of the stochastic approach over the traditional deterministic scheme.
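The stochastic launch-level perturbation described above can be sketched as follows; the Gaussian PDF, its mean and spread, and the grid size are illustrative assumptions, not the values used in MAECHAM5:

```python
import numpy as np

rng = np.random.default_rng(42)

def launch_perturbation(nlat, nlon, mean=1.0, sigma=0.25):
    """Draw the gravity-wave perturbation amplitude at the launching level
    independently for every grid point, replacing the single constant value
    of the deterministic scheme. Clipped at zero: a negative amplitude is
    unphysical."""
    return np.clip(rng.normal(mean, sigma, size=(nlat, nlon)), 0.0, None)

# One new draw per model time step adds variability to the NOGW forcing.
field = launch_perturbation(96, 192)
```

A heavier-tailed PDF (e.g. lognormal) could be substituted to mimic the observed intermittency of the wave sources.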
NASA Technical Reports Server (NTRS)
Choi, Hyun-Joo; Chun, Hye-Yeong; Gong, Jie; Wu, Dong L.
2012-01-01
The realism of a ray-based spectral parameterization of convective gravity wave drag, which considers the updated moving speed of the convective source and multiple wave propagation directions, is tested against observations from the Atmospheric Infrared Sounder (AIRS) on board the Aqua satellite. Offline parameterization calculations are performed using global reanalysis data for January and July 2005, and gravity wave temperature variances (GWTVs) are calculated at z = 2.5 hPa (unfiltered GWTV). The AIRS-filtered GWTV, which is compared directly with AIRS, is calculated by applying the AIRS visibility function to the unfiltered GWTV. A comparison between the parameterization calculations and AIRS observations shows that the spatial distribution of the AIRS-filtered GWTV agrees well with that of the AIRS GWTV. However, the magnitude of the AIRS-filtered GWTV is smaller than that of the AIRS GWTV. When an additional cloud-top gravity wave momentum flux spectrum with longer horizontal wavelength components, obtained from mesoscale simulations, is included in the parameterization, both the magnitude and spatial distribution of the AIRS-filtered GWTVs from the parameterization are in good agreement with those of the AIRS GWTVs. The AIRS GWTV can be reproduced reasonably well by the parameterization not only with multiple wave propagation directions but also with two wave propagation directions of 45 degrees (northeast-southwest) and 135 degrees (northwest-southeast), which are optimally chosen for computational efficiency.
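Conceptually, applying an instrument visibility function means damping each spectral element of the unfiltered variance before summing. The functional form and the 12 km scale below are made-up placeholders, not the actual AIRS visibility function:

```python
import numpy as np

def visibility(lz_km, scale_km=12.0):
    """Placeholder response: long vertical wavelengths are well resolved,
    short ones are smeared out by the broad instrument weighting functions."""
    return 1.0 - np.exp(-np.asarray(lz_km) / scale_km)

def filtered_variance(var_per_mode, lz_km):
    """Damp each spectral element of the unfiltered temperature variance by
    the squared visibility, then sum over the spectrum."""
    return float(np.sum(var_per_mode * visibility(lz_km) ** 2))

lz = np.array([2.0, 5.0, 10.0, 20.0, 40.0])   # vertical wavelengths (km)
var = np.ones_like(lz)                        # unfiltered variance per mode
# The filtered total is necessarily smaller than the unfiltered one, which
# is why the AIRS-filtered GWTV is the quantity compared with observations.
```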
The Influence of Microphysical Cloud Parameterization on Microwave Brightness Temperatures
NASA Technical Reports Server (NTRS)
Skofronick-Jackson, Gail M.; Gasiewski, Albin J.; Wang, James R.; Zukor, Dorothy J. (Technical Monitor)
2000-01-01
The microphysical parameterization of clouds and rain cells plays a central role in the atmospheric forward radiative transfer models used to calculate passive microwave brightness temperatures. The absorption and scattering properties of a hydrometeor-laden atmosphere are governed by particle phase, size distribution, aggregate density, shape, and dielectric constant. This study identifies the sensitivity of brightness temperatures to the microphysical cloud parameterization. Cloud parameterizations for wideband (6-410 GHz) observations of baseline brightness temperatures were studied for four evolutionary stages of an oceanic convective storm using a five-phase hydrometeor model in a planar-stratified scattering-based radiative transfer model. Five other microphysical cloud parameterizations were compared to the baseline calculations to evaluate brightness temperature sensitivity to gross changes in the hydrometeor size distributions and the ice-air-water ratios in the frozen or partly frozen phase. The comparison shows that enlarging the raindrop size or adding water to the partly frozen hydrometeor mix warms brightness temperatures by up to 55 K at 6 GHz. The cooling signature caused by ice scattering intensifies with increasing ice concentrations and at higher frequencies. An additional comparison to measured Convection and Moisture Experiment (CAMEX-3) brightness temperatures shows that in general all but two parameterizations produce calculated T(sub B)'s that fall within the observed clear-air minima and maxima. The exceptions are parameterizations that enhance the scattering characteristics of frozen hydrometeors.
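The low-frequency warming from added liquid water can be illustrated with a zeroth-order, non-scattering slab model over a radiometrically cold ocean surface; the study itself uses a full planar-stratified scattering-based model, and all numbers below are illustrative:

```python
import numpy as np

def brightness_temp(tau, t_atm=275.0, t_surf=300.0, emissivity=0.5):
    """Surface emission attenuated by a single isothermal atmospheric layer,
    plus the layer's own emission (the reflected downwelling term is
    neglected for brevity)."""
    trans = np.exp(-tau)
    return emissivity * t_surf * trans + t_atm * (1.0 - trans)

# More rain water -> larger optical depth tau -> warmer TB at low frequency,
# because the emissive rain layer fills in over the low-emissivity ocean.
tb_light_rain = brightness_temp(0.1)
tb_heavy_rain = brightness_temp(1.0)
```

Ice scattering, which dominates the cooling signatures at high frequencies, is exactly what this non-scattering sketch omits.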
A stochastic parameterization for deep convection using cellular automata
NASA Astrophysics Data System (ADS)
Bengtsson, L.; Steinheimer, M.; Bechtold, P.; Geleyn, J.
2012-12-01
Cumulus parameterizations used in most operational weather and climate models today are based on the mass-flux concept, which took form in the early 1970s. In such schemes it is assumed that a unique relationship exists between the ensemble average of the sub-grid convection and the instantaneous state of the atmosphere in a vertical grid-box column. However, such a relationship is unlikely to be described by a simple deterministic function (Palmer, 2011). Thus, because of the statistical nature of the parameterization challenge, the community has recognized that it is important to introduce stochastic elements into the parameterizations (for instance Plant and Craig, 2008; Khouider et al., 2010; Frenkel et al., 2011; Bengtsson et al., 2011; but the list is far from exhaustive). There are undoubtedly many ways in which stochasticity can enter new developments. In this study we use a two-way interacting cellular automaton (CA), as its intrinsic nature possesses many qualities of interest for deep convection parameterization. In the one-dimensional entraining plume approach, there is no parameterization of horizontal transport of heat, moisture or momentum due to cumulus convection. In reality, mass transport due to gravity waves that propagate in the horizontal can trigger new convection, important for the organization of deep convection (Huang, 1988). The self-organizational characteristics of the CA allow for lateral communication between adjacent NWP model grid boxes, and for temporal memory. Thus the CA scheme used in this study contains three interesting components for the representation of cumulus convection that are not present in the traditional one-dimensional bulk entraining plume method: horizontal communication, memory and stochasticity. The scheme is implemented in the high-resolution regional NWP model ALARO, and simulations show enhanced organization of convective activity along squall lines.
Probabilistic evaluation demonstrates an enhanced spread in large-scale variables in regions where convective activity is large. A two-month extended evaluation of the deterministic behaviour of the scheme indicates a neutral impact on forecast skill. References: Bengtsson, L., H. Körnich, E. Källén, and G. Svensson, 2011: Large-scale dynamical response to sub-grid scale organization provided by cellular automata. Journal of the Atmospheric Sciences, 68, 3132-3144. Frenkel, Y., A. Majda, and B. Khouider, 2011: Using the stochastic multicloud model to improve tropical convective parameterization: A paradigm example. Journal of the Atmospheric Sciences, doi:10.1175/JAS-D-11-0148.1. Huang, X.-Y., 1988: The organization of moist convection by internal gravity waves. Tellus A, 42, 270-285. Khouider, B., J. Biello, and A. Majda, 2010: A stochastic multicloud model for tropical convection. Comm. Math. Sci., 8, 187-216. Palmer, T., 2011: Towards the probabilistic Earth-system simulator: A vision for the future of climate and weather prediction. Quarterly Journal of the Royal Meteorological Society, 138, 841-861. Plant, R. and G. Craig, 2008: A stochastic parameterization for deep convection based on equilibrium statistics. J. Atmos. Sci., 65, 87-105.
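A minimal sketch of a convective cellular automaton, assuming a Game-of-Life-style neighbour rule with random seeding; the actual two-way coupled CA of the scheme uses rules and probabilities tied to the host model state:

```python
import numpy as np

rng = np.random.default_rng(0)

def ca_step(grid, seed_prob=0.002):
    """One CA update on a periodic 2-D grid of NWP grid boxes (1 = active
    convection). Neighbour counting gives lateral communication, survival
    gives memory, and random seeding supplies the stochastic element."""
    neighbours = sum(np.roll(np.roll(grid, di, axis=0), dj, axis=1)
                     for di in (-1, 0, 1) for dj in (-1, 0, 1)
                     if (di, dj) != (0, 0))
    born = (grid == 0) & (neighbours >= 3)       # triggered near active cells
    survive = (grid == 1) & (neighbours >= 2)    # memory of recent activity
    seeded = rng.random(grid.shape) < seed_prob  # spontaneous initiation
    return (born | survive | seeded).astype(int)

grid = (rng.random((64, 64)) < 0.05).astype(int)
for _ in range(20):
    grid = ca_step(grid)
```

Organized clusters and line-like structures emerge from purely local rules, which is the property exploited for squall-line organization.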
Uncertainties of parameterized surface downward clear-sky shortwave and all-sky longwave radiation.
NASA Astrophysics Data System (ADS)
Gubler, S.; Gruber, S.; Purves, R. S.
2012-06-01
As many environmental models rely on simulating the energy balance at the Earth's surface based on parameterized radiative fluxes, knowledge of the inherent model uncertainties is important. In this study we evaluate one parameterization of clear-sky direct, diffuse and global shortwave downward radiation (SDR) and diverse parameterizations of clear-sky and all-sky longwave downward radiation (LDR). In a first step, SDR is estimated based on measured input variables and estimated atmospheric parameters for hourly time steps during the years 1996 to 2008. Model behaviour is validated using the high-quality measurements of six Alpine Surface Radiation Budget (ASRB) stations in Switzerland covering different elevations, and measurements of the Swiss Alpine Climate Radiation Monitoring network (SACRaM) in Payerne. In a next step, twelve clear-sky LDR parameterizations are calibrated using the ASRB measurements. One of the best-performing parameterizations is selected to estimate all-sky LDR, where cloud transmissivity is estimated using measured and modeled global SDR during daytime. In a last step, the performance of several interpolation methods is evaluated for determining cloud transmissivity at night. We show that clear-sky direct, diffuse and global SDR are adequately represented by the model when using measurements of the atmospheric parameters precipitable water and aerosol content at Payerne. If the atmospheric parameters are estimated and used as fixed values, the relative mean bias deviance (MBD) and the relative root mean squared deviance (RMSD) of the clear-sky global SDR scatter between -2 and 5%, and between 7 and 13%, across the six locations. The small errors in clear-sky global SDR can be attributed to compensating effects of modeled direct and diffuse SDR, since an overestimation of aerosol content in the atmosphere results in underestimating the direct, but overestimating the diffuse, SDR.
Calibration of the LDR parameterizations to local conditions substantially reduces MBD and RMSD compared to using the published values of the parameters, resulting in relative MBD and RMSD of less than 5% and 10%, respectively, for the best parameterizations. The best results for estimating cloud transmissivity during nighttime were obtained by linearly interpolating the average cloud transmissivity of the four hours of the preceding afternoon and the following morning. Model uncertainty can be caused by different error sources, such as errors in code implementation, in input data, and in estimated parameters. The influence of the latter two (errors in input data and model parameter uncertainty) on model outputs is determined using Monte Carlo simulation. Model uncertainty is provided as the relative standard deviation σrel of the simulated frequency distributions of the model outputs. An optimistic estimate of the relative uncertainty σrel resulted in 10% for the clear-sky direct, 30% for the diffuse, and 3% for the global SDR, and 3% for the fitted all-sky LDR.
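The Monte Carlo treatment of parameter uncertainty amounts to re-running the parameterization many times with inputs drawn from their error distributions and reporting the relative standard deviation σrel of the outputs. The toy SDR model and the numbers below are stand-ins for the actual parameterization:

```python
import numpy as np

rng = np.random.default_rng(1)

def global_sdr(s0, transmissivity, cos_zenith):
    """Toy clear-sky model: top-of-atmosphere flux reduced by a bulk
    atmospheric transmissivity (stands in for the full parameterization)."""
    return s0 * transmissivity * cos_zenith

# Perturb the uncertain parameter (here the bulk transmissivity) around its
# estimate and summarize the output spread as a relative standard deviation.
samples = global_sdr(1361.0, rng.normal(0.75, 0.02, size=10_000), 0.6)
sigma_rel = samples.std() / samples.mean()
```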
DOE Office of Scientific and Technical Information (OSTI.GOV)
Turbulent kinetics of a large wind farm and their impact in the neutral boundary layer
Na, Ji Sung; Koo, Eunmo; Munoz-Esparza, Domingo; ...
2015-12-28
High-resolution large-eddy simulation of the flow over a large wind farm (64 wind turbines) is performed using the HIGRAD/FIRETEC-WindBlade model, a high-performance-computing wind turbine–atmosphere interaction model that uses the Lagrangian actuator line method to represent rotating turbine blades. These high-resolution large-eddy simulation results are used to parameterize the thrust and power coefficients, which contain information about turbine interference effects within the wind farm. Those coefficients are then incorporated into the WRF (Weather Research and Forecasting) model in order to evaluate interference effects in larger-scale models. In the high-resolution WindBlade wind farm simulation, insufficient distance between turbines creates interference between turbines, including significant vertical variations in momentum and turbulent intensity. The characteristics of the wake are further investigated by analyzing the distributions of vorticity and turbulent intensity. Quadrant analysis in the turbine and post-turbine areas reveals that the ejection motion induced by the presence of the wind turbines is dominant compared to the other quadrants, and that the sweep motion is increased at locations where strong wake recovery occurs. Regional-scale WRF simulations reveal that although the turbulent mixing induced by the wind farm is partly diffused to the upper region, there is no significant change in the boundary layer depth. The velocity deficit does not appear to be very sensitive to the local distribution of turbine coefficients. However, differences of about 5% in parameterized turbulent kinetic energy were found depending on the turbine coefficient distribution. Turbine coefficients that account for interference within the wind farm should therefore be used in wind farm parameterizations for larger-scale models to better describe sub-grid-scale turbulent processes.
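The quadrant analysis mentioned above sorts instantaneous velocity fluctuations (u', w') into four quadrants; Q2 (u'<0, w'>0) corresponds to ejections and Q4 (u'>0, w'<0) to sweeps. A minimal sketch on synthetic time series, not the WindBlade output:

```python
import numpy as np

def quadrant_fractions(u, w):
    """Fraction of samples falling in each (u', w') quadrant, with the
    fluctuations defined relative to the series means."""
    up, wp = u - u.mean(), w - w.mean()
    return {
        "Q1_outward":  float(((up > 0) & (wp > 0)).mean()),
        "Q2_ejection": float(((up < 0) & (wp > 0)).mean()),
        "Q3_inward":   float(((up < 0) & (wp < 0)).mean()),
        "Q4_sweep":    float(((up > 0) & (wp < 0)).mean()),
    }

rng = np.random.default_rng(7)
u = rng.normal(8.0, 1.0, 5000)   # streamwise velocity samples (m/s)
w = rng.normal(0.0, 0.5, 5000)   # vertical velocity samples (m/s)
fractions = quadrant_fractions(u, w)
```

Weighting each quadrant by u'w' instead of the sample count would give the quadrant contributions to the momentum flux itself.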
The application of depletion curves for parameterization of subgrid variability of snow
C. H. Luce; D. G. Tarboton
2004-01-01
Parameterization of subgrid-scale variability in snow accumulation and melt is important for improvements in distributed snowmelt modelling. We have taken the approach of using depletion curves that relate fractional snow-covered area to element-average snow water equivalent to parameterize the effect of snowpack heterogeneity within a physically based mass and energy...
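A depletion curve in this sense is a monotone map from normalised element-average snow water equivalent to fractional snow-covered area; the power-law form and exponent below are illustrative, not the calibrated curves of the paper:

```python
import numpy as np

def snow_covered_fraction(swe, swe_max, shape=1.5):
    """Fractional snow-covered area from the element-average snow water
    equivalent normalised by its seasonal maximum. shape > 1 keeps the
    element mostly snow-covered until melt is well underway."""
    w = np.clip(np.asarray(swe, dtype=float) / swe_max, 0.0, 1.0)
    return w ** (1.0 / shape)

# As melt reduces SWE, the curve converts the element-average mass into the
# shrinking fraction of the element that is still snow-covered.
```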
NASA Astrophysics Data System (ADS)
Dubovik, O.; Litvinov, P.; Lapyonok, T.; Ducos, F.; Fuertes, D.; Huang, X.; Torres, B.; Aspetsberger, M.; Federspiel, C.
2014-12-01
The POLDER imager on board the PARASOL microsatellite is the only satellite polarimeter to have provided an extensive (~9 year) record of detailed polarimetric observations of the Earth's atmosphere from space. POLDER/PARASOL registers spectral polarimetric characteristics of the reflected atmospheric radiation at up to 16 viewing directions over each observed pixel. Such observations have very high sensitivity to variability in the properties of the atmosphere and underlying surface, and cannot be adequately interpreted using the look-up-table retrieval algorithms developed for analyzing the mono-viewing, intensity-only observations traditionally used in atmospheric remote sensing. Therefore, a new enhanced retrieval algorithm, GRASP (Generalized Retrieval of Aerosol and Surface Properties), has been developed and applied to the processing of PARASOL data. GRASP relies on highly optimized statistical fitting of the observations and derives a large number of unknowns for each observed pixel. The algorithm uses an elaborate model of the atmosphere and fully accounts for all multiple interactions of scattered solar light with aerosol, gases, and the underlying surface. All calculations are performed during inversion and no look-up tables are used. The algorithm is very flexible in its use of various types of a priori constraints on the retrieved characteristics and in the parameterization of the surface-atmosphere system. It is also optimized for high-performance computation. The results of the PARASOL data processing will be presented with emphasis on the transferability and adaptability of the developed retrieval concept for processing polarimetric observations of other planets. For example, flexibility and possible alternatives in modeling the properties of aerosol polydisperse mixtures, particle composition and shape, surface reflectance, etc. will be discussed.
Multimodel Uncertainty Changes in Simulated River Flows Induced by Human Impact Parameterizations
NASA Technical Reports Server (NTRS)
Liu, Xingcai; Tang, Qiuhong; Cui, Huijuan; Mu, Mengfei; Gerten, Dieter; Gosling, Simon; Masaki, Yoshimitsu; Satoh, Yusuke; Wada, Yoshihide
2017-01-01
Human impacts increasingly affect the global hydrological cycle and indeed dominate hydrological changes in some regions. Hydrologists have sought to identify human-impact-induced hydrological variations by parameterizing anthropogenic water uses in global hydrological models (GHMs). The consequently increased model complexity is likely to introduce additional uncertainty among GHMs. Here, using four GHMs, between-model uncertainties are quantified in terms of the signal-to-noise ratio (SNR) for average river flow during 1971-2000, simulated in two experiments: with representation of human impacts (VARSOC) and without (NOSOC). This is the first quantitative investigation of the between-model uncertainty resulting from the inclusion of human impact parameterizations. Results show that the between-model uncertainties in terms of SNRs in the VARSOC annual flow are larger (about 2 for the globe, with varied magnitude for different basins) than those in the NOSOC, particularly in most areas of Asia and in areas north of the Mediterranean Sea. The SNR differences are mostly negative (-20 to 5, indicating higher uncertainty) for basin-averaged annual flow. The VARSOC high flow shows slightly lower uncertainties than the NOSOC simulations, with SNR differences mostly ranging from -20 to 20. The uncertainty differences between the two experiments are significantly related to the fraction of irrigated area in each basin. The large additional uncertainties introduced into the VARSOC simulations by the inclusion of human impact parameterizations raise an urgent need for GHM development based on a better understanding of human impacts. Differences in the parameterizations of irrigation, reservoir regulation and water withdrawals are discussed as potential directions of improvement for future GHM development.
We also discuss the advantages of statistical approaches for reducing the between-model uncertainties, and the importance of calibrating GHMs, not only for better performance in historical simulations but also for more robust and reliable future projections of hydrological changes under a changing environment.
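The SNR diagnostic used above is simply the ensemble-mean flow divided by the between-model standard deviation; a sketch with made-up numbers for four GHMs:

```python
import numpy as np

def signal_to_noise(flows):
    """Between-model SNR: ensemble mean over the models (axis 0) divided by
    the between-model standard deviation."""
    flows = np.asarray(flows, dtype=float)
    return flows.mean(axis=0) / flows.std(axis=0)

# Hypothetical basin-average annual flow (km^3/yr) from four GHMs:
nosoc  = np.array([100.0, 102.0,  98.0, 101.0])   # no human impacts
varsoc = np.array([ 95.0, 110.0,  80.0, 104.0])   # with human impacts
# The wider spread of the VARSOC ensemble lowers its SNR, i.e. the human
# impact parameterizations add between-model uncertainty.
```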
NASA Astrophysics Data System (ADS)
Mogensen, Ditte; Aaltonen, Hermanni; Aalto, Juho; Bäck, Jaana; Kieloaho, Antti-Jussi; Gierens, Rosa; Smolander, Sampo; Kulmala, Markku; Boy, Michael
2015-04-01
Volatile organic compounds (VOCs) are emitted from the biosphere and can act as precursor gases for aerosol particles, which can affect the climate (e.g. Makkonen et al., ACP, 2012). VOC emissions from needles and leaves have gained the most attention; however, other parts of the ecosystem can also emit substantial amounts of VOCs. This often-neglected source can be important, e.g. in periods when leaves are absent. Knowledge of both the sources and the drivers of forest-floor VOC emission is currently limited. It is thought that the sources are mainly degradation of organic matter (Isidorov and Jdanova, Chemosphere, 2002), living roots (Asensio et al., Soil Biol. Biochem., 2008) and ground vegetation. The drivers are biotic (e.g. microbes) and abiotic (e.g. temperature and moisture). However, the relative importance of the individual sources and drivers is currently poorly understood. Further, their relative importance depends strongly on the tree species occupying the area of interest. The emissions of isoprene and monoterpenes were measured from the boreal forest floor at the SMEAR II station in Southern Finland (Hari and Kulmala, Boreal Env. Res., 2005) during the snow-free periods in 2010-2012. We used a dynamic method with three automated chambers analyzed by a proton transfer reaction mass spectrometer (Aaltonen et al., Plant Soil, 2013). Using these data, we have developed empirical parameterizations for the emission of isoprene and monoterpenes from the forest floor. These parameterizations depend on abiotic factors; however, since they are based on field measurements, biotic features are implicitly captured. Further, we have used the 1D chemistry-transport model SOSAA (Boy et al., ACP, 2011) to test the seasonal importance of including these forest-floor parameterizations, relative to the canopy-crown emissions, for the atmospheric reactivity throughout the canopy.
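An empirical forest-floor emission parameterization of this kind is often an exponential temperature response; the functional form is the widely used Guenther-style one, and the coefficients below are placeholders, not the fitted SMEAR II values:

```python
import numpy as np

def monoterpene_emission(temp_c, e0=10.0, beta=0.09, t_ref=20.0):
    """Guenther-style exponential temperature response: e0 is the emission
    rate (e.g. ug m-2 h-1) at the reference temperature t_ref (degC), and
    beta (1/degC) sets the temperature sensitivity."""
    return e0 * np.exp(beta * (np.asarray(temp_c, dtype=float) - t_ref))

# Moisture or seasonal modifiers could multiply this baseline response to
# represent the other abiotic drivers mentioned above.
```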
NASA Astrophysics Data System (ADS)
Lee, S.-H.; Kim, S.-W.; Angevine, W. M.; Bianco, L.; McKeen, S. A.; Senff, C. J.; Trainer, M.; Tucker, S. C.; Zamora, R. J.
2011-03-01
The performance of different urban surface parameterizations in the WRF (Weather Research and Forecasting) model in simulating the urban boundary layer (UBL) was investigated using extensive measurements from the Texas Air Quality Study 2006 field campaign. The field measurements, collected at surface sites (meteorological, wind profiler, and energy balance flux), from a research aircraft, and from a research vessel, characterized the three-dimensional atmospheric boundary layer structure over the Houston-Galveston Bay area, providing a unique opportunity for the evaluation of the physical parameterizations. The model simulations were performed over the Houston metropolitan area for a summertime period (12-17 August) using a bulk urban parameterization in the Noah land surface model (original LSM), a modified LSM, and a single-layer urban canopy model (UCM). The UCM simulation compared quite well with the observations over the Houston urban areas, reducing the systematic model biases of the original LSM simulation by 1-2 °C in near-surface air temperature and by 200-400 m in UBL height, on average. A more realistic partitioning of turbulent (sensible and latent heat) energy contributed to the improvements in the UCM simulation. The original LSM significantly overestimated the sensible heat flux (~200 W m-2) over the urban areas, resulting in a warmer and deeper UBL. The modified LSM slightly reduced the warm and high biases in near-surface air temperature (0.5-1 °C) and UBL height (~100 m) as a result of the effects of urban vegetation. The relatively strong thermal contrast between the Houston area and the water bodies (Galveston Bay and the Gulf of Mexico) in the LSM simulations enhanced the sea/bay breezes, but the model performance in predicting local wind fields was similar among the simulations in terms of statistical evaluations. These results suggest that a proper surface representation (e.g.
urban vegetation, surface morphology) and explicit parameterizations of urban physical processes are required for accurate urban atmospheric numerical modeling.
Modeling late rectal toxicities based on a parameterized representation of the 3D dose distribution
NASA Astrophysics Data System (ADS)
Buettner, Florian; Gulliford, Sarah L.; Webb, Steve; Partridge, Mike
2011-04-01
Many models exist for predicting toxicities based on dose-volume histograms (DVHs) or dose-surface histograms (DSHs). This approach has several drawbacks: firstly, the reduction of the dose distribution to a histogram results in the loss of spatial information, and secondly, the bins of the histograms are highly correlated with each other. Furthermore, some of the complex nonlinear models proposed in the past lack a direct physical interpretation and the ability to predict probabilities rather than binary outcomes. We propose a parameterized representation of the 3D distribution of the dose to the rectal wall which explicitly includes geometrical information in the form of the eccentricity of the dose distribution as well as its lateral and longitudinal extent. We use a nonlinear kernel-based probabilistic model to predict late rectal toxicity based on the parameterized dose distribution and assess its predictive power using data from the MRC RT01 trial (ISRCTN 47772397). The endpoints under consideration were rectal bleeding, loose stools, and a global toxicity score. We extract simple rules identifying 3D dose patterns related to a specifically low risk of complication. Normal tissue complication probability (NTCP) models based on parameterized representations of geometrical and volumetric measures resulted in areas under the curve (AUCs) of 0.66, 0.63 and 0.67 for predicting rectal bleeding, loose stools and global toxicity, respectively. In comparison, NTCP models based on standard DVHs performed worse, with AUCs of 0.59 for all three endpoints. In conclusion, we have presented low-dimensional, interpretable and nonlinear NTCP models based on a parameterized representation of the dose to the rectal wall. These models had higher predictive power than models based on standard DVHs, and their low dimensionality allowed for the identification of 3D dose patterns related to a low risk of complication.
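A probabilistic NTCP model of this type maps a few parameterized dose descriptors to a complication probability, with the AUC measuring discrimination. The linear-logistic form below is a simplified stand-in for the kernel-based model, and all numbers are illustrative:

```python
import numpy as np

def ntcp(features, weights, bias):
    """Complication probability as a logistic function of the parameterized
    dose descriptors (e.g. lateral extent, longitudinal extent,
    eccentricity of the dose distribution)."""
    z = np.dot(features, weights) + bias
    return 1.0 / (1.0 + np.exp(-z))

def auc(scores_with_tox, scores_without_tox):
    """Area under the ROC curve in its Mann-Whitney form: the probability
    that a patient with toxicity is scored higher than one without."""
    pos = np.asarray(scores_with_tox, dtype=float)[:, None]
    neg = np.asarray(scores_without_tox, dtype=float)[None, :]
    return float((pos > neg).mean() + 0.5 * (pos == neg).mean())

# Hypothetical descriptors [lateral extent, longitudinal extent, eccentricity]:
p = ntcp(np.array([0.6, 0.4, 0.8]), np.array([1.2, 0.8, 1.5]), bias=-2.0)
```

Because the model is probabilistic and low-dimensional, its weights can be inspected directly, which is what permits extracting simple low-risk dose-pattern rules.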
DOE Office of Scientific and Technical Information (OSTI.GOV)
Berg, Larry K.; Shrivastava, ManishKumar B.; Easter, Richard C.
A new treatment of cloud-aerosol interactions within parameterized shallow and deep convection has been implemented in WRF-Chem that can be used to better understand the aerosol lifecycle over regional to synoptic scales. The modifications to the model to represent cloud-aerosol interactions include treatment of the cloud droplet number mixing ratio; key cloud microphysical and macrophysical parameters (including the updraft fractional area, updraft and downdraft mass fluxes, and entrainment) averaged over the population of shallow clouds, or a single deep convective cloud; and vertical transport, activation/resuspension, aqueous chemistry, and wet removal of aerosol and trace gases in warm clouds. These changes have been implemented in both the WRF-Chem chemistry packages and the Kain-Fritsch cumulus parameterization, which has been modified to better represent shallow convective clouds. Preliminary testing of the modified WRF-Chem has been completed using observations from the Cumulus Humilis Aerosol Processing Study (CHAPS) as well as a high-resolution simulation that does not include parameterized convection. The simulation results are used to investigate the impact of cloud-aerosol interactions on the regional-scale transport of black carbon (BC), organic aerosol (OA), and sulfate aerosol. Based on the simulations presented here, changes in the column-integrated BC can be as large as -50% when cloud-aerosol interactions are considered (due largely to wet removal), or as large as +35% for sulfate in non-precipitating conditions due to sulfate production in the parameterized clouds. The modifications to WRF-Chem version 3.2.1 are found to account for changes in the cloud droplet number concentration (CDNC) and changes in the chemical composition of cloud-drop residuals in a way that is consistent with observations collected during CHAPS.
Efforts are currently underway to port the changes described here to WRF-Chem version 3.5, and it is anticipated that they will be included in a future public release of WRF-Chem.
NASA Astrophysics Data System (ADS)
Alexander, M. Joan; Stephan, Claudia
2015-04-01
In climate models, gravity waves remain too poorly resolved to be directly modelled. Instead, simplified parameterizations are used to include gravity wave effects on model winds. A few climate models link some of the parameterized waves to convective sources, providing a mechanism for feedback between changes in convection and gravity wave-driven changes in circulation in the tropics and above high-latitude storms. These convective wave parameterizations are based on limited case studies with cloud-resolving models, but they are poorly constrained by observational validation, and their tuning parameters have large uncertainties. Our new work distills results from complex, full-physics cloud-resolving model studies to the essential variables for gravity wave generation. We use the Weather Research and Forecasting (WRF) model to study the relationships between precipitation, latent heating/cooling and other cloud properties and the spectrum of gravity wave momentum flux above midlatitude storm systems. Results show the gravity wave spectrum is surprisingly insensitive to the representation of microphysics in WRF. This is good news for the use of these models in gravity wave parameterization development, since microphysical properties are a key uncertainty. We further use the full-physics cloud-resolving model as a tool to directly link observed precipitation variability to gravity wave generation. We show that waves in an idealized model forced with radar-observed precipitation can quantitatively reproduce instantaneous satellite-observed features of the gravity wave field above storms, which is a powerful validation of our understanding of waves generated by convection. The idealized model directly links observations of surface precipitation to observed waves in the stratosphere, and the simplicity of the model permits deep/large-area domains for studies of wave-mean flow interactions.
This unique validated model tool permits quantitative studies of gravity wave driving of regional circulation and provides a new method for future development of realistic convective gravity wave parameterizations.