Sample records for grid stochastic collocation

  1. A two-stage adaptive stochastic collocation method on nested sparse grids for multiphase flow in randomly heterogeneous porous media

    NASA Astrophysics Data System (ADS)

    Liao, Qinzhuo; Zhang, Dongxiao; Tchelepi, Hamdi

    2017-02-01

    A new computational method is proposed for efficient uncertainty quantification of multiphase flow in porous media with stochastic permeability. For pressure estimation, it combines the dimension-adaptive stochastic collocation method on Smolyak sparse grids and the Kronrod-Patterson-Hermite nested quadrature formulas. To address the low regularity of the saturation, an additional stage is developed in which pressure and velocity samples are first generated by sparse grid interpolation and then substituted into the transport equation to solve for the saturation samples. Numerical examples are presented for multiphase flow with stochastic permeability fields to demonstrate the accuracy and efficiency of the proposed two-stage adaptive stochastic collocation method on nested sparse grids.
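
    The two-stage split can be illustrated with a toy sketch: collocate a smooth "pressure" response on Gauss-Hermite nodes, then push cheap surrogate samples through a non-smooth "saturation" map instead of interpolating it directly. The two solver functions below are stand-ins invented for illustration, not the paper's flow and transport codes.

    ```python
    import numpy as np
    from numpy.polynomial import hermite_e as He

    def pressure_solver(xi):            # stage-1 target: smooth in the random input xi ~ N(0,1)
        return np.exp(0.3 * xi)

    def saturation_solver(p):           # stage-2 map: low regularity (kinks) in p
        return np.clip(1.5 * p - 1.0, 0.0, 1.0)

    # Stage 1: build a polynomial surrogate of the smooth pressure by collocation
    # at probabilists' Gauss-Hermite nodes (7 nodes -> degree-6 interpolant).
    nodes, _ = He.hermegauss(7)
    coeffs = He.hermefit(nodes, pressure_solver(nodes), 6)

    # Stage 2: sample the surrogate cheaply, then run the transport map on each
    # sample, rather than interpolating the non-smooth saturation itself.
    rng = np.random.default_rng(0)
    p_samples = He.hermeval(rng.standard_normal(100_000), coeffs)
    s_samples = saturation_solver(p_samples)
    print("mean saturation ~", s_samples.mean())
    ```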

  2. A two-stage adaptive stochastic collocation method on nested sparse grids for multiphase flow in randomly heterogeneous porous media

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liao, Qinzhuo, E-mail: liaoqz@pku.edu.cn; Zhang, Dongxiao; Tchelepi, Hamdi

    A new computational method is proposed for efficient uncertainty quantification of multiphase flow in porous media with stochastic permeability. For pressure estimation, it combines the dimension-adaptive stochastic collocation method on Smolyak sparse grids and the Kronrod-Patterson-Hermite nested quadrature formulas. To address the low regularity of the saturation, an additional stage is developed in which pressure and velocity samples are first generated by sparse grid interpolation and then substituted into the transport equation to solve for the saturation samples. Numerical examples are presented for multiphase flow with stochastic permeability fields to demonstrate the accuracy and efficiency of the proposed two-stage adaptive stochastic collocation method on nested sparse grids.

  3. A two-level stochastic collocation method for semilinear elliptic equations with random coefficients

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Luoping; Zheng, Bin; Lin, Guang

    In this work, we propose a novel two-level discretization for solving semilinear elliptic equations with random coefficients. Motivated by the two-grid method for deterministic partial differential equations (PDEs) introduced by Xu, our two-level stochastic collocation method utilizes a two-grid finite element discretization in the physical space and a two-level collocation method in the random domain. In particular, we solve semilinear equations on a coarse mesh $\mathcal{T}_H$ with a low-level stochastic collocation (corresponding to the polynomial space $\mathcal{P}_P$) and solve linearized equations on a fine mesh $\mathcal{T}_h$ using high-level stochastic collocation (corresponding to the polynomial space $\mathcal{P}_p$). We prove that the approximated solution obtained from this method achieves the same order of accuracy as that from solving the original semilinear problem directly by stochastic collocation with $\mathcal{T}_h$ and $\mathcal{P}_p$. The two-level method is computationally more efficient, especially for nonlinear problems with high random dimensions. Numerical experiments are also provided to verify the theoretical results.
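
    A minimal sketch of the two-grid idea in one spatial dimension, under my own simplifications (a deterministic toy problem -u'' + u^3 = 1 with homogeneous Dirichlet conditions standing in for the random-coefficient PDE): the full Newton iteration runs only on the coarse mesh, and the fine mesh sees a single linearized solve.

    ```python
    import numpy as np

    def lap(N):
        """(N-1)x(N-1) matrix approximating -u'' at interior nodes of a uniform grid."""
        h = 1.0 / N
        return (np.diag(2 * np.ones(N - 1))
                - np.diag(np.ones(N - 2), 1)
                - np.diag(np.ones(N - 2), -1)) / h**2

    def semilinear_newton(N, iters=20):
        """Full Newton solve of -u'' + u^3 = 1, u(0)=u(1)=0 (the 'expensive' step)."""
        A = lap(N)
        x = np.linspace(0, 1, N + 1)[1:-1]
        u = np.zeros_like(x)
        for _ in range(iters):
            u -= np.linalg.solve(A + np.diag(3 * u**2), A @ u + u**3 - 1.0)
        return x, u

    # Nonlinear solve only on the coarse mesh T_H ...
    xH, uH = semilinear_newton(8)

    # ... then a single linearized correction on the fine mesh T_h.
    Nh = 64
    xh = np.linspace(0, 1, Nh + 1)[1:-1]
    u0 = np.interp(xh, np.r_[0.0, xH, 1.0], np.r_[0.0, uH, 0.0])  # prolongate coarse solution
    A = lap(Nh)
    uh = u0 - np.linalg.solve(A + np.diag(3 * u0**2), A @ u0 + u0**3 - 1.0)
    print("max |two-grid - full Newton| on fine mesh:",
          np.max(np.abs(uh - semilinear_newton(Nh)[1])))
    ```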

  4. Stochastic collocation using Kronrod-Patterson-Hermite quadrature with moderate delay for subsurface flow and transport

    NASA Astrophysics Data System (ADS)

    Liao, Q.; Tchelepi, H.; Zhang, D.

    2015-12-01

    Uncertainty quantification aims at characterizing the impact of input parameters on output responses and plays an important role in many areas, including subsurface flow and transport. In this study, a sparse grid collocation approach, which uses a nested Kronrod-Patterson-Hermite quadrature rule with moderate delay for Gaussian random parameters, is proposed to quantify the uncertainty of model solutions. The conventional stochastic collocation method serves as a promising non-intrusive approach and has drawn a great deal of interest. The collocation points are usually chosen to be Gauss-Hermite quadrature nodes, which are not nested. The Kronrod-Patterson-Hermite nodes are shown to be more efficient than the Gauss-Hermite nodes due to their nestedness, and we propose a Kronrod-Patterson-Hermite rule with moderate delay to further improve the performance. Our study demonstrates the effectiveness of the proposed method for uncertainty quantification through subsurface flow and transport examples.
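
    The benefit of nestedness can be seen by counting cumulative unique nodes as a quadrature family is refined. numpy does not ship Kronrod-Patterson-Hermite rules, so the nested Clenshaw-Curtis family stands in for them in this sketch; the point is only that non-nested Gauss-Hermite rules discard essentially all previous model runs at each refinement, while a nested family reuses them.

    ```python
    import numpy as np
    from numpy.polynomial.hermite_e import hermegauss

    def clenshaw_curtis_nodes(level):
        # Nested family: level l nodes contain all level l-1 nodes.
        n = 2**level + 1
        return np.cos(np.pi * np.arange(n) / (n - 1))

    gh_seen, cc_seen = set(), set()
    for level in range(1, 6):
        gh_seen.update(np.round(hermegauss(2**level + 1)[0], 12))  # non-nested
        cc_seen.update(np.round(clenshaw_curtis_nodes(level), 12))  # nested
        print(f"level {level}: Gauss-Hermite unique nodes {len(gh_seen):3d}, "
              f"nested CC unique nodes {len(cc_seen):3d}")
    ```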

  5. Kalman filter parameter estimation for a nonlinear diffusion model of epithelial cell migration using stochastic collocation and the Karhunen-Loeve expansion.

    PubMed

    Barber, Jared; Tanase, Roxana; Yotov, Ivan

    2016-06-01

    Several Kalman filter algorithms are presented for data assimilation and parameter estimation for a nonlinear diffusion model of epithelial cell migration. These include the ensemble Kalman filter with Monte Carlo sampling and a stochastic collocation (SC) Kalman filter with structured sampling. Further, two types of noise are considered: uncorrelated noise, resulting in one stochastic dimension for each element of the spatial grid, and correlated noise parameterized by the Karhunen-Loeve (KL) expansion, resulting in one stochastic dimension for each KL term. The efficiency and accuracy of the four methods are investigated for two cases with synthetic data, with and without noise, as well as data from a laboratory experiment. While all algorithms perform reasonably well in matching the target solution and estimating the diffusion coefficient and the growth rate, the algorithms that employ SC and the KL expansion are computationally more efficient, as they require fewer ensemble members for comparable accuracy. In the case of the SC methods, this is due to improved approximation in stochastic space compared to Monte Carlo sampling. In the case of the KL methods, the parameterization of the noise results in a stochastic space of smaller dimension. The most efficient method is the one combining SC and the KL expansion.
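
    A minimal sketch of the correlated-noise parameterization (my own toy covariance, not the paper's model): truncate the eigendecomposition of the covariance matrix to obtain a KL expansion with one stochastic dimension per retained term.

    ```python
    import numpy as np

    x = np.linspace(0, 1, 200)
    C = np.exp(-np.abs(x[:, None] - x[None, :]) / 0.2)   # exponential covariance
    vals, vecs = np.linalg.eigh(C)
    idx = np.argsort(vals)[::-1]                          # sort modes by energy
    vals, vecs = vals[idx], vecs[:, idx]

    k = 10                                                # keep k KL terms -> k stochastic dims
    rng = np.random.default_rng(1)
    xi = rng.standard_normal(k)
    noise = vecs[:, :k] @ (np.sqrt(vals[:k]) * xi)        # one correlated-noise realization
    print("captured variance fraction:", vals[:k].sum() / vals.sum())
    ```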

  6. Uncertainty Propagation for Turbulent, Compressible Flow in a Quasi-1D Nozzle Using Stochastic Methods

    NASA Technical Reports Server (NTRS)

    Zang, Thomas A.; Mathelin, Lionel; Hussaini, M. Yousuff; Bataille, Francoise

    2003-01-01

    This paper describes a fully spectral, Polynomial Chaos method for the propagation of uncertainty in numerical simulations of compressible, turbulent flow, as well as a novel stochastic collocation algorithm for the same application. The stochastic collocation method is key to the efficient use of stochastic methods on problems with complex nonlinearities, such as those associated with the turbulence model equations in compressible flow and for CFD schemes requiring solution of a Riemann problem. Both methods are applied to compressible flow in a quasi-one-dimensional nozzle. The stochastic collocation method is roughly an order of magnitude faster than the fully Galerkin Polynomial Chaos method on the inviscid problem.
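
    Non-intrusive collocation reduces to running the deterministic solver at quadrature nodes and forming moments with the weights. The sketch below uses a scalar stand-in for the quasi-1D nozzle solve and a single N(0,1) uncertain input.

    ```python
    import numpy as np
    from numpy.polynomial.hermite_e import hermegauss

    def nozzle_qoi(xi):                       # stand-in for the nozzle solve
        return np.sin(1.0 + 0.2 * xi)

    nodes, weights = hermegauss(9)
    w = weights / np.sqrt(2 * np.pi)          # normalize weights for N(0,1)
    q = nozzle_qoi(nodes)                     # 9 deterministic "solver" runs
    mean = w @ q
    var = w @ (q - mean)**2
    print("mean:", mean, "variance:", var)
    ```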

  7. An adaptive sparse-grid high-order stochastic collocation method for Bayesian inference in groundwater reactive transport modeling

    NASA Astrophysics Data System (ADS)

    Zhang, Guannan; Lu, Dan; Ye, Ming; Gunzburger, Max; Webster, Clayton

    2013-10-01

    Bayesian analysis has become vital to uncertainty quantification in groundwater modeling, but its application has been hindered by the computational cost associated with the numerous model executions required to explore the posterior probability density function (PPDF) of model parameters. This is particularly the case when the PPDF is estimated using Markov Chain Monte Carlo (MCMC) sampling. In this study, a new approach is developed to improve the computational efficiency of Bayesian inference by constructing a surrogate of the PPDF, using an adaptive sparse-grid high-order stochastic collocation (aSG-hSC) method. Unlike previous works using a first-order hierarchical basis, this paper utilizes a compactly supported higher-order hierarchical basis to construct the surrogate system, resulting in a significant reduction in the number of required model executions. In addition, using the hierarchical surplus as an error indicator allows locally adaptive refinement of sparse grids in the parameter space, which further improves computational efficiency. To efficiently build the surrogate system for a PPDF with multiple significant modes, optimization techniques are used to identify the modes, for which high-probability regions are defined and components of the aSG-hSC approximation are constructed. After the surrogate is determined, the PPDF can be evaluated by sampling the surrogate system directly without model execution, resulting in improved efficiency of the surrogate-based MCMC compared with conventional MCMC. The developed method is evaluated using two synthetic groundwater reactive transport models. The first example involves coupled linear reactions and demonstrates the accuracy of our high-order hierarchical basis approach in approximating a high-dimensional posterior distribution. The second example is highly nonlinear because of the reactions of uranium surface complexation, and demonstrates how the iterative aSG-hSC method is able to capture the multimodal and non-Gaussian features of the PPDF caused by model nonlinearity. Both experiments show that aSG-hSC is an effective and efficient tool for Bayesian inference.
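
    The payoff of a surrogate is easiest to see in a stripped-down Metropolis loop where the forward model inside the likelihood is replaced by a cheap polynomial fit built from a handful of collocation runs. Everything below (model, prior, data) is a toy stand-in, and the global polynomial fit replaces the paper's adaptive sparse-grid hierarchical basis.

    ```python
    import numpy as np

    def forward(theta):                     # "expensive" forward model (stand-in)
        return np.tanh(2.0 * theta)

    # Build the surrogate once from a handful of collocation runs.
    train_t = np.linspace(-3, 3, 9)
    coeffs = np.polyfit(train_t, forward(train_t), 8)
    surrogate = lambda t: np.polyval(coeffs, t)

    obs, sigma = 0.5, 0.1                   # synthetic observation and noise level
    log_post = lambda t: -0.5 * ((surrogate(t) - obs) / sigma)**2 - 0.5 * t**2

    # Metropolis sampling: every likelihood evaluation now costs a polyval, not a solve.
    rng = np.random.default_rng(2)
    t, chain = 0.0, []
    for _ in range(20_000):
        prop = t + 0.5 * rng.standard_normal()
        if np.log(rng.uniform()) < log_post(prop) - log_post(t):
            t = prop
        chain.append(t)
    print("posterior mean ~", np.mean(chain[2000:]))
    ```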

  8. On the efficacy of stochastic collocation, stochastic Galerkin, and stochastic reduced order models for solving stochastic problems

    DOE PAGES

    Field, Richard V., Jr.; Emery, John M.; Grigoriu, Mircea Dan

    2015-05-19

    The stochastic collocation (SC) and stochastic Galerkin (SG) methods are two well-established and successful approaches for solving general stochastic problems. A recently developed method based on stochastic reduced order models (SROMs) can also be used. Herein we provide a comparison of the three methods for some numerical examples; our evaluation only holds for the examples considered in the paper. The purpose of the comparisons is not to criticize the SC or SG methods, which have proven very useful for a broad range of applications, nor is it to provide overall ratings of these methods as compared to the SROM method. Rather, our objectives are to present the SROM method as an alternative approach to solving stochastic problems and to provide information on the computational effort required by the implementation of each method, while simultaneously assessing their performance for a collection of specific problems.

  9. Reducing the cost of using collocation to compute vibrational energy levels: Results for CH2NH.

    PubMed

    Avila, Gustavo; Carrington, Tucker

    2017-08-14

    In this paper, we improve the collocation method for computing vibrational spectra that was presented in the work of Avila and Carrington, Jr. [J. Chem. Phys. 143, 214108 (2015)]. Known quadrature and collocation methods using a Smolyak grid require storing intermediate vectors with more elements than points on the Smolyak grid, because grid labels are constrained among themselves and basis labels are constrained among themselves. We show that by using so-called hierarchical basis functions, one can significantly reduce the memory required: the intermediate vectors then have only as many elements as the Smolyak grid. The ideas are tested by computing energy levels of CH2NH.

  10. The welfare effects of integrating renewable energy into electricity markets

    NASA Astrophysics Data System (ADS)

    Lamadrid, Alberto J.

    The challenges of deploying more renewable energy sources on an electric grid are caused largely by their inherent variability. In this context, energy storage can help make the electric delivery system more reliable by mitigating this variability. This thesis analyzes a series of models for procuring electricity and ancillary services for both individuals and social planners with high penetrations of stochastic wind energy. The results obtained for an individual decision maker using stochastic optimization are ambiguous, with closed-form solutions dependent on technological parameters and no consideration of system reliability. The social planner models correctly reflect the effect of system reliability, and in the case of a Stochastic, Security-Constrained Optimal Power Flow (S-SC-OPF or SuperOPF), determine reserve capacity endogenously so that system reliability is maintained. A single-period SuperOPF shows that including ramping costs in the objective function leads to more wind spilling and increased capacity requirements for reliability. However, this model does not reflect the intertemporal tradeoffs of using Energy Storage Systems (ESS) to improve reliability and mitigate wind variability. The results with the multiperiod SuperOPF determine the optimum use of storage for a typical day, and compare the effects of collocating ESS at wind sites with the same amount of storage (deferrable demand) located at demand centers. The collocated ESS has slightly lower operating costs and spills less wind generation compared to deferrable demand, but the total amount of conventional generating capacity needed for system adequacy is higher. In terms of total system costs, which include the capital cost of conventional generating capacity, the cost with deferrable demand is substantially lower because the daily demand profile is flattened and less conventional generation capacity is then needed for reliability purposes. The analysis also demonstrates that the optimum daily pattern of dispatch and reserves is seriously distorted if the stochastic characteristics of wind generation are ignored.

  11. Collocated electrodynamic FDTD schemes using overlapping Yee grids and higher-order Hodge duals

    NASA Astrophysics Data System (ADS)

    Deimert, C.; Potter, M. E.; Okoniewski, M.

    2016-12-01

    The collocated Lebedev grid has previously been proposed as an alternative to the Yee grid for electromagnetic finite-difference time-domain (FDTD) simulations. While it performs better in anisotropic media, it performs poorly in isotropic media because it is equivalent to four overlapping, uncoupled Yee grids. We propose to couple the four Yee grids and fix the Lebedev method using discrete exterior calculus (DEC) with higher-order Hodge duals. We find that higher-order Hodge duals do improve the performance of the Lebedev grid, but they also improve the Yee grid by a similar amount. The effectiveness of coupling overlapping Yee grids with a higher-order Hodge dual is thus questionable. However, the theoretical foundations developed to derive these methods may be of interest in other problems.
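
    For readers unfamiliar with the staggered Yee arrangement that the Lebedev grid overlaps, here is a toy 1D leapfrog update in normalized units (dz = dt = c = 1); the paper itself concerns 3D collocated grids and higher-order Hodge duals, which this sketch does not attempt.

    ```python
    import numpy as np

    nz, nt = 200, 400
    Ez, Hy = np.zeros(nz), np.zeros(nz - 1)   # E on integer points, H on half-integer points
    for n in range(nt):
        Hy += np.diff(Ez)                      # update H from the curl of E
        Ez[1:-1] += np.diff(Hy)                # update E from the curl of H
        Ez[100] += np.exp(-((n - 30) / 10)**2)  # soft Gaussian source
    print("field energy ~", (Ez**2).sum() + (Hy**2).sum())
    ```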

  12. Error and Complexity Analysis for a Collocation-Grid-Projection Plus Precorrected-FFT Algorithm for Solving Potential Integral Equations with Laplace or Helmholtz Kernels

    NASA Technical Reports Server (NTRS)

    Phillips, J. R.

    1996-01-01

    In this paper we derive error bounds for a collocation-grid-projection scheme tuned for use in multilevel methods for solving boundary-element discretizations of potential integral equations. The grid-projection scheme is then combined with a precorrected-FFT style multilevel method for solving potential integral equations with 1/r and e^{ikr}/r kernels. A complexity analysis of this combined method shows that for homogeneous problems, the method is O(n log n), nearly independent of the kernel. In addition, it is shown analytically and experimentally that for an inhomogeneity generated by a very finely discretized surface, the combined method slows to O(n^{4/3}). Finally, examples are given to show that the collocation-based grid-projection plus precorrected-FFT scheme is competitive with fast-multipole algorithms when considering realistic problems and 1/r kernels, but can be used over a range of spatial frequencies with only a small performance penalty.

  13. Data centers as dispatchable loads to harness stranded power

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Kibaek; Yang, Fan; Zavala, Victor M.

    Here, we analyze how traditional data center placement and optimal placement of dispatchable data centers affect power grid efficiency. We use detailed network models, stochastic optimization formulations, and diverse renewable generation scenarios to perform our analysis. Our results reveal that significant spillage and stranded power will persist in power grids as wind power levels are increased. A counter-intuitive finding is that collocating data centers with inflexible loads next to wind farms has limited impact on renewable portfolio standard (RPS) goals because it provides limited system-level flexibility. Such an approach can, in fact, increase stranded power and fossil-fueled generation. In contrast, optimally placing data centers that are dispatchable provides system-wide flexibility, reduces stranded power, and improves efficiency. In short, optimally placed dispatchable computing loads can enable better scaling to high RPS. In our case study, we find that these dispatchable computing loads are powered to 60-80% of their requested capacity, indicating that there are significant economic incentives provided by stranded power.

  14. Data centers as dispatchable loads to harness stranded power

    DOE PAGES

    Kim, Kibaek; Yang, Fan; Zavala, Victor M.; ...

    2016-07-20

    Here, we analyze how traditional data center placement and optimal placement of dispatchable data centers affect power grid efficiency. We use detailed network models, stochastic optimization formulations, and diverse renewable generation scenarios to perform our analysis. Our results reveal that significant spillage and stranded power will persist in power grids as wind power levels are increased. A counter-intuitive finding is that collocating data centers with inflexible loads next to wind farms has limited impact on renewable portfolio standard (RPS) goals because it provides limited system-level flexibility. Such an approach can, in fact, increase stranded power and fossil-fueled generation. In contrast, optimally placing data centers that are dispatchable provides system-wide flexibility, reduces stranded power, and improves efficiency. In short, optimally placed dispatchable computing loads can enable better scaling to high RPS. In our case study, we find that these dispatchable computing loads are powered to 60-80% of their requested capacity, indicating that there are significant economic incentives provided by stranded power.

  15. A Comprehensive Study of Gridding Methods for GPS Horizontal Velocity Fields

    NASA Astrophysics Data System (ADS)

    Wu, Yanqiang; Jiang, Zaisen; Liu, Xiaoxia; Wei, Wenxin; Zhu, Shuang; Zhang, Long; Zou, Zhenyu; Xiong, Xiaohui; Wang, Qixin; Du, Jiliang

    2017-03-01

    Four gridding methods for GPS velocities are compared in terms of their precision, applicability and robustness by analyzing simulated data with uncertainties from 0.0 to ±3.0 mm/a. When the input data are 1° × 1° grid sampled and the uncertainty of the additional error is greater than ±1.0 mm/a, the gridding results show that the least-squares collocation method is highly robust while the robustness of the Kriging method is low. In contrast, the spherical harmonics and the multi-surface function are moderately robust, and the regional singular values for the multi-surface function method and the edge effects for the spherical harmonics method become more significant with increasing uncertainty of the input data. When the input data (with additional errors of ±2.0 mm/a) are decimated by 50% from the 1° × 1° grid data and then erased in three 6° × 12° regions, the gridding results in these three regions indicate that the least-squares collocation and the spherical harmonics methods have good performances, while the multi-surface function and the Kriging methods may lead to singular values. The gridding techniques are also applied to GPS horizontal velocities with an average error of ±0.8 mm/a over the Chinese mainland and the surrounding areas, and the results show that the least-squares collocation method has the best performance, followed by the Kriging and multi-surface function methods. Furthermore, the edge effects of the spherical harmonics method are significantly affected by the sparseness and geometric distribution of the input data. In general, the least-squares collocation method is superior in terms of its robustness, edge effect, error distribution and stability, while the other methods have several positive features.
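
    Least-squares collocation itself is compact enough to sketch: with signal covariance C and noise covariance R, the gridded estimate from scattered observations y is s_hat = C_sx (C_xx + R)^{-1} y. The Gaussian covariance model and noise level below are arbitrary choices for illustration, not the paper's empirical covariances.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    obs_pts = rng.uniform(0, 10, 50)                       # scattered observation sites
    grid_pts = np.linspace(0, 10, 101)                     # prediction grid
    truth = lambda p: np.sin(p)
    y = truth(obs_pts) + 0.1 * rng.standard_normal(50)     # noisy observations

    cov = lambda a, b: np.exp(-0.5 * (a[:, None] - b[None, :])**2 / 1.5**2)
    Cxx = cov(obs_pts, obs_pts) + 0.1**2 * np.eye(50)      # signal + noise covariance
    Csx = cov(grid_pts, obs_pts)
    s_hat = Csx @ np.linalg.solve(Cxx, y)                  # LSC prediction on the grid
    print("rms error on grid:", np.sqrt(np.mean((s_hat - truth(grid_pts))**2)))
    ```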

  16. Eulerian Lagrangian Adaptive Fup Collocation Method for solving the conservative solute transport in heterogeneous porous media

    NASA Astrophysics Data System (ADS)

    Gotovac, Hrvoje; Srzic, Veljko

    2014-05-01

    Contaminant transport in natural aquifers is a complex, multiscale process that is frequently studied using different Eulerian, Lagrangian and hybrid numerical methods. Conservative solute transport is typically modeled using the advection-dispersion equation (ADE). Despite the large number of numerical methods that have been developed to solve it, the accurate numerical solution of the ADE still presents formidable challenges. In particular, current numerical solutions of multidimensional advection-dominated transport in non-uniform velocity fields are affected by one or all of the following problems: numerical dispersion that introduces artificial mixing and dilution, grid orientation effects, unresolved spatial and temporal scales, and unphysical numerical oscillations (e.g., Herrera et al., 2009; Bosso et al., 2012). In this work we present the Eulerian Lagrangian Adaptive Fup Collocation Method (ELAFCM), based on Fup basis functions and a collocation approach for spatial approximation, together with explicit stabilized Runge-Kutta-Chebyshev temporal integration (public domain routine SERK2), which is especially well suited for stiff parabolic problems. The spatial adaptive strategy is based on Fup basis functions, which are closely related to wavelets and splines: they are compactly supported, exactly reproduce algebraic polynomials, and enable multiresolution adaptive analysis (MRA). MRA is performed here via the Fup Collocation Transform (FCT), so that at each time step the concentration solution is decomposed using only a few significant Fup basis functions on an adaptive collocation grid with appropriate scales (frequencies) and locations, a desired level of accuracy and a near minimum computational cost. FCT adds more collocation points and higher resolution levels only in sensitive zones with sharp concentration gradients, fronts and/or narrow transition zones. Our recent results show that there is no need to solve a large linear system on the adaptive grid, because each Fup coefficient is obtained by predefined formulas equating the Fup expansion around the corresponding collocation point with a particular collocation operator based on a few surrounding solution values. Furthermore, each Fup coefficient can be obtained independently, which is perfectly suited for parallel processing. The adaptive grid in each time step is obtained from the solution of the previous time step (or the initial conditions) and an advective Lagrangian step in the current time step according to the velocity field and continuous streamlines. The explicit stabilized routine SERK2 is then applied to the dispersive Eulerian part of the solution in the current time step on the resulting spatial adaptive grid. The overall adaptive concept does not require solving large linear systems for the spatial and temporal approximation of conservative transport, and the new Eulerian-Lagrangian collocation scheme resolves the aforementioned numerical problems due to its adaptive nature and its ability to control numerical errors in space and time: it solves advection in a Lagrangian way, eliminating problems of Eulerian methods, while the optimal collocation grid efficiently describes the solution and boundary conditions, eliminating the need for a large number of particles and other problems of Lagrangian methods.
    Finally, numerical tests show that this approach enables not only an accurate velocity field, but also conservative transport even in highly heterogeneous porous media, resolving all spatial and temporal scales of the concentration field.

  17. Simplex-stochastic collocation method with improved scalability

    NASA Astrophysics Data System (ADS)

    Edeling, W. N.; Dwight, R. P.; Cinnella, P.

    2016-04-01

    The Simplex-Stochastic Collocation (SSC) method is a robust tool used to propagate uncertain input distributions through a computer code. However, it becomes prohibitively expensive for problems with dimensions higher than 5. The main purpose of this paper is to identify the bottlenecks and to improve upon this poor scalability. To do so, we propose an alternative interpolation stencil technique based on the set-covering problem, and we integrate the SSC method into the high-dimensional model-reduction framework. In addition, we address the issue of ill-conditioned sample matrices, and we present an analytical map to facilitate uniformly distributed simplex sampling.
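
    The core of the simplex approach is piecewise-polynomial interpolation on a triangulation of the sampled stochastic space. A loose sketch using scipy's Delaunay-based linear interpolator as a stand-in for the SSC stencils (the sample set and response function are toys):

    ```python
    import numpy as np
    from scipy.interpolate import LinearNDInterpolator

    rng = np.random.default_rng(6)
    pts = rng.uniform(0, 1, (50, 2))               # sampled 2D stochastic inputs
    vals = np.sin(4 * pts[:, 0]) * pts[:, 1]       # "expensive code" outputs (toy)

    # Piecewise-linear surrogate on the Delaunay triangulation of the samples.
    surrogate = LinearNDInterpolator(pts, vals)
    print(surrogate(0.3, 0.7))                     # cheap evaluation at a new point
    ```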

  18. Single-grid spectral collocation for the Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Bernardi, Christine; Canuto, Claudio; Maday, Yvon; Metivet, Brigitte

    1988-01-01

    The aim of the paper is to study a collocation spectral method to approximate the Navier-Stokes equations: only one grid is used, which is built from the nodes of a Gauss-Lobatto quadrature formula, either of Legendre or of Chebyshev type. The convergence is proven for the Stokes problem provided with inhomogeneous Dirichlet conditions, then thoroughly analyzed for the Navier-Stokes equations. The practical implementation algorithm is presented, together with numerical results.

  19. Inverse Modeling Using Markov Chain Monte Carlo Aided by Adaptive Stochastic Collocation Method with Transformation

    NASA Astrophysics Data System (ADS)

    Zhang, D.; Liao, Q.

    2016-12-01

    Bayesian inference provides a convenient framework for solving statistical inverse problems. In this method, the parameters to be identified are treated as random variables. The prior knowledge, the system nonlinearity, and the measurement errors can be directly incorporated in the posterior probability density function (PDF) of the parameters. The Markov chain Monte Carlo (MCMC) method is a powerful tool to generate samples from the posterior PDF. However, since MCMC usually requires thousands or even millions of forward simulations, it can be a computationally intensive endeavor, particularly when faced with large-scale flow and transport models. To address this issue, we construct a surrogate system for the model responses in the form of polynomials using the stochastic collocation method. In addition, we employ interpolation based on nested sparse grids that takes into account the differing importance of the parameters, to handle high random dimensions in the stochastic space. Furthermore, in cases of low regularity, such as a discontinuous or non-smooth relation between the input parameters and the output responses, we introduce an additional transformation to improve the accuracy of the surrogate model. Once the surrogate system is built, the likelihood can be evaluated with very little computational cost. We analyzed the convergence rate of the forward solution and the surrogate posterior by the Kullback-Leibler divergence, which quantifies the difference between probability distributions. Fast convergence of the forward solution implies fast convergence of the surrogate posterior to the true posterior. We also tested the proposed algorithm on water-flooding two-phase flow reservoir examples. The posterior PDF calculated from a very long chain with direct forward simulation is assumed to be accurate. The posterior PDF calculated using the surrogate model is in reasonable agreement with this reference, revealing a great improvement in terms of computational efficiency.

  20. Spectral methods on arbitrary grids

    NASA Technical Reports Server (NTRS)

    Carpenter, Mark H.; Gottlieb, David

    1995-01-01

    Stable and spectrally accurate numerical methods are constructed on arbitrary grids for partial differential equations. These new methods are equivalent to conventional spectral methods but do not rely on specific grid distributions. Specifically, we show how to implement Legendre Galerkin, Legendre collocation, and Laguerre Galerkin methodology on arbitrary grids.

  21. A multi-dimensional Smolyak collocation method in curvilinear coordinates for computing vibrational spectra

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Avila, Gustavo, E-mail: Gustavo-Avila@telefonica.net; Carrington, Tucker, E-mail: Tucker.Carrington@queensu.ca

    In this paper, we improve the collocation method for computing vibrational spectra that was presented in Avila and Carrington, Jr. [J. Chem. Phys. 139, 134114 (2013)]. Using an iterative eigensolver, energy levels and wavefunctions are determined from values of the potential on a Smolyak grid. The kinetic energy matrix-vector product is evaluated by transforming a vector labelled with (nondirect product) grid indices to a vector labelled by (nondirect product) basis indices. Both the transformation and application of the kinetic energy operator (KEO) scale favorably. Collocation facilitates dealing with complicated KEOs because it obviates the need to calculate integrals of coordinate-dependent coefficients of differential operators. The ideas are tested by computing energy levels of HONO using a KEO in bond coordinates.

  22. A non-linear dimension reduction methodology for generating data-driven stochastic input models

    NASA Astrophysics Data System (ADS)

    Ganapathysubramanian, Baskar; Zabaras, Nicholas

    2008-06-01

    Stochastic analysis of random heterogeneous media (polycrystalline materials, porous media, functionally graded materials) provides information of significance only if realistic input models of the topology and property variations are used. This paper proposes a framework to construct such input stochastic models for the topology and thermal diffusivity variations in heterogeneous media using a data-driven strategy. Given a set of microstructure realizations (input samples) generated from given statistical information about the medium topology, the framework constructs a reduced-order stochastic representation of the thermal diffusivity. This problem of constructing a low-dimensional stochastic representation of property variations is analogous to the problem of manifold learning and parametric fitting of hyper-surfaces encountered in image processing and psychology. Denote by M the set of microstructures that satisfy the given experimental statistics. A non-linear dimension reduction strategy is utilized to map M to a low-dimensional region, A. We first show that M is a compact manifold embedded in a high-dimensional input space R^n. An isometric mapping F from M to a low-dimensional, compact, connected set A ⊂ R^d (d ≪ n) is constructed. Given only a finite set of samples of the data, the methodology uses arguments from graph theory and differential geometry to construct the isometric transformation F: M → A. Asymptotic convergence of the representation of M by A is shown. This mapping F serves as an accurate, low-dimensional, data-driven representation of the property variations. The reduced-order model of the material topology and thermal diffusivity variations is subsequently used as an input in the solution of stochastic partial differential equations that describe the evolution of dependent variables. A sparse grid collocation strategy (Smolyak algorithm) is utilized to solve these stochastic equations efficiently. We showcase the methodology by constructing low-dimensional input stochastic models to represent thermal diffusivity in two-phase microstructures. This model is used in analyzing the effect of topological variations of two-phase microstructures on the evolution of temperature in heat conduction processes.
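
    As a rough illustration of the mapping step, scikit-learn's Isomap (a graph-based isometric embedding in the same spirit as the transformation F constructed here) can recover a one-dimensional coordinate from a toy "microstructure" set lying on a noisy spiral in R^3; the data below are fabricated for the sketch.

    ```python
    import numpy as np
    from sklearn.manifold import Isomap

    rng = np.random.default_rng(5)
    t = rng.uniform(0, 3 * np.pi, 500)                     # hidden manifold parameter
    X = np.c_[t * np.cos(t), t * np.sin(t),
              0.1 * rng.standard_normal(500)]              # toy high-dim samples

    # Map the sample set to a 1D region via a graph-based isometric embedding.
    A = Isomap(n_neighbors=10, n_components=1).fit_transform(X).ravel()
    print("corr with true parameter:", abs(np.corrcoef(A, t)[0, 1]))
    ```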

  23. Accuracy and speed in computing the Chebyshev collocation derivative

    NASA Technical Reports Server (NTRS)

    Don, Wai-Sun; Solomonoff, Alex

    1991-01-01

    We study several algorithms for computing the Chebyshev spectral derivative and compare their roundoff errors. For a large number of collocation points, the elements of the Chebyshev differentiation matrix, if constructed in the usual way, are not computed accurately. A subtle cause is found to account for the poor accuracy when computing the derivative by the matrix-vector multiplication method. Methods for accurately computing the elements of the matrix are presented, and we find that if the entries of the matrix are computed accurately, the roundoff error of the matrix-vector multiplication is as small as that of the transform-recursion algorithm. CPU timings are shown for several different algorithms for computing the derivative by the Chebyshev collocation method, for a wide variety of two-dimensional grid sizes on both an IBM and a Cray 2 computer. We find that which algorithm is fastest on a particular machine depends not only on the grid size but also on small details of the computer hardware. For most practical grid sizes used in computation, the even-odd decomposition algorithm is faster than the transform-recursion method.
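
    One widely used remedy in this literature (not necessarily the paper's own construction) is to recompute the diagonal of the differentiation matrix from the off-diagonal entries, the "negative sum trick", which enforces that each row annihilates constants. A compact sketch of the Chebyshev differentiation matrix with this fix:

    ```python
    import numpy as np

    def cheb_D(N):
        """Chebyshev differentiation matrix on N+1 Gauss-Lobatto points."""
        x = np.cos(np.pi * np.arange(N + 1) / N)
        c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0)**np.arange(N + 1)
        X = x[:, None] - x[None, :]
        D = np.outer(c, 1.0 / c) / (X + np.eye(N + 1))
        D -= np.diag(D.sum(axis=1))        # negative-sum trick for the diagonal
        return D, x

    D, x = cheb_D(32)
    print(np.max(np.abs(D @ np.sin(x) - np.cos(x))))   # spectral accuracy check
    ```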

  24. On multilevel RBF collocation to solve nonlinear PDEs arising from endogenous stochastic volatility models

    NASA Astrophysics Data System (ADS)

    Bastani, Ali Foroush; Dastgerdi, Maryam Vahid; Mighani, Abolfazl

    2018-06-01

    The main aim of this paper is the analytical and numerical study of a time-dependent second-order nonlinear partial differential equation (PDE) arising from the endogenous stochastic volatility model, introduced in [Bensoussan, A., Crouhy, M. and Galai, D., Stochastic equity volatility related to the leverage effect (I): equity volatility behavior. Applied Mathematical Finance, 1, 63-85, 1994]. As the first step, we derive a consistent set of initial and boundary conditions to complement the PDE, when the firm is financed by equity and debt. In the sequel, we propose a Newton-based iteration scheme for nonlinear parabolic PDEs which is an extension of a method for solving elliptic partial differential equations introduced in [Fasshauer, G. E., Newton iteration with multiquadrics for the solution of nonlinear PDEs. Computers and Mathematics with Applications, 43, 423-438, 2002]. The scheme is based on multilevel collocation using radial basis functions (RBFs) to solve the resulting locally linearized elliptic PDEs obtained at each level of the Newton iteration. We show the effectiveness of the resulting framework by solving a prototypical example from the field and compare the results with those obtained from three different techniques: (1) a finite difference discretization; (2) a naive RBF collocation and (3) a benchmark approximation, introduced for the first time in this paper. The numerical results confirm the robustness, higher convergence rate and good stability properties of the proposed scheme compared to other alternatives. We also comment on some possible research directions in this field.
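
    To make the RBF collocation ingredient concrete, here is a bare-bones multiquadric collocation solve of a linear two-point boundary value problem -u'' = f, u(0) = u(1) = 0 (my toy setup; the paper wraps solves of this kind in a Newton iteration across multiple levels).

    ```python
    import numpy as np

    N, c = 25, 0.3                                   # collocation nodes, shape parameter
    x = np.linspace(0, 1, N)
    r2 = (x[:, None] - x[None, :])**2
    phi = np.sqrt(r2 + c**2)                         # multiquadric basis phi(|x - x_j|)
    phi_xx = c**2 / (r2 + c**2)**1.5                 # its second derivative in x

    # Rows: boundary condition at x=0, PDE -u''=f at interior nodes, BC at x=1.
    A = np.vstack([phi[[0]], -phi_xx[1:-1], phi[[-1]]])
    rhs = np.r_[0.0, np.pi**2 * np.sin(np.pi * x[1:-1]), 0.0]
    lam = np.linalg.solve(A, rhs)                    # basis coefficients
    u = phi @ lam
    print(np.max(np.abs(u - np.sin(np.pi * x))))     # compare to exact solution
    ```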

  25. Deterministic analysis of extrinsic and intrinsic noise in an epidemiological model.

    PubMed

    Bayati, Basil S

    2016-05-01

    We couple a stochastic collocation method with an analytical expansion of the canonical epidemiological master equation to analyze the effects of both extrinsic and intrinsic noise. It is shown that, depending on the distribution of the extrinsic noise, the master equation yields quantitatively different results compared to using the expectation of the distribution for the stochastic parameter. This difference arises from the nonlinear terms in the master equation, and we show that the deviation away from the expectation of the extrinsic noise scales nonlinearly with the variance of the distribution. The method presented here converges linearly with respect to the number of particles in the system and exponentially with respect to the order of the polynomials used in the stochastic collocation calculation. This makes the method more accurate than standard Monte Carlo methods, which suffer from slow, nonmonotonic convergence. In epidemiological terms, the results show that extrinsic fluctuations should be taken into account, since they affect the speed of disease outbreaks, and that the gamma distribution should be used to model the basic reproductive number.

  26. Radiation Transport in Random Media With Large Fluctuations

    NASA Astrophysics Data System (ADS)

    Olson, Aaron; Prinja, Anil; Franke, Brian

    2017-09-01

    Neutral particle transport in media exhibiting large and complex material property spatial variation is modeled by representing cross sections as lognormal random functions of space, generated through a nonlinear memory-less transformation of a Gaussian process with covariance uniquely determined by the covariance of the cross section. A Karhunen-Loève decomposition of the Gaussian process is implemented to efficiently generate realizations of the random cross sections, and Woodcock Monte Carlo is used to transport particles on each realization and generate benchmark solutions for the mean and variance of the particle flux as well as probability densities of the particle reflectance and transmittance. A computationally efficient stochastic collocation method is implemented to directly compute the statistical moments such as the mean and variance, while a polynomial chaos expansion in conjunction with stochastic collocation provides a convenient surrogate model that also produces probability densities of output quantities of interest. Extensive numerical testing demonstrates that stochastic reduced-order modeling provides an accurate and cost-effective alternative to random sampling for particle transport in random media.

  27. A non-linear dimension reduction methodology for generating data-driven stochastic input models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ganapathysubramanian, Baskar; Zabaras, Nicholas

    Stochastic analysis of random heterogeneous media (polycrystalline materials, porous media, functionally graded materials) provides information of significance only if realistic input models of the topology and property variations are used. This paper proposes a framework to construct such input stochastic models for the topology and thermal diffusivity variations in heterogeneous media using a data-driven strategy. Given a set of microstructure realizations (input samples) generated from given statistical information about the medium topology, the framework constructs a reduced-order stochastic representation of the thermal diffusivity. This problem of constructing a low-dimensional stochastic representation of property variations is analogous to the problem of manifold learning and parametric fitting of hyper-surfaces encountered in image processing and psychology. Denote by M the set of microstructures that satisfy the given experimental statistics. A non-linear dimension reduction strategy is utilized to map M to a low-dimensional region, A. We first show that M is a compact manifold embedded in a high-dimensional input space R^n. An isometric mapping F from M to a low-dimensional, compact, connected set A ⊂ R^d (d ≪ n) is constructed. Given only a finite set of samples of the data, the methodology uses arguments from graph theory and differential geometry to construct the isometric transformation F: M → A. Asymptotic convergence of the representation of M by A is shown. This mapping F serves as an accurate, low-dimensional, data-driven representation of the property variations. The reduced-order model of the material topology and thermal diffusivity variations is subsequently used as an input in the solution of stochastic partial differential equations that describe the evolution of dependent variables. A sparse grid collocation strategy (Smolyak algorithm) is utilized to solve these stochastic equations efficiently. We showcase the methodology by constructing low-dimensional input stochastic models to represent thermal diffusivity in two-phase microstructures. This model is used in analyzing the effect of topological variations of two-phase microstructures on the evolution of temperature in heat conduction processes.

  28. Three-dimensional wave field modeling by a collocated-grid finite-difference method in the anelastic model with surface topography

    NASA Astrophysics Data System (ADS)

    Wang, N.; Li, J.; Borisov, D.; Gharti, H. N.; Shen, Y.; Zhang, W.; Savage, B. K.

    2016-12-01

    We incorporate 3D anelastic attenuation into the collocated-grid finite-difference method on curvilinear grids (Zhang et al., 2012), using the rheological model of the generalized Maxwell body (Emmerich and Korn, 1987; Moczo and Kristek, 2005; Käser et al., 2007). We follow a conventional procedure to calculate the anelastic coefficients (Emmerich and Korn, 1987) determined by the Q(ω)-law, with a modification in the choice of frequency band and thus the relaxation frequencies that equidistantly cover the logarithmic frequency range. We show that such an optimization of anelastic coefficients is more accurate when using a fixed number of relaxation mechanisms to fit the frequency independent Q-factors. We use curvilinear grids to represent the surface topography. The velocity-stress form of the 3D isotropic anelastic wave equation is solved with a collocated-grid finite-difference method. Compared with the elastic case, we need to solve additional material-independent anelastic functions (Kristek and Moczo, 2003) for the mechanisms at each relaxation frequency. Based on the stress-strain relation, we calculate the spatial partial derivatives of the anelastic functions indirectly thereby saving computational storage and improving computational efficiency. The complex-frequency-shifted perfectly matched layer (CFS-PML) is used for the absorbing boundary condition based on the auxiliary difference equation (Zhang and Shen, 2010). The traction image method (Zhang and Chen, 2006) is employed for the free-surface boundary condition. We perform several numerical experiments including homogeneous full-space models and layered half-space models, considering both flat and 3D Gaussian-shape hill surfaces. The results match very well with those of the spectral-element method (Komatitisch and Tromp, 2002; Savage et al., 2010), verifying the simulations by our method in the anelastic model with surface topography.

  29. Stability of compressible Taylor-Couette flow

    NASA Technical Reports Server (NTRS)

    Kao, Kai-Hsiung; Chow, Chuen-Yen

    1991-01-01

    Compressible stability equations are solved using the spectral collocation method in an attempt to study the effects of temperature difference and compressibility on the stability of Taylor-Couette flow. It is found that the Chebyshev collocation spectral method yields highly accurate results using fewer grid points for solving stability problems. Comparisons are made between the result obtained by assuming small Mach number with a uniform temperature distribution and that based on fully incompressible analysis.

  30. Entropy Stable Staggered Grid Spectral Collocation for the Burgers' and Compressible Navier-Stokes Equations

    NASA Technical Reports Server (NTRS)

    Carpenter, Mark H.; Parsani, Matteo; Fisher, Travis C.; Nielsen, Eric J.

    2015-01-01

    Staggered grid, entropy stable discontinuous spectral collocation operators of any order are developed for Burgers' and the compressible Navier-Stokes equations on unstructured hexahedral elements. This generalization of previous entropy stable spectral collocation work [1, 2], extends the applicable set of points from tensor product, Legendre-Gauss-Lobatto (LGL) to a combination of tensor product Legendre-Gauss (LG) and LGL points. The new semi-discrete operators discretely conserve mass, momentum, energy and satisfy a mathematical entropy inequality for both Burgers' and the compressible Navier-Stokes equations in three spatial dimensions. They are valid for smooth as well as discontinuous flows. The staggered LG and conventional LGL point formulations are compared on several challenging test problems. The staggered LG operators are significantly more accurate, although more costly to implement. The LG and LGL operators exhibit similar robustness, as is demonstrated using test problems known to be problematic for operators that lack a nonlinearly stability proof for the compressible Navier-Stokes equations (e.g., discontinuous Galerkin, spectral difference, or flux reconstruction operators).

  31. Ensemble Grouping Strategies for Embedded Stochastic Collocation Methods Applied to Anisotropic Diffusion Problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    D'Elia, M.; Edwards, H. C.; Hu, J.

    Previous work has demonstrated that propagating groups of samples, called ensembles, together through forward simulations can dramatically reduce the aggregate cost of sampling-based uncertainty propagation methods [E. Phipps, M. D'Elia, H. C. Edwards, M. Hoemmen, J. Hu, and S. Rajamanickam, SIAM J. Sci. Comput., 39 (2017), pp. C162-C193]. However, critical to the success of this approach when applied to challenging problems of scientific interest is the grouping of samples into ensembles to minimize the total computational work. For example, the total number of linear solver iterations for ensemble systems may be strongly influenced by which samples form the ensemble when applying iterative linear solvers to parameterized and stochastic linear systems. In this paper we explore sample grouping strategies for local adaptive stochastic collocation methods applied to PDEs with uncertain input data, in particular canonical anisotropic diffusion problems where the diffusion coefficient is modeled by truncated Karhunen-Loève expansions. Finally, we demonstrate that a measure of the total anisotropy of the diffusion coefficient is a good surrogate for the number of linear solver iterations for each sample and therefore provides a simple and effective metric for grouping samples.

  32. Ensemble Grouping Strategies for Embedded Stochastic Collocation Methods Applied to Anisotropic Diffusion Problems

    DOE PAGES

    D'Elia, M.; Edwards, H. C.; Hu, J.; ...

    2018-01-18

    Previous work has demonstrated that propagating groups of samples, called ensembles, together through forward simulations can dramatically reduce the aggregate cost of sampling-based uncertainty propagation methods [E. Phipps, M. D'Elia, H. C. Edwards, M. Hoemmen, J. Hu, and S. Rajamanickam, SIAM J. Sci. Comput., 39 (2017), pp. C162-C193]. However, critical to the success of this approach when applied to challenging problems of scientific interest is the grouping of samples into ensembles to minimize the total computational work. For example, the total number of linear solver iterations for ensemble systems may be strongly influenced by which samples form the ensemble when applying iterative linear solvers to parameterized and stochastic linear systems. In this paper we explore sample grouping strategies for local adaptive stochastic collocation methods applied to PDEs with uncertain input data, in particular canonical anisotropic diffusion problems where the diffusion coefficient is modeled by truncated Karhunen-Loève expansions. Finally, we demonstrate that a measure of the total anisotropy of the diffusion coefficient is a good surrogate for the number of linear solver iterations for each sample and therefore provides a simple and effective metric for grouping samples.

  33. Fourier Collocation Approach With Mesh Refinement Method for Simulating Transit-Time Ultrasonic Flowmeters Under Multiphase Flow Conditions.

    PubMed

    Simurda, Matej; Duggen, Lars; Basse, Nils T; Lassen, Benny

    2018-02-01

    A numerical model for transit-time ultrasonic flowmeters operating under multiphase flow conditions previously presented by us is extended by mesh refinement and grid point redistribution. The method solves modified first-order stress-velocity equations of elastodynamics with additional terms to account for the effect of the background flow. Spatial derivatives are calculated by a Fourier collocation scheme allowing the use of the fast Fourier transform, while the time integration is realized by the explicit third-order Runge-Kutta finite-difference scheme. The method is compared against analytical solutions and experimental measurements to verify the benefit of using mapped grids. Additionally, a study of clamp-on and in-line ultrasonic flowmeters operating under multiphase flow conditions is carried out.
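
    The Fourier collocation ingredient is the spectral derivative: differentiate a periodic field by multiplying its FFT by ik. A minimal sketch on a smooth periodic test function:

    ```python
    import numpy as np

    N, L = 128, 2 * np.pi
    x = np.arange(N) * L / N                       # periodic grid
    k = np.fft.fftfreq(N, d=L / N) * 2 * np.pi     # wavenumbers
    u = np.exp(np.sin(x))
    du = np.real(np.fft.ifft(1j * k * np.fft.fft(u)))   # spectral derivative
    print(np.max(np.abs(du - np.cos(x) * u)))      # ~1e-12 for smooth periodic u
    ```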

  34. Reduced basis ANOVA methods for partial differential equations with high-dimensional random inputs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liao, Qifeng, E-mail: liaoqf@shanghaitech.edu.cn; Lin, Guang, E-mail: guanglin@purdue.edu

    2016-07-15

    In this paper we present a reduced basis ANOVA approach for partial differential equations (PDEs) with random inputs. The ANOVA method combined with stochastic collocation methods provides model reduction in high-dimensional parameter space through decomposing high-dimensional inputs into unions of low-dimensional inputs. In this work, to further reduce the computational cost, we investigate spatial low-rank structures in the ANOVA-collocation method, and develop efficient spatial model reduction techniques using hierarchically generated reduced bases. We present a general mathematical framework of the methodology, validate its accuracy and demonstrate its efficiency with numerical experiments.

  35. Investigation of advanced UQ for CRUD prediction with VIPRE.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eldred, Michael Scott

    2011-09-01

    This document summarizes the results from a level 3 milestone study within the CASL VUQ effort. It demonstrates the application of 'advanced UQ,' in particular dimension-adaptive p-refinement for polynomial chaos and stochastic collocation. The study calculates statistics for several quantities of interest that are indicators for the formation of CRUD (Chalk River unidentified deposit), which can lead to CIPS (CRUD induced power shift). Stochastic expansion methods are attractive methods for uncertainty quantification due to their fast convergence properties. For smooth functions (i.e., analytic, infinitely-differentiable) in L^2 (i.e., possessing finite variance), exponential convergence rates can be obtained under order refinement for integrated statistical quantities of interest such as mean, variance, and probability. Two stochastic expansion methods are of interest: nonintrusive polynomial chaos expansion (PCE), which computes coefficients for a known basis of multivariate orthogonal polynomials, and stochastic collocation (SC), which forms multivariate interpolation polynomials for known coefficients. Within the DAKOTA project, recent research in stochastic expansion methods has focused on automated polynomial order refinement ('p-refinement') of expansions to support scalability to higher dimensional random input spaces [4, 3]. By preferentially refining only in the most important dimensions of the input space, the applicability of these methods can be extended from O(10^0)-O(10^1) random variables to O(10^2) and beyond, depending on the degree of anisotropy (i.e., the extent to which random input variables have differing degrees of influence on the statistical quantities of interest (QOIs)). Thus, the purpose of this study is to investigate the application of these adaptive stochastic expansion methods to the analysis of CRUD using the VIPRE simulation tools for two different plant models of differing random dimension, anisotropy, and smoothness.

  36. Multi-fidelity stochastic collocation method for computation of statistical moments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhu, Xueyu, E-mail: xueyu-zhu@uiowa.edu; Linebarger, Erin M., E-mail: aerinline@sci.utah.edu; Xiu, Dongbin, E-mail: xiu.16@osu.edu

    We present an efficient numerical algorithm to approximate the statistical moments of stochastic problems in the presence of models with different fidelities. The method extends the multi-fidelity approximation method developed in earlier work. By combining the efficiency of low-fidelity models and the accuracy of high-fidelity models, our method exhibits fast convergence with a limited number of high-fidelity simulations. We establish an error bound for the method and present several numerical examples to demonstrate the efficiency and applicability of the multi-fidelity algorithm.
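
    As a flavor of multi-fidelity moment estimation, here is a control-variate-style construction of my own for illustration (the paper's actual algorithm instead selects collocation nodes from low-fidelity snapshots): many cheap low-fidelity runs correct a handful of expensive high-fidelity runs.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    hi = lambda x: np.exp(np.sin(3 * x))        # "expensive" high-fidelity model
    lo = lambda x: 1.0 + np.sin(3 * x)          # cheap low-fidelity approximation

    x_many = rng.uniform(0, 1, 100_000)         # cheap runs: lots of them
    x_few = rng.uniform(0, 1, 20)               # expensive runs: only a few

    # Low-fidelity mean plus a discrepancy correction from the few expensive runs.
    est = lo(x_many).mean() + (hi(x_few) - lo(x_few)).mean()
    print("multi-fidelity mean estimate:", est)
    print("high-fidelity-only estimate (20 runs):", hi(x_few).mean())
    ```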

  37. High resolution wind measurements for offshore wind energy development

    NASA Technical Reports Server (NTRS)

    Nghiem, Son Van (Inventor); Neumann, Gregory (Inventor)

    2013-01-01

    A method, apparatus, system, article of manufacture, and computer readable storage medium provide the ability to measure wind. Data at a first resolution (i.e., low-resolution data) are collected by a satellite scatterometer. Thin slices of the data are determined. A collocation of the data slices is determined at each grid cell center to obtain ensembles of collocated data slices. Each ensemble of collocated data slices is decomposed into a mean part and a fluctuating part. The data are reconstructed at a second resolution from the mean part and a residue of the fluctuating part. A wind measurement is determined from the data at the second resolution using a wind model function. A description of the wind measurement is output.

  38. Estimation of Model's Marginal likelihood Using Adaptive Sparse Grid Surrogates in Bayesian Model Averaging

    NASA Astrophysics Data System (ADS)

    Zeng, X.

    2015-12-01

    A large number of model executions are required to obtain alternative conceptual models' predictions and their posterior probabilities in Bayesian model averaging (BMA). The posterior model probability is estimated through a model's marginal likelihood and prior probability, and the heavy computational burden hinders the implementation of BMA prediction, especially for elaborate marginal likelihood estimators. To overcome this burden, an adaptive sparse grid (SG) stochastic collocation method is used to build surrogates for alternative conceptual models in a numerical experiment with a synthetic groundwater model. BMA predictions depend on model posterior weights (or marginal likelihoods), so this study also evaluated four marginal likelihood estimators: the arithmetic mean estimator (AME), the harmonic mean estimator (HME), the stabilized harmonic mean estimator (SHME), and the thermodynamic integration estimator (TIE). The results demonstrate that TIE is accurate in estimating conceptual models' marginal likelihoods, and BMA-TIE has better predictive performance than the other BMA predictions. TIE is also highly stable: marginal likelihoods estimated repeatedly by TIE have significantly less variability than those estimated by the other estimators. In addition, the SG surrogates efficiently facilitate BMA predictions, especially for BMA-TIE. The number of model executions needed for building surrogates is 4.13%, 6.89%, 3.44%, and 0.43% of the model executions required by BMA-AME, BMA-HME, BMA-SHME, and BMA-TIE, respectively.
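
    Two of the estimators compared above are simple enough to sketch for a conjugate Gaussian toy model: the arithmetic mean of likelihoods over prior draws (AME) and the harmonic mean over approximate posterior draws (HME, obtained here by importance resampling). The model and sample sizes are arbitrary choices for the sketch.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(4)
    data = rng.normal(1.0, 1.0, 20)                  # synthetic observations
    prior = stats.norm(0, 2)                         # prior on the mean parameter

    theta = prior.rvs(100_000, random_state=rng)     # prior draws
    like = lambda th: np.exp(np.sum(stats.norm.logpdf(data[:, None], th, 1.0), axis=0))

    L = like(theta)
    ame = L.mean()                                   # arithmetic mean estimator

    # Approximate posterior draws via importance resampling, then harmonic mean.
    w = L / L.sum()
    theta_post = rng.choice(theta, 50_000, p=w)
    hme = 1.0 / np.mean(1.0 / like(theta_post))      # notoriously unstable in practice
    print("AME:", ame, "HME:", hme)
    ```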

  39. The Space-Wise Global Gravity Model from GOCE Nominal Mission Data

    NASA Astrophysics Data System (ADS)

    Gatti, A.; Migliaccio, F.; Reguzzoni, M.; Sampietro, D.; Sanso, F.

    2011-12-01

    In the framework of the GOCE data analysis, the space-wise approach implements a multi-step collocation solution for the estimation of a global geopotential model in terms of spherical harmonic coefficients and their error covariance matrix. The main idea is to use the collocation technique to exploit the spatial correlation of the gravity field in the GOCE data reduction. In particular, the method consists of an along-track Wiener filter, a collocation gridding at satellite altitude and a spherical harmonic analysis by integration. All these steps are iterated, also to account for the rotation between the local orbital and gradiometer reference frames. Error covariances are computed by Monte Carlo simulations. The first release of the space-wise approach was presented at the ESA Living Planet Symposium in July 2010. This model was based on only two months of GOCE data and partially contained a priori information coming from other existing gravity models, especially at low degrees and low orders. A second release was distributed after the 4th International GOCE User Workshop in May 2011. In this solution, based on eight months of GOCE data, all dependencies on external gravity information were removed, thus giving rise to a GOCE-only space-wise model. However, this model showed an over-regularization at the highest degrees of the spherical harmonic expansion due to the combination technique of intermediate solutions (based on about two months of data). In this work a new space-wise solution is presented. It is based on all nominal mission data from November 2009 to mid April 2011, and its main novelty is that the intermediate solutions are now computed in such a way as to avoid over-regularization in the final solution. Beyond the spherical harmonic coefficients of the global model and their error covariance matrix, the space-wise approach is able to deliver as by-products a set of spherical grids of the potential and of its second derivatives at mean satellite altitude. These grids have an information content that is very similar to the original along-orbit data, but they are much easier to handle. In addition, they are estimated by local least-squares collocation and therefore, although computed with a unique global covariance function, they could yield more information at the local level than the spherical harmonic coefficients of the global model. For this reason these grids seem to be useful for local geophysical investigations. The estimated grids with their estimated errors are presented in this work, together with proposals on possible future improvements. A test to compare the different information contents of the along-orbit data, the gridded data and the spherical harmonic coefficients is also shown.

  20. An efficient framework for optimization and parameter sensitivity analysis in arterial growth and remodeling computations

    PubMed Central

    Sankaran, Sethuraman; Humphrey, Jay D.; Marsden, Alison L.

    2013-01-01

    Computational models for vascular growth and remodeling (G&R) are used to predict the long-term response of vessels to changes in pressure, flow, and other mechanical loading conditions. Accurate predictions of these responses are essential for understanding numerous disease processes. Such models require reliable inputs of numerous parameters, including material properties and growth rates, which are often experimentally derived and inherently uncertain. While earlier methods have used a brute-force approach, systematic uncertainty quantification in G&R models promises to provide much better information. In this work, we introduce an efficient framework for uncertainty quantification and optimal parameter selection, and illustrate it via several examples. First, an adaptive sparse grid stochastic collocation scheme is implemented in an established G&R solver to quantify parameter sensitivities, and near-linear scaling with the number of parameters is demonstrated. This non-intrusive and parallelizable algorithm is compared with standard sampling algorithms such as Monte Carlo. Second, we determine optimal arterial wall material properties by applying robust optimization, coupling the G&R simulator with an adaptive sparse grid collocation approach and a derivative-free optimization algorithm. We show that an artery can achieve optimal homeostatic conditions over a range of alterations in pressure and flow; robustness of the solution is enforced by including uncertainty in loading conditions in the objective function. We then show that homeostatic intramural and wall shear stress is maintained for a wide range of material properties, though the time it takes to achieve this state varies. We also show that the intramural stress is robust and lies within 5% of its mean value for realistic variability of the material parameters. We observe that the prestretches of elastin and collagen are most critical to maintaining homeostasis, while the values of the material properties are most critical in determining response time. Finally, we outline several challenges to the G&R community for future work. We suggest that these tools provide the first systematic and efficient framework to quantify uncertainties and optimally identify G&R model parameters. PMID:23626380
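
    The following Python sketch illustrates the non-intrusive collocation idea on a toy response function with two Gaussian parameters, using a full tensor Gauss-Hermite grid rather than the adaptive Smolyak sparse grid of the paper; the response function and grid level are illustrative assumptions.

        import numpy as np
        from numpy.polynomial.hermite_e import hermegauss   # weight exp(-x^2/2)

        def model(k1, k2):
            # Hypothetical smooth response standing in for a G&R output quantity.
            return np.exp(0.3 * k1) + 0.5 * k1 * k2 + k2**2

        nodes, weights = hermegauss(9)
        weights = weights / np.sqrt(2 * np.pi)   # normalize to the N(0,1) measure

        # Full tensor grid over two standard-normal parameters (an adaptive
        # Smolyak grid would thin this out in higher dimensions).
        X1, X2 = np.meshgrid(nodes, nodes, indexing="ij")
        W = np.outer(weights, weights)
        G = model(X1, X2)

        mean = np.sum(W * G)
        var = np.sum(W * G**2) - mean**2

        # First-order (Sobol-type) sensitivities from conditional expectations.
        cond1 = G @ weights            # E[g | k1], a 1-D quadrature over k2
        cond2 = weights @ G            # E[g | k2], a 1-D quadrature over k1
        S1 = (weights @ cond1**2 - mean**2) / var
        S2 = (weights @ cond2**2 - mean**2) / var
        print(f"mean={mean:.4f}  var={var:.4f}  S1={S1:.3f}  S2={S2:.3f}")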

  1. Geoid undulations and gravity anomalies over the Aral Sea, the Black Sea and the Caspian Sea from a combined GEOS-3/SEASAT/GEOSAT altimeter data set

    NASA Technical Reports Server (NTRS)

    Au, Andrew Y.; Brown, Richard D.; Welker, Jean E.

    1991-01-01

    Satellite-based altimetric data taken by GEOS-3, SEASAT, and GEOSAT over the Aral Sea, the Black Sea, and the Caspian Sea are analyzed, and a least squares collocation technique is used to predict the geoid undulations on a 0.25 x 0.25 deg grid and to transform these geoid undulations into free-air gravity anomalies. Rapp's 180x180 geopotential model is used as the reference surface for the collocation procedure. The result of the geoid-to-gravity transformation is, however, sensitive to the information content of the reference geopotential model used. For example, considerable detailed surface gravity data were incorporated into the reference model over the Black Sea, resulting in a reference model with significant information content at short wavelengths. Thus, estimation of short-wavelength gravity anomalies from gridded geoid heights is generally reliable over regions such as the Black Sea, using the conventional collocation technique with local empirical covariance functions. Over regions such as the Caspian Sea, where detailed surface data are generally not incorporated into the reference model, unconventional techniques are needed to obtain reliable gravity anomalies. Based on the predicted gravity anomalies over these inland seas, speculative tectonic structures are identified and geophysical processes are inferred.
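
    A minimal least-squares collocation sketch in Python, under an assumed Gaussian covariance model and synthetic one-dimensional data; the study's empirical covariance functions and reference geopotential model are not reproduced here.

        import numpy as np

        rng = np.random.default_rng(1)

        def cov(d, C0=25.0, L=0.5):
            # Hypothetical isotropic covariance (correlation length L, in deg).
            return C0 * np.exp(-(d / L) ** 2)

        # Noisy "geoid-like" observations at scattered points.
        xo = rng.uniform(0, 3, size=40)
        obs = 5.0 * np.sin(2 * xo) + rng.normal(0, 0.5, size=40)

        # Prediction grid.
        xp = np.linspace(0, 3, 61)

        Ctt = cov(np.abs(xo[:, None] - xo[None, :])) + 0.5**2 * np.eye(40)  # + noise
        Cst = cov(np.abs(xp[:, None] - xo[None, :]))

        # LSC predictor: s_hat = C_st (C_tt + D)^(-1) t
        s_hat = Cst @ np.linalg.solve(Ctt, obs)

        # Error covariance of the prediction (diagonal = predicted variance).
        Css = cov(np.abs(xp[:, None] - xp[None, :]))
        err_var = np.diag(Css - Cst @ np.linalg.solve(Ctt, Cst.T))
        print(s_hat[:5], np.sqrt(err_var[:5]))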

  2. A Fourier collocation time domain method for numerically solving Maxwell's equations

    NASA Technical Reports Server (NTRS)

    Shebalin, John V.

    1991-01-01

    A new method for solving Maxwell's equations in the time domain for arbitrary values of permittivity, conductivity, and permeability is presented. Spatial derivatives are found by a Fourier transform method and time integration is performed using a second order, semi-implicit procedure. Electric and magnetic fields are collocated on the same grid points, rather than on interleaved points, as in the Finite Difference Time Domain (FDTD) method. Numerical results are presented for the propagation of a 2-D Transverse Electromagnetic (TEM) mode out of a parallel plate waveguide and into a dielectric and conducting medium.
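
    The core of the scheme, spectral evaluation of spatial derivatives on a fully collocated grid, can be sketched in a few lines of Python; the test field below is an arbitrary smooth periodic function, not the waveguide problem of the paper.

        import numpy as np

        N, L = 64, 2 * np.pi
        x = np.arange(N) * L / N
        k = np.fft.fftfreq(N, d=L / N) * 2 * np.pi   # angular wavenumbers

        u = np.exp(np.sin(x))                        # smooth periodic field
        du_spectral = np.real(np.fft.ifft(1j * k * np.fft.fft(u)))
        du_exact = np.cos(x) * u

        # Error is near machine precision for a smooth periodic field.
        print("max error:", np.max(np.abs(du_spectral - du_exact)))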

  3. The Benard problem: A comparison of finite difference and spectral collocation eigen value solutions

    NASA Technical Reports Server (NTRS)

    Skarda, J. Raymond Lee; Mccaughan, Frances E.; Fitzmaurice, Nessan

    1995-01-01

    The application of spectral methods, using a Chebyshev collocation scheme, to solve hydrodynamic stability problems is demonstrated on the Benard problem. Implementation of the Chebyshev collocation formulation is described. The performance of the spectral scheme is compared with that of a 2nd-order finite difference scheme. An exact solution to the Marangoni-Benard problem is used to evaluate the performance of both schemes. The error of the spectral scheme is at least seven orders of magnitude smaller than the finite difference error for a grid resolution of N = 15 (number of points used). The performance of the spectral formulation far exceeded that of the finite difference formulation for this problem. The spectral scheme required only slightly more effort to set up than the 2nd-order finite difference scheme. This suggests that the spectral scheme may actually be faster to implement than higher-order finite difference schemes.
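
    The following Python sketch reproduces the flavor of such a comparison on a simple derivative test rather than the full Benard eigenproblem; it uses the standard Chebyshev differentiation matrix construction (an assumption about the paper's exact implementation).

        import numpy as np

        def cheb(N):
            # Chebyshev differentiation matrix on N+1 Gauss-Lobatto points
            # (standard construction, cf. Trefethen, "Spectral Methods in MATLAB").
            x = np.cos(np.pi * np.arange(N + 1) / N)
            c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
            dX = x[:, None] - x[None, :]
            D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))
            D -= np.diag(D.sum(axis=1))
            return D, x

        N = 15
        D, x = cheb(N)
        err_spec = np.max(np.abs(D @ np.sin(np.pi * x) - np.pi * np.cos(np.pi * x)))

        xg = np.linspace(-1.0, 1.0, N + 1)
        fd = np.gradient(np.sin(np.pi * xg), xg, edge_order=2)
        err_fd = np.max(np.abs(fd - np.pi * np.cos(np.pi * xg)))
        print(f"Chebyshev (N=15): {err_spec:.1e}   2nd-order FD: {err_fd:.1e}")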

  4. Versions of the collocation and least squares method for solving biharmonic equations in non-canonical domains

    NASA Astrophysics Data System (ADS)

    Belyaev, V. A.; Shapeev, V. P.

    2017-10-01

    New versions of the collocation and least squares (CLS) method of high-order accuracy are proposed and implemented for the numerical solution of boundary value problems for the biharmonic equation in non-canonical domains. The solution of the biharmonic equation is used for simulating the stress-strain state of an isotropic plate under the action of a transverse load. The differential problem is projected into a space of fourth-degree polynomials by the CLS method. The boundary conditions for the approximate solution are imposed exactly on the boundary of the computational domain. The versions of the CLS method are implemented on grids constructed in two different ways. Numerical experiments on a sequence of grids show that the approximate solution converges with high order and, where an analytical solution of the test problem is known, matches it with high accuracy.

  5. A conservative staggered-grid Chebyshev multidomain method for compressible flows

    NASA Technical Reports Server (NTRS)

    Kopriva, David A.; Kolias, John H.

    1995-01-01

    We present a new multidomain spectral collocation method that uses staggered grids for the solution of compressible flow problems. The solution unknowns are defined at the nodes of a Gauss quadrature rule. The fluxes are evaluated at the nodes of a Gauss-Lobatto rule. The method is conservative, free-stream preserving, and exponentially accurate. A significant advantage of the method is that subdomain corners are not included in the approximation, making solutions in complex geometries easier to compute.
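
    A short Python sketch of the two node families involved; note that the Gauss nodes are strictly interior, which is why subdomain corners drop out of the approximation.

        import numpy as np
        from numpy.polynomial import legendre as leg

        N = 6
        gauss_nodes, gauss_w = leg.leggauss(N)        # N interior points, no endpoints

        # Gauss-Lobatto nodes: the endpoints plus the N-1 roots of P'_N.
        cN = np.zeros(N + 1)
        cN[N] = 1.0                                   # coefficients of P_N
        interior = np.sort(leg.legroots(leg.legder(cN)))
        lobatto_nodes = np.concatenate(([-1.0], interior, [1.0]))

        print("Gauss   :", np.round(gauss_nodes, 4))    # corners excluded
        print("Lobatto :", np.round(lobatto_nodes, 4))  # includes -1 and +1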

  6. Impact of uncertainties in free stream conditions on the aerodynamics of a rectangular cylinder

    NASA Astrophysics Data System (ADS)

    Mariotti, Alessandro; Shoeibi Omrani, Pejman; Witteveen, Jeroen; Salvetti, Maria Vittoria

    2015-11-01

    The BARC benchmark deals with the flow around a rectangular cylinder with a chord-to-depth ratio equal to 5. This flow configuration is of practical interest for civil and industrial structures and is characterized by massively separated flow and unsteadiness. In a recent review of BARC results, significant dispersion was observed in both experimental and numerical predictions of some flow quantities, which are extremely sensitive to the various uncertainties that may be present in experiments and simulations. Besides modeling and numerical errors, it is difficult in simulations to exactly reproduce the experimental conditions, owing to uncertainties in the set-up parameters, which sometimes cannot be exactly controlled or characterized. Probabilistic methods and URANS simulations are used to investigate the impact of uncertainties in the following set-up parameters: the angle of incidence, and the free-stream longitudinal turbulence intensity and length scale. Stochastic collocation is employed to perform the probabilistic propagation of the uncertainty. The discretization and modeling errors are estimated by repeating the same analysis for different grids and turbulence models. The results obtained for different assumed PDFs of the set-up parameters are also compared.

  7. Performance of FFT methods in local gravity field modelling

    NASA Technical Reports Server (NTRS)

    Forsberg, Rene; Solheim, Dag

    1989-01-01

    Fast Fourier transform (FFT) methods provide a fast and efficient means of processing large amounts of gravity or geoid data in local gravity field modelling. The FFT methods, however, have a number of theoretical and practical limitations, especially the use of the flat-earth approximation and the requirement for gridded data. In spite of this, the methods often yield excellent results in practice when compared to other more rigorous (and computationally expensive) methods, such as least-squares collocation. The good performance of the FFT methods illustrates that the theoretical approximations are offset by the capability of taking into account more data in larger areas, which is especially important for geoid predictions. For best results, good data gridding algorithms are essential. In practice, truncated collocation approaches may be used. For large areas at high latitudes the gridding must be done using suitable map projections such as UTM, to avoid trivial errors caused by the meridian convergence. The FFT methods are compared to ground truth data in New Mexico (xi, eta from delta g), Scandinavia (N from delta g, where the geoid fits to 15 cm over 2000 km), and areas of the Atlantic (delta g from satellite altimetry using Wiener filtering). In all cases the FFT methods yield results comparable or superior to those of other methods.

  8. Computational Electromagnetics Application to Small Geometric Anomalies and Associated Uncertainty Evaluation

    DTIC Science & Technology

    2010-02-28

    implemented a fast method to enable the statistical characterization of electromagnetic interference and compatibility (EMI/EMC) phenomena on electrically ... higher accuracy is needed, e.g., to compute higher-moment statistics. To address this problem, we have developed adaptive stochastic collocation methods ...

  9. Parallel adaptive wavelet collocation method for PDEs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nejadmalayeri, Alireza, E-mail: Alireza.Nejadmalayeri@gmail.com; Vezolainen, Alexei, E-mail: Alexei.Vezolainen@Colorado.edu; Brown-Dymkoski, Eric, E-mail: Eric.Browndymkoski@Colorado.edu

    2015-10-01

    A parallel adaptive wavelet collocation method for solving a large class of Partial Differential Equations is presented. The parallelization is achieved by developing an asynchronous parallel wavelet transform, which allows one to perform parallel wavelet transform and derivative calculations with only one data synchronization at the highest level of resolution. The data are stored using a tree-like structure with tree roots starting at an a priori defined level of resolution. Both static and dynamic domain partitioning approaches are developed. For the dynamic domain partitioning, trees are considered to be the minimum quanta of data to be migrated between the processes. This allows fully automated and efficient handling of non-simply connected partitioning of a computational domain. Dynamic load balancing is achieved via domain repartitioning during the grid adaptation step and reassigning trees to the appropriate processes to ensure approximately the same number of grid points on each process. The parallel efficiency of the approach is discussed based on parallel adaptive wavelet-based Coherent Vortex Simulations of homogeneous turbulence with linear forcing at effective non-adaptive resolutions up to 2048^3 using as many as 2048 CPU cores.

  10. Nested sparse grid collocation method with delay and transformation for subsurface flow and transport problems

    NASA Astrophysics Data System (ADS)

    Liao, Qinzhuo; Zhang, Dongxiao; Tchelepi, Hamdi

    2017-06-01

    In numerical modeling of subsurface flow and transport problems, formation properties may not be deterministically characterized, which leads to uncertainty in simulation results. In this study, we propose a sparse grid collocation method, which adopts nested quadrature rules with delay and transformation to quantify the uncertainty of model solutions. We show that the nested Kronrod-Patterson-Hermite quadrature is more efficient than the unnested Gauss-Hermite quadrature. We compare the convergence rates of various quadrature rules including the domain truncation and domain mapping approaches. To further improve accuracy and efficiency, we present a delayed process in selecting quadrature nodes and a transformed process for approximating unsmooth or discontinuous solutions. The proposed method is tested by an analytical function and in one-dimensional single-phase and two-phase flow problems with different spatial variances and correlation lengths. An additional example is given to demonstrate its applicability to three-dimensional black-oil models. It is found from these examples that the proposed method provides a promising approach for obtaining satisfactory estimation of the solution statistics and is much more efficient than the Monte-Carlo simulations.
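
    The practical meaning of nestedness can be seen in a few lines of Python: successive Gauss-Hermite rules share essentially no nodes (only the origin, for odd sizes), so each refinement discards earlier model runs, whereas a nested family such as Kronrod-Patterson-Hermite reuses them all. The sketch below only demonstrates the non-nestedness of Gauss-Hermite; generating KPH node tables is beyond its scope.

        import numpy as np
        from numpy.polynomial.hermite_e import hermegauss

        x3, _ = hermegauss(3)
        x7, _ = hermegauss(7)
        shared = [xi for xi in x3 if np.any(np.isclose(xi, x7))]
        print("nodes shared by GH(3) and GH(7):", shared)   # only the origin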

  11. Space-Wise approach for airborne gravity data modelling

    NASA Astrophysics Data System (ADS)

    Sampietro, D.; Capponi, M.; Mansi, A. H.; Gatti, A.; Marchetti, P.; Sansò, F.

    2017-05-01

    Regional gravity field modelling by means of the remove-compute-restore procedure is nowadays widely applied in different contexts: it is the most used technique for regional gravimetric geoid determination, and it is also used in exploration geophysics to predict grids of gravity anomalies (Bouguer, free-air, isostatic, etc.), which are useful for understanding and mapping geological structures in a specific region. Considering this last application, due to the required accuracy and resolution, airborne gravity observations are usually adopted. However, due to the relatively high acquisition velocity, the presence of atmospheric turbulence, aircraft vibration, instrumental drift, etc., airborne data are usually contaminated by a very high observation error. For this reason, a proper procedure to filter the raw observations in both the low and high frequencies should be applied to recover valuable information. In this work, a software package to filter and grid raw airborne observations is presented: the proposed solution consists of a combination of an along-track Wiener filter and a classical Least Squares Collocation technique. Basically, the proposed procedure is an adaptation to airborne gravimetry of the Space-Wise approach, developed by Politecnico di Milano to process data coming from the ESA satellite mission GOCE. Among the main differences with respect to the satellite application of this approach is the fact that, while in processing GOCE data the stochastic characteristics of the observation error can be considered a priori well known, in airborne gravimetry, due to the complex environment in which the observations are acquired, these characteristics are unknown and should be retrieved from the dataset itself. The presented solution is suited for airborne data analysis, allowing gravity observations to be quickly filtered and gridded in an easy way. Some innovative theoretical aspects, focusing in particular on covariance modelling, are also presented. Finally, the goodness of the procedure is evaluated by means of a test on real data, retrieving the gravitational signal with a predicted accuracy of about 0.4 mGal.

  12. Sparse-grid, reduced-basis Bayesian inversion: Nonaffine-parametric nonlinear equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Peng, E-mail: peng@ices.utexas.edu; Schwab, Christoph, E-mail: christoph.schwab@sam.math.ethz.ch

    2016-07-01

    We extend the reduced basis (RB) accelerated Bayesian inversion methods for affine-parametric, linear operator equations which are considered in [16,17] to non-affine, nonlinear parametric operator equations. We generalize the analysis of sparsity of parametric forward solution maps in [20] and of Bayesian inversion in [48,49] to the fully discrete setting, including Petrov–Galerkin high-fidelity (“HiFi”) discretization of the forward maps. We develop adaptive, stochastic collocation based reduction methods for the efficient computation of reduced bases on the parametric solution manifold. The nonaffinity and nonlinearity with respect to (w.r.t.) the distributed, uncertain parameters and the unknown solution is collocated; specifically, by the so-called Empirical Interpolation Method (EIM). For the corresponding Bayesian inversion problems, computational efficiency is enhanced in two ways: first, expectations w.r.t. the posterior are computed by adaptive quadratures with dimension-independent convergence rates proposed in [49]; the present work generalizes [49] to account for the impact of the PG discretization in the forward maps on the convergence rates of the Quantities of Interest (QoI for short). Second, we propose to perform the Bayesian estimation only w.r.t. a parsimonious, RB approximation of the posterior density. Based on the approximation results in [49], the infinite-dimensional parametric, deterministic forward map and operator admit N-term RB and EIM approximations which converge at rates which depend only on the sparsity of the parametric forward map. In several numerical experiments, the proposed algorithms exhibit dimension-independent convergence rates which equal, at least, the currently known rate estimates for N-term approximation. We propose to accelerate Bayesian estimation by first offline construction of reduced basis surrogates of the Bayesian posterior density. The parsimonious surrogates can then be employed for online data assimilation and for Bayesian estimation. They also open a perspective for optimal experimental design.

  13. Isotropic stochastic rotation dynamics

    NASA Astrophysics Data System (ADS)

    Mühlbauer, Sebastian; Strobl, Severin; Pöschel, Thorsten

    2017-12-01

    Stochastic rotation dynamics (SRD) is a widely used method for the mesoscopic modeling of complex fluids, such as colloidal suspensions or multiphase flows. In this method, however, the underlying Cartesian grid defining the coarse-grained interaction volumes induces anisotropy. We propose an isotropic, lattice-free variant of stochastic rotation dynamics, termed iSRD. Instead of Cartesian grid cells, we employ randomly distributed spherical interaction volumes. This eliminates the requirement of a grid shift, which is essential in standard SRD to maintain Galilean invariance. We derive analytical expressions for the viscosity and the diffusion coefficient in relation to the model parameters, which show excellent agreement with the results obtained in iSRD simulations. The proposed algorithm is particularly suitable to model systems bound by walls of complex shape, where the domain cannot be meshed uniformly. The presented approach is not limited to SRD but is applicable to any other mesoscopic method, where particles interact within certain coarse-grained volumes.

  14. Intrusive Method for Uncertainty Quantification in a Multiphase Flow Solver

    NASA Astrophysics Data System (ADS)

    Turnquist, Brian; Owkes, Mark

    2016-11-01

    Uncertainty quantification (UQ) is a necessary, interesting, and often neglected aspect of fluid flow simulations. To determine the significance of uncertain initial and boundary conditions, a multiphase flow solver is being created which extends a single-phase, intrusive, polynomial chaos scheme into multiphase flows. Reliably estimating the impact of input uncertainty on design criteria can help identify and minimize unwanted variability in critical areas, and has the potential to help advance knowledge in atomizing jets, jet engines, pharmaceuticals, and food processing. Use of an intrusive polynomial chaos method has been shown to significantly reduce computational cost over non-intrusive sampling methods such as Monte Carlo. This method requires transforming the model equations into a weak form through substitution of stochastic (random) variables. Ultimately, the model deploys a stochastic Navier-Stokes equation, a stochastic conservative level set approach including reinitialization, as well as stochastic normals and curvature. By implementing these approaches together in one framework, basic problems may be investigated which shed light on model expansion, uncertainty theory, and fluid flow in general. NSF Grant Number 1511325.
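
    For a scalar toy problem, the intrusive Galerkin machinery the abstract describes can be written compactly. The following Python sketch (an illustration, not the authors' solver) propagates a random decay rate through a linear ODE via a Hermite polynomial chaos expansion; the order, rate, and coefficients are illustrative assumptions.

        import numpy as np
        from numpy.polynomial.hermite_e import hermegauss, hermeval
        from scipy.integrate import solve_ivp

        P = 6                                   # PC order
        xq, wq = hermegauss(40)
        wq = wq / np.sqrt(2 * np.pi)            # normalize to the N(0,1) measure

        # Evaluate He_0..He_P at the quadrature nodes.
        H = np.array([hermeval(xq, np.eye(P + 1)[i]) for i in range(P + 1)])
        norms = (wq * H * H).sum(axis=1)        # <He_i^2> = i!

        k = 1.0 + 0.3 * xq                      # random decay rate, k = 1 + 0.3*xi
        # Galerkin matrix A[j, i] = <k He_i He_j> / <He_j^2>
        A = (H * (wq * k)) @ H.T / norms[:, None]

        u0 = np.zeros(P + 1); u0[0] = 1.0       # deterministic initial condition
        sol = solve_ivp(lambda t, u: -A @ u, (0.0, 2.0), u0, rtol=1e-10)

        mean = sol.y[0, -1]                     # PC mean = coefficient of He_0
        var = (sol.y[1:, -1] ** 2 * norms[1:]).sum()
        # Exact mean of e^{-kT} for k ~ N(1, 0.3^2), T = 2: exp(-2 + 0.18).
        print(f"PC mean={mean:.6f}  exact={np.exp(-2 + 0.5 * (0.3 * 2)**2):.6f}")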

  15. Absorption of Solar Radiation by Clouds: Interpretations of Satellite, Surface, and Aircraft Measurements

    NASA Technical Reports Server (NTRS)

    Cess, R. D.; Zhang, M. H.; Zhou, Y.; Jing, X.; Dvortsov, V.

    1996-01-01

    To investigate the absorption of shortwave radiation by clouds, we have collocated satellite and surface measurements of shortwave radiation at several locations. Considerable effort has been directed toward understanding and minimizing sampling errors caused by the satellite measurements being instantaneous and over a grid that is much larger than the field of view of an upward facing surface pyranometer. The collocated data indicate that clouds absorb considerably more shortwave radiation than is predicted by theoretical models. This is consistent with the finding from both satellite and aircraft measurements that observed clouds are darker than model clouds. In the limit of thick clouds, observed top-of-the-atmosphere albedos do not exceed a value of 0.7, whereas in models the maximum albedo can be 0.8.

  16. Modelling airborne gravity data by means of adapted Space-Wise approach

    NASA Astrophysics Data System (ADS)

    Sampietro, Daniele; Capponi, Martina; Hamdi Mansi, Ahmed; Gatti, Andrea

    2017-04-01

    Regional gravity field modelling by means of the remove-restore procedure is nowadays widely applied to predict grids of gravity anomalies (Bouguer, free-air, isostatic, etc.) in gravimetric geoid determination as well as in exploration geophysics. Considering this last application, due to the required accuracy and resolution, airborne gravity observations are generally adopted. However, due to the relatively high acquisition velocity, the presence of atmospheric turbulence, aircraft vibration, instrumental drift, etc., airborne data are contaminated by a very high observation error. For this reason, a proper procedure to filter the raw observations in both the low and high frequencies should be applied to recover valuable information. In this work, a procedure to predict a grid or a set of filtered along-track gravity anomalies, by merging a GGM and an airborne dataset, is presented. The proposed algorithm, like the Space-Wise approach developed by Politecnico di Milano in the framework of the GOCE data analysis, is based on a combination of an along-track Wiener filter and a Least Squares Collocation adjustment, and it properly considers the different altitudes of the gravity observations. Among the main differences with respect to the satellite application of the Space-Wise approach is the fact that, while in processing GOCE data the stochastic characteristics of the observation error can be considered a priori well known, in airborne gravimetry, due to the complex environment in which the observations are acquired, these characteristics are unknown and should be retrieved from the dataset itself. Some innovative theoretical aspects, focusing in particular on covariance modelling, are also presented. Finally, the goodness of the procedure is evaluated by means of a test on real data, recovering the gravitational signal with a predicted accuracy of about 0.25 mGal.

  17. Predictive uncertainty analysis of plume distribution for geological carbon sequestration using sparse-grid Bayesian method

    NASA Astrophysics Data System (ADS)

    Shi, X.; Zhang, G.

    2013-12-01

    Because of the extensive computational burden, parametric uncertainty analyses are rarely conducted for geological carbon sequestration (GCS) process-based multi-phase models. The difficulty of predictive uncertainty analysis for CO2 plume migration in realistic GCS models arises not only from the spatial distribution of the caprock and reservoir (i.e., heterogeneous model parameters), but also because the GCS optimization (parameter estimation) problem has multiple local minima due to the complex, nonlinear, multi-phase (gas and aqueous) and multi-component (water, CO2, salt) transport equations. The geological model built by Doughty and Pruess (2004) for the Frio pilot site (Texas) was selected and assumed to represent the 'true' system; it is composed of seven different facies (geological units) distributed among 10 layers. We chose to calibrate the permeabilities of these facies. Pressure and gas saturation values from this true model were then extracted and used as observations for subsequent model calibration. Random noise was added to the observations to approximate realistic field conditions. Each simulation of the model lasts about 2 hours. In this study, we develop a new approach that improves the computational efficiency of Bayesian inference by constructing a surrogate system based on an adaptive sparse-grid stochastic collocation method. This surrogate-based global optimization algorithm is first used to calibrate the model parameters; the prediction uncertainty of the CO2 plume position arising from the propagation of parametric uncertainty is then quantified in the numerical experiments and compared to the actual plume from the 'true' model. The results show that the approach is computationally efficient for multi-modal optimization and prediction uncertainty quantification with computationally expensive simulation models. Both our inverse methodology and our findings are broadly applicable to GCS in heterogeneous storage formations.

  18. The prediction and mapping of geoidal undulations from GEOS-3 altimetry. [gravity anomalies

    NASA Technical Reports Server (NTRS)

    Kearsley, W.

    1978-01-01

    From the adjusted altimeter data an approximation to the geoid height in ocean areas is obtained. Methods are developed to produce geoid maps in these areas. Geoid heights are obtained for grid points in the region to be mapped, and two of the parameters critical to the production of an accurate map are investigated. These are the spacing of the grid, which must be related to the half-wavelength of the altimeter signal whose amplitude is the desired accuracy of the contour; and the method adopted to predict the grid values. Least squares collocation was used to find geoid undulations on a 1 deg grid in the mapping area. Twenty maps, with their associated precisions, were produced and are included. These maps cover the Indian Ocean, Southwestern and Northeastern portions of the Pacific Ocean, and Southwest Atlantic and the U.S. Calibration Area.

  19. PARADIGM USING JOINT DETERMINISTIC GRID MODELING AND SUB-GRID VARIABILITY STOCHASTIC DESCRIPTION AS A TEMPLATE FOR MODEL EVALUATION

    EPA Science Inventory

    The goal of achieving verisimilitude of air quality simulations to observations is problematic. Chemical transport models such as the Community Multi-Scale Air Quality (CMAQ) modeling system produce volume averages of pollutant concentration fields. When grid sizes are such tha...

  20. Low Frequency Predictive Skill Despite Structural Instability and Model Error

    DTIC Science & Technology

    2014-09-30

    Majda, based on earlier theoretical work. 1. Dynamic Stochastic Superresolution of sparsely observed turbulent systems, M. Branicki (postdoc) ... of numerical models. Here, we introduce and study a suite of general Dynamic Stochastic Superresolution (DSS) algorithms and show that, by ... resolving subgrid-scale turbulence through Dynamic Stochastic Superresolution utilizing aliased grids is a potential breakthrough for practical online ...

  1. Stochastic sampling of quadrature grids for the evaluation of vibrational expectation values

    NASA Astrophysics Data System (ADS)

    López Ríos, Pablo; Monserrat, Bartomeu; Needs, Richard J.

    2018-02-01

    The thermal lines method for the evaluation of vibrational expectation values of electronic observables [B. Monserrat, Phys. Rev. B 93, 014302 (2016), 10.1103/PhysRevB.93.014302] was recently proposed as a physically motivated approximation offering balance between the accuracy of direct Monte Carlo integration and the low computational cost of using local quadratic approximations. In this paper we reformulate thermal lines as a stochastic implementation of quadrature-grid integration, analyze the analytical form of its bias, and extend the method to multiple-point quadrature grids applicable to any factorizable harmonic or anharmonic nuclear wave function. The bias incurred by thermal lines is found to depend on the local form of the expectation value, and we demonstrate that the use of finer quadrature grids along selected modes can eliminate this bias, while still offering a ~30% lower computational cost than direct Monte Carlo integration in our tests.
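
    The underlying quadrature-grid idea can be sketched in Python for a single harmonic mode, with a hypothetical observable standing in for the electronic property; the multi-mode thermal lines machinery of the paper is not reproduced here.

        import numpy as np
        from numpy.polynomial.hermite import hermgauss   # weight exp(-x^2)

        def observable(q):
            # Hypothetical electronic property as a function of the mode amplitude.
            return q**4 - 0.2 * q

        # Ground-state density |psi_0(q)|^2 = exp(-q^2)/sqrt(pi)  (hbar = m = omega = 1).
        x, w = hermgauss(5)
        quad = np.sum(w * observable(x)) / np.sqrt(np.pi)

        rng = np.random.default_rng(2)
        q = rng.normal(0.0, np.sqrt(0.5), size=100_000)  # same density, Var = 1/2
        mc = observable(q).mean()

        # Exact value: <q^4> = 3*(1/2)^2 = 0.75; the odd term averages to zero.
        print(f"5-point quadrature: {quad:.6f}   Monte Carlo (1e5 samples): {mc:.4f}")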

  2. An Improved Nested Sampling Algorithm for Model Selection and Assessment

    NASA Astrophysics Data System (ADS)

    Zeng, X.; Ye, M.; Wu, J.; WANG, D.

    2017-12-01

    The multimodel strategy is a general approach for treating model structure uncertainty in recent research. The unknown groundwater system is represented by several plausible conceptual models, each assigned a weight that represents the plausibility of that model. In the Bayesian framework, the posterior model weight is computed as the product of the model's prior weight and its marginal likelihood (also termed model evidence). As a result, estimating marginal likelihoods is crucial for reliable model selection and assessment in multimodel analysis. The nested sampling estimator (NSE) is a newly proposed algorithm for marginal likelihood estimation. NSE searches the parameter space gradually from low-likelihood to high-likelihood regions, and this evolution proceeds iteratively via a local sampling procedure. Thus, the efficiency of NSE is dominated by the strength of the local sampling procedure. Currently, the Metropolis-Hastings (M-H) algorithm and its variants are often used for local sampling in NSE. However, M-H is not an efficient sampling algorithm for high-dimensional or complex likelihood functions. To improve the performance of NSE, it is feasible to integrate a more efficient and elaborate sampling algorithm, DREAMzs, into the local sampling. In addition, in order to overcome the computational burden of the large number of repeated model executions required for marginal likelihood estimation, an adaptive sparse grid stochastic collocation method is used to build surrogates for the original groundwater model.

  3. The influence of random element displacement on DOA estimates obtained with (Khatri-Rao-)root-MUSIC.

    PubMed

    Inghelbrecht, Veronique; Verhaevert, Jo; van Hecke, Tanja; Rogier, Hendrik

    2014-11-11

    Although a wide range of direction of arrival (DOA) estimation algorithms has been described for a diverse range of array configurations, no specific stochastic analysis framework has been established to assess the probability density function of the error on DOA estimates due to random errors in the array geometry. Therefore, we propose a stochastic collocation method that relies on a generalized polynomial chaos expansion to connect the statistical distribution of random position errors to the resulting distribution of the DOA estimates. We apply this technique to the conventional root-MUSIC and the Khatri-Rao-root-MUSIC methods. Compared with Monte Carlo simulation, this novel approach yields a speedup in CPU time by a factor of more than 100 for a one-dimensional case and by a factor of 56 for a two-dimensional case.
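
    The collocation-surrogate idea can be sketched in Python: fit a low-order Hermite-chaos surrogate to an estimate that depends smoothly on a Gaussian position error, then sample the cheap surrogate to obtain the output distribution. The toy response below merely stands in for root-MUSIC, which is not implemented here.

        import numpy as np
        from math import factorial
        from numpy.polynomial.hermite_e import hermegauss, hermeval

        def doa_estimate(delta):
            # Hypothetical smooth dependence of a DOA estimate (degrees) on a
            # normalized element-position error.
            return 30.0 + 8.0 * delta + 2.5 * delta**2

        P = 4                                   # chaos order
        x, w = hermegauss(P + 5)
        w = w / np.sqrt(2 * np.pi)              # normalize to the N(0,1) measure

        # Spectral projection: c_i = <f He_i> / i!  (probabilists' Hermite).
        coeffs = np.array([np.sum(w * doa_estimate(x) * hermeval(x, np.eye(P + 1)[i]))
                           / factorial(i) for i in range(P + 1)])

        # Sampling the polynomial surrogate is essentially free compared with
        # re-running the full estimator, which is the source of the speedup.
        rng = np.random.default_rng(3)
        samples = hermeval(rng.standard_normal(200_000), coeffs)
        print(f"DOA estimate: mean={samples.mean():.2f} deg, std={samples.std():.2f} deg")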

  4. Pseudo spectral collocation with Maxwell polynomials for kinetic equations with energy diffusion

    NASA Astrophysics Data System (ADS)

    Sánchez-Vizuet, Tonatiuh; Cerfon, Antoine J.

    2018-02-01

    We study the approximation and stability properties of a recently popularized discretization strategy for the speed variable in kinetic equations, based on pseudo-spectral collocation on a grid defined by the zeros of a non-standard family of orthogonal polynomials called Maxwell polynomials. Taking a one-dimensional equation describing energy diffusion due to Fokker-Planck collisions with a Maxwell-Boltzmann background distribution as the test bench for the performance of the scheme, we find that Maxwell based discretizations outperform other commonly used schemes in most situations, often by orders of magnitude. This provides a strong motivation for their use in high-dimensional gyrokinetic simulations. However, we also show that Maxwell based schemes are subject to a non-modal time stepping instability in their most straightforward implementation, so that special care must be given to the discrete representation of the linear operators in order to benefit from the advantages provided by Maxwell polynomials.

  5. Polynomial Chaos Based Acoustic Uncertainty Predictions from Ocean Forecast Ensembles

    NASA Astrophysics Data System (ADS)

    Dennis, S.

    2016-02-01

    Most significant ocean acoustic propagation occurs over tens of kilometers, at scales small compared to the basin and to most fine-scale ocean modeling. To address the increased emphasis on uncertainty quantification, for example transmission loss (TL) probability density functions (PDFs) within some radius, a polynomial chaos (PC) based method is utilized. In order to capture uncertainty in ocean modeling, the Navy Coastal Ocean Model (NCOM) now includes ensembles distributed to reflect the ocean analysis statistics. Since the ensembles are included in the data assimilation for the new forecast ensembles, the acoustic modeling uses the ensemble predictions in a similar fashion for creating the sound speed distribution over an acoustically relevant domain. Within an acoustic domain, singular value decomposition over the combined time-space structure of the sound speeds can be used to create Karhunen-Loève expansions of sound speed, subject to multivariate normality testing. These sound speed expansions serve as a basis for Hermite polynomial chaos expansions of derived quantities, in particular TL. The PC expansion coefficients result from so-called non-intrusive methods, involving evaluation of TL at multi-dimensional Gauss-Hermite quadrature collocation points. Traditional TL calculation from standard acoustic propagation modeling could be prohibitively time consuming at all multi-dimensional collocation points. This method employs Smolyak ordering and gridding methods to allow adaptive sub-sampling of the collocation points, determining only the most significant PC expansion coefficients to within a preset tolerance. Practically, the Smolyak order and grid sizes grow only polynomially in the number of Karhunen-Loève terms, alleviating the curse of dimensionality. The resulting TL PC coefficients allow the determination of TL PDF normality and of its mean and standard deviation. In the non-normal case, PC Monte Carlo methods are used to rapidly establish the PDF. This work was sponsored by the Office of Naval Research.
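
    The Karhunen-Loève step described above amounts to an SVD of the ensemble anomalies; the following Python sketch uses a synthetic ensemble as a stand-in for NCOM forecast members.

        import numpy as np

        rng = np.random.default_rng(4)
        n_members, n_grid = 32, 500
        z = np.linspace(0.0, 1.0, n_grid)

        # Synthetic ensemble: mean profile plus two random smooth modes + noise.
        mean_c = 1500.0 + 20.0 * z
        ens = (mean_c
               + rng.normal(0, 3, (n_members, 1)) * np.sin(np.pi * z)
               + rng.normal(0, 1, (n_members, 1)) * np.sin(2 * np.pi * z)
               + rng.normal(0, 0.1, (n_members, n_grid)))

        anomalies = ens - ens.mean(axis=0)
        U, s, Vt = np.linalg.svd(anomalies, full_matrices=False)

        var_frac = s**2 / np.sum(s**2)
        print("variance captured by first 3 KL modes:", np.round(var_frac[:3], 3))
        # KL coordinates of each member (the random inputs to the PC expansion):
        xi = U[:, :3] * s[:3] / np.sqrt(n_members - 1)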

  6. Analysis of complex environment effect on near-field emission

    NASA Astrophysics Data System (ADS)

    Ravelo, B.; Lalléchère, S.; Bonnet, P.; Paladian, F.

    2014-10-01

    This article deals with uncertainty analysis of the electromagnetic compatibility emissions of radiofrequency circuits, based on the near-field/near-field (NF/NF) transform combined with a stochastic approach. Using 2D data corresponding to the electromagnetic (EM) field (X = E or H) scanned in an observation plane placed at a height z0 above the circuit under test (CUT), the X field map was extracted. Uncertainty analyses were then assessed via the statistical moments of the X component. In addition, a stochastic collocation approach was considered, and the calculations were applied to the planar EM NF radiated by CUTs such as a Wilkinson power divider and a microstrip line operating at GHz frequencies. After Matlab implementation, the mean and standard deviation were assessed. The present study illustrates how variations of environmental parameters may impact EM fields. The NF uncertainty methodology can be applied to the effects of any physical parameter in a complex environment and is useful as a design guideline for printed circuit boards (PCBs).

  7. Fully anisotropic 3-D EM modelling on a Lebedev grid with a multigrid pre-conditioner

    NASA Astrophysics Data System (ADS)

    Jaysaval, Piyoosh; Shantsev, Daniil V.; de la Kethulle de Ryhove, Sébastien; Bratteland, Tarjei

    2016-12-01

    We present a numerical algorithm for 3-D electromagnetic (EM) simulations in conducting media with general electric anisotropy. The algorithm is based on the finite-difference discretization of frequency-domain Maxwell's equations on a Lebedev grid, in which all components of the electric field are collocated but half a spatial step staggered with respect to the magnetic field components, which also are collocated. This leads to a system of linear equations that is solved using a stabilized biconjugate gradient method with a multigrid preconditioner. We validate the accuracy of the numerical results for layered and 3-D tilted transverse isotropic (TTI) earth models representing typical scenarios used in the marine controlled-source EM method. It is then demonstrated that not taking into account the full anisotropy of the conductivity tensor can lead to misleading inversion results. For synthetic data corresponding to a 3-D model with a TTI anticlinal structure, a standard vertical transverse isotropic (VTI) inversion is not able to image a resistor, while for a 3-D model with a TTI synclinal structure it produces a false resistive anomaly. However, if the VTI forward solver used in the inversion is replaced by the proposed TTI solver with perfect knowledge of the strike and dip of the dipping structures, the resulting resistivity images become consistent with the true models.

  8. Cardiac Position Sensitivity Study in the Electrocardiographic Forward Problem Using Stochastic Collocation and Boundary Element Methods

    PubMed Central

    Swenson, Darrell J.; Geneser, Sarah E.; Stinstra, Jeroen G.; Kirby, Robert M.; MacLeod, Rob S.

    2012-01-01

    The electrocardiogram (ECG) is ubiquitously employed as a diagnostic and monitoring tool for patients experiencing cardiac distress and/or disease. It is widely known that changes in heart position resulting from, for example, posture of the patient (sitting, standing, lying) and respiration significantly affect the body-surface potentials; however, few studies have quantitatively and systematically evaluated the effects of heart displacement on the ECG. The goal of this study was to evaluate the impact of positional changes of the heart on the ECG in the specific clinical setting of myocardial ischemia. To carry out the necessary comprehensive sensitivity analysis, we applied a relatively novel and highly efficient statistical approach, the generalized polynomial chaos-stochastic collocation method, to a boundary element formulation of the electrocardiographic forward problem, and we drove these simulations with measured epicardial potentials from whole-heart experiments. Results of the analysis identified regions on the body-surface where the potentials were especially sensitive to realistic heart motion. The standard deviation (STD) of ST-segment voltage changes caused by the apex of a normal heart, swinging forward and backward or side-to-side was approximately 0.2 mV. Variations were even larger, 0.3 mV, for a heart exhibiting elevated ischemic potentials. These variations could be large enough to mask or to mimic signs of ischemia in the ECG. Our results suggest possible modifications to ECG protocols that could reduce the diagnostic error related to postural changes in patients possibly suffering from myocardial ischemia. PMID:21909818

  9. Towards Stochastic Optimization-Based Electric Vehicle Penetration in a Novel Archipelago Microgrid.

    PubMed

    Yang, Qingyu; An, Dou; Yu, Wei; Tan, Zhengan; Yang, Xinyu

    2016-06-17

    Due to the advantage of avoiding upstream disturbance and voltage fluctuation from a power transmission system, Islanded Micro-Grids (IMG) have attracted much attention. In this paper, we first propose a novel self-sufficient Cyber-Physical System (CPS) supported by Internet of Things (IoT) techniques, namely the "archipelago micro-grid (MG)", which integrates the power grid and sensor networks to make grid operation effective and is comprised of multiple MGs while disconnected from the utility grid. Electric Vehicles (EVs) are used to replace a portion of Conventional Vehicles (CVs) to reduce CO2 emissions and operation cost. Nonetheless, the intermittent nature and uncertainty of Renewable Energy Sources (RESs) remain a challenging issue in managing energy resources in the system. To address these issues, we formalize the optimal EV penetration problem as a two-stage Stochastic Optimal Penetration (SOP) model, which aims to minimize the emissions and operation cost in the system. Uncertainties coming from RESs (e.g., wind, solar, and load demand) are considered in the stochastic model, and the random parameters representing those uncertainties are captured by a Monte Carlo-based method. To enable the reasonable deployment of EVs in each MG, we develop two scheduling schemes, namely the Unlimited Coordinated Scheme (UCS) and the Limited Coordinated Scheme (LCS). An extensive simulation study based on a modified 9-bus system with three MGs has been carried out to show the effectiveness of our proposed schemes. The evaluation data indicate that our proposed strategy can reduce both the environmental pollution created by CO2 emissions and the operation costs under UCS and LCS.

  10. Towards Stochastic Optimization-Based Electric Vehicle Penetration in a Novel Archipelago Microgrid

    PubMed Central

    Yang, Qingyu; An, Dou; Yu, Wei; Tan, Zhengan; Yang, Xinyu

    2016-01-01

    Due to the advantage of avoiding upstream disturbance and voltage fluctuation from a power transmission system, Islanded Micro-Grids (IMG) have attracted much attention. In this paper, we first propose a novel self-sufficient Cyber-Physical System (CPS) supported by Internet of Things (IoT) techniques, namely the “archipelago micro-grid (MG)”, which integrates the power grid and sensor networks to make grid operation effective and is comprised of multiple MGs while disconnected from the utility grid. Electric Vehicles (EVs) are used to replace a portion of Conventional Vehicles (CVs) to reduce CO2 emissions and operation cost. Nonetheless, the intermittent nature and uncertainty of Renewable Energy Sources (RESs) remain a challenging issue in managing energy resources in the system. To address these issues, we formalize the optimal EV penetration problem as a two-stage Stochastic Optimal Penetration (SOP) model, which aims to minimize the emissions and operation cost in the system. Uncertainties coming from RESs (e.g., wind, solar, and load demand) are considered in the stochastic model, and the random parameters representing those uncertainties are captured by a Monte Carlo-based method. To enable the reasonable deployment of EVs in each MG, we develop two scheduling schemes, namely the Unlimited Coordinated Scheme (UCS) and the Limited Coordinated Scheme (LCS). An extensive simulation study based on a modified 9-bus system with three MGs has been carried out to show the effectiveness of our proposed schemes. The evaluation data indicate that our proposed strategy can reduce both the environmental pollution created by CO2 emissions and the operation costs under UCS and LCS. PMID:27322281

  11. Geophysical data integration, stochastic simulation and significance analysis of groundwater responses using ANOVA in the Chicot Aquifer system, Louisiana, USA

    USGS Publications Warehouse

    Rahman, A.; Tsai, F.T.-C.; White, C.D.; Carlson, D.A.; Willson, C.S.

    2008-01-01

    Data integration is challenging where there are different levels of support between primary and secondary data that need to be correlated in various ways. A geostatistical method is described, which integrates the hydraulic conductivity (K) measurements and electrical resistivity data to better estimate the K distribution in the Upper Chicot Aquifer of southwestern Louisiana, USA. The K measurements were obtained from pumping tests and represent the primary (hard) data. Borehole electrical resistivity data from electrical logs were regarded as the secondary (soft) data, and were used to infer K values through Archie's law and the Kozeny-Carman equation. A pseudo cross-semivariogram was developed to cope with the resistivity data non-collocation. Uncertainties in the auto-semivariograms and pseudo cross-semivariogram were quantified. The groundwater flow model responses by the regionalized and coregionalized models of K were compared using analysis of variance (ANOVA). The results indicate that non-collocated secondary data may improve estimates of K and affect groundwater flow responses of practical interest, including specific capacity and drawdown. © Springer-Verlag 2007.

  12. Convergence results for pseudospectral approximations of hyperbolic systems by a penalty type boundary treatment

    NASA Technical Reports Server (NTRS)

    Funaro, Daniele; Gottlieb, David

    1989-01-01

    A new method of imposing boundary conditions in the pseudospectral approximation of hyperbolic systems of equations is proposed. It is suggested to collocate the equations not only at the inner grid points but also at the boundary points, and to use the boundary conditions as penalty terms. For the pseudospectral Legendre method with the new boundary treatment, a stability analysis for the case of a constant-coefficient hyperbolic system is presented and error estimates are derived.

  13. Chebyshev collocation spectral method for one-dimensional radiative heat transfer in linearly anisotropic-scattering cylindrical medium

    NASA Astrophysics Data System (ADS)

    Zhou, Rui-Rui; Li, Ben-Wen

    2017-03-01

    In this study, the Chebyshev collocation spectral method (CCSM) is developed to solve the radiative integro-differential transfer equation (RIDTE) for a one-dimensional absorbing, emitting and linearly anisotropic-scattering cylindrical medium. The general form of the quadrature formulas for Chebyshev collocation points is deduced. These formulas are proved to have the same accuracy as the Gauss-Legendre quadrature formula (GLQF) for the F-function (geometric function) in the RIDTE. The explicit expressions of the Lagrange basis polynomials and the differentiation matrices for Chebyshev collocation points are also given. These expressions are necessary for solving an integro-differential equation by the CCSM. Since the integrand in the RIDTE is continuous but non-smooth, it is treated by the segments integration method (SIM). The derivative terms in the RIDTE are treated so as to improve the accuracy near the origin. In this way, a fourth-order accuracy is achieved by the CCSM for the RIDTE, whereas only a second-order accuracy is achieved by the finite difference method (FDM). Several benchmark problems (BPs) with various combinations of optical thickness, medium temperature distribution, degree of anisotropy, and scattering albedo are solved. The results show that the present CCSM efficiently obtains highly accurate results, especially for optically thin media. The solutions, rounded to seven significant digits, are given in tabular form and show excellent agreement with the published data. Finally, the solutions of the RIDTE are used as benchmarks for the solution of the radiative integral transfer equations (RITEs) presented by Sutton and Chen (JQSRT 84 (2004) 65-103). A non-uniform grid refined near the wall is advised to improve the accuracy of RITE solutions.

  14. A polynomial-chaos-expansion-based building block approach for stochastic analysis of photonic circuits

    NASA Astrophysics Data System (ADS)

    Waqas, Abi; Melati, Daniele; Manfredi, Paolo; Grassi, Flavia; Melloni, Andrea

    2018-02-01

    The Building Block (BB) approach has recently emerged in photonics as a suitable strategy for the analysis and design of complex circuits. Each BB can be foundry related and contains a mathematical macro-model of its functionality. As is well known, statistical variations in fabrication processes can have a strong effect on functionality and can ultimately affect the yield. In order to predict the statistical behavior of the circuit, proper analysis of the effects of uncertainties is crucial. This paper presents a method to build a novel class of Stochastic Process Design Kits for the analysis of photonic circuits. The proposed design kits directly store the information on the stochastic behavior of each building block in the form of a generalized-polynomial-chaos-based augmented macro-model obtained by properly exploiting stochastic collocation and Galerkin methods. Using this approach, we demonstrate that the augmented macro-models of the BBs can be calculated once and stored in a BB (foundry dependent) library and then used for the analysis of any desired circuit. The main advantage of this approach, shown here for the first time in photonics, is that the stochastic moments of an arbitrary photonic circuit can be evaluated by a single simulation only, without the need for repeated simulations. The accuracy and the significant speed-up with respect to classical Monte Carlo analysis are verified by means of a classical photonic circuit example with multiple uncertain variables.

  15. Incorporating Wind Power Forecast Uncertainties Into Stochastic Unit Commitment Using Neural Network-Based Prediction Intervals.

    PubMed

    Quan, Hao; Srinivasan, Dipti; Khosravi, Abbas

    2015-09-01

    Penetration of renewable energy resources, such as wind and solar power, into power systems significantly increases the uncertainties on system operation, stability, and reliability in smart grids. In this paper, the nonparametric neural network-based prediction intervals (PIs) are implemented for forecast uncertainty quantification. Instead of a single level PI, wind power forecast uncertainties are represented in a list of PIs. These PIs are then decomposed into quantiles of wind power. A new scenario generation method is proposed to handle wind power forecast uncertainties. For each hour, an empirical cumulative distribution function (ECDF) is fitted to these quantile points. The Monte Carlo simulation method is used to generate scenarios from the ECDF. Then the wind power scenarios are incorporated into a stochastic security-constrained unit commitment (SCUC) model. The heuristic genetic algorithm is utilized to solve the stochastic SCUC problem. Five deterministic and four stochastic case studies incorporated with interval forecasts of wind power are implemented. The results of these cases are presented and discussed together. Generation costs, and the scheduled and real-time economic dispatch reserves of different unit commitment strategies are compared. The experimental results show that the stochastic model is more robust than deterministic ones and, thus, decreases the risk in system operations of smart grids.

  16. Stochastic and Statistical Analysis of Utility Revenues and Weather Data Analysis for Consumer Demand Estimation in Smart Grids

    PubMed Central

    Ali, S. M.; Mehmood, C. A; Khan, B.; Jawad, M.; Farid, U; Jadoon, J. K.; Ali, M.; Tareen, N. K.; Usman, S.; Majid, M.; Anwar, S. M.

    2016-01-01

    In the smart grid paradigm, consumer demands are random and time-dependent, exhibiting stochastic behavior. The stochastically varying consumer demands have put policy makers and supplying agencies in a demanding position for optimal generation management. The utility revenue functions are highly dependent on the consumers' stochastic demand models. Sudden drifts in weather parameters affect the living standards of the consumers, which in turn influence the power demands. Considering the above, we analyzed stochastically and statistically the effect of random consumer demands on the fixed and variable revenues of electrical utilities. Our work presents a Multi-Variate Gaussian Distribution Function (MVGDF) probabilistic model of the utility revenues with time-dependent random consumer demands. Moreover, the Gaussian probability outcomes of the utility revenues are based on the varying consumer demand data patterns. Furthermore, Standard Monte Carlo (SMC) simulations are performed to validate the accuracy of the aforesaid probabilistic demand-revenue model. We critically analyzed the effect of weather data parameters on consumer demands using correlation and multi-linear regression schemes. The statistical analysis of consumer demands provides a relationship between the dependent variable (demand) and independent variables (weather data) for utility load management, generation control, and network expansion. PMID:27314229
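
    A minimal Python sketch of the demand-to-revenue propagation idea, with illustrative tariff values and an assumed exponential demand covariance; the paper's fitted MVGDF parameters are not reproduced here.

        import numpy as np

        rng = np.random.default_rng(5)
        hours = 24
        h = np.arange(hours)
        mean_demand = 50 + 20 * np.sin(np.linspace(0, 2 * np.pi, hours))  # MWh

        # Correlated hourly fluctuations (nearby hours move together).
        cov = 25.0 * np.exp(-np.abs(h[:, None] - h[None, :]) / 4.0)

        tariff = np.where((h >= 8) & (h <= 20), 120.0, 80.0)  # $/MWh, peak/off-peak
        fixed_cost = 30000.0                                  # $/day

        demand = rng.multivariate_normal(mean_demand, cov, size=100_000)
        revenue = demand @ tariff - fixed_cost

        print(f"mean daily revenue: ${revenue.mean():,.0f}, std: ${revenue.std():,.0f}")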

  17. Stochastic and Statistical Analysis of Utility Revenues and Weather Data Analysis for Consumer Demand Estimation in Smart Grids.

    PubMed

    Ali, S M; Mehmood, C A; Khan, B; Jawad, M; Farid, U; Jadoon, J K; Ali, M; Tareen, N K; Usman, S; Majid, M; Anwar, S M

    2016-01-01

    In the smart grid paradigm, consumer demands are random and time-dependent, exhibiting stochastic behavior. The stochastically varying consumer demands have put policy makers and supplying agencies in a demanding position for optimal generation management. The utility revenue functions are highly dependent on the consumers' stochastic demand models. Sudden drifts in weather parameters affect the living standards of the consumers, which in turn influence the power demands. Considering the above, we analyzed stochastically and statistically the effect of random consumer demands on the fixed and variable revenues of electrical utilities. Our work presents a Multi-Variate Gaussian Distribution Function (MVGDF) probabilistic model of the utility revenues with time-dependent random consumer demands. Moreover, the Gaussian probability outcomes of the utility revenues are based on the varying consumer demand data patterns. Furthermore, Standard Monte Carlo (SMC) simulations are performed to validate the accuracy of the aforesaid probabilistic demand-revenue model. We critically analyzed the effect of weather data parameters on consumer demands using correlation and multi-linear regression schemes. The statistical analysis of consumer demands provides a relationship between the dependent variable (demand) and independent variables (weather data) for utility load management, generation control, and network expansion.

  18. Improved Climate Simulations through a Stochastic Parameterization of Ocean Eddies

    NASA Astrophysics Data System (ADS)

    Williams, Paul; Howe, Nicola; Gregory, Jonathan; Smith, Robin; Joshi, Manoj

    2016-04-01

    In climate simulations, the impacts of the sub-grid scales on the resolved scales are conventionally represented using deterministic closure schemes, which assume that the impacts are uniquely determined by the resolved scales. Stochastic parameterization relaxes this assumption, by sampling the sub-grid variability in a computationally inexpensive manner. This presentation shows that the simulated climatological state of the ocean is improved in many respects by implementing a simple stochastic parameterization of ocean eddies into a coupled atmosphere-ocean general circulation model. Simulations from a high-resolution, eddy-permitting ocean model are used to calculate the eddy statistics needed to inject realistic stochastic noise into a low-resolution, non-eddy-permitting version of the same model. A suite of four stochastic experiments is then run to test the sensitivity of the simulated climate to the noise definition, by varying the noise amplitude and decorrelation time within reasonable limits. The addition of zero-mean noise to the ocean temperature tendency is found to have a non-zero effect on the mean climate. Specifically, in terms of the ocean temperature and salinity fields both at the surface and at depth, the noise reduces many of the biases in the low-resolution model and causes it to more closely resemble the high-resolution model. The variability of the strength of the global ocean thermohaline circulation is also improved. It is concluded that stochastic ocean perturbations can yield reductions in climate model error that are comparable to those obtained by refining the resolution, but without the increased computational cost. Therefore, stochastic parameterizations of ocean eddies have the potential to significantly improve climate simulations. Reference: PD Williams, NJ Howe, JM Gregory, RS Smith, and MM Joshi (2016), Improved Climate Simulations through a Stochastic Parameterization of Ocean Eddies, Journal of Climate, under revision.
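    The flavour of such a perturbation can be conveyed with a short sketch: a zero-mean AR(1) (red-noise) field with a chosen amplitude and decorrelation time is added to the temperature tendency of a toy ocean grid. All numbers are assumptions for illustration; the paper derives the noise statistics from an eddy-permitting model.

      import numpy as np

      def deterministic_tendency(T):
          # placeholder for the resolved ocean physics: weak relaxation to 283 K
          return (283.0 - T) * 1e-8

      rng = np.random.default_rng(2)
      nx, ny, nsteps = 64, 64, 500
      dt = 3600.0               # model time step (s), assumed
      tau = 5 * 86400.0         # noise decorrelation time (s), assumed
      sigma = 1e-7              # noise amplitude (K/s), assumed from eddy statistics
      phi = np.exp(-dt / tau)   # AR(1) coefficient matching the decorrelation time

      T = np.full((nx, ny), 283.0)
      eta = np.zeros((nx, ny))
      for _ in range(nsteps):
          # zero-mean red noise with stationary variance sigma**2
          eta = phi * eta + sigma * np.sqrt(1 - phi**2) * rng.standard_normal((nx, ny))
          T += dt * (deterministic_tendency(T) + eta)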

  19. Stochastic Approaches Within a High Resolution Rapid Refresh Ensemble

    NASA Astrophysics Data System (ADS)

    Jankov, I.

    2017-12-01

    It is well known that global and regional numerical weather prediction (NWP) ensemble systems are under-dispersive, producing unreliable and overconfident ensemble forecasts. Typical approaches to alleviate this problem include the use of multiple dynamic cores, multiple physics suite configurations, or a combination of the two. While these approaches may produce desirable results, they have practical and theoretical deficiencies and are more difficult and costly to maintain. An active area of research that promotes a more unified and sustainable system is the use of stochastic physics. Stochastic approaches include Stochastic Parameter Perturbations (SPP), Stochastic Kinetic Energy Backscatter (SKEB), and Stochastic Perturbation of Physics Tendencies (SPPT). The focus of this study is to assess model performance within a convection-permitting ensemble at 3-km grid spacing across the Contiguous United States (CONUS) using a variety of stochastic approaches. A single physics suite configuration based on the operational High-Resolution Rapid Refresh (HRRR) model was utilized, and ensemble members were produced by employing stochastic methods. Parameter perturbations (using SPP) for select fields were employed in the Rapid Update Cycle (RUC) land surface model (LSM) and Mellor-Yamada-Nakanishi-Niino (MYNN) Planetary Boundary Layer (PBL) schemes. Within MYNN, SPP was applied to sub-grid cloud fraction, mixing length, roughness length, mass fluxes, and Prandtl number. In the RUC LSM, SPP was applied to hydraulic conductivity, and perturbing soil moisture at the initial time was also tested. First, iterative testing was conducted to assess the initial performance of several configuration settings (e.g., a variety of spatial and temporal de-correlation lengths). Upon selection of the most promising candidate configurations using SPP, a 10-day time period was run and more robust statistics were gathered. SKEB and SPPT were included in additional retrospective tests to assess the impact of using all three stochastic approaches to address model uncertainty. Results from the stochastic perturbation testing were compared to a baseline multi-physics control ensemble. For probabilistic forecast performance, the Model Evaluation Tools (MET) verification package was used.

  20. Nanosecond Enhancements of the Atmospheric Electron Density by Extensive Air Showers

    NASA Astrophysics Data System (ADS)

    Rutjes, C.; Camporeale, E.; Ebert, U.; Buitink, S.; Scholten, O.; Trinh, G. T. N.; Witteveen, J.

    2015-12-01

    As is well known, a sufficient density of free electrons and strong electric fields are the basic requirements to start any electrical discharge. In the context of thunderstorm discharges, it has become clear that, in addition, droplets and/or ice particles are required to enhance the electric field to values above breakdown. In our recent study [1] we have shown that these three ingredients have to interplay to allow for lightning inception triggered by an extensive air shower event. Extensive air showers are a highly stochastic natural phenomenon, creating highly coherent sub-nanosecond enhancements of the atmospheric electron density. To predict these electron density enhancements accurately, one has to take the uncertainty of the input variables into account. For this study we use the initial energy, inclination, and altitude of first interaction, which significantly influence the evolution of the shower. To this end, we use the stochastic collocation method [2] to post-process our detailed Monte Carlo extensive air shower simulations, performed with the CORSIKA [3] software package, which provides an efficient and elegant way to determine the distribution of the atmospheric electron density enhancements. [1] Dubinova, A., Rutjes, C., Ebert, U., Buitink, S., Scholten, O., and Trinh, G. T. N., "Prediction of Lightning Inception by Large Ice Particles and Extensive Air Showers," PRL 115, 015002 (2015). [2] G. J. A. Loeven, J. A. S. Witteveen, H. Bijl, "Probabilistic collocation: an efficient nonintrusive approach for arbitrarily distributed parametric uncertainties," 45th AIAA Aerospace Sciences Meeting, Reno, Nevada, 2007, AIAA-2007-317. [3] Heck, Dieter, et al., "CORSIKA: A Monte Carlo code to simulate extensive air showers," No. FZKA-6019, 1998.
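    The essence of the collocation post-processing can be sketched as follows: the expensive simulation is evaluated only at Gauss-Hermite collocation nodes of a Gaussian input, and the output statistics follow from the quadrature weights. The input distribution and the mock model below are assumptions standing in for actual CORSIKA runs.

      import numpy as np

      # Uncertain input: altitude of first interaction (km), assumed Gaussian
      mu, sigma = 20.0, 3.0
      nodes, weights = np.polynomial.hermite_e.hermegauss(7)  # probabilists' rule
      alts = mu + sigma * nodes               # physical collocation points

      def expensive_model(altitude_km):
          # stand-in for a CORSIKA shower run; returns a mock enhancement
          return np.exp(-0.5 * ((altitude_km - 18.0) / 4.0) ** 2)

      vals = np.array([expensive_model(a) for a in alts])
      w = weights / np.sqrt(2 * np.pi)        # normalize weights to sum to 1
      mean = w @ vals
      var = w @ (vals - mean) ** 2
      print(mean, var)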

  1. SMERFS: Stochastic Markov Evaluation of Random Fields on the Sphere

    NASA Astrophysics Data System (ADS)

    Creasey, Peter; Lang, Annika

    2018-04-01

    SMERFS (Stochastic Markov Evaluation of Random Fields on the Sphere) creates large realizations of random fields on the sphere. It uses a fast algorithm based on Markov properties and 1D fast Fourier transforms that generates samples on an n × n grid in O(n² log n) operations and efficiently derives the necessary conditional covariance matrices.
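    SMERFS itself works on the sphere via Markov properties, but the underlying idea of fast FFT-based sampling is easy to sketch on a flat periodic grid: colour white noise in Fourier space with a prescribed spectrum, at O(n² log n) cost. The spectral slope below is an arbitrary assumption.

      import numpy as np

      def grf_sample(n, power=-3.0, rng=None):
          # Sample a periodic Gaussian random field on an n x n grid by
          # colouring white noise in Fourier space (cost O(n^2 log n)).
          rng = rng or np.random.default_rng()
          kx = np.fft.fftfreq(n)[:, None]
          ky = np.fft.fftfreq(n)[None, :]
          k = np.sqrt(kx**2 + ky**2)
          amp = np.zeros_like(k)
          amp[k > 0] = k[k > 0] ** (power / 2.0)   # prescribed power spectrum
          noise = np.fft.fft2(rng.standard_normal((n, n)))
          return np.real(np.fft.ifft2(amp * noise))

      field = grf_sample(256, rng=np.random.default_rng(3))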

  2. Adaptive grid based multi-objective Cauchy differential evolution for stochastic dynamic economic emission dispatch with wind power uncertainty

    PubMed Central

    Lei, Xiaohui; Wang, Chao; Yue, Dong; Xie, Xiangpeng

    2017-01-01

    Since wind power has been integrated into thermal power operation systems, dynamic economic emission dispatch (DEED) has become a new challenge due to wind power's uncertain characteristics. This paper proposes an adaptive grid based multi-objective Cauchy differential evolution (AGB-MOCDE) for solving stochastic DEED with wind power uncertainty. To properly deal with wind power uncertainty, scenarios are generated to simulate possible situations by dividing the uncertainty domain into intervals; the probability of each interval is calculated using the cumulative distribution function, and a stochastic DEED model is formulated under the different scenarios. To enhance optimization efficiency, a Cauchy mutation operation is utilized to improve differential evolution by adjusting the population diversity during the evolution process, and an adaptive grid is constructed to retain the diversity distribution of the Pareto front. In view of the large number of generated scenarios, a reduction mechanism based on covariance relationships is carried out to decrease the number of scenarios, which greatly decreases the computational complexity. Moreover, a constraint-handling technique is utilized to maintain the system load balance while considering transmission losses among thermal units and wind farms; all constraint limits can be satisfied within the permitted accuracy. After the proposed method is simulated on three test systems, the obtained results reveal that, in comparison with other alternatives, the proposed AGB-MOCDE can optimize the DEED problem while handling all constraint limits, and the optimal scheme of stochastic DEED decreases the conservatism of interval optimization, providing a more valuable optimal scheme for real-world applications. PMID:28961262
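    The interval-based scenario generation step can be illustrated in a few lines, assuming (purely for illustration) a Gaussian wind-power forecast error; each interval of the uncertainty domain becomes one scenario whose probability is a CDF difference:

      import numpy as np
      from scipy import stats

      forecast, err_std = 100.0, 15.0        # MW; assumed forecast and error spread
      edges = np.linspace(-45.0, 45.0, 10)   # intervals covering the uncertainty domain
      dist = stats.norm(0.0, err_std)

      probs = np.diff(dist.cdf(edges))       # probability mass of each interval
      probs /= probs.sum()                   # renormalize over the truncated domain
      scenarios = forecast + 0.5 * (edges[:-1] + edges[1:])  # interval midpoints

      for s, p in zip(scenarios, probs):
          print(f"wind = {s:6.1f} MW with probability {p:.3f}")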

  3. Stochastic cellular automata model for stock market dynamics

    NASA Astrophysics Data System (ADS)

    Bartolozzi, M.; Thomas, A. W.

    2004-04-01

    In the present work we introduce a stochastic cellular automata model in order to simulate the dynamics of the stock market. A direct percolation method is used to create a hierarchy of clusters of active traders on a two-dimensional grid. Active traders are characterized by the decision to buy, σ_i(t) = +1, or sell, σ_i(t) = -1, a stock at a certain discrete time step, while the remaining cells are inactive, σ_i(t) = 0. The trading dynamics is then determined by the stochastic interaction between traders belonging to the same cluster. Extreme, intermittent events, such as crashes or bubbles, are triggered by a phase transition in the state of the bigger clusters present on the grid, where almost all the active traders come to share the same spin orientation. Most of the stylized aspects of financial market time series, including multifractal properties, are reproduced by the model. A direct comparison is made with the daily closures of the S&P500 index.
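    A toy version of such a model fits in a few lines: activate traders by site percolation, give every connected cluster a single stochastic buy/sell decision, and read the return off the demand imbalance. The full model's hierarchical cluster construction and intra-cluster dynamics are not reproduced here.

      import numpy as np
      from scipy import ndimage

      rng = np.random.default_rng(4)
      n, steps, p_active = 128, 200, 0.35
      returns = []
      for _ in range(steps):
          active = rng.random((n, n)) < p_active       # site percolation of traders
          labels, ncl = ndimage.label(active)          # connected clusters
          spin_of = rng.choice([-1, 1], size=ncl + 1)  # one decision per cluster
          spin_of[0] = 0                               # label 0 marks inactive cells
          spins = spin_of[labels]
          returns.append(spins.sum() / max(active.sum(), 1))  # demand imbalance
      returns = np.asarray(returns)
      print(returns.std(), np.abs(returns).max())  # heavy tails from big clusters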

  4. ecode - Electron Transport Algorithm Testing v. 1.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Franke, Brian C.; Olson, Aaron J.; Bruss, Donald Eugene

    2016-10-05

    ecode is a Monte Carlo code used for testing algorithms related to electron transport. The code can read basic physics parameters, such as energy-dependent stopping powers and screening parameters. The code permits simple planar geometries of slabs or cubes. Parallelization consists of domain replication, with work distributed at the start of the calculation and statistical results gathered at the end of the calculation. Some basic routines (such as input parsing, random number generation, and statistics processing) are shared with the Integrated Tiger Series codes. A variety of algorithms for uncertainty propagation are incorporated based on the stochastic collocation and stochastic Galerkin methods. These permit uncertainty only in the total and angular scattering cross sections. The code contains algorithms for simulating stochastic mixtures of two materials. The physics is approximate, ranging from mono-energetic and isotropic scattering to screened Rutherford angular scattering and Rutherford energy-loss scattering (simple electron transport models). No production of secondary particles is implemented, and no photon physics is implemented.

  5. Application of stochastic particle swarm optimization algorithm to determine the graded refractive index distribution in participating media

    NASA Astrophysics Data System (ADS)

    Wei, Lin-Yang; Qi, Hong; Ren, Ya-Tao; Ruan, Li-Ming

    2016-11-01

    Inverse estimation of the refractive index distribution in one-dimensional participating media with a graded refractive index (GRI) is investigated. The forward radiative transfer problem is solved by the Chebyshev collocation spectral method. The stochastic particle swarm optimization (SPSO) algorithm is employed to retrieve three kinds of GRI distribution, i.e. linear, sinusoidal, and quadratic. The retrieval accuracy for GRI distributions with different wall emissivities, optical thicknesses, absorption coefficients, and scattering coefficients is discussed thoroughly. To improve the retrieval accuracy of the quadratic GRI distribution, a double-layer model is proposed to supply more measurement information. The influence of measurement errors upon the precision of the estimated results is also investigated. Considering that the GRI distribution is unknown beforehand in practice, a quadratic function is employed to retrieve the linear GRI by the SPSO algorithm. All the results show that the SPSO algorithm is applicable to retrieving different GRI distributions in participating media accurately, even with noisy data.
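    The optimization loop can be illustrated with a canonical particle swarm retrieving the two coefficients of an assumed linear GRI profile from noisy synthetic data; the SPSO variant of the paper adds stochastic elements beyond this canonical form, and the spectral radiative transfer solver is replaced by a trivial stand-in:

      import numpy as np

      rng = np.random.default_rng(5)
      x = np.linspace(0.0, 1.0, 32)

      def forward(p):
          # stand-in for the spectral radiative-transfer solve with n(x) = a + b*x
          a, b = p
          return a + b * x

      meas = forward((1.2, 0.6)) + rng.normal(0, 0.005, x.size)  # noisy data

      def cost(p):
          return np.sum((forward(p) - meas) ** 2)

      npart, iters = 30, 200
      pos = rng.uniform([1.0, 0.0], [2.0, 1.0], (npart, 2))
      vel = np.zeros_like(pos)
      pbest = pos.copy()
      pbest_f = np.array([cost(p) for p in pos])
      g = pbest[pbest_f.argmin()].copy()
      for _ in range(iters):
          r1, r2 = rng.random((2, npart, 1))
          # inertia + cognitive (personal best) + social (global best) terms
          vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (g - pos)
          pos += vel
          f = np.array([cost(p) for p in pos])
          better = f < pbest_f
          pbest[better], pbest_f[better] = pos[better], f[better]
          g = pbest[pbest_f.argmin()].copy()
      print("retrieved (a, b):", g)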

  6. Three Dimensional Flow and Pressure Patterns in a Hydrostatic Journal Bearing

    NASA Technical Reports Server (NTRS)

    Braun, M. Jack; Dzodzo, Milorad B.

    1996-01-01

    The flow in a hydrostatic journal bearing (HJB) is described by a mathematical model that uses the three-dimensional non-orthogonal form of the Navier-Stokes equations. Using u, v, w, and p as primary variables, a conservative-formulation, finite-volume multi-block method is applied on a collocated, body-fitted grid. The HJB has four shallow pockets with a depth/length ratio of 0.067. This paper represents a natural extension of the two- and three-dimensional studies undertaken prior to this project.

  7. Scenario generation for stochastic optimization problems via the sparse grid method

    DOE PAGES

    Chen, Michael; Mehrotra, Sanjay; Papp, David

    2015-04-19

    We study the use of sparse grids in the scenario generation (or discretization) problem in stochastic programming problems where the uncertainty is modeled using a continuous multivariate distribution. We show that, under a regularity assumption on the random function involved, the sequence of optimal objective function values of the sparse grid approximations converges to the true optimal objective function values as the number of scenarios increases. The rate of convergence is also established. We treat separately the special case when the underlying distribution is an affine transform of a product of univariate distributions, and show how the sparse grid method can be adapted to the distribution by the use of quadrature formulas tailored to the distribution. We numerically compare the performance of the sparse grid method using different quadrature rules with classic quasi-Monte Carlo (QMC) methods, optimal rank-one lattice rules, and Monte Carlo (MC) scenario generation, using a series of utility maximization problems with up to 160 random variables. The results show that the sparse grid method is very efficient, especially if the integrand is sufficiently smooth. In such problems the sparse grid scenario generation method is found to need several orders of magnitude fewer scenarios than MC and QMC scenario generation to achieve the same accuracy. As a result, it is indicated that the method scales well with the dimension of the distribution, especially when the underlying distribution is an affine transform of a product of univariate distributions, in which case the method appears scalable to thousands of random variables.
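    A compact sketch of the sparse grid construction is given below: the classic Smolyak rule in combination-technique form, built from 1D Gauss-Legendre rules with linear growth (the paper tailors the underlying quadratures to the distribution, which is not done here). Note that some scenario weights of a sparse rule can be negative, which scenario-based solvers must tolerate.

      import numpy as np
      from itertools import product
      from math import comb

      def gl_rule(n):
          # Gauss-Legendre nodes/weights rescaled to [0, 1]
          x, w = np.polynomial.legendre.leggauss(n)
          return 0.5 * (x + 1.0), 0.5 * w

      def smolyak(d, q):
          # A(q, d) = sum over q-d+1 <= |i| <= q of
          # (-1)**(q-|i|) * C(d-1, q-|i|) * (tensor rule with i_k points)
          pts, wts = [], []
          for i in product(range(1, q + 1), repeat=d):
              s = sum(i)
              if not (q - d + 1 <= s <= q):
                  continue
              coef = (-1) ** (q - s) * comb(d - 1, q - s)
              rules = [gl_rule(ik) for ik in i]
              for idx in product(*(range(ik) for ik in i)):
                  pts.append([rules[k][0][idx[k]] for k in range(d)])
                  wts.append(coef * np.prod([rules[k][1][idx[k]] for k in range(d)]))
          return np.asarray(pts), np.asarray(wts)

      # scenarios and (possibly signed) probabilities for a uniform 3D distribution;
      # an affine transform maps them to other product distributions
      points, weights = smolyak(d=3, q=6)
      print(len(weights), weights.sum())   # weights sum to 1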

  8. Stochastic dynamic modeling of regular and slow earthquakes

    NASA Astrophysics Data System (ADS)

    Aso, N.; Ando, R.; Ide, S.

    2017-12-01

    Both regular and slow earthquakes are slip phenomena on plate boundaries and can be simulated by (quasi-)dynamic modeling [Liu and Rice, 2005]. In these numerical simulations, spatial heterogeneity is usually considered not only for explaining real physical properties but also for evaluating the stability of the calculations or the sensitivity of the results to the conditions. However, even though we discretize the model space with small grids, heterogeneity at scales smaller than the grid size is not considered in models with deterministic governing equations. To evaluate the effect of heterogeneity at the smaller scales, we need to consider stochastic interactions between slip and stress in dynamic modeling. Tidal stress is known to trigger or affect both regular and slow earthquakes [Yabe et al., 2015; Ide et al., 2016], and such a fluctuating external force can also be considered a stochastic external force. A healing process of faults may also be stochastic, so we introduce a stochastic friction law. In the present study, we propose a stochastic dynamic model to explain both regular and slow earthquakes. We solve the mode III problem, which corresponds to rupture propagation along the strike direction. We use a BIEM (boundary integral equation method) scheme to simulate slip evolution, but we add stochastic perturbations to the governing equations, which are usually written in a deterministic manner. As the simplest type of perturbation, we adopt Gaussian deviations in the formulation of the slip-stress kernel, the external force, and the friction. By increasing the amplitude of the perturbations of the slip-stress kernel, we reproduce the complicated rupture processes of regular earthquakes, including unilateral and bilateral ruptures. By perturbing the external force, we reproduce slow rupture propagation at a scale of km/day. The slow propagation generated by a combination of fast interactions at S-wave velocity is analogous to the kinetic theory of gases: thermal diffusion appears much slower than the particle velocity of each molecule. The concept of stochastic triggering originates in the Brownian walk model [Ide, 2008], and the present study introduces stochastic dynamics into dynamic simulations. The stochastic dynamic model has the potential to explain both regular and slow earthquakes more realistically.

  9. Towards an Entropy Stable Spectral Element Framework for Computational Fluid Dynamics

    NASA Technical Reports Server (NTRS)

    Carpenter, Mark H.; Parsani, Matteo; Fisher, Travis C.; Nielsen, Eric J.

    2016-01-01

    Entropy stable (SS) discontinuous spectral collocation formulations of any order are developed for the compressible Navier-Stokes equations on hexahedral elements. Recent progress on two complementary efforts is presented. The first effort is a generalization of previous SS spectral collocation work to extend the applicable set of points from tensor-product Legendre-Gauss-Lobatto (LGL) to tensor-product Legendre-Gauss (LG) points. The LG and LGL point formulations are compared on a series of test problems. Although more costly to implement, the LG operators are shown to be significantly more accurate on comparable grids. Both the LGL and LG operators are of comparable efficiency and robustness, as is demonstrated using test problems for which conventional FEM techniques suffer instability. The second effort generalizes previous SS work to include the possibility of p-refinement at non-conforming interfaces. A generalization of existing entropy stability machinery is developed to accommodate the nuances of fully multi-dimensional summation-by-parts (SBP) operators. The entropy stability of the compressible Euler equations on non-conforming interfaces is demonstrated using the newly developed LG operators and multi-dimensional interface interpolation operators.

  10. Instability, rupture and fluctuations in thin liquid films: Theory and computations

    NASA Astrophysics Data System (ADS)

    Gvalani, Rishabh; Duran-Olivencia, Miguel; Kalliadasis, Serafim; Pavliotis, Grigorios

    2017-11-01

    Thin liquid films are ubiquitous in natural phenomena and technological applications. They are commonly studied via deterministic hydrodynamic equations, but thermal fluctuations often play a crucial role that still needs to be understood. An example of this is dewetting, which involves the rupture of a thin liquid film and the formation of droplets. Such a process is thermally activated and requires fluctuations to be taken into account self-consistently. Here we present an analytical and numerical study of a stochastic thin-film equation derived from first principles. We scrutinise the behaviour of the stochastic thin-film equation in the limit of perfectly correlated noise along the wall-normal direction. We also perform Monte Carlo simulations of the stochastic equation by adopting a numerical scheme based on a spectral collocation method. The numerical scheme allows us to explore the fluctuating dynamics of the thin film and the behaviour of the system's free energy close to rupture. Finally, we also study the effect of the noise intensity on the rupture time, which is in good agreement with previous works. Funding: Imperial College London (ICL) President's PhD Scholarship; European Research Council Advanced Grant No. 247031; EPSRC Grants EP/L025159, EP/L020564, EP/P031587, EP/L024926, and EP/L016230/1.

  11. Stochastic four-way coupling of gas-solid flows for Large Eddy Simulations

    NASA Astrophysics Data System (ADS)

    Curran, Thomas; Denner, Fabian; van Wachem, Berend

    2017-11-01

    The interaction of solid particles with turbulence has long been a topic of interest for predicting the behavior of industrially relevant flows. For the turbulent fluid phase, Large Eddy Simulation (LES) methods are widely used for their low computational cost, leaving only the sub-grid scales (SGS) of turbulence to be modelled. Although LES has seen great success in predicting the behavior of turbulent single-phase flows, the development of LES for turbulent gas-solid flows is still in its infancy. This contribution aims to construct a model describing the four-way coupling of particles in an LES framework, by considering the role particles play in the transport of turbulent kinetic energy across the scales. Firstly, a stochastic model reconstructing the sub-grid velocities for the particle tracking is presented. Secondly, whereas most models treat particle-particle collisions deterministically, we introduce a stochastic model for estimating the collision probability. All results are validated against fully resolved DNS-DPS simulations. The final goal of this contribution is to propose a global stochastic method adapted to two-phase LES simulation in which the number of particles considered can be significantly increased. Financial support from PetroBras is gratefully acknowledged.
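    A common minimal model for the first ingredient, reconstructing the sub-grid velocity seen by a particle, is an Ornstein-Uhlenbeck process with an SGS time scale and energy; the sketch below tracks one particle with assumed parameters (it is not the specific closure of this contribution):

      import numpy as np

      rng = np.random.default_rng(6)
      dt, nsteps = 1e-3, 5000
      tau_sgs = 0.05      # SGS decorrelation time (s), assumed
      sigma_sgs = 0.3     # SGS velocity std dev (m/s), assumed from SGS energy

      x = np.zeros(3)                       # particle position
      u_res = np.array([1.0, 0.0, 0.0])     # resolved LES velocity at the particle
      u_sgs = np.zeros(3)                   # stochastic sub-grid velocity

      a = np.exp(-dt / tau_sgs)
      for _ in range(nsteps):
          # exact OU update: zero mean, stationary variance sigma_sgs**2
          u_sgs = a * u_sgs + sigma_sgs * np.sqrt(1 - a**2) * rng.standard_normal(3)
          x += dt * (u_res + u_sgs)
      print(x)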

  12. Shallow cumuli ensemble statistics for development of a stochastic parameterization

    NASA Astrophysics Data System (ADS)

    Sakradzija, Mirjana; Seifert, Axel; Heus, Thijs

    2014-05-01

    According to the conventional deterministic approach to the parameterization of moist convection in numerical atmospheric models, a given large-scale forcing produces a unique response from the unresolved convective processes. This representation leaves out the small-scale variability of convection: as is known from empirical studies of deep and shallow convective cloud ensembles, there is a whole distribution of sub-grid states corresponding to a given large-scale forcing. Moreover, this distribution gets broader with increasing model resolution. This behavior is also consistent with our theoretical understanding of a coarse-grained nonlinear system. We propose an approach to represent the variability of the unresolved shallow-convective states, including the dependence of the spread and shape of the sub-grid state distribution on the model horizontal resolution. Starting from the Gibbs canonical ensemble theory, Craig and Cohen (2006) developed a theory for the fluctuations in a deep convective ensemble. The micro-states of a deep convective cloud ensemble are characterized by the cloud-base mass flux, which, according to the theory, is exponentially distributed (Boltzmann distribution). Following their work, we study the shallow cumulus ensemble statistics and the distribution of the cloud-base mass flux. We employ a Large-Eddy Simulation (LES) model and a cloud tracking algorithm, followed by conditional sampling of clouds at the cloud-base level, to retrieve information about the individual cloud life cycles and the cloud ensemble as a whole. In the case of a shallow cumulus cloud ensemble, the distribution of micro-states is a generalized exponential distribution. Based on the empirical and theoretical findings, a stochastic model has been developed to simulate the shallow convective cloud ensemble and to test the convective ensemble theory. The stochastic model simulates a compound random process, with the number of convective elements drawn from a Poisson distribution and cloud properties sub-sampled from a generalized ensemble distribution. We study the role of the different cloud subtypes in a shallow convective ensemble and how the diverse cloud properties and cloud lifetimes affect the system macro-state. To what extent does the cloud-base mass flux distribution deviate from the simple Boltzmann distribution, and how does this affect the results from the stochastic model? Is the memory provided by the finite lifetime of individual clouds of importance for the ensemble statistics? We also test for the minimal information, given as input to the stochastic model, that is able to reproduce the ensemble mean statistics and the variability in a convective ensemble. An important property of the resulting distribution of the sub-grid convective states is its scale-adaptivity: the smaller the grid size, the broader the compound distribution of the sub-grid states.
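    The core of such a stochastic model, a compound random process with a Poisson cloud number and per-cloud mass fluxes drawn from an exponential (Boltzmann) distribution, can be sketched directly; both parameter values below are invented:

      import numpy as np

      rng = np.random.default_rng(7)
      mean_n = 500     # mean number of clouds in the domain (assumed)
      mean_m = 0.03    # mean cloud-base mass flux per cloud, kg m^-2 s^-1 (assumed)

      def ensemble_mass_flux(samples):
          # N ~ Poisson(mean_n); total flux M = sum of N exponential draws
          n = rng.poisson(mean_n, size=samples)
          return np.array([rng.exponential(mean_m, size=k).sum() for k in n])

      M = ensemble_mass_flux(5000)
      print(M.mean(), M.std())   # the spread widens as mean_n (domain size) shrinks

    Replacing the exponential with a generalized distribution, as the abstract suggests for shallow cumuli, only changes the per-cloud sampling line.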

  13. A Stochastic Mixed Finite Element Heterogeneous Multiscale Method for Flow in Porous Media

    DTIC Science & Technology

    2010-08-01

    …applicable for flow in porous media has drawn significant interest in the last few years. Several techniques like generalized polynomial chaos expansions (gPC) … represents the stochastic solution as a polynomial approximation. This interpolant is constructed via independent function calls to the deterministic … of orthogonal polynomials [34,38] or sparse grid approximations [39-41]. It is well known that the global polynomial interpolation cannot resolve lo…

  14. Enhancing adaptive sparse grid approximations and improving refinement strategies using adjoint-based a posteriori error estimates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jakeman, J.D., E-mail: jdjakem@sandia.gov; Wildey, T.

    2015-01-01

    In this paper we present an algorithm for adaptive sparse grid approximations of quantities of interest computed from discretized partial differential equations. We use adjoint-based a posteriori error estimates of the physical discretization error and the interpolation error in the sparse grid to enhance the sparse grid approximation and to drive adaptivity of the sparse grid. Utilizing these error estimates provides significantly more accurate functional values for random samples of the sparse grid approximation. We also demonstrate that alternative refinement strategies based upon a posteriori error estimates can lead to further increases in accuracy in the approximation over traditional hierarchical surplus based strategies. Throughout this paper we also provide and test a framework for balancing the physical discretization error with the stochastic interpolation error of the enhanced sparse grid approximation.

  15. Global grids of gravity anomalies and vertical gravity gradients at 10 km altitude from GOCE gradient data 2009-2011 and polar gravity.

    NASA Astrophysics Data System (ADS)

    Tscherning, Carl Christian; Arabelos, Dimitrios; Reguzzoni, Mirko

    2013-04-01

    The GOCE satellite measures gravity gradients, which are filtered and transformed into gradients in an Earth-referenced frame by the GOCE High Level Processing Facility. More than 80,000,000 data points with 6 components are available from the period 2009-2011. IAG Arctic gravity was used north of 83 deg., while data over the Antarctic were not used due to bureaucratic restrictions imposed by the data holders. Subsets of the data have been used to produce gridded values at 10 km altitude of gravity anomalies and vertical gravity gradients in 20 deg. x 20 deg. blocks with 10' spacing. Various combinations and densities of data were used to obtain values in areas with known gravity anomalies. The (marginally) best choice was vertical gravity gradients selected with an approximately 0.125 deg. spacing. Using Least-Squares Collocation, error estimates were computed and compared to the difference between the GOCE grids and grids derived from EGM2008 to deg. 512. In general a good agreement was found, however with some inconsistencies in certain areas. The computation time on a usual server with 24 processors was typically 100 minutes for a block with generally 40,000 GOCE vertical gradients as input. The computations will be updated with new Wiener-filtered data in the near future.
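    For reference, Least-Squares Collocation prediction with error estimates reduces to a few linear-algebra lines once a signal covariance function is assumed (here an illustrative Gaussian model on synthetic data, not the covariances fitted to GOCE data):

      import numpy as np

      def lsc_predict(obs_xy, obs_val, obs_err, grid_xy, c0=1.0, L=0.5):
          # predict = C_gp (C_pp + D)^-1 y with C(r) = c0 * exp(-(r/L)**2)
          def cov(a, b):
              r2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
              return c0 * np.exp(-r2 / L**2)
          Cpp = cov(obs_xy, obs_xy) + np.diag(obs_err**2)  # signal + noise at data
          Cgp = cov(grid_xy, obs_xy)                       # signal between grid/data
          pred = Cgp @ np.linalg.solve(Cpp, obs_val)
          # error estimate: c0 - diag(C_gp C_pp^-1 C_gp^T)
          err2 = c0 - np.einsum('ij,ji->i', Cgp, np.linalg.solve(Cpp, Cgp.T))
          return pred, np.sqrt(np.maximum(err2, 0.0))

      rng = np.random.default_rng(10)
      obs_xy = rng.uniform(0, 1, (200, 2))
      obs_val = np.sin(3 * obs_xy[:, 0]) * np.cos(3 * obs_xy[:, 1])
      gx, gy = np.meshgrid(np.linspace(0, 1, 21), np.linspace(0, 1, 21))
      grid_xy = np.column_stack([gx.ravel(), gy.ravel()])
      pred, sigma = lsc_predict(obs_xy, obs_val, 0.01 * np.ones(200), grid_xy)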

  16. FORTRAN programs to process Magsat data for lithospheric, external field, and residual core components

    NASA Technical Reports Server (NTRS)

    Alsdorf, Douglas E.; Vonfrese, Ralph R. B.

    1994-01-01

    The FORTRAN programs supplied in this document provide a complete processing package for statistically extracting residual core, external field, and lithospheric components in Magsat observations. To process the individual passes: (1) orbits are separated into dawn and dusk local times and by altitude, (2) passes are selected based on the variance of the magnetic field observations after a least-squares fit of the core field is removed from each pass over the study area, and (3) spatially adjacent passes are processed with a Fourier correlation coefficient filter to separate coherent and non-coherent features between neighboring tracks. In the second stage of map processing: (1) data from the passes are normalized to a common altitude and gridded into dawn and dusk maps with least-squares collocation, (2) dawn and dusk maps are correlated with a Fourier correlation coefficient filter to separate coherent and non-coherent features; the coherent features are averaged to produce a total field grid, (3) total field grids from all altitudes are continued to a common altitude, correlation filtered for coherent anomaly features, and subsequently averaged to produce the final total field grid for the study region, and (4) the total field map is differentially reduced to the pole.

  17. National Assessment of Energy Storage for Grid Balancing and Arbitrage: Phase 1, WECC

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kintner-Meyer, Michael CW; Balducci, Patrick J.; Colella, Whitney G.

    2012-06-01

    To examine the role that energy storage could play in mitigating the impacts of the stochastic variability of wind generation on regional grid operation, the Pacific Northwest National Laboratory (PNNL) examined a hypothetical 2020 grid scenario in which additional wind generation capacity is built to meet renewable portfolio standard targets in the Western Interconnection. PNNL developed a stochastic model for estimating the balancing requirements using historical wind statistics and forecasting error, a detailed engineering model to analyze the dispatch of energy storage and fast-ramping generation devices for estimating size requirements of energy storage and generation systems for meeting new balancing requirements, and financial models for estimating the life-cycle cost of storage and generation systems in addressing the future balancing requirements for sub-regions in the Western Interconnection. Evaluated technologies include combustion turbines, sodium sulfur (Na-S) batteries, lithium ion batteries, pumped-hydro energy storage, compressed air energy storage, flywheels, redox flow batteries, and demand response. Distinct power and energy capacity requirements were estimated for each technology option, and battery size was optimized to minimize costs. Modeling results indicate that in a future power grid with high penetration of renewables, the most cost-competitive technologies for meeting balancing requirements include Na-S batteries and flywheels.

  18. Enhancing adaptive sparse grid approximations and improving refinement strategies using adjoint-based a posteriori error estimates

    DOE PAGES

    Jakeman, J. D.; Wildey, T.

    2015-01-01

    In this paper we present an algorithm for adaptive sparse grid approximations of quantities of interest computed from discretized partial differential equations. We use adjoint-based a posteriori error estimates of the interpolation error in the sparse grid to enhance the sparse grid approximation and to drive adaptivity. We show that utilizing these error estimates provides significantly more accurate functional values for random samples of the sparse grid approximation. We also demonstrate that alternative refinement strategies based upon a posteriori error estimates can lead to further increases in accuracy in the approximation over traditional hierarchical surplus based strategies. Throughout this paper we also provide and test a framework for balancing the physical discretization error with the stochastic interpolation error of the enhanced sparse grid approximation.

  19. Stochastic Characterization of Communication Network Latency for Wide Area Grid Control Applications.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ameme, Dan Selorm Kwami; Guttromson, Ross

    This report characterizes communications network latency under various network topologies and qualities of service (QoS). The characterizations are probabilistic in nature, allowing deeper analysis of stability for Internet Protocol (IP) based feedback control systems used in grid applications. The work involves the use of Raspberry Pi computers as a proxy for a controlled resource, and an ns-3 network simulator on a Linux server to create an experimental platform (testbed) that can be used to model wide-area grid control network communications in the smart grid. The Modbus protocol is used for information transport, and the Routing Information Protocol is used for dynamic route selection within the simulated network.

  20. Spatio-temporal representativeness of ground-based downward solar radiation measurements

    NASA Astrophysics Data System (ADS)

    Schwarz, Matthias; Wild, Martin; Folini, Doris

    2017-04-01

    Surface solar radiation (SSR) is most directly observed with ground-based pyranometer measurements. Besides measurement uncertainties arising from the pyranometer instrument itself, errors attributable to the limited spatial representativeness of observations from single sites for their large-scale surroundings have to be taken into account when using such measurements for energy balance studies. In this study, the spatial representativeness of 157 homogeneous European downward surface solar radiation time series from the Global Energy Balance Archive (GEBA) and the Baseline Surface Radiation Network (BSRN) was examined for the period 1983-2015 by using the high-resolution (0.05°) surface solar radiation data set from the Satellite Application Facility on Climate Monitoring (CM-SAF SARAH) as a proxy for the spatio-temporal variability of SSR. By correlating deseasonalized monthly SSR time series from surface observations against single collocated satellite-derived SSR time series, a mean spatial correlation pattern was calculated and validated against purely observation-based patterns. Correlations were generally found to decrease with increasing distance from the station, with high correlations (R² = 0.7) in proximity to the observational sites (±0.5°). When correlating surface observations against time series from spatially averaged satellite-derived SSR data (thereby simulating coarser and coarser grids), very high correspondence between sites and the collocated pixels was found for pixel sizes up to several degrees. Moreover, special focus was put on the quantification of errors which arise in conjunction with spatial sampling when estimating the temporal variability and trends of a larger region from a single surface observation site. For 15-year trends on a 1° grid, errors due to spatial sampling on the order of half of the measurement uncertainty of monthly mean values were found.

  1. Towards resiliency with micro-grids: Portfolio optimization and investment under uncertainty

    NASA Astrophysics Data System (ADS)

    Gharieh, Kaveh

    Energy security and a sustained supply of power are critical for community welfare and economic growth. In the face of the increased frequency and intensity of extreme weather conditions, which can result in power grid outages, the value of micro-grids in improving communities' power reliability and resiliency is becoming more important. Micro-grids' capability to operate in islanded mode under stressed conditions dramatically decreases the economic loss of critical infrastructure during power shortages. More widespread participation of micro-grids in the wholesale energy market in the near future makes the development of new investment models necessary; market and price risks in the short and long term, along with the impacts of risk factors, must be taken into consideration in developing such models. This work proposes a set of models and tools to address different problems associated with micro-grid assets, including optimal portfolio selection, investment, and financing, at both the community level and the level of a sample critical infrastructure (a wastewater treatment plant). The models account for short-term operational volatilities and long-term market uncertainties. A number of analytical methodologies and financial concepts have been adopted to develop the aforementioned models, as follows. (1) Capital budgeting planning and portfolio optimization models with Monte Carlo stochastic scenario generation are applied to derive the optimal investment decision for a portfolio of micro-grid assets, considering risk factors and multiple sources of uncertainty. (2) Real Option theory, Monte Carlo simulation, and stochastic optimization techniques are applied to obtain optimal modularized investment decisions for hydrogen tri-generation systems in wastewater treatment facilities, considering multiple sources of uncertainty. (3) The Public-Private Partnership (PPP) financing concept, coupled with an investment horizon approach, is applied to estimate the public and private parties' revenue shares from a community-level micro-grid project over the assets' lifetime, considering their optimal operation under uncertainty.

  2. Regional models of the gravity field from terrestrial gravity data of heterogeneous quality and density

    NASA Astrophysics Data System (ADS)

    Talvik, Silja; Oja, Tõnis; Ellmann, Artu; Jürgenson, Harli

    2014-05-01

    Gravity field models on a regional scale are needed for a number of applications, for example national geoid computation, processing of precise levelling data, and geological modelling. Thus the methods applied for modelling the gravity field from surveyed gravimetric information need to be considered carefully. The influence of using different gridding methods, the inclusion of unit or realistic weights, and indirect gridding of free-air anomalies (FAA) are investigated in this study. Known gridding methods such as kriging (KRIG), least-squares collocation (LSCO), continuous curvature (CCUR), and optimal Delaunay triangulation (ODET) are used for the production of gridded gravity field surfaces. As the quality of the collected data varies considerably depending on the methods and instruments available or used in surveying, it is important to weight the input data appropriately. This puts additional demands on data maintenance, as accuracy information needs to be available for each data point participating in the modelling; this is complicated by older gravity datasets, where the uncertainties of not only gravity values but also supplementary information, such as survey point position, are not always known very accurately. A number of gravity field applications (e.g. geoid computation) demand an FAA model, the acquisition of which is also investigated. Instead of direct gridding, it could be more appropriate to proceed with indirect FAA modelling using a Bouguer anomaly grid to reduce the effect of topography on the resulting FAA model (e.g. near terraced landforms). The inclusion of different gridding methods, weights, and indirect FAA modelling helps to improve gravity field modelling methods. It becomes possible to estimate the impact of varying methodological approaches on gravity field modelling as statistical output is compared. Such knowledge helps assess the accuracy of gravity field models and their effect on the aforementioned applications.

  3. Learning Collocations: Do the Number of Collocates, Position of the Node Word, and Synonymy Affect Learning?

    ERIC Educational Resources Information Center

    Webb, Stuart; Kagimoto, Eve

    2011-01-01

    This study investigated the effects of three factors (the number of collocates per node word, the position of the node word, synonymy) on learning collocations. Japanese students studying English as a foreign language learned five sets of 12 target collocations. Each collocation was presented in a single glossed sentence. The number of collocates…

  4. Evaluating the design of an earth radiation budget instrument with system simulations. Part 2: Minimization of instantaneous sampling errors for CERES-I

    NASA Technical Reports Server (NTRS)

    Stowe, Larry; Hucek, Richard; Ardanuy, Philip; Joyce, Robert

    1994-01-01

    Much of the new record of broadband earth radiation budget satellite measurements to be obtained during the late 1990s and early twenty-first century will come from the dual-radiometer Clouds and the Earth's Radiant Energy System Instrument (CERES-I) flown aboard sun-synchronous polar orbiters. Simulation studies conducted in this work for an early afternoon satellite orbit indicate that spatial root-mean-square (rms) sampling errors of instantaneous CERES-I shortwave flux estimates will range from about 8.5 to 14.0 W/sq m on a 2.5 deg latitude and longitude grid resolution. Rms errors in longwave flux estimates are only about 20% as large, ranging from 1.5 to 3.5 W/sq m. These results are based on an optimal cross-track scanner design that includes 50% footprint overlap to eliminate gaps in the top-of-the-atmosphere coverage, and a 'smallest' footprint size to increase the ratio of the number of observations lying within grid areas to the number lying on grid area boundaries. Total instantaneous measurement error also depends on the variability of anisotropic reflectance and emission patterns and on the retrieval methods used to generate target-area fluxes. Three retrieval procedures using both CERES-I scanners (cross-track and rotating azimuth plane) are considered. (1) The baseline Earth Radiation Budget Experiment (ERBE) procedure, which assumes that errors due to the use of mean angular dependence models (ADMs) in the radiance-to-flux inversion process nearly cancel when averaged over grid areas. (2) The collocation method: to estimate N, instantaneous ADMs are estimated from the multiangular, collocated observations of the two scanners; these observed models replace the mean models in the computation of satellite flux estimates. (3) The scene flux approach, which conducts separate target-area retrievals for each ERBE scene category and combines their results using area weighting by scene type. The ERBE retrieval performs best when the simulated radiance field departs from the ERBE mean models by less than 10%. For larger perturbations, both the scene flux and collocation methods produce less error than the ERBE retrieval. The scene flux technique is preferable, however, because it involves fewer restrictive assumptions.

  5. A Petri Net model for distributed energy system

    NASA Astrophysics Data System (ADS)

    Konopko, Joanna

    2015-12-01

    Electrical networks need to evolve to become more intelligent, more flexible, and less costly. The smart grid is the next generation of power delivery; it uses two-way flows of electricity and information to create a distributed, automated energy delivery network. Building a comprehensive smart grid is a challenge for system protection, optimization, and energy efficiency. Proper modeling and analysis are needed to build an extensive distributed energy system and intelligent electricity infrastructure. In this paper, a complete model of the smart grid is proposed using Generalized Stochastic Petri Nets (GSPN). Simulation of the created model is also explored; it allows analysis of how close the behavior of the model is to the usage of the real smart grid.

  6. Solar irradiance assessment in insular areas using Himawari-8 satellite images

    NASA Astrophysics Data System (ADS)

    Liandrat, O.; Cros, S.; Turpin, M.; Pineau, J. F.

    2016-12-01

    The high amount of surface solar irradiance (SSI) in the tropics is an advantage for profitable PV production. It will allow many tropical islands to pursue their economic growth with clean, affordable, and locally produced energy. However, the local meteorological conditions induce a very high variability, which is problematic for safe and gainful injection into the power grid. This issue is even more critical in non-interconnected territories, where network stability is an absolute necessity. Therefore, the injection of PV power is legally limited in some European overseas territories. In this context, intraday irradiance forecasting (several hours ahead) is particularly useful to mitigate the production variability by reducing the cost of power storage management. At this time scale, cloud cover evolves with a stochastic behaviour that is not properly represented in numerical weather prediction (NWP) models. Analysing cloud motion using images from geostationary meteorological satellites is a well-known alternative for forecasting SSI up to 6 hours ahead with better accuracy than NWP models. In this study, we present and apply our satellite-based solar irradiance forecasting methods over two measurement sites located in the field of view of the satellite Himawari-8: the Cocos (Keeling) Islands (Australia) and New Caledonia (France). In particular, we converted 4 months of images from the Himawari-8 visible channel into cloud index maps. Then, we applied an algorithm computing a cloud motion vector field from a short sequence of consecutive images. Comparisons between forecasted SSI at a 1-hour time horizon and collocated pyranometric measurements show a relative RMSE between 20 and 27%. Error sources related to the tropical insular context (coastal area heterogeneity, sub-pixel scale orographic cloud appearance, convective situations…) are discussed for each implementation step of the different methods.

  7. The Time Dependent Propensity Function for Acceleration of Spatial Stochastic Simulation of Reaction-Diffusion Systems

    PubMed Central

    Wu, Sheng; Li, Hong; Petzold, Linda R.

    2015-01-01

    The inhomogeneous stochastic simulation algorithm (ISSA) is a fundamental method for spatial stochastic simulation. However, when diffusion events occur more frequently than reaction events, simulating the diffusion events by ISSA is quite costly. To reduce this cost, we propose to use the time dependent propensity function in each step. In this way we can avoid simulating individual diffusion events, and use the time interval between two adjacent reaction events as the simulation stepsize. We demonstrate that the new algorithm can achieve orders of magnitude efficiency gains over widely-used exact algorithms, scales well with increasing grid resolution, and maintains a high level of accuracy. PMID:26609185
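    To see why skipping individual diffusion events pays off, the baseline ISSA can be sketched on a 1D grid: every molecule's diffusion jump is its own event, so diffusion events dominate the event count whenever the jump rate exceeds the reaction rates. The rates below are assumptions for illustration.

      import numpy as np

      rng = np.random.default_rng(8)
      nvox, t_end = 20, 1.0
      d = 50.0     # per-molecule diffusion jump rate (assumed)
      k = 0.1      # per-molecule degradation rate (assumed)
      x = rng.poisson(10, nvox).astype(float)

      t, n_rxn, n_dif = 0.0, 0, 0
      while t < t_end:
          props = np.concatenate([k * x, d * x])   # reaction then diffusion channels
          a0 = props.sum()
          if a0 == 0:
              break
          t += rng.exponential(1.0 / a0)
          j = rng.choice(props.size, p=props / a0)
          if j < nvox:                 # degradation event in voxel j
              x[j] -= 1
              n_rxn += 1
          else:                        # diffusion: jump to a random neighbour
              i = j - nvox
              x[i] -= 1
              x[np.clip(i + rng.choice([-1, 1]), 0, nvox - 1)] += 1  # reflecting walls
              n_dif += 1
      print(n_rxn, n_dif)   # diffusion events vastly outnumber reactions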

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Malikopoulos, Andreas; Djouadi, Seddik M; Kuruganti, Teja

    We consider the optimal stochastic control problem for home energy systems with solar and energy storage devices when the demand is realized from the grid. The demand is subject to Brownian motions with both drift and variance parameters modulated by a continuous-time Markov chain that represents the regime of electricity prices. We model the systems as pure stochastic differential equation models, and then follow the completing-the-square technique to solve the stochastic home energy management problem. The effectiveness of the proposed approach is validated through a simulation example. For practical situations with constraints consistent with those studied here, our results imply that the proposed framework could reduce the electricity cost of short-term purchases in the peak-hour market.

  9. The Use of Verb Noun Collocations in Writing Stories among Iranian EFL Learners

    ERIC Educational Resources Information Center

    Bazzaz, Fatemeh Ebrahimi; Samad, Arshad Abd

    2011-01-01

    An important aspect of native speakers' communicative competence is collocational competence which involves knowing which words usually come together and which do not. This paper investigates the possible relationship between knowledge of collocations and the use of verb noun collocation in writing stories because collocational knowledge…

  10. Developing and Evaluating a Chinese Collocation Retrieval Tool for CFL Students and Teachers

    ERIC Educational Resources Information Center

    Chen, Howard Hao-Jan; Wu, Jian-Cheng; Yang, Christine Ting-Yu; Pan, Iting

    2016-01-01

    The development of collocational knowledge is important for foreign language learners; unfortunately, learners often have difficulties producing proper collocations in the target language. Among the various ways of collocation learning, the DDL (data-driven learning) approach encourages the independent learning of collocations and allows learners…

  11. 3D-calibration of three- and four-sensor hot-film probes based on collocated sonic using neural networks

    NASA Astrophysics Data System (ADS)

    Kit, Eliezer; Liberzon, Dan

    2016-09-01

    High resolution measurements of turbulence in the atmospheric boundary layer (ABL) are critical to the understanding of physical processes and the parameterization of important quantities, such as the turbulent kinetic energy dissipation. The low spatio-temporal resolution of standard atmospheric instruments, sonic anemometers and LIDARs, limits their suitability for fine-scale measurements of the ABL. The use of miniature hot-films is an alternative technique, although such probes require frequent calibration, which is logistically untenable in field setups. Accurate and truthful calibration is crucial for multi-hot-film applications in atmospheric studies, because the ability to conduct calibration in situ ultimately determines the quality of the turbulence measurements. Kit et al (2010 J. Atmos. Ocean. Technol. 27 23-41) described a novel methodology for calibration of hot-film probes using a collocated sonic anemometer combined with a neural network (NN) approach. An important step in the algorithm is the generation of a calibration set for NN training by an appropriate low-pass filtering of the high-resolution voltages measured by the hot-film sensors and the low-resolution velocities acquired by the sonic. In Kit et al (2010 J. Atmos. Ocean. Technol. 27 23-41), Kit and Grits (2011 J. Atmos. Ocean. Technol. 28 104-10), and Vitkin et al (2014 Meas. Sci. Technol. 25 75801), the authors reported on the successful use of this approach for in situ calibration, but also on the method's limitations and restricted range of applicability. In their earlier work, a jet facility and a probe comprised of two orthogonal x-hot-films were used for calibration and for full dataset generation. In the current work, a comprehensive laboratory study of the 3D-calibration of two multi-hot-film probes (triple- and four-sensor) using a grid flow was conducted. The probes were embedded in a collocated sonic, and their relative pitch and yaw orientation to the mean flow was changed by means of motorized traverses. The study demonstrated that NN-calibration is a powerful tool for the calibration of multi-sensor 3D hot-film probes embedded in a collocated sonic, and can be employed in long-lasting field campaigns.
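    A minimal stand-in for the NN transfer function can be built with scikit-learn: synthetic 'sonic' velocities and hot-film voltages from an assumed King's-law-like response replace the real collocated, low-pass-filtered signals, and a small multilayer perceptron learns the voltages-to-velocity map:

      import numpy as np
      from sklearn.neural_network import MLPRegressor

      rng = np.random.default_rng(9)
      n = 5000
      uvw = rng.uniform([-2, -2, -1], [6, 2, 1], (n, 3))   # 'sonic' velocities (m/s)
      speed = np.linalg.norm(uvw, axis=1)
      # assumed King's-law-like response of three films plus directional sensitivity
      volts = np.sqrt(1.0 + 0.8 * np.sqrt(speed))[:, None] * np.array([1.0, 0.9, 1.1])
      volts += 0.3 * uvw / (1.0 + speed)[:, None]
      volts += rng.normal(0, 0.005, volts.shape)           # sensor noise

      net = MLPRegressor(hidden_layer_sizes=(25, 25), max_iter=2000, random_state=0)
      net.fit(volts[:4000], uvw[:4000])                    # calibration set
      print("held-out R^2:", net.score(volts[4000:], uvw[4000:]))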

  12. Corpus-Aided Business English Collocation Pedagogy: An Empirical Study in Chinese EFL Learners

    ERIC Educational Resources Information Center

    Chen, Lidan

    2017-01-01

    This study reports an empirical study of explicit instruction in corpus-aided Business English collocations and verifies its effectiveness in improving learners' collocation awareness and learner autonomy, which results in a significant improvement of learners' collocation competence. An eight-week instruction in keywords' collocations,…

  13. Perceptions on L2 Lexical Collocation Translation with a Focus on English-Arabic

    ERIC Educational Resources Information Center

    Alqaed, Mai Abdullah

    2017-01-01

    This paper aims to shed light on recent research concerning translating English-Arabic lexical collocations. It begins with a brief overview of English and Arabic lexical collocations with reference to specialized dictionaries. Research views on translating lexical collocations are presented, with the focus on English-Arabic collocations. These…

  14. Linear and nonlinear properties of numerical methods for the rotating shallow water equations

    NASA Astrophysics Data System (ADS)

    Eldred, Chris

    The shallow water equations provide a useful analogue of the fully compressible Euler equations, since they have similar conservation laws, many of the same types of waves, and a similar (quasi-)balanced state. It is desirable that numerical models possess similar properties, and the prototypical example of such a scheme is the 1981 Arakawa and Lamb (AL81) staggered (C-grid) total energy and potential enstrophy conserving scheme, based on the vector-invariant form of the continuous equations. However, this scheme is restricted to a subset of logically square, orthogonal grids. The current work extends the AL81 scheme to arbitrary non-orthogonal polygonal grids, by combining Hamiltonian methods (work done by Salmon, Gassmann, Dubos and others) and Discrete Exterior Calculus (Thuburn, Cotter, Dubos, Ringler, Skamarock, Klemp and others). It is also possible to obtain these properties (along with arguably superior wave dispersion properties) through the use of a collocated (Z-grid) scheme based on the vorticity-divergence form of the continuous equations. Unfortunately, existing examples of these schemes in the literature for general, spherical grids either contain computational modes or do not conserve total energy and potential enstrophy. This dissertation extends an existing scheme for planar grids to spherical grids, through the use of Nambu brackets (as pioneered by Rick Salmon). To compare these two schemes, the linear modes (balanced states, stationary modes and propagating modes; with and without dissipation) are examined on both uniform planar grids (square, hexagonal) and quasi-uniform spherical grids (geodesic, cubed-sphere). In addition to evaluating the linear modes, the results of the two schemes applied to a set of standard shallow water test cases and a recently developed forced-dissipative turbulence test case from John Thuburn (intended to evaluate the suitability of schemes as the basis for a climate model) on both hexagonal-pentagonal icosahedral grids and cubed-sphere grids are presented. Finally, some remarks and thoughts about the suitability of these two schemes as the basis for atmospheric dynamical core development are given.

  15. Investigating ESL Learners' Lexical Collocations: The Acquisition of Verb + Noun Collocations by Japanese Learners of English

    ERIC Educational Resources Information Center

    Miyakoshi, Tomoko

    2009-01-01

    Although it is widely acknowledged that collocations play an important part in second language learning, especially at intermediate-advanced levels, learners' difficulties with collocations have not been investigated in much detail so far. The present study examines ESL learners' use of verb-noun collocations, such as "take notes," "place an…

  16. An Advanced Framework for Improving Situational Awareness in Electric Power Grid Operation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Yousu; Huang, Zhenyu; Zhou, Ning

    With the deployment of new smart grid technologies and the penetration of renewable energy in power systems, significant uncertainty and variability are being introduced into power grid operation. Traditionally, the Energy Management System (EMS) operates the power grid in a deterministic mode, which will not be sufficient for a future control center in a stochastic environment with faster dynamics. One of the main challenges is to improve situational awareness. This paper reviews the current status of power grid operation and presents a vision of improving wide-area situational awareness for a future control center. An advanced framework, consisting of parallel state estimation, state prediction, parallel contingency selection, parallel contingency analysis, and advanced visual analytics, is proposed to provide the capabilities needed for better decision support, utilizing high performance computing (HPC) techniques and advanced visual analytic techniques. Research results are presented to support the proposed vision and framework.

  17. A Petri Net model for distributed energy system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Konopko, Joanna

    2015-12-31

    Electrical networks need to evolve to become more intelligent, more flexible and less costly. The smart grid, the next generation of the power system, uses two-way flows of electricity and information to create a distributed, automated energy delivery network. Building a comprehensive smart grid is a challenge for system protection, optimization and energy efficiency. Proper modeling and analysis are needed to build an extensive distributed energy system and intelligent electricity infrastructure. In this paper, a complete model of the smart grid is proposed using Generalized Stochastic Petri Nets (GSPN), and simulation of the created model is explored. The simulation makes it possible to analyze how closely the behavior of the model matches the usage of a real smart grid.
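
    To make the GSPN machinery concrete, the sketch below simulates a toy stochastic Petri net with Gillespie-style race semantics. The places, transitions, and rates are hypothetical illustrations, not the paper's smart-grid model.

```python
import random

# Minimal stochastic Petri net simulator (Gillespie-style). The net is a toy
# generator fail/repair cycle; all names and rates are hypothetical.

marking = {"generating": 3, "failed": 0}   # tokens per place

# Timed transitions: (inputs, outputs, base rate). A transition is enabled
# when every input place holds enough tokens; it fires at an exponential rate
# equal to the base rate times the number of enabling token sets.
transitions = {
    "fail":   ({"generating": 1}, {"failed": 1},     0.05),
    "repair": ({"failed": 1},     {"generating": 1}, 0.50),
}

def enabled_rate(inputs, rate):
    k = min(marking[p] // n for p, n in inputs.items())
    return rate * k

t, t_end = 0.0, 100.0
while t < t_end:
    rates = {name: enabled_rate(ins, r) for name, (ins, outs, r) in transitions.items()}
    total = sum(rates.values())
    if total == 0.0:
        break                                # dead marking, nothing can fire
    t += random.expovariate(total)           # time to the next firing
    u, acc = random.uniform(0.0, total), 0.0 # pick which transition fires
    for name, (ins, outs, r) in transitions.items():
        acc += rates[name]
        if u <= acc:
            for p, n in ins.items():
                marking[p] -= n
            for p, n in outs.items():
                marking[p] = marking.get(p, 0) + n
            break
    print(f"t={t:6.2f}  marking={marking}")
```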

  18. Investigating the Seasonal and Diurnal Evolution of Fog and its Effect on the Hydrometeorological Regime in the Southern Appalachian Mountains Using a Mobile Observing Platform

    NASA Astrophysics Data System (ADS)

    Wilson, A. M.; Barros, A.

    2015-12-01

    Accurate, high-resolution observations of fog and low clouds in regions of complex terrain are largely unavailable, due to a lack of existing in situ observations and obstacles to satellite observation such as ground clutter. For the past year, a mobile observing platform including a ground-based passive cavity aerosol spectrometer probe (PCASP-X2), an optical disdrometer (PARSIVEL-2), a tipping bucket rain gauge, and a Vaisala weather station, collocated with a Micro Rain Radar, has been recording observations in valley locations in the inner mountain region of the Southern Appalachian Mountains (SAM). In 2014, the SAM hosted a Global Precipitation Mission field campaign (the Integrated Precipitation and Hydrology Experiment), during which the platform was also collocated at various times with a microwave radiometer, W- and X-band radars, a Pluvio weighing rain gauge, and a 2D video disdrometer, among other instruments. These observations will be discussed in the context of previous findings based on observations and model results (a stochastic column model and the Advanced Research version of the Weather Research and Forecasting model (WRF)). Specifically, previous work has found that seeder-feeder processes govern the enhancement of light rainfall in the SAM through increased coalescence efficiency in stratiform rainfall due to interactions with low-level clouds and topography-modulated fog. This presentation will focus on measurements made by the platform and collocated instruments, as well as observations made by fog collectors on ridges, with the aim of developing a process-based understanding of the characteristics of low cloud and fog by describing the diurnal cycle of microphysical and dynamical processes and properties in the region. The overarching goal is to employ observations of the formation and evolution of the "feeder" clouds and fog to further understand the magnitude and function of their contribution to the local hydrometeorological regime.

  19. Are Nonadjacent Collocations Processed Faster?

    ERIC Educational Resources Information Center

    Vilkaite, Laura

    2016-01-01

    Numerous studies have shown processing advantages for collocations, but they only investigated processing of adjacent collocations (e.g., "provide information"). However, in naturally occurring language, nonadjacent collocations ("provide" some of the "information") are equally, if not more frequent. This raises the…

  20. Efficient Operation of a Multi-purpose Reservoir in Chile: Integration of Economic Water Value for Irrigation and Hydropower

    NASA Astrophysics Data System (ADS)

    Olivares, M. A.; Gonzalez Cabrera, J. M., Sr.; Moreno, R.

    2016-12-01

    Operation of hydropower reservoirs in Chile is prescribed by an Independent Power System Operator. This study proposes a methodology that integrates power grid operations planning with basin-scale multi-use reservoir operations planning. The aim is to efficiently manage a multi-purpose reservoir in which hydroelectric generation competes with other water uses, most notably irrigation. Hydropower and irrigation are competing water uses due to a seasonality mismatch. Currently, the operation of multi-purpose reservoirs with substantial power capacity is prescribed as the result of a grid-wide cost-minimization model that takes irrigation requirements as constraints. We propose advancing the economic co-optimization of reservoir water use for irrigation and hydropower at the basin level by explicitly introducing the economic value of water for irrigation, represented by a demand function for irrigation water. The proposed methodology uses the solution of a long-term grid-wide operations planning model, a stochastic dual dynamic program (SDDP), to obtain the marginal benefit function for water use in hydropower. This marginal benefit corresponds to the energy price in the power grid as a function of the water availability in the reservoir and the hydrologic scenarios. The function captures the technical and economic aspects of operating the hydropower reservoir within the power grid, and is generated from the dual variable of the power-balance constraint, the optimal reservoir operation, and the hydrologic scenarios used in the SDDP. The economic values of water for irrigation and hydropower are then integrated into a basin-scale stochastic dynamic program, from which stored-water value functions are derived. These value functions are then used to re-optimize reservoir operations under several inflow scenarios.
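
    The final step, deriving stored-water value functions, can be illustrated with a backward stochastic dynamic program over a discretized storage grid. In the sketch below the benefit function and inflow scenarios are hypothetical placeholders for the irrigation demand function and the SDDP-derived energy value described above.

```python
import numpy as np

# Backward stochastic dynamic programming sketch for a single reservoir.
# Benefit function and inflows are hypothetical, not the study's data.

S = np.linspace(0.0, 100.0, 51)          # storage grid (hm3)
inflows = np.array([10.0, 25.0, 40.0])   # equally likely monthly inflow scenarios
T = 12                                   # monthly stages

def stage_benefit(release, month):
    # Hypothetical concave irrigation benefit (summer-weighted) plus a
    # linear hydropower value of released water.
    irr_weight = 1.0 + np.cos(2.0 * np.pi * (month - 1) / 12.0)
    return irr_weight * np.sqrt(release) + 0.2 * release

V = np.zeros((T + 1, S.size))            # terminal value function = 0
for t in range(T - 1, -1, -1):
    for i, s in enumerate(S):
        best = -np.inf
        # Bounding release by s + min inflow keeps all scenarios feasible.
        for r in np.linspace(0.0, s + inflows.min(), 21):
            val = 0.0
            for q in inflows:                       # expectation over inflows
                s_next = min(s + q - r, S[-1])      # water balance with spill
                val += (stage_benefit(r, t + 1)
                        + np.interp(s_next, S, V[t + 1])) / inflows.size
            best = max(best, val)
        V[t, i] = best

# V[0] approximates the stored-water value function at the start of the year.
print("value at half-full storage:", np.interp(50.0, S, V[0]))
```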

  1. Removing Barriers for Effective Deployment of Intermittent Renewable Generation

    NASA Astrophysics Data System (ADS)

    Arabali, Amirsaman

    The stochastic nature of intermittent renewable resources is the main barrier to effective integration of renewable generation. This problem can be studied from feeder-scale and grid-scale perspectives. Two new stochastic methods are proposed to meet the feeder-scale controllable load with a hybrid renewable generation (including wind and PV) and energy storage system. For the first method, an optimization problem is developed whose objective function is the cost of the hybrid system, including the cost of renewable generation and storage, subject to constraints on energy storage and shifted load. A smart-grid strategy is developed to shift the load and match the renewable energy generation and controllable load. Minimizing the cost function guarantees minimum PV and wind generation installation, as well as storage capacity selection, for supplying the controllable load. A confidence coefficient is allocated to each stochastic constraint to indicate the degree to which the constraint is satisfied. In the second method, a stochastic framework is developed for optimal sizing and reliability analysis of a hybrid power system including renewable resources (PV and wind) and an energy storage system. The hybrid power system is optimally sized to satisfy the controllable load with a specified reliability level. A load-shifting strategy is added to provide more flexibility for the system and decrease the installation cost. Load-shifting strategies and their potential impacts on the hybrid system reliability/cost analysis are evaluated through different scenarios. Using a compromise-solution method, the best compromise between reliability and cost is realized for the hybrid system. For the second problem, a grid-scale stochastic framework is developed to examine storage applications and their optimal placement for reducing the social cost and relieving the transmission congestion of wind integration. Storage systems are optimally placed and adequately sized to minimize the sum of operation and congestion costs over a scheduling period. A technical assessment framework is developed to enhance the efficiency of wind integration and evaluate the economics of storage technologies and conventional gas-fired alternatives. The proposed method is used to carry out a cost-benefit analysis for the IEEE 24-bus system and determine the most economical technology. In order to mitigate the financial and technical concerns of renewable energy integration into the power system, a stochastic framework is proposed for transmission grid reinforcement studies in a power system with wind generation. A multi-stage multi-objective transmission network expansion planning (TNEP) methodology is developed which considers the investment cost, absorption of private investment and reliability of the system as the objective functions. A Non-dominated Sorting Genetic Algorithm (NSGA-II) optimization approach is used in combination with a probabilistic optimal power flow (POPF) to determine the Pareto-optimal solutions considering the power system uncertainties. Using a compromise-solution method, the best final plan is then realized based on the decision maker's preferences. The proposed methodology is applied to the IEEE 24-bus Reliability Test System (RTS) to evaluate the feasibility and practicality of the developed planning strategy.
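
    The compromise-solution step used twice above can be illustrated compactly: normalize each objective over the Pareto front and select the plan closest to the ideal point. The objective values below are hypothetical stand-ins for the TNEP objectives.

```python
import numpy as np

# Compromise-programming selection over a Pareto front (hypothetical values).
# Columns: investment cost (min), -private investment absorbed (min),
# reliability index such as expected energy not served (min).
front = np.array([
    [120.0, -35.0, 4.2],
    [150.0, -50.0, 3.1],
    [180.0, -55.0, 2.6],
    [210.0, -58.0, 2.5],
])

f_min, f_max = front.min(axis=0), front.max(axis=0)
norm = (front - f_min) / (f_max - f_min)   # 0 = ideal, 1 = worst, per objective
dist = np.linalg.norm(norm, axis=1)        # Euclidean distance to the ideal point
best_plan = int(np.argmin(dist))
print(f"compromise plan: #{best_plan}, objectives {front[best_plan]}")
```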

  2. 3D electromagnetic modelling of a TTI medium and TTI effects in inversion

    NASA Astrophysics Data System (ADS)

    Jaysaval, Piyoosh; Shantsev, Daniil; de la Kethulle de Ryhove, Sébastien

    2016-04-01

    We present a numerical algorithm for 3D electromagnetic (EM) forward modelling in conducting media with general electric anisotropy. The algorithm is based on the finite-difference discretization of frequency-domain Maxwell's equations on a Lebedev grid, in which all components of the electric field are collocated but half a spatial step staggered with respect to the magnetic field components, which also are collocated. This leads to a system of linear equations that is solved using a stabilized biconjugate gradient method with a multigrid preconditioner. We validate the accuracy of the numerical results for layered and 3D tilted transverse isotropic (TTI) earth models representing typical scenarios used in the marine controlled-source EM method. It is then demonstrated that not taking into account the full anisotropy of the conductivity tensor can lead to misleading inversion results. For simulation data corresponding to a 3D model with a TTI anticlinal structure, a standard vertical transverse isotropic inversion is not able to image a resistor, while for a 3D model with a TTI synclinal structure the inversion produces a false resistive anomaly. If inversion uses the proposed forward solver that can handle TTI anisotropy, it produces resistivity images consistent with the true models.
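
    The linear solve at the core of the algorithm can be illustrated with SciPy's stabilized biconjugate gradient routine. The sketch below substitutes a toy 1-D Laplacian for the Lebedev-grid Maxwell system and omits the multigrid preconditioner; it only shows the Krylov-solver interface.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import bicgstab

# Toy stand-in for the discretized EM system: a 1-D Laplacian. The paper's
# actual operator (Lebedev-grid Maxwell system with a multigrid
# preconditioner) is far richer; this only demonstrates the solver call.
n = 1000
main = 2.0 * np.ones(n)
off = -1.0 * np.ones(n - 1)
A = sp.diags([off, main, off], [-1, 0, 1], format="csr")
b = np.ones(n)

x, info = bicgstab(A, b)   # info == 0 signals convergence
print("converged:", info == 0, " residual:", np.linalg.norm(A @ x - b))
```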

  3. Hybrid deterministic-stochastic modeling of x-ray beam bowtie filter scatter on a CT system.

    PubMed

    Liu, Xin; Hsieh, Jiang

    2015-01-01

    Knowledge of the scatter generated by the bowtie filter (i.e., the x-ray beam compensator) is crucial for providing artifact-free images on CT scanners. Our approach uses a hybrid deterministic-stochastic simulation to estimate the scatter level generated by a bowtie filter made of a material with a low atomic number. First, the major components of the CT system, such as the source, flat filter, bowtie filter, and body phantom, are built into a 3D model. The scattered photon fluence and the primary transmitted photon fluence are simulated by MCNP, a Monte Carlo simulation toolkit. The rejection of scattered photons by the post-patient collimator (anti-scatter grid) is simulated with an analytical formula. A biased sinogram is created by superimposing the scatter signal generated by the simulation onto the primary x-ray beam signal. Finally, images with artifacts are reconstructed from the biased signal. The effect of anti-scatter grid height on scatter rejection is also discussed and demonstrated.

  4. Operation of Power Grids with High Penetration of Wind Power

    NASA Astrophysics Data System (ADS)

    Al-Awami, Ali Taleb

    The integration of wind power into the power grid poses many challenges due to its highly uncertain nature. This dissertation involves two main components related to the operation of power grids with high penetration of wind energy: wind-thermal stochastic dispatch and wind-thermal coordinated bidding in short-term electricity markets. In the first part, a stochastic dispatch (SD) algorithm is proposed that takes into account the stochastic nature of the wind power output. The uncertainty associated with wind power output given the forecast is characterized using conditional probability density functions (CPDF). Several functions are examined to characterize wind uncertainty including Beta, Weibull, Extreme Value, Generalized Extreme Value, and Mixed Gaussian distributions. The unique characteristics of the Mixed Gaussian distribution are then utilized to facilitate the speed of convergence of the SD algorithm. A case study is carried out to evaluate the effectiveness of the proposed algorithm. Then, the SD algorithm is extended to simultaneously optimize the system operating costs and emissions. A modified multi-objective particle swarm optimization algorithm is suggested to identify the Pareto-optimal solutions defined by the two conflicting objectives. A sensitivity analysis is carried out to study the effect of changing load level and imbalance cost factors on the Pareto front. In the second part of this dissertation, coordinated trading of wind and thermal energy is proposed to mitigate risks due to those uncertainties. The problem of wind-thermal coordinated trading is formulated as a mixed-integer stochastic linear program. The objective is to obtain the optimal tradeoff bidding strategy that maximizes the total expected profits while controlling trading risks. For risk control, a weighted term of the conditional value at risk (CVaR) is included in the objective function. The CVaR aims to maximize the expected profits of the least profitable scenarios, thus improving trading risk control. A case study comparing coordinated with uncoordinated bidding strategies depending on the trader's risk attitude is included. Simulation results show that coordinated bidding can improve the expected profits while significantly improving the CVaR.
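
    The risk term can be made concrete with a scenario-based CVaR computation in the Rockafellar-Uryasev style. The two profit distributions below are hypothetical stand-ins for uncoordinated and coordinated bidding outcomes.

```python
import numpy as np

# Scenario-based CVaR evaluation: CVaR of the loss (-profit) is the expected
# loss in the worst (1 - alpha) tail. Profit scenarios are hypothetical.
rng = np.random.default_rng(0)
profits = {
    "uncoordinated": rng.normal(100.0, 40.0, size=1000),
    "coordinated":   rng.normal(95.0, 15.0, size=1000),
}

def cvar_of_profit(p, alpha=0.95):
    losses = np.sort(-p)                       # ascending; worst losses last
    k = int(np.ceil((1.0 - alpha) * losses.size))
    return losses[-k:].mean()

lam = 0.5  # risk-aversion weight on the CVaR term
for name, p in profits.items():
    score = p.mean() - lam * cvar_of_profit(p)
    print(f"{name:14s} E[profit]={p.mean():7.2f}  "
          f"CVaR95(loss)={cvar_of_profit(p):7.2f}  objective={score:7.2f}")
```

    Note how the lower-variance coordinated strategy can win on the weighted objective even with a slightly lower expected profit, which is the tradeoff the dissertation's stochastic program controls.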

  5. Collocations: A Neglected Variable in EFL.

    ERIC Educational Resources Information Center

    Farghal, Mohammed; Obiedat, Hussein

    1995-01-01

    Addresses the issue of collocations as an important and neglected variable in English-as-a-Foreign-Language classes. Two questionnaires, in English and Arabic, involving common collocations relating to food, color, and weather were administered to English majors and English language teachers. Results show both groups deficient in collocations. (36…

  6. 47 CFR Appendix B to Part 1 - Nationwide Programmatic Agreement for the Collocation of Wireless Antennas

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... Collocation of Wireless Antennas B Appendix B to Part 1 Telecommunication FEDERAL COMMUNICATIONS COMMISSION... the Collocation of Wireless Antennas Nationwide Programmatic Agreement for the Collocation of Wireless Antennas Executed by the Federal Communications Commission, the National Conference of State Historic...

  7. 47 CFR Appendix B to Part 1 - Nationwide Programmatic Agreement for the Collocation of Wireless Antennas

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... Collocation of Wireless Antennas B Appendix B to Part 1 Telecommunication FEDERAL COMMUNICATIONS COMMISSION... the Collocation of Wireless Antennas Nationwide Programmatic Agreement for the Collocation of Wireless Antennas Executed by the Federal Communications Commission, the National Conference of State Historic...

  8. 47 CFR Appendix B to Part 1 - Nationwide Programmatic Agreement for the Collocation of Wireless Antennas

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... Collocation of Wireless Antennas B Appendix B to Part 1 Telecommunication FEDERAL COMMUNICATIONS COMMISSION... the Collocation of Wireless Antennas Nationwide Programmatic Agreement for the Collocation of Wireless Antennas Executed by the Federal Communications Commission, the National Conference of State Historic...

  9. 47 CFR Appendix B to Part 1 - Nationwide Programmatic Agreement for the Collocation of Wireless Antennas

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... Collocation of Wireless Antennas B Appendix B to Part 1 Telecommunication FEDERAL COMMUNICATIONS COMMISSION... the Collocation of Wireless Antennas Nationwide Programmatic Agreement for the Collocation of Wireless Antennas Executed by the Federal Communications Commission, the National Conference of State Historic...

  10. 47 CFR Appendix B to Part 1 - Nationwide Programmatic Agreement for the Collocation of Wireless Antennas

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... Collocation of Wireless Antennas B Appendix B to Part 1 Telecommunication FEDERAL COMMUNICATIONS COMMISSION... the Collocation of Wireless Antennas Nationwide Programmatic Agreement for the Collocation of Wireless Antennas Executed by the Federal Communications Commission, the National Conference of State Historic...

  11. Advanced Stochastic Collocation Methods for Polynomial Chaos in RAVEN

    NASA Astrophysics Data System (ADS)

    Talbot, Paul W.

    As experiment complexity in fields such as nuclear engineering continually increases, so does the demand for robust computational methods to simulate them. In many simulations, input design parameters and intrinsic experiment properties are sources of uncertainty. Often small perturbations in uncertain parameters have significant impact on the experiment outcome. For instance, in nuclear fuel performance, small changes in fuel thermal conductivity can greatly affect maximum stress on the surrounding cladding. The difficulty quantifying input uncertainty impact in such systems has grown with the complexity of numerical models. Traditionally, uncertainty quantification has been approached using random sampling methods like Monte Carlo. For some models, the input parametric space and corresponding response output space is sufficiently explored with few low-cost calculations. For other models, it is computationally costly to obtain good understanding of the output space. To combat the expense of random sampling, this research explores the possibilities of using advanced methods in Stochastic Collocation for generalized Polynomial Chaos (SCgPC) as an alternative to traditional uncertainty quantification techniques such as Monte Carlo (MC) and Latin Hypercube Sampling (LHS) methods for applications in nuclear engineering. We consider traditional SCgPC construction strategies as well as truncated polynomial spaces using Total Degree and Hyperbolic Cross constructions. We also consider applying anisotropy (unequal treatment of different dimensions) to the polynomial space, and offer methods whereby optimal levels of anisotropy can be approximated. We contribute development to existing adaptive polynomial construction strategies. Finally, we consider High-Dimensional Model Reduction (HDMR) expansions, using SCgPC representations for the subspace terms, and contribute new adaptive methods to construct them. We apply these methods on a series of models of increasing complexity. We use analytic models of various levels of complexity, then demonstrate performance on two engineering-scale problems: a single-physics nuclear reactor neutronics problem, and a multiphysics fuel cell problem coupling fuels performance and neutronics. Lastly, we demonstrate sensitivity analysis for a time-dependent fuels performance problem. We demonstrate the application of all the algorithms in RAVEN, a production-level uncertainty quantification framework.
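
    The core SCgPC projection step can be sketched in a single random dimension: evaluate the model at Gauss-Hermite collocation points and project onto probabilists' Hermite polynomials. The response function below is a hypothetical stand-in for an engineering model.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermegauss, hermeval
from math import factorial, sqrt, pi

# Stochastic collocation for gPC with one input xi ~ N(0, 1). The model is a
# hypothetical smooth response (e.g., peak stress vs. thermal conductivity).

def model(xi):
    return np.exp(0.3 * xi) + 0.1 * xi**2

P = 6                                        # polynomial order
pts, wts = hermegauss(P + 1)                 # exact for degree <= 2P + 1
wts = wts / sqrt(2.0 * pi)                   # normalize to a probability measure

coeffs = np.zeros(P + 1)
for n in range(P + 1):
    e_n = np.zeros(n + 1); e_n[n] = 1.0      # coefficient vector of He_n
    coeffs[n] = np.sum(wts * hermeval(pts, e_n) * model(pts)) / factorial(n)

mean = coeffs[0]                             # gPC mean
var = sum(factorial(n) * coeffs[n]**2 for n in range(1, P + 1))  # gPC variance
print(f"mean={mean:.6f}  var={var:.6f}")
```

    Sparse grids, anisotropy, and HDMR enter when the input dimension grows, but each reduces to weighted sums of exactly this kind of one-dimensional quadrature rule.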

  12. Evaluation of multiple precipitation products across Mainland China using the triple collocation method without ground truth

    NASA Astrophysics Data System (ADS)

    Tang, G.; Li, C.; Hong, Y.; Long, D.

    2017-12-01

    Proliferation of satellite and reanalysis precipitation products underscores the need to evaluate their reliability, particularly over ungauged or poorly gauged regions. However, it is challenging to perform such evaluations over regions lacking ground truth data. Here, using the triple collocation (TC) method, which is capable of evaluating relative uncertainties in different products without ground truth, we comparatively assess uncertainties in three types of independent precipitation products, i.e., satellite-based, ground-observed, and model reanalysis, over Mainland China: a ground-based precipitation dataset (the China Gauge-based Daily Precipitation Analysis, CGDPA), the European Centre for Medium-Range Weather Forecasts ERA-Interim reanalysis product, and five satellite-based products (3B42V7 and 3B42RT of TMPA, IMERG, CMORPH-CRT, and PERSIANN-CDR), on a regular 0.25° grid at the daily timescale from 2013 to 2015. First, the effectiveness of the TC method is evaluated by comparison with traditional methods based on ground observations in a densely gauged region. Results show that the TC method is reliable because the correlation coefficient (CC) and root mean square error (RMSE) are close to those based on the traditional method, with maximum differences of only 0.08 for CC and 0.71 mm/day for RMSE. Then, the TC method is applied to Mainland China and the Tibetan Plateau (TP). Results indicate that: (1) the overall performance of IMERG is better than the other satellite products over Mainland China; (2) over grid cells without rain gauges in the TP, IMERG and ERA-Interim show better performance than CGDPA, indicating the potential of remote sensing and reanalysis data over these regions and the inherent uncertainty of CGDPA due to interpolation using sparsely gauged data; (3) both TMPA-3B42 and CMORPH-CRT have some unexpected CC values over certain grid cells that contain water bodies, reaffirming the overestimation of precipitation over inland water bodies. Overall, the TC method provides not only reliable cross-validation results of precipitation estimates over Mainland China but also a new perspective from which to comprehensively assess multi-source precipitation products, particularly over poorly gauged regions.
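
    The covariance-notation form of triple collocation is compact enough to sketch directly: given three collocated products with mutually independent errors, the error variances follow from pairwise covariances alone. The synthetic series below are hypothetical stand-ins for satellite, reanalysis, and gauge products.

```python
import numpy as np

# Covariance-based triple collocation: estimate each product's error variance
# without ground truth. Synthetic precipitation-like data, hypothetical noise.

rng = np.random.default_rng(1)
truth = rng.gamma(2.0, 3.0, size=3000)            # unobserved "true" series
x = truth + rng.normal(0.0, 1.0, truth.size)      # e.g., satellite product
y = truth + rng.normal(0.0, 2.0, truth.size)      # e.g., reanalysis product
z = truth + rng.normal(0.0, 0.5, truth.size)      # e.g., gauge analysis

C = np.cov(np.vstack([x, y, z]))
err_var = np.array([
    C[0, 0] - C[0, 1] * C[0, 2] / C[1, 2],        # var of x's error
    C[1, 1] - C[0, 1] * C[1, 2] / C[0, 2],        # var of y's error
    C[2, 2] - C[0, 2] * C[1, 2] / C[0, 1],        # var of z's error
])
print("estimated error std:", np.sqrt(err_var))   # ~ [1.0, 2.0, 0.5]
```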

  13. Full moment tensors with uncertainties for the 2017 North Korea declared nuclear test and for a collocated, subsequent event

    NASA Astrophysics Data System (ADS)

    Alvizuri, C. R.; Tape, C.

    2017-12-01

    A seismic moment tensor is a 3×3 symmetric matrix that characterizes the far-field seismic radiation from a source, whether it be an earthquake, a volcanic event, or an explosion. We estimate full moment tensors and their uncertainties for the North Korea declared nuclear test and for a collocated event that occurred eight minutes later. The nuclear test and the subsequent event occurred on September 3, 2017 at around 03:30 and 03:38 UTC, respectively. We perform a grid search over the six-dimensional space of moment tensors, generating synthetic waveforms at each moment tensor grid point and then evaluating a misfit function between the observed and synthetic waveforms. The synthetic waveforms are computed using a 1-D structure model for the region; this approximation requires careful assessment of time shifts between data and synthetics, as well as careful choice of the bandpass for filtering. For each moment tensor we characterize its uncertainty in terms of waveform misfit, a probability function, and a confidence curve for the probability that the true moment tensor lies within the neighborhood of the optimal moment tensor. For each event we estimate its moment tensor using observed waveforms from all available seismic stations within a 2000-km radius. We use as much of the waveform as possible, including surface waves for all stations, and body waves above 1 Hz for some of the closest stations. Our preliminary magnitude estimates are Mw 5.1-5.3 for the first event and Mw 4.7 for the second event. Our results show a dominantly positive isotropic moment tensor for the first event, and a dominantly negative isotropic moment tensor for the subsequent event. As expected, the details of the probability density, waveform fit, and confidence curves are influenced by the structural model, the choice of filter frequencies, and the selection of stations.
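
    The structure of such a misfit-based grid search can be sketched with a drastically reduced parameter space; here two hypothetical source parameters replace the six moment-tensor components, and a simple analytic pulse replaces the 1-D Earth-model synthetics.

```python
import numpy as np

# Skeleton of a waveform-misfit grid search with two hypothetical parameters:
# an amplitude (standing in for scalar moment) and an isotropic fraction.

t = np.linspace(0.0, 10.0, 500)

def synthetic(amp, iso):
    # Hypothetical synthetic: an "isotropic" pulse plus a "deviatoric" wiggle.
    return amp * (iso * np.exp(-(t - 3.0) ** 2)
                  + (1.0 - iso) * np.sin(2.0 * t) * np.exp(-(t - 5.0) ** 2))

observed = synthetic(2.0, 0.8) + np.random.default_rng(2).normal(0.0, 0.05, t.size)

amps = np.linspace(0.5, 4.0, 36)
isos = np.linspace(0.0, 1.0, 21)
misfit = np.array([[np.sum((observed - synthetic(a, f)) ** 2) for f in isos]
                   for a in amps])

prob = np.exp(-misfit / misfit.min())      # unnormalized probability-like weight
prob /= prob.sum()                         # basis for uncertainty/confidence curves
ia, jf = np.unravel_index(misfit.argmin(), misfit.shape)
print(f"best amplitude = {amps[ia]:.2f}, isotropic fraction = {isos[jf]:.2f}")
```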

  14. Examining Second Language Receptive Knowledge of Collocation and Factors That Affect Learning

    ERIC Educational Resources Information Center

    Nguyen, Thi My Hang; Webb, Stuart

    2017-01-01

    This study investigated Vietnamese EFL learners' knowledge of verb-noun and adjective-noun collocations at the first three 1,000 word frequency levels, and the extent to which five factors (node word frequency, collocation frequency, mutual information score, congruency, and part of speech) predicted receptive knowledge of collocation. Knowledge…

  15. Collocation and Galerkin Time-Stepping Methods

    NASA Technical Reports Server (NTRS)

    Huynh, H. T.

    2011-01-01

    We study the numerical solutions of ordinary differential equations by one-step methods where the solution at t_n is known and that at t_{n+1} is to be calculated. The approaches employed are collocation, continuous Galerkin (CG) and discontinuous Galerkin (DG). Relations among these three approaches are established. A quadrature formula using s evaluation points is employed for the Galerkin formulations. We show that with such a quadrature, the CG method is identical to the collocation method using quadrature points as collocation points. Furthermore, if the quadrature formula is the right Radau one (including t_{n+1}), then the DG and CG methods also become identical, and they reduce to the Radau IIA collocation method. In addition, we present a generalization of DG that yields a method identical to CG and collocation with arbitrary collocation points. Thus, the collocation, CG, and generalized DG methods are equivalent, and the latter two methods can be formulated using the differential instead of integral equation. Finally, all schemes discussed can be cast as s-stage implicit Runge-Kutta methods.
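
    As a concrete instance of the collocation/IRK equivalence, the sketch below applies the two-stage Radau IIA method (the right-Radau collocation scheme discussed above) to the linear test equation y' = λy, where the implicit stage equations reduce to a small linear solve.

```python
import numpy as np

# Two-stage Radau IIA (order 3) on y' = lam * y. For linear f the implicit
# stage equations are a 2x2 linear solve; nonlinear problems would use Newton.

lam = -2.0
A = np.array([[5/12, -1/12],
              [3/4,   1/4]])      # Radau IIA Butcher matrix (c = [1/3, 1])
b = np.array([3/4, 1/4])

h, T = 0.1, 2.0
y = 1.0
for _ in range(int(T / h)):
    # Stage values Y solve (I - h*lam*A) Y = y * [1, 1]^T.
    Y = np.linalg.solve(np.eye(2) - h * lam * A, y * np.ones(2))
    y = y + h * lam * (b @ Y)     # update; equals Y[-1] since the scheme is
                                  # stiffly accurate (b is the last row of A)
print(f"numerical y(2) = {y:.6f},  exact = {np.exp(lam * T):.6f}")
```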

  16. A stochastic parameterization for deep convection using cellular automata

    NASA Astrophysics Data System (ADS)

    Bengtsson, L.; Steinheimer, M.; Bechtold, P.; Geleyn, J.

    2012-12-01

    Cumulus parameterizations used in most operational weather and climate models today are based on the mass-flux concept, which took form in the early 1970s. In such schemes it is assumed that a unique relationship exists between the ensemble-average of the sub-grid convection and the instantaneous state of the atmosphere in a vertical grid-box column. However, such a relationship is unlikely to be described by a simple deterministic function (Palmer, 2011). Thus, because of the statistical nature of the parameterization challenge, the community has recognized that it is important to introduce stochastic elements into the parameterizations (for instance: Plant and Craig, 2008; Khouider et al., 2010; Frenkel et al., 2011; Bengtsson et al., 2011; but the list is far from exhaustive). There are undoubtedly many ways in which stochasticity can enter new developments. In this study we use a two-way-interacting cellular automaton (CA), as its intrinsic nature possesses many qualities interesting for deep convection parameterization. In the one-dimensional entraining plume approach, there is no parameterization of horizontal transport of heat, moisture or momentum due to cumulus convection. In reality, mass transport due to gravity waves that propagate in the horizontal can trigger new convection, which is important for the organization of deep convection (Huang, 1988). The self-organizational characteristics of the CA allow for lateral communication between adjacent NWP model grid boxes, and for temporal memory. Thus the CA scheme used in this study contains three interesting components for the representation of cumulus convection which are not present in the traditional one-dimensional bulk entraining plume method: horizontal communication, memory and stochasticity. The scheme is implemented in the high-resolution regional NWP model ALARO, and simulations show enhanced organization of convective activity along squall lines. Probabilistic evaluation demonstrates an enhanced spread in large-scale variables in regions where convective activity is large. A two-month extended evaluation of the deterministic behaviour of the scheme indicates a neutral impact on forecast skill. References: Bengtsson, L., H. Körnich, E. Källén, and G. Svensson, 2011: Large-scale dynamical response to sub-grid scale organization provided by cellular automata. Journal of the Atmospheric Sciences, 68, 3132-3144. Frenkel, Y., A. Majda, and B. Khouider, 2011: Using the stochastic multicloud model to improve tropical convective parameterization: A paradigm example. Journal of the Atmospheric Sciences, doi: 10.1175/JAS-D-11-0148.1. Huang, X.-Y., 1988: The organization of moist convection by internal gravity waves. Tellus A, 42, 270-285. Khouider, B., J. Biello, and A. Majda, 2010: A Stochastic Multicloud Model for Tropical Convection. Comm. Math. Sci., 8, 187-216. Palmer, T., 2011: Towards the Probabilistic Earth-System Simulator: A Vision for the Future of Climate and Weather Prediction. Quarterly Journal of the Royal Meteorological Society, 138, 841-861. Plant, R. and G. Craig, 2008: A stochastic parameterization for deep convection based on equilibrium statistics. J. Atmos. Sci., 65, 87-105.
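
    A minimal probabilistic CA conveys the three ingredients named above (lateral communication, memory, stochasticity); the grid size, probabilities, and update rule below are hypothetical and far simpler than the ALARO scheme.

```python
import numpy as np

# Toy probabilistic cellular automaton: a cell's chance of becoming "active"
# grows with the number of active neighbours, mimicking the lateral triggering
# of adjacent convection; a finite lifetime provides temporal memory.

rng = np.random.default_rng(3)
grid = (rng.random((50, 50)) < 0.02).astype(int)   # sparse initial convection

for step in range(100):
    # Count the 8 neighbours with periodic boundaries (lateral communication).
    nbrs = sum(np.roll(np.roll(grid, di, 0), dj, 1)
               for di in (-1, 0, 1) for dj in (-1, 0, 1)) - grid
    p_on = 0.05 + 0.15 * nbrs          # triggering probability rises with neighbours
    p_off = 0.3                        # finite lifetime => memory
    birth = (rng.random(grid.shape) < p_on) & (grid == 0)
    death = (rng.random(grid.shape) < p_off) & (grid == 1)
    grid = grid + birth - death
print("active fraction:", grid.mean())
```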

  17. Pricing real estate index options under stochastic interest rates

    NASA Astrophysics Data System (ADS)

    Gong, Pu; Dai, Jun

    2017-08-01

    Real estate derivatives, as new financial instruments, are not merely risk management tools but also provide a novel way to gain exposure to real estate assets without buying or selling the physical assets. Although the real estate derivatives market has exhibited rapid development in recent years, the valuation challenge of real estate derivatives remains a great obstacle to further development of this market. In this paper, we derive a partial differential equation for a claim contingent on a real estate index in a stochastic interest rate environment and propose a modified finite difference method that adopts non-uniform grids to solve this problem. Numerical results confirm the efficiency of the method and indicate that constant interest rate models lead to mispricing of options and that the effects of stochastic interest rates on option prices depend on whether the term structure of interest rates is rising or falling. Finally, we investigate and compare the different effects of stochastic interest rates on European and American option prices.
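
    The basic ingredient of a finite-difference method on non-uniform grids is the second-derivative stencil with unequal spacings. The sketch below builds such weights on a grid clustered around a hypothetical strike level and verifies them on a quadratic, for which the three-point formula is exact.

```python
import numpy as np

# Second-derivative weights on a non-uniform grid. Points are clustered near
# a hypothetical strike, where option-price curvature is largest.

strike = 100.0
xi = np.linspace(-1.0, 1.0, 41)
x = strike + 80.0 * np.sinh(2.0 * xi) / np.sinh(2.0)   # clustered around strike

def d2_weights(xm, x0, xp):
    # Standard three-point stencil for u'' with spacings h- = x0-xm, h+ = xp-x0.
    hm, hp = x0 - xm, xp - x0
    return (2.0 / (hm * (hm + hp)),
            -2.0 / (hm * hp),
            2.0 / (hp * (hm + hp)))

# Verify on u = x^2, whose second derivative is exactly 2 everywhere.
u = x ** 2
i = 20
wl, wc, wr = d2_weights(x[i - 1], x[i], x[i + 1])
print("approx u''(x_i):", wl * u[i - 1] + wc * u[i] + wr * u[i + 1])  # -> 2.0
```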

  18. A framework to analyze the stochastic harmonics and resonance of wind energy grid interconnection

    DOE PAGES

    Cho, Youngho; Lee, Choongman; Hur, Kyeon; ...

    2016-08-31

    This study addresses a modeling and analysis methodology for investigating the stochastic harmonics and resonance concerns of wind power plants (WPPs). Wideband harmonics from modern wind turbines are observed to be stochastic, associated with real power production, and they may adversely interact with the grid impedance and cause unexpected harmonic resonance if not comprehensively addressed in the planning and commissioning of the WPPs. These issues should become more critical as wind penetration levels increase. We thus propose a planning study framework comprising the following functional steps: First, the best-fitted probability density functions (PDFs) of the harmonic components of interest in the frequency domain are determined. In operations planning, maximum likelihood estimation followed by a chi-square test is used once field measurements or manufacturers' data are available. Second, harmonic currents from the WPP are represented by randomly generating harmonic components based on their PDFs (frequency spectrum) and then synthesized for time-domain simulations via inverse Fourier transform. Finally, we conduct a comprehensive assessment by including the impacts of feeder configurations, harmonic filters, and the variability of parameters. We demonstrate the efficacy of the proposed study approach for a 100-MW offshore WPP consisting of 20 units of 5-MW full-converter turbines, a realistic benchmark system adapted from a WPP under development in Korea, and discuss lessons learned through this research.
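
    Step two of the framework, synthesizing time-domain currents from randomly drawn harmonic components, can be sketched as follows. The Weibull parameters per harmonic order are hypothetical placeholders for fitted PDFs, and phases are assumed uniform.

```python
import numpy as np

# Draw harmonic magnitudes from (hypothetical) fitted PDFs and synthesize a
# time-domain current via the inverse FFT, as in step two of the framework.

rng = np.random.default_rng(4)
f0, fs = 60.0, 15360.0                   # fundamental and sampling frequency (Hz)
cycles = 10                              # fundamental periods in the window
n = int(cycles * fs / f0)                # samples in the window
spectrum = np.zeros(n // 2 + 1, dtype=complex)

# Harmonic order -> (Weibull scale, Weibull shape), all values hypothetical.
harmonics = {1: (1.00, 5.0), 5: (0.04, 2.0), 7: (0.03, 2.0), 11: (0.02, 2.0)}
for order, (scale, shape) in harmonics.items():
    mag = scale * rng.weibull(shape)     # random magnitude from the fitted PDF
    phase = rng.uniform(0.0, 2.0 * np.pi)
    # Harmonic 'order' falls in FFT bin order*cycles; n/2 scaling gives a
    # cosine of amplitude 'mag' after the inverse real FFT.
    spectrum[order * cycles] = mag * np.exp(1j * phase) * n / 2

i_t = np.fft.irfft(spectrum, n)          # synthesized current waveform (pu)
print("RMS current (pu):", np.sqrt(np.mean(i_t ** 2)))
```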

  19. A computer code for multiphase all-speed transient flows in complex geometries. MAST version 1.0

    NASA Technical Reports Server (NTRS)

    Chen, C. P.; Jiang, Y.; Kim, Y. M.; Shang, H. M.

    1991-01-01

    The operation of the MAST code, which computes transient solutions to the multiphase flow equations applicable to all-speed flows, is described. Two-phase flows are formulated with an Eulerian-Lagrangian scheme in which the continuous phase is described by the Navier-Stokes equations (or Reynolds equations for turbulent flows) and the dispersed phase by a Lagrangian tracking scheme. The numerical solution algorithm for the fluid flow is a newly developed pressure-implicit algorithm based on an operator-splitting technique in generalized nonorthogonal coordinates. The operator splitting allows separate operations on each of the variable fields to handle pressure-velocity coupling. The resulting pressure correction equation is hyperbolic in nature and is effective for Mach numbers ranging from the incompressible limit to supersonic flow regimes. The present code adopts a nonstaggered grid arrangement; thus, the velocity components and other dependent variables are collocated at the same grid points. A sequence of benchmark-quality problems, including incompressible, subsonic, transonic, supersonic, and gas-droplet two-phase flows, as well as spray-combustion problems, was computed to demonstrate the robustness and accuracy of the present code.

  20. Distributed Relaxation for Conservative Discretizations

    NASA Technical Reports Server (NTRS)

    Diskin, Boris; Thomas, James L.

    2001-01-01

    A multigrid method is defined as having textbook multigrid efficiency (TME) if the solutions to the governing system of equations are attained in a computational work that is a small (less than 10) multiple of the operation count in one target-grid residual evaluation. The way to achieve this efficiency is the distributed relaxation approach. TME solvers employing distributed relaxation have already been demonstrated for nonconservative formulations of high-Reynolds-number viscous incompressible and subsonic compressible flow regimes. The purpose of this paper is to provide foundations for applications of distributed relaxation to conservative discretizations. A direct correspondence between the primitive variable interpolations for calculating fluxes in conservative finite-volume discretizations and stencils of the discretized derivatives in the nonconservative formulation has been established. Based on this correspondence, one can arrive at a conservative discretization which is very efficiently solved with a nonconservative relaxation scheme; this is demonstrated for a conservative discretization of the quasi-one-dimensional Euler equations. Formulations for both staggered and collocated grid arrangements are considered, and extensions of the general procedure to multiple dimensions are discussed.

  1. English Learners' Knowledge of Prepositions: Collocational Knowledge or Knowledge Based on Meaning?

    ERIC Educational Resources Information Center

    Mueller, Charles M.

    2011-01-01

    Second language (L2) learners' successful performance in an L2 can be partly attributed to their knowledge of collocations. In some cases, this knowledge is accompanied by knowledge of the semantic and/or grammatical patterns that motivate the collocation. At other times, collocational knowledge may serve a compensatory role. To determine the…

  2. 47 CFR 51.321 - Methods of obtaining interconnection and access to unbundled elements under section 251 of the Act.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... elements include, but are not limited to: (1) Physical collocation and virtual collocation at the premises... seeking a particular collocation arrangement, either physical or virtual, is entitled to a presumption... incumbent LEC shall be required to provide virtual collocation, except at points where the incumbent LEC...

  3. Frequency of Input and L2 Collocational Processing: A Comparison of Congruent and Incongruent Collocations

    ERIC Educational Resources Information Center

    Wolter, Brent; Gyllstad, Henrik

    2013-01-01

    This study investigated the influence of frequency effects on the processing of congruent (i.e., having an equivalent first language [L1] construction) collocations and incongruent (i.e., not having an equivalent L1 construction) collocations in a second language (L2). An acceptability judgment task was administered to native and advanced…

  4. Corpus-Based versus Traditional Learning of Collocations

    ERIC Educational Resources Information Center

    Daskalovska, Nina

    2015-01-01

    One of the aspects of knowing a word is the knowledge of which words it is usually used with. Since knowledge of collocations is essential for appropriate and fluent use of language, learning collocations should have a central place in the study of vocabulary. There are different opinions about the best ways of learning collocations. This study…

  5. Collocations in Corpus-Based Language Learning Research: Identifying, Comparing, and Interpreting the Evidence

    ERIC Educational Resources Information Center

    Gablasova, Dana; Brezina, Vaclav; McEnery, Tony

    2017-01-01

    This article focuses on the use of collocations in language learning research (LLR). Collocations, as units of formulaic language, are becoming prominent in our understanding of language learning and use; however, while the number of corpus-based LLR studies of collocations is growing, there is still a need for a deeper understanding of factors…

  6. Multifidelity, Multidisciplinary Design Under Uncertainty with Non-Intrusive Polynomial Chaos

    NASA Technical Reports Server (NTRS)

    West, Thomas K., IV; Gumbert, Clyde

    2017-01-01

    The primary objective of this work is to develop an approach for multifidelity uncertainty quantification and to lay the framework for future design-under-uncertainty efforts. In this study, multifidelity refers both to the fidelity of the modeling of the physical systems and to the difference in the uncertainty in each of the models. For computational efficiency, a multifidelity surrogate modeling approach based on non-intrusive polynomial chaos using the point-collocation technique is developed for the treatment of both multifidelity modeling and multifidelity uncertainty modeling. Two stochastic model problems are used to demonstrate the developed methodologies: a transonic airfoil model and a multidisciplinary aircraft analysis model. The results of both showed that the multifidelity modeling approach was able to predict the output uncertainty of the high-fidelity model at a significant reduction in computational cost.
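
    Point-collocation non-intrusive polynomial chaos reduces to an oversampled least-squares fit of chaos coefficients. The sketch below shows the one-dimensional uniform-input case with a hypothetical response; the mean and variance then follow from the coefficients and the Legendre norms.

```python
import numpy as np
from numpy.polynomial.legendre import legvander

# Point-collocation NIPC in one uniform random dimension: evaluate the model
# at sampled points and solve for the chaos coefficients by least squares
# (here with 2x oversampling). The response function is hypothetical.

rng = np.random.default_rng(5)

def model(xi):                        # xi ~ U[-1, 1]
    return 1.0 + 0.5 * xi + 0.2 * np.sin(2.0 * xi)

P = 5                                 # chaos order
xi = rng.uniform(-1.0, 1.0, 2 * (P + 1))   # oversampled collocation points
Psi = legvander(xi, P)                # Legendre basis matrix, shape (m, P+1)
c, *_ = np.linalg.lstsq(Psi, model(xi), rcond=None)

mean = c[0]                                                   # E[u]
var = sum(c[k] ** 2 / (2 * k + 1) for k in range(1, P + 1))   # Var[u]
print(f"PCE mean={mean:.5f}  std={np.sqrt(var):.5f}")
```

    In the multifidelity setting, a correction expansion of the same form is fitted to the discrepancy between low- and high-fidelity evaluations, so most model runs stay on the cheap model.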

  7. Parallel stochastic simulation of macroscopic calcium currents.

    PubMed

    González-Vélez, Virginia; González-Vélez, Horacio

    2007-06-01

    This work introduces MACACO, a macroscopic calcium currents simulator. It provides a parameter-sweep framework which computes macroscopic Ca(2+) currents from the individual aggregation of unitary currents, using a stochastic model for L-type Ca(2+) channels. MACACO uses a simplified 3-state Markov model to simulate the response of each Ca(2+) channel to different voltage inputs to the cell. In order to provide an accurate, systematic view of the stochastic nature of the calcium channels, MACACO is composed of an experiment generator, a central simulation engine and a post-processing script component. Due to the computational complexity of the problem and the dimensions of the parameter space, the MACACO simulation engine employs a grid-enabled task farm. Having been designed as a computational biology tool, MACACO heavily borrows from the way cell physiologists conduct and report their experimental work.
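
    The aggregation of unitary currents can be sketched with a discrete-time 3-state Markov chain per channel; the transition probabilities and unitary current below are hypothetical, not MACACO's calibrated values.

```python
import numpy as np

# Macroscopic current from many channels, each a simplified 3-state Markov
# chain (closed <-> open <-> inactivated) in discrete time (~0.1 ms per step).

rng = np.random.default_rng(6)
steps, n_ch = 2000, 500
i_unit = -0.1                              # unitary open-channel current (pA)

# Row i gives the per-step transition probabilities out of state i:
# 0 = closed, 1 = open, 2 = inactivated. Rows sum to one.
P = np.array([[0.98, 0.02,  0.0],
              [0.01, 0.97,  0.02],
              [0.0,  0.005, 0.995]])

state = np.zeros(n_ch, dtype=int)          # all channels start closed
I = np.empty(steps)
for k in range(steps):
    cum = P[state].cumsum(axis=1)          # per-channel cumulative row
    u = rng.random(n_ch)
    state = (u[:, None] > cum).sum(axis=1) # categorical sample of next state
    I[k] = i_unit * np.count_nonzero(state == 1)
print("peak macroscopic current (pA):", I.min())
```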

  8. Evaluation of the Plant-Craig stochastic convection scheme in an ensemble forecasting system

    NASA Astrophysics Data System (ADS)

    Keane, R. J.; Plant, R. S.; Tennant, W. J.

    2015-12-01

    The Plant-Craig stochastic convection parameterization (version 2.0) is implemented in the Met Office Regional Ensemble Prediction System (MOGREPS-R) and is assessed in comparison with the standard convection scheme, which includes only a simple stochastic element arising from random parameter variation. A set of 34 ensemble forecasts, each with 24 members, is considered, over the month of July 2009. Deterministic and probabilistic measures of the precipitation forecasts are assessed. The Plant-Craig parameterization is found to improve probabilistic forecast measures, particularly the results for lower precipitation thresholds. The impact on deterministic forecasts at the grid scale is neutral, although the Plant-Craig scheme does deliver improvements when forecasts are made over larger areas. The improvements found are greater in conditions of relatively weak synoptic forcing, for which convective precipitation is likely to be less predictable.

  9. Stochastic Simulation and Forecast of Hydrologic Time Series Based on Probabilistic Chaos Expansion

    NASA Astrophysics Data System (ADS)

    Li, Z.; Ghaith, M.

    2017-12-01

    Hydrological processes are characterized by many complex features, such as nonlinearity, dynamics and uncertainty. How to quantify and address such complexities and uncertainties has been a challenging task for water engineers and managers for decades. To support robust uncertainty analysis, an innovative approach for the stochastic simulation and forecast of hydrologic time series is developed in this study. Probabilistic Chaos Expansions (PCEs) are established through probabilistic collocation to tackle uncertainties associated with the parameters of traditional hydrological models. The uncertainties are quantified in model outputs as Hermite polynomials with regard to standard normal random variables. Subsequently, multivariate analysis techniques are used to analyze the complex nonlinear relationships between meteorological inputs (e.g., temperature, precipitation, evapotranspiration) and the coefficients of the Hermite polynomials. With the established relationships between model inputs and PCE coefficients, forecasts of hydrologic time series can be generated and the uncertainties in the future time series can be further tackled. The proposed approach is demonstrated using a case study in China and is compared to a traditional stochastic simulation technique, the Markov chain Monte Carlo (MCMC) method. Results show that the proposed approach can serve as a reliable proxy for complicated hydrological models and can provide probabilistic forecasting in a more computationally efficient manner than the traditional MCMC method. This work provides technical support for addressing uncertainties associated with hydrological modeling and for enhancing the reliability of hydrological modeling results. Applications of the developed approach can be extended to many other complicated geophysical and environmental modeling systems to support the associated uncertainty quantification and risk analysis.

  10. On shifted Jacobi spectral method for high-order multi-point boundary value problems

    NASA Astrophysics Data System (ADS)

    Doha, E. H.; Bhrawy, A. H.; Hafez, R. M.

    2012-10-01

    This paper reports a spectral tau method for numerically solving multi-point boundary value problems (BVPs) of linear high-order ordinary differential equations. The construction of the shifted Jacobi tau approximation is based on conventional differentiation, which allows the imposition of the governing equation at the whole set of grid points and the straightforward implementation of multiple boundary conditions. The extension of the tau method to high-order multi-point BVPs with variable coefficients is treated using the shifted Jacobi Gauss-Lobatto quadrature. A shifted Jacobi collocation method is also developed for solving nonlinear high-order multi-point BVPs. The performance of the proposed methods is investigated through several examples. Accurate results and high convergence rates are achieved.
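
    The collocation idea is easy to demonstrate for a two-point BVP. The sketch below uses Chebyshev Gauss-Lobatto points (the Jacobi case α = β = -1/2) rather than general shifted Jacobi polynomials, purely to keep the differentiation matrix short.

```python
import numpy as np

# Spectral collocation for u''(x) = -pi^2 sin(pi x), u(-1) = u(1) = 0,
# whose exact solution is u = sin(pi x). Chebyshev points stand in for the
# general shifted Jacobi family used in the paper.

def cheb(N):
    # Chebyshev differentiation matrix on N+1 Gauss-Lobatto points (Trefethen).
    x = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    X = np.tile(x, (N + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))
    D -= np.diag(D.sum(axis=1))
    return D, x

N = 16
D, x = cheb(N)
D2 = D @ D
f = -np.pi ** 2 * np.sin(np.pi * x)

# Impose the ODE at interior collocation points; homogeneous Dirichlet BCs
# let us solve on the interior block directly.
u = np.zeros(N + 1)
u[1:N] = np.linalg.solve(D2[1:N, 1:N], f[1:N])
print("max error:", np.abs(u - np.sin(np.pi * x)).max())  # spectral accuracy
```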

  11. Uncertainty and Sensitivity Analysis of Afterbody Radiative Heating Predictions for Earth Entry

    NASA Technical Reports Server (NTRS)

    West, Thomas K., IV; Johnston, Christopher O.; Hosder, Serhat

    2016-01-01

    The objective of this work was to perform sensitivity analysis and uncertainty quantification for afterbody radiative heating predictions of the Stardust capsule during Earth entry at peak afterbody radiation conditions. The radiation environment in the afterbody region poses significant challenges for accurate uncertainty quantification and sensitivity analysis due to the complexity of the flow physics, the computational cost, and the large number of uncertain variables. In this study, a sparse-collocation non-intrusive polynomial chaos approach along with global nonlinear sensitivity analysis was first used to identify the most significant uncertain variables and reduce the dimensions of the stochastic problem. Then, a total-order stochastic expansion was constructed over only the important parameters for an efficient and accurate estimate of the uncertainty in radiation. Based on previous work, 388 uncertain parameters were considered in the radiation model, arising from the thermodynamics, flow field chemistry, and radiation modeling. The sensitivity analysis showed that only four of these variables contributed significantly to afterbody radiation uncertainty, accounting for almost 95% of the uncertainty. These included the electron-impact excitation rate for N between levels 2 and 5 and the rates of three chemical reactions influencing the N, N(+), O, and O(+) number densities in the flow field.

  12. A LES-Langevin model for turbulence

    NASA Astrophysics Data System (ADS)

    Dolganov, Rostislav; Dubrulle, Bérengère; Laval, Jean-Philippe

    2006-11-01

    The rationale for Large Eddy Simulation is rooted in our inability to handle all degrees of freedom (N ~ 10^16 for Re ~ 10^7). "Deterministic" models based on eddy viscosity seek to reproduce the intensification of the energy transport. However, they fail to reproduce backward energy transfer (backscatter) from small to large scales, which is an essential feature of turbulence near walls and in boundary layers. To capture this backscatter, "stochastic" strategies have been developed. In the present talk, we discuss such a strategy, based on Rapid Distortion Theory (RDT). Specifically, we first divide the small-scale contribution to the Reynolds stress tensor into two parts: a turbulent viscosity and the pseudo-Lamb vector, representing the nonlinear cross terms of resolved and sub-grid scales. We then estimate the dynamics of small-scale motion by RDT applied to the Navier-Stokes equation. We use this to model the cross-term evolution by a Langevin equation, in which the random force is provided by sub-grid pressure terms. Our LES model is thus made of a truncated Navier-Stokes equation including the turbulent force and a generalized Langevin equation for the latter, integrated on a twice-finer grid. The backscatter is automatically included in our stochastic model of the pseudo-Lamb vector. We apply this model to the cases of homogeneous isotropic turbulence and turbulent channel flow.

  13. Hydrogeologic unit flow characterization using transition probability geostatistics.

    PubMed

    Jones, Norman L; Walker, Justin R; Carle, Steven F

    2005-01-01

    This paper describes a technique for applying the transition probability geostatistics method for stochastic simulation to a MODFLOW model. Transition probability geostatistics has some advantages over traditional indicator kriging methods including a simpler and more intuitive framework for interpreting geologic relationships and the ability to simulate juxtapositional tendencies such as fining upward sequences. The indicator arrays generated by the transition probability simulation are converted to layer elevation and thickness arrays for use with the new Hydrogeologic Unit Flow package in MODFLOW 2000. This makes it possible to preserve complex heterogeneity while using reasonably sized grids and/or grids with nonuniform cell thicknesses.
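
    The Markov-chain idea underlying transition probability geostatistics can be sketched in one dimension: a facies column is simulated upward from a vertical transition probability matrix. The matrix below is hypothetical, and the full method additionally uses 3-D transition rates and cokriging.

```python
import numpy as np

# 1-D Markov-chain simulation of hydrofacies from a vertical transition
# probability matrix (hypothetical values; rows sum to one).

rng = np.random.default_rng(7)
facies = ["gravel", "sand", "silt", "clay"]
# Row i, column j: probability that facies j overlies facies i (per 0.5 m step).
T = np.array([[0.80, 0.15, 0.04, 0.01],
              [0.10, 0.70, 0.15, 0.05],
              [0.02, 0.10, 0.70, 0.18],
              [0.01, 0.04, 0.15, 0.80]])

column, state = [], 3                       # start in clay at the bottom
for _ in range(60):                         # build a 30 m column upward
    state = rng.choice(4, p=T[state])
    column.append(state)
print(" ".join(facies[s] for s in column[:12]), "...")
```

    Juxtapositional tendencies such as fining-upward sequences are encoded directly in the asymmetry of the matrix, which is the interpretability advantage over indicator kriging noted above.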

  14. Potential applications of skip SMV with thrust engine

    NASA Astrophysics Data System (ADS)

    Wang, Weilin; Savvaris, Al

    2016-11-01

    This paper investigates potential applications of Space Maneuver Vehicles (SMVs) with skip trajectories. Owing to soaring space operations over the past decades, the risks posed by space debris have increased considerably, including collision risks with space assets, with human property on the ground, and even with aviation. Many active debris removal methods have been investigated, and in this paper a debris remediation method based on a skip SMV is first proposed. The key point is to perform a controlled re-entry. These vehicles are expected to achieve a trans-atmospheric maneuver with a thrust engine. If debris is released at an altitude below 80 km, it can be captured by atmospheric drag, and re-entry interface prediction accuracy is improved. Moreover, if the debris is released in a cargo container at a much lower altitude, this technique protects high-value space assets from breakup by the atmosphere and improves landing accuracy. To demonstrate the feasibility of this concept, the present paper presents simulation results for two specific mission profiles: (1) descent to a predetermined altitude; (2) descent to a predetermined point (altitude, longitude and latitude). The evolutionary collocation method is adopted for skip trajectory optimization due to its global optimality and high accuracy. This method is a two-step optimization approach based on a heuristic algorithm and the collocation method. The optimal-control problem is transformed into a nonlinear programming problem (NLP), which can be efficiently and accurately solved by a sequential quadratic programming (SQP) procedure. However, such a method is sensitive to initial values. To reduce this sensitivity, a genetic algorithm (GA) is adopted to refine the grids and provide near-optimum initial values. By comparing the simulation data from different scenarios, it is found that the skip SMV is feasible for active debris removal and that the evolutionary collocation method yields a realistic re-entry trajectory that satisfies the path and boundary constraints.

  15. A Study on the Phenomenon of Collocations: Methodology of Teaching English and German Collocations to Russian Students

    ERIC Educational Resources Information Center

    Varlamova, Elena V.; Naciscione, Anita; Tulusina, Elena A.

    2016-01-01

    Relevance of the issue stated in the article is determined by the fact that there is a lack of research devoted to the methods of teaching English and German collocations. The aim of our work is to determine methods of teaching English and German collocations to Russian university students studying foreign languages through experimental testing.…

  16. Efficient Simulation of Tropical Cyclone Pathways with Stochastic Perturbations

    NASA Astrophysics Data System (ADS)

    Webber, R.; Plotkin, D. A.; Abbot, D. S.; Weare, J.

    2017-12-01

    Global Climate Models (GCMs) are known to statistically underpredict intense tropical cyclones (TCs) because they fail to capture the rapid intensification and high wind speeds characteristic of the most destructive TCs. Stochastic parametrization schemes have the potential to improve the accuracy of GCMs. However, current analysis of these schemes through direct sampling is limited by the computational expense of simulating a rare weather event at fine spatial gridding. The present work introduces a stochastically perturbed parametrization tendency (SPPT) scheme to increase simulated intensity of TCs. We adapt the Weighted Ensemble algorithm to simulate the distribution of TCs at a fraction of the computational effort required in direct sampling. We illustrate the efficiency of the SPPT scheme by comparing simulations at different spatial resolutions and stochastic parameter regimes. Stochastic parametrization and rare event sampling strategies have great potential to improve TC prediction and aid understanding of tropical cyclogenesis. Since rising sea surface temperatures are postulated to increase the intensity of TCs, these strategies can also improve predictions about climate change-related weather patterns. The rare event sampling strategies used in the current work are not only a novel tool for studying TCs, but they may also be applied to sampling any range of extreme weather events.
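
    One resampling step of a weighted-ensemble scheme can be sketched as follows: walkers carrying statistical weights are binned by a progress coordinate, and each occupied bin is resampled back to a target count so that rare, high-intensity bins stay populated while total weight is conserved. The coordinate, binning, and counts below are hypothetical, and the weight-proportional resampling is a simplification of the deterministic split/merge rules of the full algorithm.

```python
import numpy as np

# Skeleton of one weighted-ensemble resampling step over a scalar
# "intensity" progress coordinate (all numbers hypothetical).

rng = np.random.default_rng(8)
target_per_bin = 4
bins = np.linspace(0.0, 1.0, 11)

# Walkers: (progress coordinate, statistical weight); weights sum to 1.
coords = rng.beta(2, 8, size=40)            # most walkers at low "intensity"
weights = np.full(40, 1.0 / 40)

new_coords, new_weights = [], []
for b in range(bins.size - 1):
    idx = np.where((coords >= bins[b]) & (coords < bins[b + 1]))[0]
    if idx.size == 0:
        continue
    w_bin = weights[idx].sum()
    # Resample the bin to target_per_bin walkers, weight-proportionally, then
    # give each survivor an equal share of the bin weight (weight conserved).
    chosen = rng.choice(idx, size=target_per_bin, p=weights[idx] / w_bin)
    new_coords.extend(coords[chosen])
    new_weights.extend([w_bin / target_per_bin] * target_per_bin)

coords, weights = np.array(new_coords), np.array(new_weights)
print("walkers:", coords.size, " total weight:", weights.sum())  # weight ~ 1.0
```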

  17. 47 CFR 69.121 - Connection charges for expanded interconnection.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... separations. (2) Charges for subelements associated with physical collocation or virtual collocation, other... of the virtual collocation equipment described in § 64.1401(e)(1) of this chapter, may reasonably...

  18. Multiscale computations with a wavelet-adaptive algorithm

    NASA Astrophysics Data System (ADS)

    Rastigejev, Yevgenii Anatolyevich

    A wavelet-based adaptive multiresolution algorithm for the numerical solution of multiscale problems governed by partial differential equations is introduced. The main features of the method include fast algorithms for the calculation of wavelet coefficients and the approximation of derivatives on nonuniform stencils. The connection between the wavelet order and the size of the stencil is established. The algorithm is based on mathematically well-established wavelet theory, which allows us to provide error estimates of the solution that are used, in conjunction with an appropriate threshold criterion, to adapt the collocation grid. Efficient data structures for grid representation, as well as related computational algorithms to support the grid rearrangement procedure, are developed. The algorithm is applied to the simulation of phenomena described by the Navier-Stokes equations. First, we undertake the study of the ignition and subsequent viscous detonation of an H2:O2:Ar mixture in a one-dimensional shock tube. Subsequently, we apply the algorithm to solve the two- and three-dimensional benchmark problem of incompressible flow in a lid-driven cavity at large Reynolds numbers. For these cases we show that solutions of accuracy comparable to the benchmarks are obtained with more than an order of magnitude reduction in degrees of freedom. The simulations show the striking ability of the algorithm to adapt to a solution having different scales at different spatial locations, so as to produce accurate results at a relatively low computational cost.

  19. Evaluation of the Plant-Craig stochastic convection scheme (v2.0) in the ensemble forecasting system MOGREPS-R (24 km) based on the Unified Model (v7.3)

    NASA Astrophysics Data System (ADS)

    Keane, Richard J.; Plant, Robert S.; Tennant, Warren J.

    2016-05-01

    The Plant-Craig stochastic convection parameterization (version 2.0) is implemented in the Met Office Regional Ensemble Prediction System (MOGREPS-R) and is assessed in comparison with the standard convection scheme, whose only stochastic element is random parameter variation. A set of 34 ensemble forecasts, each with 24 members, is considered over the month of July 2009. Deterministic and probabilistic measures of the precipitation forecasts are assessed. The Plant-Craig parameterization is found to improve probabilistic forecast measures, particularly for lower precipitation thresholds. The impact on deterministic forecasts at the grid scale is neutral, although the Plant-Craig scheme does deliver improvements when forecasts are made over larger areas. The improvements found are greater in conditions of relatively weak synoptic forcing, for which convective precipitation is likely to be less predictable.

  20. Great Britain Storm Surge Modeling for a 10,000-Year Stochastic Catalog with the Effect of Sea Level Rise

    NASA Astrophysics Data System (ADS)

    Keshtpoor, M.; Carnacina, I.; Blair, A.; Yablonsky, R. M.

    2017-12-01

    Storm surge caused by Extratropical Cyclones (ETCs) has significantly impacted not only the lives of private citizens but also the insurance and reinsurance industry in Great Britain. Storm surge risk assessment requires a larger dataset of storms than the limited record of historical ETCs. Thus, historical ETCs were perturbed to generate a 10,000-year stochastic catalog that accounts for surge-generating ETCs in the study area with return periods from one year to 10,000 years. The Delft3D Flexible Mesh hydrodynamic model was used to numerically simulate storm surge along the Great Britain coastline. A nested grid technique was used to increase the simulation grid resolution to 200 m near highly populated coastal areas. The coarse and fine mesh models were calibrated and validated using historical recorded water elevations, and numerical simulations were then performed on the 10,000-year stochastic catalog. Return period maps for 50, 100, and 500 years were generated for Great Britain coastal areas. The corresponding events with return periods of 50, 100, and 500 years in the Humber Bay and Thames River coastal areas were identified and re-simulated with projected sea level rise included, to reveal the effect of rising sea levels on the inundation return period maps in these two highly populated coastal areas. Finally, the return period of Storm Xaver (2013) was determined with and without the effect of rising sea levels.

  1. Isogeometric Collocation for Elastostatics and Explicit Dynamics

    DTIC Science & Technology

    2012-01-25

    ICES REPORT 12-07, January 2012: Isogeometric collocation for elastostatics and explicit dynamics, by F. Auricchio, L. Beirão da Veiga, T.J.R. Hughes, A. Reali, and G. Sangalli.

  2. Predicting the enhancement of mixing-driven reactions in nonuniform flows using measures of flow topology.

    PubMed

    Engdahl, Nicholas B; Benson, David A; Bolster, Diogo

    2014-11-01

    The ability for reactive constituents to mix is often the key limiting factor for the completion of reactions across a huge range of scales in a variety of media. In flowing systems, deformation and shear enhance mixing by bringing constituents into closer proximity, thus increasing reaction potential. Accurately quantifying this enhanced mixing is key to predicting reactions and typically is done by observing or simulating scalar transport. To eliminate this computationally expensive step, we use a Lagrangian stochastic framework to derive the enhancement to reaction potential by calculating the collocation probability of particle pairs in a heterogeneous flow field accounting for deformations. We relate the enhanced reaction potential to three well known flow topology metrics and demonstrate that it is best correlated to (and asymptotically linear with) one: the largest eigenvalue of the (right) Cauchy-Green tensor.
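
    Concretely, the metric singled out above is the largest eigenvalue of C = FᵀF, where F is the deformation gradient of the flow map; a small numpy sketch (the shear example is ours, for illustration):

        import numpy as np

        def max_cauchy_green_eig(F):
            """Largest eigenvalue of the right Cauchy-Green tensor C = F^T F."""
            return np.linalg.eigvalsh(F.T @ F)[-1]   # eigvalsh sorts ascending

        F = np.array([[1.0, 2.0],    # simple shear with shear parameter 2
                      [0.0, 1.0]])
        lam = max_cauchy_green_eig(F)   # the enhanced reaction potential is
                                        # asymptotically linear in lam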

  3. Estimating the Effective Permittivity for Reconstructing Accurate Microwave-Radar Images.

    PubMed

    Lavoie, Benjamin R; Okoniewski, Michal; Fear, Elise C

    2016-01-01

    We present preliminary results from a method for estimating the optimal effective permittivity for reconstructing microwave-radar images. Using knowledge of how microwave-radar images are formed, we identify characteristics that are typical of good images, and define a fitness function to measure the relative image quality. We build a polynomial interpolant of the fitness function in order to identify the most likely permittivity values of the tissue. To make the estimation process more efficient, the polynomial interpolant is constructed using a locally and dimensionally adaptive sampling method that is a novel combination of stochastic collocation and polynomial chaos. Examples, using a series of simulated, experimental and patient data collected using the Tissue Sensing Adaptive Radar system, which is under development at the University of Calgary, are presented. These examples show how, using our method, accurate images can be reconstructed starting with only a broad estimate of the permittivity range.

  4. A non-stochastic iterative computational method to model light propagation in turbid media

    NASA Astrophysics Data System (ADS)

    McIntyre, Thomas J.; Zemp, Roger J.

    2015-03-01

    Monte Carlo models are widely used to model light transport in turbid media; however, their results implicitly contain stochastic variations. These fluctuations are not ideal, especially for inverse problems, where errors in the Jacobian matrix can lead to large uncertainties upon matrix inversion. Yet Monte Carlo approaches are more computationally favorable than solving the full Radiative Transport Equation. Here, a non-stochastic computational method of estimating fluence distributions in turbid media is proposed, called the Non-Stochastic Propagation by Iterative Radiance Evaluation method (NSPIRE). Rather than using stochastic means to determine a random walk for each photon packet, the propagation of light from any element to all other elements in a grid is modelled simultaneously. For locally homogeneous anisotropic turbid media, the matrices used to represent scattering and projection are shown to be block Toeplitz, which leads to computational simplifications via convolution operators. To evaluate the accuracy of the algorithm, 2D simulations were performed and compared against Monte Carlo models for the cases of an isotropic point source and a pencil beam incident on a semi-infinite turbid medium. The model was shown to have a mean percent error of less than 2%. The algorithm represents a new paradigm in radiative transport modelling and may offer a non-stochastic alternative to modeling light transport in anisotropic scattering media for applications where the diffusion approximation is insufficient.
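
    The block-Toeplitz observation is what makes such a method fast: multiplying by a block-Toeplitz scattering matrix is a shift-invariant convolution. A schematic fixed-point sweep under that assumption (the kernel and the albedo handling below are illustrative placeholders, not the NSPIRE operators):

        import numpy as np
        from scipy.signal import fftconvolve

        def iterate_fluence(source, kernel, albedo, n_iter=100):
            """Neumann-series-style iteration: scatter the current field through a
            shift-invariant kernel (Toeplitz product == convolution) and re-add the
            source. Assumes albedo * kernel.sum() < 1 so the iteration converges."""
            phi = np.zeros_like(source)
            for _ in range(n_iter):
                phi = source + albedo * fftconvolve(phi, kernel, mode="same")
            return phi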

  5. Stochastic Ocean Eddy Perturbations in a Coupled General Circulation Model.

    NASA Astrophysics Data System (ADS)

    Howe, N.; Williams, P. D.; Gregory, J. M.; Smith, R. S.

    2014-12-01

    High-resolution ocean models, which permit and resolve eddies, require large computing resources to produce centuries' worth of data. Moreover, some previous studies have suggested that increasing resolution does not necessarily solve the problem of unresolved scales, because it simply introduces a new set of unresolved scales. Applying stochastic parameterisations to ocean models is one solution that is expected to improve the representation of small-scale (eddy) effects without increasing run-time. Stochastic parameterisation has been shown to have an impact in atmosphere-only models and in idealised ocean models, but has not previously been studied in ocean general circulation models. Here we apply simple stochastic perturbations to the ocean temperature and salinity tendencies in the low-resolution coupled climate model FAMOUS. The stochastic perturbations are implemented according to T(t) = T(t-1) + ΔT(t) + ξ(t), where T is temperature or salinity, ΔT is the corresponding deterministic increment in one time step, and ξ(t) is Gaussian noise. We use high-resolution HiGEM data, coarse-grained to the FAMOUS grid, to provide information about the magnitude and spatio-temporal correlation structure of the noise to be added to the lower resolution model. Here we present results of adding white and red noise, showing the impacts of an additive stochastic perturbation on the mean climate state and variability in an AOGCM.
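
    The update rule quoted above is one line of code per time step. A sketch covering both the white-noise and red-noise (AR(1)) variants, with φ and σ standing in for the HiGEM-derived correlation and amplitude:

        import numpy as np

        def perturbed_step(T, dT, xi_prev, sigma, phi=0.0, rng=None):
            """T(t) = T(t-1) + dT(t) + xi(t). phi = 0 gives white noise; 0 < phi < 1
            gives an AR(1) ("red") process with stationary standard deviation sigma."""
            rng = rng or np.random.default_rng()
            eta = rng.standard_normal(np.shape(T))
            xi = phi * xi_prev + sigma * np.sqrt(1.0 - phi**2) * eta
            return T + dT + xi, xi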

  6. Stochastic Multiscale Modeling of Polycrystalline Materials

    DTIC Science & Technology

    2013-01-01

    The single-grid strategy is adopted. The crystal visco-plastic constitutive model proposed in [7], along with a Voce-type hardening model described in [97], is used with γ̇0 = 1 s−1 and m = 0.1. The parameters in the Voce-type hardening law are selected according to [97]: κ0 = 47.0 MPa, κ1 = 86.0 MPa

  7. Indicator of reliability of power grids and networks for environmental monitoring

    NASA Astrophysics Data System (ADS)

    Shaptsev, V. A.

    2017-10-01

    The energy supply of mining enterprises includes, in particular, power networks. Environmental monitoring relies on the data network between the observers and the facilitators. Weather and working conditions change randomly over time: temperature, humidity, wind strength and other stochastic processes interact in different segments of the power grid. The article presents analytical expressions for the probability of failure of the power grid as a whole or of a particular segment. These expressions can contain one or more parameters of the operating conditions, simulated by Monte Carlo methods. In some cases, one can obtain a closed-form mathematical formula suitable for computation. In conclusion, an expression involving the probability characteristic function of one random parameter, for example wind, temperature or humidity, is given. The parameters of this characteristic function can be estimated from retrospective or dedicated observations (measurements).
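
    As a hedged illustration of the Monte Carlo step described (the wind-speed law and fragility curve below are our assumptions, not the article's formulas), the failure probability of a segment is estimated by averaging a conditional failure model over samples of the random operating parameter:

        import numpy as np

        rng = np.random.default_rng(1)
        wind = rng.lognormal(mean=2.0, sigma=0.5, size=100_000)  # assumed wind law (m/s)

        def p_fail_given_wind(v, v50=30.0, k=8.0):
            """Illustrative fragility curve: failure probability rising with wind."""
            return 1.0 / (1.0 + np.exp(-k * (v - v50) / v50))

        p_segment = p_fail_given_wind(wind).mean()  # Monte Carlo failure probability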

  8. Optimisation of an idealised primitive equation ocean model using stochastic parameterization

    NASA Astrophysics Data System (ADS)

    Cooper, Fenwick C.

    2017-05-01

    Using a simple parameterization, an idealised low resolution (biharmonic viscosity coefficient of 5 × 10¹² m⁴ s⁻¹, 128 × 128 grid) primitive equation baroclinic ocean gyre model is optimised to have a much more accurate climatological mean, variance and response to forcing, in all model variables, with respect to a high resolution (biharmonic viscosity coefficient of 8 × 10¹⁰ m⁴ s⁻¹, 512 × 512 grid) equivalent. For example, the change in the climatological mean due to a small change in the boundary conditions is more accurate in the model with parameterization. Both the low resolution and high resolution models are strongly chaotic. We also find that long timescales in the model temperature auto-correlation at depth are controlled by the vertical temperature diffusion parameter and time mean vertical advection and are caused by short timescale random forcing near the surface. This paper extends earlier work that considered a shallow water barotropic gyre. Here the analysis is extended to a more turbulent multi-layer primitive equation model that includes temperature as a prognostic variable. The parameterization consists of a constant forcing, applied to the velocity and temperature equations at each grid point, which is optimised to obtain a model with an accurate climatological mean, and a linear stochastic forcing, that is optimised to also obtain an accurate climatological variance and 5 day lag auto-covariance. A linear relaxation (nudging) is not used. Conservation of energy and momentum is discussed in an appendix.

  9. PROPAGATOR: a synchronous stochastic wildfire propagation model with distributed computation engine

    NASA Astrophysics Data System (ADS)

    D'Andrea, M.; Fiorucci, P.; Biondi, G.; Negro, D.

    2012-04-01

    PROPAGATOR is a stochastic model of forest fire spread, useful as a rapid method for fire risk assessment. The model is based on a 2D stochastic cellular automaton. The simulation domain is discretized using a square regular grid with a cell size of 20 × 20 m. The model uses high-resolution information such as elevation and the type of vegetation on the ground. Input parameters are wind direction and speed and the ignition point of the fire. Fire propagation is simulated via a stochastic mechanism of propagation between a burning cell and a non-burning cell belonging to its neighbourhood, i.e. the 8 adjacent cells in the rectangular grid. The fire spreads from one cell to its neighbours with a certain base probability, defined using the vegetation types of the two adjacent cells and modified by taking into account the slope between them and the wind direction and speed. The simulation is synchronous and takes into account the time needed by the burning fire to cross each cell. Vegetation cover, slope, and wind speed and direction affect the fire-propagation speed from cell to cell. The model simulates several mutually independent realizations of the same stochastic fire propagation process, each providing a map of the area burned at each simulation time step. PROPAGATOR simulates self-extinction of the fire, and the propagation process continues until at least one cell of the domain is burning in each realization. The output of the model is a series of maps representing the probability of each cell of the domain being affected by the fire at each time step: these probabilities are obtained by evaluating the relative frequency of ignition of each cell with respect to the complete set of simulations. PROPAGATOR is available as a module in the OWIS (Opera Web Interfaces) system. The model runs on a dedicated server and is remote-controlled from the client program, NAZCA. Ignition points can be selected directly in a high-resolution, three-dimensional graphical representation of the Italian territory within NAZCA. The other simulation parameters, namely wind speed and direction, number of simulations, computing grid size and temporal resolution, can be selected from within the program interface. The output is shown in real time during the simulation and is also available off-line and on the DEWETRA system, a Web-GIS-based system for environmental risk assessment developed according to OGC-INSPIRE standards. The model executes quickly, providing a full forecast for a scenario in a few minutes, and can be useful for real-time active fire management and suppression.
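
    One synchronous update of such a cellular automaton is compact in array form. The sketch below (with placeholder probabilities, periodic edges, and without PROPAGATOR's vegetation and slope tables) lets each burning cell try to ignite its 8 neighbours with a direction-dependent probability:

        import numpy as np

        def spread_step(burning, base_p, dir_factor, rng):
            """One synchronous CA step: each burning cell ignites each of its 8
            neighbours with probability base_p * dir_factor[k] (wind/slope modifier)."""
            new = burning.copy()
            shifts = [(-1,-1), (-1,0), (-1,1), (0,-1), (0,1), (1,-1), (1,0), (1,1)]
            for k, (di, dj) in enumerate(shifts):
                src = np.roll(np.roll(burning, di, axis=0), dj, axis=1)  # periodic edges
                new |= src & (rng.random(burning.shape) < base_p * dir_factor[k])
            return new

        rng = np.random.default_rng(0)
        burning = np.zeros((200, 200), dtype=bool)
        burning[100, 100] = True                                    # ignition point
        dir_factor = np.array([0.5, 1.0, 1.5, 0.5, 1.5, 0.5, 1.0, 1.5])  # wind bias
        for _ in range(50):
            burning = spread_step(burning, 0.25, dir_factor, rng)

    Averaging the burnt masks over many independent realizations yields per-cell burn-probability maps of the kind the model outputs.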

  10. Fast Computation of Ground Motion Shaking Map base on the Modified Stochastic Finite Fault Modeling

    NASA Astrophysics Data System (ADS)

    Shen, W.; Zhong, Q.; Shi, B.

    2012-12-01

    Rapid regional MMI mapping soon after a moderate-to-large earthquake is crucial to loss estimation, emergency services, and the planning of emergency action by the government. Many countries pay varying degrees of attention to the technology of rapid MMI estimation, and this technology has made significant progress in earthquake-prone countries. In recent years, numerical modeling of strong ground motion has developed considerably with advances in computational technology and earthquake science. Computational simulation of the strong ground motion caused by earthquake faulting has become an efficient way to estimate the regional MMI distribution soon after an earthquake. In China, due to the lack of strong motion observations in areas where the network is sparse or entirely missing, the development of strong ground motion simulation methods has become an important means of quantitatively estimating strong motion intensity. Among the many simulation models, the stochastic finite fault model is preferred for rapid MMI estimation because of its time-effectiveness and accuracy. In a finite fault model, a large fault is divided into N subfaults, and each subfault is considered a small point source. The ground motions contributed by each subfault are calculated by the stochastic point source method developed by Boore, and then summed at the observation point, with proper time delays, to obtain the ground motion from the entire fault. Further, Motazedian and Atkinson proposed the concept of dynamic corner frequency; with this approach, the total radiated energy from the fault and the total seismic moment are conserved independent of subfault size over a wide range of subfault sizes. In the current study, the program EXSIM developed by Motazedian and Atkinson has been modified for local or regional computations of strong motion parameters such as PGA, PGV and PGD, which are essential for MMI estimation. To make the results more reasonable, we consider the impact of V30 on ground shaking intensity; the comparisons between simulated and observed MMI for the 2004 Mw 6.0 Parkfield earthquake, the 2008 Mw 7.9 Wenchuan earthquake and the 1976 Mw 7.6 Tangshan earthquake agree fairly well. Taking the Parkfield earthquake as an example, the simulated result reflects the directivity effect and the influence of the shallow velocity structure well, and the simulated data are in good agreement with the network data and NGA (Next Generation Attenuation) relations. The computation time depends on the number of subfaults and the number of grid points. For the 2004 Mw 6.0 Parkfield earthquake, the computed grid size is 2.5° × 2.5° with a grid spacing of 0.025°, and the total time consumed is about 1.3 hours. For the 2008 Mw 7.9 Wenchuan earthquake, the grid size is 10° × 10° with a grid spacing of 0.05°, more than 40,000 grid points in total, and the total time consumed is about 7.5 hours. For the 1976 Mw 7.6 Tangshan earthquake, the grid size is 4° × 6° with a grid spacing of 0.05°, and the total time consumed is about 2.1 hours. The CPU used runs at 3.40 GHz, and the computation time could be further reduced by using GPU computing and other parallel computing techniques. This is also our next focus.
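
    The finite-fault summation itself reduces to a delay-and-sum over subfault point sources. A schematic version (the subfault time series and delays are assumed to come from a stochastic point-source generator such as the one described):

        import numpy as np

        def finite_fault_sum(subfault_motions, delays, dt, n_out):
            """Sum subfault point-source seismograms at one site, each shifted by
            its rupture-propagation plus travel-time delay (in seconds)."""
            total = np.zeros(n_out)
            for motion, delay in zip(subfault_motions, delays):
                i0 = int(round(delay / dt))
                if i0 >= n_out:
                    continue
                n = min(len(motion), n_out - i0)
                total[i0:i0 + n] += motion[:n]
            return total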

  11. Least-squares collocation meshless approach for radiative heat transfer in absorbing and scattering media

    NASA Astrophysics Data System (ADS)

    Liu, L. H.; Tan, J. Y.

    2007-02-01

    A least-squares collocation meshless method is employed for solving radiative heat transfer in absorbing, emitting and scattering media. The method is based on the discrete ordinates equation, and a moving least-squares approximation is applied to construct the trial functions. In addition to the collocation points used to construct the trial functions, a number of auxiliary points are adopted to form the total residuals of the problem. The least-squares technique is used to obtain the solution by minimizing the sum of the residuals at all collocation and auxiliary points. Three numerical examples are studied to illustrate the performance of this new solution method, and the numerical results are compared with benchmark approximate solutions. The comparison shows that the least-squares collocation meshless method is efficient, accurate and stable, and can be used for solving radiative heat transfer in absorbing, emitting and scattering media.
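
    The least-squares step is an overdetermined linear solve: residual rows are assembled at both collocation and auxiliary points and minimized together. A schematic, with random placeholder matrices standing in for the moving-least-squares assembly:

        import numpy as np

        rng = np.random.default_rng(0)
        n_col, n_aux, n_coef = 200, 100, 50             # illustrative sizes
        A_col = rng.standard_normal((n_col, n_coef))    # residual rows, collocation pts
        A_aux = rng.standard_normal((n_aux, n_coef))    # residual rows, auxiliary pts
        b = rng.standard_normal(n_col + n_aux)          # source/boundary terms

        A = np.vstack([A_col, A_aux])
        coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)  # minimizes total squared residual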

  12. Six-Degree-of-Freedom Trajectory Optimization Utilizing a Two-Timescale Collocation Architecture

    NASA Technical Reports Server (NTRS)

    Desai, Prasun N.; Conway, Bruce A.

    2005-01-01

    Six-degree-of-freedom (6DOF) trajectory optimization of a reentry vehicle is solved using a two-timescale collocation methodology. This class of 6DOF trajectory problems is characterized by two distinct timescales in the governing equations, where a subset of the states have high-frequency dynamics (the rotational equations of motion) while the remaining states (the translational equations of motion) vary comparatively slowly. With conventional collocation methods, the 6DOF problem size becomes extraordinarily large and difficult to solve. Utilizing the two-timescale collocation architecture, the problem size is reduced significantly. The converged solution shows a realistic landing profile and captures the appropriate high-frequency rotational dynamics. A large reduction in the overall problem size (by 55%) is attained with the two-timescale architecture as compared to the conventional single-timescale collocation method. Consequently, optimal 6DOF trajectory problems can now be solved efficiently using collocation, which was not previously possible for systems with two distinct timescales in the governing states.

  13. Collocation and Pattern Recognition Effects on System Failure Remediation

    NASA Technical Reports Server (NTRS)

    Trujillo, Anna C.; Press, Hayes N.

    2007-01-01

    Previous research found that operators prefer to have status, alerts, and controls located on the same screen. Unfortunately, that research was done with displays that were not designed specifically for collocation. In this experiment, twelve subjects evaluated two displays specifically designed for collocating system information against a baseline that consisted of dial status displays, a separate alert area, and a controls panel. These displays differed in the amount of collocation, pattern matching, and parameter movement relative to display size. During the data runs, subjects kept a randomly moving target centered on a display using a left-handed joystick while scanning the system displays to find a problem and correct it using the provided checklist. Results indicate that large parameter movement aided detection, while pattern recognition was needed for diagnosis; the collocated displays centralized all the information subjects needed, which reduced workload. Therefore, the collocated display with large parameter movement may be an acceptable display after familiarization, given the pattern recognition developed with training and use.

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnson, Jay Tillay

    For three years, Sandia National Laboratories, Georgia Institute of Technology, and University of Illinois at Urbana-Champaign investigated a smart grid vision in which renewable-centric Virtual Power Plants (VPPs) provided ancillary services with interoperable distributed energy resources (DER). This team researched, designed, built, and evaluated real-time VPP designs incorporating DER forecasting, stochastic optimization, controls, and cyber security to construct a system capable of delivering reliable ancillary services, which have traditionally been provided by large power plants or other dedicated equipment. VPPs have become possible through an evolving landscape of state and national interconnection standards, which now require DER to include grid-support functionality and communications capabilities. This makes it possible for third-party aggregators to provide a range of critical grid services, such as voltage regulation, frequency regulation, and contingency reserves, to grid operators. This paradigm (a) enables renewable energy, demand response, and energy storage to participate in grid operations and provide grid services, (b) improves grid reliability by providing additional operating reserves for utilities, independent system operators (ISOs), and regional transmission organizations (RTOs), and (c) removes barriers to high penetration of renewables by providing services with photovoltaic and wind resources that were traditionally the jobs of thermal generators. It is therefore believed that VPP deployment will have far-reaching positive consequences for grid operations and may provide a robust pathway to high penetrations of renewables on US power systems. In this report, we design VPPs to provide a range of grid-support services and demonstrate one VPP that simultaneously provides bulk-system energy and ancillary reserves.

  15. Guest Editorial High Performance Computing (HPC) Applications for a More Resilient and Efficient Power Grid

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Zhenyu Henry; Tate, Zeb; Abhyankar, Shrirang

    The power grid has been evolving over the last 120 years, but it is seeing more changes in this decade and the next than it has seen over the past century. In particular, the widespread deployment of intermittent renewable generation, smart loads and devices, hierarchical and distributed control technologies, phasor measurement units, energy storage, and widespread usage of electric vehicles will require fundamental changes in the methods and tools for operating and planning the power grid. The resulting new dynamic and stochastic behaviors will demand the inclusion of more complexity in modeling the power grid, and solving such complex models in the traditional computing environment will be a major challenge. Along with the increasing complexity of power system models, the increasing complexity of smart grid data further adds to the prevailing challenges. As the myriad smart sensors and meters in the power grid increase by multiple orders of magnitude, so do the volume and speed of the data. The information infrastructure will need to change drastically to support the exchange of enormous amounts of data, as smart grid applications will need the capability to collect, assimilate, analyze and process the data to meet real-time grid functions. High performance computing (HPC) holds the promise of enhancing these functions, but it is a great resource that has not been fully explored and adopted for the power grid domain.

  16. Development of a CFD Code for Analysis of Fluid Dynamic Forces in Seals

    NASA Technical Reports Server (NTRS)

    Athavale, Mahesh M.; Przekwas, Andrzej J.; Singhal, Ashok K.

    1991-01-01

    The aim is to develop a 3-D computational fluid dynamics (CFD) code for the analysis of fluid flow in cylindrical seals and evaluation of the dynamic forces on the seals. This code is expected to serve as a scientific tool for detailed flow analysis as well as a check for the accuracy of the 2D industrial codes. The features necessary in the CFD code are outlined. The initial focus was to develop or modify and implement new techniques and physical models. These include collocated grid formulation, rotating coordinate frames and moving grid formulation. Other advanced numerical techniques include higher order spatial and temporal differencing and an efficient linear equation solver. These techniques were implemented in a 2D flow solver for initial testing. Several benchmark test cases were computed using the 2D code, and the results of these were compared to analytical solutions or experimental data to check the accuracy. Tests presented here include planar wedge flow, flow due to an enclosed rotor, and flow in a 2D seal with a whirling rotor. Comparisons between numerical and experimental results for an annular seal and a 7-cavity labyrinth seal are also included.

  17. [Spatio-Temporal Bioelectrical Brain Activity Organization during Reading Syntagmatic and Paradigmatic Collocations by Students with Different Foreign Language Proficiency].

    PubMed

    Sokolova, L V; Cherkasova, A S

    2015-01-01

    Texts or words/pseudowords are often used as stimuli in research on human verbal activity. Our study focuses on the decoding of grammatical constructions consisting of two or three words: collocations. Sets of Russian and English collocations without any narrative context were presented to Russian-speaking students with different levels of English proficiency. The stimulus material contained two types of collocations: paradigmatic and syntagmatic. Thirty students (average age 20.4 ± 0.22) took part in the study; they were divided into two equal groups depending on their English language skill (linguists/nonlinguists). During reading, the bioelectrical activity of the cortex was recorded from 12 electrodes in the alpha, beta, and theta bands. The coherence function, reflecting the cooperation of different cortical areas during the reading of collocations, was analyzed. An increase in interhemispheric and diagonal connections while reading collocations in either language in the group of students with low foreign-language proficiency testifies to the importance of functional cooperation between the hemispheres. It was found that the brain bioelectrical activity of students with good foreign-language knowledge during the reading of all collocation types in Russian and English is characterized by an economization of neural resources compared to nonlinguists. Selective activation of certain cortical areas, depending on the grammatical construction type, was also observed in the nonlinguist group, which is probably related to a special decoding system that processes the presented stimuli. Reading Russian paradigmatic constructions by nonlinguists entailed an increase in connections between left cortical areas, and reading English syntagmatic collocations an increase between right ones.

  18. Evaluation of a risk-based environmental hot spot delineation algorithm.

    PubMed

    Sinha, Parikhit; Lambert, Michael B; Schew, William A

    2007-10-22

    Following remedial investigations of hazardous waste sites, remedial strategies may be developed that target the removal of "hot spots," localized areas of elevated contamination. For a given exposure area, a hot spot may be defined as a sub-area that causes risks for the whole exposure area to be unacceptable. The converse of this statement may also apply: when a hot spot is removed from within an exposure area, risks for the exposure area may drop below unacceptable thresholds. The latter is the motivation for a risk-based approach to hot spot delineation, which was evaluated using Monte Carlo simulation. Random samples taken from a virtual site ("true site") were used to create an interpolated site. The latter was gridded and concentrations from the center of each grid box were used to calculate 95% upper confidence limits on the mean site contaminant concentration and corresponding hazard quotients for a potential receptor. Grid cells with the highest concentrations were removed and hazard quotients were recalculated until the site hazard quotient dropped below the threshold of 1. The grid cells removed in this way define the spatial extent of the hot spot. For each of the 100,000 Monte Carlo iterations, the delineated hot spot was compared to the hot spot in the "true site." On average, the algorithm was able to delineate hot spots that were collocated with and equal to or greater in size than the "true hot spot." When delineated hot spots were mapped onto the "true site," setting contaminant concentrations in the mapped area to zero, the hazard quotients for these "remediated true sites" were on average within 5% of the acceptable threshold of 1.
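
    The delineation loop is a greedy peel-off of the highest-concentration cells until the risk metric clears the threshold. A sketch with a stand-in hazard-quotient function (the paper uses a 95% UCL on the mean, which would replace the simple mean used here):

        import numpy as np

        def delineate_hot_spot(conc, hq_of, threshold=1.0):
            """Zero out the highest-concentration cells until hq_of(conc) drops
            below threshold; the removed cells define the hot spot."""
            conc = conc.astype(float).copy()
            removed = np.zeros(conc.shape, dtype=bool)
            for flat in np.argsort(conc, axis=None)[::-1]:   # highest cells first
                if hq_of(conc) < threshold:
                    break
                ij = np.unravel_index(flat, conc.shape)
                conc[ij] = 0.0                               # "remediate" this cell
                removed[ij] = True
            return removed

        conc = np.array([[1.0, 2.0], [3.0, 40.0]])
        hq_of = lambda c: c.mean() / 5.0                     # assumed risk model
        hot_spot = delineate_hot_spot(conc, hq_of)           # removes only the 40.0 cell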

  19. An algorithm for fast elastic wave simulation using a vectorized finite difference operator

    NASA Astrophysics Data System (ADS)

    Malkoti, Ajay; Vedanti, Nimisha; Tiwari, Ram Krishna

    2018-07-01

    Modern geophysical imaging techniques exploit the full wavefield information, which can be simulated numerically. These numerical simulations are computationally expensive due to several factors, such as a large number of time steps and nodes, a big derivative stencil, and a huge model size. Besides these constraints, it is also important to reformulate the numerical derivative operator for improved efficiency. In this paper, we introduce a vectorized derivative operator over the staggered grid with shifted coordinate systems. The operator increases the efficiency of simulation by exploiting the fact that each variable can be represented in the form of a matrix. This operator allows updating all nodes of a variable defined on the staggered grid in a manner similar to the collocated grid scheme, thereby reducing the computational run time considerably. Here we demonstrate an application of this operator to simulate seismic wave propagation in elastic media (the Marmousi model) by discretizing the equations on a staggered grid. We compare the performance of this operator in three programming languages, which reveals that it can increase the execution speed by a factor of at least 2-3 for FORTRAN and MATLAB, and nearly 100 for Python. We further carry out various tests in MATLAB to analyze the effect of model size and the number of time steps on total simulation run time. We find that there is an additional, though small, computational overhead for each step, which depends on the total number of time steps used in the simulation. A MATLAB code package, 'FDwave', for the proposed simulation scheme is available upon request.
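
    The vectorization being described replaces per-node loops with whole-array shifted differences. For instance, a second-order staggered-grid first derivative in numpy touches every node in one slice expression (our illustration, not the FDwave code):

        import numpy as np

        def dx_staggered(f, dx):
            """Second-order first derivative on a staggered grid: the result lives
            half a cell to the right of the input nodes; no explicit node loops."""
            return (f[1:, :] - f[:-1, :]) / dx

        vx = np.random.default_rng(0).standard_normal((301, 301))  # e.g. a velocity field
        dvx_dx = dx_staggered(vx, dx=5.0)                          # shape (300, 301)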

  20. Learning L2 Collocations Incidentally from Reading

    ERIC Educational Resources Information Center

    Pellicer-Sánchez, Ana

    2017-01-01

    Previous studies have shown that intentional learning through explicit instruction is effective for the acquisition of collocations in a second language (L2) (e.g. Peters, 2014, 2015), but relatively little is known about the effectiveness of incidental approaches for the acquisition of L2 collocations. The present study examined the incidental…

  1. Incidental Learning of Collocation

    ERIC Educational Resources Information Center

    Webb, Stuart; Newton, Jonathan; Chang, Anna

    2013-01-01

    This study investigated the effects of repetition on the learning of collocation. Taiwanese university students learning English as a foreign language simultaneously read and listened to one of four versions of a modified graded reader that included different numbers of encounters (1, 5, 10, and 15 encounters) with a set of 18 target collocations.…

  2. 47 CFR 51.323 - Standards for physical collocation and virtual collocation.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... accessible by both the incumbent LEC and the collocating telecommunications carrier, at which the fiber optic... technically feasible, the incumbent LEC shall provide the connection using copper, dark fiber, lit fiber, or... that the incumbent LEC may adopt include: (1) Installing security cameras or other monitoring systems...

  3. 47 CFR 51.323 - Standards for physical collocation and virtual collocation.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... accessible by both the incumbent LEC and the collocating telecommunications carrier, at which the fiber optic... technically feasible, the incumbent LEC shall provide the connection using copper, dark fiber, lit fiber, or... that the incumbent LEC may adopt include: (1) Installing security cameras or other monitoring systems...

  4. Accurate, efficient, and (iso)geometrically flexible collocation methods for phase-field models

    NASA Astrophysics Data System (ADS)

    Gomez, Hector; Reali, Alessandro; Sangalli, Giancarlo

    2014-04-01

    We propose new collocation methods for phase-field models. Our algorithms are based on isogeometric analysis, a new technology that makes use of functions from computational geometry, such as, for example, Non-Uniform Rational B-Splines (NURBS). NURBS exhibit excellent approximability and controllable global smoothness, and can represent exactly most geometries encapsulated in Computer Aided Design (CAD) models. These attributes permitted us to derive accurate, efficient, and geometrically flexible collocation methods for phase-field models. The performance of our method is demonstrated by several numerical examples of phase separation modeled by the Cahn-Hilliard equation. We feel that our method successfully combines the geometrical flexibility of finite elements with the accuracy and simplicity of pseudo-spectral collocation methods, and is a viable alternative to classical collocation methods.
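
    The flavour of spline-based collocation can be conveyed with a 1D sketch: cubic B-splines collocated at the Greville abscissae for -u'' = f with homogeneous Dirichlet data. This is a toy analogue under our own assumptions; the paper's NURBS machinery and the Cahn-Hilliard operator are far richer.

        import numpy as np
        from scipy.interpolate import BSpline

        # Collocation solve of -u'' = f on [0, 1], u(0) = u(1) = 0, cubic B-splines.
        k, n = 3, 20                                  # degree, number of basis functions
        t = np.concatenate([[0.0] * k, np.linspace(0.0, 1.0, n - k + 1), [1.0] * k])
        grev = np.array([t[i + 1:i + k + 1].mean() for i in range(n)])  # Greville pts

        def basis(i):
            c = np.zeros(n)
            c[i] = 1.0
            return BSpline(t, c, k)

        f = lambda x: np.pi**2 * np.sin(np.pi * x)    # manufactured source
        A, rhs = np.zeros((n, n)), np.zeros(n)
        for i in range(n):
            b = basis(i)
            A[1:-1, i] = -b.derivative(2)(grev[1:-1])  # interior rows: -u'' at Greville pts
            A[0, i], A[-1, i] = b(0.0), b(1.0)         # boundary rows: u(0) = u(1) = 0
        rhs[1:-1] = f(grev[1:-1])
        u = BSpline(t, np.linalg.solve(A, rhs), k)     # u(x) ≈ sin(pi x)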

  5. Usability Study of Two Collocated Prototype System Displays

    NASA Technical Reports Server (NTRS)

    Trujillo, Anna C.

    2007-01-01

    Currently, most of the displays in control rooms can be categorized as status screens, alerts/procedures screens (or paper), or control screens (where the state of a component is changed by the operator). The primary focus of this line of research is to determine which pieces of information (status, alerts/procedures, and control) should be collocated. Two collocated displays were tested for ease of understanding in an automated desktop survey. This usability study was conducted as a prelude to a larger human-in-the-loop experiment in order to verify that the two new collocated displays were easy to learn and usable. The results indicate that while the DC display was preferred and yielded better performance than the MDO display, both collocated displays can be easily learned and used.

  6. Operation of a wind turbine-flywheel energy storage system under conditions of stochastic change of wind energy.

    PubMed

    Tomczewski, Andrzej

    2014-01-01

    The paper presents the issues of a wind turbine-flywheel energy storage system (WT-FESS) operation under real conditions. Stochastic changes of wind energy in time cause significant fluctuations of the system output power and as a result have a negative impact on the quality of the generated electrical energy. In the author's opinion it is possible to reduce the aforementioned effects by using an energy storage of an appropriate type and capacity. It was assumed that based on the technical parameters of a wind turbine-energy storage system and its geographical location one can determine the boundary capacity of the storage, which helps prevent power cuts to the grid at the assumed probability. Flywheel energy storage was selected due to its characteristics and technical parameters. The storage capacity was determined based on an empirical relationship using the results of the proposed statistical and energetic analysis of the measured wind velocity courses. A detailed algorithm of the WT-FESS with the power grid system was developed, eliminating short-term breaks in the turbine operation and periods when the wind turbine power was below the assumed level.

  7. Operation of a Wind Turbine-Flywheel Energy Storage System under Conditions of Stochastic Change of Wind Energy

    PubMed Central

    2014-01-01

    The paper presents the issues of a wind turbine-flywheel energy storage system (WT-FESS) operation under real conditions. Stochastic changes of wind energy in time cause significant fluctuations of the system output power and as a result have a negative impact on the quality of the generated electrical energy. In the author's opinion it is possible to reduce the aforementioned effects by using an energy storage of an appropriate type and capacity. It was assumed that based on the technical parameters of a wind turbine-energy storage system and its geographical location one can determine the boundary capacity of the storage, which helps prevent power cuts to the grid at the assumed probability. Flywheel energy storage was selected due to its characteristics and technical parameters. The storage capacity was determined based on an empirical relationship using the results of the proposed statistical and energetic analysis of the measured wind velocity courses. A detailed algorithm of the WT-FESS with the power grid system was developed, eliminating short-term breaks in the turbine operation and periods when the wind turbine power was below the assumed level. PMID:25215326

  8. Pinpointing the North Korea Nuclear tests with body waves scattered by surface topography

    NASA Astrophysics Data System (ADS)

    Wang, N.; Shen, Y.; Bao, X.; Flinders, A. F.

    2017-12-01

    On September 3, 2017, North Korea conducted its sixth and by far the largest nuclear test at the Punggye-ri test site. In this work, we apply a novel full-wave location method that combines a non-linear grid-search algorithm with a 3D strain Green's tensor database to locate this event. We use the first arrivals (Pn waves) and their immediate codas, which are likely dominated by waves scattered by the surface topography near the source, to pinpoint the source location. We assess the solutions in the search volume using a least-squares misfit between the observed and synthetic waveforms, which are computed with the collocated-grid finite difference method on curvilinear grids. We take the one-standard-deviation level around the best solution as a posterior error estimate. Our results show that the waveform-based location method yields accurate solutions with a small number of stations. The solutions are absolute locations, as opposed to relative locations based on relative travel times, because topography-scattered waves depend on the geometric relation between the source and the unique topography near it. Moreover, we use both differential waveforms and traveltimes to locate pairs of the North Korean tests of 2016 and 2017, further reducing the effects of inaccuracies in the reference velocity model (CRUST 1.0). Finally, we compare our solutions with those of other studies based on satellite images and relative traveltimes.
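
    With synthetics precomputed from a strain Green's tensor database, the grid search reduces to evaluating a least-squares waveform misfit at every candidate node and keeping the minimizer. Schematically (the array layout is our assumption):

        import numpy as np

        def locate(observed, synthetics):
            """observed: (n_sta, n_t); synthetics: (n_nodes, n_sta, n_t), precomputed
            per candidate source node. Returns the best node and the misfit surface."""
            resid = synthetics - observed[None, :, :]
            misfit = np.sum(resid**2, axis=(1, 2))   # least-squares misfit per node
            return int(np.argmin(misfit)), misfit

    The one-standard-deviation contour of the misfit surface then serves as the posterior error estimate.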

  9. A tesselated probabilistic representation for spatial robot perception and navigation

    NASA Technical Reports Server (NTRS)

    Elfes, Alberto

    1989-01-01

    The ability to recover robust spatial descriptions from sensory information and to efficiently utilize these descriptions in appropriate planning and problem-solving activities are crucial requirements for the development of more powerful robotic systems. Traditional approaches to sensor interpretation, with their emphasis on geometric models, are of limited use for autonomous mobile robots operating in and exploring unknown and unstructured environments. Here, researchers present a new approach to robot perception that addresses such scenarios using a probabilistic tesselated representation of spatial information called the Occupancy Grid. The Occupancy Grid is a multi-dimensional random field that maintains stochastic estimates of the occupancy state of each cell in the grid. The cell estimates are obtained by interpreting incoming range readings using probabilistic models that capture the uncertainty in the spatial information provided by the sensor. A Bayesian estimation procedure allows the incremental updating of the map using readings taken from several sensors over multiple points of view. An overview of the Occupancy Grid framework is given, and its application to a number of problems in mobile robot mapping and navigation are illustrated. It is argued that a number of robotic problem-solving activities can be performed directly on the Occupancy Grid representation. Some parallels are drawn between operations on Occupancy Grids and related image processing operations.
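
    The incremental Bayesian update is conventionally implemented in log-odds form, where fusing a new reading is a per-cell addition; a standard sketch (the inverse sensor model p(occupied | reading) is assumed given):

        import numpy as np

        def fuse_reading(log_odds, p_occ):
            """Bayesian occupancy update: add the log-odds of the inverse sensor
            model for this reading, cell by cell."""
            return log_odds + np.log(p_occ / (1.0 - p_occ))

        def occupancy(log_odds):
            """Recover P(occupied) from the accumulated log-odds."""
            return 1.0 / (1.0 + np.exp(-log_odds))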

  10. Uncertainty Analysis Based on Sparse Grid Collocation and Quasi-Monte Carlo Sampling with Application in Groundwater Modeling

    NASA Astrophysics Data System (ADS)

    Zhang, G.; Lu, D.; Ye, M.; Gunzburger, M.

    2011-12-01

    Markov Chain Monte Carlo (MCMC) methods have been widely used in many fields of uncertainty analysis to estimate the posterior distributions of parameters and credible intervals of predictions in the Bayesian framework. In practice, however, MCMC may be computationally unaffordable due to slow convergence and the excessive number of forward model executions required, especially when the forward model is expensive to compute. Both disadvantages arise from the curse of dimensionality, i.e., the posterior distribution is usually a multivariate function of the parameters. Recently, the sparse grid method has been demonstrated to be an effective technique for coping with high-dimensional interpolation and integration problems. Thus, in order to accelerate the forward model and avoid the slow convergence of MCMC, we propose a new method for uncertainty analysis based on sparse grid interpolation and quasi-Monte Carlo sampling. First, we construct a polynomial approximation of the forward model in the parameter space by sparse grid interpolation. This approximation defines an accurate surrogate posterior distribution that can be evaluated repeatedly at minimal computational cost. Second, instead of using MCMC, a quasi-Monte Carlo method is applied to draw samples in the parameter space. The desired probability density function of each prediction is then approximated by accumulating the posterior density values of all samples according to their prediction values. Our method has two advantages: (1) the polynomial approximation of the forward model on the sparse grid provides a very efficient evaluation of the surrogate posterior distribution; and (2) the quasi-Monte Carlo method retains the same accuracy in approximating the PDF of predictions while avoiding the disadvantages of MCMC. The proposed method is applied to a controlled numerical experiment of groundwater flow modeling. The results show that our method attains the same accuracy much more efficiently than traditional MCMC.
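
    Steps (1) and (2) in miniature: evaluate a surrogate posterior on quasi-Monte Carlo samples and accumulate a density-weighted histogram of the prediction. The two stand-in functions below replace the sparse-grid interpolant and the groundwater model:

        import numpy as np
        from scipy.stats import qmc

        def surrogate_log_post(theta):        # stand-in for the sparse-grid interpolant
            return -0.5 * np.sum((theta - 0.3) ** 2, axis=1) / 0.01

        def prediction(theta):                # stand-in forward-model prediction
            return theta.sum(axis=1)

        theta = qmc.Sobol(d=2, scramble=True, seed=0).random(2**14)  # QMC samples in [0,1]^2
        w = np.exp(surrogate_log_post(theta))
        w /= w.sum()                                                 # posterior weights
        pdf, edges = np.histogram(prediction(theta), bins=60, weights=w, density=True)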

  11. Not Just "Small Potatoes": Knowledge of the Idiomatic Meanings of Collocations

    ERIC Educational Resources Information Center

    Macis, Marijana; Schmitt, Norbert

    2017-01-01

    This study investigated learner knowledge of the figurative meanings of 30 collocations that can be both literal and figurative. One hundred and seven Chilean Spanish-speaking university students of English were asked to complete a meaning-recall collocation test in which the target items were embedded in non-defining sentences. Results showed…

  12. Teaching and Learning Collocation in Adult Second and Foreign Language Learning

    ERIC Educational Resources Information Center

    Boers, Frank; Webb, Stuart

    2018-01-01

    Perhaps the greatest challenge to creating a research timeline on teaching and learning collocation is deciding how wide to cast the net in the search for relevant publications. For one thing, the term "collocation" does not have the same meaning for all (applied) linguists and practitioners (Barfield & Gyllstad 2009) (see timeline).…

  13. Supporting Collocation Learning with a Digital Library

    ERIC Educational Resources Information Center

    Wu, Shaoqun; Franken, Margaret; Witten, Ian H.

    2010-01-01

    Extensive knowledge of collocations is a key factor that distinguishes learners from fluent native speakers. Such knowledge is difficult to acquire simply because there is so much of it. This paper describes a system that exploits the facilities offered by digital libraries to provide a rich collocation-learning environment. The design is based on…

  14. Cross-Linguistic Influence: Its Impact on L2 English Collocation Production

    ERIC Educational Resources Information Center

    Phoocharoensil, Supakorn

    2013-01-01

    This research study investigated the influence of learners' mother tongue on their acquisition of English collocations. Having drawn the linguistic data from two groups of Thai EFL learners differing in English proficiency level, the researcher found that the native language (L1) plays a significant role in the participants' collocation learning…

  15. Going beyond Patterns: Involving Cognitive Analysis in the Learning of Collocations

    ERIC Educational Resources Information Center

    Liu, Dilin

    2010-01-01

    Since the late 1980s, collocations have received increasing attention in applied linguistics, especially language teaching, as is evidenced by the many publications on the topic. These works fall roughly into two lines of research (a) those focusing on the identification and use of collocations (Benson, 1989; Hunston, 2002; Hunston & Francis,…

  16. English Collocation Learning through Corpus Data: On-Line Concordance and Statistical Information

    ERIC Educational Resources Information Center

    Ohtake, Hiroshi; Fujita, Nobuyuki; Kawamoto, Takeshi; Morren, Brian; Ugawa, Yoshihiro; Kaneko, Shuji

    2012-01-01

    We developed an English Collocations On Demand system offering on-line corpus and concordance information to help Japanese researchers acquire a better command of English collocation patterns. The Life Science Dictionary Corpus consists of approximately 90,000,000 words collected from life science related research papers published in academic…

  17. The Effect of Error Correction Feedback on the Collocation Competence of Iranian EFL Learners

    ERIC Educational Resources Information Center

    Jafarpour, Ali Akbar; Sharifi, Abolghasem

    2012-01-01

    Collocations are one of the most important elements in language proficiency but the effect of error correction feedback of collocations has not been thoroughly examined. Some researchers report the usefulness and importance of error correction (Hyland, 1990; Bartram & Walton, 1991; Ferris, 1999; Chandler, 2003), while others showed that error…

  18. Collocations of High Frequency Noun Keywords in Prescribed Science Textbooks

    ERIC Educational Resources Information Center

    Menon, Sujatha; Mukundan, Jayakaran

    2012-01-01

    This paper analyses the discourse of science through the study of collocational patterns of high frequency noun keywords in science textbooks used by upper secondary students in Malaysia. Research has shown that one of the areas of difficulty in science discourse concerns lexis, especially that of collocations. This paper describes a corpus-based…

  19. The Effect of Grouping and Presenting Collocations on Retention

    ERIC Educational Resources Information Center

    Akpinar, Kadriye Dilek; Bardakçi, Mehmet

    2015-01-01

    The aim of this study is two-fold. Firstly, it attempts to determine the role of presenting collocations by organizing them based on (i) the keyword, (ii) topic related and (iii) grammatical aspect on retention of collocations. Secondly, it investigates the relationship between participants' general English proficiency and the presentation types…

  20. Temi firthiani di linguistica applicata: "Restricted Languages" e "Collocation" (Firthian Themes in Applied Linguistics: "Restricted Languages" and "Collocation")

    ERIC Educational Resources Information Center

    Leonardi, Magda

    1977-01-01

    Discusses the importance of two Firthian themes for language teaching. The first theme, "Restricted Languages," concerns the "microlanguages" of every language (e.g., literary language, scientific, etc.). The second theme, "Collocation," shows that equivalent words in two languages rarely have the same position in…

  1. Corpora and Collocations in Chinese-English Dictionaries for Chinese Users

    ERIC Educational Resources Information Center

    Xia, Lixin

    2015-01-01

    The paper identifies the major problems of the Chinese-English dictionary in representing collocational information after an extensive survey of nine dictionaries popular among Chinese users. It is found that the Chinese-English dictionary only provides the collocation types of "v+n" and "v+n," but completely ignores those of…

  2. Simulation of demand management and grid balancing with electric vehicles

    NASA Astrophysics Data System (ADS)

    Druitt, James; Früh, Wolf-Gerrit

    2012-10-01

    This study investigates the potential role of electric vehicles in an electricity network with a high contribution from variable generation such as wind power. Electric vehicles are modelled to provide demand management through flexible charging requirements and energy balancing for the network. Balancing applications include both demand balancing and vehicle-to-grid discharging. The study is configured to represent the UK grid, with balancing requirements derived from wind generation calculated from weather station wind speeds on the supply side and from National Grid data on the demand side. The simulation models 1000 individual vehicle entities to represent the behaviour of larger numbers of vehicles. A stochastic trip generation profile is used to generate realistic journey characteristics, whilst a market pricing model allows charging and balancing decisions to be based on realistic market price conditions. The simulation has been tested with wind generation capacities representing up to 30% of UK consumption. Results show significant improvements to load-following conditions with the introduction of electric vehicles, suggesting that they could substantially facilitate the uptake of intermittent renewable generation. Electric vehicle owners would benefit from flexible charging and selling tariffs, with the majority of revenue derived from vehicle-to-grid participation in balancing markets.

  3. Research on unit commitment with large-scale wind power connected power system

    NASA Astrophysics Data System (ADS)

    Jiao, Ran; Zhang, Baoqun; Chi, Zhongjun; Gong, Cheng; Ma, Longfei; Yang, Bing

    2017-01-01

    Large-scale integration of wind power generators into the power grid brings severe challenges to power system economic dispatch due to the stochastic volatility of wind. Unit commitment including wind farms is analyzed in two parts: modeling and solution methods. The structures and characteristics of existing approaches are summarized after classifying them according to their objective functions and constraints. Finally, the open issues and possible directions of future research and development are discussed, which must adapt to the requirements of the electricity market, energy-saving generation dispatch and the smart grid, providing a reference for researchers and practitioners in this field.

  4. On estimating gravity anomalies - A comparison of least squares collocation with conventional least squares techniques

    NASA Technical Reports Server (NTRS)

    Argentiero, P.; Lowrey, B.

    1977-01-01

    The least squares collocation algorithm for estimating gravity anomalies from geodetic data is shown to be an application of the well known regression equations which provide the mean and covariance of a random vector (gravity anomalies) given a realization of a correlated random vector (geodetic data). It is also shown that the collocation solution for gravity anomalies is equivalent to the conventional least-squares-Stokes' function solution when the conventional solution utilizes properly weighted zero a priori estimates. The mathematical and physical assumptions underlying the least squares collocation estimator are described.
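
    In matrix form, the regression estimator referred to above is ŝ = C_st (C_tt + C_nn)⁻¹ t, the conditional mean of the signal s given noisy correlated data t; a direct numpy transcription, with the covariance blocks assumed given:

        import numpy as np

        def lsc_estimate(C_st, C_tt, C_nn, t):
            """Least-squares collocation / conditional-mean estimate:
            s_hat = C_st (C_tt + C_nn)^{-1} t, with C_nn the data noise covariance."""
            return C_st @ np.linalg.solve(C_tt + C_nn, t)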

  5. Impact of a Stochastic Parameterization Scheme on El Nino-Southern Oscillation in the Community Climate System Model

    NASA Astrophysics Data System (ADS)

    Christensen, H. M.; Berner, J.; Sardeshmukh, P. D.

    2017-12-01

    Stochastic parameterizations have been used for more than a decade in atmospheric models. They provide a way to represent model uncertainty through representing the variability of unresolved sub-grid processes, and have been shown to have a beneficial effect on the spread and mean state for medium- and extended-range forecasts. There is increasing evidence that stochastic parameterization of unresolved processes can improve the bias in mean and variability, e.g. by introducing a noise-induced drift (nonlinear rectification), and by changing the residence time and structure of flow regimes. We present results showing the impact of including the Stochastically Perturbed Parameterization Tendencies scheme (SPPT) in coupled runs of the National Center for Atmospheric Research (NCAR) Community Atmosphere Model, version 4 (CAM4) with historical forcing. SPPT results in a significant improvement in the representation of the El Nino-Southern Oscillation in CAM4, improving the power spectrum, as well as both the inter- and intra-annual variability of tropical pacific sea surface temperatures. We use a Linear Inverse Modelling framework to gain insight into the mechanisms by which SPPT has improved ENSO-variability.

  6. Limitations and tradeoffs in synchronization of large-scale networks with uncertain links

    PubMed Central

    Diwadkar, Amit; Vaidya, Umesh

    2016-01-01

    The synchronization of nonlinear systems connected over large-scale networks has gained popularity in a variety of applications, such as power grids, sensor networks, and biology. Stochastic uncertainty in the interconnections is a ubiquitous phenomenon observed in these physical and biological networks. We provide a size-independent network sufficient condition for the synchronization of scalar nonlinear systems with stochastic linear interactions over large-scale networks. This sufficient condition, expressed in terms of nonlinear dynamics, the Laplacian eigenvalues of the nominal interconnections, and the variance and location of the stochastic uncertainty, allows us to define a synchronization margin. We provide an analytical characterization of important trade-offs between the internal nonlinear dynamics, network topology, and uncertainty in synchronization. For nearest neighbour networks, the existence of an optimal number of neighbours with a maximum synchronization margin is demonstrated. An analytical formula for the optimal gain that produces the maximum synchronization margin allows us to compare the synchronization properties of various complex network topologies. PMID:27067994

  7. On the Effect of Gender and Years of Instruction on Iranian EFL Learners' Collocational Competence

    ERIC Educational Resources Information Center

    Ganji, Mansoor

    2012-01-01

    This study investigates Iranian EFL learners' knowledge of lexical collocations at three academic levels: freshmen, sophomores, and juniors. The participants were forty-three English majors doing their B.A. in English Translation studies at Chabahar Maritime University. They took a 50-item fill-in-the-blank test of lexical collocations. The…

  8. Iranian Pre-University Student's Retention of Collocations: Implicit Exposure or Explicit Instruction

    ERIC Educational Resources Information Center

    Gheisari, Nouzar; Yousofi, Nouroldin

    2016-01-01

    The effectiveness of different methods of teaching collocational expressions in ESL/EFL educational contexts has been a point of debate for more than two decades, with some believing in explicit and others in implicit instruction of collocations. In this regard, the present study aimed at finding out which kind of instruction is more…

  9. Formulaic Language and Collocations in German Essays: From Corpus-Driven Data to Corpus-Based Materials

    ERIC Educational Resources Information Center

    Krummes, Cedric; Ensslin, Astrid

    2015-01-01

    Whereas there exists a plethora of research on collocations and formulaic language in English, this article contributes towards a somewhat less developed area: the understanding and teaching of formulaic language in German as a foreign language. It analyses formulaic sequences and collocations in German writing (corpus-driven) and provides modern…

  10. Symmetrical and Asymmetrical Scaffolding of L2 Collocations in the Context of Concordancing

    ERIC Educational Resources Information Center

    Rezaee, Abbas Ali; Marefat, Hamideh; Saeedakhtar, Afsaneh

    2015-01-01

    Collocational competence is recognized to be integral to native-like L2 performance, and concordancing can be of assistance in gaining this competence. This study reports on an investigation into the effect of symmetrical and asymmetrical scaffolding on the collocational competence of Iranian intermediate learners of English in the context of…

  11. Profiling the Collocation Use in ELT Textbooks and Learner Writing

    ERIC Educational Resources Information Center

    Tsai, Kuei-Ju

    2015-01-01

    The present study investigates the collocational profiles of (1) three series of graded textbooks for English as a foreign language (EFL) commonly used in Taiwan, (2) the written productions of EFL learners, and (3) the written productions of native speakers (NS) of English. These texts were examined against a purpose-built collocation list. Based…

  12. Learning and Teaching L2 Collocations: Insights from Research

    ERIC Educational Resources Information Center

    Szudarski, Pawel

    2017-01-01

    The aim of this article is to present and summarize the main research findings in the area of learning and teaching second language (L2) collocations. Being a large part of naturally occurring language, collocations and other types of multiword units (e.g., idioms, phrasal verbs, lexical bundles) have been identified as important aspects of L2…

  13. Time-Spectral Rotorcraft Simulations on Overset Grids

    NASA Technical Reports Server (NTRS)

    Leffell, Joshua I.; Murman, Scott M.; Pulliam, Thomas H.

    2014-01-01

    The Time-Spectral method is derived as a Fourier collocation scheme and applied to NASA's overset Reynolds-averaged Navier-Stokes (RANS) solver OVERFLOW. The paper outlines the Time-Spectral OVERFLOW implementation. Successful low-speed laminar plunging NACA 0012 airfoil simulations demonstrate the capability of the Time-Spectral method to resolve the highly vortical wakes typical of more expensive three-dimensional rotorcraft configurations. Dealiasing, in the form of spectral vanishing viscosity (SVV), facilitates the convergence of Time-Spectral calculations of high-frequency flows. Finally, simulations of the isolated V-22 Osprey tiltrotor for both hover and forward (edgewise) flight validate the three-dimensional Time-Spectral OVERFLOW implementation. The Time-Spectral hover simulation matches the time-accurate calculation using a single harmonic. Significantly more temporal modes and SVV are required to accurately compute the forward flight case because of its more active, high-frequency wake.
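
    As a minimal sketch of the Fourier collocation idea underlying the Time-Spectral method, the snippet below differentiates a time-periodic signal in Fourier space, which is exact for resolved harmonics; the signal and the number of collocation points are illustrative, not taken from the paper:

      import numpy as np

      # Fourier collocation time derivative over one period.
      N = 16                                    # number of time collocation points
      t = 2 * np.pi * np.arange(N) / N          # collocation points over one period
      u = np.sin(3 * t)                         # sample periodic "solution"

      k = np.fft.fftfreq(N, d=1.0 / N)          # integer wavenumbers
      dudt = np.fft.ifft(1j * k * np.fft.fft(u)).real

      assert np.allclose(dudt, 3 * np.cos(3 * t))   # matches the analytic derivative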

  14. A new sampling scheme for developing metamodels with the zeros of Chebyshev polynomials

    NASA Astrophysics Data System (ADS)

    Wu, Jinglai; Luo, Zhen; Zhang, Nong; Zhang, Yunqing

    2015-09-01

    The accuracy of metamodelling is determined by both the sampling and approximation. This article proposes a new sampling method based on the zeros of Chebyshev polynomials to capture the sampling information effectively. First, the zeros of one-dimensional Chebyshev polynomials are applied to construct Chebyshev tensor product (CTP) sampling, and the CTP is then used to construct high-order multi-dimensional metamodels using the 'hypercube' polynomials. Secondly, the CTP sampling is further enhanced to develop Chebyshev collocation method (CCM) sampling, to construct the 'simplex' polynomials. The samples of CCM are randomly and directly chosen from the CTP samples. Two widely studied sampling methods, namely the Smolyak sparse grid and Hammersley, are used to demonstrate the effectiveness of the proposed sampling method. Several numerical examples are utilized to validate the approximation accuracy of the proposed metamodel under different dimensions.
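
    A minimal sketch of CTP sampling as described above, assuming the standard closed form for the zeros of the first-kind Chebyshev polynomial on [-1, 1]; the function names and the choice n = 5, d = 2 are illustrative:

      import numpy as np
      from itertools import product

      def chebyshev_zeros(n):
          """Zeros of T_n(x): x_k = cos((2k - 1) * pi / (2n)), k = 1..n."""
          k = np.arange(1, n + 1)
          return np.cos((2 * k - 1) * np.pi / (2 * n))

      def ctp_samples(n, d):
          """All n**d tensor-product combinations of the 1-D Chebyshev zeros."""
          nodes = chebyshev_zeros(n)
          return np.array(list(product(nodes, repeat=d)))

      samples = ctp_samples(n=5, d=2)           # 25 two-dimensional sample points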

  15. A Cartesian Adaptive Level Set Method for Two-Phase Flows

    NASA Technical Reports Server (NTRS)

    Ham, F.; Young, Y.-N.

    2003-01-01

    In the present contribution we develop a level set method based on local anisotropic Cartesian adaptation as described in Ham et al. (2002). Such an approach should allow for the smallest possible Cartesian grid capable of resolving a given flow. The remainder of the paper is organized as follows. In section 2 the level set formulation for free surface calculations is presented and its strengths and weaknesses relative to the other free surface methods reviewed. In section 3 the collocated numerical method is described. In section 4 the method is validated by solving the 2D and 3D drop oscillation problem. In section 5 we present some results from more complex cases including the 3D drop breakup in an impulsively accelerated free stream, and the 3D immiscible Rayleigh-Taylor instability. Conclusions are given in section 6.

  16. High performance GPU processing for inversion using uniform grid searches

    NASA Astrophysics Data System (ADS)

    Venetis, Ioannis E.; Saltogianni, Vasso; Stiros, Stathis; Gallopoulos, Efstratios

    2017-04-01

    Many geophysical problems are described by redundant, highly non-linear systems of equations whose constant terms derive from measurements and hence represent stochastic variables. Solution (inversion) of such problems is based on numerical optimization methods, using Monte Carlo sampling or exhaustive searches in cases of two or even three "free" unknown variables. Recently the TOPological INVersion (TOPINV) algorithm, a grid search-based technique in the R^n space, has been proposed. TOPINV is not based on the minimization of a certain cost function and involves only forward computations, hence avoiding computational errors. The basic concept is to transform the observation equations into inequalities on the basis of an optimization parameter k and of their standard errors, and through repeated "scans" of n-dimensional search grids for decreasing values of k to identify the optimal clusters of gridpoints which satisfy the observation inequalities and by definition contain the "true" solution. Stochastic optimal solutions and their variance-covariance matrices are then computed as first and second statistical moments. Such exhaustive uniform searches produce an excessive computational load and are extremely time-consuming on common CPU-based computers. An alternative is to use a GPU-based computing platform, which nowadays is affordable to the research community and provides much higher computing performance. Implementing TOPINV in the CUDA programming language allows the investigation of the attained speedup in execution time on such a high-performance platform. Based on synthetic data we compared the execution times required for two typical geophysical problems, modeling magma sources and seismic faults, described with up to 18 unknown variables, on both CPU/FORTRAN and GPU/CUDA platforms. The same problems were solved on both platforms for several sizes of search grids (up to 10^12 gridpoints) and numbers of unknown variables, and execution time was recorded as a function of grid dimension for each problem. Results indicate an average speedup in calculations by a factor of 100 on the GPU platform; for example, problems with 10^12 gridpoints require less than two hours instead of several days on conventional desktop computers. Such a speedup encourages the application of TOPINV on high-performance platforms, such as GPUs, in cases where near-real-time decisions are necessary, for example finite-fault modeling to identify possible tsunami sources.
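
    A minimal sketch of the TOPINV concept described above (observation equations turned into inequalities, repeated scans for decreasing k, and moments of the surviving cluster); the forward model, data, and errors are invented placeholders, and no GPU/CUDA specifics are shown:

      import numpy as np

      # TOPINV-style exhaustive grid search (serial CPU sketch; the paper's
      # GPU/CUDA implementation parallelizes the same scan).
      def g(x):                                 # assumed forward model, R^2 -> R^3
          return np.stack([x[..., 0] + x[..., 1],
                           x[..., 0] * x[..., 1],
                           x[..., 0] - 2 * x[..., 1]], axis=-1)

      d = np.array([3.0, 2.0, -3.0])            # "observations", consistent with x = (1, 2)
      sigma = np.array([0.1, 0.1, 0.1])         # standard errors of the observations

      xs, ys = np.meshgrid(np.linspace(0, 3, 301), np.linspace(0, 3, 301))
      grid = np.stack([xs, ys], axis=-1)        # uniform 2-D search grid

      for k in (4.0, 2.0, 1.0):                 # repeated scans, decreasing k
          ok = np.all(np.abs(g(grid) - d) <= k * sigma, axis=-1)
          cluster = grid[ok]                    # gridpoints satisfying all inequalities
          if len(cluster) < 2:
              break
          mean = cluster.mean(axis=0)           # first moment: stochastic optimal solution
          cov = np.cov(cluster.T)               # second moment: variance-covariance matrix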

  17. Addressing model uncertainty through stochastic parameter perturbations within the High Resolution Rapid Refresh (HRRR) ensemble

    NASA Astrophysics Data System (ADS)

    Wolff, J.; Jankov, I.; Beck, J.; Carson, L.; Frimel, J.; Harrold, M.; Jiang, H.

    2016-12-01

    It is well known that global and regional numerical weather prediction ensemble systems are under-dispersive, producing unreliable and overconfident ensemble forecasts. Typical approaches to alleviate this problem include the use of multiple dynamic cores, multiple physics suite configurations, or a combination of the two. While these approaches may produce desirable results, they have practical and theoretical deficiencies and are more difficult and costly to maintain. An active area of research that promotes a more unified and sustainable system for addressing the deficiencies in ensemble modeling is the use of stochastic physics to represent model-related uncertainty. Stochastic approaches include Stochastic Parameter Perturbations (SPP), Stochastic Kinetic Energy Backscatter (SKEB), Stochastic Perturbation of Physics Tendencies (SPPT), or some combination of all three. The focus of this study is to assess the model performance within a convection-permitting ensemble at 3-km grid spacing across the Contiguous United States (CONUS) when using stochastic approaches. For this purpose, the test utilized a single physics suite configuration based on the operational High-Resolution Rapid Refresh (HRRR) model, with ensemble members produced by employing stochastic methods. Parameter perturbations were employed in the Rapid Update Cycle (RUC) land surface model and Mellor-Yamada-Nakanishi-Niino (MYNN) planetary boundary layer scheme. Results will be presented in terms of bias, error, spread, skill, accuracy, reliability, and sharpness using the Model Evaluation Tools (MET) verification package. Due to the high level of complexity of running a frequently updating (hourly), high spatial resolution (3 km), large domain (CONUS) ensemble system, extensive high performance computing (HPC) resources were needed to meet this objective. Supercomputing resources were provided through the National Center for Atmospheric Research (NCAR) Strategic Capability (NSC) project support, allowing for a more extensive set of tests over multiple seasons, consequently leading to more robust results. Through the use of these stochastic innovations and powerful supercomputing at NCAR, further insights and advancements in ensemble forecasting at convection-permitting scales will be possible.

  18. The Statistical Differences Between the Gridded Temperature Datasets, and its Implications for Stochastic Modelling

    NASA Astrophysics Data System (ADS)

    Fredriksen, H. B.; Løvsletten, O.; Rypdal, M.; Rypdal, K.

    2014-12-01

    Several research groups around the world collect instrumental temperature data and combine them in different ways to obtain global gridded temperature fields. The three best-known datasets are HadCRUT4, produced by the Climatic Research Unit and the Met Office Hadley Centre in the UK, one produced by NASA GISS, and one produced by NOAA. Recently Berkeley Earth has also developed a gridded dataset. All four will be compared in our analysis. The statistical properties we will focus on are the standard deviation and the Hurst exponent. These two parameters are sufficient to describe the temperatures as long-range memory stochastic processes; the standard deviation describes the general fluctuation level, while the Hurst exponent relates the strength of the long-term variability to the strength of the short-term variability. A higher Hurst exponent means that the slow variations are stronger compared to the fast, and that the autocovariance function will have a stronger tail. Hence the Hurst exponent gives us information about the persistence or memory of the process. We make use of these data to show that data averaged over a larger area exhibit higher Hurst exponents and lower variance than data averaged over a smaller area, which provides information about the relationship between temporal and spatial correlations of the temperature fluctuations. Interpolation in space has some similarities with averaging over space, although interpolation is more weighted towards the measurement locations. We demonstrate that the degree of spatial interpolation used can explain some differences observed between the variances and memory exponents computed from the various datasets.
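
    As a minimal sketch of estimating the two parameters discussed above, the aggregated-variance method below infers the Hurst exponent from how the variance of block means scales with block size (var ~ m**(2H - 2)); the synthetic white-noise input, for which H should come out near 0.5, is an illustrative stand-in for a gridded temperature series:

      import numpy as np

      rng = np.random.default_rng(1)
      x = rng.standard_normal(2**14)            # replace with a gridded temperature series

      ms = np.array([8, 16, 32, 64, 128, 256])  # block sizes
      var_m = []
      for m in ms:
          n_blocks = x.size // m
          means = x[: n_blocks * m].reshape(n_blocks, m).mean(axis=1)
          var_m.append(means.var())             # variance of the block means

      slope = np.polyfit(np.log(ms), np.log(var_m), 1)[0]
      H = 1 + slope / 2                         # H = 0.5 for white noise, H > 0.5 for persistence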

  19. Optimal policies for simultaneous energy consumption and ancillary service provision for flexible loads under stochastic prices and no capacity reservation constraint

    NASA Astrophysics Data System (ADS)

    Kefayati, Mahdi; Baldick, Ross

    2015-07-01

    Flexible loads, i.e. loads whose power trajectory is not bound to a single prescribed one, constitute a sizable portion of current and future electric demand. This flexibility can be used to improve the performance of the grid, should the right incentives be in place. In this paper, we consider the optimal decision making problem faced by a flexible load, demanding a certain amount of energy over its availability period, subject to rate constraints. The load is also capable of providing ancillary services (AS) by decreasing or increasing its consumption in response to signals from the independent system operator (ISO). Under arbitrarily distributed and correlated Markovian energy and AS prices, we obtain the optimal policy for minimising expected total cost, which includes the cost of energy and the benefits from AS provision, assuming no capacity reservation requirement for AS provision. We also prove that the optimal policy has a multi-threshold form and can be computed, stored and operated efficiently. We further study the effectiveness of our proposed optimal policy and its impact on the grid. We show that, while optimal simultaneous consumption and AS provision under real-time stochastic prices are achievable with acceptable computational burden, the impact of adopting such real-time pricing schemes on the network might not be as good as suggested by the majority of the existing literature. In fact, we show that such price-responsive loads are likely to induce peak-to-average ratios much higher than those observed in current distribution networks and to adversely affect the grid.

  20. Explosion Source Location Study Using Collocated Acoustic and Seismic Networks in Israel

    NASA Astrophysics Data System (ADS)

    Pinsky, V.; Gitterman, Y.; Arrowsmith, S.; Ben-Horin, Y.

    2013-12-01

    We explore a joint analysis of seismic and infrasonic signals to improve automatic monitoring of small local/regional events, such as construction and quarry blasts, military chemical explosions, sonic booms, etc., using collocated seismic and infrasonic networks recently built in Israel (ISIN) in the frame of a project sponsored by the Bi-national USA-Israel Science Foundation (BSF). The general target is to create an automatic system which will provide detection, location and identification of explosions in a real-time or close-to-real-time manner. At the moment the network comprises 15 stations hosting a microphone and a seismometer (or accelerometer), operated by the Geophysical Institute of Israel (GII), plus two infrasonic arrays operated by the National Data Center, Soreq: IOB in the south (Negev desert) and IMA in the north of Israel (Upper Galilee), collocated with the IMS seismic array MMAI. The study utilizes a ground-truth database of numerous Rotem phosphate quarry blasts, a number of controlled explosions for demolition of outdated ammunition, and experimental surface explosions for structure-protection research at the Sayarim Military Range. A special event, comprising four military explosions in a neighboring country, which provided both strong seismic (up to 400 km) and infrasound waves (up to 300 km), is also analyzed. For all of these events the ground-truth coordinates and/or the results of seismic location by the Israel Seismic Network (ISN) have been provided. For automatic event detection and phase picking we tested a new recursive picker based on a statistically optimal detector, and the results were compared to the manual picks. Several location techniques have been tested using the ground-truth event recordings, and the preliminary results have been compared to the ground-truth locations: 1) a number of events have been located as the intersection of azimuths estimated using the wide-band F-K analysis technique applied to the infrasonic phases of the two distant arrays; 2) a standard robust grid-search location procedure based on phase picks and a constant celerity for a phase (tropospheric or stratospheric) was applied; 3) a joint coordinate grid-search procedure using array waveforms and phase picks was tested; 4) the Bayesian Infrasonic Source Localization (BISL) method, incorporating semi-empirical model-based prior information, was modified for an array+network configuration and applied to the ground-truth events. For this purpose we accumulated data from former observations of air-to-ground infrasonic phases to compute station-specific ground-truth celerity-range histograms (ssgtCRH) and/or model-based CRH (mbCRH), which substantially improve the location results. For building the mbCRH, local meteo-data and ray-tracing modeling in three available azimuth ranges, accounting for seasonal variations in wind directivity (quadrants North: 315-45, South: 135-225, East: 45-135), have been used.
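
    A minimal sketch of location technique 2) above, a constant-celerity grid search over trial source positions with a least-squares origin time; the station geometry, picks, and celerity value are invented placeholders:

      import numpy as np

      c = 0.30                                  # assumed stratospheric celerity, km/s
      stations = np.array([[0.0, 50.0], [40.0, -10.0], [-30.0, 20.0]])   # km
      picks = np.array([210.0, 180.0, 250.0])   # observed arrival times, s

      xs, ys = np.meshgrid(np.linspace(-100, 100, 201), np.linspace(-100, 100, 201))
      best = (np.inf, None)
      for x, y in zip(xs.ravel(), ys.ravel()):
          r = np.hypot(stations[:, 0] - x, stations[:, 1] - y)
          t_pred = r / c                        # predicted propagation times
          t0 = np.mean(picks - t_pred)          # least-squares origin time
          misfit = np.sum((picks - t0 - t_pred) ** 2)
          if misfit < best[0]:
              best = (misfit, (x, y, t0))       # keep the best-fitting gridpoint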

  1. Optimal Operation of Energy Storage in Power Transmission and Distribution

    NASA Astrophysics Data System (ADS)

    Akhavan Hejazi, Seyed Hossein

    In this thesis, we investigate optimal operation of energy storage units in power transmission and distribution grids. At the transmission level, we investigate the problem where an investor-owned, independently-operated energy storage system seeks to offer energy and ancillary services in the day-ahead and real-time markets. We specifically consider the case where a significant portion of the power generated in the grid is from renewable energy resources and there exists significant uncertainty in system operation. In this regard, we formulate a stochastic programming framework to choose optimal energy and reserve bids for the storage units that takes into account the fluctuating nature of the market prices due to the randomness in the renewable power generation availability. At the distribution level, we develop a comprehensive data set to model various stochastic factors on power distribution networks, with focus on networks that have high penetration of electric vehicle charging load and distributed renewable generation. Furthermore, we develop a data-driven stochastic model for energy storage operation at the distribution level, where the distribution of nodal voltage and line power flow are modelled as stochastic functions of the energy storage unit's charge and discharge schedules. In particular, we develop new closed-form stochastic models for such key operational parameters in the system. Our approach is analytical and allows formulating tractable optimization problems. Yet, it does not involve any restricting assumption on the distribution of random parameters, hence, it results in accurate modeling of uncertainties. By considering the specific characteristics of random variables, such as their statistical dependencies and often irregularly-shaped probability distributions, we propose a non-parametric chance-constrained optimization approach to operate and plan energy storage units in power distribution grids. In the proposed stochastic optimization, we consider uncertainty from various elements, such as solar photovoltaics, electric vehicle chargers, and residential baseloads, in the form of discrete probability functions. In the last part of this thesis we address some other resources and concepts for enhancing the operation of power distribution and transmission systems. In particular, we propose a new framework to determine the best sites, sizes, and optimal payment incentives under special contracts for committed-type DG projects to offset distribution network investment costs. In this framework, the aim is to allocate DGs such that the profit gained by the distribution company is maximized while each DG unit's individual profit is also taken into account to assure that private DG investment remains economical.

  2. The Challenge of English Language Collocation Learning in an ES/FL Environment: PRC Students in Singapore

    ERIC Educational Resources Information Center

    Ying, Yang

    2015-01-01

    This study aimed to seek an in-depth understanding of English collocation learning and the development of learner autonomy through investigating a group of English as a Second Language (ESL) learners' perspectives and practices in their learning of English collocations using an AWARE approach. A group of 20 PRC students learning English in…

  3. An Automatic Collocation Writing Assistant for Taiwanese EFL Learners: A Case of Corpus-Based NLP Technology

    ERIC Educational Resources Information Center

    Chang, Yu-Chia; Chang, Jason S.; Chen, Hao-Jan; Liou, Hsien-Chin

    2008-01-01

    Previous work in the literature reveals that EFL learners are deficient in collocations, which are a hallmark of near-native fluency in learners' writing. Among different types of collocations, the verb-noun (V-N) type was found to be particularly difficult to master, and learners' first language was also found to heavily influence their collocation…

  4. First and Second Generation New York City Bilinguals: What Is the Role of Input in Their Collocational Knowledge of English and Spanish?

    ERIC Educational Resources Information Center

    Heidrick, Ingrid T.

    2017-01-01

    This study compares monolinguals and different kinds of bilinguals with respect to their knowledge of the type of lexical phenomenon known as collocation. Collocations are word combinations that speakers use recurrently, forming the basis of conventionalized lexical patterns that are shared by a linguistic community. Examples of collocations…

  5. Stochastic and Perturbed Parameter Representations of Model Uncertainty in Convection Parameterization

    NASA Astrophysics Data System (ADS)

    Christensen, H. M.; Moroz, I.; Palmer, T.

    2015-12-01

    It is now acknowledged that representing model uncertainty in atmospheric simulators is essential for the production of reliable probabilistic ensemble forecasts, and a number of different techniques have been proposed for this purpose. Stochastic convection parameterization schemes use random numbers to represent the difference between a deterministic parameterization scheme and the true atmosphere, accounting for the unresolved sub-grid-scale variability associated with convective clouds. An alternative approach varies the values of poorly constrained physical parameters in the model to represent the uncertainty in these parameters. This study presents new perturbed parameter schemes for use in the European Centre for Medium Range Weather Forecasts (ECMWF) convection scheme. Two types of scheme are developed and implemented. Both schemes represent the joint uncertainty in four of the parameters in the convection parametrisation scheme, which was estimated using the Ensemble Prediction and Parameter Estimation System (EPPES). The first scheme developed is a fixed perturbed parameter scheme, where the values of uncertain parameters are changed between ensemble members, but held constant over the duration of the forecast. The second is a stochastically varying perturbed parameter scheme. The performance of these schemes was compared to the ECMWF operational stochastic scheme, Stochastically Perturbed Parametrisation Tendencies (SPPT), and to a model which does not represent uncertainty in convection. The skill of probabilistic forecasts made using the different models was evaluated. While the perturbed parameter schemes improve on the stochastic parametrisation in some regards, the SPPT scheme outperforms the perturbed parameter approaches when considering forecast variables that are particularly sensitive to convection. Overall, SPPT schemes are the most skilful representations of model uncertainty due to convection parametrisation. Reference: H. M. Christensen, I. M. Moroz, and T. N. Palmer, 2015: Stochastic and Perturbed Parameter Representations of Model Uncertainty in Convection Parameterization. J. Atmos. Sci., 72, 2525-2544.

  6. Theoretical and numerical investigations towards a new geoid model for the Mediterranean Sea - The GEOMED2 project

    NASA Astrophysics Data System (ADS)

    Barzaghi, Riccardo; Vergos, Georgios S.; Albertella, Alberta; Carrion, Daniela; Cazzaniga, Noemi; Tziavos, Ilias N.; Grigoriadis, Vassilios N.; Natsiopoulos, Dimitrios A.; Bruinsma, Sean; Bonvalot, Sylvain; Lequentrec-Lalancette, Marie-Françoise; Bonnefond, Pascal; Knudsen, Per; Andersen, Ole; Simav, Mehmet; Yildiz, Hasan; Basic, Tomislav; Gil, Antonio J.

    2016-04-01

    The unique features of the Mediterranean Sea, with its large gravity variations, complex circulation, and geodynamic peculiarities have always constituted this semi-enclosed sea area as a unique geodetic, geodynamic and ocean laboratory. The main scope of the GEOMED 2 project is the collection of all available gravity, topography/bathymetry and satellite altimetry data in order to improve the representation of the marine geoid and estimate the Mean Dynamic sea surface Topography (MDT) and the circulation with higher accuracy and resolution. Within GEOMED 2, the data employed are land and marine gravity data, GOCE/GRACE based Global Geopotential Models and a combination after proper validation of MISTRAL, HOMONIM and SRTM/bathymetry terrain models. In this work we present the results achieved for an inner test region spanning the Adriatic Sea area, bounded between 36° < φ < 48° and 10° < λ < 22°. Within this test region, the available terrain/bathymetry models have been evaluated in terms of their contribution to geoid modeling, the processing methodologies have been tested in terms of the provided geoid accuracy and finally some preliminary results on the MDT determination have been compiled. The aforementioned will serve as a guide for the Mediterranean-wide marine geoid estimation. The processing methodology was based on the well-known remove-compute-restore method following both stochastic and spectral methods. Classic least-squares collocation (LSC) with errors has been employed, along with fast Fourier transform (FFT)-based techniques, the Least-Squares Modification of Stokes' Formula (KTH) method and windowed LSC. All methods have been evaluated against in-situ collocated GPS/Levelling geoid heights, using EGM2008 as a reference, in order to conclude on the one(s) to be used for the basin-wide geoid evaluation.

  7. GRID: a high-resolution protein structure refinement algorithm.

    PubMed

    Chitsaz, Mohsen; Mayo, Stephen L

    2013-03-05

    The energy-based refinement of protein structures generated by fold prediction algorithms to atomic-level accuracy remains a major challenge in structural biology. Energy-based refinement is mainly dependent on two components: (1) sufficiently accurate force fields, and (2) efficient conformational space search algorithms. Focusing on the latter, we developed a high-resolution refinement algorithm called GRID. It takes a three-dimensional protein structure as input and, using an all-atom force field, attempts to improve the energy of the structure by systematically perturbing backbone dihedrals and side-chain rotamer conformations. We compare GRID to Backrub, a stochastic algorithm that has been shown to predict a significant fraction of the conformational changes that occur with point mutations. We applied GRID and Backrub to 10 high-resolution (≤ 2.8 Å) crystal structures from the Protein Data Bank and measured the energy improvements obtained and the computation times required to achieve them. GRID resulted in energy improvements that were significantly better than those attained by Backrub while expending about the same amount of computational resources. GRID resulted in relaxed structures that had slightly higher backbone RMSDs compared to Backrub relative to the starting crystal structures. The average RMSD was 0.25 ± 0.02 Å for GRID versus 0.14 ± 0.04 Å for Backrub. These relatively minor deviations indicate that both algorithms generate structures that retain their original topologies, as expected given the nature of the algorithms.

  8. Collocational Links in the L2 Mental Lexicon and the Influence of L1 Intralexical Knowledge

    ERIC Educational Resources Information Center

    Wolter, Brent; Gyllstad, Henrik

    2011-01-01

    This article assesses the influence of L1 intralexical knowledge on the formation of L2 intralexical collocations. Two tests, a primed lexical decision task (LDT) and a test of receptive collocational knowledge, were administered to a group of non-native speakers (NNSs) (L1 Swedish), with native speakers (NSs) of English serving as controls on the…

  9. A Corpus-Driven Design of a Test for Assessing the ESL Collocational Competence of University Students

    ERIC Educational Resources Information Center

    Jaen, Maria Moreno

    2007-01-01

    This paper reports an assessment of the collocational competence of students of English Linguistics at the University of Granada. This was carried out to meet a two-fold purpose. On the one hand, we aimed to establish a solid corpus-driven approach based upon a systematic and reliable framework for the evaluation of collocational competence in…

  10. A new Downscaling Approach for SMAP, SMOS and ASCAT by predicting sub-grid Soil Moisture Variability based on Soil Texture

    NASA Astrophysics Data System (ADS)

    Montzka, C.; Rötzer, K.; Bogena, H. R.; Vereecken, H.

    2017-12-01

    Improving the coarse spatial resolution of global soil moisture products from SMOS, SMAP and ASCAT is a topic of current interest. Soil texture heterogeneity is known to be one of the main sources of soil moisture spatial variability. A method has been developed that predicts the soil moisture standard deviation as a function of the mean soil moisture based on soil texture information. It is a closed-form expression using stochastic analysis of 1D unsaturated gravitational flow in an infinitely long vertical profile based on the Mualem-van Genuchten model and first-order Taylor expansions. With the recent development of high resolution maps of basic soil properties such as soil texture and bulk density, relevant information to estimate soil moisture variability within a satellite product grid cell is available. Here, we predict for each SMOS, SMAP and ASCAT grid cell the sub-grid soil moisture variability based on the SoilGrids1km data set. We provide a look-up table that indicates the soil moisture standard deviation for any given soil moisture mean. The resulting data set provides important information for downscaling coarse soil moisture observations of the SMOS, SMAP and ASCAT missions. Downscaling SMAP data by a field capacity proxy indicates adequate accuracy of the sub-grid soil moisture patterns.

  11. On estimating gravity anomalies: A comparison of least squares collocation with least squares techniques

    NASA Technical Reports Server (NTRS)

    Argentiero, P.; Lowrey, B.

    1976-01-01

    The least squares collocation algorithm for estimating gravity anomalies from geodetic data is shown to be an application of the well known regression equations which provide the mean and covariance of a random vector (gravity anomalies) given a realization of a correlated random vector (geodetic data). It is also shown that the collocation solution for gravity anomalies is equivalent to the conventional least-squares-Stokes' function solution when the conventional solution utilizes properly weighted zero a priori estimates. The mathematical and physical assumptions underlying the least squares collocation estimator are described, and its numerical properties are compared with the numerical properties of the conventional least squares estimator.

  12. Western Wind Data Set | Grid Modernization | NREL

    Science.gov Websites

    The simulated (SCORE) output in the Western Wind Data Set replicates the stochastic nature of wind power plant output. NREL modeled hysteresis around wind turbine cut-out: at sites where wind speeds are often near cut-out (~25 m/s), raw SCORE output does not replicate the cut-out behavior of turbines such as the Vestas V90, and the hysteresis-corrected SCORE output is an attempt to account for wind turbine hysteresis at cut-out.

  13. Optimizing event selection with the random grid search

    NASA Astrophysics Data System (ADS)

    Bhat, Pushpalatha C.; Prosper, Harrison B.; Sekmen, Sezen; Stewart, Chip

    2018-07-01

    The random grid search (RGS) is a simple, but efficient, stochastic algorithm to find optimal cuts that was developed in the context of the search for the top quark at Fermilab in the mid-1990s. The algorithm, and associated code, have been enhanced recently with the introduction of two new cut types, one of which has been successfully used in searches for supersymmetry at the Large Hadron Collider. The RGS optimization algorithm is described along with the recent developments, which are illustrated with two examples from particle physics. One explores the optimization of the selection of vector boson fusion events in the four-lepton decay mode of the Higgs boson and the other optimizes SUSY searches using boosted objects and the razor variables.
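
    A minimal sketch of the RGS idea, in which the candidate cut points are drawn from the signal sample itself and each is scored on signal and background; the toy two-variable distributions and the s/sqrt(s+b) score are illustrative assumptions, not the published code:

      import numpy as np

      rng = np.random.default_rng(2)
      signal = rng.normal([2.0, 2.0], 1.0, size=(5000, 2))
      background = rng.normal([0.0, 0.0], 1.0, size=(50000, 2))

      best = (-np.inf, None)
      # Each sampled signal event defines a rectangular cut (x > c1, y > c2)
      for cut in signal[rng.choice(len(signal), size=1000, replace=False)]:
          s = np.sum(np.all(signal > cut, axis=1))       # signal passing the cut
          b = np.sum(np.all(background > cut, axis=1))   # background passing the cut
          score = s / np.sqrt(s + b) if s + b > 0 else -np.inf
          if score > best[0]:
              best = (score, cut)                        # keep the best cut point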

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lei, Huan; Yang, Xiu; Zheng, Bin

    Biomolecules exhibit conformational fluctuations near equilibrium states, inducing uncertainty in various biological properties in a dynamic way. We have developed a general method to quantify the uncertainty of target properties induced by conformational fluctuations. Using a generalized polynomial chaos (gPC) expansion, we construct a surrogate model of the target property with respect to varying conformational states. We also propose a method to increase the sparsity of the gPC expansion by defining a set of conformational “active space” random variables. With the increased sparsity, we employ the compressive sensing method to accurately construct the surrogate model. We demonstrate the performance of the surrogate model by evaluating fluctuation-induced uncertainty in solvent-accessible surface area for the bovine trypsin inhibitor protein system and show that the new approach offers more accurate statistical information than standard Monte Carlo approaches. Furthermore, the constructed surrogate model also enables us to directly evaluate the target property under various conformational states, yielding a more accurate response surface than standard sparse grid collocation methods. In particular, the new method provides higher accuracy in high-dimensional systems, such as biomolecules, where sparse grid performance is limited by the accuracy of the computed quantity of interest. Finally, our new framework is generalizable and can be used to investigate the uncertainty of a wide variety of target properties in biomolecular systems.
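
    A minimal sketch of the sparse-gPC construction described above for a single Gaussian variable: fewer samples than basis functions are taken and the coefficients are recovered by l1-regularized least squares (a plain iterative soft-thresholding loop here; a library solver could be substituted). The target function and all sizes are illustrative:

      import numpy as np
      from numpy.polynomial.hermite_e import hermeval

      rng = np.random.default_rng(3)
      P, N = 12, 10                              # gPC order, number of samples (N < P + 1)
      xi = rng.standard_normal(N)                # Gaussian collocation samples
      u = xi**3 - 3 * xi                         # target: exactly He_3(xi), sparse in gPC

      # Measurement matrix of probabilists' Hermite polynomials He_0..He_P
      Phi = np.stack([hermeval(xi, np.eye(P + 1)[k]) for k in range(P + 1)], axis=1)

      c = np.zeros(P + 1)
      step = 1.0 / np.linalg.norm(Phi, 2) ** 2   # ISTA step size
      lam = 1e-3                                 # sparsity weight (assumed)
      for _ in range(20000):                     # iterative soft thresholding (ISTA)
          c = c + step * Phi.T @ (u - Phi @ c)
          c = np.sign(c) * np.maximum(np.abs(c) - step * lam, 0.0)
      # c should concentrate on index 3, the coefficient of He_3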

  15. Modeling nonlinear ultrasound propagation in heterogeneous media with power law absorption using a k-space pseudospectral method.

    PubMed

    Treeby, Bradley E; Jaros, Jiri; Rendell, Alistair P; Cox, B T

    2012-06-01

    The simulation of nonlinear ultrasound propagation through tissue-realistic media has a wide range of practical applications. However, this is a computationally difficult problem due to the large size of the computational domain compared to the acoustic wavelength. Here, the k-space pseudospectral method is used to reduce the number of grid points required per wavelength for accurate simulations. The model is based on coupled first-order acoustic equations valid for nonlinear wave propagation in heterogeneous media with power law absorption. These are derived from the equations of fluid mechanics and include a pressure-density relation that incorporates the effects of nonlinearity, power law absorption, and medium heterogeneities. The additional terms accounting for convective nonlinearity and power law absorption are expressed as spatial gradients making them efficient to numerically encode. The governing equations are then discretized using a k-space pseudospectral technique in which the spatial gradients are computed using the Fourier-collocation method. This increases the accuracy of the gradient calculation and thus relaxes the requirement for dense computational grids compared to conventional finite difference methods. The accuracy and utility of the developed model is demonstrated via several numerical experiments, including the 3D simulation of the beam pattern from a clinical ultrasound probe.
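
    As a minimal sketch of the k-space pseudospectral gradient described above, the snippet below computes a 1-D spatial derivative by Fourier collocation with a k-space correction factor sinc(c0*k*dt/2); the grid, time step, and sound speed are illustrative:

      import numpy as np

      N, dx, dt, c0 = 256, 1e-4, 2e-8, 1500.0   # grid size, spacing (m), step (s), speed (m/s)
      x = dx * np.arange(N)
      p = np.exp(-((x - x.mean()) / (10 * dx)) ** 2)   # sample pressure field

      k = 2 * np.pi * np.fft.fftfreq(N, d=dx)   # angular wavenumbers
      kappa = np.sinc(c0 * k * dt / (2 * np.pi))       # np.sinc(z) = sin(pi z)/(pi z)
      dpdx = np.fft.ifft(1j * k * kappa * np.fft.fft(p)).real   # corrected spectral gradient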

  16. Ensemble Kalman Filter for Dynamic State Estimation of Power Grids Stochastically Driven by Time-correlated Mechanical Input Power

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rosenthal, William Steven; Tartakovsky, Alex; Huang, Zhenyu

    State and parameter estimation of power transmission networks is important for monitoring power grid operating conditions and analyzing transient stability. Wind power generation depends on fluctuating input power levels, which are correlated in time and contribute to uncertainty in turbine dynamical models. The ensemble Kalman filter (EnKF), a standard state estimation technique, uses a deterministic forecast and does not explicitly model time-correlated noise in parameters such as mechanical input power. However, this uncertainty affects the probability of fault-induced transient instability and increased prediction bias. Here a novel approach is to model input power noise with time-correlated stochastic fluctuations, and integrate them with the network dynamics during the forecast. While the EnKF has been used to calibrate constant parameters in turbine dynamical models, the calibration of a statistical model for a time-correlated parameter has not been investigated. In this study, twin experiments on a standard transmission network test case are used to validate our time-correlated noise model framework for state estimation of unsteady operating conditions and transient stability analysis, and a methodology is proposed for the inference of the mechanical input power time-correlation length parameter using time-series data from PMUs monitoring power dynamics at generator buses.

  17. Ensemble Kalman Filter for Dynamic State Estimation of Power Grids Stochastically Driven by Time-correlated Mechanical Input Power

    DOE PAGES

    Rosenthal, William Steven; Tartakovsky, Alex; Huang, Zhenyu

    2017-10-31

    State and parameter estimation of power transmission networks is important for monitoring power grid operating conditions and analyzing transient stability. Wind power generation depends on fluctuating input power levels, which are correlated in time and contribute to uncertainty in turbine dynamical models. The ensemble Kalman filter (EnKF), a standard state estimation technique, uses a deterministic forecast and does not explicitly model time-correlated noise in parameters such as mechanical input power. However, this uncertainty affects the probability of fault-induced transient instability and increased prediction bias. Here a novel approach is to model input power noise with time-correlated stochastic fluctuations, and integrate them with the network dynamics during the forecast. While the EnKF has been used to calibrate constant parameters in turbine dynamical models, the calibration of a statistical model for a time-correlated parameter has not been investigated. In this study, twin experiments on a standard transmission network test case are used to validate our time-correlated noise model framework for state estimation of unsteady operating conditions and transient stability analysis, and a methodology is proposed for the inference of the mechanical input power time-correlation length parameter using time-series data from PMUs monitoring power dynamics at generator buses.
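
    A minimal sketch of the forecast idea described above: each ensemble member integrates a toy one-machine swing equation while its mechanical input power follows an Ornstein-Uhlenbeck (time-correlated) process; all parameters and the dynamics are illustrative stand-ins for the paper's transmission network test case:

      import numpy as np

      rng = np.random.default_rng(4)
      n_ens, dt, tau, sigma = 50, 0.01, 5.0, 0.05    # tau: correlation length of P_m (assumed)
      P_bar, H, D = 1.0, 3.0, 0.5                    # mean input power, inertia, damping

      delta = rng.normal(0.5, 0.05, n_ens)           # rotor angles (ensemble)
      omega = rng.normal(0.0, 0.01, n_ens)           # rotor speed deviations
      P_m = np.full(n_ens, P_bar)                    # mechanical input power

      def forecast(delta, omega, P_m, n_steps=100):
          a = np.exp(-dt / tau)                      # exact OU discretization factor
          for _ in range(n_steps):
              # time-correlated input power noise, integrated with the dynamics
              P_m = P_bar + a * (P_m - P_bar) + sigma * np.sqrt(1 - a**2) * rng.standard_normal(n_ens)
              # toy one-machine swing dynamics (electrical power ~ sin(delta))
              omega = omega + dt * (P_m - np.sin(delta) - D * omega) / (2 * H)
              delta = delta + dt * omega
          return delta, omega, P_m

      delta, omega, P_m = forecast(delta, omega, P_m)
      # An EnKF analysis step would now update (delta, omega, P_m, tau) from PMU data.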

  18. Application of stochastic inversion in auroral tomography

    NASA Astrophysics Data System (ADS)

    Nygrén, T.; Markkanen, M.; Lehtinen, M.; Kaila, K.

    1996-11-01

    A software package originally developed for satellite radio tomography is briefly introduced and its use in two-dimensional auroral tomography is described. The method is based on stochastic inversion, i.e. finding the most probable values of the unknown volume emission rates once the optical measurements are made using either a scanning photometer or an auroral camera. A set of simulation results is shown for a different number and separations of optical instruments at ground level. It is observed that arcs with a thickness of a few kilometers and separated by a few tens of kilometers are easily reconstructed. The maximum values of the inversion results, however, are often weaker than in the model. The most obvious reason for this is the grid size, which cannot be much smaller than the arc thickness. The grid necessarily generates a spatial averaging effect broadening the arc cross-sections and reducing the peak values. Finally, results from TV-camera observations at Tromsø and Esrange are shown. Although these sites are separated by more than 200 km, arcs close to Tromsø have been successfully reconstructed.
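
    A minimal sketch of the stochastic-inversion step for a linearized tomography problem, computing the most probable emission rates from prior and noise covariances; the projection matrix and covariances are random placeholders for the real instrument geometry:

      import numpy as np

      rng = np.random.default_rng(5)
      n_pix, n_obs = 100, 60
      A = rng.random((n_obs, n_pix))            # line-of-sight integration matrix (assumed)
      C_x = np.eye(n_pix)                       # prior covariance of emission rates (assumed)
      C_e = 0.01 * np.eye(n_obs)                # photometer noise covariance (assumed)

      x_true = rng.random(n_pix)                # synthetic volume emission rates
      y = A @ x_true + rng.multivariate_normal(np.zeros(n_obs), C_e)

      # Most probable (MAP) emission rates given the optical measurements
      lhs = A.T @ np.linalg.solve(C_e, A) + np.linalg.inv(C_x)
      x_map = np.linalg.solve(lhs, A.T @ np.linalg.solve(C_e, y))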

  19. Analysis and Reduction of Complex Networks Under Uncertainty.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ghanem, Roger G

    2014-07-31

    This effort was a collaboration with Youssef Marzouk of MIT, Omar Knio of Duke University (at the time at Johns Hopkins University) and Habib Najm of Sandia National Laboratories. The objective of this effort was to develop the mathematical and algorithmic capacity to analyze complex networks under uncertainty. Of interest were chemical reaction networks and smart grid networks. The statements of work for USC focused on the development of stochastic reduced models for uncertain networks. The USC team was led by Professor Roger Ghanem and consisted of one graduate student and a postdoc. The contributions completed by the USC team consisted of 1) methodology and algorithms to address the eigenvalue problem, a problem of significance in the stability of networks under stochastic perturbations, 2) methodology and algorithms to characterize probability measures on graph structures with random flows. This is an important problem in characterizing random demand (encountered in smart grid) and random degradation (encountered in infrastructure systems), as well as modeling errors in Markov Chains (with ubiquitous relevance!). 3) methodology and algorithms for treating inequalities in uncertain systems. This is an important problem in the context of models for material failure and network flows under uncertainty where conditions of failure or flow are described in the form of inequalities between the state variables.

  1. A real-space stochastic density matrix approach for density functional electronic structure.

    PubMed

    Beck, Thomas L

    2015-12-21

    The recent development of real-space grid methods has led to more efficient, accurate, and adaptable approaches for large-scale electrostatics and density functional electronic structure modeling. With the incorporation of multiscale techniques, linear-scaling real-space solvers are possible for density functional problems if localized orbitals are used to represent the Kohn-Sham energy functional. These methods still suffer from high computational and storage overheads, however, due to extensive matrix operations related to the underlying wave function grid representation. In this paper, an alternative stochastic method is outlined that aims to solve directly for the one-electron density matrix in real space. In order to illustrate aspects of the method, model calculations are performed for simple one-dimensional problems that display some features of the more general problem, such as spatial nodes in the density matrix. This orbital-free approach may prove helpful considering a future involving increasingly parallel computing architectures. Its primary advantage is the near-locality of the random walks, allowing for simultaneous updates of the density matrix in different regions of space partitioned across the processors. In addition, it allows for testing and enforcement of the particle number and idempotency constraints through stabilization of a Feynman-Kac functional integral as opposed to the extensive matrix operations in traditional approaches.

  2. Optimal design and uncertainty quantification in blood flow simulations for congenital heart disease

    NASA Astrophysics Data System (ADS)

    Marsden, Alison

    2009-11-01

    Recent work has demonstrated substantial progress in capabilities for patient-specific cardiovascular flow simulations. Recent advances include increasingly complex geometries, physiological flow conditions, and fluid structure interaction. However, inputs to these simulations, including medical image data, catheter-derived pressures and material properties, can have significant uncertainties associated with them. For simulations to predict clinically useful and reliable output information, it is necessary to quantify the effects of input uncertainties on outputs of interest. In addition, blood flow simulation tools can now be efficiently coupled to shape optimization algorithms for surgery design applications, and these tools should incorporate uncertainty information. We present a unified framework to systematically and efficiently account for uncertainties in simulations using adaptive stochastic collocation. In addition, we present a framework for derivative-free optimization of cardiovascular geometries, and layer these tools to perform optimization under uncertainty. These methods are demonstrated using simulations and surgery optimization to improve hemodynamics in pediatric cardiology applications.
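
    As a minimal sketch of non-intrusive stochastic collocation in the spirit described above (one Gaussian input only; the adaptive, multi-dimensional machinery is not shown), with a toy stand-in for the blood-flow solver:

      import numpy as np

      def simulate(resistance):
          return 120.0 / resistance             # toy scalar output, e.g. a flow rate

      mu, std = 1.5, 0.2                        # uncertain input: resistance ~ N(mu, std^2), assumed
      nodes, weights = np.polynomial.hermite_e.hermegauss(7)   # Gauss-Hermite collocation points

      outputs = np.array([simulate(mu + std * z) for z in nodes])
      w = weights / np.sqrt(2 * np.pi)          # normalize probabilists' Hermite weights
      mean = np.sum(w * outputs)                # output mean under input uncertainty
      var = np.sum(w * (outputs - mean) ** 2)   # output variance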

  3. Spectral analysis of GEOS-3 altimeter data and frequency domain collocation. [to estimate gravity anomalies

    NASA Technical Reports Server (NTRS)

    Eren, K.

    1980-01-01

    The mathematical background in spectral analysis as applied to geodetic applications is summarized. The resolution (cut-off frequency) of the GEOS 3 altimeter data is examined by determining the shortest wavelength (corresponding to the cut-off frequency) recoverable. The data from some 18 profiles are used. The total power (variance) in the sea surface topography with respect to the reference ellipsoid as well as with respect to the GEM-9 surface is computed. A fast inversion algorithm for simple and block Toeplitz matrices and its application to least squares collocation is explained. This algorithm yields a considerable gain in computer time and storage in comparison with conventional least squares collocation. Frequency domain least squares collocation techniques are also introduced and applied to estimating gravity anomalies from GEOS 3 altimeter data. These techniques substantially reduce the computer time and storage requirements associated with conventional least squares collocation. Numerical examples given demonstrate the efficiency and speed of these techniques.
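
    A minimal sketch of why Toeplitz structure pays off in collocation, using SciPy's Levinson-based solver on a stationary covariance; the exponential covariance model and sizes are illustrative:

      import numpy as np
      from scipy.linalg import solve_toeplitz, toeplitz

      n = 512
      col = np.exp(-np.arange(n) / 5.0)         # first column of a stationary covariance matrix
      rhs = np.random.default_rng(6).standard_normal(n)

      # O(n^2) structured solve of C @ coeffs = rhs, versus O(n^3) dense
      coeffs = solve_toeplitz(col, rhs)
      assert np.allclose(coeffs, np.linalg.solve(toeplitz(col), rhs), atol=1e-6)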

  4. High-resolution gravity and geoid models in Tahiti obtained from new airborne and land gravity observations: data fusion by spectral combination

    NASA Astrophysics Data System (ADS)

    Shih, Hsuan-Chang; Hwang, Cheinway; Barriot, Jean-Pierre; Mouyen, Maxime; Corréia, Pascal; Lequeux, Didier; Sichoix, Lydie

    2015-08-01

    For the first time, we carry out an airborne gravity survey and we collect new land gravity data over the islands of Tahiti and Moorea in French Polynesia located in the South Pacific Ocean. The new land gravity data are registered with GPS-derived coordinates, network-adjusted and outlier-edited, resulting in a mean standard error of 17 μGal. A crossover analysis of the airborne gravity data indicates a mean gravity accuracy of 1.7 mGal. New marine gravity around the two islands is derived from Geosat/GM, ERS-1/GM, Jason-1/GM, and Cryosat-2 altimeter data. A new 1-s digital topography model is constructed and is used to compute the topographic gravitational effects. To use EGM08 over Tahiti and Moorea, the optimal degree of spherical harmonic expansion is 1500. The fusion of the gravity datasets is made by the band-limited least-squares collocation, which best integrates datasets of different accuracies and spatial resolutions. The new high-resolution gravity and geoid grids are constructed on a 9-s grid. Assessments of the grids by measurements of ground gravity and geometric geoidal height result in RMS differences of 0.9 mGal and 0.4 cm, respectively. The geoid model allows 1-cm orthometric height determination by GPS and Lidar and yields a consistent height datum for Tahiti and Moorea. The new Bouguer anomalies show gravity highs and lows in the centers and land-sea zones of the two islands, allowing further studies of the density structure and volcanism in the region.

  5. Science Enabling Applications of Gridded Radiances and Products

    NASA Astrophysics Data System (ADS)

    Goldberg, M.; Wolf, W.; Zhou, L.

    2005-12-01

    New generations of hyperspectral sounders and imagers are not only providing vastly improved information to monitor, assess and predict the Earth's environment, they also provide tremendous volumes of data to manage. Key management challenges include data processing, distribution, archiving and utilization. At the NOAA/NESDIS Office of Research and Applications, we have started to address the challenge of utilizing high-volume satellite data by thinning observations and developing gridded datasets from the observations made by the NASA AIRS, AMSU and MODIS instruments. We have developed techniques for intelligent thinning of AIRS data for numerical weather prediction, by selecting the clearest AIRS 14 km field of view within a 3 x 3 array. The selection uses high spatial resolution 1 km MODIS data which are spatially convolved to the AIRS field of view. The MODIS cloud masks and AIRS cloud tests are used to select the clearest field of view. During the real-time processing the data are thinned and gridded to support monitoring, validation and scientific studies. Products from AIRS, which include profiles of temperature, water vapor and ozone and cloud-corrected infrared radiances for more than 2000 channels, are derived from a single AIRS/AMSU field of regard, which is a 3 x 3 array of AIRS footprints (each with a 14 km spatial resolution) collocated with a single AMSU footprint (42 km). One of our key gridded datasets is a daily 3 x 3 latitude/longitude projection which contains the nearest AIRS/AMSU field of regard with respect to the center of the 3 x 3 lat/lon grid. This particular gridded dataset is 1/40 the size of the full resolution data. This gridded dataset is the type of product that can be used to support algorithm validation and improvement. It also provides a very economical approach for reprocessing, testing and improving algorithms for climate studies without having to reprocess the full resolution data stored at the DAAC. For example, on a single CPU workstation, all the AIRS derived products can be computed from a single year of gridded data in 5 days. This relatively short turnaround time, which can be reduced considerably to 3 hours by using a cluster of 40 G5 processors, allows for repeated reprocessing at the PI's home institution before substantial investments are made to reprocess the full resolution data sets archived at the DAAC. In other words, the full resolution data need not be reprocessed until the science community has tested and selected the optimal algorithm on the gridded data. Development and applications of gridded radiances and products will be discussed. The applications can be provided as part of a web-based service.

  6. Optimizing Water Management for Collocated Conventional and Unconventional Reservoirs

    NASA Astrophysics Data System (ADS)

    Reedy, R. C.; Scanlon, B. R.; Walsh, M.

    2016-12-01

    With the U.S. producing much more water than oil from oil and gas reservoirs, managing produced water is becoming a critical issue. Here we quantify water production from collocated conventional and unconventional reservoirs using well-by-well analysis and evaluate various water management strategies using the U.S. Permian Basin as a case study. Water production during the past 15 years in the Permian Basin totaled 55×10^9 barrels (bbl), 95% from wells in conventional reservoirs, resulting in an average water-to-oil ratio of 12 compared to ratios of 2-3 in wells in unconventional reservoirs. Some of this water (~25%) is returned to the reservoir for secondary oil recovery (water flooding) while the remaining water is injected into an average of 18,000 salt water disposal wells. Total water production over the past 15 yr (2000-2015) exceeds water used for hydraulic fracturing by almost 40 times. Analyzing water injection into salt water disposal wells relative to water requirements for hydraulic fracturing at a 5 square mile grid scale based on 2014 data indicates that water disposal exceeds water requirements for hydraulic fracturing throughout most of the play. Reusing/recycling produced water for hydraulic fracturing would reduce sourcing and disposal issues related to hydraulic fracturing. Because shales (unconventional reservoirs) provide the source rocks for many conventional reservoirs, coordinating water management from both conventional and unconventional reservoirs can help resolve issues related to sourcing of water for hydraulic fracturing and disposing of produced water. Reusing/recycling produced water can also help reduce water scarcity concerns in some regions.

  7. Systematic parameter inference in stochastic mesoscopic modeling

    NASA Astrophysics Data System (ADS)

    Lei, Huan; Yang, Xiu; Li, Zhen; Karniadakis, George Em

    2017-02-01

    We propose a method to efficiently determine the optimal coarse-grained force field in mesoscopic stochastic simulations of Newtonian fluid and polymer melt systems modeled by dissipative particle dynamics (DPD) and energy conserving dissipative particle dynamics (eDPD). The response surfaces of various target properties (viscosity, diffusivity, pressure, etc.) with respect to model parameters are constructed based on the generalized polynomial chaos (gPC) expansion using simulation results on sampling points (e.g., individual parameter sets). To alleviate the computational cost to evaluate the target properties, we employ the compressive sensing method to compute the coefficients of the dominant gPC terms given the prior knowledge that the coefficients are "sparse". The proposed method shows comparable accuracy with the standard probabilistic collocation method (PCM) while it imposes a much weaker restriction on the number of the simulation samples, especially for systems with a high-dimensional parametric space. Full access to the response surfaces within the confidence range enables us to infer the optimal force parameters given the desirable values of target properties at the macroscopic scale. Moreover, it enables us to investigate the intrinsic relationship between the model parameters, identify possible degeneracies in the parameter space, and optimize the model by eliminating model redundancies. The proposed method provides an efficient alternative approach for constructing mesoscopic models by inferring model parameters to recover target properties of the physical systems (e.g., from experimental measurements), where those force field parameters and formulation cannot be derived from the microscopic level in a straightforward way.

  8. Entropy Stable Wall Boundary Conditions for the Three-Dimensional Compressible Navier-Stokes Equations

    NASA Technical Reports Server (NTRS)

    Parsani, Matteo; Carpenter, Mark H.; Nielsen, Eric J.

    2015-01-01

    Non-linear entropy stability and a summation-by-parts framework are used to derive entropy stable wall boundary conditions for the three-dimensional compressible Navier-Stokes equations. A semi-discrete entropy estimate for the entire domain is achieved when the new boundary conditions are coupled with an entropy stable discrete interior operator. The data at the boundary are weakly imposed using a penalty flux approach and a simultaneous-approximation-term penalty technique. Although discontinuous spectral collocation operators on unstructured grids are used herein for the purpose of demonstrating their robustness and efficacy, the new boundary conditions are compatible with any diagonal norm summation-by-parts spatial operator, including finite element, finite difference, finite volume, discontinuous Galerkin, and flux reconstruction/correction procedure via reconstruction schemes. The proposed boundary treatment is tested for three-dimensional subsonic and supersonic flows. The numerical computations corroborate the non-linear stability (entropy stability) and accuracy of the boundary conditions.

  9. Numerical pricing of options using high-order compact finite difference schemes

    NASA Astrophysics Data System (ADS)

    Tangman, D. Y.; Gopaul, A.; Bhuruth, M.

    2008-09-01

    We consider high-order compact (HOC) schemes for quasilinear parabolic partial differential equations to discretise the Black-Scholes PDE for the numerical pricing of European and American options. We show that for the heat equation with smooth initial conditions, the HOC schemes attain clear fourth-order convergence but fail if non-smooth payoff conditions are used. To restore the fourth-order convergence, we use a grid stretching that concentrates grid nodes at the strike price for European options. For an American option, an efficient procedure is also described to compute the option price, Greeks and the optimal exercise curve. Comparisons with a fourth-order non-compact scheme are also made; however, fourth-order convergence is not observed with this strategy. To improve the convergence rate for American options, we discuss the use of a front-fixing transformation with the HOC scheme. We also show that the HOC scheme with grid stretching along the asset price dimension gives accurate numerical solutions for European options under stochastic volatility.
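
    A sinh-type coordinate stretching is a common way to concentrate grid nodes at the strike; the sketch below shows one such mapping and is an illustration, not necessarily the exact transformation used by the authors.

      import numpy as np

      def stretched_grid(K, S_max, n, c):
          # map a uniform grid in xi to asset prices S, clustering nodes near
          # the strike K; a smaller c gives tighter clustering
          xi = np.linspace(0.0, 1.0, n)
          a = np.arcsinh(-K / c)             # left end maps to S = 0
          b = np.arcsinh((S_max - K) / c)    # right end maps to S = S_max
          return K + c * np.sinh(a + (b - a) * xi)

      S = stretched_grid(K=100.0, S_max=300.0, n=21, c=10.0)
      print(np.round(np.diff(S), 2))         # spacing is finest around S = K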

  10. A Stochastic Model of Space-Time Variability of Mesoscale Rainfall: Statistics of Spatial Averages

    NASA Technical Reports Server (NTRS)

    Kundu, Prasun K.; Bell, Thomas L.

    2003-01-01

    A characteristic feature of rainfall statistics is that they depend on the space and time scales over which rain data are averaged. A previously developed spectral model of rain statistics that is designed to capture this property predicts power-law scaling behavior for the second-moment statistics of area-averaged rain rate on the averaging length scale L as L → 0. In the present work a more efficient method of estimating the model parameters is presented, and used to fit the model to the statistics of area-averaged rain rate derived from gridded radar precipitation data from TOGA COARE. Statistical properties of the data and the model predictions are compared over a wide range of averaging scales. An extension of the spectral model scaling relations to describe the dependence of the average fraction of grid boxes within an area containing nonzero rain (the "rainy area fraction") on the grid scale L is also explored.

  11. Improved Modeling of Finite-Rate Turbulent Combustion Processes in Research Combustors

    NASA Technical Reports Server (NTRS)

    VanOverbeke, Thomas J.

    1998-01-01

    The objective of this thesis is to further develop and test a stochastic model of turbulent combustion in recirculating flows. There is a requirement to increase the accuracy of multi-dimensional combustion predictions. As turbulence affects reaction rates, this interaction must be more accurately evaluated. In this work a more physically correct way of handling the interaction of turbulence on combustion is further developed and tested. As turbulence involves randomness, stochastic modeling is used. Averaged values such as temperature and species concentration are found by integrating the probability density function (pdf) over the range of the scalar. The model in this work does not assume the pdf type, but solves for the evolution of the pdf using the Monte Carlo solution technique. The model is further developed by including a more robust reaction solver, by using accurate thermodynamics and by using more accurate transport elements. The stochastic method is used with the Semi-Implicit Method for Pressure-Linked Equations (SIMPLE). The SIMPLE method is used to solve for velocity, pressure, turbulent kinetic energy and dissipation. The pdf solver solves for temperature and species concentration. Thus, the method is partially familiar to combustor engineers. The method is compared to benchmark experimental data and baseline calculations. The baseline method was tested on isothermal flows, evaporating sprays and combusting sprays. Pdf and baseline predictions were performed for three diffusion flames and one premixed flame. The pdf method predicted lower combustion rates than the baseline method, in agreement with the data, except for the premixed flame. The baseline and stochastic predictions bounded the experimental data for the premixed flame. The use of a continuous mixing model or relax-to-mean mixing model had little effect on the prediction of average temperature. Two grids were used in a hydrogen diffusion flame simulation. Grid density did not affect the predictions except for peak temperature and tangential velocity. The hybrid pdf method did take longer and required more memory, but has a theoretical basis that extends to many reaction steps, which cannot be said of current turbulent combustion models.

  12. Detection methods for non-Gaussian gravitational wave stochastic backgrounds

    NASA Astrophysics Data System (ADS)

    Drasco, Steve; Flanagan, Éanna É.

    2003-04-01

    A gravitational wave stochastic background can be produced by a collection of independent gravitational wave events. There are two classes of such backgrounds, one for which the ratio of the average time between events to the average duration of an event is small (i.e., many events are on at once), and one for which the ratio is large. In the first case the signal is continuous, sounds something like a constant hiss, and has a Gaussian probability distribution. In the second case, the discontinuous or intermittent signal sounds something like popcorn popping, and is described by a non-Gaussian probability distribution. In this paper we address the issue of finding an optimal detection method for such a non-Gaussian background. As a first step, we examine the idealized situation in which the event durations are short compared to the detector sampling time, so that the time structure of the events cannot be resolved, and we assume white, Gaussian noise in two collocated, aligned detectors. For this situation we derive an appropriate version of the maximum likelihood detection statistic. We compare the performance of this statistic to that of the standard cross-correlation statistic both analytically and with Monte Carlo simulations. In general the maximum likelihood statistic performs better than the cross-correlation statistic when the stochastic background is sufficiently non-Gaussian, resulting in a gain factor in the minimum gravitational-wave energy density necessary for detection. This gain factor ranges roughly between 1 and 3, depending on the duty cycle of the background, for realistic observing times and signal strengths for both ground and space based detectors. The computational cost of the statistic, although significantly greater than that of the cross-correlation statistic, is not unreasonable. Before the statistic can be used in practice with real detector data, further work is required to generalize our analysis to accommodate separated, misaligned detectors with realistic, colored, non-Gaussian noise.
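
    For reference, the standard cross-correlation statistic against which the maximum likelihood statistic is compared can be sketched for two collocated, aligned detectors with white Gaussian noise; the signal amplitude and data below are synthetic.

      import numpy as np

      rng = np.random.default_rng(1)
      N = 100_000
      h = 0.05 * rng.standard_normal(N)    # common stochastic signal
      d1 = h + rng.standard_normal(N)      # detector 1: signal plus noise
      d2 = h + rng.standard_normal(N)      # detector 2: independent noise

      cc = np.mean(d1 * d2)                # cross-correlation statistic
      sigma = np.std(d1 * d2) / np.sqrt(N) # its estimated standard error
      print(f"CC = {cc:.4e}, SNR ~ {cc / sigma:.1f}")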

  13. Selection of Polynomial Chaos Bases via Bayesian Model Uncertainty Methods with Applications to Sparse Approximation of PDEs with Stochastic Inputs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Karagiannis, Georgios; Lin, Guang

    2014-02-15

    Generalized polynomial chaos (gPC) expansions allow the representation of the solution of a stochastic system as a series of polynomial terms. The number of gPC terms increases dramatically with the dimension of the random input variables. When the number of gPC terms is larger than that of the available samples, a scenario that often occurs if the evaluations of the system are expensive, the evaluation of the gPC expansion can be inaccurate due to over-fitting. We propose a fully Bayesian approach that allows for global recovery of the stochastic solution, in both spatial and random domains, by coupling Bayesian model uncertainty and regularization regression methods. It allows the evaluation of the PC coefficients on a grid of spatial points via (1) Bayesian model averaging or (2) the median probability model, and their construction as functions on the spatial domain via spline interpolation. The former accounts for the model uncertainty and provides Bayes-optimal predictions; the latter, additionally, provides a sparse representation of the solution by evaluating the expansion on a subset of dominating gPC bases. Moreover, the method quantifies the importance of the gPC bases through inclusion probabilities. We design an MCMC sampler that evaluates all the unknown quantities without the need for ad-hoc techniques. The proposed method is suitable for, but not restricted to, problems whose stochastic solution is sparse at the stochastic level with respect to the gPC bases while the deterministic solver involved is expensive. We demonstrate the good performance of the proposed method and make comparisons with others on elliptic stochastic partial differential equations with 1D, 14D and 40D random spaces.

  14. Accurate source location from waves scattered by surface topography: Applications to the Nevada and North Korean test sites

    NASA Astrophysics Data System (ADS)

    Shen, Y.; Wang, N.; Bao, X.; Flinders, A. F.

    2016-12-01

    Scattered waves generated near the source contain energy converted from the near-field waves to the far-field propagating waves, which can be used to achieve location accuracy beyond the diffraction limit. In this work, we apply a novel full-wave location method that combines a grid-search algorithm with a 3D Green's tensor database to locate the Non-Proliferation Experiment (NPE) at the Nevada test site and the North Korean nuclear tests. We use the first arrivals (Pn/Pg) and their immediate codas, which are likely dominated by waves scattered at the surface topography near the source, to determine the source location. We investigate seismograms in the frequency band [1.0, 2.0] Hz to reduce noise in the data and highlight topography-scattered waves. High-resolution topographic models constructed from 10 and 90 m grids are used for Nevada and North Korea, respectively. The reference velocity model is based on CRUST 1.0. We use the collocated-grid finite difference method on curvilinear grids to calculate the strain Green's tensor and obtain synthetic waveforms using source-receiver reciprocity. The 'best' solution is found based on the least-squares misfit between the observed and synthetic waveforms. To suppress random noise, an optimal weighting method for three-component seismograms is applied in the misfit calculation. Our results show that the scattered waves are crucial in improving resolution and allow us to obtain accurate solutions with a small number of stations. Since the scattered waves depend on topography, which is known at the wavelengths of regional seismic waves, our approach yields absolute, instead of relative, source locations. We compare our solutions with those of the USGS and other studies. Moreover, we use differential waveforms to locate pairs of the North Korean tests from years 2006, 2009, 2013 and 2016 to further reduce the effects of unmodeled heterogeneities and errors in the reference velocity model.
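
    The grid-search core of such a full-wave location method amounts to scanning precomputed synthetics for the node with the smallest least-squares waveform misfit. The sketch below is schematic (synthetic data, no component weighting):

      import numpy as np

      def locate(observed, synthetics):
          # observed:   (n_sta, n_t) array of records
          # synthetics: dict mapping grid node -> (n_sta, n_t) synthetic array
          #             from a precomputed Green's tensor database
          misfit = {node: np.sum((observed - syn) ** 2)
                    for node, syn in synthetics.items()}
          return min(misfit, key=misfit.get)

      rng = np.random.default_rng(2)
      true = rng.standard_normal((4, 200))
      synth = {(0, 0): true + 0.5 * rng.standard_normal((4, 200)),
               (0, 1): true,               # trial node matching the source
               (1, 0): rng.standard_normal((4, 200))}
      print(locate(true, synth))           # -> (0, 1)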

  15. Multistage Stochastic Programming and its Applications in Energy Systems Modeling and Optimization

    NASA Astrophysics Data System (ADS)

    Golari, Mehdi

    Electric energy is crucial to almost every aspect of modern life. The modern electric power systems face several challenges such as efficiency, economics, sustainability, and reliability. Increase in electrical energy demand, distributed generation, integration of uncertain renewable energy resources, and demand side management are among the main underlying reasons for such growing complexity. Additionally, the elements of power systems are often vulnerable to failures for many reasons, such as system limits, weak conditions, unexpected events, hidden failures, human errors, terrorist attacks, and natural disasters. One common factor complicating the operation of electrical power systems is the underlying uncertainty in the demands, supplies and failures of system components. Stochastic programming provides a mathematical framework for decision making under uncertainty. It enables a decision maker to incorporate some knowledge of the intrinsic uncertainty into the decision making process. In this dissertation, we focus on the application of two-stage and multistage stochastic programming approaches to electric energy systems modeling and optimization. In particular, we develop models and algorithms addressing the sustainability and reliability issues in power systems. First, we consider how to improve the reliability of power systems under severe failures or contingencies prone to cascading blackouts by so-called islanding operations. We present a two-stage stochastic mixed-integer model to find optimal islanding operations as a powerful preventive action against cascading failures in case of extreme contingencies. Further, we study the properties of this problem and propose efficient solution methods to solve it for large-scale power systems. We present numerical results showing the effectiveness of the model and investigate the performance of the solution methods. Next, we address the sustainability issue considering the integration of renewable energy resources into the production planning of energy-intensive manufacturing industries. Recently, a growing number of manufacturing companies are considering renewable energies to meet their energy requirements, to move towards green manufacturing as well as to decrease their energy costs. However, the intermittent nature of renewable energies imposes several difficulties in long-term planning of how to efficiently exploit renewables. In this study, we propose a scheme for manufacturing companies to use onsite and grid renewable energies provided by their own investments and energy utilities, as well as conventional grid energy, to satisfy their energy requirements. We propose a multistage stochastic programming model and study an efficient solution method to solve this problem. We examine the proposed framework on a test case simulated based on a real-world semiconductor company. Moreover, we evaluate the long-term profitability of such a scheme via the so-called value of multistage stochastic programming.
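
    The two-stage structure can be conveyed by a toy scenario model: a first-stage purchase is made before demand is known, and more expensive recourse purchases cover each scenario after it is revealed. The numbers are invented and the model is far simpler than the power-system formulations of the dissertation.

      import numpy as np
      from scipy.optimize import linprog

      d = np.array([50.0, 100.0, 150.0])   # demand scenarios
      p = np.array([0.3, 0.4, 0.3])        # scenario probabilities
      S = len(d)

      # variables [x, y_1, ..., y_S]: buy x now at 1/unit, recourse y_s at 3/unit
      c = np.concatenate(([1.0], 3.0 * p))            # expected total cost
      A = np.hstack((-np.ones((S, 1)), -np.eye(S)))   # enforces x + y_s >= d_s
      b = -d
      res = linprog(c, A_ub=A, b_ub=b, bounds=[(0, None)] * (S + 1))
      print("first-stage purchase:", res.x[0])        # hedges against high demand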

  16. Optimizing event selection with the random grid search

    DOE PAGES

    Bhat, Pushpalatha C.; Prosper, Harrison B.; Sekmen, Sezen; ...

    2018-02-27

    In this paper, the random grid search (RGS) is a simple, but efficient, stochastic algorithm to find optimal cuts that was developed in the context of the search for the top quark at Fermilab in the mid-1990s. The algorithm, and associated code, have been enhanced recently with the introduction of two new cut types, one of which has been successfully used in searches for supersymmetry at the Large Hadron Collider. The RGS optimization algorithm is described along with the recent developments, which are illustrated with two examples from particle physics. One explores the optimization of the selection of vector boson fusion events in the four-lepton decay mode of the Higgs boson and the other optimizes SUSY searches using boosted objects and the razor variables.
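
    A minimal sketch of the RGS idea, assuming rectangular one-sided cuts whose thresholds are drawn from the signal sample itself and scored with a simple s/sqrt(s+b) significance; the actual package supports additional cut types.

      import numpy as np

      def random_grid_search(sig, bkg, n_trials=1000, seed=0):
          # each trial takes one signal event's values as the cut thresholds
          rng = np.random.default_rng(seed)
          best = (-np.inf, None)
          for _ in range(n_trials):
              cuts = sig[rng.integers(len(sig))]
              s = np.sum(np.all(sig >= cuts, axis=1))   # signal passing all cuts
              b = np.sum(np.all(bkg >= cuts, axis=1))   # background passing
              z = s / np.sqrt(s + b)
              if z > best[0]:
                  best = (z, cuts)
          return best

      rng = np.random.default_rng(1)
      sig = rng.normal(1.0, 1.0, size=(5000, 2))    # two discriminating variables
      bkg = rng.normal(0.0, 1.0, size=(50000, 2))
      z, cuts = random_grid_search(sig, bkg)
      print(z, cuts)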

  17. Optimizing Event Selection with the Random Grid Search

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bhat, Pushpalatha C.; Prosper, Harrison B.; Sekmen, Sezen

    2017-06-29

    The random grid search (RGS) is a simple, but efficient, stochastic algorithm to find optimal cuts that was developed in the context of the search for the top quark at Fermilab in the mid-1990s. The algorithm, and associated code, have been enhanced recently with the introduction of two new cut types, one of which has been successfully used in searches for supersymmetry at the Large Hadron Collider. The RGS optimization algorithm is described along with the recent developments, which are illustrated with two examples from particle physics. One explores the optimization of the selection of vector boson fusion events in the four-lepton decay mode of the Higgs boson and the other optimizes SUSY searches using boosted objects and the razor variables.

  18. Optimizing event selection with the random grid search

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bhat, Pushpalatha C.; Prosper, Harrison B.; Sekmen, Sezen

    In this paper, the random grid search (RGS) is a simple, but efficient, stochastic algorithm to find optimal cuts that was developed in the context of the search for the top quark at Fermilab in the mid-1990s. The algorithm, and associated code, have been enhanced recently with the introduction of two new cut types, one of which has been successfully used in searches for supersymmetry at the Large Hadron Collider. The RGS optimization algorithm is described along with the recent developments, which are illustrated with two examples from particle physics. One explores the optimization of the selection of vector boson fusion events in the four-lepton decay mode of the Higgs boson and the other optimizes SUSY searches using boosted objects and the razor variables.

  19. Comparison of Implicit Collocation Methods for the Heat Equation

    NASA Technical Reports Server (NTRS)

    Kouatchou, Jules; Jezequel, Fabienne; Zukor, Dorothy (Technical Monitor)

    2001-01-01

    We combine a high-order compact finite difference scheme to approximate spatial derivatives and collocation techniques for the time component to numerically solve the two-dimensional heat equation. We use two approaches to implement the collocation methods. The first one is based on an explicit computation of the coefficients of polynomials and the second one relies on differential quadrature. We compare them by studying their merits and analyzing their numerical performance. All our computations, based on parallel algorithms, are carried out on the CRAY SV1.

  20. Entropy Stable Spectral Collocation Schemes for the Navier-Stokes Equations: Discontinuous Interfaces

    NASA Technical Reports Server (NTRS)

    Carpenter, Mark H.; Fisher, Travis C.; Nielsen, Eric J.; Frankel, Steven H.

    2013-01-01

    Nonlinear entropy stability and a summation-by-parts framework are used to derive provably stable, polynomial-based spectral collocation methods of arbitrary order. The new methods are closely related to discontinuous Galerkin spectral collocation methods commonly known as DGFEM, but exhibit a more general entropy stability property. Although the new schemes are applicable to a broad class of linear and nonlinear conservation laws, emphasis herein is placed on the entropy stability of the compressible Navier-Stokes equations.

  1. Comparing different approaches - data mining, geostatistic, and deterministic pedology - to assess the frequency of WRB Reference Soil Groups in the Italian soil regions

    NASA Astrophysics Data System (ADS)

    Lorenzetti, Romina; Barbetti, Roberto; L'Abate, Giovanni; Fantappiè, Maria; Costantini, Edoardo A. C.

    2013-04-01

    Estimating the frequency of soil classes in map units is always affected by some degree of uncertainty, especially at small scales, with a larger generalization. The aim of this study was to compare different possible approaches - data mining, geostatistics, deterministic pedology - to assess the frequency of WRB Reference Soil Groups (RSG) in the major Italian soil regions. In the soil map of Italy (Costantini et al., 2012), a list of the first five RSG was reported for each of the 10 major soil regions. The soil map was produced using the national soil geodatabase, which stored 22,015 analyzed and classified pedons, 1,413 soil typological units (STU) and a set of auxiliary variables (lithology, land-use, DEM). Other variables were added to better consider the influence of soil forming factors (slope, soil aridity index, carbon stock, soil inorganic carbon content, clay, sand, geography of soil regions and soil systems), and a grid with a 1 km mesh was set up. The traditional deterministic pedology assessed the STU frequency according to expert judgment of STU presence in every elementary landscape forming the mapping unit. Different data mining techniques were first compared in their ability to predict RSG through auxiliary variables (neural networks, random forests, boosted trees, support vector machines (SVM)). We selected SVM based on the results on a testing set. An SVM model is a representation of the examples as points in space, mapped so that examples of separate categories are divided by a clear gap that is as wide as possible. The geostatistical algorithm we used was an indicator collocated cokriging. The class values of the auxiliary variables, available at all the points of the grid, were transformed into indicator variables (values 0, 1). A principal component analysis allowed us to select the variables that were able to explain the largest variability, and to correlate each RSG with the first principal component, which explained 51% of the total variability. The principal component was used as the collocated variable. The results were as many probability maps as estimated WRB classes; they were summed up in a unique map, with the most probable class at each pixel. The first five most frequent RSG resulting from the three methods were compared. The outcomes were validated with a subset of 10% of the pedons, withheld before the elaborations. An error estimate was produced for each estimated RSG. The first results, obtained in one of the most widespread soil regions (plains and low hills of central and southern Italy), showed that the first two frequency classes were the same for all three methods. The deterministic method differed from the others at the third position, while the statistical methods inverted the third and fourth positions. An advantage of the SVM was the possibility to use numeric and categorical variables in the same elaboration, without any previous transformation, which reduced the processing time. A Bayesian validation indicated that the SVM method was as reliable as the indicator collocated cokriging, and better than the deterministic pedological approach.
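
    A minimal sketch of the SVM step, using synthetic stand-ins for the auxiliary variables and RSG labels and scikit-learn's SVC; the real study trained on the national soil geodatabase and validated on a 10% hold-out of pedons.

      import numpy as np
      from sklearn.svm import SVC
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(0)
      X = rng.standard_normal((500, 8))               # stand-in auxiliary variables
      y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # two synthetic "RSG" classes

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.1, random_state=0)
      clf = SVC(kernel="rbf").fit(X_tr, y_tr)         # maximum-margin classifier
      print("hold-out accuracy:", clf.score(X_te, y_te))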

  2. Parameterizing Grid-Averaged Longwave Fluxes for Inhomogeneous Marine Boundary Layer Clouds

    NASA Technical Reports Server (NTRS)

    Barker, Howard W.; Wielicki, Bruce A.

    1997-01-01

    This paper examines the relative impacts on grid-averaged longwave flux transmittance (emittance) for Marine Boundary Layer (MBL) cloud fields arising from horizontal variability of optical depth tau and from cloud sides. First, using fields of Landsat-inferred tau and a Monte Carlo photon transport algorithm, it is demonstrated that mean all-sky transmittances for 3D variable MBL clouds can be computed accurately by the conventional method of linearly weighting clear and cloudy transmittances by their respective sky fractions. Then, the approximations of decoupling cloud and radiative properties and assuming independent columns are shown to be adequate for computation of mean flux transmittance. Since real clouds have nonzero geometric thicknesses, cloud fractions A'_c presented to isotropic beams usually exceed the more familiar vertically projected cloud fractions A_c. It is shown, however, that when A_c ≤ 0.9, biases for all-sky transmittance stemming from use of A_c as opposed to A'_c are roughly 2-5 times smaller than, and opposite in sign to, biases due to neglect of horizontal variability of tau. By neglecting variable tau, all-sky transmittances are underestimated often by more than 0.1 for A_c near 0.75, and this translates into relative errors that can exceed 40% (corresponding errors for all-sky emittance are about 20% for most values of A_c). Thus, priority should be given to development of General Circulation Model (GCM) parameterizations that account for the effects of horizontal variations in unresolved tau; effects of cloud sides are of secondary importance. On this note, an efficient stochastic model for computing grid-averaged cloudy-sky flux transmittances is furnished that assumes that distributions of tau, for regions comparable in size to GCM grid cells, can be described adequately by gamma distribution functions. While the plane-parallel, homogeneous model underestimates cloud transmittance by about an order of magnitude when 3D variable cloud transmittances are less than or equal to 0.2 and by approx. 20% to 100% otherwise, the stochastic model reduces these biases often by more than 80%.
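
    The gamma-weighted average has a convenient closed form for direct-beam transmittance: if tau follows a gamma distribution with shape a and scale s, then E[exp(-tau)] = (1 + s)^(-a). The sketch below contrasts it with the plane-parallel value exp(-a*s); the parameters are hypothetical and beam transmittance stands in for the fuller flux calculation.

      import numpy as np

      a, s = 0.8, 12.5              # hypothetical: mean tau = a*s = 10
      t_gamma = (1.0 + s) ** (-a)   # gamma-weighted beam transmittance
      t_pph = np.exp(-a * s)        # plane-parallel, homogeneous value
      print(t_gamma, t_pph)         # the homogeneous model underestimates badly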

  3. Systematic Evaluation of Stochastic Methods in Power System Scheduling and Dispatch with Renewable Energy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Yishen; Zhou, Zhi; Liu, Cong

    2016-08-01

    As more wind power and other renewable resources are being integrated into the electric power grid, the forecast uncertainty brings operational challenges for the power system operators. In this report, different operational strategies for uncertainty management are presented and evaluated. A comprehensive and consistent simulation framework is developed to analyze the performance of different reserve policies and scheduling techniques under uncertainty in wind power. Numerical simulations are conducted on a modified version of the IEEE 118-bus system with a 20% wind penetration level, comparing deterministic, interval, and stochastic unit commitment strategies. The results show that stochastic unit commitment provides a reliable schedule without large increases in operational costs. Moreover, decomposition techniques, such as load shift factors and Benders decomposition, can help in overcoming the computational obstacles to stochastic unit commitment and enable the use of a larger scenario set to represent forecast uncertainty. In contrast, deterministic and interval unit commitment tend to give higher system costs as more reserves are being scheduled to address forecast uncertainty. However, these approaches require a much lower computational effort. Choosing a proper lower bound for the forecast uncertainty is important for balancing reliability and system operational cost in deterministic and interval unit commitment. Finally, we find that the introduction of zonal reserve requirements improves reliability, but at the expense of higher operational costs.

  4. Nonparametric triple collocation

    USDA-ARS?s Scientific Manuscript database

    Triple collocation derives variance-covariance relationships between three or more independent measurement sources and an indirectly observed truth variable in the case where the measurement operators are linear-Gaussian. We generalize that theory to arbitrary observation operators by deriving nonpa...
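
    For context, the classical (linear-Gaussian) triple collocation estimates that this record generalizes follow in a few lines from the sample covariances of three independent measurement series; the data below are synthetic.

      import numpy as np

      def triple_collocation(x, y, z):
          # error variances of three independent measurements of one truth
          Q = np.cov(np.vstack((x, y, z)))
          ex2 = Q[0, 0] - Q[0, 1] * Q[0, 2] / Q[1, 2]
          ey2 = Q[1, 1] - Q[0, 1] * Q[1, 2] / Q[0, 2]
          ez2 = Q[2, 2] - Q[0, 2] * Q[1, 2] / Q[0, 1]
          return ex2, ey2, ez2

      rng = np.random.default_rng(0)
      truth = rng.standard_normal(10_000)
      x = truth + 0.3 * rng.standard_normal(10_000)
      y = truth + 0.5 * rng.standard_normal(10_000)
      z = truth + 0.7 * rng.standard_normal(10_000)
      print(triple_collocation(x, y, z))   # approximately (0.09, 0.25, 0.49)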

  5. Radiometric consistency assessment of hyperspectral infrared sounders

    NASA Astrophysics Data System (ADS)

    Wang, L.; Han, Y.; Jin, X.; Chen, Y.; Tremblay, D. A.

    2015-07-01

    The radiometric and spectral consistency among the Atmospheric Infrared Sounder (AIRS), the Infrared Atmospheric Sounding Interferometer (IASI), and the Cross-track Infrared Sounder (CrIS) is fundamental for the creation of long-term infrared (IR) hyperspectral radiance benchmark datasets for both inter-calibration and climate-related studies. In this study, the CrIS radiance measurements on the Suomi National Polar-orbiting Partnership (SNPP) satellite are directly compared with IASI on MetOp-A and -B at the finest spectral scale and with AIRS on Aqua in 25 selected spectral regions through one year of simultaneous nadir overpass (SNO) observations to evaluate radiometric consistency of these four hyperspectral IR sounders. The spectra from different sounders are paired together through strict spatial and temporal collocation. The uniform scenes are selected by examining the collocated Visible Infrared Imaging Radiometer Suite (VIIRS) pixels. Their brightness temperature (BT) differences are then calculated by converting the spectra onto common spectral grids. The results indicate that CrIS agrees well with IASI on MetOp-A and IASI on MetOp-B at the longwave IR (LWIR) and middle-wave IR (MWIR) bands with 0.1-0.2 K differences. There are no apparent scene-dependent patterns for BT differences between CrIS and IASI for individual spectral channels. CrIS and AIRS are compared at the 25 spectral regions for both Polar and Tropical SNOs. The combined global SNO datasets indicate that the CrIS-AIRS BT differences are less than or around 0.1 K among 21 of 25 comparison spectral regions and they range from 0.15 to 0.21 K in the remaining 4 spectral regions. CrIS-AIRS BT differences in some comparison spectral regions show weak scene-dependent features.

  6. Radiometric consistency assessment of hyperspectral infrared sounders

    NASA Astrophysics Data System (ADS)

    Wang, L.; Han, Y.; Jin, X.; Chen, Y.; Tremblay, D. A.

    2015-11-01

    The radiometric and spectral consistency among the Atmospheric Infrared Sounder (AIRS), the Infrared Atmospheric Sounding Interferometer (IASI), and the Cross-track Infrared Sounder (CrIS) is fundamental for the creation of long-term infrared (IR) hyperspectral radiance benchmark data sets for both intercalibration and climate-related studies. In this study, the CrIS radiance measurements on Suomi National Polar-orbiting Partnership (SNPP) satellite are directly compared with IASI on MetOp-A and MetOp-B at the finest spectral scale and with AIRS on Aqua in 25 selected spectral regions through simultaneous nadir overpass (SNO) observations in 2013, to evaluate radiometric consistency of these four hyperspectral IR sounders. The spectra from different sounders are paired together through strict spatial and temporal collocation. The uniform scenes are selected by examining the collocated Visible Infrared Imaging Radiometer Suite (VIIRS) pixels. Their brightness temperature (BT) differences are then calculated by converting the spectra onto common spectral grids. The results indicate that CrIS agrees well with IASI on MetOp-A and IASI on MetOp-B at the long-wave IR (LWIR) and middle-wave IR (MWIR) bands with 0.1-0.2 K differences. There are no apparent scene-dependent patterns for BT differences between CrIS and IASI for individual spectral channels. CrIS and AIRS are compared at the 25 spectral regions for both polar and tropical SNOs. The combined global SNO data sets indicate that the CrIS-AIRS BT differences are less than or around 0.1 K among 21 of 25 spectral regions and they range from 0.15 to 0.21 K in the remaining four spectral regions. CrIS-AIRS BT differences in some comparison spectral regions show weak scene-dependent features.

  7. Non-standard finite difference and Chebyshev collocation methods for solving fractional diffusion equation

    NASA Astrophysics Data System (ADS)

    Agarwal, P.; El-Sayed, A. A.

    2018-06-01

    In this paper, a new numerical technique for solving the fractional order diffusion equation is introduced. This technique depends on the non-standard finite difference (NSFD) method and the Chebyshev collocation method, where the fractional derivatives are described in the Caputo sense. The Chebyshev collocation method combined with the NSFD method is used to convert the problem into a system of algebraic equations. These equations are solved numerically using Newton's iteration method. The applicability, reliability, and efficiency of the presented technique are demonstrated through some given numerical examples.
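
    The spatial half of such a scheme rests on the Chebyshev differentiation matrix; a standard construction (after Trefethen's cheb) is sketched below, while the NSFD treatment of the Caputo derivative is not shown.

      import numpy as np

      def cheb(n):
          # Chebyshev points on [-1, 1] and the differentiation matrix
          x = np.cos(np.pi * np.arange(n + 1) / n)
          c = np.hstack(([2.0], np.ones(n - 1), [2.0])) * (-1.0) ** np.arange(n + 1)
          dX = x[:, None] - x[None, :]
          D = np.outer(c, 1.0 / c) / (dX + np.eye(n + 1))
          D -= np.diag(D.sum(axis=1))
          return D, x

      D, x = cheb(16)
      print(np.max(np.abs(D @ np.sin(x) - np.cos(x))))   # near machine precision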

  8. Constructing Surrogate Models of Complex Systems with Enhanced Sparsity: Quantifying the Influence of Conformational Uncertainty in Biomolecular Solvation

    DOE PAGES

    Lei, Huan; Yang, Xiu; Zheng, Bin; ...

    2015-11-05

    Biomolecules exhibit conformational fluctuations near equilibrium states, inducing uncertainty in various biological properties in a dynamic way. We have developed a general method to quantify the uncertainty of target properties induced by conformational fluctuations. Using a generalized polynomial chaos (gPC) expansion, we construct a surrogate model of the target property with respect to varying conformational states. We also propose a method to increase the sparsity of the gPC expansion by defining a set of conformational "active space" random variables. With the increased sparsity, we employ the compressive sensing method to accurately construct the surrogate model. We demonstrate the performance of the surrogate model by evaluating fluctuation-induced uncertainty in solvent-accessible surface area for the bovine trypsin inhibitor protein system and show that the new approach offers more accurate statistical information than standard Monte Carlo approaches. Furthermore, the constructed surrogate model enables us to directly evaluate the target property under various conformational states, yielding a more accurate response surface than standard sparse grid collocation methods. In particular, the new method provides higher accuracy in high-dimensional systems, such as biomolecules, where sparse grid performance is limited by the accuracy of the computed quantity of interest. Finally, our new framework is generalizable and can be used to investigate the uncertainty of a wide variety of target properties in biomolecular systems.
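
    The sparse-recovery step can be illustrated on a toy 1D problem with fewer samples than (normalized) Hermite gPC basis terms; a greedy compressive-sensing solver, orthogonal matching pursuit, stands in here for the paper's l1 approach, and all sizes and coefficients are invented.

      import numpy as np
      from math import factorial
      from numpy.polynomial.hermite_e import hermevander
      from sklearn.linear_model import OrthogonalMatchingPursuit

      rng = np.random.default_rng(0)
      n_samples, n_terms = 40, 60
      xi = rng.standard_normal(n_samples)
      norms = np.sqrt([float(factorial(k)) for k in range(n_terms)])
      Phi = hermevander(xi, n_terms - 1) / norms   # normalized Hermite basis
      c_true = np.zeros(n_terms)
      c_true[[1, 5, 17]] = [2.0, -1.0, 0.5]        # a "sparse" coefficient vector
      y = Phi @ c_true

      omp = OrthogonalMatchingPursuit(n_nonzero_coefs=3).fit(Phi, y)
      print(np.flatnonzero(omp.coef_))             # should recover 1, 5, 17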

  9. Glucose control and medication adherence among veterans with diabetes and serious mental illness: does collocation of primary care and mental health care matter?

    PubMed

    Long, Judith A; Wang, Andrew; Medvedeva, Elina L; Eisen, Susan V; Gordon, Adam J; Kreyenbuhl, Julie; Marcus, Steven C

    2014-08-01

    Persons with serious mental illness (SMI) may benefit from collocation of medical and mental healthcare professionals and services in attending to their chronic comorbid medical conditions. We evaluated and compared glucose control and diabetes medication adherence among patients with SMI who received collocated care to those not receiving collocated care (which we call usual care). We performed a cross-sectional, observational cohort study of 363 veteran patients with type 2 diabetes and SMI who received care from one of three Veterans Affairs medical facilities: two sites that provided both collocated and usual care and one site that provided only usual care. Through a survey, laboratory tests, and medical records, we assessed patient characteristics, glucose control as measured by a current HbA1c, and adherence to diabetes medication as measured by the medication possession ratio (MPR) and self-report. In the sample, the mean HbA1c was 7.4% (57 mmol/mol), the mean MPR was 80%, and 51% reported perfect adherence to their diabetes medications. In both unadjusted and adjusted analyses, there were no differences in glucose control and medication adherence by collocation of care. Patients seen in collocated care tended to have better HbA1c levels (β = -0.149; P = 0.393) and MPR values (β = 0.34; P = 0.132) and worse self-reported adherence (odds ratio 0.71; P = 0.143), but these differences were not statistically significant. In a population of veterans with comorbid diabetes and SMI, patients on average had good glucose control and medication adherence regardless of where they received primary care.

  10. Systematic parameter inference in stochastic mesoscopic modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lei, Huan; Yang, Xiu; Li, Zhen

    2017-02-01

    We propose a method to efficiently determine the optimal coarse-grained force field in mesoscopic stochastic simulations of Newtonian fluid and polymer melt systems modeled by dissipative particle dynamics (DPD) and energy conserving dissipative particle dynamics (eDPD). The response surfaces of various target properties (viscosity, diffusivity, pressure, etc.) with respect to model parameters are constructed based on the generalized polynomial chaos (gPC) expansion using simulation results on sampling points (e.g., individual parameter sets). To alleviate the computational cost to evaluate the target properties, we employ the compressive sensing method to compute the coefficients of the dominant gPC terms given the prior knowledge that the coefficients are “sparse”. The proposed method shows comparable accuracy with the standard probabilistic collocation method (PCM) while it imposes a much weaker restriction on the number of the simulation samples, especially for systems with a high-dimensional parametric space. Full access to the response surfaces within the confidence range enables us to infer the optimal force parameters given the desirable values of target properties at the macroscopic scale. Moreover, it enables us to investigate the intrinsic relationship between the model parameters, identify possible degeneracies in the parameter space, and optimize the model by eliminating model redundancies. The proposed method provides an efficient alternative approach for constructing mesoscopic models by inferring model parameters to recover target properties of the physical systems (e.g., from experimental measurements), where those force field parameters and formulation cannot be derived from the microscopic level in a straightforward way.

  11. On the Stability of Collocated Controllers in the Presence of Uncertain Nonlinearities and Other Perils

    NASA Technical Reports Server (NTRS)

    Joshi, S. M.

    1985-01-01

    Robustness properties are investigated for two types of controllers for large flexible space structures, which use collocated sensors and actuators. The first type is an attitude controller which uses negative definite feedback of measured attitude and rate, while the second type is a damping enhancement controller which uses only velocity (rate) feedback. It is proved that collocated attitude controllers preserve closed-loop global asymptotic stability when linear actuator/sensor dynamics satisfying certain phase conditions are present, or when monotonically increasing nonlinearities are present. For velocity feedback controllers, global asymptotic stability is proved under much weaker conditions. In particular, they have 90° phase margin and can tolerate nonlinearities belonging to the (0, ∞) sector in the actuator/sensor characteristics. The results significantly enhance the viability of both types of collocated controllers, especially when the available information about the large space structure (LSS) parameters is inadequate or inaccurate.

  12. Understanding a reference-free impedance method using collocated piezoelectric transducers

    NASA Astrophysics Data System (ADS)

    Kim, Eun Jin; Kim, Min Koo; Sohn, Hoon; Park, Hyun Woo

    2010-03-01

    A new concept of a reference-free impedance method, which does not require direct comparison with a baseline impedance signal, is proposed for damage detection in a plate-like structure. A single pair of piezoelectric (PZT) wafers collocated on both surfaces of a plate is utilized for extracting electro-mechanical signatures (EMS) associated with mode conversion due to damage. A numerical simulation is conducted to investigate the EMS of the collocated PZT wafers in the frequency domain in the presence of damage through spectral element analysis. Then, the EMS due to mode conversion induced by damage are extracted using a signal decomposition technique based on the polarization characteristics of the collocated PZT wafers. The effects of the size and the location of damage on the decomposed EMS are investigated as well. Finally, the applicability of the decomposed EMS to reference-free damage diagnosis is discussed.

  13. Locating PHEV exchange stations in V2G

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pan, Feng; Bent, Russell; Berscheid, Alan

    2010-01-01

    Plug-in hybrid electric vehicles (PHEVs) are an environmentally friendly, modern transportation method and have been rapidly penetrating the transportation system. Renewable energy is another contributor to clean power, but the associated intermittence increases the uncertainty in power generation. As a foreseen benefit of a vehicle-to-grid (V2G) system, PHEV supporting infrastructure like battery exchange stations can provide battery service to PHEV customers as well as being plugged into a power grid as energy sources and stabilizers. The locations of exchange stations are important for these two objectives under constraints from both the transportation system and the power grid. To model this location problem and to understand and analyze the benefit of a V2G system, we develop a two-stage stochastic program to optimally locate the stations prior to the realizations of battery demands, loads, and generation capacity of renewable power sources. Based on this model, we use two data sets to construct the V2G systems and test the benefit and the performance of these systems.

  14. Energy storage arbitrage under day-ahead and real-time price uncertainty

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Krishnamurthy, Dheepak; Uckun, Canan; Zhou, Zhi

    Electricity markets must match real-time supply and demand of electricity. With increasing penetration of renewable resources, it is important that this balancing is done effectively, considering the high uncertainty of wind and solar energy. Storing electrical energy can make the grid more reliable and efficient and energy storage is proposed as a complement to highly variable renewable energy sources. However, for investments in energy storage to increase, participating in the market must become economically viable for owners. This paper proposes a stochastic formulation of a storage owner’s arbitrage profit maximization problem under uncertainty in day-ahead (DA) and real-time (RT) market prices. The proposed model helps storage owners in market bidding and operational decisions and in estimation of the economic viability of energy storage. Finally, case study results on realistic market price data show that the novel stochastic bidding approach does significantly better than the deterministic benchmark.

  15. Kinetic Monte Carlo modeling of chemical reactions coupled with heat transfer.

    PubMed

    Castonguay, Thomas C; Wang, Feng

    2008-03-28

    In this paper, we describe two types of effective events for describing heat transfer in a kinetic Monte Carlo (KMC) simulation that may involve stochastic chemical reactions. Simulations employing these events are referred to as KMC-TBT and KMC-PHE. In KMC-TBT, heat transfer is modeled as the stochastic transfer of "thermal bits" between adjacent grid points. In KMC-PHE, heat transfer is modeled by integrating the Poisson heat equation for a short time. Either approach is capable of capturing the time dependent system behavior exactly. Both KMC-PHE and KMC-TBT are validated by simulating pure heat transfer in a rod and a square and modeling a heated desorption problem where exact numerical results are available. KMC-PHE is much faster than KMC-TBT and is used to study the endothermic desorption of a lattice gas. Interesting findings from this study are reported.
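
    A toy version of the thermal-bit picture (an illustration of the idea, not the paper's code): energy quanta hop between adjacent grid points of a 1D rod, and the bit density relaxes as a discrete heat equation would.

      import numpy as np

      rng = np.random.default_rng(0)
      n_sites, n_steps = 50, 200_000
      bits = np.zeros(n_sites, dtype=int)
      bits[:10] = 100                      # hot end of the rod
      for _ in range(n_steps):
          i = rng.integers(n_sites)        # pick a random site
          if bits[i] == 0:
              continue
          j = i + rng.choice([-1, 1])      # move one thermal bit to a neighbor
          if 0 <= j < n_sites:             # insulated (reflecting) boundaries
              bits[i] -= 1
              bits[j] += 1
      print(bits.reshape(5, 10))           # the profile flattens as heat spreads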

  16. Kinetic Monte Carlo modeling of chemical reactions coupled with heat transfer

    NASA Astrophysics Data System (ADS)

    Castonguay, Thomas C.; Wang, Feng

    2008-03-01

    In this paper, we describe two types of effective events for describing heat transfer in a kinetic Monte Carlo (KMC) simulation that may involve stochastic chemical reactions. Simulations employing these events are referred to as KMC-TBT and KMC-PHE. In KMC-TBT, heat transfer is modeled as the stochastic transfer of "thermal bits" between adjacent grid points. In KMC-PHE, heat transfer is modeled by integrating the Poisson heat equation for a short time. Either approach is capable of capturing the time dependent system behavior exactly. Both KMC-PHE and KMC-TBT are validated by simulating pure heat transfer in a rod and a square and modeling a heated desorption problem where exact numerical results are available. KMC-PHE is much faster than KMC-TBT and is used to study the endothermic desorption of a lattice gas. Interesting findings from this study are reported.

  17. Energy storage arbitrage under day-ahead and real-time price uncertainty

    DOE PAGES

    Krishnamurthy, Dheepak; Uckun, Canan; Zhou, Zhi; ...

    2017-04-04

    Electricity markets must match real-time supply and demand of electricity. With increasing penetration of renewable resources, it is important that this balancing is done effectively, considering the high uncertainty of wind and solar energy. Storing electrical energy can make the grid more reliable and efficient and energy storage is proposed as a complement to highly variable renewable energy sources. However, for investments in energy storage to increase, participating in the market must become economically viable for owners. This paper proposes a stochastic formulation of a storage owner’s arbitrage profit maximization problem under uncertainty in day-ahead (DA) and real-time (RT) market prices. The proposed model helps storage owners in market bidding and operational decisions and in estimation of the economic viability of energy storage. Finally, case study results on realistic market price data show that the novel stochastic bidding approach does significantly better than the deterministic benchmark.

  18. Meshless collocation methods for the numerical solution of elliptic boundary value problems and the rotational shallow water equations on the sphere

    NASA Astrophysics Data System (ADS)

    Blakely, Christopher D.

    This dissertation has three main goals: (1) to explore the anatomy of meshless collocation approximation methods that have recently gained attention in the numerical analysis community; (2) to demonstrate numerically why the meshless collocation method should become an attractive alternative to standard finite-element methods, due to the simplicity of its implementation and its high-order convergence properties; (3) to propose a meshless collocation method for large scale computational geophysical fluid dynamics models. We provide numerical verification and validation of the meshless collocation scheme applied to the rotational shallow-water equations on the sphere and demonstrate computationally that the proposed model can compete with existing high-performance methods for approximating the shallow-water equations, such as the SEAM (spectral-element atmospheric model) developed at NCAR. A detailed analysis of the parallel implementation of the model, along with the introduction of parallel algorithmic routines for the high-performance simulation of the model, will be given. We analyze the programming and computational aspects of the model using Fortran 90 and the message passing interface (MPI) library along with software and hardware specifications and performance tests. Details on many aspects of the implementation in regards to performance, optimization, and stabilization will be given. In order to verify the mathematical correctness of the algorithms presented and to validate the performance of the meshless collocation shallow-water model, we conclude the thesis with numerical experiments on some standardized test cases for the shallow-water equations on the sphere using the proposed method.

  19. Parameterization of GCM subgrid nonprecipitating cumulus and stratocumulus clouds using stochastic/phenomenological methods. Annual technical progress report, 1 December 1992--30 November 1993

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stull, R.B.

    1993-08-27

    This document is a progress report to the USDOE Atmospheric Radiation and Measurement Program (ARM). The overall project goal is to relate subgrid-cumulus-cloud formation, coverage, and population characteristics to statistical properties of surface-layer air, which in turn are modulated by heterogeneous land-usage within GCM-grid-box-size regions. The motivation is to improve the understanding and prediction of climate change by more accurately describing radiative and cloud processes.

  20. Multilevel ensemble Kalman filtering

    DOE PAGES

    Hoel, Hakon; Law, Kody J. H.; Tempone, Raul

    2016-06-14

    This study embeds a multilevel Monte Carlo sampling strategy into the Monte Carlo step of the ensemble Kalman filter (EnKF) in the setting of finite dimensional signal evolution and noisy discrete-time observations. The signal dynamics is assumed to be governed by a stochastic differential equation (SDE), and a hierarchy of time grids is introduced for multilevel numerical integration of that SDE. Finally, the resulting multilevel EnKF is proved to asymptotically outperform EnKF in terms of computational cost versus approximation accuracy. The theoretical results are illustrated numerically.
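
    The multilevel idea can be sketched in its simplest two-level form for an Ornstein-Uhlenbeck SDE: many cheap coarse-grid Euler paths estimate the base term, and a few coupled fine/coarse pairs sharing the same Brownian increments estimate the correction. This illustrates the multilevel Monte Carlo estimator only, not the EnKF update.

      import numpy as np

      rng = np.random.default_rng(0)
      T = 1.0
      g = lambda x: x ** 2

      def euler(n_steps, n_paths):
          # plain Euler paths of dX = -X dt + dW, started at X0 = 1
          dt = T / n_steps
          x = np.ones(n_paths)
          for _ in range(n_steps):
              x += -x * dt + np.sqrt(dt) * rng.standard_normal(n_paths)
          return x

      def euler_pair(n_coarse, n_paths):
          # coupled fine (2*n_coarse steps) and coarse paths, same increments
          dt = T / (2 * n_coarse)
          xf = np.ones(n_paths)
          xc = np.ones(n_paths)
          for _ in range(n_coarse):
              dW1 = np.sqrt(dt) * rng.standard_normal(n_paths)
              dW2 = np.sqrt(dt) * rng.standard_normal(n_paths)
              xf += -xf * dt + dW1
              xf += -xf * dt + dW2
              xc += -xc * (2 * dt) + dW1 + dW2
          return xf, xc

      base = g(euler(16, 200_000)).mean()     # many cheap coarse paths
      xf, xc = euler_pair(16, 20_000)         # few coupled correction pairs
      print(base + (g(xf) - g(xc)).mean())    # two-level estimate of E[X_T^2]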

  1. Transition probability-based stochastic geological modeling using airborne geophysical data and borehole data

    NASA Astrophysics Data System (ADS)

    He, Xin; Koch, Julian; Sonnenborg, Torben O.; Jørgensen, Flemming; Schamper, Cyril; Christian Refsgaard, Jens

    2014-04-01

    Geological heterogeneity is a very important factor to consider when developing geological models for hydrological purposes. Using statistically based stochastic geological simulations, the spatial heterogeneity in such models can be accounted for. However, various types of uncertainties are associated with both the geostatistical method and the observation data. In the present study, TProGS is used as the geostatistical modeling tool to simulate structural heterogeneity for glacial deposits in a headwater catchment in Denmark. The focus is on how the observation data uncertainty can be incorporated in the stochastic simulation process. The study uses two types of observation data: borehole data and airborne geophysical data. It is commonly acknowledged that the density of the borehole data is usually too sparse to characterize the horizontal heterogeneity. The use of geophysical data gives an unprecedented opportunity to obtain high-resolution information and thus to identify geostatistical properties more accurately, especially in the horizontal direction. However, since such data are not a direct measurement of the lithology, larger uncertainty of point estimates can be expected as compared to the use of borehole data. We have proposed a histogram probability matching method in order to link the information on resistivity to hydrofacies, while considering the data uncertainty at the same time. Transition probabilities and Markov Chain models are established using the transformed geophysical data. It is shown that such a transformation is in fact practical; however, the cutoff value for dividing the resistivity data into facies is difficult to determine. The simulated geological realizations indicate significant differences of spatial structure depending on the type of conditioning data selected. It is to our knowledge the first time that grid-to-grid airborne geophysical data, including the data uncertainty, are used in conditional geostatistical simulations in TProGS. Therefore, it provides valuable insights regarding the advantages and challenges of using such comprehensive data.
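
    The Markov-chain engine behind transition-probability geostatistics can be sketched in 1D with a hypothetical transition matrix for three hydrofacies; TProGS itself works in 3D and conditions on data, which is not shown.

      import numpy as np

      P = np.array([[0.90, 0.07, 0.03],    # sand -> {sand, silt, clay}
                    [0.10, 0.85, 0.05],    # silt
                    [0.05, 0.10, 0.85]])   # clay (hypothetical probabilities)
      rng = np.random.default_rng(0)
      facies = [0]                         # start in sand
      for _ in range(60):
          facies.append(rng.choice(3, p=P[facies[-1]]))
      print(facies)                        # one simulated vertical facies column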

  2. “Skin-Core-Skin” Structure of Polymer Crystallization Investigated by Multiscale Simulation

    PubMed Central

    Ruan, Chunlei

    2018-01-01

    “Skin-core-skin” structure is a typical crystal morphology in injection-molded products. Previous numerical works have rarely focused on crystal evolution; rather, they have mostly been based on the prediction of temperature distribution or crystallization kinetics. The aim of this work was to reproduce the “skin-core-skin” structure and investigate the role of external flow and temperature fields on crystal morphology. Therefore, the multiscale algorithm was extended to the simulation of polymer crystallization in a pipe flow. The multiscale algorithm contains two parts: a collocated finite volume method at the macroscopic level and a morphological Monte Carlo method at the microscopic level. The SIMPLE (semi-implicit method for pressure linked equations) algorithm was used to calculate the polymeric model at the macroscopic level, while the Monte Carlo method with a stochastic birth-growth process of spherulites and shish-kebabs was used at the microscopic level. Results show that our algorithm can predict the “skin-core-skin” structure, and that the initial melt temperature and the maximum melt velocity at the inlet mainly affect the morphology of the shish-kebabs. PMID:29659516

  3. Data Assimilation and Propagation of Uncertainty in Multiscale Cardiovascular Simulation

    NASA Astrophysics Data System (ADS)

    Schiavazzi, Daniele; Marsden, Alison

    2015-11-01

    Cardiovascular modeling is the application of computational tools to predict hemodynamics. State-of-the-art techniques couple a 3D incompressible Navier-Stokes solver with a boundary circulation model and can predict local and peripheral hemodynamics, analyze the post-operative performance of surgical designs, and complement clinical data collection, minimizing invasive and risky measurement practices. The ability of these tools to make useful predictions is directly related to their accuracy in representing measured physiologies. Tuning of model parameters is therefore a topic of paramount importance and should include clinical data uncertainty, revealing how this uncertainty will affect the predictions. We propose a fully Bayesian, multi-level approach to data assimilation of uncertain clinical data in multiscale circulation models. To reduce the computational cost, we use a stable, condensed approximation of the 3D model built by linear sparse regression of the pressure/flow rate relationship at the outlets. Finally, we consider the problem of non-invasively propagating the uncertainty in model parameters to the resulting hemodynamics and compare Monte Carlo simulation with Stochastic Collocation approaches based on Polynomial or Multi-resolution Chaos expansions.

  4. DAKOTA Design Analysis Kit for Optimization and Terascale

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adams, Brian M.; Dalbey, Keith R.; Eldred, Michael S.

    2010-02-24

    The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes (computational models) and iterative analysis methods. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the DAKOTA toolkit provides a flexible and extensible problem-solving environment for design and analysis of computational models on high performance computers. A user provides a set of DAKOTA commands in an input file and launches DAKOTA. DAKOTA invokes instances of the computational models, collects their results, and performs systems analyses. DAKOTA contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, polynomial chaos, stochastic collocation, and epistemic methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as hybrid optimization, surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. Services for parallel computing, simulation interfacing, approximation modeling, fault tolerance, restart, and graphics are also included.

  5. On the Gibbs phenomenon 5: Recovering exponential accuracy from collocation point values of a piecewise analytic function

    NASA Technical Reports Server (NTRS)

    Gottlieb, David; Shu, Chi-Wang

    1994-01-01

    The paper presents a method to recover exponential accuracy at all points (including at the discontinuities themselves), from the knowledge of an approximation to the interpolation polynomial (or trigonometrical polynomial). We show that if we are given the collocation point values (or a highly accurate approximation) at the Gauss or Gauss-Lobatto points, we can reconstruct a uniform exponentially convergent approximation to the function f(x) in any sub-interval of analyticity. The proof covers the cases of Fourier, Chebyshev, Legendre, and more general Gegenbauer collocation methods.

  6. On the anomaly of velocity-pressure decoupling in collocated mesh solutions

    NASA Technical Reports Server (NTRS)

    Kim, Sang-Wook; Vanoverbeke, Thomas

    1991-01-01

    The use of various pressure correction algorithms originally developed for fully staggered meshes can yield a velocity-pressure decoupled solution for collocated meshes. The mechanism that causes velocity-pressure decoupling is identified. It is shown that the use of a partial differential equation for the incremental pressure eliminates such a mechanism and yields a velocity-pressure coupled solution. Example flows considered are a three-dimensional lid-driven cavity flow and laminar flow through a square duct with a 90-degree bend. Numerical results obtained using the collocated mesh are in good agreement with the measured data and other numerical results.

  7. Occurrence of dead core in catalytic particles containing immobilized enzymes: analysis for the Michaelis-Menten kinetics and assessment of numerical methods.

    PubMed

    Pereira, Félix Monteiro; Oliveira, Samuel Conceição

    2016-11-01

    In this article, the occurrence of a dead core in catalytic particles containing immobilized enzymes is analyzed for the Michaelis-Menten kinetics. An assessment of numerical methods is performed to solve the boundary value problem generated by the mathematical modeling of diffusion and reaction processes under steady state and isothermal conditions. Two classes of numerical methods were employed: shooting and collocation. The shooting method used the ode function from Scilab software. The collocation methods included: that implemented by the bvode function of Scilab, the orthogonal collocation, and the orthogonal collocation on finite elements. The methods were validated for simplified forms of the Michaelis-Menten equation (zero-order and first-order kinetics), for which analytical solutions are available. Among the methods covered in this article, the orthogonal collocation on finite elements proved to be the most robust and efficient method to solve the boundary value problem concerning Michaelis-Menten kinetics. For this enzyme kinetics, it was found that a dead core can occur when certain diffusion-reaction conditions within the catalytic particle are met. The application of the concepts and methods presented in this study will allow for more generalized analyses and more accurate designs of heterogeneous enzymatic reactors.
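
    In dimensionless form, the slab problem reduces to the boundary value problem u'' = φ² u/(1 + βu) with u'(0) = 0 and u(1) = 1, where φ is the Thiele modulus and β a saturation parameter. The sketch below solves it with a generic collocation-based BVP solver rather than the specific methods assessed in the paper; parameter values are illustrative:

```python
import numpy as np
from scipy.integrate import solve_bvp

PHI2 = 100.0    # squared Thiele modulus (illustrative)
BETA = 0.5      # dimensionless saturation parameter (illustrative)

def rhs(x, y):
    # y[0] = dimensionless substrate concentration u, y[1] = u'
    return np.vstack([y[1], PHI2 * y[0] / (1.0 + BETA * y[0])])

def bc(ya, yb):
    # symmetry at the particle centre, fixed concentration at the surface
    return np.array([ya[1], yb[0] - 1.0])

x = np.linspace(0.0, 1.0, 101)
y0 = np.vstack([x**2, 2 * x])                # smooth initial guess
sol = solve_bvp(rhs, bc, x, y0, tol=1e-8)

# A concentration plateau near zero around x = 0 is the dead-core signature:
# substrate is consumed before it can diffuse to the particle centre.
print(sol.status, sol.y[0, :5])
```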

  8. Evaluation of a binary optimization approach to find the optimum locations of energy storage devices in a power grid with stochastically varying loads and wind generation

    NASA Astrophysics Data System (ADS)

    Dar, Zamiyad

    Prices in the electricity market change every five minutes, and prices in peak-demand hours can be four or five times higher than in normal off-peak hours. Renewable energy such as wind power has zero marginal cost, and a large percentage of wind energy in a power grid can reduce the price significantly. The variability of wind power prevents it from being constantly available in peak hours. The price differentials between off-peak and on-peak hours due to wind power variations provide an opportunity for a storage device owner to buy energy at a low price and sell it in high-price hours. In a large and complex power grid, there are many candidate locations for installing a storage device, and storage device owners prefer locations that allow them to maximize profit. Market participants do not possess much information about the system operator's dispatch, the power grid, competing generators, or the transmission system. The publicly available data from the system operator usually consist of Locational Marginal Prices (LMPs), load, reserve prices, and regulation prices. In this thesis, we develop a method to find the optimum location of a storage device without using grid, transmission, or generator data. We formulate and solve an optimization problem to find the most profitable location for a storage device using only publicly available market pricing data such as LMPs and reserve prices, with constraints arising from storage device operating limitations included in the formulation. We use binary optimization and the branch-and-bound method to optimize the operation of a storage device at a given location for maximum profit. We apply two versions of our method to optimize the profitability of a storage unit at each location in a 36-bus model of the northeastern United States and southeastern Canada for four representative days, one per season. Finally, we compare the results from the two versions of our method with a multi-period, stochastically optimized economic dispatch of the same power system with storage devices at the locations proposed by our method. We observe a small gap in profit values arising from the effect of the storage device on market prices; however, the ranking of locations by profitability remains almost unchanged. This leads us to conclude that our method can successfully predict the optimum locations for installing storage units in a complex grid using only publicly available electricity market data.
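
    As an illustration of the underlying arbitrage problem, the sketch below solves a linear-programming relaxation for a single candidate location: maximize daily profit against hourly prices subject to power, energy, and efficiency limits. The price series and device parameters are hypothetical, and the LP stands in for the thesis's binary branch-and-bound formulation:

```python
import numpy as np
from scipy.optimize import linprog

lmp = np.array([22, 18, 15, 14, 16, 25, 40, 55, 48, 35, 30, 28,
                27, 26, 29, 38, 52, 60, 45, 36, 30, 27, 24, 23.0])  # $/MWh
T = lmp.size
P_MAX, E_MAX, ETA = 10.0, 40.0, 0.9      # MW, MWh, one-way efficiency

# Decision vector x = [charge_0..charge_{T-1}, discharge_0..discharge_{T-1}];
# linprog minimizes, so cost = charging cost - discharging revenue.
c = np.concatenate([lmp, -lmp])

# State of charge s_t = sum_{k<=t} (ETA*charge_k - discharge_k/ETA)
L = np.tril(np.ones((T, T)))
A_ub = np.vstack([np.hstack([ ETA * L, -L / ETA]),   # s_t <= E_MAX
                  np.hstack([-ETA * L,  L / ETA])])  # -s_t <= 0
b_ub = np.concatenate([np.full(T, E_MAX), np.zeros(T)])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0.0, P_MAX)] * (2 * T))
print("daily arbitrage profit: $", round(-res.fun, 2))
```

    With strictly positive prices and an efficiency below one, simultaneous charging and discharging is never profitable, so the relaxation is tight in this simplified setting; the binary formulation matters in the general case.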

  9. A multidomain spectral collocation method for the Stokes problem

    NASA Technical Reports Server (NTRS)

    Landriani, G. Sacchi; Vandeven, H.

    1989-01-01

    A multidomain spectral collocation scheme is proposed for the approximation of the two-dimensional Stokes problem. It is shown that the discrete velocity vector field is exactly divergence-free and we prove error estimates both for the velocity and the pressure.

  10. Evaluation of assumptions in soil moisture triple collocation analysis

    USDA-ARS?s Scientific Manuscript database

    Triple collocation analysis (TCA) enables estimation of error variances for three or more products that retrieve or estimate the same geophysical variable using mutually-independent methods. Several statistical assumptions regarding the statistical nature of errors (e.g., mutual independence and ort...
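
    The covariance-based estimator at the core of triple collocation is compact enough to state in code: for three collocated products with mutually independent, zero-mean errors, each linearly related to the same true signal, the error variance of one product follows from its variance and the three cross-covariances. A minimal sketch on synthetic data (the error model and noise levels are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
truth = rng.normal(0.25, 0.08, size=2000)      # synthetic soil moisture

# Three products seeing the same signal through different gains and
# mutually independent errors (standard TC error model).
x = truth + rng.normal(0, 0.02, truth.size)
y = 1.2 * truth + rng.normal(0, 0.04, truth.size)
z = 0.8 * truth + rng.normal(0, 0.03, truth.size)

def tc_error_variance(a, b, c):
    """Error variance of product `a` from the covariances of the triplet."""
    C = np.cov(np.vstack([a, b, c]))
    return C[0, 0] - C[0, 1] * C[0, 2] / C[1, 2]

print(np.sqrt([tc_error_variance(x, y, z),
               tc_error_variance(y, x, z),
               tc_error_variance(z, x, y)]))   # ~ [0.02, 0.04, 0.03]
```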

  11. Beyond triple collocation: Applications to satellite soil moisture

    USDA-ARS?s Scientific Manuscript database

    Triple collocation is now routinely used to resolve the exact (linear) relationships between multiple measurements and/or representations of a geophysical variable that are subject to errors. It has been utilized in the context of calibration, rescaling and error characterisation to allow comparison...

  12. Evaluating Remotely-Sensed Surface Soil Moisture Estimates Using Triple Collocation

    USDA-ARS?s Scientific Manuscript database

    Recent work has demonstrated the potential of enhancing remotely-sensed surface soil moisture validation activities through the application of triple collocation techniques which compare time series of three mutually independent geophysical variable estimates in order to acquire the root-mean-square...

  13. Incompressible Navier-Stokes and parabolized Navier-Stokes solution procedures and computational techniques

    NASA Technical Reports Server (NTRS)

    Rubin, S. G.

    1982-01-01

    Recent developments with "finite-difference" techniques are emphasized. The quotation marks reflect the fact that any finite discretization procedure can be included in this category. Many so-called finite element, collocation, and Galerkin methods can be reproduced by appropriate forms of the differential equations and discretization formulas. Many of the difficulties encountered in early Navier-Stokes calculations were inherent not only in the choice of the differential equations (accuracy), but also in the method of solution or choice of algorithm (convergence and stability), in the manner in which the dependent variables or discretized equations are related (coupling), in the manner that boundary conditions are applied, in the manner that the coordinate mesh is specified (grid generation), and finally, in recognizing that for many high Reynolds number flows not all contributions to the Navier-Stokes equations are necessarily of equal importance (parabolization, preferred direction, pressure interaction, asymptotic and mathematical character). It is these elements that are reviewed. Several Navier-Stokes and parabolized Navier-Stokes formulations are also presented.

  14. On the properties of energy stable flux reconstruction schemes for implicit large eddy simulation

    NASA Astrophysics Data System (ADS)

    Vermeire, B. C.; Vincent, P. E.

    2016-12-01

    We begin by investigating the stability, order of accuracy, and dispersion and dissipation characteristics of the extended range of energy stable flux reconstruction (E-ESFR) schemes in the context of implicit large eddy simulation (ILES). We proceed to demonstrate that subsets of the E-ESFR schemes are more stable than collocation nodal discontinuous Galerkin methods recovered with the flux reconstruction approach (FRDG) for marginally-resolved ILES simulations of the Taylor-Green vortex. These schemes are shown to have reduced dissipation and dispersion errors relative to FRDG schemes of the same polynomial degree and, simultaneously, have increased Courant-Friedrichs-Lewy (CFL) limits. Finally, we simulate turbulent flow over an SD7003 aerofoil using two of the most stable E-ESFR schemes identified by the aforementioned Taylor-Green vortex experiments. Results demonstrate that subsets of E-ESFR schemes appear more stable than the commonly used FRDG method, have increased CFL limits, and are suitable for ILES of complex turbulent flows on unstructured grids.

  15. Multigrid calculation of internal flows in complex geometries

    NASA Technical Reports Server (NTRS)

    Smith, K. M.; Vanka, S. P.

    1992-01-01

    The development, validation, and application of a general purpose multigrid solution algorithm and computer program for the computation of elliptic flows in complex geometries is presented. This computer program combines several desirable features including a curvilinear coordinate system, collocated arrangement of the variables, and Full Multi-Grid/Full Approximation Scheme (FMG/FAS). Provisions are made for the inclusion of embedded obstacles and baffles inside the flow domain. The momentum and continuity equations are solved in a decoupled manner and a pressure corrective equation is used to update the pressures such that the fluxes at the cell faces satisfy local mass continuity. Despite the computational overhead required in the restriction and prolongation phases of the multigrid cycling, the superior convergence results in reduced overall CPU time. The numerical scheme and selected results of several validation flows are presented. Finally, the procedure is applied to study the flowfield in a side-inlet dump combustor and twin jet impingement from a simulated aircraft fuselage.

  16. Addressing model error through atmospheric stochastic physical parametrizations: impact on the coupled ECMWF seasonal forecasting system

    PubMed Central

    Weisheimer, Antje; Corti, Susanna; Palmer, Tim; Vitart, Frederic

    2014-01-01

    The finite resolution of general circulation models of the coupled atmosphere–ocean system and the effects of sub-grid-scale variability present a major source of uncertainty in model simulations on all time scales. The European Centre for Medium-Range Weather Forecasts has been at the forefront of developing new approaches to account for these uncertainties. In particular, the stochastically perturbed physical tendency scheme and the stochastically perturbed backscatter algorithm for the atmosphere are now used routinely for global numerical weather prediction. The European Centre also performs long-range predictions of the coupled atmosphere–ocean climate system in operational forecast mode, and the latest seasonal forecasting system—System 4—has the stochastically perturbed tendency and backscatter schemes implemented in a similar way to that for the medium-range weather forecasts. Here, we present results of the impact of these schemes in System 4 by contrasting the operational performance on seasonal time scales during the retrospective forecast period 1981–2010 with comparable simulations that do not account for the representation of model uncertainty. We find that the stochastic tendency perturbation schemes helped to reduce excessively strong convective activity especially over the Maritime Continent and the tropical Western Pacific, leading to reduced biases of the outgoing longwave radiation (OLR), cloud cover, precipitation and near-surface winds. Positive impact was also found for the statistics of the Madden–Julian oscillation (MJO), showing an increase in the frequencies and amplitudes of MJO events. Further, the errors of El Niño southern oscillation forecasts become smaller, whereas increases in ensemble spread lead to a better calibrated system if the stochastic tendency is activated. The backscatter scheme has overall neutral impact. Finally, evidence for noise-activated regime transitions has been found in a cluster analysis of mid-latitude circulation regimes over the Pacific–North America region. PMID:24842026

  17. Selection of polynomial chaos bases via Bayesian model uncertainty methods with applications to sparse approximation of PDEs with stochastic inputs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Karagiannis, Georgios, E-mail: georgios.karagiannis@pnnl.gov; Lin, Guang, E-mail: guang.lin@pnnl.gov

    2014-02-15

    Generalized polynomial chaos (gPC) expansions allow us to represent the solution of a stochastic system using a series of polynomial chaos basis functions. The number of gPC terms increases dramatically as the dimension of the random input variables increases. When the number of the gPC terms is larger than that of the available samples, a scenario that often occurs when the corresponding deterministic solver is computationally expensive, evaluation of the gPC expansion can be inaccurate due to over-fitting. We propose a fully Bayesian approach that allows for global recovery of the stochastic solutions, in both spatial and random domains, by coupling Bayesian model uncertainty and regularization regression methods. It allows the evaluation of the PC coefficients on a grid of spatial points, via (1) the Bayesian model average (BMA) or (2) the median probability model, and their construction as spatial functions on the spatial domain via spline interpolation. The former accounts for the model uncertainty and provides Bayes-optimal predictions; while the latter provides a sparse representation of the stochastic solutions by evaluating the expansion on a subset of dominating gPC bases. Moreover, the proposed methods quantify the importance of the gPC bases in the probabilistic sense through inclusion probabilities. We design a Markov chain Monte Carlo (MCMC) sampler that evaluates all the unknown quantities without the need of ad-hoc techniques. The proposed methods are suitable for, but not restricted to, problems whose stochastic solutions are sparse in the stochastic space with respect to the gPC bases while the deterministic solver involved is expensive. We demonstrate the accuracy and performance of the proposed methods and make comparisons with other approaches on solving elliptic SPDEs with 1-, 14- and 40-random dimensions.
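
    The over-fitting regime targeted by the paper is easy to reproduce. The sketch below fits probabilists'-Hermite gPC coefficients to fewer samples than basis terms, with a plain ridge penalty standing in for the paper's Bayesian shrinkage and model averaging; all sizes and the mock model are illustrative:

```python
import numpy as np
from numpy.polynomial.hermite_e import hermevander

rng = np.random.default_rng(3)
xi = rng.normal(size=8)                   # 8 samples of the random input
u = np.exp(0.5 * xi)                      # mock output of an expensive solver

P = 10                                    # 11 gPC terms > 8 samples
Psi = hermevander(xi, P)                  # probabilists' Hermite design matrix

# Minimum-norm least squares interpolates the samples (over-fits); a ridge
# penalty is the simplest stand-in for the paper's Bayesian shrinkage.
ols, *_ = np.linalg.lstsq(Psi, u, rcond=None)
lam = 1e-2
A = np.vstack([Psi, np.sqrt(lam) * np.eye(P + 1)])
ridge, *_ = np.linalg.lstsq(A, np.concatenate([u, np.zeros(P + 1)]), rcond=None)

xi_new = rng.normal(size=5)               # held-out inputs
V = hermevander(xi_new, P)
print(np.c_[np.exp(0.5 * xi_new), V @ ols, V @ ridge])  # truth vs. fits
```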

  18. Addressing model error through atmospheric stochastic physical parametrizations: impact on the coupled ECMWF seasonal forecasting system.

    PubMed

    Weisheimer, Antje; Corti, Susanna; Palmer, Tim; Vitart, Frederic

    2014-06-28

    The finite resolution of general circulation models of the coupled atmosphere-ocean system and the effects of sub-grid-scale variability present a major source of uncertainty in model simulations on all time scales. The European Centre for Medium-Range Weather Forecasts has been at the forefront of developing new approaches to account for these uncertainties. In particular, the stochastically perturbed physical tendency scheme and the stochastically perturbed backscatter algorithm for the atmosphere are now used routinely for global numerical weather prediction. The European Centre also performs long-range predictions of the coupled atmosphere-ocean climate system in operational forecast mode, and the latest seasonal forecasting system--System 4--has the stochastically perturbed tendency and backscatter schemes implemented in a similar way to that for the medium-range weather forecasts. Here, we present results of the impact of these schemes in System 4 by contrasting the operational performance on seasonal time scales during the retrospective forecast period 1981-2010 with comparable simulations that do not account for the representation of model uncertainty. We find that the stochastic tendency perturbation schemes helped to reduce excessively strong convective activity especially over the Maritime Continent and the tropical Western Pacific, leading to reduced biases of the outgoing longwave radiation (OLR), cloud cover, precipitation and near-surface winds. Positive impact was also found for the statistics of the Madden-Julian oscillation (MJO), showing an increase in the frequencies and amplitudes of MJO events. Further, the errors of El Niño southern oscillation forecasts become smaller, whereas increases in ensemble spread lead to a better calibrated system if the stochastic tendency is activated. The backscatter scheme has overall neutral impact. Finally, evidence for noise-activated regime transitions has been found in a cluster analysis of mid-latitude circulation regimes over the Pacific-North America region.

  19. Stochastic Fermi Energization of Coronal Plasma during Explosive Magnetic Energy Release

    NASA Astrophysics Data System (ADS)

    Pisokas, Theophilos; Vlahos, Loukas; Isliker, Heinz; Tsiolis, Vassilis; Anastasiadis, Anastasios

    2017-02-01

    The aim of this study is to analyze the interaction of charged particles (ions and electrons) with randomly formed particle scatterers (e.g., large-scale local “magnetic fluctuations” or “coherent magnetic irregularities”) using the setup proposed initially by Fermi. These scatterers are formed by the explosive magnetic energy release and propagate with the Alfvén speed along the irregular magnetic fields. They are large-scale local fluctuations (δB/B ≈ 1) randomly distributed inside the unstable magnetic topology and will here be called Alfvénic Scatterers (AS). We constructed a 3D grid on which a small fraction of randomly chosen grid points act as AS. In particular, we study how a large number of test particles evolves inside a collection of AS, analyzing the evolution of their energy distribution and their escape-time distribution. We use a well-established method to estimate the transport coefficients directly from the trajectories of the particles. Using the estimated transport coefficients and solving the Fokker-Planck equation numerically, we can recover the energy distribution of the particles. We have shown that the stochastic Fermi energization of mildly relativistic and relativistic plasma can heat and accelerate the tail of the ambient particle distribution as predicted by Parker & Tidman and Ramaty. The temperature of the hot plasma and the tail of the energetic particles depend on the mean free path (λ_sc) of the particles between the scatterers inside the energization volume.
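
    The "well-established method" for extracting transport coefficients from trajectories amounts to binning increments of the particle energy by energy and forming conditional moments. A minimal sketch on synthetic trajectories (the energization process below is an illustrative random walk, not the paper's test-particle model):

```python
import numpy as np

rng = np.random.default_rng(4)
n_part, n_steps, dt = 5000, 400, 1.0

# Synthetic energization: multiplicative random kicks standing in for the
# test-particle energies W(t) recorded inside the scatterer grid.
W = np.empty((n_part, n_steps))
W[:, 0] = 1.0
for k in range(1, n_steps):
    W[:, k] = W[:, k - 1] * (1.0 + rng.normal(0.01, 0.05, n_part))

dW = np.diff(W, axis=1).ravel()
Wmid = W[:, :-1].ravel()

edges = np.logspace(np.log10(Wmid.min()), np.log10(Wmid.max()), 25)
idx = np.digitize(Wmid, edges)
F, D = [], []                     # drift and diffusion vs. energy bin
for i in range(1, edges.size):
    sel = idx == i
    if sel.sum() > 50:            # skip sparsely populated bins
        F.append(dW[sel].mean() / dt)
        D.append(0.5 * (dW[sel] ** 2).mean() / dt)

# F(W) and D(W) are the coefficients of the Fokker-Planck equation
#   df/dt = -d(F f)/dW + d^2(D f)/dW^2
print(len(F), np.round(F[:3], 4), np.round(D[:3], 5))
```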

  20. Pseudospectral collocation methods for fourth order differential equations

    NASA Technical Reports Server (NTRS)

    Malek, Alaeddin; Phillips, Timothy N.

    1994-01-01

    Collocation schemes are presented for solving linear fourth order differential equations in one and two dimensions. The variational formulation of the model fourth order problem is discretized by approximating the integrals by a Gaussian quadrature rule generalized to include the values of the derivative of the integrand at the boundary points. Collocation schemes are derived which are equivalent to this discrete variational problem. An efficient preconditioner based on a low-order finite difference approximation to the same differential operator is presented. The corresponding multidomain problem is also considered and interface conditions are derived. Pseudospectral approximations which are C1 continuous at the interfaces are used in each subdomain to approximate the solution. The approximations are also shown to be C3 continuous at the interfaces asymptotically. A complete analysis of the collocation scheme for the multidomain problem is provided. The extension of the method to the biharmonic equation in two dimensions is discussed and results are presented for a problem defined in a nonrectangular domain.

  1. Plug-in hybrid electric vehicles in smart grid

    NASA Astrophysics Data System (ADS)

    Yao, Yin

    In this thesis, in order to investigate the impact of charging load from plug-in hybrid electric vehicles (PHEVs), a stochastic model is developed in Matlab. In this model, two main types of PHEVs are defined: public transportation vehicles and private vehicles. Different charging time schedules, charging speeds, and battery capacities are considered for each type of vehicle. The simulation results reveal that there will be two load peaks (at noon and in the evening) when the penetration level of PHEVs increases continuously to 30% in 2030. Therefore, an optimization tool is used to shift the load peaks. This optimization process is based on real-time pricing and wind power output data. With the help of the smart grid, the power allocated to each vehicle can be controlled. As a result, this optimization can shift load peaks to valley periods when the real-time price is low or wind output is high.
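
    In sketch form, such a charging-load model samples plug-in times and recharge energies per vehicle class and sums the per-vehicle draw on a common time grid. All fleet sizes and charger parameters below are hypothetical stand-ins, not the thesis's calibrated values:

```python
import numpy as np

rng = np.random.default_rng(5)
STEPS = 96                                   # 15-min resolution, one day

def fleet_load(n, start_mu_h, start_sd_h, power_kw, energy_kwh):
    """Aggregate charging profile (kW) for one vehicle class."""
    load = np.zeros(STEPS)
    starts = (rng.normal(start_mu_h, start_sd_h, n) % 24 * 4).astype(int)
    hours = rng.uniform(0.5, 1.0, n) * energy_kwh / power_kw
    durations = np.ceil(hours * 4).astype(int)
    for s, d in zip(starts, durations):
        load[np.arange(s, s + d) % STEPS] += power_kw   # wrap past midnight
    return load

# Two classes as in the thesis: public transport (midday, fast chargers)
# and private vehicles (evening, home chargers); all numbers illustrative.
total = (fleet_load(2_000, 12.0, 1.0, 50.0, 60.0)
         + fleet_load(50_000, 19.0, 2.0, 6.6, 16.0))
print(f"peak load {total.max() / 1e3:.1f} MW at hour {total.argmax() / 4:.2f}")
```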

  2. Deterministic multidimensional nonuniform gap sampling.

    PubMed

    Worley, Bradley; Powers, Robert

    2015-12-01

    Born from empirical observations in nonuniformly sampled multidimensional NMR data relating to gaps between sampled points, the Poisson-gap sampling method has enjoyed widespread use in biomolecular NMR. While the majority of nonuniform sampling schemes are fully randomly drawn from probability densities that vary over a Nyquist grid, the Poisson-gap scheme employs constrained random deviates to minimize the gaps between sampled grid points. We describe a deterministic gap sampling method, based on the average behavior of Poisson-gap sampling, which performs comparably to its random counterpart with the additional benefit of completely deterministic behavior. We also introduce a general algorithm for multidimensional nonuniform sampling based on a gap equation, and apply it to yield a deterministic sampling scheme that combines burst-mode sampling features with those of Poisson-gap schemes. Finally, we derive a relationship between stochastic gap equations and the expectation value of their sampling probability densities. Copyright © 2015 Elsevier Inc. All rights reserved.
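
    The flavor of the approach can be conveyed in a few lines: replace the Poisson random deviate for each gap by its (sinusoidally weighted) expected value, and tune the weight so the schedule hits the sample budget. This is a sketch of the idea only, not the published gap equation; names and the bisection depth are illustrative.

```python
import numpy as np

def deterministic_gap_schedule(grid_size, n_samples):
    """Deterministic analogue of Poisson-gap sampling on a 1-D Nyquist grid:
    each gap takes its (sinusoidally weighted) expected value, so early FID
    points are sampled densely and later ones sparsely. The weight lam is
    tuned by bisection to hit the requested number of samples."""
    lam_lo, lam_hi = 0.0, 4.0 * grid_size / n_samples
    for _ in range(60):
        lam = 0.5 * (lam_lo + lam_hi)
        pts, x = [], 0
        while x < grid_size:
            pts.append(x)
            x += 1 + int(lam * np.sin(0.5 * np.pi * x / grid_size))
        if len(pts) > n_samples:     # too many points -> enlarge the gaps
            lam_lo = lam
        else:
            lam_hi = lam
    return np.array(pts)

sched = deterministic_gap_schedule(256, 64)
print(sched.size, sched[:10])
```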

  3. SALUTE Grid Application using Message-Oriented Middleware

    NASA Astrophysics Data System (ADS)

    Atanassov, E.; Dimitrov, D. Sl.; Gurov, T.

    2009-10-01

    Stochastic ALgorithms for Ultra-fast Transport in sEmiconductors (SALUTE) is a grid application developed for solving various computationally intensive problems which describe ultra-fast carrier transport in semiconductors. SALUTE studies memory and quantum effects during the relaxation process due to electron-phonon interaction in one-band semiconductors or quantum wires. Formally, SALUTE integrates a set of novel Monte Carlo, quasi-Monte Carlo and hybrid algorithms for solving various computationally intensive problems which describe the femtosecond relaxation process of optically excited carriers in one-band semiconductors or quantum wires. In this paper we present an application-specific job submission and reservation management tool named Job Track Server (JTS). It is developed using Message-Oriented middleware to implement a robust, versatile job submission and tracing mechanism, which can be tailored to application-specific failover and quality-of-service requirements. Experience from using the JTS for submission of SALUTE jobs is presented.

  4. Adaptive wavelet collocation methods for initial value boundary problems of nonlinear PDE's

    NASA Technical Reports Server (NTRS)

    Cai, Wei; Wang, Jian-Zhong

    1993-01-01

    We have designed a cubic spline wavelet decomposition for the Sobolev space H(sup 2)(sub 0)(I) where I is a bounded interval. Based on a special 'point-wise orthogonality' of the wavelet basis functions, a fast Discrete Wavelet Transform (DWT) is constructed. This DWT transform will map discrete samples of a function to its wavelet expansion coefficients in O(N log N) operations. Using this transform, we propose a collocation method for the initial value boundary problem of nonlinear PDE's. Then, we test the efficiency of the DWT transform and apply the collocation method to solve linear and nonlinear PDE's.

  5. Some spectral approximation of one-dimensional fourth-order problems

    NASA Technical Reports Server (NTRS)

    Bernardi, Christine; Maday, Yvon

    1989-01-01

    Some spectral-type collocation methods well suited for the approximation of fourth-order systems are proposed. The model problem is the biharmonic equation, in one and two dimensions, when the boundary conditions are periodic in one direction. It is proved that the standard Gauss-Lobatto nodes are not the best choice for the collocation points. Then, a new set of nodes related to some generalized Gauss-type quadrature formulas is proposed. A complete analysis of these formulas is also provided, including some new results on the asymptotic behavior of the weights, and these results are applied to the analysis of the collocation method.

  6. Evaluation of the Relative Contribution of Observing Systems in Reanalyses: Aircraft Temperature Bias and Analysis Innovations

    NASA Technical Reports Server (NTRS)

    Bosilovich, Michael G.; Dasilva, Arindo M.

    2012-01-01

    Reanalyses have become important sources of data in weather and climate research. While observations are the most crucial component of the systems, few research projects consider carefully the multitudes of assimilated observations and their impact on the results. This is partly due to the diversity of observations and their individual complexity, but also due to the unfriendly nature of the data formats. Here, we discuss the NASA Modern-Era Retrospective analysis for Research and Applications (MERRA) and a companion dataset, the Gridded Innovations and Observations (GIO). GIO is simply a post-processing of the assimilated observations and their innovations (forecast error and analysis error) to a common spatio-temporal grid, following that of the MERRA analysis fields. This data includes in situ, retrieved, and radiance observations that are assimilated and used in the reanalysis. While all these disparate observations and statistics are in a uniform, easily accessible format, there are some limitations. Similar observations are binned to the grid, so that multiple observations are combined in the gridding process; the data are thus implicitly thinned. Some details in the metadata may also be lost (e.g., aircraft or station ID). Nonetheless, the gridded observations should provide easy access to all the observations input to the reanalysis. To provide an example of the GIO data, a case study evaluating observing systems and their statistics over the United States is presented, demonstrating the evaluation of the observations and the data assimilation. The GIO data are used to collocate 200 mb radiosonde and aircraft temperature measurements from 1979-2009. A known warm bias of the aircraft measurements is apparent compared to the radiosonde data. However, when larger quantities of aircraft data are available, they dominate the analysis and the radiosonde data become biased against the forecast. When AMSU radiances become available, the radiosonde and aircraft analysis and forecast errors take on an annual cycle. While this supports results of previous work that recommend bias corrections for the aircraft measurements, the interactions with AMSU radiances will also require further investigation. This also provides an example for reanalysis users in examining the available observations and their impact on the analysis. GIO data are presently available alongside the MERRA reanalysis.

  7. Recent advances in (soil moisture) triple collocation analysis

    USDA-ARS?s Scientific Manuscript database

    To date, triple collocation (TC) analysis is one of the most important methods for the global scale evaluation of remotely sensed soil moisture data sets. In this study we review existing implementations of soil moisture TC analysis as well as investigations of the assumptions underlying the method....

  8. Spectral collocation methods

    NASA Technical Reports Server (NTRS)

    Hussaini, M. Y.; Kopriva, D. A.; Patera, A. T.

    1987-01-01

    This review covers the theory and application of spectral collocation methods. Section 1 describes the fundamentals, and summarizes results pertaining to spectral approximations of functions. Some stability and convergence results are presented for simple elliptic, parabolic, and hyperbolic equations. Applications of these methods to fluid dynamics problems are discussed in Section 2.

  9. A Comparative Usage-Based Approach to the Reduction of the Spanish and Portuguese Preposition "Para"

    ERIC Educational Resources Information Center

    Gradoville, Michael Stephen

    2013-01-01

    This study examines the frequency effect of two-word collocations involving "para" "to," "for" (e.g. "fui para," "para que") on the reduction of "para" to "pa" (in Spanish) and "pra" (in Portuguese). Collocation frequency effects demonstrate that language speakers…

  10. The creation of future daily gridded datasets of precipitation and temperature with a spatial weather generator, Cyprus 2020-2050

    NASA Astrophysics Data System (ADS)

    Camera, Corrado; Bruggeman, Adriana; Hadjinicolaou, Panos; Pashiardis, Stelios; Lange, Manfred

    2014-05-01

    High-resolution gridded daily datasets are essential for natural resource management and the analysis of climate changes and their effects. This study aimed to create gridded datasets of daily precipitation and daily minimum and maximum temperature for the future (2020-2050). The horizontal resolution of the developed datasets is 1 x 1 km2, covering the area under control of the Republic of Cyprus (5,760 km2). The study is divided into two parts. The first consists of the evaluation of the performance of different interpolation techniques for daily rainfall and temperature data (1980-2010) for the creation of the gridded datasets. Rainfall data recorded at 145 stations and temperature data from 34 stations were used. For precipitation, inverse distance weighting (IDW) performs best for local events, while a combination of step-wise geographically weighted regression and IDW proves to be the best method for large-scale events. For minimum and maximum temperature, a combination of step-wise linear multiple regression and thin plate splines is recognized as the best method. Six Regional Climate Models (RCMs) for the A1B SRES emission scenario from the EU ENSEMBLES project database were selected as sources for future climate projections. The RCMs were evaluated for their capacity to simulate the Cyprus climatology for the period 1980-2010. Data for the period 2020-2050 from the three best-performing RCMs were downscaled, using the change-factor approach, at the locations of the observational stations. Daily time series were created with a stochastic rainfall and temperature generator. The RainSim V3 software (Burton et al., 2008) was used to generate spatially and temporally coherent rainfall fields. The temperature generator was developed in R and modeled temperature as a weakly stationary process, with the daily mean and standard deviation conditioned on the wet or dry state of the day (Richardson, 1981). Finally, gridded datasets depicting projected future climate conditions were created with the identified best interpolation methods. The difference between the input and simulated mean daily rainfall, averaged over all the stations, was 0.03 mm (2.2%), while the error related to the number of dry days was 2 (0.6%). For mean daily minimum temperature the error was 0.005 ºC (0.04%), while for maximum temperature it was 0.01 ºC (0.04%). Overall, the weather generators were found to be reliable instruments for the downscaling of precipitation and temperature. The resulting datasets indicate a decrease of the mean annual rainfall over the study area of between 5 and 70 mm (1-15%) for 2020-2050, relative to 1980-2010. Average annual minimum and maximum temperature over the Republic of Cyprus are projected to increase between 1.2 and 1.5 ºC. The dataset is currently used to compute agricultural production and water use indicators, as part of the AGWATER project (AEIFORIA/GEORGO/0311(BIE)/06), co-financed by the European Regional Development Fund and the Republic of Cyprus through the Research Promotion Foundation. Burton, A., Kilsby, C.G., Fowler, H.J., Cowpertwait, P.S.P., and O'Connell, P.E.: RainSim: A spatial-temporal stochastic rainfall modelling system. Environ. Model. Software 23, 1356-1369, 2008. Richardson, C.W.: Stochastic simulation of daily precipitation, temperature, and solar radiation. Water Resour. Res. 17, 182-190, 1981.
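
    The change-factor downscaling step itself is compact: multiplicative monthly factors for precipitation and additive monthly offsets for temperature, computed from paired RCM control and future runs and applied to the station observations before the generators are recalibrated. A minimal sketch (function names and the synthetic data are hypothetical):

```python
import numpy as np

def monthly_change_factors(ctrl_pr, ctrl_t, ctrl_m, fut_pr, fut_t, fut_m):
    """Multiplicative precipitation factors and additive temperature offsets,
    one per calendar month, from RCM control and future daily series."""
    pr_f = np.array([fut_pr[fut_m == m].mean() / ctrl_pr[ctrl_m == m].mean()
                     for m in range(1, 13)])
    t_f = np.array([fut_t[fut_m == m].mean() - ctrl_t[ctrl_m == m].mean()
                    for m in range(1, 13)])
    return pr_f, t_f

def perturb_observations(obs_pr, obs_t, obs_m, pr_f, t_f):
    """Apply the factors to observed daily series; the stochastic generators
    are then recalibrated on the perturbed series."""
    return obs_pr * pr_f[obs_m - 1], obs_t + t_f[obs_m - 1]

rng = np.random.default_rng(6)
m_ctrl = rng.integers(1, 13, 10958)          # daily months, control period
m_fut = rng.integers(1, 13, 11323)           # daily months, 2020-2050 run
pr_f, t_f = monthly_change_factors(
    rng.gamma(0.4, 4.0, m_ctrl.size), rng.normal(18.0, 7.0, m_ctrl.size), m_ctrl,
    rng.gamma(0.4, 3.6, m_fut.size), rng.normal(19.3, 7.0, m_fut.size), m_fut)
print(pr_f.round(2), t_f.round(2))
```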

  11. Reports 10, The Yugoslav Serbo-Croatian-English Contrastive Project.

    ERIC Educational Resources Information Center

    Filipovic, Rudolf

    The tenth volume in this series contains five articles dealing with various aspects of Serbo-Croatian-English contrastive analysis. They are: "The Infinitive as Subject in English and Serbo-Croatian," by Ljiljana Bibovic; "The Contrastive Analysis of Collocations: Collocational Ranges of "Make" and "Take" with Nouns and Their Serbo-Croatian…

  12. No Silver Bullet: L2 Collocation Instruction in an Advanced Spanish Classroom

    ERIC Educational Resources Information Center

    Jensen, Eric Carl

    2017-01-01

    Many contemporary second language (L2) instructional materials feature collocation exercises; however, few studies have verified their effectiveness (Boers, Demecheleer, Coxhead, & Webb, 2014) or whether these exercises can be utilized for target languages beyond English (Higueras García, 2017). This study addresses these issues by…

  13. Assessing Team Learning in Technology-Mediated Collaboration: An Experimental Study

    ERIC Educational Resources Information Center

    Andres, Hayward P.; Akan, Obasi H.

    2010-01-01

    This study examined the effects of collaboration mode (collocated versus non-collocated videoconferencing-mediated) on team learning and team interaction quality in a team-based problem solving context. Situated learning theory and the theory of affordances are used to provide a framework that describes how technology-mediated collaboration…

  14. Collocation in Regional Development--The Peel Education and TAFE Response.

    ERIC Educational Resources Information Center

    Goff, Malcolm H.; Nevard, Jennifer

    The collocation of services in regional Western Australia (WA) is an important strand of WA's regional development policy. The initiative is intended to foster working relationships among stakeholder groups with a view toward ensuring that regional WA communities have access to quality services. Clustering compatible services in smaller…

  15. Interlanguage Development and Collocational Clash

    ERIC Educational Resources Information Center

    Shahheidaripour, Gholamabbass

    2000-01-01

    Background: Persian learners of English committed mistakes and errors that were due to insufficient knowledge of the different senses of words and of the collocational structures they form. Purpose: The study reported here was conducted for a thesis submitted in partial fulfillment of the requirements for The Master of Arts degree, School of Graduate…

  16. 47 CFR 51.323 - Standards for physical collocation and virtual collocation.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... unbundled network element if and only if the primary purpose and function of the equipment, as the... nondiscriminatory access to that unbundled network element, including any of its features, functions, or... must be a logical nexus between the additional functions the equipment would perform and the...

  17. Testing ESL Learners' Knowledge of Collocations.

    ERIC Educational Resources Information Center

    Bonk, William J.

    This study reports on the development, administration, and analysis of a test of collocational knowledge for English-as-a-Second-Language (ESL) learners of a wide range of proficiency levels. Through native speaker item validation and pilot testing, three subtests were developed and administered to 98 ESL learners of low-intermediate to advanced…

  18. An Accurate GPS-IMU/DR Data Fusion Method for Driverless Car Based on a Set of Predictive Models and Grid Constraints

    PubMed Central

    Wang, Shiyao; Deng, Zhidong; Yin, Gang

    2016-01-01

    A high-performance differential global positioning system (GPS) receiver with real-time kinematics provides absolute localization for driverless cars. However, it is not only susceptible to multipath effects but also unable to effectively fulfill precise error correction over a wide range of driving areas. This paper proposes an accurate GPS–inertial measurement unit (IMU)/dead reckoning (DR) data fusion method based on a set of predictive models and occupancy grid constraints. First, we employ a set of autoregressive and moving average (ARMA) equations that have different structural parameters to build maximum likelihood models of raw navigation. Second, both grid constraints and spatial consensus checks on all predictive results and current measurements are required in order to remove outliers. Navigation data that satisfy a stationary stochastic process are further fused to achieve accurate localization results. Third, the standard deviation of multimodal data fusion can be pre-specified by the grid size. Finally, we performed extensive field tests in a variety of real urban scenarios. The experimental results demonstrate that the method can significantly smooth small jumps in bias and considerably reduce accumulated position errors due to DR. With low computational complexity, the position accuracy of our method surpasses existing state-of-the-art methods on the same dataset, and the new data fusion method is practically applied in our driverless car. PMID:26927108
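
    The prediction-plus-gating logic can be sketched compactly. Below, a numpy-only AR(2) predictor stands in for the paper's bank of ARMA models, and a scalar interval stands in for the occupancy-grid constraint; the function names, thresholds, and 1-D simplification are all assumptions for illustration:

```python
import numpy as np

def ar2_predict(track):
    """One-step AR(2) prediction fitted on the trailing position history."""
    y = np.asarray(track, dtype=float)
    X = np.column_stack([y[1:-1], y[:-2]])       # lags 1 and 2
    a, *_ = np.linalg.lstsq(X, y[2:], rcond=None)
    return a[0] * y[-1] + a[1] * y[-2]

def accept_fix(track, meas, grid_center, grid_half_width, gate=3.0):
    """Accept a GPS fix only if it agrees with the model prediction AND
    lies inside the grid bound around the dead-reckoning estimate."""
    pred = ar2_predict(track)
    scale = np.std(np.diff(track)) + 1e-9        # crude innovation scale
    in_gate = abs(meas - pred) < gate * scale
    in_grid = abs(meas - grid_center) < grid_half_width
    return in_gate and in_grid

track = [0.0, 1.0, 2.1, 3.0, 4.05, 5.0]          # 1-D position history (m)
print(accept_fix(track, 6.0, 6.0, 0.5))          # consistent fix -> True
print(accept_fix(track, 9.5, 6.0, 0.5))          # multipath jump -> False
```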

  19. An Accurate GPS-IMU/DR Data Fusion Method for Driverless Car Based on a Set of Predictive Models and Grid Constraints.

    PubMed

    Wang, Shiyao; Deng, Zhidong; Yin, Gang

    2016-02-24

    A high-performance differential global positioning system (GPS) receiver with real-time kinematics provides absolute localization for driverless cars. However, it is not only susceptible to multipath effects but also unable to effectively fulfill precise error correction over a wide range of driving areas. This paper proposes an accurate GPS-inertial measurement unit (IMU)/dead reckoning (DR) data fusion method based on a set of predictive models and occupancy grid constraints. First, we employ a set of autoregressive and moving average (ARMA) equations that have different structural parameters to build maximum likelihood models of raw navigation. Second, both grid constraints and spatial consensus checks on all predictive results and current measurements are required in order to remove outliers. Navigation data that satisfy a stationary stochastic process are further fused to achieve accurate localization results. Third, the standard deviation of multimodal data fusion can be pre-specified by the grid size. Finally, we performed extensive field tests in a variety of real urban scenarios. The experimental results demonstrate that the method can significantly smooth small jumps in bias and considerably reduce accumulated position errors due to DR. With low computational complexity, the position accuracy of our method surpasses existing state-of-the-art methods on the same dataset, and the new data fusion method is practically applied in our driverless car.

  20. An advanced stochastic weather generator for simulating 2-D high-resolution climate variables

    NASA Astrophysics Data System (ADS)

    Peleg, Nadav; Fatichi, Simone; Paschalis, Athanasios; Molnar, Peter; Burlando, Paolo

    2017-07-01

    A new stochastic weather generator, the Advanced WEather GENerator for a two-dimensional grid (AWE-GEN-2d), is presented. The model combines physical and stochastic approaches to simulate key meteorological variables at high spatial and temporal resolution: 2 km × 2 km and 5 min for precipitation and cloud cover, and 100 m × 100 m and 1 h for near-surface air temperature, solar radiation, vapor pressure, atmospheric pressure, and near-surface wind. The model requires spatially distributed data for the calibration process, which can nowadays be obtained from remote sensing devices (weather radar and satellites), reanalysis data sets, and ground stations. AWE-GEN-2d is parsimonious in terms of computational demand and therefore is particularly suitable for studies where exploring internal climatic variability at multiple spatial and temporal scales is fundamental. Applications of the model include models of environmental systems, such as hydrological and geomorphological models, where high-resolution spatial and temporal meteorological forcing is crucial. The weather generator was calibrated and validated for the Engelberg region, an area with complex topography in the Swiss Alps. Model tests show that the climate variables are generated by AWE-GEN-2d with a level of accuracy that is sufficient for many practical applications.

  1. Effects of dynamic-demand-control appliances on the power grid frequency.

    PubMed

    Tchuisseu, E B Tchawou; Gomila, D; Brunner, D; Colet, P

    2017-08-01

    Power grid frequency control is a demanding task requiring expensive idle power plants to adapt the supply to the fluctuating demand. An alternative approach is controlling the demand side in such a way that certain appliances modify their operation to adapt to the power availability. This is especially important to achieve a high penetration of renewable energy sources. A number of methods to manage the demand side have been proposed. In this work we focus on dynamic demand control (DDC), where smart appliances can delay their switchings depending on the frequency of the system. We introduce a simple model to study the effects of DDC on the frequency of the power grid. The model includes the power plant equations, a stochastic model for the demand that reproduces, by adjusting a single parameter, the statistical properties of frequency fluctuations measured experimentally, and a generic DDC protocol. We find that DDC can reduce small and medium-size fluctuations, but it can also increase the probability of observing large frequency peaks due to the necessity of recovering pending tasks. We also conclude that a DDC deployment of around 30-40% already allows a significant reduction of the fluctuations while keeping the number of pending tasks low.
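
    A stripped-down model of this kind couples the aggregate swing equation to a fluctuating demand and lets smart appliances defer switch-ons while the frequency is low, recovering the pending tasks once it rises again. The sketch below is illustrative only; the parameters, thresholds, and noise model are assumptions, not the paper's calibrated values:

```python
import numpy as np

rng = np.random.default_rng(7)
dt, steps = 0.1, 50_000              # s; about 83 minutes of grid time
M, DAMP = 10.0, 1.0                  # aggregate inertia and damping
P_GEN = 1.0                          # scheduled generation (per unit)
TASK = 0.01                          # load of one deferrable appliance

f, demand, pending = 0.0, 1.0, 0
freq = np.empty(steps)
for k in range(steps):
    # Ornstein-Uhlenbeck demand fluctuations (one tunable noise level)
    demand += -0.05 * (demand - 1.0) * dt + 0.02 * np.sqrt(dt) * rng.normal()
    # DDC protocol: defer a switch-on while frequency is low, recover a
    # pending task once frequency is back above nominal
    ddc = 0.0
    if f < -0.04:
        pending += 1
        ddc = -TASK
    elif f > 0.0 and pending > 0:
        pending -= 1
        ddc = +TASK
    # swing equation for the aggregate frequency deviation
    f += dt / M * (P_GEN - (demand + ddc) - DAMP * f)
    freq[k] = f

print("freq std:", freq.std().round(4), "pending tasks left:", pending)
```

    The recovery branch is exactly the mechanism the abstract flags: deferred tasks returning in bursts can produce occasional large frequency peaks even while routine fluctuations shrink.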

  2. Effects of dynamic-demand-control appliances on the power grid frequency

    NASA Astrophysics Data System (ADS)

    Tchuisseu, E. B. Tchawou; Gomila, D.; Brunner, D.; Colet, P.

    2017-08-01

    Power grid frequency control is a demanding task requiring expensive idle power plants to adapt the supply to the fluctuating demand. An alternative approach is controlling the demand side in such a way that certain appliances modify their operation to adapt to the power availability. This is especially important to achieve a high penetration of renewable energy sources. A number of methods to manage the demand side have been proposed. In this work we focus on dynamic demand control (DDC), where smart appliances can delay their switchings depending on the frequency of the system. We introduce a simple model to study the effects of DDC on the frequency of the power grid. The model includes the power plant equations, a stochastic model for the demand that reproduces, by adjusting a single parameter, the statistical properties of frequency fluctuations measured experimentally, and a generic DDC protocol. We find that DDC can reduce small and medium-size fluctuations, but it can also increase the probability of observing large frequency peaks due to the necessity of recovering pending tasks. We also conclude that a DDC deployment of around 30-40% already allows a significant reduction of the fluctuations while keeping the number of pending tasks low.

  3. Model Uncertainty Quantification Methods For Data Assimilation In Partially Observed Multi-Scale Systems

    NASA Astrophysics Data System (ADS)

    Pathiraja, S. D.; van Leeuwen, P. J.

    2017-12-01

    Model Uncertainty Quantification remains one of the central challenges of effective Data Assimilation (DA) in complex, partially observed non-linear systems. Stochastic parameterization methods have been proposed in recent years as a means of capturing the uncertainty associated with unresolved sub-grid scale processes. Such approaches generally require some knowledge of the true sub-grid scale process or rely on full observations of the larger-scale resolved process. We present a methodology for estimating the statistics of sub-grid scale processes using only partial observations of the resolved process. It finds model error realisations over a training period by minimizing their conditional variance, constrained by available observations. Notably, these realisations are binned conditional on the previous model state during the minimization, allowing complex error structures to be recovered. The efficacy of the approach is demonstrated through numerical experiments on the multi-scale Lorenz '96 model. We consider different parameterizations of the model with both small and large time-scale separations between slow and fast variables. Results are compared to two existing methods for accounting for model uncertainty in DA and are shown to provide improved analyses and forecasts.

  4. Elastostatic stress analysis of orthotropic rectangular center-cracked plates

    NASA Technical Reports Server (NTRS)

    Gyekenyesi, G. S.; Mendelson, A.

    1972-01-01

    A mapping-collocation method was developed for the elastostatic stress analysis of finite, anisotropic plates with centrally located traction-free cracks. The method essentially consists of mapping the crack into the unit circle and satisfying the crack boundary conditions exactly with the help of Muskhelishvili's function extension concept. The conditions on the outer boundary are satisfied approximately by applying the method of least-squares boundary collocation. A parametric study of finite-plate stress intensity factors, employing this mapping-collocation method, is presented. It shows the effects of varying material properties, orientation angle, and crack-length-to-plate-width and plate-height-to-plate-width ratios for rectangular orthotropic plates under constant tensile and shear loads.

  5. Locating CVBEM collocation points for steady state heat transfer problems

    USGS Publications Warehouse

    Hromadka, T.V.

    1985-01-01

    The Complex Variable Boundary Element Method or CVBEM provides a highly accurate means of developing numerical solutions to steady state two-dimensional heat transfer problems. The numerical approach exactly solves the Laplace equation and satisfies the boundary conditions at specified points on the boundary by means of collocation. The accuracy of the approximation depends upon the nodal point distribution specified by the numerical analyst. In order to develop subsequent, refined approximation functions, four techniques for selecting additional collocation points are presented. The techniques are compared as to the governing theory, representation of the error of approximation on the problem boundary, the computational costs, and the ease of use by the numerical analyst. © 1985.

  6. Efficient Jacobi-Gauss collocation method for solving initial value problems of Bratu type

    NASA Astrophysics Data System (ADS)

    Doha, E. H.; Bhrawy, A. H.; Baleanu, D.; Hafez, R. M.

    2013-09-01

    In this paper, we propose the shifted Jacobi-Gauss collocation spectral method for solving initial value problems of Bratu type, which is widely applicable in fuel ignition of the combustion theory and heat transfer. The spatial approximation is based on shifted Jacobi polynomials J_n^(α,β)(x) with α, β ∈ (-1, ∞), x ∈ [0, 1], and n the polynomial degree. The shifted Jacobi-Gauss points are used as collocation nodes. Illustrative examples are discussed to demonstrate the validity and applicability of the proposed technique. Comparisons of the numerical results of the proposed method with well-known benchmark results show that the method is efficient and gives excellent numerical results.
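
    A standard accuracy check for solvers of this family uses the Bratu-type initial value problem u'' = 2 e^u, u(0) = u'(0) = 0, whose exact solution is u(x) = -2 ln(cos x). The sketch below verifies a solution with a generic ODE integrator rather than the paper's Jacobi-Gauss collocation; the choice of test problem is an assumption, not necessarily one of the paper's examples:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Bratu-type initial value problem u'' = 2 exp(u), u(0) = u'(0) = 0,
# with exact solution u(x) = -2 ln(cos x) on [0, 1].
sol = solve_ivp(lambda x, y: [y[1], 2.0 * np.exp(y[0])],
                (0.0, 1.0), [0.0, 0.0], rtol=1e-10, atol=1e-12,
                dense_output=True)

x = np.linspace(0.0, 1.0, 5)
exact = -2.0 * np.log(np.cos(x))
print(np.abs(sol.sol(x)[0] - exact).max())   # error near the set tolerance
```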

  7. Collocational Processing in Light of the Phraseological Continuum Model: Does Semantic Transparency Matter?

    ERIC Educational Resources Information Center

    Gyllstad, Henrik; Wolter, Brent

    2016-01-01

    The present study investigates whether two types of word combinations (free combinations and collocations) differ in terms of processing by testing Howarth's Continuum Model based on word combination typologies from a phraseological tradition. A visual semantic judgment task was administered to advanced Swedish learners of English (n = 27) and…

  8. 47 CFR 51.323 - Standards for physical collocation and virtual collocation.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... unbundled network elements. (1) Equipment is necessary for interconnection if an inability to deploy that... obtains within its own network or the incumbent provides to any affiliate, subsidiary, or other party. (2) Equipment is necessary for access to an unbundled network element if an inability to deploy that equipment...

  9. 47 CFR 51.323 - Standards for physical collocation and virtual collocation.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... unbundled network elements. (1) Equipment is necessary for interconnection if an inability to deploy that... obtains within its own network or the incumbent provides to any affiliate, subsidiary, or other party. (2) Equipment is necessary for access to an unbundled network element if an inability to deploy that equipment...

  10. Improving English Learners' Productive Collocation Knowledge: The Effects of Involvement Load, Spacing, and Intentionality

    ERIC Educational Resources Information Center

    Snoder, Per

    2017-01-01

    This article reports on a classroom-based experiment that tested the effects of three vocabulary teaching constructs (involvement load, spacing, and intentionality) on the learning of English verb-noun collocations--for example, "shelve a plan." Laufer and Hulstijn's (2001) "involvement load" predicts that the higher the…

  11. Strategies in Translating Collocations in Religious Texts from Arabic into English

    ERIC Educational Resources Information Center

    Dweik, Bader S.; Shakra, Mariam M. Abu

    2010-01-01

    The present study investigated the strategies adopted by students in translating specific lexical and semantic collocations in three religious texts namely, the Holy Quran, the Hadith and the Bible. For this purpose, the researchers selected a purposive sample of 35 MA translation students enrolled in three different public and private Jordanian…

  12. 77 FR 60089 - Approval and Promulgation of Air Quality Implementation Plans; Delaware, New Jersey, and...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-10-02

    ... quarter substitution test. ``Collocated'' indicates that the collocated data was substituted for missing... 24-hour standard design value is greater than the level of the standard. EPA addresses missing data... substituted for the missing data. In the maximum quarter test, maximum recorded values are substituted for the...

  13. Shape Control of Plates with Piezo Actuators and Collocated Position/Rate Sensors

    NASA Technical Reports Server (NTRS)

    Balakrishnan, A. V.

    1994-01-01

    This paper treats the control problem of shaping the surface deformation of a circular plate using embedded piezo-electric actuators and collocated rate sensors. An explicit Linear Quadratic Gaussian (LQG) optimizer stability augmentation compensator is derived as well as the optimal feed-forward control. Corresponding performance evaluation formulas are also derived.

  14. Shape Control of Plates with Piezo Actuators and Collocated Position/Rate Sensors

    NASA Technical Reports Server (NTRS)

    Balakrishnan, A. V.

    1994-01-01

    This paper treats the control problem of shaping the surface deformation of a circular plate using embedded piezo-electric actuators and collocated rate sensors. An explicit Linear Quadratic Gaussian (LQG) optimizer stability augmentation compensator is derived as well as the optimal feed-forward control. Corresponding performance evaluation formulas are also derived.

  15. An improved triple collocation algorithm for decomposing autocorrelated and white soil moisture retrieval errors

    USDA-ARS?s Scientific Manuscript database

    If not properly accounted for, autocorrelated errors in observations can lead to inaccurate results in soil moisture data analysis and reanalysis. Here, we propose a more generalized form of the triple collocation algorithm (GTC) capable of decomposing the total error variance of remotely-sensed surf...
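
    For orientation, a minimal sketch of the classical (white-error) triple collocation estimator in covariance notation follows; the generalized (GTC) form proposed in this record additionally separates autocorrelated error components, which the classical estimator below cannot do.

        import numpy as np

        def triple_collocation(x, y, z):
            """Error variance estimates for three collocated series measuring
            the same signal with mutually independent errors."""
            C = np.cov(np.vstack([x, y, z]))
            var_ex = C[0, 0] - C[0, 1] * C[0, 2] / C[1, 2]
            var_ey = C[1, 1] - C[0, 1] * C[1, 2] / C[0, 2]
            var_ez = C[2, 2] - C[0, 2] * C[1, 2] / C[0, 1]
            return var_ex, var_ey, var_ez

        # Synthetic check: common truth plus independent white noise
        rng = np.random.default_rng(0)
        t = rng.standard_normal(10000)
        print(triple_collocation(t + 0.1 * rng.standard_normal(10000),
                                 t + 0.2 * rng.standard_normal(10000),
                                 t + 0.3 * rng.standard_normal(10000)))
        # expected roughly (0.01, 0.04, 0.09)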

  16. Collocational Competence of Arabic Speaking Learners of English: A Study in Lexical Semantics.

    ERIC Educational Resources Information Center

    Zughoul, Muhammad Raji; Abdul-Fattah, Hussein S.

    This study examined learners' productive competence in collocations and idioms by means of their performance on two interdependent tasks. Participants were two groups of English as a Foreign Language undergraduate and graduate students from the English department at Jordan's Yarmouk University. The two tasks included the following: a multiple…

  17. Processing and Learning of Enhanced English Collocations: An Eye Movement Study

    ERIC Educational Resources Information Center

    Choi, Sungmook

    2017-01-01

    Research to date suggests that textual enhancement may positively affect the learning of multiword combinations known as collocations, but may impair recall of unenhanced text. However, the attentional mechanisms underlying such effects remain unclear. In this study, 38 undergraduate students were divided into two groups: one read a text…

  18. Mars Mission Optimization Based on Collocation of Resources

    NASA Technical Reports Server (NTRS)

    Chamitoff, G. E.; James, G. H.; Barker, D. C.; Dershowitz, A. L.

    2003-01-01

    This paper presents a powerful approach for analyzing Martian data and for optimizing mission site selection based on resource collocation. This approach is implemented in a program called PROMT (Planetary Resource Optimization and Mapping Tool), which provides a wide range of analysis and display functions that can be applied to raw data or imagery. Thresholds, contours, custom algorithms, and graphical editing are some of the various methods that can be used to process data. Output maps can be created to identify surface regions on Mars that meet any specific criteria. The use of this tool for analyzing data, generating maps, and collocating features is demonstrated using data from the Mars Global Surveyor and the Odyssey spacecraft. The overall mission design objective is to maximize a combination of scientific return and self-sufficiency based on utilization of local materials. Landing site optimization involves maximizing accessibility to collocated science and resource features within a given mission radius. Mission types are categorized according to duration, energy resources, and in-situ resource utilization. Optimization results are shown for a number of mission scenarios.
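
    The thresholding step described above amounts to Boolean combinations of raster criteria. The toy sketch below, with invented layers and thresholds rather than actual PROMT code or Mars data, illustrates flagging grid cells where several resource criteria collocate.

        import numpy as np

        rng = np.random.default_rng(1)
        elevation = rng.uniform(-8, 21, size=(180, 360))   # hypothetical elevation grid, km
        ice_signal = rng.uniform(0, 1, size=(180, 360))    # hypothetical resource index

        candidate = (elevation < 0.0) & (ice_signal > 0.8) # cells meeting both criteria
        print(f"{candidate.mean():.1%} of cells satisfy all criteria")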

  19. Research on trading patterns of large users' direct power purchase considering consumption of clean energy

    NASA Astrophysics Data System (ADS)

    Guojun, He; Lin, Guo; Zhicheng, Yu; Xiaojun, Zhu; Lei, Wang; Zhiqiang, Zhao

    2017-03-01

    In order to reduce the stochastic volatility of supply and demand and to maintain power system stability after large-scale stochastic renewable energy sources are connected to the grid, the development and consumption of clean energy should be promoted by market means. The bilateral contract transaction model of large users' direct power purchase conforms to the actual situation in China. This paper analyzes the trading patterns of large users' direct power purchase, summarizes the characteristics of each type of power generation, and focuses on the centralized matching mode. By establishing a priority evaluation index system for power generation enterprises and analyzing their priority with fuzzy clustering, a method for ranking power generation enterprises within these trading patterns is put forward. This method yields suggestions for the trading mechanism of large users' direct power purchase and supports its further promotion.

  20. Probabilistic risk assessment for CO2 storage in geological formations: robust design and support for decision making under uncertainty

    NASA Astrophysics Data System (ADS)

    Oladyshkin, Sergey; Class, Holger; Helmig, Rainer; Nowak, Wolfgang

    2010-05-01

    CO2 storage in geological formations is currently being discussed intensively as a technology for mitigating CO2 emissions. However, any large-scale application requires a thorough analysis of the potential risks. Current numerical simulation models are too expensive for probabilistic risk analysis and for stochastic approaches based on brute-force repeated simulation. Even single deterministic simulations may require parallel high-performance computing. The multiphase flow processes involved are too non-linear for quasi-linear error propagation and other simplified stochastic tools. As an alternative approach, we propose a massive stochastic model reduction based on the probabilistic collocation method. The model response is projected onto an orthogonal basis of higher-order polynomials to approximate its dependence on uncertain parameters (porosity, permeability, etc.) and design parameters (injection rate, depth, etc.). This allows for a non-linear propagation of model uncertainty affecting the predicted risk, ensures fast computation, and provides a powerful tool for combining design variables and uncertain variables into one approach based on an integrative response surface. Thus, the design task of finding optimal injection regimes explicitly includes uncertainty, which leads to robust designs of the non-linear system that minimize failure probability and provides valuable support for risk-informed management decisions. We validate our proposed stochastic approach by Monte Carlo simulation using a common 3D benchmark problem (Class et al., Computational Geosciences 13, 2009). A reasonable compromise between computational effort and precision was reached already with second-order polynomials. In our case study, the proposed approach yields a significant computational speedup by a factor of 100 compared to Monte Carlo simulation. We demonstrate that, due to the non-linearity of the flow and transport processes during CO2 injection, including uncertainty in the analysis leads to a systematic and significant shift of predicted leakage rates towards higher values compared with deterministic simulations, affecting both risk estimates and the design of injection scenarios. This implies that neglecting uncertainty can be a strong simplification when modeling CO2 injection, with consequences that can be stronger than those of neglecting several physical phenomena (e.g., phase transition, convective mixing, capillary forces, etc.). The authors would like to thank the German Research Foundation (DFG) for financial support of the project within the Cluster of Excellence in Simulation Technology (EXC 310/1) at the University of Stuttgart. Keywords: polynomial chaos; CO2 storage; multiphase flow; porous media; risk assessment; uncertainty; integrative response surfaces
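
    A bare-bones illustration of the response-surface idea follows: fit polynomials of total degree two in the uncertain inputs by least squares at collocation samples, then run Monte Carlo on the cheap surrogate. The model and input distributions are placeholders, not the CO2-storage simulator of this record.

        import numpy as np
        from itertools import combinations_with_replacement

        def design_matrix(X):
            # All monomials of total degree <= 2 in the columns of X
            cols = [np.ones(len(X))]
            for deg in (1, 2):
                for idx in combinations_with_replacement(range(X.shape[1]), deg):
                    cols.append(np.prod(X[:, idx], axis=1))
            return np.column_stack(cols)

        def expensive_model(X):                  # placeholder for the simulator
            return np.exp(-X[:, 0]) * (1 + X[:, 1] ** 2)

        rng = np.random.default_rng(2)
        X_train = rng.normal(size=(50, 2))       # collocation samples
        coeffs, *_ = np.linalg.lstsq(design_matrix(X_train),
                                     expensive_model(X_train), rcond=None)

        X_mc = rng.normal(size=(100000, 2))      # cheap Monte Carlo on the surrogate
        ys = design_matrix(X_mc) @ coeffs
        print("surrogate mean and variance:", ys.mean(), ys.var())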

  1. The Chebyshev-Legendre method: Implementing Legendre methods on Chebyshev points

    NASA Technical Reports Server (NTRS)

    Don, Wai Sun; Gottlieb, David

    1993-01-01

    We present a new collocation method for the numerical solution of partial differential equations. This method uses the Chebyshev collocation points, but because of the way the boundary conditions are implemented, it has all the advantages of the Legendre methods. In particular, L2 estimates can be obtained easily for hyperbolic and parabolic problems.
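
    The basic machinery behind such methods is the set of Chebyshev collocation points and the associated differentiation matrix. The sketch below gives the standard construction (essentially Trefethen's cheb routine), not the paper's Chebyshev-Legendre implementation.

        import numpy as np

        def cheb(N):
            """Chebyshev-Gauss-Lobatto points and first-derivative matrix."""
            if N == 0:
                return np.zeros((1, 1)), np.array([1.0])
            x = np.cos(np.pi * np.arange(N + 1) / N)          # collocation points
            c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
            dX = x[:, None] - x[None, :]
            D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))   # off-diagonal entries
            D -= np.diag(D.sum(axis=1))                       # diagonal: negative row sums
            return D, x

        D, x = cheb(16)
        print(np.max(np.abs(D @ np.sin(x) - np.cos(x))))      # spectral accuracy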

  2. Domain identification in impedance computed tomography by spline collocation method

    NASA Technical Reports Server (NTRS)

    Kojima, Fumio

    1990-01-01

    A method for estimating an unknown domain in elliptic boundary value problems is considered. The problem is formulated as an inverse problem of integral equations of the second kind. A computational method is developed using a spline collocation scheme. The results can be applied to the inverse problem of impedance computed tomography (ICT) for image reconstruction.

  3. Factors Impacting Recognition of False Collo­cations by Speakers of English as L1 and L2

    ERIC Educational Resources Information Center

    Makinina, Olga

    2017-01-01

    Currently there is a general uncertainty about what makes collocations (i.e., fixed word combinations with specific, not easily interpreted relations between their components) hard for ESL learners to master, and about how to improve collocation recognition and learning process. This study explored and designed a comparative classification of…

  4. Frequent Collocates and Major Senses of Two Prepositions in ESL and ENL Corpora

    ERIC Educational Resources Information Center

    Nkemleke, Daniel

    2009-01-01

    This contribution assesses in quantitative terms frequent collocates and major senses of "between" and "through" in the corpus of Cameroonian English (CCE), the corpus of East-African (Kenya and Tanzania) English which is part of the International Corpus of English (ICE) project (ICE-EA), and the London Oslo/Bergen (LOB) corpus…

  5. Geostationary Collocation: Case Studies for Optimal Maneuvers

    DTIC Science & Technology

    2016-03-01

    Naval Postgraduate School, Monterey, California. Master's thesis; approved for public release, distribution is unlimited. The geostationary belt is considered a natural resource, and as time goes by, the physical spaces for geostationary satellites will run out.

  6. The Effect of Critical Reading Strategies on EFL Learners' Recall and Retention of Collocations

    ERIC Educational Resources Information Center

    NematTabrizi, Amir Reza; Saber, Mehrnoush Akhavan

    2016-01-01

    The study was an attempt to measure the effect of critical reading strategies, namely; re-reading, questioning and annotating on recall and retention of collocations by intermediate Iranian EFL learners. To this end, Nelson proficiency test was administered to ninety (n = 90) Iranian EFL learners studying at Zaban Sara language institute in…

  7. Collocational Differences between L1 and L2: Implications for EFL Learners and Teachers

    ERIC Educational Resources Information Center

    Sadeghi, Karim

    2009-01-01

    Collocations are one of the areas that produce problems for learners of English as a foreign language. Iranian learners of English are by no means an exception. Teaching experience at schools, private language centers, and universities in Iran suggests that a significant part of EFL learners' problems with producing the language, especially at…

  8. A nonclassical Radau collocation method for solving the Lane-Emden equations of the polytropic index 4.75 ≤ α < 5

    NASA Astrophysics Data System (ADS)

    Tirani, M. D.; Maleki, M.; Kajani, M. T.

    2014-11-01

    A numerical method for solving the Lane-Emden equations of the polytropic index α when 4.75 ≤ α ≤ 5 is introduced. The method is based upon nonclassical Gauss-Radau collocation points and Freud type weights. Nonclassical orthogonal polynomials, nonclassical Radau points and weighted interpolation are introduced and are utilized in the interval [0,1]. A smooth, strictly monotonic transformation is used to map the infinite domain x ∈ [0,∞) onto a half-open interval t ∈ [0,1). The resulting problem on the finite interval is then transcribed to a system of nonlinear algebraic equations using collocation. The method is easy to implement and yields very accurate results.
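
    The domain mapping mentioned above can be illustrated with one simple smooth, strictly monotonic map of the half-line onto [0, 1); the paper's actual transformation may differ.

        import numpy as np

        x = np.array([0.0, 1.0, 10.0, 1e6])
        t = x / (1.0 + x)               # forward map, [0, inf) -> [0, 1)
        print(t)
        print(t / (1.0 - t))            # inverse map recovers x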

  9. Stochasticity and organization of tropical convection: Role of stratiform heating in the simulation of MJO in an aquaplanet coarse resolution GCM using a stochastic multicloud parameterization

    NASA Astrophysics Data System (ADS)

    Khouider, B.; Majda, A.; Deng, Q.; Ravindran, A. M.

    2015-12-01

    Global climate models (GCMs) are large computer codes based on the discretization of the equations of atmospheric and oceanic motions coupled to various processes of transfer of heat, moisture and other constituents between land, atmosphere, and oceans. Because of computing power limitations, typical GCM grid resolution is on the order of 100 km, and the effects on the climate system of many physical processes occurring on smaller scales are represented through various closure recipes known as parameterizations. The parameterization of convective motions and of many processes associated with cumulus clouds, such as the exchange of latent heat and cloud radiative forcing, is believed to be behind much of the uncertainty in GCMs. Based on a lattice particle interacting system, the stochastic multicloud model (SMCM) provides a novel and efficient representation of the unresolved variability in GCMs due to organized tropical convection and the cloud cover. It is widely recognized that stratiform heating contributes significantly to tropical rainfall and to the dynamics of tropical convective systems by inducing a front-to-rear tilt in the heating profile. Stratiform anvils forming in the wake of deep convection play a central role in the dynamics of tropical mesoscale convective systems. Here, aquaplanet simulations with a warm-pool-like surface forcing, based on a coarse-resolution GCM with a ~170 km grid mesh coupled with the SMCM, are used to demonstrate the importance of stratiform heating for the organization of convection on planetary and intraseasonal scales. When some key model parameters are set to produce higher stratiform heating fractions, the model produces low-frequency and planetary-scale Madden-Julian oscillation (MJO)-like wave disturbances, while lower to moderate stratiform heating fractions yield mainly synoptic-scale convectively coupled Kelvin-like waves. Rooted in the stratiform instability, it is conjectured here that the strength and extent of stratiform downdrafts are key contributors to the scale selection of convective organization, perhaps with mechanisms that are in essence similar to those of mesoscale convective systems.

  10. Impact of parametric uncertainty on estimation of the energy deposition into an irradiated brain tumor

    NASA Astrophysics Data System (ADS)

    Taverniers, Søren; Tartakovsky, Daniel M.

    2017-11-01

    Predictions of the total energy deposited into a brain tumor through X-ray irradiation are notoriously error-prone. We investigate how this predictive uncertainty is affected by uncertainty in both the location of the region occupied by a dose-enhancing iodinated contrast agent and the agent's concentration. This is done within the probabilistic framework in which these uncertain parameters are modeled as random variables. We employ the stochastic collocation (SC) method to estimate statistical moments of the deposited energy in terms of statistical moments of the random inputs, and the global sensitivity analysis (GSA) to quantify the relative importance of uncertainty in these parameters on the overall predictive uncertainty. A nonlinear radiation-diffusion equation dramatically magnifies the coefficient of variation of the uncertain parameters, yielding a large coefficient of variation for the predicted energy deposition. This demonstrates that accurate prediction of the energy deposition requires a proper treatment of even small parametric uncertainty. Our analysis also reveals that SC outperforms standard Monte Carlo, but its relative efficiency decreases as the number of uncertain parameters increases from one to three. A robust GSA ameliorates this problem by reducing this number.

  11. CEOS Ocean Variables Enabling Research and Applications for Geo (COVERAGE)

    NASA Astrophysics Data System (ADS)

    Tsontos, V. M.; Vazquez, J.; Zlotnicki, V.

    2017-12-01

    The CEOS Ocean Variables Enabling Research and Applications for GEO (COVERAGE) initiative seeks to facilitate joint utilization of different satellite data streams on ocean physics, better integrated with biological and in situ observations, including near real-time data streams in support of oceanographic and decision support applications for societal benefit. COVERAGE aligns with programmatic objectives of CEOS (the Committee on Earth Observation Satellites) and the missions of GEO-MBON (Marine Biodiversity Observation Network) and GEO-Blue Planet, which are to advance and exploit synergies among the many observational programs devoted to ocean and coastal waters. COVERAGE is conceived of as a 3-year pilot project involving international collaboration. It focuses on implementing technologies, including cloud-based solutions, to provide a data-rich, web-based platform for integrated ocean data delivery and access: multi-parameter observations, easily discoverable and usable, organized by discipline, available in near real-time, collocated to a common grid, and including climatologies. These will be complemented by a set of value-added data services available via the COVERAGE portal, including an advanced web-based visualization interface, subsetting/extraction, data collocation/matchup, and other relevant on-demand processing capabilities. COVERAGE development will be organized around priority use cases and applications identified by GEO and agency partners. The initial phase will be to develop collocated 25 km products from the four Ocean Virtual Constellations (VCs): Sea Surface Temperature, Sea Level, Ocean Color, and Sea Surface Winds. This aims to stimulate work among the ocean VCs while developing products and system functionality based on community recommendations. Products such as anomalies from a time mean would build on the theme of applications with relevance to the CEOS/GEO mission and vision. Here we provide an overview of the COVERAGE initiative, with an emphasis on its international collaborative aspects, with the intent of soliciting community feedback as we develop and implement the platform.

  12. Stretched Verb Collocations with "Give": Their Use and Translation into Spanish Using the BNC and CREA Corpora

    ERIC Educational Resources Information Center

    Molina-Plaza, Silvia; de Gregorio-Godeo, Eduardo

    2010-01-01

    Within the context of on-going research, this paper explores the pedagogical implications of contrastive analyses of multiword units in English and Spanish based on electronic corpora as a CALL resource. The main tenets of collocations from a contrastive perspective--and the points of contact and departure between both languages--are discussed…

  13. Action Research: Applying a Bilingual Parallel Corpus Collocational Concordancer to Taiwanese Medical School EFL Academic Writing

    ERIC Educational Resources Information Center

    Reynolds, Barry Lee

    2016-01-01

    Lack of knowledge of the conventional usage of collocations in one's respective field of expertise causes Taiwanese students to produce academic writing that is markedly different from more competent writing. This is because Taiwanese students are first and foremost English as a Foreign Language (EFL) readers and may have difficulties picking up on…

  14. Utilizing Lexical Data from a Web-Derived Corpus to Expand Productive Collocation Knowledge

    ERIC Educational Resources Information Center

    Wu, Shaoqun; Witten, Ian H.; Franken, Margaret

    2010-01-01

    Collocations are of great importance for second language learners, and a learner's knowledge of them plays a key role in producing language fluently (Nation, 2001: 323). In this article we describe and evaluate an innovative system that uses a Web-derived corpus and digital library software to produce a vast concordance and present it in a way…

  15. The Effect of Corpus-Based Activities on Verb-Noun Collocations in EFL Classes

    ERIC Educational Resources Information Center

    Ucar, Serpil; Yükselir, Ceyhun

    2015-01-01

    This current study sought to reveal the impacts of corpus-based activities on verb-noun collocation learning in EFL classes. This study was carried out on two groups--experimental and control groups- each of which consists of 15 students. The students were preparatory class students at School of Foreign Languages, Osmaniye Korkut Ata University.…

  16. A Study of Learners' Usage of a Mobile Learning Application for Learning Idioms and Collocations

    ERIC Educational Resources Information Center

    Amer, Mahmoud

    2014-01-01

    This study explored how four groups of language learners used a mobile software application developed by the researcher for learning idiomatic expressions and collocations. A total of 45 participants in the study used the application for a period of one week. Data for this study was collected from the application, a questionnaire, and follow-up…

  17. Retention and Use of Lexical Collocations (Verb + Noun and Adjective + Noun) by Applying Lexical Approach in a Reading Course

    ERIC Educational Resources Information Center

    Ördem, Eser; Paker, Turan

    2016-01-01

    The purpose of this study was to investigate whether teaching vocabulary via collocations would contribute to retention and use of foreign language, English. A quasi-experimental design was formed to see whether there would be a significant difference between the treatment and control groups. Three instruments developed were conducted to 60…

  18. A Comparative Study on the Effects of Negative Evidence and Enriched Input on Learning of Verb-Noun Collocations

    ERIC Educational Resources Information Center

    Okyar, Hatice; Yangin Eksi, Gonca

    2017-01-01

    This study compared the effectiveness of negative evidence and enriched input on learning the verb-noun collocations. There were 52 English as Foreign Language (EFL) learners in this research study and they were randomly assigned to the negative evidence or enriched input groups. While the negative evidence group (n = 27) was provided with…

  19. The Effects of Utilizing Corpus Resources to Correct Collocation Errors in L2 Writing--Students' Performance, Corpus Use and Perceptions

    ERIC Educational Resources Information Center

    Wu, Yi-ju

    2016-01-01

    Data-Driven Learning (DDL), in which learners "confront [themselves] directly with the corpus data" (Johns, 2002, p. 108), has been shown to be effective in collocation learning in L2 writing. Nevertheless, there have been only a few research studies of this type examining the relationship between English proficiency and corpus consultation.…

  20. Direct Methods for Predicting Movement Biomechanics Based Upon Optimal Control Theory with Implementation in OpenSim.

    PubMed

    Porsa, Sina; Lin, Yi-Chung; Pandy, Marcus G

    2016-08-01

    The aim of this study was to compare the computational performances of two direct methods for solving large-scale, nonlinear, optimal control problems in human movement. Direct shooting and direct collocation were implemented on an 8-segment, 48-muscle model of the body (24 muscles on each side) to compute the optimal control solution for maximum-height jumping. Both algorithms were executed on a freely-available musculoskeletal modeling platform called OpenSim. Direct collocation converged to essentially the same optimal solution up to 249 times faster than direct shooting when the same initial guess was assumed (3.4 h of CPU time for direct collocation vs. 35.3 days for direct shooting). The model predictions were in good agreement with the time histories of joint angles, ground reaction forces and muscle activation patterns measured for subjects jumping to their maximum achievable heights. Both methods converged to essentially the same solution when started from the same initial guess, but computation time was sensitive to the initial guess assumed. Direct collocation demonstrates exceptional computational performance and is well suited to performing predictive simulations of movement using large-scale musculoskeletal models.
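
    To make the transcription idea concrete, the toy sketch below applies trapezoidal direct collocation to a minimum-effort double-integrator transfer: states and controls at all nodes become NLP variables and the dynamics enter as equality (defect) constraints. It illustrates the structure being benchmarked, not the OpenSim musculoskeletal setup.

        import numpy as np
        from scipy.optimize import minimize

        N, T = 21, 1.0                     # nodes, horizon
        h = T / (N - 1)

        def unpack(z):
            x = z[:2 * N].reshape(N, 2)    # states: position, velocity
            u = z[2 * N:]                  # control at each node
            return x, u

        def objective(z):                  # trapezoidal integral of u^2
            _, u = unpack(z)
            return h * np.sum(0.5 * (u[:-1] ** 2 + u[1:] ** 2))

        def defects(z):
            x, u = unpack(z)
            f = np.column_stack([x[:, 1], u])                 # dynamics f(x, u)
            d = x[1:] - x[:-1] - 0.5 * h * (f[1:] + f[:-1])   # collocation defects
            bc = np.concatenate([x[0], x[-1] - [1.0, 0.0]])   # boundary conditions
            return np.concatenate([d.ravel(), bc])

        sol = minimize(objective, np.zeros(3 * N),
                       constraints={"type": "eq", "fun": defects})
        print(sol.success, objective(sol.x))   # analytic optimum is 12 here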

  1. VEMAP Phase 2 bioclimatic database. I. Gridded historical (20th century) climate for modeling ecosystem dynamics across the conterminous USA

    USGS Publications Warehouse

    Kittel, T.G.F.; Rosenbloom, N.A.; Royle, J. Andrew; Daly, Christopher; Gibson, W.P.; Fisher, H.H.; Thornton, P.; Yates, D.N.; Aulenbach, S.; Kaufman, C.; McKeown, R.; Bachelet, D.; Schimel, D.S.; Neilson, R.; Lenihan, J.; Drapek, R.; Ojima, D.S.; Parton, W.J.; Melillo, J.M.; Kicklighter, D.W.; Tian, H.; McGuire, A.D.; Sykes, M.T.; Smith, B.; Cowling, S.; Hickler, T.; Prentice, I.C.; Running, S.; Hibbard, K.A.; Post, W.M.; King, A.W.; Smith, T.; Rizzo, B.; Woodward, F.I.

    2004-01-01

    Analysis and simulation of biospheric responses to historical forcing require surface climate data that capture those aspects of climate that control ecological processes, including key spatial gradients and modes of temporal variability. We developed a multivariate, gridded historical climate dataset for the conterminous USA as a common input database for the Vegetation/Ecosystem Modeling and Analysis Project (VEMAP), a biogeochemical and dynamic vegetation model intercomparison. The dataset covers the period 1895-1993 on a 0.5° latitude/longitude grid. Climate is represented at both monthly and daily timesteps. Variables are: precipitation, minimum and maximum temperature, total incident solar radiation, daylight-period irradiance, vapor pressure, and daylight-period relative humidity. The dataset was derived from US Historical Climate Network (HCN), cooperative network, and snowpack telemetry (SNOTEL) monthly precipitation and mean minimum and maximum temperature station data. We employed techniques that rely on geostatistical and physical relationships to create the temporally and spatially complete dataset. We developed a local kriging prediction model to infill discontinuous and limited-length station records based on the spatial autocorrelation structure of climate anomalies. A spatial interpolation model (PRISM) that accounts for physiographic controls was used to grid the infilled monthly station data. We implemented a stochastic weather generator (modified WGEN) to disaggregate the gridded monthly series to dailies. Radiation and humidity variables were estimated from the dailies using a physically-based empirical surface climate model (MTCLIM3). Derived datasets include a 100 yr model spin-up climate and a historical Palmer Drought Severity Index (PDSI) dataset. The VEMAP dataset exhibits statistically significant trends in temperature, precipitation, solar radiation, vapor pressure, and PDSI for US National Assessment regions. The historical climate and companion datasets are available online at data archive centers. © Inter-Research 2004.

  2. Radiation energy budget studies using collocated AVHRR and ERBE observations

    NASA Technical Reports Server (NTRS)

    Ackerman, Steven A.; Inoue, Toshiro

    1994-01-01

    Changes in the energy balance at the top of the atmosphere are specified as a function of atmospheric and surface properties using observations from the Advanced Very High Resolution Radiometer (AVHRR) and the Earth Radiation Budget Experiment (ERBE) scanner. By collocating the observations from the two instruments, flown on NOAA-9, the authors take advantage of the remote-sensing capabilities of each instrument. The AVHRR spectral channels were selected based on regions that are strongly transparent to clear sky conditions and are therefore useful for characterizing both surface and cloud-top conditions. The ERBE instruments make broadband observations that are important for climate studies. The approach of collocating these observations in time and space is used to study the radiative energy budget of three geographic regions: oceanic, savanna, and desert.

  3. Inversion of the strain-life and strain-stress relationships for use in metal fatigue analysis

    NASA Technical Reports Server (NTRS)

    Manson, S. S.

    1979-01-01

    The paper presents closed-form solutions (collocation method and spline-function method) for the constants of the cyclic fatigue life equation so that they can be easily incorporated into cumulative damage analysis. The collocation method involves conformity with the experimental curve at specific life values. The spline-function method is such that the basic life relation is expressed as a two-part function, one applicable at strains above the transition strain (strain at intersection of elastic and plastic lines), the other below. An illustrative example is treated by both methods. It is shown that while the collocation representation has the advantage of simplicity of form, the spline-function representation can be made more accurate over a wider life range, and is simpler to use.
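
    As a hedged illustration of the collocation variant only (not the paper's closed-form solution), the sketch below pins the two-term strain-life relation ea(N) = (σf'/E)(2N)^b + εf'(2N)^c to strain amplitudes at four chosen life values and solves for the four constants; the data values are invented but internally consistent.

        import numpy as np
        from scipy.optimize import fsolve

        E = 200e9                                   # assumed elastic modulus, Pa

        def strain_life(p, N):
            a, b, ef, c = p                         # a = sigma_f'/E
            return a * (2 * N) ** b + ef * (2 * N) ** c

        N_pts = np.array([1e2, 1e3, 1e5, 1e7])      # collocation life values
        ea_pts = np.array([2.77e-2, 8.38e-3, 1.72e-3, 8.63e-4])  # invented data

        p = fsolve(lambda q: strain_life(q, N_pts) - ea_pts,
                   [5.0e-3, -0.12, 0.5, -0.5])      # initial guess
        print("sigma_f' =", p[0] * E, " b, ef', c =", p[1:])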

  4. A Christoffel function weighted least squares algorithm for collocation approximations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Narayan, Akil; Jakeman, John D.; Zhou, Tao

    Here, we propose, theoretically investigate, and numerically validate an algorithm for the Monte Carlo solution of least-squares polynomial approximation problems in a collocation framework. Our investigation is motivated by applications in the collocation approximation of parametric functions, which frequently entails construction of surrogates via orthogonal polynomials. A standard Monte Carlo approach would draw samples according to the density defining the orthogonal polynomial family. Our proposed algorithm instead samples with respect to the (weighted) pluripotential equilibrium measure of the domain, and subsequently solves a weighted least-squares problem, with weights given by evaluations of the Christoffel function. We present theoretical analysis to motivate the algorithm, and numerical results that show our method is superior to standard Monte Carlo methods in many situations of interest.
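
    A one-dimensional sketch of the idea, assuming a Legendre basis on [-1, 1] (whose equilibrium measure is the arcsine distribution), follows; the weights are reciprocals of the Christoffel-function sum K(x) = Σ_k φ_k(x)² over the orthonormal basis.

        import numpy as np
        from numpy.polynomial import legendre as L

        deg, M = 10, 200
        rng = np.random.default_rng(3)
        x = np.cos(np.pi * rng.random(M))          # samples from the arcsine measure

        # Orthonormal Legendre basis evaluated at the samples
        V = np.column_stack([np.sqrt(k + 0.5) * L.legval(x, np.eye(deg + 1)[k])
                             for k in range(deg + 1)])
        w = (deg + 1) / np.sum(V ** 2, axis=1)     # Christoffel weights, 1/K(x)

        f = np.exp(x)                              # target function
        c, *_ = np.linalg.lstsq(V * np.sqrt(w)[:, None],
                                np.sqrt(w) * f, rcond=None)
        print(np.abs(V @ c - f).max())             # approximation error at samples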

  5. A Christoffel function weighted least squares algorithm for collocation approximations

    DOE PAGES

    Narayan, Akil; Jakeman, John D.; Zhou, Tao

    2016-11-28

    Here, we propose, theoretically investigate, and numerically validate an algorithm for the Monte Carlo solution of least-squares polynomial approximation problems in a collocation framework. Our investigation is motivated by applications in the collocation approximation of parametric functions, which frequently entails construction of surrogates via orthogonal polynomials. A standard Monte Carlo approach would draw samples according to the density defining the orthogonal polynomial family. Our proposed algorithm instead samples with respect to the (weighted) pluripotential equilibrium measure of the domain, and subsequently solves a weighted least-squares problem, with weights given by evaluations of the Christoffel function. We present theoretical analysis to motivate the algorithm, and numerical results that show our method is superior to standard Monte Carlo methods in many situations of interest.

  6. A Two-Timescale Discretization Scheme for Collocation

    NASA Technical Reports Server (NTRS)

    Desai, Prasun; Conway, Bruce A.

    2004-01-01

    The development of a two-timescale discretization scheme for collocation is presented. This scheme allows a larger discretization to be utilized for smoothly varying state variables and a second, finer discretization to be utilized for state variables having higher-frequency dynamics. As such, the discretization scheme can be tailored to the dynamics of the particular state variables. In so doing, the size of the overall Nonlinear Programming (NLP) problem can be reduced significantly. Two two-timescale discretization architecture schemes are described. Comparison of results between the two-timescale method and conventional collocation shows very good agreement. Differences of less than 0.5 percent are observed. Consequently, a significant reduction (by two-thirds) in the number of NLP parameters and iterations required for convergence can be achieved without sacrificing solution accuracy.

  7. Monte Carlo PDF method for turbulent reacting flow in a jet-stirred reactor

    NASA Astrophysics Data System (ADS)

    Roekaerts, D.

    1992-01-01

    A stochastic algorithm for the solution of the modeled scalar probability density function (PDF) transport equation for single-phase turbulent reacting flow is described. Cylindrical symmetry is assumed. The PDF is represented by ensembles of N representative values of the thermochemical variables in each cell of a nonuniform finite-difference grid and operations on these elements representing convection, diffusion, mixing and reaction are derived. A simplified model and solution algorithm which neglects the influence of turbulent fluctuations on mean reaction rates is also described. Both algorithms are applied to a selectivity problem in a real reactor.

  8. Reactive Power Pricing Model Considering the Randomness of Wind Power Output

    NASA Astrophysics Data System (ADS)

    Dai, Zhong; Wu, Zhou

    2018-01-01

    With the increase of wind power capacity integrated into the grid, the influence of the randomness of wind power output on the reactive power distribution of the grid has become more pronounced. Meanwhile, power market reform puts forward higher requirements for the reasonable pricing of reactive power services. Accordingly, this article combines an optimal power flow model that considers wind power randomness with an integrated cost allocation method to price reactive power. Weighing the advantages and disadvantages of existing cost allocation methods and marginal cost pricing, an integrated cost allocation method based on optimal power flow tracing is proposed. The model realizes an optimal reactive power flow distribution with minimal integrated cost under wind power integration, under the premise of guaranteeing balanced reactive power pricing. Finally, through multi-scenario calculation examples and stochastic simulation of wind power outputs, the article compares the results of the proposed pricing model with marginal cost pricing, demonstrating that the model is accurate and effective.

  9. Oceanwide gravity anomalies from Geos-3, Seasat and Geosat altimeter data

    NASA Technical Reports Server (NTRS)

    Rapp, Richard H.; Basic, Tomislav

    1992-01-01

    Three kinds of satellite altimeter data have been combined, along with 5 x 5 arcmin bathymetric data, to calculate a 0.125 deg oceanwide gridded set of 2.3 × 10^6 free-air gravity anomalies. The procedure used was least squares collocation, which yields the predicted anomaly and its standard deviation. The value of including the bathymetric data was shown in a test around the Dowd Seamount, where the root mean square (rms) difference from ship gravity measurements decreased from +/- 40 mgal to +/- 20 mgal when the bathymetry was included. Comparisons between the predicted anomalies and ship gravity data are described in three cases. In the Banda Sea the rms differences were +/- 20 mgal for two lines. In the South Atlantic rms differences over lines of 2000 km in length were +/- 7 mgal. For cruise data in the Antarctica region the discrepancies were +/- 12 mgal. Comparisons of anomalies derived from the Geosat geodetic mission data by Marks and McAdoo (1992) with ship data gave differences of +/- 6 mgal, showing the value of the much denser Geosat geodetic mission altimeter data.
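
    The least squares collocation prediction used here has the familiar form s = C_pl (C_ll + n²I)^{-1} l, with the posterior variance giving the quoted standard deviation. A toy 1-D sketch with an invented covariance model follows.

        import numpy as np

        def C(d, C0=100.0, L=0.5):              # hypothetical covariance model, mGal^2
            return C0 * np.exp(-(d / L) ** 2)

        xs = np.array([0.0, 0.4, 1.0])          # observation locations (toy 1-D)
        l = np.array([12.0, 8.0, -3.0])         # observed anomalies, mGal
        n2 = 4.0                                # observation noise variance
        xp = 0.6                                # prediction point

        Cll = C(np.abs(xs[:, None] - xs[None, :])) + n2 * np.eye(len(xs))
        Cpl = C(np.abs(xp - xs))
        pred = Cpl @ np.linalg.solve(Cll, l)                    # predicted anomaly
        sd = np.sqrt(C(0.0) - Cpl @ np.linalg.solve(Cll, Cpl))  # its standard deviation
        print(pred, sd)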

  10. How does increasing horizontal resolution in a global climate model improve the simulation of aerosol-cloud interactions?

    DOE PAGES

    Ma, Po-Lun; Rasch, Philip J.; Wang, Minghuai; ...

    2015-06-23

    We report that the Community Atmosphere Model Version 5 was run at horizontal grid spacings of 2, 1, 0.5, and 0.25°, with the meteorology nudged toward the Year Of Tropical Convection analysis, and cloud simulators and the collocated A-Train satellite observations were used to explore the resolution dependence of aerosol-cloud interactions. The higher-resolution model produces results that agree better with observations, showing an increase of susceptibility of cloud droplet size, indicating a stronger first aerosol indirect forcing (AIF), and a decrease of susceptibility of precipitation probability, suggesting a weaker second AIF. The resolution sensitivities of AIF are attributed to those of the droplet nucleation and precipitation parameterizations. Finally, the annual average AIF in the Northern Hemisphere midlatitudes (where most anthropogenic emissions occur) in the 0.25° model is reduced by about 1 W m^-2 (-30%) compared to the 2° model, leading to a 0.26 W m^-2 reduction (-15%) in the global annual average AIF.

  11. SPAGETTA: a Multi-Purpose Gridded Stochastic Weather Generator

    NASA Astrophysics Data System (ADS)

    Dubrovsky, M.; Huth, R.; Rotach, M. W.; Dabhi, H.

    2017-12-01

    SPAGETTA is a new multisite/gridded multivariate parametric stochastic weather generator (WG). Site-specific precipitation occurrence and amount are modelled by a Markov chain and a Gamma distribution, the non-precipitation variables are modelled by an autoregressive (AR) model conditioned on precipitation occurrence, and the spatial coherence of all variables is modelled following Wilks' (2009) approach. SPAGETTA may be run in two modes. Mode 1: it is run as a classical WG, which is calibrated using weather series from multiple sites, and only then may it produce arbitrarily long synthetic series mimicking the spatial and temporal structure of the calibration data. To generate weather series representing the future climate, the WG parameters are modified according to a climate change scenario, typically derived from GCM or RCM simulations. Mode 2: the user provides only basic information (not necessarily realistic) on the temporal and spatial auto-correlation structure of the weather variables and their mean annual cycle; the generator itself derives the parameters of the underlying AR model, which produces the multi-site weather series. Optionally, the user may add a spatially varying trend, which is superimposed on the synthetic series. The contribution consists of the following parts: (a) the model of the WG; (b) validation of the WG in terms of spatial temperature and precipitation characteristics, including characteristics of spatial hot/cold/dry/wet spells; (c) results of a climate change impact experiment, in which the WG parameters representing the spatial and temporal variability are modified using climate change scenarios and the effect on the above spatial validation indices is analysed. In this experiment, the WG is calibrated using the E-OBS gridded daily weather data for several European regions, and the climate change scenarios are derived from selected RCM simulations (CORDEX database). (d) The second mode of operation will be demonstrated by results obtained while developing a methodology for assessing the collective significance of trends in multi-site weather series. The performance of the proposed test statistics is assessed based on a large number of realisations of synthetic series produced by the WG assuming a given statistical structure and trend of the weather series.

  12. Feasibility of Stochastic Voltage/VAr Optimization Considering Renewable Energy Resources for Smart Grid

    NASA Astrophysics Data System (ADS)

    Momoh, James A.; Salkuti, Surender Reddy

    2016-06-01

    This paper proposes a stochastic optimization technique for solving the Voltage/VAr control problem including load demand and Renewable Energy Resources (RERs) variation. RERs often introduce stochastic behavior into the system. Voltage/VAr control is a prime means of handling power system complexity and reliability, and hence a fundamental requirement for all utility companies. A robust and efficient Voltage/VAr optimization technique is needed to meet peak demand and to reduce system losses. Voltages beyond their limits may damage costly substation devices as well as equipment at the consumer end. RERs in particular introduce additional disturbances, and some RERs are not even capable of meeting the VAr demand. Therefore, there is a strong need for Voltage/VAr control in an RER environment. This paper aims at the development of an optimal scheme for Voltage/VAr control involving RERs. In this paper, the Latin Hypercube Sampling (LHS) method is used to cover the full range of the variables by maximally satisfying their marginal distributions. A backward scenario reduction technique is then used to reduce the number of scenarios effectively while maximally retaining the fitting accuracy of the samples. The developed optimization scheme is tested on the IEEE 24-bus Reliability Test System (RTS) considering load demand and RERs variation.
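
    A minimal sketch of the scenario-generation step follows, using SciPy's Latin Hypercube sampler; the two uncertain dimensions and their bounds are illustrative stand-ins for load and renewable output factors.

        import numpy as np
        from scipy.stats import qmc

        sampler = qmc.LatinHypercube(d=2, seed=4)
        u = sampler.random(n=100)                      # 100 scenarios in [0, 1)^2
        lo, hi = np.array([0.6, 0.0]), np.array([1.2, 1.0])
        scenarios = qmc.scale(u, lo, hi)               # load factor, wind factor
        print(scenarios.min(axis=0), scenarios.max(axis=0))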

  13. Stochastic Short-term High-resolution Prediction of Solar Irradiance and Photovoltaic Power Output

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Melin, Alexander M.; Olama, Mohammed M.; Dong, Jin

    The increased penetration of solar photovoltaic (PV) energy sources into electric grids has increased the need for accurate modeling and prediction of solar irradiance and power production. Existing modeling and prediction techniques focus on long-term low-resolution prediction over minutes to years. This paper examines the stochastic modeling and short-term high-resolution prediction of solar irradiance and PV power output. We propose a stochastic state-space model to characterize the behaviors of solar irradiance and PV power output. This prediction model is suitable for the development of optimal power controllers for PV sources. A filter-based expectation-maximization and Kalman filtering mechanism is employed to estimate the parameters and states in the state-space model. The mechanism results in a finite-dimensional filter which only uses the first- and second-order statistics. The structure of the scheme contributes to a direct prediction of the solar irradiance and PV power output without any linearization process or simplifying assumptions on the signal's model. This enables the system to accurately predict small as well as large fluctuations of the solar signals. The mechanism is recursive, allowing the solar irradiance and PV power to be predicted online from measurements. The mechanism is tested using solar irradiance and PV power measurement data collected locally in our lab.
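
    For orientation, the sketch below runs the one-step-ahead prediction recursion of a linear Gaussian state-space model, the kind of filter such a scheme builds on; the scalar model parameters and measurements are invented, and the record's actual mechanism (with EM parameter estimation) is more elaborate.

        import numpy as np

        A, C, Q, R = 0.95, 1.0, 0.02, 0.10       # assumed model parameters
        x, P = 0.0, 1.0                          # state estimate and its variance
        rng = np.random.default_rng(5)
        irradiance = 0.6 + 0.1 * rng.standard_normal(50)   # stand-in measurements

        for y in irradiance:
            x_pred, P_pred = A * x, A * P * A + Q          # predict
            K = P_pred * C / (C * P_pred * C + R)          # Kalman gain
            x = x_pred + K * (y - C * x_pred)              # update
            P = (1 - K * C) * P_pred
        print("next-step irradiance forecast:", A * x)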

  14. Hybrid geometric-random template-placement algorithm for gravitational wave searches from compact binary coalescences

    NASA Astrophysics Data System (ADS)

    Roy, Soumen; Sengupta, Anand S.; Thakor, Nilay

    2017-05-01

    Astrophysical compact binary systems consisting of neutron stars and black holes are an important class of gravitational wave (GW) sources for advanced LIGO detectors. Accurate theoretical waveform models from the inspiral, merger, and ringdown phases of such systems are used to filter detector data under the template-based matched-filtering paradigm. An efficient grid over the parameter space at a fixed minimal match has a direct impact on the overall time taken by these searches. We present a new hybrid geometric-random template placement algorithm for signals described by parameters of two masses and one spin magnitude. Such template banks could potentially be used in GW searches from binary neutron stars and neutron star-black hole systems. The template placement is robust and is able to automatically accommodate curvature and boundary effects with no fine-tuning. We also compare these banks against vanilla stochastic template banks and show that while both are equally efficient in the fitting-factor sense, the bank sizes are ~25% larger in the stochastic method. Further, we show that the generation of the proposed hybrid banks can be sped up by nearly an order of magnitude over the stochastic bank. Generic issues related to optimal implementation are discussed in detail. These improvements are expected to directly reduce the computational cost of gravitational wave searches.

  15. ICESat Calibration and Validation Experiments at White Sands, New Mexico, 2003-2010

    NASA Astrophysics Data System (ADS)

    Schutz, B. E.; Urban, T. J.

    2010-12-01

    The Center for Space Research (CSR) at the University of Texas at Austin has operated a primary site for ICESat cal/val activities near the White Sands Space Harbor (WSSH) area of the White Sands Missile Range, NM. This site was chosen for both geophysical (flat, reflective) and logistical (domestic, secure site) reasons. Before launch in 2003, a several-hundred-meter-scale grid comprised of hundreds of numbered PVC base-plates was installed at the chosen site to permanently mark the locations of various pieces of experiment hardware. In summary, CSR has supported four primary types of experiments at the cal/val site: (1) a permanent grid of laser retro-reflectors (corner cube reflectors) placed on top of poles of various known heights and collocated with 25 of the base plates, in use for the duration of the mission, (2) a set of computer-monitored position and timing detectors utilized for cal/val during the first three years of the project, (3) several camera-equipped aircraft flyovers of the area designed to capture images of the green and infrared footprints on the surface at the precise time of ICESat overflights, (4) elevation comparisons between the ICESat data and a high-resolution (1 m) DEM derived via small-footprint airborne lidar collections in 2003 and 2007. The experiments at WSSH were targeted by the ICESat spacecraft approximately four times per campaign, making this cal/val site one of the most sampled locations in the world. This presentation will chronicle the extensive collection of ICESat and experimental data collected at WSSH from 2003 to 2010.

  16. Computational aeroelasticity using a pressure-based solver

    NASA Astrophysics Data System (ADS)

    Kamakoti, Ramji

    A computational methodology for performing fluid-structure interaction computations for three-dimensional elastic wing geometries is presented. The flow solver used is based on an unsteady Reynolds-Averaged Navier-Stokes (RANS) model. A well validated k-ε turbulence model with wall function treatment for near wall region was used to perform turbulent flow calculations. Relative merits of alternative flow solvers were investigated. The predictor-corrector-based Pressure Implicit Splitting of Operators (PISO) algorithm was found to be computationally economic for unsteady flow computations. Wing structure was modeled using Bernoulli-Euler beam theory. A fully implicit time-marching scheme (using the Newmark integration method) was used to integrate the equations of motion for structure. Bilinear interpolation and linear extrapolation techniques were used to transfer necessary information between fluid and structure solvers. Geometry deformation was accounted for by using a moving boundary module. The moving grid capability was based on a master/slave concept and transfinite interpolation techniques. Since computations were performed on a moving mesh system, the geometric conservation law must be preserved. This is achieved by appropriately evaluating the Jacobian values associated with each cell. Accurate computation of contravariant velocities for unsteady flows using the momentum interpolation method on collocated, curvilinear grids was also addressed. Flutter computations were performed for the AGARD 445.6 wing at subsonic, transonic and supersonic Mach numbers. Unsteady computations were performed at various dynamic pressures to predict the flutter boundary. Results showed favorable agreement of experiment and previous numerical results. The computational methodology exhibited capabilities to predict both qualitative and quantitative features of aeroelasticity.

  17. Decentralized Energy Management System for Networked Microgrids in Grid-connected and Islanded Modes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Zhaoyu; Chen, Bokan; Wang, Jianhui

    This paper proposes a decentralized energy management system (EMS) for the coordinated operation of networked Microgrids (MGs) in a distribution system. In the grid-connected mode, the distribution network operator (DNO) and each MG are considered as distinct entities with individual objectives to minimize their own operation costs. It is assumed that both dispatchable and renewable energy source (RES)-based distributed generators (DGs) exist in the distribution network and the networked MGs. In order to coordinate the operation of all entities, we apply a decentralized bi-level algorithm to solve the problem, with the first level conducting negotiations among all entities and the second level updating the non-converging penalties. In the islanded mode, the objective of each MG is to maintain a reliable power supply to its customers. In order to take into account the uncertainties of DG outputs and load consumption, we formulate the problems as two-stage stochastic programs. The first stage is to determine base generation setpoints based on the forecasts, and the second stage is to adjust the generation outputs based on the realized scenarios. Case studies of a distribution system with networked MGs demonstrate the effectiveness of the proposed methodology in both grid-connected and islanded modes.

  18. Domain decomposition preconditioners for the spectral collocation method

    NASA Technical Reports Server (NTRS)

    Quarteroni, Alfio; Sacchilandriani, Giovanni

    1988-01-01

    Several block iteration preconditioners are proposed and analyzed for the solution of elliptic problems by spectral collocation methods in a region partitioned into several rectangles. It is shown that convergence is achieved with a rate which does not depend on the polynomial degree of the spectral solution. The iterative methods here presented can be effectively implemented on multiprocessor systems due to their high degree of parallelism.

  19. Lexical Collocation and Topic Occurrence in Well-Written Editorials: A Study in Form.

    ERIC Educational Resources Information Center

    Addison, James C., Jr.

    To explore the concept of lexical collocation, or relationships between words, a study was conducted based on three assumptions: (1) that a text structure for a unit of discourse was analogous to that existing at the level of the sentence, (2) that such a text form could be discovered if a large enough sample of generically similar texts was…

  20. The Impact of Length of Study Abroad on Collocational Knowledge: The Case of Saudi Students in Australia

    ERIC Educational Resources Information Center

    Alqarni, Ibrahim R.

    2017-01-01

    This study investigates the impact that study in Australia has on the lexical knowledge of Saudi Arabian students. It focuses on: 1) the effects that the length of study in Australia has on the acquisition of lexical collocations, as reflected by lexical knowledge tests, and 2) whether there is a significant gender difference in the acquisition of…

  1. Data-Driven Learning and the Acquisition of Italian Collocations: From Design to Student Evaluation

    ERIC Educational Resources Information Center

    Forti, Luciana

    2017-01-01

    This paper looks at how corpus data was used to design an Italian as an L2 language learning programme and how it was evaluated by students. The study focuses on the acquisition of Italian verb-noun collocations by Chinese native students attending a ten month long Italian language course before enrolling at an Italian university. It describes how…

  2. An empirical understanding of triple collocation evaluation measure

    NASA Astrophysics Data System (ADS)

    Scipal, Klaus; Doubkova, Marcela; Hegyova, Alena; Dorigo, Wouter; Wagner, Wolfgang

    2013-04-01

    The triple collocation method is an advanced evaluation method that has been used in the soil moisture field for only about half a decade. The method requires three datasets with independent error structures that represent an identical phenomenon. The main advantages of the method are that it a) doesn't require a reference dataset that has to be considered to represent the truth, b) limits the effect of random and systematic errors of the other two datasets, and c) simultaneously assesses the error of three datasets. The objective of this presentation is to assess the triple collocation error (Tc) of the ASAR Global Mode Surface Soil Moisture (GM SSM) 1 km dataset and highlight problems of the method related to its ability to cancel the effect of errors in ancillary datasets. In particular, the goals are a) to investigate trends in Tc related to the change in spatial resolution from 5 to 25 km, b) to investigate trends in Tc related to the choice of a hydrological model, and c) to study the relationship between Tc and other absolute evaluation methods (namely RMSE and error propagation, EP). The triple collocation method is implemented using ASAR GM, AMSR-E, and a model (either AWRA-L, GLDAS-NOAH, or ERA-Interim). First, the significance of the relationship between the three soil moisture datasets was tested, which is a prerequisite for the triple collocation method. Second, the trends in Tc related to the choice of the third reference dataset and to scale were assessed. For this purpose the triple collocation is repeated replacing AWRA-L with two different globally available model reanalysis datasets operating at different spatial resolutions (ERA-Interim and GLDAS-NOAH). Finally, the retrieved results were compared to the results of the RMSE and EP evaluation measures. Our results demonstrate that the Tc method does not eliminate the random and time-variant systematic errors of the second and third datasets used in the Tc. The possible reasons include the facts a) that the TC method could not fully function with datasets acting at very different spatial resolutions, or b) that the errors were not fully independent as initially assumed.

  3. A shifted Jacobi collocation algorithm for wave type equations with non-local conservation conditions

    NASA Astrophysics Data System (ADS)

    Doha, Eid H.; Bhrawy, Ali H.; Abdelkawy, Mohammed A.

    2014-09-01

    In this paper, we propose an efficient spectral collocation algorithm to numerically solve wave-type equations subject to initial, boundary and non-local conservation conditions. The shifted Jacobi pseudospectral approximation is investigated for the discretization of the spatial variable of such equations; it possesses spectral accuracy in the spatial variable. The shifted Jacobi-Gauss-Lobatto (SJ-GL) quadrature rule is established for treating the non-local conservation conditions, and the problem with its initial and non-local boundary conditions is then reduced to a system of second-order ordinary differential equations in the temporal variable. This system is solved by a two-stage fourth-order A-stable implicit Runge-Kutta scheme. Five numerical examples with comparisons are given. The computational results demonstrate that the proposed algorithm is more accurate than the finite difference method, the method of lines and the spline collocation approach.

  4. A review on the solution of Grad-Shafranov equation in the cylindrical coordinates based on the Chebyshev collocation technique

    NASA Astrophysics Data System (ADS)

    Amerian, Z.; Salem, M. K.; Salar Elahi, A.; Ghoranneviss, M.

    2017-03-01

    Equilibrium reconstruction consists of identifying, from experimental measurements, a distribution of the plasma current density that satisfies the pressure balance constraint. Numerous methods exist to solve the Grad-Shafranov equation, which describes the equilibrium of a plasma confined by an axisymmetric magnetic field. In this paper, we propose a new numerical solution to the Grad-Shafranov equation (for an axisymmetric magnetic field expressed in cylindrical coordinates and solved with the Chebyshev collocation method) when the source term (current density function) on the right-hand side is linear. The Chebyshev collocation method is a method for computing highly accurate numerical solutions of differential equations. We describe a circular cross-section of the tokamak, present numerical results for the magnetic surfaces on the IR-T1 tokamak, and then compare the results with an analytical solution.
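
    As a minimal illustration of the collocation machinery only (not of the Grad-Shafranov solver itself), the sketch below builds the standard Chebyshev differentiation matrix on Gauss-Lobatto points, following Trefethen's classical construction, and solves a one-dimensional elliptic analogue u'' = f with homogeneous Dirichlet conditions; the source term is a placeholder:

        import numpy as np

        def cheb(N):
            """Chebyshev differentiation matrix and Gauss-Lobatto points, N >= 1."""
            x = np.cos(np.pi * np.arange(N + 1) / N)
            c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
            X = np.tile(x, (N + 1, 1)).T
            dX = X - X.T
            D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))
            D -= np.diag(D.sum(axis=1))  # negative-sum trick for the diagonal
            return D, x

        # solve u'' = f on [-1, 1] with u(-1) = u(1) = 0
        N = 32
        D, x = cheb(N)
        D2 = D @ D
        f = np.exp(x)                        # placeholder source term
        u = np.zeros(N + 1)
        u[1:-1] = np.linalg.solve(D2[1:-1, 1:-1], f[1:-1])  # Dirichlet rows removed

    The same ingredients, differentiation matrices plus boundary rows, carry over to a two-dimensional cylindrical-coordinate problem via tensor products.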

  5. Dynamic data integration and stochastic inversion of a confined aquifer

    NASA Astrophysics Data System (ADS)

    Wang, D.; Zhang, Y.; Irsa, J.; Huang, H.; Wang, L.

    2013-12-01

    Much work has been done in developing and applying inverse methods to aquifer modeling. The scope of this paper is to investigate the applicability of a new direct method for large inversion problems and to incorporate uncertainty measures in the inversion outcomes (Wang et al., 2013). The problem considered is a two-dimensional inverse model (50×50 grid) of steady-state flow for a heterogeneous ground truth model (500×500 grid) with two hydrofacies. From the ground truth model, a decreasing number of wells (12, 6, 3) was sampled for facies types, from which experimental indicator histograms and directional variograms were computed. These parameters and models were used by Sequential Indicator Simulation to generate 100 realizations of hydrofacies patterns on a 100×100 (geostatistical) grid, conditioned to the facies measurements at the wells. These realizations were smoothed with Simulated Annealing and coarsened to the 50×50 inverse grid, before they were conditioned with the direct method to the dynamic data, i.e., observed heads and groundwater fluxes at the same sampled wells. A set of realizations of estimated hydraulic conductivities (Ks), flow fields, and boundary conditions was created, which centered on the 'true' solutions from solving the ground truth model. Both hydrofacies conductivities were computed with an estimation accuracy within ±10% (12 wells), ±20% (6 wells), and ±35% (3 wells) of the true values. For boundary condition estimation, the accuracy was within ±15% (12 wells), ±30% (6 wells), and ±50% (3 wells) of the true values. The inversion system of equations was solved with LSQR (Paige and Saunders, 1982), for which a coordinate transform and a matrix scaling preprocessor were used to improve the condition number (CN) of the coefficient matrix. When the inverse grid was refined to 100×100, however, Gaussian noise perturbation was used to limit the growth of the CN before the matrix solve. To scale the inverse problem up (i.e., without smoothing and coarsening, thereby reducing the associated estimation uncertainty), a parallel LSQR solver was written and verified. For the 50×50 grid, the parallel solver sped up the serial solution time by 14× using 4 CPUs (research on parallel performance and scaling is ongoing). A sensitivity analysis was conducted to examine the relation between the observed data and the inversion outcomes, where measurement errors of increasing magnitudes (i.e., ±1, 2, 5, and 10% of the total head variation and up to ±2% of the total flux variation) were imposed on the observed data. Inversion results were stable, but the accuracy of the Ks and boundary estimates degraded with increasing errors, as expected. In particular, the quality of the observed heads is critical to hydraulic head recovery, while the quality of the observed fluxes plays a dominant role in K estimation. References: Wang, D., Y. Zhang, J. Irsa, H. Huang, and L. Wang (2013), Data integration and stochastic inversion of a confined aquifer with high performance computing, Advances in Water Resources, in preparation. Paige, C. C., and M. A. Saunders (1982), LSQR: an algorithm for sparse linear equations and sparse least squares, ACM Transactions on Mathematical Software, 8(1), 43-71.
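
    The linear-algebra stage of such an inversion can be sketched in isolation: a sparse least-squares system is column-scaled to improve its condition number and handed to LSQR (SciPy's implementation of the Paige-Saunders algorithm). The matrix and right-hand side below are random placeholders; the coordinate-transform preprocessor of the paper is not reproduced:

        import numpy as np
        from scipy.sparse import random as sparse_random, diags
        from scipy.sparse.linalg import lsqr

        rng = np.random.default_rng(0)
        A = sparse_random(500, 200, density=0.02, random_state=0, format="csr")
        b = rng.standard_normal(500)

        # scale each column to unit 2-norm (a simple conditioning preprocessor)
        col_norms = np.sqrt(A.multiply(A).sum(axis=0)).A1
        col_norms[col_norms == 0] = 1.0
        S = diags(1.0 / col_norms)

        y, istop, itn = lsqr(A @ S, b, atol=1e-10, btol=1e-10)[:3]
        x = S @ y  # undo the scaling to recover the physical parameters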

  6. An alternate protocol to achieve stochastic and deterministic resonances

    NASA Astrophysics Data System (ADS)

    Tiwari, Ishant; Dave, Darshil; Phogat, Richa; Khera, Neev; Parmananda, P.

    2017-10-01

    Periodic and aperiodic stochastic resonance (SR) and deterministic resonance (DR) are studied in this paper. To check the ubiquity of the phenomena, two unrelated systems, namely the FitzHugh-Nagumo model and a particle in a bistable potential well, are studied. Instead of the conventional scenario of varying the noise amplitude (in the case of SR) or the chaotic signal amplitude (in the case of DR), a tunable system parameter ("a" in the case of the FitzHugh-Nagumo model and the damping coefficient "j" in the bistable model) is regulated. The operating values of these parameters are defined as the "setpoint" of the system throughout the present work. Our results indicate that there exists an optimal value of the setpoint for which maximum information transfer between the input and the output signals takes place. This information transfer from the input sub-threshold signal to the output dynamics is quantified by the normalised cross-correlation coefficient (|CCC|). |CCC| as a function of the setpoint exhibits a unimodal variation which is characteristic of SR (or DR). Furthermore, |CCC| is computed for a grid of noise (or chaotic signal) amplitude and setpoint values. The heat map of |CCC| over this grid yields a resonance region in the noise-setpoint plane for which the maximum enhancement of the input sub-threshold signal is observed. This resonance region could possibly be used to explain how organisms maintain their signal detection efficacy with fluctuating amounts of noise present in their environment. Interestingly, the method of regulating the setpoint without changing the noise amplitude was not able to induce coherence resonance (CR). A possible, qualitative reasoning for this is provided.
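
    A minimal sketch of the quantifier named above, read as the zero-lag normalised cross-correlation magnitude between the sub-threshold input and the system output (whether the study also scans over time lags is an assumption left open here):

        import numpy as np

        def abs_ccc(s_in, s_out):
            """|CCC|: magnitude of the zero-lag normalised cross-correlation."""
            a = s_in - s_in.mean()
            b = s_out - s_out.mean()
            return abs(np.dot(a, b) / np.sqrt(np.dot(a, a) * np.dot(b, b)))

    Evaluating abs_ccc over simulations run on a grid of (noise amplitude, setpoint) pairs reproduces the kind of heat map described in the abstract.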

  7. Enhancing Critical Thinking Skills for Army Leaders Using Blended-Learning Methods

    DTIC Science & Technology

    2013-01-01

    delivering, and evaluating leader education and those who develop and implement distributed learning courses that incorporate group collaboration on topics...Circumstances Numerous studies comparing outcomes of collocated and virtual groups show that collocated groups perform better on interdependent tasks, such as...in class or “cold call” on students to answer questions. Third, using small (rather than large) groups for interactive activities can alleviate free

  8. More reliable forecasts with less precise computations: a fast-track route to cloud-resolved weather and climate simulators?

    PubMed

    Palmer, T N

    2014-06-28

    This paper sets out a new methodological approach to solving the equations for simulating and predicting weather and climate. In this approach, the conventionally hard boundary between the dynamical core and the sub-grid parametrizations is blurred. This approach is motivated by the relatively shallow power-law spectrum for atmospheric energy on scales of hundreds of kilometres and less. It is first argued that, because of this, the closure schemes for weather and climate simulators should be based on stochastic-dynamic systems rather than deterministic formulae. Second, as high-wavenumber elements of the dynamical core will necessarily inherit this stochasticity during time integration, it is argued that the dynamical core will be significantly over-engineered if all computations, regardless of scale, are performed completely deterministically and if all variables are represented with maximum numerical precision (in practice using double-precision floating-point numbers). As the era of exascale computing is approached, an energy- and computationally efficient approach to cloud-resolved weather and climate simulation is described where determinism and numerical precision are focused on the largest scales only.

  9. PDF modeling of turbulent flows on unstructured grids

    NASA Astrophysics Data System (ADS)

    Bakosi, Jozsef

    In probability density function (PDF) methods of turbulent flows, the joint PDF of several flow variables is computed by numerically integrating a system of stochastic differential equations for Lagrangian particles. Because the technique solves a transport equation for the PDF of the velocity and scalars, a mathematically exact treatment of advection, viscous effects and arbitrarily complex chemical reactions is possible; these processes are treated without closure assumptions. A set of algorithms is proposed to provide an efficient solution of the PDF transport equation modeling the joint PDF of turbulent velocity, frequency and concentration of a passive scalar in geometrically complex configurations. An unstructured Eulerian grid is employed to extract Eulerian statistics, to solve for quantities represented at fixed locations of the domain and to track particles. All three aspects regarding the grid make use of the finite element method. Compared to hybrid methods, the current methodology is stand-alone, therefore it is consistent both numerically and at the level of turbulence closure without the use of consistency conditions. Since both the turbulent velocity and scalar concentration fields are represented in a stochastic way, the method allows for a direct and close interaction between these fields, which is beneficial in computing accurate scalar statistics. Boundary conditions implemented along solid bodies are of the free-slip and no-slip type without the need for ghost elements. Boundary layers at no-slip boundaries are either fully resolved down to the viscous sublayer, explicitly modeling the high anisotropy and inhomogeneity of the low-Reynolds-number wall region without damping or wall-functions or specified via logarithmic wall-functions. As in moment closures and large eddy simulation, these wall-treatments provide the usual trade-off between resolution and computational cost as required by the given application. Particular attention is focused on modeling the dispersion of passive scalars in inhomogeneous turbulent flows. Two different micromixing models are investigated that incorporate the effect of small scale mixing on the transported scalar: the widely used interaction by exchange with the mean and the interaction by exchange with the conditional mean model. An adaptive algorithm to compute the velocity-conditioned scalar mean is proposed that homogenizes the statistical error over the sample space with no assumption on the shape of the underlying velocity PDF. The development also concentrates on a generally applicable micromixing timescale for complex flow domains. Several newly developed algorithms are described in detail that facilitate a stable numerical solution in arbitrarily complex flow geometries, including a stabilized mean-pressure projection scheme, the estimation of conditional and unconditional Eulerian statistics and their derivatives from stochastic particle fields employing finite element shapefunctions, particle tracking through unstructured grids, an efficient particle redistribution procedure and techniques related to efficient random number generation. The algorithm is validated and tested by computing three different turbulent flows: the fully developed turbulent channel flow, a street canyon (or cavity) flow and the turbulent wake behind a circular cylinder at a sub-critical Reynolds number. The solver has been parallelized and optimized for shared memory and multi-core architectures using the OpenMP standard. 
Relevant aspects of performance and parallelism on cache-based shared memory machines are discussed and presented in detail. The methodology shows great promise in the simulation of high-Reynolds-number incompressible inert or reactive turbulent flows in realistic configurations.
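
    At the core of such PDF methods is the time integration of stochastic differential equations for particle properties. The following Euler-Maruyama sketch for an Ornstein-Uhlenbeck-type Langevin velocity is illustrative only; the coefficients and the model itself are stand-ins, not the velocity-frequency-scalar system of the thesis:

        import numpy as np

        # du = -(u / T) dt + sqrt(C0 * eps) dW, with illustrative coefficients
        T, C0, eps = 1.0, 2.1, 0.5
        dt, nsteps, nparticles = 1e-3, 5000, 10000

        rng = np.random.default_rng(0)
        u = np.zeros(nparticles)
        for _ in range(nsteps):
            u += -(u / T) * dt + np.sqrt(C0 * eps * dt) * rng.standard_normal(nparticles)

        print(u.var())  # approaches the stationary variance C0 * eps * T / 2

    Eulerian statistics are then obtained by projecting such particle ensembles onto the grid, which is where the finite element shape functions mentioned above enter.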

  10. The Effects of Repetition and Time of Post-Test Administration on EFL Learners' Form Recall of Single Words and Collocations

    ERIC Educational Resources Information Center

    Peters, Elke

    2014-01-01

    This article examines how form recall of target lexical items by learners of English as a foreign language (EFL) is affected (1) by repetition (1, 3 or 5 number of occurrences), (2) by the type of target item (single words versus collocations), and (3) by the time of post-test administration (immediately or one week after the learning session).…

  11. Investigation of IPPD: A Case Study of the Marine Corps AAAV.

    DTIC Science & Technology

    1998-03-01

    process. (Rafii, 1995, p. 78) The United States Marine Corps is in the process of developing their next generation of Advanced Amphibious Assault...particular product or process. (Rafii, 1995, p. 78) DiTrapani and Geither's (1996) study of IPTs stressed the collocation of team members to the...School of Systems Management, Naval Postgraduate School, Monterey, CA, December 1996. Rafii, F., "How Important Is Physical Collocation to Product

  12. Integrated High-Speed Torque Control System for a Robotic Joint

    NASA Technical Reports Server (NTRS)

    Davis, Donald R. (Inventor); Radford, Nicolaus A. (Inventor); Permenter, Frank Noble (Inventor); Valvo, Michael C. (Inventor); Askew, R. Scott (Inventor)

    2013-01-01

    A control system for achieving high-speed torque for a joint of a robot includes a printed circuit board assembly (PCBA) having a collocated joint processor and high-speed communication bus. The PCBA may also include a power inverter module (PIM) and local sensor conditioning electronics (SCE) for processing sensor data from one or more motor position sensors. Torque control of a motor of the joint is provided via the PCBA as a high-speed torque loop. Each joint processor may be embedded within or collocated with the robotic joint being controlled. Collocation of the joint processor, PIM, and high-speed bus may increase noise immunity of the control system, and the localized processing of sensor data from the joint motor at the joint level may minimize bus cabling to and from each control node. The joint processor may include a field programmable gate array (FPGA).

  13. Domain decomposition methods for systems of conservation laws: Spectral collocation approximations

    NASA Technical Reports Server (NTRS)

    Quarteroni, Alfio

    1989-01-01

    Hyperbolic systems of conservation laws are considered which are discretized in space by spectral collocation methods and advanced in time by finite difference schemes. At any time level, a domain decomposition method based on an iteration-by-subdomain procedure is introduced, yielding at each step a sequence of independent subproblems (one for each subdomain) that can be solved simultaneously. The method is set for a general nonlinear problem in several space variables. The convergence analysis, however, is carried out only for a linear one-dimensional system with continuous solutions. A precise form of the error reduction factor at each iteration is derived. Although the method is applied here to the case of spectral collocation approximation only, the idea is fairly general and can be used in a different context as well. For instance, its application to space discretization by finite differences is straightforward.

  14. Global collocation methods for approximation and the solution of partial differential equations

    NASA Technical Reports Server (NTRS)

    Solomonoff, A.; Turkel, E.

    1986-01-01

    Polynomial interpolation methods are applied both to the approximation of functions and to the numerical solution of hyperbolic and elliptic partial differential equations. The derivative matrix for a general sequence of collocation points is constructed. The approximate derivative is then found by a matrix-vector multiply. The effects of several factors on the performance of these methods, including the choice of collocation points, are then explored. The resolution of the schemes for both smooth functions and functions with steep gradients or discontinuities in some derivative is also studied. The accuracy when the gradients occur both near the center of the region and in the vicinity of the boundary is investigated. The importance of the aliasing limit on the resolution of the approximation is investigated in detail. Also examined is the effect of boundary treatment on the stability and accuracy of the scheme.

  15. Unsteady two dimensional airloads acting on oscillating thin airfoils in subsonic ventilated wind tunnels

    NASA Technical Reports Server (NTRS)

    Fromme, J.; Golberg, M.

    1978-01-01

    The numerical calculation of unsteady two dimensional airloads which act upon thin airfoils in subsonic ventilated wind tunnels was studied. Neglecting certain quadrature errors, Bland's collocation method is rigorously proved to converge to the mathematically exact solution of Bland's integral equation, and a three way equivalence was established between collocation, Galerkin's method and least squares whenever the collocation points are chosen to be the nodes of the quadrature rule used for Galerkin's method. A computer program displayed convergence with respect to the number of pressure basis functions employed, and agreement with known special cases was demonstrated. Results are obtained for the combined effects of wind tunnel wall ventilation and wind tunnel depth to airfoil chord ratio, and for acoustic resonance between the airfoil and wind tunnel walls. A boundary condition is proposed for permeable walls through which mass flow rate is proportional to pressure jump.

  16. An Extension of the Split Window Technique for the Retrieval of Precipitable Water: Experimental Verification

    DTIC Science & Technology

    1988-09-23

    Collocations were performed on launch sites of the cloud contamination, aerosol problems, collocation 1200 UT radiosondes on 25 Aug 1987. Statistics were...al (1987) and Thomason, 1987). In this imagery opaque clouds to this problem appear white, low clouds and fog appear bright red against a brown

  17. A collocation-shooting method for solving fractional boundary value problems

    NASA Astrophysics Data System (ADS)

    Al-Mdallal, Qasem M.; Syam, Muhammed I.; Anwar, M. N.

    2010-12-01

    In this paper, we discuss the numerical solution of a special class of fractional boundary value problems of order 2. The method of solution is based on conjugating collocation and spline analysis combined with the shooting method. A theoretical analysis of the existence and uniqueness of the exact solution for the present class is proven. Two examples involving the Bagley-Torvik equation subject to boundary conditions are also presented; numerical results illustrate the accuracy of the present scheme.
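
    The shooting ingredient can be sketched on an integer-order stand-in, y'' = f(x, y) with y(0) = 0 and y(1) = 1: integrate the initial value problem with a guessed slope and drive the endpoint miss to zero with a root finder. The right-hand side is hypothetical; the paper combines shooting with spline collocation to handle the fractional derivative:

        import numpy as np
        from scipy.integrate import solve_ivp
        from scipy.optimize import brentq

        f = lambda x, y: y  # hypothetical linear right-hand side

        def endpoint_residual(s):
            """Integrate with initial slope s; return the miss at x = 1."""
            sol = solve_ivp(lambda x, Y: [Y[1], f(x, Y[0])], (0.0, 1.0),
                            [0.0, s], rtol=1e-10, atol=1e-12)
            return sol.y[0, -1] - 1.0

        s_star = brentq(endpoint_residual, -5.0, 5.0)
        print(s_star, 1.0 / np.sinh(1.0))  # analytic slope for this linear case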

  18. Precise control of flexible manipulators

    NASA Technical Reports Server (NTRS)

    Cannon, R. H., Jr.

    1984-01-01

    Experimental apparatus were developed for physically testing control systems for pointing flexible structures, such as limber spacecraft, for the case that control actuators cannot be collocated with sensors. Structural damping ratios are less than 0.003, each basic configuration of sensor/actuator noncollocation is available, and inertias can be halved or doubled abruptly during control maneuvers, thereby imposing, in particular, a sudden reversal in the plant's pole-zero sequence. First experimental results are presented, including stable control with both collocation and noncollocation.

  19. Design and Application of a Collocated Capacitance Sensor for Magnetic Bearing Spindle

    NASA Technical Reports Server (NTRS)

    Shin, Dongwon; Liu, Seon-Jung; Kim, Jongwon

    1996-01-01

    This paper presents a collocated capacitance sensor for magnetic bearings. The main feature of the sensor is that it is made of a specific compact printed circuit board (PCB). The signal processing unit has been also developed. The results of the experimental performance evaluation on the sensitivity, resolution and frequency response of the sensor are presented. Finally, an application example of the sensor to the active control of a magnetic bearing is described.

  20. A collocation--Galerkin finite element model of cardiac action potential propagation.

    PubMed

    Rogers, J M; McCulloch, A D

    1994-08-01

    A new computational method was developed for modeling the effects of the geometric complexity, nonuniform muscle fiber orientation, and material inhomogeneity of the ventricular wall on cardiac impulse propagation. The method was used to solve a modification to the FitzHugh-Nagumo system of equations. The geometry, local muscle fiber orientation, and material parameters of the domain were defined using linear Lagrange or cubic Hermite finite element interpolation. Spatial variations of time-dependent excitation and recovery variables were approximated using cubic Hermite finite element interpolation, and the governing finite element equations were assembled using the collocation method. To overcome the deficiencies of conventional collocation methods on irregular domains, Galerkin equations for the no-flux boundary conditions were used instead of collocation equations for the boundary degrees-of-freedom. The resulting system was evolved using an adaptive Runge-Kutta method. Converged two-dimensional simulations of normal propagation showed that this method requires less CPU time than a traditional finite difference discretization. The model also reproduced several other physiologic phenomena known to be important in arrhythmogenesis, including Wenckebach periodicity, slowed propagation and unidirectional block due to wavefront curvature, reentry around a fixed obstacle, and spiral wave reentry. In a new result, we observed wavespeed variations and block due to nonuniform muscle fiber orientation. The findings suggest that the finite element method is suitable for studying normal and pathological cardiac activation and has significant advantages over existing techniques.
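
    The reaction kinetics can be exercised in isolation: below is a space-clamped (zero-dimensional) integration of one common form of the modified FitzHugh-Nagumo system, with parameter values that are illustrative rather than taken from the paper; the paper couples these kinetics to diffusion through the collocation--Galerkin finite element discretization:

        import numpy as np
        from scipy.integrate import solve_ivp

        def fhn(t, s, a=0.13, b=0.013, c1=0.26, c2=0.1, d=1.0):
            """Space-clamped modified FitzHugh-Nagumo kinetics (illustrative)."""
            v, w = s
            dv = c1 * v * (v - a) * (1.0 - v) - c2 * v * w  # excitation variable
            dw = b * (v - d * w)                            # recovery variable
            return [dv, dw]

        # a suprathreshold initial excitation produces one action-potential-like pulse
        sol = solve_ivp(fhn, (0.0, 500.0), [0.3, 0.0], max_step=1.0)
        v, w = sol.y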

  1. A self-consistent estimate for linear viscoelastic polycrystals with internal variables inferred from the collocation method

    NASA Astrophysics Data System (ADS)

    Vu, Q. H.; Brenner, R.; Castelnau, O.; Moulinec, H.; Suquet, P.

    2012-03-01

    The correspondence principle is customarily used with the Laplace-Carson transform technique to tackle the homogenization of linear viscoelastic heterogeneous media. The main drawback of this method lies in the fact that the whole stress and strain histories have to be considered to compute the mechanical response of the material during a given macroscopic loading. Following a remark of Mandel (1966 Mécanique des Milieux Continus (Paris, France: Gauthier-Villars)), Ricaud and Masson (2009 Int. J. Solids Struct. 46 1599-1606) have shown the equivalence between the collocation method used to invert Laplace-Carson transforms and an internal variables formulation. In this paper, this new method is developed for the case of polycrystalline materials with general anisotropic properties for local and macroscopic behavior. Applications are provided for the case of constitutive relations accounting for glide of dislocations on particular slip systems. It is shown that the method yields accurate results that perfectly match the standard collocation method and reference full-field results obtained with a FFT numerical scheme. The formulation is then extended to the case of time- and strain-dependent viscous properties, leading to the incremental collocation method (ICM) that can be solved efficiently by a step-by-step procedure. Specifically, the introduction of isotropic and kinematic hardening at the slip system scale is considered.

  2. Impact of Hydrogeological Uncertainty on Estimation of Environmental Risks Posed by Hydrocarbon Transportation Networks

    NASA Astrophysics Data System (ADS)

    Ciriello, V.; Lauriola, I.; Bonvicini, S.; Cozzani, V.; Di Federico, V.; Tartakovsky, Daniel M.

    2017-11-01

    Ubiquitous hydrogeological uncertainty undermines the veracity of quantitative predictions of soil and groundwater contamination due to accidental hydrocarbon spills from onshore pipelines. Such predictions, therefore, must be accompanied by quantification of predictive uncertainty, especially when they are used for environmental risk assessment. We quantify the impact of parametric uncertainty on quantitative forecasting of temporal evolution of two key risk indices, volumes of unsaturated and saturated soil contaminated by a surface spill of light nonaqueous-phase liquids. This is accomplished by treating the relevant uncertain parameters as random variables and deploying two alternative probabilistic models to estimate their effect on predictive uncertainty. A physics-based model is solved with a stochastic collocation method and is supplemented by a global sensitivity analysis. A second model represents the quantities of interest as polynomials of random inputs and has a virtually negligible computational cost, which enables one to explore any number of risk-related contamination scenarios. For a typical oil-spill scenario, our method can be used to identify key flow and transport parameters affecting the risk indices, to elucidate texture-dependent behavior of different soils, and to evaluate, with a degree of confidence specified by the decision-maker, the extent of contamination and the corresponding remediation costs.
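
    In its simplest non-intrusive form, a stochastic collocation calculation evaluates the physics model at quadrature nodes of the input distribution and recombines the outputs with the quadrature weights. The one-dimensional sketch below uses Gauss-Hermite nodes for a single Gaussian parameter; the model and its input statistics are hypothetical stand-ins for the flow-and-transport model and its uncertain inputs:

        import numpy as np

        def model(k):
            """Hypothetical quantity of interest as a function of the random input."""
            return np.exp(-k)

        mu, sigma = 1.0, 0.3  # assumed input mean and standard deviation

        # probabilists' Gauss-Hermite rule: weight function exp(-x^2 / 2)
        nodes, weights = np.polynomial.hermite_e.hermegauss(7)
        w = weights / np.sqrt(2.0 * np.pi)  # normalize so the weights sum to 1

        samples = model(mu + sigma * nodes)  # one model run per collocation node
        mean = np.sum(w * samples)
        var = np.sum(w * samples ** 2) - mean ** 2

    Moments of the risk indices then follow from a handful of model runs, which is what makes the approach attractive for scenario exploration.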

  3. Gravity-Assist Trajectories to the Ice Giants: An Automated Method to Catalog Mass- or Time-Optimal Solutions

    NASA Technical Reports Server (NTRS)

    Hughes, Kyle M.; Knittel, Jeremy M.; Englander, Jacob A.

    2017-01-01

    This work presents an automated method of calculating mass (or time) optimal gravity-assist trajectories without a priori knowledge of the flyby-body combination. Since gravity assists are particularly crucial for reaching the outer Solar System, we use the Ice Giants, Uranus and Neptune, as example destinations for this work. Catalogs are also provided that list the most attractive trajectories found over launch dates ranging from 2024 to 2038. The tool developed to implement this method, called the Python EMTG Automated Trade Study Application (PEATSA), iteratively runs the Evolutionary Mission Trajectory Generator (EMTG), a NASA Goddard Space Flight Center in-house trajectory optimization tool. EMTG finds gravity-assist trajectories with impulsive maneuvers using a multiple-shooting structure along with stochastic methods (such as monotonic basin hopping) and may be run with or without an initial guess provided. PEATSA runs instances of EMTG in parallel over a grid of launch dates. After each set of runs completes, the best results within a neighborhood of launch dates are used to seed all other cases in that neighborhood, allowing the solutions across the range of launch dates to improve over each iteration. The results here are compared against trajectories found using a grid-search technique, and PEATSA is found to outperform the grid-search results for most launch years considered.

  4. Clock measurements to improve the geopotential determination

    NASA Astrophysics Data System (ADS)

    Lion, Guillaume; Panet, Isabelle; Delva, Pacôme; Wolf, Peter; Bize, Sébastien; Guerlin, Christine

    2017-04-01

    Comparisons between optical clocks with an accuracy and stability approaching 10^-18 in terms of relative frequency shift are opening new perspectives for the direct determination of the geopotential at a centimeter-level accuracy in geoid height. However, so far, detailed quantitative estimates of the possible improvement in geoid determination when adding such clock measurements to existing data are lacking. In this context, the present work aims at evaluating the contribution of this new kind of direct measurement to determining the geopotential at high spatial resolution (10 km). We consider the Massif Central area, marked by smooth, moderate-altitude mountains and volcanic plateaus leading to variations of the gravitational field over a range of spatial scales. In this type of region, the scarcity of gravity data is an important limitation in deriving accurate high-resolution geopotential models. We summarize our methodology to assess the contribution of clock data to the geopotential recovery, in combination with ground gravity measurements. We sample synthetic gravity and disturbing-potential data from a spherical harmonic geopotential model and a topography model, up to 10 km resolution; we also build a potential control grid. From the synthetic data, we estimate the disturbing potential by least-squares collocation. Finally, we assess the quality of the reconstructed potential by comparing it to that of the control grid. We show that adding only a few clock data points reduces the reconstruction bias significantly and improves the standard deviation by a factor of 3. We discuss the role of different parameters, such as the effect of data coverage and data quality on these results, the trade-off between the measurement noise level and the number of data points, and the optimization of the clock data network.
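
    Least-squares collocation predicts the signal at new points from noisy observations through signal and noise covariances, s_hat = C_new,obs (C_obs,obs + C_noise)^-1 y. A minimal one-dimensional sketch with a Gaussian covariance model follows; all numbers are illustrative, and operational geopotential work derives the covariance from physically based degree-variance models rather than this ad hoc choice:

        import numpy as np

        rng = np.random.default_rng(1)

        def cov(a, b, c0=1.0, L=20.0):
            """Gaussian signal covariance between point sets a and b."""
            return c0 * np.exp(-((a[:, None] - b[None, :]) ** 2) / (2.0 * L ** 2))

        x_obs = np.linspace(0.0, 100.0, 15)
        y_obs = np.sin(x_obs / 15.0) + 0.05 * rng.standard_normal(15)  # synthetic data
        x_new = np.linspace(0.0, 100.0, 201)

        C_oo = cov(x_obs, x_obs) + 0.05 ** 2 * np.eye(15)  # signal + noise covariance
        C_no = cov(x_new, x_obs)
        y_hat = C_no @ np.linalg.solve(C_oo, y_obs)        # collocation prediction

    Adding a clock-derived potential value amounts to appending one more observation row with its own noise variance, which is how a few extra data points can pull down the reconstruction bias.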

  5. Development of Wind Speed Retrieval from Cross-Polarization Chinese Gaofen-3 Synthetic Aperture Radar in Typhoons

    PubMed Central

    Yuan, Xinzhe; Sun, Jian; Zhou, Wei; Zhang, Qingjun

    2018-01-01

    The purpose of our work is to determine the feasibility and effectiveness of retrieving sea surface wind speeds from C-band cross-polarization (herein vertical-horizontal, VH) Chinese Gaofen-3 (GF-3) SAR images in typhoons. In this study, we have collected three GF-3 SAR images acquired in Global Observation (GLO) and Wide ScanSAR (WSC) mode during the summer of 2017 from the China Sea, covering the typhoons Noru, Doksuri and Talim. These images were collocated with wind simulations on 0.12° grids from a numeric model, called the Regional Assimilation and Prediction System-Typhoon model (GRAPES-TYM). Recent research shows that GRAPES-TYM performs well for typhoon simulation in the China Sea. Based on this dataset, the dependence of the normalized radar cross-section (NRCS) of VH-polarization GF-3 SAR on wind speed and radar incidence angle has been investigated, after which an empirical algorithm for wind speed retrieval from VH-polarization GF-3 SAR was tuned. An additional four VH-polarization GF-3 SAR images in three typhoons, Noru, Hato and Talim, were investigated in order to validate the proposed algorithm. SAR-derived winds were compared with measurements from WindSat winds on 0.25° grids with wind speeds up to 40 m/s, showing a 5.5 m/s root mean square error (RMSE) in wind speed; an improved RMSE of 5.1 m/s was achieved when the retrievals were validated against GRAPES-TYM winds. It is concluded that the proposed algorithm is a promising technique for strong wind retrieval from cross-polarization GF-3 SAR images, which do not encounter a signal saturation problem. PMID:29385068

  6. Direct comparisons of ice cloud macro- and microphysical properties simulated by the Community Atmosphere Model version 5 with HIPPO aircraft observations

    NASA Astrophysics Data System (ADS)

    Wu, Chenglai; Liu, Xiaohong; Diao, Minghui; Zhang, Kai; Gettelman, Andrew; Lu, Zheng; Penner, Joyce E.; Lin, Zhaohui

    2017-04-01

    In this study we evaluate cloud properties simulated by the Community Atmosphere Model version 5 (CAM5) using in situ measurements from the HIAPER Pole-to-Pole Observations (HIPPO) campaign for the period of 2009 to 2011. The modeled wind and temperature are nudged towards reanalysis. Model results collocated with HIPPO flight tracks are directly compared with the observations, and model sensitivities to the representations of ice nucleation and growth are also examined. Generally, CAM5 is able to capture specific cloud systems in terms of vertical configuration and horizontal extension. In total, the model reproduces 79.8 % of observed cloud occurrences inside model grid boxes and even higher (94.3 %) for ice clouds (T ≤ -40 °C). The missing cloud occurrences in the model are primarily ascribed to the fact that the model cannot account for the high spatial variability of observed relative humidity (RH). Furthermore, model RH biases are mostly attributed to the discrepancies in water vapor, rather than temperature. At the micro-scale of ice clouds, the model captures the observed increase of ice crystal mean sizes with temperature, albeit with smaller sizes than the observations. The model underestimates the observed ice number concentration (Ni) and ice water content (IWC) for ice crystals larger than 75 µm in diameter. Modeled IWC and Ni are more sensitive to the threshold diameter for autoconversion of cloud ice to snow (Dcs), while simulated ice crystal mean size is more sensitive to ice nucleation parameterizations than to Dcs. Our results highlight the need for further improvements to the sub-grid RH variability and ice nucleation and growth in the model.

  7. On the Probabilistic Deployment of Smart Grid Networks in TV White Space.

    PubMed

    Cacciapuoti, Angela Sara; Caleffi, Marcello; Paura, Luigi

    2016-05-10

    To accommodate the rapidly increasing demand for wireless broadband communications in Smart Grid (SG) networks, research efforts are currently ongoing to enable the SG networks to utilize the TV spectrum according to the Cognitive Radio paradigm. To this aim, in this letter, we develop an analytical framework for the optimal deployment of multiple closely-located SG Neighborhood Area Networks (NANs) concurrently using the same TV spectrum. The objective is to derive the optimal values for both the number of NANs and their coverage. More specifically, regarding the number of NANs, we derive the optimal closed-form expression, i.e., the closed-form expression that assures the deployment of the maximum number of NANs in the considered region satisfying a given collision constraint on the transmissions of the NANs. Regarding the NAN coverage, we derive the optimal closed-form expression, i.e., the closed-form expression of the NAN transmission range that assures the maximum coverage of each NAN in the considered region satisfying the given collision constraint. All the theoretical results are derived by adopting a stochastic approach. Finally, numerical results validate the theoretical analysis.

  8. Dynamic Price Vector Formation Model-Based Automatic Demand Response Strategy for PV-Assisted EV Charging Stations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Qifang; Wang, Fei; Hodge, Bri-Mathias

    A real-time price (RTP)-based automatic demand response (ADR) strategy for PV-assisted electric vehicle (EV) Charging Stations (PVCS) without vehicle-to-grid is proposed. The charging process is modeled as a dynamic linear program instead of the normal day-ahead and real-time regulation strategy, to capture the advantages of both global and real-time optimization. Different from conventional price forecasting algorithms, a dynamic price vector formation model is proposed based on a clustering algorithm to form an RTP vector for a particular day. A dynamic feasible energy demand region (DFEDR) model considering grid voltage profiles is designed to calculate the lower and upper bounds. A deduction method is proposed to deal with the unknown information of future intervals, such as the actual stochastic arrival and departure times of EVs, which makes the DFEDR model suitable for global optimization. Finally, comparative cases articulate the advantages of the developed methods, and the validity of the proposed strategy in reducing electricity costs, mitigating peak charging demand, and improving PV self-consumption is verified through simulation scenarios.

  9. Parallel Implementation of a High Order Implicit Collocation Method for the Heat Equation

    NASA Technical Reports Server (NTRS)

    Kouatchou, Jules; Halem, Milton (Technical Monitor)

    2000-01-01

    We combine a high-order compact finite difference approximation and collocation techniques to numerically solve the two-dimensional heat equation. The resulting method is implicit and can be parallelized with a strategy that allows parallelization across both time and space. We compare the parallel implementation of the new method with a classical implicit method, namely the Crank-Nicolson method, where the parallelization is done across space only. Numerical experiments are carried out on the SGI Origin 2000.
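
    For reference, the classical comparison scheme named above, Crank-Nicolson for the one-dimensional heat equation with homogeneous Dirichlet conditions, fits in a few lines (grid sizes and the initial condition are illustrative):

        import numpy as np

        # u_t = u_xx on [0, 1], u(0, t) = u(1, t) = 0, u(x, 0) = sin(pi x)
        N, dt, nsteps = 50, 1e-3, 200
        x = np.linspace(0.0, 1.0, N + 1)
        u = np.sin(np.pi * x)
        r = dt / (2.0 * (x[1] - x[0]) ** 2)

        I = np.eye(N - 1)
        E = np.eye(N - 1, k=1) + np.eye(N - 1, k=-1)
        A = (1.0 + 2.0 * r) * I - r * E   # implicit (left-hand) operator
        B = (1.0 - 2.0 * r) * I + r * E   # explicit (right-hand) operator

        for _ in range(nsteps):
            u[1:-1] = np.linalg.solve(A, B @ u[1:-1])

        # accuracy check: the exact solution decays as exp(-pi**2 * t)
        print(u.max(), np.exp(-np.pi ** 2 * nsteps * dt))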

  10. Simulation of Edge Effects in Electroanalytical Experiments by Orthogonal Collocation. Part I. Two Dimensional Collocation and Theory for Chronoamperometry.

    DTIC Science & Technology

    1982-08-18

  11. The precision of wet atmospheric deposition data from national atmospheric deposition program/national trends network sites determined with collocated samplers

    USGS Publications Warehouse

    Nilles, M.A.; Gordon, J.D.; Schroder, L.J.

    1994-01-01

    A collocated, wet-deposition sampler program has been operated since October 1988 by the U.S. Geological Survey to estimate the overall sampling precision of wet atmospheric deposition data collected at selected sites in the National Atmospheric Deposition Program and National Trends Network (NADP/NTN). A duplicate set of wet-deposition sampling instruments was installed adjacent to existing sampling instruments at four different NADP/NTN sites for each year of the study. Wet-deposition samples from collocated sites were collected and analysed using standard NADP/NTN procedures. Laboratory analyses included determinations of pH, specific conductance, and concentrations of major cations and anions. The estimates of precision included all variability in the data-collection system, from the point of sample collection through storage in the NADP/NTN database. Sampling precision was determined from the absolute value of differences in the analytical results for the paired samples, in terms of median relative and absolute difference. The median relative difference for Mg2+, Na+, K+ and NH4+ concentration and deposition was quite variable between sites and exceeded 10% at most sites. Relative error for analytes whose concentrations typically approached laboratory method detection limits was greater than for analytes that did not typically approach detection limits. The median relative difference for SO42- and NO3- concentration, specific conductance, and sample volume at all sites was less than 7%. Precision for H+ concentration and deposition ranged from less than 10% at sites with typically high levels of H+ concentration to greater than 30% at sites with low H+ concentration. The median difference for analyte concentration and deposition was typically 1.5-2 times greater for samples collected during the winter than during other seasons at two northern sites. Likewise, the median relative difference in sample volume for winter samples was more than double the annual median relative difference at the two northern sites. Bias accounted for less than 25% of the collocated variability in analyte concentration and deposition from weekly collocated precipitation samples at most sites.
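
    One plausible reading of the precision statistics above is the median of relative absolute differences between collocated sample pairs, sketched below; the exact NADP/NTN normalization convention (pair mean versus a designated primary sampler) is an assumption here:

        import numpy as np

        def median_relative_difference(a, b):
            """Median relative absolute difference (%), normalized by the pair mean."""
            a = np.asarray(a, dtype=float)
            b = np.asarray(b, dtype=float)
            return np.median(np.abs(a - b) / ((a + b) / 2.0) * 100.0)

        def median_absolute_difference(a, b):
            """Median absolute difference in the analyte's own units."""
            return np.median(np.abs(np.asarray(a, float) - np.asarray(b, float)))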

  12. Numerical study of the flow in a three-dimensional thermally driven cavity

    NASA Astrophysics Data System (ADS)

    Rauwoens, Pieter; Vierendeels, Jan; Merci, Bart

    2008-06-01

    Solutions for the fully compressible Navier-Stokes equations are presented for the flow and temperature fields in a cubic cavity with large horizontal temperature differences. The ideal-gas approximation for air is assumed and viscosity is computed using Sutherland's law. The three-dimensional case forms an extension of previous studies performed on a two-dimensional square cavity. The influence of imposed boundary conditions in the third dimension is investigated as a numerical experiment. Comparison is made between convergence rates in case of periodic and free-slip boundary conditions. Results with no-slip boundary conditions are presented as well. The effect of the Rayleigh number is studied. Results are computed using a finite volume method on a structured, collocated grid. An explicit third-order discretization for the convective part and an implicit central discretization for the acoustic part and for the diffusive part are used. To stabilize the scheme an artificial dissipation term for the pressure and the temperature is introduced. The discrete equations are solved using a time-marching method with restrictions on the timestep corresponding to the explicit parts of the solver. Multigrid is used as acceleration technique.

  13. Improving the geological interpretation of magnetic and gravity satellite anomalies

    NASA Technical Reports Server (NTRS)

    Hinze, William J.; Braile, Lawrence W.; Vonfrese, Ralph R. B.

    1987-01-01

    Quantitative analysis of the geologic component of observed satellite magnetic and gravity fields requires accurate isolation of the geologic component of the observations, theoretically sound and viable inversion techniques, and integration of collateral, constraining geologic and geophysical data. A number of significant contributions were made which make quantitative analysis more accurate. These include procedures for: screening and processing orbital data for lithospheric signals based on signal repeatability and wavelength analysis; producing accurate gridded anomaly values at constant elevations from the orbital data by three-dimensional least squares collocation; increasing the stability of equivalent point source inversion and criteria for the selection of the optimum damping parameter; enhancing inversion techniques through an iterative procedure based on the superposition theorem of potential fields; and modeling efficiently regional-scale lithospheric sources of satellite magnetic anomalies. In addition, these techniques were utilized to investigate regional anomaly sources of North and South America and India and to provide constraints to continental reconstruction. Since the inception of this research study, eleven papers were presented with associated published abstracts, three theses were completed, four papers were published or accepted for publication, and an additional manuscript was submitted for publication.

  14. Verification of continuum drift kinetic equation solvers in NIMROD

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Held, E. D.; Ji, J.-Y.; Kruger, S. E.

    Verification of continuum solutions to the electron and ion drift kinetic equations (DKEs) in NIMROD [C. R. Sovinec et al., J. Comp. Phys. 195, 355 (2004)] is demonstrated through comparison with several neoclassical transport codes, most notably NEO [E. A. Belli and J. Candy, Plasma Phys. Controlled Fusion 54, 015015 (2012)]. The DKE solutions use NIMROD's spatial representation, 2D finite elements in the poloidal plane and a 1D Fourier expansion in toroidal angle. For 2D velocity space, a novel 1D expansion in finite elements is applied for the pitch-angle dependence and a collocation grid is used for the normalized speed coordinate. The full, linearized Coulomb collision operator is kept and shown to be important for obtaining quantitative results. Bootstrap currents, parallel ion flows, and radial particle and heat fluxes show quantitative agreement between NIMROD and NEO for a variety of tokamak equilibria. In addition, velocity space distribution function contours for ions and electrons show nearly identical detailed structure and agree quantitatively. A Θ-centered, implicit time discretization and a block-preconditioned, iterative linear algebra solver provide efficient electron and ion DKE solutions that ultimately will be used to obtain closures for NIMROD's evolving fluid model.

  15. Modeling and Analysis of Remote, Off-grid Microgrids

    NASA Astrophysics Data System (ADS)

    Madathil, Sreenath Chalil

    Over the past century the electric power industry has evolved to support the delivery of power over long distances with highly interconnected transmission systems. Despite this evolution, some remote communities are not connected to these systems. These communities rely on small, disconnected distribution systems, i.e., microgrids, to deliver power. Power distribution in most of these remote communities often depends on a type of microgrid called an "off-grid microgrid". However, as microgrids often are not held to the same reliability standards as transmission grids, remote communities can be at risk of experiencing extended blackouts. Recent trends have also shown an increased use of renewable energy resources in power systems for remote communities. The increased penetration of renewable resources in power generation will require complex decision making when designing a resilient power system. This is mainly due to the stochastic nature of renewable resources, which can lead to loss of load or line overload during operation. In the first part of this thesis, we develop an optimization model and an accompanying solution algorithm for capacity planning and operating microgrids that include N-1 security and other practical modeling features (e.g., AC power flow physics, component efficiencies and thermal limits). We demonstrate the effectiveness of our model and solution approach on two test systems: a modified version of the IEEE 13 node test feeder and a model of a distribution system in a remote Alaskan community. Having identified a tractable algorithm for this problem, we develop a mathematical model that includes topology design of microgrids. The topology design includes building new lines, making redundant lines, and analyzing N-1 contingencies on generators and lines. We develop a rolling horizon algorithm to efficiently analyze the model and demonstrate the strength of our algorithm on the same network. Finally, we develop a stochastic model that considers generation uncertainties along with N-1 security on generation assets. We develop a chance-constrained model to analyze the efficacy of the problem under consideration and present a case study on an adapted IEEE-13 node network. A successful implementation of this research could help remote communities around the world to enhance their quality of life by providing them with cost-effective, reliable electricity.

  16. A transformed path integral approach for solution of the Fokker-Planck equation

    NASA Astrophysics Data System (ADS)

    Subramaniam, Gnana M.; Vedula, Prakash

    2017-10-01

    A novel path integral (PI) based method for solution of the Fokker-Planck equation is presented. The proposed method, termed the transformed path integral (TPI) method, utilizes a new formulation for the underlying short-time propagator to perform the evolution of the probability density function (PDF) in a transformed computational domain where a more accurate representation of the PDF can be ensured. The new formulation, based on a dynamic transformation of the original state space with the statistics of the PDF as parameters, preserves the non-negativity of the PDF and incorporates short-time properties of the underlying stochastic process. New update equations for the state PDF in a transformed space and the parameters of the transformation (including mean and covariance) that better accommodate nonlinearities in drift and non-Gaussian behavior in distributions are proposed (based on properties of the SDE). Owing to the choice of transformation considered, the proposed method maps a fixed grid in transformed space to a dynamically adaptive grid in the original state space. The TPI method, in contrast to conventional methods such as Monte Carlo simulations and fixed grid approaches, is able to better represent the distributions (especially the tail information) and better address challenges in processes with large diffusion, large drift and large concentration of PDF. Additionally, in the proposed TPI method, error bounds on the probability in the computational domain can be obtained using Chebyshev's inequality. The benefits of the TPI method over conventional methods are illustrated through simulations of linear and nonlinear drift processes in one-dimensional and multidimensional state spaces. The effects of spatial and temporal grid resolutions as well as that of the diffusion coefficient on the error in the PDF are also characterized.
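
    For contrast with the transformed formulation, a basic fixed-grid path integral step for a one-dimensional Fokker-Planck equation is sketched below: the PDF is propagated by quadrature against the Gaussian short-time (Euler-Maruyama) transition density. Drift, diffusion and grid are illustrative; the TPI method replaces this fixed grid with a dynamically transformed one:

        import numpy as np

        # Fokker-Planck equation for dX = f(X) dt + sqrt(2 D) dW
        f = lambda x: -x        # illustrative linear drift
        D, dt = 0.5, 0.01

        x = np.linspace(-5.0, 5.0, 401)
        dx = x[1] - x[0]
        p = np.exp(-(x - 2.0) ** 2 / 0.5)
        p /= p.sum() * dx       # normalized initial PDF

        # short-time propagator K[i, j]: transition density from x_j to x_i over dt
        K = np.exp(-(x[:, None] - x[None, :] - f(x[None, :]) * dt) ** 2
                   / (4.0 * D * dt))
        K /= np.sqrt(4.0 * np.pi * D * dt)

        for _ in range(100):
            p = (K @ p) * dx
            p /= p.sum() * dx   # renormalize against quadrature error

    For this linear drift the density relaxes toward the Gaussian stationary solution with variance D; tail accuracy on a fixed grid is exactly where the transformed grid of the TPI method pays off.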

  17. Static reservoir modeling of the Bahariya reservoirs for the oilfields development in South Umbarka area, Western Desert, Egypt

    NASA Astrophysics Data System (ADS)

    Abdel-Fattah, Mohamed I.; Metwalli, Farouk I.; Mesilhi, El Sayed I.

    2018-02-01

    3D static reservoir modeling of the Bahariya reservoirs using seismic and well data can be a relevant part of an overall strategy for oilfield development in the South Umbarka area (Western Desert, Egypt). The seismic data are used to build the 3D grid, including fault sticks for the fault modeling, and horizon interpretations and surfaces for horizon modeling. The 3D grid is the digital representation of the structural geology of the Bahariya Formation. Once a reasonably accurate representation is obtained, the 3D grid is filled with facies and petrophysical properties and simulated, to gain a more precise understanding of reservoir property behavior. Sequential Indicator Simulation (SIS) and Sequential Gaussian Simulation (SGS) are the stochastic algorithms used to spatially distribute discrete reservoir properties (facies) and continuous reservoir properties (shale volume, porosity, and water saturation), respectively, within the created 3D grid throughout property modeling. The structural model of the Bahariya Formation exhibits the trapping mechanism, which is a fault-assisted anticlinal closure trending NW-SE. This major fault breaks the reservoirs into two major fault blocks (North Block and South Block). Petrophysical models classify the Lower Bahariya reservoir as a moderate to good reservoir, rather than the Upper Bahariya reservoir, in terms of facies, with good porosity and permeability, low water saturation, and moderate net-to-gross. The Original Oil In Place (OOIP) values of the modeled Bahariya reservoirs show hydrocarbon accumulation in economic quantity, considering the high structural dips at the central part of the South Umbarka area. The power of the 3D static modeling technique has provided considerable insight into the future prediction of Bahariya reservoir performance and production behavior.

  18. Spatial and temporal variability of the overall error of National Atmospheric Deposition Program measurements determined by the USGS collocated-sampler program, water years 1989-2001

    USGS Publications Warehouse

    Wetherbee, G.A.; Latysh, N.E.; Gordon, J.D.

    2005-01-01

    Data from the U.S. Geological Survey (USGS) collocated-sampler program for the National Atmospheric Deposition Program/National Trends Network (NADP/NTN) are used to estimate the overall error of NADP/NTN measurements. Absolute errors are estimated by comparison of paired measurements from collocated instruments. Spatial and temporal differences in absolute error were identified and are consistent with longitudinal distributions of NADP/NTN measurements and spatial differences in precipitation characteristics. The magnitude of error for calcium, magnesium, ammonium, nitrate, and sulfate concentrations, specific conductance, and sample volume is of minor environmental significance to data users. Data collected after a 1994 sample-handling protocol change are prone to less absolute error than data collected prior to 1994. Absolute errors are smaller during non-winter months than during winter months for selected constituents at sites where frozen precipitation is common. Minimum resolvable differences are estimated for different regions of the USA to aid spatial and temporal watershed analyses.

  19. Generalized Lagrangian Jacobi Gauss collocation method for solving unsteady isothermal gas through a micro-nano porous medium

    NASA Astrophysics Data System (ADS)

    Parand, Kourosh; Latifi, Sobhan; Delkhosh, Mehdi; Moayeri, Mohammad M.

    2018-01-01

    In the present paper, a new method based on the Generalized Lagrangian Jacobi Gauss (GLJG) collocation method is proposed. The nonlinear Kidder equation, which describes unsteady isothermal gas flow through a micro-nano porous medium, is a second-order two-point boundary value ordinary differential equation on the unbounded interval [0, ∞). Firstly, using the quasilinearization method, the equation is converted to a sequence of linear ordinary differential equations. Then, by using the GLJG collocation method, the problem is reduced to solving a system of algebraic equations. It must be mentioned that this equation is solved without domain truncation or variable changes. A comparison with some numerical solutions is made, and the obtained results indicate that the presented solution is highly accurate. The important value of the initial slope, y'(0), is obtained as -1.191790649719421734122828603800159364 for η = 0.5. Compared to the best result obtained so far, it is accurate to 36 decimal places.

  1. Cross-evaluation of ground-based, multi-satellite and reanalysis precipitation products: Applicability of the Triple Collocation method across Mainland China

    NASA Astrophysics Data System (ADS)

    Li, Changming; Tang, Guoqiang; Hong, Yang

    2018-07-01

    Evaluating the reliability of satellite and reanalysis precipitation products is critical but challenging over ungauged or poorly gauged regions. The Triple Collocation (TC) method is a reliable approach to estimating the accuracy of any three independent inputs in the absence of truth values. This study assesses the uncertainty of three types of independent precipitation products, i.e., satellite-based, ground-based and model reanalysis, over Mainland China using the TC method. The ground-based data set is the Gauge Based Daily Precipitation Analysis (CGDPA). The reanalysis data set is ERA-Interim. The satellite-based products include five mainstream satellite products. The comparison and evaluation are conducted at 0.25° and daily resolution from 2013 to 2015. First, the effectiveness of the TC method is evaluated in South China, which has a dense gauge network. The results demonstrate that the TC method is reliable: the correlation coefficient (CC) and root mean square error (RMSE) derived from TC are close to those derived from ground observations, with only 9% and 7% mean relative differences, respectively. Then, the TC method is applied over Mainland China, with special attention paid to the Tibetan Plateau (TP), known as the Earth's third pole, where ground stations are few. Results indicate that (1) the overall performance of IMERG is better than the other satellite products over Mainland China, followed by 3B42V7, CMORPH-CRT and PERSIANN-CDR; and (2) in the TP, CGDPA shows the best overall performance over gauged grid cells, whereas over ungauged regions IMERG and ERA-Interim slightly outperform CGDPA, with similar RMSE but higher mean CC (0.63, 0.61, and 0.58, respectively). This highlights the strengths and potential of remote sensing and reanalysis data over the TP and reconfirms the inherent uncertainty of CGDPA caused by interpolation from sparsely gauged data. The study concludes that the TC method provides not only reliable cross-validation results over Mainland China but also a new perspective for comparatively assessing multi-source precipitation products, particularly over poorly gauged regions such as the TP.
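
    The classical TC estimator referenced above is compact: given three collocated estimates of the same quantity with mutually independent, additive errors, each error variance follows from the three pairwise covariances. Below is a minimal sketch of that covariance formulation (the CC metric quoted in the study comes from the extended, correlation-based form of TC, not shown); the synthetic data and function name are illustrative, not from the paper.

```python
import numpy as np

def triple_collocation(x, y, z):
    """Classical covariance-based Triple Collocation.

    Estimates the error variance of three collocated data sets that
    observe the same truth with mutually independent additive errors.
    """
    c = np.cov(np.vstack([x, y, z]))              # 3x3 sample covariance
    ex2 = c[0, 0] - c[0, 1] * c[0, 2] / c[1, 2]   # error variance of x
    ey2 = c[1, 1] - c[0, 1] * c[1, 2] / c[0, 2]   # error variance of y
    ez2 = c[2, 2] - c[0, 2] * c[1, 2] / c[0, 1]   # error variance of z
    return ex2, ey2, ez2

# Synthetic demo: one truth observed by three noisy products
rng = np.random.default_rng(0)
truth = rng.gamma(2.0, 5.0, size=5000)            # e.g. daily rain proxy
gauge = truth + rng.normal(0, 1.0, truth.size)
sat   = truth + rng.normal(0, 2.0, truth.size)
reana = truth + rng.normal(0, 3.0, truth.size)
print(triple_collocation(gauge, sat, reana))      # approx (1.0, 4.0, 9.0)
```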

  2. Simultaneous estimation of cross-validation errors in least squares collocation applied for statistical testing and evaluation of the noise variance components

    NASA Astrophysics Data System (ADS)

    Behnabian, Behzad; Mashhadi Hossainali, Masoud; Malekzadeh, Ahad

    2018-02-01

    The cross-validation technique is a popular method to assess and improve the quality of prediction by least squares collocation (LSC). We present a formula for direct estimation of the vector of cross-validation errors (CVEs) in LSC which is much faster than element-wise CVE computation. We show that a quadratic form of the CVEs follows a Chi-squared distribution. Furthermore, an a posteriori noise variance factor is derived from the quadratic form of the CVEs. In order to detect blunders in the observations, the estimated standardized CVE is proposed as the test statistic, applicable when noise variances are known or unknown. We use LSC together with the methods proposed in this research for interpolation of crustal subsidence on the northern coast of the Gulf of Mexico. The results show that after detecting and removing outliers, the root mean square (RMS) of the CVEs and the estimated noise standard deviation are reduced by about 51% and 59%, respectively. In addition, the RMS of the LSC prediction error at the data points and the RMS of the estimated observation noise decrease by 39% and 67%, respectively. However, the RMS of the LSC prediction error on a regular grid of interpolation points covering the area is reduced by only about 4%, a consequence of the sparse distribution of data points in this case study. The influence of gross errors on the LSC prediction results is also investigated using lower cutoff CVEs; after elimination of outliers, the RMS of this type of error is reduced by 19.5% within a 5 km radius of vicinity. We propose a method using standardized CVEs for classification of the dataset into three groups with presumably different noise variances. The noise variance components for each group are estimated by the restricted maximum-likelihood method via the Fisher scoring technique. Finally, LSC assessment measures are computed for the estimated heterogeneous noise variance model and compared with those of the homogeneous model. The advantage of the proposed method is the reduction in estimated noise levels for those groups with fewer noisy data points.
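
    The record does not reproduce the authors' direct CVE formula, but a widely used closed form for leave-one-out errors in collocation/Gaussian-process prediction gives the flavor of why all CVEs can be obtained without n separate prediction runs: with full covariance matrix C (signal plus noise) and centered observations y, e_i = [C^-1 y]_i / [C^-1]_ii. A hedged numpy sketch, with an illustrative Gaussian covariance model (this may differ in detail from the paper's formula):

```python
import numpy as np

def loo_cv_errors(C, y):
    """Leave-one-out cross-validation errors for least squares collocation.

    C : (n, n) covariance matrix of the observations (signal + noise)
    y : (n,) centered observation vector
    Uses the closed form e_i = [C^{-1} y]_i / [C^{-1}]_{ii}, avoiding
    n separate prediction runs.
    """
    Cinv = np.linalg.inv(C)
    return (Cinv @ y) / np.diag(Cinv)

# Tiny demo with a Gaussian covariance function on a 1-D profile
x = np.linspace(0.0, 10.0, 50)
C_signal = np.exp(-(x[:, None] - x[None, :])**2 / 2.0)
C = C_signal + 0.1 * np.eye(x.size)            # noise variance 0.1
rng = np.random.default_rng(1)
y = rng.multivariate_normal(np.zeros(x.size), C)
print("RMS of CV errors:", np.sqrt(np.mean(loo_cv_errors(C, y)**2)))
```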

  3. A Generalized Distributed Data Match-Up Service in Support of Oceanographic Application

    NASA Astrophysics Data System (ADS)

    Tsontos, V. M.; Huang, T.; Holt, B.; Smith, S. R.; Bourassa, M. A.; Worley, S. J.; Ji, Z.; Elya, J. L.; Stallard, A. P.

    2016-02-01

    Oceanographic applications increasingly rely on the integration and colocation of satellite and field observations providing complementary data coverage over a continuum of spatio-temporal scales. Here we report on a collaborative venture between NASA/JPL, NCAR and FSU/COAPS to develop a Distributed Oceanographic Match-up Service (DOMS). The DOMS project aims to implement a technical infrastructure providing a generalized, publicly accessible data collocation capability for satellite and in situ datasets utilizing remote data stores, in support of satellite mission cal/val and a range of research and operational applications. The service will provide a mechanism for users to specify geospatial references and receive collocated satellite and field observations within the selected spatio-temporal domain and match-up window extent. DOMS will include several representative in situ and satellite datasets. Field data will focus on surface observations from NCAR's International Comprehensive Ocean-Atmosphere Data Set (ICOADS), the Shipboard Automated Meteorological and Oceanographic System Initiative (SAMOS) at FSU/COAPS, and the Salinity Processes in the Upper Ocean Regional Study (SPURS) data hosted at JPL/PO.DAAC. Satellite data will include JPL ASCAT L2 12.5 km winds, the Aquarius L2 orbital dataset, MODIS L2 swath data, and the high-resolution gridded L4 MUR-SST product. Importantly, while DOMS will be developed with these select datasets, it will be readily extendable to other in situ and satellite data collections and easily ported to other remote providers, thus potentially supporting additional science disciplines. Technical challenges to be addressed include: 1) ensuring accurate, efficient, and scalable match-up algorithm performance, 2) undertaking colocation using datasets that are distributed on the network, and 3) returning matched observations with sufficient metadata so that value differences can be properly interpreted. DOMS leverages existing technologies (EDGE, w10n, OPeNDAP, relational and graph/triple-store databases) and cloud computing. It will implement both a web portal interface, for users to review and submit match-up requests interactively, and an underlying web service interface facilitating large-scale and automated machine-to-machine queries.

  4. Least squares collocation applied to local gravimetric solutions from satellite gravity gradiometry data

    NASA Technical Reports Server (NTRS)

    Robbins, J. W.

    1985-01-01

    An autonomous spaceborne gravity gradiometer mission is being considered as a post-Geopotential Research Mission project. The introduction of satellite gradiometry data to geodesy is expected to improve solid earth gravity models. The possibility of utilizing gradiometer data for the determination of pertinent gravimetric quantities on a local basis is explored. The analytical technique of least squares collocation is investigated for its usefulness in local solutions of this type. It is assumed, in the error analysis, that the vertical gravity gradient component of the gradient tensor is used as the raw data signal, from which the corresponding reference gradients are removed to create the centered observations required in the collocation solution. The reference gradients are computed from a high degree and order geopotential model. The solution can be made in terms of mean or point gravity anomalies, height anomalies, or other useful gravimetric quantities, depending on the choice of covariance types. Selected for this study were 30' x 30' mean gravity and height anomalies. Existing and new software are utilized to implement the collocation technique. It was determined that satellite gradiometry data at an altitude of 200 km can be used successfully for the determination of 30' x 30' mean gravity anomalies to an accuracy of 9.2 mgal with this algorithm. It is shown that the resulting accuracy estimates are sensitive to gravity model coefficient uncertainties, data reduction assumptions and satellite mission parameters.

  5. Comparing multiple model-derived aerosol optical properties to spatially collocated ground-based and satellite measurements

    NASA Astrophysics Data System (ADS)

    Ocko, Ilissa B.; Ginoux, Paul A.

    2017-04-01

    Anthropogenic aerosols are a key factor governing Earth's climate and play a central role in human-caused climate change. However, because of aerosols' complex physical, optical, and dynamical properties, aerosols are one of the most uncertain aspects of climate modeling. Fortunately, aerosol measurement networks over the past few decades have led to the establishment of long-term observations for numerous locations worldwide. Further, the availability of datasets from several different measurement techniques (such as ground-based and satellite instruments) can help scientists increasingly improve modeling efforts. This study explores the value of evaluating several model-simulated aerosol properties with data from spatially collocated instruments. We compare aerosol optical depth (AOD; total, scattering, and absorption), single-scattering albedo (SSA), Ångström exponent (α), and extinction vertical profiles in two prominent global climate models (Geophysical Fluid Dynamics Laboratory, GFDL, CM2.1 and CM3) to seasonal observations from collocated instruments (AErosol RObotic NETwork, AERONET, and Cloud-Aerosol Lidar with Orthogonal Polarization, CALIOP) at seven polluted and biomass burning regions worldwide. We find that a multi-parameter evaluation provides key insights into model biases, that data from collocated instruments can reveal the underlying aerosol-governing physics, that column properties wash out important vertical distinctions, and that an improved model does not mean all aspects are improved. We conclude that it is important to make use of all available data (parameters and instruments) when evaluating aerosol properties derived by models.

  6. Kinetic and dynamic Delaunay tetrahedralizations in three dimensions

    NASA Astrophysics Data System (ADS)

    Schaller, Gernot; Meyer-Hermann, Michael

    2004-09-01

    We describe algorithms to implement fully dynamic and kinetic three-dimensional unconstrained Delaunay triangulations, where the time evolution of the triangulation is governed not only by moving vertices but also by a changing number of vertices. We use three-dimensional simplex flip algorithms and a stochastic visibility walk algorithm for point location; in addition, we propose a new, simple method of deleting vertices from an existing three-dimensional Delaunay triangulation while maintaining the Delaunay property. As an example, we analyse the performance in various cases of practical relevance. The dual Dirichlet tessellation can be used to solve differential equations on an irregular grid, to define partitions in cell tissue simulations, for collision detection, etc.
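
    For readers who want to experiment with incremental (insertion-only) three-dimensional Delaunay triangulations, SciPy's Qhull wrapper is a convenient starting point; note that vertex deletion with preservation of the Delaunay property, one of the paper's contributions, is not available there. A small sketch:

```python
import numpy as np
from scipy.spatial import Delaunay

# Incremental 3-D Delaunay triangulation (insertion only; the vertex
# deletion proposed in the paper is not supported by SciPy/Qhull).
rng = np.random.default_rng(7)
pts = rng.random((20, 3))
tri = Delaunay(pts, incremental=True)
tri.add_points(rng.random((5, 3)))        # kinetic update: new vertices
print(tri.simplices.shape)                # tetrahedra after insertion

# Point location (a walk-based search inside Qhull)
print(tri.find_simplex(np.array([[0.5, 0.5, 0.5]])))
```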

  7. Improved Large-Eddy Simulation Using a Stochastic Backscatter Model: Application to the Neutral Atmospheric Boundary Layer and Urban Street Canyon Flow

    NASA Astrophysics Data System (ADS)

    O'Neill, J. J.; Cai, X.; Kinnersley, R.

    2015-12-01

    Large-eddy simulation (LES) provides a powerful tool for developing our understanding of atmospheric boundary layer (ABL) dynamics, which in turn can be used to improve the parameterisations of simpler operational models. However, LES modelling is not without its own limitations - most notably, the need to parameterise the effects of all subgrid-scale (SGS) turbulence. Here, we employ a stochastic backscatter SGS model, which explicitly handles both forward and reverse energy transfer to and from the subgrid scales, to simulate the neutrally stratified ABL as well as flow within an idealised urban street canyon. In both cases, a clear improvement in LES output statistics is observed when compared with the performance of an SGS model that handles forward energy transfer only. In the neutral ABL case, the near-surface velocity profile is brought significantly closer to its expected logarithmic form. In the street canyon case, the strength of the primary vortex that forms within the canyon is more accurately reproduced when compared to wind tunnel measurements. Our results indicate that grid-scale backscatter plays an important role in both of these modelled situations.

  8. A Two-Stage Stochastic Mixed-Integer Programming Approach to the Smart House Scheduling Problem

    NASA Astrophysics Data System (ADS)

    Ozoe, Shunsuke; Tanaka, Yoichi; Fukushima, Masao

    A “Smart House” is a highly energy-optimized house equipped with photovoltaic systems (PV systems), electric battery systems, fuel cell cogeneration systems (FC systems), electric vehicles (EVs) and so on. Smart houses have recently attracted much attention thanks to their enhanced ability to save energy by making full use of renewable energy and by helping to maintain power grid stability despite an increased power draw from installed PV systems. Yet running a smart house's power system, with its multiple power sources and storage devices, is no simple task. In this paper, we consider the problem of power scheduling for a smart house with a PV system, an FC system and an EV. We formulate the problem as a mixed integer programming problem, and then extend it to a stochastic programming problem involving recourse costs to cope with uncertain electricity demand, heat demand and PV power generation. Using our method, we seek the optimal power schedule with the minimum expected operation cost. We present some results of numerical experiments with data on real-life demands and PV power generation to show the effectiveness of our method.
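
    A toy scenario-expansion version of such a two-stage stochastic mixed-integer program can be written in a few lines with the PuLP modeling library. The sketch below keeps only a fuel-cell on/off first stage and grid-purchase recourse; all numbers, and the omission of the PV, battery, and EV dynamics of the actual model, are illustrative simplifications:

```python
from pulp import LpProblem, LpMinimize, LpVariable, lpSum

T = range(4)                       # decision periods
price = [10, 30, 50, 20]           # grid tariff per period (hypothetical)
FC_OUT, FC_COST = 0.8, 15          # fuel-cell output (kWh) and running cost
scenarios = {                      # demand scenarios with probabilities
    "low":  (0.5, [0.5, 0.8, 1.0, 0.6]),
    "high": (0.5, [0.9, 1.4, 1.8, 1.0]),
}

prob = LpProblem("smart_house", LpMinimize)
# First stage (here-and-now): fuel-cell on/off per period
fc_on = [LpVariable(f"fc_{t}", cat="Binary") for t in T]
# Second stage (recourse): grid purchase per scenario and period
grid = {s: [LpVariable(f"grid_{s}_{t}", lowBound=0) for t in T]
        for s in scenarios}

# Expected cost: deterministic FC cost + probability-weighted grid cost
prob += lpSum(FC_COST * fc_on[t] for t in T) + lpSum(
    p * lpSum(price[t] * grid[s][t] for t in T)
    for s, (p, _) in scenarios.items())

# Demand balance must hold in every scenario (recourse constraints)
for s, (_, demand) in scenarios.items():
    for t in T:
        prob += grid[s][t] + FC_OUT * fc_on[t] >= demand[t]

prob.solve()
print([int(v.value()) for v in fc_on])   # first-stage schedule
```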

  9. XMDS2: Fast, scalable simulation of coupled stochastic partial differential equations

    NASA Astrophysics Data System (ADS)

    Dennis, Graham R.; Hope, Joseph J.; Johnsson, Mattias T.

    2013-01-01

    XMDS2 is a cross-platform, GPL-licensed, open source package for numerically integrating initial value problems that range from a single ordinary differential equation up to systems of coupled stochastic partial differential equations. The equations are described in a high-level XML-based script, and the package generates low-level, optionally parallelised C++ code for the efficient solution of those equations. It combines the advantages of high-level simulations, namely fast and low-error development, with the speed, portability and scalability of hand-written code. XMDS2 is a complete redesign of the XMDS package, and features support for a much wider problem space while also producing faster code.
    Program summary
    Program title: XMDS2
    Catalogue identifier: AENK_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AENK_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: GNU General Public License, version 2
    No. of lines in distributed program, including test data, etc.: 872490
    No. of bytes in distributed program, including test data, etc.: 45522370
    Distribution format: tar.gz
    Programming language: Python and C++
    Computer: Any computer with a Unix-like system, a C++ compiler and Python
    Operating system: Any Unix-like system; developed under Mac OS X and GNU/Linux
    RAM: Problem dependent (roughly 50 bytes per grid point)
    Classification: 4.3, 6.5
    External routines: Problem-dependent. Uses FFTW3 (FFT-based spectral methods only), dSFMT random number generation (stochastic problems only), MPI (distributed problems only), HDF5, GNU Scientific Library (Bessel-based spectral methods only) and a BLAS implementation (non-FFT-based spectral methods only)
    Nature of problem: General coupled initial-value stochastic partial differential equations
    Solution method: Spectral method with method-of-lines integration
    Running time: Determined by the size of the problem

  10. Impact of Stochastic Parameterization Schemes on Coupled and Uncoupled Climate Simulations with the Community Earth System Model

    NASA Astrophysics Data System (ADS)

    Christensen, H. M.; Berner, J.; Coleman, D.; Palmer, T.

    2015-12-01

    Stochastic parameterizations have been used for more than a decade in atmospheric models to represent the variability of unresolved sub-grid processes. They have a beneficial effect on the spread and mean state of medium- and extended-range forecasts (Buizza et al. 1999, Palmer et al. 2009). There is also increasing evidence that stochastic parameterization of unresolved processes could be beneficial for the climate of an atmospheric model through noise-enhanced variability, noise-induced drift (Berner et al. 2008), and by enabling the climate simulator to explore other flow regimes (Christensen et al. 2015; Dawson and Palmer 2015). We present results showing the impact of including the Stochastically Perturbed Parameterization Tendencies (SPPT) scheme in coupled runs of the National Center for Atmospheric Research (NCAR) Community Atmosphere Model, version 4 (CAM4) with historical forcing. The SPPT scheme accounts for uncertainty in the CAM physical parameterization schemes, including the convection scheme, by perturbing the parametrised temperature, moisture and wind tendencies with a multiplicative noise term. SPPT results in a large improvement in the variability of the CAM4 modeled climate. In particular, SPPT significantly improves the representation of the El Niño-Southern Oscillation in CAM4, improving the power spectrum as well as both the inter- and intra-annual variability of tropical Pacific sea surface temperatures. References: Berner, J., Doblas-Reyes, F. J., Palmer, T. N., Shutts, G. J., & Weisheimer, A., 2008. Phil. Trans. R. Soc. A, 366, 2559-2577. Buizza, R., Miller, M. and Palmer, T. N., 1999. Q.J.R. Meteorol. Soc., 125, 2887-2908. Christensen, H. M., I. M. Moroz & T. N. Palmer, 2015. Clim. Dynam., doi: 10.1007/s00382-014-2239-9. Dawson, A. and T. N. Palmer, 2015. Clim. Dynam., doi: 10.1007/s00382-014-2238-x. Palmer, T.N., R. Buizza, F. Doblas-Reyes, et al., 2009, ECMWF technical memorandum 598.
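
    The core of an SPPT-type scheme is easy to sketch: a red-noise random multiplier r is evolved in time and the net physics tendency is perturbed as (1 + r) * tendency. The snippet below is a minimal single-column illustration; the operational scheme uses spatially correlated (spectral) noise patterns, and the tau/sigma values here are assumptions, not the CAM4 settings:

```python
import numpy as np

def sppt_perturb(tendency, r_prev, dt, tau=6 * 3600, sigma=0.5, rng=None):
    """Multiplicative SPPT-style perturbation of a physics tendency.

    r evolves as a first-order auto-regressive (red-noise) process with
    decorrelation time tau; the perturbed tendency is (1 + r) * tendency.
    tau and sigma are illustrative values, not the operational tuning.
    """
    rng = rng or np.random.default_rng()
    phi = np.exp(-dt / tau)                        # AR(1) autocorrelation
    r = phi * r_prev + np.sqrt(1 - phi**2) * rng.normal(
        0.0, sigma, tendency.shape)
    r = np.clip(r, -0.9, 0.9)   # keep the multiplier positive and bounded
    return (1 + r) * tendency, r

# One time step for a toy 1-D temperature-tendency profile
dT = np.linspace(1.0, 5.0, 10) * 1e-5              # K/s from the physics
pert_dT, r = sppt_perturb(dT, r_prev=np.zeros(10), dt=1800.0)
```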

  11. Stochasticity of convection in Giga-LES data

    NASA Astrophysics Data System (ADS)

    De La Chevrotière, Michèle; Khouider, Boualem; Majda, Andrew J.

    2016-09-01

    The poor representation of tropical convection in general circulation models (GCMs) is believed to be responsible for much of the uncertainty in predictions of weather and climate in the tropics. The stochastic multicloud model (SMCM) was recently developed by Khouider et al. (Commun Math Sci 8(1):187-216, 2010) to represent the missing variability in GCMs due to unresolved features of organized tropical convection. The SMCM is based on three cloud types (congestus, deep and stratiform), and transitions between these cloud types are formalized in terms of probability rules that are functions of the large-scale environment's convective state and a set of seven arbitrary cloud timescale parameters. Here, a statistical inference method based on the Bayesian paradigm is applied to estimate these key cloud timescales from the Giga-LES dataset, a 24-h large-eddy simulation (LES) of deep tropical convection (Khairoutdinov et al. in J Adv Model Earth Syst 1(12), 2009) over a domain comparable to a GCM gridbox. A sequential learning strategy is used, in which the Giga-LES domain is partitioned into a few subdomains and the atmospheric time series obtained on each subdomain are used to train the Bayesian procedure incrementally. Convergence of the marginal posterior densities for all seven parameters is demonstrated for two different grid partitions, and sensitivity tests to other model parameters are also presented. A single-column model simulation using the SMCM parameterization with the Giga-LES inferred parameters reproduces many important statistical features of the Giga-LES run, without any further tuning. In particular, it exhibits intermittent dynamical behavior in both the stochastic cloud fractions and the large-scale dynamics, with dry phases followed by a coherent sequence of congestus, deep, and stratiform convection, varying on timescales of a few hours, consistent with the Giga-LES time series. The chaotic variations of the cloud area fractions are captured fairly well, both qualitatively and quantitatively, demonstrating the stochastic nature of convection in the Giga-LES simulation.

  12. Information-theoretic characterization of dynamic energy systems

    NASA Astrophysics Data System (ADS)

    Bevis, Troy Lawson

    The latter half of the 20th century saw tremendous growth in nearly every aspect of civilization. From the internet to transportation, the various infrastructures relied upon by society have become exponentially more complex. Energy systems are no exception, and today the power grid is one of the largest infrastructures in the history of the world. The growing infrastructure has led to an increase not only in the amount of energy produced, but also in the expectations placed on the energy systems themselves. The need for a power grid that is reliable, secure, and efficient is apparent, and there have been several initiatives to provide such a system. These rising expectations have led to growth in the renewable energy sources being integrated into the grid, a change that increases efficiency and disperses generation throughout the system. Although this change in the grid infrastructure is beneficial, it leads to grand challenges in system-level control and operation. As the number of sources increases and becomes geographically distributed, the control systems are no longer local to the system. This means that communication networks must be enhanced to support multiple devices that must communicate reliably. A common solution is to use wide area networks for communication, as opposed to point-to-point links. Although a wide area network will support a large number of devices, it generally comes with a compromise in the form of latency in the communication system, so the device controller has latency injected into the feedback loop. Also, renewable energy sources are largely non-dispatchable generation; that is, they are never guaranteed to be online and supplying the demanded energy. As renewable generation is typically modeled as a stochastic process, it would be useful to include this behavior in the control system algorithms. The combination of communication latency and stochastic sources is compounded by the dynamics of the grid itself. Loads are constantly changing, as are the sources; this can sometimes lead to quick changes in system states. There is a need for a metric that takes into consideration all of the factors detailed above: it must account for the amount of information available in the system and the rate at which that information loses its value. In a dynamic system, information is only valid for a length of time, and the controller must be able to account for the decay of currently held information. This thesis presents information theory metrics in a way that is useful for application to dynamic energy systems. A test case involving synchronization of several generators is presented for analysis and application of the theory. The objective is to synchronize all the generators and connect them to a common bus. As the phase shift of each generator is a random process, the effects of latency and information decay can be directly observed. The results of the experiments clearly show that the expected outcomes are observed and that entropy and information theory are a valid basis for timing requirement extraction.
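
    As a concrete illustration of information "losing its value" over latency, consider a stationary Gaussian AR(1) state: its lag-t correlation is rho^t, so the mutual information between the current state and the state t steps later decays as I(t) = -0.5 ln(1 - rho^(2t)). This simple model is an assumption chosen for illustration, not the thesis's own metric:

```python
import numpy as np

def info_decay(rho, lag):
    """Mutual information (nats) between the current state of a Gaussian
    AR(1) process and its state `lag` steps later. The lag-t correlation
    is rho**lag, so I(lag) = -0.5 * ln(1 - rho**(2*lag))."""
    lag = np.asarray(lag, dtype=float)
    return -0.5 * np.log(1.0 - rho ** (2.0 * lag))

# How quickly does held information about a generator's phase decay?
for lag in (1, 5, 10, 20):
    print(lag, float(info_decay(0.9, lag)))
```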

  13. Benefits of an ultra large and multiresolution ensemble for estimating available wind power

    NASA Astrophysics Data System (ADS)

    Berndt, Jonas; Hoppe, Charlotte; Elbern, Hendrik

    2016-04-01

    In this study we investigate the benefits of an ultra-large ensemble with up to 1000 members, including multiple nesting with a target horizontal resolution of 1 km. The ensemble shall serve as a basis to detect events of extreme errors in wind power forecasting. The forecast quantity is the wind vector at wind turbine hub height (~100 m) in the short range (1 to 24 hours). Current wind power forecast systems already rest on NWP ensemble models. However, only calibrated ensembles from meteorological institutions have served as input so far, with limited spatial resolution (~10-80 km) and member number (~50). Perturbations related to the specific merits of wind power production are still missing. Thus, occasional extreme error events occur that are not detected by such ensemble power forecasts. The numerical forecast model used in this study is the Weather Research and Forecasting Model (WRF). Model uncertainties are represented by stochastic parametrization of sub-grid processes via stochastically perturbed parametrization tendencies, in conjunction with the complementary stochastic kinetic-energy backscatter scheme already provided by WRF. We perform continuous ensemble updates by comparing each ensemble member with available observations using a sequential importance resampling filter, to improve the model accuracy while maintaining ensemble spread. Additionally, we use different ensemble systems from global models (ECMWF and GFS) as input and boundary conditions to capture different synoptic conditions. Critical weather situations connected to extreme error events are located, and corresponding perturbation techniques are applied. The demanding computational effort is overcome by utilising the supercomputer JUQUEEN at Forschungszentrum Juelich.

  14. Electrochemistry of Anilines. II. Oxidation to Dications, Electrochemical and uv/vis Spectroelectrochemical Investigation.

    DTIC Science & Technology

    1984-01-06

    The cyclic voltammogram of the methoxy compound has been simulated by the orthogonal collocation method. Products of bulk electrolysis have been investigated by electrochemical and uv/vis spectroelectrochemical means.

  15. Coexistence of Collocated IEEE 802.11 and Bluetooth Technologies in 2.4 GHz ISM Band

    NASA Astrophysics Data System (ADS)

    Xhafa, Ariton E.; Lu, Xiaolin; Shaver, Donald P.

    In this paper, we investigate the coexistence of collocated 802.11 and Bluetooth technologies in the 2.4 GHz industrial, scientific, and medical (ISM) band. To that end, we show that a time division multiplexing approach suffers from the “avalanche effect”. We then provide remedies to avoid this effect and improve the performance of the overall network. For example, it is shown that a simple request-to-send (RTS) / clear-to-send (CTS) frame handshake in WLAN can avoid the “avalanche effect” and improve the performance of the overall network.

  16. Finite Differences and Collocation Methods for the Solution of the Two Dimensional Heat Equation

    NASA Technical Reports Server (NTRS)

    Kouatchou, Jules

    1999-01-01

    In this paper we combine finite difference approximations (for spatial derivatives) and collocation techniques (for the time component) to numerically solve the two-dimensional heat equation. We employ second-order and fourth-order schemes for the spatial derivatives, and the discretization method gives rise to a linear system of equations. We show that the matrix of the system is non-singular. Numerical experiments carried out on serial computers show the unconditional stability of the proposed method and the high accuracy achieved by the fourth-order scheme.
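
    A minimal sketch of the spatial part of such a scheme: the standard second-order five-point Laplacian for the 2D heat equation, advanced here with plain explicit time stepping rather than the paper's collocation-in-time (so only the spatial discretization corresponds, and the fourth-order variant is not shown):

```python
import numpy as np

def heat_step(u, alpha, dx, dt):
    """One explicit time step for u_t = alpha * (u_xx + u_yy) on a square
    grid with Dirichlet boundaries, using the second-order five-point
    stencil. Stable for dt <= dx**2 / (4 * alpha)."""
    lap = (u[:-2, 1:-1] + u[2:, 1:-1] + u[1:-1, :-2] + u[1:-1, 2:]
           - 4.0 * u[1:-1, 1:-1]) / dx**2
    un = u.copy()
    un[1:-1, 1:-1] += dt * alpha * lap
    return un

n, alpha = 51, 1.0
dx = 1.0 / (n - 1)
dt = 0.2 * dx**2 / alpha                     # safely below stability limit
u = np.zeros((n, n))
u[n // 2, n // 2] = 1.0                      # point heat source
for _ in range(200):
    u = heat_step(u, alpha, dx, dt)
```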

  17. A Survey of Symplectic and Collocation Integration Methods for Orbit Propagation

    NASA Technical Reports Server (NTRS)

    Jones, Brandon A.; Anderson, Rodney L.

    2012-01-01

    Demands on numerical integration algorithms for astrodynamics applications continue to increase. Common methods, like explicit Runge-Kutta, meet the orbit propagation needs of most scenarios, but more specialized scenarios require new techniques to meet both computational efficiency and accuracy needs. This paper provides an extensive survey on the application of symplectic and collocation methods to astrodynamics. Both of these methods benefit from relatively recent theoretical developments, which improve their applicability to artificial satellite orbit propagation. This paper also details their implementation, with several tests demonstrating their advantages and disadvantages.

  18. Fourier analysis of finite element preconditioned collocation schemes

    NASA Technical Reports Server (NTRS)

    Deville, Michel O.; Mund, Ernest H.

    1990-01-01

    The spectrum of the iteration operator of some finite element preconditioned Fourier collocation schemes is investigated. The first part of the paper analyses one-dimensional elliptic and hyperbolic model problems and the advection-diffusion equation. Analytical expressions for the eigenvalues are obtained with the use of symbolic computation. The second part of the paper considers the set of one-dimensional differential equations resulting from Fourier analysis (in the transverse direction) of the 2-D Stokes problem. All results agree with previous conclusions on the numerical efficiency of finite element preconditioning schemes.

  19. Flexible Demand Management under Time-Varying Prices

    NASA Astrophysics Data System (ADS)

    Liang, Yong

    In this dissertation, the problem of flexible demand management under time-varying prices is studied. This generic problem has many applications, which usually involve multiple periods in which decisions on satisfying demand need to be made, with time-varying prices across these periods. Examples of such applications include the multi-period procurement problem, operating room scheduling, and user-end demand scheduling in the Smart Grid, where the last application is used as the main motivating story throughout the dissertation. The current grid is undergoing an upgrade with many new designs. Of particular interest is the idea of passing time-varying prices that reflect electricity market conditions to end users as incentives for load shifting. One key component, consequently, is the demand management system at the user end. The objective of the system is to find the optimal trade-off between the cost savings and the discomfort increment resulting from load shifting. In this dissertation, we approach this problem from the following aspects: (1) construct a generic model, solve for Pareto optimal solutions, and analyze the robust solution that optimizes the worst-case payoffs; (2) extend to a distribution-free model for multiple types of demand (appliances), for which an approximate dynamic programming (ADP) approach is developed; and (3) design other efficient algorithms for practical use in the flexible demand management system. We first construct a novel multi-objective flexible demand management model, in which there are a finite number of periods with time-varying prices, and demand arrives in each period. In each period, the decision maker chooses to either satisfy or defer outstanding demand so as to minimize costs and discomfort over a certain number of periods. We consider the deterministic model, models with stochastic demand or prices, and the case in which only partial information about the stochastic demand or prices is known. We first analyze the stochastic optimization problem when the objective is to minimize the expected total cost and discomfort; then, since the decision maker is likely to be risk-averse and wants protection from price spikes, we study the robust optimization problem to address the decision maker's risk-aversion. We conduct numerical studies to evaluate the price of robustness. Next, we present a detailed model that manages multiple types of flexible demand in the absence of knowledge regarding the distributions of the related stochastic processes. Specifically, we consider the case in which time-varying prices with general structures are offered to users, and an energy management system for each household makes optimal energy usage, storage, and trading decisions according to the preferences of users. Because of the uncertainties associated with electricity prices, local generation, and the arrival processes of demand, we formulate a stochastic dynamic programming model and outline a novel and tractable ADP approach to overcome the curses of dimensionality. We then perform numerical studies whose results demonstrate the effectiveness of the ADP approach. Finally, we propose another approximation approach based on Q-learning, and we also develop a decentralization-based heuristic. Both the Q-learning approach and the heuristic make necessary assumptions on the knowledge of information, and each has unique advantages. We conduct numerical studies on a test problem. The simulation results show that both the Q-learning and the decentralization-based heuristic approaches work well. Lastly, we conclude with some discussion of future extension directions.

  20. Evolution of the concentration PDF in random environments modeled by global random walk

    NASA Astrophysics Data System (ADS)

    Suciu, Nicolae; Vamos, Calin; Attinger, Sabine; Knabner, Peter

    2013-04-01

    The evolution of the probability density function (PDF) of concentrations of chemical species transported in random environments is often modeled by ensembles of notional particles. The particles move in physical space along stochastic-Lagrangian trajectories governed by Ito equations, with drift coefficients given by the local values of the resolved velocity field and diffusion coefficients obtained by stochastic or space-filtering upscaling procedures. A general model for the sub-grid mixing also can be formulated as a system of Ito equations solving for trajectories in the composition space. The PDF is finally estimated by the number of particles in space-concentration control volumes. In spite of their efficiency, Lagrangian approaches suffer from two severe limitations. Since the particle trajectories are constructed sequentially, the demanded computing resources increase linearly with the number of particles. Moreover, the need to gather particles at the center of computational cells to perform the mixing step and to estimate statistical parameters, as well as the interpolation of various terms to particle positions, inevitably produce numerical diffusion in either particle-mesh or grid-free particle methods. To overcome these limitations, we introduce a global random walk method to solve the system of Ito equations in physical and composition spaces, which models the evolution of the random concentration's PDF. The algorithm consists of a superposition on a regular lattice of many weak Euler schemes for the set of Ito equations. Since all particles starting from a site of the space-concentration lattice are spread in a single numerical procedure, one obtains PDF estimates at the lattice sites at computational costs comparable with those for solving the system of Ito equations associated to a single particle. The new method avoids the limitations concerning the number of particles in Lagrangian approaches, completely removes the numerical diffusion, and speeds up the computation by orders of magnitude. The approach is illustrated for the transport of passive scalars in heterogeneous aquifers, with hydraulic conductivity modeled as a random field.
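
    The essential move of a global random walk is to scatter all particles occupying a lattice site in one multinomial draw instead of tracking each trajectory separately. Below is a 1-D sketch for physical space only; the paper's algorithm additionally carries composition-space coordinates and the full Ito drift/diffusion coefficients, so this is an illustrative reduction, not the authors' implementation:

```python
import numpy as np

def grw_step(n, p_left, p_right, rng):
    """One global random walk step on a 1-D lattice: all particles at a
    site are redistributed in a single multinomial draw (the efficiency
    gain of GRW over per-particle Lagrangian tracking)."""
    out = np.zeros_like(n)
    for i, ni in enumerate(n):
        if ni == 0:
            continue
        left, right, stay = rng.multinomial(
            ni, [p_left, p_right, 1.0 - p_left - p_right])
        out[i - 1 if i > 0 else i] += left            # reflecting walls
        out[i + 1 if i < n.size - 1 else i] += right
        out[i] += stay
    return out

rng = np.random.default_rng(2)
n = np.zeros(101, dtype=np.int64)
n[50] = 10**6                       # a million particles, one procedure
for _ in range(500):
    n = grw_step(n, p_left=0.28, p_right=0.32, rng=rng)  # drift+diffusion
pdf = n / n.sum()                   # estimate of the concentration PDF
```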

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Deng, Min; Kollias, Pavlos; Feng, Zhe

    The motivation for this research is to develop a precipitation classification and rain rate estimation method using cloud radar-only measurements for Atmospheric Radiation Measurement (ARM) long-term cloud observation analysis, which is crucial and unique for studying cloud lifecycle and precipitation features under different weather and climate regimes. Based on simultaneous and collocated observations of the Ka-band ARM zenith radar (KAZR), two precipitation radars (NCAR S-PolKa and Texas A&M University SMART-R), and surface precipitation during the DYNAMO/AMIE field campaign, a new cloud radar-only based precipitation classification and rain rate estimation method has been developed and evaluated. The resulting precipitation classification is consistent with the collocated SMART-R and S-PolKa observations. Both cloud and precipitation radars detected about 5% precipitation occurrence during this period. The convective (stratiform) precipitation fraction is about 18% (82%). The 2-day collocated disdrometer observations show an increased number concentration of large raindrops in convective rain, compared to the dominant concentration of small raindrops in stratiform rain. The composite distributions of KAZR reflectivity and Doppler velocity also show two distinct structures for convective and stratiform rain. These results indicate that the method produces physically consistent classifications for the two types of rain. The cloud radar-only rainfall estimation is developed based on the gradient of accumulated radar reflectivity below 1 km, near-surface Ze, and collocated surface rainfall (R) measurements. The parameterization is compared with the Z-R exponential relation. The relative difference between estimated and surface-measured rainfall rates shows that the two-parameter relation can improve rainfall estimation.
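
    For context, the conventional Z-R benchmark mentioned at the end can be fitted in a few lines: assume Z = a * R^b and regress in log space (the paper's own two-parameter estimator, based on the low-level reflectivity gradient, is not reproduced here). The Marshall-Palmer values in the demo are illustrative:

```python
import numpy as np

def fit_z_r(Z_dbz, R_mmh):
    """Fit Z = a * R**b from collocated radar reflectivity (dBZ) and
    surface rain rate (mm/h) by linear regression in log space."""
    Z_lin = 10.0 ** (Z_dbz / 10.0)           # dBZ -> linear mm^6 m^-3
    b, log_a = np.polyfit(np.log(R_mmh), np.log(Z_lin), 1)
    return np.exp(log_a), b

def rain_from_z(Z_dbz, a, b):
    """Invert the fitted relation: estimate rain rate from reflectivity."""
    return (10.0 ** (Z_dbz / 10.0) / a) ** (1.0 / b)

# Synthetic demo around the classic Marshall-Palmer values a=200, b=1.6
rng = np.random.default_rng(3)
R = rng.gamma(2.0, 2.0, 500) + 0.1
Z = 10.0 * np.log10(200.0 * R**1.6) + rng.normal(0, 1.0, R.size)
print(fit_z_r(Z, R))                         # close to (200, 1.6)
```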

  2. Estimated variability of National Atmospheric Deposition Program/Mercury Deposition Network measurements using collocated samplers

    USGS Publications Warehouse

    Wetherbee, G.A.; Gay, D.A.; Brunette, R.C.; Sweet, C.W.

    2007-01-01

    The National Atmospheric Deposition Program/Mercury Deposition Network (MDN) provides long-term, quality-assured records of mercury in wet deposition in the USA and Canada. Interpretation of spatial and temporal trends in the MDN data requires quantification of the variability of the MDN measurements. Variability is quantified for MDN data from collocated samplers at MDN sites in two states, one in Illinois and one in Washington. Median absolute differences in the collocated sampler data for total mercury concentration are approximately 11% of the median mercury concentration for all valid 1999-2004 MDN data. Median absolute differences are between 3.0% and 14% of the median MDN value for collector catch (sample volume) and between 6.0% and 15% of the median MDN value for mercury wet deposition. The overall measurement errors are sufficiently low to resolve differences between NADP/MDN measurements of ±2 ng L-1 and ±2 µg m-2 year-1, which are the contour intervals used to display the data on NADP isopleth maps for concentration and deposition, respectively.

  3. Design, modeling and simulation of an offshore VSC-HVDC system

    NASA Astrophysics Data System (ADS)

    Benhalima, Seghir

    Wind energy is recognized worldwide as a proven technology to meet growing demands for green, sustainable energy. Several research projects have been carried out to exploit this stochastic energy source and integrate it with conventional energy sources without affecting the performance of existing electrical grids. Offshore, wind energy has great potential, and its production is expected to increase worldwide. Optimal extraction of this energy source requires connection to the grid via a voltage source converter, which plays the role of an interface. To minimize losses due to energy transport over very long distances, High Voltage Direct Current technology based on Voltage Source Converters (VSC-HVDC) is used. To achieve this goal, a new topology is designed with a new control algorithm based on control of the power generated by the wind farm, DC voltage regulation, and synchronization between the wind farm and the VSC-HVDC link (based on NPC converters). The proposed topology and its control technique are validated in MATLAB/Simulink. The results are promising: the THD is less than 5% and the power factor is close to one.

  4. Final Technical Report: Sparse Grid Scenario Generation and Interior Algorithms for Stochastic Optimization in a Parallel Computing Environment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mehrotra, Sanjay

    2016-09-07

    The support from this grant resulted in seven published papers and a technical report. Two papers are published in SIAM J. on Optimization [87, 88]; two papers are published in IEEE Transactions on Power Systems [77, 78]; one paper is published in Smart Grid [79]; one paper is published in Computational Optimization and Applications [44]; and one in INFORMS J. on Computing [67]. The works in [44, 67, 87, 88] were funded primarily by this DOE grant. The applied papers in [77, 78, 79] were also supported through a subcontract from the Argonne National Lab. We start by presenting our main research results on the scenario generation problem in Sections 1-2. We present our algorithmic results on interior point methods for convex optimization problems in Section 3. We describe a new 'central' cutting surface algorithm developed for solving large scale convex programming problems (as is the case with our proposed research) with a semi-infinite number of constraints in Section 4. In Sections 5-6 we present our work on two application problems of interest to DOE.

  5. Calibrating Parameters of Power System Stability Models using Advanced Ensemble Kalman Filter

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Renke; Diao, Ruisheng; Li, Yuanyuan

    With the ever-increasing penetration of renewable energy, smart loads, energy storage, and new market behavior, today's power grid is becoming more dynamic and stochastic, which may invalidate traditional study assumptions and pose great operational challenges. Thus, it is of critical importance to maintain good-quality models for secure and economic planning and real-time operation. Following the 1996 Western Systems Coordinating Council (WSCC) system blackout, the North American Electric Reliability Corporation (NERC) and the Western Electricity Coordinating Council (WECC) enforced a number of policies and standards to guide the power industry to periodically validate power grid models and calibrate poor parameters, with the goal of building sufficient confidence in model quality. The PMU-based approach, which uses online measurements without interfering with the operation of generators, provides a low-cost alternative for meeting NERC standards. This paper presents an innovative procedure and tool suite to validate and calibrate models based on a trajectory sensitivity analysis method and an advanced ensemble Kalman filter algorithm. The developed prototype demonstrates excellent performance in identifying and calibrating bad parameters of a realistic hydro power plant against multiple system events.

  6. Power Management Based Current Control Technique for Photovoltaic-Battery Assisted Wind-Hydro Hybrid System

    NASA Astrophysics Data System (ADS)

    Ram Prabhakar, J.; Ragavan, K.

    2013-07-01

    This article proposes a new power-management-based current control strategy for an integrated wind-solar-hydro system equipped with a battery storage mechanism. In this control technique, the load current is estimated indirectly through an energy balance model, DC-link voltage control, and droop control. The system features a simpler energy management strategy and requires only a few power electronic converters, thereby minimizing system cost. The generation-demand (G-D) management diagram is formulated based on stochastic weather conditions and demand, which would likely moderate the gap between the two. The features of the management strategy deploying the energy balance model include (1) regulating the DC-link voltage within specified tolerances, (2) isolated operation without relying on an external electric power transmission network, (3) indirect current control of the hydro-turbine-driven induction generator, and (4) seamless transition between grid-connected and off-grid operation modes. Furthermore, structuring the hybrid system with an appropriate selection of control variables enables power sharing among the energy conversion systems and the battery storage mechanism. By addressing these intricacies, it is viable to regulate the frequency and voltage of the remote network at the load end. The performance of the proposed composite scheme is demonstrated through time-domain simulation in the MATLAB/Simulink environment.

  7. Multivariate bias adjustment of high-dimensional climate simulations: the Rank Resampling for Distributions and Dependences (R2D2) bias correction

    NASA Astrophysics Data System (ADS)

    Vrac, Mathieu

    2018-06-01

    Climate simulations often suffer from statistical biases with respect to observations or reanalyses. It is therefore common to correct (or adjust) those simulations before using them as inputs into impact models. However, most bias correction (BC) methods are univariate and so do not account for the statistical dependences linking the different locations and/or physical variables of interest. In addition, they are often deterministic, and stochasticity is frequently needed to investigate climate uncertainty and to add constrained randomness to climate simulations that do not possess a realistic variability. This study presents a multivariate method of rank resampling for distributions and dependences (R2D2) bias correction allowing one to adjust not only the univariate distributions but also their inter-variable and inter-site dependence structures. Moreover, the proposed R2D2 method provides some stochasticity since it can generate as many multivariate corrected outputs as the number of statistical dimensions (i.e., number of grid cells × number of climate variables) of the simulations to be corrected. It is based on an assumption of stability in time of the dependence structure - making it possible to deal with a high number of statistical dimensions - that lets the climate model drive the temporal properties and their changes in time. R2D2 is applied to temperature and precipitation reanalysis time series with respect to high-resolution reference data over the southeast of France (1506 grid cells). Bivariate, 1506-dimensional and 3012-dimensional versions of R2D2 are tested over a historical period and compared to a univariate BC. How the different BC methods behave in a climate change context is also illustrated with an application to regional climate simulations over the 2071-2100 period. The results indicate that the 1d-BC basically reproduces the climate model multivariate properties, 2d-R2D2 is only satisfying in the inter-variable context, 1506d-R2D2 strongly improves inter-site properties and 3012d-R2D2 is able to account for both. Applications of the proposed R2D2 method to various climate datasets are relevant for many impact studies. The perspectives for improvement are numerous, such as introducing stochasticity in the dependence itself, questioning its stability assumption, and accounting for temporal properties adjustment while including more physics in the adjustment procedures.
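
    A heavily simplified reading of the rank-resampling step can clarify the mechanics: after a univariate bias correction, each simulation time step is matched to the reference time step sharing the same rank of a chosen pivot dimension, and the remaining dimensions copy the reference ranks found there. The sketch below ignores ties, multiple pivots, and the calibration/projection split of the published algorithm, so it is an interpretation rather than the method itself:

```python
import numpy as np
from scipy.stats import rankdata

def r2d2_shuffle(model_bc, obs, pivot=0):
    """Rank-resampling step of an R2D2-like correction (simplified).

    model_bc : (n, d) simulation already adjusted by a univariate BC
    obs      : (n, d) reference providing the dependence structure
    pivot    : dimension whose model rank chronology is preserved
    Assumes equal lengths; method='ordinal' breaks rank ties.
    """
    n, d = model_bc.shape
    # Reference time step holding each pivot rank, and model pivot ranks
    obs_time_by_rank = np.argsort(rankdata(obs[:, pivot], method="ordinal"))
    mp = rankdata(model_bc[:, pivot], method="ordinal").astype(int)
    obs_ranks = np.column_stack(
        [rankdata(obs[:, j], method="ordinal") for j in range(d)]
    ).astype(int)
    sorted_bc = np.sort(model_bc, axis=0)
    out = np.empty_like(model_bc, dtype=float)
    for t in range(n):
        t_obs = obs_time_by_rank[mp[t] - 1]   # same pivot rank as model
        out[t] = sorted_bc[obs_ranks[t_obs] - 1, np.arange(d)]
    return out

# Demo: a dependence-free simulation inherits the obs correlation (~0.8)
rng = np.random.default_rng(8)
obs = rng.multivariate_normal([0, 0], [[1.0, 0.8], [0.8, 1.0]], 300)
sim = rng.normal(0.5, 1.3, (300, 2))      # stands in for a 1d-BC output
print(np.corrcoef(r2d2_shuffle(sim, obs).T)[0, 1])
```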

  8. Numerical solution of sixth-order boundary-value problems using Legendre wavelet collocation method

    NASA Astrophysics Data System (ADS)

    Sohaib, Muhammad; Haq, Sirajul; Mukhtar, Safyan; Khan, Imad

    2018-03-01

    An efficient method is proposed to approximate sixth-order boundary-value problems. The proposed method is based on Legendre wavelets constructed from Legendre polynomials. The mechanism of the method is to use collocation points to convert the differential equation into a system of algebraic equations. For validation, two test problems are discussed. The results obtained from the proposed method are quite accurate and close to the exact solution; compared with other methods from the literature, the proposed method is computationally more effective and leads to more accurate results.
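
    The collocation mechanics (expand in a polynomial basis, enforce the ODE at collocation points plus the boundary conditions, solve the resulting algebraic system) can be illustrated on a much simpler second-order problem with plain Legendre polynomials instead of Legendre wavelets; this toy is a stand-in under those assumptions, not the paper's sixth-order method:

```python
import numpy as np
from numpy.polynomial import legendre as L

def legendre_collocation(f, N=16):
    """Solve u'' = f(x) on [-1, 1] with u(-1) = u(1) = 0 by collocating a
    Legendre series at Gauss points."""
    x, _ = L.leggauss(N - 1)                  # interior collocation points
    A = np.zeros((N + 1, N + 1))
    for i in range(N + 1):
        e = np.zeros(N + 1)
        e[i] = 1.0                            # i-th Legendre basis function
        A[:-2, i] = L.legval(x, L.legder(e, 2))   # u'' at collocation pts
        A[-2, i] = L.legval(-1.0, e)          # boundary condition rows
        A[-1, i] = L.legval(1.0, e)
    rhs = np.concatenate([f(x), [0.0, 0.0]])
    return np.linalg.solve(A, rhs)            # Legendre coefficients of u

# Verify on u(x) = sin(pi x), for which u'' = -pi^2 sin(pi x)
c = legendre_collocation(lambda x: -np.pi**2 * np.sin(np.pi * x))
xx = np.linspace(-1, 1, 7)
print(np.max(np.abs(L.legval(xx, c) - np.sin(np.pi * xx))))  # tiny error
```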

  9. Benchmarking the Collocation Stand-Alone Library and Toolkit (CSALT)

    NASA Technical Reports Server (NTRS)

    Hughes, Steven; Knittel, Jeremy; Shoan, Wendy; Kim, Youngkwang; Conway, Claire; Conway, Darrel J.

    2017-01-01

    This paper describes the processes and results of Verification and Validation (V&V) efforts for the Collocation Stand Alone Library and Toolkit (CSALT). We describe the test program and environments, the tools used for independent test data, and comparison results. The V&V effort employs classical problems with known analytic solutions, solutions from other available software tools, and comparisons to benchmarking data available in the public literature. Presenting all test results is beyond the scope of a single paper. Here we present high-level test results for a broad range of problems, and detailed comparisons for selected problems.

  10. Revisiting and Extending Interface Penalties for Multi-Domain Summation-by-Parts Operators

    NASA Technical Reports Server (NTRS)

    Carpenter, Mark H.; Nordstrom, Jan; Gottlieb, David

    2007-01-01

    General interface coupling conditions are presented for multi-domain collocation methods, which satisfy the summation-by-parts (SBP) spatial discretization convention. The combined interior/interface operators are proven to be L2 stable, pointwise stable, and conservative, while maintaining the underlying accuracy of the interior SBP operator. The new interface conditions resemble (and were motivated by) those used in the discontinuous Galerkin finite element community, and maintain many of the same properties. Extensive validation studies are presented using two classes of high-order SBP operators: 1) central finite difference, and 2) Legendre spectral collocation.
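
    The defining SBP identity is easy to verify numerically: D = H^-1 Q with H a positive quadrature matrix and Q + Q^T = diag(-1, 0, ..., 0, 1), the discrete analogue of integration by parts. Below, a standard second-order SBP first-derivative operator is built and checked; the SAT-style interface penalties that are the paper's subject are not shown:

```python
import numpy as np

def sbp_d1(n, h):
    """Second-order summation-by-parts first-derivative operator
    D = H^{-1} Q on n grid points with spacing h."""
    H = h * np.eye(n)
    H[0, 0] = H[-1, -1] = 0.5 * h             # boundary quadrature weights
    Q = 0.5 * (np.eye(n, k=1) - np.eye(n, k=-1))
    Q[0, 0], Q[-1, -1] = -0.5, 0.5
    return np.linalg.inv(H) @ Q, H, Q

D, H, Q = sbp_d1(11, 0.1)
B = np.zeros((11, 11))
B[0, 0], B[-1, -1] = -1.0, 1.0
assert np.allclose(Q + Q.T, B)                # the SBP property
x = np.linspace(0.0, 1.0, 11)
print(np.max(np.abs(D @ x - 1.0)))            # exact for linear functions
```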

  11. Benchmarking the Collocation Stand-Alone Library and Toolkit (CSALT)

    NASA Technical Reports Server (NTRS)

    Hughes, Steven; Knittel, Jeremy; Shoan, Wendy (Compiler); Kim, Youngkwang; Conway, Claire (Compiler); Conway, Darrel

    2017-01-01

    This paper describes the processes and results of Verification and Validation (V&V) efforts for the Collocation Stand Alone Library and Toolkit (CSALT). We describe the test program and environments, the tools used for independent test data, and comparison results. The V&V effort employs classical problems with known analytic solutions, solutions from other available software tools, and comparisons to benchmarking data available in the public literature. Presenting all test results is beyond the scope of a single paper. Here we present high-level test results for a broad range of problems, and detailed comparisons for selected problems.

  12. Hydrogeologic Unit Flow Characterization Using Transition Probability Geostatistics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jones, N L; Walker, J R; Carle, S F

    2003-11-21

    This paper describes a technique for applying the transition probability geostatistics method for stochastic simulation to a MODFLOW model. Transition probability geostatistics has several advantages over traditional indicator kriging methods, including a simpler and more intuitive framework for interpreting geologic relationships and the ability to simulate juxtapositional tendencies such as fining-upwards sequences. The indicator arrays generated by the transition probability simulation are converted to layer elevation and thickness arrays for use with the new Hydrogeologic Unit Flow (HUF) package in MODFLOW 2000. This makes it possible to preserve complex heterogeneity while using reasonably sized grids. An application of the technique involving probabilistic capture zone delineation for the Aberjona Aquifer in Woburn, MA is included.
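
    The juxtapositional idea is easy to see in one dimension: facies are drawn sequentially from a transition probability matrix, so bed thickness and which unit tends to follow which are controlled directly by the matrix entries. A toy unconditional sketch (real T-PROGS-style workflows are 3-D, conditioned to borehole data, and post-processed by simulated quenching):

```python
import numpy as np

def simulate_facies(trans, n_cells, rng=None):
    """Sequential 1-D facies simulation from a transition probability
    matrix; trans[i] gives P(next facies | current facies i)."""
    rng = rng or np.random.default_rng()
    k = trans.shape[0]
    facies = np.empty(n_cells, dtype=int)
    facies[0] = rng.integers(k)
    for i in range(1, n_cells):
        facies[i] = rng.choice(k, p=trans[facies[i - 1]])
    return facies

# Strong self-transition probabilities produce thick beds
trans = np.array([[0.90, 0.08, 0.02],    # 0 = sand
                  [0.10, 0.85, 0.05],    # 1 = silt
                  [0.05, 0.15, 0.80]])   # 2 = clay
column = simulate_facies(trans, 200, np.random.default_rng(4))
print(np.bincount(column) / column.size)  # simulated facies proportions
```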

  13. Finite-temperature effects in helical quantum turbulence

    NASA Astrophysics Data System (ADS)

    Clark Di Leoni, Patricio; Mininni, Pablo D.; Brachet, Marc E.

    2018-04-01

    We perform a study of the evolution of helical quantum turbulence at different temperatures by solving numerically the Gross-Pitaevskii and the stochastic Ginzburg-Landau equations, using up to 4096^3 grid points with a pseudospectral method. We show that for temperatures close to the critical one, the fluid described by these equations can act as a classical viscous flow, with the decay of the incompressible kinetic energy and the helicity becoming exponential. The transition from this behavior to the one observed at zero temperature is smooth as a function of temperature. Moreover, the presence of strong thermal effects can inhibit the development of a proper turbulent cascade. We provide Ansätze for the effective viscosity and friction as a function of the temperature.

  14. Traffic jam dynamics in stochastic cellular automata

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nagel, K.; Schreckenberg, M.

    1995-09-01

    Simple models for particles hopping on a grid (cellular automata) are used to simulate (single lane) traffic flow. Despite their simplicity, these models are astonishingly realistic in reproducing start-stop waves and realistic fundamental diagrams. One can use these models to investigate traffic phenomena near maximum flow. A so-called phase transition at average maximum flow is visible in the life-times of jams. The resulting dynamic picture is consistent with recent fluid-dynamical results by Kuehne/Kerner/Konhaeuser, and with Treiterer's hysteresis description. This places CA models between car-following models and fluid-dynamical models for traffic flow. CA models are tested in projects in Los Alamos (USA) and in NRW (Germany) for large-scale microsimulations of network traffic.
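
    The model described here is compact enough to restate in full: each car accelerates by one unit, brakes to the gap ahead, slows down randomly with probability p (the stochastic ingredient that produces spontaneous jams), and then moves. A minimal sketch on a ring road, with parameter values chosen for illustration:

```python
import numpy as np

def nasch_step(road, vmax=5, p=0.3, rng=None):
    """One update of the stochastic Nagel-Schreckenberg cellular automaton
    on a circular road; road[i] = -1 for empty cells, else the car speed."""
    rng = rng or np.random.default_rng()
    n = road.size
    new = -np.ones(n, dtype=int)
    for i in np.where(road >= 0)[0]:
        gap = 1
        while road[(i + gap) % n] < 0 and gap <= vmax:
            gap += 1                         # empty cells ahead
        v = min(road[i] + 1, vmax)           # 1) accelerate
        v = min(v, gap - 1)                  # 2) brake to avoid collision
        if v > 0 and rng.random() < p:
            v -= 1                           # 3) random slowdown (noise)
        new[(i + v) % n] = v                 # 4) move
    return new

rng = np.random.default_rng(5)
road = -np.ones(200, dtype=int)
road[rng.choice(200, 40, replace=False)] = 0   # density 0.2, all at rest
for _ in range(100):
    road = nasch_step(road, rng=rng)
print("mean speed:", road[road >= 0].mean())
```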

  15. Spatially Controlled Relay Beamforming

    NASA Astrophysics Data System (ADS)

    Kalogerias, Dionysios

    This thesis is about the fusion of optimal stochastic motion control and physical layer communications. Distributed, networked communication systems, such as relay beamforming networks (e.g., Amplify & Forward (AF)), are typically designed without explicitly considering how the positions of the respective nodes might affect the quality of the communication. Optimum placement of network nodes, which could potentially improve the quality of the communication, is not typically considered. However, in most practical settings in physical layer communications, such as relay beamforming, the Channel State Information (CSI) observed by each node, per channel use, although it might be (modeled as) random, is both spatially and temporally correlated. It is, therefore, reasonable to ask if and how the performance of the system could be improved by (predictively) controlling the positions of the network nodes (e.g., the relays), based on causal side (CSI) information, and exploiting the spatiotemporal dependencies of the wireless medium. In this work, we address this problem in the context of AF relay beamforming networks. This novel, cyber-physical system approach to relay beamforming is termed "Spatially Controlled Relay Beamforming". First, we discuss wireless channel modeling in a rigorous Bayesian framework. Experimentally accurate and, at the same time, technically precise channel modeling is absolutely essential for designing and analyzing spatially controlled communication systems. In this work, we are interested in two distinct spatiotemporal statistical models for describing the behavior of the log-scale magnitude of the wireless channel: 1. Stationary Gaussian Fields: In this case, the channel is assumed to evolve as a stationary, Gaussian stochastic field in continuous space and discrete time (say, for instance, time slots). Under such assumptions, spatial and temporal statistical interactions are determined by a set of time- and space-invariant parameters, which completely determine the mean and covariance of the underlying Gaussian measure. This model is relatively simple to describe, and can be sufficiently characterized, at least for our purposes, both statistically and topologically. Additionally, the model is rather versatile, and there is existing experimental evidence supporting its practical applicability. Our contribution here is the formulation of the whole spatiotemporal model in a completely rigorous mathematical setting, under a convenient measure-theoretic framework. Such a framework greatly facilitates the formulation of meaningful stochastic control problems, where the wireless channel field (or a function of it) can be regarded as a stochastic optimization surface. 2. Conditionally Gaussian Fields, when conditioned on a Markovian channel state: This is a completely novel approach to wireless channel modeling. In this approach, the communication medium is assumed to behave as a partially observable (or hidden) system, where a hidden, global, temporally varying underlying stochastic process, called the channel state, affects the spatial interactions of the actual channel magnitude, evaluated at any set of locations in the plane. More specifically, we assume that, conditioned on the channel state, the wireless channel constitutes an observable, conditionally Gaussian stochastic process. The channel state evolves in time according to a known, possibly non-stationary, non-Gaussian, low-dimensional Markov kernel.
Recognizing the intractability of general nonlinear state estimation, we advocate the use of grid-based approximate nonlinear filters as an effective and robust means for recursive tracking of the channel state. We also propose a sequential spatiotemporal predictor for tracking the channel gains at any point in time and space, providing real-time sequential estimates of the respective channel gain map. In this context, our contributions are multifold. Beyond the introduction of the layered channel model described above, this line of research has resulted in a number of general asymptotic convergence results, advancing the theory of grid-based approximate nonlinear stochastic filtering. In particular, sufficient conditions ensuring asymptotic optimality are relaxed and, at the same time, the mode of convergence is strengthened. Although the need for such results originated as an attempt to theoretically characterize the performance of the proposed approximate inference methods for the proposed channel modeling approach, they turn out to be of fundamental importance in the areas of nonlinear estimation and stochastic control. The experimental validation of the proposed channel model, as well as the related parameter estimation problem, termed "Markovian Channel Profiling (MCP)" and fundamentally important for any practical deployment, are the subject of ongoing research. Second, adopting the first of the two aforementioned channel modeling approaches, we consider the spatially controlled relay beamforming problem for an AF network with a single source, a single destination, and multiple, controlled-at-will relay nodes. (Abstract shortened by ProQuest.)
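
    As a concrete illustration of the grid-based filtering idea referenced above, the following is a minimal point-mass filter sketch for a scalar hidden Markov state. The AR(1) dynamics, Gaussian likelihood, and all numerical values are illustrative assumptions, not the channel model of the thesis.

        import numpy as np

        grid = np.linspace(-4.0, 4.0, 201)           # discretized state space
        prior = np.full(grid.size, 1.0 / grid.size)  # uniform initial pmf

        a, q = 0.95, 0.1   # assumed AR(1) coefficient and process noise variance
        r = 0.25           # assumed observation noise variance

        # Transition matrix: K[i, j] = P(x_k = grid[j] | x_{k-1} = grid[i])
        K = np.exp(-0.5 * (grid[None, :] - a * grid[:, None]) ** 2 / q)
        K /= K.sum(axis=1, keepdims=True)

        def filter_step(pmf, y):
            """One predict/update cycle of the point-mass filter."""
            pred = pmf @ K                            # Chapman-Kolmogorov on the grid
            lik = np.exp(-0.5 * (y - grid) ** 2 / r)  # Gaussian likelihood at the nodes
            post = pred * lik
            return post / post.sum()                  # normalize back to a pmf

        pmf = prior
        for y in [0.3, 0.1, -0.2]:                    # toy observation stream
            pmf = filter_step(pmf, y)
        print("posterior mean estimate:", grid @ pmf)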

  16. The GEWEX LandFlux project: Evaluation of model evaporation using tower-based and globally gridded forcing data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McCabe, M. F.; Ershadi, A.; Jimenez, C.

    Determining the spatial distribution and temporal development of evaporation at regional and global scales is required to improve our understanding of the coupled water and energy cycles and to better monitor any changes in observed trends and variability of linked hydrological processes. With recent international efforts guiding the development of long-term and globally distributed flux estimates, continued product assessments are required to inform upon the selection of suitable model structures and also to establish the appropriateness of these multi-model simulations for global application. In support of the objectives of the Global Energy and Water Cycle Exchanges (GEWEX) LandFlux project, four commonly used evaporation models are evaluated against data from tower-based eddy-covariance observations, distributed across a range of biomes and climate zones. The selected schemes include the Surface Energy Balance System (SEBS) approach, the Priestley–Taylor Jet Propulsion Laboratory (PT-JPL) model, the Penman–Monteith-based Mu model (PM-Mu) and the Global Land Evaporation Amsterdam Model (GLEAM). Here we seek to examine the fidelity of global evaporation simulations by examining the multi-model response to varying sources of forcing data. To do this, we perform parallel and collocated model simulations using tower-based data together with a global-scale grid-based forcing product. Through quantifying the multi-model response to high-quality tower data, a better understanding of the subsequent model response to the coarse-scale globally gridded data that underlies the LandFlux product can be obtained, while also providing a relative evaluation and assessment of model performance. Using surface flux observations from 45 globally distributed eddy-covariance stations as independent metrics of performance, the tower-based analysis indicated that PT-JPL provided the highest overall statistical performance (0.72; 61 W m⁻²; 0.65), followed closely by GLEAM (0.68; 64 W m⁻²; 0.62), with values in parentheses representing the R², RMSD and Nash–Sutcliffe efficiency (NSE), respectively. PM-Mu (0.51; 78 W m⁻²; 0.45) tended to underestimate fluxes, while SEBS (0.72; 101 W m⁻²; 0.24) overestimated values relative to observations. A focused analysis across specific biome types and climate zones showed considerable variability in the performance of all models, with no single model consistently able to outperform any other. Results also indicated that the global gridded data tended to reduce the performance for all of the studied models when compared to the tower data, likely a response to scale mismatch and issues related to forcing quality. Rather than relying on any single model simulation, the spatial and temporal variability at both the tower- and grid-scale highlighted the potential benefits of developing an ensemble or blended evaporation product for global-scale LandFlux applications. Hence, challenges related to the robust assessment of the LandFlux product are also discussed.
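
    To make the three scores quoted in parentheses above concrete, here is a small sketch of how they are commonly computed; this is a generic reconstruction, not the LandFlux evaluation code.

        import numpy as np

        def evaluation_metrics(sim, obs):
            """Squared correlation (R^2), root-mean-square difference (RMSD),
            and Nash-Sutcliffe efficiency (NSE) of simulated vs observed
            fluxes, e.g. latent heat in W m^-2."""
            sim, obs = np.asarray(sim, float), np.asarray(obs, float)
            r2 = np.corrcoef(sim, obs)[0, 1] ** 2
            rmsd = np.sqrt(np.mean((sim - obs) ** 2))
            nse = 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)
            return r2, rmsd, nse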

  17. The GEWEX LandFlux project: Evaluation of model evaporation using tower-based and globally gridded forcing data

    DOE PAGES

    McCabe, M. F.; Ershadi, A.; Jimenez, C.; ...

    2016-01-26

    Determining the spatial distribution and temporal development of evaporation at regional and global scales is required to improve our understanding of the coupled water and energy cycles and to better monitor any changes in observed trends and variability of linked hydrological processes. With recent international efforts guiding the development of long-term and globally distributed flux estimates, continued product assessments are required to inform upon the selection of suitable model structures and also to establish the appropriateness of these multi-model simulations for global application. In support of the objectives of the Global Energy and Water Cycle Exchanges (GEWEX) LandFlux project, four commonly used evaporation models are evaluated against data from tower-based eddy-covariance observations, distributed across a range of biomes and climate zones. The selected schemes include the Surface Energy Balance System (SEBS) approach, the Priestley–Taylor Jet Propulsion Laboratory (PT-JPL) model, the Penman–Monteith-based Mu model (PM-Mu) and the Global Land Evaporation Amsterdam Model (GLEAM). Here we seek to examine the fidelity of global evaporation simulations by examining the multi-model response to varying sources of forcing data. To do this, we perform parallel and collocated model simulations using tower-based data together with a global-scale grid-based forcing product. Through quantifying the multi-model response to high-quality tower data, a better understanding of the subsequent model response to the coarse-scale globally gridded data that underlies the LandFlux product can be obtained, while also providing a relative evaluation and assessment of model performance. Using surface flux observations from 45 globally distributed eddy-covariance stations as independent metrics of performance, the tower-based analysis indicated that PT-JPL provided the highest overall statistical performance (0.72; 61 W m⁻²; 0.65), followed closely by GLEAM (0.68; 64 W m⁻²; 0.62), with values in parentheses representing the R², RMSD and Nash–Sutcliffe efficiency (NSE), respectively. PM-Mu (0.51; 78 W m⁻²; 0.45) tended to underestimate fluxes, while SEBS (0.72; 101 W m⁻²; 0.24) overestimated values relative to observations. A focused analysis across specific biome types and climate zones showed considerable variability in the performance of all models, with no single model consistently able to outperform any other. Results also indicated that the global gridded data tended to reduce the performance for all of the studied models when compared to the tower data, likely a response to scale mismatch and issues related to forcing quality. Rather than relying on any single model simulation, the spatial and temporal variability at both the tower- and grid-scale highlighted the potential benefits of developing an ensemble or blended evaporation product for global-scale LandFlux applications. Hence, challenges related to the robust assessment of the LandFlux product are also discussed.

  18. Hybrid PV/Wind Power Systems Incorporating Battery Storage and Considering the Stochastic Nature of Renewable Resources

    NASA Astrophysics Data System (ADS)

    Barnawi, Abdulwasa Bakr

    Hybrid power generation systems and distributed generation technology are attracting more investment due to the growing demand for energy and the increasing awareness of emissions and their environmental impacts, such as global warming and pollution. The price fluctuation of crude oil is an additional reason for the leading oil-producing countries to consider renewable resources as an alternative. Saudi Arabia, the top oil-exporting country in the world, announced the "Saudi Arabia Vision 2030", which aims to generate 9.5 GW of electricity from renewable resources. Two of the most promising renewable technologies are wind turbines (WT) and photovoltaic cells (PV). The integration or hybridization of photovoltaics and wind turbines with battery storage leads to higher adequacy and redundancy for both autonomous and grid-connected systems. This study presents a method for optimal generation unit planning by installing a proper number of solar cells, wind turbines, and batteries in such a way that the net present value (NPV) is minimized while the overall system redundancy and adequacy are maximized. A new renewable fraction technique (RFT) is used to perform the generation unit planning. RFT was tested and validated against particle swarm optimization and HOMER Pro under the same conditions and environment. Randomness and uncertainty in both the renewable resources and the load are considered. Both autonomous and grid-connected system designs were adopted in the optimal generation unit planning process, and an uncertainty factor was designed and incorporated in each. In the autonomous hybrid system design model, a strategy including an additional amount of operating reserve, as a percentage of the hourly load, was considered to deal with resource uncertainty, since the battery storage system is the only backup. In the grid-connected hybrid system design model, by contrast, demand response was incorporated, in addition to the designed uncertainty factor, to overcome the impact of uncertainty and to perform energy trading between the hybrid utility and the main grid utility. After the generation unit planning was carried out and component sizing was determined, adequacy evaluation was conducted by calculating the loss-of-load-expectation adequacy index for different contingency criteria, considering the probability of equipment failure. Finally, microgrid planning was conducted by finding the proper size and location to install distributed generation units in a radial distribution network.

  19. Effective grid-dependent dispersion coefficient for conservative and reactive transport simulations in heterogeneous porous media

    NASA Astrophysics Data System (ADS)

    Cortinez, J. M.; Valocchi, A. J.; Herrera, P. A.

    2013-12-01

    Because of the finite size of numerical grids, it is very difficult to correctly account for processes that occur at different spatial scales when simulating the migration of conservative and reactive compounds dissolved in groundwater. On one hand, transport processes in heterogeneous porous media are controlled by local-scale dispersion associated with transport processes at the pore scale. On the other hand, variations of velocity at the continuum or Darcy scale produce spreading of the contaminant plume, which is referred to as macro-dispersion. Furthermore, under some conditions both effects interact, so that spreading may enhance the action of local-scale dispersion, resulting in higher mixing, dilution and reaction rates. Traditionally, transport processes at different spatial scales have been included in numerical simulations by using a single dispersion coefficient. This approach implicitly assumes that the separate effects of local-scale dispersion and macro-dispersion can be added and represented by a unique effective dispersion coefficient. Moreover, the selection of the effective dispersion coefficient for numerical simulations usually does not consider the filtering effect of the grid size on small-scale flow features. We have developed a multi-scale Lagrangian numerical method that allows using two different dispersion coefficients to represent local- and macro-scale dispersion. This technique considers fluid particles that carry solute mass and whose locations evolve according to a deterministic component given by the grid-scale velocity and a stochastic component that corresponds to a block-effective macro-dispersion coefficient. Mass transfer between particles due to local-scale dispersion is approximated by a meshless method. We use our model to test under which transport conditions the combined effects of local- and macro-dispersion are additive and can be represented by a single effective dispersion coefficient. We also demonstrate that for the situations where both processes are additive, an effective grid-dependent dispersion coefficient can be derived based on the concept of block-effective dispersion. We show that the proposed effective dispersion coefficient is able to reproduce dilution, mixing and reaction rates for a wide range of transport conditions similar to the ones found in many practical applications.
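
    A minimal sketch of the random-walk particle step underlying such Lagrangian schemes is shown below (isotropic macro-dispersion and a uniform velocity field are simplifying assumptions; the meshless local-scale mass transfer between particles described above is omitted).

        import numpy as np

        rng = np.random.default_rng(0)

        def move_particles(x, y, u, v, D_macro, dt):
            """Advance particle positions by one step: deterministic advection
            at the grid-scale velocity plus a random-walk jump whose variance
            is set by the block-effective macro-dispersion coefficient."""
            sigma = np.sqrt(2.0 * D_macro * dt)
            x_new = x + u * dt + sigma * rng.standard_normal(x.size)
            y_new = y + v * dt + sigma * rng.standard_normal(y.size)
            return x_new, y_new

        # toy usage: 1000 particles released at the origin in a uniform flow
        x = np.zeros(1000); y = np.zeros(1000)
        for _ in range(100):
            x, y = move_particles(x, y, u=1.0, v=0.0, D_macro=1e-2, dt=0.1)
        print("plume centroid:", x.mean(), "longitudinal spread:", x.std())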

  20. Planning a Target Renewable Portfolio using Atmospheric Modeling and Stochastic Optimization

    NASA Astrophysics Data System (ADS)

    Hart, E.; Jacobson, M. Z.

    2009-12-01

    A number of organizations have suggested that an 80% reduction in carbon emissions by 2050 is a necessary step to mitigate climate change and that decarbonization of the electricity sector is a crucial component of any strategy to meet this target. Integration of large renewable and intermittent generators poses many new problems in power system planning. In this study, we attempt to determine an optimal portfolio of renewable resources to best meet the fluctuating California load while also meeting an 80% carbon emissions reduction requirement. A stochastic optimization scheme is proposed that is based on a simplified model of the California electricity grid. In this single-busbar power system model, the load is met with generation from wind, solar thermal, photovoltaic, hydroelectric, geothermal, and natural gas plants. Wind speeds and insolation are calculated using GATOR-GCMOM, a global-through-urban climate-weather-air pollution model. Fields were produced for California and Nevada at 21 km (S-N) by 14 km (W-E) spatial resolution every 15 minutes for the year 2006. Load data for 2006 were obtained from the California ISO OASIS database. Maximum installed capacities for wind and solar thermal generation were determined using a GIS analysis of potential development sites throughout the state. The stochastic optimization scheme requires that power balance be achieved in a number of meteorological and load scenarios that deviate from the forecasted (or modeled) data. By adjusting the error distributions of the forecasts, the model describes how improvements in wind speed and insolation forecasting may affect the optimal renewable portfolio. Using a simple model, we describe the diversity, size, and sensitivities of a renewable portfolio that is best suited to the resources and needs of California and that contributes significantly to reduction of the state's carbon emissions.
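
    The scenario-constrained optimization can be sketched as a small linear program: capacities are chosen once, dispatchable generation is chosen per scenario, and power balance must hold in every scenario. All numbers below are invented for illustration, and a simple cap on total gas generation stands in for the emissions constraint; the actual study's model is far richer.

        import numpy as np
        from scipy.optimize import linprog

        demand = np.array([100.0, 120.0, 90.0])   # MW, one entry per scenario
        cf_wind = np.array([0.4, 0.1, 0.6])       # scenario capacity factors
        cf_solar = np.array([0.3, 0.5, 0.0])
        gas_cap = 60.0                            # proxy emissions budget

        # Decision vector: [wind_cap, solar_cap, gas_1, gas_2, gas_3]
        c = np.array([50.0, 40.0, 10.0, 10.0, 10.0])   # build + dispatch costs

        # Balance per scenario: wind_cap*cf + solar_cap*cf + gas_s >= demand_s
        A_ub, b_ub = [], []
        for s in range(3):
            row = np.zeros(5)
            row[0] = -cf_wind[s]
            row[1] = -cf_solar[s]
            row[2 + s] = -1.0
            A_ub.append(row)
            b_ub.append(-demand[s])
        A_ub.append([0, 0, 1, 1, 1])   # emissions proxy: total gas limited
        b_ub.append(gas_cap)

        res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                      bounds=[(0, None)] * 5)
        print(res.x)   # optimal capacities and per-scenario gas dispatch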

  1. Stochastic Coastal/Regional Uncertainty Modelling: a Copernicus marine research project in the framework of Service Evolution

    NASA Astrophysics Data System (ADS)

    Vervatis, Vassilios; De Mey, Pierre; Ayoub, Nadia; Kailas, Marios; Sofianos, Sarantis

    2017-04-01

    The project entitled Stochastic Coastal/Regional Uncertainty Modelling (SCRUM) aims at strengthening CMEMS in the areas of ocean uncertainty quantification, ensemble consistency verification and ensemble data assimilation. The project has been initiated by the University of Athens and LEGOS/CNRS research teams, in the framework of CMEMS Service Evolution. The work is based on stochastic modelling of ocean physics and biogeochemistry in the Bay of Biscay, on an identical sub-grid configuration of the IBI-MFC system in its latest CMEMS operational version V2. In a first step, we use a perturbed-tendencies scheme to generate ensembles describing uncertainties in the open ocean and on the shelf, focusing on upper ocean processes. In a second step, we introduce two methodologies (i.e. rank histograms and array modes) aimed at checking the consistency of the above ensembles with respect to TAC data and arrays. Preliminary results highlight that wind uncertainties dominate all other atmosphere-ocean sources of model errors. The ensemble spread in medium-range ensembles is approximately 0.01 m for SSH and 0.15 °C for SST, though these values vary with season and across shelf regions. Ecosystem model uncertainties emerging from perturbations in physics appear to be moderately larger than those from perturbing the concentration of the biogeochemical compartments, resulting in a total chlorophyll spread of about 0.01 mg m⁻³. First consistency results show that the model ensemble and the pseudo-ensemble of OSTIA (L4) observation SSTs appear to exhibit nonzero joint probabilities with each other since error vicinities overlap. Rank histograms show that the model ensemble is initially under-dispersive, though results improve in the context of seasonal-range ensembles.
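
    A rank histogram of the kind used for the consistency checks above can be computed in a few lines: a flat histogram indicates a statistically consistent ensemble, while piled-up end bins signal the under-dispersion mentioned in the abstract. This is a generic sketch, not the SCRUM code.

        import numpy as np

        def rank_histogram(ensemble, obs):
            """Rank of each observation within the sorted ensemble values at
            the same time/location; `ensemble` has shape (n_members, n_cases)."""
            ranks = np.sum(ensemble < obs[None, :], axis=0)   # 0 .. n_members
            return np.bincount(ranks, minlength=ensemble.shape[0] + 1)

        # toy usage: an under-dispersive ensemble (spread deliberately too small)
        rng = np.random.default_rng(1)
        ens = 0.5 * rng.standard_normal((20, 5000))
        obs = rng.standard_normal(5000)
        print(rank_histogram(ens, obs))   # expect a U-shaped histogram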

  2. Well-conditioned fractional collocation methods using fractional Birkhoff interpolation basis

    NASA Astrophysics Data System (ADS)

    Jiao, Yujian; Wang, Li-Lian; Huang, Can

    2016-01-01

    The purpose of this paper is twofold. Firstly, we provide explicit and compact formulas for computing both Caputo and (modified) Riemann-Liouville (RL) fractional pseudospectral differentiation matrices (F-PSDMs) of any order at general Jacobi-Gauss-Lobatto (JGL) points. We show that in the Caputo case, it suffices to compute the F-PSDM of order μ ∈ (0, 1) to compute that of any order k + μ with integer k ≥ 0, while in the modified RL case, it is only necessary to evaluate a fractional integral matrix of order μ ∈ (0, 1). Secondly, we introduce suitable fractional JGL Birkhoff interpolation problems leading to new interpolation polynomial basis functions with remarkable properties: (i) the matrix generated from the new basis yields the exact inverse of the F-PSDM at "interior" JGL points; (ii) the matrix of the highest fractional derivative in a collocation scheme under the new basis is diagonal; and (iii) the resulting linear system is well-conditioned in the Caputo case, while in the modified RL case, the eigenvalues of the coefficient matrix are highly concentrated. In both cases, the linear systems of the collocation schemes using the new basis can be solved by an iterative solver within a few iterations. Notably, the inverse can be computed in a very stable manner, so this offers optimal preconditioners for usual fractional collocation methods for fractional differential equations (FDEs). It is also noteworthy that the choice of certain special JGL points, with parameters related to the order of the equations, can ease the implementation. We highlight that the use of Bateman's fractional integral formulas and of fast transforms between Jacobi polynomials with different parameters is essential for our algorithm development.

  3. Three-dimensional data-tracking dynamic optimization simulations of human locomotion generated by direct collocation.

    PubMed

    Lin, Yi-Chung; Pandy, Marcus G

    2017-07-05

    The aim of this study was to perform full-body three-dimensional (3D) dynamic optimization simulations of human locomotion by driving a neuromusculoskeletal model toward in vivo measurements of body-segmental kinematics and ground reaction forces. Gait data were recorded from 5 healthy participants who walked at their preferred speeds and ran at 2 m/s. Participant-specific data-tracking dynamic optimization solutions were generated for one stride cycle using direct collocation in tandem with an OpenSim-MATLAB interface. The body was represented as a 12-segment, 21-degree-of-freedom skeleton actuated by 66 muscle-tendon units. Foot-ground interaction was simulated using six contact spheres under each foot. The dynamic optimization problem was to find the set of muscle excitations needed to reproduce 3D measurements of body-segmental motions and ground reaction forces while minimizing the time integral of muscle activations squared. Direct collocation took on average 2.7 ± 1.0 h and 2.2 ± 1.6 h of CPU time, respectively, to solve the optimization problems for walking and running. Model-computed kinematics and foot-ground forces were in good agreement with corresponding experimental data, while the calculated muscle excitation patterns were consistent with measured EMG activity. The results demonstrate the feasibility of implementing direct collocation on a detailed neuromusculoskeletal model with foot-ground contact to accurately and efficiently generate 3D data-tracking dynamic optimization simulations of human locomotion. The proposed method offers a viable tool for creating feasible initial guesses needed to perform predictive simulations of movement using dynamic optimization theory. The source code for implementing the model and computational algorithm may be downloaded at http://simtk.org/home/datatracking. Copyright © 2017 Elsevier Ltd. All rights reserved.
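
    For readers unfamiliar with direct collocation, the sketch below transcribes a toy optimal control problem (a double integrator, not the 21-degree-of-freedom musculoskeletal model above) with trapezoidal defect constraints; the solver choice and problem data are assumptions for illustration only.

        import numpy as np
        from scipy.optimize import minimize

        N = 20; h = 1.0 / N   # 20 intervals over a 1 s horizon

        def unpack(z):
            # decision vector holds position, velocity and control at the nodes
            return z[:N + 1], z[N + 1:2 * (N + 1)], z[2 * (N + 1):]

        def objective(z):
            _, _, u = unpack(z)
            return h * np.sum((u[:-1] ** 2 + u[1:] ** 2) / 2)  # trapezoidal integral of u^2

        def defects(z):
            x, v, u = unpack(z)
            dx = x[1:] - x[:-1] - h * (v[:-1] + v[1:]) / 2   # defect for x' = v
            dv = v[1:] - v[:-1] - h * (u[:-1] + u[1:]) / 2   # defect for v' = u
            bc = [x[0], v[0], x[-1] - 1.0, v[-1]]            # rest-to-rest boundary conditions
            return np.concatenate([dx, dv, bc])

        sol = minimize(objective, np.zeros(3 * (N + 1)),
                       constraints={'type': 'eq', 'fun': defects},
                       method='SLSQP', options={'maxiter': 300})
        x, v, u = unpack(sol.x)
        print(sol.success, u[0], u[-1])   # analytic optimum is u(t) = 6 - 12 t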

  4. Coevolution of a multilayer node-aligned network whose layers represent different social relations.

    PubMed

    Bahulkar, Ashwin; Szymanski, Boleslaw K; Chan, Kevin; Lizardo, Omar

    2017-01-01

    We examine the coevolution of a three-layer node-aligned network of university students. The first layer is defined by nominations based on perceived prominence collected from repeated surveys during the first four semesters; the second is a behavioral layer representing actual students' interactions based on records of mobile calls and text messages, while the third is a behavioral layer representing potential face-to-face interactions suggested by bluetooth collocations. We address four interrelated questions. First, we ask whether the formation or dissolution of a link in one of the layers precedes or succeeds the formation or dissolution of the corresponding link in another layer (temporal dependencies). Second, we explore the causes of observed temporal dependencies between the layers; for those that are confirmed, we measure their predictive capability. Third, we observe the progress towards nominations and the stages that lead to them. Finally, we examine whether the differences in dissolution rates of symmetric (undirected) versus asymmetric (directed) links co-exist in all layers. We find strong patterns of reciprocal temporal dependencies between the layers. In particular, the creation of an edge in either behavioral layer generally precedes the formation of a corresponding edge in the nomination layer. Conversely, the decay of a link in the nomination layer generally precedes a decline in the intensity of communication and collocation. Finally, nodes connected by asymmetric nomination edges have lower overall communication and collocation volumes and more asymmetric communication flows than the nodes linked by symmetric edges. We find that creation and dissolution of cognitively salient contacts have temporal dependencies with communication and collocation behavior.

  5. Intercomparison of an Aerosol Chemical Speciation Monitor (ACSM) with ambient fine aerosol measurements in downtown Atlanta, Georgia

    NASA Astrophysics Data System (ADS)

    Budisulistiorini, S. H.; Canagaratna, M. R.; Croteau, P. L.; Baumann, K.; Edgerton, E. S.; Kollman, M. S.; Ng, N. L.; Verma, V.; Shaw, S. L.; Knipping, E. M.; Worsnop, D. R.; Jayne, J. T.; Weber, R. J.; Surratt, J. D.

    2014-07-01

    Currently, there are a limited number of field studies that evaluate the long-term performance of the Aerodyne Aerosol Chemical Speciation Monitor (ACSM) against established monitoring networks. In this study, we present seasonal intercomparisons of the ACSM with collocated fine aerosol (PM2.5) measurements at the Southeastern Aerosol Research and Characterization (SEARCH) Jefferson Street (JST) site near downtown Atlanta, GA, during 2011-2012. Intercomparison of two collocated ACSMs resulted in strong correlations (r² > 0.8) for all chemical species except chloride (r² = 0.21), indicating that ACSM instruments are capable of stable and reproducible operation. In general, speciated ACSM mass concentrations correlate well (r² > 0.7) with the filter-adjusted continuous measurements from JST, although the correlation for nitrate is weaker (r² = 0.55) in summer. Correlations of the ACSM NR-PM1 (non-refractory particulate matter with aerodynamic diameter less than or equal to 1 μm) plus elemental carbon (EC) with tapered element oscillating microbalance (TEOM) PM2.5 and Federal Reference Method (FRM) PM1 mass are strong, with r² > 0.7 and r² > 0.8, respectively. Discrepancies might be attributed to evaporative losses of semi-volatile species from the filter measurements used to adjust the collocated continuous measurements. This suggests that adjusting the ambient aerosol continuous measurements with results from filter analysis introduced additional bias to the measurements. We also recommend calibrating the ambient aerosol monitoring instruments using aerosol standards rather than gas-phase standards. The fitting approach for the ACSM relative ionization for sulfate was shown to improve the comparisons between the ACSM and collocated measurements in the absence of calibrated values, suggesting the importance of adding sulfate calibration into the ACSM calibration routine.

  6. Sensitivity of Assimilated Tropical Tropospheric Ozone to the Meteorological Analyses

    NASA Technical Reports Server (NTRS)

    Hayashi, Hiroo; Stajner, Ivanka; Pawson, Steven; Thompson, Anne M.

    2002-01-01

    Tropical tropospheric ozone fields from two different experiments performed with an off-line ozone assimilation system developed in NASA's Data Assimilation Office (DAO) are examined. Assimilated ozone fields from the two experiments are compared with the collocated ozone profiles from the Southern Hemispheric Additional Ozonesondes (SHADOZ) network. Results are presented for 1998. The ozone assimilation system includes a chemistry-transport model, which uses analyzed winds from the Goddard Earth Observing System (GEOS) Data Assimilation System (DAS). The two experiments use wind fields from different versions of GEOS DAS: an operational version of the GEOS-2 system and a prototype of the GEOS-4 system. While both versions of the DAS utilize the Physical-space Statistical Analysis System and use comparable observations, they use entirely different general circulation models and data insertion techniques. The shape of the annual-mean vertical profile of the assimilated ozone fields is sensitive to the meteorological analyses, with the GEOS-4-based ozone being closest to the observations. This indicates that the resolved transport in GEOS-4 is more realistic than in GEOS-2. Remaining uncertainties include quantification of the representation of sub-grid-scale processes in the transport calculations, which plays an important role in the locations and seasons where convection dominates the transport.

  7. FastMag: Fast micromagnetic simulator for complex magnetic structures (invited)

    NASA Astrophysics Data System (ADS)

    Chang, R.; Li, S.; Lubarda, M. V.; Livshitz, B.; Lomakin, V.

    2011-04-01

    A fast micromagnetic simulator (FastMag) for general problems is presented. FastMag solves the Landau-Lifshitz-Gilbert equation and can handle multiscale problems with a high computational efficiency. The simulator derives its high performance from efficient methods for evaluating the effective field and from implementations on massively parallel graphics processing unit (GPU) architectures. FastMag discretizes the computational domain into tetrahedral elements and therefore is highly flexible for general problems. The magnetostatic field is computed via the superposition principle for both volume and surface parts of the computational domain. This is accomplished by implementing efficient quadrature rules and analytical integration for overlapping elements in which the integral kernel is singular. Thus, discretized superposition integrals are computed using a nonuniform grid interpolation method, which evaluates the field from N sources at N collocated observers in O(N) operations. This approach allows handling objects of arbitrary shape, allows easy calculation of the field outside the magnetized domains, does not require solving a linear system of equations, and requires little memory. FastMag is implemented on GPUs, with GPU-to-CPU speed-ups of two orders of magnitude. Simulations are shown of a large array of magnetic dots and a recording head fully discretized down to the exchange length, with over a hundred million tetrahedral elements on an inexpensive desktop computer.

  8. Eigenmodes of Ducted Flows With Radially-Dependent Axial and Swirl Velocity Components

    NASA Technical Reports Server (NTRS)

    Kousen, Kenneth A.

    1999-01-01

    This report characterizes the sets of small disturbances possible in cylindrical and annular ducts with mean flow whose axial and tangential components vary arbitrarily with radius. The linearized equations of motion are presented and discussed, and then exponential forms for the axial, circumferential, and time dependencies of any unsteady disturbances are assumed. The resultant equations form a generalized eigenvalue problem, the solution of which yields the axial wavenumbers and radial mode shapes of the unsteady disturbances. Two numerical discretizations are applied to the system of equations: (1) a spectral collocation technique based on Chebyshev polynomial expansions on the Gauss-Lobatto points, and (2) second and fourth order finite differences on uniform grids. The discretized equations are solved using a standard eigensystem package employing the QR algorithm. The eigenvalues fall into two primary categories: a discrete set (analogous to the acoustic modes found in uniform mean flows) and a continuous band (analogous to convected disturbances in uniform mean flows) where the phase velocities of the disturbances correspond to the local mean flow velocities. Sample mode shapes and eigensystem distributions are presented for both sheared axial and swirling flows. The physics of swirling flows is examined with reference to hydrodynamic stability and completeness of the eigensystem expansions. The effect of assuming exponential dependence in the axial direction is discussed.
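
    As a stripped-down illustration of the first discretization route above (spectral collocation on Chebyshev-Gauss-Lobatto points feeding a generalized eigenvalue problem), the sketch below finds eigenmodes of u'' = λu with homogeneous Dirichlet conditions; the duct equations themselves are of course much richer.

        import numpy as np
        from scipy.linalg import eig

        def cheb(N):
            """Chebyshev differentiation matrix on the N+1 Gauss-Lobatto
            points (Trefethen's classic construction)."""
            x = np.cos(np.pi * np.arange(N + 1) / N)
            c = np.ones(N + 1); c[0] = c[-1] = 2.0
            c *= (-1.0) ** np.arange(N + 1)
            dX = x[:, None] - x[None, :]
            D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))
            D -= np.diag(D.sum(axis=1))   # diagonal = negative row sums
            return D, x

        D, x = cheb(32)
        A = (D @ D)[1:-1, 1:-1]           # impose Dirichlet BCs by row/column deletion
        B = np.eye(A.shape[0])            # trivial mass matrix here
        vals, vecs = eig(A, B)            # generalized eigenproblem A v = lambda B v
        print(np.sort(vals.real)[-4:])    # should approach -(k*pi/2)^2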

  9. An approach for assessing the sensitivity of floods to regional climate change

    NASA Astrophysics Data System (ADS)

    Hughes, James P.; Lettenmaier, Dennis P.; Wood, Eric F.

    1992-06-01

    A high visibility afforded climate change issues is recent years has led to conflicts between and among decision makers and scientists. Decision makers inevitably feel pressure to assess the effect of climate change on the public welfare, while most climate modelers are, to a greater or lesser degree, concerned about the extent to which known inaccuracies in their models limit or preclude the use of modeling results for policy making. The water resources sector affords a good example of the limitations of the use of alternative climate scenarios derived from GCMs for decision making. GCM simulations of precipitation agree poorly between GCMs, and GCM predictions of runoff and evapotranspiration are even more uncertain. Further, water resources managers must be concerned about hydrologic extremes (floods and droughts) which are much more difficult to predict than ``average'' conditions. Most studies of the sensitivity of water resource systems and operating policies to climate change to data have been based on simple perturbations of historic hydroclimatological time series to reflect the difference between large area GCM simulations for an altered climate (e.g., CO2 doubling) and a GCM simulation of present climate. Such approaches are especially limited for assessment of the sensitivity of water resources systems under extreme conditions, conditions, since the distribution of storm inter-arrival times, for instance, is kept identical to that observed in the historic past. Further, such approaches have generally been based on the difference between the GCM altered and present climates for a single grid cell, primarily because the GCM spatial scale is often much larger than the scale at which climate interpretations are desired. The use of single grid cell GCM results is considered inadvisable by many GCM modelers, who feel the spatial scale for which interpretation of GCM results is most reasonable is on the order of several grid cells. In this paper, we demonstrate an alternative approach to assessing the implications of altered climates as predicted by GCMs for extreme (flooding) conditions. The approach is based on the characterization of regional atmospheric circulation patterns through a weather typing procedure, from which a stochastic model of the weather class occurrences is formulated. Weather types are identified through a CART (Classification and Regression Tree) approach. Precipitation occurence/non-occurence at multiple precipitation station is then predicted through a second stage stochastic model. Precipitation amounts are predicted conditional on the weather class identified from the large area circulation information.

  10. Stochastic Ground Water Flow Simulation with a Fracture Zone Continuum Model

    USGS Publications Warehouse

    Langevin, C.D.

    2003-01-01

    A method is presented for incorporating the hydraulic effects of vertical fracture zones into two-dimensional cell-based continuum models of ground water flow and particle tracking. High hydraulic conductivity features are used in the model to represent fracture zones. For fracture zones that are not coincident with model rows or columns, an adjustment is required for the hydraulic conductivity value entered into the model cells to compensate for the longer flowpath through the model grid. A similar adjustment is also required for simulated travel times through model cells. A travel time error of less than 8% can occur for particles moving through fractures with certain orientations. The fracture zone continuum model uses stochastically generated fracture zone networks and Monte Carlo analysis to quantify uncertainties in simulated advective travel times. An approach is also presented for converting an equivalent continuum model into a fracture zone continuum model by establishing the contribution of matrix block transmissivity to the bulk transmissivity of the aquifer. The methods are used for a case study in west-central Florida to quantify advective travel times from a potential wetland rehydration site to a municipal supply wellfield. Uncertainties in advective travel times are assumed to result from the presence of vertical fracture zones, commonly observed on aerial photographs as photolineaments.
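
    The path-length adjustment mentioned above can be illustrated with a hypothetical correction factor: representing an oblique fracture by a stair-step chain of cells lengthens the flowpath, so the assigned conductivity is scaled by the ratio of the two path lengths. This is our simplified reading, not the paper's own derivation.

        import numpy as np

        def stairstep_conductivity_factor(dx, dy):
            """Hypothetical correction: the stair-step path through the grid
            (|dx| + |dy|) is longer than the straight fracture segment, so the
            cell conductivity is scaled up by the path-length ratio."""
            straight = np.hypot(dx, dy)
            return (abs(dx) + abs(dy)) / straight

        # a 45-degree fracture needs the largest adjustment: factor sqrt(2) ~ 1.41
        print(stairstep_conductivity_factor(1.0, 1.0))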

  11. Evaluation of a spatial rainfall generator and an interpolation method for the creation of future gridded data sets over complex terrains

    NASA Astrophysics Data System (ADS)

    Camera, Corrado; Bruggeman, Adriana; Hadjinicolaou, Panos; Michaelides, Silas; Lange, Manfred A.

    2015-04-01

    Space-time variability of precipitation plays a key role as a driver of many processes in different environmental fields like hydrology, ecology, biology, agriculture, and natural hazards. The objective of this study was to compare two approaches for statistical downscaling of precipitation from climate models. The study was applied to the island of Cyprus, an orographically complex terrain. The first approach makes use of a spatial-temporal Neyman-Scott Rectangular Pulses (NSRP) model and a previously tested interpolation scheme (Camera et al., 2014). The second approach is based on the use of the single-site NSRP model and a simplified gridded scheme based on scaling coefficients obtained from past observations. The rainfall generators were evaluated over the period 1980-2010. Both approaches were subsequently used to downscale three RCMs from the EU ENSEMBLE project to calculate climate projections (2020-2050). The main advantage of the spatial-temporal approach is that it allows the creation of spatially consistent daily maps of precipitation. On the other hand, due to the assumptions made in using a stochastic generator based on homogeneous Poisson processes, it smooths out all the rainfall statistics (except the mean and variance) across the study area. This leads to high errors when analyzing indices related to extremes. Examples are the number of days with rainfall over 50 mm (R50 - mean error 65%), the 95th percentile value of rainy days (RT95 - mean error 19%), and the mean annual rainfall recorded on days with rainfall above the 95th percentile (RA95 - mean error 22%). The single-site approach excludes the possibility of using the created gridded data sets for case studies involving spatial connection between grid cells (e.g. hydrologic modelling), but it leads to a better reproduction of rainfall statistics and properties. The errors for the extreme indices are in fact much lower: 17% for R50, 4% for RT95, and 2% for RA95. Future projections show a decrease of the mean annual rainfall (for both approaches) over the study area of between 70 mm (≈15%) and 5 mm (≈1%), in comparison to the reference period 1980-2010. Regarding extremes, calculated only with the single-site approach, the projections show a decrease of the R50 index between 25% and 7%, and of the RT95 between 8% and 0%. Thus, these projections indicate that a slight reduction in the number and intensity of extremes can be expected. Further research will be done to adapt and evaluate the use of a spatial-temporal generator with nonhomogeneous spatial activation of raincells (Burton et al., 2010) for the study area. Burton, A., Fowler, H.J., Kilsby, C.G., O'Connell, P.E., 2010. A stochastic model for the spatial-temporal simulation of non-homogeneous rainfall occurrence and amounts, Water Resour. Res. 46, W11501. DOI: 10.1029/2009WR008884 Camera, C., Bruggeman, A., Hadjinicolaou, P., Pashiardis, S., Lange, M.A., 2014. Evaluation of interpolation techniques for the creation of gridded daily precipitation (1 × 1 km2); Cyprus, 1980-2010. J. Geophys. Res. Atmos., 119, 693-712. DOI: 10.1002/2013JD020611.
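
    The three extreme indices used in the evaluation above (R50, RT95, RA95) can be computed from a year of daily totals roughly as in the following sketch; threshold conventions such as the 1 mm wet-day cutoff are our assumptions.

        import numpy as np

        def extreme_indices(daily_mm, wet_threshold=1.0):
            """R50: days with rainfall over 50 mm; RT95: 95th percentile of
            rainy-day totals; RA95: rainfall total on days above RT95
            (assumes one year of daily totals and a 1 mm wet-day cutoff)."""
            daily_mm = np.asarray(daily_mm, float)
            wet = daily_mm[daily_mm >= wet_threshold]
            r50 = int((daily_mm > 50.0).sum())
            rt95 = np.percentile(wet, 95)
            ra95 = daily_mm[daily_mm > rt95].sum()
            return r50, rt95, ra95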

  12. Microbialite Morphologies and Distributions-Geoacoustic Survey with an AUV of Pavilion Lake, British Columbia, Canada

    NASA Astrophysics Data System (ADS)

    Gutsche, J. R.; Trembanis, A. C.

    2010-12-01

    With advances in lake bottom mapping it has been observed that modern microbialites, much like the ancient stromatolites, thrive in freshwater lake environments. Previously collected data show that a diverse community of living stromatolites is present within Pavilion Lake (Laval et al., 2000, Lim et al., 2009). An additional comprehensive data set was collected in June-July 2010. Building on the previous dataset makes it possible to compare two high-resolution geoacoustic datasets. Using Autonomous Underwater Vehicles (AUVs) as exploration platforms to conduct surveys of the lake bottom, very high-resolution sonar data have been collected. The data collected in June-July 2010 comprise 125 km of AUV trackline, which allowed for survey coverage of nearly the entire lake bottom. The Gavia AUV used for this survey collected bathymetry data collocated with backscatter information. The data have been processed and gridded to 1 m, with specific high-value areas gridded to a finer 0.5 m. The bathymetric data were compiled to create a base map of the floor of Pavilion Lake. Backscatter data were also collected and processed using the same 1 m grid resolution and then draped over the bathymetry map of Pavilion Lake. The tools offered within the Fledermaus software package allow the bathymetry data to be analyzed with respect to slope and rugosity. By analyzing this dense phase-measuring bathymetric sonar survey of the lake bottom with respect to slope and rugosity, it is possible to map the morphological trends of the stromatolites. Additionally, the ability to compare two datasets allows erosional changes in the lake bottom to be identified. The bathymetry data allow for the quantitative analysis of bed forms within Pavilion Lake, enabling a better understanding of microbialite morphologies. The backscatter data are increasingly important to the Pavilion Lake project because of the location and general surroundings of the lake. The lake itself is located in a limestone canyon, which frequently sustains erosional episodes. The backscatter data allow for the differentiation between erosional deposits and microbial mounds. The combination of backscatter and bathymetry allows for a further understanding of bedforms and microbialite growth patterns.

  13. Stochastic Downscaling of Digital Elevation Models

    NASA Astrophysics Data System (ADS)

    Rasera, Luiz Gustavo; Mariethoz, Gregoire; Lane, Stuart N.

    2016-04-01

    High-resolution digital elevation models (HR-DEMs) are extremely important for the understanding of small-scale geomorphic processes in Alpine environments. In the last decade, remote sensing techniques have experienced a major technological evolution, enabling fast and precise acquisition of HR-DEMs. However, sensors designed to measure elevation data still feature different spatial resolution and coverage capabilities. Terrestrial altimetry allows the acquisition of HR-DEMs with centimeter- to millimeter-level precision, but only within small spatial extents and often with dead-ground problems. Conversely, satellite radiometric sensors are able to gather elevation measurements over large areas but with limited spatial resolution. In the present study, we propose an algorithm to downscale low-resolution satellite-based DEMs using topographic patterns extracted from HR-DEMs derived, for example, from ground-based and airborne altimetry. The method consists of a multiple-point geostatistical simulation technique able to generate high-resolution elevation data from low-resolution digital elevation models (LR-DEMs). Initially, two collocated DEMs with different spatial resolutions serve as input to construct a database of topographic patterns, which is also used to infer the statistical relationships between the two scales. High-resolution elevation patterns are then retrieved from the database to downscale a LR-DEM through a stochastic simulation process. The simulations output multiple equally probable DEMs with higher spatial resolution that also depict the large-scale geomorphic structures present in the original LR-DEM. As these multiple models reflect the uncertainty related to the downscaling, they can be employed to quantify the uncertainty of phenomena that depend on fine topography, such as catchment hydrological processes. The proposed methodology is illustrated for a case study in the Swiss Alps. A swissALTI3D HR-DEM (with 5 m resolution) and an SRTM-derived LR-DEM from the Western Alps are used to downscale an SRTM-based LR-DEM from the eastern part of the Alps. The results show that the method is capable of generating multiple high-resolution synthetic DEMs that reproduce the spatial structure and statistics of the original DEM.

  14. 3D frequency-domain finite-difference modeling of acoustic wave propagation

    NASA Astrophysics Data System (ADS)

    Operto, S.; Virieux, J.

    2006-12-01

    We present a 3D frequency-domain finite-difference method for acoustic wave propagation modeling. This method is developed as a tool to perform 3D frequency-domain full-waveform inversion of wide-angle seismic data. For wide-angle data, frequency-domain full-waveform inversion can be applied to only a few discrete frequencies to develop a reliable velocity model. Frequency-domain finite-difference (FD) modeling of wave propagation requires the resolution of a huge sparse system of linear equations. If this system can be solved with a direct method, solutions for multiple sources can be computed efficiently once the underlying matrix has been factorized. The drawback of the direct method is the memory requirement resulting from the fill-in of the matrix during factorization. We assess in this study whether representative problems can be addressed in 3D geometry with such an approach. We start from the velocity-stress formulation of the 3D acoustic wave equation. The spatial derivatives are discretized with a second-order accurate staggered-grid stencil on different coordinate systems such that the axes span as many directions as possible. Once the discrete equations are developed on each coordinate system, the particle velocity fields are eliminated from the first-order hyperbolic system (following the so-called parsimonious staggered-grid method), leading to second-order elliptic wave equations in pressure. The second-order wave equations discretized on each coordinate system are combined linearly to mitigate the numerical anisotropy. Secondly, grid dispersion is minimized by replacing the mass term at the collocation point by its weighted average over all the grid points of the stencil. Use of the second-order accurate staggered-grid stencil reduces the bandwidth of the matrix to be factorized. The final stencil incorporates 27 points. Absorbing boundary conditions are PML. The system is solved using the parallel direct solver MUMPS developed for distributed-memory computers. The MUMPS solver is based on a multifrontal method for LU factorization. We used the METIS algorithm to perform re-ordering of the matrix coefficients before factorization. Four grid points per minimum wavelength are used for discretization. We applied our algorithm to the 3D SEG/EAGE synthetic onshore OVERTHRUST model of dimensions 20 x 20 x 4.65 km. The velocities range between 2 and 6 km/s. We performed the simulations using 192 processors with 2 Gbytes of RAM per processor, for the 5 Hz, 7 Hz and 10 Hz frequencies on subsets of the OVERTHRUST model. The grid intervals were 100 m, 75 m and 50 m, respectively. The grid dimensions were 207x207x53, 275x218x71 and 409x109x102, corresponding to 100, 80 and 25 percent of the model, respectively. The times for factorization were 20 min, 108 min and 163 min, respectively, and the times for resolution were 3.8, 9.3 and 10.3 s per source. The total memory used during factorization was 143, 384 and 449 Gbytes, respectively. One can note the huge memory requirement for factorization and the efficiency of the direct method in computing solutions for a large number of sources. This highlights the respective drawback and merit of the frequency-domain approach with respect to its time-domain counterpart. These results show that 3D acoustic frequency-domain wave propagation modeling can be performed at low frequencies using a direct solver on large clusters of PCs. This forward-modeling algorithm may be used in the future as a tool to image the first kilometers of the crust by frequency-domain full-waveform inversion. For larger problems, we will use the out-of-core memory capability during factorization that has been implemented by the authors of MUMPS.
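
    The factor-once/solve-many economics described above can be illustrated with a toy 2D Helmholtz analogue (Dirichlet boundaries instead of PML, a 5-point stencil instead of the 27-point one, and SuperLU standing in for MUMPS):

        import numpy as np
        import scipy.sparse as sp
        from scipy.sparse.linalg import splu

        n = 100; h = 1.0 / (n - 1); k = 30.0   # grid size and wavenumber (made up)
        I = sp.identity(n)
        T = sp.diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n)) / h**2
        A = (sp.kron(I, T) + sp.kron(T, I) + k**2 * sp.identity(n * n)).tocsc()

        lu = splu(A)                           # expensive step, done once
        for src in range(5):                   # cheap step, repeated per source
            b = np.zeros(n * n)
            b[(src + 1) * n + n // 2] = 1.0 / h**2   # point source
            u = lu.solve(b)                    # wavefield for this source
            print(src, np.abs(u).max())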

  15. Application of Collocated GPS and Seismic Sensors to Earthquake Monitoring and Early Warning

    PubMed Central

    Li, Xingxing; Zhang, Xiaohong; Guo, Bofeng

    2013-01-01

    We explore the use of collocated GPS and seismic sensors for earthquake monitoring and early warning. The GPS and seismic data collected during the 2011 Tohoku-Oki (Japan) and the 2010 El Mayor-Cucapah (Mexico) earthquakes are analyzed by using a tightly-coupled integration. The performance of the integrated results is validated by both time and frequency domain analysis. We detect the P-wave arrival and observe small-scale features of the movement from the integrated results and locate the epicenter. Meanwhile, permanent offsets are extracted from the integrated displacements highly accurately and used for reliable fault slip inversion and magnitude estimation. PMID:24284765

  16. Legendre spectral-collocation method for solving some types of fractional optimal control problems

    PubMed Central

    Sweilam, Nasser H.; Al-Ajami, Tamer M.

    2014-01-01

    In this paper, the Legendre spectral-collocation method was applied to obtain approximate solutions for some types of fractional optimal control problems (FOCPs). The fractional derivative was described in the Caputo sense. Two different approaches were presented, in the first approach, necessary optimality conditions in terms of the associated Hamiltonian were approximated. In the second approach, the state equation was discretized first using the trapezoidal rule for the numerical integration followed by the Rayleigh–Ritz method to evaluate both the state and control variables. Illustrative examples were included to demonstrate the validity and applicability of the proposed techniques. PMID:26257937

  17. Novel two-way artificial boundary condition for 2D vertical water wave propagation modelled with Radial-Basis-Function Collocation Method

    NASA Astrophysics Data System (ADS)

    Mueller, A.

    2018-04-01

    A new transparent artificial boundary condition is derived for two-dimensional vertical (2DV) free-surface water wave propagation modelled using the meshless Radial-Basis-Function Collocation Method (RBFCM) as a boundary-only solution. The two-way artificial boundary condition (2wABC) works as a pure incidence BC, a pure radiation BC, and a combined incidence/radiation BC. In this work the 2wABC is applied to harmonic linear water waves; its performance is tested against the analytical solution for wave propagation over a horizontal sea bottom, for standing and partially standing waves, and for interference of waves with different periods.
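
    To show what RBF collocation looks like in its simplest form, here is a Kansa-type sketch for a 1D Poisson problem with Gaussian basis functions; the boundary-only 2DV wave formulation of the paper and the 2wABC itself are considerably more involved. The shape parameter value is a hand-tuned assumption that trades accuracy against conditioning.

        import numpy as np

        # Collocation points double as RBF centres.
        x = np.linspace(0.0, 1.0, 21)
        eps = 10.0   # shape parameter: accuracy vs conditioning trade-off

        r2 = (x[:, None] - x[None, :]) ** 2
        phi = np.exp(-eps**2 * r2)                    # Gaussian RBF
        d2phi = (4 * eps**4 * r2 - 2 * eps**2) * phi  # d^2/dx^2 of the RBF

        # Interior rows enforce the PDE u'' = f, boundary rows enforce u = 0.
        A = d2phi.copy()
        A[0, :] = phi[0, :]
        A[-1, :] = phi[-1, :]
        b = -np.pi**2 * np.sin(np.pi * x)             # f for exact solution sin(pi x)
        b[0] = b[-1] = 0.0

        u = phi @ np.linalg.solve(A, b)               # evaluate the RBF expansion
        print(np.max(np.abs(u - np.sin(np.pi * x))))  # error vs exact solution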

  18. Hybrid near-optimal aeroassisted orbit transfer plane change trajectories

    NASA Technical Reports Server (NTRS)

    Calise, Anthony J.; Duckeman, Gregory A.

    1994-01-01

    In this paper, a hybrid methodology is used to determine optimal open loop controls for the atmospheric portion of the aeroassisted plane change problem. The method is hybrid in the sense that it combines the features of numerical collocation with the analytically tractable portions of the problem which result when the two-point boundary value problem is cast in the form of a regular perturbation problem. Various levels of approximation are introduced by eliminating particular collocation parameters and their effect upon problem complexity and required number of nodes is discussed. The results include plane changes of 10, 20, and 30 degrees for a given vehicle.

  19. Acoustic ranging of small arms fire using a single sensor node collocated with the target.

    PubMed

    Lo, Kam W; Ferguson, Brian G

    2015-06-01

    A ballistic model-based method, which builds upon previous work by Lo and Ferguson [J. Acoust. Soc. Am. 132, 2997-3017 (2012)], is described for ranging small arms fire using a single acoustic sensor node collocated with the target, without a priori knowledge of the muzzle speed and ballistic constant of the bullet except that they belong to a known two-dimensional parameter space. The method requires measurements of the differential time of arrival and differential angle of arrival of the muzzle blast and ballistic shock wave at the sensor node. Its performance is evaluated using both simulated and real data.

  20. Martian resource locations: Identification and optimization

    NASA Astrophysics Data System (ADS)

    Chamitoff, Gregory; James, George; Barker, Donald; Dershowitz, Adam

    2005-04-01

    The identification and utilization of in situ Martian natural resources is the key to enable cost-effective long-duration missions and permanent human settlements on Mars. This paper presents a powerful software tool for analyzing Martian data from all sources, and for optimizing mission site selection based on resource collocation. This program, called Planetary Resource Optimization and Mapping Tool (PROMT), provides a wide range of analysis and display functions that can be applied to raw data or imagery. Thresholds, contours, custom algorithms, and graphical editing are some of the various methods that can be used to process data. Output maps can be created to identify surface regions on Mars that meet any specific criteria. The use of this tool for analyzing data, generating maps, and collocating features is demonstrated using data from the Mars Global Surveyor and the Odyssey spacecraft. The overall mission design objective is to maximize a combination of scientific return and self-sufficiency based on utilization of local materials. Landing site optimization involves maximizing accessibility to collocated science and resource features within a given mission radius. Mission types are categorized according to duration, energy resources, and in situ resource utilization. Preliminary optimization results are shown for a number of mission scenarios.

  1. Computational approach to compact Riemann surfaces

    NASA Astrophysics Data System (ADS)

    Frauendiener, Jörg; Klein, Christian

    2017-01-01

    A purely numerical approach to compact Riemann surfaces starting from plane algebraic curves is presented. The critical points of the algebraic curve are computed via a two-dimensional Newton iteration. The starting values for this iteration are obtained from the resultants with respect to both coordinates of the algebraic curve and a suitable pairing of their zeros. A set of generators of the fundamental group for the complement of these critical points in the complex plane is constructed from circles around these points and connecting lines obtained from a minimal spanning tree. The monodromies are computed by solving the defining equation of the algebraic curve on collocation points along these contours and by analytically continuing the roots. The collocation points are chosen to correspond to Chebyshev collocation points for an ensuing Clenshaw-Curtis integration of the holomorphic differentials, which gives the periods of the Riemann surface with spectral accuracy. At the singularities of the algebraic curve, Puiseux expansions computed by contour integration on the circles around the singularities are used to identify the holomorphic differentials. The Abel map is also computed with the Clenshaw-Curtis algorithm and contour integrals. As an application of the code, solutions to the Kadomtsev-Petviashvili equation are computed on non-hyperelliptic Riemann surfaces.
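
    The Clenshaw-Curtis rule used above for the period and Abel-map integrals is easy to reproduce; the sketch below builds the nodes and weights from the standard cosine-sum formula (even n assumed for simplicity).

        import numpy as np

        def clenshaw_curtis(n):
            """Nodes x_k = cos(pi k / n) and weights of the (n+1)-point
            Clenshaw-Curtis rule on [-1, 1]; n is assumed even here."""
            k = np.arange(n + 1)
            theta = np.pi * k / n
            w = np.ones(n + 1)
            for j in range(1, n // 2 + 1):
                b = 0.5 if j == n // 2 else 1.0   # last term is halved
                w += b * (2.0 / (1.0 - 4.0 * j * j)) * np.cos(2.0 * j * theta)
            w *= 2.0 / n
            w[0] /= 2.0
            w[-1] /= 2.0
            return np.cos(theta), w

        x, w = clenshaw_curtis(32)
        print(w @ x**2)   # integral of x^2 over [-1, 1]: exact value 2/3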

  2. [A particular anthropometric method for the study of accessibility of a workstation].

    PubMed

    Molinaro, V; Del Ferraro, S

    2008-01-01

    One of the main factors that can contribute to musculoskeletal disorders is the assumption of awkward postures. These can be caused, in some cases, by an unsuitable collocation of devices that are indispensable for the work. It is possible to evaluate whether the chosen collocation is adequate by studying the accessibility of the workstation, with special regard for the accessibility of the devices placed inside it. EN ISO 14738:2002 is a specific standard that has been adopted in Italy as UNI EN ISO 14738:2004. This standard gives useful requirements, in terms of accessibility, for designing a workstation at non-mobile machinery. In this study, the authors analyzed a check-out workstation following the requirements described in UNI EN ISO 14738:2004. Critical aspects, related to the organization of both the work activities and the workstation, were highlighted on the basis of the standard's criteria. Finally, the authors propose a new design of the check-out workstation, optimizing device collocation in order to reduce awkward postures. The new configuration was investigated by applying the criteria given in the standard.

  3. Understanding intersatellite biases of microwave humidity sounders using global simultaneous nadir overpasses

    NASA Astrophysics Data System (ADS)

    John, Viju O.; Holl, Gerrit; Buehler, Stefan A.; Candy, Brett; Saunders, Roger W.; Parker, David E.

    2012-01-01

    Simultaneous nadir overpasses (SNOs) of polar-orbiting satellites are most frequent in polar areas but can occur at any latitude when the equatorial crossing times of the satellites become close owing to orbital drift. We use global SNOs of polar-orbiting satellites to evaluate the intercalibration of microwave humidity sounders derived from the more frequent high-latitude SNOs. Based on sensitivity analyses, we find that optimal distance and time thresholds for defining collocations are pixel centers less than 5 km apart and time differences less than 300 s. These stringent collocation criteria reduce the impact of highly variable surface or atmospheric conditions on the estimated biases. Uncertainties in the estimated biases are dominated by the combined radiometric noise of the instrument pair. The effects of frequency changes between different versions of the humidity sounders depend on the amount of water vapor in the atmosphere. There are significant scene-radiance, and thus latitude, dependencies in the estimated biases, and these have to be taken into account when intercalibrating microwave humidity sounders. Results obtained using polar SNOs alone will therefore not be representative of moist regions, necessitating the use of global collocations for reliable intercalibration.
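
    The collocation criteria quoted above translate directly into a matchup filter. The brute-force sketch below (my construction; the record layout and names are hypothetical) keeps pixel pairs whose centers are less than 5 km apart and whose times differ by less than 300 s.

```python
# Hedged sketch of the collocation criteria described above.
import numpy as np

def haversine_km(lat1, lon1, lat2, lon2, r_earth=6371.0):
    """Great-circle distance in km between points given in degrees."""
    p1, p2 = np.radians(lat1), np.radians(lat2)
    dlat, dlon = p2 - p1, np.radians(lon2 - lon1)
    a = np.sin(dlat / 2) ** 2 + np.cos(p1) * np.cos(p2) * np.sin(dlon / 2) ** 2
    return 2 * r_earth * np.arcsin(np.sqrt(a))

def collocate(pix_a, pix_b, max_km=5.0, max_s=300.0):
    """Brute-force matchup of pixel records (lat, lon, time_s, tb)."""
    pairs = []
    for i, (la, loa, ta, _) in enumerate(pix_a):
        for j, (lb, lob, tb, _) in enumerate(pix_b):
            if abs(ta - tb) < max_s and haversine_km(la, loa, lb, lob) < max_km:
                pairs.append((i, j))
    return pairs

# Tiny hypothetical example: two pixels ~2.5 km and 120 s apart.
a = [(78.00, 10.00, 0.0, 240.0)]
b = [(78.02, 10.05, 120.0, 241.0)]
print(collocate(a, b))   # -> [(0, 0)]
```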

  4. Adaptive probabilistic collocation based Kalman filter for unsaturated flow problem

    NASA Astrophysics Data System (ADS)

    Man, J.; Li, W.; Zeng, L.; Wu, L.

    2015-12-01

    The ensemble Kalman filter (EnKF) has gained popularity in hydrological data assimilation problems. As a Monte Carlo based method, it usually requires a relatively large ensemble size to guarantee accuracy. As an alternative, the probabilistic collocation based Kalman filter (PCKF) employs a polynomial chaos expansion (PCE) to approximate the original system, reducing the sampling error. However, PCKF suffers from the so-called "curse of dimensionality": when the system nonlinearity is strong and the number of parameters is large, PCKF can be even more computationally expensive than EnKF. Motivated by recent developments in uncertainty quantification, we propose a restart adaptive probabilistic collocation based Kalman filter (RAPCKF) for data assimilation in unsaturated flow problems. During the implementation of RAPCKF, the important parameters are identified and active PCE basis functions are adaptively selected. The "restart" technique is used to alleviate the inconsistency between model parameters and states. The performance of RAPCKF is tested on unsaturated flow numerical cases. It is shown that RAPCKF outperforms EnKF at the same computational cost. Compared with the traditional PCKF, RAPCKF is more applicable to strongly nonlinear and high-dimensional problems.
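
    For context on the baseline being compared against, here is a minimal textbook analysis step of the stochastic (perturbed-observation) EnKF; it is not the RAPCKF algorithm, only the standard filter it is measured against.

```python
# Minimal stochastic-EnKF analysis step (textbook perturbed-observation form).
import numpy as np

def enkf_update(X, H, y, R, rng):
    """X: (n_state, n_ens) forecast ensemble; H: linear obs operator;
    y: observation vector; R: obs error covariance. Returns analysis ensemble."""
    n_ens = X.shape[1]
    # Perturb the predicted observations member by member.
    Y = H @ X + rng.multivariate_normal(np.zeros(len(y)), R, n_ens).T
    Xp = X - X.mean(axis=1, keepdims=True)            # state anomalies
    Yp = (H @ X) - (H @ X).mean(axis=1, keepdims=True)
    Pxy = Xp @ Yp.T / (n_ens - 1)                     # cross covariance
    Pyy = Yp @ Yp.T / (n_ens - 1) + R                 # innovation covariance
    K = np.linalg.solve(Pyy, Pxy.T).T                 # Kalman gain Pxy Pyy^-1
    return X + K @ (y[:, None] - Y)

rng = np.random.default_rng(0)
X = rng.normal(0, 1, (3, 50))                         # 3 states, 50 members
H = np.array([[1.0, 0.0, 0.0]])                       # observe the first state
Xa = enkf_update(X, H, np.array([0.5]), np.array([[0.1]]), rng)
print(Xa.mean(axis=1))                                # mean pulled toward 0.5
```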

  5. Linking the Weather Generator with Regional Climate Model

    NASA Astrophysics Data System (ADS)

    Dubrovsky, Martin; Farda, Ales; Skalak, Petr; Huth, Radan

    2013-04-01

    One of the downscaling approaches that transform raw climate-model outputs (GCMs or RCMs) into data with more realistic structure is based on linking a stochastic weather generator with the climate model output. The present contribution, in which the parametric daily surface weather generator (WG) M&Rfi is linked to RCM output, has two aims: (1) Validation of the new present-climate (1961-1990) simulations made by the ALADIN-Climate Regional Climate Model at 25 km resolution. The WG parameters are derived from the RCM-simulated surface weather series and compared to those derived from weather series observed at 125 Czech meteorological stations. The set of WG parameters includes statistics of the surface temperature and precipitation series (including the probability of wet-day occurrence). (2) Presenting a methodology for linking the WG with RCM output. This methodology, which merges information from observations and the RCM, may be interpreted as a downscaling procedure whose product is a gridded WG capable of producing realistic synthetic multivariate weather series for weather-ungauged locations. In this procedure, the WG is first calibrated with RCM-simulated multivariate weather series, and the grid-specific WG parameters are then de-biased by spatially interpolated correction factors based on a comparison of WG parameters calibrated with gridded RCM weather series and the spatially scarcer observations. The quality of the weather series produced by the resulting gridded WG is assessed in terms of selected climatic characteristics (focusing on characteristics related to variability and extremes of surface temperature and precipitation). Acknowledgements: This experiment is carried out within the framework of the projects ALARO-Climate (project P209/11/2405 sponsored by the Czech Science Foundation), WG4VALUE (project LD12029 sponsored by the Ministry of Education, Youth and Sports of CR) and VALUE (COST action ES1102).

  6. The Star-Forming Main Sequence as a Natural Consequence of the Central Limit Theorem

    NASA Astrophysics Data System (ADS)

    Kelson, Daniel David

    2015-08-01

    Star-formation rates (SFR) of disk galaxies correlate with stellar mass, with a small dispersion in SSFR at fixed mass, sigma ~ 0.3 dex. With such scatter this star-formation main sequence (SFMS) has been interpreted as deterministic and fundamental. Here I demonstrate that such a correlation arises naturally from the central limit theorem. The derivation begins by approximating in situ stellar mass growth as a stochastic process, much like a random walk, where the expectation of SFR at any time is equal to the SFR at the previous time. The SFRs of real galaxies, however, do not experience wholly random stochastic changes over time, but change in a highly correlated fashion due to the long reach of gravity and the correlation of structure in the universe. We therefore generalize the results to star formation as a stochastic process with random correlations over random and potentially infinite timescales. For unbiased samples of (disk) galaxies we derive expectation values for the SSFR and its scatter, such that ⟨SSFR⟩ = 2/T; notably, the relative scatter Sig[SFR/M] is independent of mass and time. This derived correlation between SFR and stellar mass, and its evolution, matches published data to z = 10 with sufficient accuracy to constrain cosmological parameters from the data. This statistical approach to the diversity of star-formation histories reproduces several important observables, including the scatter in SSFR at fixed mass and the forms of the SFHs of nearby dwarf galaxies and the Milky Way. At least one additional process beyond the single one responsible for in situ stellar mass growth will be required to match the evolution of the stellar mass function, and we discuss ways to generalize the framework. The implied dispersion in SFHs, and the SFMS's insensitivity to the timescales of stochasticity, thus substantially limits the ability to connect massive galaxies to their progenitors over long cosmic baselines. Such analytical work shows promise for statistically modeling distributions of galaxies over cosmic time, in a manner largely independent of the thorny uncertainties in the sub-grid astrophysics of modern cosmological simulations.
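
    The core argument, that correlated stochastic SFR changes plus in situ mass growth yield a tight SFR-mass relation, can be illustrated numerically. The toy simulation below is my own loose construction (the correlation parameter and step size are arbitrary), not the author's derivation.

```python
# Toy illustration: correlated random-walk SFRs integrated into masses
# produce a tight sSFR distribution, echoing the central-limit argument.
import numpy as np

rng = np.random.default_rng(1)
n_gal, n_step = 2000, 500
sfr = np.ones(n_gal)                    # arbitrary common initial SFR
mass = np.zeros(n_gal)
step = np.zeros(n_gal)
phi = 0.9                               # step-to-step correlation of changes
for _ in range(n_step):
    step = phi * step + rng.normal(0, 0.05, n_gal)   # correlated increments
    sfr = np.clip(sfr * np.exp(step), 1e-6, None)    # martingale-like walk
    mass += sfr                                      # in situ growth, dt = 1

ssfr = sfr / mass
print("median log sSFR:", np.median(np.log10(ssfr)))
print("scatter (dex):", np.std(np.log10(ssfr)))      # modest, mass-independent
```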

  7. TEMIS UV product validation using NILU-UV ground-based measurements in Thessaloniki, Greece

    NASA Astrophysics Data System (ADS)

    Zempila, Melina-Maria; van Geffen, Jos H. G. M.; Taylor, Michael; Fountoulakis, Ilias; Koukouli, Maria-Elissavet; van Weele, Michiel; van der A, Ronald J.; Bais, Alkiviadis; Meleti, Charikleia; Balis, Dimitrios

    2017-06-01

    This study aims to cross-validate ground-based and satellite-based models of three photobiological UV effective dose products: the Commission Internationale de l'Éclairage (CIE) erythemal UV, the production of vitamin D in the skin, and DNA damage, using high-temporal-resolution surface-based measurements of solar UV spectral irradiances from a synergy of instruments and models. The satellite-based Tropospheric Emission Monitoring Internet Service (TEMIS; version 1.4) UV daily dose data products were evaluated over the period 2009 to 2014 with ground-based data from a Norsk Institutt for Luftforskning (NILU)-UV multifilter radiometer located at the northern midlatitude super-site of the Laboratory of Atmospheric Physics, Aristotle University of Thessaloniki (LAP/AUTh), in Greece. For the NILU-UV effective dose rates retrieval algorithm, a neural network (NN) was trained to learn the nonlinear functional relation between NILU-UV irradiances and collocated Brewer-based photobiological effective dose products. The algorithm was then subjected to sensitivity analysis and validation. The correlation of the NN estimates with the target outputs was high (r = 0.988 to 0.990) with a very low bias (0.000 to 0.011 in absolute units), proving the robustness of the NN algorithm. For further evaluation of the NILU NN-derived products, retrievals of the vitamin D and DNA-damage effective doses from a collocated Yankee Environmental Systems (YES) UVB-1 pyranometer were used. For cloud-free days, differences in the derived UV doses are below 2 % for all UV dose products, revealing the reference quality of the ground-based UV doses at Thessaloniki from the NILU-UV NN retrievals. The TEMIS UV doses used in this study are derived from ozone measurements by the SCIAMACHY/Envisat and GOME2/MetOp-A satellite instruments over the European domain, in combination with the SEVIRI/Meteosat-based diurnal cycle of the cloud cover fraction per 0.5° × 0.5° (lat × long) grid cell. TEMIS UV doses were found to be ~12.5 % higher than the NILU NN estimates but, despite the presence of a visually apparent seasonal pattern, the R2 values were found to be robustly high, at 0.92-0.93 for 1588 all-sky coincidences. These results improve significantly when limiting the dataset to cloud-free days, with differences of 0.57 % for the erythemal doses, 1.22 % for the vitamin D doses, and 1.18 % for the DNA-damage doses, with standard deviations of the order of 11-13 %. The improvement of the comparative statistics under cloud-free conditions further testifies to the importance of appropriately accounting for the contribution of clouds to the UV radiation reaching the Earth's surface. For the urban area of Thessaloniki, with its highly variable aerosol, the weakness of the implicit aerosol information introduced to the TEMIS UV dose algorithm was revealed by comparison of the datasets to aerosol optical depths at 340 nm reported by a collocated CIMEL sun photometer, operating in Thessaloniki at LAP/AUTh as part of the NASA Aerosol Robotic Network.

  8. The GNSS polarimetric radio-occultation technique to sense precipitation events: a new concept to be tested aboard PAZ Low Earth Satellite

    NASA Astrophysics Data System (ADS)

    Tomás, Sergio; Oliveras, Santi; Cardellach, Estel; Rius, Antonio

    2013-04-01

    The Radio Occultation and Heavy Precipitation (ROHP) experiment, to be conducted aboard the Spanish PAZ satellite, consists of a radio occultation (RO) mission with dual-polarization capabilities. The research with polarimetric RO data has the goal of assessing the capabilities and limitations of this technique for inferring profiles of heavy precipitation. The technique aims to provide vertical profiles of precipitation simultaneously with the vertical profiles of thermodynamic parameters (the standard RO products), perfectly collocated in both space and time. If successful, polarimetric RO will represent the first technique able to provide this complementary information on precipitation. This is a relevant input for studies of heavy and violent rainfall events, which, being poorly represented by the current generation of Numerical Weather Prediction and General Circulation Models, are difficult to forecast on all time scales. The Low Earth Orbiter hosting this experiment, to be launched in 2013, will orbit at 500 km altitude in a near-polar orbit. The RO payload includes an RO GNSS receiver and a dual-polarization (H/V) limb-oriented antenna to capture the signals of setting GNSS transmitters. NOAA and UCAR participate in the ground segment of the experiment to enable near-real-time dissemination of the level-1 standard RO products. The space-based GNSS RO technique scans the atmosphere vertically at fine resolution (close to 300 m in the troposphere) by precisely measuring the delay between a GNSS transmitter and a GNSS receiver aboard a Low Earth Orbiter when the former is setting below or rising above the Earth's limb. The standard thermodynamic products are extracted from the excess delay induced by the atmosphere at different layers. This presentation will not focus on this well-established application, but on a novel concept using polarimetry to also retrieve rain information. The precipitation-measurement principle is based on the asymmetry between the vertical and horizontal axes of rain droplets, which is especially pronounced at intense precipitation rates. To a first approximation, the RO signals propagate across the precipitation volume tangentially, that is, along the local horizontal axis of the droplets. Forward scattering models have been implemented to quantify the sensitivity of L-band signals to different rain rates and to the extension of precipitation crossed by the signals. The observable considered so far is the polarimetric phase shift: the difference between the phase delays experienced by the H and V polarizations. Real RO events have been collocated with TRMM precipitation data. The path traveled by the RO signal below a given altitude has been projected onto the TRMM grid of observations to obtain a profile of the precipitation crossed by the RO link at a given moment of the occultation event. This mechanism has been used to feed the propagation models and thus estimate the polarimetric phase shift that each precipitation event would have induced in the occultation observation. This simulation exercise makes it possible to determine the detectability thresholds and the expected statistics of such collocated events. Methodology and results will be presented.

  9. Application of SMAP Data for Ocean Surface Remote Sensing

    NASA Astrophysics Data System (ADS)

    Fore, A.; Yueh, S. H.; Tang, W.; Stiles, B. W.; Hayashi, A.

    2017-12-01

    The Soil Moisture Active Passive (SMAP) mission was launched January 31st, 2015. It is designed to measure soil moisture over land using a combined active/passive L-band system. Thanks to the Aquarius mission, L-band model functions for ocean winds and salinity are mature and directly applicable to SMAP. In contrast to Aquarius, the higher resolution and scanning geometry of SMAP allow wide-swath ocean winds and salinities to be retrieved. In this talk we present the SMAP Sea Surface Salinity (SSS) and extreme-winds dataset and its performance. First we discuss the heritage of the SMAP SSS algorithms, showing that SMAP and Aquarius are in excellent agreement in the ocean surface roughness correction. Then we give an overview of newly developed algorithms that are only relevant to the SMAP system: a new galaxy correction and a land correction enabling SSS retrievals up to 40 km from the coast. We discuss recent improvements to the SMAP data processing for version 4.0. Next we compare the performance of SMAP SSS to in situ salinity measurements obtained from ARGO floats, tropical moored buoys, and ship-based data. SMAP SSS has an accuracy of 0.2 PSU on a monthly basis compared to ARGO gridded data in the tropics and mid-latitudes. In tropical oceans, time-series comparison with salinity measured at 1 m depth by moored buoys indicates that SMAP can track large salinity changes within a month. Synergistic analysis of SMAP, SMOS, and Argo data allows us to identify and exclude erroneous buoy data from the assessment of SMAP SSS. The resulting SMAP-buoy matchup analysis gives a mean standard deviation (STD) of 0.22 PSU and a correlation of 0.73 on a weekly scale; on a monthly scale the mean STD decreases to 0.17 PSU and the correlation increases to 0.8. In addition to SSS, SMAP provides a view into tropical cyclones with much higher sensitivity than traditional scatterometers. We validate the high winds using collocations with SFMR during tropical cyclones as well as triple collocations with RapidScat and WindSat. We consider two validation regimes: storm-force winds and hurricane-force winds. For storm-force winds we validate using other space-borne scatterometers and microwave radiometers as well as SFMR; for hurricane-force winds we must use SFMR. Finally we discuss the various data products and where they may be obtained.
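
    The triple-collocation step mentioned above follows a standard covariance-based estimator. The sketch below is the textbook version applied to synthetic winds, not the authors' processing code.

```python
# Standard covariance-based triple collocation: error standard deviations of
# three collocated datasets observing the same truth with independent errors.
import numpy as np

def triple_collocation(a, b, c):
    """Return error std-dev estimates for datasets a, b, c."""
    C = np.cov(np.vstack([a, b, c]))
    var_a = C[0, 0] - C[0, 1] * C[0, 2] / C[1, 2]
    var_b = C[1, 1] - C[0, 1] * C[1, 2] / C[0, 2]
    var_c = C[2, 2] - C[0, 2] * C[1, 2] / C[0, 1]
    return np.sqrt(np.maximum([var_a, var_b, var_c], 0.0))

# Synthetic check: common truth plus independent noise of known size.
rng = np.random.default_rng(2)
truth = rng.normal(10, 4, 5000)
est = triple_collocation(truth + rng.normal(0, 1.0, 5000),
                         truth + rng.normal(0, 1.5, 5000),
                         truth + rng.normal(0, 2.0, 5000))
print(est)   # ≈ [1.0, 1.5, 2.0]
```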

  10. Conservative and bounded volume-of-fluid advection on unstructured grids

    NASA Astrophysics Data System (ADS)

    Ivey, Christopher B.; Moin, Parviz

    2017-12-01

    This paper presents a novel Eulerian-Lagrangian piecewise-linear interface calculation (PLIC) volume-of-fluid (VOF) advection method, which is three-dimensional, unsplit, and discretely conservative and bounded. The approach is developed with reference to a collocated node-based finite-volume two-phase flow solver that utilizes the median-dual mesh constructed from non-convex polyhedra. The proposed advection algorithm satisfies conservation and boundedness of the liquid volume fraction irrespective of the underlying flux polyhedron geometry, which differs from contemporary unsplit VOF schemes that prescribe topologically complicated flux polyhedron geometries in an effort to satisfy conservation. Instead of prescribing complicated flux-polyhedron geometries, which are prone to topological failures, our VOF advection scheme, the non-intersecting flux polyhedron advection (NIFPA) method, builds the flux polyhedron iteratively such that its intersection with neighboring flux polyhedra, and any other unavailable volume, is empty and its total volume matches the calculated flux volume. During each iteration, a candidate nominal flux polyhedron is extruded using an iteration-dependent scalar. The candidate is subsequently intersected with the volume guaranteed to be available to it at the time of the flux calculation to generate the candidate flux polyhedron. The difference between the volume of the candidate flux polyhedron and the actual flux volume is used to calculate the extrusion in the next iteration. The choice of nominal flux polyhedron affects the cost and accuracy of the scheme; however, it does not affect the method's underlying conservation and boundedness. As such, various robust nominal flux polyhedra are proposed and tested using canonical periodic kinematic test cases: Zalesak's disk and two- and three-dimensional deformation. The tests are conducted on the median duals of quadrilateral and triangular primal meshes in two dimensions, and on the median duals of hexahedral, wedge and tetrahedral primal meshes in three dimensions. Comparisons are made with the adaptation of a conventional unsplit VOF advection scheme to our collocated node-based flow solver. Depending on the choice of nominal flux polyhedron, the NIFPA scheme exhibited accuracies ranging from zeroth to second order and computation times that differed by orders of magnitude. For the nominal flux polyhedra that demonstrated second-order accuracy on all tests and meshes, the NIFPA method's cost was comparable to that of the traditional topologically complex second-order accurate VOF advection scheme.
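
    The iterative extrusion described above amounts to a scalar root-finding problem: adjust the extrusion scalar until the candidate polyhedron's volume matches the flux volume. A minimal secant-iteration sketch (with a hypothetical volume function standing in for the clipped polyhedron) might look like this.

```python
# Sketch of the volume-matching iteration; volume_of is a hypothetical
# stand-in for "extrude nominal polyhedron by s, clip, measure volume".
def match_flux_volume(volume_of, target, s0=0.0, s1=1.0, tol=1e-12, max_iter=50):
    """Secant iteration on the extrusion scalar s until volume_of(s)
    matches the calculated flux volume `target`."""
    v0, v1 = volume_of(s0) - target, volume_of(s1) - target
    for _ in range(max_iter):
        if abs(v1) < tol:
            return s1
        s0, s1 = s1, s1 - v1 * (s1 - s0) / (v1 - v0)   # secant update
        v0, v1 = v1, volume_of(s1) - target
    return s1

# Toy stand-in: volume grows linearly with s (face area = 2.0).
print(match_flux_volume(lambda s: 2.0 * s, target=0.5))   # -> 0.25
```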

  11. Adaptive multiresolution modeling of groundwater flow in heterogeneous porous media

    NASA Astrophysics Data System (ADS)

    Malenica, Luka; Gotovac, Hrvoje; Srzic, Veljko; Andric, Ivo

    2016-04-01

    The proposed methodology was originally developed by our scientific team in Split, who designed a multiresolution approach for analyzing flow and transport processes in highly heterogeneous porous media. The main properties of the adaptive Fup multiresolution approach are: 1) the computational capabilities of Fup basis functions with compact support, able to resolve all spatial and temporal scales; 2) multiresolution representation of heterogeneity as well as of all other input and output variables; 3) an accurate, adaptive and efficient solution strategy; and 4) semi-analytical properties which increase our understanding of the usually complex flow and transport processes in porous media. The main computational idea behind this approach is to separately find the minimum number of basis functions and resolution levels necessary to describe each flow and transport variable with the desired accuracy on a particular adaptive grid. Each variable is therefore analyzed separately, and the adaptive and multiscale nature of the methodology enables not only computational efficiency and accuracy but also a description of subsurface processes closely tied to their physical interpretation. The methodology inherently supports a mesh-free procedure, avoids classical numerical integration, and yields continuous velocity and flux fields, which is vitally important for flow and transport simulations. In this paper, we show recent improvements to the proposed methodology. Since state-of-the-art multiresolution approaches usually use the method of lines and only a spatially adaptive procedure, the temporal approximation has rarely been treated as multiscale. We therefore develop a novel adaptive implicit Fup integration scheme that resolves all time scales within each global time step: the algorithm uses smaller time steps only in lines where solution changes are intensive. The use of Fup basis functions enables continuous time approximation, simple interpolation across different temporal lines, and local time-stepping control. A critical aspect of time-integration accuracy is the construction of the spatial stencil for the accurate calculation of spatial derivatives. While the common approach for wavelets and splines uses a finite difference operator, we develop here a collocation operator that includes solution values and the differential operator. The improved algorithm is thus adaptive in space and time, enabling accurate solutions of groundwater flow problems, especially in highly heterogeneous porous media with large lnK variances and different correlation length scales. In addition, differences between the collocation and finite volume approaches are discussed. Finally, results show the application of the methodology to groundwater flow problems in highly heterogeneous confined and unconfined aquifers.
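
    The "minimum number of basis functions per resolution level" idea can be sketched generically as coefficient thresholding, as below; this is an illustration of the adaptive principle, not the Fup code itself.

```python
# Generic adaptive-selection illustration: per resolution level, keep only
# the basis coefficients whose magnitude exceeds the accuracy threshold.
import numpy as np

def adapt_levels(coeffs_by_level, tol):
    """coeffs_by_level: list of 1-D coefficient arrays, coarse to fine.
    Returns, per level, the indices of coefficients worth keeping."""
    kept = []
    for c in coeffs_by_level:
        idx = np.nonzero(np.abs(c) > tol)[0]
        kept.append(idx)
        if idx.size == 0:          # nothing significant left: stop refining
            break
    return kept

# Hypothetical coefficients whose magnitude decays with level.
rng = np.random.default_rng(3)
levels = [rng.normal(0, 2 ** -l, 2 ** (l + 3)) for l in range(6)]
print([k.size for k in adapt_levels(levels, tol=1e-2)])
```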

  12. Distribution-Agnostic Stochastic Optimal Power Flow for Distribution Grids: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baker, Kyri; Dall'Anese, Emiliano; Summers, Tyler

    2016-09-01

    This paper outlines a data-driven, distributionally robust approach to solve chance-constrained AC optimal power flow problems in distribution networks. Uncertain forecasts for loads and power generated by photovoltaic (PV) systems are considered, with the goal of minimizing PV curtailment while meeting power flow and voltage regulation constraints. A data-driven approach is utilized to develop a distributionally robust conservative convex approximation of the chance constraints; particularly, the mean and covariance matrix of the forecast errors are updated online, and leveraged to enforce voltage regulation with predetermined probability via Chebyshev-based bounds. By combining an accurate linear approximation of the AC power flow equations with the distributionally robust chance constraint reformulation, the resulting optimization problem becomes convex and computationally tractable.
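
    The Chebyshev-based tightening can be sketched with the one-sided (Cantelli) bound: a chance constraint Pr[v > v_max] <= eps is enforced by keeping the mean voltage at least sqrt((1-eps)/eps) standard deviations below v_max. The snippet below uses hypothetical numbers and my own notation, not the paper's model.

```python
# Hedged sketch of a Chebyshev (Cantelli) chance-constraint margin.
import numpy as np

def chebyshev_voltage_margin(err_cov, sens, eps):
    """err_cov: forecast-error covariance (updated online in the paper);
    sens: linearized sensitivity of one bus voltage to the forecast errors;
    returns margin m so that mean_v + m <= v_max enforces Pr[v > v_max] <= eps."""
    sigma = np.sqrt(sens @ err_cov @ sens)       # std dev of that voltage
    return np.sqrt((1 - eps) / eps) * sigma      # one-sided Chebyshev bound

cov = np.diag([0.02, 0.05])    # hypothetical load/PV forecast error covariance
sens = np.array([0.6, 0.3])    # hypothetical linearized-AC sensitivity row
print(chebyshev_voltage_margin(cov, sens, eps=0.05))   # margin in p.u.
```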

  13. Light Propagation in Turbulent Media

    NASA Astrophysics Data System (ADS)

    Perez, Dario G.

    2003-07-01

    First, we review the up-to-date properties of passive scalar fields, among which is the refractive index. We then formulate the properties that make the family of 'isotropic' fractional Brownian motions (with parameter H) a good candidate for simulating the turbulent refractive index. Moreover, we obtain its fractal dimension, which matches that estimated by Constantin for passive scalars; thus the parameter H determines the state of the turbulence. Next, we use a path-integral velocity representation, within the Markovian model, to calculate the effects of the turbulence on a system of grids. Finally, with the tools of stochastic calculus for fractional Brownian motions, we study the ray equation of geometric optics in the turbulent case. Our analysis covers those cases where average temperature gradients are relevant.
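
    To make the Hurst parameter H concrete, here is a minimal sample-path generator for fractional Brownian motion using the exact covariance and a Cholesky factorization; it is my own illustrative construction, not the paper's machinery.

```python
# Minimal fBm sample-path sketch via the Cholesky method.
import numpy as np

def fbm(n, H, T=1.0, seed=0):
    """Sample an fBm path at n points on (0, T] from its exact covariance
    Cov[B_H(s), B_H(t)] = 0.5 * (s^{2H} + t^{2H} - |t - s|^{2H})."""
    t = np.linspace(T / n, T, n)
    s, u = np.meshgrid(t, t)
    cov = 0.5 * (s ** (2 * H) + u ** (2 * H) - np.abs(s - u) ** (2 * H))
    L = np.linalg.cholesky(cov + 1e-12 * np.eye(n))   # jitter for stability
    return t, L @ np.random.default_rng(seed).normal(size=n)

t, path = fbm(256, H=0.75)   # H > 1/2: positively correlated increments
print(path[:5])
```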

  14. Sensitivity of Simulated Warm Rain Formation to Collision and Coalescence Efficiencies, Breakup, and Turbulence: Comparison of Two Bin-Resolved Numerical Models

    NASA Technical Reports Server (NTRS)

    Fridlind, Ann; Seifert, Axel; Ackerman, Andrew; Jensen, Eric

    2004-01-01

    Numerical models that resolve cloud particles into discrete mass size distributions on an Eulerian grid provide a uniquely powerful means of studying the closely coupled interactions of aerosols, cloud microphysics, and transport that determine cloud properties and evolution. However, such models require many experimentally derived parameterizations in order to properly represent the complex interactions of droplets within turbulent flow. Many of these parameterizations remain poorly quantified, and the numerical methods for solving the equations for the temporal evolution of the mass size distribution can also vary considerably in efficiency and accuracy. In this work, we compare results from two size-resolved microphysics models that employ various widely used parameterizations and numerical solution methods for several aspects of stochastic collection.
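
    As a minimal stand-in for the bin-resolved schemes being compared, the sketch below integrates the stochastic collection (Smoluchowski) equation explicitly on integer mass bins with a constant kernel; real models use mass-doubling bins and measured collision-coalescence kernels, so this is purely illustrative.

```python
# Toy explicit integration of the stochastic collection equation:
# dn_k/dt = 0.5 * sum_{i+j=k} K n_i n_j - n_k * sum_j K n_j.
import numpy as np

def step_collection(n, K, dt):
    """n[k]: number density of droplets of mass k+1; K: constant kernel."""
    nbins = len(n)
    gain = np.zeros(nbins)
    for i in range(nbins):
        for j in range(nbins):
            m = i + j + 1                  # index of the coalesced mass
            if m < nbins:
                gain[m] += 0.5 * K * n[i] * n[j]
    loss = K * n * n.sum()
    return n + dt * (gain - loss)

n = np.zeros(64); n[0] = 1.0               # monodisperse initial condition
for _ in range(200):
    n = step_collection(n, K=1.0, dt=0.01)
print("total mass (conserved up to truncation):", (np.arange(1, 65) * n).sum())
```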

  15. Obtaining a Pragmatic Representation of Fire Disturbance in Dynamic Vegetation Models by Assimilating Earth Observation Data

    NASA Astrophysics Data System (ADS)

    Kantzas, Euripides; Quegan, Shaun

    2015-04-01

    Fire constitutes a violent and unpredictable pathway of carbon from the terrestrial biosphere into the atmosphere. Although fire emissions are in many biomes of similar magnitude to net ecosystem exchange, even the most complex Dynamic Vegetation Models (DVMs) embedded in IPCC General Circulation Models represent fire behavior and dynamics poorly, a fact which remains understated. Because DVMs operate on a deterministic, grid-cell-by-grid-cell basis, they are unable to describe a host of important fire characteristics such as propagation, the magnitude of area burned, and the stochastic nature of fire. Here we address these issues by describing a model-independent methodology that assimilates Earth Observation (EO) data, employing image analysis techniques and algorithms to produce a realistic fire disturbance regime in a DVM. This novel approach, with minimal model restructuring, retains the fire return interval produced by the model while assigning pragmatic characteristics to its fire outputs, thus allowing realistic simulations of fire-related processes such as carbon injection into the atmosphere and permafrost degradation. We focus our simulations on the Arctic, specifically Canada and Russia, and we show how this approach permits models to capture post-fire dynamics hitherto absent from any other model regardless of complexity.

  16. Deep Part Load Flow Analysis in a Francis Model turbine by means of two-phase unsteady flow simulations

    NASA Astrophysics Data System (ADS)

    Conrad, Philipp; Weber, Wilhelm; Jung, Alexander

    2017-04-01

    Hydropower plants are indispensable for stabilizing the grid by reacting quickly to changes in energy demand. An extension of the operating range towards high and deep part load conditions without fatigue of the hydraulic components is therefore desirable to increase their flexibility. In this paper a model-sized Francis turbine at low-discharge operating conditions (Q/QBEP = 0.27) is analyzed by means of computational fluid dynamics (CFD). Unsteady two-phase simulations for two Thoma-number conditions are conducted. Capturing the stochastic pressure oscillations observed on the test rig at low discharge requires sophisticated numerical models together with small time steps, large grids and long simulation times. In this paper the BSL-EARSM (Explicit Algebraic Reynolds Stress) model was applied as a compromise between scale-resolving and two-equation turbulence models with respect to computational effort and accuracy. Simulation results are compared to pressure measurements, showing reasonable agreement in the resolved frequency spectra and amplitudes. Inner-blade vortices were predicted successfully in shape and size. Surface streamlines in the blade-to-blade view are presented, giving insight into the formation of the inner-blade vortices. The acquired time-dependent pressure fields can be used for quasi-static structural analysis (FEA) for future fatigue calculations.

  17. HydroGrid: Technologies for Global Water Quality and Sustainability

    NASA Astrophysics Data System (ADS)

    Yeghiazarian, L.

    2017-12-01

    Humans have been transforming planet Earth for millennia. We have recently come to understand that the collective impact of our decisions and actions has brought about severe water quality problems, which are likely to worsen in light of rapid population growth to a projected nine billion by 2050. Sustainably managing our global water resources, and possibly reversing these effects, requires efforts in real-time monitoring of water contamination, analysis of monitoring data, and control of the state of water contamination. We develop technologies to address all three areas: monitoring, analysis and control. These efforts are carried out in the conceptual framework of the HydroGrid, an interconnected water system which is (1) firmly rooted in the fundamental understanding of processes that govern microbial dynamics on multiple scales and (2) used to develop watershed-specific management strategies. In the area of monitoring we are developing mobile autonomous sensors to detect surface water contamination, an effort supported by extensive materials research to provide multifunctional materials. We analyze environmental data within a stochastic modeling paradigm that bridges microscopic particle interactions to the macroscopic behavior of microbial populations in time and space across entire watersheds. These models are supported by laboratory and field experiments. Finally, we combine control and graph theories to derive controllability metrics for natural watersheds.

  18. Wall-layer model for LES with massive separation

    NASA Astrophysics Data System (ADS)

    Fakhari, Ahmad; Armenio, Vincenzo; Roman, Federico

    2016-11-01

    Currently, wall functions (WF) work well only under specific conditions and exhibit drawbacks especially in flows with separation past curvature. In this work, we propose a more general WF that works well in attached and detached flows, in the presence and absence of immersed boundaries (IB). First, we modified an equilibrium-stress WF for boundary-fitted geometry, making the computation of the log-law von Karman constant k dynamic; the model was first applied to a periodic open-channel flow and then to the flow over a 2D single hill using uniform coarse grids, and it captured separation with reasonable accuracy. Thereafter, the IB method of Roman et al. was improved to avoid momentum loss at the interface between the fluid and solid regions. This required calibration of the interfacial eddy viscosity; a random stochastic forcing was also applied in the wall-normal direction to increase the Reynolds stresses and improve the mean velocity profile. Finally, to reproduce flow separation, a simplified boundary-layer equation was applied to construct the velocity at near-wall computational nodes. The new scheme was tested on the 2D single hill and on periodic hills using Cartesian and curvilinear grids; good agreement with the references was obtained, with a reduction in cost and complexity. Financial support from project COSMO "CFD open source per opera morta" PAR FSC 2007-2013, Friuli Venezia Giulia.
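
    For contrast with the dynamic-k variant proposed above, a standard equilibrium log-law wall function solves the implicit relation U = u_tau (ln(y u_tau / nu)/k + B) for the friction velocity. The Newton-iteration sketch below is textbook code with fixed constants, not the authors' model.

```python
# Textbook equilibrium log-law wall function: Newton iteration for the
# friction velocity u_tau given the velocity U at wall distance y.
import numpy as np

def friction_velocity(U, y, nu, kappa=0.41, B=5.2, iters=30):
    u = max(1e-8, 0.05 * U)                        # initial guess
    for _ in range(iters):
        f = u * (np.log(y * u / nu) / kappa + B) - U
        df = np.log(y * u / nu) / kappa + B + 1.0 / kappa
        u -= f / df                                # Newton update
    return u

u_tau = friction_velocity(U=10.0, y=5e-4, nu=1.5e-5)
print(u_tau, "wall shear stress:", 1.2 * u_tau ** 2)   # assuming rho = 1.2
```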

  19. The use of sensory perception indicators for improving the characterization and modelling of total petroleum hydrocarbon (TPH) grade in soils.

    PubMed

    Roxo, Sónia; de Almeida, José António; Matias, Filipa Vieira; Mata-Lima, Herlander; Barbosa, Sofia

    2016-03-01

    This paper proposes a multistep approach for creating a 3D stochastic model of total petroleum hydrocarbon (TPH) grade in potentially polluted soils of a deactivated oil storage site, using chemical analysis results as primary (hard) data and classes of sensory perception variables as secondary (soft) data. First, the statistical relationship between the sensory perception variables (e.g. colour, odour and oil-water reaction) and TPH grade is analysed, after which the sensory perception variable exhibiting the highest correlation is selected (the oil-water reaction in this case study). The probabilities of cells belonging to classes of oil-water reaction are then estimated for the entire soil volume using indicator kriging. Next, local histograms of TPH grade are computed for each grid cell, combining the probabilities of belonging to a specific sensory perception indicator class with the TPH-grade values conditional to each class. Finally, simulated images of TPH grade are generated using the P-field simulation algorithm, utilising the local histograms of TPH grade for each grid cell. The set of simulated TPH values allows several calculations to be performed, such as average values, local uncertainties and the probability of the soil's TPH grade exceeding a specific threshold value.
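
    The P-field step can be sketched schematically: each cell's simulated TPH value is the quantile of its local histogram evaluated at a spatially correlated uniform probability. The snippet below is a generic illustration with hypothetical quantile tables and a crude correlated field, not the paper's implementation.

```python
# Schematic P-field draw: read each cell's local quantile function at a
# spatially correlated probability value.
import numpy as np

def pfield_draw(local_quantiles, p_field):
    """local_quantiles: (n_cells, n_q) per-cell TPH quantile tables for
    probabilities 0..1; p_field: (n_cells,) correlated uniforms in [0, 1]."""
    n_cells, n_q = local_quantiles.shape
    probs = np.linspace(0.0, 1.0, n_q)
    return np.array([np.interp(p, probs, q)
                     for p, q in zip(p_field, local_quantiles)])

rng = np.random.default_rng(4)
q_tables = np.sort(rng.lognormal(3, 1, (10, 21)), axis=1)   # hypothetical cells
p = np.clip(0.5 + np.cumsum(rng.normal(0, 0.1, 10)), 0, 1)  # crude correlated field
print(pfield_draw(q_tables, p))
```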

  20. Weather Observation Systems and Efficiency of Fighting Forest Fires

    NASA Astrophysics Data System (ADS)

    Khabarov, N.; Moltchanova, E.; Obersteiner, M.

    2007-12-01

    Weather observation is an essential component of modern forest fire management systems. Satellite- and in-situ-based weather observation systems can help reduce forest loss, human casualties and destruction of economic capital. In this paper, we develop and apply a methodology to assess the benefits of various weather observation systems in reducing burned area through early fire detection. In particular, we consider a model where the air patrolling schedule is determined by a fire hazard index. The index is computed from gridded daily weather data for an area covering parts of Spain and Portugal. We conduct a number of simulation experiments. First, the resolution of the original data set is artificially reduced; the reduction in total forest burned area associated with air patrolling based on a finer weather grid indicates the benefit of higher spatially resolved weather observations. Second, we consider a stochastic model to simulate forest fires and explore the sensitivity of the model with respect to the quality of the input data. The analysis of a combination of satellite and ground monitoring reveals potential cost savings due to a "system of systems" effect and a substantial reduction in burned area. Finally, we estimate the marginal improvement schedule for loss of life and economic capital as a function of the improved fire observing system.
