Adaptive probabilistic collocation based Kalman filter for unsaturated flow problem
NASA Astrophysics Data System (ADS)
Man, J.; Li, W.; Zeng, L.; Wu, L.
2015-12-01
The ensemble Kalman filter (EnKF) has gained popularity in hydrological data assimilation problems. As a Monte Carlo based method, a relatively large ensemble size is usually required to guarantee accuracy. As an alternative approach, the probabilistic collocation based Kalman filter (PCKF) employs polynomial chaos expansions to approximate the original system. In this way, the sampling error can be reduced. However, PCKF suffers from the so-called "curse of dimensionality": when the system nonlinearity is strong and the number of parameters is large, PCKF can be even more computationally expensive than EnKF. Motivated by recent developments in uncertainty quantification, we propose a restart adaptive probabilistic collocation based Kalman filter (RAPCKF) for data assimilation in unsaturated flow problems. During the implementation of RAPCKF, the important parameters are identified and active PCE basis functions are adaptively selected. The "restart" technique is used to alleviate the inconsistency between model parameters and states. The performance of RAPCKF is tested on numerical cases of unsaturated flow. It is shown that RAPCKF is more efficient than EnKF at the same computational cost. Compared with the traditional PCKF, RAPCKF is more applicable to strongly nonlinear and high-dimensional problems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Man, Jun; Li, Weixuan; Zeng, Lingzao
2016-06-01
The ensemble Kalman filter (EnKF) has gained popularity in hydrological data assimilation problems. As a Monte Carlo based method, a relatively large ensemble size is usually required to guarantee accuracy. As an alternative approach, the probabilistic collocation based Kalman filter (PCKF) employs polynomial chaos expansions to approximate the original system. In this way, the sampling error can be reduced. However, PCKF suffers from the so-called "curse of dimensionality": when the system nonlinearity is strong and the number of parameters is large, PCKF can be even more computationally expensive than EnKF. Motivated by recent developments in uncertainty quantification, we propose a restart adaptive probabilistic collocation based Kalman filter (RAPCKF) for data assimilation in unsaturated flow problems. During the implementation of RAPCKF, the important parameters are identified and active PCE basis functions are adaptively selected. The "restart" technique is used to eliminate the inconsistency between model parameters and states. The performance of RAPCKF is tested with numerical cases of unsaturated flow models. It is shown that RAPCKF is more efficient than EnKF at the same computational cost. Compared with the traditional PCKF, RAPCKF is more applicable to strongly nonlinear and high-dimensional problems.
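To make the PCKF machinery above concrete, the sketch below illustrates a single polynomial-chaos Kalman update for one Gaussian parameter observed through a nonlinear forward model. The forward model, prior PCE, observation value, and noise variance are all hypothetical placeholders rather than the models used in the paper; the point is only how the Kalman gain follows from the Hermite PCE coefficients by orthogonality.

```python
import numpy as np
from math import factorial
from numpy.polynomial import hermite_e as He

# Hypothetical nonlinear observation operator acting on a Gaussian parameter k.
def forward(k):
    return np.exp(-0.5 * k) + 0.1 * k**2

order = 4
nodes, weights = He.hermegauss(order + 1)
weights /= np.sqrt(2.0 * np.pi)              # probabilists' Hermite: normalize to N(0,1)

# Prior parameter as a PCE in xi ~ N(0,1): k(xi) = 1.0 + 0.5*xi (assumed prior)
c = np.zeros(order + 1)
c[0], c[1] = 1.0, 0.5
k_nodes = He.hermeval(nodes, c)
y_nodes = forward(k_nodes)                   # one model run per collocation node

# Project the output onto the Hermite basis: d_i = E[y He_i] / E[He_i^2]
norms = np.array([factorial(i) for i in range(order + 1)], float)
psi = np.stack([He.hermeval(nodes, [0.0] * i + [1.0]) for i in range(order + 1)])
d = (psi * weights * y_nodes).sum(axis=1) / norms

# Orthogonality gives the covariances directly from the PCE coefficients.
P_ky = np.sum(c[1:] * d[1:] * norms[1:])     # Cov(k, y)
P_yy = np.sum(d[1:] ** 2 * norms[1:])        # Var(y)
R = 0.01                                     # assumed observation-error variance
K = P_ky / (P_yy + R)                        # Kalman gain

y_obs = 0.9                                  # hypothetical measurement
c_post = c - K * d                           # update the fluctuation coefficients
c_post[0] = c[0] + K * (y_obs - d[0])        # and shift the mean toward the data
print("posterior mean of k:", c_post[0])
```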
NASA Astrophysics Data System (ADS)
Wang, S.; Huang, G. H.; Huang, W.; Fan, Y. R.; Li, Z.
2015-10-01
In this study, a fractional factorial probabilistic collocation method is proposed to reveal the statistical significance of hydrologic model parameters and their multi-level interactions affecting model outputs, facilitating uncertainty propagation in a reduced-dimensional space. The proposed methodology is applied to the Xiangxi River watershed in China to demonstrate its validity and applicability, as well as its capability of revealing complex and dynamic parameter interactions. A set of reduced polynomial chaos expansions (PCEs) with only statistically significant terms can be obtained based on the results of factorial analysis of variance (ANOVA), achieving a reduction of uncertainty in hydrologic predictions. The predictive performance of the reduced PCEs is verified by comparison against standard PCEs and the Monte Carlo with Latin hypercube sampling (MC-LHS) method in terms of reliability, sharpness, and Nash-Sutcliffe efficiency (NSE). Results reveal that the reduced PCEs are able to capture the hydrologic behaviors of the Xiangxi River watershed and are efficient functional representations for propagating uncertainties in hydrologic predictions.
A Probabilistic Collocation Based Iterative Kalman Filter for Landfill Data Assimilation
NASA Astrophysics Data System (ADS)
Qiang, Z.; Zeng, L.; Wu, L.
2016-12-01
Due to the strong spatial heterogeneity of landfills, uncertainty is ubiquitous in the gas transport process in a landfill. To accurately characterize landfill properties, the ensemble Kalman filter (EnKF) has been employed to assimilate measurements such as the gas pressure. As a Monte Carlo (MC) based method, the EnKF usually requires a large ensemble size, which poses a high computational cost for large-scale problems. In this work, we propose a probabilistic collocation based iterative Kalman filter (PCIKF) to estimate permeability in a liquid-gas coupled model. This method employs polynomial chaos expansion (PCE) to represent and propagate the uncertainties of model parameters and states, and an iterative form of the Kalman filter to assimilate the current gas pressure data. To further reduce the computational cost, a functional ANOVA (analysis of variance) decomposition is conducted, and only the first-order ANOVA components are retained in the PCE. As illustrated by numerical case studies, the proposed method shows significant superiority in computational efficiency compared with the traditional MC based iterative EnKF. The developed method has promising potential for reliable prediction and management of landfill gas production.
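The first-order ANOVA truncation used by the PCIKF can be sketched in a few lines: keeping only first-order components means varying one parameter at a time around an anchor point, so the number of model runs grows linearly rather than exponentially in the parameter count. The three-parameter model below is a hypothetical stand-in for the liquid-gas landfill model.

```python
import numpy as np
from numpy.polynomial import hermite_e as He

# Hypothetical three-parameter model standing in for the coupled landfill model.
def model(x):
    return np.exp(0.3 * x[0]) + np.sin(x[1]) + 0.1 * x[2] ** 2

d = 3
nodes, _ = He.hermegauss(5)          # 1-D collocation nodes per parameter
ref = np.zeros(d)                    # anchor point (e.g., prior means)
f0 = model(ref)

# First-order (cut-HDMR) components: vary one parameter at a time.
components = []
for i in range(d):
    vals = []
    for n in nodes:
        x = ref.copy()
        x[i] = n
        vals.append(model(x) - f0)
    components.append(np.array(vals))

# Cost grows as d*5 + 1 model runs instead of 5**d for a full tensor grid;
# each components[i] samples a 1-D ANOVA term, ready for a 1-D PCE fit.
print(f0, [np.round(ci, 3) for ci in components])
```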
Probabilistic Parameter Uncertainty Analysis of Single Input Single Output Control Systems
NASA Technical Reports Server (NTRS)
Smith, Brett A.; Kenny, Sean P.; Crespo, Luis G.
2005-01-01
The current standards for handling uncertainty in control systems use interval bounds to define the uncertain parameters. This approach gives no information about the likelihood of system performance, but simply gives the response bounds. When used in design, current methods of μ-analysis can lead to overly conservative controller design. With these methods, worst-case conditions are weighted equally with the most likely conditions. This research explores a unique approach for probabilistic analysis of control systems. Current reliability methods are examined, showing the strengths of each in handling probability. A hybrid method is developed using these reliability tools for efficiently propagating probabilistic uncertainty through classical control analysis problems. The method developed is applied to classical response analysis as well as analysis methods that explore the effects of the uncertain parameters on stability and performance metrics. The benefits of using this hybrid approach for calculating the mean, variance, and cumulative distribution functions of responses are shown. Results of the probabilistic analysis of a missile pitch control system, and a non-collocated mass-spring system, show the added information provided by this hybrid analysis.
NASA Astrophysics Data System (ADS)
Dai, Cheng; Xue, Liang; Zhang, Dongxiao; Guadagnini, Alberto
2018-02-01
The authors regret that in the Acknowledgments Section an incorrect Grant Agreement number was reported for the Project "Furthering the knowledge Base for Reducing the Environmental Footprint of Shale Gas Development" FRACRISK. The correct Grant Agreement number is 636811.
Collocation and Galerkin Time-Stepping Methods
NASA Technical Reports Server (NTRS)
Huynh, H. T.
2011-01-01
We study the numerical solutions of ordinary differential equations by one-step methods where the solution at $t_n$ is known and that at $t_{n+1}$ is to be calculated. The approaches employed are collocation, continuous Galerkin (CG) and discontinuous Galerkin (DG). Relations among these three approaches are established. A quadrature formula using s evaluation points is employed for the Galerkin formulations. We show that with such a quadrature, the CG method is identical to the collocation method using the quadrature points as collocation points. Furthermore, if the quadrature formula is the right Radau one (including $t_{n+1}$), then the DG and CG methods also become identical, and they reduce to the Radau IIA collocation method. In addition, we present a generalization of DG that yields a method identical to CG and collocation with arbitrary collocation points. Thus, the collocation, CG, and generalized DG methods are equivalent, and the latter two methods can be formulated using the differential instead of the integral equation. Finally, all schemes discussed can be cast as s-stage implicit Runge-Kutta methods.
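As a minimal illustration of the Radau IIA collocation scheme that the CG/DG equivalence reduces to, the sketch below takes implicit steps of the 2-stage (order 3) method on a mildly stiff scalar test problem. The test equation and step size are arbitrary demonstration choices.

```python
import numpy as np
from scipy.optimize import fsolve

# Butcher tableau of the 2-stage Radau IIA method (order 3, stiffly accurate).
A = np.array([[5/12, -1/12],
              [3/4,   1/4]])
b = np.array([3/4, 1/4])
c = np.array([1/3, 1.0])

def radau_step(f, t, y, h):
    # Stage equations K_i = f(t + c_i*h, y + h*sum_j A_ij*K_j), solved implicitly.
    def residual(K):
        return K - np.array([f(t + c[i] * h, y + h * (A[i] @ K)) for i in range(2)])
    K = fsolve(residual, np.full(2, f(t, y)))
    return y + h * (b @ K)

# Mildly stiff test problem: y' = -50*(y - cos t), y(0) = 0.
f = lambda t, y: -50.0 * (y - np.cos(t))
t, y, h = 0.0, 0.0, 0.05
for _ in range(20):
    y = radau_step(f, t, y, h)
    t += h
exact = (2500 * np.cos(t) + 50 * np.sin(t)) / 2501   # particular solution (transient decayed)
print(f"t={t:.2f}  radau={y:.6f}  reference={exact:.6f}")
```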
Stochastic Simulation and Forecast of Hydrologic Time Series Based on Probabilistic Chaos Expansion
NASA Astrophysics Data System (ADS)
Li, Z.; Ghaith, M.
2017-12-01
Hydrological processes are characterized by many complex features, such as nonlinearity, dynamics and uncertainty. How to quantify and address such complexities and uncertainties has been a challenging task for water engineers and managers for decades. To support robust uncertainty analysis, an innovative approach for the stochastic simulation and forecast of hydrologic time series is developed in this study. Probabilistic Chaos Expansions (PCEs) are established through probabilistic collocation to tackle uncertainties associated with the parameters of traditional hydrological models. The uncertainties are quantified in model outputs as Hermite polynomials with respect to standard normal random variables. Subsequently, multivariate analysis techniques are used to analyze the complex nonlinear relationships between meteorological inputs (e.g., temperature, precipitation, evapotranspiration) and the coefficients of the Hermite polynomials. With the established relationships between model inputs and PCE coefficients, forecasts of hydrologic time series can be generated and the uncertainties in the future time series can be further addressed. The proposed approach is demonstrated using a case study in China and is compared to a traditional stochastic simulation technique, the Markov chain Monte Carlo (MCMC) method. Results show that the proposed approach can serve as a reliable proxy for complicated hydrological models. It can provide probabilistic forecasting in a more computationally efficient manner than the traditional MCMC method. This work provides technical support for addressing uncertainties associated with hydrological modeling and for enhancing the reliability of hydrological modeling results. Applications of the developed approach can be extended to many other complicated geophysical and environmental modeling systems to support the associated uncertainty quantification and risk analysis.
Accurate, efficient, and (iso)geometrically flexible collocation methods for phase-field models
NASA Astrophysics Data System (ADS)
Gomez, Hector; Reali, Alessandro; Sangalli, Giancarlo
2014-04-01
We propose new collocation methods for phase-field models. Our algorithms are based on isogeometric analysis, a new technology that makes use of functions from computational geometry, such as Non-Uniform Rational B-Splines (NURBS). NURBS exhibit excellent approximability and controllable global smoothness, and can exactly represent most geometries encapsulated in Computer Aided Design (CAD) models. These attributes permitted us to derive accurate, efficient, and geometrically flexible collocation methods for phase-field models. The performance of our method is demonstrated by several numerical examples of phase separation modeled by the Cahn-Hilliard equation. We feel that our method successfully combines the geometrical flexibility of finite elements with the accuracy and simplicity of pseudo-spectral collocation methods, and is a viable alternative to classical collocation methods.
ERIC Educational Resources Information Center
Varlamova, Elena V.; Naciscione, Anita; Tulusina, Elena A.
2016-01-01
The relevance of the issue stated in the article is determined by the lack of research devoted to methods of teaching English and German collocations. The aim of our work is to determine methods of teaching English and German collocations to Russian university students studying foreign languages through experimental testing.…
A two-level stochastic collocation method for semilinear elliptic equations with random coefficients
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Luoping; Zheng, Bin; Lin, Guang
In this work, we propose a novel two-level discretization for solving semilinear elliptic equations with random coefficients. Motivated by the two-grid method for deterministic partial differential equations (PDEs) introduced by Xu, our two-level stochastic collocation method utilizes a two-grid finite element discretization in the physical space and a two-level collocation method in the random domain. In particular, we solve semilinear equations on a coarse mesh $\mathcal{T}_H$ with a low-level stochastic collocation (corresponding to the polynomial space $\mathcal{P}_P$) and solve linearized equations on a fine mesh $\mathcal{T}_h$ using high-level stochastic collocation (corresponding to the polynomial space $\mathcal{P}_p$). We prove that the approximate solution obtained from this method achieves the same order of accuracy as that from solving the original semilinear problem directly by the stochastic collocation method with $\mathcal{T}_h$ and $\mathcal{P}_p$. The two-level method is computationally more efficient, especially for nonlinear problems with high random dimensions. Numerical experiments are also provided to verify the theoretical results.
NASA Astrophysics Data System (ADS)
Liu, L. H.; Tan, J. Y.
2007-02-01
A least-squares collocation meshless method is employed for solving radiative heat transfer in absorbing, emitting and scattering media. The least-squares collocation meshless method for radiative transfer is based on the discrete ordinates equation. A moving least-squares approximation is applied to construct the trial functions. In addition to the collocation points used to construct the trial functions, a number of auxiliary points are adopted to form the total residual of the problem. The least-squares technique is used to obtain the solution of the problem by minimizing the sum of the residuals at all collocation and auxiliary points. Three numerical examples are studied to illustrate the performance of this new solution method. The numerical results are compared with other benchmark approximate solutions. The comparison shows that the least-squares collocation meshless method is efficient, accurate and stable, and can be used for solving radiative heat transfer in absorbing, emitting and scattering media.
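The least-squares collocation idea, minimizing the equation residual at more points than there are unknown coefficients together with the boundary conditions, can be shown on a toy problem. The sketch below uses a global polynomial trial function for u' = -u, u(0) = 1; the paper's moving least-squares trial functions and radiative transfer equation are of course more involved.

```python
import numpy as np

# Least-squares collocation sketch for u'(x) = -u(x), u(0) = 1 on [0, 1]:
# approximate u by sum_k c_k x^k and minimize the squared residual at
# more points (collocation + auxiliary) than there are unknowns.
deg = 6
xs = np.linspace(0.0, 1.0, 25)                    # residual evaluation points

V = np.vander(xs, deg + 1, increasing=True)       # V[:, k] = x^k  (u values)
dV = np.zeros_like(V)
dV[:, 1:] = V[:, :-1] * np.arange(1, deg + 1)     # d/dx x^k = k x^(k-1)

# Rows for the ODE residual u' + u = 0, plus one row enforcing u(0) = 1
# (the boundary condition is imposed in the least-squares sense here).
A = np.vstack([dV + V, np.eye(1, deg + 1)])
rhs = np.hstack([np.zeros(len(xs)), 1.0])
coeffs, *_ = np.linalg.lstsq(A, rhs, rcond=None)

x_test = np.array([0.5, 1.0])
u = np.vander(x_test, deg + 1, increasing=True) @ coeffs
print(u, np.exp(-x_test))    # approximation vs exact solution exp(-x)
```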
Impact of uncertainties in free stream conditions on the aerodynamics of a rectangular cylinder
NASA Astrophysics Data System (ADS)
Mariotti, Alessandro; Shoeibi Omrani, Pejman; Witteveen, Jeroen; Salvetti, Maria Vittoria
2015-11-01
The BARC benchmark deals with the flow around a rectangular cylinder with chord-to-depth ratio equal to 5. This flow configuration is of practical interest for civil and industrial structures and is characterized by massively separated flow and unsteadiness. In a recent review of BARC results, significant dispersion was observed both in experimental and numerical predictions of some flow quantities, which are extremely sensitive to the various uncertainties that may be present in experiments and simulations. Besides modeling and numerical errors, in simulations it is difficult to exactly reproduce the experimental conditions due to uncertainties in the set-up parameters, which sometimes cannot be exactly controlled or characterized. Probabilistic methods and URANS simulations are used to investigate the impact of the uncertainties in the following set-up parameters: the angle of incidence, and the free stream longitudinal turbulence intensity and length scale. Stochastic collocation is employed to perform the probabilistic propagation of the uncertainty. The discretization and modeling errors are estimated by repeating the same analysis for different grids and turbulence models. The results obtained for different assumed PDFs of the set-up parameters are also compared.
Mangado, Nerea; Pons-Prats, Jordi; Coma, Martí; Mistrík, Pavel; Piella, Gemma; Ceresa, Mario; González Ballester, Miguel Á
2018-01-01
Cochlear implantation (CI) is a complex surgical procedure that restores hearing in patients with severe deafness. The successful outcome of the implanted device relies on a group of factors, some of them unpredictable or difficult to control. Uncertainties in the electrode array position and the electrical properties of the bone make it difficult to accurately compute the current propagation delivered by the implant and the resulting neural activation. In this context, we use uncertainty quantification methods to explore how these uncertainties propagate through all the stages of CI computational simulations. To this end, we employ an automatic framework, ranging from the finite element generation of CI models to the assessment of the neural response induced by the implant stimulation. To estimate the confidence intervals of the simulated neural response, we propose two approaches. First, we encode the variability of the cochlear morphology among the population through a statistical shape model. This allows us to generate a population of virtual patients using Monte Carlo sampling and to assign to each of them a set of parameter values according to a statistical distribution. The framework is implemented and parallelized in a High Throughput Computing environment that makes it possible to maximize the available computing resources. Second, we perform a patient-specific study to evaluate the computed neural response and seek the optimal post-implantation stimulus levels. Considering a single cochlear morphology, the uncertainty in tissue electrical resistivity and surgical insertion parameters is propagated using the Probabilistic Collocation method, which reduces the number of samples to evaluate. Results show that bone resistivity has the highest influence on CI outcomes. In conjunction with the variability of the cochlear length, the worst outcomes are obtained for small cochleae with high resistivity values. However, the effect of the surgical insertion length on the CI outcomes could not be clearly observed, since its impact may be concealed by the other considered parameters. Whereas the Monte Carlo approach implies a high computational cost, Probabilistic Collocation presents a suitable trade-off between precision and computational time. Results suggest that the proposed framework has great potential to help in both surgical planning decisions and the audiological setting process.
Code of Federal Regulations, 2010 CFR
2010-10-01
... elements include, but are not limited to: (1) Physical collocation and virtual collocation at the premises... seeking a particular collocation arrangement, either physical or virtual, is entitled to a presumption... incumbent LEC shall be required to provide virtual collocation, except at points where the incumbent LEC...
NASA Astrophysics Data System (ADS)
Agarwal, P.; El-Sayed, A. A.
2018-06-01
In this paper, a new numerical technique for solving the fractional-order diffusion equation is introduced. The technique is based on the nonstandard finite difference (NSFD) method and the Chebyshev collocation method, where the fractional derivatives are described in the Caputo sense. The Chebyshev collocation method combined with the NSFD method is used to convert the problem into a system of algebraic equations, which is then solved numerically using Newton's iteration method. The applicability, reliability, and efficiency of the presented technique are demonstrated through numerical examples.
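A standard building block for the Chebyshev collocation part of such a scheme is the differentiation matrix on Gauss-Lobatto points. The construction below follows the classical formula (as popularized by Trefethen's Spectral Methods in MATLAB) and checks spectral accuracy on a smooth function; it is a generic sketch, not the paper's specific fractional-diffusion discretization.

```python
import numpy as np

def cheb(N):
    """Chebyshev differentiation matrix on the N+1 Gauss-Lobatto points."""
    x = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    X = np.tile(x, (N + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))   # off-diagonal entries
    D -= np.diag(D.sum(axis=1))                       # negative row sums on diagonal
    return D, x

# Verify spectral accuracy: differentiate exp(x) on [-1, 1].
D, x = cheb(16)
err = np.max(np.abs(D @ np.exp(x) - np.exp(x)))
print("max derivative error:", err)   # spectrally small for smooth functions
```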
Six-Degree-of-Freedom Trajectory Optimization Utilizing a Two-Timescale Collocation Architecture
NASA Technical Reports Server (NTRS)
Desai, Prasun N.; Conway, Bruce A.
2005-01-01
Six-degree-of-freedom (6DOF) trajectory optimization of a reentry vehicle is solved using a two-timescale collocation methodology. This class of 6DOF trajectory problems is characterized by two distinct timescales in the governing equations, where a subset of the states have high-frequency dynamics (the rotational equations of motion) while the remaining states (the translational equations of motion) vary comparatively slowly. With conventional collocation methods, the 6DOF problem size becomes extraordinarily large and difficult to solve. Utilizing the two-timescale collocation architecture, the problem size is reduced significantly. The converged solution shows a realistic landing profile and captures the appropriate high-frequency rotational dynamics. A large reduction in the overall problem size (by 55%) is attained with the two-timescale architecture as compared to the conventional single-timescale collocation method. Consequently, optimal 6DOF trajectory problems can now be solved efficiently using collocation, which was not previously possible for a system with two distinct timescales in the governing states.
NASA Technical Reports Server (NTRS)
Carpenter, Mark H.; Fisher, Travis C.; Nielsen, Eric J.; Frankel, Steven H.
2013-01-01
Nonlinear entropy stability and a summation-by-parts framework are used to derive provably stable, polynomial-based spectral collocation methods of arbitrary order. The new methods are closely related to discontinuous Galerkin spectral collocation methods commonly known as DGFEM, but exhibit a more general entropy stability property. Although the new schemes are applicable to a broad class of linear and nonlinear conservation laws, emphasis herein is placed on the entropy stability of the compressible Navier-Stokes equations.
Pereira, Félix Monteiro; Oliveira, Samuel Conceição
2016-11-01
In this article, the occurrence of a dead core in catalytic particles containing immobilized enzymes is analyzed for Michaelis-Menten kinetics. An assessment of numerical methods is performed to solve the boundary value problem generated by the mathematical modeling of diffusion and reaction processes under steady-state and isothermal conditions. Two classes of numerical methods were employed: shooting and collocation. The shooting method used the ode function from the Scilab software. The collocation methods included: that implemented by the bvode function of Scilab, orthogonal collocation, and orthogonal collocation on finite elements. The methods were validated for simplified forms of the Michaelis-Menten equation (zero-order and first-order kinetics), for which analytical solutions are available. Among the methods covered in this article, orthogonal collocation on finite elements proved to be the most robust and efficient method for solving the boundary value problem concerning Michaelis-Menten kinetics. For this enzyme kinetics, it was found that a dead core can occur when certain diffusion-reaction conditions within the catalytic particle are satisfied. The application of the concepts and methods presented in this study will allow for more generalized analyses and more accurate designs of heterogeneous enzymatic reactors.
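For intuition, the dimensionless slab version of this diffusion-reaction boundary value problem can be solved with SciPy's collocation-based BVP solver. The Thiele modulus and saturation parameter below are assumed demonstration values, not those of the article.

```python
import numpy as np
from scipy.integrate import solve_bvp

# Dimensionless diffusion-reaction in a slab with Michaelis-Menten kinetics:
#   u'' = phi^2 * u / (1 + beta*u),  u'(0) = 0 (symmetry),  u(1) = 1 (surface)
phi, beta = 5.0, 0.5   # assumed Thiele modulus and saturation parameter

def rhs(x, y):
    u, du = y
    return np.vstack([du, phi**2 * u / (1.0 + beta * u)])

def bc(ya, yb):
    return np.array([ya[1], yb[0] - 1.0])

x = np.linspace(0.0, 1.0, 41)
y0 = np.vstack([x**2, 2 * x])          # rough initial guess compatible with the BCs
sol = solve_bvp(rhs, bc, x, y0)        # solve_bvp itself uses a collocation algorithm
print("center concentration u(0) =", sol.sol(0.0)[0])
```

A vanishing centerline concentration u(0) signals the near-dead-core regime the article analyzes.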
NASA Astrophysics Data System (ADS)
Blakely, Christopher D.
This dissertation has three main goals: (1) to explore the anatomy of meshless collocation approximation methods that have recently gained attention in the numerical analysis community; (2) to demonstrate numerically why the meshless collocation method should become an attractive alternative to standard finite-element methods, due to the simplicity of its implementation and its high-order convergence properties; and (3) to propose a meshless collocation method for large-scale computational geophysical fluid dynamics models. We provide numerical verification and validation of the meshless collocation scheme applied to the rotational shallow-water equations on the sphere and demonstrate computationally that the proposed model can compete with existing high-performance methods for approximating the shallow-water equations, such as the SEAM (spectral-element atmospheric model) developed at NCAR. A detailed analysis of the parallel implementation of the model is given, along with the introduction of parallel algorithmic routines for high-performance simulation of the model. We analyze the programming and computational aspects of the model using Fortran 90 and the Message Passing Interface (MPI) library, along with software and hardware specifications and performance tests. Details on many aspects of the implementation with regard to performance, optimization, and stabilization are given. In order to verify the mathematical correctness of the algorithms presented and to validate the performance of the meshless collocation shallow-water model, we conclude the thesis with numerical experiments on some standardized test cases for the shallow-water equations on the sphere using the proposed method.
NASA Technical Reports Server (NTRS)
Zang, Thomas A.; Mathelin, Lionel; Hussaini, M. Yousuff; Bataille, Francoise
2003-01-01
This paper describes a fully spectral, Polynomial Chaos method for the propagation of uncertainty in numerical simulations of compressible, turbulent flow, as well as a novel stochastic collocation algorithm for the same application. The stochastic collocation method is key to the efficient use of stochastic methods on problems with complex nonlinearities, such as those associated with the turbulence model equations in compressible flow and for CFD schemes requiring solution of a Riemann problem. Both methods are applied to compressible flow in a quasi-one-dimensional nozzle. The stochastic collocation method is roughly an order of magnitude faster than the fully Galerkin Polynomial Chaos method on the inviscid problem.
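A bare-bones stochastic collocation loop looks like the sketch below: run the deterministic solver once per quadrature node of the input distribution and recombine with the quadrature weights to obtain output statistics. The scalar "model" and the Gaussian input are stand-ins for the nozzle-flow solver and its uncertain inputs.

```python
import numpy as np
from numpy.polynomial import hermite_e as He

# Hypothetical scalar response standing in for the nozzle-flow solver output.
def model(q):
    return np.tanh(q) + 0.1 * q ** 2

nodes, weights = He.hermegauss(7)
weights /= np.sqrt(2.0 * np.pi)      # normalize to a standard-normal measure

q = 1.0 + 0.2 * nodes                # assumed uncertain input q ~ N(1, 0.2^2)
y = model(q)                         # one deterministic solve per collocation node
mean = weights @ y
var = weights @ (y - mean) ** 2
print(f"mean={mean:.6f}  std={np.sqrt(var):.6f}")
```

Because the nodes are fixed in advance and the solver is treated as a black box, the approach is non-intrusive, which is what makes it attractive for complex nonlinearities such as turbulence models and Riemann solvers.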
NASA Technical Reports Server (NTRS)
Hussaini, M. Y.; Kopriva, D. A.; Patera, A. T.
1987-01-01
This review covers the theory and application of spectral collocation methods. Section 1 describes the fundamentals, and summarizes results pertaining to spectral approximations of functions. Some stability and convergence results are presented for simple elliptic, parabolic, and hyperbolic equations. Applications of these methods to fluid dynamics problems are discussed in Section 2.
Comparison of Implicit Collocation Methods for the Heat Equation
NASA Technical Reports Server (NTRS)
Kouatchou, Jules; Jezequel, Fabienne; Zukor, Dorothy (Technical Monitor)
2001-01-01
We combine a high-order compact finite difference scheme to approximate spatial derivatives and collocation techniques for the time component to numerically solve the two-dimensional heat equation. We use two approaches to implement the collocation methods. The first is based on an explicit computation of the coefficients of polynomials and the second relies on differential quadrature. We compare them by studying their merits and analyzing their numerical performance. All our computations, based on parallel algorithms, are carried out on the CRAY SV1.
ERIC Educational Resources Information Center
Gheisari, Nouzar; Yousofi, Nouroldin
2016-01-01
The effectiveness of different methods of teaching collocational expressions in ESL/EFL contexts has been a point of debate for more than two decades, with some believing in explicit and others in implicit instruction of collocations. In this regard, the present study aimed at finding out which kind of instruction is more…
The Chebyshev-Legendre method: Implementing Legendre methods on Chebyshev points
NASA Technical Reports Server (NTRS)
Don, Wai Sun; Gottlieb, David
1993-01-01
We present a new collocation method for the numerical solution of partial differential equations. This method uses the Chebyshev collocation points, but because of the way the boundary conditions are implemented, it has all the advantages of the Legendre methods. In particular, L2 estimates can be obtained easily for hyperbolic and parabolic problems.
Elastostatic stress analysis of orthotropic rectangular center-cracked plates
NASA Technical Reports Server (NTRS)
Gyekenyesi, G. S.; Mendelson, A.
1972-01-01
A mapping-collocation method was developed for the elastostatic stress analysis of finite, anisotropic plates with centrally located traction-free cracks. The method essentially consists of mapping the crack into the unit circle and satisfying the crack boundary conditions exactly with the help of Muskhelishvili's function extension concept. The conditions on the outer boundary are satisfied approximately by applying the method of least-squares boundary collocation. A parametric study of finite-plate stress intensity factors, employing this mapping-collocation method, is presented. It shows the effects of varying material properties, orientation angle, and crack-length-to-plate-width and plate-height-to-plate-width ratios for rectangular orthotropic plates under constant tensile and shear loads.
Efficient Jacobi-Gauss collocation method for solving initial value problems of Bratu type
NASA Astrophysics Data System (ADS)
Doha, E. H.; Bhrawy, A. H.; Baleanu, D.; Hafez, R. M.
2013-09-01
In this paper, we propose the shifted Jacobi-Gauss collocation spectral method for solving initial value problems of Bratu type, which are widely applicable in fuel ignition of combustion theory and heat transfer. The spatial approximation is based on shifted Jacobi polynomials $J_n^{(\alpha,\beta)}(x)$ with $\alpha, \beta \in (-1, \infty)$, $x \in [0, 1]$, and $n$ the polynomial degree. The shifted Jacobi-Gauss points are used as collocation nodes. Illustrative examples are discussed to demonstrate the validity and applicability of the proposed technique. Comparing the numerical results of the proposed method with some well-known results shows that the method is efficient and gives excellent numerical results.
Adaptive wavelet collocation methods for initial value boundary problems of nonlinear PDE's
NASA Technical Reports Server (NTRS)
Cai, Wei; Wang, Jian-Zhong
1993-01-01
We have designed a cubic spline wavelet decomposition for the Sobolev space $H_0^2(I)$, where I is a bounded interval. Based on a special 'point-wise orthogonality' of the wavelet basis functions, a fast Discrete Wavelet Transform (DWT) is constructed. This transform maps discrete samples of a function to its wavelet expansion coefficients in O(N log N) operations. Using this transform, we propose a collocation method for initial-boundary value problems of nonlinear PDEs. We then test the efficiency of the DWT and apply the collocation method to solve linear and nonlinear PDEs.
Recent advances in (soil moisture) triple collocation analysis
USDA-ARS?s Scientific Manuscript database
To date, triple collocation (TC) analysis is one of the most important methods for the global scale evaluation of remotely sensed soil moisture data sets. In this study we review existing implementations of soil moisture TC analysis as well as investigations of the assumptions underlying the method....
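The core TC estimator reviewed above can be written in a few lines using covariance notation: with three collocated series whose errors are mutually independent, each product's error variance follows from the sample covariances. The synthetic data below serve only to verify that the estimator recovers the prescribed noise levels.

```python
import numpy as np

def triple_collocation(x, y, z):
    """Error variances of three collocated datasets with independent errors."""
    # Covariance-notation TC: err_var_x = Var(x) - Cov(x,y)*Cov(x,z)/Cov(y,z), etc.
    C = np.cov(np.vstack([x, y, z]))
    ex = C[0, 0] - C[0, 1] * C[0, 2] / C[1, 2]
    ey = C[1, 1] - C[0, 1] * C[1, 2] / C[0, 2]
    ez = C[2, 2] - C[0, 2] * C[1, 2] / C[0, 1]
    return ex, ey, ez

# Synthetic demonstration: one truth signal, three independent noisy observers.
rng = np.random.default_rng(0)
truth = rng.standard_normal(5000)
x = truth + 0.3 * rng.standard_normal(5000)
y = truth + 0.5 * rng.standard_normal(5000)
z = truth + 0.7 * rng.standard_normal(5000)
print(triple_collocation(x, y, z))   # approximately (0.09, 0.25, 0.49)
```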
Uncertainty analysis in 3D global models: Aerosol representation in MOZART-4
NASA Astrophysics Data System (ADS)
Gasore, J.; Prinn, R. G.
2012-12-01
The Probabilistic Collocation Method (PCM) has been proven to be an efficient general method for uncertainty analysis in atmospheric models (Tatang et al. 1997, Cohen & Prinn 2011). However, its application has been mainly limited to urban- and regional-scale models and chemical source-sink models, because of the drastic increase in computational cost when the dimension of the uncertain parameters increases. Moreover, the high-dimensional output of global models has to be reduced to allow a computationally reasonable number of polynomials to be generated. This dimensional reduction has been mainly achieved by grouping the model grids into a few regions based on prior knowledge and expectations, urban versus rural for instance. As the model output is used to estimate the coefficients of the polynomial chaos expansion (PCE), the arbitrariness in the regional aggregation can generate problems in estimating uncertainties. To address these issues in a complex model, we apply the probabilistic collocation method of uncertainty analysis to the aerosol representation in MOZART-4, a 3D global chemical transport model (Emmons et al., 2010). Thereafter, we deterministically delineate the model output surface into regions of homogeneous response using the method of Principal Component Analysis. This allows the quantification of the uncertainty associated with the dimensional reduction. Because only a bulk mass is calculated online in MOZART-4, a lognormal number distribution with a priori fixed scale and location parameters is assumed in order to calculate the surface area for heterogeneous reactions involving tropospheric oxidants. We have applied the PCM to the six parameters of the lognormal number distributions of black carbon, organic carbon and sulfate. We have carried out Monte Carlo sampling from the probability density functions of the six uncertain parameters, using the reduced PCE model. The global mean concentration of major tropospheric oxidants did not show a significant variation in response to the variation in input parameters. However, a substantial variation at regional and temporal scales was found. Tatang M. A., Pan W., Prinn R. G., McRae G. J., An efficient method for parametric uncertainty analysis of numerical geophysical models, J. Geophys. Res., 102, 21925-21932, 1997. Cohen J. B. and Prinn R. G., Development of a fast, urban chemistry metamodel for inclusion in global models, Atmos. Chem. Phys., 11, 7629-7656, doi:10.5194/acp-11-7629-2011, 2011. Emmons L. K., Walters S., Hess P. G., Lamarque J.-F., Pfister G. G., Fillmore D., Granier C., Guenther A., Kinnison D., Laepple T., Orlando J., Tie X., Tyndall G., Wiedinmyer C., Baughcum S. L., Kloster S., Description and evaluation of the Model for Ozone and Related chemical Tracers, version 4 (MOZART-4), Geosci. Model Dev., 3, 43-67, 2010.
Domain identification in impedance computed tomography by spline collocation method
NASA Technical Reports Server (NTRS)
Kojima, Fumio
1990-01-01
A method for estimating an unknown domain in elliptic boundary value problems is considered. The problem is formulated as an inverse problem of integral equations of the second kind. A computational method is developed using a spline collocation scheme. The results can be applied to the inverse problem of impedance computed tomography (ICT) for image reconstruction.
NASA Astrophysics Data System (ADS)
Tirani, M. D.; Maleki, M.; Kajani, M. T.
2014-11-01
A numerical method for solving the Lane-Emden equations of the polytropic index α when 4.75 ≤ α ≤ 5 is introduced. The method is based upon nonclassical Gauss-Radau collocation points and Freud type weights. Nonclassical orthogonal polynomials, nonclassical Radau points and weighted interpolation are introduced and are utilized in the interval [0,1]. A smooth, strictly monotonic transformation is used to map the infinite domain x ∈ [0,∞) onto a half-open interval t ∈ [0,1). The resulting problem on the finite interval is then transcribed to a system of nonlinear algebraic equations using collocation. The method is easy to implement and yields very accurate results.
NASA Astrophysics Data System (ADS)
Liao, Qinzhuo; Zhang, Dongxiao; Tchelepi, Hamdi
2017-02-01
A new computational method is proposed for efficient uncertainty quantification of multiphase flow in porous media with stochastic permeability. For pressure estimation, it combines the dimension-adaptive stochastic collocation method on Smolyak sparse grids and the Kronrod-Patterson-Hermite nested quadrature formulas. For saturation estimation, an additional stage is developed, in which the pressure and velocity samples are first generated by the sparse grid interpolation and then substituted into the transport equation to solve for the saturation samples, to address the low regularity problem of the saturation. Numerical examples are presented for multiphase flow with stochastic permeability fields to demonstrate accuracy and efficiency of the proposed two-stage adaptive stochastic collocation method on nested sparse grids.
NASA Technical Reports Server (NTRS)
Gottlieb, David; Shu, Chi-Wang
1994-01-01
The paper presents a method to recover exponential accuracy at all points (including at the discontinuities themselves) from the knowledge of an approximation to the interpolation polynomial (or trigonometric polynomial). We show that if we are given the collocation point values (or a highly accurate approximation) at the Gauss or Gauss-Lobatto points, we can reconstruct a uniformly exponentially convergent approximation to the function f(x) in any sub-interval of analyticity. The proof covers the cases of Fourier, Chebyshev, Legendre, and more general Gegenbauer collocation methods.
Inversion of the strain-life and strain-stress relationships for use in metal fatigue analysis
NASA Technical Reports Server (NTRS)
Manson, S. S.
1979-01-01
The paper presents closed-form solutions (collocation method and spline-function method) for the constants of the cyclic fatigue life equation so that they can be easily incorporated into cumulative damage analysis. The collocation method involves conformity with the experimental curve at specific life values. The spline-function method is such that the basic life relation is expressed as a two-part function, one applicable at strains above the transition strain (strain at intersection of elastic and plastic lines), the other below. An illustrative example is treated by both methods. It is shown that while the collocation representation has the advantage of simplicity of form, the spline-function representation can be made more accurate over a wider life range, and is simpler to use.
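The collocation idea of the first method, choosing the four constants of the strain-life relation so the curve passes through selected experimental points, can be sketched as a small nonlinear solve. The modulus and the four (life, strain-amplitude) points below are hypothetical, chosen to be roughly consistent with typical steel values.

```python
import numpy as np
from scipy.optimize import fsolve

E = 200e3  # MPa, assumed elastic modulus

# Strain-life relation: eps_a = (sf/E)*(2N)^b + ef*(2N)^c  (elastic + plastic parts)
def eps_a(N, sf, b, ef, c):
    return (sf / E) * (2 * N) ** b + ef * (2 * N) ** c

# Collocation: force the curve through four hypothetical (life, strain) points
# (roughly consistent with sf' ~ 1000 MPa, b ~ -0.1, ef' ~ 0.5, c ~ -0.55).
N_pts   = np.array([1e2, 1e3, 1e4, 1e6])
eps_pts = np.array([3.01e-2, 9.99e-3, 4.01e-3, 1.34e-3])

def residual(p):
    return eps_a(N_pts, *p) - eps_pts

sf, b, ef, c = fsolve(residual, x0=[900.0, -0.09, 0.6, -0.6])
print(f"sigma_f'={sf:.0f} MPa  b={b:.3f}  eps_f'={ef:.3f}  c={c:.3f}")
```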
NASA Astrophysics Data System (ADS)
Gotovac, Hrvoje; Srzic, Veljko
2014-05-01
Contaminant transport in natural aquifers is a complex, multiscale process that is frequently studied using different Eulerian, Lagrangian and hybrid numerical methods. Conservative solute transport is typically modeled using the advection-dispersion equation (ADE). Despite the large number of numerical methods that have been developed to solve it, the accurate numerical solution of the ADE still presents formidable challenges. In particular, current numerical solutions of multidimensional advection-dominated transport in non-uniform velocity fields are affected by one or all of the following problems: numerical dispersion that introduces artificial mixing and dilution, grid orientation effects, unresolved spatial and temporal scales, and unphysical numerical oscillations (e.g., Herrera et al., 2009; Bosso et al., 2012). In this work we present an Eulerian-Lagrangian Adaptive Fup Collocation Method (ELAFCM) based on Fup basis functions and a collocation approach for spatial approximation, together with explicit stabilized Runge-Kutta-Chebyshev temporal integration (the public domain routine SERK2), which is especially well suited for stiff parabolic problems. The spatial adaptive strategy is based on Fup basis functions, which are closely related to wavelets and splines: they are compactly supported basis functions that exactly describe algebraic polynomials and enable a multiresolution adaptive analysis (MRA). The MRA is performed here via the Fup Collocation Transform (FCT), so that at each time step the concentration solution is decomposed using only a few significant Fup basis functions on an adaptive collocation grid with appropriate scales (frequencies) and locations, a desired level of accuracy and a near-minimum computational cost. The FCT adds more collocation points and higher resolution levels only in sensitive zones with sharp concentration gradients, fronts and/or narrow transition zones. According to our recent results, there is no need to solve a large linear system on the adaptive grid, because each Fup coefficient is obtained from predefined formulas equating the Fup expansion around the corresponding collocation point with a particular collocation operator based on a few surrounding solution values. Furthermore, each Fup coefficient can be obtained independently, which is perfectly suited for parallel processing. The adaptive grid at each time step is obtained from the solution of the previous time step or the initial conditions, together with the advective Lagrangian step in the current time step according to the velocity field and continuous streamlines. On the other hand, we implement the explicit stabilized routine SERK2 for the dispersive Eulerian part of the solution in the current time step on the obtained spatially adaptive grid. The overall adaptive concept does not require solving large linear systems for the spatial and temporal approximation of conservative transport. Moreover, this new Eulerian-Lagrangian collocation scheme resolves all the aforementioned numerical problems due to its adaptive nature and its ability to control numerical errors in space and time. The proposed method solves advection in a Lagrangian way, eliminating the problems of Eulerian methods, while the optimal collocation grid efficiently describes the solution and boundary conditions, eliminating the use of a large number of particles and other problems of Lagrangian methods.
Finally, numerical tests show that this approach yields not only an accurate velocity field, but also conservative transport, even in highly heterogeneous porous media, resolving all spatial and temporal scales of the concentration field.
An empirical understanding of triple collocation evaluation measure
NASA Astrophysics Data System (ADS)
Scipal, Klaus; Doubkova, Marcela; Hegyova, Alena; Dorigo, Wouter; Wagner, Wolfgang
2013-04-01
The triple collocation method is an advanced evaluation method that has been used in the soil moisture field for only about half a decade. The method requires three datasets with independent error structures that represent an identical phenomenon. The main advantages of the method are that it (a) does not require a reference dataset that has to be considered to represent the truth, (b) limits the effect of random and systematic errors of the other two datasets, and (c) simultaneously assesses the errors of all three datasets. The objective of this presentation is to assess the triple collocation error (Tc) of the ASAR Global Mode Surface Soil Moisture (GM SSM) 1 km dataset and highlight problems of the method related to its ability to cancel the effect of errors in ancillary datasets. In particular, the goals are (a) to investigate trends in Tc related to the change in spatial resolution from 5 to 25 km, (b) to investigate trends in Tc related to the choice of a hydrological model, and (c) to study the relationship between Tc and other absolute evaluation methods (namely RMSE and error propagation, EP). The triple collocation method is implemented using ASAR GM, AMSR-E, and a model (either AWRA-L, GLDAS-NOAH, or ERA-Interim). First, the significance of the relationship between the three soil moisture datasets, a prerequisite for the triple collocation method, was tested. Second, the trends in Tc related to the choice of the third reference dataset and to scale were assessed. For this purpose the triple collocation was repeated, replacing AWRA-L with two different globally available model reanalysis datasets operating at different spatial resolutions (ERA-Interim and GLDAS-NOAH). Finally, the retrieved results were compared to the results of the RMSE and EP evaluation measures. Our results demonstrate that the Tc method does not eliminate the random and time-variant systematic errors of the second and third datasets used in the Tc. The possible reasons include (a) that the TC method could not fully function with datasets acting at very different spatial resolutions, or (b) that the errors were not fully independent as initially assumed.
Understanding a reference-free impedance method using collocated piezoelectric transducers
NASA Astrophysics Data System (ADS)
Kim, Eun Jin; Kim, Min Koo; Sohn, Hoon; Park, Hyun Woo
2010-03-01
A new concept of a reference-free impedance method, which does not require direct comparison with a baseline impedance signal, is proposed for damage detection in a plate-like structure. A single pair of piezoelectric (PZT) wafers collocated on both surfaces of a plate is utilized for extracting electro-mechanical signatures (EMS) associated with mode conversion due to damage. A numerical simulation is conducted to investigate the EMS of collocated PZT wafers in the frequency domain in the presence of damage through spectral element analysis. Then, the EMS due to mode conversion induced by damage are extracted using a signal decomposition technique based on the polarization characteristics of the collocated PZT wafers. The effects of the size and the location of damage on the decomposed EMS are investigated as well. Finally, the applicability of the decomposed EMS to reference-free damage diagnosis is discussed.
NASA Astrophysics Data System (ADS)
Liao, Q.; Tchelepi, H.; Zhang, D.
2015-12-01
Uncertainty quantification aims at characterizing the impact of input parameters on the output responses and plays an important role in many areas, including subsurface flow and transport. In this study, a sparse grid collocation approach, which uses a nested Kronrod-Patterson-Hermite quadrature rule with moderate delay for Gaussian random parameters, is proposed to quantify the uncertainty of model solutions. The conventional stochastic collocation method serves as a promising non-intrusive approach and has drawn a great deal of interest. The collocation points are usually chosen to be Gauss-Hermite quadrature nodes, which are naturally unnested. The Kronrod-Patterson-Hermite nodes are shown to be more efficient than the Gauss-Hermite nodes due to nestedness. We propose a Kronrod-Patterson-Hermite rule with moderate delay to further improve the performance. Our study demonstrates the effectiveness of the proposed method for uncertainty quantification through subsurface flow and transport examples.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liao, Qinzhuo, E-mail: liaoqz@pku.edu.cn; Zhang, Dongxiao; Tchelepi, Hamdi
A new computational method is proposed for efficient uncertainty quantification of multiphase flow in porous media with stochastic permeability. For pressure estimation, it combines the dimension-adaptive stochastic collocation method on Smolyak sparse grids and the Kronrod–Patterson–Hermite nested quadrature formulas. For saturation estimation, an additional stage is developed, in which the pressure and velocity samples are first generated by the sparse grid interpolation and then substituted into the transport equation to solve for the saturation samples, to address the low regularity problem of the saturation. Numerical examples are presented for multiphase flow with stochastic permeability fields to demonstrate accuracy and efficiencymore » of the proposed two-stage adaptive stochastic collocation method on nested sparse grids.« less
Domain decomposition preconditioners for the spectral collocation method
NASA Technical Reports Server (NTRS)
Quarteroni, Alfio; Sacchilandriani, Giovanni
1988-01-01
Several block iteration preconditioners are proposed and analyzed for the solution of elliptic problems by spectral collocation methods in a region partitioned into several rectangles. It is shown that convergence is achieved with a rate which does not depend on the polynomial degree of the spectral solution. The iterative methods here presented can be effectively implemented on multiprocessor systems due to their high degree of parallelism.
Some spectral approximation of one-dimensional fourth-order problems
NASA Technical Reports Server (NTRS)
Bernardi, Christine; Maday, Yvon
1989-01-01
Some spectral-type collocation methods well suited for the approximation of fourth-order systems are proposed. The model problem is the biharmonic equation, in one and two dimensions, when the boundary conditions are periodic in one direction. It is proved that the standard Gauss-Lobatto nodes are not the best choice for the collocation points. Then, a new set of nodes related to some generalized Gauss-type quadrature formulas is proposed. We also provide a complete analysis of these formulas, including some new results on the asymptotic behavior of the weights, and apply these results to the analysis of the collocation method.
NASA Astrophysics Data System (ADS)
Amerian, Z.; Salem, M. K.; Salar Elahi, A.; Ghoranneviss, M.
2017-03-01
Equilibrium reconstruction consists of identifying, from experimental measurements, a distribution of the plasma current density that satisfies the pressure balance constraint. Numerous methods exist to solve the Grad-Shafranov equation, which describes the equilibrium of a plasma confined by an axisymmetric magnetic field. In this paper, we propose a new numerical solution of the Grad-Shafranov equation, for an axisymmetric magnetic field expressed in cylindrical coordinates, using the Chebyshev collocation method, for the case in which the source term (the current density function) on the right-hand side is linear. The Chebyshev collocation method is a method for computing highly accurate numerical solutions of differential equations. We describe a circular cross-section of the tokamak, present numerical results for the magnetic surfaces on the IR-T1 tokamak, and compare the results with an analytical solution.
Parallel Implementation of a High Order Implicit Collocation Method for the Heat Equation
NASA Technical Reports Server (NTRS)
Kouatchou, Jules; Halem, Milton (Technical Monitor)
2000-01-01
We combine a high-order compact finite difference approximation and collocation techniques to numerically solve the two-dimensional heat equation. The resulting method is implicit and can be parallelized with a strategy that allows parallelization across both time and space. We compare the parallel implementation of the new method with a classical implicit method, namely the Crank-Nicolson method, where the parallelization is done across space only. Numerical experiments are carried out on the SGI Origin 2000.
Domain decomposition methods for systems of conservation laws: Spectral collocation approximations
NASA Technical Reports Server (NTRS)
Quarteroni, Alfio
1989-01-01
Hyperbolic systems of conservation laws are considered which are discretized in space by spectral collocation methods and advanced in time by finite difference schemes. At any time level, a domain decomposition method based on an iteration-by-subdomain procedure is introduced, yielding at each step a sequence of independent subproblems (one for each subdomain) that can be solved simultaneously. The method is set for a general nonlinear problem in several space variables. The convergence analysis, however, is carried out only for a linear one-dimensional system with continuous solutions. A precise form of the error reduction factor at each iteration is derived. Although the method is applied here to the case of spectral collocation approximation only, the idea is fairly general and can be used in a different context as well. For instance, its application to space discretization by finite differences is straightforward.
A multidomain spectral collocation method for the Stokes problem
NASA Technical Reports Server (NTRS)
Landriani, G. Sacchi; Vandeven, H.
1989-01-01
A multidomain spectral collocation scheme is proposed for the approximation of the two-dimensional Stokes problem. It is shown that the discrete velocity vector field is exactly divergence-free and we prove error estimates both for the velocity and the pressure.
Evaluation of assumptions in soil moisture triple collocation analysis
USDA-ARS?s Scientific Manuscript database
Triple collocation analysis (TCA) enables estimation of error variances for three or more products that retrieve or estimate the same geophysical variable using mutually-independent methods. Several statistical assumptions regarding the statistical nature of errors (e.g., mutual independence and orthogonality)...
A collocation-Galerkin finite element model of cardiac action potential propagation.
Rogers, J M; McCulloch, A D
1994-08-01
A new computational method was developed for modeling the effects of the geometric complexity, nonuniform muscle fiber orientation, and material inhomogeneity of the ventricular wall on cardiac impulse propagation. The method was used to solve a modification to the FitzHugh-Nagumo system of equations. The geometry, local muscle fiber orientation, and material parameters of the domain were defined using linear Lagrange or cubic Hermite finite element interpolation. Spatial variations of time-dependent excitation and recovery variables were approximated using cubic Hermite finite element interpolation, and the governing finite element equations were assembled using the collocation method. To overcome the deficiencies of conventional collocation methods on irregular domains, Galerkin equations for the no-flux boundary conditions were used instead of collocation equations for the boundary degrees-of-freedom. The resulting system was evolved using an adaptive Runge-Kutta method. Converged two-dimensional simulations of normal propagation showed that this method requires less CPU time than a traditional finite difference discretization. The model also reproduced several other physiologic phenomena known to be important in arrhythmogenesis including: Wenckebach periodicity, slowed propagation and unidirectional block due to wavefront curvature, reentry around a fixed obstacle, and spiral wave reentry. In a new result, we observed wavespeed variations and block due to nonuniform muscle fiber orientation. The findings suggest that the finite element method is suitable for studying normal and pathological cardiac activation and has significant advantages over existing techniques.
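For readers unfamiliar with the reaction-diffusion system being discretized, the sketch below integrates a minimal 1-D FitzHugh-Nagumo cable with explicit finite differences and no-flux boundaries. The parameters are assumed demonstration values; the paper's collocation-Galerkin finite element formulation, fiber orientation, and Hermite interpolation are not reproduced here.

```python
import numpy as np

# Assumed FitzHugh-Nagumo parameters on a simple finite-difference grid;
# this only sketches the excitation-recovery dynamics, not the paper's method.
nx, dx, dt = 200, 0.5, 0.05
D, a, eps, gamma = 1.0, 0.13, 0.02, 1.0
u = np.zeros(nx)                   # excitation variable (action potential)
v = np.zeros(nx)                   # slow recovery variable
u[:10] = 1.0                       # suprathreshold stimulus at the left end

for step in range(4000):
    lap = np.empty(nx)
    lap[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2
    lap[0] = 2 * (u[1] - u[0]) / dx**2        # no-flux (zero Neumann) boundaries
    lap[-1] = 2 * (u[-2] - u[-1]) / dx**2
    du = D * lap + u * (1 - u) * (u - a) - v  # cubic excitation kinetics
    dv = eps * (u - gamma * v)                # linear recovery kinetics
    u += dt * du
    v += dt * dv

print("final action-potential profile:", np.round(u[::25], 2))
```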
NASA Astrophysics Data System (ADS)
Vu, Q. H.; Brenner, R.; Castelnau, O.; Moulinec, H.; Suquet, P.
2012-03-01
The correspondence principle is customarily used with the Laplace-Carson transform technique to tackle the homogenization of linear viscoelastic heterogeneous media. The main drawback of this method lies in the fact that the whole stress and strain histories have to be considered to compute the mechanical response of the material during a given macroscopic loading. Following a remark of Mandel (1966 Mécanique des Milieux Continus (Paris, France: Gauthier-Villars)), Ricaud and Masson (2009 Int. J. Solids Struct. 46 1599-1606) have shown the equivalence between the collocation method used to invert Laplace-Carson transforms and an internal variables formulation. In this paper, this new method is developed for the case of polycrystalline materials with general anisotropic properties for the local and macroscopic behavior. Applications are provided for the case of constitutive relations accounting for glide of dislocations on particular slip systems. It is shown that the method yields accurate results that perfectly match the standard collocation method and reference full-field results obtained with an FFT numerical scheme. The formulation is then extended to the case of time- and strain-dependent viscous properties, leading to the incremental collocation method (ICM), which can be solved efficiently by a step-by-step procedure. Specifically, the introduction of isotropic and kinematic hardening at the slip system scale is considered.
Pseudospectral collocation methods for fourth order differential equations
NASA Technical Reports Server (NTRS)
Malek, Alaeddin; Phillips, Timothy N.
1994-01-01
Collocation schemes are presented for solving linear fourth order differential equations in one and two dimensions. The variational formulation of the model fourth order problem is discretized by approximating the integrals by a Gaussian quadrature rule generalized to include the values of the derivative of the integrand at the boundary points. Collocation schemes are derived which are equivalent to this discrete variational problem. An efficient preconditioner based on a low-order finite difference approximation to the same differential operator is presented. The corresponding multidomain problem is also considered and interface conditions are derived. Pseudospectral approximations which are C1 continuous at the interfaces are used in each subdomain to approximate the solution. The approximations are also shown to be C3 continuous at the interfaces asymptotically. A complete analysis of the collocation scheme for the multidomain problem is provided. The extension of the method to the biharmonic equation in two dimensions is discussed and results are presented for a problem defined in a nonrectangular domain.
A collocation-shooting method for solving fractional boundary value problems
NASA Astrophysics Data System (ADS)
Al-Mdallal, Qasem M.; Syam, Muhammed I.; Anwar, M. N.
2010-12-01
In this paper, we discuss the numerical solution of a special class of fractional boundary value problems of order 2. The method of solution is based on conjugating collocation and spline analysis with a shooting method. The existence and uniqueness of the exact solution for this class are proven. Two examples involving the Bagley-Torvik equation subject to boundary conditions are also presented; numerical results illustrate the accuracy of the present scheme.
Nanosecond Enhancements of the Atmospheric Electron Density by Extensive Air Showers
NASA Astrophysics Data System (ADS)
Rutjes, C.; Camporeale, E.; Ebert, U.; Buitink, S.; Scholten, O.; Trinh, G. T. N.; Witteveen, J.
2015-12-01
As is well known, a sufficient density of free electrons and strong electric fields are the basic requirements to start any electrical discharge. In the context of thunderstorm discharges, it has become clear that, in addition, droplets and/or ice particles are required to enhance the electric field to values above breakdown. In our recent study [1] we have shown that these three ingredients have to interplay to allow for lightning inception, triggered by an extensive air shower event. Extensive air showers are a very stochastic natural phenomenon, creating highly coherent sub-nanosecond enhancements of the atmospheric electron density. To predict these electron density enhancements accurately, one has to take the uncertainty of the input variables into account. For this study we use the initial energy, inclination, and altitude of first interaction, which significantly influence the evolution of the shower. To this end, we use the stochastic collocation method [2] to post-process our detailed Monte Carlo extensive air shower simulations, done with the CORSIKA [3] software package, which provides an efficient and elegant way to determine the distribution of the atmospheric electron density enhancements. [1] Dubinova, A., Rutjes, C., Ebert, U., Buitink, S., Scholten, O., and Trinh, G. T. N., "Prediction of Lightning Inception by Large Ice Particles and Extensive Air Showers," PRL 115, 015002 (2015). [2] Loeven, G. J. A., Witteveen, J. A. S., and Bijl, H., "Probabilistic collocation: an efficient nonintrusive approach for arbitrarily distributed parametric uncertainties," 45th AIAA Aerospace Sciences Meeting, Reno, Nevada, 2007, AIAA-2007-317. [3] Heck, Dieter, et al., CORSIKA: A Monte Carlo code to simulate extensive air showers, No. FZKA-6019, 1998.
Numerical solution of sixth-order boundary-value problems using Legendre wavelet collocation method
NASA Astrophysics Data System (ADS)
Sohaib, Muhammad; Haq, Sirajul; Mukhtar, Safyan; Khan, Imad
2018-03-01
An efficient method is proposed to approximate sixth-order boundary value problems. The proposed method is based on Legendre wavelets, in which Legendre polynomials are used. The mechanism of the method is to use collocation points to convert the differential equation into a system of algebraic equations. For validation, two test problems are discussed. The results obtained from the proposed method are quite accurate and close to the exact solution as well as to those of other methods. The proposed method is computationally more effective and leads to more accurate results than other methods from the literature.
Reducing the cost of using collocation to compute vibrational energy levels: Results for CH2NH.
Avila, Gustavo; Carrington, Tucker
2017-08-14
In this paper, we improve the collocation method for computing vibrational spectra that was presented in the work of Avila and Carrington, Jr. [J. Chem. Phys. 143, 214108 (2015)]. Known quadrature and collocation methods using a Smolyak grid require storing intermediate vectors with more elements than points on the Smolyak grid. This is due to the fact that grid labels are constrained among themselves and basis labels are constrained among themselves. We show that by using the so-called hierarchical basis functions, one can significantly reduce the memory required. In this paper, the intermediate vectors have only as many elements as the Smolyak grid. The ideas are tested by computing energy levels of CH2NH.
Comparison of Calibration Techniques for Low-Cost Air Quality Monitoring
NASA Astrophysics Data System (ADS)
Malings, C.; Ramachandran, S.; Tanzer, R.; Kumar, S. P. N.; Hauryliuk, A.; Zimmerman, N.; Presto, A. A.
2017-12-01
Assessing the intra-city spatial distribution and temporal variability of air quality can be facilitated by a dense network of monitoring stations. However, the cost of implementing such a network can be prohibitive if high-quality but high-cost monitoring systems are used. To this end, the Real-time Affordable Multi-Pollutant (RAMP) sensor package has been developed at the Center for Atmospheric Particle Studies of Carnegie Mellon University, in collaboration with SenSevere LLC. This self-contained unit can measure up to five gases out of CO, SO2, NO, NO2, O3, VOCs, and CO2, along with temperature and relative humidity. Responses of individual gas sensors can vary greatly even when exposed to the same ambient conditions. Those of VOC sensors in particular were observed to vary by a factor of 8, which suggests that each sensor requires its own calibration model. To this end, we apply and compare two different calibration methods to data collected by RAMP sensors collocated with a reference monitor station. The first method, random forest (RF) modeling, is a rule-based method which maps sensor responses to pollutant concentrations by implementing a trained sequence of decision rules. RF modeling has previously been used for other RAMP gas sensors by the group, and has produced precise calibrated measurements. However, RF models can only predict pollutant concentrations within the range observed in the training data collected during the collocation period. The second method, Gaussian process (GP) modeling, is a probabilistic Bayesian technique whereby broad prior estimates of pollutant concentrations are updated using sensor responses to generate more refined posterior predictions, as well as allowing predictions beyond the range of the training data. The accuracy and precision of these techniques are assessed and compared on VOC data collected during the summer of 2017 in Pittsburgh, PA. By combining pollutant data gathered by each RAMP sensor and applying appropriate calibration techniques, the potentially noisy or biased responses of individual sensors can be mapped to pollutant concentration values which are comparable to those of reference instruments.
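A minimal sketch of the random-forest calibration step, assuming synthetic sensor data and scikit-learn's RandomForestRegressor; the features, hyperparameters, and noise model are illustrative assumptions, not the RAMP pipeline itself.

```python
# Hedged sketch: random-forest calibration of a low-cost gas sensor against a
# collocated reference monitor. Synthetic data and settings are illustrative.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
temp = rng.uniform(0, 35, n)            # deg C
rh = rng.uniform(20, 90, n)             # percent relative humidity
true_ppb = rng.uniform(0, 100, n)       # reference concentration
# Raw sensor signal with temperature/humidity cross-sensitivity and noise.
raw = 0.8 * true_ppb + 0.5 * temp - 0.1 * rh + rng.normal(0, 2.0, n)

X = np.column_stack([raw, temp, rh])
X_tr, X_te, y_tr, y_te = train_test_split(X, true_ppb, random_state=0)

rf = RandomForestRegressor(n_estimators=200, random_state=0)
rf.fit(X_tr, y_tr)
print("R^2 on held-out collocation data:", rf.score(X_te, y_te))
```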
Porsa, Sina; Lin, Yi-Chung; Pandy, Marcus G
2016-08-01
The aim of this study was to compare the computational performances of two direct methods for solving large-scale, nonlinear, optimal control problems in human movement. Direct shooting and direct collocation were implemented on an 8-segment, 48-muscle model of the body (24 muscles on each side) to compute the optimal control solution for maximum-height jumping. Both algorithms were executed on a freely-available musculoskeletal modeling platform called OpenSim. Direct collocation converged to essentially the same optimal solution up to 249 times faster than direct shooting when the same initial guess was assumed (3.4 h of CPU time for direct collocation vs. 35.3 days for direct shooting). The model predictions were in good agreement with the time histories of joint angles, ground reaction forces and muscle activation patterns measured for subjects jumping to their maximum achievable heights. Both methods converged to essentially the same solution when started from the same initial guess, but computation time was sensitive to the initial guess assumed. Direct collocation demonstrates exceptional computational performance and is well suited to performing predictive simulations of movement using large-scale musculoskeletal models.
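To make the transcription idea concrete, here is a minimal direct-collocation sketch on a toy problem (a double integrator moved one unit in unit time with minimum control effort) using trapezoidal defect constraints; the problem, grid size, and SciPy solver are illustrative assumptions, far smaller than the musculoskeletal problem solved in the study.

```python
# Hedged sketch: trapezoidal direct collocation for a toy optimal control
# problem. States and controls at all nodes are decision variables; the
# dynamics become algebraic "defect" equality constraints.
import numpy as np
from scipy.optimize import minimize

N = 20                       # collocation intervals
h = 1.0 / N

def unpack(z):
    x, v, u = np.split(z, 3)
    return x, v, u

def objective(z):
    x, v, u = unpack(z)
    return h * np.sum((u[:-1] ** 2 + u[1:] ** 2) / 2.0)   # trapezoidal cost

def defects(z):
    x, v, u = unpack(z)
    dx = x[1:] - x[:-1] - h * (v[:-1] + v[1:]) / 2.0      # x' = v
    dv = v[1:] - v[:-1] - h * (u[:-1] + u[1:]) / 2.0      # v' = u
    bc = [x[0], v[0], x[-1] - 1.0, v[-1]]                  # boundary conditions
    return np.concatenate([dx, dv, bc])

z0 = np.zeros(3 * (N + 1))
res = minimize(objective, z0, method="SLSQP",
               constraints={"type": "eq", "fun": defects})
x, v, u = unpack(res.x)
print("converged:", res.success, " u(0) ~", u[0])   # analytic optimum: u(0) = 6
```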
NASA Astrophysics Data System (ADS)
Parand, Kourosh; Latifi, Sobhan; Delkhosh, Mehdi; Moayeri, Mohammad M.
2018-01-01
In the present paper, a new method based on the Generalized Lagrangian Jacobi Gauss (GLJG) collocation method is proposed. The nonlinear Kidder equation, which describes unsteady isothermal gas flow through a micro-nano porous medium, is a second-order two-point boundary value ordinary differential equation on the unbounded interval [0, ∞). Firstly, using the quasilinearization method, the equation is converted to a sequence of linear ordinary differential equations. Then, by using the GLJG collocation method, the problem is reduced to solving a system of algebraic equations. It must be mentioned that this equation is solved without domain truncation or variable transformation. A comparison with some numerical solutions is made, and the obtained results indicate that the presented solution is highly accurate. The important value of the initial slope, y'(0), is obtained as -1.191790649719421734122828603800159364 for η = 0.5. Compared with the best result obtained so far, it is accurate to 36 decimal places.
NASA Technical Reports Server (NTRS)
Phillips, J. R.
1996-01-01
In this paper we derive error bounds for a collocation-grid-projection scheme tuned for use in multilevel methods for solving boundary-element discretizations of potential integral equations. The grid-projection scheme is then combined with a precorrected FFT style multilevel method for solving potential integral equations with 1/r and e^(ikr)/r kernels. A complexity analysis of this combined method is given to show that for homogeneous problems, the method is O(n log n) nearly independent of the kernel. In addition, it is shown analytically and experimentally that for an inhomogeneity generated by a very finely discretized surface, the combined method slows to O(n^(4/3)). Finally, examples are given to show that the collocation-based grid-projection plus precorrected-FFT scheme is competitive with fast-multipole algorithms when considering realistic problems and 1/r kernels, but can be used over a range of spatial frequencies with only a small performance penalty.
A Survey of Symplectic and Collocation Integration Methods for Orbit Propagation
NASA Technical Reports Server (NTRS)
Jones, Brandon A.; Anderson, Rodney L.
2012-01-01
Demands on numerical integration algorithms for astrodynamics applications continue to increase. Common methods, like explicit Runge-Kutta, meet the orbit propagation needs of most scenarios, but more specialized scenarios require new techniques to meet both computational efficiency and accuracy needs. This paper provides an extensive survey on the application of symplectic and collocation methods to astrodynamics. Both of these methods benefit from relatively recent theoretical developments, which improve their applicability to artificial satellite orbit propagation. This paper also details their implementation, with several tests demonstrating their advantages and disadvantages.
Stability of compressible Taylor-Couette flow
NASA Technical Reports Server (NTRS)
Kao, Kai-Hsiung; Chow, Chuen-Yen
1991-01-01
Compressible stability equations are solved using the spectral collocation method in an attempt to study the effects of temperature difference and compressibility on the stability of Taylor-Couette flow. It is found that the Chebyshev collocation spectral method yields highly accurate results using fewer grid points for solving stability problems. Comparisons are made between the result obtained by assuming small Mach number with a uniform temperature distribution and that based on fully incompressible analysis.
Global collocation methods for approximation and the solution of partial differential equations
NASA Technical Reports Server (NTRS)
Solomonoff, A.; Turkel, E.
1986-01-01
Polynomial interpolation methods are applied both to the approximation of functions and to the numerical solution of hyperbolic and elliptic partial differential equations. The derivative matrix for a general sequence of collocation points is constructed. The approximate derivative is then found by a matrix-vector multiply. The effects of several factors on the performance of these methods, including the choice of collocation points, are then explored. The resolution of the schemes for both smooth functions and functions with steep gradients or discontinuities in some derivative is also studied. The accuracy when the gradients occur both near the center of the region and in the vicinity of the boundary is investigated. The importance of the aliasing limit on the resolution of the approximation is investigated in detail. Also examined is the effect of boundary treatment on the stability and accuracy of the scheme.
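A minimal sketch of the derivative-matrix idea on Chebyshev-Gauss-Lobatto points, following the standard construction (e.g., Trefethen's cheb); the grid size and test function are illustrative.

```python
# Hedged sketch: Chebyshev collocation derivative matrix on Gauss-Lobatto
# points; differentiation is a matrix-vector multiply.
import numpy as np

def cheb(N):
    """Chebyshev differentiation matrix D and Gauss-Lobatto points x."""
    if N == 0:
        return np.zeros((1, 1)), np.array([1.0])
    x = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    X = np.tile(x, (N + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))
    D -= np.diag(D.sum(axis=1))     # diagonal via the "negative sum trick"
    return D, x

D, x = cheb(16)
u = np.exp(x) * np.sin(5 * x)
du_exact = np.exp(x) * (np.sin(5 * x) + 5 * np.cos(5 * x))
print(np.max(np.abs(D @ u - du_exact)))   # spectrally small error
```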
NASA Technical Reports Server (NTRS)
Fromme, J.; Golberg, M.
1978-01-01
The numerical calculation of unsteady two dimensional airloads which act upon thin airfoils in subsonic ventilated wind tunnels was studied. Neglecting certain quadrature errors, Bland's collocation method is rigorously proved to converge to the mathematically exact solution of Bland's integral equation, and a three way equivalence was established between collocation, Galerkin's method and least squares whenever the collocation points are chosen to be the nodes of the quadrature rule used for Galerkin's method. A computer program displayed convergence with respect to the number of pressure basis functions employed, and agreement with known special cases was demonstrated. Results are obtained for the combined effects of wind tunnel wall ventilation and wind tunnel depth to airfoil chord ratio, and for acoustic resonance between the airfoil and wind tunnel walls. A boundary condition is proposed for permeable walls through which mass flow rate is proportional to pressure jump.
A Comprehensive Study of Gridding Methods for GPS Horizontal Velocity Fields
NASA Astrophysics Data System (ADS)
Wu, Yanqiang; Jiang, Zaisen; Liu, Xiaoxia; Wei, Wenxin; Zhu, Shuang; Zhang, Long; Zou, Zhenyu; Xiong, Xiaohui; Wang, Qixin; Du, Jiliang
2017-03-01
Four gridding methods for GPS velocities are compared in terms of their precision, applicability and robustness by analyzing simulated data with uncertainties from 0.0 to ±3.0 mm/a. When the input data are 1° × 1° grid sampled and the uncertainty of the additional error is greater than ±1.0 mm/a, the gridding results show that the least-squares collocation method is highly robust while the robustness of the Kriging method is low. In contrast, the spherical harmonics and the multi-surface function are moderately robust, and the regional singular values for the multi-surface function method and the edge effects for the spherical harmonics method become more significant with increasing uncertainty of the input data. When the input data (with additional errors of ±2.0 mm/a) are decimated by 50% from the 1° × 1° grid data and then erased in three 6° × 12° regions, the gridding results in these three regions indicate that the least-squares collocation and the spherical harmonics methods have good performances, while the multi-surface function and the Kriging methods may lead to singular values. The gridding techniques are also applied to GPS horizontal velocities with an average error of ±0.8 mm/a over the Chinese mainland and the surrounding areas, and the results show that the least-squares collocation method has the best performance, followed by the Kriging and multi-surface function methods. Furthermore, the edge effects of the spherical harmonics method are significantly affected by the sparseness and geometric distribution of the input data. In general, the least-squares collocation method is superior in terms of its robustness, edge effect, error distribution and stability, while the other methods have several positive features.
A Christoffel function weighted least squares algorithm for collocation approximations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Narayan, Akil; Jakeman, John D.; Zhou, Tao
Here, we propose, theoretically investigate, and numerically validate an algorithm for the Monte Carlo solution of least-squares polynomial approximation problems in a collocation framework. Our investigation is motivated by applications in the collocation approximation of parametric functions, which frequently entails construction of surrogates via orthogonal polynomials. A standard Monte Carlo approach would draw samples according to the density defining the orthogonal polynomial family. Our proposed algorithm instead samples with respect to the (weighted) pluripotential equilibrium measure of the domain, and subsequently solves a weighted least-squares problem, with weights given by evaluations of the Christoffel function. We present theoretical analysis to motivate the algorithm, and numerical results that show our method is superior to standard Monte Carlo methods in many situations of interest.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Avila, Gustavo, E-mail: Gustavo-Avila@telefonica.net; Carrington, Tucker, E-mail: Tucker.Carrington@queensu.ca
In this paper, we improve the collocation method for computing vibrational spectra that was presented in Avila and Carrington, Jr. [J. Chem. Phys. 139, 134114 (2013)]. Using an iterative eigensolver, energy levels and wavefunctions are determined from values of the potential on a Smolyak grid. The kinetic energy matrix-vector product is evaluated by transforming a vector labelled with (nondirect product) grid indices to a vector labelled by (nondirect product) basis indices. Both the transformation and application of the kinetic energy operator (KEO) scale favorably. Collocation facilitates dealing with complicated KEOs because it obviates the need to calculate integrals of coordinate dependent coefficients of differential operators. The ideas are tested by computing energy levels of HONO using a KEO in bond coordinates.
A Christoffel function weighted least squares algorithm for collocation approximations
Narayan, Akil; Jakeman, John D.; Zhou, Tao
2016-11-28
Here, we propose, theoretically investigate, and numerically validate an algorithm for the Monte Carlo solution of least-squares polynomial approximation problems in a collocation framework. Our investigation is motivated by applications in the collocation approximation of parametric functions, which frequently entails construction of surrogates via orthogonal polynomials. A standard Monte Carlo approach would draw samples according to the density defining the orthogonal polynomial family. Our proposed algorithm instead samples with respect to the (weighted) pluripotential equilibrium measure of the domain, and subsequently solves a weighted least-squares problem, with weights given by evaluations of the Christoffel function. We present theoretical analysis to motivate the algorithm, and numerical results that show our method is superior to standard Monte Carlo methods in many situations of interest.
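A minimal one-dimensional sketch of the algorithm's ingredients on [-1, 1], where the equilibrium measure is the arcsine distribution: sample from it, weight by the Christoffel function of an orthonormal Legendre basis, and solve the weighted least-squares problem. The degree, sample size, and target function are illustrative assumptions.

```python
# Hedged sketch: Christoffel-function-weighted least squares on [-1, 1].
import numpy as np
from numpy.polynomial import legendre

rng = np.random.default_rng(1)
n, M = 15, 200                                   # max degree, sample count
x = np.cos(np.pi * rng.uniform(size=M))          # arcsine-distributed samples

# Legendre Vandermonde, scaled so phi_k = sqrt(2k+1) P_k is orthonormal
# for the uniform probability measure on [-1, 1].
V = legendre.legvander(x, n) * np.sqrt(2 * np.arange(n + 1) + 1)
w = (n + 1) / np.sum(V ** 2, axis=1)             # Christoffel-function weights

f = np.exp(x)                                    # target to approximate
sw = np.sqrt(w)
c, *_ = np.linalg.lstsq(sw[:, None] * V, sw * f, rcond=None)
print("weighted residual:", np.linalg.norm((V @ c - f) * sw))
```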
Finite Differences and Collocation Methods for the Solution of the Two Dimensional Heat Equation
NASA Technical Reports Server (NTRS)
Kouatchou, Jules
1999-01-01
In this paper we combine finite difference approximations (for spatial derivatives) and collocation techniques (for the time component) to numerically solve the two-dimensional heat equation. We employ both a second-order and a fourth-order scheme for the spatial derivatives, and the discretization method gives rise to a linear system of equations. We show that the matrix of the system is non-singular. Numerical experiments carried out on serial computers show the unconditional stability of the proposed method and the high accuracy achieved by the fourth-order scheme.
A Fourier collocation time domain method for numerically solving Maxwell's equations
NASA Technical Reports Server (NTRS)
Shebalin, John V.
1991-01-01
A new method for solving Maxwell's equations in the time domain for arbitrary values of permittivity, conductivity, and permeability is presented. Spatial derivatives are found by a Fourier transform method and time integration is performed using a second order, semi-implicit procedure. Electric and magnetic fields are collocated on the same grid points, rather than on interleaved points, as in the Finite Difference Time Domain (FDTD) method. Numerical results are presented for the propagation of a 2-D Transverse Electromagnetic (TEM) mode out of a parallel plate waveguide and into a dielectric and conducting medium.
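The spatial-derivative mechanism, Fourier collocation on a periodic grid, can be sketched in one dimension as follows; the grid size and test function are illustrative.

```python
# Hedged sketch: Fourier collocation differentiation on a periodic grid.
import numpy as np

N = 64
x = 2 * np.pi * np.arange(N) / N                 # periodic grid
u = np.exp(np.sin(x))
k = np.fft.fftfreq(N, d=1.0 / N)                 # integer wavenumbers
du = np.real(np.fft.ifft(1j * k * np.fft.fft(u)))
print(np.max(np.abs(du - np.cos(x) * u)))        # spectrally small error
```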
Reduced basis ANOVA methods for partial differential equations with high-dimensional random inputs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liao, Qifeng, E-mail: liaoqf@shanghaitech.edu.cn; Lin, Guang, E-mail: guanglin@purdue.edu
2016-07-15
In this paper we present a reduced basis ANOVA approach for partial differential equations (PDEs) with random inputs. The ANOVA method combined with stochastic collocation methods provides model reduction in high-dimensional parameter space through decomposing high-dimensional inputs into unions of low-dimensional inputs. In this work, to further reduce the computational cost, we investigate spatial low-rank structures in the ANOVA-collocation method, and develop efficient spatial model reduction techniques using hierarchically generated reduced bases. We present a general mathematical framework of the methodology, validate its accuracy and demonstrate its efficiency with numerical experiments.
NASA Astrophysics Data System (ADS)
Doha, Eid H.; Bhrawy, Ali H.; Abdelkawy, Mohammed A.
2014-09-01
In this paper, we propose an efficient spectral collocation algorithm to numerically solve wave-type equations subject to initial, boundary and non-local conservation conditions. The shifted Jacobi pseudospectral approximation is investigated for the discretization of the spatial variable of such equations. It possesses spectral accuracy in the spatial variable. The shifted Jacobi-Gauss-Lobatto (SJ-GL) quadrature rule is established for treating the non-local conservation conditions, and then the problem with its initial and non-local boundary conditions is reduced to a system of second-order ordinary differential equations in the temporal variable. This system is solved by a two-stage fourth-order A-stable implicit Runge-Kutta scheme. Five numerical examples with comparisons are given. The computational results demonstrate that the proposed algorithm is more accurate than the finite difference method, the method of lines and the spline collocation approach.
ERIC Educational Resources Information Center
Webb, Stuart; Kagimoto, Eve
2011-01-01
This study investigated the effects of three factors (the number of collocates per node word, the position of the node word, synonymy) on learning collocations. Japanese students studying English as a foreign language learned five sets of 12 target collocations. Each collocation was presented in a single glossed sentence. The number of collocates…
Simplex-stochastic collocation method with improved scalability
NASA Astrophysics Data System (ADS)
Edeling, W. N.; Dwight, R. P.; Cinnella, P.
2016-04-01
The Simplex-Stochastic Collocation (SSC) method is a robust tool used to propagate uncertain input distributions through a computer code. However, it becomes prohibitively expensive for problems with dimensions higher than 5. The main purpose of this paper is to identify bottlenecks and to improve upon this poor scalability. In order to do so, we propose an alternative interpolation stencil technique based upon the Set-Covering problem, and we integrate the SSC method into the High-Dimensional Model-Reduction framework. In addition, we address the issue of ill-conditioned sample matrices, and we present an analytical map to facilitate uniformly-distributed simplex sampling.
NASA Astrophysics Data System (ADS)
Ciriello, V.; Lauriola, I.; Bonvicini, S.; Cozzani, V.; Di Federico, V.; Tartakovsky, Daniel M.
2017-11-01
Ubiquitous hydrogeological uncertainty undermines the veracity of quantitative predictions of soil and groundwater contamination due to accidental hydrocarbon spills from onshore pipelines. Such predictions, therefore, must be accompanied by quantification of predictive uncertainty, especially when they are used for environmental risk assessment. We quantify the impact of parametric uncertainty on quantitative forecasting of temporal evolution of two key risk indices, volumes of unsaturated and saturated soil contaminated by a surface spill of light nonaqueous-phase liquids. This is accomplished by treating the relevant uncertain parameters as random variables and deploying two alternative probabilistic models to estimate their effect on predictive uncertainty. A physics-based model is solved with a stochastic collocation method and is supplemented by a global sensitivity analysis. A second model represents the quantities of interest as polynomials of random inputs and has a virtually negligible computational cost, which enables one to explore any number of risk-related contamination scenarios. For a typical oil-spill scenario, our method can be used to identify key flow and transport parameters affecting the risk indices, to elucidate texture-dependent behavior of different soils, and to evaluate, with a degree of confidence specified by the decision-maker, the extent of contamination and the correspondent remediation costs.
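A minimal sketch of non-intrusive stochastic collocation for a single Gaussian input: the model is evaluated at Gauss-Hermite nodes and moments follow from the quadrature weights. The stand-in model and node count are illustrative assumptions, not the contamination model of the study.

```python
# Hedged sketch: stochastic collocation via Gauss-Hermite quadrature for one
# uncertain Gaussian parameter. The "model" is an arbitrary stand-in.
import numpy as np

def model(K):                       # stand-in for an expensive simulator
    return np.log1p(K ** 2)

mu, sigma = 1.0, 0.3                # uncertain parameter K ~ N(mu, sigma^2)
z, w = np.polynomial.hermite_e.hermegauss(7)   # probabilists' Hermite rule
w = w / np.sqrt(2 * np.pi)          # normalize weights to sum to 1
q = model(mu + sigma * z)           # collocation evaluations of the model
mean = np.sum(w * q)
var = np.sum(w * (q - mean) ** 2)
print("mean:", mean, "variance:", var)
```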
Locating CVBEM collocation points for steady state heat transfer problems
Hromadka, T.V.
1985-01-01
The Complex Variable Boundary Element Method or CVBEM provides a highly accurate means of developing numerical solutions to steady state two-dimensional heat transfer problems. The numerical approach exactly solves the Laplace equation and satisfies the boundary conditions at specified points on the boundary by means of collocation. The accuracy of the approximation depends upon the nodal point distribution specified by the numerical analyst. In order to develop subsequent, refined approximation functions, four techniques for selecting additional collocation points are presented. The techniques are compared as to the governing theory, representation of the error of approximation on the problem boundary, the computational costs, and the ease of use by the numerical analyst.
Galerkin-collocation domain decomposition method for arbitrary binary black holes
NASA Astrophysics Data System (ADS)
Barreto, W.; Clemente, P. C. M.; de Oliveira, H. P.; Rodriguez-Mueller, B.
2018-05-01
We present a new computational framework for the Galerkin-collocation method for double domains in the context of the ADM 3+1 approach in numerical relativity. This work enables us to perform high resolution calculations for initial data sets of two arbitrary black holes. We use the Bowen-York method for binary systems and the puncture method to solve the Hamiltonian constraint. The nonlinear numerical code solves the set of equations for the spectral modes using the standard Newton-Raphson method, LU decomposition and Gaussian quadratures. We show convergence of our code for the conformal factor and the ADM mass. We then display features of the conformal factor for different masses, spins and linear momenta.
1984-01-06
The cyclic voltammogram of the methoxy compound has been simulated by the orthogonal collocation method. Products of bulk electrolysis have been characterized by spectroelectrochemical means.
The Use of Verb Noun Collocations in Writing Stories among Iranian EFL Learners
ERIC Educational Resources Information Center
Bazzaz, Fatemeh Ebrahimi; Samad, Arshad Abd
2011-01-01
An important aspect of native speakers' communicative competence is collocational competence which involves knowing which words usually come together and which do not. This paper investigates the possible relationship between knowledge of collocations and the use of verb noun collocation in writing stories because collocational knowledge…
Developing and Evaluating a Chinese Collocation Retrieval Tool for CFL Students and Teachers
ERIC Educational Resources Information Center
Chen, Howard Hao-Jan; Wu, Jian-Cheng; Yang, Christine Ting-Yu; Pan, Iting
2016-01-01
The development of collocational knowledge is important for foreign language learners; unfortunately, learners often have difficulties producing proper collocations in the target language. Among the various ways of collocation learning, the DDL (data-driven learning) approach encourages the independent learning of collocations and allows learners…
Systematic parameter inference in stochastic mesoscopic modeling
NASA Astrophysics Data System (ADS)
Lei, Huan; Yang, Xiu; Li, Zhen; Karniadakis, George Em
2017-02-01
We propose a method to efficiently determine the optimal coarse-grained force field in mesoscopic stochastic simulations of Newtonian fluid and polymer melt systems modeled by dissipative particle dynamics (DPD) and energy conserving dissipative particle dynamics (eDPD). The response surfaces of various target properties (viscosity, diffusivity, pressure, etc.) with respect to model parameters are constructed based on the generalized polynomial chaos (gPC) expansion using simulation results on sampling points (e.g., individual parameter sets). To alleviate the computational cost to evaluate the target properties, we employ the compressive sensing method to compute the coefficients of the dominant gPC terms given the prior knowledge that the coefficients are "sparse". The proposed method shows comparable accuracy with the standard probabilistic collocation method (PCM) while it imposes a much weaker restriction on the number of the simulation samples, especially for systems with high dimensional parametric space. Full access to the response surfaces within the confidence range enables us to infer the optimal force parameters given the desirable values of target properties at the macroscopic scale. Moreover, it enables us to investigate the intrinsic relationship between the model parameters, identify possible degeneracies in the parameter space, and optimize the model by eliminating model redundancies. The proposed method provides an efficient alternative approach for constructing mesoscopic models by inferring model parameters to recover target properties of the physics systems (e.g., from experimental measurements), where those force field parameters and formulation cannot be derived from the microscopic level in a straightforward way.
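A minimal sketch of the compressive-sensing idea for sparse gPC coefficients in one dimension, using an l1-regularized fit (scikit-learn's Lasso as a stand-in solver); the basis degree, sample count, and sparsity pattern are illustrative assumptions.

```python
# Hedged sketch: recovering sparse gPC (Legendre) coefficients from few
# samples with an l1-regularized least-squares fit.
import numpy as np
from numpy.polynomial import legendre
from sklearn.linear_model import Lasso

rng = np.random.default_rng(2)
n, M = 20, 40                          # degree-20 basis, only 40 samples
x = rng.uniform(-1, 1, M)
V = legendre.legvander(x, n)

c_true = np.zeros(n + 1)
c_true[[0, 3, 7]] = [1.0, 0.5, -0.25]  # truly sparse coefficient vector
y = V @ c_true + rng.normal(0, 1e-3, M)

fit = Lasso(alpha=1e-4, fit_intercept=False, max_iter=50000).fit(V, y)
print(np.round(fit.coef_, 3))          # dominant terms recovered
```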
Enhancing Critical Thinking Skills for Army Leaders Using Blended-Learning Methods
2013-01-01
delivering, and evaluating leader education and those who develop and implement distributed learning courses that incorporate group collaboration on topics... Numerous studies comparing outcomes of collocated and virtual groups show that collocated groups perform better on interdependent tasks, such as... in class or "cold call" on students to answer questions. Third, using small (rather than large) groups for interactive activities can alleviate free...
NASA Astrophysics Data System (ADS)
Shen, Xiang; Liu, Bin; Li, Qing-Quan
2017-03-01
The Rational Function Model (RFM) has proven to be a viable alternative to the rigorous sensor models used for geo-processing of high-resolution satellite imagery. Because of various errors in the satellite ephemeris and instrument calibration, the Rational Polynomial Coefficients (RPCs) supplied by image vendors are often not sufficiently accurate, and there is therefore a clear need to correct the systematic biases in order to meet the requirements of high-precision topographic mapping. In this paper, we propose a new RPC bias-correction method using the thin-plate spline modeling technique. Benefiting from its excellent performance and high flexibility in data fitting, the thin-plate spline model has the potential to remove complex distortions in vendor-provided RPCs, such as the errors caused by short-period orbital perturbations. The performance of the new method was evaluated by using Ziyuan-3 satellite images and was compared against the recently developed least-squares collocation approach, as well as the classical affine-transformation and quadratic-polynomial based methods. The results show that the accuracies of the thin-plate spline and the least-squares collocation approaches were better than the other two methods, which indicates that strong non-rigid deformations exist in the test data because they cannot be adequately modeled by simple polynomial-based methods. The performance of the thin-plate spline method was close to that of the least-squares collocation approach when only a few Ground Control Points (GCPs) were used, and it improved more rapidly with an increase in the number of redundant observations. In the test scenario using 21 GCPs (some of them located at the four corners of the scene), the correction residuals of the thin-plate spline method were about 36%, 37%, and 19% smaller than those of the affine transformation method, the quadratic polynomial method, and the least-squares collocation algorithm, respectively, which demonstrates that the new method can be more effective at removing systematic biases in vendor-supplied RPCs.
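A minimal sketch of the thin-plate spline fitting machinery (via SciPy's RBFInterpolator) applied to synthetic GCP residuals; the bias field, GCP layout, and use of a generic interpolator are illustrative assumptions, not the authors' full RPC bias-correction pipeline.

```python
# Hedged sketch: thin-plate spline correction of residuals at ground control
# points (GCPs). All data here are synthetic stand-ins.
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(3)
gcp_xy = rng.uniform(0, 1, (21, 2))           # image coordinates of 21 GCPs
# Residuals between RPC-projected and true GCP positions (synthetic here):
bias = lambda p: np.column_stack([0.05 * np.sin(3 * p[:, 0]),
                                  0.03 * p[:, 1] ** 2])
residuals = bias(gcp_xy) + rng.normal(0, 1e-3, (21, 2))

tps = RBFInterpolator(gcp_xy, residuals, kernel="thin_plate_spline")
check = rng.uniform(0, 1, (100, 2))           # independent check points
print(np.max(np.abs(tps(check) - bias(check))))   # small correction error
```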
Corpus-Aided Business English Collocation Pedagogy: An Empirical Study in Chinese EFL Learners
ERIC Educational Resources Information Center
Chen, Lidan
2017-01-01
This study reports an empirical study of an explicit instruction of corpus-aided Business English collocations and verifies its effectiveness in improving learners' collocation awareness and learner autonomy, as a result of which is significant improvement of learners' collocation competence. An eight-week instruction in keywords' collocations,…
Perceptions on L2 Lexical Collocation Translation with a Focus on English-Arabic
ERIC Educational Resources Information Center
Alqaed, Mai Abdullah
2017-01-01
This paper aims to shed light on recent research concerning translating English-Arabic lexical collocations. It begins with a brief overview of English and Arabic lexical collocations with reference to specialized dictionaries. Research views on translating lexical collocations are presented, with the focus on English-Arabic collocations. These…
Legendre spectral-collocation method for solving some types of fractional optimal control problems
Sweilam, Nasser H.; Al-Ajami, Tamer M.
2014-01-01
In this paper, the Legendre spectral-collocation method was applied to obtain approximate solutions for some types of fractional optimal control problems (FOCPs). The fractional derivative was described in the Caputo sense. Two different approaches were presented, in the first approach, necessary optimality conditions in terms of the associated Hamiltonian were approximated. In the second approach, the state equation was discretized first using the trapezoidal rule for the numerical integration followed by the Rayleigh–Ritz method to evaluate both the state and control variables. Illustrative examples were included to demonstrate the validity and applicability of the proposed techniques. PMID:26257937
High resolution wind measurements for offshore wind energy development
NASA Technical Reports Server (NTRS)
Nghiem, Son Van (Inventor); Neumann, Gregory (Inventor)
2013-01-01
A method, apparatus, system, article of manufacture, and computer readable storage medium provide the ability to measure wind. Data at a first resolution (i.e., low resolution data) are collected by a satellite scatterometer. Thin slices of the data are determined. A collocation of the data slices is determined at each grid cell center to obtain ensembles of collocated data slices. Each ensemble of collocated data slices is decomposed into a mean part and a fluctuating part. The data are reconstructed at a second resolution from the mean part and a residue of the fluctuating part. A wind measurement is determined from the data at the second resolution using a wind model function. A description of the wind measurement is output.
ERIC Educational Resources Information Center
Miyakoshi, Tomoko
2009-01-01
Although it is widely acknowledged that collocations play an important part in second language learning, especially at intermediate-advanced levels, learners' difficulties with collocations have not been investigated in much detail so far. The present study examines ESL learners' use of verb-noun collocations, such as "take notes," "place an…
NASA Astrophysics Data System (ADS)
Taverniers, Søren; Tartakovsky, Daniel M.
2017-11-01
Predictions of the total energy deposited into a brain tumor through X-ray irradiation are notoriously error-prone. We investigate how this predictive uncertainty is affected by uncertainty in both the location of the region occupied by a dose-enhancing iodinated contrast agent and the agent's concentration. This is done within the probabilistic framework in which these uncertain parameters are modeled as random variables. We employ the stochastic collocation (SC) method to estimate statistical moments of the deposited energy in terms of statistical moments of the random inputs, and the global sensitivity analysis (GSA) to quantify the relative importance of uncertainty in these parameters on the overall predictive uncertainty. A nonlinear radiation-diffusion equation dramatically magnifies the coefficient of variation of the uncertain parameters, yielding a large coefficient of variation for the predicted energy deposition. This demonstrates that accurate prediction of the energy deposition requires a proper treatment of even small parametric uncertainty. Our analysis also reveals that SC outperforms standard Monte Carlo, but its relative efficiency decreases as the number of uncertain parameters increases from one to three. A robust GSA ameliorates this problem by reducing this number.
A fast collocation method for a variable-coefficient nonlocal diffusion model
NASA Astrophysics Data System (ADS)
Wang, Che; Wang, Hong
2017-02-01
We develop a fast collocation scheme for a variable-coefficient nonlocal diffusion model, for which a numerical discretization would yield a dense stiffness matrix. The development of the fast method is achieved by carefully handling the variable coefficients appearing inside the singular integral operator and exploiting the structure of the dense stiffness matrix. The resulting fast method reduces the computational work from O(N^3) required by a commonly used direct solver to O(N log N) per iteration and the memory requirement from O(N^2) to O(N). Furthermore, the fast method reduces the computational work of assembling the stiffness matrix from O(N^2) to O(N). Numerical results are presented to show the utility of the fast method.
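The core trick behind such O(N log N) solvers, multiplying by a Toeplitz(-like) stiffness matrix via circulant embedding and the FFT, can be sketched as follows; the kernel and symmetric structure are illustrative assumptions.

```python
# Hedged sketch: O(N log N) Toeplitz matrix-vector product via circulant
# embedding and the FFT, instead of a dense O(N^2) product.
import numpy as np
from scipy.linalg import toeplitz

def toeplitz_matvec(col, row, x):
    """y = T x for Toeplitz T with first column `col` and first row `row`."""
    n = len(x)
    c = np.concatenate([col, [0.0], row[:0:-1]])      # circulant embedding
    y = np.fft.ifft(np.fft.fft(c) * np.fft.fft(x, 2 * n))
    return np.real(y[:n])

n = 1024
col = 1.0 / (1.0 + np.arange(n)) ** 2    # illustrative kernel decay
row = col.copy()                          # symmetric case
x = np.random.default_rng(4).normal(size=n)

# Agreement with the dense product (built only to verify the fast version):
print(np.max(np.abs(toeplitz_matvec(col, row, x) - toeplitz(col, row) @ x)))
```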
Single-grid spectral collocation for the Navier-Stokes equations
NASA Technical Reports Server (NTRS)
Bernardi, Christine; Canuto, Claudio; Maday, Yvon; Metivet, Brigitte
1988-01-01
The aim of the paper is to study a collocation spectral method to approximate the Navier-Stokes equations: only one grid is used, which is built from the nodes of a Gauss-Lobatto quadrature formula, either of Legendre or of Chebyshev type. The convergence is proven for the Stokes problem provided with inhomogeneous Dirichlet conditions, then thoroughly analyzed for the Navier-Stokes equations. The practical implementation algorithm is presented, together with numerical results.
Accuracy and speed in computing the Chebyshev collocation derivative
NASA Technical Reports Server (NTRS)
Don, Wai-Sun; Solomonoff, Alex
1991-01-01
We study several algorithms for computing the Chebyshev spectral derivative and compare their roundoff errors. For a large number of collocation points, the elements of the Chebyshev differentiation matrix, if constructed in the usual way, are not computed accurately. A subtle cause is found to account for the poor accuracy when computing the derivative by the matrix-vector multiplication method. Methods for accurately computing the elements of the matrix are presented, and we find that if the entries of the matrix are computed accurately, the roundoff error of the matrix-vector multiplication is as small as that of the transform-recursion algorithm. Results of CPU time usage are shown for several different algorithms for computing the derivative by the Chebyshev collocation method for a wide variety of two-dimensional grid sizes on both an IBM and a Cray 2 computer. We found that which algorithm is fastest on a particular machine depends not only on the grid size, but also on small details of the computer hardware. For most practical grid sizes used in computation, the even-odd decomposition algorithm is found to be faster than the transform-recursion method.
NASA Astrophysics Data System (ADS)
Parand, K.; Nikarya, M.
2017-11-01
In this paper a novel method will be introduced to solve a nonlinear partial differential equation (PDE). In the proposed method, we use the spectral collocation method based on Bessel functions of the first kind and the Jacobian free Newton-generalized minimum residual (JFNGMRes) method with adaptive preconditioner. In this work a nonlinear PDE has been converted to a nonlinear system of algebraic equations using the collocation method based on Bessel functions without any linearization, discretization or getting the help of any other methods. Finally, by using JFNGMRes, the solution of the nonlinear algebraic system is achieved. To illustrate the reliability and efficiency of the proposed method, we solve some examples of the famous Fisher equation. We compare our results with other methods.
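A minimal sketch of the Jacobian-free Newton-Krylov stage on a finite-difference stand-in problem (a steady Fisher-type equation), using SciPy's newton_krylov, which never forms the Jacobian explicitly; the discretization and boundary data are illustrative assumptions, not the Bessel-collocation system of the paper.

```python
# Hedged sketch: Jacobian-free Newton-Krylov solve of the discretized steady
# Fisher-type problem u'' + u(1 - u) = 0 with u(0) = 1, u(1) = 0.
import numpy as np
from scipy.optimize import newton_krylov

N = 100
h = 1.0 / (N + 1)

def residual(u):
    ufull = np.concatenate([[1.0], u, [0.0]])        # Dirichlet boundary data
    lap = (ufull[2:] - 2 * ufull[1:-1] + ufull[:-2]) / h ** 2
    return lap + ufull[1:-1] * (1.0 - ufull[1:-1])

u0 = np.linspace(1.0, 0.0, N + 2)[1:-1]              # linear initial guess
u = newton_krylov(residual, u0, method="lgmres", f_tol=1e-10)
print("max residual:", np.max(np.abs(residual(u))))
```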
Are Nonadjacent Collocations Processed Faster?
ERIC Educational Resources Information Center
Vilkaite, Laura
2016-01-01
Numerous studies have shown processing advantages for collocations, but they only investigated processing of adjacent collocations (e.g., "provide information"). However, in naturally occurring language, nonadjacent collocations ("provide" some of the "information") are equally, if not more frequent. This raises the…
Mars Mission Optimization Based on Collocation of Resources
NASA Technical Reports Server (NTRS)
Chamitoff, G. E.; James, G. H.; Barker, D. C.; Dershowitz, A. L.
2003-01-01
This paper presents a powerful approach for analyzing Martian data and for optimizing mission site selection based on resource collocation. This approach is implemented in a program called PROMT (Planetary Resource Optimization and Mapping Tool), which provides a wide range of analysis and display functions that can be applied to raw data or imagery. Thresholds, contours, custom algorithms, and graphical editing are some of the various methods that can be used to process data. Output maps can be created to identify surface regions on Mars that meet any specific criteria. The use of this tool for analyzing data, generating maps, and collocating features is demonstrated using data from the Mars Global Surveyor and the Odyssey spacecraft. The overall mission design objective is to maximize a combination of scientific return and self-sufficiency based on utilization of local materials. Landing site optimization involves maximizing accessibility to collocated science and resource features within a given mission radius. Mission types are categorized according to duration, energy resources, and in-situ resource utilization. Optimization results are shown for a number of mission scenarios.
A Two-Timescale Discretization Scheme for Collocation
NASA Technical Reports Server (NTRS)
Desai, Prasun; Conway, Bruce A.
2004-01-01
The development of a two-timescale discretization scheme for collocation is presented. This scheme allows a larger discretization to be utilized for smoothly varying state variables and a second, finer discretization to be utilized for state variables having higher frequency dynamics. As such, the discretization scheme can be tailored to the dynamics of the particular state variables. In so doing, the size of the overall Nonlinear Programming (NLP) problem can be reduced significantly. Two two-timescale discretization architecture schemes are described. Comparison of results between the two-timescale method and conventional collocation shows very good agreement. Differences of less than 0.5 percent are observed. Consequently, a significant reduction (by two-thirds) in the number of NLP parameters and iterations required for convergence can be achieved without sacrificing solution accuracy.
Proceedings, Seminar on Probabilistic Methods in Geotechnical Engineering
NASA Astrophysics Data System (ADS)
Hynes-Griffin, M. E.; Buege, L. L.
1983-09-01
Contents: Applications of Probabilistic Methods in Geotechnical Engineering; Probabilistic Seismic and Geotechnical Evaluation at a Dam Site; Probabilistic Slope Stability Methodology; Probability of Liquefaction in a 3-D Soil Deposit; Probabilistic Design of Flood Levees; Probabilistic and Statistical Methods for Determining Rock Mass Deformability Beneath Foundations: An Overview; Simple Statistical Methodology for Evaluating Rock Mechanics Exploration Data; New Developments in Statistical Techniques for Analyzing Rock Slope Stability.
NASA Astrophysics Data System (ADS)
Zhou, Rui-Rui; Li, Ben-Wen
2017-03-01
In this study, the Chebyshev collocation spectral method (CCSM) is developed to solve the radiative integro-differential transfer equation (RIDTE) for a one-dimensional absorbing, emitting and linearly anisotropic-scattering cylindrical medium. The general form of quadrature formulas for Chebyshev collocation points is deduced. These formulas are proved to have the same accuracy as the Gauss-Legendre quadrature formula (GLQF) for the F-function (geometric function) in the RIDTE. The explicit expressions of the Lagrange basis polynomials and the differentiation matrices for Chebyshev collocation points are also given. These expressions are necessary for solving an integro-differential equation by the CCSM. Since the integrand in the RIDTE is continuous but non-smooth, it is treated by the segments integration method (SIM). The derivative terms in the RIDTE are evaluated so as to improve the accuracy near the origin. In this way, fourth-order accuracy is achieved by the CCSM for the RIDTE, whereas only second-order accuracy is achieved by the finite difference method (FDM). Several benchmark problems (BPs) with various combinations of optical thickness, medium temperature distribution, degree of anisotropy, and scattering albedo are solved. The results show that the present CCSM efficiently obtains highly accurate results, especially for optically thin media. The solutions, rounded to seven significant digits, are given in tabular form, and show excellent agreement with the published data. Finally, the solutions of the RIDTE are used as benchmarks for the solutions of radiative integral transfer equations (RITEs) presented by Sutton and Chen (JQSRT 84 (2004) 65-103). A non-uniform grid refined near the wall is advised to improve the accuracy of RITE solutions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Deng, Min; Kollias, Pavlos; Feng, Zhe
The motivation for this research is to develop a precipitation classification and rain rate estimation method using cloud radar-only measurements for Atmospheric Radiation Measurement (ARM) long-term cloud observation analysis, which are crucial and unique for studying cloud lifecycle and precipitation features under different weather and climate regimes. Based on simultaneous and collocated observations of the Ka-band ARM zenith radar (KAZR), two precipitation radars (NCAR S-PolKa and Texas A&M University SMART-R), and surface precipitation during the DYNAMO/AMIE field campaign, a new cloud radar-only based precipitation classification and rain rate estimation method has been developed and evaluated. The resulting precipitation classification is equivalent to that from the collocated SMART-R and S-PolKa observations. Both cloud and precipitation radars detected about 5% precipitation occurrence during this period. The convective (stratiform) precipitation fraction is about 18% (82%). The 2-day collocated disdrometer observations show an increased number concentration of large raindrops in convective rain compared to a dominant concentration of small raindrops in stratiform rain. The composite distributions of KAZR reflectivity and Doppler velocity also show two distinct structures for convective and stratiform rain. These indicate that the method produces physically consistent results for the two types of rain. The cloud radar-only rainfall estimation is developed based on the gradient of accumulative radar reflectivity below 1 km, near-surface Ze, and collocated surface rainfall (R) measurements. The parameterization is compared with the Z-R exponential relation. The relative difference between estimated and surface-measured rainfall rates shows that the two-parameter relation can improve rainfall estimation.
NASA Technical Reports Server (NTRS)
Singhal, Surendra N.
2003-01-01
The SAE G-11 RMSL Division and Probabilistic Methods Committee meeting, sponsored by the Picatinny Arsenal during March 1-3, 2004 at the Westin Morristown, will report progress on projects for probabilistic assessment of Army systems and launch an initiative for probabilistic education. The meeting features several Army and industry senior executives and an Ivy League professor to provide an industry/government/academia forum to review RMSL technology; reliability and probabilistic technology; reliability-based design methods; software reliability; and maintainability standards. With over 100 members, including members of national and international standing, the mission of the G-11's Probabilistic Methods Committee is to enable and facilitate rapid deployment of probabilistic technology to enhance the competitiveness of our industries through better, faster, greener, smarter, affordable and reliable product development.
An embedded formula of the Chebyshev collocation method for stiff problems
NASA Astrophysics Data System (ADS)
Piao, Xiangfan; Bu, Sunyoung; Kim, Dojin; Kim, Philsu
2017-12-01
In this study, we have developed an embedded formula of the Chebyshev collocation method for stiff problems, based on the zeros of the generalized Chebyshev polynomials. A new strategy for the embedded formula, using a pair of methods to estimate the local truncation error, as performed in traditional embedded Runge-Kutta schemes, is proposed. The method is constructed in such a way that not only is the stability region of the embedded formula widened, but the total computational cost is also reduced by allowing the use of larger time step sizes. A concrete convergence and stability analysis shows that the constructed algorithm has 8th-order convergence and exhibits A-stability. Through several numerical experiments, we demonstrate that the proposed method is numerically more efficient than several existing implicit methods.
Collocations: A Neglected Variable in EFL.
ERIC Educational Resources Information Center
Farghal, Mohammed; Obiedat, Hussein
1995-01-01
Addresses the issue of collocations as an important and neglected variable in English-as-a-Foreign-Language classes. Two questionnaires, in English and Arabic, involving common collocations relating to food, color, and weather were administered to English majors and English language teachers. Results show both groups deficient in collocations. (36…
Code of Federal Regulations, 2010 CFR
2010-10-01
... Collocation of Wireless Antennas B Appendix B to Part 1 Telecommunication FEDERAL COMMUNICATIONS COMMISSION... the Collocation of Wireless Antennas Nationwide Programmatic Agreement for the Collocation of Wireless Antennas Executed by the Federal Communications Commission, the National Conference of State Historic...
Code of Federal Regulations, 2014 CFR
2014-10-01
... Collocation of Wireless Antennas B Appendix B to Part 1 Telecommunication FEDERAL COMMUNICATIONS COMMISSION... the Collocation of Wireless Antennas Nationwide Programmatic Agreement for the Collocation of Wireless Antennas Executed by the Federal Communications Commission, the National Conference of State Historic...
Code of Federal Regulations, 2011 CFR
2011-10-01
... Collocation of Wireless Antennas B Appendix B to Part 1 Telecommunication FEDERAL COMMUNICATIONS COMMISSION... the Collocation of Wireless Antennas Nationwide Programmatic Agreement for the Collocation of Wireless Antennas Executed by the Federal Communications Commission, the National Conference of State Historic...
Code of Federal Regulations, 2013 CFR
2013-10-01
... Collocation of Wireless Antennas B Appendix B to Part 1 Telecommunication FEDERAL COMMUNICATIONS COMMISSION... the Collocation of Wireless Antennas Nationwide Programmatic Agreement for the Collocation of Wireless Antennas Executed by the Federal Communications Commission, the National Conference of State Historic...
Code of Federal Regulations, 2012 CFR
2012-10-01
... Collocation of Wireless Antennas B Appendix B to Part 1 Telecommunication FEDERAL COMMUNICATIONS COMMISSION... the Collocation of Wireless Antennas Nationwide Programmatic Agreement for the Collocation of Wireless Antennas Executed by the Federal Communications Commission, the National Conference of State Historic...
NASA Astrophysics Data System (ADS)
Mueller, A.
2018-04-01
A new transparent artificial boundary condition for two-dimensional (vertical) (2DV) free surface water wave propagation, modelled using the meshless Radial-Basis-Function Collocation Method (RBFCM) as a boundary-only solution, is derived. The two-way artificial boundary condition (2wABC) works as a pure incidence, pure radiation, or combined incidence/radiation BC. In this work the 2wABC is applied to harmonic linear water waves; its performance is tested against the analytical solution for wave propagation over a horizontal sea bottom, standing and partially standing waves, as well as the interference of waves with different periods.
Acoustic ranging of small arms fire using a single sensor node collocated with the target.
Lo, Kam W; Ferguson, Brian G
2015-06-01
A ballistic model-based method, which builds upon previous work by Lo and Ferguson [J. Acoust. Soc. Am. 132, 2997-3017 (2012)], is described for ranging small arms fire using a single acoustic sensor node collocated with the target, without a priori knowledge of the muzzle speed and ballistic constant of the bullet except that they belong to a known two-dimensional parameter space. The method requires measurements of the differential time of arrival and differential angle of arrival of the muzzle blast and ballistic shock wave at the sensor node. Its performance is evaluated using both simulated and real data.
Systematic parameter inference in stochastic mesoscopic modeling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lei, Huan; Yang, Xiu; Li, Zhen
2017-02-01
We propose a method to efficiently determine the optimal coarse-grained force field in mesoscopic stochastic simulations of Newtonian fluid and polymer melt systems modeled by dissipative particle dynamics (DPD) and energy conserving dissipative particle dynamics (eDPD). The response surfaces of various target properties (viscosity, diffusivity, pressure, etc.) with respect to model parameters are constructed based on the generalized polynomial chaos (gPC) expansion using simulation results on sampling points (e.g., individual parameter sets). To alleviate the computational cost to evaluate the target properties, we employ the compressive sensing method to compute the coefficients of the dominant gPC terms given the prior knowledge that the coefficients are "sparse". The proposed method shows comparable accuracy with the standard probabilistic collocation method (PCM) while it imposes a much weaker restriction on the number of the simulation samples, especially for systems with high dimensional parametric space. Full access to the response surfaces within the confidence range enables us to infer the optimal force parameters given the desirable values of target properties at the macroscopic scale. Moreover, it enables us to investigate the intrinsic relationship between the model parameters, identify possible degeneracies in the parameter space, and optimize the model by eliminating model redundancies. The proposed method provides an efficient alternative approach for constructing mesoscopic models by inferring model parameters to recover target properties of the physics systems (e.g., from experimental measurements), where those force field parameters and formulation cannot be derived from the microscopic level in a straightforward way.
Motsa, S. S.; Magagula, V. M.; Sibanda, P.
2014-01-01
This paper presents a new method for solving higher-order nonlinear evolution partial differential equations (NPDEs). The method combines quasilinearisation, the Chebyshev spectral collocation method, and bivariate Lagrange interpolation. In this paper, we use the method to solve several nonlinear evolution equations, such as the modified KdV-Burgers equation, the highly nonlinear modified KdV equation, Fisher's equation, the Burgers-Fisher equation, the Burgers-Huxley equation, and the Fitzhugh-Nagumo equation. The results are compared with known exact analytical solutions from the literature to confirm the accuracy, convergence, and effectiveness of the method. There is congruence between the numerical results and the exact solutions to a high order of accuracy. Tables present the order of accuracy of the method, convergence graphs verify convergence, and error graphs show the excellent agreement between the results from this study and the known results from the literature. PMID:25254252
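The Chebyshev spectral collocation building block of such schemes is small enough to sketch. The following is a standard Trefethen-style differentiation matrix, not the paper's bivariate implementation; the quasilinearisation and Lagrange-interpolation layers would sit on top of it.

```python
import numpy as np

def cheb(N):
    """Chebyshev collocation points and first-derivative matrix (Trefethen)."""
    if N == 0:
        return np.zeros((1, 1)), np.array([1.0])
    x = np.cos(np.pi * np.arange(N + 1) / N)          # Gauss-Lobatto points
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    dX = np.subtract.outer(x, x)
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))   # off-diagonal entries
    D -= np.diag(D.sum(axis=1))                       # diagonal via row sums
    return D, x

# Spot-check: a degree-3 polynomial is differentiated exactly for N >= 3
D, x = cheb(8)
print(np.max(np.abs(D @ x**3 - 3 * x**2)))            # ~1e-14
```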
Richard V. Field, Jr.; Emery, John M.; Grigoriu, Mircea Dan
2015-05-19
The stochastic collocation (SC) and stochastic Galerkin (SG) methods are two well-established and successful approaches for solving general stochastic problems. A recently developed method based on stochastic reduced order models (SROMs) can also be used. Herein we provide a comparison of the three methods for some numerical examples; our evaluation only holds for the examples considered in the paper. The purpose of the comparisons is not to criticize the SC or SG methods, which have proven very useful for a broad range of applications, nor is it to provide overall ratings of these methods as compared to the SROM method. Furthermore, our objectives are to present the SROM method as an alternative approach to solving stochastic problems and provide information on the computational effort required by the implementation of each method, while simultaneously assessing their performance for a collection of specific problems.
Examining Second Language Receptive Knowledge of Collocation and Factors That Affect Learning
ERIC Educational Resources Information Center
Nguyen, Thi My Hang; Webb, Stuart
2017-01-01
This study investigated Vietnamese EFL learners' knowledge of verb-noun and adjective-noun collocations at the first three 1,000 word frequency levels, and the extent to which five factors (node word frequency, collocation frequency, mutual information score, congruency, and part of speech) predicted receptive knowledge of collocation. Knowledge…
Tensorial Basis Spline Collocation Method for Poisson's Equation
NASA Astrophysics Data System (ADS)
Plagne, Laurent; Berthou, Jean-Yves
2000-01-01
This paper aims to describe the tensorial basis spline collocation method applied to Poisson's equation. In the case of a localized 3D charge distribution in vacuum, this direct method based on a tensorial decomposition of the differential operator is shown to be competitive with both iterative BSCM and FFT-based methods. We emphasize the O(h4) and O(h6) convergence of TBSCM for cubic and quintic splines, respectively. We describe the implementation of this method on a distributed memory parallel machine. Performance measurements on a Cray T3E are reported. Our code exhibits high performance and good scalability: As an example, a 27 Gflops performance is obtained when solving Poisson's equation on a 2563 non-uniform 3D Cartesian mesh by using 128 T3E-750 processors. This represents 215 Mflops per processor.
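The essence of the tensorial decomposition can be sketched with a toy 2D Poisson solve: diagonalise a one-dimensional operator once, then solve the tensor-product system directly. The sketch below substitutes a second-order finite-difference stencil for the paper's spline collocation operator (so it converges at O(h2) rather than O(h4)/O(h6)); the structure of the direct solve is the same.

```python
import numpy as np

# "Tensorial" Poisson solve: diagonalise the 1-D operator once, then solve
# -(u_xx + u_yy) = f with O(n^3) dense work instead of solving an (n^2)-size
# system directly.
n, h = 64, 1.0 / 65
A = (np.diag(2.0 * np.ones(n)) + np.diag(-np.ones(n - 1), 1)
     + np.diag(-np.ones(n - 1), -1)) / h**2       # 1-D stencil for -d2/dx2

lam, Q = np.linalg.eigh(A)                        # A = Q diag(lam) Q^T
x = np.linspace(h, 1 - h, n)
X, Y = np.meshgrid(x, x, indexing="ij")
f = 2 * np.pi**2 * np.sin(np.pi * X) * np.sin(np.pi * Y)

# Separable solve: transform, divide by eigenvalue sums, transform back
F = Q.T @ f @ Q
U = Q @ (F / (lam[:, None] + lam[None, :])) @ Q.T
print(np.max(np.abs(U - np.sin(np.pi * X) * np.sin(np.pi * Y))))  # ~O(h^2)
```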
NASA Technical Reports Server (NTRS)
Gatewood, B. E.
1971-01-01
The linearized integral equation for the Foucault test of a solid mirror was solved by various methods: power series, Fourier series, collocation, iteration, and inversion integral. The case of the Cassegrain mirror was solved by a particular power series method, collocation, and inversion integral. The inversion integral method appears to be the best overall method for both the solid and Cassegrain mirrors. Certain particular types of power series and Fourier series are satisfactory for the Cassegrain mirror. Numerical integration of the nonlinear equation for selected surface imperfections showed that results start to deviate from those given by the linearized equation at a surface deviation of about 3 percent of the wavelength of light. Several possible procedures for calibrating and scaling the input data for the integral equation are described.
NASA Technical Reports Server (NTRS)
Townsend, J.; Meyers, C.; Ortega, R.; Peck, J.; Rheinfurth, M.; Weinstock, B.
1993-01-01
Probabilistic structural analyses and design methods are steadily gaining acceptance within the aerospace industry. The safety factor approach to design has long been the industry standard, and it is believed by many to be overly conservative and thus, costly. A probabilistic approach to design may offer substantial cost savings. This report summarizes several probabilistic approaches: the probabilistic failure analysis (PFA) methodology developed by Jet Propulsion Laboratory, fast probability integration (FPI) methods, the NESSUS finite element code, and response surface methods. Example problems are provided to help identify the advantages and disadvantages of each method.
English Learners' Knowledge of Prepositions: Collocational Knowledge or Knowledge Based on Meaning?
ERIC Educational Resources Information Center
Mueller, Charles M.
2011-01-01
Second language (L2) learners' successful performance in an L2 can be partly attributed to their knowledge of collocations. In some cases, this knowledge is accompanied by knowledge of the semantic and/or grammatical patterns that motivate the collocation. At other times, collocational knowledge may serve a compensatory role. To determine the…
ERIC Educational Resources Information Center
Wolter, Brent; Gyllstad, Henrik
2013-01-01
This study investigated the influence of frequency effects on the processing of congruent (i.e., having an equivalent first language [L1] construction) collocations and incongruent (i.e., not having an equivalent L1 construction) collocations in a second language (L2). An acceptability judgment task was administered to native and advanced…
Corpus-Based versus Traditional Learning of Collocations
ERIC Educational Resources Information Center
Daskalovska, Nina
2015-01-01
One of the aspects of knowing a word is the knowledge of which words it is usually used with. Since knowledge of collocations is essential for appropriate and fluent use of language, learning collocations should have a central place in the study of vocabulary. There are different opinions about the best ways of learning collocations. This study…
ERIC Educational Resources Information Center
Gablasova, Dana; Brezina, Vaclav; McEnery, Tony
2017-01-01
This article focuses on the use of collocations in language learning research (LLR). Collocations, as units of formulaic language, are becoming prominent in our understanding of language learning and use; however, while the number of corpus-based LLR studies of collocations is growing, there is still a need for a deeper understanding of factors…
Probabilistic Structural Analysis Methods (PSAM) for select space propulsion system components
NASA Technical Reports Server (NTRS)
1991-01-01
The fourth year of technical developments on the Numerical Evaluation of Stochastic Structures Under Stress (NESSUS) system for Probabilistic Structural Analysis Methods is summarized. The effort focused on the continued expansion of the Probabilistic Finite Element Method (PFEM) code, the implementation of the Probabilistic Boundary Element Method (PBEM), and the implementation of the Probabilistic Approximate Methods (PAppM) code. The principal focus for the PFEM code is the addition of a multilevel structural dynamics capability. The strategy includes probabilistic loads, treatment of material, geometry uncertainty, and full probabilistic variables. Enhancements are included for the Fast Probability Integration (FPI) algorithms and the addition of Monte Carlo simulation as an alternate. Work on the expert system and boundary element developments continues. The enhanced capability in the computer codes is validated by applications to a turbine blade and to an oxidizer duct.
The Benard problem: A comparison of finite difference and spectral collocation eigen value solutions
NASA Technical Reports Server (NTRS)
Skarda, J. Raymond Lee; Mccaughan, Frances E.; Fitzmaurice, Nessan
1995-01-01
The application of spectral methods, using a Chebyshev collocation scheme, to solve hydrodynamic stability problems is demonstrated on the Benard problem. Implementation of the Chebyshev collocation formulation is described. The performance of the spectral scheme is compared with that of a 2nd order finite difference scheme. An exact solution to the Marangoni-Benard problem is used to evaluate the performance of both schemes. The error of the spectral scheme is at least seven orders of magnitude smaller than finite difference error for a grid resolution of N = 15 (number of points used). The performance of the spectral formulation far exceeded the performance of the finite difference formulation for this problem. The spectral scheme required only slightly more effort to set up than the 2nd order finite difference scheme. This suggests that the spectral scheme may actually be faster to implement than higher order finite difference schemes.
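A miniature version of such a comparison, on the model eigenproblem u'' = lam*u with homogeneous Dirichlet conditions, is sketched below; the problem and parameter values are illustrative, not the Marangoni-Benard configuration, and the Chebyshev construction is the same standard differentiation matrix sketched earlier.

```python
import numpy as np

# Stability-style eigenproblem u'' = lam*u, u(-1) = u(1) = 0, with exact
# eigenvalues lam_k = -(k*pi/2)**2.  N = 15 interior points, as in the paper.
N = 15

# Second-order finite differences on a uniform interior grid
h = 2.0 / (N + 1)
A_fd = (np.diag(-2.0 * np.ones(N)) + np.diag(np.ones(N - 1), 1)
        + np.diag(np.ones(N - 1), -1)) / h**2
lam_fd = np.sort(np.linalg.eigvalsh(A_fd))[::-1]        # closest to 0 first

# Chebyshev collocation: differentiation matrix squared, boundary rows and
# columns stripped to impose u(+-1) = 0
x = np.cos(np.pi * np.arange(N + 2) / (N + 1))
c = np.hstack([2.0, np.ones(N), 2.0]) * (-1.0) ** np.arange(N + 2)
D = np.outer(c, 1.0 / c) / (np.subtract.outer(x, x) + np.eye(N + 2))
D -= np.diag(D.sum(axis=1))
lam_sp = np.sort(np.real(np.linalg.eigvals((D @ D)[1:-1, 1:-1])))[::-1]

exact = -(np.arange(1, N + 1) * np.pi / 2.0) ** 2
print(abs(lam_fd[0] - exact[0]))    # ~8e-3 for 2nd-order FD
print(abs(lam_sp[0] - exact[0]))    # many orders of magnitude smaller
```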
Simurda, Matej; Duggen, Lars; Basse, Nils T; Lassen, Benny
2018-02-01
A numerical model for transit-time ultrasonic flowmeters operating under multiphase flow conditions previously presented by us is extended by mesh refinement and grid point redistribution. The method solves modified first-order stress-velocity equations of elastodynamics with additional terms to account for the effect of the background flow. Spatial derivatives are calculated by a Fourier collocation scheme allowing the use of the fast Fourier transform, while the time integration is realized by the explicit third-order Runge-Kutta finite-difference scheme. The method is compared against analytical solutions and experimental measurements to verify the benefit of using mapped grids. Additionally, a study of clamp-on and in-line ultrasonic flowmeters operating under multiphase flow conditions is carried out.
A Jacobi collocation approximation for nonlinear coupled viscous Burgers' equation
NASA Astrophysics Data System (ADS)
Doha, Eid H.; Bhrawy, Ali H.; Abdelkawy, Mohamed A.; Hafez, Ramy M.
2014-02-01
This article presents a numerical approximation of the initial-boundary nonlinear coupled viscous Burgers' equation based on spectral methods. A Jacobi-Gauss-Lobatto collocation (J-GL-C) scheme in combination with the implicit Runge-Kutta-Nyström (IRKN) scheme is employed to obtain highly accurate approximations to the mentioned problem. This J-GL-C method, based on Jacobi polynomials and Gauss-Lobatto quadrature integration, reduces solving the nonlinear coupled viscous Burgers' equation to a system of nonlinear ordinary differential equations, which is far easier to solve. The given examples show, with relatively few J-GL-C points, the accuracy of the approximations and the utility of the approach over other analytical or numerical methods. The illustrative examples demonstrate the accuracy, efficiency, and versatility of the proposed algorithm.
NASA Technical Reports Server (NTRS)
Zhang, Yiqiang; Alexander, J. I. D.; Ouazzani, J.
1994-01-01
Free and moving boundary problems require the simultaneous solution of unknown field variables and the boundaries of the domains on which these variables are defined. There are many technologically important processes that lead to moving boundary problems associated with fluid surfaces and solid-fluid boundaries. These include crystal growth, metal alloy and glass solidification, melting, and flame propagation. The directional solidification of semiconductor crystals by the Bridgman-Stockbarger method is a typical example of such a complex process. A numerical model of this growth method must solve the appropriate heat, mass and momentum transfer equations and determine the location of the melt-solid interface. In this work, a Chebyshev pseudospectral collocation method is adapted to the problem of directional solidification. Implementation involves a solution algorithm that combines domain decomposition, a finite-difference preconditioned conjugate minimum residual method, and a Picard-type iterative scheme.
NASA Astrophysics Data System (ADS)
Xiao, Long; Liu, Xinggao; Ma, Liang; Zhang, Zeyin
2018-03-01
Dynamic optimisation problems with characteristic times, which arise widely in many areas, are at the frontier of dynamic optimisation research. This paper considers a class of dynamic optimisation problems with constraints that depend on interior points, either fixed or variable, and presents a novel direct pseudospectral method using Legendre-Gauss (LG) collocation points for solving them. The formula for the state at the terminal time of each subdomain is derived, resulting in a linear combination of the state at the LG points in the subdomains that avoids the complex nonlinear integral. The sensitivities of the state at the collocation points with respect to the variable characteristic times are derived to improve the efficiency of the method. Three well-known characteristic-time dynamic optimisation problems are solved and compared in detail with the methods reported in the literature. The results show the effectiveness of the proposed method.
47 CFR 69.121 - Connection charges for expanded interconnection.
Code of Federal Regulations, 2010 CFR
2010-10-01
... separations. (2) Charges for subelements associated with physical collocation or virtual collocation, other... of the virtual collocation equipment described in § 64.1401(e)(1) of this chapter, may reasonably...
Interpolation of Superconducting Gravity Observations Using Least-Squares Collocation Method
NASA Astrophysics Data System (ADS)
Habel, Branislav; Janak, Juraj
2014-05-01
Pre-processing of the gravity data measured by a superconducting gravimeter involves removing spikes, offsets and gaps. Their presence in observations can limit the data analysis and degrade the quality of the obtained results. Short data gaps are filled by a theoretical signal in order to obtain continuous records of gravity. This requires an accurate tidal model and possibly the atmospheric pressure at the observed site. The poster presents the design of an algorithm for interpolation of gravity observations with a sampling rate of 1 min. The novel approach is based on least-squares collocation, which combines adjustment of trend parameters, filtering of noise, and prediction. It allows the interpolation of missing data up to a few hours without the necessity of any other information. Appropriate parameters for the covariance function are found using Bayes' theorem by a modified optimization process. The accuracy of the method is improved by the rejection of outliers before interpolation. For filling longer gaps, the collocation model is combined with the theoretical tidal signal for the rigid Earth. Finally, the proposed method was tested on the superconducting gravity observations at several selected stations of the Global Geodynamics Project. Testing demonstrates its reliability and offers results comparable with the standard approach implemented in the ETERNA software package, without the necessity of an accurate tidal model.
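A bare-bones version of the collocation prediction step, with an assumed Gaussian covariance function and synthetic 1-min gravity residuals in place of real observations and a fitted covariance model, might look like this:

```python
import numpy as np

# Least-squares collocation: predict missing 1-min samples from the
# neighbouring observations, modelled as signal + white noise.
rng = np.random.default_rng(1)
t = np.arange(0.0, 600.0)                        # minutes
signal = 50 * np.sin(2 * np.pi * t / 360)        # stand-in for tidal residual
obs = signal + rng.normal(0, 0.5, t.size)        # noisy observations

gap = (t >= 200) & (t < 260)                     # one-hour gap to fill
tk, yk = t[~gap], obs[~gap]

def C(a, b, var=2500.0, L=60.0):
    """Assumed Gaussian signal covariance, correlation length L minutes."""
    return var * np.exp(-np.subtract.outer(a, b) ** 2 / (2 * L**2))

# Collocation prediction: s_hat = C_st (C_tt + C_nn)^-1 y
Ctt = C(tk, tk) + 0.25 * np.eye(tk.size)         # add noise variance C_nn
s_hat = C(t[gap], tk) @ np.linalg.solve(Ctt, yk)
print(np.max(np.abs(s_hat - signal[gap])))       # error near the noise level
```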
Recent developments of the NESSUS probabilistic structural analysis computer program
NASA Technical Reports Server (NTRS)
Millwater, H.; Wu, Y.-T.; Torng, T.; Thacker, B.; Riha, D.; Leung, C. P.
1992-01-01
The NESSUS probabilistic structural analysis computer program combines state-of-the-art probabilistic algorithms with general purpose structural analysis methods to compute the probabilistic response and the reliability of engineering structures. Uncertainty in loading, material properties, geometry, boundary conditions and initial conditions can be simulated. The structural analysis methods include nonlinear finite element and boundary element methods. Several probabilistic algorithms are available such as the advanced mean value method and the adaptive importance sampling method. The scope of the code has recently been expanded to include probabilistic life and fatigue prediction of structures in terms of component and system reliability and risk analysis of structures considering cost of failure. The code is currently being extended to structural reliability considering progressive crack propagation. Several examples are presented to demonstrate the new capabilities.
NASA Astrophysics Data System (ADS)
Prinn, R. G.
2013-12-01
The world is facing major challenges that create tensions between human development and environmental sustenance. In facing these challenges, computer models are invaluable tools for addressing the need for probabilistic approaches to forecasting. To illustrate this, I use the MIT Integrated Global System Model framework (IGSM; http://globalchange.mit.edu). The IGSM consists of a set of coupled sub-models of global economic and technological development and resultant emissions, and of physical, dynamical and chemical processes in the atmosphere, land, ocean and ecosystems (natural and managed). Some of the sub-models have both complex and simplified versions available, with the choice of which version to use being guided by the questions being addressed. Some sub-models (e.g. urban air pollution) are reduced forms of complex ones, created by probabilistic collocation with polynomial chaos bases. Given the significant uncertainties in the model components, it is highly desirable that forecasts be probabilistic. We achieve this by running 400-member ensembles (Latin hypercube sampling) with different choices for key uncertain variables and processes within the human and natural system model components (pdfs of inputs estimated by model-observation comparisons, literature surveys, or expert elicitation). The IGSM has recently been used for probabilistic forecasts of climate, each using 400-member ensembles: one ensemble assumes no explicit climate mitigation policy and others assume increasingly stringent policies involving stabilization of greenhouse gases at various levels. These forecasts indicate clearly that the greatest effect of these policies is to lower the probability of extreme changes. The value of such probability analyses for policy decision-making lies in their ability to compare relative (not just absolute) risks of various policies, which are less affected by the earth system model uncertainties. Given the uncertainties in forecasts, it is also clear that we need to evaluate policies based on their ability to lower risk, and to re-evaluate decisions over time as new knowledge is gained. Reference: R. G. Prinn, Development and Application of Earth System Models, Proceedings of the National Academy of Sciences, June 15, 2012, http://www.pnas.org/cgi/doi/10.1073/pnas.1107470109.
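For illustration, a 400-member Latin hypercube ensemble over a handful of uncertain inputs can be drawn as below; the parameter names, ranges, and distributions are placeholders, not the IGSM's actual inputs.

```python
import numpy as np
from scipy.stats import qmc, norm, lognorm

# Build a 400-member Latin hypercube ensemble over three illustrative
# uncertain inputs (names and pdfs are assumptions, not the IGSM's own).
sampler = qmc.LatinHypercube(d=3, seed=42)
u = sampler.random(n=400)                    # stratified samples in [0,1)^3

# Map the unit-cube samples through assumed marginal distributions
clim_sens = qmc.scale(u[:, :1], 1.5, 4.5).ravel()      # K, uniform
aerosol_forcing = norm(-1.0, 0.3).ppf(u[:, 1])         # W/m^2, Gaussian
ocean_diff = lognorm(s=0.5, scale=2.0).ppf(u[:, 2])    # cm^2/s, lognormal

# Each row is one ensemble member's inputs, to be pushed through the model
ensemble = np.column_stack([clim_sens, aerosol_forcing, ocean_diff])
print(ensemble.shape)                        # (400, 3)
```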
Optimal guidance law development for an advanced launch system
NASA Technical Reports Server (NTRS)
Calise, Anthony J.; Leung, Martin S. K.
1995-01-01
The objective of this research effort was to develop a real-time guidance approach for launch vehicle ascent to orbit injection. Various analytical approaches combined with a variety of model-order and model-complexity reductions were investigated. Singular perturbation methods were attempted first and found to be unsatisfactory. The second approach, based on regular perturbation analysis, was subsequently investigated. It also fails because the aerodynamic effects (ignored in the zero-order solution) are too large to be treated as perturbations. Therefore, the study demonstrates that perturbation methods alone (both regular and singular) are inadequate for developing a guidance algorithm for the atmospheric flight phase of a launch vehicle. During a second phase of the research effort, a hybrid analytic/numerical approach was developed and evaluated. The approach combines the numerical method of collocation with the analytical method of regular perturbations, and introduces the concept of choosing intelligent interpolating functions. Regular perturbation analysis allows the use of a crude representation for the collocation solution, and intelligent interpolating functions further reduce the number of elements without sacrificing approximation accuracy. As a result, the combined method forms a powerful tool for solving real-time optimal control problems. Details of the approach are illustrated in a fourth-order nonlinear example. The hybrid approach is then applied to the launch vehicle problem. The collocation solution is derived from a bilinear tangent steering law, and results in a guidance solution for the entire flight regime, including both atmospheric and exoatmospheric flight phases.
Noise reduction in long‐period seismograms by way of array summing
Ringler, Adam; Wilson, David; Storm, Tyler; Marshall, Benjamin T.; Hutt, Charles R.; Holland, Austin
2016-01-01
Long‐period (>100 s period) seismic data can often be dominated by instrumental noise as well as local site noise. When multiple collocated sensors are installed at a single site, it is possible to improve the overall station noise levels by applying stacking methods to their traces. We look at the noise reduction in long‐period seismic data by applying the time–frequency phase‐weighted stacking method of Schimmel and Gallart (2007) as well as the phase‐weighted stacking (PWS) method of Schimmel and Paulssen (1997) to four collocated broadband sensors installed in the quiet Albuquerque Seismological Laboratory underground vault. We show that such stacking methods can improve vertical noise levels by as much as 10 dB over the mean background noise levels at 400 s period, suggesting that greater improvements could be achieved with an array involving multiple sensors. We also apply this method to reduce local incoherent noise on horizontal seismic records of the 2 March 2016 Mw 7.8 Sumatra earthquake, where the incoherent noise levels at very long periods are similar in amplitude to the earthquake signal. To maximize the coherency, we apply the PWS method to horizontal data where relative azimuths between collocated sensors are estimated and compared with a simpler linear stack with no azimuthal rotation. Such methods could help reduce noise levels at various seismic stations where multiple high‐quality sensors have been deployed. Such small arrays may also provide a solution to improving long‐period noise levels at Global Seismographic Network stations.
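A time-domain sketch of the Schimmel and Paulssen (1997) phase-weighted stack, applied to synthetic collocated traces, is shown below; the paper also uses the time-frequency variant, which this sketch does not implement, and all trace parameters are invented.

```python
import numpy as np
from scipy.signal import hilbert

def phase_weighted_stack(traces, nu=2.0):
    """PWS of an array of shape (n_sensors, n_samples).

    The linear stack is weighted by the coherence of the instantaneous
    phases, so incoherent (instrumental/site) noise is down-weighted.
    """
    analytic = hilbert(traces, axis=1)
    phase = analytic / np.abs(analytic)              # unit phasors e^{i*phi}
    coherence = np.abs(phase.mean(axis=0)) ** nu     # in [0, 1]
    return traces.mean(axis=0) * coherence

# Synthetic test: a common 400-s signal buried in independent sensor noise
rng = np.random.default_rng(7)
t = np.arange(0.0, 4000.0, 1.0)
signal = np.sin(2 * np.pi * t / 400)
traces = signal + rng.normal(0, 1.0, (4, t.size))    # four collocated sensors

pws = phase_weighted_stack(traces)
print(np.std(traces[0] - signal), np.std(pws - signal))  # noise reduced
```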
NASA Astrophysics Data System (ADS)
Hu, Y.; Vaughan, M.; McClain, C.; Behrenfeld, M.; Maring, H.; Anderson, D.; Sun-Mack, S.; Flittner, D.; Huang, J.; Wielicki, B.; Minnis, P.; Weimer, C.; Trepte, C.; Kuehn, R.
2007-06-01
This study presents an empirical relation that links the volume extinction coefficients of water clouds, the layer integrated depolarization ratios measured by lidar, and the effective radii of water clouds derived from collocated passive sensor observations. Based on Monte Carlo simulations of CALIPSO lidar observations, this method combines the cloud effective radius reported by MODIS with the lidar depolarization ratios measured by CALIPSO to estimate both the liquid water content and the effective number concentration of water clouds. The method is applied to collocated CALIPSO and MODIS measurements obtained during July and October of 2006, and January 2007. Global statistics of the cloud liquid water content and effective number concentration are presented.
An hp symplectic pseudospectral method for nonlinear optimal control
NASA Astrophysics Data System (ADS)
Peng, Haijun; Wang, Xinwei; Li, Mingwu; Chen, Biaosong
2017-01-01
An adaptive symplectic pseudospectral method based on the dual variational principle is proposed and successfully applied to solving nonlinear optimal control problems in this paper. The proposed method satisfies the first-order necessary conditions of continuous optimal control problems, and the symplectic property of the original continuous Hamiltonian system is preserved. The original optimal control problem is transformed into a set of nonlinear equations which can be solved easily by Newton-Raphson iterations, and the Jacobian matrix is found to be sparse and symmetric. The proposed method exhibits, on one hand, exponential convergence when the number of collocation points increases with a fixed number of sub-intervals and, on the other hand, linear convergence when the number of sub-intervals increases with a fixed number of collocation points. Furthermore, combined with an hp strategy based on the residual error of the dynamic constraints, the proposed method can achieve given precisions in a few iterations. Five examples highlight the high precision and high computational efficiency of the proposed method.
Spectral methods on arbitrary grids
NASA Technical Reports Server (NTRS)
Carpenter, Mark H.; Gottlieb, David
1995-01-01
Stable and spectrally accurate numerical methods are constructed on arbitrary grids for partial differential equations. These new methods are equivalent to conventional spectral methods but do not rely on specific grid distributions. Specifically, we show how to implement Legendre Galerkin, Legendre collocation, and Laguerre Galerkin methodology on arbitrary grids.
Isogeometric Collocation for Elastostatics and Explicit Dynamics
2012-01-25
ICES REPORT 12-07, January 2012: F. Auricchio, L. Beirão da Veiga, T.J.R. Hughes, A. Reali, G. Sangalli, "Isogeometric collocation for elastostatics and explicit dynamics".
NASA Astrophysics Data System (ADS)
Bhrawy, A. H.; Doha, E. H.; Ezz-Eldien, S. S.; Van Gorder, Robert A.
2014-12-01
The Jacobi spectral collocation method (JSCM) is constructed and used in combination with the operational matrix of fractional derivatives (described in the Caputo sense) for the numerical solution of the time-fractional Schrödinger equation (T-FSE) and the space-fractional Schrödinger equation (S-FSE). The main characteristic behind this approach is that it reduces such problems to those of solving a system of algebraic equations, which greatly simplifies the solution process. In addition, the presented approach is also applied to solve the time-fractional coupled Schrödinger system (T-FCSS). In order to demonstrate the validity and accuracy of the numerical scheme proposed, several numerical examples with their approximate solutions are presented with comparisons between our numerical results and those obtained by other methods.
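The operational-matrix idea rests on the fact that the Caputo derivative acts on each polynomial basis term in closed form; a minimal sketch of that term-by-term rule, applied to an arbitrary polynomial, follows (the Jacobi basis and the Schrödinger discretisation themselves are beyond this snippet).

```python
import numpy as np
from scipy.special import gamma

def caputo_monomial(k, alpha, t):
    """Caputo derivative of t**k for 0 < alpha < 1 and integer k >= 1:
       D^alpha t^k = Gamma(k+1) / Gamma(k+1-alpha) * t**(k-alpha)."""
    return gamma(k + 1) / gamma(k + 1 - alpha) * t ** (k - alpha)

# Any polynomial expansion (e.g. a Jacobi collocation expansion) can be
# differentiated term by term with this rule; constants vanish under the
# Caputo definition.  This is what reduces a fractional PDE to algebraic
# equations in the expansion coefficients.
t = np.linspace(0.1, 1.0, 5)
alpha = 0.5
coeffs = [0.0, 2.0, -1.0]                  # p(t) = 2t - t^2
dp = sum(c * caputo_monomial(k, alpha, t)
         for k, c in enumerate(coeffs) if k > 0)
print(dp)                                  # D^0.5 p(t) on the sample points
```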
NASA Technical Reports Server (NTRS)
Cruse, T. A.
1987-01-01
The objective is the development of several modular structural analysis packages capable of predicting the probabilistic response distribution for key structural variables such as maximum stress, natural frequencies, transient response, etc. The structural analysis packages are to include stochastic modeling of loads, material properties, geometry (tolerances), and boundary conditions. The solution is to be in terms of the cumulative probability of exceedance distribution (CDF) and confidence bounds. Two methods of probability modeling are to be included as well as three types of structural models - probabilistic finite-element method (PFEM); probabilistic approximate analysis methods (PAAM); and probabilistic boundary element methods (PBEM). The purpose in doing probabilistic structural analysis is to provide the designer with a more realistic ability to assess the importance of uncertainty in the response of a high performance structure. Probabilistic Structural Analysis Method (PSAM) tools will estimate structural safety and reliability, while providing the engineer with information on the confidence that should be given to the predicted behavior. Perhaps most critically, the PSAM results will directly provide information on the sensitivity of the design response to those variables which are seen to be uncertain.
NASA Technical Reports Server (NTRS)
Cruse, T. A.; Burnside, O. H.; Wu, Y.-T.; Polch, E. Z.; Dias, J. B.
1988-01-01
The objective is the development of several modular structural analysis packages capable of predicting the probabilistic response distribution for key structural variables such as maximum stress, natural frequencies, transient response, etc. The structural analysis packages are to include stochastic modeling of loads, material properties, geometry (tolerances), and boundary conditions. The solution is to be in terms of the cumulative probability of exceedance distribution (CDF) and confidence bounds. Two methods of probability modeling are to be included as well as three types of structural models - probabilistic finite-element method (PFEM); probabilistic approximate analysis methods (PAAM); and probabilistic boundary element methods (PBEM). The purpose in doing probabilistic structural analysis is to provide the designer with a more realistic ability to assess the importance of uncertainty in the response of a high performance structure. Probabilistic Structural Analysis Method (PSAM) tools will estimate structural safety and reliability, while providing the engineer with information on the confidence that should be given to the predicted behavior. Perhaps most critically, the PSAM results will directly provide information on the sensitivity of the design response to those variables which are seen to be uncertain.
Probabilistic structural analysis methods for space propulsion system components
NASA Technical Reports Server (NTRS)
Chamis, C. C.
1986-01-01
The development of a three-dimensional inelastic analysis methodology for the Space Shuttle main engine (SSME) structural components is described. The methodology is composed of: (1) composite load spectra, (2) probabilistic structural analysis methods, (3) the probabilistic finite element theory, and (4) probabilistic structural analysis. The methodology has led to significant technical progress in several important aspects of probabilistic structural analysis. The program and accomplishments to date are summarized.
Revisiting and Extending Interface Penalties for Multi-Domain Summation-by-Parts Operators
NASA Technical Reports Server (NTRS)
Carpenter, Mark H.; Nordstrom, Jan; Gottlieb, David
2007-01-01
General interface coupling conditions are presented for multi-domain collocation methods, which satisfy the summation-by-parts (SBP) spatial discretization convention. The combined interior/interface operators are proven to be L2 stable, pointwise stable, and conservative, while maintaining the underlying accuracy of the interior SBP operator. The new interface conditions resemble (and were motivated by) those used in the discontinuous Galerkin finite element community, and maintain many of the same properties. Extensive validation studies are presented using two classes of high-order SBP operators: 1) central finite difference, and 2) Legendre spectral collocation.
Probabilistic Structural Analysis Theory Development
NASA Technical Reports Server (NTRS)
Burnside, O. H.
1985-01-01
The objective of the Probabilistic Structural Analysis Methods (PSAM) project is to develop analysis techniques and computer programs for predicting the probabilistic response of critical structural components for current and future space propulsion systems. This technology will play a central role in establishing system performance and durability. The first year's technical activity is concentrating on probabilistic finite element formulation strategy and code development. Work is also in progress to survey critical materials and Space Shuttle main engine components. The probabilistic finite element computer program NESSUS (Numerical Evaluation of Stochastic Structures Under Stress) is being developed. The final probabilistic code will have, in the general case, the capability of performing nonlinear dynamic analysis of stochastic structures. It is the goal of the approximate methods effort to increase problem-solving efficiency relative to finite element methods by using energy methods to generate trial solutions which satisfy the structural boundary conditions. These approximate methods will be less computer intensive than the finite element approach.
Probabilistic Structural Analysis Program
NASA Technical Reports Server (NTRS)
Pai, Shantaram S.; Chamis, Christos C.; Murthy, Pappu L. N.; Stefko, George L.; Riha, David S.; Thacker, Ben H.; Nagpal, Vinod K.; Mital, Subodh K.
2010-01-01
NASA/NESSUS 6.2c is a general-purpose, probabilistic analysis program that computes probability of failure and probabilistic sensitivity measures of engineered systems. Because NASA/NESSUS uses highly computationally efficient and accurate analysis techniques, probabilistic solutions can be obtained even for extremely large and complex models. Once the probabilistic response is quantified, the results can be used to support risk-informed decisions regarding reliability for safety-critical and one-of-a-kind systems, as well as for maintaining a level of quality while reducing manufacturing costs for larger-quantity products. NASA/NESSUS has been successfully applied to a diverse range of problems in aerospace, gas turbine engines, biomechanics, pipelines, defense, weaponry, and infrastructure. This program combines state-of-the-art probabilistic algorithms with general-purpose structural analysis and life-prediction methods to compute the probabilistic response and reliability of engineered structures. Uncertainties in load, material properties, geometry, boundary conditions, and initial conditions can be simulated. The structural analysis methods include non-linear finite-element methods, heat-transfer analysis, polymer/ceramic matrix composite analysis, monolithic (conventional metallic) materials life-prediction methodologies, boundary element methods, and user-written subroutines. Several probabilistic algorithms are available, such as the advanced mean value method and the adaptive importance sampling method. NASA/NESSUS 6.2c is structured in a modular format with 15 elements.
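As a hedged illustration of the mean-value family of algorithms mentioned above (the advanced mean value method adds iterative refinement not shown here), a first-order reliability estimate for a toy capacity-minus-demand limit state looks like this; the limit state, means, and standard deviations are invented.

```python
import numpy as np
from scipy.stats import norm

# First-order mean-value reliability sketch: linearise the limit state
# g(X) at the input means, then P_f ~ Phi(-beta).
def g(x):
    strength, stress = x          # illustrative: failure when g(x) < 0
    return strength - stress

mu = np.array([400.0, 300.0])     # assumed means (MPa)
sigma = np.array([25.0, 30.0])    # assumed std devs, independent inputs

# Finite-difference gradient of g at the mean point
eps = 1e-6
grad = np.array([(g(mu + eps * np.eye(2)[i]) - g(mu)) / eps
                 for i in range(2)])

beta = g(mu) / np.sqrt(np.sum((grad * sigma) ** 2))   # reliability index
print("P_f ~", norm.cdf(-beta))   # ~Phi(-2.56) ~ 5e-3 for these numbers
```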
Collocation and Pattern Recognition Effects on System Failure Remediation
NASA Technical Reports Server (NTRS)
Trujillo, Anna C.; Press, Hayes N.
2007-01-01
Previous research found that operators prefer to have status, alerts, and controls located on the same screen. Unfortunately, that research was done with displays that were not designed specifically for collocation. In this experiment, twelve subjects evaluated two displays specifically designed for collocating system information against a baseline that consisted of dial status displays, a separate alert area, and a controls panel. These displays differed in the amount of collocation, pattern matching, and parameter movement relative to display size. During the data runs, subjects kept a randomly moving target centered on a display using a left-handed joystick while scanning the system displays to find and correct a problem using the provided checklist. Results indicate that large parameter movement aided detection, that pattern recognition was needed for diagnosis, and that the collocated displays centralized all the information subjects needed, which reduced workload. Therefore, the collocated display with large parameter movement may be an acceptable display after familiarization, because of the pattern recognition developed with training and use.
Probabilistic Structural Analysis Methods (PSAM) for Select Space Propulsion System Components
NASA Technical Reports Server (NTRS)
1999-01-01
Probabilistic Structural Analysis Methods (PSAM) are described for the probabilistic structural analysis of engine components for current and future space propulsion systems. Components for these systems are subjected to stochastic thermomechanical launch loads. Uncertainties or randomness also occur in material properties, structural geometry, and boundary conditions. Material property stochasticity, such as in modulus of elasticity or yield strength, exists in every structure and is a consequence of variations in material composition and manufacturing processes. Procedures are outlined for computing the probabilistic structural response or reliability of the structural components. The response variables include static or dynamic deflections, strains, and stresses at one or several locations, natural frequencies, fatigue or creep life, etc. Sample cases illustrate how the PSAM methods and codes simulate input uncertainties and compute probabilistic response or reliability using a finite element model with probabilistic methods.
Sokolova, L V; Cherkasova, A S
2015-01-01
Texts or words/pseudowords are often used as stimuli in research on human verbal activity. Our study focuses on the decoding of grammatical constructions consisting of two or three words (collocations). Sets of Russian and English collocations without any narrative were presented to Russian-speaking students with different levels of English proficiency. The stimulus material contained two types of collocations: paradigmatic and syntagmatic. Thirty students (average age 20.4 ± 0.22) took part in the study; they were divided into two equal groups according to their English proficiency (linguists/non-linguists). During reading, the bioelectrical activity of the cortex was recorded from 12 electrodes in the alpha, beta, and theta bands. The coherence function, which reflects the cooperation of different cortical areas during the reading of collocations, was analyzed. An increase in interhemispheric and diagonal connections while reading collocations in different languages in the group of students with low foreign-language proficiency testifies to the importance of functional cooperation between the hemispheres. It was found that the brain bioelectrical activity of students with good foreign-language knowledge during the reading of all collocation types in Russian and English is characterized by an economization of neural resources compared to non-linguists. Selective activation of certain cortical areas, depending on the type of grammatical construction, was also observed in the non-linguist group, which is probably related to a special decoding system that processes the presented stimuli. Reading Russian paradigmatic constructions by non-linguists entailed an increase in connections between left cortical areas; reading English syntagmatic collocations, an increase between right ones.
Probabilistic structural analysis methods of hot engine structures
NASA Technical Reports Server (NTRS)
Chamis, C. C.; Hopkins, D. A.
1989-01-01
Development of probabilistic structural analysis methods for hot engine structures is a major activity at Lewis Research Center. Recent activities have focused on extending the methods to include the combined uncertainties in several factors on structural response. This paper briefly describes recent progress on composite load spectra models, probabilistic finite element structural analysis, and probabilistic strength degradation modeling. Progress is described in terms of fundamental concepts, computer code development, and representative numerical results.
NASA Astrophysics Data System (ADS)
Zaidi, W. H.; Faber, W. R.; Hussein, I. I.; Mercurio, M.; Roscoe, C. W. T.; Wilkins, M. P.
One of the most challenging problems in treating space debris is the characterization of the orbit of a newly detected and uncorrelated measurement. The admissible region is defined as the set of physically acceptable orbits (i.e. orbits with negative energies) consistent with one or more measurements of a Resident Space Object (RSO). Given additional constraints on the orbital semi-major axis, eccentricity, etc., the admissible region can be constrained, resulting in the constrained admissible region (CAR). Based on known statistics of the measurement process, one can replace hard constraints with a Probabilistic Admissible Region (PAR), a concept introduced in 2014 as a Monte Carlo uncertainty representation approach using topocentric spherical coordinates. Ultimately, a PAR can be used to initialize a sequential Bayesian estimator and to prioritize orbital propagations in a multiple hypothesis tracking framework such as Finite Set Statistics (FISST). To date, measurements used to build the PAR have been collected concurrently and by the same sensor. In this paper, we allow measurements to have different time stamps. We also allow for non-collocated sensor collections; optical data can be collected by one sensor at a given time and radar data collected by another sensor located elsewhere. We then revisit first principles to link asynchronous optical and radar measurements using both the conservation of specific orbital energy and specific orbital angular momentum. The result from the proposed algorithm is an implicit-Bayesian and non-Gaussian representation of orbital state uncertainty.
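The core admissibility test is compact: hypothesise a range and range-rate along the line of sight and keep the pair only if the implied specific orbital energy is negative. The sketch below omits the angular-rate contributions to the velocity for brevity, so it is a simplification of the full admissible-region construction; all numbers are illustrative.

```python
import numpy as np

# Admissible-region check: an (range, range-rate) hypothesis is kept only
# if the implied specific orbital energy is negative (a bound orbit).
MU = 398600.4418          # km^3/s^2, Earth's gravitational parameter

def specific_energy(r_site, los_hat, rho, rho_dot, v_site):
    """Energy for a hypothesised range rho and range-rate rho_dot.
    r_site, v_site: observer position/velocity (km, km/s, inertial frame);
    los_hat: unit line-of-sight vector.  Angle rates omitted for brevity."""
    r_vec = r_site + rho * los_hat
    v_vec = v_site + rho_dot * los_hat       # line-of-sight part only
    return 0.5 * v_vec @ v_vec - MU / np.linalg.norm(r_vec)

# Scan a coarse (rho, rho_dot) grid and keep the bound-orbit hypotheses
r_site = np.array([6378.0, 0.0, 0.0])
v_site = np.array([0.0, 0.465, 0.0])         # site velocity from Earth spin
u_hat = np.array([0.0, 0.0, 1.0])            # line of sight straight up
admissible = [(rho, rd)
              for rho in np.linspace(500, 40000, 80)
              for rd in np.linspace(-5, 5, 41)
              if specific_energy(r_site, u_hat, rho, rd, v_site) < 0]
print(len(admissible), "of", 80 * 41, "hypotheses are admissible")
```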
Well-conditioned fractional collocation methods using fractional Birkhoff interpolation basis
NASA Astrophysics Data System (ADS)
Jiao, Yujian; Wang, Li-Lian; Huang, Can
2016-01-01
The purpose of this paper is twofold. Firstly, we provide explicit and compact formulas for computing both Caputo and (modified) Riemann-Liouville (RL) fractional pseudospectral differentiation matrices (F-PSDMs) of any order at general Jacobi-Gauss-Lobatto (JGL) points. We show that in the Caputo case, it suffices to compute the F-PSDM of order μ ∈ (0, 1) to compute that of any order k + μ with integer k ≥ 0, while in the modified RL case, it is only necessary to evaluate a fractional integral matrix of order μ ∈ (0, 1). Secondly, we introduce suitable fractional JGL Birkhoff interpolation problems leading to new interpolation polynomial basis functions with remarkable properties: (i) the matrix generated from the new basis yields the exact inverse of the F-PSDM at "interior" JGL points; (ii) the matrix of the highest fractional derivative in a collocation scheme under the new basis is diagonal; and (iii) the resulting linear system is well-conditioned in the Caputo case, while in the modified RL case, the eigenvalues of the coefficient matrix are highly concentrated. In both cases, the linear systems of the collocation schemes using the new basis can be solved by an iterative solver within a few iterations. Notably, the inverse can be computed in a very stable manner, so this offers optimal preconditioners for usual fractional collocation methods for fractional differential equations (FDEs). It is also noteworthy that the choice of certain special JGL points, with parameters related to the order of the equations, can ease the implementation. We highlight that the use of Bateman's fractional integral formulas and fast transforms between Jacobi polynomials with different parameters is essential for our algorithm development.
Boundary conditions in Chebyshev and Legendre methods
NASA Technical Reports Server (NTRS)
Canuto, C.
1984-01-01
Two different ways of treating non-Dirichlet boundary conditions in Chebyshev and Legendre collocation methods are discussed for second order differential problems. An error analysis is provided. The effect of preconditioning the corresponding spectral operators by finite difference matrices is also investigated.
Collocated electrodynamic FDTD schemes using overlapping Yee grids and higher-order Hodge duals
NASA Astrophysics Data System (ADS)
Deimert, C.; Potter, M. E.; Okoniewski, M.
2016-12-01
The collocated Lebedev grid has previously been proposed as an alternative to the Yee grid for electromagnetic finite-difference time-domain (FDTD) simulations. While it performs better in anisotropic media, it performs poorly in isotropic media because it is equivalent to four overlapping, uncoupled Yee grids. We propose to couple the four Yee grids and fix the Lebedev method using discrete exterior calculus (DEC) with higher-order Hodge duals. We find that higher-order Hodge duals do improve the performance of the Lebedev grid, but they also improve the Yee grid by a similar amount. The effectiveness of coupling overlapping Yee grids with a higher-order Hodge dual is thus questionable. However, the theoretical foundations developed to derive these methods may be of interest in other problems.
A study of the radiative transfer equation using a spherical harmonics-nodal collocation method
NASA Astrophysics Data System (ADS)
Capilla, M. T.; Talavera, C. F.; Ginestar, D.; Verdú, G.
2017-03-01
Optical tomography has found many medical applications that need to know how the photons interact with the different tissues. The majority of the photon transport simulations are done using the diffusion approximation, but this approximation has a limited validity when optical properties of the different tissues present large gradients, when structures near the photons source are studied or when anisotropic scattering has to be taken into account. As an alternative to the diffusion model, the PL equations for the radiative transfer problem are studied. These equations are discretized in a rectangular mesh using a nodal collocation method. The performance of this model is studied by solving different 1D and 2D benchmark problems of light propagation in tissue having media with isotropic and anisotropic scattering.
Learning L2 Collocations Incidentally from Reading
ERIC Educational Resources Information Center
Pellicer-Sánchez, Ana
2017-01-01
Previous studies have shown that intentional learning through explicit instruction is effective for the acquisition of collocations in a second language (L2) (e.g. Peters, 2014, 2015), but relatively little is known about the effectiveness of incidental approaches for the acquisition of L2 collocations. The present study examined the incidental…
Incidental Learning of Collocation
ERIC Educational Resources Information Center
Webb, Stuart; Newton, Jonathan; Chang, Anna
2013-01-01
This study investigated the effects of repetition on the learning of collocation. Taiwanese university students learning English as a foreign language simultaneously read and listened to one of four versions of a modified graded reader that included different numbers of encounters (1, 5, 10, and 15 encounters) with a set of 18 target collocations.…
47 CFR 51.323 - Standards for physical collocation and virtual collocation.
Code of Federal Regulations, 2011 CFR
2011-10-01
... accessible by both the incumbent LEC and the collocating telecommunications carrier, at which the fiber optic... technically feasible, the incumbent LEC shall provide the connection using copper, dark fiber, lit fiber, or... that the incumbent LEC may adopt include: (1) Installing security cameras or other monitoring systems...
47 CFR 51.323 - Standards for physical collocation and virtual collocation.
Code of Federal Regulations, 2010 CFR
2010-10-01
... accessible by both the incumbent LEC and the collocating telecommunications carrier, at which the fiber optic... technically feasible, the incumbent LEC shall provide the connection using copper, dark fiber, lit fiber, or... that the incumbent LEC may adopt include: (1) Installing security cameras or other monitoring systems...
Usability Study of Two Collocated Prototype System Displays
NASA Technical Reports Server (NTRS)
Trujillo, Anna C.
2007-01-01
Currently, most of the displays in control rooms can be categorized as status screens, alerts/procedures screens (or paper), or control screens (where the state of a component is changed by the operator). The primary focus of this line of research is to determine which pieces of information (status, alerts/procedures, and control) should be collocated. Two collocated displays were tested for ease of understanding in an automated desktop survey. This usability study was conducted as a prelude to a larger human-in-the-loop experiment in order to verify that the 2 new collocated displays were easy to learn and usable. The results indicate that while the DC display was preferred and yielded better performance than the MDO display, both collocated displays can be easily learned and used.
NASA Technical Reports Server (NTRS)
Singhal, Surendra N.
2003-01-01
The SAE G-11 RMSL Division and Probabilistic Methods Committee meeting during October 6-8 at the Best Western Sterling Inn, Sterling Heights (Detroit), Michigan is co-sponsored by US Army Tank-automotive & Armaments Command (TACOM). The meeting will provide an industry/government/academia forum to review RMSL technology; reliability and probabilistic technology; reliability-based design methods; software reliability; and maintainability standards. With over 100 members including members with national/international standing, the mission of the G-11's Probabilistic Methods Committee is to "enable/facilitate rapid deployment of probabilistic technology to enhance the competitiveness of our industries by better, faster, greener, smarter, affordable and reliable product development."
Hybrid near-optimal aeroassisted orbit transfer plane change trajectories
NASA Technical Reports Server (NTRS)
Calise, Anthony J.; Duckeman, Gregory A.
1994-01-01
In this paper, a hybrid methodology is used to determine optimal open loop controls for the atmospheric portion of the aeroassisted plane change problem. The method is hybrid in the sense that it combines the features of numerical collocation with the analytically tractable portions of the problem which result when the two-point boundary value problem is cast in the form of a regular perturbation problem. Various levels of approximation are introduced by eliminating particular collocation parameters and their effect upon problem complexity and required number of nodes is discussed. The results include plane changes of 10, 20, and 30 degrees for a given vehicle.
Not Just "Small Potatoes": Knowledge of the Idiomatic Meanings of Collocations
ERIC Educational Resources Information Center
Macis, Marijana; Schmitt, Norbert
2017-01-01
This study investigated learner knowledge of the figurative meanings of 30 collocations that can be both literal and figurative. One hundred and seven Chilean Spanish-speaking university students of English were asked to complete a meaning-recall collocation test in which the target items were embedded in non-defining sentences. Results showed…
Teaching and Learning Collocation in Adult Second and Foreign Language Learning
ERIC Educational Resources Information Center
Boers, Frank; Webb, Stuart
2018-01-01
Perhaps the greatest challenge to creating a research timeline on teaching and learning collocation is deciding how wide to cast the net in the search for relevant publications. For one thing, the term "collocation" does not have the same meaning for all (applied) linguists and practitioners (Barfield & Gyllstad 2009) (see timeline).…
Supporting Collocation Learning with a Digital Library
ERIC Educational Resources Information Center
Wu, Shaoqun; Franken, Margaret; Witten, Ian H.
2010-01-01
Extensive knowledge of collocations is a key factor that distinguishes learners from fluent native speakers. Such knowledge is difficult to acquire simply because there is so much of it. This paper describes a system that exploits the facilities offered by digital libraries to provide a rich collocation-learning environment. The design is based on…
Cross-Linguistic Influence: Its Impact on L2 English Collocation Production
ERIC Educational Resources Information Center
Phoocharoensil, Supakorn
2013-01-01
This research study investigated the influence of learners' mother tongue on their acquisition of English collocations. Having drawn the linguistic data from two groups of Thai EFL learners differing in English proficiency level, the researcher found that the native language (L1) plays a significant role in the participants' collocation learning…
Going beyond Patterns: Involving Cognitive Analysis in the Learning of Collocations
ERIC Educational Resources Information Center
Liu, Dilin
2010-01-01
Since the late 1980s, collocations have received increasing attention in applied linguistics, especially language teaching, as is evidenced by the many publications on the topic. These works fall roughly into two lines of research (a) those focusing on the identification and use of collocations (Benson, 1989; Hunston, 2002; Hunston & Francis,…
English Collocation Learning through Corpus Data: On-Line Concordance and Statistical Information
ERIC Educational Resources Information Center
Ohtake, Hiroshi; Fujita, Nobuyuki; Kawamoto, Takeshi; Morren, Brian; Ugawa, Yoshihiro; Kaneko, Shuji
2012-01-01
We developed an English Collocations On Demand system offering on-line corpus and concordance information to help Japanese researchers acquire a better command of English collocation patterns. The Life Science Dictionary Corpus consists of approximately 90,000,000 words collected from life science related research papers published in academic…
The Effect of Error Correction Feedback on the Collocation Competence of Iranian EFL Learners
ERIC Educational Resources Information Center
Jafarpour, Ali Akbar; Sharifi, Abolghasem
2012-01-01
Collocations are one of the most important elements in language proficiency but the effect of error correction feedback of collocations has not been thoroughly examined. Some researchers report the usefulness and importance of error correction (Hyland, 1990; Bartram & Walton, 1991; Ferris, 1999; Chandler, 2003), while others showed that error…
Collocations of High Frequency Noun Keywords in Prescribed Science Textbooks
ERIC Educational Resources Information Center
Menon, Sujatha; Mukundan, Jayakaran
2012-01-01
This paper analyses the discourse of science through the study of collocational patterns of high frequency noun keywords in science textbooks used by upper secondary students in Malaysia. Research has shown that one of the areas of difficulty in science discourse concerns lexis, especially that of collocations. This paper describes a corpus-based…
The Effect of Grouping and Presenting Collocations on Retention
ERIC Educational Resources Information Center
Akpinar, Kadriye Dilek; Bardakçi, Mehmet
2015-01-01
The aim of this study is two-fold. Firstly, it attempts to determine the role of presenting collocations by organizing them based on (i) the keyword, (ii) topic related and (iii) grammatical aspect on retention of collocations. Secondly, it investigates the relationship between participants' general English proficiency and the presentation types…
ERIC Educational Resources Information Center
Leonardi, Magda
1977-01-01
Discusses the importance of two Firthian themes for language teaching. The first theme, "Restricted Languages," concerns the "microlanguages" of every language (e.g., literary language, scientific, etc.). The second theme, "Collocation," shows that equivalent words in two languages rarely have the same position in…
Corpora and Collocations in Chinese-English Dictionaries for Chinese Users
ERIC Educational Resources Information Center
Xia, Lixin
2015-01-01
The paper identifies the major problems of the Chinese-English dictionary in representing collocational information after an extensive survey of nine dictionaries popular among Chinese users. It is found that the Chinese-English dictionary only provides the collocation types of "v+n" and "v+n," but completely ignores those of…
NASA Astrophysics Data System (ADS)
Wang, N.; Li, J.; Borisov, D.; Gharti, H. N.; Shen, Y.; Zhang, W.; Savage, B. K.
2016-12-01
We incorporate 3D anelastic attenuation into the collocated-grid finite-difference method on curvilinear grids (Zhang et al., 2012), using the rheological model of the generalized Maxwell body (Emmerich and Korn, 1987; Moczo and Kristek, 2005; Käser et al., 2007). We follow a conventional procedure to calculate the anelastic coefficients (Emmerich and Korn, 1987) determined by the Q(ω)-law, with a modification in the choice of frequency band and thus the relaxation frequencies that equidistantly cover the logarithmic frequency range. We show that such an optimization of anelastic coefficients is more accurate when using a fixed number of relaxation mechanisms to fit the frequency-independent Q-factors. We use curvilinear grids to represent the surface topography. The velocity-stress form of the 3D isotropic anelastic wave equation is solved with a collocated-grid finite-difference method. Compared with the elastic case, we need to solve additional material-independent anelastic functions (Kristek and Moczo, 2003) for the mechanisms at each relaxation frequency. Based on the stress-strain relation, we calculate the spatial partial derivatives of the anelastic functions indirectly, thereby saving computational storage and improving computational efficiency. The complex-frequency-shifted perfectly matched layer (CFS-PML) is used for the absorbing boundary condition, based on the auxiliary difference equation (Zhang and Shen, 2010). The traction image method (Zhang and Chen, 2006) is employed for the free-surface boundary condition. We perform several numerical experiments including homogeneous full-space models and layered half-space models, considering both flat and 3D Gaussian-shaped hill surfaces. The results match very well with those of the spectral-element method (Komatitsch and Tromp, 2002; Savage et al., 2010), verifying the simulations by our method in anelastic models with surface topography.
NASA Technical Reports Server (NTRS)
Argentiero, P.; Lowrey, B.
1977-01-01
The least squares collocation algorithm for estimating gravity anomalies from geodetic data is shown to be an application of the well known regression equations which provide the mean and covariance of a random vector (gravity anomalies) given a realization of a correlated random vector (geodetic data). It is also shown that the collocation solution for gravity anomalies is equivalent to the conventional least-squares-Stokes' function solution when the conventional solution utilizes properly weighted zero a priori estimates. The mathematical and physical assumptions underlying the least squares collocation estimator are described.
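The regression interpretation above reduces to the standard conditional-Gaussian formulas. A minimal numerical sketch, assuming zero prior means and invented covariance values (none of which come from the paper):

```python
import numpy as np

# Least squares collocation as Gaussian conditioning: given jointly
# Gaussian gravity anomalies x and geodetic observations y with known
# cross-covariances, the estimate is the conditional mean of x given y.
# All covariance matrices below are illustrative placeholders.

C_xx = np.array([[4.0, 1.0], [1.0, 3.0]])   # anomaly covariance
C_xy = np.array([[1.5, 0.5], [0.2, 1.0]])   # cross-covariance
C_yy = np.array([[2.0, 0.3], [0.3, 2.5]])   # data covariance (incl. noise)

y = np.array([0.8, -1.2])                   # observed geodetic data

# Conditional (collocation) estimate and its error covariance:
#   x_hat = C_xy C_yy^{-1} y,   P = C_xx - C_xy C_yy^{-1} C_xy^T
K = C_xy @ np.linalg.inv(C_yy)
x_hat = K @ y
P = C_xx - K @ C_xy.T

print("estimated anomalies:", x_hat)
print("error covariance:\n", P)
```

With zero a priori estimates, the conditional mean is exactly the properly weighted solution described in the abstract.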
NASA Astrophysics Data System (ADS)
Belyaev, V. A.; Shapeev, V. P.
2017-10-01
New versions of the collocation and least squares (CLS) method of high-order accuracy are proposed and implemented for the numerical solution of boundary value problems for the biharmonic equation in non-canonical domains. The solution of the biharmonic equation is used to simulate the stress-strain state of an isotropic plate under a transverse load. The differential problem is projected onto a space of fourth-degree polynomials by the CLS method. The boundary conditions for the approximate solution are imposed exactly on the boundary of the computational domain. The versions of the CLS method are implemented on grids constructed in two different ways. Numerical convergence experiments on sequences of grids show that the approximate solution converges with high order and, where an analytical solution of the test problem is known, matches it with high accuracy.
On the Effect of Gender and Years of Instruction on Iranian EFL Learners' Collocational Competence
ERIC Educational Resources Information Center
Ganji, Mansoor
2012-01-01
This study investigates Iranian EFL learners' knowledge of lexical collocations at three academic levels: freshmen, sophomores, and juniors. The participants were forty-three English majors doing their B.A. in English Translation studies at Chabahar Maritime University. They took a 50-item fill-in-the-blank test of lexical collocations. The…
ERIC Educational Resources Information Center
Krummes, Cedric; Ensslin, Astrid
2015-01-01
Whereas there exists a plethora of research on collocations and formulaic language in English, this article contributes towards a somewhat less developed area: the understanding and teaching of formulaic language in German as a foreign language. It analyses formulaic sequences and collocations in German writing (corpus-driven) and provides modern…
Symmetrical and Asymmetrical Scaffolding of L2 Collocations in the Context of Concordancing
ERIC Educational Resources Information Center
Rezaee, Abbas Ali; Marefat, Hamideh; Saeedakhtar, Afsaneh
2015-01-01
Collocational competence is recognized to be integral to native-like L2 performance, and concordancing can be of assistance in gaining this competence. This study reports on an investigation into the effect of symmetrical and asymmetrical scaffolding on the collocational competence of Iranian intermediate learners of English in the context of…
Profiling the Collocation Use in ELT Textbooks and Learner Writing
ERIC Educational Resources Information Center
Tsai, Kuei-Ju
2015-01-01
The present study investigates the collocational profiles of (1) three series of graded textbooks for English as a foreign language (EFL) commonly used in Taiwan, (2) the written productions of EFL learners, and (3) the written productions of native speakers (NS) of English. These texts were examined against a purpose-built collocation list. Based…
Learning and Teaching L2 Collocations: Insights from Research
ERIC Educational Resources Information Center
Szudarski, Pawel
2017-01-01
The aim of this article is to present and summarize the main research findings in the area of learning and teaching second language (L2) collocations. Being a large part of naturally occurring language, collocations and other types of multiword units (e.g., idioms, phrasal verbs, lexical bundles) have been identified as important aspects of L2…
Probabilistic finite elements for fracture mechanics
NASA Technical Reports Server (NTRS)
Besterfield, Glen
1988-01-01
The probabilistic finite element method (PFEM) is developed for probabilistic fracture mechanics (PFM). A finite element which has the near crack-tip singular strain embedded in the element is used. Probabilistic distributions, such as the expectation, covariance, and correlation of stress intensity factors, are calculated for random load, random material, and random crack length. The method is computationally quite efficient and can be expected to determine the probability of fracture or reliability.
Probabilistic Structural Analysis Methods (PSAM) for select space propulsion systems components
NASA Technical Reports Server (NTRS)
1991-01-01
Summarized here is the technical effort and computer code developed during the five year duration of the program for probabilistic structural analysis methods. The summary includes a brief description of the computer code manuals and a detailed description of code validation demonstration cases for random vibrations of a discharge duct, probabilistic material nonlinearities of a liquid oxygen post, and probabilistic buckling of a transfer tube liner.
NASA Astrophysics Data System (ADS)
Parand, K.; Latifi, S.; Moayeri, M. M.; Delkhosh, M.
2018-05-01
In this study, we construct a new numerical approach for solving the time-dependent linear and nonlinear Fokker-Planck equations. The time variable is discretized with the Crank-Nicolson method, and for the space variable a numerical method based on the Generalized Lagrange Jacobi Gauss-Lobatto (GLJGL) collocation method is applied. This leads to solving the equation in a series of time steps, where at each time step the problem is reduced to a system of algebraic equations, which greatly simplifies the problem. The proposed method is simple and accurate. Indeed, one of its merits is that it is derivative-free; by proposing a formula for the derivative matrices, the difficulty arising in their calculation is overcome, and there is no need to compute the generalized Lagrange basis and matrices, since they have the Kronecker property. Linear and nonlinear Fokker-Planck equations are given as examples, and the results amply demonstrate that the presented method is accurate, effective, and reliable, and does not require any restrictive assumptions for the nonlinear terms.
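For the linear case, the time discretization described above takes the standard Crank-Nicolson form; writing L for the spatial operator produced by the collocation method, each time step reduces to one linear algebraic solve (a schematic restatement, not the paper's exact system):

```latex
% One Crank-Nicolson step: a linear algebraic system at each time level
\left(I - \frac{\Delta t}{2}\,L\right) u^{n+1}
    = \left(I + \frac{\Delta t}{2}\,L\right) u^{n},
\qquad n = 0, 1, 2, \dots
```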
NASA Astrophysics Data System (ADS)
Fei, Cheng-Wei; Bai, Guang-Chen
2014-12-01
To improve the computational precision and efficiency of probabilistic design for mechanical dynamic assemblies like the blade-tip radial running clearance (BTRRC) of gas turbines, a distributed collaborative probabilistic design method based on support vector machine regression (called DCSRM) is proposed by integrating the distributed collaborative response surface method and the support vector machine regression model. The mathematical model of DCSRM is established and the probabilistic design idea of DCSRM is introduced. The dynamic assembly probabilistic design of an aeroengine high-pressure turbine (HPT) BTRRC is accomplished to verify the proposed DCSRM. The analysis results reveal the optimal static blade-tip clearance of the HPT for designing the BTRRC and improving the performance and reliability of the aeroengine. A comparison of methods shows that DCSRM has high computational accuracy and high computational efficiency in BTRRC probabilistic analysis. The present research offers an effective way for the reliability design of mechanical dynamic assemblies and enriches mechanical reliability theory and methods.
NASA Technical Reports Server (NTRS)
Biringen, S.; Danabasoglu, G.
1988-01-01
A Chebyshev matrix collocation method is outlined for the solution of the Orr-Sommerfeld equation for the Blasius boundary layer. User information is provided for the FORTRAN program ORRBL, which solves the equation by the QR method.
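The FORTRAN program ORRBL itself is not reproduced here, but the core ingredient of any Chebyshev matrix collocation code is the differentiation matrix on Gauss-Lobatto points. A sketch of the standard construction (following Trefethen's cheb.m, not the authors' code):

```python
import numpy as np

def cheb(N):
    """Chebyshev differentiation matrix D and Gauss-Lobatto nodes x
    (standard construction, cf. Trefethen, Spectral Methods in MATLAB)."""
    if N == 0:
        return np.zeros((1, 1)), np.array([1.0])
    x = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    X = np.tile(x, (N + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))   # off-diagonal entries
    D -= np.diag(D.sum(axis=1))                       # diagonal: rows sum to 0
    return D, x

# Sanity check: differentiate sin(x) on [-1, 1]; error is typically ~1e-12.
D, x = cheb(16)
err = np.max(np.abs(D @ np.sin(x) - np.cos(x)))
print(f"max derivative error with 17 points: {err:.2e}")
```

Eigenvalue problems such as the Orr-Sommerfeld equation are then posed in powers of D (D², D⁴) on these nodes and handed to a dense eigenvalue solver.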
Numerical solutions for Helmholtz equations using Bernoulli polynomials
NASA Astrophysics Data System (ADS)
Bicer, Kubra Erdem; Yalcinbas, Salih
2017-07-01
This paper reports a new numerical method based on Bernoulli polynomials for the solution of Helmholtz equations. The method uses matrix forms of Bernoulli polynomials and their derivatives by means of collocation points. Aim of this paper is to solve Helmholtz equations using this matrix relations.
Model-free simulations of turbulent reactive flows
NASA Technical Reports Server (NTRS)
Givi, Peyman
1989-01-01
The current computational methods for solving transport equations of turbulent reacting single-phase flows are critically reviewed, with primary attention given to those methods that lead to model-free simulations. In particular, consideration is given to direct numerical simulations using spectral (Galerkin) and pseudospectral (collocation) methods, spectral element methods, and Lagrangian methods. The discussion also covers large eddy simulations and turbulence modeling.
ERIC Educational Resources Information Center
Ying, Yang
2015-01-01
This study aimed to seek an in-depth understanding about English collocation learning and the development of learner autonomy through investigating a group of English as a Second Language (ESL) learners' perspectives and practices in their learning of English collocations using an AWARE approach. A group of 20 PRC students learning English in…
ERIC Educational Resources Information Center
Chang, Yu-Chia; Chang, Jason S.; Chen, Hao-Jan; Liou, Hsien-Chin
2008-01-01
Previous work in the literature reveals that EFL learners are deficient in collocations, a hallmark of near-native fluency in learners' writing. Among different types of collocations, the verb-noun (V-N) type was found to be particularly difficult to master, and learners' first language was also found to heavily influence their collocation…
ERIC Educational Resources Information Center
Heidrick, Ingrid T.
2017-01-01
This study compares monolinguals and different kinds of bilinguals with respect to their knowledge of the type of lexical phenomenon known as collocation. Collocations are word combinations that speakers use recurrently, forming the basis of conventionalized lexical patterns that are shared by a linguistic community. Examples of collocations…
NASA Astrophysics Data System (ADS)
Song, Lu-Kai; Wen, Jie; Fei, Cheng-Wei; Bai, Guang-Chen
2018-05-01
To improve the computing efficiency and precision of probabilistic design for multi-failure structures, a distributed collaborative probabilistic design method based on fuzzy neural network regression (called DCFRM) is proposed with the integration of the distributed collaborative response surface method and the fuzzy neural network regression model. The mathematical model of DCFRM is established and the probabilistic design idea with DCFRM is introduced. The probabilistic analysis of a turbine blisk involving multiple failure modes (deformation failure, stress failure and strain failure) was investigated by considering fluid-structure interaction with the proposed method. The distribution characteristics, reliability degree, and sensitivity degree of each failure mode and of the overall failure mode on the turbine blisk are obtained, which provides a useful reference for improving the performance and reliability of aeroengines. A comparison of methods shows that DCFRM improves the computing efficiency of probabilistic analysis for multi-failure structures while keeping acceptable computational precision. Moreover, the proposed method offers useful insight for reliability-based design optimization of multi-failure structures and thereby also enriches the theory and method of mechanical reliability design.
Edge gradients evaluation for 2D hybrid finite volume method model
USDA-ARS?s Scientific Manuscript database
In this study, a two-dimensional depth-integrated hydrodynamic model was developed using FVM on a hybrid unstructured collocated mesh system. To alleviate the negative effects of mesh irregularity and non-uniformity, a conservative evaluation method for edge gradients based on the second-order Tayl...
NASA Astrophysics Data System (ADS)
Ophaug, Vegard; Gerlach, Christian
2017-11-01
This work is an investigation of three methods for regional geoid computation: Stokes's formula, least-squares collocation (LSC), and spherical radial base functions (RBFs) using the spline kernel (SK). It is a first attempt to compare the three methods theoretically and numerically in a unified framework. While Stokes integration and LSC may be regarded as classic methods for regional geoid computation, RBFs may still be regarded as a modern approach. All methods are theoretically equal when applied globally, and we therefore expect them to give comparable results in regional applications. However, it has been shown by de Min (Bull Géod 69:223-232, 1995. doi: 10.1007/BF00806734) that the equivalence of Stokes's formula and LSC does not hold in regional applications without modifying the cross-covariance function. In order to make all methods comparable in regional applications, the corresponding modification has been introduced also in the SK. Ultimately, we present numerical examples comparing Stokes's formula, LSC, and SKs in a closed-loop environment using synthetic noise-free data, to verify their equivalence. All agree on the millimeter level.
Collocational Links in the L2 Mental Lexicon and the Influence of L1 Intralexical Knowledge
ERIC Educational Resources Information Center
Wolter, Brent; Gyllstad, Henrik
2011-01-01
This article assesses the influence of L1 intralexical knowledge on the formation of L2 intralexical collocations. Two tests, a primed lexical decision task (LDT) and a test of receptive collocational knowledge, were administered to a group of non-native speakers (NNSs) (L1 Swedish), with native speakers (NSs) of English serving as controls on the…
ERIC Educational Resources Information Center
Jaen, Maria Moreno
2007-01-01
This paper reports an assessment of the collocational competence of students of English Linguistics at the University of Granada. This was carried out to meet a two-fold purpose. On the one hand, we aimed to establish a solid corpus-driven approach based upon a systematic and reliable framework for the evaluation of collocational competence in…
NASA Technical Reports Server (NTRS)
Thacker, B. H.; Mcclung, R. C.; Millwater, H. R.
1990-01-01
An eigenvalue analysis of a typical space propulsion system turbopump blade is presented using an approximate probabilistic analysis methodology. The methodology was developed originally to investigate the feasibility of computing probabilistic structural response using closed-form approximate models. This paper extends the methodology to structures for which simple closed-form solutions do not exist. The finite element method will be used for this demonstration, but the concepts apply to any numerical method. The results agree with detailed analysis results and indicate the usefulness of using a probabilistic approximate analysis in determining efficient solution strategies.
Spectral methods for partial differential equations
NASA Technical Reports Server (NTRS)
Hussaini, M. Y.; Streett, C. L.; Zang, T. A.
1983-01-01
Origins of spectral methods, especially their relation to the Method of Weighted Residuals, are surveyed. Basic Fourier, Chebyshev, and Legendre spectral concepts are reviewed, and demonstrated through application to simple model problems. Both collocation and tau methods are considered. These techniques are then applied to a number of difficult, nonlinear problems of hyperbolic, parabolic, elliptic, and mixed type. Fluid dynamical applications are emphasized.
Recent applications of spectral methods in fluid dynamics
NASA Technical Reports Server (NTRS)
Zang, T. A.; Hussaini, M. Y.
1985-01-01
Origins of spectral methods, especially their relation to the method of weighted residuals, are surveyed. Basic Fourier and Chebyshev spectral concepts are reviewed and demonstrated through application to simple model problems. Both collocation and tau methods are considered. These techniques are then applied to a number of difficult, nonlinear problems of hyperbolic, parabolic, elliptic and mixed type. Fluid dynamical applications are emphasized.
Probabilistic classifiers with high-dimensional data
Kim, Kyung In; Simon, Richard
2011-01-01
For medical classification problems, it is often desirable to have a probability associated with each class. Probabilistic classifiers have received relatively little attention for small n, large p classification problems despite their importance in medical decision making. In this paper, we introduce 2 criteria for assessment of probabilistic classifiers, well-calibratedness and refinement, and develop corresponding evaluation measures. We evaluated several published high-dimensional probabilistic classifiers and developed 2 extensions of the Bayesian compound covariate classifier. Based on simulation studies and analysis of gene expression microarray data, we found that proper probabilistic classification is more difficult than deterministic classification. It is important to ensure that a probabilistic classifier is well calibrated, or at least not "anticonservative", using the methods developed here. We provide this evaluation for several probabilistic classifiers and also evaluate their refinement as a function of sample size under weak and strong signal conditions. We also present a cross-validation method for evaluating the calibration and refinement of any probabilistic classifier on any data set. PMID:21087946
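A reliability-diagram computation of the kind underlying the well-calibratedness criterion can be sketched in a few lines; the binning scheme and the toy data below are illustrative assumptions, not the paper's evaluation measures:

```python
import numpy as np

# A classifier is well calibrated if, among cases assigned probability ~p,
# a fraction ~p actually belongs to the positive class. This sketch bins
# predicted probabilities and compares each bin's mean prediction with
# the observed event rate.

def calibration_table(p_pred, y_true, n_bins=10):
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    rows = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (p_pred >= lo) & (p_pred < hi)
        if mask.any():
            rows.append((p_pred[mask].mean(), y_true[mask].mean(), mask.sum()))
    return rows

rng = np.random.default_rng(1)
p = rng.uniform(size=5000)
y = (rng.uniform(size=5000) < p).astype(int)   # perfectly calibrated toy data
for mean_pred, event_rate, n in calibration_table(p, y):
    print(f"pred {mean_pred:.2f}  observed {event_rate:.2f}  (n={n})")
```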
Failed rib region prediction in a human body model during crash events with precrash braking.
Guleyupoglu, B; Koya, B; Barnard, R; Gayzik, F S
2018-02-28
The objective of this study is 2-fold. We used a validated human body finite element model to study the predicted chest injury (focusing on rib fracture as a function of element strain) based on varying levels of simulated precrash braking. Furthermore, we compare deterministic and probabilistic methods of rib injury prediction in the computational model. The Global Human Body Models Consortium (GHBMC) M50-O model was gravity settled in the driver position of a generic interior equipped with an advanced 3-point belt and airbag. Twelve cases were investigated with permutations for failure, precrash braking system, and crash severity. The severities used were median (17 kph), severe (34 kph), and New Car Assessment Program (NCAP; 56.4 kph). Cases with failure enabled removed rib cortical bone elements once 1.8% effective plastic strain was exceeded. Alternatively, a probabilistic framework found in the literature was used to predict rib failure. Both the probabilistic and deterministic methods take into consideration location (anterior, lateral, and posterior). The deterministic method is based on a rubric that defines failed rib regions dependent on a threshold for contiguous failed elements. The probabilistic method depends on age-based strain and failure functions. Kinematics between both methods were similar (peak max deviation: ΔX head = 17 mm; ΔZ head = 4 mm; ΔX thorax = 5 mm; ΔZ thorax = 1 mm). Seat belt forces at the time of probabilistic failed region initiation were lower than those at deterministic failed region initiation. The probabilistic method for rib fracture predicted more failed regions in the rib (an analog for fracture) than the deterministic method in all but 1 case where they were equal. The failed region patterns between models are similar; however, there are differences that arise due to stress reduced from element elimination that cause probabilistic failed regions to continue to rise after no deterministic failed region would be predicted. Both the probabilistic and deterministic methods indicate similar trends with regards to the effect of precrash braking; however, there are tradeoffs. The deterministic failed region method is more spatially sensitive to failure and is more sensitive to belt loads. The probabilistic failed region method allows for increased capability in postprocessing with respect to age. The probabilistic failed region method predicted more failed regions than the deterministic failed region method due to force distribution differences.
NASA Technical Reports Server (NTRS)
Argentiero, P.; Lowrey, B.
1976-01-01
The least squares collocation algorithm for estimating gravity anomalies from geodetic data is shown to be an application of the well known regression equations which provide the mean and covariance of a random vector (gravity anomalies) given a realization of a correlated random vector (geodetic data). It is also shown that the collocation solution for gravity anomalies is equivalent to the conventional least-squares-Stokes' function solution when the conventional solution utilizes properly weighted zero a priori estimates. The mathematical and physical assumptions underlying the least squares collocation estimator are described, and its numerical properties are compared with the numerical properties of the conventional least squares estimator.
Dominating Scale-Free Networks Using Generalized Probabilistic Methods
Molnár, F.; Derzsy, N.; Czabarka, É.; Székely, L.; Szymanski, B. K.; Korniss, G.
2014-01-01
We study ensemble-based graph-theoretical methods aiming to approximate the size of the minimum dominating set (MDS) in scale-free networks. We analyze both analytical upper bounds of dominating sets and numerical realizations for applications. We propose two novel probabilistic dominating set selection strategies that are applicable to heterogeneous networks. One of them obtains the smallest probabilistic dominating set and also outperforms the deterministic degree-ranked method. We show that a degree-dependent probabilistic selection method becomes optimal in its deterministic limit. In addition, we also find the precise limit where selecting high-degree nodes exclusively becomes inefficient for network domination. We validate our results on several real-world networks, and provide highly accurate analytical estimates for our methods. PMID:25200937
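One simple degree-dependent probabilistic selection with a repair pass conveys the flavor of such strategies; the selection rule and the toy graph below are invented for illustration and are not the paper's two proposed strategies:

```python
import numpy as np

# Probabilistic dominating-set sketch: include node i in the candidate set
# with probability min(1, c*deg(i)/max_deg), then repair by adding any
# node that is left undominated (neither selected nor adjacent to a
# selected node).

def probabilistic_dominating_set(adj, c=0.5, seed=0):
    rng = np.random.default_rng(seed)
    deg = {v: len(nbrs) for v, nbrs in adj.items()}
    dmax = max(deg.values())
    dom = {v for v in adj if rng.random() < min(1.0, c * deg[v] / dmax)}
    for v in adj:                       # repair step: enforce domination
        if v not in dom and not dom.intersection(adj[v]):
            dom.add(v)
    return dom

# Toy star-plus-path graph given as an adjacency dict.
adj = {0: {1, 2, 3, 4}, 1: {0}, 2: {0}, 3: {0, 5}, 4: {0}, 5: {3}}
sizes = [len(probabilistic_dominating_set(adj, seed=s)) for s in range(200)]
print("mean dominating-set size over 200 runs:", np.mean(sizes))
```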
NASA Astrophysics Data System (ADS)
Oladyshkin, Sergey; Class, Holger; Helmig, Rainer; Nowak, Wolfgang
2010-05-01
CO2 storage in geological formations is currently being discussed intensively as a technology for mitigating CO2 emissions. However, any large-scale application requires a thorough analysis of the potential risks. Current numerical simulation models are too expensive for probabilistic risk analysis and for stochastic approaches based on brute-force repeated simulation. Even single deterministic simulations may require parallel high-performance computing. The multiphase flow processes involved are too non-linear for quasi-linear error propagation and other simplified stochastic tools. As an alternative approach, we propose a massive stochastic model reduction based on the probabilistic collocation method. The model response is projected onto an orthogonal basis of higher-order polynomials to approximate its dependence on uncertain parameters (porosity, permeability, etc.) and design parameters (injection rate, depth, etc.). This allows for a non-linear propagation of model uncertainty affecting the predicted risk, ensures fast computation, and provides a powerful tool for combining design variables and uncertain variables into one approach based on an integrative response surface. Thus, the design task of finding optimal injection regimes explicitly includes uncertainty, which leads to robust designs of the non-linear system that minimize failure probability and provide valuable support for risk-informed management decisions. We validate the proposed stochastic approach by Monte Carlo simulation using a common 3D benchmark problem (Class et al., Computational Geosciences 13, 2009). A reasonable compromise between computational effort and precision was reached already with second-order polynomials. In our case study, the proposed approach yields a significant computational speedup by a factor of 100 compared to Monte Carlo simulation. We demonstrate that, due to the non-linearity of the flow and transport processes during CO2 injection, including uncertainty in the analysis leads to a systematic and significant shift of predicted leakage rates towards higher values compared with deterministic simulations, affecting both risk estimates and the design of injection scenarios. This implies that neglecting uncertainty can be a strong simplification in modeling CO2 injection, with consequences stronger than those of neglecting several physical phenomena (e.g., phase transition, convective mixing, capillary forces, etc.). The authors would like to thank the German Research Foundation (DFG) for financial support of the project within the Cluster of Excellence in Simulation Technology (EXC 310/1) at the University of Stuttgart. Keywords: polynomial chaos; CO2 storage; multiphase flow; porous media; risk assessment; uncertainty; integrative response surfaces
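The projection step described above can be sketched for a single standard-normal parameter using non-intrusive probabilistic collocation; the stand-in response function f below is an assumption (the actual model is a multiphase CO2 simulator):

```python
import numpy as np
from math import factorial
from numpy.polynomial import hermite_e as He

# Non-intrusive probabilistic collocation for a scalar model y = f(xi),
# xi ~ N(0,1): evaluate f at Gauss-Hermite nodes and project onto the
# probabilists' Hermite polynomials He_k, whose norms are E[He_k^2] = k!.

def f(xi):                                   # smooth placeholder response
    return np.exp(0.3 * xi) + 0.1 * xi**2

order, n_nodes = 4, 10
nodes, weights = He.hermegauss(n_nodes)      # weight function exp(-x^2/2)
weights = weights / np.sqrt(2.0 * np.pi)     # renormalize to the N(0,1) density

coeffs = np.zeros(order + 1)
for k in range(order + 1):
    e_k = np.zeros(order + 1); e_k[k] = 1.0  # coefficient vector of He_k
    coeffs[k] = np.sum(weights * f(nodes) * He.hermeval(nodes, e_k)) / factorial(k)

mean = coeffs[0]                             # PCE mean is the 0th coefficient
var = sum(coeffs[k] ** 2 * factorial(k) for k in range(1, order + 1))
print(f"PCE mean ~ {mean:.5f}, PCE variance ~ {var:.5f}")
```

Statistics of the response then come directly from the coefficients, with no further model runs, which is the source of the speedup quoted in the abstract.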
Is probabilistic bias analysis approximately Bayesian?
MacLehose, Richard F.; Gustafson, Paul
2011-01-01
Case-control studies are particularly susceptible to differential exposure misclassification when exposure status is determined following incident case status. Probabilistic bias analysis methods have been developed as ways to adjust standard effect estimates based on the sensitivity and specificity of exposure misclassification. The iterative sampling method advocated in probabilistic bias analysis bears a distinct resemblance to a Bayesian adjustment; however, it is not identical. Furthermore, without a formal theoretical framework (Bayesian or frequentist), the results of a probabilistic bias analysis remain somewhat difficult to interpret. We describe, both theoretically and empirically, the extent to which probabilistic bias analysis can be viewed as approximately Bayesian. While the differences between probabilistic bias analysis and Bayesian approaches to misclassification can be substantial, these situations often involve unrealistic prior specifications and are relatively easy to detect. Outside of these special cases, probabilistic bias analysis and Bayesian approaches to exposure misclassification in case-control studies appear to perform equally well. PMID:22157311
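The iterative sampling method referred to above can be sketched as a simple Monte Carlo bias analysis for exposure misclassification; the observed counts and the uniform priors on sensitivity and specificity are invented for illustration:

```python
import numpy as np

# Probabilistic bias analysis for nondifferential exposure
# misclassification in a case-control study: draw sensitivity (se) and
# specificity (sp) from priors, back-correct the observed 2x2 table via
#   A = (a - (1-sp)*N1) / (se + sp - 1),  N1 = total cases,
# and record the corrected odds ratio.

rng = np.random.default_rng(42)
a, b = 215, 1449    # exposed cases, exposed controls (observed, invented)
c, d = 668, 4296    # unexposed cases, unexposed controls

ors = []
for _ in range(20000):
    se = rng.uniform(0.75, 0.95)   # sensitivity prior
    sp = rng.uniform(0.90, 0.99)   # specificity prior
    A = (a - (1 - sp) * (a + c)) / (se + sp - 1)
    B = (b - (1 - sp) * (b + d)) / (se + sp - 1)
    C, D = (a + c) - A, (b + d) - B
    if min(A, B, C, D) > 0:        # keep only admissible corrected tables
        ors.append((A * D) / (B * C))

ors = np.array(ors)
print("median corrected OR:", np.median(ors))
print("95% simulation interval:", np.percentile(ors, [2.5, 97.5]))
```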
NASA Technical Reports Server (NTRS)
Fromme, J. A.; Golberg, M. A.
1979-01-01
Lift interference effects are discussed based on Bland's (1968) integral equation. A mathematical existence theory is utilized for which convergence of the numerical method has been proved for general (square-integrable) downwashes. Airloads are computed using orthogonal airfoil polynomial pairs in conjunction with a collocation method which is numerically equivalent to Galerkin's method and complex least squares. Convergence exhibits exponentially decreasing error with the number n of collocation points for smooth downwashes, whereas errors are proportional to 1/n for discontinuous downwashes. The latter can be reduced to 1/n^(m+1) with mth-order Richardson extrapolation (by using m = 2, hundredfold error reductions were obtained with only a 13% increase of computer time). Numerical results are presented showing acoustic resonance, as well as the effect of Mach number, ventilation, height-to-chord ratio, and mode shape on wind-tunnel interference. Excellent agreement with experiment is obtained in steady flow, and good agreement is obtained for unsteady flow.
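The error-order improvement quoted above is easy to reproduce on a toy sequence with a 1/n leading error term (the sequence below is an arbitrary stand-in, not the airload computation):

```python
import numpy as np

# Richardson extrapolation on S_n = S + C1/n + C2/n^2: combining two
# resolutions as R_n = 2*S_{2n} - S_n cancels the 1/n term, raising the
# observed convergence order from one to two.

def S(n):                       # toy sequence with limit S = 1.0
    return 1.0 + 1.0 / n + 1.0 / n**2

for n in (8, 16, 32, 64):
    plain = abs(S(n) - 1.0)
    extrap = abs(2.0 * S(2 * n) - S(n) - 1.0)
    print(f"n={n:3d}  error={plain:.2e}  extrapolated={extrap:.2e}")
```

Higher-order variants combine m + 1 resolutions to cancel the first m error terms, which is the m = 2 case cited in the abstract.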
Vibration and Control of Flexible Rotor Supported by Magnetic Bearings
NASA Technical Reports Server (NTRS)
Nonami, Kenzou
1988-01-01
Active vibration control of flexible rotors supported by magnetic bearings is discussed. Using a finite-element method for the mathematical model of the flexible rotor, the eigenvalue problem is formulated taking into account the interaction between the mechanical system of the flexible rotor and the electrical system of the magnetic bearings and the controller. For the sake of simplicity, gyroscopic effects are disregarded. It is possible to adapt this formulation to a general flexible rotor-magnetic bearing system. Controllability with and without collocation (sensors and actuators located at the same position along the rotor axis) is discussed for the higher-order flexible modes of the test rig. In conclusion, it is proposed that new active control loops must be added for the higher flexible modes even in the collocated case; the noncollocated case can then be stabilized by means of this method.
Optimization of Low-Thrust Spiral Trajectories by Collocation
NASA Technical Reports Server (NTRS)
Falck, Robert D.; Dankanich, John W.
2012-01-01
As NASA examines potential missions in the post space shuttle era, there has been a renewed interest in low-thrust electric propulsion for both crewed and uncrewed missions. While much progress has been made in the field of software for the optimization of low-thrust trajectories, many of the tools utilize higher-fidelity methods which, while excellent, result in extremely high run-times and poor convergence when dealing with planetocentric spiraling trajectories deep within a gravity well. Conversely, faster tools like SEPSPOT provide a reasonable solution but typically fail to account for other forces such as third-body gravitation, aerodynamic drag, and solar radiation pressure. SEPSPOT is further constrained by its solution method, which may require a very good guess to yield a converged optimal solution. Here the authors have developed an approach using collocation intended to provide solution times comparable to those given by SEPSPOT while allowing for greater robustness and extensible force models.
The application of nonlinear programming and collocation to optimal aeroassisted orbital transfers
NASA Astrophysics Data System (ADS)
Shi, Y. Y.; Nelson, R. L.; Young, D. H.; Gill, P. E.; Murray, W.; Saunders, M. A.
1992-01-01
Sequential quadratic programming (SQP) and collocation of the differential equations of motion were applied to optimal aeroassisted orbital transfers. The Optimal Trajectory by Implicit Simulation (OTIS) computer program codes with updated nonlinear programming code (NZSOL) were used as a testbed for the SQP nonlinear programming (NLP) algorithms. The state-of-the-art sparse SQP method is considered to be effective for solving large problems with a sparse matrix. Sparse optimizers are characterized in terms of memory requirements and computational efficiency. For the OTIS problems, less than 10 percent of the Jacobian matrix elements are nonzero. The SQP method encompasses two phases: finding an initial feasible point by minimizing the sum of infeasibilities and minimizing the quadratic objective function within the feasible region. The orbital transfer problem under consideration involves the transfer from a high energy orbit to a low energy orbit.
Global/local methods for probabilistic structural analysis
NASA Technical Reports Server (NTRS)
Millwater, H. R.; Wu, Y.-T.
1993-01-01
A probabilistic global/local method is proposed to reduce the computational requirements of probabilistic structural analysis. A coarser global model is used for most of the computations with a local more refined model used only at key probabilistic conditions. The global model is used to establish the cumulative distribution function (cdf) and the Most Probable Point (MPP). The local model then uses the predicted MPP to adjust the cdf value. The global/local method is used within the advanced mean value probabilistic algorithm. The local model can be more refined with respect to the global model in terms of finer mesh, smaller time step, tighter tolerances, etc. and can be used with linear or nonlinear models. The basis for this approach is described in terms of the correlation between the global and local models which can be estimated from the global and local MPPs. A numerical example is presented using the NESSUS probabilistic structural analysis program with the finite element method used for the structural modeling. The results clearly indicate a significant computer savings with minimal loss in accuracy.
Global/local methods for probabilistic structural analysis
NASA Astrophysics Data System (ADS)
Millwater, H. R.; Wu, Y.-T.
1993-04-01
A probabilistic global/local method is proposed to reduce the computational requirements of probabilistic structural analysis. A coarser global model is used for most of the computations with a local more refined model used only at key probabilistic conditions. The global model is used to establish the cumulative distribution function (cdf) and the Most Probable Point (MPP). The local model then uses the predicted MPP to adjust the cdf value. The global/local method is used within the advanced mean value probabilistic algorithm. The local model can be more refined with respect to the global model in terms of finer mesh, smaller time step, tighter tolerances, etc. and can be used with linear or nonlinear models. The basis for this approach is described in terms of the correlation between the global and local models which can be estimated from the global and local MPPs. A numerical example is presented using the NESSUS probabilistic structural analysis program with the finite element method used for the structural modeling. The results clearly indicate a significant computer savings with minimal loss in accuracy.
NASA Technical Reports Server (NTRS)
Eren, K.
1980-01-01
The mathematical background in spectral analysis as applied to geodetic applications is summarized. The resolution (cut-off frequency) of the GEOS 3 altimeter data is examined by determining the shortest wavelength (corresponding to the cut-off frequency) recoverable. The data from some 18 profiles are used. The total power (variance) in the sea surface topography with respect to the reference ellipsoid as well as with respect to the GEM-9 surface is computed. A fast inversion algorithm for simple and block Toeplitz matrices and its application to least squares collocation is explained. This algorithm yields a considerable gain in computer time and storage in comparison with conventional least squares collocation. Frequency domain least squares collocation techniques are also introduced and applied to estimating gravity anomalies from GEOS 3 altimeter data. These techniques substantially reduce the computer time and storage requirements associated with conventional least squares collocation. Numerical examples given demonstrate the efficiency and speed of these techniques.
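The computational gain from Toeplitz structure comes from the fact that a Toeplitz matrix embeds in a circulant, which the FFT diagonalizes. A sketch of the fast matrix-vector product (the paper's fast inversion algorithm itself is not reproduced):

```python
import numpy as np

# Fast Toeplitz matrix-vector product by circulant embedding: a Toeplitz
# matrix with first column col and first row row embeds in a circulant of
# size 2n-1, whose action is a pointwise product in the Fourier domain.

def toeplitz_matvec(col, row, x):
    n = len(col)
    c_emb = np.concatenate([col, row[1:][::-1]])    # circulant first column
    x_pad = np.concatenate([x, np.zeros(n - 1)])
    y = np.fft.ifft(np.fft.fft(c_emb) * np.fft.fft(x_pad))
    return y[:n].real

# Verify against an explicitly built Toeplitz matrix.
rng = np.random.default_rng(3)
n = 6
col, row, x = rng.normal(size=n), rng.normal(size=n), rng.normal(size=n)
row[0] = col[0]                                     # consistency of the corner
T = np.array([[col[i - j] if i >= j else row[j - i] for j in range(n)]
              for i in range(n)])
print(np.allclose(T @ x, toeplitz_matvec(col, row, x)))   # True
```

This O(n log n) product is the building block that iterative or recursive Toeplitz solvers exploit in place of O(n³) dense inversion.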
Probabilistic drug connectivity mapping
2014-01-01
Background The aim of connectivity mapping is to match drugs using drug-treatment gene expression profiles from multiple cell lines. This can be viewed as an information retrieval task, with the goal of finding the most relevant profiles for a given query drug. We infer the relevance for retrieval by data-driven probabilistic modeling of the drug responses, resulting in probabilistic connectivity mapping, and further consider the available cell lines as different data sources. We use a special type of probabilistic model to separate what is shared and specific between the sources, in contrast to earlier connectivity mapping methods that have intentionally aggregated all available data, neglecting information about the differences between the cell lines. Results We show that the probabilistic multi-source connectivity mapping method is superior to alternatives in finding functionally and chemically similar drugs from the Connectivity Map data set. We also demonstrate that an extension of the method is capable of retrieving combinations of drugs that match different relevant parts of the query drug response profile. Conclusions The probabilistic modeling-based connectivity mapping method provides a promising alternative to earlier methods. Principled integration of data from different cell lines helps to identify relevant responses for specific drug repositioning applications. PMID:24742351
Potential applications of skip SMV with thrust engine
NASA Astrophysics Data System (ADS)
Wang, Weilin; Savvaris, Al
2016-11-01
This paper investigates potential applications of Space Maneuver Vehicles (SMV) with skip trajectories. Due to soaring space operations over the past decades, the risk from space debris has considerably increased, including collision risks with space assets, human property on the ground, and even aviation. Many active debris removal methods have been investigated, and in this paper a debris remediation method based on the skip SMV is first proposed. The key point is to perform controlled re-entry. These vehicles are expected to achieve a trans-atmospheric maneuver with a thrust engine. If debris is released at an altitude below 80 km, it can be captured by atmospheric drag and the re-entry interface prediction accuracy is improved. Moreover, if the debris is released in a cargo at a much lower altitude, this technique protects high-value space assets from break-up by the atmosphere and improves landing accuracy. To demonstrate the feasibility of this concept, the present paper presents simulation results for two specific mission profiles: (1) descent to a predetermined altitude; (2) descent to a predetermined point (altitude, longitude, and latitude). The evolutionary collocation method is adopted for skip trajectory optimization due to its global optimality and high accuracy. This method is a two-step optimization approach based on a heuristic algorithm and the collocation method. The optimal-control problem is transformed into a nonlinear programming problem (NLP) which can be efficiently and accurately solved by the sequential quadratic programming (SQP) procedure. However, such a method is sensitive to initial values. To reduce this sensitivity, a genetic algorithm (GA) is adopted to refine the grids and provide near-optimum initial values. By comparing simulation data from different scenarios, it is found that the skip SMV is feasible for active debris removal and that the evolutionary collocation method yields a trustworthy re-entry trajectory satisfying the path and boundary constraints.
Probabilistic Learning in Junior High School: Investigation of Student Probabilistic Thinking Levels
NASA Astrophysics Data System (ADS)
Kurniasih, R.; Sujadi, I.
2017-09-01
This paper investigates students' probabilistic thinking levels, i.e., the levels at which students reason about uncertainty in probability material. The research subjects were 8th-grade junior high school students. The main instrument was the researcher, supported by a probabilistic thinking skills test and interview guidelines. Data were analyzed using the triangulation method. The results showed that before instruction the students' probabilistic thinking was at the subjective and transitional levels; after instruction, the students' probabilistic thinking levels changed, and some 8th-grade students reached the highest, numerical level of probabilistic thinking. Students' probabilistic thinking levels can be used as a reference for designing learning materials and strategies.
Comparison of probabilistic and deterministic fiber tracking of cranial nerves.
Zolal, Amir; Sobottka, Stephan B; Podlesek, Dino; Linn, Jennifer; Rieger, Bernhard; Juratli, Tareq A; Schackert, Gabriele; Kitzler, Hagen H
2017-09-01
OBJECTIVE The depiction of cranial nerves (CNs) using diffusion tensor imaging (DTI) is of great interest in skull base tumor surgery, and DTI used with deterministic tracking methods has been reported previously. However, there are still no good methods usable for the elimination of noise from the resulting depictions. The authors have hypothesized that probabilistic tracking could lead to more accurate results, because it more efficiently extracts information from the underlying data. Moreover, the authors have adapted a previously described technique for noise elimination using gradual threshold increases to probabilistic tracking. To evaluate the utility of this new approach, a comparison is provided in this work between the gradual threshold increase method in probabilistic and deterministic tracking of CNs. METHODS Both tracking methods were used to depict CNs II, III, V, and the VII+VIII bundle. Depiction of 240 CNs was attempted with each of the above methods in 30 healthy subjects, whose data were obtained from 2 public databases: the Kirby repository (KR) and the Human Connectome Project (HCP). Elimination of erroneous fibers was attempted by gradually increasing the respective thresholds (fractional anisotropy [FA] and probabilistic index of connectivity [PICo]). The results were compared with predefined ground truth images based on corresponding anatomical scans. Two label overlap measures (false-positive error and Dice similarity coefficient) were used to evaluate the success of both methods in depicting the CNs. Moreover, the differences between these parameters obtained from the KR and HCP (with higher angular resolution) databases were evaluated. Additionally, visualization of 10 CNs in 5 clinical cases was attempted with both methods and evaluated by comparing the depictions with intraoperative findings. RESULTS Maximum Dice similarity coefficients were significantly higher with probabilistic tracking (p < 0.001; Wilcoxon signed-rank test). The false-positive error of the last obtained depiction was also significantly lower in probabilistic than in deterministic tracking (p < 0.001). The HCP data yielded significantly better results in terms of the Dice coefficient in probabilistic tracking (p < 0.001, Mann-Whitney U-test) and in deterministic tracking (p = 0.02). The false-positive errors were smaller in HCP data in deterministic tracking (p < 0.001) and showed a strong trend toward significance in probabilistic tracking (p = 0.06). In the clinical cases, the probabilistic method visualized 7 of 10 attempted CNs accurately, compared with 3 correct depictions with deterministic tracking. CONCLUSIONS High angular resolution DTI scans are preferable for the DTI-based depiction of the cranial nerves. Probabilistic tracking with a gradual PICo threshold increase is more effective for this task than the previously described deterministic tracking with a gradual FA threshold increase and might represent a useful method for depicting cranial nerves with DTI, since it eliminates erroneous fibers without manual intervention.
Probabilistic Geoacoustic Inversion in Complex Environments
2015-09-30
Jan Dettmer, School of Earth and Ocean Sciences, University of Victoria, Victoria BC... long-range inversion methods can fail to provide sufficient resolution. For proper quantitative examination of variability, parameter uncertainty must... This project aims to advance probabilistic geoacoustic inversion methods for complex ocean environments for a range of geoacoustic data types. The work is...
Finite length Taylor Couette flow
NASA Technical Reports Server (NTRS)
Streett, C. L.; Hussaini, M. Y.
1987-01-01
Axisymmetric numerical solutions of the unsteady Navier-Stokes equations for flow between concentric rotating cylinders of finite length are obtained by a spectral collocation method. These representative results pertain to two-cell/one-cell exchange process, and are compared with recent experiments.
NASA Astrophysics Data System (ADS)
Plestenjak, Bor; Gheorghiu, Călin I.; Hochstenbach, Michiel E.
2015-10-01
In numerous science and engineering applications a partial differential equation has to be solved on some fairly regular domain that allows the use of the method of separation of variables. In several orthogonal coordinate systems separation of variables applied to the Helmholtz, Laplace, or Schrödinger equation leads to a multiparameter eigenvalue problem (MEP); important cases include Mathieu's system, Lamé's system, and a system of spheroidal wave functions. Although multiparameter approaches are exploited occasionally to solve such equations numerically, MEPs remain less well known, and the variety of available numerical methods is not wide. The classical approach of discretizing the equations using standard finite differences leads to algebraic MEPs with large matrices, which are difficult to solve efficiently. The aim of this paper is to change this perspective. We show that by combining spectral collocation methods and new efficient numerical methods for algebraic MEPs it is possible to solve such problems both very efficiently and accurately. We improve on several previous results available in the literature, and also present a MATLAB toolbox for solving a wide range of problems.
Deterministic analysis of extrinsic and intrinsic noise in an epidemiological model.
Bayati, Basil S
2016-05-01
We couple a stochastic collocation method with an analytical expansion of the canonical epidemiological master equation to analyze the effects of both extrinsic and intrinsic noise. It is shown that depending on the distribution of the extrinsic noise, the master equation yields quantitatively different results compared to using the expectation of the distribution for the stochastic parameter. This difference arises from the nonlinear terms in the master equation, and we show that the deviation away from the expectation of the extrinsic noise scales nonlinearly with the variance of the distribution. The method presented here converges linearly with respect to the number of particles in the system and exponentially with respect to the order of the polynomials used in the stochastic collocation calculation. This makes the method presented here more accurate than standard Monte Carlo methods, which suffer from slow, nonmonotonic convergence. In epidemiological terms, the results show that extrinsic fluctuations should be taken into account since they affect the speed of disease breakouts, and that the gamma distribution should be used to model the basic reproductive number.
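The central observation, that the expectation of a nonlinear response differs from the response at the expected parameter, can be checked directly; the gamma parameters and the response function below are illustrative assumptions:

```python
import numpy as np

# Jensen-gap check: for a nonlinear response f and a gamma-distributed
# parameter theta (as the abstract recommends for the basic reproductive
# number), E[f(theta)] differs systematically from f(E[theta]).

rng = np.random.default_rng(7)
shape, scale = 4.0, 0.5                  # gamma with mean shape*scale = 2.0
theta = rng.gamma(shape, scale, size=200_000)

f = lambda t: t**2 / (1.0 + t)           # arbitrary nonlinear response
print("f(E[theta]) =", f(theta.mean()))
print("E[f(theta)] =", f(theta).mean())  # systematically different
```

The gap grows with the variance of the extrinsic distribution, mirroring the nonlinear scaling reported above.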
NASA Astrophysics Data System (ADS)
Liao, Qinzhuo; Zhang, Dongxiao; Tchelepi, Hamdi
2017-06-01
In numerical modeling of subsurface flow and transport problems, formation properties may not be deterministically characterized, which leads to uncertainty in simulation results. In this study, we propose a sparse grid collocation method, which adopts nested quadrature rules with delay and transformation to quantify the uncertainty of model solutions. We show that the nested Kronrod-Patterson-Hermite quadrature is more efficient than the unnested Gauss-Hermite quadrature. We compare the convergence rates of various quadrature rules including the domain truncation and domain mapping approaches. To further improve accuracy and efficiency, we present a delayed process in selecting quadrature nodes and a transformed process for approximating unsmooth or discontinuous solutions. The proposed method is tested by an analytical function and in one-dimensional single-phase and two-phase flow problems with different spatial variances and correlation lengths. An additional example is given to demonstrate its applicability to three-dimensional black-oil models. It is found from these examples that the proposed method provides a promising approach for obtaining satisfactory estimation of the solution statistics and is much more efficient than the Monte-Carlo simulations.
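The efficiency claim can be illustrated with plain (unnested) Gauss-Hermite quadrature against Monte Carlo for a smooth one-dimensional response; the nested Kronrod-Patterson-Hermite rules of the paper refine this by reusing nodes across levels, which is not shown here:

```python
import numpy as np
from numpy.polynomial.hermite_e import hermegauss

# Collocation-type quadrature vs Monte Carlo for E[f(X)], X ~ N(0,1).
# For a smooth integrand, a handful of Gauss-type nodes beats thousands
# of random samples. Exact reference: E[exp(X/2)] = exp(1/8).

f = lambda x: np.exp(0.5 * x)
exact = np.exp(0.125)

for n in (3, 5, 9):
    x, w = hermegauss(n)                        # weight exp(-x^2/2)
    est = np.sum(w * f(x)) / np.sqrt(2.0 * np.pi)
    print(f"{n} nodes:        error {abs(est - exact):.2e}")

rng = np.random.default_rng(0)
mc = f(rng.normal(size=10_000)).mean()
print(f"10000 MC draws: error {abs(mc - exact):.2e}")
```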
A new method of imposing boundary conditions for hyperbolic equations
NASA Technical Reports Server (NTRS)
Funaro, D.
1987-01-01
A new method to impose boundary conditions for pseudospectral approximations to hyperbolic equations is suggested. This method involves the collocation of the equation at the boundary nodes as well as satisfying boundary conditions. Stability and convergence results are proven for the Chebyshev approximation of linear scalar hyperbolic equations. The eigenvalues of this method applied to parabolic equations are shown to be real and negative.
NASA Technical Reports Server (NTRS)
Townsend, John S.; Peck, Jeff; Ayala, Samuel
2000-01-01
NASA has funded several major programs (the Probabilistic Structural Analysis Methods project is an example) to develop probabilistic structural analysis methods and tools for engineers to apply in the design and assessment of aerospace hardware. A probabilistic finite element software code, known as Numerical Evaluation of Stochastic Structures Under Stress (NESSUS), is used to determine the reliability of a critical weld of the Space Shuttle solid rocket booster aft skirt. An external bracket modification to the aft skirt provides a comparison basis for examining the details of the probabilistic analysis and its contributions to the design process. Analysis findings are also compared with measured Space Shuttle flight data.
A Chebyshev matrix method for spatial modes of the Orr-Sommerfeld equation
NASA Technical Reports Server (NTRS)
Danabasoglu, G.; Biringen, S.
1989-01-01
The Chebyshev matrix collocation method is applied to obtain the spatial modes of the Orr-Sommerfeld equation for Poiseuille flow and the Blasius boundary layer. The problem is linearized by the companion matrix technique for the semi-infinite domain using a mapping transformation. The method can be easily adapted to problems with different boundary conditions requiring different transformations.
Application of the Probabilistic Dynamic Synthesis Method to the Analysis of a Realistic Structure
NASA Technical Reports Server (NTRS)
Brown, Andrew M.; Ferri, Aldo A.
1998-01-01
The Probabilistic Dynamic Synthesis method is a new technique for obtaining the statistics of a desired response engineering quantity for a structure with non-deterministic parameters. The method uses measured data from modal testing of the structure as the input random variables, rather than more "primitive" quantities like geometry or material variation. This modal information is much more comprehensive and easily measured than the "primitive" information. The probabilistic analysis is carried out using either response surface reliability methods or Monte Carlo simulation. A previous work verified the feasibility of the PDS method on a simple seven degree-of-freedom spring-mass system. In this paper, extensive issues involved with applying the method to a realistic three-substructure system are examined, and free and forced response analyses are performed. The results from using the method are promising, especially when the lack of alternatives for obtaining quantitative output for probabilistic structures is considered.
Application of the Probabilistic Dynamic Synthesis Method to Realistic Structures
NASA Technical Reports Server (NTRS)
Brown, Andrew M.; Ferri, Aldo A.
1998-01-01
The Probabilistic Dynamic Synthesis method is a technique for obtaining the statistics of a desired response engineering quantity for a structure with non-deterministic parameters. The method uses measured data from modal testing of the structure as the input random variables, rather than more "primitive" quantities like geometry or material variation. This modal information is much more comprehensive and easily measured than the "primitive" information. The probabilistic analysis is carried out using either response surface reliability methods or Monte Carlo simulation. In previous work, the feasibility of the PDS method applied to a simple seven degree-of-freedom spring-mass system was verified. In this paper, extensive issues involved with applying the method to a realistic three-substructure system are examined, and free and forced response analyses are performed. The results from using the method are promising, especially when the lack of alternatives for obtaining quantitative output for probabilistic structures is considered.
Probabilistic structural analysis methods for select space propulsion system components
NASA Technical Reports Server (NTRS)
Millwater, H. R.; Cruse, T. A.
1989-01-01
The Probabilistic Structural Analysis Methods (PSAM) project developed at the Southwest Research Institute integrates state-of-the-art structural analysis techniques with probability theory for the design and analysis of complex large-scale engineering structures. An advanced efficient software system (NESSUS) capable of performing complex probabilistic analysis has been developed. NESSUS contains a number of software components to perform probabilistic analysis of structures. These components include: an expert system, a probabilistic finite element code, a probabilistic boundary element code and a fast probability integrator. The NESSUS software system is shown. An expert system is included to capture and utilize PSAM knowledge and experience. NESSUS/EXPERT is an interactive menu-driven expert system that provides information to assist in the use of the probabilistic finite element code NESSUS/FEM and the fast probability integrator (FPI). The expert system menu structure is summarized. The NESSUS system contains a state-of-the-art nonlinear probabilistic finite element code, NESSUS/FEM, to determine the structural response and sensitivities. A broad range of analysis capabilities and an extensive element library is present.
Martian resource locations: Identification and optimization
NASA Astrophysics Data System (ADS)
Chamitoff, Gregory; James, George; Barker, Donald; Dershowitz, Adam
2005-04-01
The identification and utilization of in situ Martian natural resources is the key to enable cost-effective long-duration missions and permanent human settlements on Mars. This paper presents a powerful software tool for analyzing Martian data from all sources, and for optimizing mission site selection based on resource collocation. This program, called Planetary Resource Optimization and Mapping Tool (PROMT), provides a wide range of analysis and display functions that can be applied to raw data or imagery. Thresholds, contours, custom algorithms, and graphical editing are some of the various methods that can be used to process data. Output maps can be created to identify surface regions on Mars that meet any specific criteria. The use of this tool for analyzing data, generating maps, and collocating features is demonstrated using data from the Mars Global Surveyor and the Odyssey spacecraft. The overall mission design objective is to maximize a combination of scientific return and self-sufficiency based on utilization of local materials. Landing site optimization involves maximizing accessibility to collocated science and resource features within a given mission radius. Mission types are categorized according to duration, energy resources, and in situ resource utilization. Preliminary optimization results are shown for a number of mission scenarios.
[A particular anthropometric method for the study of accessibility of a workstation].
Molinaro, V; Del Ferraro, S
2008-01-01
One of the main factors that can lead to musculoskeletal disorders is the assumption of awkward postures. These postures can be caused, in some cases, by an unsuitable collocation (placement) of devices that are indispensable for the work. It is possible to evaluate whether the chosen placement is adequate by studying the accessibility of the workstation, with special regard for the accessibility of the devices placed inside it. EN ISO 14738:2002 is a specific standard which has been adopted in Italy as UNI EN ISO 14738:2004. This standard gives useful requirements, in terms of accessibility, for designing a workstation at stationary machinery. In this study, the authors analyzed a checkout workstation following the requirements described in UNI EN ISO 14738:2004. Critical aspects, related to the organization both of the work activities and of the workstation, were highlighted taking the standard's criteria into account. Finally, the authors propose a redesign of the checkout workstation that optimizes device placement in order to reduce awkward postures. The new configuration was assessed by applying the criteria given in the standard.
Uncertainty analyses of CO2 plume expansion subsequent to wellbore CO2 leakage into aquifers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hou, Zhangshuan; Bacon, Diana H.; Engel, David W.
2014-08-01
In this study, we apply an uncertainty quantification (UQ) framework to CO2 sequestration problems. In one scenario, we look at the risk of wellbore leakage of CO2 into a shallow unconfined aquifer in an urban area; in another scenario, we study the effects of reservoir heterogeneity on CO2 migration. We combine various sampling approaches (quasi-Monte Carlo, probabilistic collocation, and adaptive sampling) in order to reduce the number of forward calculations while trying to fully explore the input parameter space and quantify the input uncertainty. The CO2 migration is simulated using the PNNL-developed simulator STOMP-CO2e (the water-salt-CO2 module). For computationally demanding simulations with 3D heterogeneity fields, we combined the framework with a scalable version of the simulator, eSTOMP, as the forward model. We built response curves and response surfaces of model outputs with respect to input parameters to examine individual and combined effects, and to identify and rank the significance of the input parameters.
NASA Astrophysics Data System (ADS)
Wang, Lei; Xiong, Chuang; Wang, Xiaojun; Li, Yunlong; Xu, Menghui
2018-04-01
Considering that multi-source uncertainties, arising from the inherent nature of the system as well as from the external environment, are unavoidable and severely affect controller performance, dynamic safety assessment with high confidence is of great significance for scientists and engineers. In view of this, uncertainty quantification analysis and time-variant reliability estimation for closed-loop control problems are conducted in this study under a mixture of random, interval, and convex uncertainties. By combining the state-space transformation and the natural set expansion, the boundary laws of controlled response histories are first established with a specific treatment of the random terms. For nonlinear cases, the collocation set methodology and the fourth-order Runge-Kutta algorithm are introduced as well. Inspired by the first-passage model in random process theory as well as by static probabilistic reliability ideas, a new definition of hybrid time-variant reliability measurement is provided for vibration control systems, and the related solution details are further expounded. Two engineering examples are eventually presented to demonstrate the validity and applicability of the developed methodology.
Use of adjoint methods in the probabilistic finite element approach to fracture mechanics
NASA Technical Reports Server (NTRS)
Liu, Wing Kam; Besterfield, Glen; Lawrence, Mark; Belytschko, Ted
1988-01-01
The adjoint method approach to probabilistic finite element methods (PFEM) is presented. When the number of objective functions is small compared to the number of random variables, the adjoint method is far superior to the direct method in evaluating the objective function derivatives with respect to the random variables. The PFEM is extended to probabilistic fracture mechanics (PFM) using an element which has the near crack-tip singular strain field embedded. Since only two objective functions (i.e., mode I and II stress intensity factors) are needed for PFM, the adjoint method is well suited.
A probabilistic Hu-Washizu variational principle
NASA Technical Reports Server (NTRS)
Liu, W. K.; Belytschko, T.; Besterfield, G. H.
1987-01-01
A Probabilistic Hu-Washizu Variational Principle (PHWVP) for the Probabilistic Finite Element Method (PFEM) is presented. This formulation is developed for both linear and nonlinear elasticity. The PHWVP allows incorporation of the probabilistic distributions for the constitutive law, compatibility condition, equilibrium, domain and boundary conditions into the PFEM. Thus, a complete probabilistic analysis can be performed where all aspects of the problem are treated as random variables and/or fields. The Hu-Washizu variational formulation is available in many conventional finite element codes thereby enabling the straightforward inclusion of the probabilistic features into present codes.
Dynamic Probabilistic Instability of Composite Structures
NASA Technical Reports Server (NTRS)
Chamis, Christos C.
2009-01-01
A computationally effective method is described to evaluate the non-deterministic dynamic instability (probabilistic dynamic buckling) of thin composite shells. The method is a judicious combination of available computer codes for finite element, composite mechanics and probabilistic structural analysis. The solution method is an incrementally updated Lagrangian formulation. It is illustrated by applying it to a thin composite cylindrical shell subjected to dynamic loads. Both deterministic and probabilistic buckling loads are evaluated to demonstrate the effectiveness of the method. A universal plot is obtained for the specific shell that can be used to approximate buckling loads for different load rates and different probability levels. Results from this plot show that the faster the rate, the higher the buckling load and the shorter the time. The lower the probability, the lower the buckling load for a specific time. Probabilistic sensitivity results show that the ply thickness, the fiber volume ratio, the fiber longitudinal modulus, the dynamic load and the loading rate are the dominant uncertainties, in that order.
Code of Federal Regulations, 2010 CFR
2010-10-01
... FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) COMMON CARRIER SERVICES (CONTINUED) INTERCONNECTION Pricing of Elements § 51.501 Scope. (a) The rules in this subpart apply to the pricing of network elements, interconnection, and methods of obtaining access to unbundled elements, including physical collocation and virtual...
Performance of FFT methods in local gravity field modelling
NASA Technical Reports Server (NTRS)
Forsberg, Rene; Solheim, Dag
1989-01-01
Fast Fourier transform (FFT) methods provide a fast and efficient means of processing large amounts of gravity or geoid data in local gravity field modelling. The FFT methods, however, have a number of theoretical and practical limitations, especially the use of a flat-earth approximation and the requirement for gridded data. In spite of this, the methods often yield excellent results in practice when compared to other more rigorous (and computationally expensive) methods, such as least-squares collocation. The good performance of the FFT methods illustrates that the theoretical approximations are offset by the capability of taking into account more data in larger areas, which is especially important for geoid predictions. For best results, good data gridding algorithms are essential. In practice, truncated collocation approaches may be used. For large areas at high latitudes the gridding must be done using suitable map projections such as UTM, to avoid trivial errors caused by the meridian convergence. The FFT methods are compared to ground truth data in New Mexico (xi, eta from delta g), Scandinavia (N from delta g; the geoid fits to 15 cm over 2000 km), and areas of the Atlantic (delta g from satellite altimetry using Wiener filtering). In all cases the FFT methods yield results comparable or superior to other methods.
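In the flat-earth approximation mentioned above, geoid determination from gridded gravity anomalies reduces to a single frequency-domain multiplication, N̂(k) = Δĝ(k)/(γ|k|), where k is the angular wavenumber and γ the normal gravity. The following is a minimal numpy sketch of that relation on a synthetic grid; the grid, anomaly field, and γ = 9.81 m/s² are illustrative assumptions, and the zero-frequency term, which is undefined in the planar approximation, is simply set to zero.

```python
import numpy as np

def geoid_from_gravity_fft(dg, dx, dy, gamma=9.81):
    """Planar (flat-earth) approximation: N_hat(k) = dg_hat(k) / (gamma * |k|),
    with |k| the angular wavenumber. The k = 0 (mean) term is undefined in this
    approximation and is set to zero here."""
    ny, nx = dg.shape
    kx = 2.0 * np.pi * np.fft.fftfreq(nx, d=dx)   # rad/m
    ky = 2.0 * np.pi * np.fft.fftfreq(ny, d=dy)
    KX, KY = np.meshgrid(kx, ky)
    k = np.hypot(KX, KY)
    dg_hat = np.fft.fft2(dg)
    N_hat = np.zeros_like(dg_hat)
    nonzero = k > 0
    N_hat[nonzero] = dg_hat[nonzero] / (gamma * k[nonzero])
    return np.fft.ifft2(N_hat).real

# Synthetic 100 km x 100 km grid with a Gaussian 30 mGal anomaly
# (anomalies in m/s^2; 1 mGal = 1e-5 m/s^2)
x = np.linspace(0, 1e5, 256)
X, Y = np.meshgrid(x, x)
dg = 30e-5 * np.exp(-((X - 5e4)**2 + (Y - 5e4)**2) / (2 * 2e4**2))
N = geoid_from_gravity_fft(dg, dx=x[1] - x[0], dy=x[1] - x[0])
print("max geoid response: %.3f m" % N.max())
```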
Probabilistic boundary element method
NASA Technical Reports Server (NTRS)
Cruse, T. A.; Raveendra, S. T.
1989-01-01
The purpose of the Probabilistic Structural Analysis Method (PSAM) project is to develop structural analysis capabilities for the design analysis of advanced space propulsion system hardware. The boundary element method (BEM) is used as the basis of the Probabilistic Advanced Analysis Methods (PADAM), which is discussed. The probabilistic BEM code (PBEM) is used to obtain the structural response and sensitivity results with respect to a set of random variables. As such, PBEM functions analogously to other structural analysis codes, such as the finite element codes, in the PSAM system. For linear problems, unlike the finite element method (FEM), the BEM governing equations are written at the boundary of the body only; thus, the method eliminates the need to model the volume of the body. However, for general body force problems, a direct condensation of the governing equations to the boundary of the body is not possible, and therefore volume modeling is generally required.
NASA Technical Reports Server (NTRS)
Price J. M.; Ortega, R.
1998-01-01
Probabilistic methods are not a universally accepted approach for the design and analysis of aerospace structures. The validity of this approach must be demonstrated to encourage its acceptance as a viable design and analysis tool for estimating structural reliability. The objective of this study is to develop a well characterized finite population of similar aerospace structures that can be used to (1) validate probabilistic codes, (2) demonstrate the basic principles behind probabilistic methods, (3) formulate general guidelines for characterization of material drivers (such as elastic modulus) when limited data are available, and (4) investigate how the drivers affect the results of sensitivity analysis at the component/failure mode level.
NASA Technical Reports Server (NTRS)
Ryan, Robert S.; Townsend, John S.
1993-01-01
The prospective improvement of probabilistic methods for space program analysis/design entails the further development of theories, codes, and tools which match specific areas of application, the drawing of lessons from previous uses of probability and statistics data bases, the enlargement of data bases (especially in the field of structural failures), and the education of engineers and managers on the advantages of these methods. An evaluation is presently made of the current limitations of probabilistic engineering methods. Recommendations are made for specific applications.
Sayers, Adrian; Ben-Shlomo, Yoav; Blom, Ashley W; Steele, Fiona
2016-01-01
Abstract Studies involving the use of probabilistic record linkage are becoming increasingly common. However, the methods underpinning probabilistic record linkage are not widely taught or understood, and therefore these studies can appear to be a ‘black box’ research tool. In this article, we aim to describe the process of probabilistic record linkage through a simple exemplar. We first introduce the concept of deterministic linkage and contrast this with probabilistic linkage. We illustrate each step of the process using a simple exemplar and describe the data structure required to perform a probabilistic linkage. We describe the process of calculating and interpreting matched weights and how to convert matched weights into posterior probabilities of a match using Bayes theorem. We conclude this article with a brief discussion of some of the computational demands of record linkage, how you might assess the quality of your linkage algorithm, and how epidemiologists can maximize the value of their record-linked research using robust record linkage methods. PMID:26686842
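As a concrete illustration of the weights-to-posterior step the article describes, the sketch below scores field agreements with log2 likelihood ratios (Fellegi-Sunter-style m- and u-probabilities) and converts the total match weight into a posterior match probability via Bayes' theorem. The field names, m/u values, and prior are illustrative assumptions, not values from the article.

```python
import math

# Illustrative per-field probabilities: (P(agree | match), P(agree | non-match))
fields = {"surname": (0.95, 0.02), "birth_year": (0.98, 0.05), "postcode": (0.90, 0.01)}

def match_weight(agreements):
    """Total log2 likelihood ratio over fields (Fellegi-Sunter style)."""
    w = 0.0
    for field, (m, u) in fields.items():
        if agreements[field]:
            w += math.log2(m / u)          # agreement weight
        else:
            w += math.log2((1 - m) / (1 - u))  # disagreement weight (negative)
    return w

def posterior_match_probability(weight, prior=1e-4):
    """Bayes' theorem in odds form: posterior odds = prior odds * 2**weight."""
    odds = (prior / (1 - prior)) * 2.0 ** weight
    return odds / (1 + odds)

w = match_weight({"surname": True, "birth_year": True, "postcode": False})
print("weight = %.2f, P(match | data) = %.4f" % (w, posterior_match_probability(w)))
```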
Probabilistic structural analysis methods of hot engine structures
NASA Technical Reports Server (NTRS)
Chamis, C. C.; Hopkins, D. A.
1989-01-01
Development of probabilistic structural analysis methods for hot engine structures at Lewis Research Center is presented. Three elements of the research program are: (1) composite load spectra methodology; (2) probabilistic structural analysis methodology; and (3) probabilistic structural analysis application. Recent progress includes: (1) quantification of the effects of uncertainties for several variables on high pressure fuel turbopump (HPFT) turbine blade temperature, pressure, and torque of the space shuttle main engine (SSME); (2) the evaluation of the cumulative distribution function for various structural response variables based on assumed uncertainties in primitive structural variables; and (3) evaluation of the failure probability. Collectively, the results demonstrate that the structural durability of hot engine structural components can be effectively evaluated in a formal probabilistic/reliability framework.
A probabilistic and continuous model of protein conformational space for template-free modeling.
Zhao, Feng; Peng, Jian; Debartolo, Joe; Freed, Karl F; Sosnick, Tobin R; Xu, Jinbo
2010-06-01
One of the major challenges with protein template-free modeling is an efficient sampling algorithm that can explore a huge conformation space quickly. The popular fragment assembly method constructs a conformation by stringing together short fragments extracted from the Protein Data Bank (PDB). The discrete nature of this method may limit generated conformations to a subspace to which the native fold does not belong. Another concern is that a protein with a genuinely new fold may contain fragments not found in the PDB. This article presents a probabilistic model of protein conformational space to overcome these two limitations. The probabilistic model employs directional statistics to model the distribution of backbone angles and second-order Conditional Random Fields (CRFs) to describe the sequence-angle relationship. Using this probabilistic model, we can sample protein conformations in a continuous space, as opposed to the widely used fragment assembly and lattice model methods that work in a discrete space. We show that when coupled with a simple energy function, this probabilistic method compares favorably with the fragment assembly method in the blind CASP8 evaluation, especially on alpha or small beta proteins. To our knowledge, this is the first probabilistic method that can search conformations in a continuous space and achieve favorable performance. Our method also generated three-dimensional (3D) models better than template-based methods for a couple of CASP8 hard targets. The method described in this article can also be applied to protein loop modeling, model refinement, and even RNA tertiary structure prediction.
NASA Technical Reports Server (NTRS)
Lustman, L.
1984-01-01
An outline of spectral methods for partial differential equations is presented. The basic spectral algorithm is defined, collocation methods are emphasized, and the main advantage of the approach, its infinite order of accuracy for problems with smooth solutions, is discussed. Examples of theoretical numerical analysis of spectral calculations are presented. An application of spectral methods to transonic flow is presented. The full potential transonic equation is among the best understood nonlinear equations.
Polynomial Chaos Based Acoustic Uncertainty Predictions from Ocean Forecast Ensembles
NASA Astrophysics Data System (ADS)
Dennis, S.
2016-02-01
Most significant ocean acoustic propagation occurs over tens of kilometers, at scales small compared to the basin and to most fine-scale ocean modeling. To address the increased emphasis on uncertainty quantification, for example transmission loss (TL) probability density functions (PDF) within some radius, a polynomial chaos (PC) based method is utilized. In order to capture uncertainty in ocean modeling, the Navy Coastal Ocean Model (NCOM) now includes ensembles distributed to reflect the ocean analysis statistics. Since the ensembles are included in the data assimilation for the new forecast ensembles, the acoustic modeling uses the ensemble predictions in a similar fashion for creating the sound speed distribution over an acoustically relevant domain. Within an acoustic domain, singular value decomposition over the combined time-space structure of the sound speeds can be used to create Karhunen-Loève expansions of sound speed, subject to multivariate normality testing. These sound speed expansions serve as a basis for Hermite polynomial chaos expansions of derived quantities, in particular TL. The PC expansion coefficients result from so-called non-intrusive methods, involving evaluation of TL at multi-dimensional Gauss-Hermite quadrature collocation points. Traditional TL calculation from standard acoustic propagation modeling could be prohibitively time consuming at all multi-dimensional collocation points. This method employs Smolyak order and gridding methods to allow adaptive sub-sampling of the collocation points to determine only the most significant PC expansion coefficients to within a preset tolerance. Practically, the Smolyak order and grid sizes grow only polynomially in the number of Karhunen-Loève terms, alleviating the curse of dimensionality. The resulting TL PC coefficients allow the determination of TL PDF normality and its mean and standard deviation. In the non-normal case, PC Monte Carlo methods are used to rapidly establish the PDF. This work was sponsored by the Office of Naval Research.
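The non-intrusive step described above — projecting a quantity of interest onto Hermite polynomials by Gauss-Hermite quadrature — can be sketched in one dimension. Here a toy model exp(ξ) stands in for TL as a function of a single standard normal Karhunen-Loève coordinate; its exact coefficients, e^(1/2)/k!, let us verify the quadrature. The expansion order and node count are illustrative assumptions.

```python
import numpy as np
from numpy.polynomial import hermite_e as He
from math import factorial, sqrt, pi, exp

model = np.exp            # toy stand-in for TL(xi), xi ~ N(0, 1)
order, nquad = 6, 20
xq, wq = He.hermegauss(nquad)   # nodes/weights for weight exp(-x^2/2); sum(wq) = sqrt(2*pi)

coeffs = []
for k in range(order + 1):
    ek = np.zeros(k + 1); ek[k] = 1.0     # coefficient vector selecting He_k
    Hek = He.hermeval(xq, ek)
    # c_k = E[model(xi) * He_k(xi)] / E[He_k(xi)^2], with E[He_k^2] = k!
    ck = np.sum(wq * model(xq) * Hek) / (sqrt(2 * pi) * factorial(k))
    coeffs.append(ck)

for k, ck in enumerate(coeffs):
    print("c_%d = %.6f (exact %.6f)" % (k, ck, exp(0.5) / factorial(k)))
# The PDF statistics follow directly: mean = c_0, variance = sum_{k>=1} k! * c_k^2
```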
NASA Technical Reports Server (NTRS)
Singhal, Surendra N.
2003-01-01
The SAE G-11 RMSL (Reliability, Maintainability, Supportability, and Logistics) Division activities include identification and fulfillment of joint industry, government, and academia needs for development and implementation of RMSL technologies. Four projects in the Probabilistic Methods area and two in the area of RMSL have been identified. These are: (1) Evaluation of Probabilistic Technology - progress has been made toward the selection of probabilistic application cases. Future effort will focus on assessment of multiple probabilistic software packages in solving selected engineering problems using probabilistic methods. Relevance to Industry & Government - Case studies of typical problems encountering uncertainties, results of solutions to these problems run by different codes, and recommendations on which code is applicable for what problems; (2) Probabilistic Input Preparation - progress has been made in identifying problem cases such as those with no data, little data and sufficient data. Future effort will focus on developing guidelines for preparing input for probabilistic analysis, especially with no or little data. Relevance to Industry & Government - Too often, we get bogged down thinking we need a lot of data before we can quantify uncertainties. Not true. There are ways to do credible probabilistic analysis with little data; (3) Probabilistic Reliability - a probabilistic reliability literature search has been completed, along with what differentiates it from statistical reliability. Work on computation of reliability based on quantification of uncertainties in primitive variables is in progress. Relevance to Industry & Government - Correct reliability computations at both the component and system level are needed so one can design an item based on its expected usage and life span; (4) Real World Applications of Probabilistic Methods (PM) - A draft of volume 1, comprising aerospace applications, has been released. Volume 2, a compilation of real world applications of probabilistic methods with essential information demonstrating application type and time/cost savings by the use of probabilistic methods for generic applications, is in progress. Relevance to Industry & Government - Too often, we say, 'The proof is in the pudding'. With help from many contributors, we hope to produce such a document. The problem is that not many people are coming forward, due to the proprietary nature of the information. So, we are asking contributors to document only minimum information, including the problem description, what method was used, whether it resulted in any savings, and how much; (5) Software Reliability - software reliability concept, program, implementation, guidelines, and standards are being documented. Relevance to Industry & Government - software reliability is a complex issue that must be understood & addressed in all facets of business in industry, government, and other institutions. We address issues, concepts, ways to implement solutions, and guidelines for maximizing software reliability; (6) Maintainability Standards - maintainability/serviceability industry standards/guidelines and industry best practices and methodologies used in performing maintainability/serviceability tasks are being documented. Relevance to Industry & Government - Any industry or government process, project, and/or tool must be maintained and serviced to realize the life and performance it was designed for. We address issues and develop guidelines for optimum performance & life.
Revision of IRIS/IDA Seismic Station Metadata
NASA Astrophysics Data System (ADS)
Xu, W.; Davis, P.; Auerbach, D.; Klimczak, E.
2017-12-01
Trustworthy data quality assurance has always been one of the goals of seismic network operators and data management centers. This task is considerably complex and evolving due to the huge quantities as well as the rapidly changing characteristics and complexities of seismic data. Published metadata usually reflect instrument response characteristics and their accuracies, which include the zero-frequency sensitivity for both seismometer and data logger as well as other, frequency-dependent elements. In this work, we mainly focus on studying the variation of seismometer sensitivity with time for IRIS/IDA seismic recording systems, with the goal of improving the metadata accuracy for the history of the network. There are several ways to measure the accuracy of seismometer sensitivity for seismic stations in service. An effective practice recently developed is to collocate a reference seismometer in proximity to verify the in-situ sensors' calibration. For those stations with a secondary broadband seismometer, IRIS' MUSTANG metric computation system introduced a transfer function metric to reflect the two sensors' gain ratios in the microseism frequency band. In addition, a simulation approach based on M2 tidal measurements has been proposed and proven to be effective. In this work, we compare and analyze the results from the three different methods, and conclude that the collocated-sensor method is the most stable and reliable, consistently giving the minimum uncertainties. However, for epochs without both the collocated sensor and the secondary seismometer, we rely on the analysis results from the tide method. For the data since 1992 at IDA stations, we computed over 600 revised seismometer sensitivities covering all the IRIS/IDA network calibration epochs. Further revision procedures should help to guarantee that the data are accurately reflected by the metadata of these stations.
NASA Astrophysics Data System (ADS)
Belikov, Dmitry A.; Maksyutov, Shamil; Ganshin, Alexander; Zhuravlev, Ruslan; Deutscher, Nicholas M.; Wunch, Debra; Feist, Dietrich G.; Morino, Isamu; Parker, Robert J.; Strong, Kimberly; Yoshida, Yukio; Bril, Andrey; Oshchepkov, Sergey; Boesch, Hartmut; Dubey, Manvendra K.; Griffith, David; Hewson, Will; Kivi, Rigel; Mendonca, Joseph; Notholt, Justus; Schneider, Matthias; Sussmann, Ralf; Velazco, Voltaire A.; Aoki, Shuji
2017-01-01
The Total Carbon Column Observing Network (TCCON) is a network of ground-based Fourier transform spectrometers (FTSs) that record near-infrared (NIR) spectra of the sun. From these spectra, accurate and precise observations of CO2 column-averaged dry-air mole fractions (denoted XCO2) are retrieved. TCCON FTS observations have previously been used to validate satellite estimations of XCO2; however, our knowledge of the short-term spatial and temporal variations in XCO2 surrounding the TCCON sites is limited. In this work, we use the National Institute for Environmental Studies (NIES) Eulerian three-dimensional transport model and the FLEXPART (FLEXible PARTicle dispersion model) Lagrangian particle dispersion model (LPDM) to determine the footprints of short-term variations in XCO2 observed by operational, past, future and possible TCCON sites. We propose a footprint-based method for the collocation of satellite and TCCON XCO2 observations and estimate the performance of the method using the NIES model and five GOSAT (Greenhouse Gases Observing Satellite) XCO2 product data sets. Comparison of the proposed approach with a standard geographic method shows a higher number of collocation points and an average bias reduction up to 0.15 ppm for a subset of 16 stations for the period from January 2010 to January 2014. Case studies of the Darwin and Reunion Island sites reveal that when the footprint area is rather curved, non-uniform and significantly different from a geographical rectangular area, the differences between these approaches are more noticeable. This emphasises that the collocation is sensitive to local meteorological conditions and flux distributions.
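The "standard geographic method" that the footprint-based approach is compared against is essentially a radius-and-time-window match. Below is a minimal sketch, assuming arrays of satellite sounding coordinates and times and a fixed TCCON site; the 500 km radius and ±2 h window are illustrative thresholds, not the paper's settings.

```python
import numpy as np

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km between points given in degrees."""
    lat1, lon1, lat2, lon2 = map(np.radians, (lat1, lon1, lat2, lon2))
    a = (np.sin((lat2 - lat1) / 2) ** 2
         + np.cos(lat1) * np.cos(lat2) * np.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371.0 * np.arcsin(np.sqrt(a))

def collocate(sat_lat, sat_lon, sat_time_h, site_lat, site_lon, site_time_h,
              radius_km=500.0, window_hours=2.0):
    """Boolean mask of soundings within the spatial radius and time window."""
    close = haversine_km(sat_lat, sat_lon, site_lat, site_lon) <= radius_km
    simultaneous = np.abs(sat_time_h - site_time_h) <= window_hours
    return close & simultaneous

# Example: three soundings against a site at (45.0 N, 10.0 E), site time = 12 h
mask = collocate(np.array([44.5, 48.0, 45.2]), np.array([9.0, 20.0, 10.5]),
                 np.array([11.0, 12.0, 15.0]), 45.0, 10.0, 12.0)
print(mask)   # only the first sounding passes both criteria
```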
Probabilistic simulation of stress concentration in composite laminates
NASA Technical Reports Server (NTRS)
Chamis, C. C.; Murthy, P. L. N.; Liaw, L.
1993-01-01
A computational methodology is described to probabilistically simulate the stress concentration factors in composite laminates. This new approach consists of coupling probabilistic composite mechanics with probabilistic finite element structural analysis. The probabilistic composite mechanics is used to describe all the uncertainties inherent in composite material properties, while the probabilistic finite element analysis is used to describe the uncertainties associated with the methods used to experimentally evaluate stress concentration factors, such as loads, geometry, and supports. The effectiveness of the methodology is demonstrated by using it to simulate the stress concentration factors in composite laminates made from three different composite systems. Simulated results match experimental data for probability density and for cumulative distribution functions. The sensitivity factors indicate that the stress concentration factors are influenced by local stiffness variables, by load eccentricities and by initial stress fields.
Lin, Yi-Chung; Pandy, Marcus G
2017-07-05
The aim of this study was to perform full-body three-dimensional (3D) dynamic optimization simulations of human locomotion by driving a neuromusculoskeletal model toward in vivo measurements of body-segmental kinematics and ground reaction forces. Gait data were recorded from 5 healthy participants who walked at their preferred speeds and ran at 2 m/s. Participant-specific data-tracking dynamic optimization solutions were generated for one stride cycle using direct collocation in tandem with an OpenSim-MATLAB interface. The body was represented as a 12-segment, 21-degree-of-freedom skeleton actuated by 66 muscle-tendon units. Foot-ground interaction was simulated using six contact spheres under each foot. The dynamic optimization problem was to find the set of muscle excitations needed to reproduce 3D measurements of body-segmental motions and ground reaction forces while minimizing the time integral of muscle activations squared. Direct collocation took on average 2.7 ± 1.0 h and 2.2 ± 1.6 h of CPU time, respectively, to solve the optimization problems for walking and running. Model-computed kinematics and foot-ground forces were in good agreement with corresponding experimental data while the calculated muscle excitation patterns were consistent with measured EMG activity. The results demonstrate the feasibility of implementing direct collocation on a detailed neuromusculoskeletal model with foot-ground contact to accurately and efficiently generate 3D data-tracking dynamic optimization simulations of human locomotion. The proposed method offers a viable tool for creating feasible initial guesses needed to perform predictive simulations of movement using dynamic optimization theory. The source code for implementing the model and computational algorithm may be downloaded at http://simtk.org/home/datatracking. Copyright © 2017 Elsevier Ltd. All rights reserved.
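Direct collocation, as used above, transcribes a continuous optimal control problem into a nonlinear program whose decision variables are the state and control values at grid points, with the dynamics enforced as algebraic "defect" constraints. The sketch below applies trapezoidal collocation to a deliberately tiny problem (minimum-effort rest-to-rest motion of a double integrator) rather than the paper's musculoskeletal model; the grid size and solver settings are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

# Minimum-effort rest-to-rest motion of a double integrator (x'' = u) over T = 1 s.
N = 20                        # number of intervals
h = 1.0 / N
nx = N + 1                    # grid points

def unpack(z):
    return z[:nx], z[nx:2*nx], z[2*nx:]       # position, velocity, control

def objective(z):
    _, _, u = unpack(z)
    return h / 2.0 * np.sum(u[:-1] ** 2 + u[1:] ** 2)   # trapezoidal integral of u^2

def defects(z):
    x, v, u = unpack(z)
    dx = x[1:] - x[:-1] - h / 2.0 * (v[:-1] + v[1:])    # trapezoidal dynamics
    dv = v[1:] - v[:-1] - h / 2.0 * (u[:-1] + u[1:])
    bc = [x[0], v[0], x[-1] - 1.0, v[-1]]               # start at rest, end at x = 1
    return np.concatenate([dx, dv, bc])

z0 = np.concatenate([np.linspace(0, 1, nx), np.ones(nx), np.zeros(nx)])
sol = minimize(objective, z0, constraints={"type": "eq", "fun": defects},
               method="SLSQP", options={"maxiter": 500})
x, v, u = unpack(sol.x)
print("converged:", sol.success, " u(0) = %.2f (analytic value 6)" % u[0])
```

For this toy problem the analytic optimal control is u(t) = 6 − 12t, which the collocated solution should reproduce closely; in practice, large musculoskeletal problems are solved with the same structure but sparse, gradient-exact NLP solvers.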
NASA Astrophysics Data System (ADS)
Budisulistiorini, S. H.; Canagaratna, M. R.; Croteau, P. L.; Baumann, K.; Edgerton, E. S.; Kollman, M. S.; Ng, N. L.; Verma, V.; Shaw, S. L.; Knipping, E. M.; Worsnop, D. R.; Jayne, J. T.; Weber, R. J.; Surratt, J. D.
2014-07-01
Currently, there are a limited number of field studies that evaluate the long-term performance of the Aerodyne Aerosol Chemical Speciation Monitor (ACSM) against established monitoring networks. In this study, we present seasonal intercomparisons of the ACSM with collocated fine aerosol (PM2.5) measurements at the Southeastern Aerosol Research and Characterization (SEARCH) Jefferson Street (JST) site near downtown Atlanta, GA, during 2011-2012. Intercomparison of two collocated ACSMs resulted in strong correlations (r2 > 0.8) for all chemical species except chloride (r2 = 0.21), indicating that ACSM instruments are capable of stable and reproducible operation. In general, speciated ACSM mass concentrations correlate well (r2 > 0.7) with the filter-adjusted continuous measurements from JST, although the correlation for nitrate is weaker (r2 = 0.55) in summer. Correlations of the ACSM NR-PM1 (non-refractory particulate matter with aerodynamic diameter less than or equal to 1 μm) plus elemental carbon (EC) with tapered element oscillating microbalance (TEOM) PM2.5 and Federal Reference Method (FRM) PM1 mass are strong, with r2 > 0.7 and r2 > 0.8, respectively. Discrepancies might be attributed to evaporative losses of semi-volatile species from the filter measurements used to adjust the collocated continuous measurements. This suggests that adjusting the ambient aerosol continuous measurements with results from filter analysis introduced additional bias into the measurements. We also recommend calibrating ambient aerosol monitoring instruments using aerosol standards rather than gas-phase standards. The fitting approach for the ACSM relative ionization for sulfate was shown to improve the comparisons between the ACSM and collocated measurements in the absence of calibrated values, suggesting the importance of adding a sulfate calibration to the ACSM calibration routine.
NASA Astrophysics Data System (ADS)
Dabiri, Arman; Butcher, Eric A.; Nazari, Morad
2017-02-01
Compliant impacts can be modeled using linear viscoelastic constitutive models. While such impact models for realistic viscoelastic materials using integer order derivatives of force and displacement usually require a large number of parameters, compliant impact models obtained using fractional calculus, however, can be advantageous since such models use fewer parameters and successfully capture the hereditary property. In this paper, we introduce the fractional Chebyshev collocation (FCC) method as an approximation tool for numerical simulation of several linear fractional viscoelastic compliant impact models in which the overall coefficient of restitution for the impact is studied as a function of the fractional model parameters for the first time. Other relevant impact characteristics such as hysteresis curves, impact force gradient, penetration and separation depths are also studied.
Nilles, M.A.; Gordon, J.D.; Schroder, L.J.
1994-01-01
A collocated, wet-deposition sampler program has been operated since October 1988 by the U.S. Geological Survey to estimate the overall sampling precision of wet atmospheric deposition data collected at selected sites in the National Atmospheric Deposition Program and National Trends Network (NADP/NTN). A duplicate set of wet-deposition sampling instruments was installed adjacent to existing sampling instruments at four different NADP/NTN sites for each year of the study. Wet-deposition samples from collocated sites were collected and analysed using standard NADP/NTN procedures. Laboratory analyses included determinations of pH, specific conductance, and concentrations of major cations and anions. The estimates of precision included all variability in the data-collection system, from the point of sample collection through storage in the NADP/NTN database. Sampling precision was determined from the absolute value of differences in the analytical results for the paired samples, in terms of median relative and absolute difference. The median relative difference for Mg2+, Na+, K+ and NH4+ concentration and deposition was quite variable between sites and exceeded 10% at most sites. Relative error for analytes whose concentrations typically approached laboratory method detection limits was greater than for analytes that did not approach detection limits. The median relative difference for SO42- and NO3- concentration, specific conductance, and sample volume at all sites was less than 7%. Precision for H+ concentration and deposition ranged from less than 10% at sites with typically high levels of H+ concentration to greater than 30% at sites with low H+ concentration. The median difference for analyte concentration and deposition was typically 1.5 to 2 times greater for samples collected during the winter than during other seasons at two northern sites. Likewise, the median relative difference in sample volume for winter samples was more than double the annual median relative difference at the two northern sites. Bias accounted for less than 25% of the collocated variability in analyte concentration and deposition from weekly collocated precipitation samples at most sites.
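A minimal sketch of the headline statistic follows, assuming "relative difference" means the paired absolute difference divided by the pair mean (a common convention; the exact NADP/NTN definition may differ in detail). The example concentrations are invented.

```python
import numpy as np

def collocated_precision(primary, duplicate):
    """Median relative (%) and absolute differences between paired collocated
    samples; relative difference taken as |a - b| / mean(a, b)."""
    a, b = np.asarray(primary, float), np.asarray(duplicate, float)
    absolute = np.abs(a - b)
    relative = 100.0 * absolute / ((a + b) / 2.0)
    return np.median(relative), np.median(absolute)

# Example: paired sulfate concentrations (mg/L) from a collocated sampler
rel, ab = collocated_precision([1.20, 0.85, 2.10, 0.40], [1.15, 0.90, 2.05, 0.47])
print("median relative difference: %.1f%%, median absolute: %.3f mg/L" % (rel, ab))
```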
Reliability and risk assessment of structures
NASA Technical Reports Server (NTRS)
Chamis, C. C.
1991-01-01
Development of reliability and risk assessment of structural components and structures is a major activity at Lewis Research Center. It consists of five program elements: (1) probabilistic loads; (2) probabilistic finite element analysis; (3) probabilistic material behavior; (4) assessment of reliability and risk; and (5) probabilistic structural performance evaluation. Recent progress includes: (1) the evaluation of the various uncertainties in terms of cumulative distribution functions for various structural response variables based on known or assumed uncertainties in primitive structural variables; (2) evaluation of the failure probability; (3) reliability and risk-cost assessment; and (4) an outline of an emerging approach for eventual certification of man-rated structures by computational methods. Collectively, the results demonstrate that the structural durability/reliability of man-rated structural components and structures can be effectively evaluated by using formal probabilistic methods.
NASA Astrophysics Data System (ADS)
Zheng, Y.; Han, F.; Wu, B.
2013-12-01
Process-based, spatially distributed and dynamic models provide desirable resolution for watershed-scale water management. However, their reliability in solving real management problems has been seriously questioned, since model simulation usually involves significant uncertainty with complicated origins. Uncertainty analysis (UA) for complex hydrological models has been a hot topic in the past decade, and a variety of UA approaches have been developed, but mostly in theoretical settings. Whether and how a UA could benefit real management decisions remain critical questions. We have conducted a series of studies to investigate the applicability of classic approaches, such as GLUE and Markov Chain Monte Carlo (MCMC) methods, in real management settings, unravel the difficulties encountered by such methods, and tailor the methods to better serve the management. Frameworks and new algorithms, such as Probabilistic Collocation Method (PCM)-based approaches, were also proposed for specific management issues. This presentation summarizes our past and ongoing studies on the role of UA in real water management. Challenges and potential strategies to bridge the gap between UA for complex models and decision-making for management will be discussed. Future directions for research in this field will also be suggested. Two common water management settings were examined. One is Total Maximum Daily Load (TMDL) management for surface water quality protection. The other is integrated water resources management for watershed sustainability. For the first setting, nutrient and pesticide TMDLs in the Newport Bay Watershed (Orange County, California, USA) were discussed. It is a highly urbanized region with a semi-arid Mediterranean climate, typical of the western U.S. For the second setting, the water resources management in the Zhangye Basin (the midstream part of the Heihe Basin, China), through which the famous Silk Road passed, was investigated. The Zhangye Basin has a Gobi-oasis system typical of western China, with extensive agriculture in its oasis.
Extracting Useful Semantic Information from Large Scale Corpora of Text
ERIC Educational Resources Information Center
Mendoza, Ray Padilla, Jr.
2012-01-01
Extracting and representing semantic information from large scale corpora is at the crux of computer-assisted knowledge generation. Semantic information depends on collocation extraction methods, mathematical models used to represent distributional information, and weighting functions which transform the space. This dissertation provides a…
Triple collocation based merging of satellite soil moisture retrievals
USDA-ARS?s Scientific Manuscript database
We propose a method for merging soil moisture retrievals from space borne active and passive microwave instruments based on weighted averaging taking into account the error characteristics of the individual data sets. The merging scheme is parameterized using error variance estimates obtained from u...
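Although the manuscript abstract is truncated, the merging scheme it describes is standard: estimate each product's error variance by triple collocation, then average with inverse-error-variance weights. Below is a minimal sketch under the classical assumptions (independent, zero-mean errors; gains taken as one for brevity), with synthetic data standing in for the satellite retrievals.

```python
import numpy as np

def triple_collocation_merge(x, y, z):
    """Covariance-based triple collocation for three collocated products with
    mutually independent errors, followed by inverse-error-variance merging."""
    C = np.cov(np.vstack([x, y, z]))
    var_x = C[0, 0] - C[0, 1] * C[0, 2] / C[1, 2]
    var_y = C[1, 1] - C[0, 1] * C[1, 2] / C[0, 2]
    var_z = C[2, 2] - C[0, 2] * C[1, 2] / C[0, 1]
    w = 1.0 / np.array([var_x, var_y, var_z])
    w /= w.sum()
    return w[0] * x + w[1] * y + w[2] * z, (var_x, var_y, var_z)

# Synthetic check: one truth series, three products with different error levels
rng = np.random.default_rng(0)
truth = rng.standard_normal(5000)
x, y, z = (truth + s * rng.standard_normal(5000) for s in (0.1, 0.2, 0.4))
merged, err_vars = triple_collocation_merge(x, y, z)
print("estimated error variances:", np.round(err_vars, 3))  # ~ (0.01, 0.04, 0.16)
```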
NASA Astrophysics Data System (ADS)
Reuter, Bryan; Oliver, Todd; Lee, M. K.; Moser, Robert
2017-11-01
We present an algorithm for a Direct Numerical Simulation of the variable-density Navier-Stokes equations based on the velocity-vorticity approach introduced by Kim, Moin, and Moser (1987). In the current work, a Helmholtz decomposition of the momentum is performed. Evolution equations for the curl and the Laplacian of the divergence-free portion are formulated by manipulation of the momentum equations and the curl-free portion is reconstructed by enforcing continuity. The solution is expanded in Fourier bases in the homogeneous directions and B-Spline bases in the inhomogeneous directions. Discrete equations are obtained through a mixed Fourier-Galerkin and collocation weighted residual method. The scheme is designed such that the numerical solution conserves mass locally and globally by ensuring the discrete divergence projection is exact through the use of higher order splines in the inhomogeneous directions. The formulation is tested on multiple variable-density flow problems.
An advanced probabilistic structural analysis method for implicit performance functions
NASA Technical Reports Server (NTRS)
Wu, Y.-T.; Millwater, H. R.; Cruse, T. A.
1989-01-01
In probabilistic structural analysis, the performance or response functions usually are implicitly defined and must be solved by numerical analysis methods such as finite element methods. In such cases, the most commonly used probabilistic analysis tool is the mean-based, second-moment method which provides only the first two statistical moments. This paper presents a generalized advanced mean value (AMV) method which is capable of establishing the distributions to provide additional information for reliability design. The method requires slightly more computations than the second-moment method but is highly efficient relative to the other alternative methods. In particular, the examples show that the AMV method can be used to solve problems involving non-monotonic functions that result in truncated distributions.
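To make the AMV idea concrete: linearize the performance function at the mean to obtain the first-order (mean value) Gaussian approximation, then recover some of the lost nonlinearity by re-evaluating the exact function at the most probable point of the linearized function. The sketch below does this for an invented two-variable test function in standard normal space and checks the 99th-percentile estimate against Monte Carlo; it illustrates the AMV principle only, not the paper's generalized algorithm.

```python
import numpy as np
from scipy.stats import norm

def amv_quantile(g, p, dim, eps=1e-6):
    """Mean value (MV) and advanced mean value (AMV) estimates of the
    p-quantile of g(U), U ~ N(0, I)."""
    u0 = np.zeros(dim)
    grad = np.array([(g(u0 + eps * e) - g(u0 - eps * e)) / (2 * eps)
                     for e in np.eye(dim)])          # central-difference gradient
    alpha = grad / np.linalg.norm(grad)              # steepest-ascent direction
    beta = norm.ppf(p)                               # standard normal quantile
    mv = g(u0) + beta * np.linalg.norm(grad)         # first-order estimate
    amv = g(beta * alpha)                            # exact g at the MPP locus
    return mv, amv

g = lambda u: np.exp(u[0]) + u[1]                    # mildly nonlinear test function
mv, amv = amv_quantile(g, p=0.99, dim=2)
samples = np.random.default_rng(1).standard_normal((200000, 2))
mc = np.quantile(np.exp(samples[:, 0]) + samples[:, 1], 0.99)
print("MV %.3f  AMV %.3f  Monte Carlo %.3f" % (mv, amv, mc))
```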
Probabilistic safety assessment of the design of a tall buildings under the extreme load
DOE Office of Scientific and Technical Information (OSTI.GOV)
Králik, Juraj, E-mail: juraj.kralik@stuba.sk
2016-06-08
The paper describes some experiences from the deterministic and probabilistic analysis of the safety of tall building structures. The methods and requirements of Eurocode EN 1990, standard ISO 2394 and the JCSS are presented. The uncertainties of the model and of the resistance of the structures are considered using simulation methods. The MONTE CARLO, LHS and RSM probabilistic methods are compared with the deterministic results. The effectiveness of probabilistic structural design using finite element methods is demonstrated on an example probability analysis of the safety of a tall building.
Probabilistic safety assessment of the design of a tall buildings under the extreme load
NASA Astrophysics Data System (ADS)
Králik, Juraj
2016-06-01
The paper describes some experiences from the deterministic and probabilistic analysis of the safety of tall building structures. The methods and requirements of Eurocode EN 1990, standard ISO 2394 and the JCSS are presented. The uncertainties of the model and of the resistance of the structures are considered using simulation methods. The MONTE CARLO, LHS and RSM probabilistic methods are compared with the deterministic results. The effectiveness of probabilistic structural design using finite element methods is demonstrated on an example probability analysis of the safety of a tall building.
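Of the sampling schemes named in the abstract above, Latin hypercube sampling (LHS) is the easiest to demonstrate: it stratifies each input dimension so that sample means typically converge faster than plain Monte Carlo. A minimal sketch with an invented two-input response, using scipy's qmc module; RSM is not sketched here.

```python
import numpy as np
from scipy.stats import qmc

def response(s):
    """Invented test response over two independent uniform(0, 1) inputs;
    its exact mean is (2/pi) * (1/3)."""
    return np.sin(np.pi * s[:, 0]) * s[:, 1] ** 2

n, reps, rng = 64, 200, np.random.default_rng(2)
mc = [response(rng.random((n, 2))).mean() for _ in range(reps)]
lhs = [response(qmc.LatinHypercube(d=2, seed=s).random(n)).mean()
       for s in range(reps)]
print("exact %.4f | MC std %.4f | LHS std %.4f" %
      (2 / np.pi / 3, np.std(mc), np.std(lhs)))   # LHS std should be smaller
```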
Nonparametric triple collocation
USDA-ARS?s Scientific Manuscript database
Triple collocation derives variance-covariance relationships between three or more independent measurement sources and an indirectly observed truth variable in the case where the measurement operators are linear-Gaussian. We generalize that theory to arbitrary observation operators by deriving nonpa...
Spectral methods for time dependent partial differential equations
NASA Technical Reports Server (NTRS)
Gottlieb, D.; Turkel, E.
1983-01-01
The theory of spectral methods for time dependent partial differential equations is reviewed. When the domain is periodic Fourier methods are presented while for nonperiodic problems both Chebyshev and Legendre methods are discussed. The theory is presented for both hyperbolic and parabolic systems using both Galerkin and collocation procedures. While most of the review considers problems with constant coefficients the extension to nonlinear problems is also discussed. Some results for problems with shocks are presented.
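For the nonperiodic Chebyshev case reviewed above, collocation reduces a boundary value problem to a dense linear system built from the Chebyshev differentiation matrix. The sketch below follows Trefethen's classical construction and solves u'' = exp(4x) with homogeneous Dirichlet conditions, a standard test with a closed-form solution; the grid size is an illustrative choice.

```python
import numpy as np

def cheb(N):
    """Chebyshev differentiation matrix and points (Trefethen's construction)."""
    x = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    X = np.tile(x, (N + 1, 1)).T
    D = np.outer(c, 1.0 / c) / (X - X.T + np.eye(N + 1))
    D -= np.diag(D.sum(axis=1))          # diagonal via negative row sums
    return D, x

# Solve u'' = exp(4x) on [-1, 1] with u(-1) = u(1) = 0 by collocation
N = 16
D, x = cheb(N)
D2 = (D @ D)[1:-1, 1:-1]                 # drop boundary rows/columns (Dirichlet)
u = np.zeros(N + 1)
u[1:-1] = np.linalg.solve(D2, np.exp(4 * x[1:-1]))
exact = (np.exp(4 * x) - x * np.sinh(4) - np.cosh(4)) / 16.0
print("max error: %.2e" % np.abs(u - exact).max())   # spectral accuracy
```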
Probabilistic dual heuristic programming-based adaptive critic
NASA Astrophysics Data System (ADS)
Herzallah, Randa
2010-02-01
Adaptive critic (AC) methods have common roots as generalisations of dynamic programming for neural reinforcement learning approaches. Since they approximate the dynamic programming solutions, they are potentially suitable for learning in noisy, non-linear and non-stationary environments. In this study, a novel probabilistic dual heuristic programming (DHP)-based AC controller is proposed. Distinct to current approaches, the proposed probabilistic (DHP) AC method takes uncertainties of forward model and inverse controller into consideration. Therefore, it is suitable for deterministic and stochastic control problems characterised by functional uncertainty. Theoretical development of the proposed method is validated by analytically evaluating the correct value of the cost function which satisfies the Bellman equation in a linear quadratic control problem. The target value of the probabilistic critic network is then calculated and shown to be equal to the analytically derived correct value. Full derivation of the Riccati solution for this non-standard stochastic linear quadratic control problem is also provided. Moreover, the performance of the proposed probabilistic controller is demonstrated on linear and non-linear control examples.
A fast Chebyshev method for simulating flexible-wing propulsion
NASA Astrophysics Data System (ADS)
Moore, M. Nicholas J.
2017-09-01
We develop a highly efficient numerical method to simulate small-amplitude flapping propulsion by a flexible wing in a nearly inviscid fluid. We allow the wing's elastic modulus and mass density to vary arbitrarily, with an eye towards optimizing these distributions for propulsive performance. The method to determine the wing kinematics is based on Chebyshev collocation of the 1D beam equation as coupled to the surrounding 2D fluid flow. Through small-amplitude analysis of the Euler equations (with trailing-edge vortex shedding), the complete hydrodynamics can be represented by a nonlocal operator that acts on the 1D wing kinematics. A class of semi-analytical solutions permits fast evaluation of this operator with O (Nlog N) operations, where N is the number of collocation points on the wing. This is in contrast to the minimum O (N2) cost of a direct 2D fluid solver. The coupled wing-fluid problem is thus recast as a PDE with nonlocal operator, which we solve using a preconditioned iterative method. These techniques yield a solver of near-optimal complexity, O (Nlog N) , allowing one to rapidly search the infinite-dimensional parameter space of all possible material distributions and even perform optimization over this space.
Probabilistic Aeroelastic Analysis of Turbomachinery Components
NASA Technical Reports Server (NTRS)
Reddy, T. S. R.; Mital, S. K.; Stefko, G. L.
2004-01-01
A probabilistic approach is described for aeroelastic analysis of turbomachinery blade rows. Blade rows with subsonic flow and blade rows with supersonic flow with a subsonic leading edge are considered. To demonstrate the probabilistic approach, the flutter frequency, damping and forced response of a blade row representing a compressor geometry are considered. The analysis accounts for uncertainties in structural and aerodynamic design variables. The results are presented in the form of probability density functions (PDF) and sensitivity factors. For the subsonic flow cascade, comparisons are also made with different probabilistic distributions, probabilistic methods, and Monte Carlo simulation. The approach shows that the probabilistic analysis provides a more realistic and systematic way to assess the effect of uncertainties in design variables on aeroelastic instabilities and response.
Probabilistic numerics and uncertainty in computations
Hennig, Philipp; Osborne, Michael A.; Girolami, Mark
2015-01-01
We deliver a call to arms for probabilistic numerical methods: algorithms for numerical tasks, including linear algebra, integration, optimization and solving differential equations, that return uncertainties in their calculations. Such uncertainties, arising from the loss of precision induced by numerical calculation with limited time or hardware, are important for much contemporary science and industry. Within applications such as climate science and astrophysics, the need to make decisions on the basis of computations with large and complex data has led to a renewed focus on the management of numerical uncertainty. We describe how several seminal classic numerical methods can be interpreted naturally as probabilistic inference. We then show that the probabilistic view suggests new algorithms that can flexibly be adapted to suit application specifics, while delivering improved empirical performance. We provide concrete illustrations of the benefits of probabilistic numeric algorithms on real scientific problems from astrometry and astronomical imaging, while highlighting open problems with these new algorithms. Finally, we describe how probabilistic numerical methods provide a coherent framework for identifying the uncertainty in calculations performed with a combination of numerical algorithms (e.g. both numerical optimizers and differential equation solvers), potentially allowing the diagnosis (and control) of error sources in computations. PMID:26346321
Probabilistic numerics and uncertainty in computations.
Hennig, Philipp; Osborne, Michael A; Girolami, Mark
2015-07-08
We deliver a call to arms for probabilistic numerical methods: algorithms for numerical tasks, including linear algebra, integration, optimization and solving differential equations, that return uncertainties in their calculations. Such uncertainties, arising from the loss of precision induced by numerical calculation with limited time or hardware, are important for much contemporary science and industry. Within applications such as climate science and astrophysics, the need to make decisions on the basis of computations with large and complex data has led to a renewed focus on the management of numerical uncertainty. We describe how several seminal classic numerical methods can be interpreted naturally as probabilistic inference. We then show that the probabilistic view suggests new algorithms that can flexibly be adapted to suit application specifics, while delivering improved empirical performance. We provide concrete illustrations of the benefits of probabilistic numeric algorithms on real scientific problems from astrometry and astronomical imaging, while highlighting open problems with these new algorithms. Finally, we describe how probabilistic numerical methods provide a coherent framework for identifying the uncertainty in calculations performed with a combination of numerical algorithms (e.g. both numerical optimizers and differential equation solvers), potentially allowing the diagnosis (and control) of error sources in computations.
NASA Astrophysics Data System (ADS)
Bin, Che; Ruoying, Yu; Dongsheng, Dang; Xiangyan, Wang
2017-05-01
Distributed generation (DG) integrated into the network causes harmonic pollution, which can damage electrical devices and affect the normal operation of the power system. On the other hand, due to the randomness of wind and solar irradiation, the output of DG is also random, which leads to uncertainty in the harmonics generated by the DG. Thus, probabilistic methods are needed to analyse the impacts of DG integration. In this work we studied the probabilistic distribution of harmonic voltage and the harmonic distortion in a distribution network after integration of a distributed photovoltaic (DPV) system under different weather conditions, namely sunny, cloudy, rainy and snowy days. The probability distribution function of the DPV output power in the different typical weather conditions was acquired via maximum likelihood parameter estimation. The Monte Carlo simulation method was adopted to calculate the probabilistic distribution of harmonic voltage content at different frequency orders, as well as the total harmonic distortion (THD), in the typical weather conditions. The case study was based on the IEEE 33-bus system, and the resulting probabilistic distributions of harmonic voltage content and THD in the typical weather conditions were compared.
Error analysis for spectral approximation of the Korteweg-De Vries equation
NASA Technical Reports Server (NTRS)
Maday, Y.
1987-01-01
The conservation and convergence properties of spectral Fourier methods for the numerical approximation of the Korteweg-de Vries equation are analyzed. It is proved that the (aliased) collocation pseudospectral method enjoys the same convergence properties as the spectral Galerkin method, which is less effective from the computational point of view. This result provides a precise mathematical answer to a question raised by several authors in recent years.
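The collocation (pseudospectral) treatment analyzed above amounts, in practice, to evaluating derivatives by multiplying the discrete Fourier transform by ik and transforming back. A minimal sketch on a smooth periodic test function, where the error sits at machine-precision level; the grid size is illustrative.

```python
import numpy as np

# Fourier collocation (pseudospectral) derivative of a smooth periodic function;
# for smooth data the error decays spectrally with N.
N = 32
x = 2 * np.pi * np.arange(N) / N
u = np.exp(np.sin(x))
k = np.fft.fftfreq(N, d=1.0 / N)          # integer wavenumbers 0..N/2-1, -N/2..-1
du = np.fft.ifft(1j * k * np.fft.fft(u)).real
print("max error: %.2e" % np.abs(du - np.cos(x) * u).max())
```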
NESSUS/EXPERT - An expert system for probabilistic structural analysis methods
NASA Technical Reports Server (NTRS)
Millwater, H.; Palmer, K.; Fink, P.
1988-01-01
An expert system (NESSUS/EXPERT) is presented which provides assistance in using probabilistic structural analysis methods. NESSUS/EXPERT is an interactive menu-driven expert system that provides information to assist in the use of the probabilistic finite element code NESSUS/FEM and the fast probability integrator. NESSUS/EXPERT was developed with a combination of FORTRAN and CLIPS, a C language expert system tool, to exploit the strengths of each language.
Numerical solution of boundary-integral equations for molecular electrostatics.
Bardhan, Jaydeep P
2009-03-07
Numerous molecular processes, such as ion permeation through channel proteins, are governed by relatively small changes in energetics. As a result, theoretical investigations of these processes require accurate numerical methods. In the present paper, we evaluate the accuracy of two approaches to simulating boundary-integral equations for continuum models of the electrostatics of solvation. The analysis emphasizes boundary-element method simulations of the integral-equation formulation known as the apparent-surface-charge (ASC) method or polarizable-continuum model (PCM). In many numerical implementations of the ASC/PCM model, one forces the integral equation to be satisfied exactly at a set of discrete points on the boundary. We demonstrate in this paper that this approach to discretization, known as point collocation, is significantly less accurate than an alternative approach known as qualocation. Furthermore, the qualocation method offers this improvement in accuracy without increasing simulation time. Numerical examples demonstrate that the electrostatic part of the solvation free energy, when calculated using the collocation and qualocation methods, can differ significantly; for a polypeptide, the answers can differ by as much as 10 kcal/mol (approximately 4% of the total electrostatic contribution to solvation). The applicability of the qualocation discretization to other integral-equation formulations is also discussed, and two equivalences between integral-equation methods are derived.
NASA Astrophysics Data System (ADS)
Diamantopoulos, Theodore; Rowe, Kristopher; Diamessis, Peter
2017-11-01
The Collocation Penalty Method (CPM) solves a PDE on the interior of a domain, while weakly enforcing boundary conditions at domain edges via penalty terms, and naturally lends itself to high-order and multi-domain discretization. Such spectral multi-domain penalty methods (SMPM) have been used to solve the Navier-Stokes equations. Bounds for penalty coefficients are typically derived using the energy method to guarantee stability for time-dependent problems. The choice of collocation points and penalty parameter can greatly affect the conditioning and accuracy of a solution. Effort has been made in recent years to relate various high-order methods on multiple elements or domains under the umbrella of the Correction Procedure via Reconstruction (CPR). Most applications of CPR have focused on solving the compressible Navier-Stokes equations using explicit time-stepping procedures. A particularly important aspect which is still missing in the context of the SMPM is a study of the Helmholtz equation arising in many popular time-splitting schemes for the incompressible Navier-Stokes equations. Stability and convergence results for the SMPM for the Helmholtz equation will be presented. Emphasis will be placed on the efficiency and accuracy of high-order methods.
Long, Judith A; Wang, Andrew; Medvedeva, Elina L; Eisen, Susan V; Gordon, Adam J; Kreyenbuhl, Julie; Marcus, Steven C
2014-08-01
Persons with serious mental illness (SMI) may benefit from collocation of medical and mental health professionals and services in attending to their chronic comorbid medical conditions. We evaluated and compared glucose control and diabetes medication adherence among patients with SMI who received collocated care and those who did not (which we call usual care). We performed a cross-sectional, observational cohort study of 363 veteran patients with type 2 diabetes and SMI who received care from one of three Veterans Affairs medical facilities: two sites that provided both collocated and usual care and one site that provided only usual care. Through a survey, laboratory tests, and medical records, we assessed patient characteristics, glucose control as measured by a current HbA1c, and adherence to diabetes medication as measured by the medication possession ratio (MPR) and self-report. In the sample, the mean HbA1c was 7.4% (57 mmol/mol), the mean MPR was 80%, and 51% reported perfect adherence to their diabetes medications. In both unadjusted and adjusted analyses, there were no differences in glucose control and medication adherence by collocation of care. Patients seen in collocated care tended to have better HbA1c levels (β = -0.149; P = 0.393) and MPR values (β = 0.34; P = 0.132) and worse self-reported adherence (odds ratio 0.71; P = 0.143), but these differences were not statistically significant. In a population of veterans with comorbid diabetes and SMI, patients on average had good glucose control and medication adherence regardless of where they received primary care.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bacvarov, D.C.
1981-01-01
A new method for probabilistic risk assessment of transmission line insulation flashovers caused by lightning strokes is presented. The approach of applying the finite element method to probabilistic risk assessment is demonstrated to be very powerful, for two reasons. First, the finite element method is inherently suitable for the analysis of three-dimensional spaces in which the parameters, such as trivariate probability densities of the lightning currents, are non-uniformly distributed. Second, the finite element method permits non-uniform discretization of the three-dimensional probability spaces, thus yielding high accuracy in critical regions, such as the area of low-probability events, while at the same time maintaining coarse discretization in the non-critical areas to keep the number of grid points and the size of the problem at a manageably low level. The finite element probabilistic risk assessment method presented here is based on a new multidimensional search algorithm. It utilizes an efficient iterative technique for finite element interpolation of the transmission line insulation flashover criteria computed with an electromagnetic transients program. Compared to other available methods, the new finite element probabilistic risk assessment method is significantly more accurate and approximately two orders of magnitude more computationally efficient. The method is especially suited for accurate assessment of rare, very low probability events.
DOT National Transportation Integrated Search
2009-10-13
This paper describes a probabilistic approach to estimate the conditional probability of release of hazardous materials from railroad tank cars during train accidents. Monte Carlo methods are used in developing a probabilistic model to simulate head ...
NASA Technical Reports Server (NTRS)
Joshi, S. M.
1985-01-01
Robustness properties are investigated for two types of controllers for large flexible space structures, both of which use collocated sensors and actuators. The first type is an attitude controller which uses negative definite feedback of measured attitude and rate, while the second type is a damping enhancement controller which uses only velocity (rate) feedback. It is proved that collocated attitude controllers preserve closed-loop global asymptotic stability when linear actuator/sensor dynamics satisfying certain phase conditions, or monotonically increasing nonlinearities, are present. For velocity feedback controllers, global asymptotic stability is proved under much weaker conditions. In particular, they have 90° phase margin and can tolerate nonlinearities belonging to the (0, infinity) sector in the actuator/sensor characteristics. The results significantly enhance the viability of both types of collocated controllers, especially when the available information about the large space structure (LSS) parameters is inadequate or inaccurate.
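The robustness of collocated rate feedback can be seen in a toy model (an illustrative two-mass system, not the paper's LSS dynamics): velocity feedback at the actuator/sensor location dissipates energy for any positive gain, whatever the uncertain stiffness happens to be.

```python
# Toy two-mass flexible structure: actuator and rate sensor collocated at
# mass 1. Velocity feedback u = -g_fb * v1 dissipates energy for any
# g_fb > 0, regardless of the (uncertain) stiffness value.
import numpy as np

m1 = m2 = 1.0
k_spring = 50.0        # uncertain stiffness; the loop stays stable if changed
g_fb = 2.0             # rate-feedback gain

def deriv(s):
    x1, v1, x2, v2 = s
    f_s = k_spring * (x2 - x1)
    u = -g_fb * v1                       # collocated velocity feedback
    return np.array([v1, (f_s + u) / m1, v2, -f_s / m2])

def energy(s):
    x1, v1, x2, v2 = s
    return 0.5 * m1 * v1**2 + 0.5 * m2 * v2**2 + 0.5 * k_spring * (x2 - x1)**2

s = np.array([0.0, 0.0, 1.0, 0.0])       # initial spring deflection
dt = 1e-3
e0 = energy(s)
for _ in range(20000):                   # RK4, 20 s of simulated time
    k1 = deriv(s); k2 = deriv(s + dt / 2 * k1)
    k3 = deriv(s + dt / 2 * k2); k4 = deriv(s + dt * k3)
    s += dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
print(f"energy: {e0:.3f} -> {energy(s):.5f}  (dE/dt = -g_fb*v1^2 <= 0)")
```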
Structural reliability methods: Code development status
NASA Astrophysics Data System (ADS)
Millwater, Harry R.; Thacker, Ben H.; Wu, Y.-T.; Cruse, T. A.
1991-05-01
The Probabilistic Structures Analysis Method (PSAM) program integrates state-of-the-art probabilistic algorithms with structural analysis methods in order to quantify the behavior of Space Shuttle Main Engine structures subject to uncertain loadings, boundary conditions, material parameters, and geometric conditions. An advanced, efficient probabilistic structural analysis software program, NESSUS (Numerical Evaluation of Stochastic Structures Under Stress), was developed as a deliverable. NESSUS contains a number of integrated software components to perform probabilistic analysis of complex structures. A nonlinear finite element module, NESSUS/FEM, is used to model the structure and obtain structural sensitivities. Some of the capabilities of NESSUS/FEM are shown. A fast probability integration module, NESSUS/FPI, estimates the probability given the structural sensitivities. A driver module, PFEM, couples the FEM and FPI. NESSUS, version 5.0, addresses component reliability, resistance, and risk.
Computation of Sound Propagation by Boundary Element Method
NASA Technical Reports Server (NTRS)
Guo, Yueping
2005-01-01
This report documents the development of a Boundary Element Method (BEM) code for the computation of sound propagation in uniform mean flows. The basic formulation and implementation follow the standard BEM methodology; the convective wave equation and the boundary conditions on the surfaces of the bodies in the flow are formulated into an integral equation, and the method of collocation is used to discretize this equation into a matrix equation to be solved numerically. New features discussed here include the formulation of the additional terms due to the effects of the mean flow and the treatment of the numerical singularities in the implementation by the method of collocation. The effects of mean flows introduce terms in the integral equation that contain the gradients of the unknown, which is undesirable: if the gradients are treated as additional unknowns, the size of the matrix equation increases greatly, and if numerical differentiation is used to approximate the gradients, numerical error is introduced into the computation. It is shown that these terms can be reformulated in terms of the unknown itself, making the integral equation very similar to the case without mean flows and simple to implement numerically. To avoid asymptotic analysis in the treatment of numerical singularities in the method of collocation, as is conventionally done, we perform the surface integrations in the integral equation using sub-triangles so that the field point never coincides with the evaluation points on the surfaces. This simplifies the formulation and greatly facilitates the implementation. To validate the method and the code, three canonical problems are studied: the sound scattering by a sphere, the sound reflection by a plate in uniform mean flows, and the sound propagation over a hump of irregular shape in uniform flows. The first two have analytical solutions and the third is solved by the method of Computational Aeroacoustics (CAA), all of which are used for comparison with the BEM solutions. The comparisons show very good agreement and validate the accuracy of the BEM approach implemented here.
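A sketch of the sub-triangle idea (illustrative parameters, not the report's code): integrate a 1/r kernel over a flat triangle by uniform subdivision, sampling each sub-triangle at its centroid so the quadrature never evaluates at the singular field point.

```python
# Integrate the 1/r BEM kernel over a flat triangle by splitting it into
# 4**levels congruent sub-triangles sampled at their centroids. Centroids
# never coincide with the field point placed at a triangle vertex, so the
# singular point is never evaluated.
import numpy as np

def subdivide(tri):
    a, b, c = tri
    ab, bc, ca = (a + b) / 2, (b + c) / 2, (c + a) / 2
    return [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]

def integrate_over_triangle(tri, x0, levels=7):
    tris = [tuple(np.asarray(v, float) for v in tri)]
    for _ in range(levels):
        tris = [s for t in tris for s in subdivide(t)]
    total = 0.0
    for a, b, c in tris:
        area = 0.5 * np.linalg.norm(np.cross(b - a, c - a))
        r = np.linalg.norm(x0 - (a + b + c) / 3.0)
        total += area / r                 # centroid rule on each sub-triangle
    return total

tri = [(0, 0, 0), (1, 0, 0), (0, 1, 0)]
x0 = np.array([0.0, 0.0, 0.0])            # field point at a vertex (singular)
approx = integrate_over_triangle(tri, x0)
exact = np.sqrt(2) * np.log(1 + np.sqrt(2))   # analytic value, about 1.2465
print(f"sub-triangle quadrature: {approx:.4f}  (exact {exact:.4f})")
```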
Probabilistic structural analysis methods for space transportation propulsion systems
NASA Technical Reports Server (NTRS)
Chamis, C. C.; Moore, N.; Anis, C.; Newell, J.; Nagpal, V.; Singhal, S.
1991-01-01
Information on probabilistic structural analysis methods for space propulsion systems is given in viewgraph form. Information is given on deterministic certification methods, probability of failure, component response analysis, stress responses for 2nd stage turbine blades, Space Shuttle Main Engine (SSME) structural durability, and program plans.
Odum, Jack K.; Stephenson, William J.; Williams, Robert A.; von Hillebrandt-Andrade, Christa
2013-01-01
Shear-wave velocity (VS) and time-averaged shear-wave velocity to 30 m depth (VS30) are the key parameters used in seismic site response modeling and earthquake engineering design. Where VS data are limited, available data are often used to develop and refine map-based proxy models of VS30 for predicting ground-motion intensities. In this paper, we present shallow VS data from 27 sites in Puerto Rico. These data were acquired using a multimethod acquisition approach consisting of noninvasive, collocated, active-source body-wave (refraction/reflection) techniques, active-source surface-wave techniques at nine sites, and passive-source surface-wave refraction microtremor (ReMi) techniques. VS-versus-depth models are constructed and used to calculate spectral response plots for each site. Factors affecting method reliability are analyzed with respect to site-specific differences in bedrock VS and spectral response. At many but not all sites, body- and surface-wave methods generally determine similar depths to bedrock, and it is the difference in bedrock VS that influences site amplification. The predicted resonant frequencies for the majority of the sites are observed to be within a relatively narrow bandwidth of 1–3.5 Hz. For a first-order comparison of peak frequency position, predictive spectral response plots from eight sites are plotted along with seismograph instrument spectra derived from the time series of the 16 May 2010 Puerto Rico earthquake. We show how a multimethod acquisition approach using collocated arrays complements and corroborates VS results, thus adding confidence that reliable site characterization information has been obtained.
Parallel Aircraft Trajectory Optimization with Analytic Derivatives
NASA Technical Reports Server (NTRS)
Falck, Robert D.; Gray, Justin S.; Naylor, Bret
2016-01-01
Trajectory optimization is an integral component of aerospace vehicle design, but emerging aircraft technologies have introduced new demands on trajectory analysis that current tools are not well suited to address. Designing aircraft with technologies such as hybrid electric propulsion and morphing wings requires consideration of the operational behavior as well as the physical design characteristics of the aircraft. The addition of operational variables can dramatically increase the number of design variables, which motivates the use of gradient-based optimization with analytic derivatives to solve the larger optimization problems. In this work we develop an aircraft trajectory analysis tool using a Legendre-Gauss-Lobatto based collocation scheme, providing analytic derivatives via the OpenMDAO multidisciplinary optimization framework. This collocation method uses an implicit time integration scheme that provides a high degree of sparsity and thus several potential options for parallelization. The performance of the new implementation was investigated via a series of single and multi-trajectory optimizations using a combination of parallel computing and constraint aggregation. The computational performance results show that in order to take full advantage of the sparsity in the problem it is vital to parallelize both the nonlinear analysis evaluations and the derivative computations themselves. The constraint aggregation results showed a significant numerical challenge due to difficulty in achieving tight convergence tolerances. Overall, the results demonstrate the value of applying analytic derivatives to trajectory optimization problems and lay the foundation for future application of this collocation-based method to the design of aircraft where operational scheduling of technologies is key to achieving good performance.
NASA Astrophysics Data System (ADS)
Liang, Hui; Chen, Xiaobo
2017-10-01
A novel multi-domain method based on an analytical control surface is proposed by combining the use of the free-surface Green function and the Rankine source function. A cylindrical control surface is introduced to subdivide the fluid domain into external and internal domains. Unlike the traditional domain decomposition strategy or multi-block method, the control surface here is not panelized; on it, the velocity potential and normal velocity components are analytically expressed as a series of base functions composed of Laguerre functions in the vertical coordinate and Fourier series in the circumferential direction. The free-surface Green function is applied in the external domain, and the boundary integral equation is constructed on the control surface in the sense of Galerkin collocation, by integrating test functions orthogonal to the base functions over the control surface. The external solution gives rise to the so-called Dirichlet-to-Neumann [DtN] and Neumann-to-Dirichlet [NtD] relations on the control surface. Irregular frequencies, which depend only on the radius of the control surface, are present in the external solution; they are removed by extending the boundary integral equation to the interior free surface (a circular disc) on which a null normal derivative of the potential is imposed, with the dipole distribution expressed as a Fourier-Bessel expansion on the disc. In the internal domain, where the Rankine source function is adopted, new boundary integral equations are formulated. Point collocation is imposed over the body surface and free surface, while collocation of the Galerkin type is applied on the control surface. The present method is valid for the computation of both linear and second-order mean drift wave loads. Furthermore, the second-order mean drift force based on the middle-field formulation can be calculated analytically by using the coefficients of the Fourier-Laguerre expansion.
Probabilistic structural analysis methods for improving Space Shuttle engine reliability
NASA Technical Reports Server (NTRS)
Boyce, L.
1989-01-01
Probabilistic structural analysis methods are particularly useful in the design and analysis of critical structural components and systems that operate in very severe and uncertain environments. These methods have recently found application in space propulsion systems to improve the structural reliability of Space Shuttle Main Engine (SSME) components. A computer program, NESSUS, based on a deterministic finite-element program and a method of probabilistic analysis (fast probability integration), provides probabilistic structural analysis for selected SSME components. While computationally efficient, it considers both correlated and nonnormal random variables as well as an implicit functional relationship between independent and dependent variables. The program is used to determine the response of a nickel-based superalloy SSME turbopump blade. Results include blade tip displacement statistics due to the variability in blade thickness, modulus of elasticity, Poisson's ratio, and density. Modulus of elasticity contributed significantly to blade tip variability while Poisson's ratio did not. Thus, a rational method for choosing which parameters to model as random is provided.
Performance Evaluation and Community Application of Low-Cost Sensors for Ozone and Nitrogen Dioxide
This study reports on the performance of electrochemical-based low-cost sensors and their use in a community application. CairClip sensors were collocated with federal reference and equivalent methods and operated in a network of sites by citizen scientists (community members) in...
Multi-fidelity stochastic collocation method for computation of statistical moments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhu, Xueyu, E-mail: xueyu-zhu@uiowa.edu; Linebarger, Erin M., E-mail: aerinline@sci.utah.edu; Xiu, Dongbin, E-mail: xiu.16@osu.edu
We present an efficient numerical algorithm to approximate the statistical moments of stochastic problems in the presence of models with different fidelities. The method extends a previously developed multi-fidelity approximation method. By combining the efficiency of low-fidelity models and the accuracy of high-fidelity models, our method exhibits fast convergence with a limited number of high-fidelity simulations. We establish an error bound for the method and present several numerical examples to demonstrate the efficiency and applicability of the multi-fidelity algorithm.
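The flavor of multi-fidelity moment estimation can be sketched with a simple control-variate-style estimator (toy models and sample counts are assumptions; the paper's algorithm, which selects high-fidelity samples using low-fidelity information, is more sophisticated):

```python
# Control-variate flavor of multi-fidelity moment estimation: many cheap
# low-fidelity samples estimate the mean, a few expensive high-fidelity
# samples correct the low-fidelity bias. Both models are toy assumptions.
import numpy as np

rng = np.random.default_rng(0)
hi = lambda z: np.exp(0.9 * z) + 0.05 * np.sin(5 * z)   # "expensive" model
lo = lambda z: np.exp(0.9 * z)                          # "cheap" surrogate

z_many, z_few = rng.standard_normal(100_000), rng.standard_normal(100)

mf_mean = lo(z_many).mean() + (hi(z_few) - lo(z_few)).mean()

print("multi-fidelity mean estimate :", mf_mean)
print("100 high-fidelity runs alone :", hi(z_few).mean())
print("large high-fidelity reference:", hi(rng.standard_normal(1_000_000)).mean())
```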
On the anomaly of velocity-pressure decoupling in collocated mesh solutions
NASA Technical Reports Server (NTRS)
Kim, Sang-Wook; Vanoverbeke, Thomas
1991-01-01
The use of various pressure correction algorithms originally developed for fully staggered meshes can yield a velocity-pressure decoupled solution on collocated meshes. The mechanism that causes velocity-pressure decoupling is identified. It is shown that the use of a partial differential equation for the incremental pressure eliminates this mechanism and yields a velocity-pressure coupled solution. The example flows considered are a three-dimensional lid-driven cavity flow and a laminar flow through a 90 deg bend square duct. Numerical results obtained using the collocated mesh are in good agreement with the measured data and other numerical results.
NASA Technical Reports Server (NTRS)
Duffy, S. F.; Hu, J.; Hopkins, D. A.
1995-01-01
The article begins by examining the fundamentals of traditional deterministic design philosophy. The initial section outlines the concepts of failure criteria and limit state functions, two traditional notions that are embedded in deterministic design philosophy. This is followed by a discussion of safety factors (a possible limit state function) and the common utilization of statistical concepts in deterministic engineering design approaches. Next, the fundamental aspects of a probabilistic failure analysis are explored, and it is shown that the deterministic design concepts mentioned in the initial portion of the article are embedded in probabilistic design methods. For components fabricated from ceramic materials (and other similarly brittle materials), the probabilistic design approach yields the widely used Weibull analysis after suitable assumptions are incorporated. The authors point out that Weibull analysis provides the rare instance where closed-form solutions are available for a probabilistic failure analysis. Since numerical methods are usually required to evaluate component reliabilities, a section on Monte Carlo methods is included to introduce the concept. The article concludes with a presentation of the technical aspects that support the numerical method known as fast probability integration (FPI), including a discussion of the Hasofer-Lind and Rackwitz-Fiessler approximations.
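Since Weibull analysis is the cited closed-form case, a minimal sketch (with an assumed two-parameter Weibull strength model under uniform uniaxial stress) shows the closed form agreeing with a Monte Carlo estimate:

```python
# Two-parameter Weibull failure model for a brittle component under a
# uniform uniaxial stress: Pf = 1 - exp(-(sigma/sigma0)**m). The closed
# form and a Monte Carlo check agree; parameter values are illustrative.
import numpy as np

m, sigma0 = 10.0, 400.0          # Weibull modulus, characteristic strength (MPa)
sigma = 300.0                    # applied stress (MPa)

pf_closed = 1.0 - np.exp(-(sigma / sigma0) ** m)

rng = np.random.default_rng(1)
strengths = sigma0 * rng.weibull(m, size=1_000_000)   # sampled strengths
pf_mc = (strengths < sigma).mean()
print(f"closed form Pf = {pf_closed:.5f}, Monte Carlo Pf = {pf_mc:.5f}")
```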
Generalized INF-SUP condition for Chebyshev approximation of the Navier-Stokes equations
NASA Technical Reports Server (NTRS)
Bernardi, Christine; Canuto, Claudio; Maday, Yvon
1986-01-01
An abstract mixed problem and its approximation are studied; both are well-posed if and only if several inf-sup conditions are satisfied. These results are applied to a spectral Galerkin method for the Stokes problem in a square, when it is formulated in Chebyshev weighted Sobolev spaces. Finally, a collocation method for the Navier-Stokes equations at Chebyshev nodes is analyzed.
2010-02-28
…implemented a fast method to enable the statistical characterization of electromagnetic interference and compatibility (EMI/EMC) phenomena on electrically… higher accuracy is needed, e.g., to compute higher-moment statistics. To address this problem, we have developed adaptive stochastic collocation methods…
An approximate methods approach to probabilistic structural analysis
NASA Technical Reports Server (NTRS)
Mcclung, R. C.; Millwater, H. R.; Wu, Y.-T.; Thacker, B. H.; Burnside, O. H.
1989-01-01
A probabilistic structural analysis method (PSAM) is described which makes an approximate calculation of the structural response of a system, including the associated probabilistic distributions, with minimal computation time and cost, based on a simplified representation of the geometry, loads, and material. The method employs the fast probability integration (FPI) algorithm of Wu and Wirsching. Typical solution strategies are illustrated by formulations for a representative critical component chosen from the Space Shuttle Main Engine (SSME) as part of a major NASA-sponsored program on PSAM. Typical results are presented to demonstrate the role of the methodology in engineering design and analysis.
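The FPI algorithm itself is not reproduced here, but a first-order reliability calculation for the classic linear limit state g = R - S with independent normal variables conveys the flavor of approximate probabilistic response analysis (all statistics are illustrative assumptions):

```python
# First-order reliability for the linear limit state g = R - S with
# independent normal resistance R and load S; exact in this special case
# and representative of the kind of result fast probability integration
# produces far more cheaply than brute-force sampling.
import math

muR, sdR = 500.0, 40.0    # resistance mean and standard deviation (assumed)
muS, sdS = 350.0, 60.0    # load mean and standard deviation (assumed)

beta = (muR - muS) / math.hypot(sdR, sdS)       # reliability (safety) index
pf = 0.5 * math.erfc(beta / math.sqrt(2))       # Pf = Phi(-beta)
print(f"beta = {beta:.3f}, probability of failure = {pf:.3e}")
```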
Berti, Claudio; Gillespie, Dirk; Bardhan, Jaydeep P; Eisenberg, Robert S; Fiegna, Claudio
2012-07-01
Particle-based simulation represents a powerful approach to modeling physical systems in electronics, molecular biology, and chemical physics. Accounting for the interactions occurring among charged particles requires an accurate and efficient solution of Poisson's equation. For a system of discrete charges with inhomogeneous dielectrics, i.e., a system with discontinuities in the permittivity, the boundary element method (BEM) is frequently adopted. It provides the solution of Poisson's equation, accounting for polarization effects due to the discontinuity in the permittivity by computing the induced charges at the dielectric boundaries. In this framework, the total electrostatic potential is then found by superimposing the elemental contributions from both source and induced charges. In this paper, we present a comparison between two BEMs to solve a boundary-integral formulation of Poisson's equation, with emphasis on the BEMs' suitability for particle-based simulations in terms of solution accuracy and computation speed. The two approaches are the collocation and qualocation methods. Collocation is implemented following the induced-charge computation method of D. Boda et al. [J. Chem. Phys. 125, 034901 (2006)]. The qualocation method is described by J. Tausch et al. [IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems 20, 1398 (2001)]. These approaches are studied using both flat and curved surface elements to discretize the dielectric boundary, using two challenging test cases: a dielectric sphere embedded in a different dielectric medium and a toy model of an ion channel. Earlier comparisons of the two BEM approaches did not address curved surface elements or semiatomistic models of ion channels. Our results support the earlier findings that for flat-element calculations, qualocation is always significantly more accurate than collocation. On the other hand, when the dielectric boundary is discretized with curved surface elements, the two methods are essentially equivalent; i.e., they have comparable accuracies for the same number of elements. We find that ions in water--charges embedded in a high-dielectric medium--are harder to compute accurately than charges in a low-dielectric medium.
Fuzzy-probabilistic model for risk assessment of radioactive material railway transportation.
Avramenko, M; Bolyatko, V; Kosterev, V
2005-01-01
Transportation of radioactive materials is obviously accompanied by a certain risk. A model for risk assessment of emergency situations and terrorist attacks may be useful for choosing possible routes and for comparing the various defence strategies. In particular, risk assessment is crucial for the safe transportation of excess weapons-grade plutonium arising from the removal of plutonium from military use. A fuzzy-probabilistic model for risk assessment of railway transportation has been developed, taking into account the different natures of the risk-affecting parameters (some probabilistic, others not probabilistic but fuzzy). Fuzzy set theory methods as well as standard methods of probability theory have been used for quantitative risk assessment. Information-preserving transformations are applied to realise the correct aggregation of probabilistic and fuzzy parameters. Estimates have also been made of the inhalation doses resulting from possible accidents during plutonium transportation. The obtained data show the scale of the possible consequences of plutonium transportation accidents.
Beyond triple collocation: Applications to satellite soil moisture
USDA-ARS?s Scientific Manuscript database
Triple collocation is now routinely used to resolve the exact (linear) relationships between multiple measurements and/or representations of a geophysical variable that are subject to errors. It has been utilized in the context of calibration, rescaling and error characterisation to allow comparison...
Evaluating Remotely-Sensed Surface Soil Moisture Estimates Using Triple Collocation
USDA-ARS?s Scientific Manuscript database
Recent work has demonstrated the potential of enhancing remotely-sensed surface soil moisture validation activities through the application of triple collocation techniques which compare time series of three mutually independent geophysical variable estimates in order to acquire the root-mean-square...
Li, Zhixi; Peck, Kyung K.; Brennan, Nicole P.; Jenabi, Mehrnaz; Hsu, Meier; Zhang, Zhigang; Holodny, Andrei I.; Young, Robert J.
2014-01-01
Purpose The purpose of this study was to compare the deterministic and probabilistic tracking methods of diffusion tensor white matter fiber tractography in patients with brain tumors. Materials and Methods We identified 29 patients with left brain tumors <2 cm from the arcuate fasciculus who underwent pre-operative language fMRI and DTI. The arcuate fasciculus was reconstructed using a deterministic Fiber Assignment by Continuous Tracking (FACT) algorithm and a probabilistic method based on an extended Monte Carlo Random Walk algorithm. Tracking was controlled using two ROIs corresponding to Broca's and Wernicke's areas. Tracts in tumor-affected hemispheres were examined for extension between Broca's and Wernicke's areas, anterior-posterior length, and volume, and compared with the normal contralateral tracts. Results Probabilistic tracts displayed more complete anterior extension to Broca's area than did FACT tracts on the tumor-affected and normal sides (p < 0.0001). The median length ratio for tumor:normal sides was greater for probabilistic tracts than FACT tracts (p < 0.0001). The median tract volume ratio for tumor:normal sides was also greater for probabilistic tracts than FACT tracts (p = 0.01). Conclusion Probabilistic tractography reconstructs the arcuate fasciculus more completely and performs better through areas of tumor and/or edema. The FACT algorithm tends to underestimate the anterior-most fibers of the arcuate fasciculus, which are crossed by primary motor fibers. PMID:25328583
NASA Astrophysics Data System (ADS)
Doha, E. H.; Bhrawy, A. H.; Abdelkawy, M. A.; Van Gorder, Robert A.
2014-03-01
A Jacobi-Gauss-Lobatto collocation (J-GL-C) method, used in combination with the implicit Runge-Kutta method of fourth order, is proposed as a numerical algorithm for the approximation of solutions to nonlinear Schrödinger equations (NLSE) with initial-boundary data in 1+1 dimensions. Our procedure is implemented in two successive steps. In the first one, the J-GL-C is employed for approximating the functional dependence on the spatial variable, using (N-1) nodes of the Jacobi-Gauss-Lobatto interpolation which depends upon two general Jacobi parameters. The resulting equations together with the two-point boundary conditions induce a system of 2(N-1) first-order ordinary differential equations (ODEs) in time. In the second step, the implicit Runge-Kutta method of fourth order is applied to solve this temporal system. The proposed J-GL-C method, used in combination with the implicit Runge-Kutta method of fourth order, is employed to obtain highly accurate numerical approximations to four types of NLSE, including the attractive and repulsive NLSE and a Gross-Pitaevskii equation with space-periodic potential. The numerical results obtained by this algorithm have been compared with various exact solutions in order to demonstrate the accuracy and efficiency of the proposed method. Indeed, for relatively few nodes used, the absolute error in our numerical solutions is sufficiently small.
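Chebyshev-Gauss-Lobatto collocation is the Jacobi case with both parameters equal to -1/2, so a compact sketch of the same two-step design (collocation in space, Runge-Kutta in time) can be shown on a simpler test problem; explicit RK4 and the 1-D heat equation are simplifying assumptions here, whereas the paper treats NLSE with an implicit fourth-order scheme.

```python
# Chebyshev-Gauss-Lobatto collocation (the Jacobi case alpha = beta = -1/2)
# in method-of-lines form for u_t = u_xx, u(+-1, t) = 0. Explicit RK4
# replaces the paper's implicit fourth-order Runge-Kutta for brevity.
import numpy as np

def cheb(N):
    x = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    dX = x[:, None] - x[None, :]
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))
    D -= np.diag(D.sum(axis=1))
    return D, x

N = 24
D, x = cheb(N)
D2 = (D @ D)[1:-1, 1:-1]           # second derivative on interior nodes
u = np.sin(np.pi * x[1:-1])        # eigenmode: exact decay exp(-pi^2 t)

dt, T = 5e-5, 0.1                  # small step: explicit RK4 on a stiff system
for _ in range(int(T / dt)):
    k1 = D2 @ u; k2 = D2 @ (u + dt / 2 * k1)
    k3 = D2 @ (u + dt / 2 * k2); k4 = D2 @ (u + dt * k3)
    u += dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

err = np.abs(u - np.exp(-np.pi**2 * T) * np.sin(np.pi * x[1:-1])).max()
print("max error vs exact decay:", err)
```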
Memory Indexing: A Novel Method for Tracing Memory Processes in Complex Cognitive Tasks
ERIC Educational Resources Information Center
Renkewitz, Frank; Jahn, Georg
2012-01-01
We validate an eye-tracking method applicable for studying memory processes in complex cognitive tasks. The method is tested with a task on probabilistic inferences from memory. It provides valuable data on the time course of processing, thus clarifying previous results on heuristic probabilistic inference. Participants learned cue values of…
Fully probabilistic control for stochastic nonlinear control systems with input dependent noise.
Herzallah, Randa
2015-03-01
Robust controllers for nonlinear stochastic systems with functional uncertainties can be consistently designed using probabilistic control methods. In this paper a generalised probabilistic controller design is presented for the minimisation of the Kullback-Leibler divergence between the actual joint probability density function (pdf) of the closed-loop control system and an ideal joint pdf, emphasising how the uncertainty can be systematically incorporated in the absence of reliable system models. To achieve this objective, all probabilistic models of the system are estimated from process data using mixture density networks (MDNs), where all the parameters of the estimated pdfs are taken to be state and control input dependent. Based on this dependency of the density parameters on the input values, explicit formulations for the construction of optimal generalised probabilistic controllers are obtained through the techniques of dynamic programming and adaptive critic methods. Using the proposed generalised probabilistic controller, the conditional joint pdfs can be made to follow the ideal ones. A simulation example is used to demonstrate the implementation of the algorithm, and encouraging results are obtained.
UCAV path planning in the presence of radar-guided surface-to-air missile threats
NASA Astrophysics Data System (ADS)
Zeitz, Frederick H., III
This dissertation addresses the problem of path planning for unmanned combat aerial vehicles (UCAVs) in the presence of radar-guided surface-to-air missiles (SAMs). The radars, collocated with SAM launch sites, operate within the structure of an Integrated Air Defense System (IADS) that permits communication and cooperation between individual radars. The problem is formulated in the framework of the interaction between three sub-systems: the aircraft, the IADS, and the missile. The main features of this integrated model are: The aircraft radar cross section (RCS) depends explicitly on both the aspect and bank angles; hence, the RCS and aircraft dynamics are coupled. The probabilistic nature of IADS tracking is accounted for; namely, the probability that the aircraft has been continuously tracked by the IADS depends on the aircraft RCS and range from the perspective of each radar within the IADS. Finally, the requirement to maintain tracking prior to missile launch and during missile flyout are also modeled. Based on this model, the problem of UCAV path planning is formulated as a minimax optimal control problem, with the aircraft bank angle serving as control. Necessary conditions of optimality for this minimax problem are derived. Based on these necessary conditions, properties of the optimal paths are derived. These properties are used to discretize the dynamic optimization problem into a finite-dimensional, nonlinear programming problem that can be solved numerically. Properties of the optimal paths are also used to initialize the numerical procedure. A homotopy method is proposed to solve the finite-dimensional, nonlinear programming problem, and a heuristic method is proposed to improve the discretization during the homotopy process. Based upon the properties of numerical solutions, a method is proposed for parameterizing and storing information for later recall in flight to permit rapid replanning in response to changing threats. Illustrative examples are presented that confirm the standard flying tactics of "denying range, aspect, and aim," by yielding flight paths that "weave" to avoid long exposures of aspects with large RCS.
Probabilistic Simulation of Stress Concentration in Composite Laminates
NASA Technical Reports Server (NTRS)
Chamis, C. C.; Murthy, P. L. N.; Liaw, D. G.
1994-01-01
A computational methodology is described to probabilistically simulate the stress concentration factors (SCFs) in composite laminates. This new approach consists of coupling probabilistic composite mechanics with probabilistic finite element structural analysis. The composite mechanics is used to probabilistically describe all the uncertainties inherent in composite material properties, whereas the finite element analysis is used to probabilistically describe the uncertainties associated with methods to experimentally evaluate SCFs, such as loads, geometry, and supports. The effectiveness of the methodology is demonstrated by using it to simulate the SCFs in three different composite laminates. Simulated results match experimental data for probability density and for cumulative distribution functions. The sensitivity factors indicate that the SCFs are influenced by local stiffness variables, by load eccentricities, and by initial stress fields.
Discretization of the induced-charge boundary integral equation.
Bardhan, Jaydeep P; Eisenberg, Robert S; Gillespie, Dirk
2009-07-01
Boundary-element methods (BEMs) for solving integral equations numerically have been used in many fields to compute the induced charges at dielectric boundaries. In this paper, we consider a more accurate implementation of BEM in the context of ions in aqueous solution near proteins, but our results are applicable more generally. The ions that modulate protein function are often within a few angstroms of the protein, which leads to the significant accumulation of polarization charge at the protein-solvent interface. Computing the induced charge accurately and quickly poses a numerical challenge in solving a popular integral equation using BEM. In particular, the accuracy of simulations can depend strongly on seemingly minor details of how the entries of the BEM matrix are calculated. We demonstrate that when the dielectric interface is discretized into flat tiles, the qualocation method of Tausch [IEEE Trans Comput.-Comput.-Aided Des. 20, 1398 (2001)] to compute the BEM matrix elements is always more accurate than the traditional centroid-collocation method. Qualocation is not more expensive to implement than collocation and can save significant computational time by reducing the number of boundary elements needed to discretize the dielectric interfaces.
DOE Office of Scientific and Technical Information (OSTI.GOV)
D'Elia, M.; Edwards, H. C.; Hu, J.
Previous work has demonstrated that propagating groups of samples, called ensembles, together through forward simulations can dramatically reduce the aggregate cost of sampling-based uncertainty propagation methods [E. Phipps, M. D'Elia, H. C. Edwards, M. Hoemmen, J. Hu, and S. Rajamanickam, SIAM J. Sci. Comput., 39 (2017), pp. C162--C193]. However, critical to the success of this approach when applied to challenging problems of scientific interest is the grouping of samples into ensembles to minimize the total computational work. For example, the total number of linear solver iterations for ensemble systems may be strongly influenced by which samples form the ensemble when applying iterative linear solvers to parameterized and stochastic linear systems. In this paper we explore sample grouping strategies for local adaptive stochastic collocation methods applied to PDEs with uncertain input data, in particular canonical anisotropic diffusion problems where the diffusion coefficient is modeled by truncated Karhunen--Loève expansions. Finally, we demonstrate that a measure of the total anisotropy of the diffusion coefficient is a good surrogate for the number of linear solver iterations for each sample and therefore provides a simple and effective metric for grouping samples.
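A sketch of the grouping step under toy assumptions (random KL-type coefficient vectors and a stand-in anisotropy score; the paper's metric and solver coupling are more involved): score each sample cheaply, sort, and chunk so samples with similar expected solver cost share an ensemble.

```python
# Group stochastic-collocation samples into ensembles using a cheap
# surrogate for linear-solver cost. The "anisotropy" score below is a
# stand-in; the paper derives its metric from the diffusion coefficient's
# truncated Karhunen-Loeve expansion.
import numpy as np

rng = np.random.default_rng(2)
n_samples, n_modes, ens_size = 64, 8, 8
samples = rng.standard_normal((n_samples, n_modes))   # KL coefficient vectors

decay = 1.0 / (1.0 + np.arange(n_modes)) ** 2         # assumed eigenvalue decay
scores = np.abs(samples * decay).sum(axis=1)          # surrogate cost score

order = np.argsort(scores)                            # similar scores grouped
ensembles = [order[i:i + ens_size] for i in range(0, n_samples, ens_size)]
for k, idx in enumerate(ensembles):
    print(f"ensemble {k}: scores in [{scores[idx].min():.2f}, {scores[idx].max():.2f}]")
```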
A conservative staggered-grid Chebyshev multidomain method for compressible flows
NASA Technical Reports Server (NTRS)
Kopriva, David A.; Kolias, John H.
1995-01-01
We present a new multidomain spectral collocation method that uses staggered grids for the solution of compressible flow problems. The solution unknowns are defined at the nodes of a Gauss quadrature rule. The fluxes are evaluated at the nodes of a Gauss-Lobatto rule. The method is conservative, free-stream preserving, and exponentially accurate. A significant advantage of the method is that subdomain corners are not included in the approximation, making solutions in complex geometries easier to compute.
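The two node families of the staggered grid are easy to generate; the sketch below (using NumPy's Legendre utilities) builds N Gauss nodes for the solution and N+1 Gauss-Lobatto nodes for the fluxes, so that no solution unknown sits at a subdomain corner.

```python
# Staggered spectral grids on [-1, 1]: solution unknowns at the N Gauss
# nodes (strictly interior), fluxes at the N+1 Gauss-Lobatto nodes
# (endpoints included).
import numpy as np
from numpy.polynomial import legendre

N = 6
gauss_nodes, _ = legendre.leggauss(N)                 # interior-only nodes

# Gauss-Lobatto nodes: +-1 together with the roots of P_N'(x)
PN = legendre.Legendre.basis(N)
lobatto_nodes = np.sort(np.concatenate(([-1.0], PN.deriv().roots(), [1.0])))

print("Gauss  (solution) nodes:", np.round(gauss_nodes, 4))
print("Lobatto (flux) nodes   :", np.round(lobatto_nodes, 4))
```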
Non-Deterministic Dynamic Instability of Composite Shells
NASA Technical Reports Server (NTRS)
Chamis, Christos C.; Abumeri, Galib H.
2004-01-01
A computationally effective method is described to evaluate the non-deterministic dynamic instability (probabilistic dynamic buckling) of thin composite shells. The method is a judicious combination of available computer codes for finite element, composite mechanics, and probabilistic structural analysis. The solution method is incrementally updated Lagrangian. It is illustrated by applying it to a thin composite cylindrical shell subjected to dynamic loads. Both deterministic and probabilistic buckling loads are evaluated to demonstrate the effectiveness of the method. A universal plot is obtained for the specific shell that can be used to approximate buckling loads for different load rates and different probability levels. Results from this plot show that the faster the rate, the higher the buckling load and the shorter the time. The lower the probability, the lower the buckling load for a specific time. Probabilistic sensitivity results show that the ply thickness, the fiber volume ratio, the fiber longitudinal modulus, the dynamic load, and the loading rate are the dominant uncertainties, in that order.
Commercialization of NESSUS: Status
NASA Technical Reports Server (NTRS)
Thacker, Ben H.; Millwater, Harry R.
1991-01-01
A plan was initiated in 1988 to commercialize the Numerical Evaluation of Stochastic Structures Under Stress (NESSUS) probabilistic structural analysis software. The goal of the ongoing commercialization effort is to begin the transfer of technology developed under the Probabilistic Structural Analysis Method (PSAM) program into industry and to develop additional funding resources in the general area of structural reliability. The commercialization effort is summarized. The SwRI NESSUS software system is a general-purpose probabilistic finite element computer program using state-of-the-art methods for predicting stochastic structural response due to random loads, material properties, part geometry, and boundary conditions. NESSUS can be used to assess structural reliability, to compute the probability of failure, to rank the input random variables by importance, and to provide a more cost-effective design than traditional methods. The goal is to develop a general probabilistic structural analysis methodology to assist in the certification of critical components in the next-generation Space Shuttle Main Engine.
Mapped Chebyshev Pseudo-Spectral Method for Dynamic Aero-Elastic Problem of Limit Cycle Oscillation
NASA Astrophysics Data System (ADS)
Im, Dong Kyun; Kim, Hyun Soon; Choi, Seongim
2018-05-01
A mapped Chebyshev pseudo-spectral method is developed as one of the Fourier-spectral approaches; it solves nonlinear PDE systems for unsteady flows and dynamic aero-elastic problems in a given time interval, where the flows or elastic motions can be periodic, nonperiodic, or periodic with an unknown frequency. The method uses the Chebyshev polynomials of the first kind as basis functions and redistributes the standard Chebyshev-Gauss-Lobatto collocation points more evenly through a conformal mapping function for improved numerical stability. The method makes several contributions. It can be an order of magnitude more efficient than conventional finite difference-based, time-accurate computation, depending on the complexity of the solutions and the number of collocation points. It reformulates the dynamic aero-elastic problem in spectral form for coupled analysis of aerodynamics and structures, which can be effective for design optimization of unsteady and dynamic problems. A limit cycle oscillation (LCO) is chosen for validation, and a new method to determine the LCO frequency is introduced based on the minimization of a second derivative of the aero-elastic formulation. Two limit cycle oscillation examples are tested: a nonlinear, one degree-of-freedom mass-spring-damper system and a two degrees-of-freedom airfoil oscillating in pitch and plunge. Results show good agreement with those of conventional time-accurate simulations and wind tunnel experiments.
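One widely used conformal map of this kind is the arcsine (Kosloff/Tal-Ezer-style) map sketched below; the specific mapping and parameter choice in the paper may differ, so treat this as illustrative.

```python
# Arcsine (Kosloff/Tal-Ezer-style) mapping of Chebyshev-Gauss-Lobatto
# points. alpha in (0, 1) controls how much the boundary clustering is
# relaxed; alpha -> 1 approaches uniform spacing.
import numpy as np

N, alpha = 16, 0.98
xi = np.cos(np.pi * np.arange(N + 1) / N)        # standard CGL points
x = np.arcsin(alpha * xi) / np.arcsin(alpha)     # mapped points

print("min CGL spacing    :", np.diff(np.sort(xi)).min())
print("min mapped spacing :", np.diff(np.sort(x)).min())
```

The larger minimum spacing near the boundaries is what relaxes the explicit time-step restriction, one of the stability benefits the abstract attributes to the mapping.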
A Comparative Usage-Based Approach to the Reduction of the Spanish and Portuguese Preposition "Para"
ERIC Educational Resources Information Center
Gradoville, Michael Stephen
2013-01-01
This study examines the frequency effect of two-word collocations involving "para" ("to," "for"), e.g. "fui para," "para que," on the reduction of "para" to "pa" (in Spanish) and "pra" (in Portuguese). Collocation frequency effects demonstrate that language speakers…
Students’ difficulties in probabilistic problem-solving
NASA Astrophysics Data System (ADS)
Arum, D. P.; Kusmayadi, T. A.; Pramudya, I.
2018-03-01
Many errors can be identified when students solve mathematics problems, particularly probabilistic problems. The present study aims to investigate students' difficulties in solving probabilistic problems, focusing on analyzing and describing students' errors during problem solving. The research used a qualitative method with a case study strategy. The subjects were ten 9th-grade students selected by purposive sampling. The data comprise students' probabilistic problem-solving results and recorded interviews regarding their difficulties, analyzed descriptively using the Miles and Huberman steps. The results show that students' difficulties in solving probabilistic problems fall into three categories. The first relates to understanding the probabilistic problem; the second to choosing and using appropriate solution strategies; and the third to the computational process. It appears that students are not yet able to apply their knowledge and abilities to probabilistic problems. Therefore, it is important for mathematics teachers to plan probabilistic learning that could optimize students' probabilistic thinking ability.
On pseudo-spectral time discretizations in summation-by-parts form
NASA Astrophysics Data System (ADS)
Ruggiu, Andrea A.; Nordström, Jan
2018-05-01
Fully-implicit discrete formulations in summation-by-parts form for initial-boundary value problems must be invertible in order to provide well-functioning procedures. We prove that, under mild assumptions, pseudo-spectral collocation methods for the time derivative lead to invertible discrete systems when energy-stable spatial discretizations are used.
2008-12-01
…collocated independent time transfer techniques such as Two-Way Satellite Time and Frequency Transfer (TWSTFT) [10,11]. The issue of pseudorange errors… transfer methods, e.g. TWSTFT. There is a side benefit that far exceeds just meeting the objective we have set. The new model explicitly reveals, on…
Evolutionary optimization with data collocation for reverse engineering of biological networks.
Tsai, Kuan-Yao; Wang, Feng-Sheng
2005-04-01
Modern experimental biology is moving away from analyses of single elements toward whole-organism measurements. Such measured time-course data contain a wealth of information about the structure and dynamics of the pathway or network. Dynamic modeling of whole systems is formulated as an inverse problem that requires a well-suited mathematical model and a very efficient computational method to identify the model structure and parameters. Numerical integration of differential equations and finding globally optimal parameter values remain two major challenges in the parameter estimation of nonlinear dynamic biological systems. We compare three techniques of parameter estimation for nonlinear dynamic biological systems. In the proposed scheme, the modified collocation method is applied to convert the differential equations to a system of algebraic equations. The observed time-course data are then substituted into the algebraic system equations to decouple system interactions and obtain approximate model profiles. Hybrid differential evolution (HDE) with a population size of five is able to find a global solution. The method is suited not only to parameter estimation but can also be applied to structure identification. The solution obtained by HDE is then used as the starting point for a local search method to yield refined estimates.
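A sketch of the decoupling idea on a toy logistic model (the paper's modified collocation and hybrid differential evolution are replaced here by simple slope estimation and linear least squares, purely for illustration):

```python
# Decoupled estimation for a toy logistic model dx/dt = r*x*(1 - x/K).
# Substituting slope estimates from the measured time course turns the ODE
# into a relation linear in (a, b) = (r, r/K), fit by least squares.
import numpy as np

rng = np.random.default_rng(3)
r_true, K_true = 0.8, 10.0
t = np.linspace(0, 12, 60)
x = K_true / (1 + (K_true - 1) * np.exp(-r_true * t))   # exact, x(0) = 1
x_obs = x + 0.02 * rng.standard_normal(t.size)          # noisy observations

dxdt = np.gradient(x_obs, t)               # collocation-style slope estimates
A = np.column_stack([x_obs, -x_obs**2])    # model: dx/dt = a*x - b*x^2
(a, b), *_ = np.linalg.lstsq(A, dxdt, rcond=None)
print(f"r = {a:.3f} (true {r_true}), K = {a / b:.3f} (true {K_true})")
```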
NASA Astrophysics Data System (ADS)
Wei, Linyang; Qi, Hong; Sun, Jianping; Ren, Yatao; Ruan, Liming
2017-05-01
The spectral collocation method (SCM) is employed to solve radiative transfer in multi-layer semitransparent media with graded index. A new flexible angular discretization scheme is employed to discretize the solid angle domain freely, overcoming the limit on the number of discrete radiative directions in the traditional SN discrete ordinate scheme. Three radial basis function interpolation approaches, namely multi-quadric (MQ), inverse multi-quadric (IMQ), and inverse quadratic (IQ) interpolation, are employed to couple the radiative intensity at the interface between two adjacent layers, and numerical experiments show that MQ interpolation has the highest accuracy and best stability. Various radiative transfer problems in double-layer semitransparent media with different thermophysical properties are investigated, and the influence of these properties on the radiative transfer process in double-layer semitransparent media is analyzed. All the simulated results show that the present SCM with the new angular discretization scheme can predict radiative transfer in multi-layer semitransparent media with graded index efficiently and accurately.
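A minimal sketch of multi-quadric (MQ) RBF interpolation of the kind used to couple interface values (the shape parameter and test function are assumptions):

```python
# Multi-quadric (MQ) radial basis function interpolation in 1-D, the
# variant reported most accurate for coupling interface intensities.
# Shape parameter c and the test function are illustrative.
import numpy as np

mq = lambda r, c=0.5: np.sqrt(r**2 + c**2)

nodes = np.linspace(-1, 1, 11)
f = lambda s: np.tanh(3 * s)
w = np.linalg.solve(mq(np.abs(nodes[:, None] - nodes[None, :])), f(nodes))

s = np.linspace(-1, 1, 201)
interp = mq(np.abs(s[:, None] - nodes[None, :])) @ w
print("max interpolation error:", np.abs(interp - f(s)).max())
```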
Parallel adaptive wavelet collocation method for PDEs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nejadmalayeri, Alireza, E-mail: Alireza.Nejadmalayeri@gmail.com; Vezolainen, Alexei, E-mail: Alexei.Vezolainen@Colorado.edu; Brown-Dymkoski, Eric, E-mail: Eric.Browndymkoski@Colorado.edu
2015-10-01
A parallel adaptive wavelet collocation method for solving a large class of partial differential equations is presented. The parallelization is achieved by developing an asynchronous parallel wavelet transform, which allows one to perform parallel wavelet transform and derivative calculations with only one data synchronization at the highest level of resolution. The data are stored using a tree-like structure with tree roots starting at an a priori defined level of resolution. Both static and dynamic domain partitioning approaches are developed. For the dynamic domain partitioning, trees are considered to be the minimum quanta of data to be migrated between the processes. This allows fully automated and efficient handling of non-simply connected partitioning of a computational domain. Dynamic load balancing is achieved via domain repartitioning during the grid adaptation step and reassigning trees to the appropriate processes to ensure approximately the same number of grid points on each process. The parallel efficiency of the approach is discussed based on parallel adaptive wavelet-based Coherent Vortex Simulations of homogeneous turbulence with linear forcing at effective non-adaptive resolutions up to 2048³ using as many as 2048 CPU cores.
NASA Technical Reports Server (NTRS)
Belytschko, Ted; Wing, Kam Liu
1987-01-01
In the Probabilistic Finite Element Method (PFEM), finite element methods have been efficiently combined with second-order perturbation techniques to provide an effective method for informing the designer of the range of response which is likely in a given problem. The designer must provide as input the statistical character of the input variables, such as yield strength, load magnitude, and Young's modulus, by specifying their mean values and their variances. The output then consists of the mean response and the variance in the response. Thus the designer is given a much broader picture of the predicted performance than with a single response curve. These methods are applicable to a wide class of problems, provided that the scale of randomness is not too large and the probabilistic density functions possess decaying tails. By incorporating the computational techniques we have developed in the past three years for efficiency, the probabilistic finite element methods are capable of handling large systems with many sources of uncertainty. Sample results are given for an elastic-plastic ten-bar structure and an elastic-plastic plane continuum with a circular hole subject to cyclic loadings, with the yield stress modeled as a random field.
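A scalar sketch of second-order perturbation propagation (a stand-in response function g replaces the finite element model; the mean is corrected to second order while the variance is kept to first order for brevity):

```python
# Second-order perturbation propagation of input mean/variance through a
# scalar response g (a stand-in for the finite element response), checked
# against Monte Carlo. Mean corrected to second order, variance to first.
import numpy as np

g = lambda x: x**3 + 2 * x        # illustrative response function
mu, sigma = 1.5, 0.1              # input mean and standard deviation

h = 1e-4                          # finite-difference derivatives at the mean
g1 = (g(mu + h) - g(mu - h)) / (2 * h)
g2 = (g(mu + h) - 2 * g(mu) + g(mu - h)) / h**2

mean_y = g(mu) + 0.5 * g2 * sigma**2
var_y = g1**2 * sigma**2

ys = g(mu + sigma * np.random.default_rng(4).standard_normal(1_000_000))
print(f"perturbation: mean {mean_y:.4f}, var {var_y:.5f}")
print(f"Monte Carlo : mean {ys.mean():.4f}, var {ys.var():.5f}")
```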
P. B. Woodbury; D. A. Weinstein
2010-01-01
We reviewed probabilistic regional risk assessment methodologies to identify the methods that are currently in use and are capable of estimating threats to ecosystems from fire and fuels, invasive species, and their interactions with stressors. In a companion chapter, we highlight methods useful for evaluating risks from fire. In this chapter, we highlight methods...
NASA Astrophysics Data System (ADS)
Wattanasakulpong, Nuttawit; Chaikittiratana, Arisara; Pornpeerakeat, Sacharuck
2018-06-01
In this paper, vibration analysis of functionally graded porous beams is carried out using the third-order shear deformation theory. The beams have uniform and non-uniform porosity distributions across their thickness and both ends are supported by rotational and translational springs. The material properties of the beams such as elastic moduli and mass density can be related to the porosity and mass coefficient utilizing the typical mechanical features of open-cell metal foams. The Chebyshev collocation method is applied to solve the governing equations derived from Hamilton's principle, which is used in order to obtain the accurate natural frequencies for the vibration problem of beams with various general and elastic boundary conditions. Based on the numerical experiments, it is revealed that the natural frequencies of the beams with asymmetric and non-uniform porosity distributions are higher than those of other beams with uniform and symmetric porosity distributions.
[On-orbit radiometric calibration accuracy of FY-3A MERSI thermal infrared channel].
Xu, Na; Hu, Xiu-qing; Chen, Lin; Zhang, Yong; Hu, Ju-yang; Sun, Ling
2014-12-01
Accurate satellite radiance measurements are significant for data assimilation and quantitative retrieval applications. In the present paper, the radiometric calibration accuracy of the FengYun-3A (FY-3A) Medium Resolution Spectral Imager (MERSI) thermal infrared (TIR) channel was evaluated based on the simultaneous nadir observation (SNO) intercalibration method. Hyperspectral, high-quality measurements from METOP-A/IASI were used as the reference. The assessment uncertainty of the intercalibration method was also investigated by examining the relation between BT bias and four main collocation factors, i.e., observation time difference, view geometric differences related to zenith and azimuth angles, and scene spatial homogeneity. It was found that the BT bias is evenly distributed across the collocation variables, with no significant linear relationship in the MERSI IR channel. Among the four collocation factors, scene spatial homogeneity may be the most important, with an uncertainty of less than 2% of the BT bias. Statistical analysis of monitoring biases over one and a half years indicates that the brightness temperature measured by MERSI is much warmer than that of IASI. The annual mean bias (MERSI-IASI) in 2012 is (3.18±0.34) K. Monthly averaged BT biases show a slight seasonal variation, with a fluctuation range of less than 0.8 K. To further verify the reliability, our evaluation result was also compared with the synchronous experiment results at the Dunhuang and Qinghai Lake sites, which showed excellent agreement. Preliminary analysis indicates two reasons for the warm bias. One is the overestimation of blackbody emissivity; the other is probably an incorrect spectral response function that has shifted toward the window region.
Non-unitary probabilistic quantum computing
NASA Technical Reports Server (NTRS)
Gingrich, Robert M.; Williams, Colin P.
2004-01-01
We present a method for designing quantum circuits that perform non-unitary quantum computations on n-qubit states probabilistically, and give analytic expressions for the success probability and fidelity.
Reports 10, The Yugoslav Serbo-Croatian-English Contrastive Project.
ERIC Educational Resources Information Center
Filipovic, Rudolf
The tenth volume in this series contains five articles dealing with various aspects of Serbo-Croatian-English contrastive analysis. They are: "The Infinitive as Subject in English and Serbo-Croatian," by Ljiljana Bibovic; "The Contrastive Analysis of Collocations: Collocational Ranges of "Make" and "Take" with Nouns and Their Serbo-Croatian…
No Silver Bullet: L2 Collocation Instruction in an Advanced Spanish Classroom
ERIC Educational Resources Information Center
Jensen, Eric Carl
2017-01-01
Many contemporary second language (L2) instructional materials feature collocation exercises; however, few studies have verified their effectiveness (Boers, Demecheleer, Coxhead, & Webb, 2014) or whether these exercises can be utilized for target languages beyond English (Higueras García, 2017). This study addresses these issues by…
Assessing Team Learning in Technology-Mediated Collaboration: An Experimental Study
ERIC Educational Resources Information Center
Andres, Hayward P.; Akan, Obasi H.
2010-01-01
This study examined the effects of collaboration mode (collocated versus non-collocated videoconferencing-mediated) on team learning and team interaction quality in a team-based problem solving context. Situated learning theory and the theory of affordances are used to provide a framework that describes how technology-mediated collaboration…
Collocation in Regional Development--The Peel Education and TAFE Response.
ERIC Educational Resources Information Center
Goff, Malcolm H.; Nevard, Jennifer
The collocation of services in regional Western Australia (WA) is an important strand of WA's regional development policy. The initiative is intended to foster working relationships among stakeholder groups with a view toward ensuring that regional WA communities have access to quality services. Clustering compatible services in smaller…
Interlanguage Development and Collocational Clash
ERIC Educational Resources Information Center
Shahheidaripour, Gholamabbass
2000-01-01
Background: Persian English learners committed mistakes and errors which were due to insufficient knowledge of different senses of the words and collocational structures they formed. Purpose: The study reported here was conducted for a thesis submitted in partial fulfillment of the requirements for The Master of Arts degree, School of Graduate…
47 CFR 51.323 - Standards for physical collocation and virtual collocation.
Code of Federal Regulations, 2012 CFR
2012-10-01
... unbundled network element if and only if the primary purpose and function of the equipment, as the... nondiscriminatory access to that unbundled network element, including any of its features, functions, or... must be a logical nexus between the additional functions the equipment would perform and the...
Testing ESL Learners' Knowledge of Collocations.
ERIC Educational Resources Information Center
Bonk, William J.
This study reports on the development, administration, and analysis of a test of collocational knowledge for English-as-a-Second-Language (ESL) learners of a wide range of proficiency levels. Through native speaker item validation and pilot testing, three subtests were developed and administered to 98 ESL learners of low-intermediate to advanced…
Probabilistic Methods for Structural Reliability and Risk
NASA Technical Reports Server (NTRS)
Chamis, Christos C.
2010-01-01
A probabilistic method is used to evaluate the structural reliability and risk of select metallic and composite structures. The method is multiscale and multifunctional, and it is based at the most elemental level. A multifactor interaction model is used to describe the material properties, which are subsequently evaluated probabilistically. The metallic structure is a two-rotor aircraft engine, while the composite structures consist of laminated plies (multiscale) and the properties of each ply are the multifunctional representation. The structural component is modeled by finite elements. The solution for structural responses is obtained by an updated simulation scheme. The results show that the risk for the two-rotor engine is about 0.0001, and for the composite built-up structure it is also about 0.0001.
Probabilistic Methods for Structural Reliability and Risk
NASA Technical Reports Server (NTRS)
Chamis, Christos C.
2008-01-01
A probabilistic method is used to evaluate the structural reliability and risk of select metallic and composite structures. The method is multiscale and multifunctional, and it is based at the most elemental level. A multi-factor interaction model is used to describe the material properties, which are subsequently evaluated probabilistically. The metallic structure is a two-rotor aircraft engine, while the composite structures consist of laminated plies (multiscale) and the properties of each ply are the multifunctional representation. The structural component is modeled by finite elements. The solution for structural responses is obtained by an updated simulation scheme. The results show that the risk for the two-rotor engine is about 0.0001, and for the composite built-up structure it is also about 0.0001.
Singularity Preserving Numerical Methods for Boundary Integral Equations
NASA Technical Reports Server (NTRS)
Kaneko, Hideaki (Principal Investigator)
1996-01-01
In the past twelve months (May 8, 1995 - May 8, 1996), under the cooperative agreement with the Division of Multidisciplinary Optimization at NASA Langley, we have accomplished the following five projects: a note on the finite element method with singular basis functions; numerical quadrature for weakly singular integrals; superconvergence of the degenerate kernel method; superconvergence of the iterated collocation method for Hammerstein equations; and a singularity preserving Galerkin method for Hammerstein equations with logarithmic kernel. This final report consists of five papers describing these projects. Each project is preceded by a brief abstract.
Probabilistic brain tissue segmentation in neonatal magnetic resonance imaging.
Anbeek, Petronella; Vincken, Koen L; Groenendaal, Floris; Koeman, Annemieke; van Osch, Matthias J P; van der Grond, Jeroen
2008-02-01
A fully automated method has been developed for segmentation of four different structures in the neonatal brain: white matter (WM), central gray matter (CEGM), cortical gray matter (COGM), and cerebrospinal fluid (CSF). The segmentation algorithm is based on information from T2-weighted (T2-w) and inversion recovery (IR) scans. The method uses a K nearest neighbor (KNN) classification technique with features derived from spatial information and voxel intensities. Probabilistic segmentations of each tissue type were generated. By applying thresholds to these probability maps, binary segmentations were obtained. These final segmentations were evaluated by comparison with a gold standard. The sensitivity, specificity, and Dice similarity index (SI) were calculated for quantitative validation of the results. High sensitivity and specificity with respect to the gold standard were reached: sensitivity >0.82 and specificity >0.9 for all tissue types. Tissue volumes were calculated from the binary and probabilistic segmentations. The probabilistic segmentation volumes of all tissue types accurately estimated the gold standard volumes. The KNN approach offers valuable ways for neonatal brain segmentation. The probabilistic outcomes provide a useful tool for accurate volume measurements. The described method is based on routine diagnostic magnetic resonance imaging (MRI) and is suitable for large population studies.
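A minimal sketch (not the authors' pipeline) of the KNN probabilistic segmentation idea: features combine voxel intensities with a spatial coordinate, predict_proba gives per-tissue probability maps, and thresholding yields binary segmentations. The data, class labels, and label-to-tissue mapping below are synthetic stand-ins for the T2-w/IR scans.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
n = 2000
X_train = np.column_stack([
    rng.normal(size=n),           # T2-weighted intensity (synthetic)
    rng.normal(size=n),           # inversion-recovery intensity (synthetic)
    rng.uniform(size=n),          # normalized spatial coordinate
])
# Four synthetic tissue classes keyed to T2 intensity; 0=WM, 1=CEGM, 2=COGM, 3=CSF (assumed)
y_train = np.digitize(X_train[:, 0], [-1.0, 0.0, 1.0])

knn = KNeighborsClassifier(n_neighbors=15).fit(X_train, y_train)
prob_maps = knn.predict_proba(X_train)        # probabilistic segmentation per tissue
binary_wm = prob_maps[:, 0] > 0.5             # threshold -> binary WM segmentation
wm_volume = binary_wm.sum()                   # voxel-count volume estimate
prob_wm_volume = prob_maps[:, 0].sum()        # probabilistic (soft) volume estimate
print(wm_volume, round(prob_wm_volume, 1))
```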
NASA Astrophysics Data System (ADS)
Donovan, Amy; Oppenheimer, Clive; Bravo, Michael
2012-12-01
This paper constitutes a philosophical and social scientific study of expert elicitation in the assessment and management of volcanic risk on Montserrat during the 1995-present volcanic activity. It outlines the broader context of subjective probabilistic methods and then uses a mixed-method approach to analyse the use of these methods in volcanic crises. Data from a global survey of volcanologists regarding the use of statistical methods in hazard assessment are presented. Detailed qualitative data from Montserrat are then discussed, particularly concerning the expert elicitation procedure that was pioneered during the eruptions. These data are analysed and conclusions about the use of these methods in volcanology are drawn. The paper finds that while many volcanologists are open to the use of these methods, there are still some concerns, which are similar to the concerns encountered in the literature on probabilistic and deterministic approaches to seismic hazard analysis.
Probabilistic Structural Analysis of the SRB Aft Skirt External Fitting Modification
NASA Technical Reports Server (NTRS)
Townsend, John S.; Peck, J.; Ayala, S.
1999-01-01
NASA has funded several major programs (the PSAM Project is an example) to develop Probabilistic Structural Analysis Methods and tools for engineers to apply in the design and assessment of aerospace hardware. A probabilistic finite element design tool, known as NESSUS, is used to determine the reliability of the Space Shuttle Solid Rocket Booster (SRB) aft skirt critical weld. An external bracket modification to the aft skirt provides a comparison basis for examining the details of the probabilistic analysis and its contributions to the design process.
Probabilistic finite elements for fracture and fatigue analysis
NASA Technical Reports Server (NTRS)
Liu, W. K.; Belytschko, T.; Lawrence, M.; Besterfield, G. H.
1989-01-01
The fusion of the probabilistic finite element method (PFEM) and reliability analysis for probabilistic fracture mechanics (PFM) is presented. A comprehensive method for determining the probability of fatigue failure for curved crack growth was developed. The criterion for failure, or performance function, is stated as follows: the fatigue life of a component must exceed the service life of the component; otherwise failure will occur. An enriched element that has the near-crack-tip singular strain field embedded in the element is used to formulate the equilibrium equation and solve for the stress intensity factors at the crack tip. Performance and accuracy of the method are demonstrated on a classical mode 1 fatigue problem.
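The stated failure criterion (fatigue life must exceed service life) reduces to a performance function g = N_f - N_s, with failure when g < 0. A minimal Monte Carlo sketch follows, with an assumed lognormal fatigue-life scatter standing in for the paper's PFEM-computed crack-growth lives.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
# Lognormal fatigue life (cycles) representing material/crack-growth scatter (assumed)
N_f = rng.lognormal(mean=np.log(1.0e6), sigma=0.4, size=n)
N_s = 5.0e5                       # deterministic service life in cycles (assumed)
g = N_f - N_s                     # performance function: g < 0 => failure
p_f = np.mean(g < 0.0)
print(f"estimated probability of fatigue failure: {p_f:.4f}")
```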
ERIC Educational Resources Information Center
Gyllstad, Henrik; Wolter, Brent
2016-01-01
The present study investigates whether two types of word combinations (free combinations and collocations) differ in terms of processing by testing Howarth's Continuum Model based on word combination typologies from a phraseological tradition. A visual semantic judgment task was administered to advanced Swedish learners of English (n = 27) and…
47 CFR 51.323 - Standards for physical collocation and virtual collocation.
Code of Federal Regulations, 2014 CFR
2014-10-01
... unbundled network elements. (1) Equipment is necessary for interconnection if an inability to deploy that... obtains within its own network or the incumbent provides to any affiliate, subsidiary, or other party. (2) Equipment is necessary for access to an unbundled network element if an inability to deploy that equipment...
47 CFR 51.323 - Standards for physical collocation and virtual collocation.
Code of Federal Regulations, 2013 CFR
2013-10-01
... unbundled network elements. (1) Equipment is necessary for interconnection if an inability to deploy that... obtains within its own network or the incumbent provides to any affiliate, subsidiary, or other party. (2) Equipment is necessary for access to an unbundled network element if an inability to deploy that equipment...
ERIC Educational Resources Information Center
Snoder, Per
2017-01-01
This article reports on a classroom-based experiment that tested the effects of three vocabulary teaching constructs (involvement load, spacing, and intentionality) on the learning of English verb-noun collocations--for example, "shelve a plan." Laufer and Hulstijn's (2001) "involvement load" predicts that the higher the…
Strategies in Translating Collocations in Religious Texts from Arabic into English
ERIC Educational Resources Information Center
Dweik, Bader S.; Shakra, Mariam M. Abu
2010-01-01
The present study investigated the strategies adopted by students in translating specific lexical and semantic collocations in three religious texts namely, the Holy Quran, the Hadith and the Bible. For this purpose, the researchers selected a purposive sample of 35 MA translation students enrolled in three different public and private Jordanian…
Federal Register 2010, 2011, 2012, 2013, 2014
2012-10-02
... quarter substitution test. "Collocated" indicates that the collocated data was substituted for missing... 24-hour standard design value is greater than the level of the standard. EPA addresses missing data... substituted for the missing data. In the maximum quarter test, maximum recorded values are substituted for the...
Shape Control of Plates with Piezo Actuators and Collocated Position/Rate Sensors
NASA Technical Reports Server (NTRS)
Balakrishnan, A. V.
1994-01-01
This paper treats the control problem of shaping the surface deformation of a circular plate using embedded piezo-electric actuators and collocated rate sensors. An explicit Linear Quadratic Gaussian (LQG) optimizer stability augmentation compensator is derived as well as the optimal feed-forward control. Corresponding performance evaluation formulas are also derived.
Shape Control of Plates with Piezo Actuators and Collocated Position/Rate Sensors
NASA Technical Reports Server (NTRS)
Balakrishnan, A. V.
1994-01-01
This paper treats the control problem of shaping the surface deformation of a circular plate using embedded piezo-electric actuators and collocated rate sensors. An explicit Linear Quadratic Gaussian (LQG) optimizer stability augmentation compensator is derived as well as the optimal feed-forward control. Corresponding performance evaluation formulas are also derived.
USDA-ARS?s Scientific Manuscript database
If not properly accounted for, auto-correlated errors in observations can lead to inaccurate results in soil moisture data analysis and reanalysis. Here, we propose a more generalized form of the triple collocation algorithm (GTC) capable of decomposing the total error variance of remotely-sensed surf...
Collocational Competence of Arabic Speaking Learners of English: A Study in Lexical Semantics.
ERIC Educational Resources Information Center
Zughoul, Muhammad Raji; Abdul-Fattah, Hussein S.
This study examined learners' productive competence in collocations and idioms by means of their performance on two interdependent tasks. Participants were two groups of English as a Foreign Language undergraduate and graduate students from the English department at Jordan's Yarmouk University. The two tasks included the following: a multiple…
Processing and Learning of Enhanced English Collocations: An Eye Movement Study
ERIC Educational Resources Information Center
Choi, Sungmook
2017-01-01
Research to date suggests that textual enhancement may positively affect the learning of multiword combinations known as collocations, but may impair recall of unenhanced text. However, the attentional mechanisms underlying such effects remain unclear. In this study, 38 undergraduate students were divided into two groups: one read a text…
Probabilistic methods for rotordynamics analysis
NASA Technical Reports Server (NTRS)
Wu, Y.-T.; Torng, T. Y.; Millwater, H. R.; Fossum, A. F.; Rheinfurth, M. H.
1991-01-01
This paper summarizes the development of the methods and a computer program to compute the probability of instability of dynamic systems that can be represented by a system of second-order ordinary linear differential equations. Two instability criteria based upon the eigenvalues or Routh-Hurwitz test functions are investigated. Computational methods based on a fast probability integration concept and an efficient adaptive importance sampling method are proposed to perform efficient probabilistic analysis. A numerical example is provided to demonstrate the methods.
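A plain Monte Carlo sketch of the instability criterion above: a linear second-order system M q'' + C q' + K q = 0 is unstable if any eigenvalue of its first-order state matrix has a positive real part. The damping/stiffness scatter below is assumed for illustration; the paper uses fast probability integration and adaptive importance sampling rather than this brute-force sampling.

```python
import numpy as np

rng = np.random.default_rng(2)
M = np.eye(2)                                 # mass matrix (deterministic)
trials, unstable = 20_000, 0
for _ in range(trials):
    C = np.diag(rng.normal([0.10, 0.05], [0.05, 0.05]))  # uncertain damping (assumed)
    K = np.array([[2.0, -1.0], [-1.0, 2.0]]) * rng.normal(1.0, 0.1)  # uncertain stiffness
    A = np.block([[np.zeros((2, 2)), np.eye(2)],
                  [-np.linalg.solve(M, K), -np.linalg.solve(M, C)]])
    if np.linalg.eigvals(A).real.max() > 0.0:  # eigenvalue instability criterion
        unstable += 1
print("P(instability) ~", unstable / trials)
```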
Process for computing geometric perturbations for probabilistic analysis
Fitch, Simeon H. K. [Charlottesville, VA; Riha, David S [San Antonio, TX; Thacker, Ben H [San Antonio, TX
2012-04-10
A method for computing geometric perturbations for probabilistic analysis. The probabilistic analysis is based on finite element modeling, in which uncertainties in the modeled system are represented by changes in the nominal geometry of the model, referred to as "perturbations". These changes are accomplished using displacement vectors, which are computed for each node of a region of interest and are based on mean-value coordinate calculations.
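A minimal sketch of the idea as described above: represent geometric uncertainty by per-node displacement vectors applied to the nominal finite element geometry. The perturbation field here is a simple random normal offset along assumed outward normals of a ring of nodes; the actual method derives the vectors from mean-value coordinate calculations.

```python
import numpy as np

rng = np.random.default_rng(3)
# Nominal geometry: 40 nodes on a unit circle (illustrative region of interest)
nodes = np.array([[np.cos(t), np.sin(t)]
                  for t in np.linspace(0, 2 * np.pi, 40, endpoint=False)])
normals = nodes / np.linalg.norm(nodes, axis=1, keepdims=True)  # outward normals
sigma = 0.02                                   # geometric scatter (assumed units)
for realization in range(3):
    d = rng.normal(0.0, sigma, size=(len(nodes), 1)) * normals  # nodal displacement vectors
    perturbed = nodes + d                      # one geometry realization for FE analysis
    print(realization, float(np.abs(np.linalg.norm(perturbed, axis=1) - 1.0).max()))
```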
Probabilistic Exposure Analysis for Chemical Risk Characterization
Bogen, Kenneth T.; Cullen, Alison C.; Frey, H. Christopher; Price, Paul S.
2009-01-01
This paper summarizes the state of the science of probabilistic exposure assessment (PEA) as applied to chemical risk characterization. Current probabilistic risk analysis methods applied to PEA are reviewed. PEA within the context of risk-based decision making is discussed, including probabilistic treatment of related uncertainty, interindividual heterogeneity, and other sources of variability. Key examples of recent experience gained in assessing human exposures to chemicals in the environment, and other applications to chemical risk characterization and assessment, are presented. It is concluded that, although improvements continue to be made, existing methods suffice for effective application of PEA to support quantitative analyses of the risk of chemically induced toxicity that play an increasing role in key decision-making objectives involving health protection, triage, civil justice, and criminal justice. Different types of information required to apply PEA to these different decision contexts are identified, and specific PEA methods are highlighted that are best suited to exposure assessment in these separate contexts. PMID:19223660
Probabilistic simulation of uncertainties in thermal structures
NASA Technical Reports Server (NTRS)
Chamis, Christos C.; Shiao, Michael
1990-01-01
Development of probabilistic structural analysis methods for hot structures is a major activity at Lewis Research Center. It consists of five program elements: (1) probabilistic loads; (2) probabilistic finite element analysis; (3) probabilistic material behavior; (4) assessment of reliability and risk; and (5) probabilistic structural performance evaluation. Recent progress includes: (1) quantification of the effects of uncertainties for several variables on high pressure fuel turbopump (HPFT) blade temperature, pressure, and torque of the Space Shuttle Main Engine (SSME); (2) the evaluation of the cumulative distribution function for various structural response variables based on assumed uncertainties in primitive structural variables; (3) evaluation of the failure probability; (4) reliability and risk-cost assessment, and (5) an outline of an emerging approach for eventual hot structures certification. Collectively, the results demonstrate that the structural durability/reliability of hot structural components can be effectively evaluated in a formal probabilistic framework. In addition, the approach can be readily extended to computationally simulate certification of hot structures for aerospace environments.
Fracture mechanics analysis of cracked structures using weight function and neural network method
NASA Astrophysics Data System (ADS)
Chen, J. G.; Zang, F. G.; Yang, Y.; Shi, K. K.; Fu, X. L.
2018-06-01
Stress intensity factors (SIFs) due to thermal-mechanical loads have been established using the weight function method. Two reference stress states were used to determine the coefficients in the weight function. Results were evaluated against data from the literature and show good agreement. The SIFs for cracks subjected to arbitrary loads can therefore be determined quickly with the obtained weight function, and the presented method can be used for probabilistic fracture mechanics analysis. A probabilistic methodology combining Monte Carlo simulation with a neural network (MCNN) has been developed. The results indicate that an accurate probabilistic characteristic of K_I can be obtained using the developed method. The probability of failure increases with increasing load, and the relationship between them is nonlinear.
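A hedged sketch of the weight function method named above: the mode-I SIF follows from K_I = ∫ h(x, a) σ(x) dx, so once h is known, arbitrary stress profiles are handled by quadrature alone. Here h is the classical Griffith (center) crack weight function, not the two-reference-state fit derived in the paper, and the thermal-mechanical stress profile is assumed.

```python
import numpy as np
from scipy.integrate import quad

a = 0.01                                   # crack half-length in meters (assumed)
sigma = lambda x: 100.0 - 40.0 * (x / a)   # assumed stress profile, MPa

# For the center crack, h(x, a) = 2a / (sqrt(pi*a) * sqrt(a^2 - x^2)).
# Substituting x = a*sin(t) removes the integrable 1/sqrt(a - x) singularity:
#   K_I = (2a / sqrt(pi*a)) * integral_0^{pi/2} sigma(a*sin(t)) dt
K_I, _ = quad(lambda t: sigma(a * np.sin(t)), 0.0, np.pi / 2)
K_I *= 2.0 * a / np.sqrt(np.pi * a)
print(f"K_I = {K_I:.2f} MPa*sqrt(m)")
# Sanity check: constant sigma = 100 MPa gives K_I = 100*sqrt(pi*a) ~ 17.72
```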
Best Merge Region Growing with Integrated Probabilistic Classification for Hyperspectral Imagery
NASA Technical Reports Server (NTRS)
Tarabalka, Yuliya; Tilton, James C.
2011-01-01
A new method for spectral-spatial classification of hyperspectral images is proposed. The method is based on the integration of probabilistic classification within the hierarchical best merge region growing algorithm. For this purpose, a preliminary probabilistic support vector machines classification is performed. Then, a hierarchical step-wise optimization algorithm is applied, iteratively merging regions with the smallest Dissimilarity Criterion (DC). The main novelty of this method consists in defining a DC between regions as a function of region statistical and geometrical features along with classification probabilities. Experimental results are presented on a 200-band AVIRIS image of the Northwestern Indiana vegetation area and compared with those obtained by recently proposed spectral-spatial classification techniques. The proposed method improves classification accuracies when compared to other classification approaches.
Probabilistic Methods for Structural Design and Reliability
NASA Technical Reports Server (NTRS)
Chamis, Christos C.; Whitlow, Woodrow, Jr. (Technical Monitor)
2002-01-01
This report describes a formal method to quantify structural damage tolerance and reliability in the presence of a multitude of uncertainties in turbine engine components. The method is based at the material behavior level, where primitive variables with their respective scatter ranges are used to describe behavior. Computational simulation is then used to propagate the uncertainties to the structural scale where damage tolerance and reliability are usually specified. Several sample cases are described to illustrate the effectiveness, versatility, and maturity of the method. Typical results from this method demonstrate that it is mature and that it can be used to probabilistically evaluate turbine engine structural components. It may be inferred from the results that the method is suitable for probabilistically predicting the remaining life in aging or deteriorating structures, for making strategic projections and plans, and for achieving better, cheaper, faster products that give competitive advantages in world markets.
Guided SAR image despeckling with probabilistic non local weights
NASA Astrophysics Data System (ADS)
Gokul, Jithin; Nair, Madhu S.; Rajan, Jeny
2017-12-01
SAR images are generally corrupted by granular disturbances called speckle, which make visual analysis and detail extraction difficult. Non-local despeckling techniques with probabilistic similarity have been a recent trend in SAR despeckling. To achieve effective speckle suppression without compromising detail preservation, we propose an improvement to the existing Generalized Guided Filter with Bayesian Non-Local Means (GGF-BNLM) method. The proposed method (Guided SAR Image Despeckling with Probabilistic Non Local Weights) replaces parametric constants based on heuristics in the GGF-BNLM method with dynamically derived values based on image statistics for weight computation. The proposed changes make the GGF-BNLM method adaptive, and as a result, significant improvement is achieved in terms of performance. Experimental analysis on SAR images shows excellent speckle reduction without compromising feature preservation when compared to the GGF-BNLM method. Results are also compared with other state-of-the-art and classic SAR despeckling techniques to demonstrate the effectiveness of the proposed method.
Bayesian probabilistic population projections for all countries.
Raftery, Adrian E; Li, Nan; Ševčíková, Hana; Gerland, Patrick; Heilig, Gerhard K
2012-08-28
Projections of countries' future populations, broken down by age and sex, are widely used for planning and research. They are mostly done deterministically, but there is a widespread need for probabilistic projections. We propose a Bayesian method for probabilistic population projections for all countries. The total fertility rate and female and male life expectancies at birth are projected probabilistically using Bayesian hierarchical models estimated via Markov chain Monte Carlo using United Nations population data for all countries. These are then converted to age-specific rates and combined with a cohort component projection model. This yields probabilistic projections of any population quantity of interest. The method is illustrated for five countries of different demographic stages, continents and sizes. The method is validated by an out-of-sample experiment in which data from 1950-1990 are used for estimation and applied to predict 1990-2010. The method appears reasonably accurate and well calibrated for this period. The results suggest that the current United Nations high and low variants greatly underestimate uncertainty about the number of oldest old from about 2050 and that they underestimate uncertainty for high fertility countries and overstate uncertainty for countries that have completed the demographic transition and whose fertility has started to recover towards replacement level, mostly in Europe. The results also indicate that the potential support ratio (persons aged 20-64 per person aged 65+) will almost certainly decline dramatically in most countries over the coming decades.
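A toy sketch of the probabilistic-projection idea (not the paper's Bayesian hierarchical model): sample many future trajectories of the total fertility rate as a damped random walk toward replacement level, then report quantiles instead of single high/low variants. All parameter values below are assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
n_traj, horizon = 10_000, 18         # 18 five-year steps ~ 90 years
tfr0, target = 4.5, 2.1              # current TFR and replacement level (assumed)
tfr = np.full(n_traj, tfr0)
paths = np.empty((horizon, n_traj))
for t in range(horizon):
    drift = -0.15 * (tfr - target)                        # decline toward replacement
    tfr = np.maximum(tfr + drift + rng.normal(0, 0.12, n_traj), 0.5)
    paths[t] = tfr
lo, med, hi = np.percentile(paths[-1], [2.5, 50, 97.5])
print(f"TFR in 90 years: median {med:.2f}, 95% interval [{lo:.2f}, {hi:.2f}]")
```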
a Probabilistic Embedding Clustering Method for Urban Structure Detection
NASA Astrophysics Data System (ADS)
Lin, X.; Li, H.; Zhang, Y.; Gao, L.; Zhao, L.; Deng, M.
2017-09-01
Urban structure detection is a basic task in urban geography. Clustering is a core technology to detect the patterns of urban spatial structure, urban functional regions, and so on. In the big data era, diverse urban sensing datasets recording information such as human behaviour and human social activity suffer from complexity in high dimension and high noise. Unfortunately, the state-of-the-art clustering methods do not handle the high dimension and high noise issues concurrently. In this paper, a probabilistic embedding clustering method is proposed. Firstly, we propose a Probabilistic Embedding Model (PEM) to find latent features from high dimensional urban sensing data by "learning" via a probabilistic model. With latent features, we can catch essential features hidden in high dimensional data, known as patterns; with the probabilistic model, we can also reduce the uncertainty caused by high noise. Secondly, by tuning the parameters, our model can discover two kinds of urban structure, homophily and structural equivalence, i.e., communities with intensive interaction or communities playing the same roles in the urban structure. We evaluated the performance of our model by conducting experiments on real-world data, and experiments with real data in Shanghai (China) confirmed that our method discovers both kinds of structure.
COMMUNICATING PROBABILISTIC RISK OUTCOMES TO RISK MANAGERS
Increasingly, risk assessors are moving away from simple deterministic assessments to probabilistic approaches that explicitly incorporate ecological variability, measurement imprecision, and lack of knowledge (collectively termed "uncertainty"). While the new methods provide an...
A comprehensive probabilistic analysis model of oil pipelines network based on Bayesian network
NASA Astrophysics Data System (ADS)
Zhang, C.; Qin, T. X.; Jiang, B.; Huang, C.
2018-02-01
Oil pipeline networks are among the most important energy transportation facilities, but accidents in these networks may result in serious disasters. Analysis models for such accidents have been established mainly based on three methods: event trees, accident simulation, and Bayesian networks. Among these, the Bayesian network is suitable for probabilistic analysis, but not all the important influencing factors have been considered, and the deployment rule of the factors has not been established. This paper proposes a probabilistic analysis model of oil pipeline networks based on a Bayesian network. Most of the important influencing factors, including key environmental conditions and emergency response, are considered in this model. Moreover, the paper also introduces a deployment rule for these factors. The model can be used in probabilistic analysis and sensitivity analysis of oil pipeline network accidents.
Development of probabilistic design method for annular fuels
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ozawa, Takayuki
2007-07-01
The increase of linear power and burn-up during reactor operation is considered as one measure to ensure the utility of fast reactors in the future; for this, the application of annular oxide fuels is under consideration. The annular fuel design code CEPTAR was developed in the Japan Atomic Energy Agency (JAEA) and verified using many irradiation experiences with oxide fuels. In addition, the probabilistic fuel design code BORNFREE was also developed to provide a safe and reasonable fuel design and to evaluate the design margins quantitatively. This study aimed at the development of a probabilistic design method for annular oxide fuels; this was implemented in the developed BORNFREE-CEPTAR code, and the code was used to make a probabilistic evaluation with regard to the permissible linear power. (author)
Rivas, Elena; Lang, Raymond; Eddy, Sean R
2012-02-01
The standard approach for single-sequence RNA secondary structure prediction uses a nearest-neighbor thermodynamic model with several thousand experimentally determined energy parameters. An attractive alternative is to use statistical approaches with parameters estimated from growing databases of structural RNAs. Good results have been reported for discriminative statistical methods using complex nearest-neighbor models, including CONTRAfold, Simfold, and ContextFold. Little work has been reported on generative probabilistic models (stochastic context-free grammars [SCFGs]) of comparable complexity, although probabilistic models are generally easier to train and to use. To explore a range of probabilistic models of increasing complexity, and to directly compare probabilistic, thermodynamic, and discriminative approaches, we created TORNADO, a computational tool that can parse a wide spectrum of RNA grammar architectures (including the standard nearest-neighbor model and more) using a generalized super-grammar that can be parameterized with probabilities, energies, or arbitrary scores. By using TORNADO, we find that probabilistic nearest-neighbor models perform comparably to (but not significantly better than) discriminative methods. We find that complex statistical models are prone to overfitting RNA structure and that evaluations should use structurally nonhomologous training and test data sets. Overfitting has affected at least one published method (ContextFold). The most important barrier to improving statistical approaches for RNA secondary structure prediction is the lack of diversity of well-curated single-sequence RNA secondary structures in current RNA databases.
Rivas, Elena; Lang, Raymond; Eddy, Sean R.
2012-01-01
The standard approach for single-sequence RNA secondary structure prediction uses a nearest-neighbor thermodynamic model with several thousand experimentally determined energy parameters. An attractive alternative is to use statistical approaches with parameters estimated from growing databases of structural RNAs. Good results have been reported for discriminative statistical methods using complex nearest-neighbor models, including CONTRAfold, Simfold, and ContextFold. Little work has been reported on generative probabilistic models (stochastic context-free grammars [SCFGs]) of comparable complexity, although probabilistic models are generally easier to train and to use. To explore a range of probabilistic models of increasing complexity, and to directly compare probabilistic, thermodynamic, and discriminative approaches, we created TORNADO, a computational tool that can parse a wide spectrum of RNA grammar architectures (including the standard nearest-neighbor model and more) using a generalized super-grammar that can be parameterized with probabilities, energies, or arbitrary scores. By using TORNADO, we find that probabilistic nearest-neighbor models perform comparably to (but not significantly better than) discriminative methods. We find that complex statistical models are prone to overfitting RNA structure and that evaluations should use structurally nonhomologous training and test data sets. Overfitting has affected at least one published method (ContextFold). The most important barrier to improving statistical approaches for RNA secondary structure prediction is the lack of diversity of well-curated single-sequence RNA secondary structures in current RNA databases. PMID:22194308
Composite load spectra for select space propulsion structural components
NASA Technical Reports Server (NTRS)
Newell, J. F.; Ho, H. W.; Kurth, R. E.
1991-01-01
This report describes the work performed to develop composite load spectra (CLS) for the Space Shuttle Main Engine (SSME) using probabilistic methods. Three methods were implemented in the engine system influence model; RASCAL was chosen as the principal method, as most component load models were implemented with it. Validation of RASCAL was performed. High accuracy, comparable to the Monte Carlo method, can be obtained if a large enough bin size is used. Generic probabilistic models were developed and implemented for load calculations using the probabilistic methods discussed above. Each engine mission, either a real flight or a test, has three mission phases: the engine start transient phase, the steady state phase, and the engine cutoff transient phase. Power level and engine operating inlet conditions change during a mission. The load calculation module provides the steady-state and quasi-steady state calculation procedures with a duty-cycle-data option. The quasi-steady state procedure is for engine transient phase calculations. In addition, a few generic probabilistic load models were also developed for specific conditions. These include the fixed transient spike model, the Poisson arrival transient spike model, and the rare event model. These generic probabilistic load models provide sufficient latitude for simulating loads with specific conditions. For SSME components, turbine blades, transfer ducts, the LOX post, and the high pressure oxidizer turbopump (HPOTP) discharge duct were selected for application of the CLS program. The loads include static pressure loads and dynamic pressure loads for all four components, centrifugal force for the turbine blade, temperatures for thermal loads for all four components, and structural vibration loads for the ducts and LOX posts.
Factors Impacting Recognition of False Collocations by Speakers of English as L1 and L2
ERIC Educational Resources Information Center
Makinina, Olga
2017-01-01
Currently there is a general uncertainty about what makes collocations (i.e., fixed word combinations with specific, not easily interpreted relations between their components) hard for ESL learners to master, and about how to improve collocation recognition and learning process. This study explored and designed a comparative classification of…
Frequent Collocates and Major Senses of Two Prepositions in ESL and ENL Corpora
ERIC Educational Resources Information Center
Nkemleke, Daniel
2009-01-01
This contribution assesses in quantitative terms frequent collocates and major senses of "between" and "through" in the corpus of Cameroonian English (CCE), the corpus of East-African (Kenya and Tanzania) English which is part of the International Corpus of English (ICE) project (ICE-EA), and the London Oslo/Bergen (LOB) corpus…
Geostationary Collocation: Case Studies for Optimal Maneuvers
2016-03-01
Naval Postgraduate School, Monterey, California. Master's thesis; approved for public release, distribution is unlimited. The geostationary belt is considered a natural resource, and as time goes by, the physical spaces for geostationary satellites will run out. The
The Effect of Critical Reading Strategies on EFL Learners' Recall and Retention of Collocations
ERIC Educational Resources Information Center
NematTabrizi, Amir Reza; Saber, Mehrnoush Akhavan
2016-01-01
The study was an attempt to measure the effect of critical reading strategies, namely; re-reading, questioning and annotating on recall and retention of collocations by intermediate Iranian EFL learners. To this end, Nelson proficiency test was administered to ninety (n = 90) Iranian EFL learners studying at Zaban Sara language institute in…
Collocational Differences between L1 and L2: Implications for EFL Learners and Teachers
ERIC Educational Resources Information Center
Sadeghi, Karim
2009-01-01
Collocations are one of the areas that produce problems for learners of English as a foreign language. Iranian learners of English are by no means an exception. Teaching experience at schools, private language centers, and universities in Iran suggests that a significant part of EFL learners' problems with producing the language, especially at…
Huang, Lixi
2008-11-01
A spectral method of Chebyshev collocation with domain decomposition is introduced for linear interaction between sound and structure in a duct lined with flexible walls backed by cavities with or without a porous material. The spectral convergence is validated by a one-dimensional problem with a closed-form analytical solution, and is then extended to the two-dimensional configuration and compared favorably against a previous method based on the Fourier-Galerkin procedure and a finite element modeling. The nonlocal, exact Dirichlet-to-Neumann boundary condition is embedded in the domain decomposition scheme without imposing extra computational burden. The scheme is applied to the problem of high-frequency sound absorption by duct lining, which is normally ineffective when the wavelength is comparable with or shorter than the duct height. When a tensioned membrane covers the lining, however, it scatters the incident plane wave into higher-order modes, which then penetrate the duct lining more easily and get dissipated. For the frequency range of f=0.3-3 studied here, f=0.5 being the first cut-on frequency of the central duct, the membrane cover is found to offer an additional 0.9 dB attenuation per unit axial distance equal to half of the duct height.
NASA Astrophysics Data System (ADS)
Zhao, J. M.; Tan, J. Y.; Liu, L. H.
2013-01-01
A new second-order form of the radiative transfer equation (named MSORTE) is proposed, which overcomes the singularity problem of a previously proposed second-order radiative transfer equation [J.E. Morel, B.T. Adams, T. Noh, J.M. McGhee, T.M. Evans, T.J. Urbatsch, Spatial discretizations for self-adjoint forms of the radiative transfer equations, J. Comput. Phys. 214 (1) (2006) 12-40 (where it was termed SAAI); J.M. Zhao, L.H. Liu, Second order radiative transfer equation and its properties of numerical solution using finite element method, Numer. Heat Transfer B 51 (2007) 391-409] in dealing with inhomogeneous media where some locations have a very small or zero extinction coefficient. The MSORTE contains a naturally introduced diffusion (second-order) term which provides better numerical properties than the classic first-order radiative transfer equation (RTE). The stability and convergence characteristics of the MSORTE discretized by a central difference scheme are analyzed theoretically, and the better numerical stability of the second-order form radiative transfer equations over the RTE, when discretized by central-difference-type methods, is proved. A collocation meshless method is developed based on the MSORTE to solve radiative transfer in inhomogeneous media. Several critical test cases are taken to verify the performance of the presented method. The collocation meshless method based on the MSORTE is demonstrated to be capable of stably and accurately solving radiative transfer in strongly inhomogeneous media, media with void regions, and even media with discontinuous extinction coefficients.
The dynamics and control of large flexible asymmetric spacecraft
NASA Astrophysics Data System (ADS)
Humphries, T. T.
1991-02-01
This thesis develops the equations of motion for a large flexible asymmetric Earth observation satellite and finds the characteristics of its motion under the influence of control forces. The mathematical model of the structure is produced using analytical methods. The equations of motion are formed using an expanded momentum technique which accounts for translational motion of the spacecraft hub and employs orthogonality relations between appendage and vehicle modes. The controllability and observability conditions of the full spacecraft motions using force and torque actuators are defined. A three axis reaction wheel control system is implemented for both slewing the spacecraft and controlling its resulting motions. From minor slew results it is shown that the lowest frequency elastic mode of the spacecraft is more important than higher frequency modes, when considering the effects of elastic motion on instrument pointing from the hub. Minor slews of the spacecraft configurations considered produce elastic deflections resulting in rotational attitude motions large enough to contravene pointing accuracy requirements of instruments aboard the spacecraft hub. Active vibration damping is required to reduce these hub motions to acceptable bounds in sufficiently small time. A comparison between hub mounted collocated and hub/appendage mounted non-collocated control systems verifies that provided the non-collocated system is stable, it can more effectively damp elastic modes whilst maintaining adequate damping of rigid modes. Analysis undertaken shows that the reaction wheel controller could be replaced by a thruster control system which decouples the modes of the spacecraft motion, enabling them to be individually damped.
NASA Astrophysics Data System (ADS)
Bastani, Ali Foroush; Dastgerdi, Maryam Vahid; Mighani, Abolfazl
2018-06-01
The main aim of this paper is the analytical and numerical study of a time-dependent second-order nonlinear partial differential equation (PDE) arising from the endogenous stochastic volatility model, introduced in [Bensoussan, A., Crouhy, M. and Galai, D., Stochastic equity volatility related to the leverage effect (I): equity volatility behavior. Applied Mathematical Finance, 1, 63-85, 1994]. As the first step, we derive a consistent set of initial and boundary conditions to complement the PDE, when the firm is financed by equity and debt. In the sequel, we propose a Newton-based iteration scheme for nonlinear parabolic PDEs which is an extension of a method for solving elliptic partial differential equations introduced in [Fasshauer, G. E., Newton iteration with multiquadrics for the solution of nonlinear PDEs. Computers and Mathematics with Applications, 43, 423-438, 2002]. The scheme is based on multilevel collocation using radial basis functions (RBFs) to solve the resulting locally linearized elliptic PDEs obtained at each level of the Newton iteration. We show the effectiveness of the resulting framework by solving a prototypical example from the field and compare the results with those obtained from three different techniques: (1) a finite difference discretization; (2) a naive RBF collocation and (3) a benchmark approximation, introduced for the first time in this paper. The numerical results confirm the robustness, higher convergence rate and good stability properties of the proposed scheme compared to other alternatives. We also comment on some possible research directions in this field.
Probabilistic models of cognition: conceptual foundations.
Chater, Nick; Tenenbaum, Joshua B; Yuille, Alan
2006-07-01
Remarkable progress in the mathematics and computer science of probability has led to a revolution in the scope of probabilistic models. In particular, 'sophisticated' probabilistic methods apply to structured relational systems such as graphs and grammars, of immediate relevance to the cognitive sciences. This Special Issue outlines progress in this rapidly developing field, which provides a potentially unifying perspective across a wide range of domains and levels of explanation. Here, we introduce the historical and conceptual foundations of the approach, explore how the approach relates to studies of explicit probabilistic reasoning, and give a brief overview of the field as it stands today.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-04-09
... Role of Risk Analysis in Decision-Making AGENCY: Environmental Protection Agency (EPA). ACTION: Notice... documents entitled, "Using Probabilistic Methods to Enhance the Role of Risk Analysis in Decision-Making... Probabilistic Methods to Enhance the Role of Risk Analysis in Decision-Making, with Case Study Examples" and...
Denis Valle; Benjamin Baiser; Christopher W. Woodall; Robin Chazdon; Jerome Chave
2014-01-01
We propose a novel multivariate method to analyse biodiversity data based on the Latent Dirichlet Allocation (LDA) model. LDA, a probabilistic model, reduces assemblages to sets of distinct component communities. It produces easily interpretable results, can represent abrupt and gradual changes in composition, accommodates missing data and allows for coherent estimates...
Probabilistic composite micromechanics
NASA Technical Reports Server (NTRS)
Stock, T. A.; Bellini, P. X.; Murthy, P. L. N.; Chamis, C. C.
1988-01-01
Probabilistic composite micromechanics methods are developed that simulate expected uncertainties in unidirectional fiber composite properties. These methods are in the form of computational procedures using Monte Carlo simulation. A graphite/epoxy unidirectional composite (ply) is studied to demonstrate fiber composite material properties at the micro level. Regression results are presented to show the relative correlation between predicted and response variables in the study.
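A minimal sketch of probabilistic micromechanics by Monte Carlo simulation: propagate assumed scatter in fiber/matrix moduli and fiber volume fraction through the rule of mixtures for the longitudinal ply modulus E11. The values are generic graphite/epoxy assumptions, not the paper's data, and the rule of mixtures stands in for the full micromechanics equations.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 100_000
E_f = rng.normal(230.0, 230.0 * 0.05, n)             # fiber modulus, GPa (assumed 5% cov)
E_m = rng.normal(3.5, 3.5 * 0.05, n)                 # matrix modulus, GPa (assumed)
v_f = np.clip(rng.normal(0.60, 0.03, n), 0.0, 1.0)   # fiber volume fraction (assumed)
E11 = v_f * E_f + (1.0 - v_f) * E_m                  # rule of mixtures (longitudinal)
print(f"E11 mean {E11.mean():.1f} GPa, cov {E11.std() / E11.mean():.3f}")
```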
Probabilistic BPRRC: Robust Change Detection against Illumination Changes and Background Movements
NASA Astrophysics Data System (ADS)
Yokoi, Kentaro
This paper presents Probabilistic Bi-polar Radial Reach Correlation (PrBPRRC), a change detection method that is robust against illumination changes and background movements. Most of the traditional change detection methods are robust against either illumination changes or background movements; BPRRC is one of the illumination-robust change detection methods. We introduce a probabilistic background texture model into BPRRC and add the robustness against background movements including foreground invasions such as moving cars, walking people, swaying trees, and falling snow. We show the superiority of PrBPRRC in the environment with illumination changes and background movements by using three public datasets and one private dataset: ATON Highway data, Karlsruhe traffic sequence data, PETS 2007 data, and Walking-in-a-room data.
Probabilistic learning of nonlinear dynamical systems using sequential Monte Carlo
NASA Astrophysics Data System (ADS)
Schön, Thomas B.; Svensson, Andreas; Murray, Lawrence; Lindsten, Fredrik
2018-05-01
Probabilistic modeling provides the capability to represent and manipulate uncertainty in data, models, predictions and decisions. We are concerned with the problem of learning probabilistic models of dynamical systems from measured data. Specifically, we consider learning of probabilistic nonlinear state-space models. There is no closed-form solution available for this problem, implying that we are forced to use approximations. In this tutorial we will provide a self-contained introduction to one of the state-of-the-art methods-the particle Metropolis-Hastings algorithm-which has proven to offer a practical approximation. This is a Monte Carlo based method, where the particle filter is used to guide a Markov chain Monte Carlo method through the parameter space. One of the key merits of the particle Metropolis-Hastings algorithm is that it is guaranteed to converge to the "true solution" under mild assumptions, despite being based on a particle filter with only a finite number of particles. We will also provide a motivating numerical example illustrating the method using a modeling language tailored for sequential Monte Carlo methods. The intention of modeling languages of this kind is to open up the power of sophisticated Monte Carlo methods-including particle Metropolis-Hastings-to a large group of users without requiring them to know all the underlying mathematical details.
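A compact sketch of particle Metropolis-Hastings for a scalar state-space model x_t = θ·x_{t-1} + v_t, y_t = x_t + e_t: the bootstrap particle filter supplies an unbiased likelihood estimate that drives a random-walk MH chain over θ. Model, sizes, and prior are illustrative assumptions, not the tutorial's examples.

```python
import numpy as np

rng = np.random.default_rng(6)

def simulate(theta, T=100):
    """Generate synthetic observations from the assumed state-space model."""
    x, ys = 0.0, []
    for _ in range(T):
        x = theta * x + rng.normal(0, 1.0)
        ys.append(x + rng.normal(0, 1.0))
    return np.array(ys)

def pf_loglik(theta, y, N=200):
    """Bootstrap particle filter estimate of log p(y | theta)."""
    x = rng.normal(0, 1.0, N)
    ll = 0.0
    for yt in y:
        x = theta * x + rng.normal(0, 1.0, N)               # propagate particles
        logw = -0.5 * (yt - x) ** 2 - 0.5 * np.log(2 * np.pi)
        m = logw.max()
        w = np.exp(logw - m)
        ll += m + np.log(w.mean())                          # likelihood increment
        x = x[rng.choice(N, N, p=w / w.sum())]              # multinomial resampling
    return ll

y = simulate(theta=0.7)
theta, ll = 0.5, pf_loglik(0.5, y)
chain = []
for _ in range(2000):
    prop = theta + rng.normal(0, 0.05)                      # random-walk proposal
    if abs(prop) < 1.0:                                     # uniform(-1, 1) prior
        ll_prop = pf_loglik(prop, y)
        if np.log(rng.uniform()) < ll_prop - ll:            # MH accept/reject
            theta, ll = prop, ll_prop
    chain.append(theta)
print("posterior mean of theta ~", np.mean(chain[500:]))
```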
Alternate Methods in Refining the SLS Nozzle Plug Loads
NASA Technical Reports Server (NTRS)
Burbank, Scott; Allen, Andrew
2013-01-01
Numerical analysis has shown that the SLS nozzle environmental barrier (nozzle plug) design is inadequate for the prelaunch condition, which consists of two dominant loads: 1) the main engines startup pressure and 2) an environmentally induced pressure. Efforts to reduce load conservatisms included a dynamic analysis which showed a 31% higher safety factor compared to the standard static analysis. The environmental load is typically approached with a deterministic method using the worst possible combinations of pressures and temperatures. An alternate probabilistic approach, utilizing the distributions of pressures and temperatures, resulted in a 54% reduction in the environmental pressure load. A Monte Carlo simulation of environmental load that used five years of historical pressure and temperature data supported the results of the probabilistic analysis, indicating the probabilistic load is reflective of a 3-sigma condition (1 in 370 probability). Utilizing the probabilistic load analysis eliminated excessive conservatisms and will prevent a future overdesign of the nozzle plug. Employing a similar probabilistic approach to other design and analysis activities can result in realistic yet adequately conservative solutions.
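A hedged sketch of the Monte Carlo load study described above: sample ambient pressure and temperature jointly from (synthetic stand-ins for) historical records, evaluate an assumed environmental pressure-load model, and read off the 3-sigma (1-in-370) quantile instead of stacking worst-case extremes. The load model and all distributions are illustrative, not the SLS values.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 500_000
pressure = rng.normal(101.3, 0.8, n)       # ambient pressure, kPa (synthetic history)
temperature = rng.normal(295.0, 8.0, n)    # ambient temperature, K (synthetic history)
# Assumed load model: trapped-gas pressure differential across the nozzle plug
load = pressure * (temperature / 295.0 - 1.0) + rng.normal(0.0, 0.3, n)
p3sigma = np.percentile(load, 100 * (1 - 1 / 370))  # 3-sigma (1-in-370) condition
print(f"3-sigma environmental load: {p3sigma:.2f} kPa")
```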
Barber, Jared; Tanase, Roxana; Yotov, Ivan
2016-06-01
Several Kalman filter algorithms are presented for data assimilation and parameter estimation for a nonlinear diffusion model of epithelial cell migration. These include the ensemble Kalman filter with Monte Carlo sampling and a stochastic collocation (SC) Kalman filter with structured sampling. Further, two types of noise are considered: uncorrelated noise, resulting in one stochastic dimension for each element of the spatial grid, and correlated noise parameterized by the Karhunen-Loeve (KL) expansion, resulting in one stochastic dimension for each KL term. The efficiency and accuracy of the four methods are investigated for two cases with synthetic data, with and without noise, as well as data from a laboratory experiment. While all algorithms perform reasonably well in matching the target solution and estimating the diffusion coefficient and the growth rate, the algorithms that employ SC and KL expansion are computationally more efficient, as they require fewer ensemble members for comparable accuracy. In the case of SC methods, this is due to improved approximation in stochastic space compared to Monte Carlo sampling. In the case of KL methods, the parameterization of the noise results in a stochastic space of smaller dimension. The most efficient method is the one combining SC and KL expansion.
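A minimal sketch of the stochastic ensemble Kalman filter analysis step used as the baseline above: the Kalman gain is built from ensemble covariances, and each member is nudged toward a perturbed observation. The state/observation sizes and values are illustrative; the paper's SC and KL variants replace the Monte Carlo ensemble with structured samples.

```python
import numpy as np

rng = np.random.default_rng(8)
n_state, n_obs, n_ens = 20, 5, 50
H = np.zeros((n_obs, n_state))
H[np.arange(n_obs), np.arange(0, n_state, 4)] = 1.0   # observe every 4th cell (assumed)
R = 0.1 * np.eye(n_obs)                               # observation error covariance
X = rng.normal(1.0, 0.5, (n_state, n_ens))            # forecast ensemble (synthetic)
y = rng.normal(1.2, 0.1, n_obs)                       # observation vector (synthetic)

A = X - X.mean(axis=1, keepdims=True)                 # ensemble anomalies
P_HT = A @ (H @ A).T / (n_ens - 1)                    # P H^T estimated from the ensemble
S = (H @ A) @ (H @ A).T / (n_ens - 1) + R             # innovation covariance
K = P_HT @ np.linalg.inv(S)                           # Kalman gain
Y_pert = y[:, None] + rng.multivariate_normal(np.zeros(n_obs), R, n_ens).T
X_a = X + K @ (Y_pert - H @ X)                        # analysis ensemble
print("analysis spread:", X_a.std(axis=1).mean())
```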
NASA Astrophysics Data System (ADS)
Mayr, G. J.; Kneringer, P.; Dietz, S. J.; Zeileis, A.
2016-12-01
Low visibility or low cloud ceiling reduces the capacity of airports by requiring special low visibility procedures (LVP) for incoming/departing aircraft. Probabilistic forecasts of when such procedures will become necessary help to mitigate delays and economic losses. We compare the performance of probabilistic nowcasts from two statistical methods: ordered logistic regression, and trees and random forests. These models harness historical and current meteorological measurements in the vicinity of the airport along with past LVP states, and incorporate diurnal and seasonal climatological information via generalized additive models (GAM). The methods are applied at Vienna International Airport (Austria). The performance is benchmarked against climatology, persistence, and human forecasters.
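A small sketch (with synthetic predictors) of the random-forest branch of the comparison above: meteorological measurements plus diurnal terms feed a classifier whose predict_proba output is the probabilistic LVP nowcast. The feature set and the rule generating LVP labels are assumptions for illustration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(9)
n = 5000
visibility = rng.gamma(2.0, 2000.0, n)               # visibility in m (synthetic)
humidity = rng.uniform(40, 100, n)                   # relative humidity, % (synthetic)
hour = rng.integers(0, 24, n)
X = np.column_stack([visibility, humidity,
                     np.sin(2 * np.pi * hour / 24),  # diurnal harmonics
                     np.cos(2 * np.pi * hour / 24)])
y = ((visibility < 1500) & (humidity > 85)).astype(int)  # LVP state (assumed rule)

rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X[:4000], y[:4000])
p_lvp = rf.predict_proba(X[4000:])[:, 1]             # probability of LVP conditions
print("mean forecast LVP probability:", p_lvp.mean().round(3))
```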
NASA Technical Reports Server (NTRS)
Canfield, R. C.; Ricchiazzi, P. J.
1980-01-01
An approximate probabilistic radiative transfer equation and the statistical equilibrium equations are simultaneously solved for a model hydrogen atom consisting of three bound levels and ionization continuum. The transfer equation for L-alpha, L-beta, H-alpha, and the Lyman continuum is explicitly solved assuming complete redistribution. The accuracy of this approach is tested by comparing source functions and radiative loss rates to values obtained with a method that solves the exact transfer equation. Two recent model solar-flare chromospheres are used for this test. It is shown that for the test atmospheres the probabilistic method gives values of the radiative loss rate that are characteristically good to a factor of 2. The advantage of this probabilistic approach is that it retains a description of the dominant physical processes of radiative transfer in the complete redistribution case, yet it achieves a major reduction in computational requirements.
Pasta, D J; Taylor, J L; Henning, J M
1999-01-01
Decision-analytic models are frequently used to evaluate the relative costs and benefits of alternative therapeutic strategies for health care. Various types of sensitivity analysis are used to evaluate the uncertainty inherent in the models. Although probabilistic sensitivity analysis is more difficult theoretically and computationally, the results can be much more powerful and useful than deterministic sensitivity analysis. The authors show how a Monte Carlo simulation can be implemented using standard software to perform a probabilistic sensitivity analysis incorporating the bootstrap. The method is applied to a decision-analytic model evaluating the cost-effectiveness of Helicobacter pylori eradication. The necessary steps are straightforward and are described in detail. The use of the bootstrap avoids certain difficulties encountered with theoretical distributions. The probabilistic sensitivity analysis provided insights into the decision-analytic model beyond the traditional base-case and deterministic sensitivity analyses and should become the standard method for assessing sensitivity.
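A condensed sketch of the approach described above: bootstrap resampling of patient-level cost and effect data feeds a Monte Carlo probabilistic sensitivity analysis of incremental net benefit. The synthetic data and the willingness-to-pay threshold are assumptions, not the study's values.

```python
import numpy as np

rng = np.random.default_rng(10)
n_pat, n_boot = 300, 5000
cost_trt = rng.gamma(4.0, 250.0, n_pat)       # costs under H. pylori eradication (synthetic)
cost_ctl = rng.gamma(4.0, 300.0, n_pat)       # costs under the comparator (synthetic)
eff_trt = rng.binomial(1, 0.80, n_pat)        # treatment success indicator (synthetic)
eff_ctl = rng.binomial(1, 0.65, n_pat)

wtp = 5000.0                                  # willingness to pay per success (assumed)
nb = []
for _ in range(n_boot):                       # bootstrap: resample patients per arm
    i = rng.integers(0, n_pat, n_pat)
    j = rng.integers(0, n_pat, n_pat)
    d_cost = cost_trt[i].mean() - cost_ctl[j].mean()
    d_eff = eff_trt[i].mean() - eff_ctl[j].mean()
    nb.append(wtp * d_eff - d_cost)           # incremental net monetary benefit
print("P(treatment cost-effective) =", np.mean(np.array(nb) > 0.0))
```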
Advanced Small Modular Reactor (SMR) Probabilistic Risk Assessment (PRA) Technical Exchange Meeting
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, Curtis
2013-09-01
During FY13, the INL developed an advanced SMR PRA framework, described in the report Small Modular Reactor (SMR) Probabilistic Risk Assessment (PRA) Detailed Technical Framework Specification, INL/EXT-13-28974 (April 2013). The framework considers the following areas: probabilistic models to provide information specific to advanced SMRs; representation of specific SMR design issues such as having co-located modules and passive safety features; use of modern open-source and readily available analysis methods; internal and external events resulting in impacts to safety; all-hazards considerations; methods to support the identification of design vulnerabilities; and mechanistic and probabilistic data needs to support modeling and tools. In order to describe this framework more fully and obtain feedback on the proposed approaches, the INL hosted a technical exchange meeting during August 2013. This report describes the outcomes of that meeting.
Mode 1 stress intensity factors for round compact specimens
NASA Technical Reports Server (NTRS)
Gross, B.
1976-01-01
The mode 1 stress intensity factors were computed for round compact specimens by the boundary collocation method. Results are presented for ratios a_T/R_0 in the range 0.3 to 0.8, where a_T is the distance from the specimen center to the crack tip and 2R_0 is the specimen diameter.
ERIC Educational Resources Information Center
Baker, Paul; Gabrielatos, Costas; McEnery, Tony
2013-01-01
This article uses methods from corpus linguistics and critical discourse analysis to examine patterns of representation around the word "Muslim" in a 143 million word corpus of British newspaper articles published between 1998 and 2009. Using the analysis tool Sketch Engine, an analysis of noun collocates of "Muslim" found that the following…
Impact of WhatsAPP on Learning and Retention of Collocation Knowledge among Iranian EFL Learners
ERIC Educational Resources Information Center
Ashiyan, Zahra; Salehi, Hadi
2016-01-01
In recent years, language learning has been shifting from conventional methods to technology-based applications. Mobile phones enable people to access and exchange information through chat applications such as WhatsApp, a tool whose facilities can be put to learning purposes. The unique features of the…
Si, Weijian; Zhao, Pinjiao; Qu, Zhiyu
2016-01-01
This paper presents an L-shaped sparsely-distributed vector sensor (SD-VS) array with four different antenna compositions. With the proposed SD-VS array, a novel two-dimensional (2-D) direction of arrival (DOA) and polarization estimation method is proposed to handle the scenario where uncorrelated and coherent sources coexist. The uncorrelated and coherent sources are separated based on the moduli of the eigenvalues. For the uncorrelated sources, coarse estimates are acquired by extracting the DOA information embedded in the steering vectors from estimated array response matrix of the uncorrelated sources, and they serve as coarse references to disambiguate fine estimates with cyclical ambiguity obtained from the spatial phase factors. For the coherent sources, four Hankel matrices are constructed, with which the coherent sources are resolved in a similar way as for the uncorrelated sources. The proposed SD-VS array requires only two collocated antennas for each vector sensor, thus the mutual coupling effects across the collocated antennas are reduced greatly. Moreover, the inter-sensor spacings are allowed beyond a half-wavelength, which results in an extended array aperture. Simulation results demonstrate the effectiveness and favorable performance of the proposed method. PMID:27258271
NASA Astrophysics Data System (ADS)
Fang, Y.; Hou, J.; Engel, D.; Lin, G.; Yin, J.; Han, B.; Fang, Z.; Fountoulakis, V.
2011-12-01
In this study, we introduce an uncertainty quantification (UQ) software framework for carbon sequestration, focusing on the effect of spatial heterogeneity of reservoir properties on CO2 migration. We use a sequential Gaussian simulation method (SGSIM) to generate realizations of permeability fields with various spatial statistical attributes. To deal with the computational difficulties, we integrate the following ideas/approaches: 1) we use three different sampling approaches (probabilistic collocation, quasi-Monte Carlo, and adaptive sampling) to reduce the number of required forward calculations while exploring the parameter space and quantifying the input uncertainty; 2) we use eSTOMP as the forward modeling simulator. eSTOMP is implemented using the Global Arrays toolkit (GA), which is based on one-sided inter-processor communication, supports a shared-memory programming style on distributed-memory platforms, and provides highly scalable performance. It uses a data model that partitions most of the large-scale data structures into a relatively small number of distinct classes, and the lower-level simulator infrastructure (e.g., meshing support, associated data structures, and data mapping to processors) is separated from the higher-level physics and chemistry algorithmic routines using a grid component interface; and 3) besides the faster model and more efficient algorithms to speed up the forward calculation, we built an adaptive system infrastructure to select the best possible data transfer mechanisms, to optimally allocate system resources to improve performance, and to integrate software packages and data for composing carbon sequestration simulation, computation, analysis, estimation and visualization. We demonstrate the framework with a CO2 injection scenario in a heterogeneous sandstone reservoir.
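Of the three sampling approaches mentioned, quasi-Monte Carlo is the easiest to illustrate compactly. The sketch below draws a scrambled Sobol sequence over two hypothetical reservoir parameters using SciPy; the parameter names and bounds are invented, and the framework described above is not implied to use SciPy.

```python
import numpy as np
from scipy.stats import qmc

# Two hypothetical inputs: mean log-permeability and correlation length (m).
sampler = qmc.Sobol(d=2, scramble=True, seed=7)
u = sampler.random_base2(m=8)            # 2^8 = 256 low-discrepancy points in [0, 1)^2
samples = qmc.scale(u, l_bounds=[-14.0, 50.0], u_bounds=[-11.0, 500.0])
print(samples[:3])                       # parameter sets for the first forward runs
```

Compared with plain Monte Carlo, the low-discrepancy design covers the parameter space more evenly, which is what reduces the number of forward simulator runs needed for a given accuracy.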
NASA Astrophysics Data System (ADS)
Győri, Erzsébet; Gráczer, Zoltán; Tóth, László; Bán, Zoltán; Horváth, Tibor
2017-04-01
Liquefaction potential evaluations are generally made to assess the hazard from specific scenario earthquakes. These evaluations may estimate the potential in a binary fashion (yes/no), define a factor of safety, or predict the probability of liquefaction given a scenario event. Usually the level of ground shaking is obtained from the results of PSHA; although it is determined probabilistically, a single level of ground shaking is selected and used within the liquefaction potential evaluation. In contrast, fully probabilistic liquefaction potential assessment methods provide a complete picture of liquefaction hazard by taking into account the joint probability distribution of PGA and magnitude of earthquake scenarios, both of which are key inputs to the stress-based simplified methods. Kramer and Mayfield (2007) developed a fully probabilistic liquefaction potential evaluation method within a performance-based earthquake engineering (PBEE) framework. The results of the procedure are a direct estimate of the return period of liquefaction and liquefaction hazard curves as a function of depth. The method combines the disaggregation matrices computed for different exceedance frequencies during probabilistic seismic hazard analysis with one of the recent models for the conditional probability of liquefaction. We have developed software for the assessment of performance-based liquefaction triggering on the basis of the Kramer and Mayfield method. Originally, the SPT-based probabilistic method of Cetin et al. (2004) was built into the procedure of Kramer and Mayfield to compute the conditional probability; however, there is no professional consensus about its applicability. We have therefore included not only Cetin's method but also the SPT-based procedure of Idriss and Boulanger (2012) and the CPT-based procedure of Boulanger and Idriss (2014) in our computer program. In 1956, a damaging earthquake of magnitude 5.6 occurred at Dunaharaszti, Hungary; its epicenter was located about 5 km from the southern boundary of Budapest. The quake caused serious damage in the epicentral area and in the southern districts of the capital. The epicentral area of the earthquake lies along the Danube River. Sand boils observed at some locations indicated the occurrence of liquefaction. Because their exact locations were recorded at the time of the earthquake, in situ geotechnical measurements (CPT and SPT) could be performed at two sites (Dunaharaszti and Taksony). The different types of measurements enabled probabilistic liquefaction hazard computations at the two studied sites, and we have compared the return periods of liquefaction computed using the different built-in simplified stress-based methods.
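The core computation of the Kramer and Mayfield framework is a total-probability sum over the joint hazard. The sketch below shows that sum with invented disaggregation rates and a logistic placeholder for the conditional liquefaction model; a real application would substitute the Cetin et al. (2004) or Boulanger and Idriss (2014) relationships and site-specific PSHA output.

```python
import numpy as np

# Hypothetical PSHA disaggregation: annual exceedance rate in joint (PGA, M) bins.
pga = np.array([0.1, 0.2, 0.3, 0.4, 0.5])              # g
mag = np.array([5.5, 6.0, 6.5, 7.0])
d_rate = np.full((pga.size, mag.size), 2.5e-4)         # placeholder bin rates

def p_liq(a, m):
    # Hypothetical logistic stand-in for P(liquefaction | PGA=a, M=m).
    return 1.0 / (1.0 + np.exp(-(2.0 * np.log(a) + 0.8 * m - 3.0)))

A, M = np.meshgrid(pga, mag, indexing="ij")
rate_liq = np.sum(p_liq(A, M) * d_rate)    # lambda_liq = sum P(liq|a,m) * dlambda(a,m)
print("return period of liquefaction: %.0f years" % (1.0 / rate_liq))
```

Repeating the sum for each depth yields the liquefaction hazard curves as a function of depth mentioned above.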
An efficient deterministic-probabilistic approach to modeling regional groundwater flow: 1. Theory
Yen, Chung-Cheng; Guymon, Gary L.
1990-01-01
An efficient probabilistic model is developed and cascaded with a deterministic model for predicting water table elevations in regional aquifers. The objective is to quantify model uncertainty where precise estimates of water table elevations may be required. The probabilistic model is based on the two-point probability method, which only requires prior knowledge of the means and coefficients of variation of the uncertain variables. The two-point estimate method is theoretically developed and compared with the Monte Carlo simulation method. The results of comparisons using hypothetical deterministic problems indicate that the two-point estimate method is only generally valid for linear problems where the coefficients of variation of uncertain parameters (for example, storage coefficient and hydraulic conductivity) are small. The two-point estimate method may be applied to slightly nonlinear problems with good results, provided coefficients of variation are small. In such cases, the two-point estimate method is much more efficient than the Monte Carlo method provided the number of uncertain variables is less than eight.
An Efficient Deterministic-Probabilistic Approach to Modeling Regional Groundwater Flow: 1. Theory
NASA Astrophysics Data System (ADS)
Yen, Chung-Cheng; Guymon, Gary L.
1990-07-01
An efficient probabilistic model is developed and cascaded with a deterministic model for predicting water table elevations in regional aquifers. The objective is to quantify model uncertainty where precise estimates of water table elevations may be required. The probabilistic model is based on the two-point probability method, which only requires prior knowledge of the means and coefficients of variation of the uncertain variables. The two-point estimate method is theoretically developed and compared with the Monte Carlo simulation method. The results of comparisons using hypothetical deterministic problems indicate that the two-point estimate method is only generally valid for linear problems where the coefficients of variation of uncertain parameters (for example, storage coefficient and hydraulic conductivity) are small. The two-point estimate method may be applied to slightly nonlinear problems with good results, provided coefficients of variation are small. In such cases, the two-point estimate method is much more efficient than the Monte Carlo method provided the number of uncertain variables is less than eight.
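A minimal sketch of Rosenblueth's two-point estimate method as described above: the model is evaluated at the mean plus or minus one standard deviation of each uncertain variable, and the 2^n equally weighted outputs (valid for independent, symmetrically distributed inputs) give the output mean and variance. The groundwater response function below is invented for illustration.

```python
import numpy as np
from itertools import product

def two_point_estimate(model, means, cvs):
    """Rosenblueth's 2^n-point estimate of output mean and variance
    (independent, symmetrically distributed inputs assumed)."""
    means, cvs = np.asarray(means), np.asarray(cvs)
    sigmas = means * cvs
    outputs = np.array([model(means + np.array(s) * sigmas)
                        for s in product([-1.0, 1.0], repeat=means.size)])
    return outputs.mean(), outputs.var()

# Hypothetical drawdown model: s ~ Q / (K * b), with K (m/d) and b (m) uncertain.
model = lambda p: 40.0 / (p[0] * p[1])
m, v = two_point_estimate(model, means=[2.0, 10.0], cvs=[0.10, 0.05])
print("mean %.3f m, std %.3f m" % (m, np.sqrt(v)))
```

The 2^n cost is why the abstract notes the method loses its advantage over Monte Carlo once the number of uncertain variables approaches eight.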
Variational approach to probabilistic finite elements
NASA Technical Reports Server (NTRS)
Belytschko, T.; Liu, W. K.; Mani, A.; Besterfield, G.
1991-01-01
Probabilistic finite element methods (PFEM), synthesizing the power of finite element methods with second-moment techniques, are formulated for various classes of problems in structural and solid mechanics. Time-invariant random materials, geometric properties and loads are incorporated in terms of their fundamental statistics viz. second-moments. Analogous to the discretization of the displacement field in finite element methods, the random fields are also discretized. Preserving the conceptual simplicity, the response moments are calculated with minimal computations. By incorporating certain computational techniques, these methods are shown to be capable of handling large systems with many sources of uncertainties. By construction, these methods are applicable when the scale of randomness is not very large and when the probabilistic density functions have decaying tails. The accuracy and efficiency of these methods, along with their limitations, are demonstrated by various applications. Results obtained are compared with those of Monte Carlo simulation and it is shown that good accuracy can be obtained for both linear and nonlinear problems. The methods are amenable to implementation in deterministic FEM based computer codes.
Variational approach to probabilistic finite elements
NASA Astrophysics Data System (ADS)
Belytschko, T.; Liu, W. K.; Mani, A.; Besterfield, G.
1991-08-01
Probabilistic finite element methods (PFEM), synthesizing the power of finite element methods with second-moment techniques, are formulated for various classes of problems in structural and solid mechanics. Time-invariant random materials, geometric properties and loads are incorporated in terms of their fundamental statistics viz. second-moments. Analogous to the discretization of the displacement field in finite element methods, the random fields are also discretized. Preserving the conceptual simplicity, the response moments are calculated with minimal computations. By incorporating certain computational techniques, these methods are shown to be capable of handling large systems with many sources of uncertainties. By construction, these methods are applicable when the scale of randomness is not very large and when the probabilistic density functions have decaying tails. The accuracy and efficiency of these methods, along with their limitations, are demonstrated by various applications. Results obtained are compared with those of Monte Carlo simulation and it is shown that good accuracy can be obtained for both linear and nonlinear problems. The methods are amenable to implementation in deterministic FEM based computer codes.
Variational approach to probabilistic finite elements
NASA Technical Reports Server (NTRS)
Belytschko, T.; Liu, W. K.; Mani, A.; Besterfield, G.
1987-01-01
Probabilistic finite element methods (PFEM), synthesizing the power of finite element methods with second-moment techniques, are formulated for various classes of problems in structural and solid mechanics. Time-invariant random materials, geometric properties, and loads are incorporated in terms of their fundamental statistics viz. second-moments. Analogous to the discretization of the displacement field in finite element methods, the random fields are also discretized. Preserving the conceptual simplicity, the response moments are calculated with minimal computations. By incorporating certain computational techniques, these methods are shown to be capable of handling large systems with many sources of uncertainties. By construction, these methods are applicable when the scale of randomness is not very large and when the probabilistic density functions have decaying tails. The accuracy and efficiency of these methods, along with their limitations, are demonstrated by various applications. Results obtained are compared with those of Monte Carlo simulation and it is shown that good accuracy can be obtained for both linear and nonlinear problems. The methods are amenable to implementation in deterministic FEM based computer codes.
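The second-moment idea underlying PFEM can be illustrated with a first-order propagation of input means and standard deviations through a response function; the cantilever deflection formula and the numbers below are generic illustrations, not taken from the papers.

```python
import numpy as np

def fosm(g, mu, sigma, h=1e-6):
    """First-order second-moment estimate of the mean and variance of g(X)
    for independent inputs with means mu and standard deviations sigma."""
    mu, sigma = np.asarray(mu, float), np.asarray(sigma, float)
    g0, grad = g(mu), np.empty(mu.size)
    for i in range(mu.size):
        d = np.zeros(mu.size)
        d[i] = h
        grad[i] = (g(mu + d) - g0) / h      # finite-difference sensitivity
    return g0, np.sum((grad * sigma) ** 2)  # var ~ sum (dg/dx_i * sigma_i)^2

# Tip deflection of a cantilever, w = P L^3 / (3 E I), with x = [P, L, E, I].
g = lambda x: x[0] * x[1] ** 3 / (3.0 * x[2] * x[3])
mean, var = fosm(g, mu=[1e3, 2.0, 2e11, 1e-6], sigma=[1e2, 0.01, 1e10, 5e-8])
print("mean %.4e m, std %.4e m" % (mean, np.sqrt(var)))
```

As the abstracts caution, such second-moment approximations are reliable only when randomness is moderate and the input densities have decaying tails.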
ERIC Educational Resources Information Center
Molina-Plaza, Silvia; de Gregorio-Godeo, Eduardo
2010-01-01
Within the context of on-going research, this paper explores the pedagogical implications of contrastive analyses of multiword units in English and Spanish based on electronic corpora as a CALL resource. The main tenets of collocations from a contrastive perspective--and the points of contact and departure between both languages--are discussed…
ERIC Educational Resources Information Center
Reynolds, Barry Lee
2016-01-01
Lack of knowledge of the conventional usage of collocations in one's field of expertise causes Taiwanese students to produce academic writing that is markedly different from more competent writing. This is because Taiwanese students are first and foremost English as a Foreign Language (EFL) readers and may have difficulties picking up on…
Utilizing Lexical Data from a Web-Derived Corpus to Expand Productive Collocation Knowledge
ERIC Educational Resources Information Center
Wu, Shaoqun; Witten, Ian H.; Franken, Margaret
2010-01-01
Collocations are of great importance for second language learners, and a learner's knowledge of them plays a key role in producing language fluently (Nation, 2001: 323). In this article we describe and evaluate an innovative system that uses a Web-derived corpus and digital library software to produce a vast concordance and present it in a way…
The Effect of Corpus-Based Activities on Verb-Noun Collocations in EFL Classes
ERIC Educational Resources Information Center
Ucar, Serpil; Yükselir, Ceyhun
2015-01-01
This study sought to reveal the impact of corpus-based activities on verb-noun collocation learning in EFL classes. The study was carried out with two groups--experimental and control--each consisting of 15 students. The students were preparatory class students at the School of Foreign Languages, Osmaniye Korkut Ata University.…
A Study of Learners' Usage of a Mobile Learning Application for Learning Idioms and Collocations
ERIC Educational Resources Information Center
Amer, Mahmoud
2014-01-01
This study explored how four groups of language learners used a mobile software application developed by the researcher for learning idiomatic expressions and collocations. A total of 45 participants in the study used the application for a period of one week. Data for this study was collected from the application, a questionnaire, and follow-up…
ERIC Educational Resources Information Center
Ördem, Eser; Paker, Turan
2016-01-01
The purpose of this study was to investigate whether teaching vocabulary via collocations would contribute to the retention and use of a foreign language, English. A quasi-experimental design was used to see whether there would be a significant difference between the treatment and control groups. Three instruments were developed and administered to 60…
ERIC Educational Resources Information Center
Okyar, Hatice; Yangin Eksi, Gonca
2017-01-01
This study compared the effectiveness of negative evidence and enriched input in learning verb-noun collocations. The 52 English as a Foreign Language (EFL) learners in this research study were randomly assigned to the negative evidence or enriched input groups. While the negative evidence group (n = 27) was provided with…
ERIC Educational Resources Information Center
Wu, Yi-ju
2016-01-01
Data-Driven Learning (DDL), in which learners "confront [themselves] directly with the corpus data" (Johns, 2002, p. 108), has been shown to be effective in collocation learning in L2 writing. Nevertheless, there have been only a few research studies of this type examining the relationship between English proficiency and corpus consultation.…
Fully probabilistic control design in an adaptive critic framework.
Herzallah, Randa; Kárný, Miroslav
2011-12-01
An optimal stochastic controller pushes the closed-loop behavior as close as possible to the desired one. The fully probabilistic design (FPD) uses a probabilistic description of the desired closed loop and minimizes the Kullback-Leibler divergence of the closed-loop description from the desired one. Practical exploitation of fully probabilistic design control theory continues to be hindered by the computational complexities involved in numerically solving the associated stochastic dynamic programming problem, in particular the very hard multivariate integration and the approximate interpolation of the multivariate functions involved. This paper proposes a new fully probabilistic control algorithm that uses adaptive critic methods to circumvent the need for explicitly evaluating the optimal value function, thereby dramatically reducing computational requirements. This is the main contribution of the paper. Copyright © 2011 Elsevier Ltd. All rights reserved.
Development of a probabilistic analysis methodology for structural reliability estimation
NASA Technical Reports Server (NTRS)
Torng, T. Y.; Wu, Y.-T.
1991-01-01
The probabilistic analysis method presented for assessing structural reliability combines fast convolution with an efficient structural reliability analysis. After identifying the most important point of a limit state, it establishes a quadratic performance function, transforms the quadratic function into a linear one, and applies fast convolution. The method is applicable to problems requiring computer-intensive structural analysis. Five illustrative examples of the method's application are given.
Bayesian probabilistic population projections for all countries
Raftery, Adrian E.; Li, Nan; Ševčíková, Hana; Gerland, Patrick; Heilig, Gerhard K.
2012-01-01
Projections of countries’ future populations, broken down by age and sex, are widely used for planning and research. They are mostly done deterministically, but there is a widespread need for probabilistic projections. We propose a Bayesian method for probabilistic population projections for all countries. The total fertility rate and female and male life expectancies at birth are projected probabilistically using Bayesian hierarchical models estimated via Markov chain Monte Carlo using United Nations population data for all countries. These are then converted to age-specific rates and combined with a cohort component projection model. This yields probabilistic projections of any population quantity of interest. The method is illustrated for five countries differing in demographic stage, continent and size. The method is validated by an out-of-sample experiment in which data from 1950–1990 are used for estimation and applied to predict 1990–2010. The method appears reasonably accurate and well calibrated for this period. The results suggest that the current United Nations high and low variants greatly underestimate uncertainty about the number of oldest old from about 2050 and that they underestimate uncertainty for high fertility countries and overstate uncertainty for countries that have completed the demographic transition and whose fertility has started to recover towards replacement level, mostly in Europe. The results also indicate that the potential support ratio (persons aged 20–64 per person aged 65+) will almost certainly decline dramatically in most countries over the coming decades. PMID:22908249
A probabilistic approach to composite micromechanics
NASA Technical Reports Server (NTRS)
Stock, T. A.; Bellini, P. X.; Murthy, P. L. N.; Chamis, C. C.
1988-01-01
Probabilistic composite micromechanics methods are developed that simulate expected uncertainties in unidirectional fiber composite properties. These methods are in the form of computational procedures using Monte Carlo simulation. A graphite/epoxy unidirectional composite (ply) is studied to demonstrate fiber composite material properties at the micro level. Regression results are presented to show the relative correlation between predicted and response variables in the study.
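A toy version of such a Monte Carlo micromechanics simulation, using the rule of mixtures for ply moduli; the constituent statistics are invented and the actual study used more complete micromechanics relations.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100_000
# Hypothetical graphite/epoxy constituent scatter (independent normals):
Ef = rng.normal(230e9, 15e9, n)     # fiber modulus (Pa)
Em = rng.normal(3.5e9, 0.3e9, n)    # matrix modulus (Pa)
vf = rng.normal(0.60, 0.03, n)      # fiber volume fraction

E11 = vf * Ef + (1 - vf) * Em                 # longitudinal: rule of mixtures
E22 = 1.0 / (vf / Ef + (1 - vf) / Em)         # transverse: inverse rule of mixtures
for name, E in [("E11", E11), ("E22", E22)]:
    print("%s: mean %.2f GPa, cv %.3f" % (name, E.mean() / 1e9, E.std() / E.mean()))
```

Regressing the sampled ply moduli on the sampled constituent properties would reproduce the kind of correlation ranking reported in the abstract.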
Probabilistic Wind Power Ramp Forecasting Based on a Scenario Generation Method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Qin; Florita, Anthony R; Krishnan, Venkat K
Wind power ramps (WPRs) are particularly important in the management and dispatch of wind power and are currently drawing the attention of balancing authorities. With the aim of reducing the impact of WPRs on power system operations, this paper develops a probabilistic ramp forecasting method based on a large number of simulated scenarios. An ensemble machine learning technique is first adopted to forecast the basic wind power scenario and calculate the historical forecasting errors. A continuous Gaussian mixture model (GMM) is used to fit the probability density function (PDF) of the forecasting errors. The cumulative distribution function (CDF) is analytically deduced. The inverse transform method, based on Monte Carlo sampling and the CDF, is used to generate a massive number of forecasting-error scenarios. An optimized swinging door algorithm is adopted to extract all the WPRs from the complete set of wind power forecasting scenarios. The probabilistic forecasting results for ramp duration and start time are generated based on all scenarios. Numerical simulations on publicly available wind power data show that, within a predefined tolerance level, the developed probabilistic wind power ramp forecasting method is able to predict WPRs with a high level of sharpness and accuracy.
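The error-modeling and scenario-generation steps can be sketched as follows, with synthetic stand-in forecast errors; the GMM fit uses scikit-learn and the CDF is inverted numerically on a grid, whereas the paper deduces it analytically. Treating hourly errors as independent is a further simplification for brevity.

```python
import numpy as np
from scipy.stats import norm
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(5)
errors = np.concatenate([rng.normal(-0.05, 0.02, 3000),   # stand-in forecast errors
                         rng.normal(0.03, 0.05, 2000)])   # (per-unit wind power)

gmm = GaussianMixture(n_components=3, random_state=0).fit(errors.reshape(-1, 1))
w, mu = gmm.weights_, gmm.means_.ravel()
sd = np.sqrt(gmm.covariances_.ravel())

grid = np.linspace(errors.min() - 0.2, errors.max() + 0.2, 4001)
cdf = np.sum([wi * norm.cdf(grid, mi, si) for wi, mi, si in zip(w, mu, sd)], axis=0)

u = rng.uniform(size=(1000, 24))             # 1000 scenarios of 24 hourly errors
scenarios = np.interp(u, cdf, grid)          # inverse-transform sampling of the GMM
print(scenarios.shape)
```

Adding each error scenario to the basic forecast yields the ensemble of wind power trajectories from which the swinging door algorithm then extracts candidate ramps.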
NASA Astrophysics Data System (ADS)
Dadashzadeh, N.; Duzgun, H. S. B.; Yesiloglu-Gultekin, N.
2017-08-01
While advanced numerical techniques in slope stability analysis are successfully used in deterministic studies, they have so far found limited use in probabilistic analyses due to their high computational cost. The first-order reliability method (FORM) is one of the most efficient probabilistic techniques for performing probabilistic stability analysis while accounting for the associated uncertainties in the analysis parameters. However, it is not possible to use FORM directly in numerical slope stability evaluations, as it requires the definition of a limit state performance function. In this study, an integrated methodology for probabilistic numerical modeling of rock slope stability is proposed. The methodology is based on the response surface method, whereby an explicit performance function is developed from the results of numerical simulations and FORM is applied to it. The proposed methodology is demonstrated on a large potential rock wedge in Sumela Monastery, Turkey. The accuracy of the developed performance function in representing the true limit state surface is evaluated by monitoring the slope behavior. The calculated probability of failure is compared with that of the Monte Carlo simulation (MCS) method. The proposed methodology is found to be 72% more efficient than MCS, while the accuracy is decreased, with an error of 24%.
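Once a response surface has supplied an explicit performance function, FORM itself is a short iteration. The sketch below runs the standard HL-RF algorithm on a linear stand-in performance function in standard normal space; the coefficients are invented, and a real wedge analysis would use the fitted response surface instead.

```python
import numpy as np
from scipy.stats import norm

def form_hlrf(g, n, tol=1e-8, itmax=100, h=1e-6):
    """HL-RF iteration for the reliability index beta in standard normal space."""
    u = np.zeros(n)
    for _ in range(itmax):
        g0 = g(u)
        grad = np.array([(g(u + h * np.eye(n)[i]) - g0) / h for i in range(n)])
        u_new = (grad @ u - g0) / (grad @ grad) * grad
        if np.linalg.norm(u_new - u) < tol:
            u = u_new
            break
        u = u_new
    beta = np.linalg.norm(u)
    return beta, norm.cdf(-beta)             # probability of failure

# Invented performance function: resistance minus load, both linearized.
g = lambda u: (1.5 + 0.30 * u[0]) - (1.0 + 0.25 * u[1])
beta, pf = form_hlrf(g, 2)
print("beta = %.3f, Pf = %.4f" % (beta, pf))
```

The design point found by the iteration also indicates where additional numerical simulations should be run to refine the response surface.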
Probabilistic Wind Power Ramp Forecasting Based on a Scenario Generation Method: Preprint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Qin; Florita, Anthony R; Krishnan, Venkat K
2017-08-31
Wind power ramps (WPRs) are particularly important in the management and dispatch of wind power, and they are currently drawing the attention of balancing authorities. With the aim of reducing the impact of WPRs on power system operations, this paper develops a probabilistic ramp forecasting method based on a large number of simulated scenarios. An ensemble machine learning technique is first adopted to forecast the basic wind power scenario and calculate the historical forecasting errors. A continuous Gaussian mixture model (GMM) is used to fit the probability density function (PDF) of the forecasting errors. The cumulative distribution function (CDF) is analytically deduced. The inverse transform method, based on Monte Carlo sampling and the CDF, is used to generate a massive number of forecasting-error scenarios. An optimized swinging door algorithm is adopted to extract all the WPRs from the complete set of wind power forecasting scenarios. The probabilistic forecasting results for ramp duration and start time are generated based on all scenarios. Numerical simulations on publicly available wind power data show that, within a predefined tolerance level, the developed probabilistic wind power ramp forecasting method is able to predict WPRs with a high level of sharpness and accuracy.
Radiation energy budget studies using collocated AVHRR and ERBE observations
NASA Technical Reports Server (NTRS)
Ackerman, Steven A.; Inoue, Toshiro
1994-01-01
Changes in the energy balance at the top of the atmosphere are specified as a function of atmospheric and surface properties using observations from the Advanced Very High Resolution Radiometer (AVHRR) and the Earth Radiation Budget Experiment (ERBE) scanner. By collocating the observations from the two instruments, flown on NOAA-9, the authors take advantage of the remote-sensing capabilities of each instrument. The AVHRR spectral channels were selected based on regions that are strongly transparent to clear sky conditions and are therefore useful for characterizing both surface and cloud-top conditions. The ERBE instruments make broadband observations that are important for climate studies. The approach of collocating these observations in time and space is used to study the radiative energy budget of three geographic regions: oceanic, savanna, and desert.
Probabilistic liver atlas construction.
Dura, Esther; Domingo, Juan; Ayala, Guillermo; Marti-Bonmati, Luis; Goceri, E
2017-01-13
Anatomical atlases are 3D volumes or shapes representing an organ or structure of the human body. They contain either the prototypical shape of the object of interest together with other shapes representing its statistical variations (statistical atlas) or a probability map of belonging to the object (probabilistic atlas). Probabilistic atlases are mostly built with simple estimations involving only the data at each spatial location. A new method for probabilistic atlas construction that uses a generalized linear model is proposed. This method aims to improve the estimation of the probability of being covered by the liver. Furthermore, all methods to build an atlas involve previous coregistration of the sample of shapes available. The influence of the geometrical transformation adopted for registration on the quality of the final atlas has not been sufficiently investigated. The ability of an atlas to adapt to a new case is one of the most important quality criteria that should be taken into account. The presented experiments show that some methods for atlas construction are severely affected by the previous coregistration step. We show the good performance of the new approach. Furthermore, results suggest that extremely flexible registration methods are not always beneficial, since they can reduce the variability of the atlas and hence its ability to give sensible values of probability when used as an aid in segmentation of new cases.
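For contrast with the generalized linear model proposed in the paper, the simple per-location estimate it improves upon can be written in a few lines: given coregistered binary masks, the atlas probability at each voxel is just the relative frequency of liver membership (here with random stand-in masks).

```python
import numpy as np

rng = np.random.default_rng(9)
# Stand-in for 20 coregistered binary liver masks on a 32^3 grid.
masks = (rng.uniform(size=(20, 32, 32, 32)) < 0.5).astype(float)

# Simple probabilistic atlas: voxel-wise relative frequency of membership.
atlas = masks.mean(axis=0)

# Laplace-smoothed variant, still purely local; the paper's GLM instead pools
# information through covariates rather than estimating each voxel in isolation.
atlas_smooth = (masks.sum(axis=0) + 1.0) / (masks.shape[0] + 2.0)
print(atlas.shape, float(atlas.min()), float(atlas.max()))
```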
NASA Technical Reports Server (NTRS)
Warner, James E.; Zubair, Mohammad; Ranjan, Desh
2017-01-01
This work investigates novel approaches to probabilistic damage diagnosis that utilize surrogate modeling and high performance computing (HPC) to achieve substantial computational speedup. Motivated by Digital Twin, a structural health management (SHM) paradigm that integrates vehicle-specific characteristics with continual in-situ damage diagnosis and prognosis, the methods studied herein yield near real-time damage assessments that could enable monitoring of a vehicle's health while it is operating (i.e. online SHM). High-fidelity modeling and uncertainty quantification (UQ), both critical to Digital Twin, are incorporated using finite element method simulations and Bayesian inference, respectively. The crux of the proposed Bayesian diagnosis methods, however, is the reformulation of the numerical sampling algorithms (e.g. Markov chain Monte Carlo) used to generate the resulting probabilistic damage estimates. To this end, three distinct methods are demonstrated for rapid sampling that utilize surrogate modeling and exploit various degrees of parallelism for leveraging HPC. The accuracy and computational efficiency of the methods are compared on the problem of strain-based crack identification in thin plates. While each approach has inherent problem-specific strengths and weaknesses, all approaches are shown to provide accurate probabilistic damage diagnoses and several orders of magnitude computational speedup relative to a baseline Bayesian diagnosis implementation.
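The essential pattern, replacing the expensive finite element model with a cheap surrogate inside a Markov chain Monte Carlo loop, can be sketched compactly. Everything below (the strain response, noise level, prior) is invented; the point is only the structure of surrogate-accelerated Bayesian damage estimation.

```python
import numpy as np

rng = np.random.default_rng(11)

def fem_strain(a):
    """Stand-in for an expensive FEM strain response to crack length a (mm)."""
    return 1e-3 * (1.0 + 0.8 * a + 0.05 * a ** 2)

# Cheap polynomial surrogate built from a handful of "expensive" runs.
a_train = np.linspace(0.0, 10.0, 8)
coeffs = np.polyfit(a_train, fem_strain(a_train), deg=2)
surrogate = lambda a: np.polyval(coeffs, a)

a_true, noise = 4.0, 5e-5
y_obs = fem_strain(a_true) + rng.normal(0.0, noise)   # one noisy strain measurement

def log_post(a):
    if not 0.0 <= a <= 10.0:                          # uniform prior on [0, 10] mm
        return -np.inf
    return -0.5 * ((y_obs - surrogate(a)) / noise) ** 2

# Random-walk Metropolis using the surrogate instead of the FEM solver.
chain, a, lp = [], 5.0, log_post(5.0)
for _ in range(20000):
    prop = a + rng.normal(0.0, 0.5)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        a, lp = prop, lp_prop
    chain.append(a)
chain = np.array(chain[5000:])
print("posterior mean %.2f mm, sd %.3f mm" % (chain.mean(), chain.std()))
```

The parallel variants discussed in the abstract amount to running many such chains, or many surrogate evaluations, concurrently.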
Probabilistic Composite Design
NASA Technical Reports Server (NTRS)
Chamis, Christos C.
1997-01-01
Probabilistic composite design is described in terms of a computational simulation. This simulation tracks probabilistically the composite design evolution from constituent materials and fabrication process, through composite mechanics, to structural components. Comparisons with experimental data are provided to illustrate selection of probabilistic design allowables, test methods/specimen guidelines, and identification of in situ versus pristine strength. For example, results show that: in situ fiber tensile strength is 90% of its pristine strength; flat-wise long-tapered specimens are most suitable for setting ply tensile strength allowables; a composite radome can be designed with a reliability of 0.999999; and laminate fatigue exhibits wide-spread scatter at 90% cyclic-stress to static-strength ratios.
Probabilistic population projections with migration uncertainty
Azose, Jonathan J.; Ševčíková, Hana; Raftery, Adrian E.
2016-01-01
We produce probabilistic projections of population for all countries based on probabilistic projections of fertility, mortality, and migration. We compare our projections to those from the United Nations’ Probabilistic Population Projections, which uses similar methods for fertility and mortality but deterministic migration projections. We find that uncertainty in migration projection is a substantial contributor to uncertainty in population projections for many countries. Prediction intervals for the populations of Northern America and Europe are over 70% wider, whereas prediction intervals for the populations of Africa, Asia, and the world as a whole are nearly unchanged. Out-of-sample validation shows that the model is reasonably well calibrated. PMID:27217571
Application of Probabilistic Analysis to Aircraft Impact Dynamics
NASA Technical Reports Server (NTRS)
Lyle, Karen H.; Padula, Sharon L.; Stockwell, Alan E.
2003-01-01
Full-scale aircraft crash simulations performed with nonlinear, transient dynamic, finite element codes can incorporate structural complexities such as geometrically accurate models, human occupant models, and advanced material models that include nonlinear stress-strain behaviors, laminated composites, and material failure. Validation of these crash simulations is difficult due to a lack of sufficient information to adequately determine the uncertainty in the experimental data and the appropriateness of modeling assumptions. This paper evaluates probabilistic approaches to quantifying the uncertainty in the simulated responses. Several criteria are used to determine that a response surface method is the most appropriate probabilistic approach. The work is extended to compare optimization results with and without probabilistic constraints.
Integrated probabilistic risk assessment for nanoparticles: the case of nanosilica in food.
Jacobs, Rianne; van der Voet, Hilko; Ter Braak, Cajo J F
Insight into the risks of nanotechnology and the use of nanoparticles is an essential condition for the social acceptance and safe use of nanotechnology. One of the problems with which the risk assessment of nanoparticles is faced is the lack of data, resulting in uncertainty in the risk assessment. We attempt to quantify some of this uncertainty by expanding a previous deterministic study on nanosilica (5-200 nm) in food into a fully integrated probabilistic risk assessment. We use the integrated probabilistic risk assessment method, in which statistical distributions and bootstrap methods are used to quantify uncertainty and variability in the risk assessment. Due to the large amount of uncertainty present, this probabilistic method, which separates variability from uncertainty, contributed to a more understandable risk assessment. We found that quantifying the uncertainties did not increase the perceived risk relative to the outcome of the deterministic study. We pinpointed particular aspects of the hazard characterization that contributed most to the total uncertainty in the risk assessment, suggesting that further research would benefit most from obtaining more reliable data on those aspects.
NASA Technical Reports Server (NTRS)
Funaro, Daniele; Gottlieb, David
1989-01-01
A new method of imposing boundary conditions in the pseudospectral approximation of hyperbolic systems of equations is proposed. It is suggested to collocate the equations not only at the inner grid points but also at the boundary points, and to use the boundary conditions as penalty terms. For the pseudospectral Legendre method with the new boundary treatment, a stability analysis for the case of a constant-coefficient hyperbolic system is presented and error estimates are derived.
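The penalty idea can be demonstrated on the scalar advection equation u_t + u_x = 0: collocate the PDE at every point, including the boundary, and add a penalty proportional to the boundary residual. The sketch below uses a Chebyshev (rather than Legendre) differentiation matrix for convenience, with a penalty strength chosen heuristically; it illustrates the mechanism, not the paper's exact scheme.

```python
import numpy as np

def cheb(N):
    """Chebyshev differentiation matrix and points (Trefethen, Spectral Methods)."""
    x = np.cos(np.pi * np.arange(N + 1) / N)            # points from 1 down to -1
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    X = np.tile(x, (N + 1, 1)).T
    D = np.outer(c, 1.0 / c) / (X - X.T + np.eye(N + 1))
    D -= np.diag(D.sum(axis=1))                         # diagonal via row sums
    return D, x

N = 32
D, x = cheb(N)
tau = N ** 2                                 # penalty strength (assumed sufficient)
g = lambda t: np.sin(np.pi * (-1.0 - t))     # exact inflow data at x = -1
u = np.sin(np.pi * x)                        # initial condition

def rhs(u, t):
    du = -D @ u
    du[-1] -= tau * (u[-1] - g(t))           # penalty term enforcing the inflow BC
    return du

t, T, dt = 0.0, 1.0, 0.25 / N ** 2           # classical RK4 time stepping
while t < T - 1e-12:
    h = min(dt, T - t)
    k1 = rhs(u, t)
    k2 = rhs(u + 0.5 * h * k1, t + 0.5 * h)
    k3 = rhs(u + 0.5 * h * k2, t + 0.5 * h)
    k4 = rhs(u + h * k3, t + h)
    u += (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    t += h

print("max error: %.2e" % np.abs(u - np.sin(np.pi * (x - T))).max())
```

Because the boundary condition is imposed only weakly, the PDE is also satisfied at the boundary point, which is what the paper's stability analysis exploits.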
Decentralized model reference adaptive control of large flexible structures
NASA Technical Reports Server (NTRS)
Lee, Fu-Ming; Fong, I-Kong; Lin, Yu-Hwan
1988-01-01
A decentralized model reference adaptive control (DMRAC) method is developed for large flexible structures (LFS). The development follows that of a centralized model reference adaptive control for LFS that has been shown to be feasible. The proposed method is illustrated using a simply supported beam with collocated actuators and sensors. Results show that the DMRAC can achieve either output regulation or output tracking with adequate convergence, provided the reference model inputs and their time derivatives are integrable, bounded, and approach zero as t approaches infinity.
Influence Diagrams as Decision-Making Tools for Pesticide Risk Management
The pesticide policy arena is filled with discussion of probabilistic approaches to assessing ecological risk; however, similar discussions about implementing formal probabilistic methods in pesticide risk decision making are less common. An influence diagram approach is proposed f...
Environmental probabilistic quantitative assessment methodologies
Crovelli, R.A.
1995-01-01
In this paper, four petroleum resource assessment methodologies are presented as possible pollution assessment methodologies, even though petroleum as a resource is desirable, whereas pollution is undesirable. A methodology is defined in this paper to consist of a probability model and a probabilistic method, where the method is used to solve the model. The following four basic types of probability models are considered: 1) direct assessment, 2) accumulation size, 3) volumetric yield, and 4) reservoir engineering. Three of the four petroleum resource assessment methodologies were written as microcomputer systems, viz. TRIAGG for direct assessment, APRAS for accumulation size, and FASPU for reservoir engineering. A fourth microcomputer system termed PROBDIST supports the three assessment systems. The three assessment systems have different probability models but the same type of probabilistic method. The advantages of the analytic method are computational speed and flexibility, making it ideal for a microcomputer. -from Author
DOE Office of Scientific and Technical Information (OSTI.GOV)
MacKinnon, Robert J.; Kuhlman, Kristopher L
2016-05-01
We present a method of control variates for calculating improved estimates of mean performance quantities of interest, E(PQI), computed from Monte Carlo probabilistic simulations. An example of a PQI is the concentration of a contaminant at a particular location in a problem domain computed from simulations of transport in porous media. To simplify the presentation, the method is described in the setting of a one-dimensional elliptical model problem involving a single uncertain parameter represented by a probability distribution. The approach can be easily implemented for more complex problems involving multiple uncertain parameters, and in particular for application to probabilistic performance assessment of deep geologic nuclear waste repository systems. Numerical results indicate the method can produce estimates of E(PQI) having superior accuracy on coarser meshes and reduce the required number of simulations needed to achieve an acceptable estimate.
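A self-contained toy version of the control variate idea: estimate E[exp(X)] for X ~ U(0,1) using X itself, whose mean is known exactly, as the control. The integrand is invented for the demonstration; in the repository setting, the control would be a coarse-model or otherwise cheaply computed correlate of the PQI.

```python
import numpy as np

rng = np.random.default_rng(13)
n = 10_000
x = rng.uniform(size=n)

f = np.exp(x)          # stand-in PQI samples; exact mean is e - 1
h = x                  # control variate with known mean E[X] = 0.5

beta = np.cov(f, h)[0, 1] / np.var(h)        # near-optimal coefficient
estimate = f.mean() - beta * (h.mean() - 0.5)
resid = f - beta * (h - 0.5)                 # variance-reduced samples

print("plain MC       : %.5f (se %.5f)" % (f.mean(), f.std() / np.sqrt(n)))
print("control variate: %.5f (se %.5f)" % (estimate, resid.std() / np.sqrt(n)))
print("exact          : %.5f" % (np.e - 1.0))
```

The stronger the correlation between the PQI and the control, the larger the variance reduction, which is how the method cuts the number of simulations needed for a given accuracy.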
Lexical Collocation and Topic Occurrence in Well-Written Editorials: A Study in Form.
ERIC Educational Resources Information Center
Addison, James C., Jr.
To explore the concept of lexical collocation, or relationships between words, a study was conducted based on three assumptions: (1) that a text structure for a unit of discourse was analogous to that existing at the level of the sentence, (2) that such a text form could be discovered if a large enough sample of generically similar texts was…
ERIC Educational Resources Information Center
Alqarni, Ibrahim R.
2017-01-01
This study investigates the impact that study in Australia has on the lexical knowledge of Saudi Arabian students. It focuses on: 1) the effects that the length of study in Australia has on the acquisition of lexical collocations, as reflected by lexical knowledge tests, and 2) whether there is a significant gender difference in the acquisition of…
Data-Driven Learning and the Acquisition of Italian Collocations: From Design to Student Evaluation
ERIC Educational Resources Information Center
Forti, Luciana
2017-01-01
This paper looks at how corpus data was used to design an Italian as an L2 language learning programme and how it was evaluated by students. The study focuses on the acquisition of Italian verb-noun collocations by Chinese native students attending a ten month long Italian language course before enrolling at an Italian university. It describes how…
Perez-Cruz, Pedro E.; dos Santos, Renata; Silva, Thiago Buosi; Crovador, Camila Souza; Nascimento, Maria Salete de Angelis; Hall, Stacy; Fajardo, Julieta; Bruera, Eduardo; Hui, David
2014-01-01
Context: Survival prognostication is important at the end of life. The accuracy of clinician prediction of survival (CPS) over time has not been well characterized. Objectives: To examine changes in prognostication accuracy during the last 14 days of life in a cohort of patients with advanced cancer admitted to two acute palliative care units, and to compare the accuracy of the temporal and probabilistic approaches. Methods: Physicians and nurses prognosticated survival daily for cancer patients in two hospitals until death/discharge using two prognostic approaches: temporal and probabilistic. We assessed accuracy for each method daily during the last 14 days of life, comparing accuracy at day −14 (baseline) with accuracy at each time point using a test of proportions. Results: A total of 6718 temporal and 6621 probabilistic estimations were provided by physicians and nurses for 311 patients. Median (interquartile range) survival was 8 (4, 20) days. Temporal CPS had low accuracy (10-40%) and did not change over time. In contrast, probabilistic CPS was significantly more accurate (p<.05 at each time point), but its accuracy decreased close to death. Conclusion: Probabilistic CPS was consistently more accurate than temporal CPS over the last 14 days of life; however, its accuracy decreased as patients approached death. Our findings suggest that better tools to predict impending death are necessary. PMID:24746583
NASA Technical Reports Server (NTRS)
Fayssal, Safie; Weldon, Danny
2008-01-01
The United States National Aeronautics and Space Administration (NASA) is in the midst of a space exploration program called Constellation to send crew and cargo to the International Space Station, to the moon, and beyond. As part of the Constellation program, a new launch vehicle, Ares I, is being developed by NASA Marshall Space Flight Center. Designing a launch vehicle with high reliability and increased safety requires a significant effort in understanding design variability and design uncertainty at the various levels of the design (system, element, subsystem, component, etc.) and throughout the various design phases (conceptual, preliminary design, etc.). In a previous paper [1] we discussed a probabilistic functional failure analysis approach intended mainly to support system requirements definition, system design, and element design during the early design phases. This paper provides an overview of the application of probabilistic engineering methods to support the detailed subsystem/component design and development as part of the "Design for Reliability and Safety" approach for the new Ares I Launch Vehicle. Specifically, the paper discusses probabilistic engineering design analysis cases that had major impact on the design and manufacturing of the Space Shuttle hardware. The cases represent important lessons learned from the Space Shuttle Program and clearly demonstrate the significance of probabilistic engineering analysis in better understanding design deficiencies and identifying potential design improvements for Ares I. The paper also discusses the probabilistic functional failure analysis approach applied during the early design phases of Ares I and the forward plans for probabilistic design analysis in the detailed design and development phases.
Martin, Sébastien; Troccaz, Jocelyne; Daanen, Vincent
2010-04-01
The authors present a fully automatic algorithm for the segmentation of the prostate in three-dimensional magnetic resonance (MR) images. The approach requires the use of an anatomical atlas, which is built by computing transformation fields mapping a set of manually segmented images to a common reference. These transformation fields are then applied to the manually segmented structures of the training set in order to get a probabilistic map on the atlas. The segmentation is then realized through a two-stage procedure. In the first stage, the processed image is registered to the probabilistic atlas. Subsequently, a probabilistic segmentation is obtained by mapping the probabilistic map of the atlas to the patient's anatomy. In the second stage, a deformable surface evolves toward the prostate boundaries by merging information coming from the probabilistic segmentation, an image feature model and a statistical shape model. During the evolution of the surface, the probabilistic segmentation allows the introduction of a spatial constraint that prevents the deformable surface from leaking into an unlikely configuration. The proposed method is evaluated on 36 exams that were manually segmented by a single expert. A median Dice similarity coefficient of 0.86 and an average surface error of 2.41 mm are achieved. By merging prior knowledge, the presented method achieves a robust and completely automatic segmentation of the prostate in MR images. Results show that the use of a spatial constraint is useful for increasing the robustness of the deformable model compared with a deformable surface driven only by an image appearance model.
General Purpose Probabilistic Programming Platform with Effective Stochastic Inference
2018-04-01
Table of contents (excerpt): Venture; BayesDB; Picture; MetaProb; Methods, Assumptions, and Procedures; Results and Discussion. The methods section outlines the research approach. The results and discussion section gives representative quantitative and qualitative results ... modeling via CrossCat, a probabilistic method that emulates many of the judgment calls ordinarily made by a human data analyst. This AI assistance ...
Superposition-Based Analysis of First-Order Probabilistic Timed Automata
NASA Astrophysics Data System (ADS)
Fietzke, Arnaud; Hermanns, Holger; Weidenbach, Christoph
This paper discusses the analysis of first-order probabilistic timed automata (FPTA) by a combination of hierarchic first-order superposition-based theorem proving and probabilistic model checking. We develop the overall semantics of FPTAs and prove soundness and completeness of our method for reachability properties. Basically, we decompose FPTAs into their time plus first-order logic aspects on the one hand, and their probabilistic aspects on the other hand. Then we exploit the time plus first-order behavior by hierarchic superposition over linear arithmetic. The result of this analysis is the basis for the construction of a reachability equivalent (to the original FPTA) probabilistic timed automaton to which probabilistic model checking is finally applied. The hierarchic superposition calculus required for the analysis is sound and complete on the first-order formulas generated from FPTAs. It even works well in practice. We illustrate the potential behind it with a real-life DHCP protocol example, which we analyze by means of tool chain support.
NASA Astrophysics Data System (ADS)
Klügel, J.
2006-12-01
Deterministic scenario-based seismic hazard analysis has a long tradition in earthquake engineering for developing the design basis of critical infrastructures like dams, transport infrastructures, chemical plants and nuclear power plants. For many applications besides the design of infrastructures, it is of interest to assess the efficiency of the design measures taken. These applications require a method allowing a meaningful quantitative risk analysis. A new method for probabilistic scenario-based seismic risk analysis has been developed, based on a probabilistic extension of proven deterministic methods like the MCE methodology. The input data required for the method are entirely based on the information necessary to perform any meaningful seismic hazard analysis. The method follows the probabilistic risk analysis approach common in nuclear technology, developed originally by Kaplan & Garrick (1981). It is based on (1) a classification of earthquake events into different size classes (by magnitude), (2) the evaluation of the frequency of occurrence of events assigned to the different classes (frequency of initiating events), (3) the development of bounding critical scenarios assigned to each class based on the solution of an optimization problem, and (4) the evaluation of the conditional probability of exceedance of critical design parameters (vulnerability analysis). The advantage of the method in comparison with traditional PSHA lies in (1) its flexibility, allowing different probabilistic models of earthquake occurrence to be used and advanced physical models to be incorporated into the analysis, (2) the mathematically consistent treatment of uncertainties, and (3) the explicit consideration of the lifetime of the critical structure as a criterion for formulating different risk goals. The method was applied to evaluate the risk of production interruption losses of a nuclear power plant during its residual lifetime.
A probabilistic approach to aircraft design emphasizing stability and control uncertainties
NASA Astrophysics Data System (ADS)
Delaurentis, Daniel Andrew
In order to address identified deficiencies in current approaches to aerospace systems design, a new method has been developed. This new method for design is based on the premise that design is a decision-making activity, and that deterministic analysis and synthesis can lead to poor or misguided decision making. This is due to a lack of disciplinary knowledge of sufficient fidelity about the product, to the presence of uncertainty at multiple levels of the aircraft design hierarchy, and to a failure to focus on overall affordability metrics as measures of goodness. Design solutions are desired which are robust to uncertainty and are based on the maximum knowledge possible. The new method represents advances in the two following general areas. 1. Design models and uncertainty. The research performed completes a transition from a deterministic design representation to a probabilistic one through a modeling of design uncertainty at multiple levels of the aircraft design hierarchy, including: (1) consistent, traceable uncertainty classification and representation; (2) a concise mathematical statement of the Probabilistic Robust Design problem; (3) variants of the Cumulative Distribution Functions (CDFs) as decision functions for Robust Design; (4) probabilistic sensitivities which identify the most influential sources of variability. 2. Multidisciplinary analysis and design. Embedded in the probabilistic methodology is a new approach for multidisciplinary design analysis and optimization (MDA/O), employing disciplinary analysis approximations formed through statistical experimentation and regression. These approximation models are a function of design variables common to the system level as well as other disciplines. For aircraft, it is proposed that synthesis/sizing is the proper avenue for integrating multiple disciplines. Research hypotheses are translated into a structured method, which is subsequently tested for validity. Specifically, the implementation involves the study of the relaxed static stability technology for a supersonic commercial transport aircraft. The probabilistic robust design method is exercised, resulting in a series of robust design solutions based on different interpretations of "robustness". Insightful results are obtained, and the ability of the method to expose trends in the design space is noted as a key advantage.
A novel probabilistic framework for event-based speech recognition
NASA Astrophysics Data System (ADS)
Juneja, Amit; Espy-Wilson, Carol
2003-10-01
One of the reasons for the unsatisfactory performance of state-of-the-art automatic speech recognition (ASR) systems is the inferior acoustic modeling of low-level acoustic-phonetic information in the speech signal. An acoustic-phonetic approach to ASR, on the other hand, explicitly targets linguistic information in the speech signal, but such a system for continuous speech recognition (CSR) is not known to exist. A probabilistic and statistical framework for CSR is proposed, based on the idea of representing speech sounds by bundles of binary-valued articulatory phonetic features. Multiple probabilistic sequences of linguistically motivated landmarks are obtained using binary classifiers of the manner phonetic features (syllabic, sonorant and continuant) and the knowledge-based acoustic parameters (APs) that are acoustic correlates of those features. The landmarks are then used for the extraction of knowledge-based APs for source and place phonetic features and their binary classification. Probabilistic landmark sequences are constrained using manner-class language models for isolated or connected word recognition. The proposed method could overcome the disadvantages encountered by the early acoustic-phonetic knowledge-based systems that led the ASR community to switch to systems highly dependent on statistical pattern analysis methods and probabilistic language or grammar models.
Scalable collaborative risk management technology for complex critical systems
NASA Technical Reports Server (NTRS)
Campbell, Scott; Torgerson, Leigh; Burleigh, Scott; Feather, Martin S.; Kiper, James D.
2004-01-01
We describe here our project and plans to develop methods, software tools, and infrastructure tools to address challenges relating to geographically distributed software development. Specifically, this work is creating an infrastructure that supports applications working over distributed geographical and organizational domains and is using this infrastructure to develop a tool that supports project development using risk management and analysis techniques where the participants are not collocated.
Joint Probabilistic Projection of Female and Male Life Expectancy
Raftery, Adrian E.; Lalic, Nevena; Gerland, Patrick
2014-01-01
BACKGROUND The United Nations (UN) produces population projections for all countries every two years. These are used by international organizations, governments, the private sector and researchers for policy planning, for monitoring development goals, as inputs to economic and environmental models, and for social and health research. The UN is considering producing fully probabilistic population projections, for which joint probabilistic projections of future female and male life expectancy at birth are needed. OBJECTIVE We propose a methodology for obtaining joint probabilistic projections of female and male life expectancy at birth. METHODS We first project female life expectancy using a one-sex method for probabilistic projection of life expectancy. We then project the gap between female and male life expectancy. We propose an autoregressive model for the gap in a future time period for a particular country, which is a function of female life expectancy and a t-distributed random perturbation. This method takes into account mortality data limitations, is comparable across countries, and accounts for shocks. We estimate all parameters based on life expectancy estimates for 1950–2010. The methods are implemented in the bayesLife and bayesPop R packages. RESULTS We evaluated our model using out-of-sample projections for the period 1995–2010, and found that our method performed better than several possible alternatives. CONCLUSIONS We find that the average gap between female and male life expectancy has been increasing for female life expectancy below 75, and decreasing for female life expectancy above 75. Our projections of the gap are lower than the UN’s 2008 projections for most countries and so lead to higher projections of male life expectancy. PMID:25580082
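A toy simulation in the spirit of the gap model described above: the gap follows an autoregressive pull toward a level that depends on female life expectancy (widening below 75, narrowing above it), with t-distributed shocks. All coefficients are invented for illustration; the actual model and estimates live in the bayesLife package.

```python
import numpy as np

rng = np.random.default_rng(17)

def project_gap(gap0, female_le, phi=0.9, sd=0.4, df=4, n_traj=2000):
    """Illustrative AR(1) projection of the female-male life expectancy gap."""
    gaps = np.empty((n_traj, len(female_le)))
    g = np.full(n_traj, gap0)
    for t, le in enumerate(female_le):
        # Level rises with female LE below 75 and declines above it (toy numbers).
        target = 5.0 + (0.10 if le < 75.0 else -0.05) * (le - 75.0)
        g = target + phi * (g - target) + sd * rng.standard_t(df, n_traj)
        gaps[:, t] = g
    return gaps

female_le = np.linspace(80.0, 86.0, 18)        # hypothetical projected female LE
gaps = project_gap(5.5, female_le)
print("median final gap: %.2f years" % np.median(gaps[:, -1]))
print("80%% interval:", np.round(np.percentile(gaps[:, -1], [10, 90]), 2))
```

Subtracting each simulated gap trajectory from the corresponding female life expectancy trajectory is what yields the joint probabilistic projection of male life expectancy.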
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yilmaz, Şeyda, E-mail: seydayilmaz@ktu.edu.tr; Bayrak, Erdem, E-mail: erdmbyrk@gmail.com; Bayrak, Yusuf, E-mail: bayrak@ktu.edu.tr
In this study we examined and compared three different probabilistic distribution methods for determining the most suitable model in probabilistic assessment of earthquake hazards. We analyzed a reliable homogeneous earthquake catalogue for the period 1900-2015 for magnitudes M ≥ 6.0 and estimated the probabilistic seismic hazard in the North Anatolian Fault zone (39°-41° N, 30°-40° E) using three distribution methods, namely the Weibull distribution, the Frechet distribution and the three-parameter Weibull distribution. The suitability of the distribution parameters was evaluated with the Kolmogorov-Smirnov (K-S) goodness-of-fit test. We also compared the estimated cumulative probability and the conditional probabilities of occurrence of earthquakes for different elapsed times using these three distribution methods. We used Easyfit and Matlab software to calculate the distribution parameters and plotted the conditional probability curves. We concluded that the Weibull distribution method was more suitable than the other distribution methods in this region.
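The model-comparison step can be reproduced with SciPy, where invweibull plays the role of the Frechet distribution: fit each candidate to the catalogue's inter-event times and compare K-S statistics. The inter-event times below are synthetic stand-ins; note also that K-S p-values are only approximate when the parameters are fitted from the same data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(19)
# Stand-in inter-event times (years) between M >= 6.0 earthquakes.
times = rng.weibull(1.4, 60) * 8.0

candidates = {
    "Weibull (2-par)": (stats.weibull_min, dict(floc=0)),
    "Frechet":         (stats.invweibull, dict(floc=0)),
    "Weibull (3-par)": (stats.weibull_min, {}),
}
for name, (dist, kw) in candidates.items():
    params = dist.fit(times, **kw)
    ks = stats.kstest(times, dist.cdf, args=params)
    print("%-15s K-S D = %.3f, p = %.3f" % (name, ks.statistic, ks.pvalue))
```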
NASA Technical Reports Server (NTRS)
1992-01-01
The technical effort and computer code developed during the first year are summarized. Several formulations for Probabilistic Finite Element Analysis (PFEA) are described, with emphasis on the selected formulation. The strategies being implemented in the first-version computer code to perform linear, elastic PFEA are described. The results of a series of select Space Shuttle Main Engine (SSME) component surveys are presented. These results identify the critical components and provide the information necessary for probabilistic structural analysis.
Incremental dynamical downscaling for probabilistic analysis based on multiple GCM projections
NASA Astrophysics Data System (ADS)
Wakazuki, Y.
2015-12-01
A dynamical downscaling method for probabilistic regional-scale climate change projections was developed to cover the uncertainty of multiple general circulation model (GCM) climate simulations. The climatological increments (future minus present climate states) estimated from GCM simulation results were statistically analyzed using singular vector decomposition. Both positive and negative perturbations from the ensemble mean, with magnitudes of their standard deviations, were extracted and added to the ensemble mean of the climatological increments. The resulting multiple modal increments were used to create multiple modal lateral boundary conditions for the future-climate regional climate model (RCM) simulations by adding them to objective analysis data. This data handling can be regarded as an advance on the pseudo-global-warming (PGW) method previously developed by Kimura and Kitoh (2007). The incremental handling of GCM simulations realizes approximate probabilistic climate change projections with a smaller number of RCM simulations. Three values of a climatological variable simulated by RCMs for each mode were used to estimate the response to the perturbation of that mode. For the probabilistic analysis, climatological variables of RCMs were assumed to respond linearly to the multiple modal perturbations, although non-linearity was seen for local-scale rainfall. Temperature probabilities could be estimated with two-mode perturbation simulations, for which the number of RCM simulations for the future climate is five. Local-scale rainfall, on the other hand, needed four-mode simulations, for which the number of RCM simulations is nine. The probabilistic method is expected to be used for regional-scale climate change impact assessment in the future.
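To make the modal-perturbation step concrete, here is a minimal sketch of extracting the leading modes of inter-GCM spread in the increments and building the 1 + 2*(number of modes) boundary-condition states; the ensemble size, grid size, and synthetic increment fields are assumptions for illustration, not data from the study.

```python
import numpy as np

# Sketch: leading modes of inter-GCM spread in climatological increments,
# used to build perturbed lateral-boundary states (synthetic stand-in data).
rng = np.random.default_rng(5)
n_gcm, n_grid = 20, 500
increments = rng.normal(1.5, 0.5, (n_gcm, n_grid))    # future-minus-present fields

mean_inc = increments.mean(axis=0)
anom = increments - mean_inc                          # ensemble anomalies
U, s, Vt = np.linalg.svd(anom, full_matrices=False)   # modes of the spread

n_modes = 2
states = [mean_inc]                                   # ensemble-mean increment
for m in range(n_modes):
    sigma_mode = s[m] / np.sqrt(n_gcm - 1)            # std dev carried by mode m
    for sign in (+1.0, -1.0):
        states.append(mean_inc + sign * sigma_mode * Vt[m])

print(len(states), "boundary-condition increments")   # 1 + 2*n_modes = 5 RCM runs
```

With two modes this reproduces the five future-climate RCM runs mentioned in the abstract; four modes would give nine.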
Probabilistic structural analysis using a general purpose finite element program
NASA Astrophysics Data System (ADS)
Riha, D. S.; Millwater, H. R.; Thacker, B. H.
1992-07-01
This paper presents an accurate and efficient method to predict the probabilistic response of structural quantities, such as stress, displacement, natural frequencies, and buckling loads, by combining the capabilities of MSC/NASTRAN, including design sensitivity analysis, and fast probability integration. Two probabilistic structural analysis examples have been performed and verified by comparison with Monte Carlo simulation of the analytical solution. The first example consists of a cantilevered plate with several point loads. The second example is a probabilistic buckling analysis of a simply supported composite plate under in-plane loading. The coupling of MSC/NASTRAN and fast probability integration is shown to be orders of magnitude more efficient than Monte Carlo simulation, with excellent accuracy.
Structural system reliability calculation using a probabilistic fault tree analysis method
NASA Technical Reports Server (NTRS)
Torng, T. Y.; Wu, Y.-T.; Millwater, H. R.
1992-01-01
The development of a new probabilistic fault tree analysis (PFTA) method for calculating structural system reliability is summarized. The proposed PFTA procedure includes: developing a fault tree to represent the complex structural system, constructing an approximation function for each bottom event, determining a dominant sampling sequence for all bottom events, and calculating the system reliability using an adaptive importance sampling method. PFTA is suitable for complicated structural problems that require computationally intensive calculations. A computer program has been developed to implement the PFTA.
Solving the hypersingular boundary integral equation for the Burton and Miller formulation.
Langrenne, Christophe; Garcia, Alexandre; Bonnet, Marc
2015-11-01
This paper presents an easy numerical implementation of the Burton and Miller (BM) formulation, where the hypersingular Helmholtz integral is regularized by identities from the associated Laplace equation, thus needing only the evaluation of weakly singular integrals. The Helmholtz equation and its normal derivative are combined directly; the combination is not used at edge or corner collocation nodes when the surface is not smooth. The hypersingular operators arising in this process are regularized and then evaluated by an indirect procedure based on discretized versions of the Calderón identities linking the integral operators for associated Laplace problems. The method is valid for acoustic radiation and scattering problems involving arbitrarily shaped three-dimensional bodies. Unlike other approaches using direct evaluation of hypersingular integrals, collocation points still coincide with mesh nodes, as is usual when using conforming elements. Using higher-order shape functions (with the boundary element method model size kept fixed) reduces the overall numerical integration effort while increasing the solution accuracy. To reduce the condition number of the resulting BM formulation at low frequencies, a regularized version α = ik/(k² + λ) of the classical BM coupling factor α = i/k is proposed. Comparisons with the Combined Helmholtz Integral Equation Formulation (CHIEF) method of Schenck are made for four example configurations, two of them featuring non-smooth surfaces.
PCEMCAN - Probabilistic Ceramic Matrix Composites Analyzer: User's Guide, Version 1.0
NASA Technical Reports Server (NTRS)
Shah, Ashwin R.; Mital, Subodh K.; Murthy, Pappu L. N.
1998-01-01
PCEMCAN (Probabilistic CEramic Matrix Composites ANalyzer) is an integrated computer code developed at NASA Lewis Research Center that simulates uncertainties associated with the constituent properties, manufacturing process, and geometric parameters of fiber-reinforced ceramic matrix composites and quantifies their random thermomechanical behavior. The PCEMCAN code can perform deterministic as well as probabilistic analyses to predict thermomechanical properties. This User's Guide details the step-by-step procedure to create an input file and update/modify the material properties database required to run the PCEMCAN computer code. An overview of the geometric conventions, micromechanical unit cell, nonlinear constitutive relationship, and probabilistic simulation methodology is also provided in the manual. Fast probability integration as well as Monte Carlo simulation methods are available for the uncertainty simulation. Various options available in the code to simulate probabilistic material properties and quantify the sensitivity of the primitive random variables are described. Deterministic as well as probabilistic results are illustrated using demonstration problems. For a detailed theoretical description of the deterministic and probabilistic analyses, the user is referred to the companion documents "Computational Simulation of Continuous Fiber-Reinforced Ceramic Matrix Composite Behavior," NASA TP-3602, 1996, and "Probabilistic Micromechanics and Macromechanics for Ceramic Matrix Composites," NASA TM-4766, June 1997.
Proposal of a method for evaluating tsunami risk using response-surface methodology
NASA Astrophysics Data System (ADS)
Fukutani, Y.
2017-12-01
Information on probabilistic tsunami inundation hazards is needed to define and evaluate tsunami risk. Several methods for calculating these hazards have been proposed (e.g. Løvholt et al. (2012), Thio (2012), Fukutani et al. (2014), Goda et al. (2015)). However, these methods are computationally expensive because they require many tsunami numerical simulations, and they therefore lack versatility. In this study, we propose a simpler method for tsunami risk evaluation using response-surface methodology. Kotani et al. (2016) proposed an evaluation method for the probabilistic distribution of tsunami wave height using response-surface methodology. We expanded their study and developed a probabilistic distribution of tsunami inundation depth. We set the depth (x1) and the slip (x2) of an earthquake fault as explanatory variables and tsunami inundation depth (y) as the object variable. Tsunami risk can then be evaluated by conducting a Monte Carlo simulation, assuming that the generation probability of an earthquake follows a Poisson distribution, that the probability distribution of tsunami inundation depth follows the distribution derived from the response surface, and that the damage probability of a target follows a log-normal distribution. We applied the proposed method to a wood building located on the coast of Tokyo Bay. We implemented a regression analysis based on the results of 25 tsunami numerical calculations and developed a response surface, defined as y = a*x1 + b*x2 + c (a = 0.2615, b = 3.1763, c = -1.1802). We assumed appropriate probabilistic distributions for earthquake generation, inundation height, and vulnerability, and based on these we conducted Monte Carlo simulations of 1,000,000 years. We found that the expected damage probability of the studied wood building is 22.5%, assuming that an earthquake occurs. The proposed method is therefore a useful and simple way to evaluate tsunami risk using a response surface and Monte Carlo simulation without conducting multiple tsunami numerical simulations.
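A minimal sketch of the Monte Carlo step described above, using the reported response-surface coefficients; the earthquake rate, fault-parameter distributions, and fragility parameters are illustrative assumptions, not the values used in the study.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

# Response surface reported in the abstract: inundation depth y = a*x1 + b*x2 + c.
a, b, c = 0.2615, 3.1763, -1.1802

# Illustrative (assumed) occurrence rate and fault-parameter distributions.
n_events = rng.poisson(0.01 * 1_000_000)            # Poisson occurrence, 10^6 years
x1 = rng.uniform(5.0, 20.0, n_events)               # fault depth [km], assumed range
x2 = rng.lognormal(np.log(0.5), 0.5, n_events)      # fault slip [m], assumed

y = np.clip(a * x1 + b * x2 + c, 0.0, None)         # inundation depth at the site [m]

# Assumed log-normal fragility for a wood building: median 2 m, log-std 0.5.
median, beta = 2.0, 0.5
p_damage = np.where(y > 0,
                    norm.cdf(np.log(np.maximum(y, 1e-9) / median) / beta),
                    0.0)

print("expected damage probability given an earthquake:", p_damage.mean())
```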
76 FR 28102 - Notice of Issuance of Regulatory Guide
Federal Register 2010, 2011, 2012, 2013, 2014
2011-05-13
..., Probabilistic Risk Assessment Branch, Division of Risk Analysis, Office of Nuclear Regulatory Research, U.S... approaches and methods (whether quantitative or qualitative, deterministic or probabilistic), data, and... uses in evaluating specific problems or postulated accidents, and data that the staff needs in its...
Campbell, Kieran R; Yau, Christopher
2017-03-15
Modeling bifurcations in single-cell transcriptomics data has become an increasingly popular field of research. Several methods have been proposed to infer bifurcation structure from such data, but all rely on heuristic non-probabilistic inference. Here we propose the first generative, fully probabilistic model for such inference, based on a Bayesian hierarchical mixture of factor analyzers. Our model exhibits competitive performance on large datasets despite implementing full Markov chain Monte Carlo sampling, and its unique hierarchical prior structure enables automatic determination of genes driving the bifurcation process. We additionally propose an Empirical-Bayes-like extension that deals with the high levels of zero-inflation in single-cell RNA-seq data and quantify when such models are useful. We apply our model to both real and simulated single-cell gene expression data and compare the results to existing pseudotime methods. Finally, we discuss both the merits and weaknesses of such a unified, probabilistic approach in the context of practical bioinformatics analyses.
Probabilistic Analysis and Density Parameter Estimation Within Nessus
NASA Astrophysics Data System (ADS)
Godines, Cody R.; Manteufel, Randall D.
2002-12-01
This NASA educational grant has the goal of promoting probabilistic analysis methods to undergraduate and graduate UTSA engineering students. Two undergraduate-level and one graduate-level course were offered at UTSA providing a large number of students exposure to and experience in probabilistic techniques. The grant provided two research engineers from Southwest Research Institute the opportunity to teach these courses at UTSA, thereby exposing a large number of students to practical applications of probabilistic methods and state-of-the-art computational methods. In classroom activities, students were introduced to the NESSUS computer program, which embodies many algorithms in probabilistic simulation and reliability analysis. Because the NESSUS program is used at UTSA in both student research projects and selected courses, a student version of a NESSUS manual has been revised and improved, with additional example problems being added to expand the scope of the example application problems. This report documents two research accomplishments in the integration of a new sampling algorithm into NESSUS and in the testing of the new algorithm. The new Latin Hypercube Sampling (LHS) subroutines use the latest NESSUS input file format and specific files for writing output. The LHS subroutines are called out early in the program so that no unnecessary calculations are performed. Proper correlation between sets of multidimensional coordinates can be obtained by using NESSUS' LHS capabilities. Finally, two types of correlation are written to the appropriate output file. The program enhancement was tested by repeatedly estimating the mean, standard deviation, and 99th percentile of four different responses using Monte Carlo (MC) and LHS. These test cases, put forth by the Society of Automotive Engineers, are used to compare probabilistic methods. For all test cases, it is shown that LHS has a lower estimation error than MC when used to estimate the mean, standard deviation, and 99th percentile of the four responses at the 50 percent confidence level and using the same number of response evaluations for each method. In addition, LHS requires fewer calculations than MC in order to be 99.7 percent confident that a single mean, standard deviation, or 99th percentile estimate will be within at most 3 percent of the true value of each parameter. Again, this is shown for all of the test cases studied. For that reason it can be said that NESSUS is an important reliability tool that has a variety of sound probabilistic methods a user can employ; furthermore, the newest LHS module is a valuable new enhancement of the program.
Probabilistic Analysis and Density Parameter Estimation Within Nessus
NASA Technical Reports Server (NTRS)
Godines, Cody R.; Manteufel, Randall D.; Chamis, Christos C. (Technical Monitor)
2002-01-01
This NASA educational grant has the goal of promoting probabilistic analysis methods to undergraduate and graduate UTSA engineering students. Two undergraduate-level and one graduate-level course were offered at UTSA providing a large number of students exposure to and experience in probabilistic techniques. The grant provided two research engineers from Southwest Research Institute the opportunity to teach these courses at UTSA, thereby exposing a large number of students to practical applications of probabilistic methods and state-of-the-art computational methods. In classroom activities, students were introduced to the NESSUS computer program, which embodies many algorithms in probabilistic simulation and reliability analysis. Because the NESSUS program is used at UTSA in both student research projects and selected courses, a student version of a NESSUS manual has been revised and improved, with additional example problems being added to expand the scope of the example application problems. This report documents two research accomplishments in the integration of a new sampling algorithm into NESSUS and in the testing of the new algorithm. The new Latin Hypercube Sampling (LHS) subroutines use the latest NESSUS input file format and specific files for writing output. The LHS subroutines are called out early in the program so that no unnecessary calculations are performed. Proper correlation between sets of multidimensional coordinates can be obtained by using NESSUS' LHS capabilities. Finally, two types of correlation are written to the appropriate output file. The program enhancement was tested by repeatedly estimating the mean, standard deviation, and 99th percentile of four different responses using Monte Carlo (MC) and LHS. These test cases, put forth by the Society of Automotive Engineers, are used to compare probabilistic methods. For all test cases, it is shown that LHS has a lower estimation error than MC when used to estimate the mean, standard deviation, and 99th percentile of the four responses at the 50 percent confidence level and using the same number of response evaluations for each method. In addition, LHS requires fewer calculations than MC in order to be 99.7 percent confident that a single mean, standard deviation, or 99th percentile estimate will be within at most 3 percent of the true value of each parameter. Again, this is shown for all of the test cases studied. For that reason it can be said that NESSUS is an important reliability tool that has a variety of sound probabilistic methods a user can employ; furthermore, the newest LHS module is a valuable new enhancement of the program.
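As a small illustration of the comparison reported above, the following sketch contrasts the spread of mean estimates from plain Monte Carlo and Latin hypercube sampling on a toy response; the response function and sample sizes are assumptions, not the SAE test cases.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)

def response(u):
    # Toy smooth response of two standard-normal inputs (not an SAE test case).
    x = norm.ppf(u)                      # map uniform samples to standard normals
    return x[:, 0] ** 2 + 0.5 * x[:, 1]

def mc_sample(n, d):
    return rng.random((n, d))            # plain Monte Carlo on the unit hypercube

def lhs_sample(n, d):
    # Latin hypercube: one point per equal-probability stratum per dimension,
    # strata paired across dimensions by independent random permutations.
    u = np.empty((n, d))
    for j in range(d):
        u[:, j] = rng.permutation((np.arange(n) + rng.random(n)) / n)
    return u

n, d, reps = 100, 2, 200
mc_means = [response(mc_sample(n, d)).mean() for _ in range(reps)]
lhs_means = [response(lhs_sample(n, d)).mean() for _ in range(reps)]
print("MC  std of mean estimate :", np.std(mc_means))
print("LHS std of mean estimate :", np.std(lhs_means))   # noticeably smaller
```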
New decoding methods of interleaved burst error-correcting codes
NASA Astrophysics Data System (ADS)
Nakano, Y.; Kasahara, M.; Namekawa, T.
1983-04-01
A probabilistic method of single-burst error correction, using the syndrome correlation of the subcodes which constitute the interleaved code, is presented. This method makes it possible to realize a high burst-error-correction capability with less decoding delay. By generalizing this method it is possible to obtain a probabilistic method of multiple (m-fold) burst error correction. After estimating the burst error positions using the syndrome correlation of subcodes which are interleaved m-fold burst-error-detecting codes, this second method corrects erasure errors in each subcode and m-fold burst errors. The performance of these two methods is analyzed via computer simulation, and their effectiveness is demonstrated.
Weickert, Thomas W.; Goldberg, Terry E.; Egan, Michael F.; Apud, Jose A.; Meeter, Martijn; Myers, Catherine E.; Gluck, Mark A; Weinberger, Daniel R.
2010-01-01
Background While patients with schizophrenia display an overall probabilistic category learning performance deficit, the extent to which this deficit occurs in unaffected siblings of patients with schizophrenia is unknown. There are also discrepant findings regarding probabilistic category learning acquisition rate and performance in patients with schizophrenia. Methods A probabilistic category learning test was administered to 108 patients with schizophrenia, 82 unaffected siblings, and 121 healthy participants. Results Patients with schizophrenia displayed significant differences from their unaffected siblings and healthy participants with respect to probabilistic category learning acquisition rates. Although siblings on the whole failed to differ from healthy participants on strategy and quantitative indices of overall performance and learning acquisition, application of a revised learning criterion enabling classification into good and poor learners based on individual learning curves revealed significant differences between percentages of sibling and healthy poor learners: healthy (13.2%), siblings (34.1%), patients (48.1%), yielding a moderate relative risk. Conclusions These results clarify previous discrepant findings pertaining to probabilistic category learning acquisition rate in schizophrenia and provide the first evidence for the relative risk of probabilistic category learning abnormalities in unaffected siblings of patients with schizophrenia, supporting genetic underpinnings of probabilistic category learning deficits in schizophrenia. These findings also raise questions regarding the contribution of antipsychotic medication to the probabilistic category learning deficit in schizophrenia. The distinction between good and poor learning may be used to inform genetic studies designed to detect schizophrenia risk alleles. PMID:20172502
Probabilistic liquefaction triggering based on the cone penetration test
Moss, R.E.S.; Seed, R.B.; Kayen, R.E.; Stewart, J.P.; Tokimatsu, K.
2005-01-01
Performance-based earthquake engineering requires a probabilistic treatment of potential failure modes in order to accurately quantify the overall stability of the system. This paper is a summary of the application portions of the probabilistic liquefaction triggering correlations recently proposed by Moss and co-workers. To enable probabilistic treatment of liquefaction triggering, the variables comprising the seismic load and the liquefaction resistance were treated as inherently uncertain. Supporting data from an extensive Cone Penetration Test (CPT)-based liquefaction case history database were used to develop a probabilistic correlation. The methods used to measure the uncertainty of the load and resistance variables, how the interactions of these variables were treated using Bayesian updating, and how reliability analysis was applied to produce curves of equal probability of liquefaction are presented. The normalization for effective overburden stress, the magnitude-correlated duration weighting factor, and the non-linear shear mass participation factor used are also discussed.
Probabilistic thinking and death anxiety: a terror management based study.
Hayslip, Bert; Schuler, Eric R; Page, Kyle S; Carver, Kellye S
2014-01-01
Terror Management Theory has been utilized to understand how death can change behavioral outcomes and social dynamics. One area that is not well researched is why individuals willingly engage in risky behavior that could accelerate their mortality. One method of distancing oneself from a potential life-threatening outcome when engaging in risky behaviors is stacking probability in favor of the event not occurring, termed probabilistic thinking. The present study examines the creation and psychometric properties of the Probabilistic Thinking scale in a sample of young, middle-aged, and older adults (n = 472). The scale demonstrated adequate internal consistency reliability for each of the four subscales, excellent overall internal consistency, and good construct validity regarding relationships with measures of death anxiety. Reliable age and gender effects in probabilistic thinking were also observed. The relationship of probabilistic thinking as part of a cultural buffer against death anxiety is discussed, as well as its implications for Terror Management research.
Development of probabilistic emission inventories of air toxics for Jacksonville, Florida, USA.
Zhao, Yuchao; Frey, H Christopher
2004-11-01
Probabilistic emission inventories were developed for 1,3-butadiene, mercury (Hg), arsenic (As), benzene, formaldehyde, and lead for Jacksonville, FL. To quantify inter-unit variability in empirical emission factor data, the Maximum Likelihood Estimation (MLE) method or the Method of Matching Moments was used to fit parametric distributions. For data sets that contain nondetected measurements, a method based upon MLE was used for parameter estimation. To quantify the uncertainty in urban air toxic emission factors, parametric bootstrap simulation and empirical bootstrap simulation were applied to uncensored and censored data, respectively. The probabilistic emission inventories were developed based on the product of the uncertainties in the emission factors and in the activity factors. The uncertainties in the urban air toxics emission inventories range from as small as -25 to +30% for Hg to as large as -83 to +243% for As. The key sources of uncertainty in the emission inventory for each toxic are identified based upon sensitivity analysis. Typically, uncertainty in the inventory of a given pollutant can be attributed primarily to a small number of source categories. Priorities for improving the inventories and for refining the probabilistic analysis are discussed.
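A hedged sketch of the parametric-bootstrap idea for a single source category; the lognormal emission-factor fit, sample size, and activity distribution are placeholders rather than the Jacksonville data.

```python
import numpy as np

rng = np.random.default_rng(3)

# Parametric bootstrap for one source category: uncertainty in the mean
# emission factor propagates to inventory = EF_mean * activity.
ef_sample = rng.lognormal(np.log(2.0), 0.8, 15)     # stand-in measured EFs [g/kg]
mu, sigma = np.log(ef_sample).mean(), np.log(ef_sample).std(ddof=1)

n_boot, n_obs = 5000, ef_sample.size
boot_means = np.exp(rng.normal(mu, sigma, (n_boot, n_obs))).mean(axis=1)

activity = rng.normal(1000.0, 100.0, n_boot)        # activity factor [t/yr], assumed
inventory = boot_means * activity                   # category emissions [kg/yr]

lo, med, hi = np.percentile(inventory, [2.5, 50, 97.5])
print(f"inventory: {med:.0f} kg/yr ({(lo/med-1)*100:+.0f}% / {(hi/med-1)*100:+.0f}%)")
```

Summing such bootstrap ensembles across source categories gives the inventory-level uncertainty ranges of the kind quoted in the abstract.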
Bayesian Probabilistic Projections of Life Expectancy for All Countries
Raftery, Adrian E.; Chunn, Jennifer L.; Gerland, Patrick; Ševčíková, Hana
2014-01-01
We propose a Bayesian hierarchical model for producing probabilistic forecasts of male period life expectancy at birth for all the countries of the world from the present to 2100. Such forecasts would be an input to the production of probabilistic population projections for all countries, which is currently being considered by the United Nations. To evaluate the method, we did an out-of-sample cross-validation experiment, fitting the model to the data from 1950–1995, and using the estimated model to forecast for the subsequent ten years. The ten-year predictions had a mean absolute error of about 1 year, about 40% less than the current UN methodology. The probabilistic forecasts were calibrated, in the sense that (for example) the 80% prediction intervals contained the truth about 80% of the time. We illustrate our method with results from Madagascar (a typical country with steadily improving life expectancy), Latvia (a country that has had a mortality crisis), and Japan (a leading country). We also show aggregated results for South Asia, a region with eight countries. Free publicly available R software packages called bayesLife and bayesDem are available to implement the method. PMID:23494599
Offerman, Theo; Palley, Asa B
2016-01-01
Strictly proper scoring rules are designed to truthfully elicit subjective probabilistic beliefs from risk neutral agents. Previous experimental studies have identified two problems with this method: (i) risk aversion causes agents to bias their reports toward the probability of [Formula: see text], and (ii) for moderate beliefs agents simply report [Formula: see text]. Applying a prospect theory model of risk preferences, we show that loss aversion can explain both of these behavioral phenomena. Using the insights of this model, we develop a simple off-the-shelf probability assessment mechanism that encourages loss-averse agents to report true beliefs. In an experiment, we demonstrate the effectiveness of this modification in both eliminating uninformative reports and eliciting true probabilistic beliefs.
NASA Astrophysics Data System (ADS)
Chen, Tzikang J.; Shiao, Michael
2016-04-01
This paper verifies a generic and efficient assessment concept for probabilistic fatigue life management. The concept is developed based on an integration of damage tolerance methodology, simulation methods [1, 2], and a probabilistic algorithm, RPI (recursive probability integration) [3-9], considering maintenance for damage tolerance and risk-based fatigue life management. RPI is an efficient semi-analytical probabilistic method for risk assessment subject to various uncertainties, such as the variability in material properties including crack growth rate, initial flaw size, repair quality, random process modeling of flight loads for failure analysis, and inspection reliability represented by probability of detection (POD). In addition, unlike traditional Monte Carlo simulation (MCS), which requires a rerun of MCS when the maintenance plan is changed, RPI can repeatedly use a small set of baseline random crack growth histories, excluding maintenance-related parameters, from a single MCS for various maintenance plans. In order to fully appreciate the RPI method, a verification procedure was performed. In this study, MC simulations on the order of several hundred billion trials were conducted for various flight conditions, material properties, inspection scheduling, POD, and repair/replacement strategies. Since MC simulation is time-consuming, the simulations were conducted in parallel on DoD High Performance Computing (HPC) systems using a specialized random number generator for parallel computing. The study has shown that the RPI method is several orders of magnitude more efficient than traditional Monte Carlo simulation.
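The reuse idea, i.e. that one set of baseline crack-growth histories can serve many maintenance plans, can be sketched as follows; the growth law, POD curve, and all parameter values are assumptions for illustration, not the RPI algorithm itself.

```python
import numpy as np

# Sketch: generate baseline crack-growth histories once (no maintenance),
# then evaluate different inspection plans by post-processing the same
# histories. Growth law and POD curve are assumed, not from the paper.
rng = np.random.default_rng(11)
n, t = 20000, np.arange(0, 20001, 1000)               # histories, flight hours

a0 = rng.lognormal(np.log(0.5), 0.3, n)               # initial flaw size [mm]
rate = rng.lognormal(np.log(2e-4), 0.2, n)            # growth rate [1/hr]
a = a0[:, None] * np.exp(rate[:, None] * t[None, :])  # exponential growth model
a_crit = 25.0                                         # critical crack size [mm]

def risk(insp_times, pod50=2.0):
    detected = np.zeros(n, dtype=bool)
    for ti in insp_times:
        size = a[:, np.searchsorted(t, ti)]
        pod = 1.0 / (1.0 + (pod50 / size) ** 4)       # assumed POD curve
        detected |= rng.random(n) < pod               # detect-and-repair
    return np.mean(~detected & (a[:, -1] > a_crit))   # failure prob. by t_end

for plan in ([], [10000], [6000, 12000]):             # three maintenance plans
    print(plan, "->", risk(plan))                     # same baseline histories
```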
NASA Technical Reports Server (NTRS)
Cunningham, A. M., Jr.
1973-01-01
The method presented uses a collocation technique with the nonplanar kernel function to solve supersonic lifting surface problems with and without interference. A set of pressure functions is developed based on conical flow theory solutions which account for discontinuities in the supersonic pressure distributions. These functions permit faster solution convergence than is possible with conventional supersonic pressure functions. An improper integral of a 3/2 power singularity along the Mach hyperbola of the nonplanar supersonic kernel function is described and treated. The method is compared with other theories and experiment for a variety of cases.
Oscillatory supersonic kernel function method for interfering surfaces
NASA Technical Reports Server (NTRS)
Cunningham, A. M., Jr.
1974-01-01
In the method presented in this paper, a collocation technique is used with the nonplanar supersonic kernel function to solve multiple lifting surface problems with interference in steady or oscillatory flow. The pressure functions used are based on conical flow theory solutions and provide faster solution convergence than is possible with conventional functions. In the application of the nonplanar supersonic kernel function, an improper integral of a 3/2 power singularity along the Mach hyperbola is described and treated. The method is compared with other theories and experiment for two wing-tail configurations in steady and oscillatory flow.
ERIC Educational Resources Information Center
Peters, Elke
2014-01-01
This article examines how form recall of target lexical items by learners of English as a foreign language (EFL) is affected (1) by repetition (1, 3, or 5 occurrences), (2) by the type of target item (single words versus collocations), and (3) by the time of post-test administration (immediately or one week after the learning session).…
Investigation of IPPD: A Case Study of the Marine Corps AAAV.
1998-03-01
process. (Rafii, 1995, p. 78) The United States Marine Corps is in the process of developing their next generation of Advanced Amphibious Assault...particular product or process. (Rafii, 1995, p. 78) DiTrapani and Geither's (1996) study of IPTs stressed the collocation of team members to the...School of Systems Management, Naval Postgraduate School, Monterey, CA, December 1996. Rafii, F., "How Important Is Physical Collocation to Product
Integrated High-Speed Torque Control System for a Robotic Joint
NASA Technical Reports Server (NTRS)
Davis, Donald R. (Inventor); Radford, Nicolaus A. (Inventor); Permenter, Frank Noble (Inventor); Valvo, Michael C. (Inventor); Askew, R. Scott (Inventor)
2013-01-01
A control system for achieving high-speed torque for a joint of a robot includes a printed circuit board assembly (PCBA) having a collocated joint processor and high-speed communication bus. The PCBA may also include a power inverter module (PIM) and local sensor conditioning electronics (SCE) for processing sensor data from one or more motor position sensors. Torque control of a motor of the joint is provided via the PCBA as a high-speed torque loop. Each joint processor may be embedded within or collocated with the robotic joint being controlled. Collocation of the joint processor, PIM, and high-speed bus may increase noise immunity of the control system, and the localized processing of sensor data from the joint motor at the joint level may minimize bus cabling to and from each control node. The joint processor may include a field programmable gate array (FPGA).
Extending the Fellegi-Sunter probabilistic record linkage method for approximate field comparators.
DuVall, Scott L; Kerber, Richard A; Thomas, Alun
2010-02-01
Probabilistic record linkage is a method commonly used to determine whether demographic records refer to the same person. The Fellegi-Sunter method is a probabilistic approach that uses field weights based on log likelihood ratios to determine record similarity. This paper introduces an extension of the Fellegi-Sunter method that incorporates approximate field comparators in the calculation of field weights. The data warehouse of a large academic medical center was used as a case study. The approximate comparator extension was compared with the Fellegi-Sunter method in its ability to find duplicate records previously identified in the data warehouse using different demographic fields and matching cutoffs. The approximate comparator extension misclassified 25% fewer pairs and had a larger Welch's T statistic than the Fellegi-Sunter method for all field sets and matching cutoffs. The accuracy gain provided by the approximate comparator extension grew as less information was provided and as the matching cutoff increased. Given the ubiquity of linkage in both clinical and research settings, the incremental improvement of the extension has the potential to make a considerable impact.
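A minimal sketch of the core idea, scaling Fellegi-Sunter agreement/disagreement weights by an approximate string similarity instead of a binary match; the m/u probabilities are placeholders, and Python's difflib similarity stands in for whatever comparators the paper used.

```python
import math
from difflib import SequenceMatcher

# Fellegi-Sunter field weight with an approximate comparator: interpolate
# between the full-agreement and full-disagreement log-likelihood weights
# by a similarity score in [0, 1]. m/u values below are illustrative.
def field_weight(a, b, m=0.95, u=0.05):
    sim = SequenceMatcher(None, a.lower(), b.lower()).ratio()
    w_agree = math.log2(m / u)                 # weight for exact agreement
    w_disagree = math.log2((1 - m) / (1 - u))  # weight for disagreement
    return w_disagree + sim * (w_agree - w_disagree)

# A record-pair score would sum such weights over all compared fields.
for a, b in [("JONATHAN", "JONATHON"), ("SMITH", "SMITH"), ("SMITH", "GARCIA")]:
    print(a, b, round(field_weight(a, b), 3))
```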
Probabilistic framework for product design optimization and risk management
NASA Astrophysics Data System (ADS)
Keski-Rahkonen, J. K.
2018-05-01
Probabilistic methods have gradually gained ground within engineering practice, but it is still the industry standard to use deterministic safety-margin approaches for dimensioning components and qualitative methods to manage product risks. These methods are suitable for baseline design work, but quantitative risk management and product reliability optimization require more advanced predictive approaches. Ample research has been published on how to predict failure probabilities for mechanical components and, furthermore, on how to optimize reliability through life cycle cost analysis. This paper reviews the literature for existing methods and tries to harness their best features and simplify the process to be applicable in practical engineering work. The recommended process applies the Monte Carlo method on top of load-resistance models to estimate failure probabilities. Furthermore, it adds to the existing literature by introducing a practical framework for using probabilistic models in quantitative risk management and product life cycle cost optimization. The main focus is on mechanical failure modes, owing to the well-developed methods used to predict these types of failures. However, the same framework can be applied to any type of failure mode as long as predictive models can be developed.
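A minimal load-resistance sketch of the recommended Monte Carlo step; the distributions and parameter values are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(42)

# Load-resistance model: the component fails when load L exceeds resistance R.
n = 1_000_000
load = rng.lognormal(mean=np.log(100.0), sigma=0.25, size=n)   # e.g. stress [MPa]
resistance = rng.normal(loc=200.0, scale=20.0, size=n)         # e.g. strength [MPa]

p_fail = np.mean(load > resistance)
print(f"estimated failure probability: {p_fail:.2e}")

# The failure probability then feeds a life cycle cost comparison, e.g.
# expected_cost = design_cost + p_fail * failure_cost
```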
Geodesy and Cartography (Selected Articles),
1979-08-10
GEODESY AND CARTOGRAPHY (SELECTED ARTICLES). English pages: 40. Source: Geodezja i Kartografia, Vol. 27, Nr. 1, 1978, pp. 3-27. Cited references include: Kledzinski, J., Zibek, Z., Czarnecki, K., Rogowski, J.B., Problems in Using Satellite Surveys in an Astronomical-Geodesic Network, Geodezja i...; ...Based on Observations of Low-Low Satellites Using Collocation Methods, Geodezja i Kartografia, Vol. XXVI, No. 4, 1977; Krynski, J., Schwarz, K.P...
NASA Astrophysics Data System (ADS)
Wu, Kai; Shu, Hong; Nie, Lei; Jiao, Zhenhang
2018-01-01
Spatially correlated errors are typically ignored in data assimilation, degenerating the observation error covariance R to a diagonal matrix. We argue that a nondiagonal R carries more observation information, making assimilation results more accurate. A method, denoted TC_Cov, is proposed for soil moisture data assimilation to estimate spatially correlated observation error covariance based on triple collocation (TC). Assimilation experiments were carried out to test the performance of TC_Cov. AMSR-E soil moisture was assimilated with a diagonal R matrix computed using TC and with a nondiagonal R matrix estimated by the proposed TC_Cov. The ensemble Kalman filter was used as the assimilation method. Our assimilation results were validated against climate change initiative data and ground-based soil moisture measurements using the Pearson correlation coefficient and unbiased root mean square difference (ubRMSD) metrics. These experiments confirmed that diagonal R assimilation results deteriorate when the model simulation is more accurate than the observation data. Furthermore, nondiagonal R achieved higher correlation coefficients and lower ubRMSD values than diagonal R in the experiments, demonstrating the effectiveness of TC_Cov for estimating a richly structured R in data assimilation. In sum, compared with diagonal R, nondiagonal R may relieve the detrimental effects of assimilation when simulated model results outperform observation data.
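For reference, the covariance-notation triple collocation error estimates that underpin such an R can be sketched as follows on synthetic data; the error model (independent, zero-mean errors around a common truth) is the standard TC assumption, and the numbers are made up.

```python
import numpy as np

def triple_collocation_errvar(x, y, z):
    """Classic covariance-notation TC: error variances of three collocated
    products, assuming mutually independent errors and a common truth."""
    C = np.cov(np.vstack([x, y, z]))
    ex = C[0, 0] - C[0, 1] * C[0, 2] / C[1, 2]
    ey = C[1, 1] - C[0, 1] * C[1, 2] / C[0, 2]
    ez = C[2, 2] - C[0, 2] * C[1, 2] / C[0, 1]
    return ex, ey, ez

# Synthetic demo: one "truth" series observed by three noisy products.
rng = np.random.default_rng(0)
truth = rng.normal(0.25, 0.05, 5000)                  # soil moisture [m3/m3]
x = truth + rng.normal(0, 0.02, truth.size)           # e.g. satellite retrieval
y = truth + rng.normal(0, 0.03, truth.size)           # e.g. model simulation
z = truth + rng.normal(0, 0.01, truth.size)           # e.g. in situ network

print([round(v, 5) for v in triple_collocation_errvar(x, y, z)])
# Close to the true error variances (0.0004, 0.0009, 0.0001).
```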
3D electromagnetic modelling of a TTI medium and TTI effects in inversion
NASA Astrophysics Data System (ADS)
Jaysaval, Piyoosh; Shantsev, Daniil; de la Kethulle de Ryhove, Sébastien
2016-04-01
We present a numerical algorithm for 3D electromagnetic (EM) forward modelling in conducting media with general electric anisotropy. The algorithm is based on the finite-difference discretization of frequency-domain Maxwell's equations on a Lebedev grid, in which all components of the electric field are collocated but half a spatial step staggered with respect to the magnetic field components, which also are collocated. This leads to a system of linear equations that is solved using a stabilized biconjugate gradient method with a multigrid preconditioner. We validate the accuracy of the numerical results for layered and 3D tilted transverse isotropic (TTI) earth models representing typical scenarios used in the marine controlled-source EM method. It is then demonstrated that not taking into account the full anisotropy of the conductivity tensor can lead to misleading inversion results. For simulation data corresponding to a 3D model with a TTI anticlinal structure, a standard vertical transverse isotropic inversion is not able to image a resistor, while for a 3D model with a TTI synclinal structure the inversion produces a false resistive anomaly. If inversion uses the proposed forward solver that can handle TTI anisotropy, it produces resistivity images consistent with the true models.
NASA Astrophysics Data System (ADS)
Dehghan, Mehdi; Mohammadi, Vahid
2017-03-01
As stated in [27], the tumor-growth model incorporates the nutrient within the mixture, as opposed to modeling it with an auxiliary reaction-diffusion equation. The formulation involves systems of highly nonlinear partial differential equations with surface effects treated through diffuse-interface models [27]. Numerical simulations can be used to evaluate this practical model. The present paper investigates the solution of the tumor-growth model with meshless techniques. The meshless methods are based on the collocation technique and employ multiquadric (MQ) radial basis functions (RBFs) and generalized moving least squares (GMLS) procedures. The main advantages of these choices stem from the natural behavior of meshless approaches: a meshless method can easily be applied to find the solution of partial differential equations in high dimensions using any distribution of points on regular and irregular domains. The present paper involves a time-dependent system of partial differential equations that describes a four-species tumor-growth model. To treat the time variable, two procedures are used. One is a semi-implicit finite difference method based on the Crank-Nicolson scheme, and the other is based on explicit Runge-Kutta time integration. The first gives a linear system of algebraic equations to be solved at each time step; the second is efficient but conditionally stable. The obtained numerical results are reported to confirm the ability of these techniques for solving the two- and three-dimensional tumor-growth equations.
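As a one-dimensional warm-up for the MQ-RBF collocation machinery, here is a Kansa-style sketch for u''(x) = f(x) with homogeneous Dirichlet conditions; the node count and shape parameter are arbitrary choices, and the tumor-growth system itself is of course far richer.

```python
import numpy as np

# MQ-RBF (Kansa) collocation for u''(x) = f(x) on [0, 1], u(0) = u(1) = 0,
# manufactured so that u(x) = sin(pi x) is the exact solution.
n = 25
x = np.linspace(0.0, 1.0, n)             # collocation nodes (also RBF centers)
c = 0.2                                  # MQ shape parameter (assumed)

s = x[:, None] - x[None, :]              # pairwise differences x_i - x_j
phi = np.sqrt(s**2 + c**2)               # multiquadric basis at the nodes
phi_xx = c**2 / (s**2 + c**2) ** 1.5     # exact second derivative of the MQ basis

f = -np.pi**2 * np.sin(np.pi * x)        # manufactured right-hand side

A = phi_xx.copy()
b = f.copy()
A[0, :], A[-1, :] = phi[0, :], phi[-1, :]   # boundary rows enforce u = 0
b[0], b[-1] = 0.0, 0.0

coef = np.linalg.solve(A, b)             # expansion coefficients
u = phi @ coef                           # reconstructed solution at the nodes
print("max error vs sin(pi x):", np.abs(u - np.sin(np.pi * x)).max())
```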
Singular boundary method for global gravity field modelling
NASA Astrophysics Data System (ADS)
Cunderlik, Robert
2014-05-01
The singular boundary method (SBM) and the method of fundamental solutions (MFS) are meshless boundary collocation techniques that use the fundamental solution of a governing partial differential equation (e.g. the Laplace equation) as their basis functions. They have been developed to avoid singular numerical integration as well as mesh generation in the traditional boundary element method (BEM). SBM has been proposed to overcome a main drawback of MFS, its controversial fictitious boundary outside the domain. The key idea of SBM is to introduce origin intensity factors that isolate the singularities of the fundamental solution and its derivatives using appropriate regularization techniques. Consequently, the source points can be placed directly on the real boundary and coincide with the collocation nodes. In this study we deal with SBM applied to high-resolution global gravity field modelling. The first numerical experiment presents a numerical solution to the fixed gravimetric boundary value problem. The achieved results are compared with the numerical solutions obtained by MFS or the direct BEM, indicating the efficiency of all methods. In the second numerical experiment, SBM is used to derive the geopotential and its first derivatives from the Tzz components of the gravity disturbing tensor observed by the GOCE satellite mission. A determination of the origin intensity factors allows the disturbing potential and gravity disturbances to be evaluated directly on the Earth's surface, where the source points are located. To achieve high-resolution numerical solutions, large-scale parallel computations are performed on a cluster with 1 TB of distributed memory, and an iterative elimination of far-zone contributions is applied.
Proceedings of the international meeting on thermal nuclear reactor safety. Vol. 1
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
Separate abstracts are included for each of the papers presented concerning current issues in nuclear power plant safety; national programs in nuclear power plant safety; radiological source terms; probabilistic risk assessment methods and techniques; non-LOCA and small-break LOCA transients; safety goals; pressurized thermal shock; applications of reliability and risk methods to probabilistic risk assessment; human factors and the man-machine interface; and data bases and special applications.
Efficient Sensitivity Methods for Probabilistic Lifing and Engine Prognostics
2010-09-01
AFRL-RX-WP-TR-2010-4297: EFFICIENT SENSITIVITY METHODS FOR PROBABILISTIC LIFING AND ENGINE PROGNOSTICS. Harry Millwater, Ronald Bagley, Jose Garza, D. Wagner, Andrew Bates, and Andy Voorhees. Program element 62102F. Cited references include: ...Reliability Assessment, MIL-HDBK-1823, 30 April 1999; Leverant GR, Millwater HR, McClung RC, Enright MP, A New Tool for Design and Certification of...
NASA Technical Reports Server (NTRS)
1991-01-01
The technical effort and computer code enhancements performed during the sixth year of the Probabilistic Structural Analysis Methods program are summarized. Various capabilities are described to probabilistically combine structural response and structural resistance to compute component reliability. A library of structural resistance models is implemented in the Numerical Evaluations of Stochastic Structures Under Stress (NESSUS) code that included fatigue, fracture, creep, multi-factor interaction, and other important effects. In addition, a user interface was developed for user-defined resistance models. An accurate and efficient reliability method was developed and was successfully implemented in the NESSUS code to compute component reliability based on user-selected response and resistance models. A risk module was developed to compute component risk with respect to cost, performance, or user-defined criteria. The new component risk assessment capabilities were validated and demonstrated using several examples. Various supporting methodologies were also developed in support of component risk assessment.
Krejsa, Martin; Janas, Petr; Yilmaz, Işık; Marschalko, Marian; Bouchal, Tomas
2013-01-01
The load-carrying system of each construction should fulfill several conditions that serve as reliability criteria in the assessment procedure. The theory of structural reliability determines the probability that a construction retains its required properties. Using this theory, it is possible to apply probabilistic computations based on probability theory and mathematical statistics. Such methods have become increasingly popular; they are used, in particular, in designs of load-carrying structures with a required level of reliability when at least some input variables in the design are random. The objective of this paper is to indicate the current scope that can be covered by a new method, Direct Optimized Probabilistic Calculation (DOProC), in reliability assessments of load-carrying structures. DOProC uses a purely numerical approach without any simulation techniques. This provides more accurate solutions to probabilistic tasks and, in some cases, considerably faster computations. DOProC can be used to solve a number of probabilistic computations efficiently. A particularly good application area for DOProC is the assessment of bolt reinforcement in underground and mining workings. For this purpose, a special software application, "Anchor", has been developed. PMID:23935412
Constructing Sample Space with Combinatorial Reasoning: A Mixed Methods Study
ERIC Educational Resources Information Center
McGalliard, William A., III.
2012-01-01
Recent curricular developments suggest that students at all levels need to be statistically literate and able to efficiently and accurately make probabilistic decisions. Furthermore, statistical literacy is a requirement for being a well-informed citizen of society. Research also recognizes that the ability to reason probabilistically is supported…
One of the major recommendations of the National Academy of Science to the USEPA, NMFS and USFWS was to utilize probabilistic methods when assessing the risks of pesticides to federally listed endangered and threatened species. The Terrestrial Investigation Model (TIM, version 3....
Three key areas of scientific inquiry in the study of human exposure to environmental contaminants are 1) assessment of aggregate (i.e., multi-pathway, multi-route) exposures, 2) application of probabilistic methods to exposure prediction, and 3) the interpretation of biomarker m...
EXPERIENCES WITH USING PROBABILISTIC EXPOSURE ANALYSIS METHODS IN THE U.S. EPA
Over the past decade various Offices and Programs within the U.S. EPA have either initiated or increased the development and application of probabilistic exposure analysis models. These models have been applied to a broad range of research or regulatory problems in EPA, such as e...
NASA Astrophysics Data System (ADS)
Lowe, R.; Ballester, J.; Robine, J.; Herrmann, F. R.; Jupp, T. E.; Stephenson, D.; Rodó, X.
2013-12-01
Users of climate information often require probabilistic information on which to base their decisions. However, communicating the information contained within a probabilistic forecast presents a challenge. In this paper we demonstrate a novel visualisation technique to display ternary probabilistic forecasts on a map in order to inform decision making. In this method, ternary probabilistic forecasts, which assign probabilities to a set of three outcomes (e.g. low, medium, and high risk), are considered as points in a triangle of barycentric coordinates. This allows a unique colour to be assigned to each forecast from a continuum of colours defined on the triangle. Colour saturation increases with information gain relative to the reference forecast (i.e. the long-term average). This provides additional information to decision makers compared with conventional methods used in seasonal climate forecasting, where one colour is used to represent one forecast category on a forecast map (e.g. red = 'dry'). We use the tool to present climate-related mortality projections across Europe. Temperature and humidity are related to human mortality via location-specific transfer functions, calculated using historical data. Daily mortality data at the NUTS2 level for 16 countries in Europe were obtained for 1998-2005. Transfer functions were calculated for 54 aggregations in Europe, defined using criteria related to population and climatological similarities. Aggregations are restricted to fall within political boundaries to avoid problems related to varying adaptation policies between countries. A statistical model is fitted to the cold and warm tails to estimate future mortality using forecast temperatures, in a Bayesian probabilistic framework. Using predefined categories of temperature-related mortality risk, we present maps of probabilistic projections for human mortality at seasonal to decadal time scales. We demonstrate the information gained from using this technique compared with more traditional methods to display ternary probabilistic forecasts. This technique allows decision makers to identify areas where the model predicts area-specific heat waves or cold snaps with certainty, in order to effectively target resources to the areas most at risk for a given season or year. It is hoped that this visualisation tool will facilitate the interpretation of probabilistic forecasts not only for public health decision makers but also within a multi-sectoral climate service framework.
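A hedged sketch of the barycentric colouring idea: mix three vertex colours by the forecast probabilities and saturate by information gain relative to climatology. The vertex colours and the KL-based saturation rule are assumptions about one reasonable realisation, not the paper's exact scheme.

```python
import numpy as np

# Each ternary forecast (p_low, p_med, p_high) mixes three vertex colours;
# saturation grows with information gain relative to climatology (1/3 each).
VERTEX_RGB = np.array([[0.0, 0.3, 1.0],   # low-risk vertex  (assumed: blue)
                       [0.6, 0.6, 0.6],   # medium vertex    (assumed: grey)
                       [1.0, 0.1, 0.0]])  # high-risk vertex (assumed: red)

def forecast_colour(p, eps=1e-12):
    p = np.asarray(p, dtype=float)
    p = p / p.sum()
    mix = p @ VERTEX_RGB                               # barycentric colour mixture
    kl = np.sum(p * np.log((p + eps) / (1.0 / 3.0)))   # info gain vs climatology
    sat = min(kl / np.log(3.0), 1.0)                   # 0 = climatology, 1 = certain
    white = np.ones(3)
    return tuple((1.0 - sat) * white + sat * mix)      # desaturate toward white

print(forecast_colour([1/3, 1/3, 1/3]))     # near-white: no information gain
print(forecast_colour([0.05, 0.15, 0.80]))  # saturated red: confident 'high'
```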
Bayesian-information-gap decision theory with an application to CO2 sequestration
O'Malley, D.; Vesselinov, V. V.
2015-09-04
Decisions related to subsurface engineering problems such as groundwater management, fossil fuel production, and geologic carbon sequestration are frequently challenging because of an overabundance of uncertainties (related to conceptualizations, parameters, observations, etc.). Because of the importance of these problems to agriculture, energy, and the climate (respectively), good decisions that are scientifically defensible must be made despite the uncertainties. We describe a general approach to making decisions for challenging problems such as these in the presence of severe uncertainties that combines probabilistic and non-probabilistic methods. The approach uses Bayesian sampling to assess parametric uncertainty and Information-Gap Decision Theory (IGDT) to address model inadequacy. The combined approach also resolves an issue that frequently arises when applying Bayesian methods to real-world engineering problems related to the enumeration of possible outcomes. In the case of zero non-probabilistic uncertainty, the method reduces to a Bayesian method. Lastly, to illustrate the approach, we apply it to a site-selection decision for geologic CO2 sequestration.
Matrix form of Legendre polynomials for solving linear integro-differential equations of high order
NASA Astrophysics Data System (ADS)
Kammuji, M.; Eshkuvatov, Z. K.; Yunus, Arif A. M.
2017-04-01
This paper presents an effective approximate solution of high-order Fredholm-Volterra integro-differential equations (FVIDEs) with boundary conditions. A truncated Legendre series is used as the basis to approximate the unknown function. The matrix form of the Legendre polynomials is used to transform the FVIDEs with boundary conditions into a matrix equation of Fredholm-Volterra type. The Gauss-Legendre quadrature formula and the collocation method are applied to transfer the matrix equation into a system of linear algebraic equations, which is then solved by the Gauss elimination method. The accuracy and validity of this method are discussed by solving two numerical examples and through comparisons with wavelet and other methods.
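The quadrature-plus-collocation step can be illustrated on a simpler second-kind Fredholm integral equation (no derivative or Volterra part); the kernel and right-hand side below are chosen so that u(x) = x is the exact solution.

```python
import numpy as np

# Nystrom-style sketch: solve u(x) - \int_0^1 K(x,t) u(t) dt = g(x)
# with K(x,t) = x*t and g chosen so that u(x) = x is exact.
n = 8
t, w = np.polynomial.legendre.leggauss(n)   # Gauss-Legendre nodes/weights, [-1, 1]
t = 0.5 * (t + 1.0)                         # map nodes to [0, 1]
w = 0.5 * w                                 # rescale weights accordingly

K = t[:, None] * t[None, :]                 # kernel at (collocation, quadrature) pts
g = (2.0 / 3.0) * t                         # since \int_0^1 t*u(t) dt = 1/3 for u = t

A = np.eye(n) - K * w[None, :]              # collocate at the quadrature nodes
u = np.linalg.solve(A, g)
print("max error:", np.abs(u - t).max())    # ~ machine precision
```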
Spectral methods and their implementation to solution of aerodynamic and fluid mechanic problems
NASA Technical Reports Server (NTRS)
Streett, C. L.
1987-01-01
Fundamental concepts underlying spectral collocation methods, especially pertaining to their use in the solution of partial differential equations, are outlined. Theoretical accuracy results are reviewed and compared with results from test problems. A number of practical aspects of the construction and use of spectral methods are detailed, along with several solution schemes which have found utility in applications of spectral methods to practical problems. Results from a few of the successful applications of spectral methods to problems of aerodynamic and fluid mechanic interest are then outlined, followed by a discussion of the problem areas in spectral methods and the current research under way to overcome these difficulties.
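A classic concrete instance of spectral collocation is solving u'' = exp(4x) on [-1, 1] with u(±1) = 0 via the Chebyshev differentiation matrix (after Trefethen); this is a standard textbook construction, not code from the paper.

```python
import numpy as np

def cheb(n):
    """Chebyshev differentiation matrix on n+1 Gauss-Lobatto points (Trefethen)."""
    if n == 0:
        return np.zeros((1, 1)), np.array([1.0])
    x = np.cos(np.pi * np.arange(n + 1) / n)
    c = np.hstack([2.0, np.ones(n - 1), 2.0]) * (-1.0) ** np.arange(n + 1)
    X = np.tile(x, (n + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(n + 1))
    D -= np.diag(D.sum(axis=1))              # diagonal via negative row sums
    return D, x

D, x = cheb(16)
D2 = D @ D                                   # second-derivative matrix
A, b = D2[1:-1, 1:-1], np.exp(4.0 * x[1:-1]) # impose u(+-1) = 0 by deletion
u = np.zeros_like(x)
u[1:-1] = np.linalg.solve(A, b)

exact = (np.exp(4.0 * x) - np.sinh(4.0) * x - np.cosh(4.0)) / 16.0
print("max error:", np.abs(u - exact).max())  # spectral (near machine) accuracy
```

With only 17 grid points the error is near machine precision, illustrating the exponential convergence that motivates spectral collocation for smooth problems.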
Probabilistic finite elements for fatigue and fracture analysis
NASA Astrophysics Data System (ADS)
Belytschko, Ted; Liu, Wing Kam
Attention is focused on the development of the Probabilistic Finite Element Method (PFEM), which combines the finite element method with statistics and reliability methods, and on its application to linear and nonlinear structural mechanics problems and fracture mechanics problems. A computational tool based on the Stochastic Boundary Element Method is also given for the reliability analysis of curvilinear fatigue crack growth. The existing PFEMs have been applied to solve two types of problems: (1) determination of the response uncertainty in terms of the means, variances, and correlation coefficients; and (2) determination of the probability of failure associated with prescribed limit states.
Probabilistic finite elements for fatigue and fracture analysis
NASA Technical Reports Server (NTRS)
Belytschko, Ted; Liu, Wing Kam
1992-01-01
Attention is focused on the development of the Probabilistic Finite Element Method (PFEM), which combines the finite element method with statistics and reliability methods, and on its application to linear and nonlinear structural mechanics problems and fracture mechanics problems. A computational tool based on the Stochastic Boundary Element Method is also given for the reliability analysis of curvilinear fatigue crack growth. The existing PFEMs have been applied to solve two types of problems: (1) determination of the response uncertainty in terms of the means, variances, and correlation coefficients; and (2) determination of the probability of failure associated with prescribed limit states.
Satellite altitude determination uncertainties
NASA Technical Reports Server (NTRS)
Siry, J. W.
1972-01-01
Satellite altitude determination uncertainties are discussed from the standpoint of the GEOS-C satellite and from the longer-range viewpoint afforded by the Geopause concept. Attention is focused on methods for short-arc tracking which are essentially geometric in nature. One method uses combinations of lasers and collocated cameras; the other relies only on lasers, using three or more to obtain the position fix. Two typical locales are examined: the Caribbean area, and a region associated with tracking sites at Goddard, Bermuda, and Canada which encompasses a portion of the Gulf Stream in which meanders develop.
Approximating a retarded-advanced differential equation that models human phonation
NASA Astrophysics Data System (ADS)
Teodoro, M. Filomena
2017-11-01
In [1, 2, 3] we obtained the numerical solution of a linear mixed-type functional differential equation (MTFDE), introduced initially in [4], considering the autonomous and non-autonomous cases by collocation, least squares, and finite element methods with a B-spline basis set. The present work introduces a numerical scheme using the least squares method (LSM) and Gaussian basis functions to solve numerically a nonlinear mixed-type equation with symmetric delay and advance which models human phonation. The preliminary results are promising, with accuracy comparable to the previous results.
Boundary-element modelling of dynamics in external poroviscoelastic problems
NASA Astrophysics Data System (ADS)
Igumnov, L. A.; Litvinchuk, S. Yu; Ipatov, A. A.; Petrov, A. N.
2018-04-01
The problem of a spherical cavity in porous media is considered. The porous media are assumed to be isotropic poroelastic or isotropic poroviscoelastic. The poroviscoelastic formulation is treated as a combination of Biot's theory of poroelasticity and the elastic-viscoelastic correspondence principle. Viscoelastic models considered include the Kelvin-Voigt model, the standard linear solid, and a model with a weakly singular kernel. The boundary fields are studied with the help of the boundary element method, using the direct approach. The numerical scheme is based on the collocation method, a regularized boundary integral equation, and a Radau time-stepping scheme.
Probabilistic numerical methods for PDE-constrained Bayesian inverse problems
NASA Astrophysics Data System (ADS)
Cockayne, Jon; Oates, Chris; Sullivan, Tim; Girolami, Mark
2017-06-01
This paper develops meshless methods for probabilistically describing discretisation error in the numerical solution of partial differential equations. This construction enables the solution of Bayesian inverse problems while accounting for the impact of the discretisation of the forward problem. In particular, this drives statistical inferences to be more conservative in the presence of significant solver error. Theoretical results are presented describing rates of convergence for the posteriors in both the forward and inverse problems. This method is tested on a challenging inverse problem with a nonlinear forward model.
1988-09-23
AFGL-TR-88-0237: collocations were performed with the 1200 UT radiosondes launched on 25 Aug 1987 to address cloud contamination and aerosol problems at the launch sites. In this imagery, opaque clouds appear white, while low clouds and fog appear bright red against a brown background.
Precise control of flexible manipulators
NASA Technical Reports Server (NTRS)
Cannon, R. H., Jr.
1984-01-01
Experimental apparatus were developed for physically testing control systems for pointing flexible structures, such as limber spacecraft, for the case in which control actuators cannot be collocated with sensors. Structural damping ratios are less than 0.003, each basic configuration of sensor/actuator noncollocation is available, and inertias can be halved or doubled abruptly during control maneuvers, thereby imposing, in particular, a sudden reversal of the plant's pole-zero sequence. First experimental results are presented, including stable control with both collocated and noncollocated configurations.
Design and Application of a Collocated Capacitance Sensor for Magnetic Bearing Spindle
NASA Technical Reports Server (NTRS)
Shin, Dongwon; Liu, Seon-Jung; Kim, Jongwon
1996-01-01
This paper presents a collocated capacitance sensor for magnetic bearings. The main feature of the sensor is that it is fabricated on a compact printed circuit board (PCB). A signal processing unit has also been developed. Results of an experimental evaluation of the sensitivity, resolution and frequency response of the sensor are presented. Finally, an application of the sensor to the active control of a magnetic bearing is described.
Radiation Transport in Random Media With Large Fluctuations
NASA Astrophysics Data System (ADS)
Olson, Aaron; Prinja, Anil; Franke, Brian
2017-09-01
Neutral particle transport in media exhibiting large and complex spatial variation of material properties is modeled by representing cross sections as lognormal random functions of space, generated through a nonlinear memoryless transformation of a Gaussian process whose covariance is uniquely determined by the covariance of the cross section. A Karhunen-Loève decomposition of the Gaussian process is implemented to efficiently generate realizations of the random cross sections, and Woodcock Monte Carlo is used to transport particles on each realization, providing benchmark solutions for the mean and variance of the particle flux as well as probability densities of the particle reflectance and transmittance. A computationally efficient stochastic collocation method is implemented to directly compute statistical moments such as the mean and variance, while a polynomial chaos expansion in conjunction with stochastic collocation provides a convenient surrogate model that also produces probability densities of output quantities of interest. Extensive numerical testing demonstrates that stochastic reduced-order modeling provides an accurate and cost-effective alternative to random sampling for particle transport in random media.
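A sketch of the simulation pipeline described above: a truncated Karhunen-Loève expansion of a Gaussian process pushed through the memoryless exponential map to produce lognormal cross-section realizations. The grid, covariance, and parameter values are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Grid on the slab [0, L] and exponential covariance for the Gaussian field.
L, n = 10.0, 200
x = np.linspace(0.0, L, n)
corr_len, sig_g, mu_g = 1.0, 0.5, 0.0
C = sig_g**2 * np.exp(-np.abs(x[:, None] - x[None, :]) / corr_len)

# Karhunen-Loeve decomposition: keep modes carrying 99% of the variance.
lam, V = np.linalg.eigh(C)
lam, V = lam[::-1], V[:, ::-1]                  # sort descending
k = np.searchsorted(np.cumsum(lam) / lam.sum(), 0.99) + 1

def realization():
    xi = rng.standard_normal(k)                 # iid N(0,1) KL coordinates
    g = mu_g + V[:, :k] @ (np.sqrt(lam[:k]) * xi)
    return np.exp(g)                            # lognormal cross section

reals = np.array([realization() for _ in range(5000)])
print(f"{k} KL modes; sample mean = {reals.mean():.3f} "
      f"(untruncated lognormal mean = {np.exp(mu_g + sig_g**2/2):.3f})")
```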
Rahman, A.; Tsai, F.T.-C.; White, C.D.; Carlson, D.A.; Willson, C.S.
2008-01-01
Data integration is challenging where there are different levels of support between primary and secondary data that need to be correlated in various ways. A geostatistical method is described which integrates hydraulic conductivity (K) measurements and electrical resistivity data to better estimate the K distribution in the Upper Chicot Aquifer of southwestern Louisiana, USA. The K measurements were obtained from pumping tests and represent the primary (hard) data. Borehole electrical resistivity data from electrical logs were regarded as the secondary (soft) data and were used to infer K values through Archie's law and the Kozeny-Carman equation. A pseudo cross-semivariogram was developed to cope with the non-collocation of the resistivity data. Uncertainties in the auto-semivariograms and pseudo cross-semivariogram were quantified. The groundwater flow model responses of the regionalized and coregionalized models of K were compared using analysis of variance (ANOVA). The results indicate that non-collocated secondary data may improve estimates of K and affect groundwater flow responses of practical interest, including specific capacity and drawdown.
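The resistivity-to-K inference chain can be sketched as follows; the Archie constants, grain size, and log values are illustrative assumptions, and the paper's geostatistical machinery (pseudo cross-semivariograms, cokriging) is not reproduced here.

```python
import numpy as np

# Archie's law: formation factor F = R0/Rw = a * phi**(-m)
# => porosity from the resistivity log.
a, m = 1.0, 2.0                    # illustrative Archie constants
Rw = 2.0                           # brine resistivity (ohm-m)
R0 = np.array([20.0, 35.0, 60.0])  # formation resistivities from the log

phi = (a * Rw / R0) ** (1.0 / m)

# Kozeny-Carman: hydraulic conductivity from porosity and grain size.
d50 = 2.0e-4                       # representative grain diameter (m)
rho_g_over_mu = 9.81e3 / 1.0e-3    # rho*g/mu for water, 1/(m*s)
K = rho_g_over_mu * (d50**2 / 180.0) * phi**3 / (1.0 - phi)**2

for r, p, k in zip(R0, phi, K):
    print(f"R0={r:5.1f} ohm-m -> phi={p:.3f} -> K={k:.2e} m/s")
```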
Random mechanics: Nonlinear vibrations, turbulences, seisms, swells, fatigue
NASA Astrophysics Data System (ADS)
Kree, P.; Soize, C.
The random modeling of physical phenomena, together with probabilistic methods for the numerical calculation of random mechanical forces, is analytically explored. Attention is given to theoretical matters such as probabilistic concepts, linear filtering techniques, and trajectory statistics. Applications of the methods to structures experiencing atmospheric turbulence, the quantification of turbulence, and the dynamic responses of the structures are considered. A probabilistic approach is taken to the effects of earthquakes on structures and to the forces exerted by ocean waves on marine structures. Theoretical analyses by means of vector spaces and stochastic modeling are reviewed, as are Markovian formulations of Gaussian processes and the definition of stochastic differential equations. Finally, random vibrations of systems with a variable number of links, and linear oscillators driven by the square of Gaussian processes, are investigated.
Seismic, high wind, tornado, and probabilistic risk assessments of the High Flux Isotope Reactor
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harris, S.P.; Stover, R.L.; Hashimoto, P.S.
1989-01-01
Natural phenomena analyses were performed on the High Flux Isotope Reactor (HFIR). Deterministic and probabilistic evaluations were made to determine the risks resulting from earthquakes, high winds, and tornadoes. Analytic methods, in conjunction with field evaluations and earthquake experience database evaluation methods, were used to provide more realistic results in a shorter amount of time. Plant modifications completed in preparation for the HFIR restart and potential future enhancements are discussed.
Bayesian Probabilistic Projection of International Migration.
Azose, Jonathan J; Raftery, Adrian E
2015-10-01
We propose a method for obtaining joint probabilistic projections of migration for all countries, broken down by age and sex. Joint trajectories for all countries are constrained to satisfy the requirement of zero global net migration. We evaluate our model using out-of-sample validation and compare point projections to the projected migration rates from a persistence model similar to the method used in the United Nations' World Population Prospects, and also to a state-of-the-art gravity model.
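One common way to impose the zero-global-net-migration constraint, subtracting the population-weighted world mean from each joint trajectory, is sketched below; this is an assumption-laden stand-in, not necessarily the adjustment used by the authors.

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative joint net-migration trajectories for 4 countries, constrained
# so that population-weighted world net migration is exactly zero.
pop = np.array([50.0, 120.0, 30.0, 200.0])       # populations (millions)
n_traj = 1000

# Unconstrained draws of net migration rates (per thousand), e.g. per country.
rates = rng.normal(loc=[1.0, -2.0, 3.0, 0.5], scale=1.0, size=(n_traj, 4))

# Enforce zero global net migration: subtract the population-weighted mean.
world_rate = (rates * pop).sum(axis=1, keepdims=True) / pop.sum()
rates_adj = rates - world_rate

check = (rates_adj * pop).sum(axis=1)
print(f"max |world net migration| after adjustment: {np.abs(check).max():.2e}")
```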
Probabilistic structural analysis methods and applications
NASA Technical Reports Server (NTRS)
Cruse, T. A.; Wu, Y.-T.; Dias, B.; Rajagopal, K. R.
1988-01-01
An advanced algorithm for simulating the probabilistic distribution of structural responses due to statistical uncertainties in loads, geometry, material properties, and boundary conditions is reported. The method effectively combines an advanced algorithm for calculating probability levels for multivariate problems (fast probability integration) together with a general-purpose finite-element code for stress, vibration, and buckling analysis. Application is made to a space propulsion system turbine blade for which the geometry and material properties are treated as random variables.
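For a linear(ized) limit state, the fast-probability-integration idea reduces to the first-order reliability computation sketched below; the limit state and numbers are invented, and the coupling to a finite element code is of course not shown.

```python
import numpy as np
from scipy.stats import norm

# First-order reliability for a linear(ized) limit state g(X) = a.X + b,
# X ~ N(mu, Sigma); failure when g < 0. Illustrative numbers only.
a = np.array([1.0, -1.0])                 # e.g. g = strength - stress
b = 0.0
mu = np.array([60.0, 40.0])
Sigma = np.diag([6.0**2, 8.0**2])

mean_g = a @ mu + b
std_g = np.sqrt(a @ Sigma @ a)
beta = mean_g / std_g                     # reliability (safety) index
pf = norm.cdf(-beta)                      # first-order probability of failure

print(f"beta = {beta:.2f}, P(failure) ~ {pf:.2e}")
```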
Uncertainty characterization approaches for risk assessment of DBPs in drinking water: a review.
Chowdhury, Shakhawat; Champagne, Pascale; McLellan, P James
2009-04-01
The management of risk from disinfection by-products (DBPs) in drinking water has become a critical issue over the last three decades. The areas of concern for risk management studies include (i) human health risk from DBPs, (ii) disinfection performance, (iii) technical feasibility (maintenance, management and operation) of treatment and disinfection approaches, and (iv) cost. Human health risk assessment is typically considered to be the most important phase of the risk-based decision-making or risk management studies. The factors associated with health risk assessment and other attributes are generally prone to considerable uncertainty. Probabilistic and non-probabilistic approaches have both been employed to characterize uncertainties associated with risk assessment. The probabilistic approaches include sampling-based methods (typically Monte Carlo simulation and stratified sampling) and asymptotic (approximate) reliability analysis (first- and second-order reliability methods). Non-probabilistic approaches include interval analysis, fuzzy set theory and possibility theory. However, it is generally accepted that no single method is suitable for the entire spectrum of problems encountered in uncertainty analyses for risk assessment. Each method has its own set of advantages and limitations. In this paper, the feasibility and limitations of different uncertainty analysis approaches are outlined for risk management studies of drinking water supply systems. The findings assist in the selection of suitable approaches for uncertainty analysis in risk management studies associated with DBPs and human health risk.
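The contrast between sampling-based and interval (non-probabilistic) approaches can be sketched on a toy ingestion-risk formula; all distributions, bounds, and the slope factor below are placeholders, not values from the review.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy DBP exposure model: risk = C * IR * SF / BW (all values illustrative).
n = 100_000
C  = rng.lognormal(np.log(40e-3), 0.3, n)   # DBP concentration (mg/L)
IR = rng.normal(2.0, 0.3, n)                # water intake (L/day)
BW = rng.normal(70.0, 10.0, n)              # body weight (kg)
SF = 6.1e-3                                 # slope factor ((mg/kg-day)^-1)

risk = C * IR * SF / BW                     # probabilistic (Monte Carlo) estimate
print(f"MC: median={np.median(risk):.2e}, 95th pct={np.percentile(risk, 95):.2e}")

# Interval (non-probabilistic) alternative: propagate bounds only.
C_lo, C_hi, IR_lo, IR_hi, BW_lo, BW_hi = 20e-3, 80e-3, 1.0, 3.0, 50.0, 90.0
print(f"interval: [{C_lo*IR_lo*SF/BW_hi:.2e}, {C_hi*IR_hi*SF/BW_lo:.2e}]")
```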
Constructing probabilistic scenarios for wide-area solar power generation
Woodruff, David L.; Deride, Julio; Staid, Andrea; ...
2017-12-22
Optimizing thermal generation commitments and dispatch in the presence of high penetrations of renewable resources such as solar energy requires a characterization of their stochastic properties. In this study, we describe novel methods designed to create day-ahead, wide-area probabilistic solar power scenarios based only on historical forecasts and associated observations of solar power production. Each scenario represents a possible trajectory for solar power in next-day operations with an associated probability computed by algorithms that use historical forecast errors. Scenarios are created by segmentation of historic data, fitting non-parametric error distributions using epi-splines, and then computing specific quantiles from these distributions. Additionally, we address the challenge of establishing an upper bound on solar power output. Our specific application driver is for use in stochastic variants of core power systems operations optimization problems, e.g., unit commitment and economic dispatch. These problems require as input a range of possible future realizations of renewables production. However, the utility of such probabilistic scenarios extends to other contexts, e.g., operator and trader situational awareness. Finally, we compare the performance of our approach to a recently proposed method based on quantile regression, and demonstrate that our method performs comparably to this approach in terms of two widely used methods for assessing the quality of probabilistic scenarios: the Energy score and the Variogram score.
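A crude stand-in for the scenario construction step: per-hour empirical quantiles of historical forecast errors (in place of the segmentation and epi-spline fits used in the paper) added to a day-ahead forecast and capped at a naive upper bound. Everything here is invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)

# Day-ahead solar forecast (MW) and a bank of historical forecast errors.
hours = 24
forecast = np.sin(np.linspace(0.0, np.pi, hours)) * 100.0
hist_errors = rng.normal(0.0, 8.0, size=(365, hours))   # stand-in history

probs = [0.1, 0.5, 0.9]                     # quantile levels -> scenarios
scenarios = []
for p in probs:
    err_q = np.quantile(hist_errors, p, axis=0)         # per-hour error quantile
    s = np.clip(forecast + err_q, 0.0, forecast.max())  # cap at crude upper bound
    scenarios.append(s)

for p, s in zip(probs, scenarios):
    print(f"q={p:.1f}: noon power = {s[12]:.1f} MW")
```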
A note on probabilistic models over strings: the linear algebra approach.
Bouchard-Côté, Alexandre
2013-12-01
Probabilistic models over strings have played a key role in developing methods that take indels into consideration as phylogenetically informative events. There is an extensive literature on using automata and transducers on phylogenies to do inference on these probabilistic models, in which an important theoretical question is the complexity of computing the normalization of a class of string-valued graphical models. This question has been investigated using tools from combinatorics, dynamic programming, and graph theory, and has practical applications in Bayesian phylogenetics. In this work, we revisit this theoretical question from a different point of view, based on linear algebra. The main contribution is a set of results based on this linear algebra view that facilitate the analysis and design of inference algorithms on string-valued graphical models. As an illustration, we use this method to give a new elementary proof of a known result on the complexity of inference on the "TKF91" model, a well-known probabilistic model over strings. Compared to previous work, our method of proof is easier to extend to other models, since it relies on a novel weak condition, triangular transducers, which is easy to establish in practice. The linear algebra view provides a concise way of describing transducer algorithms and their compositions, and opens the possibility of transferring fast linear algebra libraries (for example, based on GPUs), as well as low-rank matrix approximation methods, to string-valued inference problems.
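The linear algebra view can be made concrete: for a probabilistic automaton whose per-symbol transition weights sum with the stopping probabilities to one in every state, the normalization over all strings is a matrix geometric series. The two-state machine below is invented; its normalization evaluates to exactly 1, as it should for a proper automaton.

```python
import numpy as np

# Normalization of a simple probabilistic automaton over strings, computed in
# closed form with linear algebra: Z = start^T (I - T)^{-1} stop, where T
# aggregates the per-symbol transition weights (spectral radius < 1).
start = np.array([1.0, 0.0])           # initial state distribution
stop  = np.array([0.2, 0.5])           # per-state stopping probabilities

# One substochastic matrix per alphabet symbol; T is their sum.
T_a = np.array([[0.5, 0.1], [0.0, 0.2]])
T_b = np.array([[0.2, 0.0], [0.1, 0.2]])
T = T_a + T_b                          # geometric series sum = (I - T)^{-1}

Z = start @ np.linalg.solve(np.eye(2) - T, stop)
print(f"total mass over all strings: Z = {Z:.6f}")   # = 1.0 for this machine
```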
Investigation of advanced UQ for CRUD prediction with VIPRE.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eldred, Michael Scott
2011-09-01
This document summarizes the results from a level 3 milestone study within the CASL VUQ effort. It demonstrates the application of "advanced UQ," in particular dimension-adaptive p-refinement for polynomial chaos and stochastic collocation. The study calculates statistics for several quantities of interest that are indicators for the formation of CRUD (Chalk River unidentified deposit), which can lead to CIPS (CRUD-induced power shift). Stochastic expansion methods are attractive for uncertainty quantification due to their fast convergence properties. For smooth functions (i.e., analytic, infinitely differentiable) in L^2 (i.e., possessing finite variance), exponential convergence rates can be obtained under order refinement for integrated statistical quantities of interest such as mean, variance, and probability. Two stochastic expansion methods are of interest: nonintrusive polynomial chaos expansion (PCE), which computes coefficients for a known basis of multivariate orthogonal polynomials, and stochastic collocation (SC), which forms multivariate interpolation polynomials for known coefficients. Within the DAKOTA project, recent research in stochastic expansion methods has focused on automated polynomial order refinement ("p-refinement") of expansions to support scalability to higher-dimensional random input spaces [4, 3]. By preferentially refining only in the most important dimensions of the input space, the applicability of these methods can be extended from O(10^0)-O(10^1) random variables to O(10^2) and beyond, depending on the degree of anisotropy (i.e., the extent to which random input variables have differing degrees of influence on the statistical quantities of interest (QOIs)). Thus, the purpose of this study is to investigate the application of these adaptive stochastic expansion methods to the analysis of CRUD using the VIPRE simulation tools for two different plant models of differing random dimension, anisotropy, and smoothness.
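A minimal nonintrusive PCE in one Gaussian dimension, with coefficients computed by Gauss-Hermite quadrature, illustrates the fast convergence claimed for smooth functions; the model function is a stand-in, and nothing here reproduces DAKOTA's adaptive p-refinement.

```python
import math
import numpy as np
from numpy.polynomial.hermite_e import hermegauss, hermeval

# Nonintrusive PCE of a QOI f(xi), xi ~ N(0,1), in probabilists' Hermite
# polynomials He_k: coefficients c_k = E[f(xi) He_k(xi)] / k! by quadrature.
f = lambda x: np.exp(0.3 * x)             # smooth stand-in for the model
order, n_quad = 8, 20

nodes, weights = hermegauss(n_quad)       # Gauss quadrature, weight e^{-x^2/2}
weights = weights / np.sqrt(2.0 * np.pi)  # re-normalize to the N(0,1) density

coeffs = np.empty(order + 1)
for k in range(order + 1):
    He_k = hermeval(nodes, [0.0]*k + [1.0])   # He_k evaluated at the nodes
    coeffs[k] = np.sum(weights * f(nodes) * He_k) / math.factorial(k)

mean = coeffs[0]                          # PCE mean is the zeroth coefficient
var = sum(math.factorial(k) * coeffs[k]**2 for k in range(1, order + 1))
print(f"PCE mean = {mean:.6f} (exact {math.exp(0.045):.6f}), variance = {var:.6f}")
```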
ERIC Educational Resources Information Center
Vahabi, Mandana
2010-01-01
Objective: To test whether the format in which women receive probabilistic information about breast cancer and mammography affects their comprehension. Methods: A convenience sample of 180 women received pre-assembled randomized packages containing a breast health information brochure, with probabilities presented in either verbal or numeric…
On the Measurement and Properties of Ambiguity in Probabilistic Expectations
ERIC Educational Resources Information Center
Pickett, Justin T.; Loughran, Thomas A.; Bushway, Shawn
2015-01-01
Survey respondents' probabilistic expectations are now widely used in many fields to study risk perceptions, decision-making processes, and behavior. Researchers have developed several methods to account for the fact that the probability of an event may be more ambiguous for some respondents than others, but few prior studies have empirically…
Reliability, Risk and Cost Trade-Offs for Composite Designs
NASA Technical Reports Server (NTRS)
Shiao, Michael C.; Singhal, Surendra N.; Chamis, Christos C.
1996-01-01
Risk and cost trade-offs have been simulated using a probabilistic method that accounts for all naturally occurring uncertainties, including those in constituent material properties, fabrication variables, structure geometry and loading conditions. The probability density function of the first buckling load for a set of uncertain variables is computed. The probabilistic sensitivity factors of the uncertain variables with respect to the first buckling load are calculated. A reliability-based cost for a composite fuselage panel is defined and minimized with respect to the requisite design parameters. The optimization is achieved by solving a system of nonlinear algebraic equations whose coefficients are functions of the probabilistic sensitivity factors. With optimum design parameters, such as the mean and coefficient of variation (representing the range of scatter) of the uncertain variables, the most efficient and economical manufacturing procedure can be selected. In this paper, optimum values of the requisite design parameters for a predetermined cost of failure occurrence are computationally determined. The results for the fuselage panel analysis show that the higher the cost due to failure occurrence, the smaller the optimum coefficient of variation of the fiber modulus (a design parameter) in the longitudinal direction.
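The cost trade-off can be caricatured in a few lines: a manufacturing cost that grows as the scatter (COV) of the fiber modulus is tightened, plus an expected failure cost, minimized over the COV. The cost model and numbers are invented and reproduce only the qualitative finding that a higher failure cost drives the optimum COV down.

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize_scalar

# Toy reliability-cost trade-off: tighter control of scatter (smaller COV of
# fiber modulus) raises manufacturing cost but lowers failure probability.
mu_E, load_factor, C_fail = 1.0, 0.75, 1.0e4   # illustrative, normalized

def total_cost(cov):
    beta = (mu_E - load_factor * mu_E) / (cov * mu_E)  # linear limit state
    pf = norm.cdf(-beta)                               # failure probability
    c_manufacture = 1.0 / cov                          # tighter scatter costs more
    return c_manufacture + C_fail * pf

res = minimize_scalar(total_cost, bounds=(0.01, 0.3), method="bounded")
print(f"optimum COV = {res.x:.3f}, total cost = {res.fun:.2f}")
```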
Development of probabilistic regional climate scenario in East Asia
NASA Astrophysics Data System (ADS)
Dairaku, K.; Ueno, G.; Ishizaki, N. N.
2015-12-01
Climate information and services for Impacts, Adaptation and Vulnerability (IAV) assessments are of great concern. In order to develop probabilistic regional climate information that represents the uncertainty in climate scenario experiments in East Asia (CORDEX-EA and Japan), the probability distribution of 2 m air temperature was estimated using a regression model developed for this purpose. The method is easily applicable to other regions and other physical quantities, and to downscaling to finer scales, depending on the availability of observation datasets. Probabilistic climate information for the present (1969-1998) and future (2069-2098) climate was developed using 21 models from the CMIP3 SRES A1b scenarios and observational data (CRU_TS3.22 and University of Delaware in CORDEX-EA; NIAES AMeDAS mesh data in Japan). The prototype probabilistic information for CORDEX-EA and Japan represents the quantified structural uncertainties of multi-model ensemble experiments of climate change scenarios. Appropriate combinations of statistical methods and optimization of climate ensemble experiments using multiple general circulation models (GCMs) and multiple regional climate model (RCM) ensemble downscaling experiments are investigated.