NASA Technical Reports Server (NTRS)
Nguyen, Nhan T.; Hornby, Gregory; Ishihara, Abe
2013-01-01
This paper describes two methods of trajectory optimization for obtaining an optimal minimum-fuel-to-climb trajectory for an aircraft. The first method is based on the adjoint method; the second is a direct trajectory optimization method using a Chebyshev polynomial approximation and a cubic spline approximation. The approximate optimal trajectory is compared with the adjoint-based optimal trajectory, which is regarded as the true optimal solution of the trajectory optimization problem. The adjoint-based optimization problem leads to a singular optimal control solution, which results in a bang-singular-bang optimal control.
Sensitivity analysis and approximation methods for general eigenvalue problems
NASA Technical Reports Server (NTRS)
Murthy, D. V.; Haftka, R. T.
1986-01-01
Optimization of dynamic systems involving complex non-Hermitian matrices is often computationally expensive. Major contributors to the computational expense are the sensitivity analysis and the reanalysis of a modified design. The present work seeks to alleviate this computational burden by identifying efficient sensitivity analysis and approximate reanalysis methods. For the algebraic eigenvalue problem involving non-Hermitian matrices, algorithms for sensitivity analysis and approximate reanalysis are classified, compared and evaluated for efficiency and accuracy. Proper eigenvector normalization is discussed. An improved method for calculating derivatives of eigenvectors is proposed, based on a more rational normalization condition and on taking advantage of matrix sparsity. Important numerical aspects of this method are also discussed. To alleviate the problem of reanalysis, various approximation methods for eigenvalues are proposed and evaluated. Linear and quadratic approximations are based directly on the Taylor series. Several approximation methods are developed based on the generalized Rayleigh quotient for the eigenvalue problem. Approximation methods based on the trace theorem give high accuracy without needing any derivatives. Operation counts for the computation of the approximations are given. General recommendations are made for the selection of an appropriate approximation technique as a function of matrix size, the number of design variables, the number of eigenvalues of interest, and the number of design points at which the approximation is sought.
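The Taylor-based approximations discussed above rest on the standard first-order derivative of a simple eigenvalue of a non-Hermitian matrix, dλ/dp = yᴴ(dA/dp)x / (yᴴx), with x and y the right and left eigenvectors. A minimal numpy/scipy sketch of that derivative checked against a finite difference; the test matrices and helper names here are illustrative, not from the report:

```python
import numpy as np
from scipy.linalg import eig

def eigvalue_derivative(A, dA):
    """First-order derivatives d(lambda)/dp for all eigenvalues of a
    non-Hermitian A, given dA = dA/dp, via left/right eigenvectors."""
    w, vl, vr = eig(A, left=True, right=True)
    # scipy's vl satisfies vl.conj().T @ A = diag(w) @ vl.conj().T
    num = np.einsum('ik,ij,jk->k', vl.conj(), dA, vr)
    den = np.einsum('ik,ik->k', vl.conj(), vr)
    return w, num / den

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
dA = rng.standard_normal((5, 5))

w0, dw = eigvalue_derivative(A, dA)
h = 1e-6
w1 = np.linalg.eigvals(A + h * dA)
# pair each unperturbed eigenvalue with the nearest perturbed one
fd = np.array([(w1[np.argmin(np.abs(w1 - wi))] - wi) / h for wi in w0])
print(np.max(np.abs(fd - dw)))   # small: the first-order term matches
```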
Differential equation based method for accurate approximations in optimization
NASA Technical Reports Server (NTRS)
Pritchard, Jocelyn I.; Adelman, Howard M.
1990-01-01
This paper describes a method to efficiently and accurately approximate the effect of design changes on structural response. The key to this new method is to interpret sensitivity equations as differential equations that may be solved explicitly for closed form approximations; hence, the method is denoted the Differential Equation Based (DEB) method. Approximations were developed for vibration frequencies, mode shapes and static displacements. The DEB approximation method was applied to a cantilever beam and the results compared with the commonly used linear Taylor series approximations and exact solutions. The test calculations involved perturbing the height, width, cross-sectional area, tip mass, and bending inertia of the beam. The DEB method proved to be very accurate, and in most cases was more accurate than the linear Taylor series approximation. The method is applicable to simultaneous perturbation of several design variables. Also, the approximations may be used to calculate other system response quantities. For example, the approximations for displacement are used to approximate bending stresses.
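The DEB idea of integrating the sensitivity equation rather than truncating it can be illustrated on a toy case, sketched below under assumed power-law scalings (not the paper's actual beam equations): if stiffness scales as v**a and mass as v**b in a design variable v, the frequency sensitivity relation integrates in closed form to a power law, which a one-term Taylor series only matches locally.

```python
import numpy as np

# Toy case in the spirit of DEB (not the paper's equations): if stiffness
# scales as v**a and mass as v**b in a design variable v, the frequency
# sensitivity relation  d(omega)/dv = 0.5*(a - b)*omega/v  is a differential
# equation whose closed-form solution is a power law; integrating it
# reproduces the exact trend, while the one-term Taylor series drifts.
a, b = 3.0, 1.0              # e.g. bending stiffness ~ h**3, mass ~ h**1
v0, w0 = 1.0, 10.0           # nominal design value and frequency (arbitrary)

v = np.linspace(0.6, 1.6, 6)
deb    = w0 * (v / v0) ** ((a - b) / 2)          # integrated sensitivity ODE
taylor = w0 * (1 + (a - b) / 2 * (v - v0) / v0)  # linear Taylor approximation
for vi, d, t in zip(v, deb, taylor):
    print(f"v={vi:4.2f}  deb={d:6.2f}  taylor={t:6.2f}  diff={t - d:+5.2f}")
```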
Kim, Eunwoo; Lee, Minsik; Choi, Chong-Ho; Kwak, Nojun; Oh, Songhwai
2015-02-01
Low-rank matrix approximation plays an important role in computer vision and image processing. Most conventional low-rank matrix approximation methods are based on the l2-norm (Frobenius norm), with principal component analysis (PCA) being the most popular among them. However, this can give a poor approximation for data contaminated by outliers (including missing data), because the l2-norm exaggerates the negative effect of outliers. Recently, to overcome this problem, various methods based on the l1-norm, such as robust PCA methods, have been proposed for low-rank matrix approximation. Despite the robustness of these methods, they require heavy computational effort and substantial memory for high-dimensional data, which is impractical for real-world problems. In this paper, we propose two efficient low-rank factorization methods based on the l1-norm that find proper projection and coefficient matrices using the alternating rectified gradient method. The proposed methods are applied to a number of low-rank matrix approximation problems to demonstrate their efficiency and robustness. The experimental results show that our proposals outperform other state-of-the-art methods in both execution time and reconstruction performance.
A DEIM Induced CUR Factorization
2015-09-18
We derive a CUR approximate matrix factorization based on the Discrete Empirical Interpolation Method (DEIM). For a given matrix A, such a factorization provides a low-rank approximation expressed in terms of a subset of the columns and rows of A, and the resulting DEIM-induced CUR is compared with CUR approximations based on leverage scores.
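For readers unfamiliar with the construction, a compact numpy sketch of a DEIM-induced CUR: DEIM greedily picks interpolation indices from the singular vectors, and those indices select the columns C and rows R. This is an illustrative implementation, not the authors' code:

```python
import numpy as np

def deim_indices(U):
    """Greedy DEIM point selection from an orthonormal basis U (n x k)."""
    p = [int(np.argmax(np.abs(U[:, 0])))]
    for j in range(1, U.shape[1]):
        c = np.linalg.solve(U[np.ix_(p, range(j))], U[p, j])
        r = U[:, j] - U[:, :j] @ c          # residual after interpolation
        p.append(int(np.argmax(np.abs(r))))
    return np.array(p)

def deim_cur(A, k):
    """Rank-k CUR factorization with DEIM-selected rows and columns."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    rows = deim_indices(U[:, :k])           # from left singular vectors
    cols = deim_indices(Vt[:k].T)           # from right singular vectors
    C, R = A[:, cols], A[rows, :]
    Umid = np.linalg.pinv(C) @ A @ np.linalg.pinv(R)
    return C, Umid, R

rng = np.random.default_rng(1)
A = rng.standard_normal((60, 8)) @ rng.standard_normal((8, 40))  # rank 8
C, Umid, R = deim_cur(A, 8)
print(np.linalg.norm(A - C @ Umid @ R) / np.linalg.norm(A))      # ~1e-14
```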
Combining global and local approximations
NASA Technical Reports Server (NTRS)
Haftka, Raphael T.
1991-01-01
A method based on a linear approximation to a scaling factor, designated the 'global-local approximation' (GLA) method, is presented and shown capable of extending the range of usefulness of derivative-based approximations to a more refined model. The GLA approach refines the conventional scaling factor by means of a linearly varying, rather than constant, scaling factor. The capabilities of the method are demonstrated for a simple beam example with a crude and more refined FEM model.
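A minimal sketch of the GLA idea, assuming generic one-variable "refined" and "crude" models (the paper develops it for structural models): the constant scaling factor beta = f_R/f_C at the nominal point is replaced by a linearly varying one, beta(x) = beta0 + beta1*(x - x0).

```python
import numpy as np

def gla(f_refined, f_crude, x0, dx=1e-5):
    """Global-local approximation: scale the crude model by a linearly
    varying factor beta(x) matched to the refined model at x0."""
    b0 = f_refined(x0) / f_crude(x0)
    b1 = (f_refined(x0 + dx) / f_crude(x0 + dx) - b0) / dx  # d(beta)/dx
    return lambda x: (b0 + b1 * (x - x0)) * f_crude(x)

# Illustrative stand-ins for a refined and a crude model of one response
f_R = lambda x: np.sin(x) / x        # "refined" response
f_C = lambda x: 1 - x**2 / 6         # "crude" (truncated series) response
approx = gla(f_R, f_C, x0=1.0)

for x in (0.5, 1.0, 1.5, 2.0):
    print(x, f_R(x), approx(x), f_C(x))   # GLA tracks f_R beyond x0
```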
NASA Technical Reports Server (NTRS)
Ito, K.; Teglas, R.
1984-01-01
A numerical scheme based on the Legendre-tau approximation is proposed to approximate the feedback solution to the linear quadratic optimal control problem for hereditary differential systems. The convergence property is established using Trotter ideas. The method yields very good approximations at low orders and provides an approximation technique for computing closed-loop eigenvalues of the feedback system. A comparison with existing methods (based on averaging and spline approximations) is made.
NASA Technical Reports Server (NTRS)
Ito, Kazufumi; Teglas, Russell
1987-01-01
A numerical scheme based on the Legendre-tau approximation is proposed to approximate the feedback solution to the linear quadratic optimal control problem for hereditary differential systems. The convergence property is established using Trotter ideas. The method yields very good approximations at low orders and provides an approximation technique for computing closed-loop eigenvalues of the feedback system. A comparison with existing methods (based on averaging and spline approximations) is made.
NASA Astrophysics Data System (ADS)
Hu, Jinyan; Li, Li; Yang, Yunfeng
2017-06-01
A hierarchical, successive approximation registration method for non-rigid medical images based on thin-plate splines is proposed in this paper. The proposed method contains two major novelties. First, hierarchical registration based on the wavelet transform is used: the approximation image from the wavelet transform is selected as the object to be registered. Second, a successive approximation procedure is used to accomplish the non-rigid registration, i.e., local regions of the two images are first registered coarsely using thin-plate splines, and the current coarse registration result is then selected as the object to be registered in the following registration step. Experiments show that the proposed method is effective for registering non-rigid medical images.
Approximate Dynamic Programming: Combining Regional and Local State Following Approximations.
Deptula, Patryk; Rosenfeld, Joel A; Kamalapurkar, Rushikesh; Dixon, Warren E
2018-06-01
An infinite-horizon optimal regulation problem for a control-affine deterministic system is solved online using a local state following (StaF) kernel and a regional model-based reinforcement learning (R-MBRL) method to approximate the value function. Unlike traditional methods such as R-MBRL that aim to approximate the value function over a large compact set, the StaF kernel approach aims to approximate the value function in a local neighborhood of the state that travels within a compact set. In this paper, the value function is approximated using a state-dependent convex combination of the StaF-based and the R-MBRL-based approximations. As the state enters a neighborhood containing the origin, the value function transitions from being approximated by the StaF approach to the R-MBRL approach. Semiglobal uniformly ultimately bounded (SGUUB) convergence of the system states to the origin is established using a Lyapunov-based analysis. Simulation results are provided for two, three, six, and ten-state dynamical systems to demonstrate the scalability and performance of the developed method.
An approximation method for configuration optimization of trusses
NASA Technical Reports Server (NTRS)
Hansen, Scott R.; Vanderplaats, Garret N.
1988-01-01
Two- and three-dimensional elastic trusses are designed for minimum weight by varying the areas of the members and the location of the joints. Constraints on member stresses and Euler buckling are imposed and multiple static loading conditions are considered. The method presented here utilizes an approximate structural analysis based on first order Taylor series expansions of the member forces. A numerical optimizer minimizes the weight of the truss using information from the approximate structural analysis. Comparisons with results from other methods are made. It is shown that the method of forming an approximate structural analysis based on linearized member forces leads to a highly efficient method of truss configuration optimization.
Varughese, J K; Wentzel-Larsen, T; Vassbotn, F; Moen, G; Lund-Johansen, M
2010-04-01
In this volumetric study of the vestibular schwannoma (VS), we evaluated the accuracy and reliability of several approximation methods that are in use, and determined the minimum volume difference that needs to be measured for it to be attributable to an actual difference rather than to retest error. We also derived empirical proportionality coefficients for the different methods. This methodological study compared three VS measurement methods against a reference method based on serial slice volume estimates. The volume estimates were based on (i) a single diameter, (ii) three orthogonal diameters or (iii) the maximal slice area. Altogether 252 T1-weighted MRI images with gadolinium contrast, from 139 VS patients, were examined. The retest errors, in terms of relative percentages, were determined by undertaking repeated measurements on 63 scans for each method. Intraclass correlation coefficients were used to assess the agreement between each of the approximation methods and the reference method. The tendency of the approximation methods to systematically overestimate or underestimate different-sized tumours was also assessed with the help of Bland-Altman plots. The most commonly used approximation method, the maximum diameter, was the least reliable measurement method and has inherent weaknesses that need to be considered. These include greater retest errors than area-based measurements (25% and 15%, respectively), and the fact that it was the only approximation method that could not easily be converted into volumetric units. Area-based measurements can furthermore reliably resolve smaller volume differences than diameter-based measurements can. All our findings suggest that the maximum diameter should not be used as an approximation method. We propose instead the use of measurement modalities that take growth in multiple dimensions into account.
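For orientation, the three approximation methods as commonly implemented reduce to simple sphere/ellipsoid formulas; the study's empirically fitted proportionality coefficients would replace the textbook constants used in this sketch:

```python
import numpy as np

# Textbook stand-ins for the three approximation methods (the study fits
# its own empirical proportionality coefficients).
def vol_single_diameter(d):
    return np.pi / 6 * d**3                 # sphere from one diameter

def vol_three_diameters(a, b, c):
    return np.pi / 6 * a * b * c            # ellipsoid, 3 orthogonal diameters

def vol_max_slice_area(A):
    # sphere whose great-circle area is A: r = sqrt(A/pi), V = 4/3*pi*r^3
    return 4.0 / (3.0 * np.sqrt(np.pi)) * A**1.5

d = 2.0                                     # cm, a spherical test "tumour"
print(vol_single_diameter(d))               # 4.19 cm^3
print(vol_three_diameters(d, d, d))         # same for a sphere
print(vol_max_slice_area(np.pi * (d/2)**2)) # same from its maximal slice
```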
An Exact Model-Based Method for Near-Field Sources Localization with Bistatic MIMO System.
Singh, Parth Raj; Wang, Yide; Chargé, Pascal
2017-03-30
In this paper, we propose an exact model-based method for near-field sources localization with a bistatic multiple input, multiple output (MIMO) radar system, and compare it with an approximated model-based method. The aim of this paper is to propose an efficient way to use the exact model of the received signals of near-field sources in order to eliminate the systematic error introduced by the use of approximated model in most existing near-field sources localization techniques. The proposed method uses parallel factor (PARAFAC) decomposition to deal with the exact model. Thanks to the exact model, the proposed method has better precision and resolution than the compared approximated model-based method. The simulation results show the performance of the proposed method.
Approximation of the exponential integral (well function) using sampling methods
NASA Astrophysics Data System (ADS)
Baalousha, Husam Musa
2015-04-01
The exponential integral (also known as the well function) is often used in hydrogeology to solve the Theis and Hantush equations. Many methods have been developed to approximate the exponential integral. Most of these methods are based on numerical approximations and are valid for a certain range of the argument value. This paper presents a new approach to approximating the exponential integral, based on sampling methods. Three different sampling methods, Latin Hypercube Sampling (LHS), Orthogonal Array (OA), and Orthogonal Array-based Latin Hypercube (OA-LH), have been used to approximate the function. Different argument values, covering a wide range, have been used. The results of the sampling methods were compared with results obtained by the Mathematica software, which was used as a benchmark. All three sampling methods converge to the result obtained by Mathematica, at different rates. It was found that the orthogonal array (OA) method has the fastest convergence rate compared with LHS and OA-LH. The root mean square error (RMSE) of OA was on the order of 1E-08. This method can be used with any argument value, and can be used to solve other integrals in hydrogeology, such as the leaky aquifer integral.
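A sketch of the basic idea, assuming the substitution t = u/v, which maps the well function W(u) = integral from u to infinity of exp(-t)/t dt onto the unit interval so it can be estimated by Latin hypercube sampling; scipy's exp1 serves as the benchmark here in place of Mathematica:

```python
import numpy as np
from scipy.special import exp1   # benchmark: E1(u) equals the well function W(u)
from scipy.stats import qmc

# W(u) = int_u^inf exp(-t)/t dt; substituting t = u/v gives
# W(u) = int_0^1 exp(-u/v)/v dv, a unit-interval integral suited to sampling.
def well_function_lhs(u, n=4096, seed=0):
    v = qmc.LatinHypercube(d=1, seed=seed).random(n).ravel()
    return np.mean(np.exp(-u / v) / v)

for u in (0.01, 0.1, 1.0):
    print(f"u={u:5.2f}  LHS={well_function_lhs(u):.6f}  exp1={exp1(u):.6f}")
```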
Rectal temperature-based death time estimation in infants.
Igari, Yui; Hosokai, Yoshiyuki; Funayama, Masato
2016-03-01
In determining the time of death in infants based on rectal temperature, the same methods used in adults are generally applied. However, whether the methods for adults are suitable for infants is unclear. In this study, we examined the following 3 methods in 20 infant death cases: computer simulation of rectal temperature based on the infinite cylinder model (Ohno's method), computer-based double exponential approximation based on Marshall and Hoare's double exponential model with Henssge's parameter determination (Henssge's method), and computer-based collinear approximation based on extrapolation of the rectal temperature curve (collinear approximation). The interval between the last time the infant was seen alive and the time that he/she was found dead was defined as the death time interval and compared with the estimated time of death. In Ohno's method, 7 cases were within the death time interval, and the average deviation in the other 12 cases was approximately 80 min. The results of both Henssge's method and collinear approximation were apparently inferior to those of Ohno's method. The corrective factor was set within the range of 0.7-1.3 in Henssge's method, and a modified program was newly developed to make it possible to change the corrective factors. Modification A, in which the upper limit of the corrective factor range was set as the maximum value for each body weight, produced the best results: 8 cases were within the death time interval, and the average deviation in the other 12 cases was approximately 80 min. There was a possibility that the influence of thermal isolation on the actual infants was stronger than that previously shown by Henssge. We conclude that Ohno's method and Modification A are useful for death time estimation in infants. However, it is important to accept the estimated time of death with certain latitude, considering other circumstances. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
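For context, a sketch of the commonly cited Henssge parameterization of the Marshall and Hoare double exponential model (adult constants for ambient temperatures up to about 23 C; it is precisely the suitability of such adult parameters for infants that this study questions):

```python
import numpy as np
from scipy.optimize import brentq

# Commonly cited adult parameterization (ambient <= ~23 C):
#   Q = (T_rect - T_amb) / (37.2 - T_amb) = 1.25*exp(B*t) - 0.25*exp(5*B*t)
#   B = -1.2815*(cf*bw)**-0.625 + 0.0284   [1/h], cf = corrective factor
def henssge_time(T_rect, T_amb, bw_kg, cf=1.0, T0=37.2):
    Q = (T_rect - T_amb) / (T0 - T_amb)
    B = -1.2815 * (cf * bw_kg) ** -0.625 + 0.0284
    f = lambda t: 1.25 * np.exp(B * t) - 0.25 * np.exp(5 * B * t) - Q
    return brentq(f, 0.01, 200.0)           # hours since death

# 5 kg infant, 30 C rectal, 20 C ambient, corrective factor 1.0 vs 1.3
print(henssge_time(30.0, 20.0, 5.0, cf=1.0))
print(henssge_time(30.0, 20.0, 5.0, cf=1.3))
```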
Technical Note: Approximate Bayesian parameterization of a process-based tropical forest model
NASA Astrophysics Data System (ADS)
Hartig, F.; Dislich, C.; Wiegand, T.; Huth, A.
2014-02-01
Inverse parameter estimation of process-based models is a long-standing problem in many scientific disciplines. A key question for inverse parameter estimation is how to define the metric that quantifies how well model predictions fit to the data. This metric can be expressed by general cost or objective functions, but statistical inversion methods require a particular metric, the probability of observing the data given the model parameters, known as the likelihood. For technical and computational reasons, likelihoods for process-based stochastic models are usually based on general assumptions about variability in the observed data, and not on the stochasticity generated by the model. Only in recent years have new methods become available that allow the generation of likelihoods directly from stochastic simulations. Previous applications of these approximate Bayesian methods have concentrated on relatively simple models. Here, we report on the application of a simulation-based likelihood approximation for FORMIND, a parameter-rich individual-based model of tropical forest dynamics. We show that approximate Bayesian inference, based on a parametric likelihood approximation placed in a conventional Markov chain Monte Carlo (MCMC) sampler, performs well in retrieving known parameter values from virtual inventory data generated by the forest model. We analyze the results of the parameter estimation, examine its sensitivity to the choice and aggregation of model outputs and observed data (summary statistics), and demonstrate the application of this method by fitting the FORMIND model to field data from an Ecuadorian tropical forest. Finally, we discuss how this approach differs from approximate Bayesian computation (ABC), another method commonly used to generate simulation-based likelihood approximations. Our results demonstrate that simulation-based inference, which offers considerable conceptual advantages over more traditional methods for inverse parameter estimation, can be successfully applied to process-based models of high complexity. The methodology is particularly suitable for heterogeneous and complex data structures and can easily be adjusted to other model types, including most stochastic population and individual-based models. Our study therefore provides a blueprint for a fairly general approach to parameter estimation of stochastic process-based models.
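The core loop of a parametric (synthetic) likelihood approximation inside Metropolis-Hastings can be sketched generically; the toy gamma simulator below stands in for FORMIND, and all names and settings are illustrative rather than the authors' code:

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate(theta, n=200):
    """Stand-in stochastic simulator; returns summary statistics
    of one simulated data set."""
    x = rng.gamma(shape=theta[0], scale=theta[1], size=n)
    return np.array([x.mean(), x.std()])

def synthetic_loglik(theta, s_obs, reps=80):
    """Parametric (Gaussian) likelihood approximation built from
    repeated stochastic simulations (Wood-style synthetic likelihood)."""
    S = np.array([simulate(theta) for _ in range(reps)])
    mu, cov = S.mean(0), np.cov(S.T) + 1e-8 * np.eye(2)
    d = s_obs - mu
    return -0.5 * (d @ np.linalg.solve(cov, d) + np.log(np.linalg.det(cov)))

# "Observed" summaries from known parameters, then a short MH chain
s_obs = simulate([2.0, 1.5])
theta, ll = np.array([1.0, 1.0]), -np.inf
for _ in range(500):
    prop = theta + rng.normal(0, 0.1, 2)
    if (prop > 0).all():
        ll_prop = synthetic_loglik(prop, s_obs)
        if np.log(rng.random()) < ll_prop - ll:   # Metropolis accept/reject
            theta, ll = prop, ll_prop
print(theta)   # should drift toward ~(2.0, 1.5)
```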
Quantum Approximate Methods for the Atomistic Modeling of Multicomponent Alloys. Chapter 7
NASA Technical Reports Server (NTRS)
Bozzolo, Guillermo; Garces, Jorge; Mosca, Hugo; Gargano, Pablo; Noebe, Ronald D.; Abel, Phillip
2007-01-01
This chapter describes the role of quantum approximate methods in the understanding of complex multicomponent alloys at the atomic level. The need to accelerate materials design programs based on economical and efficient modeling techniques provides the framework for the introduction of approximations and simplifications in otherwise rigorous theoretical schemes. As a promising example of the role that such approximate methods might have in the development of complex systems, the BFS method for alloys is presented and applied to Ru-rich Ni-base superalloys and also to the NiAl(Ti,Cu) system, highlighting the benefits that can be obtained from introducing simple modeling techniques to the investigation of such complex systems.
Trajectories for High Specific Impulse High Specific Power Deep Space Exploration
NASA Technical Reports Server (NTRS)
Polsgrove, T.; Adams, R. B.; Brady, Hugh J. (Technical Monitor)
2002-01-01
Preliminary results are presented for two methods to approximate the mission performance of high specific impulse, high specific power vehicles. The first method is based on an analytical approximation derived by Williams and Shepherd and can be used to approximate mission performance to the outer planets and interstellar space. The second method is based on a parametric analysis of trajectories created using the well-known trajectory optimization code VARITOP. This parametric analysis allows the reader to approximate payload ratios and optimal power requirements for both one-way and round-trip missions. While this second method only addresses missions to and from Jupiter, future work will encompass all of the outer planet destinations and some interstellar precursor missions.
A Gaussian-based rank approximation for subspace clustering
NASA Astrophysics Data System (ADS)
Xu, Fei; Peng, Chong; Hu, Yunhong; He, Guoping
2018-04-01
Low-rank representation (LRR) has been shown successful in seeking low-rank structures of data relationships in a union of subspaces. Generally, LRR and LRR-based variants need to solve nuclear norm-based minimization problems. Beyond the success of such methods, it has been widely noted that the nuclear norm may not be a good rank approximation, because it simply adds all singular values of a matrix together, so that large singular values may dominate the sum. This results in a far from satisfactory rank approximation and may degrade the performance of low-rank models based on the nuclear norm. In this paper, we propose a novel nonconvex rank approximation based on the Gaussian distribution function, which has properties that make it a better rank approximation than the nuclear norm. A low-rank model is then proposed based on the new rank approximation, with application to motion segmentation. Experimental results have shown significant improvements and verify the effectiveness of our method.
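The abstract does not give the functional form, so the sketch below uses an assumed Gaussian-type surrogate in which each singular value's contribution saturates at 1; it illustrates the qualitative point that, unlike the nuclear norm, large singular values cannot dominate:

```python
import numpy as np

# Illustrative (assumed) Gaussian-type rank surrogate: each singular value
# contributes 1 - exp(-sigma^2 / (2*gamma^2)), saturating at 1. The paper's
# exact functional form may differ.
def gaussian_rank(X, gamma=1.0):
    s = np.linalg.svd(X, compute_uv=False)
    return np.sum(1.0 - np.exp(-s**2 / (2 * gamma**2)))

def nuclear_norm(X):
    return np.linalg.svd(X, compute_uv=False).sum()

rng = np.random.default_rng(3)
A = rng.standard_normal((50, 5)) @ rng.standard_normal((5, 50))  # rank 5
print(np.linalg.matrix_rank(A), gaussian_rank(A, gamma=0.5), nuclear_norm(A))
# gaussian_rank ~ 5 (close to the true rank); nuclear norm scales with magnitude
```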
Aben, Ilse; Tanzi, Cristina P; Hartmann, Wouter; Stam, Daphne M; Stammes, Piet
2003-06-20
A method is presented for in-flight validation of space-based polarization measurements based on approximation of the direction of polarization of scattered sunlight by the Rayleigh single-scattering value. This approximation is verified by simulations of radiative transfer calculations for various atmospheric conditions. The simulations show locations along an orbit where the scattering geometries are such that the intensities of the parallel and orthogonal polarization components of the light are equal, regardless of the observed atmosphere and surface. The method can be applied to any space-based instrument that measures the polarization of reflected solar light. We successfully applied the method to validate the Global Ozone Monitoring Experiment (GOME) polarization measurements. The error in the GOME's three broadband polarization measurements appears to be approximately 1%.
NASA Technical Reports Server (NTRS)
Desmarais, R. N.
1982-01-01
An accurate, economical method is described for generating approximations to the kernel of the integral equation relating unsteady pressure to normalwash in nonplanar flow. The method is capable of generating approximations of arbitrary accuracy. It is based on approximating the algebraic part of the nonelementary integrals in the kernel by exponential functions and then integrating termwise. The exponent spacing in the approximation is a geometric sequence. The coefficients and exponent multiplier of the exponential approximation are computed by least squares, so the method is completely automated. Exponential approximations generated in this manner are two orders of magnitude more accurate than the exponential approximation that is currently most often used for this purpose. The method can be used to generate approximations attaining any desired trade-off between accuracy and computing cost.
Approximate error conjugate gradient minimization methods
Kallman, Jeffrey S
2013-05-21
In one embodiment, a method includes selecting a subset of rays from a set of all rays to use in an error calculation for a constrained conjugate gradient minimization problem, calculating an approximate error using the subset of rays, and calculating a minimum in a conjugate gradient direction based on the approximate error. In another embodiment, a system includes a processor for executing logic, logic for selecting a subset of rays from a set of all rays to use in an error calculation for a constrained conjugate gradient minimization problem, logic for calculating an approximate error using the subset of rays, and logic for calculating a minimum in a conjugate gradient direction based on the approximate error. In other embodiments, computer program products, methods, and systems are described capable of using approximate error in constrained conjugate gradient minimization problems.
More on approximations of Poisson probabilities
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kao, C
1980-05-01
Calculation of Poisson probabilities frequently involves calculating high factorials, which becomes tedious and time-consuming with regular calculators. The usual way to overcome this difficulty has been to find approximations by making use of the table of the standard normal distribution. A new transformation proposed by Kao in 1978 appears to perform better for this purpose than traditional transformations. In the present paper several approximation methods are stated and compared numerically, including an approximation method that utilizes a modified version of Kao's transformation. An approximation based on a power transformation was found to outperform those based on the square-root type transformations proposed in the literature. The traditional Wilson-Hilferty approximation and Makabe-Morimura approximation are extremely poor compared with this approximation. 4 tables. (RWR)
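Two of the classical normal-based approximations being compared are easy to state; the sketch below implements a continuity-corrected normal approximation and the Wilson-Hilferty approximation (via the chi-square identity P(X <= k) = P(chi-square with 2(k+1) d.f. > 2*lambda)) against the exact CDF. The paper's power-transformation method is not reproduced here:

```python
import numpy as np
from scipy.stats import norm, poisson

def cdf_normal_cc(k, lam):                  # continuity-corrected normal
    return norm.cdf((k + 0.5 - lam) / np.sqrt(lam))

def cdf_wilson_hilferty(k, lam):            # via the chi-square relation
    nu = 2.0 * (k + 1.0)                    # P(X<=k) = P(chi2_nu > 2*lam)
    z = ((2.0 * lam / nu) ** (1 / 3) - 1 + 2 / (9 * nu)) * np.sqrt(9 * nu / 2)
    return 1.0 - norm.cdf(z)

for lam, k in [(5, 3), (5, 5), (20, 25)]:
    print(lam, k, poisson.cdf(k, lam),
          cdf_normal_cc(k, lam), cdf_wilson_hilferty(k, lam))
```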
NASA Astrophysics Data System (ADS)
Tan, Jun; Song, Peng; Li, Jinshan; Wang, Lei; Zhong, Mengxuan; Zhang, Xiaobo
2017-06-01
The surface-related multiple elimination (SRME) method is based on a feedback formulation and has become one of the most widely used multiple suppression methods. However, some differences are apparent between the predicted multiples and those in the source seismic records, which may leave conventional adaptive multiple subtraction methods barely able to suppress multiples effectively in actual production. This paper introduces a combined adaptive multiple attenuation method based on an optimized event tracing technique and extended Wiener filtering. The method first uses the multiple records predicted by SRME to generate a multiple velocity spectrum, then separates the original record into an approximate primary record and an approximate multiple record by applying the optimized event tracing method and short-time-window FK filtering. After applying the extended Wiener filtering method, residual multiples in the approximate primary record can be eliminated and the damaged primary can be restored from the approximate multiple record. This method combines the advantages of multiple elimination based on the optimized event tracing method with those of the extended Wiener filtering technique. It is well suited to suppressing typical hyperbolic and other types of multiples, with the advantage of minimizing damage to the primary. Synthetic and field data tests show that this method produces better multiple elimination results than the traditional multi-channel Wiener filter method and is more suitable for multiple elimination in complicated geological areas.
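The Wiener-filter step at the heart of adaptive subtraction is a least-squares matching filter; below is a single-channel sketch of that classic step only (the event tracing and FK filtering stages of the combined method are not shown, and the synthetic trace is illustrative):

```python
import numpy as np
from scipy.linalg import toeplitz

def adaptive_subtract(data, mult, nf=11):
    """Least-squares (Wiener-type) matching filter: shape the predicted
    multiple to the data, then subtract."""
    pad = np.r_[np.zeros(nf - 1), mult]
    M = toeplitz(pad, np.zeros(nf))[nf - 1:, :]   # convolution matrix
    f, *_ = np.linalg.lstsq(M, data, rcond=None)  # matching filter
    return data - M @ f                           # estimated primaries

# Synthetic trace: primary + mis-scaled, slightly shifted multiple
rng = np.random.default_rng(4)
n = 400
primary = np.zeros(n); primary[60] = 1.0
true_mult = np.zeros(n); true_mult[200] = 0.7
data = primary + true_mult + 0.01 * rng.standard_normal(n)
pred_mult = np.zeros(n); pred_mult[198] = 1.0    # prediction: wrong amp/time

est = adaptive_subtract(data, pred_mult)
print(abs(est[200]), abs(est[60]))  # multiple suppressed, primary preserved
```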
Gai, Litao; Bilige, Sudao; Jie, Yingmo
2016-01-01
In this paper, we successfully obtain the exact solutions and the approximate analytic solutions of the (2 + 1)-dimensional KP equation based on Lie symmetry, the extended tanh method and the homotopy perturbation method. In the first part, we obtain the symmetries of the (2 + 1)-dimensional KP equation based on the Wu differential characteristic set algorithm and reduce the equation. In the second part, we construct abundant exact travelling wave solutions by using the extended tanh method. These solutions are expressed in terms of hyperbolic functions, trigonometric functions and rational functions, respectively. It should be noted that when the parameters take special values, some solitary wave solutions are derived from the hyperbolic function solutions. Finally, we apply the homotopy perturbation method to obtain the approximate analytic solutions based on four kinds of initial conditions.
Technical Note: Approximate Bayesian parameterization of a complex tropical forest model
NASA Astrophysics Data System (ADS)
Hartig, F.; Dislich, C.; Wiegand, T.; Huth, A.
2013-08-01
Inverse parameter estimation of process-based models is a long-standing problem in ecology and evolution. A key problem of inverse parameter estimation is to define a metric that quantifies how well model predictions fit to the data. Such a metric can be expressed by general cost or objective functions, but statistical inversion approaches are based on a particular metric, the probability of observing the data given the model, known as the likelihood. Deriving likelihoods for dynamic models requires making assumptions about the probability for observations to deviate from mean model predictions. For technical reasons, these assumptions are usually derived without explicit consideration of the processes in the simulation. Only in recent years have new methods become available that allow generating likelihoods directly from stochastic simulations. Previous applications of these approximate Bayesian methods have concentrated on relatively simple models. Here, we report on the application of a simulation-based likelihood approximation for FORMIND, a parameter-rich individual-based model of tropical forest dynamics. We show that approximate Bayesian inference, based on a parametric likelihood approximation placed in a conventional MCMC, performs well in retrieving known parameter values from virtual field data generated by the forest model. We analyze the results of the parameter estimation, examine the sensitivity to the choice and aggregation of model outputs and observed data (summary statistics), and show results from using this method to fit the FORMIND model to field data from an Ecuadorian tropical forest. Finally, we discuss differences between this approach and approximate Bayesian computation (ABC), another commonly used method to generate simulation-based likelihood approximations. Our results demonstrate that simulation-based inference, which offers considerable conceptual advantages over more traditional methods for inverse parameter estimation, can successfully be applied to process-based models of high complexity. The methodology is particularly suited to heterogeneous and complex data structures and can easily be adjusted to other model types, including most stochastic population and individual-based models. Our study therefore provides a blueprint for a fairly general approach to parameter estimation of stochastic process-based models in ecology and evolution.
NASA Technical Reports Server (NTRS)
Desmarais, R. N.
1982-01-01
This paper describes an accurate, economical method for generating approximations to the kernel of the integral equation relating unsteady pressure to normalwash in nonplanar flow. The method is capable of generating approximations of arbitrary accuracy. It is based on approximating the algebraic part of the nonelementary integrals in the kernel by exponential approximations and then integrating termwise. The exponent spacing in the approximation is a geometric sequence. The coefficients and exponent multiplier of the exponential approximation are computed by least squares, so the method is completely automated. Exponential approximations generated in this manner are two orders of magnitude more accurate than the exponential approximation that is currently most often used for this purpose. Coefficients for 8, 12, 24, and 72 term approximations are tabulated in the report. Also, since the method is automated, it can be used to generate approximations attaining any desired trade-off between accuracy and computing cost.
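The recipe (exponents fixed in a geometric sequence, coefficients obtained by linear least squares) can be sketched on a generic decaying stand-in; the actual kernel's algebraic part is more involved, so f below is only illustrative:

```python
import numpy as np

# Fit f(u) ~ sum_j a_j * exp(-b_j * u) with geometrically spaced exponents
# b_j = b0 * ratio**j; with the exponents fixed, the coefficients a_j follow
# from linear least squares, so the procedure is fully automated.
f = lambda u: 1.0 / np.sqrt(1.0 + u**2)     # stand-in decaying function

n_terms, ratio, b0 = 12, 1.5, 0.05
b = b0 * ratio ** np.arange(n_terms)        # geometric exponent spacing
u = np.linspace(0.0, 20.0, 400)

E = np.exp(-np.outer(u, b))                 # design matrix e^{-b_j u}
a, *_ = np.linalg.lstsq(E, f(u), rcond=None)

print(np.max(np.abs(E @ a - f(u))))         # max fit error on the grid
```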
A Lyapunov and Sacker–Sell spectral stability theory for one-step methods
Steyer, Andrew J.; Van Vleck, Erik S.
2018-04-13
Approximation theory for Lyapunov and Sacker–Sell spectra based upon QR techniques is used to analyze the stability of a one-step method solving a time-dependent (nonautonomous) linear ordinary differential equation (ODE) initial value problem in terms of the local error. Integral separation is used to characterize the conditioning of stability spectra calculations. The stability of the numerical solution by a one-step method of a nonautonomous linear ODE using real-valued, scalar, nonautonomous linear test equations is justified. This analysis is used to approximate exponential growth/decay rates on finite and infinite time intervals and to establish global error bounds for one-step methods approximating uniformly exponentially stable trajectories of nonautonomous and nonlinear ODEs. A time-dependent stiffness indicator and a one-step method that switches between explicit and implicit Runge–Kutta methods based upon time-dependent stiffness are developed based upon the theoretical results.
NASA Astrophysics Data System (ADS)
Ma, Yuan-Zhuo; Li, Hong-Shuang; Yao, Wei-Xing
2018-05-01
The evaluation of the probabilistic constraints in reliability-based design optimization (RBDO) problems has always been significant and challenging work, which strongly affects the performance of RBDO methods. This article deals with RBDO problems using a recently developed generalized subset simulation (GSS) method and a posterior approximation approach. The posterior approximation approach is used to transform all the probabilistic constraints into ordinary constraints, as in deterministic optimization. The assessment of the multiple failure probabilities required by the posterior approximation approach is achieved by GSS in a single run at all supporting points, which are selected by a proper experimental design scheme combining Sobol' sequences and Bucher's design. Subsequently, the transformed deterministic design optimization problem can be solved by optimization algorithms, for example, the sequential quadratic programming method. Three optimization problems are used to demonstrate the efficiency and accuracy of the proposed method.
NASA Technical Reports Server (NTRS)
Rosen, I. G.
1985-01-01
Rayleigh-Ritz methods for the approximation of the natural modes for a class of vibration problems involving flexible beams with tip bodies, using subspaces of piecewise polynomial spline functions, are developed. An abstract operator-theoretic formulation of the eigenvalue problem is derived and its spectral properties are investigated. The existing theory for spline-based Rayleigh-Ritz methods applied to elliptic differential operators and the approximation properties of interpolatory splines are used to argue convergence and establish rates of convergence. An example and numerical results are discussed.
Interpolation Method Needed for Numerical Uncertainty
NASA Technical Reports Server (NTRS)
Groves, Curtis E.; Ilie, Marcel; Schallhorn, Paul A.
2014-01-01
Using Computational Fluid Dynamics (CFD) to predict a flow field is an approximation to the exact problem, and uncertainties exist. There is a method to approximate the errors in CFD via Richardson's extrapolation. This method is based on progressive grid refinement. To estimate the errors, the analyst must interpolate between at least three grids. This paper describes a study to find an appropriate interpolation scheme that can be used in Richardson's extrapolation or another uncertainty method to approximate errors.
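For reference, the three-grid Richardson extrapolation involved: with refinement ratio r and solutions f1 (finest) to f3 (coarsest), the observed order is p = ln((f3 - f2)/(f2 - f1))/ln r and the extrapolated value is f1 + (f1 - f2)/(r^p - 1). A minimal sketch on a manufactured solution:

```python
import numpy as np

def richardson(f1, f2, f3, r):
    """Estimate the exact value and observed order p from solutions on
    three grids (f1 finest, f3 coarsest) with constant refinement ratio r."""
    p = np.log((f3 - f2) / (f2 - f1)) / np.log(r)
    f_exact = f1 + (f1 - f2) / (r**p - 1.0)
    return f_exact, p

# Manufactured example: a "grid solution" with a 2nd-order error term
truth = 1.0
h = np.array([0.025, 0.05, 0.1])            # r = 2
f = truth + 3.0 * h**2                      # f1, f2, f3
print(richardson(f[0], f[1], f[2], r=2.0))  # -> (1.0, 2.0)
```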
A comparison of transport algorithms for premixed, laminar steady state flames
NASA Technical Reports Server (NTRS)
Coffee, T. P.; Heimerl, J. M.
1980-01-01
The effects of different methods of approximating multispecies transport phenomena in models of premixed, laminar, steady state flames were studied. Five approximation methods that span a wide range of computational complexity were developed. Identical data for the individual species properties were used for each method. Each approximation method was employed in the numerical solution of a set of five H2-O2-N2 flames. For each flame the computed species and temperature profiles, as well as the computed flame speeds, are found to be very nearly independent of the approximation method used. This does not indicate that transport phenomena are unimportant, but rather that the selection of the input values for the individual species transport properties is more important than the selection of the method used to approximate the multispecies transport. Based on these results, a sixth approximation method was developed that is computationally efficient and provides results extremely close to those of the most sophisticated and precise method used.
NASA Astrophysics Data System (ADS)
Bervillier, C.; Boisseau, B.; Giacomini, H.
2008-02-01
The relation between the Wilson-Polchinski and the Litim optimized ERGEs in the local potential approximation is studied with high accuracy using two different analytical approaches based on a field expansion: a recently proposed genuine analytical approximation scheme for two-point boundary value problems of ordinary differential equations, and a new one based on approximating the solution by generalized hypergeometric functions. A comparison with the numerical results obtained with the shooting method is made. A similar accuracy is reached in each case. Both methods appear to be more efficient than the usual field expansions frequently used in current studies of ERGEs (which, in particular, fail in the Wilson-Polchinski case).
Lindsay, Kaitlin E; Rühli, Frank J; Deleon, Valerie Burke
2015-06-01
The technique of forensic facial approximation, or reconstruction, is one of many facets of the field of mummy studies. Although far from a rigorous scientific technique, evidence-based visualization of antemortem appearance may supplement radiological, chemical, histological, and epidemiological studies of ancient remains. Published guidelines exist for creating facial approximations, but few approximations are published with documentation of the specific process and references used. Additionally, significant new research has taken place in recent years which helps define best practices in the field. This case study records the facial approximation of a 3,000-year-old ancient Egyptian woman using medical imaging data and the digital sculpting program, ZBrush. It represents a synthesis of current published techniques based on the most solid anatomical and/or statistical evidence. Through this study, it was found that although certain improvements have been made in developing repeatable, evidence-based guidelines for facial approximation, there are many proposed methods still awaiting confirmation from comprehensive studies. This study attempts to assist artists, anthropologists, and forensic investigators working in facial approximation by presenting the recommended methods in a chronological and usable format. © 2015 Wiley Periodicals, Inc.
Difference equation state approximations for nonlinear hereditary control problems
NASA Technical Reports Server (NTRS)
Rosen, I. G.
1982-01-01
Discrete approximation schemes for the solution of nonlinear hereditary control problems are constructed. The methods involve approximation by a sequence of optimal control problems in which the original infinite dimensional state equation has been approximated by a finite dimensional discrete difference equation. Convergence of the state approximations is argued using linear semigroup theory and is then used to demonstrate that solutions to the approximating optimal control problems in some sense approximate solutions to the original control problem. Two schemes, one based upon piecewise constant approximation, and the other involving spline functions are discussed. Numerical results are presented, analyzed and used to compare the schemes to other available approximation methods for the solution of hereditary control problems.
A Jacobi collocation approximation for nonlinear coupled viscous Burgers' equation
NASA Astrophysics Data System (ADS)
Doha, Eid H.; Bhrawy, Ali H.; Abdelkawy, Mohamed A.; Hafez, Ramy M.
2014-02-01
This article presents a numerical approximation to the initial-boundary value problem for the nonlinear coupled viscous Burgers' equation based on spectral methods. A Jacobi-Gauss-Lobatto collocation (J-GL-C) scheme, in combination with the implicit Runge-Kutta-Nyström (IRKN) scheme, is employed to obtain highly accurate approximations to the mentioned problem. This J-GL-C method, based on Jacobi polynomials and Gauss-Lobatto quadrature integration, reduces solving the nonlinear coupled viscous Burgers' equation to a system of nonlinear ordinary differential equations, which is far easier to solve. The given examples show, with relatively few J-GL-C points, the accuracy of the approximations and the utility of the approach over other analytical or numerical methods. The illustrative examples demonstrate the accuracy, efficiency, and versatility of the proposed algorithm.
Gorban, A N; Mirkes, E M; Zinovyev, A
2016-12-01
Most machine learning approaches have stemmed from applying the principle of minimizing the mean squared distance, based on computationally efficient quadratic optimization methods. However, when faced with high-dimensional and noisy data, quadratic error functionals have demonstrated many weaknesses, including high sensitivity to contaminating factors and the curse of dimensionality. Therefore, many recent applications in machine learning have exploited the properties of non-quadratic error functionals based on the L1 norm or even sub-linear potentials corresponding to quasinorms Lp (0 < p < 1).
Difference equation state approximations for nonlinear hereditary control problems
NASA Technical Reports Server (NTRS)
Rosen, I. G.
1984-01-01
Discrete approximation schemes for the solution of nonlinear hereditary control problems are constructed. The methods involve approximation by a sequence of optimal control problems in which the original infinite dimensional state equation has been approximated by a finite dimensional discrete difference equation. Convergence of the state approximations is argued using linear semigroup theory and is then used to demonstrate that solutions to the approximating optimal control problems in some sense approximate solutions to the original control problem. Two schemes, one based upon piecewise constant approximation, and the other involving spline functions are discussed. Numerical results are presented, analyzed and used to compare the schemes to other available approximation methods for the solution of hereditary control problems. Previously announced in STAR as N83-33589
Interpolation Method Needed for Numerical Uncertainty Analysis of Computational Fluid Dynamics
NASA Technical Reports Server (NTRS)
Groves, Curtis; Ilie, Marcel; Schallhorn, Paul
2014-01-01
Using Computational Fluid Dynamics (CFD) to predict a flow field is an approximation to the exact problem, and uncertainties exist. There is a method to approximate the errors in CFD via Richardson's extrapolation. This method is based on progressive grid refinement. To estimate the errors in an unstructured grid, the analyst must interpolate between at least three grids. This paper describes a study to find an appropriate interpolation scheme that can be used in Richardson's extrapolation or another uncertainty method to approximate errors.
Poisson Approximation-Based Score Test for Detecting Association of Rare Variants.
Fang, Hongyan; Zhang, Hong; Yang, Yaning
2016-07-01
Genome-wide association study (GWAS) has achieved great success in identifying genetic variants, but the nature of GWAS has determined its inherent limitations. Under the common disease rare variants (CDRV) hypothesis, the traditional association analysis methods commonly used in GWAS for common variants do not have enough power for detecting rare variants with a limited sample size. As a solution to this problem, pooling rare variants by their functions provides an efficient way of identifying susceptible genes. Rare variants typically have low minor allele frequencies, and the distribution of the total number of minor alleles of the rare variants can be approximated by a Poisson distribution. Based on this fact, we propose a new test method, the Poisson Approximation-based Score Test (PAST), for association analysis of rare variants. Two testing methods, namely, ePAST and mPAST, are proposed based on different strategies of pooling rare variants. Simulation results and application to the CRESCENDO cohort data show that our methods are more powerful than the existing methods. © 2016 John Wiley & Sons Ltd/University College London.
NASA Astrophysics Data System (ADS)
Rao, T. R. Ramesh
2018-04-01
In this paper, we study an analytical method based on the reduced differential transform method coupled with the Sumudu transform through Padé approximants. The proposed method may be considered as an alternative approach for finding the exact solution of the gas dynamics equation in an effective manner. This method does not require any discretization, linearization or perturbation.
Testing non-inferiority of a new treatment in three-arm clinical trials with binary endpoints.
Tang, Nian-Sheng; Yu, Bin; Tang, Man-Lai
2014-12-18
A two-arm non-inferiority trial without a placebo is usually adopted to demonstrate that an experimental treatment is not worse than a reference treatment by a small pre-specified non-inferiority margin due to ethical concerns. Selection of the non-inferiority margin and establishment of assay sensitivity are two major issues in the design, analysis and interpretation for two-arm non-inferiority trials. Alternatively, a three-arm non-inferiority clinical trial including a placebo is usually conducted to assess the assay sensitivity and internal validity of a trial. Recently, some large-sample approaches have been developed to assess the non-inferiority of a new treatment based on the three-arm trial design. However, these methods behave badly with small sample sizes in the three arms. This manuscript aims to develop some reliable small-sample methods to test three-arm non-inferiority. Saddlepoint approximation, exact and approximate unconditional, and bootstrap-resampling methods are developed to calculate p-values of the Wald-type, score and likelihood ratio tests. Simulation studies are conducted to evaluate their performance in terms of type I error rate and power. Our empirical results show that the saddlepoint approximation method generally behaves better than the asymptotic method based on the Wald-type test statistic. For small sample sizes, approximate unconditional and bootstrap-resampling methods based on the score test statistic perform better in the sense that their corresponding type I error rates are generally closer to the prespecified nominal level than those of other test procedures. Both approximate unconditional and bootstrap-resampling test procedures based on the score test statistic are generally recommended for three-arm non-inferiority trials with binary outcomes.
Elfwing, Stefan; Uchibe, Eiji; Doya, Kenji
2016-12-01
Free-energy based reinforcement learning (FERL) was proposed for learning in high-dimensional state and action spaces. However, the FERL method only really works well with binary, or close to binary, state input, where the number of active states is smaller than the number of non-active states. In the FERL method, the value function is approximated by the negative free energy of a restricted Boltzmann machine (RBM). In our earlier study, we demonstrated that the performance and the robustness of the FERL method can be improved by scaling the free energy by a constant related to the size of the network. In this study, we propose that RBM function approximation can be further improved by approximating the value function by the negative expected energy (EERL), instead of the negative free energy, while also being able to handle continuous state input. We validate the proposed method by demonstrating that EERL: (1) outperforms FERL, as well as standard neural network and linear function approximation, for three versions of a gridworld task with high-dimensional image state input; (2) achieves new state-of-the-art results in stochastic SZ-Tetris in both model-free and model-based learning settings; and (3) significantly outperforms FERL and standard neural network function approximation for a robot navigation task with raw and noisy RGB images as state input and a large number of actions. Copyright © 2016 The Author(s). Published by Elsevier Ltd. All rights reserved.
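The two value estimates differ only in how the hidden units are summed out; a toy sketch with an illustrative RBM (shapes and weights are arbitrary, not from the paper):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def neg_free_energy(s, W, b, c):
    """-F(s) for an RBM with visible state s: the FERL value estimate."""
    a = c + W @ s
    return b @ s + np.sum(np.logaddexp(0.0, a))   # log(1 + e^a), stable

def neg_expected_energy(s, W, b, c):
    """-E[E(s,h)] under p(h|s): the EERL value estimate."""
    a = c + W @ s
    return b @ s + np.sum(sigmoid(a) * a)

# Toy RBM with continuous state input
rng = np.random.default_rng(5)
W = rng.normal(0, 0.5, (8, 4)); b = rng.normal(0, 0.5, 4); c = np.zeros(8)
s = rng.random(4)
print(neg_free_energy(s, W, b, c), neg_expected_energy(s, W, b, c))
```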
Global optimization method based on ray tracing to achieve optimum figure error compensation
NASA Astrophysics Data System (ADS)
Liu, Xiaolin; Guo, Xuejia; Tang, Tianjin
2017-02-01
Figure error degrades the performance of an optical system. When predicting performance and performing system assembly, compensation by clocking optical components around the optical axis is a conventional but user-dependent method. Commercial optical software cannot optimize this clocking. Meanwhile, existing automatic figure-error balancing methods can introduce approximation error, and the process of building the optimization model is complex and time-consuming. To overcome these limitations, an accurate and automatic global optimization method for figure error balancing is proposed. This method is based on precise ray tracing, not approximate calculation, to compute the wavefront error under a given combination of the elements' rotation angles. The composite wavefront error root-mean-square (RMS) acts as the cost function. A simulated annealing algorithm is used to seek the optimal combination of rotation angles of each optical element. This method can be applied to all rotationally symmetric optics. Optimization results show that this method is 49% better than a previous approximate analytical method.
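The optimization loop described (precise cost evaluation plus simulated annealing over clocking angles) can be sketched with a stand-in cost function; wavefront_rms below is hypothetical and would be replaced by the actual ray-traced composite wavefront RMS:

```python
import numpy as np

rng = np.random.default_rng(6)

def wavefront_rms(angles):
    """Hypothetical stand-in for the ray-traced composite wavefront RMS as
    a function of each element's clocking angle (radians); in practice this
    would call the ray tracer with the elements' figure-error maps."""
    return np.sum([1.0 + np.cos(a + 0.4 * k) for k, a in enumerate(angles)])

def anneal(n_elem=4, iters=3000, T0=1.0, cool=0.998):
    x = rng.uniform(0, 2 * np.pi, n_elem)
    fx, T = wavefront_rms(x), T0
    for _ in range(iters):
        y = (x + rng.normal(0, 0.3, n_elem)) % (2 * np.pi)
        fy = wavefront_rms(y)
        # accept improvements always, uphill moves with Boltzmann probability
        if fy < fx or rng.random() < np.exp((fx - fy) / T):
            x, fx = y, fy
        T *= cool
    return x, fx

angles, rms = anneal()
print(np.round(angles, 3), rms)   # rms approaches the global minimum (0 here)
```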
New approach to CT pixel-based photon dose calculations in heterogeneous media
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wong, J.W.; Henkelman, R.M.
The effects of small cavities on dose in water and the dose in a homogeneous non-unit-density medium illustrate that inhomogeneities do not act independently in photon dose perturbation, and they serve as two constraints that should be satisfied by approximate methods of computed tomography (CT) pixel-based dose calculation. Current methods at best satisfy only one of the two constraints and show inadequacies in some intermediate geometries. We have developed an approximate method that satisfies both constraints and treats much of the synergistic effect of multiple inhomogeneities correctly. The method calculates primary and first-scatter doses by first-order ray tracing, with the first-scatter contribution augmented by a component of second scatter that behaves like first scatter. Multiple-scatter dose perturbation values extracted from small cavity experiments are used in a function which approximates the small residual multiple-scatter dose. For a wide range of geometries tested, our method agrees very well with measurements. The average deviation is less than 2%, with a maximum of 3%. In comparison, calculations based on existing methods can have errors larger than 10%.
NASA Technical Reports Server (NTRS)
Bennett, Floyd V.; Yntema, Robert T.
1959-01-01
Several approximate procedures for calculating the bending-moment response of flexible airplanes to continuous isotropic turbulence are presented and evaluated. The modal methods (the mode-displacement and force-summation methods) and a matrix method (segmented-wing method) are considered. These approximate procedures are applied to a simplified airplane for which an exact solution to the equation of motion can be obtained. The simplified airplane consists of a uniform beam with a concentrated fuselage mass at the center. Airplane motions are limited to vertical rigid-body translation and symmetrical wing bending deflections. Output power spectra of wing bending moments based on the exact transfer-function solutions are used as a basis for the evaluation of the approximate methods. It is shown that the force-summation and the matrix methods give satisfactory accuracy and that the mode-displacement method gives unsatisfactory accuracy.
On the integration of reinforcement learning and approximate reasoning for control
NASA Technical Reports Server (NTRS)
Berenji, Hamid R.
1991-01-01
The author discusses the importance of strengthening the knowledge representation characteristic of reinforcement learning techniques using methods such as approximate reasoning. The ARIC (approximate reasoning-based intelligent control) architecture is an example of such a hybrid approach in which the fuzzy control rules are modified (fine-tuned) using reinforcement learning. ARIC also demonstrates that it is possible to start with an approximately correct control knowledge base and learn to refine this knowledge through further experience. On the other hand, techniques such as the TD (temporal difference) algorithm and Q-learning establish stronger theoretical foundations for their use in adaptive control and also in stability analysis of hybrid reinforcement learning and approximate reasoning-based controllers.
Approximation of reliability of direct genomic breeding values
USDA-ARS's Scientific Manuscript database
Two methods to efficiently approximate theoretical genomic reliabilities are presented. The first method is based on the direct inverse of the left hand side (LHS) of the mixed model equations. It uses the genomic relationship matrix for a small subset of individuals with the highest genomic relationships...
Methods to approximate reliabilities in single-step genomic evaluation
USDA-ARS?s Scientific Manuscript database
Reliability of predictions from single-step genomic BLUP (ssGBLUP) can be calculated by inversion, but that is not feasible for large data sets. Two methods of approximating reliability were developed based on decomposition of a function of reliability into contributions from records, pedigrees, and...
A consensus algorithm for approximate string matching and its application to QRS complex detection
NASA Astrophysics Data System (ADS)
Alba, Alfonso; Mendez, Martin O.; Rubio-Rincon, Miguel E.; Arce-Santana, Edgar R.
2016-08-01
In this paper, a novel algorithm for approximate string matching (ASM) is proposed. The novelty resides in the fact that, unlike most other methods, the proposed algorithm is not based on the Hamming or Levenshtein distances, but instead computes a score for each symbol in the search text based on a consensus measure. Those symbols with sufficiently high scores will likely correspond to approximate instances of the pattern string. To demonstrate the usefulness of the proposed method, it has been applied to the detection of QRS complexes in electrocardiographic signals with competitive results when compared against the classic Pan-Tompkins (PT) algorithm. The proposed method outperformed PT in 72% of the test cases, with no extra computational cost.
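The abstract leaves the consensus measure unspecified, so the following Python sketch is only one plausible reading, with invented names and thresholds: every alignment of the pattern against the text casts votes for the symbols it matches, and per-symbol vote totals, normalized by the pattern length, serve as scores.

```python
import numpy as np

def consensus_scores(text, pattern):
    """For every alignment of `pattern` against `text`, each text symbol
    matching the pattern symbol it faces receives one vote; a symbol's
    score is its vote total normalized by the pattern length."""
    n, m = len(text), len(pattern)
    score = np.zeros(n)
    for k in range(n - m + 1):        # every alignment offset
        for j in range(m):            # every pattern position
            if text[k + j] == pattern[j]:
                score[k + j] += 1.0
    return score / m

text = "xxabcaxbcxxxabbcxx"
s = consensus_scores(text, "abc")
print([i for i, v in enumerate(s) if v >= 2.0 / 3.0])  # high-scoring symbols
```

Runs of high-scoring symbols then mark candidate approximate matches; the 2/3 threshold here is purely illustrative.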
NASA Astrophysics Data System (ADS)
Doha, E. H.; Abd-Elhameed, W. M.
2005-09-01
We present double ultraspherical spectral methods that allow the efficient approximate solution of parabolic partial differential equations in a square subject to the most general inhomogeneous mixed boundary conditions. The differential equations with their boundary and initial conditions are reduced to systems of ordinary differential equations for the time-dependent expansion coefficients. These systems are greatly simplified by using tensor matrix algebra, and are solved by the step-by-step method. Numerical examples of how to use these methods are described. Numerical results obtained compare favorably with the analytical solutions. Accurate double ultraspherical spectral approximations for Poisson's and Helmholtz's equations are also noted. Numerical experiments show that spectral approximation based on Chebyshev polynomials of the first kind is not always better than others based on ultraspherical polynomials.
Piecewise-homotopy analysis method (P-HAM) for first order nonlinear ODE
NASA Astrophysics Data System (ADS)
Chin, F. Y.; Lem, K. H.; Chong, F. S.
2013-09-01
In the homotopy analysis method (HAM), the value of the auxiliary parameter h is determined from the valid region of the h-curve, where the horizontal segment of the h-curve marks the valid h-region. Any h-value taken from the valid region, provided that the order of deformation is large enough, will in principle yield an approximation series that converges to the exact solution. However, it is found that an h-value chosen within this valid region does not always give a good approximation at finite order. This paper suggests an improved method called Piecewise-HAM (P-HAM). Instead of a single h-value, this method uses many h-values. Each h-value comes from an individual h-curve, where each h-curve is plotted by fixing the time t at a different value. Each h-value is claimed to produce a good approximation only in a neighborhood centered at the corresponding t on which the h-curve is based. The segments of these good approximations are then joined to form the approximation curve. In this way, the convergence region is further enhanced. The P-HAM is illustrated and supported by examples.
NASA Technical Reports Server (NTRS)
Tsai, C.; Szabo, B. A.
1973-01-01
An approach to the finite element method which utilizes families of conforming finite elements based on complete polynomials is presented. Finite element approximations based on this method converge with respect to progressively reduced element sizes as well as with respect to progressively increasing orders of approximation. Numerical results of static and dynamic applications of plates are presented to demonstrate the efficiency of the method. Comparisons are made with plate elements in NASTRAN and the high-precision plate element developed by Cowper and his co-workers. Some considerations are given to implementation of the constraint method into general purpose computer programs such as NASTRAN.
Fast, large-scale hologram calculation in wavelet domain
NASA Astrophysics Data System (ADS)
Shimobaba, Tomoyoshi; Matsushima, Kyoji; Takahashi, Takayuki; Nagahama, Yuki; Hasegawa, Satoki; Sano, Marie; Hirayama, Ryuji; Kakue, Takashi; Ito, Tomoyoshi
2018-04-01
We propose a large-scale hologram calculation using WAvelet ShrinkAge-Based superpositIon (WASABI), a wavelet transform-based algorithm. An image-type hologram calculated using the WASABI method is printed on a glass substrate with a resolution of 65,536 × 65,536 pixels and a pixel pitch of 1 μm. The hologram calculation time amounts to approximately 354 s on a commercial CPU, which is approximately 30 times faster than conventional methods.
NONLINEAR MULTIGRID SOLVER EXPLOITING AMGe COARSE SPACES WITH APPROXIMATION PROPERTIES
DOE Office of Scientific and Technical Information (OSTI.GOV)
Christensen, Max La Cour; Villa, Umberto E.; Engsig-Karup, Allan P.
The paper introduces a nonlinear multigrid solver for mixed finite element discretizations based on the Full Approximation Scheme (FAS) and element-based Algebraic Multigrid (AMGe). The main motivation to use FAS for unstructured problems is the guaranteed approximation property of the AMGe coarse spaces that were developed recently at Lawrence Livermore National Laboratory. These give the ability to derive stable and accurate coarse nonlinear discretization problems. The previous attempts (including ones with the original AMGe method, [5, 11]) were less successful due to the lack of such good approximation properties of the coarse spaces. With coarse spaces with approximation properties, our FAS approach on unstructured meshes should be as powerful/successful as FAS on geometrically refined meshes. For comparison, Newton's method and Picard iterations with an inner state-of-the-art linear solver are compared to FAS on a nonlinear saddle point problem with applications to porous media flow. It is demonstrated that FAS is faster than Newton's method and Picard iterations for the experiments considered here. Due to the guaranteed approximation properties of our AMGe, the coarse spaces are very accurate, providing a solver with the potential for mesh-independent convergence on general unstructured meshes.
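For reference, the two-grid coarse problem of FAS can be stated compactly; this is the textbook form, not anything particular to the AMGe coarse spaces above. With fine and coarse nonlinear operators A_h, A_H, restriction R, and prolongation P:

```latex
\begin{equation*}
A_H(u_H) \;=\; A_H(R\,u_h) + R\bigl(f_h - A_h(u_h)\bigr),
\qquad
u_h \leftarrow u_h + P\bigl(u_H - R\,u_h\bigr).
\end{equation*}
```

The quality of A_H, and hence of the whole cycle, is exactly where the approximation properties of the coarse spaces enter.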
Test particle propagation in magnetostatic turbulence. 2: The local approximation method
NASA Technical Reports Server (NTRS)
Klimas, A. J.; Sandri, G.; Scudder, J. D.; Howell, D. R.
1976-01-01
An approximation method for statistical mechanics is presented and applied to a class of problems which contains a test particle propagation problem. All of the available basic equations used in statistical mechanics are cast in the form of a single equation which is integrodifferential in time and which is then used as the starting point for the construction of the local approximation method. Simplification of the integrodifferential equation is achieved through approximation to the Laplace transform of its kernel. The approximation is valid near the origin in the Laplace space and is based on the assumption of small Laplace variable. No other small parameter is necessary for the construction of this approximation method. The n'th level of approximation is constructed formally, and the first five levels of approximation are calculated explicitly. It is shown that each level of approximation is governed by an inhomogeneous partial differential equation in time with time independent operator coefficients. The order in time of these partial differential equations is found to increase as n does. At n = 0 the most local first order partial differential equation which governs the Markovian limit is regained.
Numerical solution of 2D-vector tomography problem using the method of approximate inverse
DOE Office of Scientific and Technical Information (OSTI.GOV)
Svetov, Ivan; Maltseva, Svetlana; Polyakova, Anna
2016-08-10
We propose a numerical solution of reconstruction problem of a two-dimensional vector field in a unit disk from the known values of the longitudinal and transverse ray transforms. The algorithm is based on the method of approximate inverse. Numerical simulations confirm that the proposed method yields good results of reconstruction of vector fields.
Lognormal Approximations of Fault Tree Uncertainty Distributions.
El-Shanawany, Ashraf Ben; Ardron, Keith H; Walker, Simon P
2018-01-26
Fault trees are used in reliability modeling to create logical models of fault combinations that can lead to undesirable events. The output of a fault tree analysis (the top event probability) is expressed in terms of the failure probabilities of basic events that are input to the model. Typically, the basic event probabilities are not known exactly, but are modeled as probability distributions; therefore, the top event probability is also represented as an uncertainty distribution. Monte Carlo methods are generally used for evaluating the uncertainty distribution, but such calculations are computationally intensive and do not readily reveal the dominant contributors to the uncertainty. In this article, a closed-form approximation for the fault tree top event uncertainty distribution is developed, which is applicable when the uncertainties in the basic events of the model are lognormally distributed. The results of the approximate method are compared with results from two sampling-based methods: namely, the Monte Carlo method and the Wilks method based on order statistics. It is shown that the closed-form expression can provide a reasonable approximation to results obtained by Monte Carlo sampling, without incurring the computational expense. The Wilks method is found to be a useful means of providing an upper bound for the percentiles of the uncertainty distribution while being computationally inexpensive compared with full Monte Carlo sampling. The lognormal approximation method and Wilks's method appear to be attractive, practical alternatives for the evaluation of uncertainty in the output of fault trees and similar multilinear models. © 2018 Society for Risk Analysis.
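To make the two sampling baselines concrete, the Python sketch below Monte Carlo samples a hypothetical three-event OR gate with lognormal basic-event uncertainties, parameterized by median and error factor as is conventional in risk analysis, and contrasts the estimated 95th percentile with a Wilks one-sided 95/95 bound; the event data are invented and the paper's closed-form approximation is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical OR gate of three independent basic events; lognormal
# uncertainty given by median p50 and error factor EF = exp(1.645*sigma).
p50 = np.array([1e-3, 5e-4, 2e-3])
ef = np.array([3.0, 5.0, 3.0])
sigma = np.log(ef) / 1.645

def sample_top(n):
    p = p50 * np.exp(sigma * rng.standard_normal((n, 3)))  # lognormal draws
    return 1.0 - np.prod(1.0 - p, axis=1)                  # top event (OR gate)

top = sample_top(100_000)                 # full Monte Carlo
print("MC 95th percentile:", np.quantile(top, 0.95))

# Wilks 95/95: with 59 samples the sample maximum bounds the 95th
# percentile with 95% confidence, since 0.95**59 < 0.05.
print("Wilks 95/95 bound :", sample_top(59).max())
```

The Wilks bound needs only 59 model evaluations, which is the computational appeal noted above.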
Approximated maximum likelihood estimation in multifractal random walks
NASA Astrophysics Data System (ADS)
Løvsletten, O.; Rypdal, M.
2012-04-01
We present an approximated maximum likelihood method for the multifractal random walk processes of [E. Bacry et al., Phys. Rev. E 64, 026103 (2001)]. The likelihood is computed using a Laplace approximation and a truncation in the dependency structure for the latent volatility. The procedure is implemented as a package in the R computer language. Its performance is tested on synthetic data and compared to an inference approach based on the generalized method of moments. The method is applied to estimate parameters for various financial stock indices.
Hoang, Tuan; Tran, Dat; Huang, Xu
2013-01-01
Common Spatial Pattern (CSP) is a state-of-the-art method for feature extraction in Brain-Computer Interface (BCI) systems. However, it is designed for 2-class BCI classification problems. Current extensions of this method to multiple classes based on subspace union and covariance matrix similarity do not provide high performance. This paper presents a new approach to solving multi-class BCI classification problems by forming a subspace resembled from the original subspaces, and the proposed method for this approach is called Approximation-based Common Principal Component (ACPC). We perform experiments on Dataset 2a used in BCI Competition IV to evaluate the proposed method. This dataset was designed for motor imagery classification with 4 classes. Preliminary experiments show that the proposed ACPC feature extraction method, when combined with Support Vector Machines, outperforms CSP-based feature extraction methods on the experimental dataset.
Optimization of Turbine Engine Cycle Analysis with Analytic Derivatives
NASA Technical Reports Server (NTRS)
Hearn, Tristan; Hendricks, Eric; Chin, Jeffrey; Gray, Justin; Moore, Kenneth T.
2016-01-01
A new engine cycle analysis tool, called Pycycle, was recently built using the OpenMDAO framework. This tool uses equilibrium chemistry based thermodynamics, and provides analytic derivatives. This allows for stable and efficient use of gradient-based optimization and sensitivity analysis methods on engine cycle models, without requiring the use of finite difference derivative approximation methods. To demonstrate this, a gradient-based design optimization was performed on a multi-point turbofan engine model. Results demonstrate very favorable performance compared to an optimization of an identical model using finite-difference approximated derivatives.
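PyCycle and OpenMDAO are not reproduced here; the SciPy sketch below, on a toy objective, only illustrates why analytic derivatives pay off: the finite-difference run burns extra function evaluations approximating the gradient at every step.

```python
import numpy as np
from scipy.optimize import minimize

def f(x):  # smooth toy objective standing in for a cycle model
    return (x[0] - 1.5) ** 2 + 100.0 * (x[1] - x[0] ** 2) ** 2

def grad(x):  # its analytic gradient
    return np.array([
        2.0 * (x[0] - 1.5) - 400.0 * x[0] * (x[1] - x[0] ** 2),
        200.0 * (x[1] - x[0] ** 2),
    ])

x0 = np.array([-1.0, 2.0])
analytic = minimize(f, x0, jac=grad, method="BFGS")
fd = minimize(f, x0, method="BFGS")  # gradient via finite differences
print("function evals, analytic:", analytic.nfev, " finite-diff:", fd.nfev)
```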
Feng, Hao; Ashkar, Rana; Steinke, Nina; ...
2018-02-01
A method dubbed grating-based holography was recently used to determine the structure of colloidal fluids in the rectangular grooves of a diffraction grating from X-ray scattering measurements. Similar grating-based measurements have also been recently made with neutrons using a technique called spin-echo small-angle neutron scattering. The analysis of the X-ray diffraction data was done using an approximation that treats the X-ray phase change caused by the colloidal structure as a small perturbation to the overall phase pattern generated by the grating. In this paper, the adequacy of this weak phase approximation is explored for both X-ray and neutron grating holography. Additionally, it is found that there are several approximations hidden within the weak phase approximation that can lead to incorrect conclusions from experiments. In particular, the phase contrast for the empty grating is a critical parameter. Finally, while the approximation is found to be perfectly adequate for X-ray grating holography experiments performed to date, it cannot be applied to similar neutron experiments because the latter technique requires much deeper grating channels.
NASA Technical Reports Server (NTRS)
Ito, K.
1983-01-01
Approximation schemes based on Legendre-tau approximation are developed for application to parameter identification problems for delay and partial differential equations. The tau method is based on representing the approximate solution as a truncated series of orthonormal functions. The characteristic feature of the Legendre-tau approach is that when the solution to a problem is infinitely differentiable, the rate of convergence is faster than any finite power of 1/N; higher accuracy is thus achieved, making the approach suitable for small N.
NASA Astrophysics Data System (ADS)
Wang, Jing; Yang, Tianyu; Staskevich, Gennady; Abbe, Brian
2017-04-01
This paper studies the cooperative control problem for a class of multiagent dynamical systems with partially unknown nonlinear system dynamics. In particular, the control objective is to solve the state consensus problem for multiagent systems based on the minimisation of certain cost functions for individual agents. Under the assumption that there exist admissible cooperative controls for such class of multiagent systems, the formulated problem is solved through finding the optimal cooperative control using the approximate dynamic programming and reinforcement learning approach. With the aid of neural network parameterisation and online adaptive learning, our method renders a practically implementable approximately adaptive neural cooperative control for multiagent systems. Specifically, based on the Bellman's principle of optimality, the Hamilton-Jacobi-Bellman (HJB) equation for multiagent systems is first derived. We then propose an approximately adaptive policy iteration algorithm for multiagent cooperative control based on neural network approximation of the value functions. The convergence of the proposed algorithm is rigorously proved using the contraction mapping method. The simulation results are included to validate the effectiveness of the proposed algorithm.
NASA Astrophysics Data System (ADS)
Chui, Siu Lit; Lu, Ya Yan
2004-03-01
Wide-angle full-vector beam propagation methods (BPMs) for three-dimensional wave-guiding structures can be derived on the basis of rational approximants of a square root operator or its exponential (i.e., the one-way propagator). While the less accurate BPM based on the slowly varying envelope approximation can be efficiently solved by the alternating direction implicit (ADI) method, the wide-angle variants involve linear systems that are more difficult to handle. We present an efficient solver for these linear systems that is based on a Krylov subspace method with an ADI preconditioner. The resulting wide-angle full-vector BPM is used to simulate the propagation of wave fields in a Y branch and a taper.
Polynomial probability distribution estimation using the method of moments.
Munkhammar, Joakim; Mattsson, Lars; Rydén, Jesper
2017-01-01
We suggest a procedure for estimating Nth degree polynomial approximations to unknown (or known) probability density functions (PDFs) based on N statistical moments from each distribution. The procedure is based on the method of moments and is setup algorithmically to aid applicability and to ensure rigor in use. In order to show applicability, polynomial PDF approximations are obtained for the distribution families Normal, Log-Normal, Weibull as well as for a bimodal Weibull distribution and a data set of anonymized household electricity use. The results are compared with results for traditional PDF series expansion methods of Gram-Charlier type. It is concluded that this procedure is a comparatively simple procedure that could be used when traditional distribution families are not applicable or when polynomial expansions of probability distributions might be considered useful approximations. In particular this approach is practical for calculating convolutions of distributions, since such operations become integrals of polynomial expressions. Finally, in order to show an advanced applicability of the method, it is shown to be useful for approximating solutions to the Smoluchowski equation.
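The core linear-algebra step can be sketched in a few lines, under the assumption (not stated in the abstract) that the support [a, b] is known: matching the raw moments of a degree-N polynomial density yields an (N+1)x(N+1) linear system. All names here are illustrative.

```python
import numpy as np

def poly_pdf_coeffs(mu, a=0.0, b=1.0):
    """Coefficients c_0..c_N of f(x) = sum_k c_k x^k on [a, b] matching raw
    moments mu_0..mu_N (mu_0 = 1 enforces normalization), using
    integral_a^b x^m x^k dx = (b**(m+k+1) - a**(m+k+1)) / (m+k+1)."""
    N = len(mu) - 1
    A = np.array([[(b**(m+k+1) - a**(m+k+1)) / (m+k+1)
                   for k in range(N+1)] for m in range(N+1)])
    return np.linalg.solve(A, np.asarray(mu))

# Beta(2,2) has raw moments mu_m = 6 / ((m+2)(m+3)); with N = 2 the fit
# recovers the exact density f(x) = 6x(1-x), i.e. coefficients [0, 6, -6].
mu = [6.0 / ((m + 2) * (m + 3)) for m in range(3)]
print(np.round(poly_pdf_coeffs(mu), 6))
```

Convolutions then reduce to integrals of polynomial products, which is the practical advantage claimed above.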
NASA Astrophysics Data System (ADS)
Krishnan, Karthik; Reddy, Kasireddy V.; Ajani, Bhavya; Yalavarthy, Phaneendra K.
2017-02-01
CT and MR perfusion weighted imaging (PWI) enable quantification of perfusion parameters in stroke studies. These parameters are calculated from the residual impulse response function (IRF) based on a physiological model for tissue perfusion. The standard approach for estimating the IRF is deconvolution using oscillatory-limited singular value decomposition (oSVD) or Frequency Domain Deconvolution (FDD). FDD is widely recognized as the fastest approach currently available for deconvolution of CT Perfusion/MR PWI. In this work, three faster methods are proposed. The first is a direct (model based) crude approximation to the final perfusion quantities (Blood flow, Blood volume, Mean Transit Time and Delay) using the Welch-Satterthwaite approximation for gamma fitted concentration time curves (CTC). The second method is a fast accurate deconvolution method, we call Analytical Fourier Filtering (AFF). The third is another fast accurate deconvolution technique using Showalter's method, we call Analytical Showalter's Spectral Filtering (ASSF). Through systematic evaluation on phantom and clinical data, the proposed methods are shown to be computationally more than twice as fast as FDD. The two deconvolution based methods, AFF and ASSF, are also shown to be quantitatively accurate compared to FDD and oSVD.
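The exact filters behind oSVD, FDD, and the proposed AFF/ASSF variants are beyond the abstract, but the generic frequency-domain deconvolution step they all build on is easy to sketch; the spectral floor below is a crude stand-in for proper regularization, and all names and parameters are illustrative.

```python
import numpy as np

def fdd_irf(aif, tissue, dt=1.0, floor=0.1):
    """Recover the residue function from tissue(t) = (AIF * IRF)(t) by
    regularized spectral division; AIF spectrum magnitudes below a
    fraction `floor` of the peak are clamped to suppress noise blow-up."""
    n = len(aif)
    A, C = np.fft.rfft(aif), np.fft.rfft(tissue)
    lim = floor * np.abs(A).max()
    A = np.where(np.abs(A) < lim, lim * np.exp(1j * np.angle(A)), A)
    return np.fft.irfft(C / A, n=n) / dt

t = np.arange(0.0, 60.0, 1.0)
aif = t ** 3 * np.exp(-t / 1.5)               # gamma-variate arterial input
irf_true = (t < 8).astype(float)              # boxcar residue: MTT 8 s
tissue = np.convolve(aif, irf_true)[:len(t)]  # forward convolution (dt = 1)
irf_est = fdd_irf(aif, tissue)
print("estimated peak (flow):", round(float(irf_est.max()), 3), "true: 1.0")
```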
The scenario-based generalization of radiation therapy margins.
Fredriksson, Albin; Bokrantz, Rasmus
2016-03-07
We give a scenario-based treatment plan optimization formulation that is equivalent to planning with geometric margins if the scenario doses are calculated using the static dose cloud approximation. If the scenario doses are instead calculated more accurately, then our formulation provides a novel robust planning method that overcomes many of the difficulties associated with previous scenario-based robust planning methods. In particular, our method protects only against uncertainties that can occur in practice, it gives a sharp dose fall-off outside high dose regions, and it avoids underdosage of the target in 'easy' scenarios. The method shares the benefits of the previous scenario-based robust planning methods over geometric margins for applications where the static dose cloud approximation is inaccurate, such as irradiation with few fields and irradiation with ion beams. These properties are demonstrated on a suite of phantom cases planned for treatment with scanned proton beams subject to systematic setup uncertainty.
Comparing capacity value estimation techniques for photovoltaic solar power
Madaeni, Seyed Hossein; Sioshansi, Ramteen; Denholm, Paul
2012-09-28
In this paper, we estimate the capacity value of photovoltaic (PV) solar plants in the western U.S. Our results show that PV plants have capacity values that range between 52% and 93%, depending on location and sun-tracking capability. We further compare more robust but data- and computationally-intense reliability-based estimation techniques with simpler approximation methods. We show that if implemented properly, these techniques provide accurate approximations of reliability-based methods. Overall, methods that are based on the weighted capacity factor of the plant provide the most accurate estimate. As a result, we also examine the sensitivity of PV capacity value to the inclusion of sun-tracking systems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shadid, John Nicolas; Elman, Howard; Shuttleworth, Robert R.
2007-04-01
In recent years, considerable effort has been placed on developing efficient and robust solution algorithms for the incompressible Navier-Stokes equations based on preconditioned Krylov methods. These include physics-based methods, such as SIMPLE, and purely algebraic preconditioners based on the approximation of the Schur complement. All these techniques can be represented as approximate block factorization (ABF) type preconditioners. The goal is to decompose the application of the preconditioner into simplified sub-systems in which scalable multi-level type solvers can be applied. In this paper we develop a taxonomy of these ideas based on an adaptation of a generalized approximate factorization of the Navier-Stokes system first presented in [25]. This taxonomy illuminates the similarities and differences among these preconditioners and the central role played by efficient approximation of certain Schur complement operators. We then present a parallel computational study that examines the performance of these methods and compares them to an additive Schwarz domain decomposition (DD) algorithm. Results are presented for two and three-dimensional steady state problems for enclosed domains and inflow/outflow systems on both structured and unstructured meshes. The numerical experiments are performed using MPSalsa, a stabilized finite element code.
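The generalized factorization of [25] is not reproduced here, but the block LDU identity on which ABF preconditioners rest is standard. With velocity block F, discrete divergence B, and Schur complement S, the saddle-point system factors as

```latex
\begin{equation*}
\begin{pmatrix} F & B^{T} \\ B & 0 \end{pmatrix}
=
\begin{pmatrix} I & 0 \\ B F^{-1} & I \end{pmatrix}
\begin{pmatrix} F & 0 \\ 0 & S \end{pmatrix}
\begin{pmatrix} I & F^{-1} B^{T} \\ 0 & I \end{pmatrix},
\qquad
S = -\,B F^{-1} B^{T},
\end{equation*}
```

and the individual methods differ mainly in how cheaply they approximate the actions of F^{-1} and S^{-1}.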
On the parallel solution of parabolic equations
NASA Technical Reports Server (NTRS)
Gallopoulos, E.; Saad, Youcef
1989-01-01
Parallel algorithms for the solution of linear parabolic problems are proposed. The first of these methods is based on using polynomial approximation to the exponential. It does not require solving any linear systems and is highly parallelizable. The two other methods proposed are based on Pade and Chebyshev approximations to the matrix exponential. The parallelization of these methods is achieved by using partial fraction decomposition techniques to solve the resulting systems and thus offers the potential for increased time parallelism in time dependent problems. Experimental results from the Alliant FX/8 and the Cray Y-MP/832 vector multiprocessors are also presented.
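The source of the extra parallelism in the Pade and Chebyshev variants is the partial-fraction form of the rational approximant: with poles θ_i and weights α_i,

```latex
\begin{equation*}
e^{-\tau A} v \;\approx\; R(\tau A)\,v
\;=\; \alpha_0\, v \;+\; \sum_{i=1}^{p} \alpha_i \bigl(\tau A - \theta_i I\bigr)^{-1} v,
\end{equation*}
```

so the p shifted linear solves are mutually independent and can be distributed across processors.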
Nano-material and method of fabrication
Menchhofer, Paul A; Seals, Roland D; Howe, Jane Y; Wang, Wei
2015-02-03
A fluffy nano-material and method of manufacture are described. At 2000× magnification the fluffy nanomaterial has the appearance of raw, uncarded wool, with individual fiber lengths ranging from approximately four microns to twenty microns. Powder-based nanocatalysts are dispersed in the fluffy nanomaterial. The production of fluffy nanomaterial typically involves flowing about 125 cc/min of organic vapor at a pressure of about 400 torr over powder-based nano-catalysts for a period of time that may range from approximately thirty minutes to twenty-four hours.
Convergence analysis of surrogate-based methods for Bayesian inverse problems
NASA Astrophysics Data System (ADS)
Yan, Liang; Zhang, Yuan-Xiang
2017-12-01
The major challenges in Bayesian inverse problems arise from the need for repeated evaluations of the forward model, as required by Markov chain Monte Carlo (MCMC) methods for posterior sampling. Many attempts at accelerating Bayesian inference have relied on surrogates for the forward model, typically constructed through repeated forward simulations that are performed in an offline phase. Although such approaches can be quite effective at reducing computation cost, there has been little analysis of the effect of the approximation on posterior inference. In this work, we prove error bounds on the Kullback-Leibler (KL) distance between the true posterior distribution and the approximation based on surrogate models. Our rigorous error analysis shows that if the forward model approximation converges at a certain rate in the prior-weighted L2 norm, then the posterior distribution generated by the approximation converges to the true posterior at least two times faster in the KL sense. The error bound on the Hellinger distance is also provided. To provide concrete examples focusing on the use of surrogate-model-based methods, we present an efficient technique for constructing stochastic surrogate models to accelerate the Bayesian inference approach. The Christoffel least squares algorithms, based on generalized polynomial chaos, are used to construct a polynomial approximation of the forward solution over the support of the prior distribution. The numerical strategy and the predicted convergence rates are then demonstrated on nonlinear inverse problems involving the inference of parameters appearing in partial differential equations.
Approximate convective heating equations for hypersonic flows
NASA Technical Reports Server (NTRS)
Zoby, E. V.; Moss, J. N.; Sutton, K.
1979-01-01
Laminar and turbulent heating-rate equations appropriate for engineering predictions of the convective heating rates about blunt reentry spacecraft at hypersonic conditions are developed. The approximate methods are applicable to both nonreacting and reacting gas mixtures for either constant or variable-entropy edge conditions. A procedure which accounts for variable-entropy effects and is not based on mass balancing is presented. Results of the approximate heating methods are in good agreement with existing experimental results as well as boundary-layer and viscous-shock-layer solutions.
Extending the Fellegi-Sunter probabilistic record linkage method for approximate field comparators.
DuVall, Scott L; Kerber, Richard A; Thomas, Alun
2010-02-01
Probabilistic record linkage is a method commonly used to determine whether demographic records refer to the same person. The Fellegi-Sunter method is a probabilistic approach that uses field weights based on log likelihood ratios to determine record similarity. This paper introduces an extension of the Fellegi-Sunter method that incorporates approximate field comparators in the calculation of field weights. The data warehouse of a large academic medical center was used as a case study. The approximate comparator extension was compared with the Fellegi-Sunter method in its ability to find duplicate records previously identified in the data warehouse using different demographic fields and matching cutoffs. The approximate comparator extension misclassified 25% fewer pairs and had a larger Welch's T statistic than the Fellegi-Sunter method for all field sets and matching cutoffs. The accuracy gain provided by the approximate comparator extension grew as less information was provided and as the matching cutoff increased. Given the ubiquity of linkage in both clinical and research settings, the incremental improvement of the extension has the potential to make a considerable impact.
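The abstract does not give the exact comparator scaling, so the Python sketch below uses linear interpolation between the agreement and disagreement weights, one common choice, with a generic string similarity standing in for the comparators of the paper.

```python
import math
from difflib import SequenceMatcher

def field_weight(a, b, m, u):
    """Fellegi-Sunter field weight with an approximate comparator: the
    similarity s in [0, 1] interpolates between the disagreement weight
    log2((1-m)/(1-u)) and the full agreement weight log2(m/u)."""
    s = SequenceMatcher(None, a, b).ratio()   # stand-in similarity measure
    w_agree = math.log2(m / u)
    w_disagree = math.log2((1.0 - m) / (1.0 - u))
    return w_disagree + s * (w_agree - w_disagree)

# m = P(agree | same person), u = P(agree | different people): toy values
print(field_weight("JOHNSON", "JOHNSON", m=0.95, u=0.02))  # full credit
print(field_weight("JOHNSON", "JONSON",  m=0.95, u=0.02))  # near miss keeps credit
print(field_weight("JOHNSON", "SMITH",   m=0.95, u=0.02))  # ~disagreement weight
```

Under the binary Fellegi-Sunter rule the near miss would receive the full disagreement weight, which is exactly the loss the extension avoids.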
NASA Astrophysics Data System (ADS)
Erhard, Jannis; Bleiziffer, Patrick; Görling, Andreas
2016-09-01
A power series approximation for the correlation kernel of time-dependent density-functional theory is presented. Using this approximation in the adiabatic-connection fluctuation-dissipation (ACFD) theorem leads to a new family of Kohn-Sham methods. The new methods yield reaction energies and barriers of unprecedented accuracy and enable a treatment of static (strong) correlation with an accuracy of high-level multireference configuration interaction methods but are single-reference methods allowing for a black-box-like handling of static correlation. The new methods exhibit a better scaling of the computational effort with the system size than rivaling wave-function-based electronic structure methods. Moreover, the new methods do not suffer from the problem of singularities in response functions plaguing previous ACFD methods and therefore are applicable to any type of electronic system.
Small-Tip-Angle Spokes Pulse Design Using Interleaved Greedy and Local Optimization Methods
Grissom, William A.; Khalighi, Mohammad-Mehdi; Sacolick, Laura I.; Rutt, Brian K.; Vogel, Mika W.
2013-01-01
Current spokes pulse design methods can be grouped into methods based either on sparse approximation or on iterative local (gradient descent-based) optimization of the transverse-plane spatial frequency locations visited by the spokes. These two classes of methods have complementary strengths and weaknesses: sparse approximation-based methods perform an efficient search over a large swath of candidate spatial frequency locations but most are incompatible with off-resonance compensation, multifrequency designs, and target phase relaxation, while local methods can accommodate off-resonance and target phase relaxation but are sensitive to initialization and suboptimal local cost function minima. This article introduces a method that interleaves local iterations, which optimize the radiofrequency pulses, target phase patterns, and spatial frequency locations, with a greedy method to choose new locations. Simulations and experiments at 3 and 7 T show that the method consistently produces single- and multifrequency spokes pulses with lower flip angle inhomogeneity compared to current methods.
NASA Astrophysics Data System (ADS)
Fujiwara, Takeo; Nishino, Shinya; Yamamoto, Susumu; Suzuki, Takashi; Ikeda, Minoru; Ohtani, Yasuaki
2018-06-01
A novel tight-binding method is developed, based on the extended Hückel approximation and charge self-consistency, referencing the band structure and total energy from the local density approximation of density functional theory. The parameters are adjusted computationally so that the results reproduce the band structure and the total energy, and an algorithm for determining the parameters is established. The set of determined parameters is applicable to a variety of crystalline compounds and to changes of lattice constants; in other words, it is transferable. Examples are demonstrated for Si crystals of several crystalline structures with varying lattice constants. Since the set of parameters is transferable, the present tight-binding method may also be applicable to molecular dynamics simulations of large-scale systems and long-time dynamical processes.
Screen Space Ambient Occlusion Based Multiple Importance Sampling for Real-Time Rendering
NASA Astrophysics Data System (ADS)
Zerari, Abd El Mouméne; Babahenini, Mohamed Chaouki
2018-03-01
We propose a new approximation technique for accelerating the Global Illumination algorithm for real-time rendering. The proposed approach is based on the Screen-Space Ambient Occlusion (SSAO) method, which approximates the global illumination for large, fully dynamic scenes at interactive frame rates. Current algorithms that are based on the SSAO method suffer from difficulties due to the large number of samples that are required. In this paper, we propose an improvement to the SSAO technique by integrating it with a Multiple Importance Sampling technique that combines a stratified sampling method with an importance sampling method, with the objective of reducing the number of samples. Experimental evaluation demonstrates that our technique can produce high-quality images in real time and is significantly faster than traditional techniques.
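The abstract does not name the combination rule; when several sampling strategies are mixed, the balance heuristic is the usual choice, weighting a sample x drawn from strategy i (n_i samples from density p_i) as

```latex
\begin{equation*}
w_i(x) \;=\; \frac{n_i\, p_i(x)}{\sum_{j} n_j\, p_j(x)},
\end{equation*}
```

which keeps the combined estimator unbiased while down-weighting samples from high-variance strategies.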
Boson expansions based on the random phase approximation representation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pedrocchi, V.G.; Tamura, T.
1984-04-01
A new boson expansion theory based on the random phase approximation is presented. The boson expansions are derived here directly in the random phase approximation representation with the help of a technique that combines the use of the Usui operator with that of a new bosonization procedure, called the term-by-term bosonization method. The present boson expansion theory is constructed by retaining a single collective quadrupole random phase approximation component, a truncation that allows for a perturbative treatment of the whole problem. Both Hermitian and non-Hermitian boson expansions, valid for even nuclei, are obtained.
Big geo data surface approximation using radial basis functions: A comparative study
NASA Astrophysics Data System (ADS)
Majdisova, Zuzana; Skala, Vaclav
2017-12-01
Approximation of scattered data is often a task in many engineering problems. The Radial Basis Function (RBF) approximation is appropriate for big scattered datasets in n-dimensional space. It is a non-separable approximation, as it is based on the distance between two points. This method leads to the solution of an overdetermined linear system of equations. In this paper the RBF approximation methods are briefly described, a new approach to the RBF approximation of big datasets is presented, and a comparison for different Compactly Supported RBFs (CS-RBFs) is made with respect to the accuracy of the computation. The proposed approach uses symmetry of a matrix, partitioning the matrix into blocks and data structures for storage of the sparse matrix. The experiments are performed for synthetic and real datasets.
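A minimal Python sketch of the overdetermined setting described above, with a Wendland C2 function as the compactly supported basis; the center layout, support radius, and dense solve are all simplifications for brevity.

```python
import numpy as np

def wendland_c2(r):
    """Wendland C2 compactly supported RBF, support scaled to r < 1."""
    return np.where(r < 1.0, (1.0 - r) ** 4 * (4.0 * r + 1.0), 0.0)

def rbf_lsq(centers, pts, vals, radius):
    """Fewer centers than data points makes the collocation matrix tall,
    so the fit is a least-squares solve of an overdetermined system (the
    matrix is sparse for small support radii; dense here for brevity)."""
    r = np.linalg.norm(pts[:, None, :] - centers[None, :, :], axis=2) / radius
    A = wendland_c2(r)
    coeffs, *_ = np.linalg.lstsq(A, vals, rcond=None)
    return coeffs, A @ coeffs            # coefficients and fitted values

rng = np.random.default_rng(1)
pts = rng.random((2000, 2))                             # scattered 2-D data
vals = np.sin(3 * pts[:, 0]) * np.cos(2 * pts[:, 1])    # sampled surface
gx, gy = np.meshgrid(np.linspace(0, 1, 8), np.linspace(0, 1, 8))
centers = np.column_stack([gx.ravel(), gy.ravel()])     # 64 regular centers
_, fitted = rbf_lsq(centers, pts, vals, radius=0.4)
print("RMS residual:", np.sqrt(np.mean((fitted - vals) ** 2)))
```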
Research on Modeling of Propeller in a Turboprop Engine
NASA Astrophysics Data System (ADS)
Huang, Jiaqin; Huang, Xianghua; Zhang, Tianhong
2015-05-01
In the simulation of an engine-propeller integrated control system for a turboprop aircraft, a real-time propeller model with high accuracy is required. A study is conducted to compare the real-time and precision performance of propeller models based on strip theory and lifting surface theory. The emphasis in modeling by strip theory is focused on three points: First, FLUENT is adopted to calculate the lift and drag coefficients of the propeller. Next, a method to calculate the induced velocity which occurs in the ground rig test is presented. Finally, an approximate method is proposed to obtain the downwash angle of the propeller when the conventional algorithm has no solution. An advanced approximation of the velocities induced by helical horseshoe vortices is applied in the model based on lifting surface theory. This approximation reduces computing time while retaining good accuracy. Comparison between the two modeling techniques shows that the model based on strip theory, which offers advantages in both real-time performance and accuracy, can better meet the requirement.
Approximate Solutions for a Self-Folding Problem of Carbon Nanotubes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Y Mikata
2006-08-22
This paper treats approximate solutions for a self-folding problem of carbon nanotubes. It has been observed in molecular dynamics calculations [1] that a carbon nanotube with a large aspect ratio can self-fold due to van der Waals force between the parts of the same carbon nanotube. The main issue in the self-folding problem is to determine the minimum threshold length of the carbon nanotube at which it becomes possible for the carbon nanotube to self-fold due to the van der Waals force. An approximate mathematical model based on the force method is constructed for the self-folding problem of carbon nanotubes, and it is solved exactly as an elastica problem using elliptic functions. Additionally, three other mathematical models are constructed based on the energy method. As a particular example, the lower and upper estimates for the critical threshold (minimum) length are determined based on both methods for the (5,5) armchair carbon nanotube.
Approximate l-fold cross-validation with Least Squares SVM and Kernel Ridge Regression
DOE Office of Scientific and Technical Information (OSTI.GOV)
Edwards, Richard E; Zhang, Hao; Parker, Lynne Edwards
2013-01-01
Kernel methods have difficulties scaling to large modern data sets. The scalability issues are based on computational and memory requirements for working with a large matrix. These requirements have been addressed over the years by using low-rank kernel approximations or by improving the solvers' scalability. However, Least Squares Support Vector Machines (LS-SVM), a popular SVM variant, and Kernel Ridge Regression still have several scalability issues. In particular, the O(n^3) computational complexity for solving a single model, and the overall computational complexity associated with tuning hyperparameters, are still major problems. We address these problems by introducing an O(n log n) approximate l-fold cross-validation method that uses a multi-level circulant matrix to approximate the kernel. In addition, we prove our algorithm's computational complexity and present empirical runtimes on data sets with approximately 1 million data points. We also validate our approximate method's effectiveness at selecting hyperparameters on real world and standard benchmark data sets. Lastly, we provide experimental results on using a multi-level circulant kernel approximation to solve LS-SVM problems with hyperparameters selected using our method.
The large discretization step method for time-dependent partial differential equations
NASA Technical Reports Server (NTRS)
Haras, Zigo; Taasan, Shlomo
1995-01-01
A new method for the acceleration of linear and nonlinear time dependent calculations is presented. It is based on the Large Discretization Step (LDS) approximation, defined in this work, which employs an extended system of low accuracy schemes to approximate a high accuracy discrete approximation to a time dependent differential operator. Error bounds on such approximations are derived. These approximations are efficiently implemented in the LDS methods for linear and nonlinear hyperbolic equations, presented here. In these algorithms the high and low accuracy schemes are interpreted as the same discretization of a time dependent operator on fine and coarse grids, respectively. Thus, a system of correction terms and corresponding equations are derived and solved on the coarse grid to yield the fine grid accuracy. These terms are initialized by visiting the fine grid once in many coarse grid time steps. The resulting methods are very general, simple to implement and may be used to accelerate many existing time marching schemes.
NASA Technical Reports Server (NTRS)
Patnaik, Surya N.; Guptill, James D.; Hopkins, Dale A.; Lavelle, Thomas M.
1998-01-01
Nonlinear mathematical-programming-based design optimization can be an elegant method. However, the calculations required to generate the merit function, constraints, and their gradients, which are frequently required, can make the process computationally intensive. The computational burden can be greatly reduced by using approximating analyzers derived from an original analyzer utilizing neural networks and linear regression methods. The experience gained from using both of these approximation methods in the design optimization of a high speed civil transport aircraft is the subject of this paper. The Langley Research Center's Flight Optimization System was selected for the aircraft analysis. This software was exercised to generate a set of training data with which a neural network and a regression method were trained, thereby producing the two approximating analyzers. The derived analyzers were coupled to the Lewis Research Center's CometBoards test bed to provide the optimization capability. With the combined software, both approximation methods were examined for use in aircraft design optimization, and both performed satisfactorily. The CPU time for solution of the problem, which had been measured in hours, was reduced to minutes with the neural network approximation and to seconds with the regression method. Instability encountered in the aircraft analysis software at certain design points was also eliminated. On the other hand, there were costs and difficulties associated with training the approximating analyzers. The CPU time required to generate the input-output pairs and to train the approximating analyzers was seven times that required for solution of the problem.
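Neither FLOPS nor CometBoards is reproduced here; the sketch below shows only the surrogate workflow the paragraph describes, with a quadratic regression standing in for both the regression and neural network approximators: sample the expensive analyzer, fit a cheap model, optimize the cheap model.

```python
import numpy as np
from scipy.optimize import minimize

def analyzer(x):  # stand-in for an expensive analysis code
    return (x[0] - 0.3) ** 2 + 2.0 * (x[1] + 0.1) ** 2 + 0.5 * x[0] * x[1]

rng = np.random.default_rng(2)
X = rng.uniform(-1.0, 1.0, size=(50, 2))          # training inputs
y = np.array([analyzer(x) for x in X])            # input-output pairs

def features(x):                                  # quadratic basis
    return np.array([1.0, x[0], x[1], x[0]**2, x[1]**2, x[0]*x[1]])

Phi = np.array([features(x) for x in X])
beta, *_ = np.linalg.lstsq(Phi, y, rcond=None)    # "train" the approximator

surrogate = lambda x: features(x) @ beta          # cheap approximate analyzer
res = minimize(surrogate, x0=np.zeros(2))
print("surrogate optimum:", res.x, " analyzer value there:", analyzer(res.x))
```

The trade-off noted above is visible even here: the 50 analyzer calls for training dominate the cost, after which each surrogate evaluation is essentially free.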
Invisible Base Electrode Coordinates Approximation for Simultaneous SPECT and EEG Data Visualization
NASA Astrophysics Data System (ADS)
Kowalczyk, L.; Goszczynska, H.; Zalewska, E.; Bajera, A.; Krolicki, L.
2014-04-01
This work was performed as part of larger research concerning the feasibility of improving the localization of epileptic foci, as compared to the standard SPECT examination, by applying the technique of EEG mapping. The presented study extends our previous work on the development of a method for superposition of SPECT images and EEG 3D maps when these two examinations are performed simultaneously. Due to the lack of anatomical data in SPECT images, this is a much more difficult task than in the case of an MRI/EEG study, where electrodes are visible in the morphological images. Using an appropriate dose of radioisotope, we mark five base electrodes to make them visible in the SPECT image and then approximate the coordinates of the remaining electrodes using properties of the 10-20 electrode placement system and the proposed nine-ellipses model. This allows computing a sequence of 3D EEG maps spanning all electrodes. It happens, however, that not all five base electrodes can be reliably identified in the SPECT data. The aim of the current study was to develop a method for determining the coordinates of base electrode(s) missing in the SPECT image. The algorithm for coordinate approximation was developed and tested on data collected for three subjects with all electrodes visible. To increase the accuracy of the approximation we used head surface models: a freely available model from Oostenveld's research, based on data from the SPM package, and our own model, based on data from our EEG/SPECT studies. For data collected in four cases with one electrode not visible, we compared the invisible base electrode coordinate approximation for the Oostenveld model and our model. The results vary depending on the missing electrode placement, but application of the realistic head model significantly increases the accuracy of the approximation.
Conjugate-gradient preconditioning methods for shift-variant PET image reconstruction.
Fessler, J A; Booth, S D
1999-01-01
Gradient-based iterative methods often converge slowly for tomographic image reconstruction and image restoration problems, but can be accelerated by suitable preconditioners. Diagonal preconditioners offer some improvement in convergence rate, but do not incorporate the structure of the Hessian matrices in imaging problems. Circulant preconditioners can provide remarkable acceleration for inverse problems that are approximately shift-invariant, i.e., for those with approximately block-Toeplitz or block-circulant Hessians. However, in applications with nonuniform noise variance, such as arises from Poisson statistics in emission tomography and in quantum-limited optical imaging, the Hessian of the weighted least-squares objective function is quite shift-variant, and circulant preconditioners perform poorly. Additional shift-variance is caused by edge-preserving regularization methods based on nonquadratic penalty functions. This paper describes new preconditioners that approximate more accurately the Hessian matrices of shift-variant imaging problems. Compared to diagonal or circulant preconditioning, the new preconditioners lead to significantly faster convergence rates for the unconstrained conjugate-gradient (CG) iteration. We also propose a new efficient method for the line-search step required by CG methods. Applications to positron emission tomography (PET) illustrate the method.
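The circulant and hybrid preconditioners of the paper are not reconstructed here; the SciPy sketch below only shows the mechanics of preconditioned CG on a toy shift-variant system, a Laplacian plus a strongly nonuniform diagonal mimicking nonuniform noise weighting, with a Jacobi preconditioner as the simplest possible choice.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import cg, LinearOperator

rng = np.random.default_rng(0)
n = 500
w = 10.0 ** rng.uniform(-2, 2, n)                 # weights spanning 4 decades
A = diags([-1.0, 0.0, -1.0], [-1, 0, 1], shape=(n, n)) + diags(2.0 + w)
A = A.tocsr()                                     # SPD toy "Hessian"
b = rng.standard_normal(n)

d = A.diagonal()
M = LinearOperator((n, n), matvec=lambda x: x / d)  # Jacobi: M ~ diag(A)^-1

for name, prec in [("plain CG", None), ("Jacobi PCG", M)]:
    k = [0]
    x, info = cg(A, b, M=prec, callback=lambda xk: k.__setitem__(0, k[0] + 1))
    print(name, "iterations:", k[0])
```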
Approximate relations and charts for low-speed stability derivatives of swept wings
NASA Technical Reports Server (NTRS)
Toll, Thomas A; Queijo, M J
1948-01-01
Contains derivations, based on a simplified theory, of approximate relations for low-speed stability derivatives of swept wings. Method accounts for the effects and, in most cases, taper ratio. Charts, based on the derived relations, are presented for the stability derivatives of untapered swept wings. Calculated values of the derivatives are compared with experimental results.
Kernel K-Means Sampling for Nyström Approximation.
He, Li; Zhang, Hong
2018-05-01
A fundamental problem in Nyström-based kernel matrix approximation is the sampling method by which the training set is built. In this paper, we suggest to use kernel k-means sampling, which is shown in our works to minimize the upper bound of a matrix approximation error. We first propose a unified kernel matrix approximation framework, which is able to describe most existing Nyström approximations under many popular kernels, including the Gaussian kernel and polynomial kernel. We then show that the matrix approximation error upper bound, in terms of the Frobenius norm, is equal to the k-means error of data points in kernel space plus a constant. Thus, the k-means centers of data in kernel space, or the kernel k-means centers, are the optimal representative points with respect to the Frobenius norm error upper bound. Experimental results, with both the Gaussian kernel and polynomial kernel, on real-world data sets and image segmentation tasks show the superiority of the proposed method over the state-of-the-art methods.
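A small Python sketch of the Nyström pipeline: pick landmarks, form the cross-kernel C and landmark kernel W, and approximate K as C W^+ C^T. Exact kernel k-means operates in feature space; plain input-space k-means is used below as a simplification, and all parameter values are illustrative.

```python
import numpy as np
from scipy.cluster.vq import kmeans

def gaussian_kernel(X, Y, gamma=0.1):
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def nystrom(X, landmarks, gamma=0.1):
    """Nystrom approximation K ~ C W^+ C^T from landmark points."""
    C = gaussian_kernel(X, landmarks, gamma)          # n x m cross-kernel
    W = gaussian_kernel(landmarks, landmarks, gamma)  # m x m landmark kernel
    return C @ np.linalg.pinv(W) @ C.T

rng = np.random.default_rng(3)
X = rng.standard_normal((400, 5))
K = gaussian_kernel(X, X)
for m in (10, 40, 160):
    centers, _ = kmeans(X, m)          # k-means centers as landmarks
    err = np.linalg.norm(K - nystrom(X, centers), "fro") / np.linalg.norm(K, "fro")
    print(m, "landmarks -> relative Frobenius error:", round(err, 4))
```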
Forward multiple scattering corrections as function of detector field of view
NASA Astrophysics Data System (ADS)
Zardecki, A.; Deepak, A.
1983-06-01
The theoretical formulations are given for an approximate method based on the solution of the radiative transfer equation in the small angle approximation. The method is approximate in the sense that an approximation is made in addition to the small angle approximation. Numerical results were obtained for multiple scattering effects as functions of the detector field of view, as well as the size of the detector's aperture, for three values of the optical depth tau (1.0, 4.0, and 10.0). Three cases of aperture size were considered, namely, apertures equal to, smaller than, and larger than the laser beam diameter. The contrast between the on-axis intensity and the received power for these three cases is clearly evident.
Triangle based TVD schemes for hyperbolic conservation laws
NASA Technical Reports Server (NTRS)
Durlofsky, Louis J.; Osher, Stanley; Engquist, Bjorn
1990-01-01
A triangle based total variation diminishing (TVD) scheme for the numerical approximation of hyperbolic conservation laws in two space dimensions is constructed. The novelty of the scheme lies in the nature of the preprocessing of the cell averaged data, which is accomplished via a nearest neighbor linear interpolation followed by a slope limiting procedure. Two such limiting procedures are suggested. The resulting method is considerably more simple than other triangle based non-oscillatory approximations which, like this scheme, approximate the flux up to second order accuracy. Numerical results for linear advection and Burgers' equation are presented.
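A 1-D analogue (not the triangle-based 2-D scheme itself) shows the two preprocessing ingredients named above, linear reconstruction of cell averages followed by slope limiting; the minmod limiter is used for concreteness on linear advection with a periodic domain.

```python
import numpy as np

def minmod(a, b):
    """Minmod limiter: zero at extrema, otherwise the smaller slope."""
    return np.where(a * b > 0.0, np.where(np.abs(a) < np.abs(b), a, b), 0.0)

def advect_tvd(u, c, steps):
    """Second-order upwind scheme for u_t + a u_x = 0 with CFL number c:
    limited linear reconstruction gives face values, upwind flux updates."""
    for _ in range(steps):
        s = minmod(np.roll(u, -1) - u, u - np.roll(u, 1))  # limited slopes
        ur = u + 0.5 * (1.0 - c) * s                       # right-face values
        u = u - c * (ur - np.roll(ur, 1))                  # conservative update
    return u

x = np.linspace(0.0, 1.0, 200, endpoint=False)
u0 = (np.abs(x - 0.3) < 0.1).astype(float)   # square pulse
u1 = advect_tvd(u0.copy(), c=0.5, steps=200)
print("TVD bounds kept:", u1.min() >= -1e-12 and u1.max() <= 1.0 + 1e-12)
```

With the limiter removed the same update reduces to Lax-Wendroff and oscillates at the discontinuities; the limited version stays within the initial bounds.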
NASA Astrophysics Data System (ADS)
Shimelevich, M. I.; Obornev, E. A.; Obornev, I. E.; Rodionov, E. A.
2017-07-01
The iterative approximation neural network method for solving conditionally well-posed nonlinear inverse problems of geophysics is presented. The method is based on the neural network approximation of the inverse operator. The inverse problem is solved in the class of grid (block) models of the medium on a regularized parameterization grid. The construction principle of this grid relies on using the calculated values of the continuity modulus of the inverse operator and its modifications determining the degree of ambiguity of the solutions. The method provides approximate solutions of inverse problems with the maximal degree of detail given the specified degree of ambiguity, with the total number of sought medium parameters n × 10³. The a priori and a posteriori estimates of the degree of ambiguity of the approximated solutions are calculated. The work of the method is illustrated by the example of the three-dimensional (3D) inversion of synthesized 2D areal geoelectrical (audio magnetotelluric sounding, AMTS) data corresponding to the schematic model of a kimberlite pipe.
NASA Technical Reports Server (NTRS)
Simpson, Timothy W.
1998-01-01
The use of response surface models and kriging models are compared for approximating non-random, deterministic computer analyses. After discussing the traditional response surface approach for constructing polynomial models for approximation, kriging is presented as an alternative statistical-based approximation method for the design and analysis of computer experiments. Both approximation methods are applied to the multidisciplinary design and analysis of an aerospike nozzle which consists of a computational fluid dynamics model and a finite element analysis model. Error analysis of the response surface and kriging models is performed along with a graphical comparison of the approximations. Four optimization problems are formulated and solved using both approximation models. While neither approximation technique consistently outperforms the other in this example, the kriging models using only a constant for the underlying global model and a Gaussian correlation function perform as well as the second order polynomial response surface models.
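A minimal sketch of the kriging variant the comparison singles out, a constant global model with a Gaussian correlation function; the correlation parameter is fixed here rather than estimated by maximum likelihood, as a full implementation would do.

```python
import numpy as np

def kriging_predict(X, y, Xnew, theta=10.0, nugget=1e-10):
    """Kriging with constant trend and Gaussian correlation
    R_ij = exp(-theta * ||x_i - x_j||^2); returns the BLUP at Xnew."""
    R = np.exp(-theta * ((X[:, None] - X[None, :]) ** 2).sum(-1))
    R += nugget * np.eye(len(X))                  # numerical regularization
    one = np.ones(len(X))
    Ri = np.linalg.inv(R)
    beta = (one @ Ri @ y) / (one @ Ri @ one)      # GLS estimate of the trend
    r = np.exp(-theta * ((Xnew[:, None] - X[None, :]) ** 2).sum(-1))
    return beta + r @ Ri @ (y - beta * one)       # interpolates the data

rng = np.random.default_rng(4)
X = rng.random((30, 2))
y = np.sin(4 * X[:, 0]) + X[:, 1] ** 2            # deterministic "analysis"
Xt = rng.random((5, 2))
pred = kriging_predict(X, y, Xt)
true = np.sin(4 * Xt[:, 0]) + Xt[:, 1] ** 2
print(np.column_stack([pred, true]))
```

Unlike a quadratic response surface, this predictor interpolates the training data exactly, which suits deterministic computer experiments.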
Hasenauer, J; Wolf, V; Kazeroonian, A; Theis, F J
2014-09-01
The time-evolution of continuous-time discrete-state biochemical processes is governed by the Chemical Master Equation (CME), which describes the probability of the molecular counts of each chemical species. As the corresponding number of discrete states is, for most processes, large, a direct numerical simulation of the CME is in general infeasible. In this paper we introduce the method of conditional moments (MCM), a novel approximation method for the solution of the CME. The MCM employs a discrete stochastic description for low-copy number species and a moment-based description for medium/high-copy number species. The moments of the medium/high-copy number species are conditioned on the state of the low abundance species, which allows us to capture complex correlation structures arising, e.g., for multi-attractor and oscillatory systems. We prove that the MCM provides a generalization of previous approximations of the CME based on hybrid modeling and moment-based methods. Furthermore, it improves upon these existing methods, as we illustrate using a model for the dynamics of stochastic single-gene expression. This application example shows that due to the more general structure, the MCM allows for the approximation of multi-modal distributions.
Numerical integration techniques for curved-element discretizations of molecule-solvent interfaces.
Bardhan, Jaydeep P; Altman, Michael D; Willis, David J; Lippow, Shaun M; Tidor, Bruce; White, Jacob K
2007-07-07
Surface formulations of biophysical modeling problems offer attractive theoretical and computational properties. Numerical simulations based on these formulations usually begin with discretization of the surface under consideration; often, the surface is curved, possessing complicated structure and possibly singularities. Numerical simulations commonly are based on approximate, rather than exact, discretizations of these surfaces. To assess the strength of the dependence of simulation accuracy on the fidelity of surface representation, here methods were developed to model several important surface formulations using exact surface discretizations. Following and refining Zauhar's work [J. Comput.-Aided Mol. Des. 9, 149 (1995)], two classes of curved elements were defined that can exactly discretize the van der Waals, solvent-accessible, and solvent-excluded (molecular) surfaces. Numerical integration techniques are presented that can accurately evaluate nonsingular and singular integrals over these curved surfaces. After validating the exactness of the surface discretizations and demonstrating the correctness of the presented integration methods, a set of calculations are presented that compare the accuracy of approximate, planar-triangle-based discretizations and exact, curved-element-based simulations of surface-generalized-Born (sGB), surface-continuum van der Waals (scvdW), and boundary-element method (BEM) electrostatics problems. Results demonstrate that continuum electrostatic calculations with BEM using curved elements, piecewise-constant basis functions, and centroid collocation are nearly ten times more accurate than planar-triangle BEM for basis sets of comparable size. The sGB and scvdW calculations give exceptional accuracy even for the coarsest obtainable discretized surfaces. The extra accuracy is attributed to the exact representation of the solute-solvent interface; in contrast, commonly used planar-triangle discretizations can only offer improved approximations with increasing discretization and associated increases in computational resources. The results clearly demonstrate that the methods for approximate integration on an exact geometry are far more accurate than exact integration on an approximate geometry. A MATLAB implementation of the presented integration methods and sample data files containing curved-element discretizations of several small molecules are available online as supplemental material.
NASA Astrophysics Data System (ADS)
Dabiri, M.; Ghafouri, M.; Rohani Raftar, H. R.; Björk, T.
2018-03-01
Methods to estimate the strain-life curve were examined in three categories: simple approximations, artificial neural network-based approaches, and continuum damage mechanics models; their accuracy was assessed for strain-life evaluation of a direct-quenched high-strength steel. All the prediction methods claim to be able to perform low-cycle fatigue analysis using available or easily obtainable material properties, thus eliminating the need for costly and time-consuming fatigue tests. Simple approximations were able to estimate the strain-life curve with satisfactory accuracy using only monotonic properties. The tested neural network-based model, although yielding acceptable results for the material in question, was found to be overly sensitive to the data sets used for training and showed an inconsistency in estimation of the fatigue life and fatigue properties. The studied continuum damage-based model was able to produce a curve detecting early stages of crack initiation. This model requires more experimental data for calibration than approaches using simple approximations. As a result of the different theories underlying the analyzed methods, the different approaches have different strengths and weaknesses. However, it was found that the group of parametric equations categorized as simple approximations are the easiest for practical use, with their applicability having already been verified for a broad range of materials.
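For reference, one classic member of the simple-approximation category is Manson's universal-slopes relation, which builds the strain-life curve from monotonic tensile properties alone; a minimal sketch with placeholder material values (not the paper's steel data):

    import numpy as np

    E = 210e3    # Young's modulus, MPa (placeholder)
    Su = 900.0   # ultimate tensile strength, MPa (placeholder)
    RA = 0.6     # reduction of area (placeholder)
    eps_f = np.log(1.0 / (1.0 - RA))   # true fracture ductility

    N_f = np.logspace(2, 7, 50)        # cycles to failure
    # Total strain range = elastic + plastic parts with "universal" slopes.
    delta_eps = 3.5 * (Su / E) * N_f**-0.12 + eps_f**0.6 * N_f**-0.6
    print(delta_eps[:5] / 2.0)         # strain amplitudes on the life curve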
Influence of scattering processes on electron quantum states in nanowires
Galenchik, Vadim; Borzdov, Andrei; Borzdov, Vladimir; Komarov, Fadei
2007-01-01
In the framework of quantum perturbation theory, a self-consistent method for calculating electron scattering rates in nanowires with a one-dimensional electron gas in the quantum limit is worked out. The developed method allows both the collisional broadening and the quantum correlations between scattering events to be taken into account. It is an alternative, per se, to the Fock approximation for the self-energy approach based on Green's function formalism, yet it is free of the mathematical difficulties typical of the Fock approximation. Moreover, the developed method is computationally simpler than the Fock approximation. Using the approximation of stable one-particle quantum states, it is proved that the electron scattering processes determine the dependence of the electron energy on its wave vector.
An efficient method for quantum transport simulations in the time domain
NASA Astrophysics Data System (ADS)
Wang, Y.; Yam, C.-Y.; Frauenheim, Th.; Chen, G. H.; Niehaus, T. A.
2011-11-01
An approximate method based on adiabatic time-dependent density functional theory (TDDFT) is presented that allows for the description of electron dynamics in nanoscale junctions under arbitrary time-dependent external potentials. The density matrix of the device region is propagated according to the Liouville-von Neumann equation. The semi-infinite leads give rise to dissipative terms in the equation of motion, which are calculated from first principles in the wide-band limit. In contrast to earlier ab initio implementations of this formalism, the Hamiltonian is here approximated in the spirit of the density functional based tight-binding (DFTB) method. Results are presented for two prototypical molecular devices and compared to full TDDFT calculations. The temporal profile of the current traces is qualitatively well captured by the DFTB scheme. Steady-state currents show considerable variations, both in comparison of approximate and full TDDFT, but also among TDDFT calculations with different basis sets.
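A toy sketch of the propagation scheme (not the paper's first-principles dissipator): a small device density matrix evolved under the Liouville-von Neumann equation with an assumed anticommutator-type wide-band damping term, stepped with fourth-order Runge-Kutta:

    import numpy as np

    H = np.array([[0.0, 0.1], [0.1, 0.5]], dtype=complex)  # device Hamiltonian (assumed)
    Gamma = np.diag([0.05, 0.02]).astype(complex)          # lead broadening (assumed)
    rho_eq = np.diag([1.0, 0.0]).astype(complex)           # target density matrix (assumed)

    def drho(rho):
        # -i[H, rho] plus a toy anticommutator relaxation toward rho_eq.
        comm = H @ rho - rho @ H
        diss = Gamma @ (rho - rho_eq) + (rho - rho_eq) @ Gamma
        return -1j * comm - 0.5 * diss

    rho, dt = rho_eq.copy(), 0.01
    for _ in range(10000):             # fourth-order Runge-Kutta steps
        k1 = drho(rho)
        k2 = drho(rho + 0.5 * dt * k1)
        k3 = drho(rho + 0.5 * dt * k2)
        k4 = drho(rho + dt * k3)
        rho += (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    print(np.real(np.diag(rho)))       # level populations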
The Alignment and Blending of Payment Incentives within Physician Organizations
Robinson, James C; Shortell, Stephen M; Li, Rui; Casalino, Lawrence P; Rundall, Thomas
2004-01-01
Objective To analyze the blend of retrospective (fee-for-service, productivity-based salary) and prospective (capitation, nonproductivity-based salary) methods for compensating individual physicians within medical groups and independent practice associations (IPAs) and the influence of managed care on the compensation blend used by these physician organizations. Data Sources Of the 1,587 medical groups and IPAs with 20 or more physicians in the United States, 1,104 responded to a one-hour telephone survey, with 627 providing detailed information on physician payment methods. Study Design We calculated the distribution of compensation methods for primary care and specialty physicians, separately, in both medical groups and IPAs. Multivariate regression methods were used to analyze the influence of market and organizational factors on the payment method developed by physician organizations for individual physicians. Principal Findings Within physician organizations, approximately one-quarter of physicians are paid on a purely retrospective (fee-for-service) basis, approximately one-quarter are paid on a purely prospective (capitation, nonproductivity-based salary) basis, and approximately one-half on blends of retrospective and prospective methods. Medical groups and IPAs in heavily penetrated managed care markets are significantly less likely to pay their individual physicians based on fee-for-service than are organizations in less heavily penetrated markets. Conclusions Physician organizations rely on a wide range of prospective, retrospective, and blended payment methods and seek to align the incentives faced by individual physicians with the market incentives faced by the physician organization. PMID:15333124
Regularization of the double period method for experimental data processing
NASA Astrophysics Data System (ADS)
Belov, A. A.; Kalitkin, N. N.
2017-11-01
In physical and technical applications, an important task is to process experimental curves measured with large errors. Such problems are solved by applying regularization methods, whose success depends on the mathematician's intuition. We propose an approximation based on the double period method developed for smooth nonperiodic functions. Tikhonov's stabilizer with a squared second derivative is used for regularization. As a result, spurious oscillations are suppressed and the shape of an experimental curve is accurately represented. This approach offers a universal strategy for solving a broad class of problems. The method is illustrated by approximating cross sections of nuclear reactions important for controlled thermonuclear fusion. Tables recommended as reference data are obtained. These results are used to calculate the reaction rates, which are approximated in a form convenient for gasdynamic codes. These approximations are superior to previously known formulas in both temperature coverage and accuracy.
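A minimal sketch of the regularization step on synthetic data (the double period expansion itself is omitted): minimizing ||f - y||^2 + lambda ||D2 f||^2 with a squared-second-derivative stabilizer reduces to a single linear solve:

    import numpy as np

    rng = np.random.default_rng(0)
    x = np.linspace(0.0, 1.0, 200)
    y = np.exp(-5 * (x - 0.5)**2) + 0.05 * rng.standard_normal(x.size)  # noisy curve

    n = x.size
    D2 = np.zeros((n - 2, n))          # second-difference operator
    for i in range(n - 2):
        D2[i, i:i + 3] = [1.0, -2.0, 1.0]

    lam = 10.0                         # regularization weight (assumed)
    f = np.linalg.solve(np.eye(n) + lam * (D2.T @ D2), y)   # smoothed curve
    print(float(np.abs(f - y).max()))

Larger lam suppresses oscillations more strongly at the cost of fidelity to the data.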
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rishi, Varun; Perera, Ajith; Bartlett, Rodney J., E-mail: bartlett@qtp.ufl.edu
2016-03-28
Obtaining the correct potential energy curves for the dissociation of multiple bonds is a challenging problem for ab initio methods which are affected by the choice of a spin-restricted reference function. Coupled cluster (CC) methods such as CCSD (coupled cluster singles and doubles model) and CCSD(T) (CCSD + perturbative triples) correctly predict the geometry and properties at equilibrium but the process of bond dissociation, particularly when more than one bond is simultaneously broken, is much more complicated. New modifications of CC theory suggest that the deleterious role of the reference function can be diminished, provided a particular subset of terms is retained in the CC equations. The Distinguishable Cluster (DC) approach of Kats and Manby [J. Chem. Phys. 139, 021102 (2013)], seemingly overcomes the deficiencies for some bond-dissociation problems and might be of use in quasi-degenerate situations in general. DC along with other approximate coupled cluster methods such as ACCD (approximate coupled cluster doubles), ACP-D45, ACP-D14, 2CC, and pCCSD(α, β) (all defined in text) falls under a category of methods that are basically obtained by the deletion of some quadratic terms in the double excitation amplitude equation for CCD/CCSD (coupled cluster doubles model/coupled cluster singles and doubles model). Here these approximate methods, particularly those based on the DC approach, are studied in detail for the nitrogen molecule bond-breaking. The N₂ problem is further addressed with conventional single reference methods but based on spatial symmetry-broken restricted Hartree–Fock (HF) solutions to assess the use of these references for correlated calculations in the situation where CC methods using fully symmetry adapted SCF solutions fail. The distinguishable cluster method is generalized: 1) to different orbitals for different spins (unrestricted HF based DCD and DCSD), 2) by adding triples correction perturbatively (DCSD(T)) and iteratively (DCSDT-n), and 3) via an excited state approximation through the equation of motion (EOM) approach (EOM-DCD, EOM-DCSD). The EOM-CC method is used to identify lower-energy CC solutions to overcome singularities in the CC potential energy curves. It is also shown that UHF based CC and DC methods behave very similarly in bond-breaking of N₂, and that using spatially broken but spin preserving SCF references makes the CCSD solutions better than those for DCSD.
Dual-scale Galerkin methods for Darcy flow
NASA Astrophysics Data System (ADS)
Wang, Guoyin; Scovazzi, Guglielmo; Nouveau, Léo; Kees, Christopher E.; Rossi, Simone; Colomés, Oriol; Main, Alex
2018-02-01
The discontinuous Galerkin (DG) method has found widespread application in elliptic problems with rough coefficients, of which the Darcy flow equations are a prototypical example. One of the long-standing issues of DG approximations is the overall computational cost, and many different strategies have been proposed, such as the variational multiscale DG method, the hybridizable DG method, the multiscale DG method, the embedded DG method, and the Enriched Galerkin method. In this work, we propose a mixed dual-scale Galerkin method, in which the degrees-of-freedom of a less computationally expensive coarse-scale approximation are linked to the degrees-of-freedom of a base DG approximation. We show that the proposed approach consistently matches or improves upon the accuracy of the base DG method, with a considerable reduction in computational cost. For the specific definition of the coarse-scale space, we consider Raviart-Thomas finite elements for the mass flux and piecewise-linear continuous finite elements for the pressure. We provide a complete analysis of stability and convergence of the proposed method, in addition to a study on its conservation and consistency properties. We also present a battery of numerical tests to verify the results of the analysis, and evaluate a number of possible variations, such as using piecewise-linear continuous finite elements for the coarse-scale mass fluxes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Yubo; Zhang, Jiawei; Wang, Youwei
Diamond-like Cu-based multinary semiconductors are a rich family of materials that hold promise in a wide range of applications. Unfortunately, accurate theoretical understanding of the electronic properties of these materials is hindered by the involvement of Cu d electrons. Density functional theory (DFT) based calculations using the local density approximation or generalized gradient approximation often give qualitatively wrong electronic properties of these materials, especially for narrow-gap systems. The modified Becke-Johnson (mBJ) method has been shown to be a promising alternative to more elaborate theory such as the GW approximation for fast materials screening and predictions. However, straightforward applications of the mBJ method to these materials still encounter significant difficulties because of the insufficient treatment of the localized d electrons. We show that combining the promise of the mBJ potential with the spirit of the well-established DFT + U method leads to a much improved description of the electronic structures, including the most challenging narrow-gap systems. A survey of the band gaps of about 20 Cu-based semiconductors calculated using the mBJ + U method shows that the results agree with reliable values to within ±0.2 eV.
Unfolding the Second Riemann sheet with Pade Approximants: hunting resonance poles
DOE Office of Scientific and Technical Information (OSTI.GOV)
Masjuan, Pere; Departamento de Fisica Teorica y del Cosmos, Universidad de Granada, Campus de Fuentenueva, E-18071 Granada
2011-05-23
Based on Pade Theory, a new procedure for extracting the pole mass and width of resonances is proposed. The method is systematic and provides a model-independent treatment for the prediction and the errors of the approximation.
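A minimal sketch of the procedure under assumed inputs (a geometric test series standing in for amplitude data): build a Pade approximant from Taylor coefficients and read pole candidates off the roots of the denominator:

    import numpy as np
    from scipy.interpolate import pade

    taylor = [0.8**k for k in range(3)]   # series of 1/(1 - 0.8 z): pole at z = 1.25
    p, q = pade(taylor, 1)                # [1/1] Pade approximant (numerator, denominator)
    print(np.roots(q.coeffs))             # denominator roots = pole candidates (~1.25)

In practice, stability of a pole across increasing approximant orders is what distinguishes a physical resonance from a spurious root.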
3DHZETRN: Inhomogeneous Geometry Issues
NASA Technical Reports Server (NTRS)
Wilson, John W.; Slaba, Tony C.; Badavi, Francis F.
2017-01-01
Historical methods for assessing radiation exposure inside complicated geometries for space applications were limited by computational constraints and lack of knowledge associated with nuclear processes occurring over a broad range of particles and energies. Various methods were developed and utilized to simplify geometric representations and enable coupling with simplified but efficient particle transport codes. Recent transport code development efforts, leading to 3DHZETRN, now enable such approximate methods to be carefully assessed to determine if past exposure analyses and validation efforts based on those approximate methods need to be revisited. In this work, historical methods of representing inhomogeneous spacecraft geometry for radiation protection analysis are first reviewed. Two inhomogeneous geometry cases, previously studied with 3DHZETRN and Monte Carlo codes, are considered with various levels of geometric approximation. Fluence, dose, and dose equivalent values are computed in all cases and compared. It is found that although these historical geometry approximations can induce large errors in neutron fluences up to 100 MeV, errors on dose and dose equivalent are modest (<10%) for the cases studied here.
Goldstein, Darlene R
2006-10-01
Studies of gene expression using high-density short oligonucleotide arrays have become a standard in a variety of biological contexts. Of the expression measures that have been proposed to quantify expression in these arrays, multi-chip-based measures have been shown to perform well. As gene expression studies increase in size, however, utilizing multi-chip expression measures is more challenging in terms of computing memory requirements and time. A strategic alternative to exact multi-chip quantification on a full large chip set is to approximate expression values based on subsets of chips. This paper introduces an extrapolation method, Extrapolation Averaging (EA), and a resampling method, Partition Resampling (PR), to approximate expression in large studies. An examination of properties indicates that subset-based methods can perform well compared with exact expression quantification. The focus is on short oligonucleotide chips, but the same ideas apply equally well to any array type for which expression is quantified using an entire set of arrays, rather than for only a single array at a time. Software implementing Partition Resampling and Extrapolation Averaging is under development as an R package for the BioConductor project.
Heats of Segregation of BCC Binaries from ab Initio and Quantum Approximate Calculations
NASA Technical Reports Server (NTRS)
Good, Brian S.
2004-01-01
We compare dilute-limit heats of segregation for selected BCC transition metal binaries computed using ab initio and quantum approximate energy methods. Ab initio calculations are carried out using the CASTEP plane-wave pseudopotential computer code, while quantum approximate results are computed using the Bozzolo-Ferrante-Smith (BFS) method with the most recent LMTO-based parameters. Quantum approximate segregation energies are computed with and without atomistic relaxation, while the ab initio calculations are performed without relaxation. Results are discussed within the context of a segregation model driven by strain and bond-breaking effects. We compare our results with full-potential quantum calculations and with available experimental results.
Spline Approximation of Thin Shell Dynamics
NASA Technical Reports Server (NTRS)
delRosario, R. C. H.; Smith, R. C.
1996-01-01
A spline-based method for approximating thin shell dynamics is presented here. While the method is developed in the context of the Donnell-Mushtari thin shell equations, it can be easily extended to the Byrne-Flugge-Lur'ye equations or other models for shells of revolution as warranted by applications. The primary requirements for the method include accuracy, flexibility and efficiency in smart material applications. To accomplish this, the method was designed to be flexible with regard to boundary conditions, material nonhomogeneities due to sensors and actuators, and inputs from smart material actuators such as piezoceramic patches. The accuracy of the method was also of primary concern, both to guarantee full resolution of structural dynamics and to facilitate the development of PDE-based controllers which ultimately require real-time implementation. Several numerical examples provide initial evidence demonstrating the efficacy of the method.
Stable multi-domain spectral penalty methods for fractional partial differential equations
NASA Astrophysics Data System (ADS)
Xu, Qinwu; Hesthaven, Jan S.
2014-01-01
We propose stable multi-domain spectral penalty methods suitable for solving fractional partial differential equations with fractional derivatives of any order. First, a high order discretization is proposed to approximate fractional derivatives of any order on any given grids based on orthogonal polynomials. The approximation order is analyzed and verified through numerical examples. Based on the discrete fractional derivative, we introduce stable multi-domain spectral penalty methods for solving fractional advection and diffusion equations. The equations are discretized in each sub-domain separately and the global schemes are obtained by weakly imposed boundary and interface conditions through a penalty term. Stability of the schemes is analyzed and numerical examples based on both uniform and nonuniform grids are considered to highlight the flexibility and high accuracy of the proposed schemes.
Data-Driven Model Reduction and Transfer Operator Approximation
NASA Astrophysics Data System (ADS)
Klus, Stefan; Nüske, Feliks; Koltai, Péter; Wu, Hao; Kevrekidis, Ioannis; Schütte, Christof; Noé, Frank
2018-06-01
In this review paper, we will present different data-driven dimension reduction techniques for dynamical systems that are based on transfer operator theory as well as methods to approximate transfer operators and their eigenvalues, eigenfunctions, and eigenmodes. The goal is to point out similarities and differences between methods developed independently by the dynamical systems, fluid dynamics, and molecular dynamics communities such as time-lagged independent component analysis, dynamic mode decomposition, and their respective generalizations. As a result, extensions and best practices developed for one particular method can be carried over to other related methods.
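As one concrete example of the methods surveyed, a minimal sketch of dynamic mode decomposition on synthetic linear-trajectory data (the hidden dynamics matrix is an assumption for illustration):

    import numpy as np

    A_true = np.array([[0.9, -0.2], [0.2, 0.9]])   # hidden dynamics (assumed)
    x, snaps = np.array([1.0, 0.0]), []
    for _ in range(50):
        snaps.append(x)
        x = A_true @ x

    X = np.array(snaps[:-1]).T     # snapshots x_0 ... x_{m-1} as columns
    Y = np.array(snaps[1:]).T      # shifted snapshots x_1 ... x_m

    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    A_tilde = U.T @ Y @ Vh.T @ np.diag(1.0 / s)    # operator in the POD basis
    print(np.linalg.eigvals(A_tilde))   # ~0.9 +/- 0.2i, eigenvalues of A_true

The eigenvalues of A_tilde approximate transfer-operator (Koopman) eigenvalues; time-lagged independent component analysis arises from an analogous construction with correlation matrices.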
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vidal-Codina, F., E-mail: fvidal@mit.edu; Nguyen, N.C., E-mail: cuongng@mit.edu; Giles, M.B., E-mail: mike.giles@maths.ox.ac.uk
We present a model and variance reduction method for the fast and reliable computation of statistical outputs of stochastic elliptic partial differential equations. Our method consists of three main ingredients: (1) the hybridizable discontinuous Galerkin (HDG) discretization of elliptic partial differential equations (PDEs), which allows us to obtain high-order accurate solutions of the governing PDE; (2) the reduced basis method for a new HDG discretization of the underlying PDE to enable real-time solution of the parameterized PDE in the presence of stochastic parameters; and (3) a multilevel variance reduction method that exploits the statistical correlation among the different reduced basis approximations and the high-fidelity HDG discretization to accelerate the convergence of the Monte Carlo simulations. The multilevel variance reduction method provides efficient computation of the statistical outputs by shifting most of the computational burden from the high-fidelity HDG approximation to the reduced basis approximations. Furthermore, we develop a posteriori error estimates for our approximations of the statistical outputs. Based on these error estimates, we propose an algorithm for optimally choosing both the dimensions of the reduced basis approximations and the sizes of Monte Carlo samples to achieve a given error tolerance. We provide numerical examples to demonstrate the performance of the proposed method.
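A generic sketch of the variance-reduction idea, with toy functions standing in for the reduced basis (coarse) and HDG (fine) outputs: estimate E[P_fine] as E[P_coarse] + E[P_fine - P_coarse], spending many cheap samples on the first term and few expensive samples on the correction:

    import numpy as np

    rng = np.random.default_rng(2)

    def fine(z):     # expensive high-fidelity output (toy stand-in for HDG)
        return np.sin(z) + 0.001 * z**2

    def coarse(z):   # cheap correlated output (toy stand-in for reduced basis)
        return np.sin(z)

    z_many = rng.normal(size=100000)   # many cheap evaluations
    z_few = rng.normal(size=500)       # few expensive evaluations

    estimate = coarse(z_many).mean() + (fine(z_few) - coarse(z_few)).mean()
    print(estimate)   # approximates E[fine(Z)] at a fraction of the cost

The correction term has small variance because the two models are strongly correlated, which is what permits the small high-fidelity sample size.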
Jaccard distance based weighted sparse representation for coarse-to-fine plant species recognition.
Zhang, Shanwen; Wu, Xiaowei; You, Zhuhong
2017-01-01
Leaf-based plant species recognition plays an important role in ecological protection; however, its application to large, modern leaf databases has long been hindered by computational cost. Recognizing such limitations, we propose a Jaccard distance based sparse representation (JDSR) method which adopts a two-stage, coarse-to-fine strategy for plant species recognition. In the first stage, we use the Jaccard distance between the test sample and each training sample to coarsely determine the candidate classes of the test sample. The second stage applies a Jaccard distance based weighted sparse representation classification (WSRC), which approximately represents the test sample in the training space and classifies it by the approximation residuals. Since the training model of our JDSR method involves far fewer but more informative representatives, the method is expected to overcome the high computational and memory costs of traditional sparse representation based classification. Comparative experimental results on a public leaf image database demonstrate that the proposed method outperforms other existing feature extraction and SRC based plant recognition methods in terms of both accuracy and computational speed.
Momentum-space cluster dual-fermion method
NASA Astrophysics Data System (ADS)
Iskakov, Sergei; Terletska, Hanna; Gull, Emanuel
2018-03-01
Recent years have seen the development of two types of nonlocal extensions to the single-site dynamical mean field theory. On one hand, cluster approximations, such as the dynamical cluster approximation, recover short-range momentum-dependent correlations nonperturbatively. On the other hand, diagrammatic extensions, such as the dual-fermion theory, recover long-ranged corrections perturbatively. The correct treatment of both strong short-ranged and weak long-ranged correlations within the same framework is therefore expected to lead to a quick convergence of results, and offers the potential of obtaining smooth self-energies in nonperturbative regimes of phase space. In this paper, we present an exact cluster dual-fermion method based on an expansion around the dynamical cluster approximation. Unlike previous formulations, our method does not employ a coarse-graining approximation to the interaction, which we show to be the leading source of error at high temperature, and converges to the exact result independently of the size of the underlying cluster. We illustrate the power of the method with results for the second-order cluster dual-fermion approximation to the single-particle self-energies and double occupancies.
A method of power analysis based on piecewise discrete Fourier transform
NASA Astrophysics Data System (ADS)
Xin, Miaomiao; Zhang, Yanchi; Xie, Da
2018-04-01
The paper analyzes existing feature extraction methods, examining the characteristics of the discrete Fourier transform and of piecewise aggregation approximation. Combining the advantages of the two, a new piecewise discrete Fourier transform is proposed and applied to the lighting load of a large power customer. Time-series feature maps for four different cases are compared across the original data, the discrete Fourier transform, piecewise aggregation approximation, and the proposed piecewise discrete Fourier transform. The new method reflects both the overall trend of the electricity consumption and its internal variations.
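A minimal sketch of the proposed transform as described (segment count and number of kept coefficients are illustrative choices): split the series into equal segments, as in piecewise aggregation approximation, and keep a few low-frequency DFT coefficients per segment:

    import numpy as np

    def piecewise_dft(series, n_segments=8, n_coeffs=3):
        segments = np.array_split(np.asarray(series, dtype=float), n_segments)
        features = []
        for seg in segments:
            spectrum = np.fft.rfft(seg)           # per-segment DFT
            features.extend(spectrum[:n_coeffs])  # keep low-frequency content
        return np.array(features)  # overall trend plus local variation

    rng = np.random.default_rng(3)
    load = np.sin(np.linspace(0, 20, 960)) + 0.1 * rng.standard_normal(960)
    print(piecewise_dft(load).shape)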
Reduced size first-order subsonic and supersonic aeroelastic modeling
NASA Technical Reports Server (NTRS)
Karpel, Mordechay
1990-01-01
Various aeroelastic, aeroservoelastic, dynamic-response, and sensitivity analyses are based on a time-domain first-order (state-space) formulation of the equations of motion. The formulation of this paper is based on the minimum-state (MS) aerodynamic approximation method, which yields a low number of aerodynamic augmenting states. Modifications of the MS and the physical weighting procedures make the modeling method even more attractive. The flexibility of constraint selection is increased without increasing the approximation problem size; the accuracy of dynamic residualization of high-frequency modes is improved; and the resulting model is less sensitive to parametric changes in subsequent analyses. Applications to subsonic and supersonic cases demonstrate the generality, flexibility, accuracy, and efficiency of the method.
NASA Astrophysics Data System (ADS)
Dehghan, Mehdi; Mohammadi, Vahid
2017-08-01
In this research, we investigate the numerical solution of nonlinear Schrödinger equations in two and three dimensions. The numerical meshless method used here is the RBF-FD technique. The main advantage of this method is that the required derivatives are approximated, in the spirit of finite differences, on each local support domain Ωi. On each Ωi, we need to solve only a small linear system of algebraic equations with a conditionally positive definite matrix of order 1 (the interpolation matrix). This scheme is efficient and its computational cost is the same as that of the moving least squares (MLS) approximation. A challenging issue is choosing a suitable shape parameter for the interpolation matrix. To overcome this, an algorithm established by Sarra (2012) is applied, which computes the condition number of the local interpolation matrix using the singular value decomposition (SVD) to obtain the smallest and largest singular values of that matrix. Moreover, an explicit method based on a fourth-order Runge-Kutta formula is applied to approximate the time variable; this lowers the cost per time step, since no nonlinear system has to be solved. On the other hand, to compare the RBF-FD method with another meshless technique, the moving kriging least squares (MKLS) approximation is also applied to the studied model. Our results demonstrate the ability of the present approach for solving the applicable model investigated in the current research work.
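A minimal sketch of the RBF-FD building block on one local support domain (1-D Laplacian with a Gaussian kernel; the stencil and shape parameter are illustrative assumptions): the differentiation weights solve a small linear system with the local interpolation matrix:

    import numpy as np

    eps = 2.0                                   # shape parameter (assumed)
    x = np.array([-0.2, -0.1, 0.0, 0.1, 0.2])   # local stencil, centered at 0
    xc = 0.0

    phi = lambda r: np.exp(-(eps * r)**2)       # Gaussian kernel
    # d^2/dx^2 of exp(-eps^2 (x - xj)^2), evaluated at the stencil center:
    d2phi = lambda d: (4 * eps**4 * d**2 - 2 * eps**2) * np.exp(-(eps * d)**2)

    A = phi(np.abs(x[:, None] - x[None, :]))    # local interpolation matrix
    w = np.linalg.solve(A, d2phi(xc - x))       # RBF-FD weights for the Laplacian

    u = np.cos(x)                               # test function, u''(0) = -1
    print(w @ u)                                # should be close to -1

The condition number of A is what the SVD-based shape-parameter algorithm monitors: flatter kernels (smaller eps) are more accurate but make A increasingly ill-conditioned.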
S-curve networks and an approximate method for estimating degree distributions of complex networks
NASA Astrophysics Data System (ADS)
Guo, Jin-Li
2010-12-01
In the study of complex networks almost all theoretical models have the property of infinite growth, but the size of actual networks is finite. Based on statistics of China Internet IPv4 (Internet Protocol version 4) addresses, this paper proposes a forecasting model using an S curve (logistic curve) and forecasts the growing trend of IPv4 addresses in China, providing reference values for optimizing the distribution of IPv4 address resources and for the development of IPv6. Based on the laws of IPv4 growth, namely bulk growth and a finite growth limit, it proposes a finite network model with bulk growth, said to be an S-curve network. Analysis demonstrates that the analytic method based on uniform distributions (i.e., the Barabási-Albert method) is not suitable for this network. We develop an approximate method to predict the growth dynamics of the individual nodes, and use it to calculate analytically the degree distribution and the scaling exponents. The analytical result agrees well with simulation, obeying an approximately power-law form. This method can overcome a shortcoming of the Barabási-Albert method commonly used in current network research.
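A minimal sketch of the S-curve fitting step, with synthetic data standing in for the China IPv4 address statistics:

    import numpy as np
    from scipy.optimize import curve_fit

    def logistic(t, K, r, t0):
        return K / (1.0 + np.exp(-r * (t - t0)))   # K = finite growth limit

    t = np.arange(20, dtype=float)                 # time (e.g., years)
    rng = np.random.default_rng(4)
    counts = logistic(t, 330.0, 0.5, 10.0) + rng.normal(0.0, 3.0, t.size)

    params, _ = curve_fit(logistic, t, counts, p0=(300.0, 0.3, 8.0))
    print("estimated saturation size K:", params[0])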
Comparing methods for modelling spreading cell fronts.
Markham, Deborah C; Simpson, Matthew J; Maini, Philip K; Gaffney, Eamonn A; Baker, Ruth E
2014-07-21
Spreading cell fronts play an essential role in many physiological processes. Classically, models of this process are based on the Fisher-Kolmogorov equation; however, such continuum representations are not always suitable as they do not explicitly represent behaviour at the level of individual cells. Additionally, many models examine only the large time asymptotic behaviour, where a travelling wave front with a constant speed has been established. Many experiments, such as a scratch assay, never display this asymptotic behaviour, and in these cases the transient behaviour must be taken into account. We examine the transient and the asymptotic behaviour of moving cell fronts using techniques that go beyond the continuum approximation via a volume-excluding birth-migration process on a regular one-dimensional lattice. We approximate the averaged discrete results using three methods: (i) mean-field, (ii) pair-wise, and (iii) one-hole approximations. We discuss the performance of these methods, in comparison to the averaged discrete results, for a range of parameter space, examining both the transient and asymptotic behaviours. The one-hole approximation, based on techniques from statistical physics, is not capable of predicting transient behaviour but provides excellent agreement with the asymptotic behaviour of the averaged discrete results, provided that cells are proliferating fast enough relative to their rate of migration. The mean-field and pair-wise approximations give indistinguishable asymptotic results, which agree with the averaged discrete results when cells are migrating much more rapidly than they are proliferating. The pair-wise approximation performs better in the transient region than does the mean-field, despite having the same asymptotic behaviour. Our results show that each approximation only works in specific situations, thus we must be careful to use a suitable approximation for a given system, otherwise inaccurate predictions could be made.
Spectral methods in machine learning and new strategies for very large datasets
Belabbas, Mohamed-Ali; Wolfe, Patrick J.
2009-01-01
Spectral methods are of fundamental importance in statistics and machine learning, because they underlie algorithms from classical principal components analysis to more recent approaches that exploit manifold structure. In most cases, the core technical problem can be reduced to computing a low-rank approximation to a positive-definite kernel. For the growing number of applications dealing with very large or high-dimensional datasets, however, the optimal approximation afforded by an exact spectral decomposition is too costly, because its complexity scales as the cube of either the number of training examples or their dimensionality. Motivated by such applications, we present here two new algorithms for the approximation of positive-semidefinite kernels, together with error bounds that improve on results in the literature. We approach this problem by seeking to determine, in an efficient manner, the most informative subset of our data relative to the kernel approximation task at hand. This leads to two new strategies based on the Nyström method that are directly applicable to massive datasets. The first of these—based on sampling—leads to a randomized algorithm whereupon the kernel induces a probability distribution on its set of partitions, whereas the latter approach—based on sorting—provides for the selection of a partition in a deterministic way. We detail their numerical implementation and provide simulation results for a variety of representative problems in statistical data analysis, each of which demonstrates the improved performance of our approach relative to existing methods. PMID:19129490
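Both strategies share the Nyström construction; a minimal sketch with random landmark sampling (the sorting-based deterministic selection would replace the sampling line):

    import numpy as np

    rng = np.random.default_rng(5)
    X = rng.standard_normal((500, 10))   # data points
    K = np.exp(-0.5 * ((X[:, None, :] - X[None, :, :])**2).sum(-1))  # RBF kernel

    idx = rng.choice(500, size=50, replace=False)   # random landmark selection
    C = K[:, idx]                                   # sampled columns
    W = K[np.ix_(idx, idx)]                         # landmark block
    K_hat = C @ np.linalg.pinv(W) @ C.T             # Nystrom low-rank approximation

    print(np.linalg.norm(K - K_hat) / np.linalg.norm(K))   # relative error

The cost drops from cubic in the dataset size to cubic only in the number of landmarks, which is what makes the approach viable for massive datasets.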
NASA Technical Reports Server (NTRS)
Lang, Christapher G.; Bey, Kim S. (Technical Monitor)
2002-01-01
This research investigates residual-based a posteriori error estimates for finite element approximations of heat conduction in single-layer and multi-layered materials. The finite element approximation, based upon hierarchical modelling combined with p-version finite elements, is described with specific application to a two-dimensional, steady state, heat-conduction problem. Element error indicators are determined by solving an element equation for the error with the element residual as a source, and a global error estimate in the energy norm is computed by collecting the element contributions. Numerical results of the performance of the error estimate are presented by comparisons to the actual error. Two methods are discussed and compared for approximating the element boundary flux. The equilibrated flux method provides more accurate results for estimating the error than the average flux method. The error estimation is applied to multi-layered materials with a modification to the equilibrated flux method to approximate the discontinuous flux along a boundary at the material interfaces. A directional error indicator is developed which distinguishes between the hierarchical modeling error and the finite element error. Numerical results are presented for single-layered materials which show that the directional indicators accurately determine which contribution to the total error dominates.
Zhang, Huaguang; Cui, Lili; Zhang, Xin; Luo, Yanhong
2011-12-01
In this paper, a novel data-driven robust approximate optimal tracking control scheme is proposed for unknown general nonlinear systems by using the adaptive dynamic programming (ADP) method. In the design of the controller, only available input-output data is required instead of known system dynamics. A data-driven model is established by a recurrent neural network (NN) to reconstruct the unknown system dynamics using available input-output data. By adding a novel adjustable term related to the modeling error, the resultant modeling error is first guaranteed to converge to zero. Then, based on the obtained data-driven model, the ADP method is utilized to design the approximate optimal tracking controller, which consists of the steady-state controller and the optimal feedback controller. Further, a robustifying term is developed to compensate for the NN approximation errors introduced by implementing the ADP method. Based on Lyapunov approach, stability analysis of the closed-loop system is performed to show that the proposed controller guarantees the system state asymptotically tracking the desired trajectory. Additionally, the obtained control input is proven to be close to the optimal control input within a small bound. Finally, two numerical examples are used to demonstrate the effectiveness of the proposed control scheme.
NASA Astrophysics Data System (ADS)
Kahnert, Michael
2016-07-01
Numerical solution methods for electromagnetic scattering by non-spherical particles comprise a variety of different techniques, which can be traced back to different assumptions and solution strategies applied to the macroscopic Maxwell equations. One can distinguish between time- and frequency-domain methods; further, one can divide numerical techniques into finite-difference methods (which are based on approximating the differential operators), separation-of-variables methods (which are based on expanding the solution in a complete set of functions, thus approximating the fields), and volume integral-equation methods (which are usually solved by discretisation of the target volume and invoking the long-wave approximation in each volume cell). While existing reviews of the topic often tend to have a target audience of program developers and expert users, this tutorial review is intended to accommodate the needs of practitioners as well as novices to the field. The required conciseness is achieved by limiting the presentation to a selection of illustrative methods, and by omitting many technical details that are not essential at a first exposure to the subject. On the other hand, the theoretical basis of numerical methods is explained with little compromises in mathematical rigour; the rationale is that a good grasp of numerical light scattering methods is best achieved by understanding their foundation in Maxwell's theory.
Lu, Jia-Yang; Cheung, Michael Lok-Man; Huang, Bao-Tian; Wu, Li-Li; Xie, Wen-Jia; Chen, Zhi-Jian; Li, De-Rui; Xie, Liang-Xi
2015-01-01
To assess the performance of a simple optimisation method for improving target coverage and organ-at-risk (OAR) sparing in intensity-modulated radiotherapy (IMRT) for cervical oesophageal cancer. For 20 selected patients, clinically acceptable original IMRT plans (Original plans) were created, and two optimisation methods were adopted to improve the plans: 1) a base dose function (BDF)-based method, in which the treatment plans were re-optimised based on the original plans, and 2) a dose-controlling structure (DCS)-based method, in which the original plans were re-optimised by assigning additional constraints for hot and cold spots. The Original, BDF-based and DCS-based plans were compared with regard to target dose homogeneity, conformity, OAR sparing, planning time and monitor units (MUs). Dosimetric verifications were performed and delivery times were recorded for the BDF-based and DCS-based plans. The BDF-based plans provided significantly superior dose homogeneity and conformity compared with both the DCS-based and Original plans. The BDF-based method further reduced the doses delivered to the OARs by approximately 1-3%. The re-optimisation time was reduced by approximately 28%, but the MUs and delivery time were slightly increased. All verification tests were passed and no significant differences were found. The BDF-based method for the optimisation of IMRT for cervical oesophageal cancer can achieve significantly better dose distributions with better planning efficiency at the expense of slightly more MUs.
Sové, Richard J; Drakos, Nicole E; Fraser, Graham M; Ellis, Christopher G
2018-05-25
Red blood cell oxygen saturation is an important indicator of oxygen supply to tissues in the body. Oxygen saturation can be measured by taking advantage of spectroscopic properties of hemoglobin. When this technique is applied to transmission microscopy, the calculation of saturation requires determination of incident light intensity at each pixel occupied by the red blood cell; this value is often approximated from a sequence of images as the maximum intensity over time. This method often fails when the red blood cells are moving too slowly, or if hematocrit is too large since there is not a large enough gap between the cells to accurately calculate the incident intensity value. A new method of approximating incident light intensity is proposed using digital inpainting. This novel approach estimates incident light intensity with an average percent error of approximately 3%, which exceeds the accuracy of the maximum intensity based method in most cases. The error in incident light intensity corresponds to a maximum error of approximately 2% saturation. Therefore, though this new method is computationally more demanding than the traditional technique, it can be used in cases where the maximum intensity-based method fails (e.g. stationary cells), or when higher accuracy is required.
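A minimal sketch of the inpainting step using OpenCV's generic inpainting routine on a synthetic frame (the paper's actual imaging pipeline and inpainting algorithm may differ):

    import numpy as np
    import cv2

    frame = np.full((64, 64), 200, dtype=np.uint8)   # bright background intensity
    frame[20:40, 20:40] = 90                         # dark "cell" region
    mask = np.zeros_like(frame)
    mask[20:40, 20:40] = 255                         # pixels to reconstruct

    incident = cv2.inpaint(frame, mask, 5, cv2.INPAINT_TELEA)
    # Saturation fits would then use the transmitted/incident ratio per pixel.
    print(incident[30, 30])                          # ~200 after inpainting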
Robust approximation-free prescribed performance control for nonlinear systems and its application
NASA Astrophysics Data System (ADS)
Sun, Ruisheng; Na, Jing; Zhu, Bin
2018-02-01
This paper presents a robust prescribed performance control approach and its application to nonlinear tail-controlled missile systems with unknown dynamics and uncertainties. The idea of prescribed performance function (PPF) is incorporated into the control design, such that both the steady-state and transient control performance can be strictly guaranteed. Unlike conventional PPF-based control methods, we further tailor a recently proposed systematic control design procedure (i.e. approximation-free control) using the transformed tracking error dynamics, which provides a proportional-like control action. Hence, the function approximators (e.g. neural networks, fuzzy systems) that are widely used to address the unknown nonlinearities in the nonlinear control designs are not needed. The proposed control design leads to a robust yet simplified function approximation-free control for nonlinear systems. The closed-loop system stability and the control error convergence are all rigorously proved. Finally, comparative simulations are conducted based on nonlinear missile systems to validate the improved response and the robustness of the proposed control method.
NASA Astrophysics Data System (ADS)
Guo, Yang; Riplinger, Christoph; Becker, Ute; Liakos, Dimitrios G.; Minenkov, Yury; Cavallo, Luigi; Neese, Frank
2018-01-01
In this communication, an improved perturbative triples correction (T) algorithm for domain based local pair-natural orbital singles and doubles coupled cluster (DLPNO-CCSD) theory is reported. In our previous implementation, the semi-canonical approximation was used and linear scaling was achieved for both the DLPNO-CCSD and (T) parts of the calculation. In this work, we refer to this previous method as DLPNO-CCSD(T0) to emphasize the semi-canonical approximation. It is well-established that the DLPNO-CCSD method can predict very accurate absolute and relative energies with respect to the parent canonical CCSD method. However, the (T0) approximation may introduce significant errors in absolute energies as the triples correction grows up in magnitude. In the majority of cases, the relative energies from (T0) are as accurate as the canonical (T) results of themselves. Unfortunately, in rare cases and in particular for small gap systems, the (T0) approximation breaks down and relative energies show large deviations from the parent canonical CCSD(T) results. To address this problem, an iterative (T) algorithm based on the previous DLPNO-CCSD(T0) algorithm has been implemented [abbreviated here as DLPNO-CCSD(T)]. Using triples natural orbitals to represent the virtual spaces for triples amplitudes, storage bottlenecks are avoided. Various carefully designed approximations ease the computational burden such that overall, the increase in the DLPNO-(T) calculation time over DLPNO-(T0) only amounts to a factor of about two (depending on the basis set). Benchmark calculations for the GMTKN30 database show that compared to DLPNO-CCSD(T0), the errors in absolute energies are greatly reduced and relative energies are moderately improved. The particularly problematic case of cumulene chains of increasing lengths is also successfully addressed by DLPNO-CCSD(T).
Barrenechea, Gabriel R; Burman, Erik; Karakatsani, Fotini
2017-01-01
For the case of approximation of convection-diffusion equations using piecewise affine continuous finite elements, a new edge-based nonlinear diffusion operator is proposed that makes the scheme satisfy a discrete maximum principle. The diffusion operator is shown to be Lipschitz continuous and linearity preserving. Using these properties we provide a full stability and error analysis which, in the diffusion dominated regime, shows existence, uniqueness and optimal convergence. The algebraic flux correction method is then recalled and we show that the present method can be interpreted as an algebraic flux correction method for a particular definition of the flux limiters. The performance of the method is illustrated on some numerical test cases in two space dimensions.
Low-Rank Correction Methods for Algebraic Domain Decomposition Preconditioners
Li, Ruipeng; Saad, Yousef
2017-08-01
This study presents a parallel preconditioning method for distributed sparse linear systems, based on an approximate inverse of the original matrix, that adopts a general framework of distributed sparse matrices and exploits domain decomposition (DD) and low-rank corrections. The DD approach decouples the matrix and, once inverted, a low-rank approximation is applied by exploiting the Sherman-Morrison-Woodbury formula, which yields two variants of the preconditioning methods. The low-rank expansion is computed by the Lanczos procedure with reorthogonalizations. Numerical experiments indicate that, when combined with Krylov subspace accelerators, this preconditioner can be efficient and robust for solving symmetric sparse linear systems. Comparisons with pARMS, a DD-based parallel incomplete LU (ILU) preconditioning method, are presented for solving Poisson's equation and linear elasticity problems.
Using Wavelet Bases to Separate Scales in Quantum Field Theory
NASA Astrophysics Data System (ADS)
Michlin, Tracie L.
This thesis investigates the use of Daubechies wavelets to separate scales in local quantum field theory. Field theories have an infinite number of degrees of freedom on all distance scales. Quantum field theories are believed to describe the physics of subatomic particles. These theories have no known mathematically convergent approximation methods. Daubechies wavelet bases can be used to separate degrees of freedom on different distance scales. Volume and resolution truncations lead to mathematically well-defined truncated theories that can be treated using established methods. This work demonstrates that flow equation methods can be used to block-diagonalize truncated field-theoretic Hamiltonians by scale, eliminating the fine-scale degrees of freedom. This may lead to approximation methods and provide an understanding of how to formulate well-defined fine-resolution limits.
Minimal entropy approximation for cellular automata
NASA Astrophysics Data System (ADS)
Fukś, Henryk
2014-02-01
We present a method for the construction of approximate orbits of measures under the action of cellular automata which is complementary to the local structure theory. The local structure theory is based on the idea of Bayesian extension, that is, construction of a probability measure consistent with given block probabilities and maximizing entropy. If instead of maximizing entropy one minimizes it, one can develop another method for the construction of approximate orbits, at the heart of which is the iteration of finite-dimensional maps, called minimal entropy maps. We present numerical evidence that the minimal entropy approximation sometimes outperforms the local structure theory in characterizing the properties of cellular automata. The density response curve for elementary CA rule 26 is used to illustrate this claim.
NASA Technical Reports Server (NTRS)
Omidvar, K.
1980-01-01
Using the method of explicit summation over the intermediate states, two-photon absorption cross sections in light and intermediate atoms, based on the simplistic frozen-core approximation and LS coupling, have been formulated. Formulas for the cross section in terms of integrals over radial wave functions are given. Two selection rules, one exact and one approximate, valid within the stated approximations, are derived. The formulas are applied to two-photon absorption in nitrogen, oxygen, and chlorine. In evaluating the radial integrals, Hartree-Fock wave functions have been used for low-lying levels, and hydrogenic wave functions obtained by the quantum-defect method for high-lying levels. A relationship between the cross section and the oscillator strengths is derived.
ERIC Educational Resources Information Center
Lee, Young-Jin
2012-01-01
This paper presents a computational method that can efficiently estimate the ability of students from the log files of a Web-based learning environment capturing their problem solving processes. The computational method developed in this study approximates the posterior distribution of the student's ability obtained from the conventional Bayes…
Kong, Xiang-Zhen; Liu, Jin-Xing; Zheng, Chun-Hou; Hou, Mi-Xiao; Wang, Juan
2017-07-01
High dimensionality has become a typical feature of biomolecular data. In this paper, a novel dimension reduction method named p-norm singular value decomposition (PSVD) is proposed to seek the low-rank approximation matrix to the biomolecular data. To enhance the robustness to outliers, the Lp-norm is taken as the error function and the Schatten p-norm is used as the regularization function in the optimization model. To evaluate the performance of PSVD, the k-means clustering method is then employed for tumor clustering based on the low-rank approximation matrix. Extensive experiments are carried out on five gene expression data sets, including two benchmark data sets and three higher-dimensional data sets from The Cancer Genome Atlas. The experimental results demonstrate that the PSVD-based method outperforms many existing methods; in particular, it proves more efficient for processing higher-dimensional data, with good robustness, stability, and superior time performance.
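For orientation, a sketch of the evaluation pipeline with plain truncated SVD standing in for PSVD (the Lp/Schatten-p formulation is the paper's contribution and is not reproduced here), followed by k-means on the reduced coordinates:

    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(6)
    data = rng.standard_normal((100, 2000))     # samples x genes (synthetic)

    U, s, Vt = np.linalg.svd(data, full_matrices=False)
    rank = 5                                    # target rank (assumed)
    reduced = U[:, :rank] * s[:rank]            # low-rank sample coordinates

    labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(reduced)
    print(np.bincount(labels))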
Representation of the exact relativistic electronic Hamiltonian within the regular approximation
NASA Astrophysics Data System (ADS)
Filatov, Michael; Cremer, Dieter
2003-12-01
The exact relativistic Hamiltonian for electronic states is expanded in terms of energy-independent linear operators within the regular approximation. An effective relativistic Hamiltonian has been obtained, which in lowest order yields directly the infinite-order regular approximation (IORA) rather than the zeroth-order regular approximation method. Further perturbational expansion of the exact relativistic electronic energy utilizing the effective Hamiltonian leads to new methods based on ordinary (IORAn) or double [IORAn(2)] perturbation theory (n: order of expansion), which provide improved energies in atomic calculations. Energies calculated with IORA4 and IORA3(2) are accurate up to c^(-20). Furthermore, IORA is improved by using the IORA wave function to calculate the Rayleigh quotient, which, if minimized, leads to the exact relativistic energy. The outstanding performance of this new IORA method, coined scaled IORA, is documented in atomic and molecular calculations.
NASA Astrophysics Data System (ADS)
Kosterina, E. A.
2018-01-01
The leakage of a polluting liquid from a longitudinal crack in a pipeline lying on the ground surface is considered. The two-dimensional nonstationary mathematical model is based on the mass balance equation written in terms of pressure, which is satisfied in a domain with an unknown moving boundary corresponding to the contaminated zone. A function characterizing the region of action of the equation is introduced, which makes it possible to pose the problem in a fixed domain. Two finite-difference approximations of the problem are proposed, differing in the treatment of the convective term: an upwind (counter-current) approximation and an approximation along characteristics. The results of computational experiments, which favor the method of characteristics, are presented. The methods are illustrated by an example of the spread of oil pollution.
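A minimal sketch of the counter-current (upwind) treatment of the convective term, shown on a 1-D advection model problem (the paper's two-dimensional pressure equation is handled analogously):

    import numpy as np

    a, dx, dt = 1.0, 0.02, 0.01         # velocity and grid; CFL = a*dt/dx = 0.5
    x = np.arange(0.0, 2.0, dx)
    u = np.exp(-100 * (x - 0.5)**2)     # initial pollutant profile

    for _ in range(100):
        # Upwind difference: take the spatial derivative from the inflow side (a > 0).
        u[1:] -= a * dt / dx * (u[1:] - u[:-1])
    print(x[np.argmax(u)])              # pulse center advected to ~1.5

Upwinding is stable under the CFL condition but introduces numerical diffusion, which is one reason the characteristics-based variant can be preferable.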
Detecting Past Positive Selection through Ongoing Negative Selection
Bazykin, Georgii A.; Kondrashov, Alexey S.
2011-01-01
Detecting positive selection is a challenging task. We propose a method for detecting past positive selection through ongoing negative selection, based on comparison of the parameters of intraspecies polymorphism at functionally important and selectively neutral sites where a nucleotide substitution of the same kind occurred recently. Reduced occurrence of recently replaced ancestral alleles at functionally important sites indicates that negative selection currently acts against these alleles and, therefore, that their replacements were driven by positive selection. Application of this method to the Drosophila melanogaster lineage shows that the fraction of adaptive amino acid replacements remained approximately 0.5 for a long time. In the Homo sapiens lineage, however, this fraction drops from approximately 0.5 before the Ponginae–Homininae divergence to approximately 0 after it. The proposed method is based on essentially the same data as the McDonald–Kreitman test but is free from some of its limitations, which may open new opportunities, especially when many genotypes within a species are known. PMID:21859804
A variable-order laminated plate theory based on the variational-asymptotical method
NASA Technical Reports Server (NTRS)
Lee, Bok W.; Sutyrin, Vladislav G.; Hodges, Dewey H.
1993-01-01
The variational-asymptotical method is a mathematical technique by which the three-dimensional analysis of laminated plate deformation can be split into a linear, one-dimensional, through-the-thickness analysis and a nonlinear, two-dimensional, plate analysis. The elastic constants used in the plate analysis are obtained from the through-the-thickness analysis, along with approximate, closed-form three-dimensional distributions of displacement, strain, and stress. In this paper, a theory based on this technique is developed which is capable of approximating three-dimensional elasticity to any accuracy desired. The asymptotical method allows for the approximation of the through-the-thickness behavior in terms of the eigenfunctions of a certain Sturm-Liouville problem associated with the thickness coordinate. These eigenfunctions contain all the necessary information about the nonhomogeneities along the thickness coordinate of the plate and thus possess the appropriate discontinuities in the derivatives of displacement. The theory is presented in this paper along with numerical results for the eigenfunctions of various laminated plates.
NASA Astrophysics Data System (ADS)
Gurrala, Praveen; Downs, Andrew; Chen, Kun; Song, Jiming; Roberts, Ron
2018-04-01
Full wave scattering models for ultrasonic waves are necessary for the accurate prediction of voltage signals received from complex defects/flaws in practical nondestructive evaluation (NDE) measurements. We propose the high-order Nyström method accelerated by the multilevel fast multipole algorithm (MLFMA) as an improvement over state-of-the-art full-wave scattering models based on boundary integral equations. We present numerical results demonstrating improvements in simulation time and memory requirements. In particular, we demonstrate the need for higher-order geometry and field approximation in modeling NDE measurements. We also illustrate the importance of full-wave scattering models using experimental pulse-echo data from a spherical inclusion in a solid, which cannot be modeled accurately by approximation-based scattering models such as the Kirchhoff approximation.
Stable computations with flat radial basis functions using vector-valued rational approximations
NASA Astrophysics Data System (ADS)
Wright, Grady B.; Fornberg, Bengt
2017-02-01
One commonly finds in applications of smooth radial basis functions (RBFs) that scaling the kernels so they are 'flat' leads to smaller discretization errors. However, the direct numerical approach for computing with flat RBFs (RBF-Direct) is severely ill-conditioned. We present an algorithm for bypassing this ill-conditioning that is based on a new method for rational approximation (RA) of vector-valued analytic functions with the property that all components of the vector share the same singularities. This new algorithm (RBF-RA) is more accurate, robust, and easier to implement than the Contour-Padé method, which is similarly based on vector-valued rational approximation. In contrast to the stable RBF-QR and RBF-GA algorithms, which are based on finding a better-conditioned basis for the same RBF space, the new algorithm can be used with any type of smooth radial kernel, and it is also applicable to a wider range of tasks (including calculating Hermite-type implicit RBF-FD stencils). We present a series of numerical experiments demonstrating the effectiveness of this new method for computing RBF interpolants in the flat regime. We also demonstrate the flexibility of the method by using it to compute implicit RBF-FD formulas in the flat regime and then using these for solving Poisson's equation in a 3-D spherical shell.
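A small numpy demonstration (illustrative only, not the RBF-RA algorithm) of the ill-conditioning that motivates the paper: the Gaussian kernel matrix used by RBF-Direct becomes numerically singular as the shape parameter tends to zero, even though the underlying interpolant remains well-behaved.

```python
import numpy as np

# Condition number of the Gaussian RBF interpolation matrix as the
# shape parameter eps shrinks (the "flat" limit).
x = np.linspace(0.0, 1.0, 20)
for eps in [5.0, 1.0, 0.1, 0.01]:
    A = np.exp(-((eps * (x[:, None] - x[None, :])) ** 2))
    print(f"eps = {eps:5.2f}   cond(A) = {np.linalg.cond(A):.2e}")
```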
Approximation methods for inverse problems involving the vibration of beams with tip bodies
NASA Technical Reports Server (NTRS)
Rosen, I. G.
1984-01-01
Two cubic spline based approximation schemes for the estimation of structural parameters associated with the transverse vibration of flexible beams with tip appendages are outlined. The identification problem is formulated as a least squares fit to data subject to the system dynamics, which are given by a hybrid system of coupled ordinary and partial differential equations. The first approximation scheme is based upon an abstract semigroup formulation of the state equation, while a weak/variational form is the basis for the second. Cubic spline based subspaces together with a Rayleigh-Ritz-Galerkin approach are used to construct sequences of easily solved finite-dimensional approximating identification problems. Convergence results are briefly discussed, and a numerical example is provided that demonstrates the feasibility of the schemes and compares their relative performance.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gu, Z.; Ching, W.Y.
Based on the Sterne-Inkson model for the self-energy correction to the single-particle energy in the local-density approximation (LDA), we have implemented an approximate energy-dependent and k-dependent GW correction scheme to the orthogonalized linear combination of atomic orbital-based local-density calculation for insulators. In contrast to the approach of Jenkins, Srivastava, and Inkson, we evaluate the on-site exchange integrals using the LDA Bloch functions throughout the Brillouin zone. By using a k-weighted band gap E_g…
Artificial neural networks and approximate reasoning for intelligent control in space
NASA Technical Reports Server (NTRS)
Berenji, Hamid R.
1991-01-01
A method is introduced for learning to refine the control rules of approximate reasoning-based controllers. A reinforcement-learning technique is used in conjunction with a multi-layer neural network model of an approximate reasoning-based controller. The model learns by updating its prediction of the physical system's behavior. The model can use the control knowledge of an experienced operator and fine-tune it through the process of learning. Some of the space domains suitable for applications of the model such as rendezvous and docking, camera tracking, and tethered systems control are discussed.
Symplectic partitioned Runge-Kutta scheme for Maxwell's equations
NASA Astrophysics Data System (ADS)
Huang, Zhi-Xiang; Wu, Xian-Liang
Using the symplectic partitioned Runge-Kutta (PRK) method, we construct, for the first time, a new scheme for approximating the solution to infinite-dimensional nonseparable Hamiltonian systems of Maxwell's equations. The scheme is obtained by discretizing Maxwell's equations in the time direction based on the symplectic PRK method and then approximating the spatial derivatives with a suitable finite difference scheme. Several numerical examples are presented to verify the efficiency of the scheme.
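As a rough analogue, sketched under the assumption that a normalized 1D model suffices for illustration (this is not the authors' PRK construction), the classic staggered leapfrog update for the 1D Maxwell system dE/dt = dH/dx, dH/dt = dE/dx is itself a partitioned, structure-preserving integrator:

```python
import numpy as np

n, steps = 200, 400
dx = 1.0 / n
dt = 0.5 * dx                       # CFL-stable time step
x = np.arange(n) * dx
E = np.exp(-200.0 * (x - 0.5) ** 2) # initial pulse
H = np.zeros(n)

for _ in range(steps):
    # partitioned update: advance H from forward differences of E,
    # then E from backward differences of the freshly updated H
    H[:-1] += dt * (E[1:] - E[:-1]) / dx
    E[1:]  += dt * (H[1:] - H[:-1]) / dx
    # endpoints are left fixed for simplicity in this sketch
```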
Edge-augmented Fourier partial sums with applications to Magnetic Resonance Imaging (MRI)
NASA Astrophysics Data System (ADS)
Larriva-Latt, Jade; Morrison, Angela; Radgowski, Alison; Tobin, Joseph; Iwen, Mark; Viswanathan, Aditya
2017-08-01
Certain applications such as Magnetic Resonance Imaging (MRI) require the reconstruction of functions from Fourier spectral data. When the underlying functions are piecewise-smooth, standard Fourier approximation methods suffer from the Gibbs phenomenon - with associated oscillatory artifacts in the vicinity of edges and an overall reduced order of convergence in the approximation. This paper proposes an edge-augmented Fourier reconstruction procedure which uses only the first few Fourier coefficients of an underlying piecewise-smooth function to accurately estimate jump information and then incorporate it into a Fourier partial sum approximation. We provide both theoretical and empirical results showing the improved accuracy of the proposed method, as well as comparisons demonstrating superior performance over existing state-of-the-art sparse optimization-based methods.
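A short numpy sketch (illustrative, not the paper's method) of the Gibbs phenomenon the proposed procedure targets: partial Fourier sums of a unit step overshoot the jump by roughly 9% no matter how many modes are retained, which is why jump information must be estimated and reincorporated.

```python
import numpy as np

# Fourier partial sums of the step f(x) = 1 for x > 0, 0 otherwise,
# on [-pi, pi]; only odd modes contribute.
x = np.linspace(-np.pi, np.pi, 2001)
for N in [8, 32, 128]:
    S = 0.5 + sum(2.0 / (np.pi * k) * np.sin(k * x)
                  for k in range(1, N + 1, 2))
    print(f"N = {N:4d}   overshoot = {S.max() - 1.0:.4f}")  # ~0.089 each time
```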
NASA Astrophysics Data System (ADS)
Christenson, J. G.; Austin, R. A.; Phillips, R. J.
2018-05-01
The phonon Boltzmann transport equation is used to analyze model problems in one and two spatial dimensions, under transient and steady-state conditions. New, explicit solutions are obtained by using the P1 and P3 approximations, based on expansions in spherical harmonics, and are compared with solutions from the discrete ordinates method. For steady-state energy transfer, it is shown that analytic expressions derived using the P1 and P3 approximations agree quantitatively with the discrete ordinates method, in some cases for large Knudsen numbers, and always for Knudsen numbers less than unity. However, for time-dependent energy transfer, the PN solutions differ qualitatively from converged solutions obtained by the discrete ordinates method. Although they correctly capture the wave-like behavior of energy transfer at short times, the P1 and P3 approximations rely on one or two wave velocities, respectively, yielding abrupt, step-changes in temperature profiles that are absent when the angular dependence of the phonon velocities is captured more completely. It is shown that, with the gray approximation, the P1 approximation is formally equivalent to the so-called "hyperbolic heat equation." Overall, these results support the use of the PN approximation to find solutions to the phonon Boltzmann transport equation for steady-state conditions. Such solutions can be useful in the design and analysis of devices that involve heat transfer at nanometer length scales, where continuum-scale approaches become inaccurate.
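For reference, the "hyperbolic heat equation" referred to here is usually written in Cattaneo form; the relaxation time tau and thermal diffusivity alpha below are generic symbols, not values taken from the paper:

```latex
\tau\,\frac{\partial^{2} T}{\partial t^{2}}
  + \frac{\partial T}{\partial t}
  = \alpha\,\nabla^{2} T
```

The finite wave speed implied by the second-order time derivative is what produces the step-like temperature fronts attributed to the P1 approximation above.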
Numerical Integration Techniques for Curved-Element Discretizations of Molecule–Solvent Interfaces
Bardhan, Jaydeep P.; Altman, Michael D.; Willis, David J.; Lippow, Shaun M.; Tidor, Bruce; White, Jacob K.
2012-01-01
Surface formulations of biophysical modeling problems offer attractive theoretical and computational properties. Numerical simulations based on these formulations usually begin with discretization of the surface under consideration; often, the surface is curved, possessing complicated structure and possibly singularities. Numerical simulations commonly are based on approximate, rather than exact, discretizations of these surfaces. To assess the strength of the dependence of simulation accuracy on the fidelity of surface representation, we have developed methods to model several important surface formulations using exact surface discretizations. Following and refining Zauhar's work (J. Comp.-Aid. Mol. Des. 9:149-159, 1995), we define two classes of curved elements that can exactly discretize the van der Waals, solvent-accessible, and solvent-excluded (molecular) surfaces. We then present numerical integration techniques that can accurately evaluate nonsingular and singular integrals over these curved surfaces. After validating the exactness of the surface discretizations and demonstrating the correctness of the presented integration methods, we present a set of calculations that compare the accuracy of approximate, planar-triangle-based discretizations and exact, curved-element-based simulations of surface-generalized-Born (sGB), surface-continuum van der Waals (scvdW), and boundary-element method (BEM) electrostatics problems. Results demonstrate that continuum electrostatic calculations with BEM using curved elements, piecewise-constant basis functions, and centroid collocation are nearly ten times more accurate than planar-triangle BEM for basis sets of comparable size. The sGB and scvdW calculations give exceptional accuracy even for the coarsest obtainable discretized surfaces. The extra accuracy is attributed to the exact representation of the solute–solvent interface; in contrast, commonly used planar-triangle discretizations can only offer improved approximations with increasing discretization and associated increases in computational resources. The results clearly demonstrate that our methods for approximate integration on an exact geometry are far more accurate than exact integration on an approximate geometry. A MATLAB implementation of the presented integration methods and sample data files containing curved-element discretizations of several small molecules are available online at http://web.mit.edu/tidor. PMID:17627358
NASA Astrophysics Data System (ADS)
Zhang, G.; Lu, D.; Ye, M.; Gunzburger, M.
2011-12-01
Markov Chain Monte Carlo (MCMC) methods have been widely used in many fields of uncertainty analysis to estimate the posterior distributions of parameters and credible intervals of predictions in the Bayesian framework. However, in practice, MCMC may be computationally unaffordable due to slow convergence and the excessive number of forward model executions required, especially when the forward model is expensive to compute. Both disadvantages arise from the curse of dimensionality, i.e., the posterior distribution is usually a multivariate function of parameters. Recently, the sparse grid method has been demonstrated to be an effective technique for coping with high-dimensional interpolation and integration problems. Thus, in order to accelerate the forward model and avoid the slow convergence of MCMC, we propose a new method for uncertainty analysis based on sparse grid interpolation and quasi-Monte Carlo sampling. First, we construct a polynomial approximation of the forward model in the parameter space by using sparse grid interpolation. This approximation then defines an accurate surrogate posterior distribution that can be evaluated repeatedly at minimal computational cost. Second, instead of using MCMC, a quasi-Monte Carlo method is applied to draw samples in the parameter space. The desired probability density function of each prediction is then approximated by accumulating the posterior density values of all the samples according to the prediction values. Our method has the following advantages: (1) the polynomial approximation of the forward model on the sparse grid provides a very efficient evaluation of the surrogate posterior distribution; (2) the quasi-Monte Carlo method retains the same accuracy in approximating the PDF of predictions but avoids the convergence difficulties of MCMC. The proposed method is applied to a controlled numerical experiment of groundwater flow modeling. The results show that our method attains the same accuracy much more efficiently than traditional MCMC.
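A compact sketch of the surrogate-plus-QMC workflow described above, with loud caveats: the forward model, prior box, and data model below are invented for illustration, and a dense polynomial fit stands in for the true sparse grid interpolant.

```python
import numpy as np
from scipy.stats import qmc

def forward(theta):                       # hypothetical forward model
    return np.sin(3.0 * theta[..., 0]) + theta[..., 1] ** 2

# 1. fit a cheap polynomial surrogate on a small design in [0,1]^2
design = qmc.Sobol(d=2, seed=0).random(64)
basis = lambda t: np.stack([np.ones(len(t)), t[:, 0], t[:, 1],
                            t[:, 0]**2, t[:, 1]**2, t[:, 0]*t[:, 1]], axis=1)
coeffs, *_ = np.linalg.lstsq(basis(design), forward(design), rcond=None)
surrogate = lambda t: basis(t) @ coeffs

# 2. evaluate the surrogate posterior on a quasi-Monte Carlo sample
samples = qmc.Sobol(d=2, seed=1).random(4096)
obs, sigma = 0.7, 0.1                     # assumed observation and noise
post = np.exp(-0.5 * ((surrogate(samples) - obs) / sigma) ** 2)
post /= post.sum()                        # normalized posterior weights
```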
Method of characteristics for three-dimensional axially symmetrical supersonic flows.
NASA Technical Reports Server (NTRS)
Sauer, R
1947-01-01
An approximation method for three-dimensional axially symmetrical supersonic flows is developed; it is based on the characteristics theory (represented partly graphically, partly analytically). Thereafter this method is applied to the construction of rotationally symmetrical nozzles. (author)
NASA Astrophysics Data System (ADS)
Maass, Bolko
2016-12-01
This paper describes an efficient and easily implemented algorithmic approach to extracting an approximation to an image's dominant projected illumination direction, based on intermediary results from a segmentation-based crater detection algorithm (CDA), at a computational cost that is negligible in comparison to that of the prior stages of the CDA. Most contemporary CDAs built for spacecraft navigation use this illumination direction as a means of improving performance or even require it to function at all. Deducing the illumination vector from the image alone reduces the reliance on external information such as the accurate knowledge of the spacecraft inertial state, accurate time base and solar system ephemerides. Therefore, a method such as the one described in this paper is a prerequisite for true "Lost in Space" operation of a purely segmentation-based crater detecting and matching method for spacecraft navigation. The proposed method is verified using ray-traced lunar elevation model data, asteroid image data, and in a laboratory setting with a camera in the loop.
Ranking Support Vector Machine with Kernel Approximation.
Chen, Kai; Li, Rongchun; Dou, Yong; Liang, Zhengfa; Lv, Qi
2017-01-01
Learning to rank algorithms have become important in recent years due to their successful application in information retrieval, recommender systems, and computational biology, among other fields. Ranking support vector machine (RankSVM) is one of the state-of-the-art ranking models and has been widely used. Nonlinear RankSVM (RankSVM with nonlinear kernels) can give higher accuracy than linear RankSVM (RankSVM with a linear kernel) for complex nonlinear ranking problems. However, the learning methods for nonlinear RankSVM are still time-consuming because of the calculation of the kernel matrix. In this paper, we propose a fast ranking algorithm based on kernel approximation to avoid computing the kernel matrix. We explore two types of kernel approximation methods, namely, the Nyström method and random Fourier features. A primal truncated Newton method is used to optimize the pairwise L2-loss (squared hinge-loss) objective function of the ranking model after the nonlinear kernel approximation. Experimental results demonstrate that our proposed method trains much faster than kernel RankSVM and achieves comparable or better performance than state-of-the-art ranking algorithms. PMID:28293256
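One of the two kernel approximations explored here, random Fourier features, is easy to sketch. In this minimal numpy version (an illustration, not the paper's implementation), inner products of the features approximate a Gaussian kernel, so a linear ranker can stand in for the kernelized one.

```python
import numpy as np

rng = np.random.default_rng(0)
d, D, sigma = 10, 512, 1.0

# Frequencies and phases are drawn once and reused for every input,
# so that z(x) @ z(y) ~ exp(-||x - y||^2 / (2 sigma^2)).
W = rng.normal(scale=1.0 / sigma, size=(d, D))
b = rng.uniform(0.0, 2.0 * np.pi, size=D)

def z(X):
    return np.sqrt(2.0 / D) * np.cos(X @ W + b)

X = rng.normal(size=(5, d))
K_approx = z(X) @ z(X).T   # approximates the Gaussian kernel matrix
```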
NASA Technical Reports Server (NTRS)
Pratt, D. T.
1984-01-01
Conventional algorithms for the numerical integration of ordinary differential equations (ODEs) are based on the use of polynomial functions as interpolants. However, the exact solutions of stiff ODEs behave like decaying exponential functions, which are poorly approximated by polynomials. An obvious choice of interpolant is the exponential functions themselves, or their low-order diagonal Padé (rational function) approximants. A number of explicit, A-stable integration algorithms were derived from the use of a three-parameter exponential function as interpolant, and their relationship to low-order polynomial-based and rational-function-based implicit and explicit methods was shown by examining their low-order diagonal Padé approximants. A robust implicit formula was derived by exponentially fitting the trapezoidal rule. Application of these algorithms to the integration of the ODEs governing homogeneous, gas-phase chemical kinetics was demonstrated in the developmental code CREK1D, which compares favorably with the Gear-Hindmarsh code LSODE despite the use of a primitive stepsize control strategy.
Yeung, Dit-Yan; Chang, Hong; Dai, Guang
2008-11-01
In recent years, metric learning in the semisupervised setting has aroused a lot of research interest. One type of semisupervised metric learning utilizes supervisory information in the form of pairwise similarity or dissimilarity constraints. However, most methods proposed so far are either limited to linear metric learning or unable to scale well with the data set size. In this letter, we propose a nonlinear metric learning method based on the kernel approach. By applying low-rank approximation to the kernel matrix, our method can handle significantly larger data sets. Moreover, our low-rank approximation scheme can naturally lead to out-of-sample generalization. Experiments performed on both artificial and real-world data show very promising results.
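The low-rank idea can be sketched with the Nyström construction (stated generically; the letter does not necessarily use this exact recipe): the full n x n kernel matrix is approximated from a small set of m landmark columns, reducing both memory and the cost of downstream linear algebra.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 5))
m = 50
idx = rng.choice(len(X), size=m, replace=False)   # landmark points

def gauss(A, B, sigma=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

C = gauss(X, X[idx])          # n x m slice of the kernel matrix
W = gauss(X[idx], X[idx])     # m x m block on the landmarks
K_lowrank = C @ np.linalg.pinv(W) @ C.T   # rank-m approximation of K
```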
NASA Astrophysics Data System (ADS)
Alam Khan, Najeeb; Razzaq, Oyoon Abdul
2016-03-01
In the present work a wavelet approximation method is employed to solve fuzzy boundary value differential equations (FBVDEs). Essentially, a truncated Legendre wavelet series together with the Legendre wavelet operational matrix of the derivative is utilized to convert an FBVDE into a simple computational problem by reducing it to a system of fuzzy algebraic linear equations. The capability of the scheme is investigated on a second-order FBVDE considered under generalized H-differentiability. Solutions are represented graphically, showing the competency and accuracy of the method.
Factorization and reduction methods for optimal control of distributed parameter systems
NASA Technical Reports Server (NTRS)
Burns, J. A.; Powers, R. K.
1985-01-01
A Chandrasekhar-type factorization method is applied to the linear-quadratic optimal control problem for distributed parameter systems. An aeroelastic control problem is used as a model example to demonstrate that if computationally efficient algorithms, such as those of Chandrasekhar-type, are combined with the special structure often available to a particular problem, then an abstract approximation theory developed for distributed parameter control theory becomes a viable method of solution. A numerical scheme based on averaging approximations is applied to hereditary control problems. Numerical examples are given.
An effective solution to the nonlinear, nonstationary Navier-Stokes equations for two dimensions
NASA Technical Reports Server (NTRS)
Gabrielsen, R. E.
1975-01-01
A sequence of approximate solutions for the nonlinear, nonstationary Navier-Stokes equations for a two-dimensional domain, from which explicit error estimates and rates of convergence are obtained, is described. This sequence of approximate solutions is based primarily on the Newton-Kantorovich method.
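Stated generically (the Navier-Stokes specifics are developed in the paper), the Newton-Kantorovich scheme solves an operator equation F(x) = 0 in a Banach space by the iteration

```latex
x_{n+1} = x_{n} - \left[F'(x_{n})\right]^{-1} F(x_{n}),
```

whose error contracts quadratically once the derivative is invertible near the solution and a Lipschitz-type condition holds; explicit error estimates of the kind mentioned above come from conditions of this form.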
Some Surprising Errors in Numerical Differentiation
ERIC Educational Resources Information Center
Gordon, Sheldon P.
2012-01-01
Data analysis methods, both numerical and visual, are used to discover a variety of surprising patterns in the errors associated with successive approximations to the derivatives of sinusoidal and exponential functions based on the Newton difference-quotient. L'Hopital's rule and Taylor polynomial approximations are then used to explain why these…
A Noncentral "t" Regression Model for Meta-Analysis
ERIC Educational Resources Information Center
Camilli, Gregory; de la Torre, Jimmy; Chiu, Chia-Yi
2010-01-01
In this article, three multilevel models for meta-analysis are examined. Hedges and Olkin suggested that effect sizes follow a noncentral "t" distribution and proposed several approximate methods. Raudenbush and Bryk further refined this model; however, this procedure is based on a normal approximation. In the current research literature, this…
Optical properties of electrohydrodynamic convection patterns: rigorous and approximate methods.
Bohley, Christian; Heuer, Jana; Stannarius, Ralf
2005-12-01
We analyze the optical behavior of two-dimensionally periodic structures that occur in electrohydrodynamic convection (EHC) patterns in nematic sandwich cells. These structures are anisotropic, locally uniaxial, and periodic on the scale of micrometers. For the first time, the optics of these structures is investigated with a rigorous method. The method used for the description of the electromagnetic waves interacting with EHC director patterns is a numerical approach that discretizes directly the Maxwell equations. It works as a space-grid-time-domain method and computes electric and magnetic fields in time steps. This so-called finite-difference-time-domain (FDTD) method is able to generate the fields with arbitrary accuracy. We compare this rigorous method with earlier attempts based on ray-tracing and analytical approximations. Results of optical studies of EHC structures made earlier based on ray-tracing methods are confirmed for thin cells, when the spatial periods of the pattern are sufficiently large. For the treatment of small-scale convection structures, the FDTD method is without alternatives.
NASA Astrophysics Data System (ADS)
Lorin, E.; Yang, X.; Antoine, X.
2016-06-01
The paper is devoted to developing efficient domain decomposition methods for the linear Schrödinger equation beyond the semiclassical regime, in which the rescaled Planck constant is not small enough for asymptotic methods (e.g. geometric optics) to deliver good accuracy, yet direct methods (e.g. finite differences) are too computationally expensive. This belongs to the category of computing middle-frequency wave propagation, where neither asymptotic nor direct methods can be used directly with both efficiency and accuracy. Motivated by recent works of the authors on absorbing boundary conditions (Antoine et al. (2014) [13] and Yang and Zhang (2014) [43]), we introduce Semiclassical Schwarz Waveform Relaxation (SSWR) methods, which are seamless integrations of semiclassical approximation into Schwarz Waveform Relaxation methods. Two versions are proposed, based respectively on Herman-Kluk propagation and geometric optics, and we prove convergence and provide numerical evidence of the efficiency and accuracy of these methods.
Application of Newton's method to the postbuckling of rings under pressure loadings
NASA Technical Reports Server (NTRS)
Thurston, Gaylen A.
1989-01-01
The postbuckling response of circular rings (or long cylinders) is examined. The rings are subjected to four types of external pressure loadings; each type of pressure is defined by its magnitude and direction at points on the buckled ring. Newton's method is applied to the nonlinear differential equations of the exact inextensional theory for the ring problem. A zeroth approximation for the solution of the nonlinear equations, based on the mode shape corresponding to the first buckling pressure, is derived in closed form for each of the four types of pressure. The zeroth approximation is used to start the iteration cycle in Newton's method to compute numerical solutions of the nonlinear equations. The zeroth approximations for the postbuckling pressure-deflection curves are compared with the converged solutions from Newton's method and with similar results reported in the literature.
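The iteration cycle described above is ordinary Newton iteration for a nonlinear system. A generic numpy sketch follows (the ring equations themselves are not reproduced here, and the test system below is invented for illustration):

```python
import numpy as np

def newton(F, J, x0, tol=1e-10, maxit=50):
    """Solve F(x) = 0 by Newton's method with Jacobian J."""
    x = np.asarray(x0, dtype=float)
    for _ in range(maxit):
        dx = np.linalg.solve(J(x), -F(x))   # linearized correction
        x = x + dx
        if np.linalg.norm(dx) < tol:
            return x
    raise RuntimeError("Newton iteration did not converge")

# toy test system: intersect a circle with a hyperbola
F = lambda x: np.array([x[0]**2 + x[1]**2 - 4.0, x[0]*x[1] - 1.0])
J = lambda x: np.array([[2*x[0], 2*x[1]], [x[1], x[0]]])
print(newton(F, J, [2.0, 0.5]))
```

The role of the paper's closed-form zeroth approximation is exactly that of x0 here: a starting guess close enough for the iteration to converge.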
A dynamic programming approach to estimate the capacity value of energy storage
Sioshansi, Ramteen; Madaeni, Seyed Hossein; Denholm, Paul
2013-09-17
Here, we present a method to estimate the capacity value of storage. Our method uses a dynamic program to model the effect of power system outages on the operation and state of charge of storage in subsequent periods. We combine the optimized dispatch from the dynamic program with estimated system loss of load probabilities to compute a probability distribution for the state of charge of storage in each period. This probability distribution can be used as a forced outage rate for storage in standard reliability-based capacity value estimation methods. Our proposed method has the advantage over existing approximations that it explicitly captures the effect of system shortage events on the state of charge of storage in subsequent periods. We also use a numerical case study, based on five utility systems in the U.S., to demonstrate our technique and compare it to existing approximation methods.
Modeling Sound Propagation Through Non-Axisymmetric Jets
NASA Technical Reports Server (NTRS)
Leib, Stewart J.
2014-01-01
A method for computing the far-field adjoint Green's function of the generalized acoustic analogy equations under a locally parallel mean flow approximation is presented. The method is based on expanding the mean-flow-dependent coefficients in the governing equation and the scalar Green's function in truncated Fourier series in the azimuthal direction and a finite difference approximation in the radial direction in circular cylindrical coordinates. The combined spectral/finite difference method yields a highly banded system of algebraic equations that can be efficiently solved using a standard sparse system solver. The method is applied to test cases, with mean flow specified by analytical functions, corresponding to two noise reduction concepts of current interest: the offset jet and the fluid shield. Sample results for the Green's function are given for these two test cases and recommendations made as to the use of the method as part of a RANS-based jet noise prediction code.
Pair production in low-energy collisions of uranium nuclei beyond the monopole approximation
NASA Astrophysics Data System (ADS)
Maltsev, I. A.; Shabaev, V. M.; Tupitsyn, I. I.; Kozhedub, Y. S.; Plunien, G.; Stöhlker, Th.
2017-10-01
A method for the calculation of electron-positron pair production in low-energy heavy-ion collisions beyond the monopole approximation is presented. The method is based on numerical solution of the time-dependent Dirac equation with the full two-center potential. The one-electron wave functions are expanded in a finite basis set constructed on a two-dimensional spatial grid. Employing the developed approach, the probabilities of bound-free pair production are calculated for collisions of bare uranium nuclei at energies near the Coulomb barrier. The obtained results are compared with the corresponding values calculated in the monopole approximation.
Restoring the Pauli principle in the random phase approximation ground state
NASA Astrophysics Data System (ADS)
Kosov, D. S.
2017-12-01
The random phase approximation (RPA) ground state contains electronic configurations in which two (or more) identical electrons occupy the same molecular spin-orbital, violating the Pauli exclusion principle. This overcounting of electronic configurations arises from the quasiboson approximation in the treatment of electron-hole pair operators. We describe a method to restore the Pauli principle in the RPA wavefunction. The proposed theory is illustrated by calculations of molecular dipole moments and electronic kinetic energies. The Hartree-Fock based RPA, corrected for the Pauli principle, gives results of accuracy comparable to Møller-Plesset second-order perturbation theory and the coupled-cluster singles and doubles method.
An hp-adaptivity and error estimation for hyperbolic conservation laws
NASA Technical Reports Server (NTRS)
Bey, Kim S.
1995-01-01
This paper presents an hp-adaptive discontinuous Galerkin method for linear hyperbolic conservation laws. A priori and a posteriori error estimates are derived in mesh-dependent norms which reflect the dependence of the approximate solution on the element size (h) and the degree (p) of the local polynomial approximation. The a posteriori error estimate, based on the element residual method, provides bounds on the actual global error in the approximate solution. The adaptive strategy is designed to deliver an approximate solution with the specified level of error in three steps. The a posteriori estimate is used to assess the accuracy of a given approximate solution and the a priori estimate is used to predict the mesh refinements and polynomial enrichment needed to deliver the desired solution. Numerical examples demonstrate the reliability of the a posteriori error estimates and the effectiveness of the hp-adaptive strategy.
Flexible scheme to truncate the hierarchy of pure states.
Zhang, P-P; Bentley, C D B; Eisfeld, A
2018-04-07
The hierarchy of pure states (HOPS) is a wavefunction-based method that can be used for numerically modeling open quantum systems. Formally, HOPS recovers the exact system dynamics for an infinite depth of the hierarchy. However, truncation of the hierarchy is required to numerically implement HOPS. We want to choose a "good" truncation method, where by "good" we mean that it is numerically feasible to check convergence of the results. For the truncation approximation used in previous applications of HOPS, convergence checks are numerically challenging. In this work, we demonstrate the application of the "n-particle approximation" to HOPS. We also introduce a new approximation, which we call the "n-mode approximation." We then explore the convergence of these truncation approximations with respect to the number of equations required in the hierarchy in two exemplary problems: absorption and energy transfer of molecular aggregates.
Identification of approximately duplicate material records in ERP systems
NASA Astrophysics Data System (ADS)
Zong, Wei; Wu, Feng; Chu, Lap-Keung; Sculli, Domenic
2017-03-01
The quality of master data is crucial for the accurate functioning of the various modules of an enterprise resource planning (ERP) system. This study addresses specific data problems arising from the generation of approximately duplicate material records in ERP databases. Such problems are mainly due to the firm's lack of unique and global identifiers for the material records, and to the arbitrary assignment of alternative names for the same material by various users. Traditional duplicate detection methods are ineffective in identifying such approximately duplicate material records because these methods typically rely on string comparisons of each field. To address this problem, a machine learning-based framework is developed to recognise semantic similarity between strings and to further identify and reunify approximately duplicate material records - a process referred to as de-duplication in this article. First, the keywords of the material records are extracted to form vectors of discriminating words. Second, a machine learning method using a probabilistic neural network is applied to determine the semantic similarity between these material records. The approach was evaluated using data from a real case study. The test results indicate that the proposed method outperforms traditional algorithms in identifying approximately duplicate material records.
Roussel, Marc R; Tang, Terry
2006-12-07
A slow manifold is a low-dimensional invariant manifold to which trajectories nearby are rapidly attracted on the way to the equilibrium point. The exact computation of the slow manifold simplifies the model without sacrificing accuracy on the slow time scales of the system. The Maas-Pope intrinsic low-dimensional manifold (ILDM) [Combust. Flame 88, 239 (1992)] is frequently used as an approximation to the slow manifold. This approximation is based on a linearized analysis of the differential equations and thus neglects curvature. We present here an efficient way to calculate an approximation equivalent to the ILDM. Our method, called functional equation truncation (FET), first develops a hierarchy of functional equations involving higher derivatives which can then be truncated at second-derivative terms to explicitly neglect the curvature. We prove that the ILDM and FET-approximated (FETA) manifolds are identical for the one-dimensional slow manifold of any planar system. In higher-dimensional spaces, the ILDM and FETA manifolds agree to numerical accuracy almost everywhere. Solution of the FET equations is, however, expected to generally be faster than the ILDM method.
A dynamic-solver-consistent minimum action method: With an application to 2D Navier-Stokes equations
NASA Astrophysics Data System (ADS)
Wan, Xiaoliang; Yu, Haijun
2017-02-01
This paper discusses the necessity and strategy to unify the development of a dynamic solver and a minimum action method (MAM) for a spatially extended system when employing the large deviation principle (LDP) to study the effects of small random perturbations. A dynamic solver is used to approximate the unperturbed system, and a minimum action method is used to approximate the LDP, which corresponds to solving an Euler-Lagrange equation related to but more complicated than the unperturbed system. We will clarify possible inconsistencies induced by independent numerical approximations of the unperturbed system and the LDP, based on which we propose to define both the dynamic solver and the MAM on the same approximation space for spatial discretization. The semi-discrete LDP can then be regarded as the exact LDP of the semi-discrete unperturbed system, which is a finite-dimensional ODE system. We achieve this methodology for the two-dimensional Navier-Stokes equations using a divergence-free approximation space. The method developed can be used to study the nonlinear instability of wall-bounded parallel shear flows, and be generalized straightforwardly to three-dimensional cases. Numerical experiments are presented.
A Full-Maxwell Approach for Large-Angle Polar Wander of Viscoelastic Bodies
NASA Astrophysics Data System (ADS)
Hu, H.; van der Wal, W.; Vermeersen, L. L. A.
2017-12-01
For large-angle long-term true polar wander (TPW) there are currently two types of nonlinear methods that give approximate solutions: those assuming that the rotational axis coincides with the axis of maximum moment of inertia (MoI), which simplifies the Liouville equation, and those based on the quasi-fluid approximation, which approximates the Love number. Recent studies show that both can have a significant bias for certain models. Therefore, we still lack a (semi)analytical method which can give exact solutions for large-angle TPW for a model based on Maxwell rheology. This paper provides a method which analytically solves the MoI equation and adopts an extended iterative procedure introduced in Hu et al. (2017) to obtain a time-dependent solution. The new method can be used to simulate the effect of a remnant bulge or models in different hydrostatic states. We show the effect of the viscosity of the lithosphere on long-term, large-angle TPW. We also simulate models without hydrostatic equilibrium and show that the choice of the initial stress-free shape for the elastic (or highly viscous) lithosphere of a given model is as important as its thickness for obtaining correct TPW behavior. The initial shape of the lithosphere can be an alternative explanation to mantle convection for the difference between the observed and model-predicted flattening. Finally, it is concluded that, based on the quasi-fluid approximation, TPW speed on Earth and Mars is underestimated, while the speed of the rotational axis approaching its end position on Venus is overestimated.
An Implicit Characteristic Based Method for Electromagnetics
NASA Technical Reports Server (NTRS)
Beggs, John H.; Briley, W. Roger
2001-01-01
An implicit characteristic-based approach for numerical solution of Maxwell's time-dependent curl equations in flux conservative form is introduced. This method combines a characteristic based finite difference spatial approximation with an implicit lower-upper approximate factorization (LU/AF) time integration scheme. This approach is advantageous for three-dimensional applications because the characteristic differencing enables a two-factor approximate factorization that retains its unconditional stability in three space dimensions, and it does not require solution of tridiagonal systems. Results are given both for a Fourier analysis of stability, damping and dispersion properties, and for one-dimensional model problems involving propagation and scattering for free space and dielectric materials using both uniform and nonuniform grids. The explicit Finite Difference Time Domain Method (FDTD) algorithm is used as a convenient reference algorithm for comparison. The one-dimensional results indicate that for low frequency problems on a highly resolved uniform or nonuniform grid, this LU/AF algorithm can produce accurate solutions at Courant numbers significantly greater than one, with a corresponding improvement in efficiency for simulating a given period of time. This approach appears promising for development of dispersion optimized LU/AF schemes for three dimensional applications.
A Least-Squares-Based Weak Galerkin Finite Element Method for Second Order Elliptic Equations
Mu, Lin; Wang, Junping; Ye, Xiu
2017-08-17
Here, in this article, we introduce a least-squares-based weak Galerkin finite element method for the second order elliptic equation. This new method is shown to provide very accurate numerical approximations for both the primal and the flux variables. In contrast to other existing least-squares finite element methods, this new method allows us to use discontinuous approximating functions on finite element partitions consisting of arbitrary polygon/polyhedron shapes. We also develop a Schur complement algorithm for the resulting discretization problem by eliminating all the unknowns that represent the solution information in the interior of each element. Optimal order error estimates for both the primal and the flux variables are established. An extensive set of numerical experiments is conducted to demonstrate the robustness, reliability, flexibility, and accuracy of the least-squares-based weak Galerkin finite element method. Finally, the numerical examples cover a wide range of applied problems, including singularly perturbed reaction-diffusion equations and the flow of fluid in porous media with strong anisotropy and heterogeneity.
Second order upwind Lagrangian particle method for Euler equations
Samulyak, Roman; Chen, Hsin -Chiang; Yu, Kwangmin
2016-06-01
A new second order upwind Lagrangian particle method for solving Euler equations for compressible inviscid fluid or gas flows is proposed. Similar to smoothed particle hydrodynamics (SPH), the method represents fluid cells with Lagrangian particles and is suitable for the simulation of complex free surface / multiphase flows. The main contributions of our method, which is different from SPH in all other aspects, are (a) significant improvement of approximation of differential operators based on a polynomial fit via weighted least squares approximation and the convergence of prescribed order, (b) an upwind second-order particle-based algorithm with limiter, providing accuracy and long term stability, and (c) accurate resolution of states at free interfaces. In conclusion, numerical verification tests demonstrating the convergence order for fixed domain and free surface problems are presented.
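The first contribution, derivative approximation by a weighted least squares polynomial fit over scattered particles, can be sketched in a few lines of numpy (illustrative; the weight function and stencil below are assumptions, not the authors' exact choices):

```python
import numpy as np

def wls_derivative(x0, xn, un):
    """Estimate du/dx at x0 from neighbor positions xn and values un."""
    h = np.abs(xn - x0).max()                    # local smoothing length
    sw = np.exp(-0.5 * ((xn - x0) / h) ** 2)     # sqrt of Gaussian weights
    A = np.stack([np.ones_like(xn), xn - x0], axis=1)
    # weighted linear fit u ~ a + b (x - x0); the slope b is the estimate
    coef, *_ = np.linalg.lstsq(A * sw[:, None], un * sw, rcond=None)
    return coef[1]

xn = np.array([-0.11, -0.04, 0.05, 0.09, 0.16])  # scattered neighbors
print(wls_derivative(0.0, xn, np.sin(xn)))        # ~ cos(0) = 1
```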
NASA Astrophysics Data System (ADS)
Castro, Manuel J.; Gallardo, José M.; Marquina, Antonio
2017-10-01
We present recent advances in PVM (Polynomial Viscosity Matrix) methods based on internal approximations to the absolute value function, and compare them with Chebyshev-based PVM solvers. These solvers only require a bound on the maximum wave speed, so no spectral decomposition is needed. Another important feature of the proposed methods is that they are suitable to be written in Jacobian-free form, in which only evaluations of the physical flux are used. This is particularly interesting when considering systems for which the Jacobians involve complex expressions, e.g., the relativistic magnetohydrodynamics (RMHD) equations. On the other hand, the proposed Jacobian-free solvers have also been extended to the case of approximate DOT (Dumbser-Osher-Toro) methods, which can be regarded as simple and efficient approximations to the classical Osher-Solomon method, sharing most of it interesting features and being applicable to general hyperbolic systems. To test the properties of our schemes a number of numerical experiments involving the RMHD equations are presented, both in one and two dimensions. The obtained results are in good agreement with those found in the literature and show that our schemes are robust and accurate, running stable under a satisfactory time step restriction. It is worth emphasizing that, although this work focuses on RMHD, the proposed schemes are suitable to be applied to general hyperbolic systems.
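In generic form (symbols assumed for illustration), a PVM numerical flux replaces the absolute-value matrix of an upwind scheme by a polynomial evaluation P(A) that approximates |x| over the spectral interval of an intermediate matrix, which is why only a bound on the maximum wave speed is required:

```latex
F_{i+1/2} = \frac{F(u_{i}) + F(u_{i+1})}{2}
          - \frac{1}{2}\, P\!\left(A_{i+1/2}\right)\left(u_{i+1} - u_{i}\right)
```

When P is built from evaluations of F alone, the scheme admits the Jacobian-free form emphasized above.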
NASA Technical Reports Server (NTRS)
Desantis, A.
1994-01-01
In this paper the approximation problem for a class of optimal compensators for flexible structures is considered. The particular case of a simply supported truss with an offset antenna is dealt with. The nonrational positive real optimal compensator transfer function is determined, and it is proposed that an approximation scheme based on a continued fraction expansion method be used. Comparison with the more popular modal expansion technique is performed in terms of stability margin and parameters sensitivity of the relative approximated closed loop transfer functions.
Summation rules for a fully nonlocal energy-based quasicontinuum method
NASA Astrophysics Data System (ADS)
Amelang, J. S.; Venturini, G. N.; Kochmann, D. M.
2015-09-01
The quasicontinuum (QC) method coarse-grains crystalline atomic ensembles in order to bridge the scales from individual atoms to the micro- and mesoscales. A crucial cornerstone of all QC techniques, summation or quadrature rules efficiently approximate the thermodynamic quantities of interest. Here, we investigate summation rules for a fully nonlocal, energy-based QC method to approximate the total Hamiltonian of a crystalline atomic ensemble by a weighted sum over a small subset of all atoms in the crystal lattice. Our formulation does not conceptually differentiate between atomistic and coarse-grained regions and thus allows for seamless bridging without domain-coupling interfaces. We review traditional summation rules and discuss their strengths and weaknesses with a focus on energy approximation errors and spurious force artifacts. Moreover, we introduce summation rules which produce no residual or spurious force artifacts in centrosymmetric crystals in the large-element limit under arbitrary affine deformations in two dimensions (and marginal force artifacts in three dimensions), while allowing us to seamlessly bridge to full atomistics. Through a comprehensive suite of examples with spatially non-uniform QC discretizations in two and three dimensions, we compare the accuracy of the new scheme to various previous ones. Our results confirm that the new summation rules exhibit significantly smaller force artifacts and energy approximation errors. Our numerical benchmark examples include the calculation of elastic constants from completely random QC meshes and the inhomogeneous deformation of aggressively coarse-grained crystals containing nano-voids. In the elastic regime, we directly compare QC results to those of full atomistics to assess global and local errors in complex QC simulations. Going beyond elasticity, we illustrate the performance of the energy-based QC method with the new second-order summation rule with the help of nanoindentation examples with automatic mesh adaptation. Overall, our findings provide guidelines for the selection of summation rules for the fully nonlocal energy-based QC method.
Lagrangian particle method for compressible fluid dynamics
NASA Astrophysics Data System (ADS)
Samulyak, Roman; Wang, Xingyu; Chen, Hsin-Chiang
2018-06-01
A new Lagrangian particle method for solving Euler equations for compressible inviscid fluid or gas flows is proposed. Similar to smoothed particle hydrodynamics (SPH), the method represents fluid cells with Lagrangian particles and is suitable for the simulation of complex free surface/multiphase flows. The main contributions of our method, which is different from SPH in all other aspects, are (a) significant improvement of approximation of differential operators based on a polynomial fit via weighted least squares approximation and the convergence of prescribed order, (b) a second-order particle-based algorithm that reduces to the first-order upwind method at local extremal points, providing accuracy and long term stability, and (c) more accurate resolution of entropy discontinuities and states at free interfaces. While the method is consistent and convergent to a prescribed order, the conservation of momentum and energy is not exact and depends on the convergence order. The method is generalizable to coupled hyperbolic-elliptic systems. Numerical verification tests demonstrating the convergence order are presented as well as examples of complex multiphase flows.
Model-Free Adaptive Control for Unknown Nonlinear Zero-Sum Differential Game.
Zhong, Xiangnan; He, Haibo; Wang, Ding; Ni, Zhen
2018-05-01
In this paper, we present a new model-free globalized dual heuristic dynamic programming (GDHP) approach for discrete-time nonlinear zero-sum game problems. First, an online learning algorithm is proposed based on the GDHP method to solve the Hamilton-Jacobi-Isaacs equation associated with the optimal regulation control problem. By shifting the definition of the performance index backward one step, the proposed method relaxes the requirement for the system dynamics or an identifier. Then, three neural networks are established to approximate the optimal saddle point feedback control law, the disturbance law, and the performance index, respectively. Explicit updating rules for these three neural networks are provided based on data generated during online learning along the system trajectories. Stability analysis in terms of the neural network approximation errors is discussed based on the Lyapunov approach. Finally, two simulation examples are provided to show the effectiveness of the proposed method.
NASA Astrophysics Data System (ADS)
Natarajan, Sundararajan
2014-12-01
The main objectives of the paper are to (1) present an overview of nonlocal integral elasticity and Aifantis gradient elasticity theory and (2) discuss the application of partition of unity methods to study the response of low-dimensional structures. We present different choices of approximation functions for gradient elasticity, namely Lagrange interpolants, moving least-squares approximants and non-uniform rational B-splines. Next, we employ these approximation functions to study the response of nanobeams based on Euler-Bernoulli and Timoshenko theories as well as to study nanoplates based on first-order shear deformation theory. The response of nanobeams and nanoplates is studied using Eringen's nonlocal elasticity theory. The influence of the nonlocal parameter, the beam and plate aspect ratios and the boundary conditions on the global response is numerically studied. The influence of a crack on the axial vibration and buckling characteristics of nanobeams is also numerically studied.
Local density approximation in site-occupation embedding theory
NASA Astrophysics Data System (ADS)
Senjean, Bruno; Tsuchiizu, Masahisa; Robert, Vincent; Fromager, Emmanuel
2017-01-01
Site-occupation embedding theory (SOET) is a density functional theory (DFT)-based method which aims at modelling strongly correlated electrons. It is in principle exact and applicable to model and quantum chemical Hamiltonians. The theory is presented here for the Hubbard Hamiltonian. In contrast to conventional DFT approaches, the site (or orbital) occupations are deduced in SOET from a partially interacting system consisting of one (or more) impurity site(s) and non-interacting bath sites. The correlation energy of the bath is then treated implicitly by means of a site-occupation functional. In this work, we propose a simple impurity-occupation functional approximation based on the two-level (2L) Hubbard model which is referred to as two-level impurity local density approximation (2L-ILDA). Results obtained on a prototypical uniform eight-site Hubbard ring are promising. The extension of the method to larger systems and more sophisticated model Hamiltonians is currently in progress.
Huang, C.; Townshend, J.R.G.
2003-01-01
A stepwise regression tree (SRT) algorithm was developed for approximating complex nonlinear relationships. Based on the regression tree of Breiman et al. (BRT) and a stepwise linear regression (SLR) method, this algorithm represents an improvement over SLR in that it can approximate nonlinear relationships and over BRT in that it gives more realistic predictions. The applicability of this method to estimating subpixel forest cover was demonstrated using three test data sets, on all of which it gave more accurate predictions than SLR and BRT. SRT also generated more compact trees and performed better than or at least as well as BRT at all 10 equal forest proportion intervals ranging from 0 to 100%. This method is appealing for estimating subpixel land cover over large areas.
Ledermüller, Katrin; Schütz, Martin
2014-04-28
A multistate local CC2 response method for the calculation of analytic energy gradients with respect to nuclear displacements is presented for ground and electronically excited states. The gradient enables the search for equilibrium geometries of extended molecular systems. Laplace transform is used to partition the eigenvalue problem in order to obtain an effective singles eigenvalue problem and adaptive, state-specific local approximations. This leads to an approximation in the energy Lagrangian, which however is shown (by comparison with the corresponding gradient method without Laplace transform) to be of no concern for geometry optimizations. The accuracy of the local approximation is tested and the efficiency of the new code is demonstrated by application calculations devoted to a photocatalytic decarboxylation process of present interest.
Xu, Xin; Huang, Zhenhua; Graves, Daniel; Pedrycz, Witold
2014-12-01
In order to deal with the sequential decision problems with large or continuous state spaces, feature representation and function approximation have been a major research topic in reinforcement learning (RL). In this paper, a clustering-based graph Laplacian framework is presented for feature representation and value function approximation (VFA) in RL. By making use of clustering-based techniques, that is, K-means clustering or fuzzy C-means clustering, a graph Laplacian is constructed by subsampling in Markov decision processes (MDPs) with continuous state spaces. The basis functions for VFA can be automatically generated from spectral analysis of the graph Laplacian. The clustering-based graph Laplacian is integrated with a class of approximation policy iteration algorithms called representation policy iteration (RPI) for RL in MDPs with continuous state spaces. Simulation and experimental results show that, compared with previous RPI methods, the proposed approach needs fewer sample points to compute an efficient set of basis functions and the learning control performance can be improved for a variety of parameter settings.
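A minimal sketch of the clustering-based construction (not the authors' RPI code): sample states, cluster them with SciPy's kmeans2, build a graph Laplacian on the cluster centres, and take its low-order eigenvectors as smooth basis functions. The kernel width, neighbourhood radius, cluster count, and seed handling are illustrative assumptions.

```python
import numpy as np
from scipy.cluster.vq import kmeans2
from scipy.spatial.distance import cdist

rng = np.random.default_rng(0)
states = rng.uniform(-1.0, 1.0, size=(2000, 2))     # sampled MDP states
centres, _ = kmeans2(states, 50, minit="points", seed=0)

d = cdist(centres, centres)
W = np.exp(-d**2 / 0.1) * (d < 0.5)                 # weighted adjacency graph
L = np.diag(W.sum(axis=1)) - W                      # combinatorial Laplacian
eigvals, eigvecs = np.linalg.eigh(L)
basis = eigvecs[:, :10]                             # smooth proto-value functions

def features(s):
    """Features of a state: Laplacian basis values at its nearest centre."""
    return basis[np.argmin(np.linalg.norm(centres - s, axis=1))]

print(features(np.array([0.2, -0.3])))
```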
Selecting Faculty with Behavioral-Based Interviewing
ERIC Educational Resources Information Center
Hammons, James O.; Gansz, Joey L.
2005-01-01
In the corporate world, more and more companies have begun to use a more effective method of evaluating prospective employees. It is estimated that by 1996, approximately 20 to 30 percent of the nation's large companies had begun to use this more effective method known as behavioral-based interviewing (BI). This article explains what BI is and…
RAIM availability for supplemental GPS navigation
DOT National Transportation Integrated Search
1992-06-29
This paper examines GPS receiver autonomous integrity monitoring (RAIM) availability for supplemental navigation based on the approximate radial-error protection (ARP) method. This method applies ceiling levels for the ARP figure of merit to screen o...
Approximate likelihood calculation on a phylogeny for Bayesian estimation of divergence times.
dos Reis, Mario; Yang, Ziheng
2011-07-01
The molecular clock provides a powerful way to estimate species divergence times. If information on some species divergence times is available from the fossil or geological record, it can be used to calibrate a phylogeny and estimate divergence times for all nodes in the tree. The Bayesian method provides a natural framework to incorporate different sources of information concerning divergence times, such as information in the fossil and molecular data. Current models of sequence evolution are intractable in a Bayesian setting, and Markov chain Monte Carlo (MCMC) is used to generate the posterior distribution of divergence times and evolutionary rates. This method is computationally expensive, as it involves the repeated calculation of the likelihood function. Here, we explore the use of Taylor expansion to approximate the likelihood during MCMC iteration. The approximation is much faster than conventional likelihood calculation. However, the approximation is expected to be poor when the proposed parameters are far from the likelihood peak. We explore the use of parameter transforms (square root, logarithm, and arcsine) to improve the approximation to the likelihood curve. We found that the new methods, particularly the arcsine-based transform, provided very good approximations under relaxed clock models and also under the global clock model when the global clock is not seriously violated. The approximation is poorer for analysis under the global clock when the global clock is seriously wrong and should thus not be used. The results suggest that the approximate method may be useful for Bayesian dating analysis using large data sets.
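The idea of Taylor-expanding a log-likelihood around its peak, and the benefit of a variance-stabilizing transform, can be seen in a toy binomial example (not the phylogenetic likelihood itself); the data values are assumptions.

```python
import numpy as np

# Second-order Taylor approximation of a binomial log-likelihood around the
# MLE, in the raw parameter p and after the arcsine transform z = asin(sqrt(p)).
n, k = 50, 12
loglik = lambda p: k * np.log(p) + (n - k) * np.log(1 - p)

p_hat = k / n
z_hat = np.arcsin(np.sqrt(p_hat))

def taylor(f, x_hat, x, h=1e-5):
    d1 = (f(x_hat + h) - f(x_hat - h)) / (2 * h)              # ~0 at the MLE
    d2 = (f(x_hat + h) - 2 * f(x_hat) + f(x_hat - h)) / h**2  # curvature
    return f(x_hat) + d1 * (x - x_hat) + 0.5 * d2 * (x - x_hat) ** 2

p = 0.4                                    # a proposal far from p_hat = 0.24
approx_p = taylor(loglik, p_hat, p)
approx_z = taylor(lambda z: loglik(np.sin(z) ** 2), z_hat, np.arcsin(np.sqrt(p)))
print(loglik(p), approx_p, approx_z)       # the transformed expansion is closer
```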
Confidence bounds for normal and lognormal distribution coefficients of variation
Steve Verrill
2003-01-01
This paper compares the so-called exact approach for obtaining confidence intervals on normal distribution coefficients of variation to approximate methods. Approximate approaches were found to perform less well than the exact approach for large coefficients of variation and small sample sizes. Web-based computer programs are described for calculating confidence...
Closure to new results for an approximate method for calculating two-dimensional furrow infiltration
USDA-ARS?s Scientific Manuscript database
In a discussion paper, Ebrahimian and Noury (2015) raised several concerns about an approximate solution to the two-dimensional Richards equation presented by Bautista et al. (2014). The solution is based on a procedure originally proposed by Warrick et al. (2007). Such a solution is of practical i...
NASA Astrophysics Data System (ADS)
Sato, Kazunori; Dederichs, Peter H.; Katayama-Yoshida, Hiroshi
2007-02-01
We investigate the electronic structure and magnetic properties of AlN-, AlP-, AlAs-, AlSb-, InN-, InP-, InAs-, and InSb-based dilute magnetic semiconductors (DMS) with Mn impurities from first-principles. The electronic structure of DMS is calculated by using the Korringa-Kohn-Rostoker coherent potential approximation (KKR-CPA) method in connection with the local density approximation (LDA) and the LDA+U method. Describing the magnetic properties by a classical Heisenberg model, effective exchange interactions are calculated by applying magnetic force theorem for two impurities embedded in the CPA medium. With the calculated exchange interactions, TC is estimated by using the mean field approximation, the random phase approximation and the Monte Carlo simulation. It is found that the p-d exchange model [Dietl et al.: Science 287 (2000) 1019] is adequate for a limited class of DMS and insufficient to describe the ferromagnetism in wide gap semiconductor based DMS such as (Ga,Mn)N and the presently investigated (Al,Mn)N and (In,Mn)N.
Approximate N-Player Nonzero-Sum Game Solution for an Uncertain Continuous Nonlinear System.
Johnson, Marcus; Kamalapurkar, Rushikesh; Bhasin, Shubhendu; Dixon, Warren E
2015-08-01
An approximate online equilibrium solution is developed for an N-player nonzero-sum game subject to continuous-time nonlinear unknown dynamics and an infinite horizon quadratic cost. A novel actor-critic-identifier structure is used, wherein a robust dynamic neural network is used to asymptotically identify the uncertain system with additive disturbances, and a set of critic and actor NNs are used to approximate the value functions and equilibrium policies, respectively. The weight update laws for the actor neural networks (NNs) are generated using a gradient-descent method, and the critic NNs are generated by least-squares regression, which are both based on the modified Bellman error that is independent of the system dynamics. A Lyapunov-based stability analysis shows that uniformly ultimately bounded tracking is achieved, and a convergence analysis demonstrates that the approximate control policies converge to a neighborhood of the optimal solutions. The actor, critic, and identifier structures are implemented in real time continuously and simultaneously. Simulations on two- and three-player games illustrate the performance of the developed method.
Fuzzy-Rough Nearest Neighbour Classification
NASA Astrophysics Data System (ADS)
Jensen, Richard; Cornelis, Chris
A new fuzzy-rough nearest neighbour (FRNN) classification algorithm is presented in this paper, as an alternative to Sarkar's fuzzy-rough ownership function (FRNN-O) approach. In contrast to the latter, our method uses the nearest neighbours to construct lower and upper approximations of decision classes, and classifies test instances based on their membership to these approximations. In the experimental analysis, we evaluate our approach with both classical fuzzy-rough approximations (based on an implicator and a t-norm), as well as with the recently introduced vaguely quantified rough sets. Preliminary results are very good, and in general FRNN outperforms FRNN-O, as well as the traditional fuzzy nearest neighbour (FNN) algorithm.
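A minimal sketch of the fuzzy-rough idea under simplifying assumptions (Łukasiewicz implicator and t-norm, distance-based similarity, crisp class memberships); this is a simplified reading of FRNN, not the authors' implementation.

```python
import numpy as np

def frnn_classify(X_train, y_train, x, k=5):
    """Classify x by the mean of lower and upper approximation memberships
    over its k nearest neighbours (Lukasiewicz implicator / t-norm)."""
    d = np.linalg.norm(X_train - x, axis=1)
    sim = 1.0 - d / d.max()                       # fuzzy similarity relation
    nn = np.argsort(d)[:k]
    scores = {}
    for c in np.unique(y_train):
        member = (y_train[nn] == c).astype(float)
        lower = np.min(np.minimum(1.0, 1.0 - sim[nn] + member))   # implicator
        upper = np.max(np.maximum(0.0, sim[nn] + member - 1.0))   # t-norm
        scores[c] = 0.5 * (lower + upper)
    return max(scores, key=scores.get)

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (40, 2)), rng.normal(3, 1, (40, 2))])
y = np.array([0] * 40 + [1] * 40)
print(frnn_classify(X, y, np.array([2.8, 3.1])))  # expected: class 1
```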
Recursive approach to the moment-based phase unwrapping method.
Langley, Jason A; Brice, Robert G; Zhao, Qun
2010-06-01
The moment-based phase unwrapping algorithm approximates the phase map as a product of Gegenbauer polynomials, but the weight function for the Gegenbauer polynomials generates artificial singularities along the edge of the phase map. A method is presented to remove the singularities inherent to the moment-based phase unwrapping algorithm by approximating the phase map as a product of two one-dimensional Legendre polynomials and applying a recursive property of derivatives of Legendre polynomials. The proposed phase unwrapping algorithm is tested on simulated and experimental data sets. The results are then compared to those of PRELUDE 2D, a widely used phase unwrapping algorithm, and a Chebyshev-polynomial-based phase unwrapping algorithm. It was found that the proposed phase unwrapping algorithm provides results that are comparable to those obtained by using PRELUDE 2D and the Chebyshev phase unwrapping algorithm.
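The core approximation step, fitting a phase map by products of one-dimensional Legendre polynomials, can be sketched with NumPy's Legendre utilities; the synthetic phase surface and polynomial degrees are assumptions, and the unwrapping recursion itself is not reproduced.

```python
import numpy as np
from numpy.polynomial import legendre

# Least-squares fit of a smooth 2D phase map by products of 1D Legendre
# polynomials (degrees and grid are illustrative).
nx, ny, deg = 64, 64, 8
x = np.linspace(-1, 1, nx)
y = np.linspace(-1, 1, ny)
X, Y = np.meshgrid(x, y, indexing="ij")
phase = 2.0 * X**2 - 1.5 * X * Y + 0.5 * Y**3     # synthetic smooth phase

V = legendre.legvander2d(X.ravel(), Y.ravel(), [deg, deg])
coef, *_ = np.linalg.lstsq(V, phase.ravel(), rcond=None)
fit = legendre.legval2d(X, Y, coef.reshape(deg + 1, deg + 1))
print(np.max(np.abs(fit - phase)))                # near machine precision
```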
Kinematic precision of gear trains
NASA Technical Reports Server (NTRS)
Litvin, F. L.; Goldrich, R. N.; Coy, J. J.; Zaretsky, E. V.
1982-01-01
Kinematic precision is affected by errors which are the result of either intentional adjustments or accidental defects in manufacturing and assembly of gear trains. A method for the determination of kinematic precision of gear trains is described. The method is based on the exact kinematic relations for the contact point motions of the gear tooth surfaces under the influence of errors. An approximate method is also explained. Example applications of the general approximate methods are demonstrated for gear trains consisting of involute (spur and helical) gears, circular arc (Wildhaber-Novikov) gears, and spiral bevel gears. Gear noise measurements from a helicopter transmission are presented and discussed with relation to the kinematic precision theory.
Jia, Mengyu; Chen, Xueying; Zhao, Huijuan; Cui, Shanshan; Liu, Ming; Liu, Lingling; Gao, Feng
2015-01-26
Most analytical methods for describing light propagation in turbid media exhibit low effectiveness in the near-field of a collimated source. Motivated by the Charge Simulation Method in electromagnetic theory as well as the established discrete source based modeling, we herein report on an improved explicit model for a semi-infinite geometry, referred to as "Virtual Source" (VS) diffuse approximation (DA), suited to low-albedo media and short source-detector separations. In this model, the collimated light in the standard DA is analogously approximated as multiple isotropic point sources (VS) distributed along the incident direction. For performance enhancement, a fitting procedure between the calculated and realistic reflectances is adopted in the near-field to optimize the VS parameters (intensities and locations). To be practically applicable, an explicit 2VS-DA model is established based on closed-form derivations of the VS parameters for the typical ranges of the optical parameters. This parameterized scheme is proved to inherit the mathematical simplicity of the DA while considerably extending its validity in modeling near-field photon migration in low-albedo media. The superiority of the proposed VS-DA method to the established ones is demonstrated in comparison with Monte-Carlo simulations over wide ranges of the source-detector separation and the medium optical properties.
The generalized scattering coefficient method for plane wave scattering in layered structures
NASA Astrophysics Data System (ADS)
Liu, Yu; Li, Chao; Wang, Huai-Yu; Zhou, Yun-Song
2017-02-01
The generalized scattering coefficient (GSC) method is pedagogically derived and employed to study the scattering of plane waves in homogeneous and inhomogeneous layered structures. The numerical stabilities and accuracies of this method and other commonly used numerical methods are discussed and compared. For homogeneous layered structures, concise scattering formulas with clear physical interpretations and strong numerical stability are obtained by introducing the GSCs. For inhomogeneous layered structures, three numerical methods are employed: the staircase approximation method, the power series expansion method, and the differential equation method based on the GSCs. We investigate the accuracies and convergence behaviors of these methods by comparing their predictions to the exact results. The conclusions are as follows. The staircase approximation method has a slow convergence in spite of its simple and intuitive implementation, and a fine stratification within the inhomogeneous layer is required for obtaining accurate results. The expansion method results are sensitive to the expansion order, and the treatment becomes very complicated for relatively complex configurations, which restricts its applicability. By contrast, the GSC-based differential equation method possesses a simple implementation while providing fast and accurate results.
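A minimal transfer-matrix sketch of the staircase approximation for a 1D graded layer (the GSC formulation itself is not reproduced); the wavenumber profile and slab thickness are assumptions, and energy conservation R + T = 1 for a lossless profile serves as a check.

```python
import numpy as np

def staircase_scattering(k_slabs, d, k_in, k_out):
    """Piecewise-constant (staircase) transfer matrices for a plane wave
    crossing an inhomogeneous layer cut into uniform slabs of thickness d.
    Returns reflectance R and transmittance T."""
    ks = np.concatenate(([k_in], np.asarray(k_slabs), [k_out]))
    widths = np.concatenate(([0.0], np.full(len(k_slabs), d)))
    M = np.eye(2, dtype=complex)
    for k1, w, k2 in zip(ks[:-1], widths, ks[1:]):
        prop = np.diag([np.exp(1j * k1 * w), np.exp(-1j * k1 * w)])
        r = k1 / k2                                     # impedance-like ratio
        step = 0.5 * np.array([[1 + r, 1 - r], [1 - r, 1 + r]], dtype=complex)
        M = step @ prop @ M                             # propagate, then cross
    refl = -M[1, 0] / M[1, 1]                           # no wave incident from the right
    trans = np.linalg.det(M) / M[1, 1]
    return abs(refl) ** 2, (k_out / k_in) * abs(trans) ** 2

# Linearly graded wavenumber profile, resolved with a fine stratification.
k_slabs = np.linspace(1.0, 2.0, 200)
R, T = staircase_scattering(k_slabs, d=0.01, k_in=1.0, k_out=2.0)
print(R, T, R + T)   # R + T should be ~1
```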
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rüger, Robert, E-mail: rueger@scm.com; Department of Theoretical Chemistry, Vrije Universiteit Amsterdam, De Boelelaan 1083, 1081 HV Amsterdam; Wilhelm-Ostwald-Institut für Physikalische und Theoretische Chemie, Linnéstr. 2, 04103 Leipzig
2016-05-14
We propose a new method of calculating electronically excited states that combines a density functional theory based ground state calculation with a linear response treatment that employs approximations used in the time-dependent density functional based tight binding (TD-DFTB) approach. The new method, termed time-dependent density functional theory plus tight binding (TD-DFT+TB), does not rely on the DFTB parametrization and is therefore applicable to systems involving all combinations of elements. We show that the new method yields UV/Vis absorption spectra that are in excellent agreement with computationally much more expensive TD-DFT calculations. Errors in vertical excitation energies are reduced by a factor of two compared to TD-DFTB.
Pulsed single-blow regenerator testing
NASA Technical Reports Server (NTRS)
Oldson, J. C.; Knowles, T. R.; Rauch, J.
1992-01-01
A pulsed single-blow method has been developed for testing of Stirling regenerator materials performance. The method uses a tubular flow arrangement with a steady gas flow passing through a regenerator matrix sample that packs the flow channel for a short distance. A wire grid heater spanning the gas flow channel is used to heat a plug of gas by approximately 2 K for approximately 350 ms. Foil thermocouples monitor the gas temperature entering and leaving the sample. Data analysis based on a 1D incompressible-flow thermal model allows the extraction of Stanton number. A figure of merit involving heat transfer and pressure drop is used to present results for steel screens and steel felt. The observations show a lower figure of merit for the materials tested than is expected based on correlations obtained by other methods.
Projection methods for the numerical solution of Markov chain models
NASA Technical Reports Server (NTRS)
Saad, Youcef
1989-01-01
Projection methods for computing stationary probability distributions for Markov chain models are presented. A general projection method is a method which seeks an approximation from a subspace of small dimension to the original problem. Thus, the original matrix problem of size N is approximated by one of dimension m, typically much smaller than N. A particularly successful class of methods based on this principle is that of Krylov subspace methods, which utilize subspaces of the form span{v, Av, ..., A^(m-1)v}. These methods are effective in solving linear systems and eigenvalue problems (Lanczos, Arnoldi, ...) as well as nonlinear equations. They can be combined with more traditional iterative methods such as successive overrelaxation, symmetric successive overrelaxation, or with incomplete factorization methods to enhance convergence.
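A sketch of the Krylov-subspace idea using SciPy's Arnoldi-based eigensolver on a random sparse row-stochastic matrix (an assumption standing in for a real Markov chain model); the dominant left eigenvector of the transition matrix gives the stationary distribution.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigs

# Build a random sparse row-stochastic matrix P (identity added so every
# row has positive sum before normalization).
rng = np.random.default_rng(0)
N = 2000
A = sp.random(N, N, density=0.005, random_state=0, format="csr") + sp.eye(N)
P = sp.diags(1.0 / np.asarray(A.sum(axis=1)).ravel()) @ A

# Arnoldi iteration on P^T builds span{v, Av, ..., A^(m-1)v} internally and
# approximates the dominant eigenpair without forming the full N x N problem.
vals, vecs = eigs(P.T.tocsc(), k=1, which="LM")
pi = np.abs(vecs[:, 0].real)
pi /= pi.sum()
print(np.max(np.abs(P.T @ pi - pi)))   # residual of pi P = pi, should be tiny
```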
A posteriori error estimation for multi-stage Runge–Kutta IMEX schemes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chaudhry, Jehanzeb H.; Collins, J. B.; Shadid, John N.
2017-02-05
Implicit–Explicit (IMEX) schemes are widely used time integration methods for approximating solutions to a large class of problems. In this work, we develop accurate a posteriori error estimates of a quantity-of-interest for approximations obtained from multi-stage IMEX schemes. This is done by first defining a finite element method that is nodally equivalent to an IMEX scheme, then using typical methods for adjoint-based error estimation. Furthermore, the use of a nodally equivalent finite element method allows a decomposition of the error into multiple components, each describing the effect of a different portion of the method on the total error in a quantity-of-interest.
Aeroacoustic directivity via wave-packet analysis of mean or base flows
NASA Astrophysics Data System (ADS)
Edstrand, Adam; Schmid, Peter; Cattafesta, Louis
2017-11-01
Noise pollution is an ever-increasing problem in society, and knowledge of the directivity patterns of the sound radiation is required for prediction and control. Directivity is frequently determined through costly numerical simulations of the flow field combined with an acoustic analogy. We introduce a new computationally efficient method of finding directivity for a given mean or base flow field using wave-packet analysis (Trefethen, PRSA 2005). Wave-packet analysis approximates the eigenvalue spectrum with spectral accuracy by modeling the eigenfunctions as wave packets. With the wave packets determined, we then follow the method of Obrist (JFM, 2009), which uses Lighthill's acoustic analogy to determine the far-field sound radiation and directivity of wave-packet modes. We apply this method to a canonical jet flow (Gudmundsson and Colonius, JFM 2011) and determine the directivity of potentially unstable wave packets. Furthermore, we generalize the method to consider a three-dimensional flow field of a trailing vortex wake. In summary, we approximate the disturbances as wave packets and extract the directivity from the wave-packet approximation in a fraction of the time of standard aeroacoustic solvers. ONR Grant N00014-15-1-2403.
NASA Technical Reports Server (NTRS)
Kvernadze, George; Hagstrom,Thomas; Shapiro, Henry
1997-01-01
A key step for some methods dealing with the reconstruction of a function with jump discontinuities is the accurate approximation of the jumps and their locations. Various methods have been suggested in the literature to obtain this valuable information. In the present paper, we develop an algorithm based on identities which determine the jumps of a 2π-periodic bounded not-too-highly oscillating function by the partial sums of its differentiated Fourier series. The algorithm enables one to approximate the locations of discontinuities and the magnitudes of jumps of a bounded function. We study the accuracy of approximation and establish asymptotic expansions for the approximations of a 2π-periodic piecewise smooth function with one discontinuity. By an appropriate linear combination, obtained via derivatives of different order, we significantly improve the accuracy. Next, we use Richardson's extrapolation method to enhance the accuracy even more. For a function with multiple discontinuities we establish simple formulae which "eliminate" all discontinuities of the function but one. Then we treat the function as if it had one singularity, following the method described above.
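The basic identity can be checked numerically for the 2π-periodic sawtooth f(x) = (π − x)/2 on (0, 2π), whose Fourier series is the sum of sin(kx)/k over k ≥ 1 and whose jump at x = 0 equals π; the truncation order is an assumption.

```python
import numpy as np

# Differentiated Fourier partial sum of the sawtooth: S_n'(x) = sum cos(kx).
# Scaled by pi/n it tends to the jump size at a discontinuity and to 0 elsewhere.
n = 200                                   # truncation order (an assumption)
x = np.linspace(0.0, 2 * np.pi, 5001)
k = np.arange(1, n + 1)[:, None]
Sn_prime = np.cos(k * x).sum(axis=0)      # derivative of the partial sum

jump_est = (np.pi / n) * Sn_prime
i = int(np.argmax(np.abs(jump_est)))
print(x[i], jump_est[i])                  # location 0 (or 2*pi), magnitude ~pi
```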
NASA Astrophysics Data System (ADS)
Chen, Shuhong; Tan, Zhong
2007-11-01
In this paper, we consider nonlinear elliptic systems under a controllable growth condition. We use a new method introduced by Duzaar and Grotowski for proving partial regularity of weak solutions, based on a generalization of the technique of harmonic approximation. We extend previous partial regularity results under the natural growth condition to the case of the controllable growth condition, and directly establish the optimal Hölder exponent for the derivative of a weak solution.
Numerical realization of the variational method for generating self-trapped beams
NASA Astrophysics Data System (ADS)
Duque, Erick I.; Lopez-Aguayo, Servando; Malomed, Boris A.
2018-03-01
We introduce a numerical variational method based on the Rayleigh-Ritz optimization principle for predicting two-dimensional self-trapped beams in nonlinear media. This technique overcomes the limitation of the traditional variational approximation in performing analytical Lagrangian integration and differentiation. Approximate soliton solutions of a generalized nonlinear Schrödinger equation are obtained, demonstrating robustness of the beams of various types (fundamental, vortices, multipoles, azimuthons) in the course of their propagation. The algorithm offers possibilities to produce more sophisticated soliton profiles in general nonlinear models.
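A 1D toy version of the numerical Rayleigh-Ritz idea (the paper treats 2D beams): the energy integrals of a Gaussian trial for the focusing nonlinear Schrödinger equation are evaluated on a grid and optimized numerically instead of analytically. Grid, power, and trial family are assumptions; the exact 1D soliton energy −N³/24 gives a reference value.

```python
import numpy as np
from scipy.optimize import minimize_scalar

x = np.linspace(-20.0, 20.0, 4001)
dx = x[1] - x[0]
N = 2.0                                   # fixed beam power (an assumption)

def energy(w):
    """Energy of the trial u = A exp(-x^2/(2 w^2)) at power N for the 1D
    focusing NLS, E[u] = int(|u'|^2/2 - |u|^4/2) dx, by grid quadrature."""
    A2 = N / (np.sqrt(np.pi) * w)         # enforces int |u|^2 dx = N
    u = np.sqrt(A2) * np.exp(-x**2 / (2 * w**2))
    du = np.gradient(u, x)
    return np.sum(0.5 * du**2 - 0.5 * u**4) * dx

res = minimize_scalar(energy, bounds=(0.1, 10.0), method="bounded")
print(res.x, res.fun, -N**3 / 24)         # optimal width, E_var, exact soliton E
```

As expected for a variational bound, the Gaussian energy (about −1/π for N = 2) lies above the exact soliton value −1/3.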
A frozen Gaussian approximation-based multi-level particle swarm optimization for seismic inversion
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Jinglai, E-mail: jinglaili@sjtu.edu.cn; Lin, Guang, E-mail: lin491@purdue.edu; Computational Sciences and Mathematics Division, Pacific Northwest National Laboratory, Richland, WA 99352
2015-09-01
In this paper, we propose a frozen Gaussian approximation (FGA)-based multi-level particle swarm optimization (MLPSO) method for seismic inversion of high-frequency wave data. The method addresses two challenges in it: First, the optimization problem is highly non-convex, which makes it hard for gradient-based methods to reach global minima. This is tackled by MLPSO, which can escape from undesired local minima. Second, the high-frequency character of seismic waves requires a large number of grid points in direct computational methods, and thus renders an extremely high computational demand on the simulation of each sample in MLPSO. We overcome this difficulty by three steps: First, we use FGA to compute high-frequency wave propagation based on asymptotic analysis on phase plane; Then we design a constrained full waveform inversion problem to prevent the optimization search getting into regions of velocity where FGA is not accurate; Last, we solve the constrained optimization problem by MLPSO that employs FGA solvers with different fidelity. The performance of the proposed method is demonstrated by a two-dimensional full-waveform inversion example of the smoothed Marmousi model.
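A plain single-level PSO sketch on a highly non-convex test function, to fix ideas; the multi-level, FGA-coupled machinery of the paper is not reproduced, and all hyperparameters are assumptions.

```python
import numpy as np

def pso(f, lo, hi, n_particles=40, iters=300, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Basic particle swarm optimization: velocities mix inertia with
    attraction toward each particle's best point and the swarm's best point."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, (n_particles, lo.size))
    v = np.zeros_like(x)
    pbest, pbest_val = x.copy(), np.array([f(p) for p in x])
    for _ in range(iters):
        g = pbest[np.argmin(pbest_val)]                # swarm best
        r1, r2 = rng.random((2,) + x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        val = np.array([f(p) for p in x])
        better = val < pbest_val
        pbest[better], pbest_val[better] = x[better], val[better]
    return pbest[np.argmin(pbest_val)], pbest_val.min()

# Rastrigin: many local minima, global minimum 0 at the origin.
rastrigin = lambda z: 10 * z.size + np.sum(z**2 - 10 * np.cos(2 * np.pi * z))
print(pso(rastrigin, np.full(2, -5.12), np.full(2, 5.12)))
```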
Hamiltonian Monte Carlo acceleration using surrogate functions with random bases.
Zhang, Cheng; Shahbaba, Babak; Zhao, Hongkai
2017-11-01
For big data analysis, high computational cost for Bayesian methods often limits their applications in practice. In recent years, there have been many attempts to improve computational efficiency of Bayesian inference. Here we propose an efficient and scalable computational technique for a state-of-the-art Markov chain Monte Carlo method, namely, Hamiltonian Monte Carlo. The key idea is to explore and exploit the structure and regularity in parameter space for the underlying probabilistic model to construct an effective approximation of its geometric properties. To this end, we build a surrogate function to approximate the target distribution using properly chosen random bases and an efficient optimization process. The resulting method provides a flexible, scalable, and efficient sampling algorithm, which converges to the correct target distribution. We show that by choosing the basis functions and optimization process differently, our method can be related to other approaches for the construction of surrogate functions such as generalized additive models or Gaussian process models. Experiments based on simulated and real data show that our approach leads to substantially more efficient sampling algorithms compared to existing state-of-the-art methods.
NASA Astrophysics Data System (ADS)
Trujillo Bueno, J.; Fabiani Bendicho, P.
1995-12-01
Iterative schemes based on Gauss-Seidel (G-S) and optimal successive over-relaxation (SOR) iteration are shown to provide a dramatic increase in the speed with which non-LTE radiation transfer (RT) problems can be solved. The convergence rates of these new RT methods are identical to those of upper triangular nonlocal approximate operator splitting techniques, but the computing time per iteration and the memory requirements are similar to those of a local operator splitting method. In addition to these properties, both methods are particularly suitable for multidimensional geometry, since they neither require the actual construction of nonlocal approximate operators nor the application of any matrix inversion procedure. Compared with the currently used Jacobi technique, which is based on the optimal local approximate operator (see Olson, Auer, & Buchler 1986), the G-S method presented here is faster by a factor of 2. It gives excellent smoothing of the high-frequency error components, which makes it the iterative scheme of choice for multigrid radiative transfer. This G-S method can also be suitably combined with standard acceleration techniques to achieve even higher performance. Although the convergence rate of the optimal SOR scheme developed here for solving non-LTE RT problems is much higher than G-S, the computing time per iteration is also minimal, i.e., virtually identical to that of a local operator splitting method. While the conventional optimal local operator scheme provides the converged solution after a total CPU time (measured in arbitrary units) approximately equal to the number n of points per decade of optical depth, the time needed by this new method based on the optimal SOR iterations is only √n/(2√2). This method is competitive with those that result from combining the above-mentioned Jacobi and G-S schemes with the best acceleration techniques. Contrary to what happens with the local operator splitting strategy currently in use, these novel methods remain effective even under extreme non-LTE conditions in very fine grids.
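The relative convergence rates can be reproduced on a small model problem. The sketch below compares Jacobi, Gauss-Seidel, and optimally relaxed SOR sweeps on a 1D Poisson system (an assumption standing in for the radiative transfer operator); G-S needs roughly half as many sweeps as Jacobi, and SOR far fewer.

```python
import numpy as np

n = 50
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # 1D Poisson matrix
b = np.ones(n)
x_exact = np.linalg.solve(A, b)
omega = 2.0 / (1.0 + np.sin(np.pi / (n + 1)))          # optimal SOR factor

def sweep(x, method):
    if method == "jacobi":                             # uses only old values
        xn = np.empty_like(x)
        xn[0] = (b[0] + x[1]) / 2.0
        xn[1:-1] = (b[1:-1] + x[:-2] + x[2:]) / 2.0
        xn[-1] = (b[-1] + x[-2]) / 2.0
        return xn
    xn = x.copy()                                      # G-S/SOR update in place
    for i in range(n):
        left = xn[i - 1] if i > 0 else 0.0
        right = xn[i + 1] if i < n - 1 else 0.0
        gs = (b[i] + left + right) / 2.0
        xn[i] += (omega if method == "sor" else 1.0) * (gs - xn[i])
    return xn

for method in ("jacobi", "gauss-seidel", "sor"):
    x, it = np.zeros(n), 0
    while np.max(np.abs(x - x_exact)) > 1e-8 and it < 50000:
        x, it = sweep(x, method), it + 1
    print(method, it)    # sweep counts ordered: jacobi > gauss-seidel >> sor
```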
NASA Astrophysics Data System (ADS)
Yang, Lei; Yan, Hongyong; Liu, Hong
2017-03-01
The implicit staggered-grid finite-difference (ISFD) scheme is competitive for its great accuracy and stability, whereas its coefficients are conventionally determined by the Taylor-series expansion (TE) method, leading to a loss in numerical precision. In this paper, we modify the TE method using the minimax approximation (MA), and propose a new optimal ISFD scheme based on the modified TE (MTE) with MA method. The new ISFD scheme takes advantage of the TE method that guarantees great accuracy at small wavenumbers, and keeps the property of the MA method that keeps the numerical errors within a limited bound at the same time. Thus, it leads to great accuracy for numerical solution of the wave equations. We derive the optimal ISFD coefficients by applying the new method to the construction of the objective function, and using a Remez algorithm to minimize its maximum. Numerical analysis is made in comparison with the conventional TE-based ISFD scheme, indicating that the MTE-based ISFD scheme with appropriate parameters can widen the wavenumber range with high accuracy, and achieve greater precision than the conventional ISFD scheme. The numerical modeling results also demonstrate that the MTE-based ISFD scheme performs well in elastic wave simulation, and is more efficient than the conventional ISFD scheme for elastic modeling.
Biochemical simulations: stochastic, approximate stochastic and hybrid approaches.
Pahle, Jürgen
2009-01-01
Computer simulations have become an invaluable tool to study the sometimes counterintuitive temporal dynamics of (bio-)chemical systems. In particular, stochastic simulation methods have attracted increasing interest recently. In contrast to the well-known deterministic approach based on ordinary differential equations, they can capture effects that occur due to the underlying discreteness of the systems and random fluctuations in molecular numbers. Numerous stochastic, approximate stochastic and hybrid simulation methods have been proposed in the literature. In this article, they are systematically reviewed in order to guide the researcher and help her find the appropriate method for a specific problem.
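A minimal Gillespie-type exact stochastic simulation sketch for a reversible isomerization, as one concrete instance of the stochastic methods surveyed; rate constants and populations are assumptions.

```python
import numpy as np

def gillespie_isomerization(k1=1.0, k2=0.5, a0=100, b0=0, t_end=10.0, seed=0):
    """Exact stochastic simulation (SSA) for A <-> B, the discrete
    counterpart of the deterministic ODE dA/dt = -k1 A + k2 B."""
    rng = np.random.default_rng(seed)
    t, a, b = 0.0, a0, b0
    times, states = [t], [(a, b)]
    while t < t_end:
        props = np.array([k1 * a, k2 * b])        # reaction propensities
        total = props.sum()
        if total == 0:
            break
        t += rng.exponential(1.0 / total)         # waiting time to next event
        if rng.random() < props[0] / total:       # choose which reaction fires
            a, b = a - 1, b + 1
        else:
            a, b = a + 1, b - 1
        times.append(t)
        states.append((a, b))
    return np.array(times), np.array(states)

times, states = gillespie_isomerization()
print(states[-1], "vs deterministic equilibrium A =", 100 * 0.5 / 1.5)
```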
An approximate methods approach to probabilistic structural analysis
NASA Technical Reports Server (NTRS)
Mcclung, R. C.; Millwater, H. R.; Wu, Y.-T.; Thacker, B. H.; Burnside, O. H.
1989-01-01
A probabilistic structural analysis method (PSAM) is described which makes an approximate calculation of the structural response of a system, including the associated probabilistic distributions, with minimal computation time and cost, based on a simplified representation of the geometry, loads, and material. The method employs the fast probability integration (FPI) algorithm of Wu and Wirsching. Typical solution strategies are illustrated by formulations for a representative critical component chosen from the Space Shuttle Main Engine (SSME) as part of a major NASA-sponsored program on PSAM. Typical results are presented to demonstrate the role of the methodology in engineering design and analysis.
NASA Astrophysics Data System (ADS)
Fonseca, E. S. R.; de Jesus, M. E. P.
2007-07-01
The estimation of optical properties of highly turbid and opaque biological tissue is a difficult task since conventional purely optical methods rapidly lose sensitivity as the mean photon path length decreases. Photothermal methods, such as pulsed or frequency domain photothermal radiometry (FD-PTR), on the other hand, show remarkable sensitivity in experimental conditions that produce very feeble optical signals. Photothermal radiometry is primarily sensitive to the absorption coefficient, yielding considerably higher estimation errors on scattering coefficients. Conversely, purely optical methods such as Local Diffuse Reflectance (LDR) depend mainly on the scattering coefficient and yield much better estimates of this parameter. Therefore, at moderate transport albedos, the combination of photothermal and reflectance methods can improve considerably the sensitivity of detection of tissue optical properties. The authors have recently proposed a novel method that combines FD-PTR with LDR, aimed at improving sensitivity on the determination of both optical properties. Signal analysis was performed by globally fitting the experimental data to forward models based on Monte-Carlo simulations. Although this approach is accurate, the associated computational burden often limits its use as a forward model. Therefore, the application of analytical models based on the diffusion approximation offers a faster alternative. In this work, we propose the calculation of the diffuse reflectance and the fluence rate profiles under the δ-P1 approximation. This approach is known to approximate fluence rate expressions close to collimated sources and boundaries better than the standard diffusion approximation (SDA). We extend this study to the calculation of the diffuse reflectance profiles. The ability of the δ-P1-based model to provide good estimates of the absorption, scattering and anisotropy coefficients is tested against Monte-Carlo simulations over a wide range of scattering to absorption ratios. Experimental validation of the proposed method is accomplished by a set of measurements on solid absorbing and scattering phantoms.
Qing Liu; Zhihui Lai; Zongwei Zhou; Fangjun Kuang; Zhong Jin
2016-01-01
Low-rank matrix completion aims to recover a matrix from a small subset of its entries and has received much attention in the field of computer vision. Most existing methods formulate the task as a low-rank matrix approximation problem. A truncated nuclear norm has recently been proposed as a better approximation to the rank of a matrix than the nuclear norm. The corresponding optimization method, truncated nuclear norm regularization (TNNR), converges better than the nuclear norm minimization-based methods. However, it is not robust to the number of subtracted singular values and requires a large number of iterations to converge. In this paper, a TNNR method based on weighted residual error (TNNR-WRE) for matrix completion and its extension model (ETNNR-WRE) are proposed. TNNR-WRE assigns different weights to the rows of the residual error matrix in an augmented Lagrange function to accelerate the convergence of the TNNR method. The ETNNR-WRE is much more robust to the number of subtracted singular values than the TNNR-WRE, TNNR alternating direction method of multipliers, and TNNR accelerated proximal gradient with line search methods. Experimental results using both synthetic and real visual data sets show that the proposed TNNR-WRE and ETNNR-WRE methods perform better than TNNR and Iteratively Reweighted Nuclear Norm (IRNN) methods.
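A simplified completion iteration in the spirit of truncated-nuclear-norm methods: keep the leading r singular values untouched and shrink only the tail. This is an illustrative stand-in, not the authors' TNNR-WRE scheme, and the rank, threshold, and data are assumptions.

```python
import numpy as np

def truncated_svd_complete(M, mask, r=2, tau=0.1, iters=200):
    """At each step, soft-threshold only the singular values beyond the
    leading r, then restore the observed entries."""
    X = np.where(mask, M, 0.0)
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        s[r:] = np.maximum(s[r:] - tau, 0.0)       # shrink only the tail
        X = (U * s) @ Vt
        X[mask] = M[mask]                          # enforce observed data
    return X

rng = np.random.default_rng(0)
truth = rng.normal(size=(50, 2)) @ rng.normal(size=(2, 40))   # rank-2 matrix
mask = rng.random(truth.shape) < 0.5                          # 50% observed
X = truncated_svd_complete(truth, mask, r=2)
print(np.linalg.norm(X - truth) / np.linalg.norm(truth))      # small error
```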
Density functional theory calculations of III-N based semiconductors with mBJLDA
NASA Astrophysics Data System (ADS)
Gürel, Hikmet Hakan; Akıncı, Özden; Ünlü, Hilmi
2017-02-01
In this work, we present first principles calculations based on a full potential linear augmented plane-wave method (FP-LAPW) to calculate structural and electronic properties of III-V nitrides such as GaN, AlN, and InN in the zinc-blende cubic structure. First principles calculations using the local density approximation (LDA) and the generalized gradient approximation (GGA) underestimate the band gap. We propose a new potential, the modified Becke-Johnson local density approximation (MBJLDA), which combines the modified Becke-Johnson exchange potential with the LDA correlation potential to yield band gaps in better agreement with experiment. We compare various exchange-correlation potentials (LSDA, GGA, HSE, and MBJLDA) to determine band gaps and structural properties of semiconductors. We show that using the MBJLDA potential gives better agreement with experimental data for the band gaps of III-nitride-based semiconductors.
Automated prediction of protein function and detection of functional sites from structure.
Pazos, Florencio; Sternberg, Michael J E
2004-10-12
Current structural genomics projects are yielding structures for proteins whose functions are unknown. Accordingly, there is a pressing requirement for computational methods for function prediction. Here we present PHUNCTIONER, an automatic method for structure-based function prediction using automatically extracted functional sites (residues associated to functions). The method relates proteins with the same function through structural alignments and extracts 3D profiles of conserved residues. Functional features to train the method are extracted from the Gene Ontology (GO) database. The method extracts these features from the entire GO hierarchy and hence is applicable across the whole range of function specificity. 3D profiles associated with 121 GO annotations were extracted. We tested the power of the method both for the prediction of function and for the extraction of functional sites. The success of function prediction by our method was compared with the standard homology-based method. In the zone of low sequence similarity (approximately 15%), our method assigns the correct GO annotation in 90% of the protein structures considered, approximately 20% higher than inheritance of function from the closest homologue.
Methods in the study of discrete upper hybrid waves
NASA Astrophysics Data System (ADS)
Yoon, P. H.; Ye, S.; Labelle, J.; Weatherwax, A. T.; Menietti, J. D.
2007-11-01
Naturally occurring plasma waves characterized by fine frequency structure or discrete spectrum, detected by satellite, rocket-borne instruments, or ground-based receivers, can be interpreted as eigenmodes excited and trapped in field-aligned density structures. This paper overviews various theoretical methods to study such phenomena for a one-dimensional (1-D) density structure. Among the various methods are parabolic approximation, eikonal matching, eigenfunction matching, and full numerical solution based upon shooting method. Various approaches are compared against the full numerical solution. Among the analytic methods it is found that the eigenfunction matching technique best approximates the actual numerical solution. The analysis is further extended to 2-D geometry. A detailed comparative analysis between the eigenfunction matching and fully numerical methods is carried out for the 2-D case. Although in general the two methods compare favorably, significant differences are also found such that for application to actual observations it is prudent to employ the fully numerical method. Application of the methods developed in the present paper to actual geophysical problems will be given in a companion paper.
Error-Rate Estimation Based on Multi-Signal Flow Graph Model and Accelerated Radiation Tests
He, Wei; Wang, Yueke; Xing, Kefei; Deng, Wei; Zhang, Zelong
2016-01-01
A method of evaluating the single-event effect soft-error vulnerability of space instruments before launched has been an active research topic in recent years. In this paper, a multi-signal flow graph model is introduced to analyze the fault diagnosis and meantime to failure (MTTF) for space instruments. A model for the system functional error rate (SFER) is proposed. In addition, an experimental method and accelerated radiation testing system for a signal processing platform based on the field programmable gate array (FPGA) is presented. Based on experimental results of different ions (O, Si, Cl, Ti) under the HI-13 Tandem Accelerator, the SFER of the signal processing platform is approximately 10−3(error/particle/cm2), while the MTTF is approximately 110.7 h. PMID:27583533
Mining the protein data bank with CReF to predict approximate 3-D structures of polypeptides.
Dorn, Márcio; de Souza, Osmar Norberto
2010-01-01
In this paper we describe CReF, a Central Residue Fragment-based method to predict approximate 3-D structures of polypeptides by mining the Protein Data Bank (PDB). The approximate predicted structures are good enough to be used as starting conformations in refinement procedures employing state-of-the-art molecular mechanics methods such as molecular dynamics simulations. CReF is very fast and we illustrate its efficacy in three case studies of polypeptides whose sizes vary from 34 to 70 amino acids. As indicated by the RMSD values, our initial results show that the predicted structures adopt the expected fold, similar to the experimental ones.
Hybrid stochastic simulation of reaction-diffusion systems with slow and fast dynamics.
Strehl, Robert; Ilie, Silvana
2015-12-21
In this paper, we present a novel hybrid method to simulate discrete stochastic reaction-diffusion models arising in biochemical signaling pathways. We study moderately stiff systems, for which we can partition each reaction or diffusion channel into either a slow or fast subset, based on its propensity. Numerical approaches missing this distinction are often limited with respect to computational run time or approximation quality. We design an approximate scheme that remedies these pitfalls by using a new blending strategy of the well-established inhomogeneous stochastic simulation algorithm and the tau-leaping simulation method. The advantages of our hybrid simulation algorithm are demonstrated on three benchmarking systems, with special focus on approximation accuracy and efficiency.
The approximation of anomalous magnetic field by array of magnetized rods
NASA Astrophysics Data System (ADS)
Byzov, Denis; Muravyev, Lev; Fedorova, Natalia
2017-07-01
A method for calculating the vertical component of an anomalous magnetic field from its absolute value is presented. The conversion is based on approximating the anomalies of the magnetic induction modulus by a set of singular sources and subsequently calculating the vertical component of the field from the chosen distribution. Rods uniformly magnetized along their axes are used as the singular sources. The applicability of different nonlinear optimization methods to this task is analyzed. The algorithm is implemented using parallel computing technology on NVidia GPUs. The approximation and the calculation of the vertical component are demonstrated for the regional magnetic field of northern Eurasia.
SEE rate estimation based on diffusion approximation of charge collection
NASA Astrophysics Data System (ADS)
Sogoyan, Armen V.; Chumakov, Alexander I.; Smolin, Anatoly A.
2018-03-01
The integral rectangular parallelepiped (IRPP) method remains the main approach to single event rate (SER) prediction for aerospace systems, despite the growing number of issues impairing the method's validity when applied to scaled technology nodes. One such issue is the uncertainty of parameter extraction in the IRPP method, which can lead to a spread of several orders of magnitude in the subsequently calculated SER. The paper presents an alternative approach to SER estimation based on a diffusion approximation of the charge collection by an IC element and a geometrical interpretation of the SEE cross-section. In contrast to the IRPP method, the proposed model includes only two parameters, which are uniquely determined from the experimental data for normal incidence irradiation at an ion accelerator. This approach eliminates the necessity of arbitrary decisions during parameter extraction and, thus, greatly simplifies the calculation procedure and increases the robustness of the forecast.
Tamosiunaite, Minija; Asfour, Tamim; Wörgötter, Florentin
2009-03-01
Reinforcement learning methods can be used in robotics applications especially for specific target-oriented problems, for example the reward-based recalibration of goal directed actions. To this end, relatively large and continuous state-action spaces still need to be handled efficiently. The goal of this paper is, thus, to develop a novel, rather simple method which uses reinforcement learning with function approximation in conjunction with different reward-strategies for solving such problems. For the testing of our method, we use a four degree-of-freedom reaching problem in 3D-space simulated by a two-joint robot arm system with two DOF each. Function approximation is based on 4D, overlapping kernels (receptive fields) and the state-action space contains about 10,000 of these. Different types of reward structures are being compared, for example, reward-on-touching-only against reward-on-approach. Furthermore, forbidden joint configurations are punished. A continuous action space is used. In spite of a rather large number of states and the continuous action space, these reward/punishment strategies allow the system to find a good solution usually within about 20 trials. The efficiency of our method demonstrated in this test scenario suggests that it might be possible to use it on a real robot for problems where mixed rewards can be defined in situations where other types of learning might be difficult.
Fast online and index-based algorithms for approximate search of RNA sequence-structure patterns
2013-01-01
Background It is well known that the search for homologous RNAs is more effective if both sequence and structure information is incorporated into the search. However, current tools for searching with RNA sequence-structure patterns cannot fully handle mutations occurring on both these levels or are simply not fast enough for searching large sequence databases because of the high computational costs of the underlying sequence-structure alignment problem. Results We present new fast index-based and online algorithms for approximate matching of RNA sequence-structure patterns supporting a full set of edit operations on single bases and base pairs. Our methods efficiently compute semi-global alignments of structural RNA patterns and substrings of the target sequence whose costs satisfy a user-defined sequence-structure edit distance threshold. For this purpose, we introduce a new computing scheme to optimally reuse the entries of the required dynamic programming matrices for all substrings and combine it with a technique for avoiding the alignment computation of non-matching substrings. Our new index-based methods exploit suffix arrays preprocessed from the target database and achieve running times that are sublinear in the size of the searched sequences. To support the description of RNA molecules that fold into complex secondary structures with multiple ordered sequence-structure patterns, we use fast algorithms for the local or global chaining of approximate sequence-structure pattern matches. The chaining step removes spurious matches from the set of intermediate results, in particular of patterns with little specificity. In benchmark experiments on the Rfam database, our improved online algorithm is faster than the best previous method by up to a factor of 45. Our best new index-based algorithm achieves a speedup by a factor of 560. Conclusions The presented methods achieve considerable speedups compared to the best previous method. This, together with the expected sublinear running time of the presented index-based algorithms, allows for the first time approximate matching of RNA sequence-structure patterns in large sequence databases. Beyond the algorithmic contributions, we provide with RaligNAtor a robust and well documented open-source software package implementing the algorithms presented in this manuscript. The RaligNAtor software is available at http://www.zbh.uni-hamburg.de/ralignator. PMID:23865810
Geodesic regression for image time-series.
Niethammer, Marc; Huang, Yang; Vialard, François-Xavier
2011-01-01
Registration of image-time series has so far been accomplished (i) by concatenating registrations between image pairs, (ii) by solving a joint estimation problem resulting in piecewise geodesic paths between image pairs, (iii) by kernel based local averaging or (iv) by augmenting the joint estimation with additional temporal irregularity penalties. Here, we propose a generative model extending least squares linear regression to the space of images by using a second-order dynamic formulation for image registration. Unlike previous approaches, the formulation allows for a compact representation of an approximation to the full spatio-temporal trajectory through its initial values. The method also opens up possibilities to design image-based approximation algorithms. The resulting optimization problem is solved using an adjoint method.
Inclusion-Based Effective Medium Models for the Permeability of a 3D Fractured Rock Mass
NASA Astrophysics Data System (ADS)
Ebigbo, A.; Lang, P. S.; Paluszny, A.; Zimmerman, R. W.
2015-12-01
Following the work of Saevik et al. (Transp. Porous Media, 2013; Geophys. Prosp., 2014), we investigate the ability of classical inclusion-based effective medium theories to predict the macroscopic permeability of a fractured rock mass. The fractures are assumed to be thin, oblate spheroids, and are treated as porous media in their own right, with permeability kf, and are embedded in a homogeneous matrix having permeability km. At very low fracture densities, the effective permeability is given exactly by a well-known expression that goes back at least as far as Fricke (Phys. Rev., 1924). For non-trivial fracture densities, an effective medium approximation must be employed. We have investigated several such approximations: Maxwell's method, the differential method, and the symmetric and asymmetric versions of the self-consistent approximation. The predictions of the various approximate models are tested against the results of explicit numerical simulations, averaged over numerous statistical realizations for each set of parameters. Each of the various effective medium approximations satisfies the Hashin-Shtrikman (H-S) bounds. Unfortunately, these bounds are much too far apart to provide quantitatively useful estimates of keff. For the case of zero matrix permeability, the well-known approximation of Snow, which is based on network considerations rather than a continuum approach, is shown to essentially coincide with the upper H-S bound, thereby proving that the commonly made assumption that Snow's equation is an "upper bound" is indeed correct. This problem is actually characterized by two small parameters, the aspect ratio of the spheroidal fractures, α, and the permeability ratio, κ = km/kf. Two different regimes can be identified, corresponding to α < κ and κ < α, and expressions for each of the effective medium approximations are developed in both regimes. In both regimes, the symmetric version of the self-consistent approximation is the most accurate.
Weir, Christopher J; Butcher, Isabella; Assi, Valentina; Lewis, Stephanie C; Murray, Gordon D; Langhorne, Peter; Brady, Marian C
2018-03-07
Rigorous, informative meta-analyses rely on availability of appropriate summary statistics or individual participant data. For continuous outcomes, especially those with naturally skewed distributions, summary information on the mean or variability often goes unreported. While full reporting of original trial data is the ideal, we sought to identify methods for handling unreported mean or variability summary statistics in meta-analysis. We undertook two systematic literature reviews to identify methodological approaches used to deal with missing mean or variability summary statistics. Five electronic databases were searched, in addition to the Cochrane Colloquium abstract books and the Cochrane Statistics Methods Group mailing list archive. We also conducted cited reference searching and emailed topic experts to identify recent methodological developments. Details recorded included the description of the method, the information required to implement the method, any underlying assumptions and whether the method could be readily applied in standard statistical software. We provided a summary description of the methods identified, illustrating selected methods in example meta-analysis scenarios. For missing standard deviations (SDs), following screening of 503 articles, fifteen methods were identified in addition to those reported in a previous review. These included Bayesian hierarchical modelling at the meta-analysis level; summary statistic level imputation based on observed SD values from other trials in the meta-analysis; a practical approximation based on the range; and algebraic estimation of the SD based on other summary statistics. Following screening of 1124 articles for methods estimating the mean, one approximate Bayesian computation approach and three papers based on alternative summary statistics were identified. Illustrative meta-analyses showed that when replacing a missing SD the approximation using the range minimised loss of precision and generally performed better than omitting trials. When estimating missing means, a formula using the median, lower quartile and upper quartile performed best in preserving the precision of the meta-analysis findings, although in some scenarios, omitting trials gave superior results. Methods based on summary statistics (minimum, maximum, lower quartile, upper quartile, median) reported in the literature facilitate more comprehensive inclusion of randomised controlled trials with missing mean or variability summary statistics within meta-analyses.
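Two of the simple rules mentioned above can be stated in a few lines. The constants follow the common approximate rules (range/4 for a missing SD; (q1 + median + q3)/3 for a missing mean); the accuracy of range/4 depends on the sample size, and the simulated skewed sample is an assumption.

```python
import numpy as np

def sd_from_range(minimum, maximum):
    """Rough SD approximation from the range (best for moderate n)."""
    return (maximum - minimum) / 4.0

def mean_from_quartiles(q1, median, q3):
    """Mean approximation from the median and quartiles."""
    return (q1 + median + q3) / 3.0

rng = np.random.default_rng(0)
sample = rng.lognormal(mean=0.0, sigma=0.5, size=50)   # skewed outcome
q1, med, q3 = np.percentile(sample, [25, 50, 75])
print(sd_from_range(sample.min(), sample.max()), sample.std(ddof=1))
print(mean_from_quartiles(q1, med, q3), sample.mean())
```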
Non-additive non-interacting kinetic energy of rare gas dimers
NASA Astrophysics Data System (ADS)
Jiang, Kaili; Nafziger, Jonathan; Wasserman, Adam
2018-03-01
Approximations of the non-additive non-interacting kinetic energy (NAKE) as an explicit functional of the density are the basis of several electronic structure methods that provide improved computational efficiency over standard Kohn-Sham calculations. However, within most fragment-based formalisms, there is no unique exact NAKE, making it difficult to develop general, robust approximations for it. When adjustments are made to the embedding formalisms to guarantee uniqueness, approximate functionals may be more meaningfully compared to the exact unique NAKE. We use numerically accurate inversions to study the exact NAKE of several rare-gas dimers within partition density functional theory, a method that provides the uniqueness for the exact NAKE. We find that the NAKE decreases nearly exponentially with atomic separation for the rare-gas dimers. We compute the logarithmic derivative of the NAKE with respect to the bond length for our numerically accurate inversions as well as for several approximate NAKE functionals. We show that standard approximate NAKE functionals do not reproduce the correct behavior for this logarithmic derivative and propose two new NAKE functionals that do. The first of these is based on a re-parametrization of a conjoint Perdew-Burke-Ernzerhof (PBE) functional. The second is a simple, physically motivated non-decomposable NAKE functional that matches the asymptotic decay constant without fitting.
Berlin, Konstantin; O’Leary, Dianne P.; Fushman, David
2011-01-01
We present and evaluate a rigid-body, deterministic, molecular docking method, called ELMDOCK, that relies solely on the three-dimensional structure of the individual components and the overall rotational diffusion tensor of the complex, obtained from nuclear spin-relaxation measurements. We also introduce a docking method, called ELMPATIDOCK, derived from ELMDOCK and based on the new concept of combining the shape-related restraints from rotational diffusion with those from residual dipolar couplings, along with ambiguous contact/interface-related restraints obtained from chemical shift perturbations. ELMDOCK and ELMPATIDOCK use two novel approximations of the molecular rotational diffusion tensor that allow computationally efficient docking. We show that these approximations are accurate enough to properly dock the two components of a complex without the need to recompute the diffusion tensor at each iteration step. We analyze the accuracy, robustness, and efficiency of these methods using synthetic relaxation data for a large variety of protein-protein complexes. We also test our method on three protein systems for which the structure of the complex and experimental relaxation data are available, and analyze the effect of flexible unstructured tails on the outcome of docking. Additionally, we describe a method for integrating the new approximation methods into the existing docking approaches that use the rotational diffusion tensor as a restraint. The results show that the proposed docking method is robust against experimental errors in the relaxation data or structural rearrangements upon complex formation and is computationally more efficient than current methods. The developed approximations are accurate enough to be used in structure refinement protocols. PMID:21604302
NASA Astrophysics Data System (ADS)
Voloshinov, V. V.
2018-03-01
In computations related to mathematical programming problems, one often has to consider approximate, rather than exact, solutions that satisfy the constraints of the problem and the optimality criterion with a certain error. For determining stopping rules of iterative procedures, for analyzing the stability of solutions with respect to errors in the initial data, and so on, a justified characterization of such solutions that is independent of the numerical method used to obtain them is needed. A necessary δ-optimality condition in the smooth mathematical programming problem that generalizes the Karush-Kuhn-Tucker theorem to the case of approximate solutions is obtained. The Lagrange multipliers corresponding to the approximate solution are determined by solving an approximating quadratic programming problem.
Empirical likelihood-based confidence intervals for mean medical cost with censored data.
Jeyarajah, Jenny; Qin, Gengsheng
2017-11-10
In this paper, we propose empirical likelihood methods based on influence function and jackknife techniques for constructing confidence intervals for mean medical cost with censored data. We conduct a simulation study to compare the coverage probabilities and interval lengths of our proposed confidence intervals with those of the existing normal approximation-based confidence intervals and bootstrap confidence intervals. The proposed methods have better finite-sample performances than existing methods. Finally, we illustrate our proposed methods with a relevant example.
White, Alec F.; Head-Gordon, Martin; McCurdy, C. William
2017-01-30
The computation of Siegert energies by analytic continuation of bound state energies has recently been applied to shape resonances in polyatomic molecules by several authors. Here, we critically evaluate a recently proposed analytic continuation method based on low order (type III) Padé approximants as well as an analytic continuation method based on high order (type II) Padé approximants. We compare three classes of stabilizing potentials: Coulomb potentials, Gaussian potentials, and attenuated Coulomb potentials. These methods are applied to a model potential where the correct answer is known exactly and to the ²Πg shape resonance of N₂⁻, which has been studied extensively by other methods. Both the choice of stabilizing potential and method of analytic continuation prove to be important to the accuracy of the results. We conclude that an attenuated Coulomb potential is the most effective of the three for bound state analytic continuation methods. With the proper potential, such methods show promise for algorithmic determination of the positions and widths of molecular shape resonances.
Approximation of the ruin probability using the scaled Laplace transform inversion
Mnatsakanov, Robert M.; Sarkisian, Khachatur; Hakobyan, Artak
2015-01-01
The problem of recovering the ruin probability in the classical risk model based on the scaled Laplace transform inversion is studied. It is shown how to overcome the problem of evaluating the ruin probability at large values of the initial surplus process. Comparisons of the proposed approximations with those based on Laplace transform inversion using the fixed Talbot algorithm, as well as with those using the Trefethen–Weideman–Schmelzer and maximum entropy methods, are presented via a simulation study.
Entropy Viscosity and L1-based Approximations of PDEs: Exploiting Sparsity
2015-10-23
AFRL-AFOSR-VA-TR-2015-0337. Final report by Jean-Luc Guermond, Texas A&M University, covering 01-07-2012 to 30-06-2015. Nonlinear conservation equations can be stabilized by using the so-called entropy viscosity method, and the project proposed to investigate this new technique together with L1-based approximations of PDEs that exploit sparsity.
Symmetric Resonance Charge Exchange Cross Section Based on Impact Parameter Treatment
NASA Technical Reports Server (NTRS)
Omidvar, Kazem; Murphy, Kendrah; Atlas, Robert (Technical Monitor)
2002-01-01
Using a two-state impact parameter approximation, a calculation has been carried out to obtain symmetric resonance charge transfer cross sections between nine ions and their parent atoms or molecules. Calculation is based on a two-dimensional numerical integration. The method is mostly suited for hydrogenic and some closed shell atoms. Good agreement has been obtained with the results of laboratory measurements for the ion-atom pairs H+-H, He+-He, and Ar+-Ar. Several approximations in a similar published calculation have been eliminated.
Baek, Hyun Jae; Shin, JaeWook; Jin, Gunwoo; Cho, Jaegeol
2017-10-24
Photoplethysmographic signals are useful for heart rate variability analysis in practical ambulatory applications. While reducing the sampling rate of signals is an important consideration for modern wearable devices that enable 24/7 continuous monitoring, few studies have investigated how to compensate for the low timing resolution of low-sampling-rate signals to achieve accurate heart rate variability analysis. In this study, we utilized the parabola approximation method and measured it against the conventional cubic spline interpolation method for the time, frequency, and nonlinear domain variables of heart rate variability. For each parameter, the intra-class correlation, standard error of measurement, Bland-Altman 95% limits of agreement and root mean squared relative error are presented. The elapsed time required to compute each interpolation algorithm was also investigated. The results indicated that parabola approximation is a simple, fast, and accurate method for compensating the low timing resolution of pulse beat intervals, with performance comparable to the conventional cubic spline interpolation method. Even though the absolute values of the heart rate variability variables calculated using a signal sampled at 20 Hz did not exactly match those calculated using a reference signal sampled at 250 Hz, the parabola approximation method remains a good interpolation method for assessing trends in HRV measurements for low-power wearable applications.
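As a sketch of how a parabola fit can recover sub-sample peak timing from a low-sampling-rate pulse signal, the snippet below fits a parabola through three samples around a detected beat. The authors' exact formulation and preprocessing may differ; the sampling rate and test signal here are illustrative.

```python
import numpy as np

def parabolic_peak_time(y, i, fs):
    """Refine the time of a local maximum at sample i by fitting a parabola
    through samples (i-1, i, i+1) of a signal sampled at fs Hz."""
    y0, y1, y2 = y[i - 1], y[i], y[i + 1]
    denom = y0 - 2.0 * y1 + y2
    delta = 0.0 if denom == 0 else 0.5 * (y0 - y2) / denom
    return (i + delta) / fs  # seconds; |delta| < 1 for a true local max

# Toy pulse sampled at 20 Hz with a true peak at 0.53 s:
fs = 20.0
t = np.arange(0.0, 1.0, 1.0 / fs)
y = np.exp(-((t - 0.53) ** 2) / 0.01)
i = int(np.argmax(y))
print(parabolic_peak_time(y, i, fs))  # close to 0.53, despite the 50 ms grid
```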
NASA Technical Reports Server (NTRS)
Hofmann, Douglas C. (Inventor); Roberts, Scott N. (Inventor)
2017-01-01
Systems and methods in accordance with embodiments of the invention fabricate objects including metallic glass-based materials using ultrasonic welding. In one embodiment, a method of fabricating an object that includes a metallic glass-based material includes: ultrasonically welding at least one ribbon to a surface; where at least one ribbon that is ultrasonically welded to a surface has a thickness of less than approximately 150 μm; and where at least one ribbon that is ultrasonically welded to a surface includes a metallic glass-based material.
NASA Technical Reports Server (NTRS)
Unal, Resit; Morris, W. Douglas; White, Nancy H.; Lepsch, Roger A.; Brown, Richard W.
2000-01-01
This paper describes the development of parametric models for estimating operational reliability and maintainability (R&M) characteristics for reusable vehicle concepts, based on vehicle size and technology support level. An R&M analysis tool (RMAT) and response surface methods are utilized to build parametric approximation models for rapidly estimating operational R&M characteristics such as mission completion reliability. These models, which approximate RMAT, can then be utilized for fast analysis of operational requirements, for lifecycle cost estimating, and for multidisciplinary design optimization.
NASA Astrophysics Data System (ADS)
Altıparmak, Hamit; Al Shahadat, Mohamad; Kiani, Ehsan; Dimililer, Kamil
2018-04-01
Robotic agriculture requires smart and workable techniques to substitute machine intelligence for human intelligence. Strawberry is an important Mediterranean product, and enhancing its productivity requires modern, machine-based methods. Whereas a human identifies disease-infected leaves by eye, the machine should also be capable of vision-based disease identification. The objective of this paper is to practically verify the applicability of a new computer-vision method for discriminating between healthy and disease-infected strawberry leaves that requires neither neural networks nor time-consuming training. The proposed method was tested under outdoor lighting conditions using a regular DSLR camera without any particular lens. Since the type and degree of disease infection are approximated much as a human brain would judge them, a fuzzy decision maker classifies the leaves in the images captured on-site, emulating the properties of human vision. Optimizing the fuzzy parameters for a typical strawberry production area at a summer mid-day in Cyprus produced 96% accuracy for segmented iron deficiency and 93% accuracy for the other segmented class, using a typical human instant classification approximation as the benchmark, a higher accuracy than a human eye identifier. The fuzzy-based classifier provides an approximate result for deciding whether a leaf is healthy or not.
NASA Astrophysics Data System (ADS)
García-Moreno, Angel-Iván; González-Barbosa, José-Joel; Ramírez-Pedraza, Alfonso; Hurtado-Ramos, Juan B.; Ornelas-Rodriguez, Francisco-Javier
2016-04-01
Computer-based reconstruction models can be used to approximate urban environments. These models are usually based on several mathematical approximations and the usage of different sensors, which implies dependency on many variables. The sensitivity analysis presented in this paper is used to weigh the relative importance of each uncertainty contributor to the calibration of a panoramic camera-LiDAR system. Both sensors are used for three-dimensional urban reconstruction. Simulated and experimental tests were conducted. For the simulated tests we analyze and compare the calibration parameters using the Monte Carlo and Latin hypercube sampling techniques. Sensitivity analysis for each variable involved in the calibration was computed by the Sobol method, which is based on the analysis of the variance breakdown, and the Fourier amplitude sensitivity test method, which is based on Fourier analysis. Sensitivity analysis is an essential tool in simulation modeling and for performing error propagation assessments.
NASA Astrophysics Data System (ADS)
Hu, Jiexiang; Zhou, Qi; Jiang, Ping; Shao, Xinyu; Xie, Tingli
2018-01-01
Variable-fidelity (VF) modelling methods have been widely used in complex engineering system design to mitigate the computational burden. Building a VF model generally includes two parts: design of experiments and metamodel construction. In this article, an adaptive sampling method based on improved hierarchical kriging (ASM-IHK) is proposed to refine the VF model. First, an improved hierarchical kriging model is developed as the metamodel, in which the low-fidelity model is varied through a polynomial response surface function to capture the characteristics of the high-fidelity model. Secondly, to reduce local approximation errors, an active learning strategy based on a sequential sampling method is introduced to make full use of the information already acquired at the current sampling points and to guide the sampling process of the high-fidelity model. Finally, two numerical examples and the modelling of the aerodynamic coefficient for an aircraft are provided to demonstrate the approximation capability of the proposed approach in comparison with three other metamodelling methods and two sequential sampling methods. The results show that ASM-IHK provides a more accurate metamodel at the same simulation cost, which is very important in metamodel-based engineering design problems.
NASA Astrophysics Data System (ADS)
Filatov, Michael; Cremer, Dieter
2005-02-01
The regular approximation to the normalized elimination of the small component (NESC) in the modified Dirac equation has been developed and presented in matrix form. The matrix form of the infinite-order regular approximation (IORA) expressions, obtained in [Filatov and Cremer, J. Chem. Phys. 118, 6741 (2003)] using the resolution of the identity, is the exact matrix representation and corresponds to the zeroth-order regular approximation to NESC (NESC-ZORA). Because IORA (=NESC-ZORA) is a variationally stable method, it was used as a suitable starting point for the development of the second-order regular approximation to NESC (NESC-SORA). As shown for hydrogenlike ions, NESC-SORA energies are closer to the exact Dirac energies than the energies from the fifth-order Douglas-Kroll approximation, which is much more computationally demanding than NESC-SORA. For the application of IORA (=NESC-ZORA) and NESC-SORA to many-electron systems, the number of the two-electron integrals that need to be evaluated (identical to the number of the two-electron integrals of a full Dirac-Hartree-Fock calculation) was drastically reduced by using the resolution of the identity technique. An approximation was derived, which requires only the two-electron integrals of a nonrelativistic calculation. The accuracy of this approach was demonstrated for heliumlike ions. The total energy based on the approximate integrals deviates from the energy calculated with the exact integrals by less than 5×10⁻⁹ hartree. NESC-ZORA and NESC-SORA can easily be implemented in any nonrelativistic quantum chemical program. Their application is comparable in cost with that of nonrelativistic methods. The methods can be run with density functional theory and any wave function method. NESC-SORA has the advantage that it does not imply a picture change.
NASA Astrophysics Data System (ADS)
Yang, Dongzheng; Hu, Xixi; Zhang, Dong H.; Xie, Daiqian
2018-02-01
Solving the time-independent close coupling equations of a diatom-diatom inelastic collision system by using the rigorous close-coupling approach is numerically difficult because of its expensive matrix manipulation. The coupled-states approximation decouples the centrifugal matrix by neglecting the important Coriolis couplings completely. In this work, a new approximation method based on the coupled-states approximation is presented and applied to time-independent quantum dynamic calculations. This approach only considers the most important Coriolis coupling with the nearest neighbors and ignores weaker Coriolis couplings with farther K channels. As a result, it reduces the computational costs without a significant loss of accuracy. Numerical tests for para-H2+ortho-H2 and para-H2+HD inelastic collision were carried out and the results showed that the improved method dramatically reduces the errors due to the neglect of the Coriolis couplings in the coupled-states approximation. This strategy should be useful in quantum dynamics of other systems.
Pseudospectral collocation methods for fourth order differential equations
NASA Technical Reports Server (NTRS)
Malek, Alaeddin; Phillips, Timothy N.
1994-01-01
Collocation schemes are presented for solving linear fourth order differential equations in one and two dimensions. The variational formulation of the model fourth order problem is discretized by approximating the integrals by a Gaussian quadrature rule generalized to include the values of the derivative of the integrand at the boundary points. Collocation schemes are derived which are equivalent to this discrete variational problem. An efficient preconditioner based on a low-order finite difference approximation to the same differential operator is presented. The corresponding multidomain problem is also considered and interface conditions are derived. Pseudospectral approximations which are C1 continuous at the interfaces are used in each subdomain to approximate the solution. The approximations are also shown to be C3 continuous at the interfaces asymptotically. A complete analysis of the collocation scheme for the multidomain problem is provided. The extension of the method to the biharmonic equation in two dimensions is discussed and results are presented for a problem defined in a nonrectangular domain.
Approximating a DSM-5 Diagnosis of PTSD Using DSM-IV Criteria
Rosellini, Anthony J.; Stein, Murray B.; Colpe, Lisa J.; Heeringa, Steven G.; Petukhova, Maria V.; Sampson, Nancy A.; Schoenbaum, Michael; Ursano, Robert J.; Kessler, Ronald C.
2015-01-01
Background: Diagnostic criteria for DSM-5 posttraumatic stress disorder (PTSD) are in many ways similar to DSM-IV criteria, raising the possibility that it might be possible to closely approximate DSM-5 diagnoses using DSM-IV symptoms. If so, the resulting transformation rules could be used to pool research data based on the two criteria sets. Methods: The Pre-Post Deployment Study (PPDS) of the Army Study to Assess Risk and Resilience in Servicemembers (Army STARRS) administered a blended 30-day DSM-IV and DSM-5 PTSD symptom assessment based on the civilian PTSD Checklist for DSM-IV (PCL-C) and the PTSD Checklist for DSM-5 (PCL-5). This assessment was completed by 9,193 soldiers from three US Army Brigade Combat Teams approximately three months after returning from Afghanistan. PCL-C items were used to operationalize conservative and broad approximations of DSM-5 PTSD diagnoses. The operating characteristics of these approximations were examined compared to diagnoses based on actual DSM-5 criteria. Results: The estimated 30-day prevalence of DSM-5 PTSD based on conservative (4.3%) and broad (4.7%) approximations of DSM-5 criteria using DSM-IV symptom assessments were similar to estimates based on actual DSM-5 criteria (4.6%). Both approximations had excellent sensitivity (92.6-95.5%), specificity (99.6-99.9%), total classification accuracy (99.4-99.6%), and area under the receiver operating characteristic curve (0.96-0.98). Conclusions: DSM-IV symptoms can be used to approximate DSM-5 diagnoses of PTSD among recently-deployed soldiers, making it possible to recode symptom-level data from earlier DSM-IV studies to draw inferences about DSM-5 PTSD. However, replication is needed in broader trauma-exposed samples to evaluate the external validity of this finding.
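The operating characteristics quoted above follow from standard confusion-matrix algebra; a minimal sketch with hypothetical counts (not the study's data):

```python
def operating_characteristics(tp, fp, fn, tn):
    """Sensitivity, specificity, and total classification accuracy of an
    approximate diagnosis against the reference (actual DSM-5) diagnosis."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return sensitivity, specificity, accuracy

# Hypothetical counts, NOT the study's data:
print(operating_characteristics(tp=400, fp=10, fn=23, tn=8760))
```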
Estimation of Noise Properties for TV-regularized Image Reconstruction in Computed Tomography
Sánchez, Adrian A.
2016-01-01
A method for predicting the image covariance resulting from total-variation-penalized iterative image reconstruction (TV-penalized IIR) is presented and demonstrated in a variety of contexts. The method is validated against the sample covariance from statistical noise realizations for a small image using a variety of comparison metrics. Potential applications for the covariance approximation include investigation of image properties such as object- and signal-dependence of noise, and noise stationarity. These applications are demonstrated, along with the construction of image pixel variance maps for two-dimensional 128 × 128 pixel images. Methods for extending the proposed covariance approximation to larger images and improving computational efficiency are discussed. Future work will apply the developed methodology to the construction of task-based image quality metrics such as the Hotelling observer detectability for TV-based IIR.
NASA Astrophysics Data System (ADS)
Xu, Lei; Zheng, Xiaoxiang; Zhang, Hengyi; Yu, Yajun
1998-09-01
Accurate edge detection of retinal vessels is a prerequisite for quantitative analysis of subtle morphological changes of retinal vessels under different pathological conditions. A novel method for edge detection of retinal vessels is presented in this paper. Methods: (1) Wavelet-based image preprocessing. (2) The signed edge detection algorithm and mathematical morphological operations are applied to obtain the approximate regions that contain retinal vessels. (3) By convolving the preprocessed image with a LoG operator only on the detected approximate regions of retinal vessels, followed by edge refining, clear edge maps of the retinal vessels are quickly obtained. Results: A detailed performance evaluation against existing techniques is given to demonstrate the strong features of our method. Conclusions: True edge locations of retinal vessels can be rapidly detected with continuous vessel structures, fewer non-vessel segments left, and insensitivity to noise. The method is also suitable for other application fields such as road edge detection.
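A rough sketch of step (3), LoG filtering restricted to previously detected vessel regions, is given below. The sigma value, the zero-crossing test, and the mask are illustrative assumptions, not the authors' exact pipeline.

```python
import numpy as np
from scipy import ndimage

def log_edges_in_regions(image, vessel_mask, sigma=2.0):
    """LoG response restricted to approximate vessel regions, with a simple
    zero-crossing test marking candidate edge pixels."""
    log = ndimage.gaussian_laplace(image.astype(float), sigma=sigma)
    zc = np.zeros(log.shape, dtype=bool)
    # sign change against the right or lower neighbour => zero crossing
    zc[:, :-1] |= np.sign(log[:, :-1]) != np.sign(log[:, 1:])
    zc[:-1, :] |= np.sign(log[:-1, :]) != np.sign(log[1:, :])
    return zc & vessel_mask  # evaluate edges only inside detected regions
```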
On Bayesian Testing of Additive Conjoint Measurement Axioms Using Synthetic Likelihood
ERIC Educational Resources Information Center
Karabatsos, George
2017-01-01
This article introduces a Bayesian method for testing the axioms of additive conjoint measurement. The method is based on an importance sampling algorithm that performs likelihood-free, approximate Bayesian inference using a synthetic likelihood to overcome the analytical intractability of this testing problem. This new method improves upon…
Kinematic precision of gear trains
NASA Technical Reports Server (NTRS)
Litvin, F. L.; Goldrich, R. N.; Coy, J. J.; Zaretsky, E. V.
1983-01-01
Kinematic precision is affected by errors which are the result of either intentional adjustments or accidental defects in manufacturing and assembly of gear trains. A method for the determination of kinematic precision of gear trains is described. The method is based on the exact kinematic relations for the contact point motions of the gear tooth surfaces under the influence of errors. An approximate method is also explained. Example applications of the general approximate methods are demonstrated for gear trains consisting of involute (spur and helical) gears, circular arc (Wildhaber-Novikov) gears, and spiral bevel gears. Gear noise measurements from a helicopter transmission are presented and discussed with relation to the kinematic precision theory. Previously announced in STAR as N82-32733
Overview of Krylov subspace methods with applications to control problems
NASA Technical Reports Server (NTRS)
Saad, Youcef
1989-01-01
An overview of projection methods based on Krylov subspaces is given, with emphasis on their application to solving matrix equations that arise in control problems. The main idea of Krylov subspace methods is to generate a basis of the Krylov subspace span{v, Av, ..., A^(m-1)v} and seek an approximate solution to the original problem from this subspace. Thus, the original matrix problem of size N is approximated by one of dimension m, typically much smaller than N. Krylov subspace methods have been very successful in solving linear systems and eigenvalue problems and are now just becoming popular for solving nonlinear equations. It is shown how they can be used to solve partial pole placement problems, Sylvester's equation, and Lyapunov's equation.
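The basis generation described here is classically done with the Arnoldi process; a minimal numpy sketch (not from the report itself):

```python
import numpy as np

def arnoldi(A, v, m):
    """Orthonormal basis V of span{v, Av, ..., A^(m-1)v} and the small
    Hessenberg matrix H representing A on that subspace (modified
    Gram-Schmidt orthogonalization)."""
    n = v.size
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    V[:, 0] = v / np.linalg.norm(v)
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):
            H[i, j] = V[:, i] @ w
            w = w - H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-12:            # invariant subspace found early
            return V[:, :j + 1], H[:j + 1, :j + 1]
        V[:, j + 1] = w / H[j + 1, j]
    return V[:, :m], H[:m, :m]             # size-m projected problem
```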
Numerical approximations for fractional diffusion equations via a Chebyshev spectral-tau method
NASA Astrophysics Data System (ADS)
Doha, Eid H.; Bhrawy, Ali H.; Ezz-Eldien, Samer S.
2013-10-01
In this paper, a class of fractional diffusion equations with variable coefficients is considered. An accurate and efficient spectral tau technique for solving the fractional diffusion equations numerically is proposed. This method is based upon Chebyshev tau approximation together with the Chebyshev operational matrix of Caputo fractional differentiation. Such an approach has the advantage of reducing the problem to the solution of a system of algebraic equations, which may then be solved by any standard numerical technique. We apply this general method to solve four specific examples. In each of the examples considered, the numerical results show that the proposed method is of high accuracy and is efficient for solving the time-dependent fractional diffusion equations.
Anandakrishnan, Ramu; Scogland, Tom R. W.; Fenley, Andrew T.; Gordon, John C.; Feng, Wu-chun; Onufriev, Alexey V.
2010-01-01
Tools that compute and visualize biomolecular electrostatic surface potential have been used extensively for studying biomolecular function. However, determining the surface potential for large biomolecules on a typical desktop computer can take days or longer using currently available tools and methods. Two commonly used techniques to speed up these types of electrostatic computations are approximations based on multi-scale coarse-graining and parallelization across multiple processors. This paper demonstrates that for the computation of electrostatic surface potential, these two techniques can be combined to deliver significantly greater speed-up than either one separately, something that is not always possible in general. Specifically, the electrostatic potential computation, using an analytical linearized Poisson Boltzmann (ALPB) method, is approximated using the hierarchical charge partitioning (HCP) multiscale method, and parallelized on an ATI Radeon 4870 graphics processing unit (GPU). The implementation delivers a combined 934-fold speed-up for a 476,040-atom viral capsid, compared to an equivalent non-parallel implementation on an Intel E6550 CPU without the approximation. This speed-up is significantly greater than the 42-fold speed-up for the HCP approximation alone or the 182-fold speed-up for the GPU alone.
Ergodicity of the Stochastic Nosé-Hoover Heat Bath
NASA Astrophysics Data System (ADS)
Lo, Wei Chung; Li, Baowen
2010-07-01
We numerically study the ergodicity of the stochastic Nosé-Hoover heat bath, whose formalism is based on the Markovian approximation for the Nosé-Hoover equation [J. Phys. Soc. Jpn. 77 (2008) 103001]. The approximation leads to a Langevin-like equation driven by a fluctuating dissipative force and multiplicative Gaussian white noise. The steady state solution of the associated Fokker-Planck equation is the canonical distribution. We investigate the dynamics of this method for the cases of (i) a free particle, (ii) nonlinear oscillators and (iii) lattice chains. We derive the Fokker-Planck equation for the free particle and present an approximate analytical solution for the stationary distribution in the context of the Markovian approximation. Numerical simulation results for nonlinear oscillators show that this method results in a Gaussian distribution for the particles' velocity. We also employ the method as a heat bath to study nonequilibrium heat flow in one-dimensional Fermi-Pasta-Ulam (FPU-β) and Frenkel-Kontorova (FK) lattices. The establishment of well-defined temperature profiles is observed only when the lattice size is large. Our results provide numerical justification for such a Markovian approximation for classical single- and many-body systems.
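The Markovian reduction described above yields a Langevin-like equation; as a generic illustration only (the paper's fluctuating dissipative force and multiplicative noise are more elaborate), here is an Euler-Maruyama step for a standard Langevin thermostat with unit mass:

```python
import numpy as np

def langevin_step(x, p, force, gamma, kT, dt, rng):
    """Euler-Maruyama step for dp = (F(x) - gamma*p)dt + sqrt(2*gamma*kT)dW,
    unit mass; a generic Langevin thermostat, standing in for the paper's
    Langevin-like heat-bath equation."""
    x_new = x + p * dt
    p_new = p + (force(x) - gamma * p) * dt \
              + np.sqrt(2.0 * gamma * kT * dt) * rng.normal()
    return x_new, p_new

# Harmonic oscillator; the long-run momentum histogram approaches the
# canonical (Gaussian) distribution at temperature kT:
rng = np.random.default_rng(0)
x, p = 1.0, 0.0
for _ in range(100_000):
    x, p = langevin_step(x, p, lambda q: -q, 1.0, 1.0, 1e-3, rng)
```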
Gradients estimation from random points with volumetric tensor in turbulence
NASA Astrophysics Data System (ADS)
Watanabe, Tomoaki; Nagata, Koji
2017-12-01
We present an estimation method of fully-resolved/coarse-grained gradients from randomly distributed points in turbulence. The method is based on a linear approximation of spatial gradients expressed with the volumetric tensor, which is a 3 × 3 matrix determined by a geometric distribution of the points. The coarse grained gradient can be considered as a low pass filtered gradient, whose cutoff is estimated with the eigenvalues of the volumetric tensor. The present method, the volumetric tensor approximation, is tested for velocity and passive scalar gradients in incompressible planar jet and mixing layer. Comparison with a finite difference approximation on a Cartesian grid shows that the volumetric tensor approximation computes the coarse grained gradients fairly well at a moderate computational cost under various conditions of spatial distributions of points. We also show that imposing the solenoidal condition improves the accuracy of the present method for solenoidal vectors, such as a velocity vector in incompressible flows, especially when the number of the points is not large. The volumetric tensor approximation with 4 points poorly estimates the gradient because of anisotropic distribution of the points. Increasing the number of points from 4 significantly improves the accuracy. Although the coarse grained gradient changes with the cutoff length, the volumetric tensor approximation yields the coarse grained gradient whose magnitude is close to the one obtained by the finite difference. We also show that the velocity gradient estimated with the present method well captures the turbulence characteristics such as local flow topology, amplification of enstrophy and strain, and energy transfer across scales.
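A minimal sketch of the linear-approximation idea: assemble the 3×3 volumetric tensor from point displacements and solve for the gradient in the least-squares sense. The weighting, filtering, and solenoidal constraint discussed in the paper are omitted here.

```python
import numpy as np

def ls_gradient(x0, f0, pts, vals):
    """Gradient estimate at x0 from scattered points: solve T g = b with the
    3x3 volumetric tensor T = sum(dx dx^T) and b = sum(dx df)."""
    dx = pts - x0          # (N, 3) displacements of the sample points
    df = vals - f0         # (N,) differences of the sampled field
    T = dx.T @ dx          # volumetric tensor
    return np.linalg.solve(T, dx.T @ df)

# Exact for a linear field f = 2x + 3y - z:
rng = np.random.default_rng(1)
pts = rng.normal(scale=0.1, size=(6, 3))
vals = pts @ np.array([2.0, 3.0, -1.0])
print(ls_gradient(np.zeros(3), 0.0, pts, vals))  # ~[2, 3, -1]
```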
Minimal-Approximation-Based Decentralized Backstepping Control of Interconnected Time-Delay Systems.
Choi, Yun Ho; Yoo, Sung Jin
2016-12-01
A decentralized adaptive backstepping control design using minimal function approximators is proposed for nonlinear large-scale systems with unknown unmatched time-varying delayed interactions and unknown backlash-like hysteresis nonlinearities. Compared with existing decentralized backstepping methods, the contribution of this paper is to design a simple local control law for each subsystem, consisting of an actual control with one adaptive function approximator, without requiring the use of multiple function approximators and regardless of the order of each subsystem. The virtual controllers for each subsystem are used as intermediate signals for designing a local actual control at the last step. For each subsystem, a lumped unknown function including the unknown nonlinear terms and the hysteresis nonlinearities is derived at the last step and is estimated by one function approximator. Thus, the proposed approach only uses one function approximator to implement each local controller, while existing decentralized backstepping control methods require the number of function approximators equal to the order of each subsystem and a calculation of virtual controllers to implement each local actual controller. The stability of the total controlled closed-loop system is analyzed using the Lyapunov stability theorem.
Feature selection using probabilistic prediction of support vector regression.
Yang, Jian-Bo; Ong, Chong-Jin
2011-06-01
This paper presents a new wrapper-based feature selection method for support vector regression (SVR) using its probabilistic predictions. The method computes the importance of a feature by aggregating the difference, over the feature space, of the conditional density functions of the SVR prediction with and without the feature. As the exact computation of this importance measure is expensive, two approximations are proposed. The effectiveness of the measure using these approximations, in comparison to several other existing feature selection methods for SVR, is evaluated on both artificial and real-world problems. The results of the experiments show that the proposed method generally performs better than, or at least as well as, the existing methods, with a notable advantage when the dataset is sparse.
Krylov subspace methods - Theory, algorithms, and applications
NASA Technical Reports Server (NTRS)
Saad, Youcef
1990-01-01
Projection methods based on Krylov subspaces for solving various types of scientific problems are reviewed. The main idea of this class of methods, when applied to a linear system Ax = b, is to generate in some manner an approximate solution to the original problem from the so-called Krylov subspace span{b, Ab, ..., A^(m-1)b}. Thus, the original problem of size N is approximated by one of dimension m, typically much smaller than N. Krylov subspace methods have been very successful in solving linear systems and eigenvalue problems and are now becoming popular for solving nonlinear equations. The main ideas in Krylov subspace methods are shown and their use in solving linear systems, eigenvalue problems, parabolic partial differential equations, Liapunov matrix equations, and nonlinear systems of equations are discussed.
FFT multislice method--the silver anniversary.
Ishizuka, Kazuo
2004-02-01
The first paper on the FFT multislice method was published in 1977, a quarter of a century ago. The formula was extended in 1982 to include a large tilt of the incident beam relative to the specimen surface. Since then, with advances in computing power, the FFT multislice method has been successfully applied to coherent CBED and HAADF-STEM simulations. However, because the multislice formula is built on some physical approximations as well as approximations in the numerical procedure, there seem to be controversial conclusions in the literature on the multislice method. In this report, the physical implications of the multislice method are reviewed based on the formula for tilted illumination. Then, some results on coherent CBED and HAADF-STEM simulations are presented.
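For reference, one slice of the FFT multislice recursion in its standard small-angle, untilted form (the 1982 extension modifies the propagator for beam tilt); the grid setup is an assumption for illustration:

```python
import numpy as np

def multislice_step(psi, t_slice, wavelength, dz, kx, ky):
    """psi_{n+1} = IFFT[ P(k) * FFT( t_n(r) * psi_n(r) ) ] with the Fresnel
    propagator P = exp(-i*pi*lambda*dz*k^2); small-angle, untilted form."""
    propagator = np.exp(-1j * np.pi * wavelength * dz * (kx**2 + ky**2))
    return np.fft.ifft2(propagator * np.fft.fft2(t_slice * psi))

# Grid setup for an N x N slice of real-space sampling dx (assumed values):
# k = np.fft.fftfreq(N, d=dx); kx, ky = np.meshgrid(k, k)
```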
Similarity-transformed equation-of-motion vibrational coupled-cluster theory.
Faucheaux, Jacob A; Nooijen, Marcel; Hirata, So
2018-02-07
A similarity-transformed equation-of-motion vibrational coupled-cluster (STEOM-XVCC) method is introduced as a one-mode theory with an effective vibrational Hamiltonian, which is similarity transformed twice so that its lower-order operators are dressed with higher-order anharmonic effects. The first transformation uses an exponential excitation operator, defining the equation-of-motion vibrational coupled-cluster (EOM-XVCC) method, and the second uses an exponential excitation-deexcitation operator. From diagonalization of this doubly similarity-transformed Hamiltonian in the small one-mode excitation space, the method simultaneously computes accurate anharmonic vibrational frequencies of all fundamentals, which have unique significance in vibrational analyses. We establish a diagrammatic method of deriving the working equations of STEOM-XVCC and prove their connectedness and thus size-consistency as well as the exact equality of its frequencies with the corresponding roots of EOM-XVCC. We furthermore elucidate the similarities and differences between electronic and vibrational STEOM methods and between STEOM-XVCC and vibrational many-body Green's function theory based on the Dyson equation, which is also an anharmonic one-mode theory. The latter comparison inspires three approximate STEOM-XVCC methods utilizing the common approximations made in the Dyson equation: the diagonal approximation, a perturbative expansion of the Dyson self-energy, and the frequency-independent approximation. The STEOM-XVCC method including up to the simultaneous four-mode excitation operator in a quartic force field and its three approximate variants are formulated and implemented in computer codes with the aid of computer algebra, and they are applied to small test cases with varied degrees of anharmonicity.
Asymptotic confidence intervals for the Pearson correlation via skewness and kurtosis.
Bishara, Anthony J; Li, Jiexiang; Nash, Thomas
2018-02-01
When bivariate normality is violated, the default confidence interval of the Pearson correlation can be inaccurate. Two new methods were developed based on the asymptotic sampling distribution of Fisher's z' under the general case where bivariate normality need not be assumed. In Monte Carlo simulations, the most successful of these methods relied on the Vale and Maurelli (1983, Psychometrika, 48, 465) family to approximate a distribution via the marginal skewness and kurtosis of the sample data. In Simulation 1, this method provided more accurate confidence intervals of the correlation in non-normal data, at least as compared to no adjustment of the Fisher z' interval, or to adjustment via the sample joint moments. In Simulation 2, this approximate distribution method performed favourably relative to common non-parametric bootstrap methods, but its performance was mixed relative to an observed imposed bootstrap and two other robust methods (PM1 and HC4). No method was completely satisfactory. An advantage of the approximate distribution method, though, is that it can be implemented even without access to raw data if sample skewness and kurtosis are reported, making the method particularly useful for meta-analysis. Supporting information includes R code.
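For contrast, the default interval that the new methods adjust is the textbook Fisher z' interval, sketched below; the skewness/kurtosis correction itself is in the article's supporting R code.

```python
import numpy as np
from scipy import stats

def fisher_z_ci(r, n, conf=0.95):
    """Default Fisher z' interval for a Pearson correlation; it assumes
    bivariate normality, which is what the adjusted methods relax."""
    z = np.arctanh(r)
    se = 1.0 / np.sqrt(n - 3)
    crit = stats.norm.ppf(0.5 + conf / 2.0)
    return np.tanh(z - crit * se), np.tanh(z + crit * se)

print(fisher_z_ci(0.45, n=80))  # approximately (0.26, 0.61)
```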
Lagrangian particle method for compressible fluid dynamics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Samulyak, Roman; Wang, Xingyu; Chen, Hsin -Chiang
A new Lagrangian particle method for solving Euler equations for compressible inviscid fluid or gas flows is proposed. Similar to smoothed particle hydrodynamics (SPH), the method represents fluid cells with Lagrangian particles and is suitable for the simulation of complex free surface / multi-phase flows. The main contributions of our method, which is different from SPH in all other aspects, are (a) significant improvement of the approximation of differential operators based on a polynomial fit via weighted least squares approximation and convergence of prescribed order, (b) a second-order particle-based algorithm that reduces to the first-order upwind method at local extremal points, providing accuracy and long term stability, and (c) more accurate resolution of entropy discontinuities and states at free interfaces. While the method is consistent and convergent to a prescribed order, the conservation of momentum and energy is not exact and depends on the convergence order. The method is generalizable to coupled hyperbolic-elliptic systems. Numerical verification tests demonstrating the convergence order are presented as well as examples of complex multiphase flows.
Time-stable overset grid method for hyperbolic problems using summation-by-parts operators
NASA Astrophysics Data System (ADS)
Sharan, Nek; Pantano, Carlos; Bodony, Daniel J.
2018-05-01
A provably time-stable method for solving hyperbolic partial differential equations arising in fluid dynamics on overset grids is presented in this paper. The method uses interface treatments based on the simultaneous approximation term (SAT) penalty method and derivative approximations that satisfy the summation-by-parts (SBP) property. Time-stability is proven using energy arguments in a norm that naturally relaxes to the standard diagonal norm when the overlap reduces to a traditional multiblock arrangement. The proposed overset interface closures are time-stable for arbitrary overlap arrangements. The information between grids is transferred using Lagrangian interpolation applied to the incoming characteristics, although other interpolation schemes could also be used. The conservation properties of the method are analyzed. Several one-, two-, and three-dimensional, linear and non-linear numerical examples are presented to confirm the stability and accuracy of the method. A performance comparison between the proposed SAT-based interface treatment and the commonly-used approach of injecting the interpolated data onto each grid is performed to highlight the efficacy of the SAT method.
Tests for detecting overdispersion in models with measurement error in covariates.
Yang, Yingsi; Wong, Man Yu
2015-11-30
Measurement error in covariates can affect the accuracy in count data modeling and analysis. In overdispersion identification, the true mean-variance relationship can be obscured under the influence of measurement error in covariates. In this paper, we propose three tests for detecting overdispersion when covariates are measured with error: a modified score test and two score tests based on the proposed approximate likelihood and quasi-likelihood, respectively. The proposed approximate likelihood is derived under the classical measurement error model, and the resulting approximate maximum likelihood estimator is shown to have superior efficiency. Simulation results also show that the score test based on approximate likelihood outperforms the test based on quasi-likelihood and other alternatives in terms of empirical power. By analyzing a real dataset containing the health-related quality-of-life measurements of a particular group of patients, we demonstrate the importance of the proposed methods by showing that the analyses with and without measurement error correction yield significantly different results.
Approach for Input Uncertainty Propagation and Robust Design in CFD Using Sensitivity Derivatives
NASA Technical Reports Server (NTRS)
Putko, Michele M.; Taylor, Arthur C., III; Newman, Perry A.; Green, Lawrence L.
2002-01-01
An implementation of the approximate statistical moment method for uncertainty propagation and robust optimization for a quasi 3-D Euler CFD code is presented. Given uncertainties in statistically independent, random, normally distributed input variables, first- and second-order statistical moment procedures are performed to approximate the uncertainty in the CFD output. Efficient calculation of both first- and second-order sensitivity derivatives is required. In order to assess the validity of the approximations, these moments are compared with statistical moments generated through Monte Carlo simulations. The uncertainties in the CFD input variables are also incorporated into a robust optimization procedure. For this optimization, statistical moments involving first-order sensitivity derivatives appear in the objective function and system constraints. Second-order sensitivity derivatives are used in a gradient-based search to successfully execute a robust optimization. The approximate methods used throughout the analyses are found to be valid when considering robustness about input parameter mean values.
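A first-order moment sketch of this idea, using finite differences in place of the code's analytic sensitivity derivatives (a toy function stands in for the CFD output):

```python
import numpy as np

def first_order_moments(f, x_mean, x_sd, eps=1e-6):
    """First-order moment approximation for independent normal inputs:
    mean ~ f(mu), var ~ sum_i (df/dx_i)^2 * sd_i^2. Central differences
    stand in for analytic sensitivity derivatives."""
    mu = np.asarray(x_mean, dtype=float)
    sd = np.asarray(x_sd, dtype=float)
    grad = np.zeros_like(mu)
    for i in range(mu.size):
        h = np.zeros_like(mu)
        h[i] = eps
        grad[i] = (f(mu + h) - f(mu - h)) / (2.0 * eps)
    return f(mu), float(np.sqrt(np.sum((grad * sd) ** 2)))

# Toy stand-in for the CFD output:
f = lambda x: x[0] ** 2 + 3.0 * x[1]
print(first_order_moments(f, [1.0, 2.0], [0.1, 0.05]))  # (7.0, 0.25)
```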
Response surface method in geotechnical/structural analysis, phase 1
NASA Astrophysics Data System (ADS)
Wong, F. S.
1981-02-01
In the response surface approach, an approximating function is fit to a long-running computer code based on a limited number of code calculations. The approximating function, called the response surface, is then used to replace the code in subsequent repetitive computations required in a statistical analysis. The procedure of response surface development and the feasibility of the method are shown using a sample problem in slope stability, which is based on data from centrifuge experiments on model soil slopes and involves five random soil parameters. It is shown that a response surface can be constructed based on as few as four code calculations and that the response surface is computationally extremely efficient compared to the code calculation. Potential applications of this research include probabilistic analysis of dynamic, complex, nonlinear soil/structure systems such as slope stability, liquefaction, and nuclear reactor safety.
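A minimal sketch of the approach: fit a cheap surface to a few runs of an "expensive" function, then evaluate the surface in repetitive analyses. A pure quadratic basis without cross terms is an illustrative simplification, not the report's exact surface.

```python
import numpy as np

def fit_response_surface(X, y):
    """Least-squares fit of y ~ c0 + sum_i b_i x_i + sum_i a_i x_i^2 to a
    limited number of code runs; the surface then replaces the code."""
    X = np.asarray(X, dtype=float)
    d = X.shape[1]
    A = np.hstack([np.ones((X.shape[0], 1)), X, X**2])  # design matrix
    coef, *_ = np.linalg.lstsq(A, np.asarray(y, dtype=float), rcond=None)
    return lambda x: coef[0] + x @ coef[1:1 + d] + (x**2) @ coef[1 + d:]

# Replace an 'expensive' function by its fitted surface:
expensive = lambda x: np.sin(x[0]) + 0.5 * x[1] ** 2
X = np.random.default_rng(2).uniform(-1, 1, size=(12, 2))
surf = fit_response_surface(X, [expensive(x) for x in X])
x = np.array([0.3, -0.4])
print(surf(x), expensive(x))  # surface vs. direct evaluation
```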
DOE Office of Scientific and Technical Information (OSTI.GOV)
LaBrecque, J.J.; Adames, D.; Parker, W.C.
1981-01-01
A rapid method is presented for the simultaneous determination of thorium, niobium, lead, and zinc in lateritic material from Cerro Impacto, Estado Bolivar, Venezuela. This technique uses a PDP-11/05 processor-based photon induced x-ray fluorescence system. Total variations of approximately 5% for concentrations of approximately 1% and 10% for concentrations of approximately 0.1% were obtained with only 500 s of fluorescence time. The values obtained by this method were in agreement with values measured by conventional flame atomic absorption spectroscopy for lead and zinc. The values measured for thorium were in agreement with the reported values for the reference materials supplied by NBL.
Hybrid stochastic simulation of reaction-diffusion systems with slow and fast dynamics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Strehl, Robert; Ilie, Silvana, E-mail: silvana@ryerson.ca
2015-12-21
In this paper, we present a novel hybrid method to simulate discrete stochastic reaction-diffusion models arising in biochemical signaling pathways. We study moderately stiff systems, for which we can partition each reaction or diffusion channel into either a slow or fast subset, based on its propensity. Numerical approaches missing this distinction are often limited with respect to computational run time or approximation quality. We design an approximate scheme that remedies these pitfalls by using a new blending strategy of the well-established inhomogeneous stochastic simulation algorithm and the tau-leaping simulation method. The advantages of our hybrid simulation algorithm are demonstrated on three benchmarking systems, with special focus on approximation accuracy and efficiency.
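For orientation, a single step of the exact stochastic simulation algorithm that the hybrid scheme blends with tau-leaping; the reaction network here is a toy assumption:

```python
import numpy as np

def ssa_step(x, propensities, stoich, rng):
    """One Gillespie (direct method) step: exponential waiting time with
    rate a0, reaction chosen with probability a_j / a0."""
    a = propensities(x)
    a0 = a.sum()
    if a0 == 0.0:
        return x, np.inf                 # no reaction can fire
    tau = rng.exponential(1.0 / a0)
    j = rng.choice(len(a), p=a / a0)
    return x + stoich[j], tau

# Toy network A -> B with propensity k*A (k = 0.1):
rng = np.random.default_rng(3)
x = np.array([100, 0])
x, tau = ssa_step(x, lambda s: np.array([0.1 * s[0]]), np.array([[-1, 1]]), rng)
print(x, tau)
```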
NASA Astrophysics Data System (ADS)
Ciancio, P. M.; Rossit, C. A.; Laura, P. A. A.
2007-05-01
This study is concerned with the vibration analysis of a cantilevered rectangular anisotropic plate when a concentrated mass is rigidly attached to its center point. Based on the classical theory of anisotropic plates, the Ritz method is employed to perform the analysis. The deflection of the plate is approximated by a set of beam functions in each principal coordinate direction. The influence of the mass magnitude on the natural frequencies and modal shapes of vibration is studied for a boron-epoxy plate and also in the case of a generic anisotropic material. The classical Ritz method with beam functions as the spatial approximation proved to be a suitable procedure to solve a problem of this analytical complexity.
Treatment of ice cover and other thin elastic layers with the parabolic equation method.
Collins, Michael D
2015-03-01
The parabolic equation method is extended to handle problems involving ice cover and other thin elastic layers. Parabolic equation solutions are based on rational approximations that are designed using accuracy constraints to ensure that the propagating modes are handled properly and stability constraints to ensure that the non-propagating modes are annihilated. The non-propagating modes are especially problematic for problems involving thin elastic layers. It is demonstrated that stable results may be obtained for such problems by using rotated rational approximations [Milinazzo, Zala, and Brooke, J. Acoust. Soc. Am. 101, 760-766 (1997)] and generalizations of these approximations. The approach is applied to problems involving ice cover with variable thickness and sediment layers that taper to zero thickness.
NASA Astrophysics Data System (ADS)
Bonetto, P.; Qi, Jinyi; Leahy, R. M.
2000-08-01
Describes a method for computing linear observer statistics for maximum a posteriori (MAP) reconstructions of PET images. The method is based on a theoretical approximation for the mean and covariance of MAP reconstructions. In particular, the authors derive here a closed form for the channelized Hotelling observer (CHO) statistic applied to 2D MAP images. The theoretical analysis models both the Poisson statistics of PET data and the inhomogeneity of tracer uptake. The authors show reasonably good correspondence between these theoretical results and Monte Carlo studies. The accuracy and low computational cost of the approximation allow the authors to analyze the observer performance over a wide range of operating conditions and parameter settings for the MAP reconstruction algorithm.
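The CHO statistic itself reduces to a template dot product in channel space; a sample-based sketch is shown below (the paper instead plugs in theoretical MAP means and covariances, and the channels and data here are assumed for illustration):

```python
import numpy as np

def cho_snr(g_signal, g_noise, U):
    """Channelized Hotelling observer SNR from image samples: project onto
    channels U, then SNR^2 = dv^T S^{-1} dv with S the channel covariance."""
    v1, v0 = g_signal @ U, g_noise @ U      # (N_images, N_channels)
    dv = v1.mean(axis=0) - v0.mean(axis=0)  # mean channel-output difference
    S = 0.5 * (np.cov(v1.T) + np.cov(v0.T))
    return float(np.sqrt(dv @ np.linalg.solve(S, dv)))

# Toy example with random 'reconstructions' and 6 assumed channels:
rng = np.random.default_rng(4)
U = rng.normal(size=(4096, 6))
g0 = rng.normal(size=(200, 4096))
print(cho_snr(g0 + 0.05, g0, U))
```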
ERIC Educational Resources Information Center
Li, Deping; Oranje, Andreas
2007-01-01
Two versions of a general method for approximating standard error of regression effect estimates within an IRT-based latent regression model are compared. The general method is based on Binder's (1983) approach, accounting for complex samples and finite populations by Taylor series linearization. In contrast, the current National Assessment of…
Detection of proteins using a colorimetric bio-barcode assay.
Nam, Jwa-Min; Jang, Kyung-Jin; Groves, Jay T
2007-01-01
The colorimetric bio-barcode assay is a red-to-blue color change-based protein detection method with ultrahigh sensitivity. This assay is based on both the bio-barcode amplification method, which allows for detecting minuscule amounts of target with attomolar sensitivity, and a gold nanoparticle-based colorimetric DNA detection method, which allows for simple and straightforward detection of biomolecules of interest (here we detect interleukin-2, an important biomarker (cytokine) for many immunodeficiency-related diseases and cancers). The protocol is composed of the following steps: (i) conjugation of target capture molecules and barcode DNA strands onto silica microparticles, (ii) target capture with probes, (iii) separation and release of barcode DNA strands from the separated probes, (iv) detection of released barcode DNA using DNA-modified gold nanoparticle probes and (v) red-to-blue color change analysis with graphics software. Actual target detection and quantification steps with premade probes take approximately 3 h (the whole protocol including probe preparation takes approximately 3 days).
Fourier series expansion for nonlinear Hamiltonian oscillators.
Méndez, Vicenç; Sans, Cristina; Campos, Daniel; Llopis, Isaac
2010-06-01
The problem of nonlinear Hamiltonian oscillators is one of the classical questions in physics. When an analytic solution is not possible, one can resort to obtaining a numerical solution or using perturbation theory around the linear problem. We apply the Fourier series expansion to find approximate solutions for the oscillator position as a function of time as well as the period-amplitude relationship. We compare our results with other recent approaches, such as variational methods and heuristic approximations, in particular Ren-He's method. Based on its application to the Duffing oscillator, the nonlinear pendulum and the eardrum equation, it is shown that the Fourier series expansion method is the most accurate.
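As a concrete instance of the period-amplitude comparison, the sketch below contrasts the one-term Fourier (harmonic-balance) period of a Duffing oscillator x'' + x + eps*x^3 = 0 with the period from the exact energy integral; the equation form and all parameter values are my own illustrative choices, not taken from the paper.

```python
import numpy as np
from scipy.integrate import quad

eps, A = 1.0, 1.0   # Duffing oscillator x'' + x + eps*x**3 = 0, amplitude A

# One-term Fourier (harmonic balance) estimate: omega**2 ~ 1 + (3/4)*eps*A**2
T_fourier = 2.0 * np.pi / np.sqrt(1.0 + 0.75 * eps * A**2)

# Reference period from the energy integral, substituting x = A*sin(theta)
# to remove the turning-point singularity at x = A
V = lambda x: 0.5 * x**2 + 0.25 * eps * x**4
def integrand(theta):
    x = A * np.sin(theta)
    return A * np.cos(theta) / np.sqrt(2.0 * (V(A) - V(x)))

T_ref, _ = quad(integrand, 0.0, np.pi / 2)
T_ref *= 4.0
print(T_fourier, T_ref)   # roughly 4.75 vs 4.77 for eps = A = 1
```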
NASA Technical Reports Server (NTRS)
Gibson, J. S.; Rosen, I. G.
1986-01-01
An abstract approximation theory and computational methods are developed for the determination of optimal linear-quadratic feedback control, observers and compensators for infinite dimensional discrete-time systems. Particular attention is paid to systems whose open-loop dynamics are described by semigroups of operators on Hilbert spaces. The approach taken is based on the finite dimensional approximation of the infinite dimensional operator Riccati equations which characterize the optimal feedback control and observer gains. Theoretical convergence results are presented and discussed. Numerical results for an example involving a heat equation with boundary control are presented and used to demonstrate the feasibility of the method.
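The finite-dimensional step of this program, solving a discrete-time Riccati equation obtained after spatial discretization, looks as follows in a minimal sketch; the matrices are hypothetical stand-ins, not the paper's heat-equation example.

```python
import numpy as np
from scipy.linalg import solve_discrete_are

# hypothetical finite-dimensional approximation of the plant
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
B = np.array([[0.005],
              [0.1]])
Q = np.eye(2)             # state weighting
R = np.array([[0.1]])     # control weighting

P = solve_discrete_are(A, B, Q, R)                   # Riccati solution
K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)    # optimal feedback gain
# closed loop x[k+1] = (A - B K) x[k]; spectral radius < 1 means stable
print(np.abs(np.linalg.eigvals(A - B @ K)))
```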
Feature selection gait-based gender classification under different circumstances
NASA Astrophysics Data System (ADS)
Sabir, Azhin; Al-Jawad, Naseer; Jassim, Sabah
2014-05-01
This paper proposes a gender classification method based on human gait features and investigates the problem of two variations, clothing (wearing coats) and carrying a bag, in addition to the normal gait sequence. The feature vectors in the proposed system are constructed after applying the wavelet transform. Three different feature sets are proposed in this method. The first, spatio-temporal distances, deals with the distances between different parts of the human body (such as the feet, knees, hands, height, and shoulders) during one gait cycle. The second and third feature sets are constructed from the approximation and non-approximation coefficients of the human body, respectively. To extract these two feature sets, we divided the human body into upper and lower parts based on the golden ratio proportion. In this paper, we have adopted a statistical method for constructing the feature vector from the above sets. The dimension of the constructed feature vector is reduced based on the Fisher score as a feature selection method to optimize its discriminating significance. Finally, k-Nearest Neighbor is applied as the classification method. Experimental results demonstrate that our approach provides a more realistic scenario and relatively better performance compared with existing approaches.
Fan, Quan-Yong; Yang, Guang-Hong
2016-01-01
This paper is concerned with the problem of integral sliding-mode control for a class of nonlinear systems with input disturbances and unknown nonlinear terms through the adaptive actor-critic (AC) control method. The main objective is to design a sliding-mode control methodology based on the adaptive dynamic programming (ADP) method, so that the closed-loop system with time-varying disturbances is stable and the nearly optimal performance of the sliding-mode dynamics can be guaranteed. In the first step, a neural network (NN)-based observer and a disturbance observer are designed to approximate the unknown nonlinear terms and estimate the input disturbances, respectively. Based on the NN approximations and disturbance estimations, the discontinuous part of the sliding-mode control is constructed to eliminate the effect of the disturbances and attain the expected equivalent sliding-mode dynamics. Then, the ADP method with AC structure is presented to learn the optimal control for the sliding-mode dynamics online. Reconstructed tuning laws are developed to guarantee the stability of the sliding-mode dynamics and the convergence of the weights of critic and actor NNs. Finally, the simulation results are presented to illustrate the effectiveness of the proposed method.
Numerical realization of the variational method for generating self-trapped beams.
Duque, Erick I; Lopez-Aguayo, Servando; Malomed, Boris A
2018-03-19
We introduce a numerical variational method based on the Rayleigh-Ritz optimization principle for predicting two-dimensional self-trapped beams in nonlinear media. This technique overcomes the limitation of the traditional variational approximation in performing analytical Lagrangian integration and differentiation. Approximate soliton solutions of a generalized nonlinear Schrödinger equation are obtained, demonstrating robustness of the beams of various types (fundamental, vortices, multipoles, azimuthons) in the course of their propagation. The algorithm offers possibilities to produce more sophisticated soliton profiles in general nonlinear models.
Arigovindan, Muthuvel; Shaevitz, Joshua; McGowan, John; Sedat, John W; Agard, David A
2010-03-29
We address the problem of computational representation of image formation in 3D widefield fluorescence microscopy with depth-varying spherical aberrations. We first represent 3D depth-dependent point spread functions (PSFs) as a weighted sum of basis functions that are obtained by principal component analysis (PCA) of experimental data. This representation is then used to derive an approximating structure that compactly expresses the depth-variant response as a sum of a few depth-invariant convolutions pre-multiplied by a set of 1D depth functions, where the convolving functions are the PCA-derived basis functions. The model offers an efficient and convenient trade-off between complexity and accuracy. For a given number of approximating PSFs, the proposed method results in much better accuracy than the strata-based approximation scheme that is currently used in the literature. In addition to yielding better accuracy, the proposed method automatically eliminates the noise in the measured PSFs.
Approximate matching of structured motifs in DNA sequences.
El-Mabrouk, Nadia; Raffinot, Mathieu; Duchesne, Jean-Eudes; Lajoie, Mathieu; Luc, Nicolas
2005-04-01
Several methods have been developed for identifying more or less complex RNA structures in a genome. All these methods are based on the search for conserved primary and secondary sub-structures. In this paper, we present a simple formal representation of a helix, which is a combination of sequence and folding constraints, as a constrained regular expression. This representation allows us to develop a well-founded algorithm that searches for all approximate matches of a helix in a genome. The algorithm is based on an alignment graph constructed from several copies of a pushdown automaton, arranged one on top of another. This is a first attempt to take advantage of the possibilities of pushdown automata in the context of approximate matching. The worst time complexity is O(krpn), where k is the error threshold, n the size of the genome, p the size of the secondary expression, and r its number of union symbols. We then extend the algorithm to search for pseudo-knots and secondary structures containing an arbitrary number of helices.
Calculations of Hubbard U from first-principles
NASA Astrophysics Data System (ADS)
Aryasetiawan, F.; Karlsson, K.; Jepsen, O.; Schönberger, U.
2006-09-01
The Hubbard U of the 3d transition metal series as well as SrVO3, YTiO3, Ce, and Gd has been estimated using a recently proposed scheme based on the random-phase approximation. The values obtained are generally in good accord with the values often used in model calculations, but for some cases the estimated values are somewhat smaller than those used in the literature. We have also calculated the frequency-dependent U for some of the materials. The strong frequency dependence of U in some of the cases considered in this paper suggests that the static value of U may not be the most appropriate one to use in model calculations. We have also made comparison with the constrained local density approximation (LDA) method and found discrepancies in a number of cases. We emphasize that our scheme and the constrained LDA method theoretically ought to give similar results, and the discrepancies may be attributed to technical difficulties in performing calculations based on currently implemented constrained LDA schemes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Özdemir, Semra Bayat; Demiralp, Metin
The determination of energy states is a highly studied issue in quantum mechanics. Based on the dynamics of expectation values, energy states can be observed, but the conditions and calculations vary depending on the system under consideration. In this work, a symmetric exponential anharmonic oscillator is considered and a recursive approximation method is developed to find its ground energy state. The use of majorant values facilitates the approximate calculation of expectation values.
Arrival-time picking method based on approximate negentropy for microseismic data
NASA Astrophysics Data System (ADS)
Li, Yue; Ni, Zhuo; Tian, Yanan
2018-05-01
Accurate and dependable picking of the first arrival time for microseismic data is an important part of microseismic monitoring, which directly affects the analysis results of post-processing. This paper presents a new method based on approximate negentropy (AN) theory for microseismic arrival-time picking under conditions of very low signal-to-noise ratio (SNR). According to the differences in information characteristics between microseismic data and random noise, an appropriate approximation of the negentropy function is selected to minimize the effect of SNR. At the same time, a weighted function of the differences between the maximum and minimum values of the AN spectrum curve is designed to obtain a proper threshold function. In this way, the signal and noise regions are distinguished and the first arrival time is picked accurately. To demonstrate the effectiveness of the AN method, we run experiments on a series of synthetic data with SNRs from -1 dB to -12 dB and compare it with the previously published Akaike information criterion (AIC) and short/long time average ratio (STA/LTA) methods. Experimental results indicate that all three methods achieve good picking results when the SNR is between -1 dB and -8 dB. However, when the SNR is as low as -8 dB to -12 dB, the proposed AN method yields more accurate and stable picking results than the AIC and STA/LTA methods. Furthermore, application to real three-component microseismic data also shows that the new method is superior to the other two methods in accuracy and stability.
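For reference, here is a bare-bones version of the STA/LTA baseline used in the comparison above; the window lengths and threshold are hypothetical defaults, not values from the paper.

```python
import numpy as np

def sta_lta_pick(trace, fs, sta_win=0.05, lta_win=0.5, threshold=4.0):
    """First-arrival pick from the short/long-term average energy ratio.

    trace : 1D microseismic signal; fs : sampling rate in Hz.
    Returns the first sample index where STA/LTA exceeds the threshold,
    or None if it never does.
    """
    e = np.asarray(trace, dtype=float) ** 2          # energy characteristic function
    ns, nl = int(sta_win * fs), int(lta_win * fs)
    csum = np.concatenate(([0.0], np.cumsum(e)))
    for i in range(nl, len(e)):
        sta = (csum[i] - csum[i - ns]) / ns          # short-term average
        lta = (csum[i] - csum[i - nl]) / nl          # long-term average
        if lta > 0.0 and sta / lta > threshold:
            return i
    return None
```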
Differential privacy based on importance weighting
Ji, Zhanglong
2014-01-01
This paper analyzes a novel method for publishing data while still protecting privacy. The method is based on computing weights that make an existing dataset, for which there are no confidentiality issues, analogous to the dataset that must be kept private. The existing dataset may be genuine but public already, or it may be synthetic. The weights are importance sampling weights, but to protect privacy, they are regularized and have noise added. The weights allow statistical queries to be answered approximately while provably guaranteeing differential privacy. We derive an expression for the asymptotic variance of the approximate answers. Experiments show that the new mechanism performs well even when the privacy budget is small, and when the public and private datasets are drawn from different populations. PMID:24482559
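A loose sketch of the regularize-and-perturb step described above; the clipping bound, the noise calibration, and all names are my own simplifications, and the paper's actual sensitivity analysis should be consulted for the correct noise scale.

```python
import numpy as np

def private_importance_weights(raw_weights, clip, epsilon, rng=None):
    """Regularize importance weights and add noise for privacy (sketch).

    raw_weights : weights that make the public dataset resemble the
                  private one; clip : regularization bound;
    epsilon     : privacy budget.
    """
    rng = rng or np.random.default_rng()
    w = np.clip(np.asarray(raw_weights, dtype=float), 0.0, clip)
    # heuristic Laplace calibration: each weight is bounded by `clip`,
    # so noise with scale clip/epsilon limits any record's influence
    return w + rng.laplace(scale=clip / epsilon, size=w.shape)
```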
"Tools For Analysis and Visualization of Large Time- Varying CFD Data Sets"
NASA Technical Reports Server (NTRS)
Wilhelms, Jane; vanGelder, Allen
1999-01-01
During the four years of this grant (including the one year extension), we have explored many aspects of the visualization of large CFD (Computational Fluid Dynamics) datasets. These have included new direct volume rendering approaches, hierarchical methods, volume decimation, error metrics, parallelization, hardware texture mapping, and methods for analyzing and comparing images. First, we implemented an extremely general direct volume rendering approach that can be used to render rectilinear, curvilinear, or tetrahedral grids, including overlapping multiple zone grids, and time-varying grids. Next, we developed techniques for associating the sample data with a k-d tree, a simple hierarchical data model to approximate samples in the regions covered by each node of the tree, and an error metric for the accuracy of the model. We also explored a new method for determining the accuracy of approximate models based on the light field method described at ACM SIGGRAPH (Association for Computing Machinery Special Interest Group on Computer Graphics) '96. In our initial implementation, we automatically image the volume from 32 approximately evenly distributed positions on the surface of an enclosing tessellated sphere. We then calculate differences between these images under different conditions of volume approximation or decimation.
NASA Astrophysics Data System (ADS)
Dickey, Dwayne J.; Moore, Ronald B.; Tulip, John
2001-01-01
For photodynamic therapy of solid tumors, such as prostatic carcinoma, to be achieved, an accurate model to predict tissue parameters and light dose must be found. Presently, most analytical light dosimetry models are fluence-based and are not clinically viable for tissue characterization. Other methods of predicting optical properties, such as Monte Carlo, are accurate but far too time consuming for clinical application. However, radiance predicted by the P3-Approximation, an analytical solution to the transport equation, may be a viable and accurate alternative. The P3-Approximation accurately predicts optical parameters in intralipid/methylene blue based phantoms in a spherical geometry. The optical parameters furnished by the radiance, when introduced into fluence predicted by both the P3-Approximation and Grosjean Theory, correlate well with experimental data. The P3-Approximation also predicts the optical properties of prostate tissue, agreeing with documented optical parameters. The P3-Approximation could be the clinical tool necessary to facilitate PDT of solid tumors because of the limited number of invasive measurements required and the speed with which accurate calculations can be performed.
Ashbaugh, H S; Garde, S; Hummer, G; Kaler, E W; Paulaitis, M E
1999-01-01
Conformational free energies of butane, pentane, and hexane in water are calculated from molecular simulations with explicit waters and from a simple molecular theory in which the local hydration structure is estimated based on a proximity approximation. This proximity approximation uses only the two nearest carbon atoms on the alkane to predict the local water density at a given point in space. Conformational free energies of hydration are subsequently calculated using a free energy perturbation method. Quantitative agreement is found between the free energies obtained from simulations and theory. Moreover, free energy calculations using this proximity approximation are approximately four orders of magnitude faster than those based on explicit water simulations. Our results demonstrate the accuracy and utility of the proximity approximation for predicting water structure as the basis for a quantitative description of n-alkane conformational equilibria in water. In addition, the proximity approximation provides a molecular foundation for extending predictions of water structure and hydration thermodynamic properties of simple hydrophobic solutes to larger clusters or assemblies of hydrophobic solutes. PMID:10423414
Biomathematical modeling of pulsatile hormone secretion: a historical perspective.
Evans, William S; Farhy, Leon S; Johnson, Michael L
2009-01-01
Shortly after the recognition of the profound physiological significance of the pulsatile nature of hormone secretion, computer-based modeling techniques were introduced for the identification and characterization of such pulses. Whereas these earlier approaches defined perturbations in hormone concentration-time series, deconvolution procedures were subsequently employed to separate such pulses into their secretion event and clearance components. Stochastic differential equation modeling was also used to define basal and pulsatile hormone secretion. To assess the regulation of individual components within a hormone network, a method that quantitated approximate entropy within hormone concentration-time series was described. To define relationships within coupled hormone systems, methods including cross-correlation and cross-approximate entropy were utilized. To address some of the inherent limitations of these methods, modeling techniques with which to appraise the strength of feedback signaling between and among hormone-secreting components of a network have been developed. Techniques such as dynamic modeling have been utilized to reconstruct dose-response interactions between hormones within coupled systems. A logical extension of these advances will require the development of mathematical methods with which to approximate endocrine networks exhibiting multiple feedback interactions and subsequently reconstruct their parameters based on experimental data for the purpose of testing regulatory hypotheses and estimating alterations in hormone release control mechanisms.
Combining Biomarkers Linearly and Nonlinearly for Classification Using the Area Under the ROC Curve
Fong, Youyi; Yin, Shuxin; Huang, Ying
2016-01-01
In biomedical studies, it is often of interest to classify/predict a subject's disease status based on a variety of biomarker measurements. A commonly used classification criterion is based on the AUC, the area under the receiver operating characteristic curve. Many methods have been proposed to optimize approximated empirical AUC criteria, but there are two limitations to the existing methods. First, most methods are only designed to find the best linear combination of biomarkers, which may not perform well when there is strong nonlinearity in the data. Second, many existing linear combination methods use gradient-based algorithms to find the best marker combination, which often result in sub-optimal local solutions. In this paper, we address these two problems by proposing a new kernel-based AUC optimization method called Ramp AUC (RAUC). This method approximates the empirical AUC loss function with a ramp function, and finds the best combination by a difference-of-convex-functions algorithm. We show that as a linear combination method, RAUC leads to a consistent and asymptotically normal estimator of the linear marker combination when the data is generated from a semiparametric generalized linear model, just as the smoothed AUC method (SAUC). Through simulation studies and real data examples, we demonstrate that RAUC outperforms SAUC in finding the best linear marker combinations, and can successfully capture nonlinear patterns in the data to achieve better classification performance. We illustrate our method with a dataset from a recent HIV vaccine trial. PMID:27058981
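The core ramp-surrogate idea can be sketched in a few lines (evaluation of the objective only; the paper minimizes it with a difference-of-convex-functions algorithm, which is not shown, and the function names here are mine).

```python
import numpy as np

def ramp(t):
    """Piecewise-linear ramp surrogate for the pairwise 0-1 loss."""
    return np.clip(1.0 - t, 0.0, 1.0)

def rauc_loss(beta, x_pos, x_neg):
    """Ramp-approximated AUC loss of a linear marker combination beta.

    x_pos : (n1, p) biomarkers for cases; x_neg : (n0, p) for controls.
    Empirical AUC counts case-control pairs with score(case) > score(control);
    each pairwise indicator is replaced here by the ramp surrogate.
    """
    diff = (x_pos @ beta)[:, None] - (x_neg @ beta)[None, :]
    return ramp(diff).mean()
```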
Wei, Wei; Larrey-Lassalle, Pyrène; Faure, Thierry; Dumoulin, Nicolas; Roux, Philippe; Mathias, Jean-Denis
2016-03-01
Comparative decision making is widely used to identify which option (system, product, service, etc.) has the smaller environmental footprint and to provide recommendations that help stakeholders take future decisions. However, uncertainty complicates the comparison and the decision making. Probability-based decision support in LCA is a way to help stakeholders in their decision-making process. It calculates the decision confidence probability, which expresses the probability that one option has a smaller environmental impact than another. Here we apply reliability theory to approximate the decision confidence probability. We compare the traditional Monte Carlo method with a reliability method called the FORM method. The Monte Carlo method needs high computational time to calculate the decision confidence probability. The FORM method enables us to approximate the decision confidence probability with fewer simulations than the Monte Carlo method by approximating the response surface. Moreover, the FORM method calculates the associated importance factors that correspond to a sensitivity analysis in relation to the probability. The importance factors allow stakeholders to determine which factors influence their decision. Our results clearly show that the reliability method provides additional useful information to stakeholders while also reducing the computational time.
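The brute-force Monte Carlo version of the decision confidence probability is one line once samples of both impacts exist; the sketch below uses hypothetical lognormal impact distributions purely for illustration (the FORM alternative would instead linearize the limit state g = impact_b - impact_a around the most probable point).

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
# hypothetical uncertain impact scores of two options; in an LCA these
# would come from propagating parameter distributions through the model
impact_a = rng.lognormal(mean=1.00, sigma=0.20, size=n)
impact_b = rng.lognormal(mean=1.05, sigma=0.25, size=n)

# decision confidence: probability that option A has the smaller impact
p_conf = np.mean(impact_a < impact_b)
print(p_conf)
```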
Solving bi-level optimization problems in engineering design using kriging models
NASA Astrophysics Data System (ADS)
Xia, Yi; Liu, Xiaojie; Du, Gang
2018-05-01
Stackelberg game-theoretic approaches are applied extensively in engineering design to handle distributed collaboration decisions. Bi-level genetic algorithms (BLGAs) and response surfaces have been used to solve the corresponding bi-level programming models. However, the computational costs for BLGAs often increase rapidly with the complexity of lower-level programs, and optimal solution functions sometimes cannot be approximated by response surfaces. This article proposes a new method, namely the optimal solution function approximation by kriging model (OSFAKM), in which kriging models are used to approximate the optimal solution functions. A detailed example demonstrates that OSFAKM can obtain better solutions than BLGAs and response surface-based methods, and at the same time reduce the workload of computation remarkably. Five benchmark problems and a case study of the optimal design of a thin-walled pressure vessel are also presented to illustrate the feasibility and potential of the proposed method for bi-level optimization in engineering design.
Free and Forced Vibrations of Thick-Walled Anisotropic Cylindrical Shells
NASA Astrophysics Data System (ADS)
Marchuk, A. V.; Gnedash, S. V.; Levkovskii, S. A.
2017-03-01
Two approaches to studying the free and forced axisymmetric vibrations of a cylindrical shell are proposed. They are based on the three-dimensional theory of elasticity and division of the original cylindrical shell, with concentric cross-sectional circles, into several coaxial cylindrical shells. One approach uses linear polynomials to approximate functions defined in plan and across the thickness. The other approach also uses linear polynomials to approximate functions defined in plan, but their variation with thickness is described by the analytical solution of a system of differential equations. Both approaches have approximation and arithmetic errors. When determining the natural frequencies by the semi-analytical finite-element method in combination with the divide-and-conquer method, it is convenient to find the initial frequencies by the finite-element method. The behavior of the shell during free and forced vibrations is analyzed in the case where the loading area is half the shell thickness.
Metamodels for Computer-Based Engineering Design: Survey and Recommendations
NASA Technical Reports Server (NTRS)
Simpson, Timothy W.; Peplinski, Jesse; Koch, Patrick N.; Allen, Janet K.
1997-01-01
The use of statistical techniques to build approximations of expensive computer analysis codes pervades much of today's engineering design. These statistical approximations, or metamodels, are used to replace the actual expensive computer analyses, facilitating multidisciplinary, multiobjective optimization and concept exploration. In this paper we review several of these techniques including design of experiments, response surface methodology, Taguchi methods, neural networks, inductive learning, and kriging. We survey their existing application in engineering design and then address the dangers of applying traditional statistical techniques to approximate deterministic computer analysis codes. We conclude with recommendations for the appropriate use of statistical approximation techniques in given situations and how common pitfalls can be avoided.
Approximate Single-Diode Photovoltaic Model for Efficient I-V Characteristics Estimation
Ting, T. O.; Zhang, Nan; Guan, Sheng-Uei; Wong, Prudence W. H.
2013-01-01
Precise photovoltaic (PV) behavior models are normally described by nonlinear analytical equations. To solve such equations, it is necessary to use iterative procedures. Aiming to make the computation easier, this paper proposes an approximate single-diode PV model that enables high-speed predictions for the electrical characteristics of commercial PV modules. Based on the experimental data, statistical analysis is conducted to validate the approximate model. Simulation results show that the calculated current-voltage (I-V) characteristics fit the measured data with high accuracy. Furthermore, compared with the existing modeling methods, the proposed model reduces the simulation time by approximately 30% in this work. PMID:24298205
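For contrast with the paper's non-iterative model, the sketch below solves the implicit single-diode equation the conventional way, with a root finder per voltage point; the cell parameters are illustrative values, not the paper's.

```python
import numpy as np
from scipy.optimize import brentq

def iv_curve(v_grid, i_ph=5.0, i_0=1e-9, r_s=0.2, r_sh=200.0,
             n=1.3, v_t=0.02585):
    """Reference I-V curve from the implicit single-diode equation
    I = Iph - I0*(exp((V + I*Rs)/(n*Vt)) - 1) - (V + I*Rs)/Rsh,
    solved iteratively for each voltage (the kind of computation that
    approximate explicit models aim to avoid)."""
    def residual(i, v):
        return (i_ph - i_0 * (np.exp((v + i * r_s) / (n * v_t)) - 1.0)
                - (v + i * r_s) / r_sh - i)
    return np.array([brentq(residual, 0.0, i_ph, args=(v,)) for v in v_grid])

currents = iv_curve(np.linspace(0.0, 0.6, 13))
```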
Hybrid perturbation methods based on statistical time series models
NASA Astrophysics Data System (ADS)
San-Juan, Juan Félix; San-Martín, Montserrat; Pérez, Iván; López, Rosario
2016-04-01
In this work we present a new methodology for orbit propagation, the hybrid perturbation theory, based on the combination of an integration method and a prediction technique. The former, which can be a numerical, analytical or semianalytical theory, generates an initial approximation that contains some inaccuracies, derived from the fact that, in order to simplify the expressions and subsequent computations, not all the involved forces are taken into account and only low-order terms are considered; moreover, mathematical models of perturbations do not always reproduce physical phenomena with absolute precision. The prediction technique, which can be based on either statistical time series models or computational intelligence methods, is aimed at modelling and reproducing the missing dynamics in the previously integrated approximation. This combination results in improved precision of conventional numerical, analytical and semianalytical theories for determining the position and velocity of any artificial satellite or space debris object. In order to validate this methodology, we present a family of three hybrid orbit propagators formed by the combination of three different orders of approximation of an analytical theory and a statistical time series model, and analyse their capability to process the effect produced by the flattening of the Earth. The three considered analytical components are the integration of the Kepler problem, a first-order and a second-order analytical theory, whereas the prediction technique is the same in the three cases, namely an additive Holt-Winters method.
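As a sketch of the prediction component, here is a plain additive Holt-Winters recursion producing one-step-ahead predictions of a residual series; the smoothing constants and the crude seasonal initialization are my own choices, not the paper's tuned values.

```python
import numpy as np

def holt_winters_additive(y, m, alpha=0.3, beta=0.05, gamma=0.2):
    """One-step-ahead additive Holt-Winters predictions for series y.

    m : season length (e.g., samples per orbital revolution).
    """
    y = np.asarray(y, dtype=float)
    level, trend = y[0], y[1] - y[0]
    season = list(y[:m] - y[:m].mean())       # crude seasonal initialization
    pred = np.empty_like(y)
    for t in range(len(y)):
        s = season[t % m]
        pred[t] = level + trend + s           # forecast made at t-1 for t
        new_level = alpha * (y[t] - s) + (1 - alpha) * (level + trend)
        trend = beta * (new_level - level) + (1 - beta) * trend
        season[t % m] = gamma * (y[t] - new_level) + (1 - gamma) * s
        level = new_level
    return pred
```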
Inference of epidemiological parameters from household stratified data
Walker, James N.; Ross, Joshua V.
2017-01-01
We consider a continuous-time Markov chain model of SIR disease dynamics with two levels of mixing. For this so-called stochastic households model, we provide two methods for inferring the model parameters—governing within-household transmission, recovery, and between-household transmission—from data of the day upon which each individual became infectious and the household in which each infection occurred, as might be available from First Few Hundred studies. Each method is a form of Bayesian Markov Chain Monte Carlo that allows us to calculate a joint posterior distribution for all parameters and hence the household reproduction number and the early growth rate of the epidemic. The first method performs exact Bayesian inference using a standard data-augmentation approach; the second performs approximate Bayesian inference based on a likelihood approximation derived from branching processes. These methods are compared for computational efficiency and posteriors from each are compared. The branching process is shown to be a good approximation and remains computationally efficient as the amount of data is increased. PMID:29045456
Detection of Natural Fractures from Observed Surface Seismic Data Based on a Linear-Slip Model
NASA Astrophysics Data System (ADS)
Chen, Huaizhen; Zhang, Guangzhi
2018-03-01
Natural fractures play an important role in migration of hydrocarbon fluids. Based on a rock physics effective model, the linear-slip model, which defines fracture parameters (fracture compliances) for quantitatively characterizing the effects of fractures on rock total compliance, we propose a method to detect natural fractures from observed seismic data via inversion for the fracture compliances. We first derive an approximate PP-wave reflection coefficient in terms of fracture compliances. Using the approximate reflection coefficient, we derive azimuthal elastic impedance as a function of fracture compliances. An inversion method to estimate fracture compliances from seismic data is presented based on a Bayesian framework and azimuthal elastic impedance, which is implemented in a two-step procedure: a least-squares inversion for azimuthal elastic impedance and an iterative inversion for fracture compliances. We apply the inversion method to synthetic and real data to verify its stability and reasonability. Synthetic tests confirm that the method can make a stable estimation of fracture compliances in the case of seismic data containing a moderate signal-to-noise ratio for Gaussian noise, and the test on real data reveals that reasonable fracture compliances are obtained using the proposed method.
Yao, Yanping; Kou, Ziming; Meng, Wenjun; Han, Gang
2014-01-01
Properly evaluating the overall performance of tubular scraper conveyors (TSCs) can increase their overall efficiency and reduce economic investments, but such methods have rarely been studied. This study evaluated the overall performance of TSCs based on the technique for order of preference by similarity to ideal solution (TOPSIS). Three conveyors of the same type produced in the same factory were investigated. Their scraper space, material filling coefficient, and vibration coefficient of the traction components were evaluated. A mathematical model of the multiattribute decision matrix was constructed; a weighted judgment matrix was obtained using the DELPHI method. The linguistic positive-ideal solution (LPIS), the linguistic negative-ideal solution (LNIS), and the distance from each solution to the LPIS and the LNIS, that is, the approximation degrees, were calculated. The optimal solution was determined by ordering the approximation degrees for each solution. The TOPSIS-based results were compared with the measurement results provided by the manufacturer. The ordering result based on the three evaluated parameters was highly consistent with the result provided by the manufacturer. The TOPSIS-based method serves as a suitable evaluation tool for the overall performance of TSCs. It facilitates the optimal deployment of TSCs for industrial purposes. PMID:24991646
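The crisp numeric core of TOPSIS fits in a short function; note that the paper works with linguistic values and DELPHI-derived weights, so the version and example data below are illustrative simplifications.

```python
import numpy as np

def topsis(decision_matrix, weights, benefit):
    """Rank alternatives by relative closeness to the ideal solution.

    decision_matrix : (n_alternatives, n_criteria)
    weights         : criterion weights summing to 1
    benefit[j]      : True if criterion j is better when larger
    """
    x = np.asarray(decision_matrix, dtype=float)
    v = (x / np.linalg.norm(x, axis=0)) * np.asarray(weights)  # weighted, normalized
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    anti = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_pos = np.linalg.norm(v - ideal, axis=1)   # distance to ideal solution
    d_neg = np.linalg.norm(v - anti, axis=1)    # distance to anti-ideal solution
    closeness = d_neg / (d_pos + d_neg)         # approximation degree
    return np.argsort(-closeness), closeness

# hypothetical example: three conveyors scored on three criteria,
# the third criterion (e.g., vibration) being cost-like
rank, c = topsis([[0.80, 0.90, 0.10],
                  [0.70, 0.95, 0.12],
                  [0.90, 0.85, 0.08]],
                 weights=[0.4, 0.4, 0.2],
                 benefit=[True, True, False])
```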
NASA Astrophysics Data System (ADS)
Rajshekhar, G.; Gorthi, Sai Siva; Rastogi, Pramod
2010-04-01
For phase estimation in digital holographic interferometry, a high-order instantaneous moments (HIM) based method was recently developed which relies on piecewise polynomial approximation of phase and subsequent evaluation of the polynomial coefficients using the HIM operator. A crucial step in the method is mapping the polynomial coefficient estimation to single-tone frequency determination for which various techniques exist. The paper presents a comparative analysis of the performance of the HIM operator based method in using different single-tone frequency estimation techniques for phase estimation. The analysis is supplemented by simulation results.
An adaptive finite element method for the inequality-constrained Reynolds equation
NASA Astrophysics Data System (ADS)
Gustafsson, Tom; Rajagopal, Kumbakonam R.; Stenberg, Rolf; Videman, Juha
2018-07-01
We present a stabilized finite element method for the numerical solution of cavitation in lubrication, modeled as an inequality-constrained Reynolds equation. The cavitation model is written as a variable coefficient saddle-point problem and approximated by a residual-based stabilized method. Based on our recent results on the classical obstacle problem, we present optimal a priori estimates and derive novel a posteriori error estimators. The method is implemented as a Nitsche-type finite element technique and shown in numerical computations to be superior to the usually applied penalty methods.
40 CFR Appendix A-3 to Part 60 - Test Methods 4 through 5I
Code of Federal Regulations, 2011 CFR
2011-07-01
... isokinetic sampling rates prior to a pollutant emission measurement run. The approximation method described... with a pollutant emission measurement run. When it is, calculation of percent isokinetic, pollutant emission rate, etc., for the run shall be based upon the results of the reference method or its equivalent...
The frozen nucleon approximation in two-particle two-hole response functions
Ruiz Simo, I.; Amaro, J. E.; Barbaro, M. B.; ...
2017-07-10
Here, we present a fast and efficient method to compute the inclusive two-particle two-hole (2p–2h) electroweak responses in the neutrino and electron quasielastic inclusive cross sections. The method is based on two approximations. The first neglects the motion of the two initial nucleons below the Fermi momentum, which are considered to be at rest. This approximation, which is reasonable for high values of the momentum transfer, turns out also to be quite good for moderate values of the momentum transfer q ≳ kF. The second approximation involves using in the "frozen" meson-exchange currents (MEC) an effective Δ-propagator averaged over the Fermi sea. Within the resulting "frozen nucleon approximation", the inclusive 2p–2h responses are accurately calculated with only a one-dimensional integral over the emission angle of one of the final nucleons, thus drastically simplifying the calculation and reducing the computational time. The latter makes this method especially well-suited for implementation in Monte Carlo neutrino event generators.
A 2D Gaussian-Beam-Based Method for Modeling the Dichroic Surfaces of Quasi-Optical Systems
NASA Astrophysics Data System (ADS)
Elis, Kevin; Chabory, Alexandre; Sokoloff, Jérôme; Bolioli, Sylvain
2016-08-01
In this article, we propose an approach in the spectral domain to treat the interaction of a field with a dichroic surface in two dimensions. For a Gaussian beam illumination of the surface, the reflected and transmitted fields are approximated by one reflected and one transmitted Gaussian beam. Their characteristics are determined by means of a matching in the spectral domain, which requires a second-order approximation of the dichroic surface response when excited by plane waves. This approximation is of the same order as the one used in Gaussian beam shooting algorithms to model curved interfaces associated with lenses, reflectors, etc. The method uses general analytical formulations for the Gaussian beams that depend either on a paraxial or a far-field approximation. Numerical experiments are carried out to test the efficiency of the method in terms of accuracy and computation time. They include a parametric study and a case for which the illumination is provided by a horn antenna. For the latter, the incident field is first expressed as a sum of Gaussian beams by means of Gabor frames.
Physical foundation of the fluid particle dynamics method for colloid dynamics simulation.
Furukawa, Akira; Tateno, Michio; Tanaka, Hajime
2018-05-16
Colloid dynamics is significantly influenced by many-body hydrodynamic interactions mediated by a suspending fluid. However, theoretical and numerical treatments of such interactions are extremely difficult. To overcome this situation, we developed a fluid particle dynamics (FPD) method [H. Tanaka and T. Araki, Phys. Rev. Lett., 2000, 85, 1338], which is based on two key approximations: (i) a colloidal particle is treated as a highly viscous particle and (ii) the viscosity profile is described by a smooth interfacial profile function. Approximation (i) makes our method free from the solid-fluid boundary condition, significantly simplifying the treatment of many-body hydrodynamic interactions while satisfying the incompressibility condition without the Stokes approximation. Approximation (ii) allows us to incorporate an extra degree of freedom in a fluid, e.g., orientational order and concentration, as an additional field variable. Here, we consider two fundamental problems associated with these approximations. One is the introduction of thermal noise and the other is the incorporation of coupling of the colloid surface with an order parameter introduced into a fluid component, which is crucial when considering colloidal particles suspended in a complex fluid. Here, we show that our FPD method makes it possible to simulate colloid dynamics properly while including full hydrodynamic interactions, inertia effects, incompressibility, thermal noise, and additional degrees of freedom of a fluid, which may be relevant for wide applications in colloidal and soft matter science.
Error Estimation for the Linearized Auto-Localization Algorithm
Guevara, Jorge; Jiménez, Antonio R.; Prieto, Jose Carlos; Seco, Fernando
2012-01-01
The Linearized Auto-Localization (LAL) algorithm estimates the position of beacon nodes in Local Positioning Systems (LPSs), using only the distance measurements to a mobile node whose position is also unknown. The LAL algorithm calculates the inter-beacon distances, used for the estimation of the beacons’ positions, from the linearized trilateration equations. In this paper we propose a method to estimate the propagation of the errors of the inter-beacon distances obtained with the LAL algorithm, based on a first order Taylor approximation of the equations. Since the method depends on such approximation, a confidence parameter τ is defined to measure the reliability of the estimated error. Field evaluations showed that by applying this information to an improved weighted-based auto-localization algorithm (WLAL), the standard deviation of the inter-beacon distances can be improved by more than 30% on average with respect to the original LAL method. PMID:22736965
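Generic first-order error propagation of the kind used above can be sketched with a finite-difference Jacobian; the function below is a plain covariance push-through under my own naming, not the LAL-specific derivation.

```python
import numpy as np

def propagate_cov(f, x0, cov_x, h=1e-6):
    """First-order (Taylor) propagation: cov_y ~ J cov_x J^T.

    f     : mapping from inputs (e.g., measured distances) to outputs
            (e.g., estimated inter-beacon distances)
    x0    : nominal input vector; cov_x : input covariance.
    The Jacobian J is estimated by central differences with step h.
    """
    x0 = np.asarray(x0, dtype=float)
    y0 = np.atleast_1d(f(x0))
    J = np.empty((y0.size, x0.size))
    for j in range(x0.size):
        dx = np.zeros_like(x0)
        dx[j] = h
        J[:, j] = (np.atleast_1d(f(x0 + dx)) -
                   np.atleast_1d(f(x0 - dx))) / (2.0 * h)
    return J @ cov_x @ J.T
```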
Multiscale skeletal representation of images via Voronoi diagrams
NASA Astrophysics Data System (ADS)
Marston, R. E.; Shih, Jian C.
1995-08-01
Polygonal approximations to skeletal or stroke-based representations of 2D objects may consume less storage and be sufficient to describe their shape for many applications. Multi-scale descriptions of object outlines are well established, but corresponding methods for skeletal descriptions have been slower to develop. In this paper we offer a method of generating scale-based skeletal representations via the Voronoi diagram. The method has the advantages of lower time complexity, a closer relationship between the skeletons at each scale, and better control over simplification of the skeleton at lower scales. This is because the algorithm starts by generating the skeleton at the coarsest scale first, then produces each finer scale, in an iterative manner, directly from the level below. The skeletal approximations produced by the algorithm also benefit from a strong relationship with the object outline, due to the structure of the Voronoi diagram.
NASA Technical Reports Server (NTRS)
Hou, Gene J.-W; Newman, Perry A. (Technical Monitor)
2004-01-01
A major step in a most probable point (MPP)-based method for reliability analysis is to determine the MPP. This is usually accomplished by using an optimization search algorithm. The minimum distance associated with the MPP provides a measurement of safety probability, which can be obtained by approximate probability integration methods such as FORM or SORM. The reliability sensitivity equations are derived first in this paper, based on the derivatives of the optimal solution. Examples are provided later to demonstrate the use of these derivatives for better reliability analysis and reliability-based design optimization (RBDO).
Energy transfer between two vacuum-gapped metal plates: Coulomb fluctuations and electron tunneling
NASA Astrophysics Data System (ADS)
Zhang, Zu-Quan; Lü, Jing-Tao; Wang, Jian-Sheng
2018-05-01
Recent experimental measurements of near-field radiative heat transfer between two bodies have been able to approach gap distances within 2 nm, where the contributions of Coulomb fluctuations and electron tunneling are comparable. Using the nonequilibrium Green's function method in the G0W0 approximation, based on a tight-binding model, we obtain for the energy current a Caroli formula from the Meir-Wingreen formula in the local equilibrium approximation. The Caroli formula is also consistent with the evanescent part of the heat transfer from the theory of fluctuational electrodynamics. We go beyond the local equilibrium approximation to study the energy transfer in the crossover region from electron tunneling to Coulomb fluctuation based on a numerical calculation.
SGM-based seamline determination for urban orthophoto mosaicking
NASA Astrophysics Data System (ADS)
Pang, Shiyan; Sun, Mingwei; Hu, Xiangyun; Zhang, Zuxun
2016-02-01
Mosaicking is a key step in the production of digital orthophoto maps (DOMs), especially for large-scale urban orthophotos. During this step, manual intervention is commonly involved to avoid the case where the seamline crosses obvious objects (e.g., buildings), which causes geometric discontinuities in the DOMs. How to guide the seamline to avoid crossing obvious objects has become a popular topic in the field of photogrammetry and remote sensing. Thus, a new semi-global matching (SGM)-based method to guide seamline determination is proposed for urban orthophoto mosaicking in this study, which can largely eliminate geometric discontinuities. The approximate epipolar geometry of the orthophoto pairs is first derived and proven, and the approximate epipolar image pair is then generated by rotating the two orthorectified images according to the parallax direction. An SGM algorithm is applied to their overlaps to obtain the corresponding pixel-wise disparity. According to a predefined disparity threshold, the overlap area is divided into obstacle and non-obstacle areas. For the non-obstacle regions, the Hilditch thinning algorithm is used to obtain the skeleton line, followed by Dijkstra's algorithm to search for the optimal path on the skeleton network as the seamline between two orthophotos. A whole seamline network is constructed based on the strip information recorded during flight. In the experimental section, the approximate epipolar geometric theory of the orthophoto is first analyzed and verified, and the effectiveness of the proposed method is then validated by comparing its results with those of the geometry-based, OrthoVista, and orthoimage elevation synchronous model (OESM)-based methods.
Direct application of Padé approximant for solving nonlinear differential equations.
Vazquez-Leal, Hector; Benhammouda, Brahim; Filobello-Nino, Uriel; Sarmiento-Reyes, Arturo; Jimenez-Fernandez, Victor Manuel; Garcia-Gervacio, Jose Luis; Huerta-Chua, Jesus; Morales-Mendoza, Luis Javier; Gonzalez-Lee, Mario
2014-01-01
This work presents a direct procedure for applying the Padé method to find approximate solutions of nonlinear differential equations. Moreover, we present some case studies showing the strength of the method in generating highly accurate rational approximate solutions compared to other semi-analytical methods. The types of nonlinear equations tested are: a highly nonlinear boundary value problem, a differential-algebraic oscillator problem, and an asymptotic problem. The highly accurate, handy approximations obtained by direct application of the Padé method show the high potential of the proposed scheme to approximate a wide variety of problems. What is more, the direct application of the Padé approximant avoids the prior application of an approximative method, such as the Taylor series method, homotopy perturbation method, Adomian decomposition method, homotopy analysis method, or variational iteration method, among others, as a tool for obtaining a power series solution to post-treat with the Padé approximant.
Numeric Modified Adomian Decomposition Method for Power System Simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dimitrovski, Aleksandar D; Simunovic, Srdjan; Pannala, Sreekanth
This paper investigates the applicability of the numeric Wazwaz El Sayed modified Adomian Decomposition Method (WES-ADM) for time domain simulation of power systems. WES-ADM is a numerical approximation method for the solution of nonlinear ordinary differential equations, based on a modified Adomian decomposition (ADM) technique. The nonlinear terms in the differential equations are approximated using Adomian polynomials. In this paper WES-ADM is applied to time domain simulations of multimachine power systems. The WECC 3-generator, 9-bus system and the IEEE 10-generator, 39-bus system have been used to test the applicability of the approach. Several fault scenarios have been tested. It has been found that the proposed approach is faster than the trapezoidal method with comparable accuracy.
Subsonic Aircraft With Regression and Neural-Network Approximators Designed
NASA Technical Reports Server (NTRS)
Patnaik, Surya N.; Hopkins, Dale A.
2004-01-01
At the NASA Glenn Research Center, NASA Langley Research Center's Flight Optimization System (FLOPS) and the design optimization testbed COMETBOARDS with regression and neural-network-analysis approximators have been coupled to obtain a preliminary aircraft design methodology. For a subsonic aircraft, the optimal design, that is the airframe-engine combination, is obtained by the simulation. The aircraft is powered by two high-bypass-ratio engines with a nominal thrust of about 35,000 lbf. It is to carry 150 passengers at a cruise speed of Mach 0.8 over a range of 3000 n mi and to operate on a 6000-ft runway. The aircraft design utilized a neural network and a regression-approximations-based analysis tool, along with a multioptimizer cascade algorithm that uses sequential linear programming, sequential quadratic programming, the method of feasible directions, and then sequential quadratic programming again. Optimal aircraft weight versus the number of design iterations is shown. The central processing unit (CPU) time to solution is given. It is shown that the regression-method-based analyzer exhibited a smoother convergence pattern than the FLOPS code. The optimum weight obtained by the approximation technique and the FLOPS code differed by 1.3 percent. Prediction by the approximation technique exhibited no error for the aircraft wing area and turbine entry temperature, whereas it was within 2 percent for most other parameters. Cascade strategy was required by FLOPS as well as the approximators. The regression method had a tendency to hug the data points, whereas the neural network exhibited a propensity to follow a mean path. The performance of the neural network and regression methods was considered adequate. It was at about the same level for small, standard, and large models with redundancy ratios (defined as the ratio of the number of input-output pairs to the number of unknown coefficients) of 14, 28, and 57, respectively. On an SGI Octane workstation (Silicon Graphics, Inc., Mountain View, CA), the regression training required a fraction of a CPU second, whereas neural network training took between 1 and 9 min. For a single analysis cycle, the 3-sec CPU time required by the FLOPS code was reduced to milliseconds by the approximators. For design calculations, the time with the FLOPS code was 34 min. It was reduced to 2 sec with the regression method and to 4 min by the neural network technique. The performance of the regression and neural network methods was found to be satisfactory for the analysis and design optimization of the subsonic aircraft.
The Role of Scale and Model Bias in ADAPT's Photospheric Estimation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Godinez Vazquez, Humberto C.; Hickmann, Kyle Scott; Arge, Charles Nicholas
2015-05-20
The Air Force Data Assimilative Photospheric flux Transport model (ADAPT) is a magnetic flux propagation model based on the Worden-Harvey (WH) model. ADAPT is used to provide global maps of the Sun's photospheric magnetic flux. A data assimilation method based on the Ensemble Kalman Filter (EnKF), a method of Monte Carlo approximation tied with Kalman filtering, is used in calculating the ADAPT models.
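To illustrate the EnKF machinery mentioned above, here is a minimal stochastic (perturbed-observation) analysis step assuming a linear observation operator; this is a generic textbook sketch, not ADAPT's implementation.

```python
import numpy as np

def enkf_update(ensemble, obs, obs_op, obs_cov, rng=None):
    """One stochastic EnKF analysis step with perturbed observations.

    ensemble : (n_members, n_state); obs : (n_obs,);
    obs_op   : (n_obs, n_state) linear observation operator;
    obs_cov  : (n_obs, n_obs) observation error covariance.
    """
    rng = rng or np.random.default_rng()
    x = np.asarray(ensemble, dtype=float)
    anom = x - x.mean(axis=0)
    p = anom.T @ anom / (len(x) - 1)             # Monte Carlo forecast covariance
    s = obs_op @ p @ obs_op.T + obs_cov
    gain = p @ obs_op.T @ np.linalg.inv(s)       # Kalman gain
    # every member assimilates its own perturbed copy of the observations
    y = obs + rng.multivariate_normal(np.zeros(len(obs)), obs_cov, size=len(x))
    return x + (y - x @ obs_op.T) @ gain.T
```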
A single-image method for x-ray refractive index CT.
Mittone, A; Gasilov, S; Brun, E; Bravin, A; Coan, P
2015-05-07
X-ray refraction-based computed tomography imaging is a well-established method for nondestructive investigations of various objects. In order to perform the 3D reconstruction of the index of refraction, two or more raw computed tomography phase-contrast images are usually acquired and combined to retrieve the refraction map (i.e. differential phase) signal within the sample. We suggest an approximate method to extract the refraction signal which uses a single raw phase-contrast image. This method, here applied to analyzer-based phase-contrast imaging, is employed to retrieve the index of refraction map of a biological sample. The achieved accuracy in distinguishing the different tissues is comparable with that of the non-approximated approach. The suggested procedure can be used for precise refraction computed tomography with the advantage of a reduction by at least a factor of two in both the acquisition time and the dose delivered to the sample with respect to any of the other algorithms in the literature.
Hamiltonian lattice field theory: Computer calculations using variational methods
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zako, Robert L.
1991-12-03
I develop a variational method for systematic numerical computation of physical quantities -- bound state energies and scattering amplitudes -- in quantum field theory. An infinite-volume, continuum theory is approximated by a theory on a finite spatial lattice, which is amenable to numerical computation. I present an algorithm for computing approximate energy eigenvalues and eigenstates in the lattice theory and for bounding the resulting errors. I also show how to select basis states and choose variational parameters in order to minimize errors. The algorithm is based on the Rayleigh-Ritz principle and Kato's generalizations of Temple's formula. The algorithm could be adapted to systems such as atoms and molecules. I show how to compute Green's functions from energy eigenvalues and eigenstates in the lattice theory, and relate these to physical (renormalized) coupling constants, bound state energies and Green's functions. Thus one can compute approximate physical quantities in a lattice theory that approximates a quantum field theory with specified physical coupling constants. I discuss the errors in both approximations. In principle, the errors can be made arbitrarily small by increasing the size of the lattice, decreasing the lattice spacing and computing sufficiently long. Unfortunately, I do not understand the infinite-volume and continuum limits well enough to quantify errors due to the lattice approximation. Thus the method is currently incomplete. I apply the method to real scalar field theories using a Fock basis of free particle states. All needed quantities can be calculated efficiently with this basis. The generalization to more complicated theories is straightforward. I describe a computer implementation of the method and present numerical results for simple quantum mechanical systems.
Modeling Wind Wave Evolution from Deep to Shallow Water
2011-09-30
validation and calibration of new model developments. WORK COMPLETED: Development of a Lumped Quadruplet Approximation (LQA). To make evaluation of the ... interactions based on the WRT method. This Lumped Quadruplet Approximation (LQA) clusters (lumps) contributions to the integrations over the ... total transfer rate. A procedure has been developed to test the implementation (of LQA and other reduced versions of the WRT) where 1) the non
PET-CT image fusion using random forest and à-trous wavelet transform.
Seal, Ayan; Bhattacharjee, Debotosh; Nasipuri, Mita; Rodríguez-Esparragón, Dionisio; Menasalvas, Ernestina; Gonzalo-Martin, Consuelo
2018-03-01
New image fusion rules for multimodal medical images are proposed in this work. The image fusion rules are defined by a random forest learning algorithm and a translation-invariant à-trous wavelet transform (AWT). The proposed method is threefold. First, the source images are decomposed into approximation and detail coefficients using the AWT. Second, a random forest is used to choose pixels from the approximation and detail coefficients for forming the approximation and detail coefficients of the fused image. Lastly, the inverse AWT is applied to reconstruct the fused image. All experiments have been performed on 198 slices of both computed tomography and positron emission tomography images of a patient. A traditional fusion method based on the Mallat wavelet transform has also been implemented on these slices. A new image fusion performance measure, along with 4 existing measures, is presented, which helps to compare the performance of two pixel-level fusion methods. The experimental results clearly indicate that the proposed method outperforms the traditional method in terms of visual and quantitative qualities and that the new measure is meaningful.
A rational interpolation method to compute frequency response
NASA Technical Reports Server (NTRS)
Kenney, Charles; Stubberud, Stephen; Laub, Alan J.
1993-01-01
A rational interpolation method for approximating a frequency response is presented. The method is based on a product formulation of finite differences, thereby avoiding the numerical problems incurred by near-equal-valued subtraction. Also, resonant pole and zero cancellation schemes are developed that increase the accuracy and efficiency of the interpolation method. Selection techniques of interpolation points are also discussed.
Goal-Oriented Probability Density Function Methods for Uncertainty Quantification
2015-12-11
approximations or data-driven approaches. We investigated the accuracy of analytical techniques based on Kubo-Van Kampen operator cumulant expansions for Langevin equations driven by fractional Brownian motion and other noises.
Automating approximate Bayesian computation by local linear regression.
Thornton, Kevin R
2009-07-07
In several biological contexts, parameter inference often relies on computationally intensive techniques. "Approximate Bayesian Computation", or ABC, methods based on summary statistics have become increasingly popular. A particular flavor of ABC, based on using a linear regression to approximate the posterior distribution of the parameters conditional on the summary statistics, is computationally appealing, yet no standalone tool exists to automate the procedure. Here, I describe a program to implement the method. The software package ABCreg implements the local linear-regression approach to ABC. The advantages are: 1. The code is standalone and fully documented. 2. The program will automatically process multiple data sets, and create unique output files for each (which may be processed immediately in R), facilitating the testing of inference procedures on simulated data, or the analysis of multiple data sets. 3. The program implements two different transformation methods for the regression step. 4. Analysis options are controlled on the command line by the user, and the program is designed to output warnings for cases where the regression fails. 5. The program does not depend on any particular simulation machinery (coalescent, forward-time, etc.), and therefore is a general tool for processing the results from any simulation. 6. The code is open-source and modular. Examples of applying the software to empirical data from Drosophila melanogaster, and of testing the procedure on simulated data, are shown. In practice, ABCreg simplifies implementing ABC based on local-linear regression.
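A bare-bones numpy sketch of the rejection-plus-regression-adjustment step that ABCreg automates (Beaumont-style); the unweighted least-squares fit and the fixed acceptance fraction are simplifications of what the package actually does.

```python
import numpy as np

def abc_regression_adjust(theta, summaries, s_obs, accept_frac=0.05):
    """Rejection ABC followed by local-linear regression adjustment.

    theta     : (n, p) parameters drawn from the prior
    summaries : (n, k) summary statistics of the corresponding simulations
    s_obs     : (k,) summary statistics of the observed data
    Returns adjusted draws approximating the posterior sample.
    """
    d = np.linalg.norm(summaries - s_obs, axis=1)      # distance to the data
    keep = d <= np.quantile(d, accept_frac)            # rejection step
    th, s = theta[keep], summaries[keep]
    X = np.column_stack([np.ones(keep.sum()), s - s_obs])
    beta, *_ = np.linalg.lstsq(X, th, rcond=None)      # local linear fit
    # shift each accepted draw to what it would be at s = s_obs
    return th - (s - s_obs) @ beta[1:]
```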
Exploring stability of entropy analysis for signal with different trends
NASA Astrophysics Data System (ADS)
Zhang, Yin; Li, Jin; Wang, Jun
2017-03-01
Considering the effects of environmental disturbances and instrument systems, actual measured signals always carry different trends, which makes it difficult to accurately capture signal complexity. Choosing stable and effective analysis methods is therefore very important. In this paper, we applied two entropy measures, the base-scale entropy and approximate entropy, to analyze signal complexity, and studied the effect of trends on ideal signals and heart rate variability (HRV) signals, namely the linear, periodic, and power-law trends that are likely to occur in actual signals. The results show that approximate entropy is unstable when different trends are embedded into the signals, so it is not suitable for analyzing signals with trends. However, the base-scale entropy has preferable stability and accuracy for signals with different trends. The base-scale entropy is therefore an effective method for analyzing actual signals.
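For reference, a standard approximate-entropy (ApEn) implementation is sketched below; the base-scale entropy is not reproduced, and the embedding dimension and tolerance follow common defaults.

```python
# Standard approximate entropy: ApEn = Phi^m(r) - Phi^(m+1)(r).
import numpy as np

def approximate_entropy(x, m=2, r_factor=0.2):
    x = np.asarray(x, dtype=float)
    r = r_factor * x.std()          # tolerance, commonly 0.2 * std

    def phi(m):
        n = len(x) - m + 1
        # Embedding: n overlapping windows of length m.
        emb = np.array([x[i:i + m] for i in range(n)])
        # Chebyshev distance between all pairs of windows.
        d = np.max(np.abs(emb[:, None, :] - emb[None, :, :]), axis=2)
        c = (d <= r).mean(axis=1)   # fraction of similar windows
        return np.log(c).mean()

    return phi(m) - phi(m + 1)

# Example: ApEn of a noisy sine with and without an embedded linear trend.
t = np.linspace(0, 10, 500)
sig = np.sin(2 * np.pi * t) + 0.1 * np.random.default_rng(1).normal(size=t.size)
print(approximate_entropy(sig), approximate_entropy(sig + 0.5 * t))
```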
Approximation methods for combined thermal/structural design
NASA Technical Reports Server (NTRS)
Haftka, R. T.; Shore, C. P.
1979-01-01
Two approximation concepts for combined thermal/structural design are evaluated. The first concept is an approximate thermal analysis based on the first derivatives of structural temperatures with respect to design variables. Two commonly used first-order Taylor series expansions are examined. The direct and reciprocal expansions are special members of a general family of approximations, and for some conditions other members of that family of approximations are more accurate. Several examples are used to compare the accuracy of the different expansions. The second approximation concept is the use of critical time points for combined thermal and stress analyses of structures with transient loading conditions. Significant time savings are realized by identifying critical time points and performing the stress analysis for those points only. The design of an insulated panel which is exposed to transient heating conditions is discussed.
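The two expansions can be illustrated with a one-variable sketch; f(x) = 1/x is chosen because the reciprocal expansion is then exact, mirroring stress-like quantities that vary inversely with sizing variables. This is an illustration of the general idea, not the paper's thermal/structural test cases.

```python
# Direct vs. reciprocal first-order Taylor approximations about x0.
import numpy as np

f  = lambda x: 1.0 / x          # stand-in response (e.g., stress ~ 1/area)
df = lambda x: -1.0 / x**2      # its design derivative

x0 = 1.0

def direct(x):
    # f(x) ~ f(x0) + f'(x0) (x - x0)
    return f(x0) + df(x0) * (x - x0)

def reciprocal(x):
    # Expand in y = 1/x: df/dy = -x^2 f'(x), so
    # f(x) ~ f(x0) - x0^2 f'(x0) (1/x - 1/x0); exact here.
    return f(x0) - x0**2 * df(x0) * (1.0 / x - 1.0 / x0)

for x in (1.2, 1.5, 2.0):
    print(x, f(x), direct(x), reciprocal(x))
```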
Best uniform approximation to a class of rational functions
NASA Astrophysics Data System (ADS)
Zheng, Zhitong; Yong, Jun-Hai
2007-10-01
We explicitly determine the best uniform polynomial approximation to a class of rational functions of the form 1/(x-c)^2 + K(a,b,c,n)/(x-c) on [a,b] represented by their Chebyshev expansion, where a, b, and c are real numbers, n-1 denotes the degree of the best approximating polynomial, and K is a constant determined by a, b, c, and n. Our result is based on the explicit determination of a phase angle η in the representation of the approximation error by a trigonometric function. Moreover, we formulate an ansatz which offers a heuristic strategy for determining the best approximating polynomial to a function represented by its Chebyshev expansion. Combined with the phase angle method, this ansatz can be used to find the best uniform approximation to some further functions.
Spacecraft attitude control using neuro-fuzzy approximation of the optimal controllers
NASA Astrophysics Data System (ADS)
Kim, Sung-Woo; Park, Sang-Young; Park, Chandeok
2016-01-01
In this study, a neuro-fuzzy controller (NFC) was developed for spacecraft attitude control to mitigate large computational load of the state-dependent Riccati equation (SDRE) controller. The NFC was developed by training a neuro-fuzzy network to approximate the SDRE controller. The stability of the NFC was numerically verified using a Lyapunov-based method, and the performance of the controller was analyzed in terms of approximation ability, steady-state error, cost, and execution time. The simulations and test results indicate that the developed NFC efficiently approximates the SDRE controller, with asymptotic stability in a bounded region of angular velocity encompassing the operational range of rapid-attitude maneuvers. In addition, it was shown that an approximated optimal feedback controller can be designed successfully through neuro-fuzzy approximation of the optimal open-loop controller.
NASA Astrophysics Data System (ADS)
Scherstjanoi, M.; Kaplan, J. O.; Thürig, E.; Lischke, H.
2013-09-01
Models of vegetation dynamics that are designed for application at spatial scales larger than individual forest gaps suffer from several limitations. Typically, either a population average approximation is used that results in unrealistic tree allometry and forest stand structure, or models have a high computational demand because they need to simulate both a series of age-based cohorts and a number of replicate patches to account for stochastic gap-scale disturbances. The detail required by the latter method increases the number of calculations by two to three orders of magnitude compared to the less realistic population average approach. In an effort to increase the efficiency of dynamic vegetation models without sacrificing realism, we developed a new method for simulating stand-replacing disturbances that is both accurate and faster than approaches that use replicate patches. The GAPPARD (approximating GAP model results with a Probabilistic Approach to account for stand Replacing Disturbances) method works by postprocessing the output of deterministic, undisturbed simulations of a cohort-based vegetation model, deriving the distribution of patch ages at any point in time from a disturbance probability. With this distribution, the expected value of any output variable can be calculated from the output values of the deterministic undisturbed run at the time corresponding to the patch age. To account for temporal changes in model forcing (e.g., as a result of climate change), GAPPARD performs a series of deterministic simulations and interpolates between the results in the postprocessing step. We integrated the GAPPARD method in the vegetation model LPJ-GUESS and evaluated it in a series of simulations along an altitudinal transect of an inner-Alpine valley. We obtained results very similar to the output of the original LPJ-GUESS model with 100 replicate patches, but simulation time was reduced by approximately a factor of 10. Our new method is therefore highly suited for rapidly approximating LPJ-GUESS results; it opens the way for studies over large spatial domains and allows easier parameterization of tree species, faster identification of areas with interesting simulation results, and comparisons with large-scale datasets and the results of other forest models.
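The core post-processing step can be sketched as a weighted average over a patch-age distribution; the geometric age distribution under a constant annual disturbance probability used below is an illustrative assumption.

```python
# Sketch of the GAPPARD idea: average one undisturbed run over patch ages.
import numpy as np

def expected_output(y_undisturbed, p):
    """y_undisturbed[a] = model output of an undisturbed patch of age a;
    p = assumed constant annual stand-replacing disturbance probability."""
    ages = np.arange(len(y_undisturbed))
    w = p * (1.0 - p) ** ages      # P(patch age = a): geometric distribution
    w /= w.sum()                   # renormalise over the truncated age range
    return np.dot(w, y_undisturbed)

# Example: biomass saturating with stand age, 1% disturbance per year.
age = np.arange(500)
biomass = 200.0 * (1.0 - np.exp(-age / 80.0))
print(expected_output(biomass, p=0.01))
```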
Data approximation using a blending type spline construction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dalmo, Rune; Bratlie, Jostein
2014-11-18
Generalized expo-rational B-splines (GERBS) is a blending type spline construction where local functions at each knot are blended together by C^k-smooth basis functions. One way of approximating discrete regular data using GERBS is by partitioning the data set into subsets and fitting a local function to each subset. Partitioning and fitting strategies can be devised such that important or interesting data points are interpolated in order to preserve certain features. We present a method for fitting discrete data using a tensor product GERBS construction. The method is based on detection of feature points using differential geometry. Derivatives, which are necessary for feature point detection and used to construct local surface patches, are approximated from the discrete data using finite differences.
Anandakrishnan, Ramu; Scogland, Tom R W; Fenley, Andrew T; Gordon, John C; Feng, Wu-chun; Onufriev, Alexey V
2010-06-01
Tools that compute and visualize biomolecular electrostatic surface potential have been used extensively for studying biomolecular function. However, determining the surface potential for large biomolecules on a typical desktop computer can take days or longer using currently available tools and methods. Two commonly used techniques to speed-up these types of electrostatic computations are approximations based on multi-scale coarse-graining and parallelization across multiple processors. This paper demonstrates that for the computation of electrostatic surface potential, these two techniques can be combined to deliver significantly greater speed-up than either one separately, something that is in general not always possible. Specifically, the electrostatic potential computation, using an analytical linearized Poisson-Boltzmann (ALPB) method, is approximated using the hierarchical charge partitioning (HCP) multi-scale method, and parallelized on an ATI Radeon 4870 graphical processing unit (GPU). The implementation delivers a combined 934-fold speed-up for a 476,040 atom viral capsid, compared to an equivalent non-parallel implementation on an Intel E6550 CPU without the approximation. This speed-up is significantly greater than the 42-fold speed-up for the HCP approximation alone or the 182-fold speed-up for the GPU alone. Copyright (c) 2010 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Sun, Lin; Liu, Xinyan; Yang, Yikun; Chen, TingTing; Wang, Quan; Zhou, Xueying
2018-04-01
Although enhanced over prior Landsat instruments, Landsat 8 OLI achieves very high cloud detection precision, but the detection of cloud shadows still faces great challenges. Geometry-based cloud shadow detection methods are considered the most effective and are being improved constantly. The Function of Mask (Fmask) cloud shadow detection method is one of the most representative geometry-based methods and has been used for cloud shadow detection with Landsat 8 OLI. However, the Fmask method estimates cloud height employing fixed temperature rates, which are highly uncertain, and errors in the estimated cloud height can cause errors in large-area cloud shadow detection. This article improves the geometry-based cloud shadow detection method for Landsat OLI in the following two aspects. (1) Cloud height no longer depends on the brightness temperature of the thermal infrared band but spans a possible dynamic range from 200 m to 12,000 m. In this case, the cloud shadow is not a specific location but a possible range. Further analysis is carried out within the possible range, based on the spectrum, to determine the cloud shadow location. This effectively avoids the cloud shadow omissions caused by errors in the determination of cloud height. (2) Object-based and pixel-level spectral analyses are combined to detect cloud shadows, enabling detection at both the target scale and the pixel scale. Based on the analysis of the spectral differences between cloud shadows and typical ground objects, the best cloud shadow detection bands of Landsat 8 OLI were determined. The combined use of spectrum and shape can effectively improve the detection precision of shadows produced by thin clouds. Several cloud shadow detection experiments were carried out, and the results were verified against artificial recognition. These experiments indicated that the method can identify cloud shadows in different regions with correct accuracy exceeding 80%, with approximately 5% of areas wrongly identified and approximately 10% of cloud shadow areas missed. This accuracy is clearly higher than that of Fmask, whose correct accuracy is below 60% with approximately 40% of shadow areas missed.
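The height-sweep idea can be sketched as follows; pixel size, angle conventions, and the height step are assumptions, and the subsequent spectral screening is omitted.

```python
# Sketch of sweeping assumed cloud heights to get candidate shadow pixels.
import numpy as np

def shadow_candidates(cloud_mask, sun_elev_deg, sun_az_deg, pixel_m=30.0,
                      heights=np.arange(200.0, 12000.0, 200.0)):
    """Union of projected shadow locations over a range of cloud heights.
    Sign conventions for the azimuth depend on the image orientation and
    should be adjusted accordingly."""
    rows, cols = np.nonzero(cloud_mask)
    candidates = np.zeros_like(cloud_mask, dtype=bool)
    for h in heights:
        d = h / np.tan(np.radians(sun_elev_deg))   # ground shadow offset (m)
        dr = d * np.cos(np.radians(sun_az_deg)) / pixel_m
        dc = -d * np.sin(np.radians(sun_az_deg)) / pixel_m
        r = np.clip((rows + dr).astype(int), 0, cloud_mask.shape[0] - 1)
        c = np.clip((cols + dc).astype(int), 0, cloud_mask.shape[1] - 1)
        candidates[r, c] = True
    return candidates
```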
NASA Astrophysics Data System (ADS)
Moraes Rêgo, Patrícia Helena; Viana da Fonseca Neto, João; Ferreira, Ernesto M.
2015-08-01
The main focus of this article is to present a proposal to solve, via UDU^T factorisation, the convergence and numerical stability problems related to the covariance matrix ill-conditioning of the recursive least squares (RLS) approach for online approximations of the algebraic Riccati equation (ARE) solution associated with the discrete linear quadratic regulator (DLQR) problem formulated in the actor-critic reinforcement learning and approximate dynamic programming context. The parameterisations of the Bellman equation, utility function and dynamic system, as well as the algebra of the Kronecker product, assemble a framework for the solution of the DLQR problem. The condition number and the positivity parameter of the covariance matrix are associated with statistical metrics for evaluating the approximation performance of the ARE solution via RLS-based estimators. The performance of RLS approximators is also evaluated in terms of consistency and polarisation (bias) when associated with reinforcement learning methods. The methodology encompasses online designs of DLQR controllers, which are evaluated in a multivariable dynamic system model.
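For context, the plain (unfactorised) RLS update reads as below; the paper's contribution is precisely a UDU^T-factorised propagation of the covariance that avoids the ill-conditioning this form can exhibit.

```python
# Generic recursive least squares (RLS) step for y ~ x.T @ theta.
import numpy as np

def rls_step(theta, P, x, y, lam=1.0):
    """One RLS update with forgetting factor lam (lam = 1: no forgetting)."""
    Px = P @ x
    k = Px / (lam + x @ Px)            # gain vector
    theta = theta + k * (y - x @ theta)
    # Covariance update; this subtraction is where numerical
    # ill-conditioning can accumulate, motivating UDU^T factorisation.
    P = (P - np.outer(k, Px)) / lam
    return theta, P
```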
Modeling dam-break flows using finite volume method on unstructured grid
USDA-ARS's Scientific Manuscript database
Two-dimensional shallow water models based on unstructured finite volume method and approximate Riemann solvers for computing the intercell fluxes have drawn growing attention because of their robustness, high adaptivity to complicated geometry and ability to simulate flows with mixed regimes and di...
NASA Astrophysics Data System (ADS)
Dehghan, Mehdi; Nikpour, Ahmad
2013-09-01
In this research, we propose two different methods to solve the coupled Klein-Gordon-Zakharov (KGZ) equations: the Differential Quadrature (DQ) and Globally Radial Basis Functions (GRBFs) methods. In the DQ method, the derivative of a function at a point is directly approximated by a linear combination of all functional values in the global domain. The principal work in this method is the determination of the weight coefficients. We use two ways of obtaining these coefficients: cosine expansion (CDQ) and radial basis functions (RBFs-DQ); the former is a mesh-based method while the latter belongs to the class of meshless methods. Unlike the DQ method, the GRBF method directly substitutes the RBF expansion of the approximate solution into the partial differential equation. The main problem in the GRBFs method is the ill-conditioning of the interpolation matrix. To avoid this problem, we study the bases introduced in Pazouki and Schaback (2011) [44]. Some examples are presented to compare the accuracy and ease of implementation of the proposed methods. In the numerical examples, we concentrate on the Inverse Multiquadric (IMQ) and second-order Thin Plate Spline (TPS) radial basis functions. Variable shape parameter strategies (exponential and random) are applied to the IMQ function and the results are compared with a constant shape parameter.
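The DQ weighting step can be sketched via the classical Lagrange-polynomial route on arbitrary 1-D nodes; the CDQ and RBF-DQ variants of the paper differ only in how these weights are derived.

```python
# Polynomial-based differential quadrature: first-derivative weight matrix.
import numpy as np

def dq_weights(x):
    """A[i, j] such that f'(x_i) ~ sum_j A[i, j] f(x_j) on nodes x."""
    n = len(x)
    # M'(x_i) = prod_{k != i} (x_i - x_k)
    diff = x[:, None] - x[None, :]
    np.fill_diagonal(diff, 1.0)
    Mp = diff.prod(axis=1)
    A = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i != j:
                A[i, j] = Mp[i] / ((x[i] - x[j]) * Mp[j])
    # Diagonal from the constraint that constants differentiate to zero.
    np.fill_diagonal(A, -A.sum(axis=1))
    return A

# Check: differentiate sin(x) on Chebyshev-like nodes.
x = np.cos(np.linspace(np.pi, 0, 12))
print(np.abs(dq_weights(x) @ np.sin(x) - np.cos(x)).max())
```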
High-Order Semi-Discrete Central-Upwind Schemes for Multi-Dimensional Hamilton-Jacobi Equations
NASA Technical Reports Server (NTRS)
Bryson, Steve; Levy, Doron; Biegel, Bran R. (Technical Monitor)
2002-01-01
We present high-order semi-discrete central-upwind numerical schemes for approximating solutions of multi-dimensional Hamilton-Jacobi (HJ) equations. This scheme is based on the use of fifth-order central interpolants like those developed in [1], combined with the fluxes presented in [3]. These interpolants use the weighted essentially nonoscillatory (WENO) approach to avoid spurious oscillations near singularities, and become "central-upwind" in the semi-discrete limit. This scheme provides numerical approximations whose error is as much as an order of magnitude smaller than those of previous WENO-based fifth-order methods [2, 1]. These results are discussed via examples in one, two and three dimensions. We also present explicit N-dimensional formulas for the fluxes, discuss their monotonicity and the connection between this method and that in [2].
Bayesian alternative to the ISO-GUM's use of the Welch-Satterthwaite formula
NASA Astrophysics Data System (ADS)
Kacker, Raghu N.
2006-02-01
In certain disciplines, uncertainty is traditionally expressed as an interval about an estimate for the value of the measurand. Development of such uncertainty intervals with a stated coverage probability based on the International Organization for Standardization (ISO) Guide to the Expression of Uncertainty in Measurement (GUM) requires a description of the probability distribution for the value of the measurand. The ISO-GUM propagates the estimates and their associated standard uncertainties for various input quantities through a linear approximation of the measurement equation to determine an estimate and its associated standard uncertainty for the value of the measurand. This procedure does not yield a probability distribution for the value of the measurand. The ISO-GUM suggests that under certain conditions motivated by the central limit theorem the distribution for the value of the measurand may be approximated by a scaled-and-shifted t-distribution with effective degrees of freedom obtained from the Welch-Satterthwaite (W-S) formula. The approximate t-distribution may then be used to develop an uncertainty interval with a stated coverage probability for the value of the measurand. We propose an approximate normal distribution based on a Bayesian uncertainty as an alternative to the t-distribution based on the W-S formula. A benefit of the approximate normal distribution based on a Bayesian uncertainty is that it greatly simplifies the expression of uncertainty by eliminating altogether the need for calculating effective degrees of freedom from the W-S formula. In the special case where the measurand is the difference between two means, each evaluated from statistical analyses of independent normally distributed measurements with unknown and possibly unequal variances, the probability distribution for the value of the measurand is known to be a Behrens-Fisher distribution. We compare the performance of the approximate normal distribution based on a Bayesian uncertainty and the approximate t-distribution based on the W-S formula with respect to the Behrens-Fisher distribution. The approximate normal distribution is simpler and better in this case. A thorough investigation of the relative performance of the two approximate distributions would require comparison for a range of measurement equations by numerical methods.
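The W-S route being replaced can be sketched numerically; the component uncertainties and degrees of freedom below are illustrative.

```python
# Welch-Satterthwaite effective degrees of freedom and the resulting
# t-based expanded uncertainty vs. a simple normal approximation.
import numpy as np
from scipy import stats

def welch_satterthwaite(u, nu):
    """u: component standard uncertainties; nu: their degrees of freedom."""
    u, nu = np.asarray(u, float), np.asarray(nu, float)
    u_c = np.sqrt(np.sum(u**2))              # combined standard uncertainty
    nu_eff = u_c**4 / np.sum(u**4 / nu)      # W-S formula
    return u_c, nu_eff

u_c, nu_eff = welch_satterthwaite([0.5, 0.3], [4, 9])
k_t = stats.t.ppf(0.975, nu_eff)     # coverage factor, scaled t-distribution
k_n = stats.norm.ppf(0.975)          # coverage factor, normal approximation
print(nu_eff, k_t * u_c, k_n * u_c)  # 95% expanded uncertainties
```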
Feasibility study of shell buckling analysis using the modified structure method
NASA Technical Reports Server (NTRS)
Cohen, G. A.; Haftka, R. T.
1972-01-01
The modified structure method, which is based on Koiter's theory of imperfections, was used to calculate approximate buckling loads of several shells of revolution. The method does not appear to be practical for shells because, in many cases, the prebuckling nonlinearity may be too large to be treated accurately as a small imperfection.
Liu, Jian; Miller, William H
2007-06-21
It is shown how quantum mechanical time correlation functions [defined, e.g., in Eq. (1.1)] can be expressed, without approximation, in the same form as the linearized approximation of the semiclassical initial value representation (LSC-IVR), or classical Wigner model, for the correlation function [cf. Eq. (2.1)], i.e., as a phase space average (over initial conditions for trajectories) of the Wigner functions corresponding to the two operators. The difference is that the trajectories involved in the LSC-IVR evolve classically, i.e., according to the classical equations of motion, while in the exact theory they evolve according to generalized equations of motion that are derived here. Approximations to the exact equations of motion are then introduced to achieve practical methods that are applicable to complex (i.e., large) molecular systems. Four such methods are proposed in the paper--the full Wigner dynamics (full WD) and the second order WD based on "Wigner trajectories" [H. W. Lee and M. D. Scully, J. Chem. Phys. 77, 4604 (1982)] and the full Donoso-Martens dynamics (full DMD) and the second order DMD based on "Donoso-Martens trajectories" [A. Donoso and C. C. Martens, Phys. Rev. Lett. 87, 223202 (2001)]--all of which can be viewed as generalizations of the original LSC-IVR method. Numerical tests of the four versions of this new approach are made for two anharmonic model problems, and for each the momentum autocorrelation function (i.e., operators linear in coordinate or momentum operators) and the force autocorrelation function (nonlinear operators) have been calculated. These four new approximate treatments are indeed seen to be significant improvements to the original LSC-IVR approximation.
Efficient experimental design for uncertainty reduction in gene regulatory networks.
Dehghannasiri, Roozbeh; Yoon, Byung-Jun; Dougherty, Edward R
2015-01-01
An accurate understanding of interactions among genes plays a major role in developing therapeutic intervention methods. Gene regulatory networks often contain a significant amount of uncertainty. The process of prioritizing biological experiments to reduce the uncertainty of gene regulatory networks is called experimental design. Under such a strategy, the experiments with high priority are suggested to be conducted first. The authors have already proposed an optimal experimental design method based upon the objective for modeling gene regulatory networks, such as deriving therapeutic interventions. The experimental design method utilizes the concept of mean objective cost of uncertainty (MOCU). MOCU quantifies the expected increase of cost resulting from uncertainty. The optimal experiment to be conducted first is the one which leads to the minimum expected remaining MOCU subsequent to the experiment. In the process, one must find the optimal intervention for every gene regulatory network compatible with the prior knowledge, which can be prohibitively expensive when the size of the network is large. In this paper, we propose a computationally efficient experimental design method. This method incorporates a network reduction scheme by introducing a novel cost function that takes into account the disruption in the ranking of potential experiments. We then estimate the approximate expected remaining MOCU at a lower computational cost using the reduced networks. Simulation results based on synthetic and real gene regulatory networks show that the proposed approximate method has close performance to that of the optimal method but at lower computational cost. The proposed approximate method also outperforms the random selection policy significantly. A MATLAB software implementing the proposed experimental design method is available at http://gsp.tamu.edu/Publications/supplementary/roozbeh15a/.
A point-value enhanced finite volume method based on approximate delta functions
NASA Astrophysics Data System (ADS)
Xuan, Li-Jun; Majdalani, Joseph
2018-02-01
We revisit the concept of an approximate delta function (ADF), introduced by Huynh (2011) [1], in the form of a finite-order polynomial that holds identical integral properties to the Dirac delta function when used in conjunction with a finite-order polynomial integrand over a finite domain. We show that the use of generic ADF polynomials can be effective at recovering and generalizing several high-order methods, including Taylor-based and nodal-based Discontinuous Galerkin methods, as well as the Correction Procedure via Reconstruction. Based on the ADF concept, we then proceed to formulate a Point-value enhanced Finite Volume (PFV) method, which stores and updates the cell-averaged values inside each element as well as the unknown quantities and, if needed, their derivatives on nodal points. The sharing of nodal information with surrounding elements saves the number of degrees of freedom compared to other compact methods at the same order. To ensure conservation, cell-averaged values are updated using an identical approach to that adopted in the finite volume method. Here, the updating of nodal values and their derivatives is achieved through an ADF concept that leverages all of the elements within the domain of integration that share the same nodal point. The resulting scheme is shown to be very stable at successively increasing orders. Both accuracy and stability of the PFV method are verified using a Fourier analysis and through applications to the linear wave and nonlinear Burgers' equations in one-dimensional space.
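The ADF concept can be illustrated with a small moment-matching construction on [-1, 1]; this is a generic rendering of the idea, not the paper's formulas.

```python
# Approximate delta function (ADF): a degree-n polynomial phi on [-1, 1]
# with integral(phi * x**k) = x0**k for k = 0..n, so that
# integral(phi * p) = p(x0) for any polynomial p of degree <= n.
import numpy as np

def gram(n):
    # G[k, j] = integral_{-1}^{1} x**(j+k) dx
    return np.array([[2.0 / (j + k + 1) if (j + k) % 2 == 0 else 0.0
                      for j in range(n + 1)] for k in range(n + 1)])

def adf_coeffs(x0, n):
    rhs = np.array([x0**k for k in range(n + 1)])
    return np.linalg.solve(gram(n), rhs)   # monomial coefficients of phi

# Verify the sifting property on a random degree-n polynomial:
# integral(phi * p) = c_phi^T G c_p should equal p(x0).
n, x0 = 4, 0.3
c_phi = adf_coeffs(x0, n)
p = np.random.default_rng(2).normal(size=n + 1)   # coeffs of test polynomial
print(c_phi @ gram(n) @ p, np.polyval(p[::-1], x0))
```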
NASA Astrophysics Data System (ADS)
Filatov, Michael; Cremer, Dieter
2002-01-01
A recently developed variationally stable quasi-relativistic method, which is based on the low-order approximation to the method of normalized elimination of the small component, was incorporated into density functional theory (DFT). The new method was tested for diatomic molecules involving Ag, Cd, Au, and Hg by calculating equilibrium bond lengths, vibrational frequencies, and dissociation energies. The method is easy to implement into standard quantum chemical programs and leads to accurate results for the benchmark systems studied.
Simultaneous quaternion estimation (QUEST) and bias determination
NASA Technical Reports Server (NTRS)
Markley, F. Landis
1989-01-01
Tests of a new method for the simultaneous estimation of spacecraft attitude and sensor biases, based on a quaternion estimation algorithm minimizing Wahba's loss function are presented. The new method is compared with a conventional batch least-squares differential correction algorithm. The estimates are based on data from strapdown gyros and star trackers, simulated with varying levels of Gaussian noise for both inertially-fixed and Earth-pointing reference attitudes. Both algorithms solve for the spacecraft attitude and the gyro drift rate biases. They converge to the same estimates at the same rate for inertially-fixed attitude, but the new algorithm converges more slowly than the differential correction for Earth-pointing attitude. The slower convergence of the new method for non-zero attitude rates is believed to be due to the use of an inadequate approximation for a partial derivative matrix. The new method requires about twice the computational effort of the differential correction. Improving the approximation for the partial derivative matrix in the new method is expected to improve its convergence at the cost of increased computational effort.
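For orientation, Wahba's problem can be solved compactly via SVD as sketched below; QUEST solves the same minimization through a quaternion eigenproblem, and the bias-estimation layer of the paper is omitted.

```python
# Attitude from vector observations: minimize Wahba's loss function
# L(A) = 0.5 * sum_i w_i * ||b_i - A r_i||^2 via SVD.
import numpy as np

def wahba_svd(body_vecs, ref_vecs, weights):
    """body_vecs/ref_vecs: lists of 3-vectors; weights: positive scalars."""
    B = sum(w * np.outer(b, r)
            for w, b, r in zip(weights, body_vecs, ref_vecs))
    U, _, Vt = np.linalg.svd(B)
    # Enforce det(A) = +1 so the result is a proper rotation.
    d = np.linalg.det(U) * np.linalg.det(Vt)
    return U @ np.diag([1.0, 1.0, d]) @ Vt
```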
NASA Astrophysics Data System (ADS)
Wu, Xiongwu; Brooks, Bernard R.
2011-11-01
The self-guided Langevin dynamics (SGLD) is a method to accelerate conformational searching. This method is unique in that it selectively enhances and suppresses molecular motions based on their frequency to accelerate conformational searching without modifying energy surfaces or raising temperatures. It has been applied to studies of many long-time-scale events, such as protein folding. Recent progress in the understanding of the conformational distribution in SGLD simulations makes SGLD also an accurate method for quantitative studies. The SGLD partition function provides a way to convert the SGLD conformational distribution to the canonical ensemble distribution and to calculate ensemble average properties through reweighting. Based on the SGLD partition function, this work presents a force-momentum-based self-guided Langevin dynamics (SGLDfp) simulation method to directly sample the canonical ensemble. This method includes interaction forces in its guiding force to compensate for the perturbation caused by the momentum-based guiding force so that it can approximately sample the canonical ensemble. Using several example systems, we demonstrate that SGLDfp simulations can approximately maintain the canonical ensemble distribution and significantly accelerate conformational searching. With optimal parameters, SGLDfp and SGLD simulations can cross energy barriers of more than 15 kT and 20 kT, respectively, at rates similar to those at which LD simulations cross energy barriers of 10 kT. The SGLDfp method is size extensive and works well for large systems. For studies where preserving accessible conformational space is critical, such as free energy calculations and protein folding studies, SGLDfp is an efficient approach to search and sample the conformational space.
Varma, Hari M.; Valdes, Claudia P.; Kristoffersen, Anna K.; Culver, Joseph P.; Durduran, Turgut
2014-01-01
A novel tomographic method based on laser speckle contrast, speckle contrast optical tomography (SCOT), is introduced that allows us to reconstruct the three-dimensional distribution of blood flow in deep tissues. This method is analogous to diffuse optical tomography (DOT) but for deep tissue blood flow. We develop a reconstruction algorithm based on the first Born approximation to generate the three-dimensional distribution of flow using experimental data obtained from tissue-simulating phantoms. PMID:24761306
Aerodynamic design optimization using sensitivity analysis and computational fluid dynamics
NASA Technical Reports Server (NTRS)
Baysal, Oktay; Eleshaky, Mohamed E.
1991-01-01
A new and efficient method is presented for aerodynamic design optimization, which is based on a computational fluid dynamics (CFD)-sensitivity analysis algorithm. The method is applied to design a scramjet-afterbody configuration for an optimized axial thrust. The Euler equations are solved for the inviscid analysis of the flow, which in turn provides the objective function and the constraints. The CFD analysis is then coupled with the optimization procedure that uses a constrained minimization method. The sensitivity coefficients, i.e. gradients of the objective function and the constraints, needed for the optimization are obtained using a quasi-analytical method rather than the traditional brute force method of finite difference approximations. During the one-dimensional search of the optimization procedure, an approximate flow analysis (predicted flow) based on a first-order Taylor series expansion is used to reduce the computational cost. Finally, the sensitivity of the optimum objective function to various design parameters, which are kept constant during the optimization, is computed to predict new optimum solutions. The flow analysis results for the demonstrative example are compared with the experimental data. It is shown that the method is more efficient than the traditional methods.
NASA Astrophysics Data System (ADS)
Costa-Surós, M.; Calbó, J.; González, J. A.; Long, C. N.
2014-08-01
The cloud vertical distribution and especially the cloud base height, which is linked to cloud type, are important characteristics in order to describe the impact of clouds on climate. In this work, several methods for estimating the cloud vertical structure (CVS) based on atmospheric sounding profiles are compared, considering the number and position of cloud layers, with a ground-based system that is taken as a reference: the Active Remote Sensing of Clouds (ARSCL). All methods establish some conditions on the relative humidity, and differ in the use of other variables, the thresholds applied, or the vertical resolution of the profile. In this study, these methods are applied to 193 radiosonde profiles acquired at the Atmospheric Radiation Measurement (ARM) Southern Great Plains site during all seasons of the year 2009 and endorsed by Geostationary Operational Environmental Satellite (GOES) images, to confirm that the cloudiness conditions are homogeneous enough across their trajectory. The perfect agreement (i.e., when the whole CVS is estimated correctly) for the methods ranges between 26 and 64%; the methods show additional approximate agreement (i.e., when at least one cloud layer is assessed correctly) from 15 to 41%. Further tests and improvements are applied to one of these methods. In addition, we attempt to make this method suitable for low-resolution vertical profiles, like those from the outputs of reanalysis methods or from the World Meteorological Organization's (WMO) Global Telecommunication System. The perfect agreement, even when using low-resolution profiles, can be improved by up to 67% (plus 25% of the approximate agreement) if the thresholds for a moist layer to become a cloud layer are modified to minimize false negatives with the current data set, thus improving overall agreement.
A Kernel-free Boundary Integral Method for Elliptic Boundary Value Problems
Ying, Wenjun; Henriquez, Craig S.
2013-01-01
This paper presents a class of kernel-free boundary integral (KFBI) methods for general elliptic boundary value problems (BVPs). The boundary integral equations reformulated from the BVPs are solved iteratively with the GMRES method. During the iteration, the boundary and volume integrals involving Green's functions are approximated by structured grid-based numerical solutions, which avoids the need to know the analytical expressions of Green's functions. The KFBI method assumes that the larger regular domain, which embeds the original complex domain, can be easily partitioned into a hierarchy of structured grids so that fast elliptic solvers such as the fast Fourier transform (FFT) based Poisson/Helmholtz solvers or those based on geometric multigrid iterations are applicable. The structured grid-based solutions are obtained with standard finite difference method (FDM) or finite element method (FEM), where the right hand side of the resulting linear system is appropriately modified at irregular grid nodes to recover the formal accuracy of the underlying numerical scheme. Numerical results demonstrating the efficiency and accuracy of the KFBI methods are presented. It is observed that the number of GMRES iterations used by the method for solving isotropic and moderately anisotropic BVPs is independent of the sizes of the grids that are employed to approximate the boundary and volume integrals. With the standard second-order FEMs and FDMs, the KFBI method shows a second-order convergence rate in accuracy for all of the tested Dirichlet/Neumann BVPs when the anisotropy of the diffusion tensor is not too strong. PMID:23519600
Comparing two Bayes methods based on the free energy functions in Bernoulli mixtures.
Yamazaki, Keisuke; Kaji, Daisuke
2013-08-01
Hierarchical learning models are ubiquitously employed in information science and data engineering. Their structure makes the posterior distribution complicated in the Bayes method. As a result, prediction, including construction of the posterior, is not tractable, though the advantages of the method are empirically well known. The variational Bayes method is widely used as an approximation method in applications; it has a tractable posterior based on the variational free energy function. The asymptotic behavior has been studied in many hierarchical models, and a phase transition is observed. The exact form of the asymptotic variational Bayes energy is derived in Bernoulli mixture models, and the phase diagram shows that there are three types of parameter learning. However, the approximation accuracy and the interpretation of the transition point have not been clarified yet. The present paper precisely analyzes the Bayes free energy function of Bernoulli mixtures. Comparing the free energy functions of these two Bayes methods, we can determine the approximation accuracy and elucidate the behavior of the parameter learning. Our results claim that the Bayes free energy has the same learning types while the transition points are different. Copyright © 2013 Elsevier Ltd. All rights reserved.
Negre, Christian F A; Mniszewski, Susan M; Cawkwell, Marc J; Bock, Nicolas; Wall, Michael E; Niklasson, Anders M N
2016-07-12
We present a reduced complexity algorithm to compute the inverse overlap factors required to solve the generalized eigenvalue problem in a quantum-based molecular dynamics (MD) simulation. Our method is based on the recursive, iterative refinement of an initial guess of Z (inverse square root of the overlap matrix S). The initial guess of Z is obtained beforehand by using either an approximate divide-and-conquer technique or dynamical methods, propagated within an extended Lagrangian dynamics from previous MD time steps. With this formulation, we achieve long-term stability and energy conservation even under the incomplete, approximate, iterative refinement of Z. Linear-scaling performance is obtained using numerically thresholded sparse matrix algebra based on the ELLPACK-R sparse matrix data format, which also enables efficient shared-memory parallelization. As we show in this article using self-consistent density-functional-based tight-binding MD, our approach is faster than conventional methods based on the diagonalization of overlap matrix S for systems as small as a few hundred atoms, substantially accelerating quantum-based simulations even for molecular structures of intermediate size. For a 4158-atom water-solvated polyalanine system, we find an average speedup factor of 122 for the computation of Z in each MD step.
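A generic Newton-Schulz refinement of Z toward S^(-1/2) is sketched below; the paper's scheme likewise iteratively refines an initial guess, but adds thresholded sparse linear algebra and guess propagation between MD steps, which are omitted here.

```python
# Coupled Newton-Schulz iteration for the inverse square root of an
# SPD matrix S (e.g., an overlap matrix).
import numpy as np

def inverse_sqrt(S, tol=1e-12, max_iter=100):
    n = S.shape[0]
    # Scale so the iteration converges: eigenvalues of S/c lie in (0, 1].
    c = np.linalg.norm(S, 2)
    Y, Z = S / c, np.eye(n)
    for _ in range(max_iter):
        T = 0.5 * (3.0 * np.eye(n) - Z @ Y)
        Y, Z = Y @ T, T @ Z          # Y -> (S/c)^(1/2), Z -> (S/c)^(-1/2)
        if np.linalg.norm(Z @ Y - np.eye(n)) < tol:
            break
    return Z / np.sqrt(c)            # undo the scaling

S = np.array([[2.0, 0.5], [0.5, 1.0]])    # SPD overlap-like matrix
Z = inverse_sqrt(S)
print(np.allclose(Z @ S @ Z, np.eye(2)))  # Z S Z = I
```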
NASA Technical Reports Server (NTRS)
Berenji, Hamid R.
1992-01-01
Fuzzy logic and neural networks provide new methods for designing control systems. Fuzzy logic controllers do not require a complete analytical model of a dynamic system and can provide knowledge-based heuristic controllers for ill-defined and complex systems. Neural networks can be used for learning control. In this chapter, we discuss hybrid methods using fuzzy logic and neural networks which can start with an approximate control knowledge base and refine it through reinforcement learning.
Research study on stabilization and control: Modern sampled data control theory
NASA Technical Reports Server (NTRS)
Kuo, B. C.; Singh, G.; Yackel, R. A.
1973-01-01
A numerical analysis of spacecraft stability parameters was conducted. The analysis is based on a digital approximation by point by point state comparison. The technique used is that of approximating a continuous data system by a sampled data model by comparison of the states of the two systems. Application of the method to the digital redesign of the simplified one axis dynamics of the Skylab is presented.
Välimäki, Vesa; Pekonen, Jussi; Nam, Juhan
2012-01-01
Digital subtractive synthesis is a popular music synthesis method, which requires oscillators that are aliasing-free in a perceptual sense. It is a research challenge to find computationally efficient waveform generation algorithms that produce similar-sounding signals to analog music synthesizers but which are free from audible aliasing. A technique for approximately bandlimited waveform generation is considered that is based on a polynomial correction function, which is defined as the difference of a non-bandlimited step function and a polynomial approximation of the ideal bandlimited step function. It is shown that the ideal bandlimited step function is equivalent to the sine integral, and that integrated polynomial interpolation methods can successfully approximate it. Integrated Lagrange interpolation and B-spline basis functions are considered for polynomial approximation. The polynomial correction function can be added onto samples around each discontinuity in a non-bandlimited waveform to suppress aliasing. Comparison against previously known methods shows that the proposed technique yields the best tradeoff between computational cost and sound quality. The superior method amongst those considered in this study is the integrated third-order B-spline correction function, which offers perceptually aliasing-free sawtooth emulation up to the fundamental frequency of 7.8 kHz at the sample rate of 44.1 kHz. © 2012 Acoustical Society of America.
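The correction-function idea can be sketched with the common two-sample second-order polynomial correction (polyBLEP); the paper's best-performing variant, the integrated third-order B-spline, has the same structure with a wider, higher-order polynomial.

```python
# Aliasing-suppressed sawtooth via a polynomial correction around each
# discontinuity (polyBLEP-style sketch).
import numpy as np

def poly_blep(t, dt):
    """Residual subtracted around each discontinuity (phase t in [0, 1))."""
    if t < dt:                 # sample just after the discontinuity
        t /= dt
        return t + t - t * t - 1.0
    if t > 1.0 - dt:           # sample just before the discontinuity
        t = (t - 1.0) / dt
        return t * t + t + t + 1.0
    return 0.0

def sawtooth(f0, fs, n):
    dt = f0 / fs               # phase increment per sample
    phase, out = 0.0, np.empty(n)
    for i in range(n):
        out[i] = 2.0 * phase - 1.0 - poly_blep(phase, dt)
        phase = (phase + dt) % 1.0
    return out

y = sawtooth(f0=440.0, fs=44100.0, n=1024)   # corrected sawtooth signal
```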
Design of an essentially non-oscillatory reconstruction procedure in finite-element type meshes
NASA Technical Reports Server (NTRS)
Abgrall, Remi
1992-01-01
An essentially non-oscillatory reconstruction for functions defined on finite element type meshes is designed. Two related problems are studied: the interpolation of possibly unsmooth multivariate functions on arbitrary meshes and the reconstruction of a function from its averages in the control volumes surrounding the nodes of the mesh. Concerning the first problem, the behavior of the highest coefficients of two polynomial interpolations of a function that may admit discontinuities along locally regular curves is studied: the Lagrange interpolation and an approximation such that the mean of the polynomial on any control volume is equal to that of the function to be approximated. This enables the best stencil for the approximation to be chosen. The choice of the smallest possible number of stencils is addressed. Concerning the reconstruction problem, two methods were studied: one based on an adaptation of the so-called reconstruction via deconvolution method to irregular meshes, and one that relies on the approximation of the mean as defined above. The first method is conservative up to a quadrature formula and the second one is exactly conservative. The two methods have the expected order of accuracy, but the second one is much less expensive than the first. Some numerical examples are given which demonstrate the efficiency of the reconstruction.
NASA Astrophysics Data System (ADS)
Chen, Guoxiong; Cheng, Qiuming
2016-02-01
Multi-resolution and scale-invariance have been increasingly recognized as two closely related intrinsic properties of geofields such as geochemical and geophysical anomalies, and they are commonly investigated by using multiscale- and scaling-analysis methods. In this paper, the wavelet-based multiscale decomposition (WMD) method is proposed to investigate the multiscale nature of geochemical patterns from large scale to small scale. In light of the wavelet transformation of fractal measures, we demonstrate that the wavelet approximation operator provides a generalization of the box-counting method for scaling analysis of geochemical patterns. Specifically, the approximation coefficient acts as the generalized density value in density-area fractal modeling of singular geochemical distributions. Accordingly, we present a novel local singularity analysis (LSA) using the WMD algorithm, which extends the conventional moving average to a kernel-based operator for implementing LSA. Finally, the novel LSA is validated in a case study dealing with geochemical data (Fe2O3) in stream sediments for mineral exploration in Inner Mongolia, China. In comparison with the LSA implemented using the moving-average method, the novel LSA using WMD better identified weak geochemical anomalies associated with mineralization in the covered area.
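A window-based rendering of local singularity analysis is sketched below, using plain moving averages where the paper substitutes wavelet approximation coefficients; the window sizes are illustrative.

```python
# Local singularity index from the slope of log(window mean) vs log(size).
import numpy as np
from scipy.ndimage import uniform_filter

def singularity_index(field, sizes=(3, 5, 9, 17)):
    """field: positive-valued 2-D array (e.g., element concentrations)."""
    logs = np.log(np.array(sizes, float))
    means = np.stack([np.log(uniform_filter(field, s) + 1e-12)
                      for s in sizes])
    # Pixel-wise least-squares slope of log(mean) vs log(size);
    # in 2-D the density-area relation gives slope = alpha - 2.
    x = logs - logs.mean()
    slope = np.tensordot(x, means - means.mean(axis=0), axes=1) / (x**2).sum()
    return slope + 2.0   # singularity index alpha (alpha < 2: enrichment)
```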
NASA Astrophysics Data System (ADS)
Wills, John M.; Mattsson, Ann E.
2012-02-01
Density functional theory (DFT) provides a formally predictive base for equation of state properties. Available approximations to the exchange/correlation functional provide accurate predictions for many materials in the periodic table. For heavy materials however, DFT calculations, using available functionals, fail to provide quantitative predictions, and often fail to be even qualitative. This deficiency is due both to the lack of the appropriate confinement physics in the exchange/correlation functional and to approximations used to evaluate the underlying equations. In order to assess and develop accurate functionals, it is essential to eliminate all other sources of error. In this talk we describe an efficient first-principles electronic structure method based on the Dirac equation and compare the results obtained with this method with other methods generally used. Implications for high-pressure equation of state of relativistic materials are demonstrated in application to Ce and the light actinides. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
Novel Hyperspectral Anomaly Detection Methods Based on Unsupervised Nearest Regularized Subspace
NASA Astrophysics Data System (ADS)
Hou, Z.; Chen, Y.; Tan, K.; Du, P.
2018-04-01
Anomaly detection has been of great interest in hyperspectral imagery analysis. Most conventional anomaly detectors merely take advantage of spectral and spatial information within neighboring pixels. In this paper, two methods, the Unsupervised Nearest Regularized Subspace-based with Outlier Removal Anomaly Detector (UNRSORAD) and the Local Summation UNRSORAD (LSUNRSORAD), are proposed, based on the concept that each pixel in the background can be approximately represented by its spatial neighborhood, while anomalies cannot. Using a dual window, an approximation of each test pixel is a representation of the surrounding data via a linear combination. The existence of outliers in the dual window will affect detection accuracy, so the proposed detectors remove outlier pixels that are significantly different from the majority of pixels. In order to make full use of the various local spatial distributions in the neighborhood of the pixel under test, we adopt a local summation dual-window sliding strategy. The residual image is constituted by subtracting the predicted background from the original hyperspectral imagery, and anomalies can be detected in the residual image. Experimental results show that the proposed methods greatly improve detection accuracy compared with other traditional detection methods.
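The regularized-subspace scoring step can be sketched as a ridge regression onto dual-window background pixels; the outlier removal and local summation strategies of the paper are omitted.

```python
# Regularized-subspace anomaly score: residual of representing the test
# pixel as a linear combination of outer-window background pixels.
import numpy as np

def nrs_score(y, background, lam=1e-2):
    """y: (bands,) test pixel; background: (bands, n) outer-window pixels;
    lam: Tikhonov regularization strength (illustrative default)."""
    X = background
    # Ridge-regularised weights: w = (X^T X + lam I)^(-1) X^T y
    w = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)
    return np.linalg.norm(y - X @ w)   # large residual flags an anomaly
```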
DendroBLAST: approximate phylogenetic trees in the absence of multiple sequence alignments.
Kelly, Steven; Maini, Philip K
2013-01-01
The rapidly growing availability of genome information has created considerable demand for both fast and accurate phylogenetic inference algorithms. We present a novel method called DendroBLAST for reconstructing phylogenetic dendrograms/trees from protein sequences using BLAST. This method differs from other methods by incorporating a simple model of sequence evolution to test the effect of introducing sequence changes on the reliability of the bipartitions in the inferred tree. Using realistic simulated sequence data we demonstrate that this method produces phylogenetic trees that are more accurate than other commonly used distance-based methods, though not as accurate as maximum likelihood methods from good-quality multiple sequence alignments. In addition to tests on simulated data, we use DendroBLAST to generate input trees for a supertree reconstruction of the phylogeny of the Archaea. This independent analysis produces an approximate phylogeny of the Archaea that has both high precision and recall when compared to previously published analyses of the same dataset using conventional methods. Taken together, these results demonstrate that approximate phylogenetic trees can be produced in the absence of multiple sequence alignments, and we propose that these trees will provide a platform for improving and informing downstream bioinformatic analysis. A web implementation of the DendroBLAST method is freely available for use at http://www.dendroblast.com/.
NASA Astrophysics Data System (ADS)
Douglas, Michael R.; Karp, Robert L.; Lukic, Sergio; Reinbacher, René
2008-03-01
We develop numerical methods for approximating Ricci flat metrics on Calabi-Yau hypersurfaces in projective spaces. Our approach is based on finding balanced metrics and builds on recent theoretical work by Donaldson. We illustrate our methods in detail for a one parameter family of quintics. We also suggest several ways to extend our results.
Creating IRT-Based Parallel Test Forms Using the Genetic Algorithm Method
ERIC Educational Resources Information Center
Sun, Koun-Tem; Chen, Yu-Jen; Tsai, Shu-Yen; Cheng, Chien-Fen
2008-01-01
In educational measurement, the construction of parallel test forms is often a combinatorial optimization problem that involves the time-consuming selection of items to construct tests having approximately the same test information functions (TIFs) and constraints. This article proposes a novel method, genetic algorithm (GA), to construct parallel…
A Simple Estimation Method for Aggregate Government Outsourcing
ERIC Educational Resources Information Center
Minicucci, Stephen; Donahue, John D.
2004-01-01
The scholarly and popular debate on the delegation to the private sector of governmental tasks rests on an inadequate empirical foundation, as no systematic data are collected on direct versus indirect service delivery. We offer a simple method for approximating levels of service outsourcing, based on relatively straightforward combinations of and…
NASA Technical Reports Server (NTRS)
Banks, H. T.; Rosen, I. G.
1985-01-01
An approximation scheme is developed for the identification of hybrid systems describing the transverse vibrations of flexible beams with attached tip bodies. In particular, problems involving the estimation of functional parameters are considered. The identification problem is formulated as a least squares fit to data subject to the coupled system of partial and ordinary differential equations describing the transverse displacement of the beam and the motion of the tip bodies respectively. A cubic spline-based Galerkin method applied to the state equations in weak form and the discretization of the admissible parameter space yield a sequence of approximating finite dimensional identification problems. It is shown that each of the approximating problems admits a solution and that from the resulting sequence of optimal solutions a convergent subsequence can be extracted, the limit of which is a solution to the original identification problem. The approximating identification problems can be solved using standard techniques and readily available software.
Approach for Uncertainty Propagation and Robust Design in CFD Using Sensitivity Derivatives
NASA Technical Reports Server (NTRS)
Putko, Michele M.; Newman, Perry A.; Taylor, Arthur C., III; Green, Lawrence L.
2001-01-01
This paper presents an implementation of the approximate statistical moment method for uncertainty propagation and robust optimization for a quasi 1-D Euler CFD (computational fluid dynamics) code. Given uncertainties in statistically independent, random, normally distributed input variables, a first- and second-order statistical moment matching procedure is performed to approximate the uncertainty in the CFD output. Efficient calculation of both first- and second-order sensitivity derivatives is required. In order to assess the validity of the approximations, the moments are compared with statistical moments generated through Monte Carlo simulations. The uncertainties in the CFD input variables are also incorporated into a robust optimization procedure. For this optimization, statistical moments involving first-order sensitivity derivatives appear in the objective function and system constraints. Second-order sensitivity derivatives are used in a gradient-based search to successfully execute a robust optimization. The approximate methods used throughout the analyses are found to be valid when considering robustness about input parameter mean values.
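The first-order moment-matching step can be sketched for a generic output function; here an analytic stand-in replaces the CFD code and its sensitivity derivatives, and the result is checked against Monte Carlo as in the paper.

```python
# First-order moment matching for independent normal inputs.
import numpy as np

f  = lambda x: x[0] ** 2 + np.sin(x[1])            # stand-in for CFD output
df = lambda x: np.array([2 * x[0], np.cos(x[1])])  # sensitivity derivatives

mu    = np.array([1.0, 0.5])    # input means
sigma = np.array([0.05, 0.1])   # input standard deviations

mean_approx = f(mu)                           # first-order mean
var_approx  = np.sum((df(mu) * sigma) ** 2)   # first-order variance

# Monte Carlo check of the approximation.
rng = np.random.default_rng(3)
samples = rng.normal(mu, sigma, size=(200_000, 2))
mc = np.apply_along_axis(f, 1, samples)
print(mean_approx, mc.mean(), np.sqrt(var_approx), mc.std())
```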
Neural Network Assisted Inverse Dynamic Guidance for Terminally Constrained Entry Flight
Chen, Wanchun
2014-01-01
This paper presents a neural-network-assisted entry guidance law that is designed by applying Bézier approximation. It is shown that a fully constrained approximation of a reference trajectory can be made by using a Bézier curve. Using this approximation, an inverse dynamic system for entry flight is solved to generate the guidance command. The guidance solution thus obtained ensures the terminal constraints on position, flight path, and azimuth angle. In order to ensure the terminal velocity constraint, a prediction of the terminal velocity is required, based on which the approximated Bézier curve is adjusted. An artificial neural network is used for this prediction of the terminal velocity. The method enables faster implementation in achieving fully constrained entry flight. Results from simulations indicate improved performance of the neural-network-assisted method. The scheme is expected to have prospects for further research on automated onboard control of terminal velocity for both reentry and terminal guidance laws. PMID:24723821
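The endpoint-constrained Bézier idea can be sketched for a cubic curve, where the inner control points pin the flight-path angles at both ends; all numbers below are illustrative assumptions.

```python
# Cubic Bezier altitude-vs-range profile with position and path-angle
# constraints at both endpoints.
import numpy as np

def cubic_bezier(p0, p1, p2, p3, t):
    t = np.asarray(t)[:, None]
    return ((1 - t)**3 * p0 + 3 * (1 - t)**2 * t * p1
            + 3 * (1 - t) * t**2 * p2 + t**3 * p3)

# Initial/terminal positions (range km, altitude km) and path angles.
x0, xf = np.array([0.0, 40.0]), np.array([800.0, 3.0])
gam0, gamf = np.radians(-1.0), np.radians(-10.0)
L = 100.0   # tangent-length tuning parameter (free design choice)
# The curve tangent at t=0 is 3*(p1 - p0), so placing p1 along the
# initial path-angle direction enforces gamma at the entry point
# (and symmetrically p2 for the terminal point).
p1 = x0 + L * np.array([np.cos(gam0), np.sin(gam0)])
p2 = xf - L * np.array([np.cos(gamf), np.sin(gamf)])
traj = cubic_bezier(x0, p1, p2, xf, np.linspace(0, 1, 50))
```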
NASA Astrophysics Data System (ADS)
Gruy, Frédéric
2014-02-01
Depending on the size range and the refractive index value, an optically soft particle follows the Rayleigh-Debye-Gans (RDG) approximation or the Van de Hulst approximation. In practice, the first is valid for small particles whereas the second works for large particles. Klett and Sutherland (Klett JD, Sutherland RA. Appl. Opt. 1992;31:373) proved that the Wentzel-Kramers-Brillouin (WKB) approximation leads to accurate values of the differential scattering cross section of spheres and cylinders over a wide range of sizes. In this paper we extend the work of Klett and Sutherland by proposing a method allowing fast calculation of the differential scattering cross section for any particle shape with a given orientation, illuminated by unpolarized light. Our method is based on a geometrical approximation of the particle: each geometrical cross section is replaced by an ellipse, and the differential scattering cross section of the newly generated body is then evaluated exactly.
Aishima, Jun; Russel, Daniel S; Guibas, Leonidas J; Adams, Paul D; Brunger, Axel T
2005-10-01
Automatic fitting methods that build molecules into electron-density maps usually fail below 3.5 Å resolution. As a first step towards addressing this problem, an algorithm has been developed using an approximation of the medial axis to simplify an electron-density isosurface. This approximation captures the central axis of the isosurface with a graph which is then matched against a graph of the molecular model. One of the first applications of the medial axis to X-ray crystallography is presented here. When applied to ligand fitting, the method performs at least as well as methods based on selecting peaks in electron-density maps. Generalization of the method to recognition of common features across multiple contour levels could lead to powerful automatic fitting methods that perform well even at low resolution.
Novel harmonic regularization approach for variable selection in Cox's proportional hazards model.
Chu, Ge-Jin; Liang, Yong; Wang, Jia-Xuan
2014-01-01
Variable selection is an important issue in regression, and a number of variable selection methods involving nonconvex penalty functions have been proposed. In this paper, we investigate a novel harmonic regularization method, which can approximate nonconvex Lq (1/2 < q < 1) regularizations, to select key risk factors in the Cox's proportional hazards model using microarray gene expression data. The harmonic regularization method can be efficiently solved using our proposed direct path seeking approach, which can produce solutions that closely approximate those for the convex loss function and the nonconvex regularization. Simulation results based on artificial datasets and four real microarray gene expression datasets, such as the diffuse large B-cell lymphoma (DLBCL), lung cancer, and AML datasets, show that the harmonic regularization method can be more accurate for variable selection than existing Lasso series methods.
Testing large aspheric surfaces with complementary annular subaperture interferometric method
NASA Astrophysics Data System (ADS)
Hou, Xi; Wu, Fan; Lei, Baiping; Fan, Bin; Chen, Qiang
2008-07-01
The annular subaperture interferometric method provides an alternative, low-cost and flexible solution for testing rotationally symmetric aspheric surfaces. However, new challenges, particularly in the motion and algorithm components, appear when it is applied to large aspheric surfaces with large departure in practical engineering. Building on our previously reported annular subaperture reconstruction algorithm based on Zernike annular polynomials and a matrix method, and on experimental results for an approximately 130-mm diameter, f/2 parabolic mirror, an experimental investigation testing an approximately 302-mm diameter, f/1.7 parabolic mirror with the complementary annular subaperture interferometric method is presented. We focus on full-aperture reconstruction accuracy and discuss some error effects and limitations of testing larger aspheric surfaces with the annular subaperture method. Some considerations on testing sector segments with complementary sector subapertures are provided.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miller, Gregory H.
2003-08-06
In this paper we present a general iterative method for the solution of the Riemann problem for hyperbolic systems of PDEs. The method is based on the multiple shooting method for free boundary value problems. We demonstrate the method by solving one-dimensional Riemann problems for hyperelastic solid mechanics. Even for conditions representative of routine laboratory conditions and military ballistics, dramatic differences are seen between the exact and approximate Riemann solutions. The greatest discrepancy arises from misallocation of energy between compressional and thermal modes by the approximate solver, resulting in nonphysical entropy and temperature estimates. Several pathological conditions arise in common practice, and modifications to the method to handle these are discussed. These include points where genuine nonlinearity is lost, degeneracies, and eigenvector deficiencies that occur upon melting.
NASA Astrophysics Data System (ADS)
AllahTavakoli, Yahya; Safari, Abdolreza; Vaníček, Petr
2016-12-01
This paper resurrects a version of Poisson's partial differential equation (PDE) associated with the gravitational field at the Earth's surface and illustrates how the PDE can extract the mass density of the Earth's topography from land-based gravity data. We first propound a theorem which mathematically introduces this version of Poisson's PDE adapted to the Earth's surface, and we then use the PDE to develop a method for approximating the terrain mass density. We also carry out a real case study showing how the proposed approach can be applied to a set of land-based gravity data. In the case study, the method is summarized by an algorithm and applied to a set of gravity stations located along a part of the north coast of the Persian Gulf in southern Iran. The results were numerically validated against rock samples as well as a geological map, and the method was compared with two conventional methods of mass density reduction. The numerical experiments indicate that the Poisson PDE at the Earth's surface can extract the mass density from land-based gravity data and provides an alternative, somewhat more precise method of estimating the terrain mass density.
Bardhan, Jaydeep P
2008-10-14
The importance of molecular electrostatic interactions in aqueous solution has motivated extensive research into physical models and numerical methods for their estimation. The computational costs associated with simulations that include many explicit water molecules have driven the development of implicit-solvent models, with generalized-Born (GB) models among the most popular of these. In this paper, we analyze a boundary-integral equation interpretation for the Coulomb-field approximation (CFA), which plays a central role in most GB models. This interpretation offers new insights into the nature of the CFA, which traditionally has been assessed using only a single point charge in the solute. The boundary-integral interpretation of the CFA allows the use of multiple point charges, or even continuous charge distributions, leading naturally to methods that eliminate the interpolation inaccuracies associated with the Still equation. This approach, which we call boundary-integral-based electrostatic estimation by the CFA (BIBEE/CFA), is most accurate when the molecular charge distribution generates a smooth normal displacement field at the solute-solvent boundary, and CFA-based GB methods perform similarly. Conversely, both methods are least accurate for charge distributions that give rise to rapidly varying or highly localized normal displacement fields. Supporting this analysis are comparisons of the reaction-potential matrices calculated using GB methods and boundary-element-method (BEM) simulations. An approximation similar to BIBEE/CFA exhibits complementary behavior, with superior accuracy for charge distributions that generate rapidly varying normal fields and poorer accuracy for distributions that produce smooth fields. This approximation, BIBEE by preconditioning (BIBEE/P), essentially generates initial guesses for preconditioned Krylov-subspace iterative BEMs. Thus, iterative refinement of the BIBEE/P results recovers the BEM solution; excellent agreement is obtained in only a few iterations. The boundary-integral-equation framework may also provide a means to derive rigorous results explaining how the empirical correction terms in many modern GB models significantly improve accuracy despite their simple analytical forms.
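The Still interpolation formula that CFA-based GB models rely on is compact enough to sketch. Below is a minimal illustration of the pairwise generalized-Born solvation energy, assuming the effective Born radii R have already been computed (e.g., by a Coulomb-field integral); it is not the authors' BIBEE code.

```python
import numpy as np

def gb_energy_still(q, pos, R, eps_in=1.0, eps_out=80.0):
    """Generalized-Born solvation energy via the Still et al. pairwise formula.

    q   : (N,) partial charges
    pos : (N, 3) coordinates
    R   : (N,) effective Born radii (assumed precomputed, e.g. by the CFA)
    Returns energy in units of e^2/length; multiply by the appropriate
    Coulomb constant for physical units.
    """
    d2 = np.sum((pos[:, None, :] - pos[None, :, :]) ** 2, axis=-1)
    RiRj = np.outer(R, R)
    f_gb = np.sqrt(d2 + RiRj * np.exp(-d2 / (4.0 * RiRj)))
    tau = 1.0 / eps_in - 1.0 / eps_out
    return -0.5 * tau * np.sum(np.outer(q, q) / f_gb)
```

The diagonal terms (d2 = 0) reduce to the Born self-energies -tau q_i^2 / (2 R_i), which is what makes the single-formula interpolation, and its inaccuracies, possible.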
Approximate kernel competitive learning.
Wu, Jian-Sheng; Zheng, Wei-Shi; Lai, Jian-Huang
2015-03-01
Kernel competitive learning has been successfully used to achieve robust clustering. However, kernel competitive learning (KCL) is not scalable for large-scale data processing, because (1) it has to calculate and store the full kernel matrix, which is too large to be calculated and kept in memory, and (2) it cannot be computed in parallel. In this paper we develop a framework of approximate kernel competitive learning for processing large-scale datasets. The proposed framework consists of two parts. First, it derives an approximate kernel competitive learning (AKCL) method, which learns kernel competitive learning in a subspace via sampling. We provide solid theoretical analysis on why the proposed approximation model works for kernel competitive learning, and furthermore we show that the computational complexity of AKCL is largely reduced. Second, we propose a pseudo-parallelled approximate kernel competitive learning (PAKCL) method based on a set-based kernel competitive learning strategy, which overcomes the obstacle of using parallel programming in kernel competitive learning and significantly accelerates the approximate kernel competitive learning for large-scale clustering. Empirical evaluation on publicly available datasets shows that the proposed AKCL and PAKCL perform comparably to KCL, with a large reduction in computational cost. The proposed methods also achieve more effective clustering performance in terms of clustering precision than related approximate clustering approaches.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, Fuke, E-mail: wufuke@mail.hust.edu.cn; Tian, Tianhai, E-mail: tianhai.tian@sci.monash.edu.au; Rawlings, James B., E-mail: james.rawlings@wisc.edu
The frequently used reduction technique is based on the chemical master equation for stochastic chemical kinetics with two time scales, which yields the modified stochastic simulation algorithm (SSA). For chemical reaction processes involving a large number of molecular species and reactions, the collection of slow reactions may still include a large number of molecular species and reactions, so the SSA remains computationally expensive. Because the chemical Langevin equations (CLEs) can effectively handle a large number of molecular species and reactions, this paper develops a reduction method based on the CLE using the stochastic averaging principle developed in the work of Khasminskii and Yin [SIAM J. Appl. Math. 56, 1766–1793 (1996); ibid. 56, 1794–1819 (1996)] to average out the fast-reacting variables. This reduction method leads to a limit averaging system, which is an approximation of the slow reactions. Because in stochastic chemical kinetics the CLE is seen as an approximation of the SSA, the limit averaging system can be treated as an approximation of the slow reactions. As an application, we examine the reduction of computational complexity for gene regulatory networks with two time scales driven by intrinsic noise. For linear and nonlinear protein production functions, simulations show that the sample average (expectation) of the limit averaging system is close to that of the slow-reaction process based on the SSA. This demonstrates that the limit averaging system is an efficient approximation of the slow-reaction process in the sense of weak convergence.
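For readers unfamiliar with the baseline algorithm being reduced, a plain Gillespie SSA is short enough to sketch. The toy fast-slow network below (fast mRNA, slow protein, with hypothetical rate constants) is my own illustration of the two-time-scale setting, not one of the networks studied in the paper.

```python
import numpy as np

def gillespie_ssa(x0, stoich, propensity, t_end, rng=None):
    """Plain SSA: propensity(x) returns the vector a(x) of reaction rates."""
    rng = np.random.default_rng() if rng is None else rng
    t, x = 0.0, np.asarray(x0, dtype=float)
    ts, xs = [t], [x.copy()]
    while t < t_end:
        a = propensity(x)
        a0 = a.sum()
        if a0 <= 0.0:
            break                                  # no reaction can fire
        t += rng.exponential(1.0 / a0)             # time to next event
        x = x + stoich[rng.choice(len(a), p=a / a0)]
        ts.append(t); xs.append(x.copy())
    return np.array(ts), np.array(xs)

# Fast mRNA (M), slow protein (P): birth/death of M is ~100x faster than of P.
stoich = np.array([[+1, 0], [-1, 0], [0, +1], [0, -1]])
prop = lambda x: np.array([100.0, 10.0 * x[0], 1.0 * x[0], 0.1 * x[1]])
ts, xs = gillespie_ssa([0, 0], stoich, prop, t_end=50.0)
```

Averaging methods such as the one in the paper replace the fast pair of reactions by their quasi-stationary statistics, so only the slow protein dynamics need be simulated.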
Structural optimization: Status and promise
NASA Astrophysics Data System (ADS)
Kamat, Manohar P.
Chapters contained in this book include fundamental concepts of optimum design, mathematical programming methods for constrained optimization, function approximations, approximate reanalysis methods, dual mathematical programming methods for constrained optimization, a generalized optimality criteria method, and a tutorial and survey of multicriteria optimization in engineering. Also included are chapters on the compromise decision support problem and the adaptive linear programming algorithm, sensitivity analyses of discrete and distributed systems, the design sensitivity analysis of nonlinear structures, optimization by decomposition, mixed elements in shape sensitivity analysis of structures based on local criteria, and optimization of stiffened cylindrical shells subjected to destabilizing loads. Other chapters are on applications to fixed-wing aircraft and spacecraft, integrated optimum structural and control design, modeling concurrency in the design of composite structures, and tools for structural optimization. (No individual items are abstracted in this volume)
Exact exchange-correlation potentials of singlet two-electron systems
NASA Astrophysics Data System (ADS)
Ryabinkin, Ilya G.; Ospadov, Egor; Staroverov, Viktor N.
2017-10-01
We suggest a non-iterative analytic method for constructing the exchange-correlation potential, v_XC(r), of any singlet ground-state two-electron system. The method is based on a convenient formula for v_XC(r) in terms of quantities determined only by the system's electronic wave function, exact or approximate, and is essentially different from the Kohn-Sham inversion technique. When applied to Gaussian-basis-set wave functions, the method yields finite-basis-set approximations to the corresponding basis-set-limit v_XC(r), whereas the Kohn-Sham inversion produces physically inappropriate (oscillatory and divergent) potentials. The effectiveness of the procedure is demonstrated by computing accurate exchange-correlation potentials of several two-electron systems (the helium isoelectronic series, H2, H3+) using common ab initio methods and Gaussian basis sets.
The FLAME-slab method for electromagnetic wave scattering in aperiodic slabs
NASA Astrophysics Data System (ADS)
Mansha, Shampy; Tsukerman, Igor; Chong, Y. D.
2017-12-01
The proposed numerical method, "FLAME-slab," solves electromagnetic wave scattering problems for aperiodic slab structures by exploiting short-range regularities in these structures. The computational procedure involves special difference schemes with high accuracy even on coarse grids. These schemes are based on Trefftz approximations, utilizing functions that locally satisfy the governing differential equations, as is done in the Flexible Local Approximation Method (FLAME). Radiation boundary conditions are implemented via Fourier expansions in the air surrounding the slab. When applied to ensembles of slab structures with identical short-range features, such as amorphous or quasicrystalline lattices, the method is significantly more efficient, both in runtime and in memory consumption, than traditional approaches. This efficiency is due to the fact that the Trefftz functions need to be computed only once for the whole ensemble.
NASA Astrophysics Data System (ADS)
Elgohary, T.; Kim, D.; Turner, J.; Junkins, J.
2014-09-01
Several methods exist for integrating the motion in high-order gravity fields. Some recent methods use an approximate starting orbit, so an efficient method is needed for generating warm starts that account for specific low-order gravity approximations. By introducing two scalar Lagrange-like invariants and employing the Leibniz product rule, the perturbed motion is integrated by a novel recursive formulation; the Lagrange-like invariants allow exact time derivatives of arbitrary order. Restricting attention to the perturbations due to the zonal harmonics J2 through J6, we illustrate the approach. The recursively generated vector-valued time derivatives of the trajectory are used to develop a continuation series-based solution for propagating position and velocity. Numerical comparisons indicate performance improvements of ~70X over existing explicit Runge-Kutta methods while maintaining mm accuracy for the orbit predictions. The Modified Chebyshev Picard Iteration (MCPI) is an iterative path approximation method for solving nonlinear ordinary differential equations. MCPI utilizes Picard iteration with orthogonal Chebyshev polynomial basis functions to recursively update the states. The key advantages of MCPI are: (1) large segments of a trajectory can be approximated by evaluating the forcing function at multiple nodes along the current approximation during each iteration; (2) it can readily handle general gravity perturbations as well as non-conservative forces; (3) parallel applications are possible. The Picard sequence converges to the solution over large time intervals when the forces are continuous and differentiable. Depending on the accuracy of the starting solution, however, MCPI may require a significant number of iterations and function evaluations compared to other integrators. In this work, we provide an efficient methodology for establishing good starting solutions from the continuation series method; this warm start improves the performance of MCPI significantly and is likely to be useful for other applications where efficiently computed approximate orbit solutions are needed.
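The Picard-Chebyshev update at the heart of MCPI fits the forcing function with a Chebyshev series and integrates it analytically. The scalar sketch below follows that recipe using numpy's Chebyshev utilities; the warm-start hook x_init stands in for the continuation-series starting solution described above, and the node/iteration counts are arbitrary.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def mcpi(f, x0, t0, tf, deg=64, iters=40, x_init=None):
    """Modified Chebyshev Picard Iteration for a scalar ODE x' = f(t, x)."""
    s = C.chebpts1(deg + 1)                        # Chebyshev nodes on [-1, 1]
    t = 0.5 * (tf - t0) * (s + 1.0) + t0           # mapped to [t0, tf]
    x = np.full_like(t, x0) if x_init is None else x_init(t)
    for _ in range(iters):
        coef = C.chebfit(s, f(t, x), deg)          # fit the forcing function
        icoef = 0.5 * (tf - t0) * C.chebint(coef)  # antiderivative + chain rule
        x = x0 + C.chebval(s, icoef) - C.chebval(-1.0, icoef)
    return t, x

# Example: x' = -x, x(0) = 1, whose exact solution is exp(-t).
t, x = mcpi(lambda t, x: -x, 1.0, 0.0, 2.0)
print(np.max(np.abs(x - np.exp(-t))))              # small residual after 40 sweeps
```

A warm start simply replaces the constant initial trajectory with a better guess, which is exactly where the continuation-series solution plugs in.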
Sample size for post-marketing safety studies based on historical controls.
Wu, Yu-te; Makuch, Robert W
2010-08-01
As part of a drug's entire life cycle, post-marketing studies are an important part of the identification of rare, serious adverse events. Recently, the US Food and Drug Administration (FDA) has begun to implement new post-marketing safety mandates as a consequence of increased emphasis on safety. The purpose of this research is to provide an exact sample size formula for the proposed hybrid design, based on a two-group cohort study with incorporation of historical external data. An exact sample size formula based on the Poisson distribution is developed, because the detection of rare events is the outcome of interest. Performance of the exact method is compared to its approximate large-sample-theory counterpart. The proposed hybrid design requires a smaller sample size compared to the standard two-group prospective study design. In addition, the exact method reduces the number of subjects required in the treatment group by up to 30% compared to the approximate method for the study scenarios examined.
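The exact calculation the abstract refers to can be illustrated with a one-arm version of the problem: choose the smallest cohort size whose exact Poisson test attains the target power. The hybrid design's handling of historical-control uncertainty is not reproduced here, and the rates below are hypothetical.

```python
from scipy.stats import poisson

def exact_poisson_sample_size(lam0, lam1, alpha=0.05, power=0.8, n_max=200000):
    """Smallest n for a one-sided exact test of H0: rate = lam0 vs lam1 > lam0.

    lam0, lam1: events per subject (e.g., per person-year of follow-up).
    Returns (n, critical count): reject H0 when the observed count reaches it.
    """
    for n in range(1, n_max):
        mu0, mu1 = n * lam0, n * lam1
        k = poisson.isf(alpha, mu0)        # exact size <= alpha when rejecting X > k
        if poisson.sf(k, mu1) >= power:    # exact power under the alternative
            return n, int(k) + 1
    raise ValueError("increase n_max")

print(exact_poisson_sample_size(lam0=0.001, lam1=0.003))
```

Because the Poisson tail probabilities are evaluated exactly rather than through a normal approximation, the returned n can be noticeably smaller for rare events, which is the effect quantified in the paper.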
Active magnetic refrigerants based on Gd-Si-Ge material and refrigeration apparatus and process
Gschneidner, Jr., Karl A.; Pecharsky, Vitalij K.
1998-04-28
Active magnetic regenerator and method using Gd5(SixGe1-x)4, where x is equal to or less than 0.5, as a magnetic refrigerant that exhibits a reversible ferromagnetic/antiferromagnetic or ferromagnetic-II/ferromagnetic-I first-order phase transition and extraordinary magneto-thermal properties, such as a giant magnetocaloric effect, that render the refrigerant more efficient and useful than existing magnetic refrigerants for commercialization of magnetic regenerators. The reversible first-order phase transition is tunable from approximately 30 K to approximately 290 K (near room temperature) and above by compositional adjustments. The active magnetic regenerator and method can function for refrigerating, air conditioning, and liquefying low-temperature cryogens with significantly improved efficiency and an operating temperature range from approximately 10 K to 300 K and above. Also an active magnetic regenerator and method using Gd5(SixGe1-x)4, where x is equal to or greater than 0.5, as a magnetic heater/refrigerant that exhibits a reversible ferromagnetic/paramagnetic second-order phase transition with large magneto-thermal properties, such as a large magnetocaloric effect, that permits the commercialization of a magnetic heat pump and/or refrigerant. This second-order phase transition is tunable from approximately 280 K (near room temperature) to approximately 350 K by compositional adjustments. The active magnetic regenerator and method can function for low-level heating for climate control of buildings, homes, and automobiles, and for chemical processing.
NASA Astrophysics Data System (ADS)
Ogorodnikov, Yuri; Khachay, Michael; Pljonkin, Anton
2018-04-01
We describe the possibility of employing a special case of the 3-SAT problem, stemming from the well-known integer factorization problem, for quantum cryptography. It is known that for every instance of our 3-SAT setting the given 3-CNF is satisfiable by a unique truth assignment, and the goal is to find this assignment. Since the complexity status of the factorization problem is still undefined, the development of approximation algorithms and heuristics attracts the interest of numerous researchers. One promising approach to constructing approximation techniques is based on a real-valued relaxation of the given 3-CNF, followed by minimization of an appropriate differentiable loss function and subsequent rounding of the fractional minimizer obtained. Algorithms developed this way differ in the rounding scheme applied at their final stage. We propose a new rounding scheme based on Bayesian learning. The article shows that the proposed method can be used to assess the security of quantum key distribution systems, where Shannon's rules apply and the factorization problem is paramount when decrypting secret keys.
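The relaxation-plus-rounding pipeline can be sketched in a few lines: relax each Boolean variable to a probability via a sigmoid, descend a differentiable loss that penalizes falsified clauses, and round. The gradient step below is generic; the plain 0.5 threshold at the end is only a stand-in for the Bayesian-learning rounding scheme the paper proposes.

```python
import numpy as np

def relax_3sat(clauses, n_vars, steps=2000, lr=0.2, seed=0):
    """clauses: triples of signed 1-based literals, e.g. (1, -2, 3).

    Loss = sum over clauses of the product of 'literal is false'
    probabilities, assuming independence; zero loss means satisfied.
    """
    rng = np.random.default_rng(seed)
    u = rng.normal(size=n_vars)
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-u))               # relaxed truth values
        grad = np.zeros(n_vars)
        for clause in clauses:
            miss = np.array([1.0 - p[abs(l) - 1] if l > 0 else p[abs(l) - 1]
                             for l in clause])     # P(literal false)
            f = miss.prod()                        # P(clause falsified)
            for l, m in zip(clause, miss):
                i = abs(l) - 1
                sign = -1.0 if l > 0 else 1.0      # d miss_i / d p_i
                grad[i] += sign * (f / max(m, 1e-12)) * p[i] * (1.0 - p[i])
        u -= lr * grad
    p = 1.0 / (1.0 + np.exp(-u))
    return p > 0.5                                 # naive rounding (placeholder)

print(relax_3sat([(1, 2, 3), (-1, 2, -3), (1, -2, 3)], n_vars=3))
```

For the factorization-derived instances described above, the unique satisfying assignment makes the rounding stage decisive, which is why the paper focuses its contribution there.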
NASA Astrophysics Data System (ADS)
Lv, Yongfeng; Na, Jing; Yang, Qinmin; Wu, Xing; Guo, Yu
2016-01-01
An online adaptive optimal control is proposed for continuous-time nonlinear systems with completely unknown dynamics, achieved by developing a novel identifier-critic-based approximate dynamic programming algorithm with a dual neural network (NN) approximation structure. First, an adaptive NN identifier is designed to obviate the requirement of complete knowledge of the system dynamics, and a critic NN is employed to approximate the optimal value function. Then, the optimal control law is computed based on the information from the identifier NN and the critic NN, so that the actor NN is not needed. In particular, a novel adaptive law design method using the parameter estimation error is proposed to update the weights of both the identifier NN and the critic NN online and simultaneously; the weights converge to small neighbourhoods around their ideal values. The closed-loop system stability and the convergence to a small vicinity of the optimal solution are proved by means of Lyapunov theory. The proposed adaptation algorithm is further improved to achieve finite-time convergence of the NN weights. Finally, simulation results are provided to exemplify the efficacy of the proposed methods.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Feng, Hao; Ashkar, Rana; Steinke, Nina
A method dubbed grating-based holography was recently used to determine the structure of colloidal fluids in the rectangular grooves of a diffraction grating from X-ray scattering measurements. Similar grating-based measurements have also been made recently with neutrons using a technique called spin-echo small-angle neutron scattering. The analysis of the X-ray diffraction data was done using an approximation that treats the X-ray phase change caused by the colloidal structure as a small perturbation to the overall phase pattern generated by the grating. In this paper, the adequacy of this weak phase approximation is explored for both X-ray and neutron grating holography. Additionally, it is found that there are several approximations hidden within the weak phase approximation that can lead to incorrect conclusions from experiments. In particular, the phase contrast for the empty grating is a critical parameter. Finally, while the approximation is found to be perfectly adequate for the X-ray grating holography experiments performed to date, it cannot be applied to similar neutron experiments because the latter technique requires much deeper grating channels.
NASA Technical Reports Server (NTRS)
Bardina, J. E.
1994-01-01
A new computationally efficient 3-D compressible Reynolds-averaged implicit Navier-Stokes method with advanced two-equation turbulence models for high-speed flows is presented. All convective terms are modeled using an entropy-satisfying higher-order Total Variation Diminishing (TVD) scheme based on implicit upwind flux-difference split approximations and an arithmetic averaging procedure for primitive variables. This method combines the best features of the data management and computational efficiency of space-marching procedures with the generality and stability of time-dependent Navier-Stokes procedures to solve flows with mixed supersonic and subsonic zones, including streamwise-separated flows. Its robust stability derives from a combination of conservative implicit upwind flux-difference splitting with Roe's property U to provide accurate shock-capturing capability that non-conservative schemes do not guarantee, an alternating symmetric Gauss-Seidel 'method of planes' relaxation procedure coupled with a three-dimensional two-factor diagonal-dominant approximate factorization scheme, TVD flux limiters of higher-order flux differences satisfying realizability, and well-posed characteristic-based implicit boundary-point approximations consistent with the local characteristic domain of dependence. The efficiency of the method is greatly increased by Newton-Raphson acceleration, which allows convergence in essentially one forward sweep for supersonic flows. The method is verified by comparison with experiment and other Navier-Stokes methods. Here, results for adiabatic and cooled flat-plate flows, compression corner flow, and 3-D hypersonic shock-wave/turbulent boundary layer interaction flows are presented. The robust 3-D method achieves a computational efficiency at least one order of magnitude better than the CNS Navier-Stokes code. It provides cost-effective aerodynamic predictions in agreement with experiment, and the capability of predicting complex flow structures in complex geometries with good accuracy.
Prior-knowledge-based feedforward network simulation of true boiling point curve of crude oil.
Chen, C W; Chen, D Z
2001-11-01
Theoretical results and practical experience indicate that feedforward networks can approximate a wide class of functional relationships very well. This property is exploited in modeling chemical processes. Given finite and noisy training data, it is important to encode prior knowledge in neural networks to improve the fit precision and the prediction ability of the model. In this paper, for three-layer feedforward networks under a monotonicity constraint, the unconstrained method, Joerding's penalty function method, the interpolation method, and the constrained optimization method are analyzed first. Then two novel methods, the exponential weight method and the adaptive method, are proposed. These methods are applied to simulating the true boiling point curve of a crude oil under the condition of increasing monotonicity. The simulation results show that the network models trained by the novel methods approximate the actual process well. Finally, all these methods are discussed and compared with each other.
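The exponential weight method lends itself to a compact sketch: if every weight is parameterized as exp(u) > 0 and the activation is monotone, the network output is non-decreasing in its input by construction, so the monotonicity constraint never has to be enforced during training. The shapes below (scalar input and output) are illustrative, and this is my reading of the construction rather than the paper's exact formulation.

```python
import numpy as np

def monotone_mlp(x, u_in, u_out, b1, b2):
    """Three-layer network that is non-decreasing in scalar x by construction.

    u_in, u_out, b1 : (H,) unconstrained parameters; b2 : scalar bias.
    All effective weights exp(u) are positive, and tanh is increasing,
    so d(output)/dx >= 0 for every parameter setting.
    """
    h = np.tanh(np.exp(u_in) * x + b1)     # positive input-to-hidden weights
    return np.sum(np.exp(u_out) * h) + b2  # positive hidden-to-output weights

# Sanity check: outputs are sorted whenever the inputs are.
H = 8
rng = np.random.default_rng(1)
params = (rng.normal(size=H), rng.normal(size=H), rng.normal(size=H), 0.0)
ys = [monotone_mlp(x, *params) for x in np.linspace(0.0, 1.0, 50)]
assert all(a <= b for a, b in zip(ys, ys[1:]))
```

Ordinary unconstrained gradient descent on u then trains a model that stays monotonic, unlike penalty-based approaches, which only discourage violations.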
NASA Astrophysics Data System (ADS)
Bhrawy, A. H.; Zaky, M. A.
2015-01-01
In this paper, we propose and analyze an efficient operational formulation of the spectral tau method for multi-term time-space fractional differential equations with Dirichlet boundary conditions. The shifted Jacobi operational matrices of the Riemann-Liouville fractional integral and the left-sided and right-sided Caputo fractional derivatives are presented. Using these operational matrices, we propose a shifted Jacobi tau method for both temporal and spatial discretizations, which yields an efficient spectral method for solving such problems. Furthermore, the error is estimated, and the proposed method attains reasonable convergence rates in the spatial and temporal discretizations. In addition, some known spectral tau approximations can be derived as special cases of our algorithm by suitably choosing the corresponding special cases of the Jacobi parameters θ and ϑ. Finally, in order to demonstrate its accuracy, we compare our method with those reported in the literature.
Application of the variational-asymptotical method to composite plates
NASA Technical Reports Server (NTRS)
Hodges, Dewey H.; Lee, Bok W.; Atilgan, Ali R.
1992-01-01
A method is developed for the 3D analysis of laminated plate deformation which is an extension of a variational-asymptotical method by Atilgan and Hodges (1991). Both methods are based on the treatment of plate deformation by splitting the 3D analysis into linear through-the-thickness analysis and 2D plate analysis. Whereas the first technique tackles transverse shear deformation in the second asymptotical approximation, the present method simplifies its treatment and restricts it to the first approximation. Both analytical techniques are applied to the linear cylindrical bending problem, and the strain and stress distributions are derived and compared with those of the exact solution. The present theory provides more accurate results than those of the classical laminated-plate theory for the transverse displacement of 2-, 3-, and 4-layer cross-ply laminated plates. The method can give reliable estimates of the in-plane strain and displacement distributions.
Single image super-resolution based on approximated Heaviside functions and iterative refinement
Wang, Xin-Yu; Huang, Ting-Zhu; Deng, Liang-Jian
2018-01-01
One method of solving the single-image super-resolution problem is to use Heaviside functions. This has been done previously by making a binary classification of image components as “smooth” and “non-smooth”, describing these with approximated Heaviside functions (AHFs), and iterating with l1 regularization. We now introduce a new method in which the binary classification of image components is extended to different degrees of smoothness and non-smoothness, these components being represented by various classes of AHFs. Taking into account the sparsity of the non-smooth components, their coefficients are l1-regularized. In addition, to pick up more image details, the new method uses an iterative refinement for the residuals between the original low-resolution input and the downsampled resulting image. Experimental results showed that the new method is superior to the original AHF method and to four other published methods. PMID:29329298
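The building block here, an approximated Heaviside function, is easy to write down; one common smooth surrogate is 1/2 + arctan(x/τ)/π, where τ controls the degree of smoothness. The multi-τ dictionary below illustrates the idea of representing image components of different smoothness with different AHF classes; the specific surrogate and τ values are assumptions, not taken from the paper.

```python
import numpy as np

def ahf(x, tau):
    """Approximated Heaviside function; smaller tau gives a sharper step."""
    return 0.5 + np.arctan(x / tau) / np.pi

# Dictionary of AHF atoms: three smoothness classes x 32 shift positions.
grid = np.linspace(-1.0, 1.0, 256)
shifts = np.linspace(-1.0, 1.0, 32)
atoms = np.stack([ahf(grid - s, tau)
                  for tau in (0.5, 0.1, 0.02)    # smooth ... nearly binary
                  for s in shifts])
# A 1-D signal would then be approximated as a sparse (l1-regularized)
# combination of these atoms, mirroring the paper's decomposition.
```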
NASA Astrophysics Data System (ADS)
Jamróz, Weronika
2016-06-01
The paper shows how energy-based models approximate the mechanical properties of hyperelastic materials. The main goal of the research was to create a method for finding the set of material constants included in the strain energy function that constitutes the heart of an energy-based model. The optimal set of material constants gives the best fit of the theoretical stress-strain relation to the experimental one; such a fit enables better prediction of the behaviour of a chosen material. To obtain a more precise solution, the approximation was performed using data obtained in a modern experiment described in detail in [1]. To save computation time, the main algorithm is based on genetic algorithms.
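As a concrete stand-in for the procedure described above, the sketch below fits the two constants of a Mooney-Rivlin strain energy function to uniaxial stress-stretch data using an evolutionary optimizer. The material model, the measurements, and the use of scipy's differential evolution (in place of the paper's genetic algorithm) are all assumptions for illustration.

```python
import numpy as np
from scipy.optimize import differential_evolution

def mr_stress(lam, C1, C2):
    """Uniaxial nominal stress of a two-term Mooney-Rivlin model."""
    return 2.0 * (lam - lam ** -2) * (C1 + C2 / lam)

lam_exp = np.array([1.1, 1.5, 2.0, 3.0, 4.0])       # stretches (hypothetical)
sig_exp = np.array([0.32, 1.10, 1.90, 3.40, 5.20])  # MPa (hypothetical)

loss = lambda c: np.sum((mr_stress(lam_exp, *c) - sig_exp) ** 2)
result = differential_evolution(loss, bounds=[(0.0, 5.0), (0.0, 5.0)], seed=1)
C1, C2 = result.x   # the optimal set of material constants for this data
```

Population-based optimizers are a natural fit here because the loss surface for hyperelastic constants can have several shallow minima, where gradient methods stall.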
A unitary convolution approximation for the impact-parameter dependent electronic energy loss
NASA Astrophysics Data System (ADS)
Schiwietz, G.; Grande, P. L.
1999-06-01
In this work, we propose a simple method to calculate the impact-parameter dependence of the electronic energy loss of bare ions for all impact parameters. This perturbative convolution approximation (PCA) is based on first-order perturbation theory, and thus, it is only valid for fast particles with low projectile charges. Using Bloch's stopping-power result and a simple scaling, we get rid of the restriction to low charge states and derive the unitary convolution approximation (UCA). Results of the UCA are then compared with full quantum-mechanical coupled-channel calculations for the impact-parameter dependent electronic energy loss.
NASA Technical Reports Server (NTRS)
Goussis, D. A.; Lam, S. H.; Gnoffo, P. A.
1990-01-01
The Computational Singular Perturbation (CSP) method is employed (1) in the modeling of a homogeneous isothermal reacting system and (2) in the numerical simulation of the chemical reactions in a hypersonic flowfield. Reduced and simplified mechanisms are constructed. The solutions obtained on the basis of these approximate mechanisms are shown to be in very good agreement with the exact solution based on the full mechanism. Physically meaningful approximations are derived. It is demonstrated that the deduction of these approximations from CSP is independent of the complexity of the problem and requires no intuition or experience in chemical kinetics.
Reproduction of exact solutions of Lipkin model by nonlinear higher random-phase approximation
NASA Astrophysics Data System (ADS)
Terasaki, J.; Smetana, A.; Šimkovic, F.; Krivoruchenko, M. I.
2017-10-01
It is shown that the random-phase approximation (RPA) method with its nonlinear higher generalization, previously considered an approximation except in a very limited case, reproduces the exact solutions of the Lipkin model. The nonlinear higher RPA is based on an equation nonlinear in the eigenvectors and includes many-particle-many-hole components in the creation operator of the excited states. We demonstrate the exact character of the solutions analytically for particle number N = 2 and numerically for N = 8. This finding indicates that the nonlinear higher RPA is equivalent to the exact Schrödinger equation.
Atmospheric guidance law for planar skip trajectories
NASA Technical Reports Server (NTRS)
Mease, K. D.; Mccreary, F. A.
1985-01-01
The applicability of an approximate, closed-form, analytical solution to the equations of motion, as a basis for a deterministic guidance law for controlling the in-plane motion during a skip trajectory, is investigated. The derivation of the solution by the method of matched asymptotic expansions is discussed. Specific issues that arise in the application of the solution to skip trajectories are addressed. Based on the solution, an explicit formula for the approximate energy loss due to an atmospheric pass is derived. A guidance strategy is proposed that illustrates the use of the approximate solution. A numerical example shows encouraging performance.
Improving wavelet denoising based on an in-depth analysis of the camera color processing
NASA Astrophysics Data System (ADS)
Seybold, Tamara; Plichta, Mathias; Stechele, Walter
2015-02-01
While denoising is an extensively studied task in signal processing research, most denoising methods are designed and evaluated using readily processed image data, e.g. the well-known Kodak data set, with an additive white Gaussian noise (AWGN) model. This kind of test data does not correspond to real-world image data taken with today's digital cameras. Using such unrealistic data to test, optimize and compare denoising algorithms may lead to incorrect parameter tuning or suboptimal choices in research on real-time camera denoising algorithms. In this paper we derive a precise analysis of the noise characteristics for the different steps in the color processing. Based on real camera noise measurements and simulation of the processing steps, we obtain a good approximation of the noise characteristics. We further show how this approximation can be used in standard wavelet denoising methods, improving wavelet hard thresholding and bivariate thresholding based on our noise analysis results. Both the visual quality and objective quality metrics show the advantage of the proposed method. As the method is implemented using lookup tables that are calculated before the denoising step, it has very low computational complexity and can process HD video sequences in real time on an FPGA.
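A minimal sketch of the thresholding step: level- and subband-dependent noise estimates (playing the role of the paper's camera-derived lookup tables) drive wavelet hard thresholding. The wavelet, level count, and threshold multiplier below are arbitrary choices.

```python
import numpy as np
import pywt

def hard_threshold_denoise(img, sigma_map, wavelet="db4", levels=3, k=3.0):
    """Wavelet hard thresholding with a subband-dependent threshold.

    sigma_map(level, band) -> noise std for that subband; in the paper this
    role is played by lookup tables built from the camera noise analysis.
    """
    coeffs = pywt.wavedec2(img, wavelet, level=levels)
    out = [coeffs[0]]                              # keep the approximation band
    for lvl, bands in enumerate(coeffs[1:], start=1):
        out.append(tuple(
            pywt.threshold(c, k * sigma_map(lvl, band), mode="hard")
            for band, c in zip(("H", "V", "D"), bands)
        ))
    return pywt.waverec2(out, wavelet)

# Flat AWGN assumption for demonstration; a camera-aware sigma_map would
# vary with level, orientation, and signal intensity.
img = np.random.default_rng(0).normal(size=(128, 128))
den = hard_threshold_denoise(img, sigma_map=lambda lvl, band: 1.0)
```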
NASA Astrophysics Data System (ADS)
Li, Xuesong; Northrop, William F.
2016-04-01
This paper describes a quantitative approach to approximating multiple scattering through an isotropic turbid slab based on Markov chain theory. There is an increasing need to utilize multiple scattering for optical diagnostic purposes; however, existing methods are either inaccurate or computationally expensive. Here, we develop a novel Markov chain approximation approach to solve the multiple scattering angular distribution (AD) that can accurately calculate the AD while significantly reducing the computational cost compared to Monte Carlo simulation. We expect this work to stimulate ongoing multiple scattering research and deterministic reconstruction algorithm development with AD measurements.
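The core idea admits a compact sketch: discretize the scattering angle, build a transition kernel from the single-scattering phase function, and obtain the k-th scattering order by applying the kernel k times. The 1-D circular-angle state and the Henyey-Greenstein phase function below are simplifications of the slab geometry, intended only to show the Markov chain mechanics.

```python
import numpy as np

def multiple_scattering_ad(phase, n_orders, n_bins=720):
    """Angular distributions after 1..n_orders scattering events."""
    theta = np.linspace(-np.pi, np.pi, n_bins, endpoint=False)
    kernel = phase(theta)
    kernel /= kernel.sum()                         # normalize the deflection pdf
    idx = (np.arange(n_bins)[:, None] - np.arange(n_bins)[None, :]) % n_bins
    P = kernel[idx]                                # circulant transition matrix
    ad = np.zeros(n_bins)
    ad[n_bins // 2] = 1.0                          # unscattered beam at theta = 0
    orders = []
    for _ in range(n_orders):
        ad = P @ ad                                # one more scattering event
        orders.append(ad.copy())
    return theta, orders

# Henyey-Greenstein deflection profile with anisotropy g = 0.9.
g = 0.9
hg = lambda t: (1.0 - g * g) / (1.0 + g * g - 2.0 * g * np.cos(t)) ** 1.5
theta, orders = multiple_scattering_ad(hg, n_orders=5)
```

Because each order is a matrix-vector product rather than a batch of random walks, the distribution is obtained deterministically, which is the efficiency advantage claimed over Monte Carlo.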
Parametric instability analysis of truncated conical shells using the Haar wavelet method
NASA Astrophysics Data System (ADS)
Dai, Qiyi; Cao, Qingjie
2018-05-01
In this paper, the Haar wavelet method is employed to analyze the parametric instability of truncated conical shells under static and time-dependent periodic axial loads. The present work is based on the Love first-approximation theory for classical thin shells. The displacement field is expressed as a Haar wavelet series in the axial direction and trigonometric functions in the circumferential direction. The partial differential equations are then reduced to a system of coupled Mathieu-type ordinary differential equations describing the dynamic instability behavior of the shell. Using Bolotin's method, the first-order and second-order approximations of the principal instability regions are determined. The correctness of the present method is examined by comparing the results with those in the literature, and very good agreement is observed. The difference between the first-order and second-order approximations of the principal instability regions for tensile and compressive loads is also investigated. Finally, numerical results are presented to bring out the influences of various parameters such as static load factors, boundary conditions and shell geometrical characteristics on the domains of parametric instability of conical shells.
A hybrid continuous-discrete method for stochastic reaction-diffusion processes.
Lo, Wing-Cheong; Zheng, Likun; Nie, Qing
2016-09-01
Stochastic fluctuations in reaction-diffusion processes often have substantial effect on spatial and temporal dynamics of signal transductions in complex biological systems. One popular approach for simulating these processes is to divide the system into small spatial compartments assuming that molecules react only within the same compartment and jump between adjacent compartments driven by the diffusion. While the approach is convenient in terms of its implementation, its computational cost may become prohibitive when diffusive jumps occur significantly more frequently than reactions, as in the case of rapid diffusion. Here, we present a hybrid continuous-discrete method in which diffusion is simulated using continuous approximation while reactions are based on the Gillespie algorithm. Specifically, the diffusive jumps are approximated as continuous Gaussian random vectors with time-dependent means and covariances, allowing use of a large time step, even for rapid diffusion. By considering the correlation among diffusive jumps, the approximation is accurate for the second moment of the diffusion process. In addition, a criterion is obtained for identifying the region in which such diffusion approximation is required to enable adaptive calculations for better accuracy. Applications to a linear diffusion system and two nonlinear systems of morphogens demonstrate the effectiveness and benefits of the new hybrid method.
Wood, Warren W.; Sanford, Ward E.
1995-01-01
The High Plains aquifer underlying the semiarid Southern High Plains of Texas and New Mexico, USA, was used to illustrate solute and isotopic methods for evaluating recharge fluxes, runoff, and the spatial and temporal distribution of recharge. The chloride mass-balance method can provide, under certain conditions, a time-integrated technique for evaluating the recharge flux to regional aquifers that is independent of physical parameters. Applying this method to the High Plains aquifer of the Southern High Plains suggests that the recharge flux is approximately 2% of precipitation, or approximately 11 ± 2 mm/y, consistent with previous estimates based on a variety of physically based measurements. The method is useful because long-term average precipitation and chloride concentrations in rain and ground water have less uncertainty and are generally less expensive to acquire than the physically based parameters commonly used in analyzing recharge. The spatial and temporal distribution of recharge was evaluated using δ2H, δ18O, and tritium concentrations in both ground water and the unsaturated zone. The analyses suggest that nearly half of the recharge to the Southern High Plains occurs as piston flow through playa basin floors that occupy approximately 6% of the area, and that macropore recharge may be important in the remainder. Tritium and chloride concentrations in the unsaturated zone were used in a new equation developed to quantify runoff. Using this equation and data from a representative basin, runoff was found to be 24 ± 3 mm/y, in close agreement with values obtained from water-balance measurements on experimental watersheds in the area. Such geochemical estimates are possible because tritium is used to calculate a recharge flux that is independent of precipitation and runoff, whereas the recharge flux based on chloride concentration in the unsaturated zone depends on the amount of runoff. The difference between these two estimates yields the amount of runoff in the basin.
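The chloride mass-balance arithmetic is simple enough to show directly: at steady state the chloride delivered by precipitation equals the chloride carried down by recharge, so the recharge flux is precipitation scaled by the concentration ratio. The concentrations below are hypothetical values chosen to reproduce the ~2%-of-precipitation figure, not the paper's measurements.

```python
# Steady-state chloride mass balance:
#   recharge * Cl_groundwater = precipitation * Cl_precipitation
precip_mm_per_yr = 550.0    # hypothetical long-term mean precipitation
cl_precip_mg_per_L = 0.6    # hypothetical Cl in rain (incl. dry deposition)
cl_gw_mg_per_L = 30.0       # hypothetical Cl in ground water

recharge_mm_per_yr = precip_mm_per_yr * cl_precip_mg_per_L / cl_gw_mg_per_L
print(recharge_mm_per_yr, recharge_mm_per_yr / precip_mm_per_yr)  # 11.0 mm/y, 2%
```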
Model-Free Optimal Tracking Control via Critic-Only Q-Learning.
Luo, Biao; Liu, Derong; Huang, Tingwen; Wang, Ding
2016-10-01
Model-free control is an important and promising topic in control fields, which has attracted extensive attention in the past few years. In this paper, we aim to solve the model-free optimal tracking control problem of nonaffine nonlinear discrete-time systems. A critic-only Q-learning (CoQL) method is developed, which learns the optimal tracking control from real system data and thus avoids solving the tracking Hamilton-Jacobi-Bellman equation. First, the Q-learning algorithm is proposed based on the augmented system, and its convergence is established. Using only one neural network for approximating the Q-function, the CoQL method is developed to implement the Q-learning algorithm. Furthermore, the convergence of the CoQL method is proved with the consideration of the neural network approximation error. With the convergent Q-function obtained from the CoQL method, the adaptive optimal tracking control is designed based on the gradient descent scheme. Finally, the effectiveness of the developed CoQL method is demonstrated through simulation studies. The developed CoQL method learns with off-policy data and implements with a critic-only structure; it is thus easy to realize, and it overcomes the inadequate exploration problem.
Optimizing some 3-stage W-methods for the time integration of PDEs
NASA Astrophysics Data System (ADS)
Gonzalez-Pinto, S.; Hernandez-Abreu, D.; Perez-Rodriguez, S.
2017-07-01
The optimization of some W-methods for the time integration of time-dependent PDEs in several spatial variables is considered. In [2, Theorem 1], several three-parametric families of three-stage W-methods for the integration of IVPs in ODEs were studied. In addition, several specific methods were optimized for PDEs when the Approximate Matrix Factorization (AMF) splitting is used to define the approximate Jacobian matrix (W ≈ fy(yn)), and some convergence and stability properties were presented [2]. The derived methods were optimized on the basis that the underlying explicit Runge-Kutta method is the one having the largest monotonicity interval among the three-stage, order-three Runge-Kutta methods [1]. Here, we propose an optimization of the methods by imposing an additional order condition [7] to keep order three for parabolic PDE problems [6], but at the price of substantially reducing the length of the nonlinear monotonicity interval of the underlying explicit Runge-Kutta method.
A finite element analysis of viscoelastically damped sandwich plates
NASA Astrophysics Data System (ADS)
Ma, B.-A.; He, J.-F.
1992-01-01
A finite element analysis associated with an asymptotic solution method for the harmonic flexural vibration of viscoelastically damped unsymmetrical sandwich plates is given. The element formulation is based on generalization of the discrete Kirchhoff theory (DKT) element formulation. The results obtained with the first order approximation of the asymptotic solution presented here are the same as those obtained by means of the modal strain energy (MSE) method. By taking more terms of the asymptotic solution, with successive calculations and use of the Padé approximants method, accuracy can be improved. The finite element computation has been verified by comparison with an analytical exact solution for rectangular plates with simply supported edges. Results for the same plates with clamped edges are also presented.
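Padé acceleration of a truncated series, as used above to improve on the first-order asymptotic solution, is available off the shelf. The series below (log(1+x)) is only a demonstration; the paper's asymptotic series for the damped sandwich plate is not reproduced.

```python
import numpy as np
from scipy.interpolate import pade

# Taylor coefficients of log(1+x), in increasing order of degree.
taylor = [0.0, 1.0, -0.5, 1.0 / 3.0, -0.25, 0.2, -1.0 / 6.0]
p, q = pade(taylor, 3)                      # [3/3] Pade approximant

x = 1.5                                     # outside the series' comfort zone
exact = np.log(1.0 + x)
truncated = np.polyval(taylor[::-1], x)     # raw partial sum
print(exact, p(x) / q(x), truncated)        # Pade is far closer to exact
```

The same mechanism is what lets a few extra asymptotic terms, combined through Padé approximants, outperform the raw first-order (MSE-equivalent) result.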
NASA Technical Reports Server (NTRS)
Atluri, Satya N.; Shen, Shengping
2002-01-01
In this paper, a very simple method is used to derive the weakly singular traction boundary integral equation based on the integral relationships for displacement gradients. The concept of the MLPG method is employed to solve the integral equations, especially those arising in solid mechanics. A moving least squares (MLS) interpolation is selected to approximate the trial functions. Five boundary integral solution methods are introduced: the direct solution method; the displacement boundary-value problem; the traction boundary-value problem; the mixed boundary-value problem; and the boundary variational principle. Based on the local weak form of the BIE, four different nodal-based local test functions are selected, leading to four different MLPG methods for each BIE solution method. These methods combine the advantages of the MLPG method and the boundary element method.
Neural Networks and other Techniques for Fault Identification and Isolation of Aircraft Systems
NASA Technical Reports Server (NTRS)
Innocenti, M.; Napolitano, M.
2003-01-01
Fault identification, isolation, and accommodation have become critical issues in the overall performance of advanced aircraft systems. Neural networks have been shown to be a very attractive alternative to classic adaptation methods for identification and control of nonlinear dynamic systems. The purpose of this paper is to show the improvements in neural network applications achievable through the use of learning algorithms more efficient than the classic back-propagation and through the implementation of the neural schemes in parallel hardware. The results of the analysis of a scheme for Sensor Failure Detection, Identification and Accommodation (SFDIA) using experimental flight data from a research aircraft model are presented. Conventional approaches to the problem are based on observers and Kalman filters, while more recent methods are based on neural approximators. The work described in this paper is based on the use of neural networks (NNs) as online-learning nonlinear approximators. The performances of two different neural architectures are compared. The first architecture is based on a Multi Layer Perceptron (MLP) NN trained with the Extended Back Propagation algorithm (EBPA). The second is based on a Radial Basis Function (RBF) NN trained with the Extended-MRAN (EMRAN) algorithm. In addition, alternative methods for communication-link fault detection and accommodation are presented, relative to multiple unmanned aircraft applications.
Formal modeling of a system of chemical reactions under uncertainty.
Ghosh, Krishnendu; Schlipf, John
2014-10-01
We describe a novel formalism representing a system of chemical reactions, with imprecise rates of reactions and concentrations of chemicals, and describe a model reduction method, pruning, based on the chemical properties. We present two algorithms, midpoint approximation and interval approximation, for construction of efficient model abstractions with uncertainty in data. We evaluate computational feasibility by posing queries in computation tree logic (CTL) on a prototype of extracellular-signal-regulated kinase (ERK) pathway.
Sponer, Jiří; Sponer, Judit E; Mládek, Arnošt; Jurečka, Petr; Banáš, Pavel; Otyepka, Michal
2013-12-01
Base stacking is a major interaction shaping and stabilizing nucleic acids. During the last decades, base stacking has been extensively studied by experimental and theoretical methods. Advanced quantum-chemical calculations have clarified that base stacking is a common interaction which, to a first approximation, can be described as a combination of the three most basic contributions to molecular interactions, namely electrostatic interaction, London dispersion attraction, and short-range repulsion. There is no specific π-π energy term associated with the delocalized π electrons of the aromatic rings that cannot be described by the mentioned contributions. Base stacking can be rather reasonably approximated by simple molecular simulation methods based on well-calibrated common force fields, although the force fields do not include the nonadditivity of stacking, the anisotropy of dispersion interactions, and some other effects. However, the description of stacking association in the condensed phase and the understanding of the stacking role in biomolecules remain difficult problems, as the net base stacking forces always act in a complex and context-specific environment. Moreover, the stacking forces are balanced with many other energy contributions. Differences in the definition of stacking in experimental and theoretical studies are explained.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sevast'yanov, E A; Sadekova, E Kh
The Bulgarian mathematicians Sendov, Popov, and Boyanov have well-known results on the asymptotic behaviour of the least deviations of 2π-periodic functions in the classes H^ω from trigonometric polynomials in the Hausdorff metric. However, the asymptotics they give are not adequate to detect a difference in, for example, the rate of approximation of functions f whose moduli of continuity ω(f;δ) differ by factors of the form (log(1/δ))^β. Furthermore, a more detailed determination of the asymptotic behaviour by traditional methods becomes very difficult. This paper develops an approach based on using trigonometric snakes as approximating polynomials. The snakes of order n inscribed in the Minkowski δ-neighbourhood of the graph of the approximated function f provide, in a number of cases, the best approximation for f (for the appropriate choice of δ). The choice of δ depends on n and f and is based on constructing polynomial kernels adjusted to the Hausdorff metric and polynomials with special oscillatory properties. Bibliography: 19 titles.
Improved response functions for gamma-ray skyshine analyses
NASA Astrophysics Data System (ADS)
Shultis, J. K.; Faw, R. E.; Deng, X.
1992-09-01
A computationally simple method, based on line-beam response functions, is refined for estimating gamma skyshine dose rates. Critical to this method is the availability of an accurate approximation for the line-beam response function (LBRF). In this study, the LBRF is evaluated accurately with the point-kernel technique using recent photon interaction data. Various approximations to the LBRF are considered, and a three parameter formula is selected as the most practical approximation. By fitting the approximating formula to point-kernel results, a set of parameters is obtained that allows the LBRF to be quickly and accurately evaluated for energies between 0.01 and 15 MeV, for source-to-detector distances from 1 to 3000 m, and for beam angles from 0 to 180 degrees. This re-evaluation of the approximate LBRF gives better accuracy, especially at low energies, over a greater source-to-detector range than do previous LBRF approximations. A conical beam response function is also introduced for application to skyshine sources that are azimuthally symmetric about a vertical axis. The new response functions are then applied to three simple skyshine geometries (an open silo geometry, an infinite wall, and a rectangular four-wall building) and the results are compared to previous calculations and benchmark data.
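The fitting step, obtaining a three-parameter approximation of the LBRF from point-kernel data, is a routine nonlinear least-squares problem. The functional form below is a placeholder (the abstract does not give the actual formula), and the 'data' are synthetic, so this only illustrates the workflow.

```python
import numpy as np
from scipy.optimize import curve_fit

def lbrf_model(phi, a, b, c):
    """Hypothetical 3-parameter form; phi is the beam angle in radians."""
    return a * np.exp(-b * phi) * phi ** c

rng = np.random.default_rng(0)
phi = np.linspace(0.05, np.pi, 40)                  # sampled beam angles
point_kernel = lbrf_model(phi, 2.0e-16, 1.3, 0.4)   # stand-in for point-kernel output
point_kernel *= rng.normal(1.0, 0.02, phi.size)     # mimic evaluation scatter

params, _ = curve_fit(lbrf_model, phi, point_kernel, p0=(1e-16, 1.0, 0.5))
```

In the paper this fit is repeated over a grid of energies and source-to-detector distances, so that the response function can be evaluated cheaply anywhere in the covered range.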
Ho, Lam Si Tung; Xu, Jason; Crawford, Forrest W; Minin, Vladimir N; Suchard, Marc A
2018-03-01
Birth-death processes track the size of a univariate population, but many biological systems involve interaction between populations, necessitating models for two or more populations simultaneously. A lack of efficient methods for evaluating finite-time transition probabilities of bivariate processes, however, has restricted statistical inference in these models. Researchers rely on computationally expensive methods such as matrix exponentiation or Monte Carlo approximation, restricting likelihood-based inference to small systems, or indirect methods such as approximate Bayesian computation. In this paper, we introduce the birth/birth-death process, a tractable bivariate extension of the birth-death process, where rates are allowed to be nonlinear. We develop an efficient algorithm to calculate its transition probabilities using a continued fraction representation of their Laplace transforms. Next, we identify several exemplary models arising in molecular epidemiology, macro-parasite evolution, and infectious disease modeling that fall within this class, and demonstrate advantages of our proposed method over existing approaches to inference in these models. Notably, the ubiquitous stochastic susceptible-infectious-removed (SIR) model falls within this class, and we emphasize that computable transition probabilities newly enable direct inference of parameters in the SIR model. We also propose a very fast method for approximating the transition probabilities under the SIR model via a novel branching process simplification, and compare it to the continued fraction representation method with application to the 17th century plague in Eyam. Although the two methods produce similar maximum a posteriori estimates, the branching process approximation fails to capture the correlation structure in the joint posterior distribution.
NASA Astrophysics Data System (ADS)
Do, Seongju; Li, Haojun; Kang, Myungjoo
2017-06-01
In this paper, we present an accurate and efficient wavelet-based adaptive weighted essentially non-oscillatory (WENO) scheme for the hydrodynamics and ideal magnetohydrodynamics (MHD) equations arising from hyperbolic conservation systems. The proposed method works with the finite difference weighted essentially non-oscillatory (FD-WENO) method in space and the third-order total variation diminishing (TVD) Runge-Kutta (RK) method in time. The philosophy of this work is to use lifted interpolating wavelets not only as a detector of singularities but also as an interpolator; in particular, flexible interpolation can be performed via the inverse wavelet transform. When the divergence cleaning method introducing an auxiliary scalar field ψ is applied to the base numerical schemes to impose the divergence-free condition on the magnetic field in the MHD equations, the approximations to derivatives of ψ require neighboring points, and the fifth-order WENO interpolation requires a large stencil to reconstruct a high-order polynomial. In such cases, an efficient interpolation method is necessary. The adaptive spatial differentiation method is considered, as well as the adaptation of grid resolutions. To avoid the heavy computation of FD-WENO, a fixed-stencil approximation without computing the nonlinear WENO weights is used in smooth regions, and the characteristic decomposition is replaced by a component-wise approach. Numerical results demonstrate that with the adaptive method we are able to resolve solutions that agree well with the solution on the corresponding fine grid.
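The nonlinear weights whose computation the scheme skips in smooth regions are the classical Jiang-Shu WENO5 weights. The sketch below reconstructs the left-biased interface value from five cell averages; it is the textbook formulation, not the authors' wavelet-adaptive code.

```python
import numpy as np

def weno5_reconstruct(v, eps=1e-6):
    """Fifth-order WENO (Jiang-Shu) left-biased value at the i+1/2 interface.

    v: five cell averages [v_{i-2}, v_{i-1}, v_i, v_{i+1}, v_{i+2}].
    """
    # Third-order candidate reconstructions on the three sub-stencils.
    p0 = (2 * v[0] - 7 * v[1] + 11 * v[2]) / 6.0
    p1 = (-v[1] + 5 * v[2] + 2 * v[3]) / 6.0
    p2 = (2 * v[2] + 5 * v[3] - v[4]) / 6.0
    # Smoothness indicators.
    b0 = 13/12 * (v[0] - 2*v[1] + v[2])**2 + 0.25 * (v[0] - 4*v[1] + 3*v[2])**2
    b1 = 13/12 * (v[1] - 2*v[2] + v[3])**2 + 0.25 * (v[1] - v[3])**2
    b2 = 13/12 * (v[2] - 2*v[3] + v[4])**2 + 0.25 * (3*v[2] - 4*v[3] + v[4])**2
    # Nonlinear weights from the ideal linear weights (1/10, 6/10, 3/10).
    a = np.array([0.1, 0.6, 0.3]) / (eps + np.array([b0, b1, b2])) ** 2
    w = a / a.sum()
    return w @ np.array([p0, p1, p2])
```

In smooth regions b0 ≈ b1 ≈ b2, so w collapses to the ideal weights; the adaptive scheme detects such regions with the wavelet coefficients and uses the fixed weights directly, skipping the indicator computation.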
Pedersen, S N; Lindholst, C
1999-12-09
Extraction methods were developed for quantification of the xenoestrogens 4-tert.-octylphenol (tOP) and bisphenol A (BPA) in water and in liver and muscle tissue from the rainbow trout (Oncorhynchus mykiss). The extraction of tOP and BPA from tissue samples was carried out using microwave-assisted solvent extraction (MASE) followed by solid-phase extraction (SPE). Water samples were extracted using only SPE. For the quantification of tOP and BPA, liquid chromatography mass spectrometry (LC-MS) equipped with an atmospheric pressure chemical ionisation interface (APCI) was applied. The combined methods for tissue extraction allow the use of small sample amounts of liver or muscle (typically 1 g), low volumes of solvent (20 ml), and short extraction times (25 min). Limits of quantification of tOP in tissue samples were found to be approximately 10 ng/g in muscle and 50 ng/g in liver (both based on 1 g of fresh tissue). The corresponding values for BPA were approximately 50 ng/g in both muscle and liver tissue. In water, the limit of quantification for tOP and BPA was approximately 0.1 microg/l (based on 100 ml sample size).
Eye gaze tracking using correlation filters
NASA Astrophysics Data System (ADS)
Karakaya, Mahmut; Bolme, David; Boehnen, Chris
2014-03-01
In this paper, we study a method for eye gaze tracking that provides gaze estimation from a standard webcam with a zoom lens and reduces the setup and calibration requirements for new users. Specifically, we have developed a gaze estimation method based on the relative locations of points on the top of the eyelid and the eye corners. The gaze estimation in this paper is based on the distances between the top point of the eyelid and the eye corner detected by correlation filters. Advanced correlation filters were found to provide facial landmark detections that are accurate enough to determine the subject's gaze direction to within approximately 4-5 degrees, although calibration errors often produce a larger overall shift in the estimates. This corresponds approximately to a circle of diameter 2 inches on a screen at arm's length from the subject. At this accuracy it is possible to determine which regions of text or images the subject is looking at, but it falls short of determining which word the subject has looked at.
Estimating and Identifying Unspecified Correlation Structure for Longitudinal Data
Hu, Jianhua; Wang, Peng; Qu, Annie
2014-01-01
Identifying correlation structure is important to achieving estimation efficiency in analyzing longitudinal data, and is also crucial for drawing valid statistical inference for large size clustered data. In this paper, we propose a nonparametric method to estimate the correlation structure, which is applicable for discrete longitudinal data. We utilize eigenvector-based basis matrices to approximate the inverse of the empirical correlation matrix and determine the number of basis matrices via model selection. A penalized objective function based on the difference between the empirical and model approximation of the correlation matrices is adopted to select an informative structure for the correlation matrix. The eigenvector representation of the correlation estimation is capable of reducing the risk of model misspecification, and also provides useful information on the specific within-cluster correlation pattern of the data. We show that the proposed method possesses the oracle property and selects the true correlation structure consistently. The proposed method is illustrated through simulations and two data examples on air pollution and sonar signal studies. PMID:26361433
Lanczos algorithm with matrix product states for dynamical correlation functions
NASA Astrophysics Data System (ADS)
Dargel, P. E.; Wöllert, A.; Honecker, A.; McCulloch, I. P.; Schollwöck, U.; Pruschke, T.
2012-05-01
The density-matrix renormalization group (DMRG) algorithm can be adapted to the calculation of dynamical correlation functions in various ways which all represent compromises between computational efficiency and physical accuracy. In this paper we reconsider the oldest approach based on a suitable Lanczos-generated approximate basis and implement it using matrix product states (MPS) for the representation of the basis states. The direct use of matrix product states combined with an ex post reorthogonalization method allows us to avoid several shortcomings of the original approach, namely the multitargeting and the approximate representation of the Hamiltonian inherent in earlier Lanczos-method implementations in the DMRG framework, and to deal with the ghost problem of Lanczos methods, leading to a much better convergence of the spectral weights and poles. We present results for the dynamic spin structure factor of the spin-1/2 antiferromagnetic Heisenberg chain. A comparison to Bethe ansatz results in the thermodynamic limit reveals that the MPS-based Lanczos approach is much more accurate than earlier approaches at minor additional numerical cost.
NASA Astrophysics Data System (ADS)
Rusz, Ján; Lubk, Axel; Spiegelberg, Jakob; Tyutyunnikov, Dmitry
2017-12-01
The complex interplay of elastic and inelastic scattering, amenable to different levels of approximation, constitutes the major challenge for the computation, and hence interpretation, of TEM-based spectroscopic methods. The two major approaches to calculating inelastic scattering cross sections of fast electrons on crystals, Yoshioka-equations-based forward propagation and the reciprocal wave method, are founded on two conceptually differing schemes: a numerical forward integration of each inelastically scattered wave function, yielding the exit density matrix, and a computation of inelastic scattering matrix elements using elastically scattered initial and final states (double channeling). Here, we compare both approaches and show that the latter is computationally competitive with the former by exploiting analytical integration schemes over multiple excited states. Moreover, we show how to include the full nonlocality of the inelastic scattering event, neglected in the forward propagation approaches, at no additional computing cost in the reciprocal wave method. Detailed simulations show in some cases significant errors due to the z-locality approximation, and hence pitfalls in the interpretation of spectroscopic TEM results.
NASA Astrophysics Data System (ADS)
Morris, Titus; Bogner, Scott
2016-09-01
The In-Medium Similarity Renormalization Group (IM-SRG) has been applied successfully to the ground states of closed-shell finite nuclei. Recent work has extended its ability to target excited states of these closed-shell systems via equation-of-motion methods, and also complete spectra of the whole SD shell via effective shell model interactions. A recent alternative method for solving the IM-SRG equations, based on the Magnus expansion, not only provides a computationally feasible route to producing observables, but also allows for approximate handling of induced three-body forces. Promising results for several systems, including finite nuclei, will be presented and discussed.
Coherent-Anomaly Method in Critical Phenomena. IV.
NASA Astrophysics Data System (ADS)
Hu, Xiao; Suzuki, Masuo
The systematic Weiss-like and Bethe-like approximations based on the mean-field transfer-matrix method are used to investigate the asymptotic behavior of the induced magnetization on a semi-infinite square lattice, and to investigate the wave-number dependence of the susceptibility in a nonuniform external field. The critical exponents ν, ν′, η_i and η are estimated following the general CAM prescription. A new scaling relation ν·η_i = β is obtained in the framework of the finite-degree-of-approximation scaling. Together with previous papers, all the static critical exponents have been estimated by the CAM, and are shown to satisfy the well-known scaling relations.
NASA Astrophysics Data System (ADS)
Zhou, Weifeng; Cai, Jian-Feng; Gao, Hao
2013-12-01
A popular approach to medical image reconstruction has been sparsity regularization, which assumes the targeted image can be well approximated by sparse coefficients under some properly designed system. The wavelet tight frame is such a widely used system, owing to its capability for sparsely approximating piecewise-smooth functions such as medical images. However, using a fixed system may not always be optimal for reconstructing a variety of diversified images. Recently, methods based on adaptive over-complete dictionaries that are specific to the structures of the targeted images have demonstrated superiority for image processing. This work develops an adaptive wavelet tight frame method for image reconstruction. The proposed scheme first constructs an adaptive wavelet tight frame that is task specific, and then reconstructs the image of interest by solving an l1-regularized minimization problem using the constructed adaptive tight frame system. A proof-of-concept study is performed for computed tomography (CT), and the simulation results suggest that the adaptive tight frame method improves reconstructed CT image quality relative to the traditional tight frame method.
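As a rough illustration of the l1-regularized reconstruction step, the sketch below runs ISTA (iterative soft thresholding) on a synthesis-form problem; the random forward operator, the identity placeholder for the learned tight frame, and all sizes are assumptions, not the paper's adaptive construction.

```python
# Sketch: min_c 0.5*||A W^T c - b||^2 + lam*||c||_1 solved with ISTA, where W is
# an orthonormal transform standing in for the adaptive tight frame (assumption).
import numpy as np

rng = np.random.default_rng(0)
n, m = 128, 64
A = rng.standard_normal((m, n)) / np.sqrt(m)    # toy forward operator (stand-in for CT)
W = np.eye(n)                                   # placeholder frame; W @ W.T = I
x_true = np.zeros(n); x_true[rng.choice(n, 8, replace=False)] = 1.0
b = A @ x_true + 0.01 * rng.standard_normal(m)

lam = 0.02
L = np.linalg.norm(A, 2) ** 2                   # Lipschitz constant of the gradient
soft = lambda z, t: np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

c = np.zeros(n)
for _ in range(500):
    grad = W @ A.T @ (A @ (W.T @ c) - b)        # gradient of the data term
    c = soft(c - grad / L, lam / L)             # proximal (soft-threshold) step
x_rec = W.T @ c
print("reconstruction error:", np.linalg.norm(x_rec - x_true))
```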
Battery state-of-charge estimation using approximate least squares
NASA Astrophysics Data System (ADS)
Unterrieder, C.; Zhang, C.; Lunglmayr, M.; Priewasser, R.; Marsili, S.; Huemer, M.
2015-03-01
In recent years, much effort has been spent to extend the runtime of battery-powered electronic applications. In order to improve the utilization of the available cell capacity, high precision estimation approaches for battery-specific parameters are needed. In this work, an approximate least squares estimation scheme is proposed for the estimation of the battery state-of-charge (SoC). The SoC is determined based on the prediction of the battery's electromotive force. The proposed approach allows for an improved re-initialization of the Coulomb counting (CC) based SoC estimation method. Experimental results for an implementation of the estimation scheme on a fuel gauge system on chip are illustrated. Implementation details and design guidelines are presented. The performance of the presented concept is evaluated for realistic operating conditions (temperature effects, aging, standby current, etc.). For the considered test case of a GSM/UMTS load current pattern of a mobile phone, the proposed method is able to re-initialize the CC-method with a high accuracy, while state-of-the-art methods fail to perform a re-initialization.
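The following sketch illustrates the general idea of re-initializing Coulomb counting from a least-squares electromotive-force estimate; the quadratic OCV model, cell capacity, and current profile are invented for illustration and do not reproduce the paper's fuel-gauge implementation.

```python
# Sketch: re-initializing Coulomb counting from a least-squares EMF estimate.
# All model parameters and numbers below are illustrative assumptions.
import numpy as np

# Assumed open-circuit-voltage model: ocv(soc) = a0 + a1*soc + a2*soc^2
a = np.array([3.5, 0.6, 0.1])
ocv = lambda soc: a[0] + a[1] * soc + a[2] * soc ** 2

# Noisy terminal-voltage samples during a rest period (relaxation ~ EMF).
rng = np.random.default_rng(1)
v = ocv(0.62) + 0.002 * rng.standard_normal(50)

# Least-squares EMF estimate (here the sample mean of the relaxed voltage),
# then invert the quadratic OCV model to recover the SoC.
emf = np.mean(v)
roots = np.roots([a[2], a[1], a[0] - emf])
soc0 = [r.real for r in roots if 0.0 <= r.real <= 1.0][0]

# Re-initialized Coulomb counting: soc(t) = soc0 - (1/C) * integral of current.
C_ah, dt = 2.0, 1.0 / 3600.0                 # capacity (Ah), step (h)
current = np.full(600, 0.5)                  # 0.5 A discharge (assumption)
soc = soc0 - np.cumsum(current) * dt / C_ah
print(f"re-initialized SoC: {soc0:.3f}, after 10 min: {soc[-1]:.3f}")
```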
Approximate quasiparticle correction for calculations of the energy gap in two-dimensional materials
NASA Astrophysics Data System (ADS)
Guilhon, I.; Koda, D. S.; Ferreira, L. G.; Marques, M.; Teles, L. K.
2018-01-01
At the same time that two-dimensional (2D) systems open possibilities for new physics and applications, they present a higher challenge for electronic structure calculations, especially concerning excitations. The achievement of a fast and accurate practical model that incorporates approximate quasiparticle corrections can further open an avenue for more reliable band structure calculations of complex systems, such as interactions of 2D materials with substrates or molecules, as well as the formation of van der Waals heterostructures. In this work, we demonstrate that the performance of the fast and parameter-free DFT-1/2 method is comparable with state-of-the-art GW and superior to the HSE06 hybrid functional for the majority of the 34 different 2D materials studied. Moreover, based on knowledge of the method and chemical information about the material, we can predict the small number of cases in which the method is not so effective, and we also provide the best recipe for an optimized DFT-1/2 method based on the electronegativity difference of the bonding atoms.
NASA Astrophysics Data System (ADS)
Gorthi, Sai Siva; Rajshekhar, G.; Rastogi, Pramod
2010-04-01
For three-dimensional (3D) shape measurement using fringe projection techniques, the information about the 3D shape of an object is encoded in the phase of a recorded fringe pattern. The paper proposes a method based on high-order instantaneous moments to estimate phase from a single fringe pattern in fringe projection. The proposed method works by approximating the phase as a piecewise polynomial and subsequently determining the polynomial coefficients using high-order instantaneous moments to construct the polynomial phase. Simulation results are presented to show the method's potential.
A patch-based pseudo-CT approach for MRI-only radiotherapy in the pelvis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Andreasen, Daniel, E-mail: dana@dtu.dk
Purpose: In radiotherapy based only on magnetic resonance imaging (MRI), knowledge about tissue electron densities must be derived from the MRI. This can be achieved by converting the MRI scan to the so-called pseudo-computed tomography (pCT). An obstacle is that the voxel intensities in conventional MRI scans are not uniquely related to electron density. The authors previously demonstrated that a patch-based method could produce accurate pCTs of the brain using conventional T1-weighted MRI scans. The method was driven mainly by local patch similarities and relied on simple affine registrations between an atlas database of the co-registered MRI/CT scan pairs and the MRI scan to be converted. In this study, the authors investigate the applicability of the patch-based approach in the pelvis. This region is challenging for a method based on local similarities due to the greater inter-patient variation. The authors benchmark the method against a baseline pCT strategy where all voxels inside the body contour are assigned a water-equivalent bulk density. Furthermore, the authors implement a parallelized approximate patch search strategy to speed up the pCT generation time to a more clinically relevant level. Methods: The data consisted of CT and T1-weighted MRI scans of 10 prostate patients. pCTs were generated using an approximate patch search algorithm in a leave-one-out fashion and compared with the CT using frequently described metrics such as the voxel-wise mean absolute error (MAE_vox) and the deviation in water-equivalent path lengths. Furthermore, the dosimetric accuracy was tested for a volumetric modulated arc therapy plan using dose-volume histogram (DVH) point deviations and γ-index analysis. Results: The patch-based approach had an average MAE_vox of 54 HU; median deviations of less than 0.4% in relevant DVH points and a γ-index pass rate of 0.97 using a 1%/1 mm criterion. The patch-based approach showed a significantly better performance than the baseline water pCT in almost all metrics. The approximate patch search strategy was 70x faster than a brute-force search, with an average prediction time of 20.8 min. Conclusions: The authors showed that a patch-based method based on affine registrations and T1-weighted MRI could generate accurate pCTs of the pelvis. The main source of differences between pCT and CT was positional changes of air pockets and body outline.
Novel Harmonic Regularization Approach for Variable Selection in Cox's Proportional Hazards Model
Chu, Ge-Jin; Liang, Yong; Wang, Jia-Xuan
2014-01-01
Variable selection is an important issue in regression, and a number of variable selection methods have been proposed involving nonconvex penalty functions. In this paper, we investigate a novel harmonic regularization method, which can approximate nonconvex Lq (1/2 < q < 1) regularizations, to select key risk factors in the Cox's proportional hazards model using microarray gene expression data. The harmonic regularization method can be efficiently solved using our proposed direct path seeking approach, which can produce solutions that closely approximate those for the convex loss function and the nonconvex regularization. Simulation results based on artificial datasets and four real microarray gene expression datasets, such as the diffuse large B-cell lymphoma (DLBCL), lung cancer, and AML datasets, show that the harmonic regularization method can be more accurate for variable selection than existing Lasso series methods. PMID:25506389
Domain decomposition methods for systems of conservation laws: Spectral collocation approximations
NASA Technical Reports Server (NTRS)
Quarteroni, Alfio
1989-01-01
Hyperbolic systems of conservation laws are considered which are discretized in space by spectral collocation methods and advanced in time by finite difference schemes. At any time level, a domain decomposition method based on an iteration-by-subdomain procedure is introduced, yielding at each step a sequence of independent subproblems (one for each subdomain) that can be solved simultaneously. The method is set for a general nonlinear problem in several space variables. The convergence analysis, however, is carried out only for a linear one-dimensional system with continuous solutions. A precise form of the error reduction factor at each iteration is derived. Although the method is applied here to the case of spectral collocation approximation only, the idea is fairly general and can be used in a different context as well. For instance, its application to space discretization by finite differences is straightforward.
Bounded Linear Stability Analysis - A Time Delay Margin Estimation Approach for Adaptive Control
NASA Technical Reports Server (NTRS)
Nguyen, Nhan T.; Ishihara, Abraham K.; Krishnakumar, Kalmanje Srinlvas; Bakhtiari-Nejad, Maryam
2009-01-01
This paper presents a method for estimating time delay margin for model-reference adaptive control of systems with almost linear structured uncertainty. The bounded linear stability analysis method seeks to represent the conventional model-reference adaptive law by a locally bounded linear approximation within a small time window using the comparison lemma. The locally bounded linear approximation of the combined adaptive system is cast in a form of an input-time-delay differential equation over a small time window. The time delay margin of this system represents a local stability measure and is computed analytically by a matrix measure method, which provides a simple analytical technique for estimating an upper bound of time delay margin. Based on simulation results for a scalar model-reference adaptive control system, both the bounded linear stability method and the matrix measure method are seen to provide a reasonably accurate and yet not too conservative time delay margin estimation.
NASA Astrophysics Data System (ADS)
Costa-Surós, M.; Calbó, J.; González, J. A.; Long, C. N.
2014-04-01
The cloud vertical distribution, and especially the cloud base height, which is linked to cloud type, is an important characteristic for describing the impact of clouds on climate. In this work several methods to estimate the cloud vertical structure (CVS) based on atmospheric sounding profiles are compared, considering the number and position of cloud layers, with a ground-based system which is taken as a reference: the Active Remote Sensing of Clouds (ARSCL). All methods establish some conditions on the relative humidity, and differ in the use of other variables, the thresholds applied, or the vertical resolution of the profile. In this study these methods are applied to 193 radiosonde profiles acquired at the ARM Southern Great Plains site during all seasons of the year 2009 and endorsed by GOES images, to confirm that the cloudiness conditions are homogeneous enough across their trajectory. The perfect agreement (i.e., when the whole CVS is correctly estimated) for the methods ranges from 26% to 64%; the methods show additional approximate agreement (i.e., when at least one cloud layer is correctly assessed) of 15% to 41%. Further tests and improvements are applied to one of these methods. In addition, we attempt to make this method suitable for low-resolution vertical profiles, like those from the outputs of reanalysis methods or from the WMO's Global Telecommunication System. The perfect agreement, even when using low-resolution profiles, can be improved up to 67% (plus 25% of approximate agreement) if the thresholds for a moist layer to become a cloud layer are modified to minimize false negatives with the current data set, thus improving overall agreement.
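A minimal sketch of the relative-humidity-threshold family of methods being compared might look as follows; the 90% threshold and the synthetic profile are assumptions, not any specific published variant.

```python
# Sketch: a minimal RH-threshold cloud-layer detector of the kind compared in
# the study. The threshold and the profile are illustrative assumptions.
import numpy as np

z = np.arange(0, 12000, 250.0)                      # height grid (m)
rh = (40 + 55 * np.exp(-((z - 2000) / 600.0) ** 2)
         + 55 * np.exp(-((z - 8000) / 900.0) ** 2))  # synthetic RH profile (%)

cloudy = rh >= 90.0                                 # condition on relative humidity
layers, start = [], None
for k, flag in enumerate(cloudy):
    if flag and start is None:
        start = k
    elif not flag and start is not None:
        layers.append((z[start], z[k - 1])); start = None
if start is not None:
    layers.append((z[start], z[-1]))

for base, top in layers:
    print(f"cloud layer: base {base:.0f} m, top {top:.0f} m")
```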
NASA Astrophysics Data System (ADS)
Astionenko, I. O.; Litvinenko, O. I.; Osipova, N. V.; Tuluchenko, G. Ya.; Khomchenko, A. N.
2016-10-01
Recently, interpolation bases of the hierarchical type have been used for solving the problem of approximating functions of multiple arguments (such as in the finite-element method). In this work a cognitive graphical method for constructing hierarchical-form bases on serendipity finite elements is suggested, which allowed us to obtain alternative bases on a biquadratic finite element from the serendipity family without including internal nodes. The cognitive-graphic method also allowed us to improve the known Taylor interpolation procedure and to obtain modified elements with an irregular arrangement of nodes. The proposed procedures are universal and can be applied broadly in the finite-element field.
Linear approximations of global behaviors in nonlinear systems with moderate or strong noise
NASA Astrophysics Data System (ADS)
Liang, Junhao; Din, Anwarud; Zhou, Tianshou
2018-03-01
While many physical or chemical systems can be modeled by nonlinear Langevin equations (LEs), dynamical analysis of these systems is challenging in the cases of moderate and strong noise. Here we develop a linear approximation scheme, which can transform an often intractable LE into a linear set of binomial moment equations (BMEs). This scheme provides a feasible way to capture nonlinear behaviors in the sense of probability distribution and is effective even when the noise is moderate or strong. Based on BMEs, we further develop a noise reduction technique, which can effectively handle tough cases where traditional small-noise theories are inapplicable. The overall method not only provides an approximation-based paradigm for analysis of the local and global behaviors of nonlinear noisy systems but also has a wide range of applications.
Label inspection of approximate cylinder based on adverse cylinder panorama
NASA Astrophysics Data System (ADS)
Lin, Jianping; Liao, Qingmin; He, Bei; Shi, Chenbo
2013-12-01
This paper presents a machine vision system for automated label inspection, with the goal of reducing labor cost and ensuring consistent product quality. Firstly, the images captured by each single camera are distorted, since the inspected object is approximately cylindrical. Therefore, this paper proposes an algorithm based on adverse cylinder projection, in which label images are rectified by distortion compensation. Secondly, to overcome the limited field of view of each single camera, our method combines the images of all single cameras and builds a panorama for label inspection. Thirdly, considering the shake of production lines and the error of electronic signals, we design a real-time image registration to calculate offsets between the template and inspected images. Experimental results demonstrate that our system is accurate, real-time and applicable to numerous real-time inspections of approximate cylinders.
Yu, Jinpeng; Shi, Peng; Yu, Haisheng; Chen, Bing; Lin, Chong
2015-07-01
This paper considers the problem of discrete-time adaptive position tracking control for an interior permanent magnet synchronous motor (IPMSM) based on fuzzy approximation. Fuzzy logic systems are used to approximate the nonlinearities of the discrete-time IPMSM drive system, which is derived by direct discretization using the Euler method, and a discrete-time fuzzy position tracking controller is designed via a backstepping approach. In contrast to existing results, the advantage of the scheme is that the number of adjustable parameters is reduced to only two and the problem of coupling nonlinearity can be overcome. It is shown that the proposed discrete-time fuzzy controller can guarantee that the tracking error converges to a small neighborhood of the origin and that all the signals are bounded. Simulation results illustrate the effectiveness and the potential of the theoretical results obtained.
Stephan, Carl N; Devine, Matthew
2009-10-30
The construction of the facial muscles (particularly those of mastication) is generally thought to enhance the accuracy of facial approximation methods because it increases the attention paid to face anatomy. However, the lack of consideration for non-muscular structures of the face when using these "anatomical" methods ironically forces one of the two large masticatory muscles to be exaggerated beyond reality. To demonstrate and resolve this issue, the temporal region of nineteen caucasoid human cadavers (10 females, 9 males; mean age = 84 years, s = 9 years, range = 58-97 years) was investigated. Soft tissue depths were measured at regular intervals across the temporal fossa in 10 cadavers, and the thickness of the muscle and fat components was quantified in nine other cadavers. The measurements indicated that the temporalis muscle generally accounts for <50% of the total soft tissue depth, and does not fill the entirety of the fossa (as generally known in the anatomical literature, but not as followed in facial approximation practice). In addition, a soft tissue bulge was consistently observed in the anteroinferior portion of the temporal fossa (as also evident in younger individuals), and during dissection, this bulge was found to closely correspond to the superficial temporal fat pad (STFP). Thus, the facial surface does not follow a simple undulating curve of the temporalis muscle as currently assumed in facial approximation methods. New metric-based facial approximation guidelines are presented to facilitate accurate construction of the STFP and the temporalis muscle for future facial approximation casework. This study warrants further investigations of the temporalis muscle and the STFP in younger age groups and demonstrates that untested facial approximation guidelines, including those propounded to be anatomical, should be cautiously regarded.
NASA Astrophysics Data System (ADS)
Xing, Yanyuan; Yan, Yubin
2018-03-01
Gao et al. [11] (2014) introduced a numerical scheme to approximate the Caputo fractional derivative with convergence rate O(k^(3-α)), 0 < α < 1, by directly approximating the integer-order derivative with some finite difference quotients in the definition of the Caputo fractional derivative (see also Lv and Xu [20] (2016)), where k is the time step size. Under the assumption that the solution of the time fractional partial differential equation is sufficiently smooth, Lv and Xu [20] (2016) proved by using the energy method that the corresponding numerical method for solving the time fractional partial differential equation has convergence rate O(k^(3-α)), 0 < α < 1, uniformly with respect to the time variable t. However, in general the solution of the time fractional partial differential equation has low regularity, and in this case the numerical method fails to have convergence rate O(k^(3-α)), 0 < α < 1, uniformly with respect to the time variable t. In this paper, we first obtain a similar approximation scheme to the Riemann-Liouville fractional derivative with convergence rate O(k^(3-α)), 0 < α < 1, as in Gao et al. [11] (2014), by approximating the Hadamard finite-part integral with piecewise quadratic interpolation polynomials. Based on this scheme, we introduce a time discretization scheme to approximate the time fractional partial differential equation and show by using Laplace transform methods that the time discretization scheme has convergence rate O(k^(3-α)), 0 < α < 1, for any fixed t_n > 0 for smooth and nonsmooth data in both homogeneous and inhomogeneous cases. Numerical examples are given to show that the theoretical results are consistent with the numerical results.
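For readers unfamiliar with such schemes, the sketch below implements the classic L1 approximation of the Caputo derivative, a lower-order (2 - α) relative of the O(k^(3-α)) quadratic-interpolation scheme discussed above, and checks it against a known exact derivative; it is not the scheme of the paper.

```python
# Sketch: the classic L1 approximation of the Caputo derivative (order 2 - alpha),
# verified on u(t) = t^2 where the exact fractional derivative is known.
import numpy as np
from math import gamma

alpha, T, n = 0.5, 1.0, 200
k = T / n
t = np.linspace(0.0, T, n + 1)
u = t ** 2                                   # test function u(t) = t^2

# L1 weights: b_j = (j+1)^(1-alpha) - j^(1-alpha)
j = np.arange(n)
b = (j + 1) ** (1 - alpha) - j ** (1 - alpha)

# Caputo derivative at t_n = T:
# D^alpha u(t_n) ~ k^(-alpha)/Gamma(2-alpha) * sum_j b_j (u_{n-j} - u_{n-j-1})
du = np.diff(u)                              # u_{m+1} - u_m
approx = (k ** -alpha / gamma(2 - alpha)) * np.sum(b * du[::-1])
exact = 2 * T ** (2 - alpha) / gamma(3 - alpha)   # exact D^alpha of t^2
print(f"L1 approx {approx:.6f}  exact {exact:.6f}  error {abs(approx-exact):.2e}")
```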
A new sampling scheme for developing metamodels with the zeros of Chebyshev polynomials
NASA Astrophysics Data System (ADS)
Wu, Jinglai; Luo, Zhen; Zhang, Nong; Zhang, Yunqing
2015-09-01
The accuracy of metamodelling is determined by both the sampling and approximation. This article proposes a new sampling method based on the zeros of Chebyshev polynomials to capture the sampling information effectively. First, the zeros of one-dimensional Chebyshev polynomials are applied to construct Chebyshev tensor product (CTP) sampling, and the CTP is then used to construct high-order multi-dimensional metamodels using the 'hypercube' polynomials. Secondly, the CTP sampling is further enhanced to develop Chebyshev collocation method (CCM) sampling, to construct the 'simplex' polynomials. The samples of CCM are randomly and directly chosen from the CTP samples. Two widely studied sampling methods, namely the Smolyak sparse grid and Hammersley, are used to demonstrate the effectiveness of the proposed sampling method. Several numerical examples are utilized to validate the approximation accuracy of the proposed metamodel under different dimensions.
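A minimal sketch of the CTP sampling idea follows: one-dimensional Chebyshev zeros mapped to the design intervals and combined as a tensor product. The intervals, dimensions, and grid sizes are illustrative assumptions.

```python
# Sketch: Chebyshev-zero sampling and a CTP grid in 2D; a CCM-style subset
# would be drawn from these points.
import numpy as np
from itertools import product

def cheb_zeros(n, lo=-1.0, hi=1.0):
    """Zeros of the degree-n Chebyshev polynomial, mapped to [lo, hi]."""
    k = np.arange(1, n + 1)
    x = np.cos((2 * k - 1) * np.pi / (2 * n))      # zeros on [-1, 1]
    return 0.5 * (lo + hi) + 0.5 * (hi - lo) * x

# Chebyshev tensor product (CTP) samples: all combinations of 1D zeros.
xs = cheb_zeros(5, 0.0, 2.0)
ys = cheb_zeros(5, -1.0, 1.0)
ctp = np.array(list(product(xs, ys)))              # 25 sample points
print(ctp.shape, ctp[:3])
```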
NASA Astrophysics Data System (ADS)
Guo, Sangang
2017-09-01
There are two stages in solving security-constrained unit commitment problems (SCUC) within the Lagrangian framework: one is to obtain feasible unit states (UC), the other is power economic dispatch (ED) for each unit. The accurate solution of ED is the more important for enhancing the efficiency of the solution to SCUC for fixed feasible unit states. Two novel methods, named the Convex Combinatorial Coefficient Method and the Power Increment Method respectively, based on linear programming for solving ED are proposed via piecewise linear approximation of the nonlinear convex fuel cost functions. Numerical testing results show that the methods are effective and efficient.
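The piecewise-linear reformulation of ED can be sketched as a small linear program; the two-unit cost data below are invented for illustration, and the segment formulation shown is the standard one rather than either of the paper's two proposed methods.

```python
# Sketch: economic dispatch with piecewise-linear fuel costs as an LP.
# Convexity (increasing slopes) lets the LP fill segments in the right order.
import numpy as np
from scipy.optimize import linprog

demand = 180.0
p_min = np.array([20.0, 30.0])              # minimum outputs (MW), assumption
seg_len = np.array([[30, 30, 30],           # segment widths per unit (MW)
                    [40, 40, 40]], float)
slopes = np.array([[18, 22, 27],            # incremental costs ($/MWh),
                   [16, 21, 26]], float)    # increasing within each unit

c = slopes.ravel()                          # objective: sum of segment costs
A_eq = np.ones((1, c.size))                 # segments must meet residual demand
b_eq = [demand - p_min.sum()]
bounds = [(0, w) for w in seg_len.ravel()]

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
p = p_min + res.x.reshape(2, 3).sum(axis=1)
print("dispatch (MW):", p, " segment cost ($/h):", res.fun)
```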
A fast Bayesian approach to discrete object detection in astronomical data sets - PowellSnakes I
NASA Astrophysics Data System (ADS)
Carvalho, Pedro; Rocha, Graça; Hobson, M. P.
2009-03-01
A new fast Bayesian approach is introduced for the detection of discrete objects immersed in a diffuse background. This new method, called PowellSnakes, speeds up traditional Bayesian techniques by (i) replacing the standard form of the likelihood for the parameters characterizing the discrete objects by an alternative exact form that is much quicker to evaluate; (ii) using a simultaneous multiple minimization code based on Powell's direction set algorithm to locate rapidly the local maxima in the posterior and (iii) deciding whether each located posterior peak corresponds to a real object by performing a Bayesian model selection using an approximate evidence value based on a local Gaussian approximation to the peak. The construction of this Gaussian approximation also provides the covariance matrix of the uncertainties in the derived parameter values for the object in question. This new approach provides a speed-up in performance by a factor of 100 as compared to existing Bayesian source extraction methods that use Markov chain Monte Carlo to explore the parameter space, such as that presented by Hobson & McLachlan. The method can be implemented in either real or Fourier space. In the case of objects embedded in a homogeneous random field, working in Fourier space provides a further speed-up that takes advantage of the fact that the correlation matrix of the background is circulant. We illustrate the capabilities of the method by applying it to some simplified toy models. Furthermore, PowellSnakes has the advantage of consistently defining the threshold for acceptance/rejection based on priors, which cannot be said of the frequentist methods. We present here the first implementation of this technique (version I). Further improvements to this implementation are currently under investigation and will be published shortly. The application of the method to realistic simulated Planck observations will be presented in a forthcoming publication.
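The local Gaussian (Laplace) approximation to the evidence at a located peak can be sketched as follows; the toy Gaussian posterior is an assumption chosen so that the exact evidence is known for comparison.

```python
# Sketch: Laplace approximation to the evidence at a posterior peak, the
# acceptance test used per peak; the toy posterior is an assumption.
import numpy as np

# Unnormalized log-posterior: log f(theta) = -0.5 * theta^T A theta
A = np.array([[2.0, 0.3], [0.3, 1.0]])
log_f = lambda th: -0.5 * th @ A @ th

theta_hat = np.zeros(2)                     # peak located e.g. by Powell's method
H = A                                       # negative Hessian of log f at the peak

d = theta_hat.size
log_evidence = (log_f(theta_hat) + 0.5 * d * np.log(2 * np.pi)
                - 0.5 * np.linalg.slogdet(H)[1])
exact = 0.5 * d * np.log(2 * np.pi) - 0.5 * np.linalg.slogdet(A)[1]
print(f"Laplace {log_evidence:.6f}  exact {exact:.6f}")  # identical for a Gaussian
cov = np.linalg.inv(H)                      # parameter uncertainties at the peak
print("posterior covariance:\n", cov)
```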
A Formal Valuation Framework for Emotions and Their Control.
Huys, Quentin J M; Renz, Daniel
2017-09-15
Computational psychiatry aims to apply mathematical and computational techniques to help improve psychiatric care. To achieve this, the phenomena under scrutiny should be within the scope of formal methods. As emotions play an important role across many psychiatric disorders, such computational methods must encompass emotions. Here, we consider formal valuation accounts of emotions. We focus on the fact that the flexibility of emotional responses and the nature of appraisals suggest the need for a model-based valuation framework for emotions. However, resource limitations make plain model-based valuation impossible and require metareasoning strategies to apportion cognitive resources adaptively. We argue that emotions may implement such metareasoning approximations by restricting the range of behaviors and states considered. We consider the processes that guide the deployment of the approximations, discerning between innate, model-free, heuristic, and model-based controllers. A formal valuation and metareasoning framework may thus provide a principled approach to examining emotions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tran, A; Ruan, D; Woods, K
Purpose: The predictive power of knowledge based planning (KBP) has considerable potential in the development of automated treatment planning. Here, we examine the predictive capabilities and accuracy of previously reported KBP methods, as well as an artificial neural networks (ANN) method. Furthermore, we compare the predictive accuracy of these methods on coplanar volumetric-modulated arc therapy (VMAT) and non-coplanar 4π radiotherapy. Methods: 30 liver SBRT patients previously treated using coplanar VMAT were selected for this study. The patients were re-planned using 4π radiotherapy, which involves 20 optimally selected non-coplanar IMRT fields. ANNs were used to incorporate enhanced geometric information including liver and PTV size, prescription dose, patient girth, and proximity to beams. The performance of ANN was compared to three methods from statistical voxel dose learning (SVDL), wherein the doses of voxels sharing the same distance to the PTV are approximated by either taking the median of the distribution, non-parametric fitting, or skew-normal fitting. These three methods were shown to be capable of predicting DVH, but only median approximation can predict 3D dose. Prediction methods were tested using leave-one-out cross-validation tests and evaluated using residual sum of squares (RSS) for DVH and 3D dose predictions. Results: DVH prediction using non-parametric fitting had the lowest average RSS with 0.1176 (4π) and 0.1633 (VMAT), compared to 0.4879 (4π) and 1.8744 (VMAT) RSS for ANN. 3D dose prediction with median approximation had lower RSS with 12.02 (4π) and 29.22 (VMAT), compared to 27.95 (4π) and 130.9 (VMAT) for ANN. Conclusion: Paradoxically, although the ANNs included geometric features in addition to the distances to the PTV, they did not perform better in predicting DVH or 3D dose compared to simpler, faster methods based on the distances alone. The study further confirms that the prediction of 4π non-coplanar plans was more accurate than VMAT. NIH R43CA183390 and R01CA188300.
Hybrid DFP-CG method for solving unconstrained optimization problems
NASA Astrophysics Data System (ADS)
Osman, Wan Farah Hanan Wan; Asrul Hery Ibrahim, Mohd; Mamat, Mustafa
2017-09-01
The conjugate gradient (CG) method and the quasi-Newton method are both well-known methods for solving unconstrained optimization problems. In this paper, we propose a new method that combines the search directions of the conjugate gradient method and the quasi-Newton method, based on the BFGS-CG method developed by Ibrahim et al. The Davidon-Fletcher-Powell (DFP) update formula is used as an approximation of the Hessian for this new hybrid algorithm. Numerical results show that the new algorithm performs better than the ordinary DFP method and is proven to possess both sufficient descent and global convergence properties.
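A minimal sketch of the DFP inverse-Hessian update at the heart of the hybrid, applied to a small quadratic with an exact line search; the test problem is an assumption, not the article's benchmark set.

```python
# Sketch: the Davidon-Fletcher-Powell (DFP) update of the inverse-Hessian
# approximation, on a toy quadratic f(x) = 0.5 x^T A x - b^T x.
import numpy as np

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
grad = lambda x: A @ x - b

x = np.zeros(2)
H = np.eye(2)                               # inverse-Hessian approximation
for _ in range(10):
    g = grad(x)
    if np.linalg.norm(g) < 1e-10:
        break
    d = -H @ g                              # quasi-Newton search direction
    t = -(g @ d) / (d @ A @ d)              # exact line search for a quadratic
    s = t * d                               # step
    y = grad(x + s) - g                     # gradient change
    x = x + s
    # DFP update: H+ = H + s s^T/(s^T y) - (H y)(H y)^T/(y^T H y)
    Hy = H @ y
    H = H + np.outer(s, s) / (s @ y) - np.outer(Hy, Hy) / (y @ Hy)

print("minimizer:", x, " exact:", np.linalg.solve(A, b))
```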
Fast Minimum Variance Beamforming Based on Legendre Polynomials.
Bae, MooHo; Park, Sung Bae; Kwon, Sung Jae
2016-09-01
Currently, minimum variance beamforming (MV) is actively investigated as a method that can improve the performance of an ultrasound beamformer in terms of the lateral and contrast resolution. However, this method has the disadvantage of excessive computational complexity, since the inverse spatial covariance matrix must be calculated. Noteworthy attempts to solve this problem include beam-space adaptive beamforming methods and the fast MV method based on principal component analysis. These are similar in that the original signal in the element space is transformed to another domain using an orthonormal basis matrix, and the dimension of the covariance matrix is reduced by approximating the matrix only with its important components, hence making the inversion of the matrix very simple. Recently, we proposed a new method with further reduced computational demand that uses Legendre polynomials as the basis matrix for such a transformation. In this paper, we verify the efficacy of the proposed method through Field II simulations as well as in vitro and in vivo experiments. The results show that the approximation error of this method is less than or similar to those of the above-mentioned methods, and that the lateral response of point targets and the contrast-to-speckle noise in anechoic cysts are also better than or similar to those of these methods when the dimensionality of the covariance matrices is reduced to the same dimension.
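For orientation, the sketch below computes standard minimum variance (Capon) weights, whose covariance inversion is the step the Legendre-basis transformation is designed to cheapen; the array geometry, signals, and diagonal loading level are assumptions.

```python
# Sketch: minimum variance (Capon) weights w = R^-1 a / (a^H R^-1 a).
import numpy as np

rng = np.random.default_rng(2)
M, snapshots = 16, 400                      # elements, time samples
theta = np.deg2rad(12.0)                    # look direction
a = np.exp(1j * np.pi * np.arange(M) * np.sin(theta))   # steering vector

# Simulated element data: desired signal + interferer + noise.
s = rng.standard_normal(snapshots)
ai = np.exp(1j * np.pi * np.arange(M) * np.sin(np.deg2rad(-35.0)))
X = (np.outer(a, s) + np.outer(ai, 2 * rng.standard_normal(snapshots))
     + 0.3 * (rng.standard_normal((M, snapshots))
              + 1j * rng.standard_normal((M, snapshots))))

R = X @ X.conj().T / snapshots
R += 1e-2 * np.trace(R).real / M * np.eye(M)   # diagonal loading for stability
Ri_a = np.linalg.solve(R, a)
w = Ri_a / (a.conj() @ Ri_a)                   # MV weights (unit gain toward a)
print("gain toward look direction:", abs(w.conj() @ a))    # ~1
print("gain toward interferer:   ", abs(w.conj() @ ai))    # << 1
```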
NASA Astrophysics Data System (ADS)
Scherstjanoi, M.; Kaplan, J. O.; Thürig, E.; Lischke, H.
2013-02-01
Models of vegetation dynamics that are designed for application at spatial scales larger than individual forest gaps suffer from several limitations. Typically, either a population average approximation is used that results in unrealistic tree allometry and forest stand structure, or models have a high computational demand because they need to simulate both a series of age-based cohorts and a number of replicate patches to account for stochastic gap-scale disturbances. The detail required by the latter method increases the number of calculations by two to three orders of magnitude compared to the less realistic population average approach. In an effort to increase the efficiency of dynamic vegetation models without sacrificing realism, and to explore patterns of spatial scaling in forests, we developed a new method for simulating stand-replacing disturbances that is both accurate and 10-50x faster than approaches that use replicate patches. The GAPPARD (approximating GAP model results with a Probabilistic Approach to account for stand Replacing Disturbances) method works by postprocessing the output of deterministic, undisturbed simulations of a cohort-based vegetation model, deriving the distribution of patch ages at any point in time on the basis of a disturbance probability. With this distribution, the expected value of any output variable can be calculated from the output values of the deterministic undisturbed run at the time corresponding to the patch age. To account for temporal changes in model forcing, e.g., as a result of climate change, GAPPARD performs a series of deterministic simulations and interpolates between the results in the postprocessing step. We integrated the GAPPARD method in the forest models LPJ-GUESS and TreeM-LPJ, and evaluated these in a series of simulations along an altitudinal transect of an inner-alpine valley. With GAPPARD applied to LPJ-GUESS, results were not significantly different from the output of the original LPJ-GUESS model using 100 replicate patches, but simulation time was reduced by approximately a factor of 10. Our new method is therefore highly suited to rapidly approximating LPJ-GUESS results, and it provides the opportunity for future studies over large spatial domains, allows easier parameterization of tree species, faster identification of areas with interesting simulation results, and comparisons with large-scale datasets and forest models.
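The core GAPPARD post-processing step can be sketched in a few lines: with a constant annual disturbance probability, patch ages are geometrically distributed, and any landscape-level output is a weighted average over the undisturbed run. The disturbance probability and biomass curve below are invented for illustration.

```python
# Sketch: GAPPARD-style expectation over the patch-age distribution, applied
# to one undisturbed simulation; p and V(age) are illustrative assumptions.
import numpy as np

p, max_age = 0.01, 600                      # annual disturbance probability
age = np.arange(max_age)
V = 250.0 * (1.0 - np.exp(-age / 80.0))     # undisturbed biomass vs. patch age

w = p * (1.0 - p) ** age                    # P(patch age = a), geometric
w /= w.sum()                                # renormalize truncated distribution
expected_biomass = np.sum(w * V)
print(f"landscape-mean biomass: {expected_biomass:.1f} (asymptote 250)")
```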
NASA Astrophysics Data System (ADS)
Zhong, XiaoXu; Liao, ShiJun
2018-01-01
Analytic approximations of the Von Kármán's plate equations in integral form for a circular plate under external uniform pressure of arbitrary magnitude are successfully obtained by means of the homotopy analysis method (HAM), an analytic approximation technique for highly nonlinear problems. Two HAM-based approaches are proposed, for either a given external uniform pressure Q or a given central deflection, respectively. Both of them are valid for uniform pressure of arbitrary magnitude by choosing proper values of the so-called convergence-control parameters c_1 and c_2 in the frame of the HAM. Besides, it is found that the HAM-based iteration approaches generally converge much faster than the interpolation iterative method. Furthermore, we prove that the interpolation iterative method is a special case of the first-order HAM iteration approach for a given external uniform pressure Q when c_1 = -θ and c_2 = -1, where θ denotes the interpolation iterative parameter. Therefore, according to the convergence theorem of Zheng and Zhou about the interpolation iterative method, the HAM-based approaches are valid for uniform pressure of arbitrary magnitude at least in the special case c_1 = -θ and c_2 = -1. In addition, we prove that the HAM approach for the Von Kármán's plate equations in differential form is just a special case of the HAM for the Von Kármán's plate equations in integral form mentioned in this paper. All of these illustrate the validity and great potential of the HAM for highly nonlinear problems, and its superiority over perturbation techniques.
Hotspot detection in pancreatic neuroendocrine tumors: density approximation by α-shape maps
NASA Astrophysics Data System (ADS)
Niazi, M. K. K.; Hartman, Douglas J.; Pantanowitz, Liron; Gurcan, Metin N.
2016-03-01
The grading of neuroendocrine tumors of the digestive system is dependent on accurate and reproducible assessment of the proliferation with the tumor, either by counting mitotic figures or counting Ki-67 positive nuclei. At the moment, most pathologists manually identify the hotspots, a practice which is tedious and irreproducible. To better help pathologists, we present an automatic method to detect all potential hotspots in neuroendocrine tumors of the digestive system. The method starts by segmenting Ki-67 positive nuclei by entropy based thresholding, followed by detection of centroids for all Ki-67 positive nuclei. Based on geodesic distance, approximated by the nuclei centroids, we compute two maps: an amoeba map and a weighted amoeba map. These maps are later combined to generate the heat map, the segmentation of which results in the hotspots. The method was trained on three and tested on nine whole slide images of neuroendocrine tumors. When evaluated by two expert pathologists, the method reached an accuracy of 92.6%. The current method does not discriminate between tumor, stromal and inflammatory nuclei. The results show that α-shape maps may represent how hotspots are perceived.
Interface conditions for domain decomposition with radical grid refinement
NASA Technical Reports Server (NTRS)
Scroggs, Jeffrey S.
1991-01-01
Interface conditions for coupling the domains in a physically motivated domain decomposition method are discussed. The domain decomposition is based on an asymptotic-induced method for the numerical solution of hyperbolic conservation laws with small viscosity. The method consists of multiple stages. The first stage is to obtain a first approximation using a first-order method, such as the Godunov scheme. Subsequent stages of the method involve solving internal-layer problems via domain decomposition. The method is derived and justified via singular perturbation techniques.
Adaptive EMG noise reduction in ECG signals using noise level approximation
NASA Astrophysics Data System (ADS)
Marouf, Mohamed; Saranovac, Lazar
2017-12-01
In this paper the use of noise level approximation for adaptive electromyogram (EMG) noise reduction in electrocardiogram (ECG) signals is introduced. To achieve adequate adaptiveness, a translation-invariant noise level approximation is employed. The approximation takes the form of a guiding signal extracted as an estimate of signal quality versus EMG noise. The noise reduction framework is based on a bank of low-pass filters, so adaptive noise reduction is achieved by selecting the appropriate filter with respect to the guiding signal, aiming at the best trade-off between the signal distortion caused by filtering and signal readability. For evaluation purposes, both real and artificial EMG noise are used. The tested ECG signals are from the MIT-BIH Arrhythmia Database, to which both real and artificial records of EMG noise are added in the evaluation process. Firstly, a comparison with state-of-the-art methods is conducted to verify the performance of the proposed approach in terms of noise cancellation while preserving the QRS complex waves. Additionally, the signal-to-noise ratio improvement after adaptive noise reduction is computed and presented for the proposed method. Finally, the impact of the adaptive noise reduction method on QRS complex detection is studied. The tested signals are delineated using a state-of-the-art method, and the QRS detection improvement for different SNRs is presented.
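In the spirit of the guided filter-bank selection described above, a minimal sketch might estimate the high-frequency noise level and choose a low-pass filter accordingly; the cutoffs, thresholds, and noise estimator are assumptions, not the paper's translation-invariant guiding signal.

```python
# Sketch: picking a low-pass filter from a bank according to an estimated
# EMG-noise level; all parameters are illustrative assumptions.
import numpy as np
from scipy import signal

fs = 360.0                                  # MIT-BIH sampling rate (Hz)
cutoffs = [40.0, 25.0, 15.0]                # filter bank, mildest to strongest
bank = [signal.butter(4, fc / (fs / 2), btype="low") for fc in cutoffs]

def denoise(ecg):
    # Crude noise-level estimate: power above 40 Hz relative to total power.
    f, pxx = signal.welch(ecg, fs=fs, nperseg=256)
    level = pxx[f > 40.0].sum() / pxx.sum()
    idx = 0 if level < 0.05 else (1 if level < 0.15 else 2)
    b, a = bank[idx]
    return signal.filtfilt(b, a, ecg), cutoffs[idx]

t = np.arange(0, 10, 1 / fs)
ecg = np.sin(2 * np.pi * 1.2 * t)           # toy stand-in for an ECG trace
noisy = ecg + 0.3 * np.random.default_rng(3).standard_normal(t.size)
clean, fc = denoise(noisy)
print("selected cutoff:", fc, "Hz")
```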
Radiative Transfer and Satellite Remote Sensing of Cirrus Clouds Using FIRE-2-IFO Data
NASA Technical Reports Server (NTRS)
2000-01-01
Under the support of the NASA grant, we have developed a new geometric-optics model (GOM2) for the calculation of the single-scattering and polarization properties of arbitrarily oriented hexagonal ice crystals. From comparisons with results computed by the finite difference time domain (FDTD) method, we show that the novel geometric-optics model can be applied to the computation of the extinction cross section and single-scattering albedo for ice crystals with size parameters along the minimum dimension as small as approximately 6. We demonstrate that the present model converges to the conventional ray tracing method for large size parameters and produces single-scattering results close to those computed by the FDTD method for size parameters along the minimum dimension smaller than approximately 20. We demonstrate that neither the conventional geometric optics method nor the Lorenz-Mie theory can be used to approximate the scattering, absorption, and polarization features of hexagonal ice crystals with size parameters from approximately 5 to 20. On satellite remote sensing algorithm development and validation, we have developed a numerical scheme to identify multilayer cirrus cloud systems using AVHRR data. We have applied this scheme to the satellite data collected over the FIRE-2-IFO area during nine overpasses on seven observation dates. Determination of the threshold values used in the detection scheme is based on statistical analyses of these satellite data.
Low rank alternating direction method of multipliers reconstruction for MR fingerprinting.
Assländer, Jakob; Cloos, Martijn A; Knoll, Florian; Sodickson, Daniel K; Hennig, Jürgen; Lattanzi, Riccardo
2018-01-01
The proposed reconstruction framework addresses the reconstruction accuracy, noise propagation and computation time for magnetic resonance fingerprinting. Based on a singular value decomposition of the signal evolution, magnetic resonance fingerprinting is formulated as a low rank (LR) inverse problem in which one image is reconstructed for each singular value under consideration. This LR approximation of the signal evolution reduces the computational burden by reducing the number of Fourier transformations. Also, the LR approximation improves the conditioning of the problem, which is further improved by extending the LR inverse problem to an augmented Lagrangian that is solved by the alternating direction method of multipliers. The root mean square error and the noise propagation are analyzed in simulations. For verification, in vivo examples are provided. The proposed LR alternating direction method of multipliers approach shows a reduced root mean square error compared to the original fingerprinting reconstruction, to a LR approximation alone and to an alternating direction method of multipliers approach without a LR approximation. Incorporating sensitivity encoding allows for further artifact reduction. The proposed reconstruction provides robust convergence, reduced computational burden and improved image quality compared to other magnetic resonance fingerprinting reconstruction approaches evaluated in this study. Magn Reson Med 79:83-96, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
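The low-rank idea can be sketched independently of the ADMM machinery: truncating the SVD of a fingerprinting dictionary leaves a small temporal subspace, so only a few singular-value images need to be reconstructed. The toy exponential-decay dictionary below is an assumption, not a real MRF dictionary.

```python
# Sketch: the low-rank (SVD truncation) step of the reconstruction.
import numpy as np

T, n_atoms = 500, 300                       # timepoints, dictionary entries
t = np.linspace(0.01, 5.0, T)[:, None]
decay = np.linspace(0.1, 3.0, n_atoms)[None, :]
D = np.exp(-t / decay)                      # dictionary: each column a signal

U, s, Vt = np.linalg.svd(D, full_matrices=False)
r = 5                                       # rank retained
Ur = U[:, :r]                               # temporal subspace basis

# Compress: a measured signal evolution y (length T) is represented by
# r coefficients, so only r "singular-value images" are reconstructed.
y = D[:, 123]
coeff = Ur.T @ y
rel_err = np.linalg.norm(Ur @ coeff - y) / np.linalg.norm(y)
print(f"rank-{r} representation error: {rel_err:.2e}")
```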
On the Daubechies-based wavelet differentiation matrix
NASA Technical Reports Server (NTRS)
Jameson, Leland
1993-01-01
The differentiation matrix for a Daubechies-based wavelet basis is constructed and superconvergence is proven. That is, it is proven that, under the assumption of periodic boundary conditions, the differentiation matrix is accurate of order 2M, even though the approximation subspace can represent exactly only polynomials up to degree M-1, where M is the number of vanishing moments of the associated wavelet. It is illustrated that Daubechies-based wavelet methods are equivalent to finite difference methods with grid refinement in regions of the domain where small-scale structure is present.
NASA Astrophysics Data System (ADS)
Kono, Naoyuki; Miki, Masahiro; Nakamura, Motoyuki; Ehara, Kazuya
2007-03-01
Phased array techniques are capable of the sensitive detection and precise sizing of flaws or cracks in components of nuclear power plants by using arbitrary focal beams with various depths, positions and angles. A quantitative investigation of these focal beams is essential for the optimization of array probes, especially for austenitic weld inspection, in order to improve the detectability, sizing accuracy, and signal-to-noise ratio using these beams. In the present work, focal beams generated by phased array probes are calculated based on the Fresnel-Kirchhoff diffraction integral (FKDI) method, and an approximation formula relating the actual focal depth to the optical focal depth is proposed as an extension of the theory for conventional spherically focusing probes. The validity of the approximation formula for the array probes is confirmed by comparison with simulation data using the FKDI method and with the experimental data.
Malik, Suheel Abdullah; Qureshi, Ijaz Mansoor; Amir, Muhammad; Malik, Aqdas Naveed; Haq, Ihsanul
2015-01-01
In this paper, a new heuristic scheme for the approximate solution of the generalized Burgers'-Fisher equation is proposed. The scheme is based on the hybridization of the Exp-function method with a nature-inspired algorithm. The given nonlinear partial differential equation (NPDE) is converted through substitution into a nonlinear ordinary differential equation (NODE). The travelling wave solution is approximated by the Exp-function method with unknown parameters. The unknown parameters are estimated by transforming the NODE into an equivalent global error minimization problem by using a fitness function. The popular genetic algorithm (GA) is used to solve the minimization problem and to obtain the unknown parameters. The proposed scheme is successfully implemented to solve the generalized Burgers'-Fisher equation. The comparison of numerical results with the exact solutions, and with the solutions obtained using some traditional methods, including the Adomian decomposition method (ADM), the homotopy perturbation method (HPM), and the optimal homotopy asymptotic method (OHAM), shows that the suggested scheme is fairly accurate and viable for solving such problems. PMID:25811858
Numerical Methods of Parameter Identification for Problems Arising in Elasticity.
1982-06-01
Theorem 2.21 remains essentially unchanged by the inclusion of this new term. We now turn to a concrete realization of the approximate identification... Modal (eigenfunction) state approximations were applied to a class of hyperbolic and parabolic equations, and also used in [7], where spline-based state...
Pixel-based meshfree modelling of skeletal muscles.
Chen, Jiun-Shyan; Basava, Ramya Rao; Zhang, Yantao; Csapo, Robert; Malis, Vadim; Sinha, Usha; Hodgson, John; Sinha, Shantanu
2016-01-01
This paper introduces the meshfree Reproducing Kernel Particle Method (RKPM) for 3D image-based modeling of skeletal muscles. This approach allows for the construction of a simulation model based on pixel data obtained from medical images. The material properties and muscle fiber direction obtained from Diffusion Tensor Imaging (DTI) are input at each pixel point. The reproducing kernel (RK) approximation allows a representation of material heterogeneity with smooth transitions. A multiphase, multichannel level-set based segmentation framework is adopted for individual muscle segmentation using Magnetic Resonance Images (MRI) and DTI. The application of the proposed methods to modeling the human lower leg is demonstrated.
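A one-dimensional sketch of the reproducing kernel approximation underlying RKPM is given below, with a linear basis and cubic B-spline kernel; the node layout and support size are assumptions, and the 3D image-based machinery of the paper is not reproduced.

```python
# Sketch: 1D reproducing kernel (RK) shape functions with a linear basis and
# a cubic B-spline kernel, verifying partition of unity and linear reproduction.
import numpy as np

def cubic_spline(z):
    z = np.abs(z)
    return np.where(z <= 0.5, 2/3 - 4*z**2 + 4*z**3,
           np.where(z <= 1.0, 4/3 - 4*z + 4*z**2 - (4/3)*z**3, 0.0))

nodes = np.linspace(0.0, 1.0, 11)
a = 0.25                                    # kernel support size (assumption)

def shape_functions(x):
    dx = x - nodes
    phi = cubic_spline(dx / a)
    H = np.vstack([np.ones_like(dx), dx])   # linear basis H(x - x_I) = [1, x-x_I]
    M = (H * phi) @ H.T                     # 2x2 moment matrix
    c = np.linalg.solve(M, np.array([1.0, 0.0]))   # M^{-1} H(0)
    return (c @ H) * phi                    # Psi_I(x)

x = 0.37
psi = shape_functions(x)
print("partition of unity:", psi.sum())             # ~1
print("linear reproduction:", psi @ nodes, "vs", x)  # ~x
```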
NASA Technical Reports Server (NTRS)
Quek, Kok How Francis
1990-01-01
A method of computing reliable Gaussian and mean curvature sign-map descriptors from the polynomial approximation of surfaces was demonstrated. Such descriptors which are invariant under perspective variation are suitable for hypothesis generation. A means for determining the pose of constructed geometric forms whose algebraic surface descriptors are nonlinear in terms of their orienting parameters was developed. This was done by means of linear functions which are capable of approximating nonlinear forms and determining their parameters. It was shown that biquadratic surfaces are suitable companion linear forms for cylindrical approximation and parameter estimation. The estimates provided the initial parametric approximations necessary for a nonlinear regression stage to fine tune the estimates by fitting the actual nonlinear form to the data. A hypothesis-based split-merge algorithm for extraction and pose determination of cylinders and planes which merge smoothly into other surfaces was developed. It was shown that all split-merge algorithms are hypothesis-based. A finite-state algorithm for the extraction of the boundaries of run-length regions was developed. The computation takes advantage of the run list topology and boundary direction constraints implicit in the run-length encoding.
Finding Dantzig Selectors with a Proximity Operator based Fixed-point Algorithm
2014-11-01
experiments showed that this method usually outperforms the method in [2] in terms of CPU time while producing solutions of comparable quality. The ... method proposed in [19]. To alleviate the difficulty caused by the subproblem without a closed-form solution, a linearized ADM was proposed for the ... a closed-form solution, but the β-related subproblem does not and is solved approximately by using the nonmonotone gradient method in [18].
New approximate orientation averaging of the water molecule interacting with the thermal neutron
DOE Office of Scientific and Technical Information (OSTI.GOV)
Markovic, M.I.; Minic, D.M.; Rakic, A.D.
1992-02-01
This paper reports on orientation averaging in the description of thermal neutron collisions with water molecules: the averaging is performed by an exact method (EOA_k) and by four approximate methods (two well known and two less known). Expressions for the microscopic scattering kernel are developed. The two well-known approximate orientation averaging methods are Krieger-Nelkin (K-N) and Koppel-Young (K-Y). The results obtained by one of the two proposed approximate orientation averaging methods agree best with the corresponding results obtained by EOA_k. The largest discrepancies between the EOA_k results and the results of the approximate methods are obtained using the well-known K-N approximate orientation averaging method.
NASA Astrophysics Data System (ADS)
Chew, J. V. L.; Sulaiman, J.
2017-09-01
Partial differential equations that describe nonlinear heat and mass transfer phenomena are difficult to solve. When an exact solution is hard to obtain, it is necessary to use a numerical procedure such as the finite difference method. A numerical method can be considered efficient if it gives an approximate solution within the specified error at the least computational cost. In this paper, the two-dimensional Porous Medium Equation (2D PME) is discretized by using the implicit finite difference scheme to construct the corresponding approximation equation, which yields a large and sparse nonlinear system. After linearizing the nonlinear system with the Newton method, this paper deals with the application of the Four-Point Newton-EGSOR (4NEGSOR) iterative method for solving the 2D PMEs. The efficiency of the 4NEGSOR iterative method is studied by solving three example problems. For comparison, the Newton-Gauss-Seidel (NGS) and the Newton-SOR (NSOR) iterative methods are also considered. The numerical findings show that the 4NEGSOR method is superior to the NGS and NSOR methods in terms of the number of iterations to reach converged solutions, the computation time, and the maximum absolute errors produced.
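A hedged 1-D toy of the Newton-plus-relaxation idea (not the 2-D PME or its four-point explicit-group variant): discretize a nonlinear boundary value problem u'' = u^3, linearize with Newton, and solve each Jacobian system with inner SOR sweeps.

```python
import numpy as np

n, h = 49, 1.0 / 50
u = np.linspace(0, 1, n + 2)[1:-1]            # initial guess (interior nodes)

def residual(u):
    """Central-difference residual of u'' = u^3 with u(0)=0, u(1)=1."""
    U = np.concatenate([[0.0], u, [1.0]])     # apply Dirichlet BCs
    return (U[:-2] - 2*U[1:-1] + U[2:]) / h**2 - u**3

def sor(A, b, x, omega=1.8, sweeps=200, tol=1e-12):
    """Plain SOR sweeps for A x = b (inner linear solver)."""
    m = len(b)
    for _ in range(sweeps):
        for i in range(m):
            s = A[i] @ x - A[i, i] * x[i]
            x[i] += omega * ((b[i] - s) / A[i, i] - x[i])
        if np.linalg.norm(A @ x - b) < tol:
            break
    return x

for k in range(20):                            # Newton outer iteration
    F = residual(u)
    if np.linalg.norm(F) < 1e-10:
        break
    # Jacobian: tridiagonal Laplacian plus the diagonal from d(u^3)/du
    J = (np.diag(-2*np.ones(n)) + np.diag(np.ones(n-1), 1)
         + np.diag(np.ones(n-1), -1)) / h**2 - np.diag(3*u**2)
    u += sor(J, -F, np.zeros(n))               # inner SOR solve of J*du = -F
print("Newton iterations:", k, "residual:", np.linalg.norm(residual(u)))
```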
A Real-Time Marker-Based Visual Sensor Based on a FPGA and a Soft Core Processor
Tayara, Hilal; Ham, Woonchul; Chong, Kil To
2016-01-01
This paper introduces a real-time marker-based visual sensor architecture for mobile robot localization and navigation. A hardware acceleration architecture for a post video processing system was implemented on a field-programmable gate array (FPGA). The pose calculation algorithm was implemented in a System on Chip (SoC) with an Altera Nios II soft-core processor. For every frame, single-pass image segmentation and Features from Accelerated Segment Test (FAST) corner detection were used for extracting the predefined markers with known geometries in the FPGA. The coplanar POSIT algorithm was implemented on the Nios II soft-core processor, supplied with floating-point hardware for accelerating floating-point operations. Trigonometric functions were approximated using Taylor series, and cubic approximation used Lagrange polynomials. An inverse square root method was implemented for approximating square root computations. Real-time results were achieved, and pixel streams were processed on the fly without any need to buffer the input frame. PMID:27983714
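A hedged sketch of two soft-core-friendly approximations of the kind mentioned (the exact polynomial degrees and the inverse-square-root variant used on the Nios II are not specified, so these are assumptions): a range-reduced Taylor sine and the classic bit-level inverse square root with one Newton refinement.

```python
import math
import struct

def sin_taylor(x):
    """Degree-7 Taylor approximation after range reduction to [-pi, pi);
    accurate near zero (a real implementation reduces further, e.g. to [-pi/4, pi/4])."""
    x = (x + math.pi) % (2 * math.pi) - math.pi
    x2 = x * x
    return x * (1 - x2/6 * (1 - x2/20 * (1 - x2/42)))   # Horner form of the series

def inv_sqrt(x):
    """Fast inverse square root: bit-trick initial guess + one Newton step."""
    i = struct.unpack('I', struct.pack('f', x))[0]
    i = 0x5f3759df - (i >> 1)
    y = struct.unpack('f', struct.pack('I', i))[0]
    return y * (1.5 - 0.5 * x * y * y)                  # Newton refinement

print(sin_taylor(1.0), math.sin(1.0))        # ~3e-6 absolute error at x = 1
print(inv_sqrt(2.0), 1 / math.sqrt(2.0))     # ~0.1% relative error
```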
A density difference based analysis of orbital-dependent exchange-correlation functionals
NASA Astrophysics Data System (ADS)
Grabowski, Ireneusz; Teale, Andrew M.; Fabiano, Eduardo; Śmiga, Szymon; Buksztel, Adam; Della Sala, Fabio
2014-03-01
We present a density difference based analysis for a range of orbital-dependent Kohn-Sham functionals. Results for atoms, some members of the neon isoelectronic series and small molecules are reported and compared with ab initio wave function calculations. Particular attention is paid to the quality of approximations to the exchange-only optimised effective potential (OEP) approach: we consider both the localised Hartree-Fock as well as the Krieger-Li-Iafrate methods. Analysis of density differences at the exchange-only level reveals the impact of the approximations on the resulting electronic densities. These differences are further quantified in terms of the ground state energies, frontier orbital energy differences and highest occupied orbital energies obtained. At the correlated level, an OEP approach based on a perturbative second-order correlation energy expression is shown to deliver results comparable with those from traditional wave function approaches, making it suitable for use as a benchmark against which to compare standard density functional approximations.
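As a toy illustration of a density-difference analysis, the sketch below (an assumption-laden stand-in: exact hydrogenic 1s density versus a deliberately mis-scaled trial density) quantifies the error as a radially integrated absolute difference; in the paper the reference densities come from ab initio wave function calculations.

```python
import numpy as np

r = np.linspace(1e-6, 20, 4000)                     # radial grid (a.u.)
rho_ref = np.exp(-2 * r) / np.pi                    # exact hydrogen 1s density
rho_app = 1.1**3 * np.exp(-2 * 1.1 * r) / np.pi     # normalized trial density, zeta = 1.1

# Integrated absolute density difference (zero only if the densities coincide)
delta = 4 * np.pi * r**2 * np.abs(rho_app - rho_ref)
print("integrated |delta rho| =", np.trapz(delta, r))
```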
The exact probability distribution of the rank product statistics for replicated experiments.
Eisinga, Rob; Breitling, Rainer; Heskes, Tom
2013-03-18
The rank product method is a widely accepted technique for detecting differentially regulated genes in replicated microarray experiments. To approximate the sampling distribution of the rank product statistic, the original publication proposed a permutation approach, whereas recently an alternative approximation based on the continuous gamma distribution was suggested. However, both approximations are imperfect for estimating small tail probabilities. In this paper we relate the rank product statistic to number theory and provide a derivation of its exact probability distribution and the true tail probabilities. Copyright © 2013 Federation of European Biochemical Societies. Published by Elsevier B.V. All rights reserved.
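A small sketch of the statistic and the permutation approximation the paper improves upon; gene counts, replicate counts, and data are synthetic assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n_genes, k = 1000, 4
data = rng.normal(size=(n_genes, k))
data[0] += 2.5                                     # one truly up-regulated gene

# Within-replicate descending ranks (1 = highest expression), then rank product
asc = data.argsort(axis=0).argsort(axis=0) + 1
ranks = n_genes + 1 - asc
rp = np.exp(np.log(ranks).mean(axis=1))            # geometric mean; small = consistent

# Permutation approximation of the null: independent rank permutations per replicate
B, null = 2000, []
for _ in range(B):
    perm = np.column_stack([rng.permutation(n_genes) + 1 for _ in range(k)])
    null.append(np.exp(np.log(perm).mean(axis=1)))
null = np.concatenate(null)
p = (null <= rp[0]).mean()                         # lower-tail p-value for gene 0
print("RP =", rp[0], "permutation p =", p)
```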
NASA Astrophysics Data System (ADS)
Garabito, German; Cruz, João Carlos Ribeiro; Oliva, Pedro Andrés Chira; Söllner, Walter
2017-01-01
The Common Reflection Surface stack is a robust method for simulating zero-offset and common-offset sections with high accuracy from multi-coverage seismic data. For simulating common-offset sections, the Common-Reflection-Surface stack method uses a hyperbolic traveltime approximation that depends on five kinematic parameters for each selected sample point of the common-offset section to be simulated. The main challenge of this method is to find a computationally efficient data-driven optimization strategy for accurately determining the five kinematic stacking parameters on which each sample of the stacked common-offset section depends. Several authors have applied multi-step strategies to obtain the optimal parameters by combining different pre-stack data configurations. Recently, other authors used one-step data-driven strategies based on a global optimization for estimating simultaneously the five parameters from multi-midpoint and multi-offset gathers. In order to increase the computational efficiency of the global optimization process, we use in this paper a reduced form of the Common-Reflection-Surface traveltime approximation that depends on only four parameters, the so-called Common Diffraction Surface traveltime approximation. By analyzing the convergence of both objective functions and the data enhancement effect after applying the two traveltime approximations to the Marmousi synthetic dataset and a real land dataset, we conclude that the Common-Diffraction-Surface approximation is more efficient within certain aperture limits and preserves at the same time a high image accuracy. The preserved image quality is also observed in a direct comparison after applying both approximations for simulating common-offset sections on noisy pre-stack data.
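The parameter searches described above score each trial parameter set by stacking the pre-stack data along the corresponding traveltime surface and measuring coherence; below is a hedged sketch of such a semblance objective with a placeholder moveout (the actual CRS/CDS traveltime expressions carry the five or four kinematic parameters).

```python
import numpy as np

def semblance(traces, times, dt, win=5):
    """Semblance of traces (ntraces x nsamples) along trial traveltimes (s); 1.0 = coherent."""
    idx = np.round(times / dt).astype(int)
    n, w = len(traces), np.arange(-win, win + 1)
    gather = np.stack([tr[i + w] for tr, i in zip(traces, idx)])  # aligned windows
    num = (gather.sum(axis=0) ** 2).sum()
    den = n * (gather ** 2).sum()
    return num / den if den > 0 else 0.0

# Toy test: identical wavelets shifted by a known moveout are (nearly) perfectly coherent.
dt, ns = 0.004, 400
t = np.arange(ns) * dt
wavelet = lambda t0: np.exp(-((t - t0) / 0.02) ** 2)
moveout = 0.5 + 0.001 * np.arange(20)             # assumed trial traveltime surface (s)
traces = np.stack([wavelet(t0) for t0 in moveout])
print(semblance(traces, moveout, dt))             # close to 1.0 on the correct surface
```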
Wavelet-based hierarchical surface approximation from height fields
Sang-Mook Lee; A. Lynn Abbott; Daniel L. Schmoldt
2004-01-01
This paper presents a novel hierarchical approach to triangular mesh generation from height fields. A wavelet-based multiresolution analysis technique is used to estimate local shape information at different levels of resolution. Using predefined templates at the coarsest level, the method constructs an initial triangulation in which underlying object shapes are well...
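A hedged one-step sketch of the underlying idea: a 2-D Haar analysis of the height field whose detail-coefficient energy indicates where the triangulation needs refinement; thresholds and the synthetic terrain are assumptions.

```python
import numpy as np

def haar2d_step(z):
    """One 2-D Haar analysis step: approximation + horizontal/vertical/diagonal details."""
    a  = (z[0::2, 0::2] + z[0::2, 1::2] + z[1::2, 0::2] + z[1::2, 1::2]) / 4
    dh = (z[0::2, 0::2] - z[0::2, 1::2] + z[1::2, 0::2] - z[1::2, 1::2]) / 4
    dv = (z[0::2, 0::2] + z[0::2, 1::2] - z[1::2, 0::2] - z[1::2, 1::2]) / 4
    dd = (z[0::2, 0::2] - z[0::2, 1::2] - z[1::2, 0::2] + z[1::2, 1::2]) / 4
    return a, dh, dv, dd

x = np.linspace(-2, 2, 64)
X, Y = np.meshgrid(x, x)
height = np.exp(-(X**2 + Y**2))                  # smooth synthetic bump terrain
for level in range(3):                           # successively coarser resolutions
    height, dh, dv, dd = haar2d_step(height)
    detail = np.sqrt(dh**2 + dv**2 + dd**2)      # local shape information
    print("level", level, "cells to refine:", int((detail > 1e-3).sum()))
```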
Parent Reactions to a School-Based Body Mass Index Screening Program
ERIC Educational Resources Information Center
Johnson, Suzanne Bennett; Pilkington, Lorri L.; Lamp, Camilla; He, Jianghua; Deeb, Larry C.
2009-01-01
Background: This study assessed parent reactions to school-based body mass index (BMI) screening. Methods: After a K-8 BMI screening program, parents were sent a letter detailing their child's BMI results. Approximately 50 parents were randomly selected for interview from each of 4 child weight-classification groups (overweight, at risk of…
ERIC Educational Resources Information Center
Joice, Sara; Johnston, Marie; Bonetti, Debbie; Morrison, Val; MacWalter, Ron
2012-01-01
Objective: To report stroke survivors' experiences and perceived usefulness of an effective self-help workbook-based intervention. Design: A cross-sectional study involving the intervention group of an earlier randomized controlled trial. Setting: At the participants' homes approximately seven weeks post-hospital discharge. Method: Following the…
Thermionic Properties of Carbon Based Nanomaterials Produced by Microhollow Cathode PECVD
NASA Technical Reports Server (NTRS)
Haase, John R.; Wolinksy, Jason J.; Bailey, Paul S.; George, Jeffrey A.; Go, David B.
2015-01-01
Thermionic emission is the process in which materials at sufficiently high temperature spontaneously emit electrons. This process occurs when electrons in a material gain sufficient thermal energy to overcome the material's potential barrier, referred to as the work function. For most bulk materials, very high temperatures (greater than 1500 K) are needed to produce appreciable emission. Carbon-based nanomaterials have shown significant promise as emission materials because of their low work functions, nanoscale geometry, and negative electron affinity. One method of producing these materials is the process known as microhollow cathode PECVD. In a microhollow cathode plasma, electrons oscillate at very high energies through the Pendel effect. These high-energy electrons create numerous radical species, and the technique has been shown to be an effective method of growing carbon-based nanomaterials. In this work, we explore the thermionic emission properties of carbon-based nanomaterials produced by microhollow cathode PECVD under a variety of synthesis conditions. Initial studies demonstrate measurable current at low temperatures (approximately 800 K) and low work functions (approximately 3.3 eV) for these materials.
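For orientation, the Richardson-Dushman law connects the quoted numbers; with the bare Richardson constant this is an order-of-magnitude sketch only (real emitters differ by material-dependent factors, and nanoscale geometry changes the picture).

```python
import math

# Richardson-Dushman law: J = A0 * T^2 * exp(-phi / (k*T))
A0 = 1.20173e6        # bare Richardson constant, A m^-2 K^-2
k  = 8.617333e-5      # Boltzmann constant, eV/K
T, phi = 800.0, 3.3   # approximate values quoted in the abstract

J = A0 * T**2 * math.exp(-phi / (k * T))
print(f"J = {J:.3e} A/m^2")   # ~1e-9 A/m^2: tiny for a bulk emitter at 800 K,
                              # which is why measurable current from these
                              # low-work-function nanomaterials is notable
```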
Computational Modeling of Proteins based on Cellular Automata: A Method of HP Folding Approximation.
Madain, Alia; Abu Dalhoum, Abdel Latif; Sleit, Azzam
2018-06-01
The design of a protein folding approximation algorithm is not straightforward even when a simplified model is used. The folding problem is a combinatorial problem, where approximation and heuristic algorithms are usually used to find near-optimal folds of protein primary structures. Approximation algorithms provide guarantees on the distance to the optimal solution. The folding approximation approach proposed here depends on two-dimensional cellular automata to fold proteins represented in a well-studied simplified model called the hydrophobic-hydrophilic (HP) model. Cellular automata are discrete computational models that rely on local rules to produce some overall global behavior. One-third and one-fourth approximation algorithms choose a subset of the hydrophobic amino acids to form H-H contacts. Those algorithms start by finding a point at which to fold the protein sequence into two sides, where one side ignores H's at even positions and the other side ignores H's at odd positions. In addition, blocks or groups of amino acids fold the same way according to a predefined normal form. We intend to improve approximation algorithms by considering all hydrophobic amino acids and folding based on the local neighborhood instead of using normal forms. The CA does not assume a fixed folding point. The proposed approach guarantees a one-half approximation minus the H-H endpoints. This guaranteed lower bound applies to short sequences only; it is proved by showing that, for all short sequences, the core and the folds of the protein have two identical sides.
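A small sketch of the objective such folding algorithms maximize: the number of H-H contacts between non-consecutive residues on the 2-D square lattice. The sequence and fold below are arbitrary assumptions.

```python
def hh_contacts(sequence, coords):
    """sequence: string of 'H'/'P'; coords: list of (x, y) lattice positions."""
    assert len(set(coords)) == len(coords), "fold must be self-avoiding"
    contacts = 0
    for i in range(len(sequence)):
        for j in range(i + 2, len(sequence)):          # skip chain neighbours
            if sequence[i] == sequence[j] == 'H':
                dx = abs(coords[i][0] - coords[j][0])
                dy = abs(coords[i][1] - coords[j][1])
                if dx + dy == 1:                       # lattice adjacency
                    contacts += 1
    return contacts

seq = "HPHPPHHPHH"
fold = [(0,0), (1,0), (1,1), (0,1), (0,2), (1,2), (2,2), (2,1), (2,0), (3,0)]
print("H-H contacts:", hh_contacts(seq, fold))
```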
NASA Astrophysics Data System (ADS)
Guo, Yang; Becker, Ute; Neese, Frank
2018-03-01
Local correlation theories have been developed in two main flavors: (1) "direct" local correlation methods apply local approximations to the canonical equations, and (2) fragment-based methods reconstruct the correlation energy from a series of smaller calculations on subsystems. The present work serves two purposes. First, we investigate the relative efficiencies of the two approaches using the domain-based local pair natural orbital (DLPNO) approach as the "direct" method and the cluster-in-molecule (CIM) approach as the fragment-based method. Both approaches are applied in conjunction with second-order many-body perturbation theory (MP2) as well as coupled-cluster theory with single, double, and perturbative triple excitations [CCSD(T)]. Second, we have investigated the possible merits of combining the two approaches by performing CIM calculations with DLPNO methods serving as the method of choice for the subsystem calculations. Our cluster-in-molecule approach is closely related to, but deviates slightly from, approaches in the literature, since we have avoided real-space cutoffs. Moreover, the distant pair correlations neglected in the previous CIM approach are treated approximately. Six very large molecules (503-2380 atoms) were studied. At both the MP2 and CCSD(T) levels of theory, the CIM and DLPNO methods show similar efficiency; however, DLPNO methods are more accurate for 3-dimensional systems. While we have found only little incentive for the combination of CIM with DLPNO-MP2, the situation is different for CIM-DLPNO-CCSD(T). This combination is attractive because (1) CIM offers better parallelization opportunities; (2) the methodology is less memory intensive than the genuine DLPNO-CCSD(T) method and hence allows for large calculations on more modest hardware; and (3) the methodology is applicable and efficient in the frequently met cases where the largest subsystem calculation is too large for the canonical CCSD(T) method.
Quantification of ETS exposure in hospitality workers who have never smoked
2010-01-01
Background Environmental Tobacco Smoke (ETS) was classified as human carcinogen (K1) by the German Research Council in 1998. According to epidemiological studies, the relative risk especially for lung cancer might be twice as high in persons who have never smoked but who are in the highest exposure category, for example hospitality workers. In order to implement these results in the German regulations on occupational illnesses, a valid method is needed to retrospectively assess the cumulative ETS exposure in the hospitality environment. Methods A literature-based review was carried out to locate a method that can be used for the German hospitality sector. Studies assessing ETS exposure using biological markers (for example urinary cotinine, DNA adducts) or questionnaires were excluded. Biological markers are not considered relevant as they assess exposure only over the last hours, weeks or months. Self-reported exposure based on questionnaires also does not seem adequate for medico-legal purposes. Therefore, retrospective exposure assessment should be based on mathematical models to approximate past exposure. Results For this purpose a validated model developed by Repace and Lowrey was considered appropriate. It offers the possibility of retrospectively assessing exposure with existing parameters (such as environmental dimensions, average number of smokers, ventilation characteristics and duration of exposure). The relative risk of lung cancer can then be estimated based on the individual cumulative exposure of the worker. Conclusion In conclusion, having adapted it to the German hospitality sector, an existing mathematical model appears to be capable of approximating the cumulative exposure. However, the level of uncertainty of these approximations has to be taken into account, especially for diseases with a long latency period such as lung cancer. PMID:20704719
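A heavily hedged sketch of a Repace-Lowrey-style well-mixed mass-balance estimate; every number below (emission rate, smoking rate, room size, ventilation) is an illustrative assumption rather than a calibrated constant of the published model.

```python
# Steady state: particle sources (mg/h) balanced by ventilation removal (ach * volume)
emission_rate = 14.0      # mg respirable particles per cigarette (assumed)
cig_per_hour  = 2.0       # cigarettes per smoker per hour (assumed)
n_smokers     = 5         # average number of active smokers (assumed)
volume_m3     = 300.0     # room volume (assumed)
ach           = 1.5       # air changes per hour, ventilation (assumed)

conc_mg_m3 = emission_rate * cig_per_hour * n_smokers / (ach * volume_m3)

# Cumulative exposure index over a working life in the hospitality sector
years, hours_per_week = 20, 40
cumulative = conc_mg_m3 * hours_per_week * 52 * years   # mg.h/m^3
print(f"steady-state concentration: {conc_mg_m3*1000:.0f} ug/m^3, "
      f"cumulative exposure: {cumulative:.0f} mg.h/m^3")
```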
An RBF-based compression method for image-based relighting.
Leung, Chi-Sing; Wong, Tien-Tsin; Lam, Ping-Man; Choy, Kwok-Hung
2006-04-01
In image-based relighting, a pixel is associated with a number of sampled radiance values. This paper presents a two-level compression method. In the first level, the plenoptic property of a pixel is approximated by a spherical radial basis function (SRBF) network. That means that the spherical plenoptic function of each pixel is represented by a number of SRBF weights. In the second level, we apply a wavelet-based method to compress these SRBF weights. To reduce the visual artifact due to quantization noise, we develop a constrained method for estimating the SRBF weights. Our proposed approach is superior to JPEG, JPEG2000, and MPEG. Compared with the spherical harmonics approach, our approach has a lower complexity, while the visual quality is comparable. The real-time rendering method for our SRBF representation is also discussed.
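A hedged sketch of the first compression level: fit a pixel's radiance sampled over light directions with a small spherical-Gaussian RBF network by plain least squares (the paper uses a constrained estimate to suppress quantization artifacts); kernel width, centre count, and the toy radiance lobe are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_dirs(n):
    """Uniformly distributed unit vectors on the sphere."""
    v = rng.normal(size=(n, 3))
    return v / np.linalg.norm(v, axis=1, keepdims=True)

dirs    = random_dirs(400)                   # sampled light directions
centres = random_dirs(16)                    # SRBF centres on the sphere
lam     = 20.0                               # kernel sharpness (assumed)

radiance = np.maximum(dirs @ np.array([0, 0, 1.0]), 0) ** 8   # toy specular lobe

Phi = np.exp(lam * (dirs @ centres.T - 1))   # spherical Gaussian: exp(lam*(cos(angle)-1))
w, *_ = np.linalg.lstsq(Phi, radiance, rcond=None)

recon = Phi @ w
print("RMS fit error:", np.sqrt(np.mean((recon - radiance) ** 2)))
# Each pixel now stores 16 weights instead of 400 samples; the second level would
# further compress the per-pixel weight maps with a wavelet transform.
```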
The Evolution and Discharge of Electric Fields within a Thunderstorm
NASA Astrophysics Data System (ADS)
Hager, William W.; Nisbet, John S.; Kasha, John R.
1989-05-01
A 3-dimensional electrical model for a thunderstorm is developed and finite difference approximations to the model are analyzed. If the spatial derivatives are approximated by a method akin to the box scheme and the temporal derivative is approximated by either a backward difference or the Crank-Nicolson scheme, we show that the resulting discretization is unconditionally stable. The forward difference approximation to the time derivative is stable when the time step is sufficiently small relative to the ratio between the permittivity and the conductivity. Max-norm error estimates for the discrete approximations are established. To handle the propagation of lightning, special numerical techniques are devised based on the Inverse Matrix Modification Formula and Cholesky updates. Numerical comparisons between the model and theoretical results of Wilson and Holzer-Saxon are presented. We also apply our model to a storm observed at the Kennedy Space Center on July 11, 1978.
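The stability statement can be seen on a single cell: charge relaxation dE/dt = -(sigma/eps)*E integrated with forward and backward differences, where the forward step is stable only for dt < 2*eps/sigma. The values below are assumed.

```python
eps   = 8.854e-12          # permittivity, F/m
sigma = 1e-13              # conductivity, S/m  ->  relaxation time eps/sigma ~ 88.5 s
tau   = eps / sigma

for dt in (0.5 * tau, 2.5 * tau):            # below and above the forward limit 2*tau
    Ef = Eb = 1.0
    for _ in range(200):
        Ef = Ef * (1 - dt / tau)             # forward difference: blows up for dt > 2*tau
        Eb = Eb / (1 + dt / tau)             # backward difference: decays for any dt
    print(f"dt/tau = {dt/tau:.1f}  forward: {Ef:.2e}  backward: {Eb:.2e}")
```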
MAGGIO, R; SIEKEVITZ, P; PALADE, G E
1963-08-01
This article describes a method for the isolation of nuclei from guinea pig liver. It involves the homogenization of the tissue in 0.88 M sucrose-1.5 mM CaCl(2) followed by centrifugation in a discontinuous density gradient in which the upper phase is the homogenate and the lower phase is 2.2 M sucrose-0.5 mM CaCl(2). Based on DNA recovery, the isolated fraction contains 25 to 30 per cent of the nuclei of the original homogenate. Electron microscopical observations showed that approximately 88 per cent of the isolated nuclei come from liver cells (the rest from von Kupffer cells and leucocytes) and that approximately 90 per cent of the nuclei appear intact, with well preserved nucleoli, nucleoplasm, nuclear envelope, and pores. Cytoplasmic contamination is minimal and consists primarily of the nuclear envelope and its attached ribosomes. The nuclear fraction consists of approximately 22.3 per cent DNA, approximately 4.7 per cent RNA, and approximately 73 per cent protein, the DNA/RNA ratio being 4.7. Data on RNA extractability by phosphate and salt and on the base composition of total nuclear RNA are included.
NASA Astrophysics Data System (ADS)
Karyagin, Stanislav V.
2001-03-01
The hosts and candidate nuclei (mass approximately 46-243, transition energy approximately 1-200 keV, decay times 10^-7 - 10^2 s) for gamma-laser (GL) realization are surveyed across the Mendeleev table. The choice of active media (candidate nuclei and hosts) for a GL is based on the joint theory of gamma generation and the radiation-heat regime, which accounts for a large complex of hindrances against a GL and thus discards many tentative candidates. Candidate nuclei are screened by analyzing data banks of nuclear transitions. The chosen candidates (approximately 20) could be used by means of the author's method SPTEN (Soft Prompt Transplantation of Excited Nuclei). The discarded tentative nuclei (approximately 80) with life-times 10^-6 - 10^2 s are presented as well. All analyzed long-lived (approximately 0.5 - 10^2 s) isomers turn out to be unfit for a GL without the use of a very strong multi-wave Borrmann effect, even under the assumption of natural linewidth. The application of the revealed candidates in two different gamma-laser categories (residential and non-residential) is discussed.
Synthesis of a controller for stabilizing the motion of a rigid body about a fixed point
NASA Astrophysics Data System (ADS)
Zabolotnov, Yu. M.; Lobanov, A. A.
2017-05-01
A method for the approximate design of an optimal controller for stabilizing the motion of a rigid body about a fixed point is considered. It is assumed that the rigid body motion is close to the motion in the classical Lagrange case. The method is based on the combined use of the Bellman dynamic programming principle and the averaging method. The latter is used to solve the Hamilton-Jacobi-Bellman equation approximately, which permits synthesizing the controller. The proposed method for controller design can be used in many problems close to the problem of motion of the Lagrange top (the motion of a rigid body in the atmosphere, the motion of a rigid body fastened to a cable during deployment of an orbital cable system, etc.).
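The sketch below is not the paper's averaging-based synthesis; it only illustrates the Bellman route it starts from: for a linearized, time-invariant model with quadratic cost, the Hamilton-Jacobi-Bellman equation reduces to the algebraic Riccati equation, solved here for an assumed double-integrator stabilization channel.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Assumed linearized channel: x = [angle error, rate], u = control torque
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
Q = np.diag([10.0, 1.0])     # state weights (assumed)
R = np.array([[1.0]])        # control weight (assumed)

P = solve_continuous_are(A, B, Q, R)       # solves the algebraic Riccati equation
K = np.linalg.solve(R, B.T @ P)            # optimal feedback gain, u = -K x
print("gain:", K, "closed-loop eigenvalues:", np.linalg.eigvals(A - B @ K))
```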
Efficient Boundary Extraction of BSP Solids Based on Clipping Operations.
Wang, Charlie C L; Manocha, Dinesh
2013-01-01
We present an efficient algorithm to extract the manifold surface that approximates the boundary of a solid represented by a Binary Space Partition (BSP) tree. Our polygonization algorithm repeatedly performs clipping operations on volumetric cells that correspond to a spatial convex partition and computes the boundary by traversing the connected cells. We use point-based representations along with finite-precision arithmetic to improve the efficiency and generate the B-rep approximation of a BSP solid. The core of our polygonization method is a novel clipping algorithm that uses a set of logical operations to make it resistant to degeneracies resulting from limited precision of floating-point arithmetic. The overall BSP to B-rep conversion algorithm can accurately generate boundaries with sharp and small features, and is faster than prior methods. At the end of this paper, we use this algorithm for a few geometric processing applications including Boolean operations, model repair, and mesh reconstruction.
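A hedged sketch of the clipping primitive at the core of such polygonization: Sutherland-Hodgman-style clipping of a convex polygon against a half-space with an epsilon tolerance; the paper's point-based representation and logical-operation scheme are more elaborate than this.

```python
EPS = 1e-9

def clip_polygon(poly, n, d):
    """Keep the part of convex polygon `poly` (3-D points) with n.p + d <= 0."""
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    out = []
    for i, p in enumerate(poly):
        q = poly[(i + 1) % len(poly)]
        dp, dq = dot(n, p) + d, dot(n, q) + d
        if dp <= EPS:                                    # vertex inside (or on plane)
            out.append(p)
        if (dp < -EPS) != (dq < -EPS) and abs(dp - dq) > EPS:
            t = dp / (dp - dq)                           # edge-plane intersection
            out.append(tuple(a + t * (b - a) for a, b in zip(p, q)))
    return out

square = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)]
print(clip_polygon(square, (1, 0, 0), -0.5))             # keep the x <= 0.5 half
```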
Dam, Jan S; Yavari, Nazila; Sørensen, Søren; Andersson-Engels, Stefan
2005-07-10
We present a fast and accurate method for real-time determination of the absorption coefficient, the scattering coefficient, and the anisotropy factor of thin turbid samples by using simple continuous-wave noncoherent light sources. The three optical properties are extracted from recordings of angularly resolved transmittance in addition to spatially resolved diffuse reflectance and transmittance. The applied multivariate calibration and prediction techniques are based on multiple polynomial regression in combination with a Newton-Raphson algorithm. The numerical test results based on Monte Carlo simulations showed mean prediction errors of approximately 0.5% for all three optical properties within ranges typical for biological media. Preliminary experimental results are also presented yielding errors of approximately 5%. Thus the presented methods show a substantial potential for simultaneous absorption and scattering characterization of turbid media.
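A hedged sketch of the prediction step: a multivariate Newton-Raphson inverting a polynomial forward model from optical properties to measurements. The quadratic map below is an assumed stand-in for the paper's Monte Carlo-calibrated regression model.

```python
import numpy as np

def forward(p):
    """Assumed polynomial forward model: (mu_a, mu_s) -> two measured quantities."""
    mua, mus = p
    m1 = 0.8 - 2.0*mua + 0.05*mus + 1.5*mua**2 - 0.01*mua*mus
    m2 = 0.1 + 0.5*mua + 0.02*mus - 0.0004*mus**2
    return np.array([m1, m2])

def jacobian(p, h=1e-6):
    """Central-difference Jacobian of the forward model."""
    J = np.empty((2, 2))
    for j in range(2):
        dp = np.zeros(2); dp[j] = h
        J[:, j] = (forward(p + dp) - forward(p - dp)) / (2*h)
    return J

def invert(measured, p0=np.array([0.1, 10.0]), tol=1e-10):
    p = p0.copy()
    for _ in range(50):
        r = forward(p) - measured
        if np.linalg.norm(r) < tol:
            break
        p -= np.linalg.solve(jacobian(p), r)   # Newton-Raphson update
    return p

truth = np.array([0.05, 15.0])                 # assumed mu_a, mu_s (1/mm)
print(invert(forward(truth)))                  # recovers [0.05, 15.0]
```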
Shahbazi, Mohammad; Saranlı, Uluç; Babuška, Robert; Lopes, Gabriel A D
2016-12-05
This paper introduces approximate time-domain solutions to the otherwise non-integrable double-stance dynamics of the 'bipedal' spring-loaded inverted pendulum (B-SLIP) in the presence of non-negligible damping. We first introduce an auxiliary system whose behavior under certain conditions is approximately equivalent to the B-SLIP in double-stance. Then, we derive approximate solutions to the dynamics of the new system following two different methods: (i) updated-momentum approach that can deal with both the lossy and lossless B-SLIP models, and (ii) perturbation-based approach following which we only derive a solution to the lossless case. The prediction performance of each method is characterized via a comprehensive numerical analysis. The derived representations are computationally very efficient compared to numerical integrations, and, hence, are suitable for online planning, increasing the autonomy of walking robots. Two application examples of walking gait control are presented. The proposed solutions can serve as instrumental tools in various fields such as control in legged robotics and human motion understanding in biomechanics.
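For context, a minimal numerical baseline of the kind such approximations are benchmarked against: direct integration of damped single-leg SLIP stance dynamics in polar coordinates about the foot (theta measured from the vertical). All parameter values and the touchdown state are assumptions, and the second leg of the B-SLIP double-stance phase is omitted for brevity.

```python
import numpy as np
from scipy.integrate import solve_ivp

m, g, k, d, l0 = 80.0, 9.81, 15000.0, 50.0, 1.0   # mass, gravity, spring, damper, rest length

def stance(t, y):
    """Damped SLIP stance: y = [r, rdot, theta, thetadot], foot at the origin."""
    r, rdot, th, thdot = y
    rddot  = r*thdot**2 - g*np.cos(th) + (k*(l0 - r) - d*rdot)/m
    thddot = (g*np.sin(th) - 2*rdot*thdot)/r
    return [rdot, rddot, thdot, thddot]

y0 = [0.99*l0, -0.3, np.deg2rad(-15), 2.0]        # assumed touchdown state
sol = solve_ivp(stance, (0.0, 0.3), y0, rtol=1e-8, max_step=1e-3)
print("final leg length:", sol.y[0, -1], " final angle (deg):", np.rad2deg(sol.y[2, -1]))
```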
Barros, Wilson; Gochberg, Daniel F.; Gore, John C.
2009-01-01
The description of the nuclear magnetic resonance magnetization dynamics in the presence of long-range dipolar interactions, which is based upon approximate solutions of Bloch–Torrey equations including the effect of a distant dipolar field, has been revisited. New experiments show that approximate analytic solutions have a broader regime of validity as well as dependencies on pulse-sequence parameters that seem to have been overlooked. In order to explain these experimental results, we developed a new method consisting of calculating the magnetization via an iterative formalism where both diffusion and distant dipolar field contributions are treated as integral operators incorporated into the Bloch–Torrey equations. The solution can be organized as a perturbative series, whereby access to higher order terms allows one to set better boundaries on validity regimes for analytic first-order approximations. Finally, the method legitimizes the use of simple analytic first-order approximations under less demanding experimental conditions, it predicts new pulse-sequence parameter dependencies for the range of validity, and clarifies weak points in previous calculations. PMID:19425789
ERIC Educational Resources Information Center
Raykov, Tenko; Little, Todd D.
1999-01-01
Describes a method for evaluating results of Procrustean rotation to a target factor pattern matrix in exploratory factor analysis. The approach, based on the bootstrap method, yields empirical approximations of the sampling distributions of: (1) differences between target elements and rotated factor pattern matrices; and (2) the overall…
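A hedged sketch of the bootstrap logic: resample cases, recompute a pattern matrix (a principal-axis stand-in for the EFA step is assumed here), rotate it to the target with orthogonal Procrustes, and collect element-wise differences whose empirical spread approximates the sampling distribution.

```python
import numpy as np
from scipy.linalg import orthogonal_procrustes

rng = np.random.default_rng(0)
n, p, k = 200, 6, 2
X = rng.normal(size=(n, k)) @ rng.normal(size=(k, p)) + 0.5*rng.normal(size=(n, p))

def pattern(X):
    """Stand-in 'factor pattern': first k principal axes scaled by root eigenvalues."""
    vals, vecs = np.linalg.eigh(np.cov(X, rowvar=False))
    return vecs[:, -k:] * np.sqrt(vals[-k:])

target = pattern(X)                               # target pattern matrix
diffs = []
for _ in range(500):                              # bootstrap replications
    Xb = X[rng.integers(n, size=n)]               # resample cases with replacement
    Lb = pattern(Xb)
    R, _ = orthogonal_procrustes(Lb, target)      # rotate bootstrap pattern to target
    diffs.append(target - Lb @ R)
diffs = np.array(diffs)
print("bootstrap SE of each loading difference:\n", diffs.std(axis=0).round(3))
```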
ERIC Educational Resources Information Center
Thombs, Dennis L.; Olds, R. Scott; Osborn, Cynthia J.; Casseday, Sarah; Glavin, Kevin; Berkowitz, Alan D.
2007-01-01
Objective: The authors tested a prototype intervention designed to deter alcohol use in residence halls. Participants: Approximately 384 freshmen participated in the study over a 2-year period. Methods: The authors devised a feedback method that assessed residents' blood alcohol concentration (BAC) at night and allowed the readings to be retrieved…
Policy Gradient Adaptive Dynamic Programming for Data-Based Optimal Control.
Luo, Biao; Liu, Derong; Wu, Huai-Ning; Wang, Ding; Lewis, Frank L
2017-10-01
The model-free optimal control problem of general discrete-time nonlinear systems is considered in this paper, and a data-based policy gradient adaptive dynamic programming (PGADP) algorithm is developed to design an adaptive optimal controller. By using offline and online data rather than a mathematical system model, the PGADP algorithm improves the control policy with a gradient descent scheme. The convergence of the PGADP algorithm is proved by demonstrating that the constructed Q-function sequence converges to the optimal Q-function. Based on the PGADP algorithm, the adaptive control method is developed with an actor-critic structure and the method of weighted residuals. Its convergence properties are analyzed, where the approximate Q-function converges to its optimum. Computer simulation results demonstrate the effectiveness of the PGADP-based adaptive control method.
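A toy sketch of the two PGADP ingredients on an assumed 2-state MDP: a Q-function critic updated from sampled transition data, and a softmax policy improved by an advantage-weighted gradient step. The MDP, step sizes, and the tabular representation are all illustrative; the paper works with general nonlinear systems and function approximators.

```python
import numpy as np

rng = np.random.default_rng(0)
P = np.array([[[0.9, 0.1], [0.2, 0.8]],          # transition kernel P[s, a, s']
              [[0.7, 0.3], [0.1, 0.9]]])
Rw = np.array([[1.0, 0.0], [0.0, 2.0]])          # reward R[s, a]
gamma, alpha, beta = 0.9, 0.1, 0.5
Q = np.zeros((2, 2))                             # critic
theta = np.zeros((2, 2))                         # softmax policy parameters (actor)

def pi(s):
    e = np.exp(theta[s] - theta[s].max())
    return e / e.sum()

s = 0
for step in range(20000):
    a = rng.choice(2, p=pi(s))
    s2 = rng.choice(2, p=P[s, a])
    r = Rw[s, a]
    # critic: data-based update toward r + gamma * E_pi[Q(s', .)]
    Q[s, a] += alpha * (r + gamma * pi(s2) @ Q[s2] - Q[s, a])
    # actor: gradient of log pi(a|s), weighted by the advantage Q(s,a) - E_pi[Q(s,.)]
    grad = -pi(s); grad[a] += 1.0
    theta[s] += beta * alpha * grad * (Q[s, a] - pi(s) @ Q[s])
    s = s2
print("Q:\n", Q.round(2), "\npolicy:", pi(0).round(2), pi(1).round(2))
```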
Policy oscillation is overshooting.
Wagner, Paul
2014-04-01
A majority of approximate dynamic programming approaches to the reinforcement learning problem can be categorized into greedy value function methods and value-based policy gradient methods. The former approach, although fast, is well known to be susceptible to the policy oscillation phenomenon. We take a fresh view to this phenomenon by casting, within the context of non-optimistic policy iteration, a considerable subset of the former approach as a limiting special case of the latter. We explain the phenomenon in terms of this view and illustrate the underlying mechanism with artificial examples. We also use it to derive the constrained natural actor-critic algorithm that can interpolate between the aforementioned approaches. In addition, it has been suggested in the literature that the oscillation phenomenon might be subtly connected to the grossly suboptimal performance in the Tetris benchmark problem of all attempted approximate dynamic programming methods. Based on empirical findings, we offer a hypothesis that might explain the inferior performance levels and the associated policy degradation phenomenon, and which would partially support the suggested connection. Finally, we report scores in the Tetris problem that improve on existing dynamic programming based results by an order of magnitude. Copyright © 2014 Elsevier Ltd. All rights reserved.
Object-Location-Aware Hashing for Multi-Label Image Retrieval via Automatic Mask Learning.
Huang, Chang-Qin; Yang, Shang-Ming; Pan, Yan; Lai, Han-Jiang
2018-09-01
Learning-based hashing is a leading approach of approximate nearest neighbor search for large-scale image retrieval. In this paper, we develop a deep supervised hashing method for multi-label image retrieval, in which we propose to learn a binary "mask" map that can identify the approximate locations of objects in an image, so that we use this binary "mask" map to obtain length-limited hash codes which mainly focus on an image's objects but ignore the background. The proposed deep architecture consists of four parts: 1) a convolutional sub-network to generate effective image features; 2) a binary "mask" sub-network to identify image objects' approximate locations; 3) a weighted average pooling operation based on the binary "mask" to obtain feature representations and hash codes that pay most attention to foreground objects but ignore the background; and 4) the combination of a triplet ranking loss designed to preserve relative similarities among images and a cross entropy loss defined on image labels. We conduct comprehensive evaluations on four multi-label image data sets. The results indicate that the proposed hashing method achieves superior performance gains over the state-of-the-art supervised or unsupervised hashing baselines.
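A forward-pass sketch (shapes and toy tensors assumed) of two components described above: mask-weighted average pooling that keeps foreground features, and the triplet ranking loss on the resulting relaxed codes. The actual method learns the mask and features end-to-end.

```python
import numpy as np

rng = np.random.default_rng(0)
C, H, W = 8, 4, 4
feat = rng.normal(size=(C, H, W))                 # conv features for one image
mask = np.zeros((H, W)); mask[1:3, 1:3] = 1.0     # stand-in for the learned binary mask

def masked_pool(feat, mask):
    """Weighted average pooling: focus on foreground, ignore background."""
    w = mask / (mask.sum() + 1e-8)
    return (feat * w).sum(axis=(1, 2))            # C-dimensional representation

def triplet_loss(q, pos, neg, margin=1.0):
    """Ranking loss: query should be closer to pos than to neg by the margin."""
    dpos = np.sum((q - pos) ** 2)
    dneg = np.sum((q - neg) ** 2)
    return max(0.0, margin + dpos - dneg)

q = masked_pool(feat, mask)
pos, neg = q + 0.1 * rng.normal(size=C), rng.normal(size=C)
print("relaxed code:", np.tanh(q).round(2))       # tanh as a relaxation of binarization
print("triplet loss:", triplet_loss(np.tanh(q), np.tanh(pos), np.tanh(neg)))
```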
NASA Astrophysics Data System (ADS)
Salmin, Vadim V.
2017-01-01
Low-thrust flight mechanics is a comparatively new chapter of space flight mechanics, encompassing problems of trajectory optimization, motion control laws, and the selection of spacecraft design parameters. Tasks associated with accounting for additional factors in the mathematical models of spacecraft motion become increasingly important, as do additional restrictions on thrust vector control. The complication of the mathematical models of controlled motion leads to difficulties in solving the optimization problems. The author proposes methods for finding approximately optimal controls and for evaluating their optimality based on analytical solutions. These methods rest on the principle of extending the class of admissible states and controls and on sufficient conditions for an absolute minimum. Estimation procedures are developed that make it possible to determine how close a found solution is to the optimum and to indicate ways of improving it. The paper describes these estimation procedures for approximately optimal control laws in space flight mechanics problems, in particular for optimizing low-thrust flights between circular non-coplanar orbits, optimizing the control angle and trajectory of the spacecraft during interorbital flights, and optimizing low-thrust flights between arbitrary elliptical Earth satellite orbits.
NASA Astrophysics Data System (ADS)
Jia, Mengyu; Wang, Shuang; Chen, Xueying; Gao, Feng; Zhao, Huijuan
2016-03-01
Most analytical methods for describing light propagation in turbid media exhibit low effectiveness in the near field of a collimated source. Motivated by the Charge Simulation Method in electromagnetic theory as well as established discrete-source-based modeling, we report on an improved explicit model, referred to as the "Virtual Source" (VS) diffusion approximation (DA), which inherits the mathematical simplicity of the DA while considerably extending its validity in modeling near-field photon migration in low-albedo media. In this model, the collimated light of the standard DA is approximated by multiple isotropic point sources (the virtual sources) distributed along the incident direction. For performance enhancement, a fitting procedure between the calculated and measured reflectances is adopted in the near field to optimize the VS parameters (intensities and locations). To be practically applicable, an explicit 2VS-DA model is established based on closed-form derivations of the VS parameters over typical ranges of the optical parameters. The proposed VS-DA model is validated by comparison with Monte Carlo simulations, and is further used in the image reconstruction of a Laminar Optical Tomography system.
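A minimal sketch of the VS construction: replace the collimated beam by a few isotropic point sources on the incidence axis, each contributing the standard infinite-medium diffusion Green's function. The depths and weights below are placeholders for the fitted VS parameters.

```python
import numpy as np

mua, musp = 0.05, 1.0                       # absorption, reduced scattering (1/mm)
D = 1.0 / (3.0 * (mua + musp))              # diffusion coefficient (mm)
mueff = np.sqrt(mua / D)                    # effective attenuation (1/mm)

def green(r):
    """Isotropic point-source fluence in an infinite medium (diffusion theory)."""
    return np.exp(-mueff * r) / (4.0 * np.pi * D * r)

def vs_fluence(x, y, z, depths=(0.5, 2.0), weights=(0.7, 0.3)):
    """Fluence from virtual sources at (0, 0, z_i) along the incident direction;
    depths and weights are placeholder VS parameters."""
    return sum(w * green(np.sqrt(x**2 + y**2 + (z - zi)**2))
               for zi, w in zip(depths, weights))

# Fluence along a near-field lateral profile at depth z = 1 mm:
xs = np.linspace(0.5, 5.0, 10)
print(vs_fluence(xs, 0.0, 1.0).round(4))
```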
Some comparisons of complexity in dictionary-based and linear computational models.
Gnecco, Giorgio; Kůrková, Věra; Sanguineti, Marcello
2011-03-01
Neural networks provide a more flexible approximation of functions than traditional linear regression. In the latter, one can only adjust the coefficients in linear combinations of fixed sets of functions, such as orthogonal polynomials or Hermite functions, while for neural networks, one may also adjust the parameters of the functions which are being combined. However, some useful properties of linear approximators (such as uniqueness, homogeneity, and continuity of best approximation operators) are not satisfied by neural networks. Moreover, optimization of parameters in neural networks becomes more difficult than in linear regression. Experimental results suggest that these drawbacks of neural networks are offset by substantially lower model complexity, allowing accuracy of approximation even in high-dimensional cases. We give some theoretical results comparing requirements on model complexity for two types of approximators: the traditional linear ones and the so-called variable-basis types, which include neural networks, radial-basis-function, and kernel models. We compare upper bounds on worst-case errors in variable-basis approximation with lower bounds on such errors for any linear approximator. Using methods from nonlinear approximation and integral representations tailored to computational units, we describe some cases where neural networks outperform any linear approximator. Copyright © 2010 Elsevier Ltd. All rights reserved.