Some comparisons of complexity in dictionary-based and linear computational models.
Gnecco, Giorgio; Kůrková, Věra; Sanguineti, Marcello
2011-03-01
Neural networks provide a more flexible approximation of functions than traditional linear regression. In the latter, one can only adjust the coefficients in linear combinations of fixed sets of functions, such as orthogonal polynomials or Hermite functions, while for neural networks, one may also adjust the parameters of the functions which are being combined. However, some useful properties of linear approximators (such as uniqueness, homogeneity, and continuity of best approximation operators) are not satisfied by neural networks. Moreover, optimization of parameters in neural networks becomes more difficult than in linear regression. Experimental results suggest that these drawbacks of neural networks are offset by substantially lower model complexity, allowing accurate approximation even in high-dimensional cases. We give some theoretical results comparing requirements on model complexity for two types of approximators, the traditional linear ones and so-called variable-basis types, which include neural networks, radial, and kernel models. We compare upper bounds on worst-case errors in variable-basis approximation with lower bounds on such errors for any linear approximator. Using methods from nonlinear approximation and integral representations tailored to computational units, we describe some cases where neural networks outperform any linear approximator. Copyright © 2010 Elsevier Ltd. All rights reserved.
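The distinction drawn here, between adjusting only the outer coefficients of a fixed basis and also adjusting the inner parameters of the combined functions, can be illustrated with a small numerical sketch. The target function, Gaussian basis, and model complexity n below are arbitrary choices for illustration, not the worst-case setting analyzed in the paper:

```python
# A minimal sketch (not the paper's setting): compare a fixed-basis linear
# approximator with a variable-basis one of the same model complexity n,
# i.e. the same number of computational units.
import numpy as np
from scipy.optimize import least_squares

x = np.linspace(-3.0, 3.0, 400)
target = np.tanh(4.0 * (x - 1.0)) - np.tanh(4.0 * (x + 1.0))  # steep edges

n = 6  # number of basis functions / hidden units

# Linear (fixed-basis) approximation: adjust only the outer coefficients of
# Gaussians with frozen centers and widths.
centers = np.linspace(-3.0, 3.0, n)
Phi = np.exp(-(x[:, None] - centers[None, :]) ** 2)     # fixed design matrix
coef, *_ = np.linalg.lstsq(Phi, target, rcond=None)
linear_fit = Phi @ coef

# Variable-basis approximation: also adjust the inner parameters
# (centers and widths), as a neural/RBF network would.
def residuals(p):
    a, c, w = np.split(p, 3)
    pred = np.exp(-((x[:, None] - c[None, :]) / w[None, :]) ** 2) @ a
    return pred - target

p0 = np.concatenate([coef, centers, np.ones(n)])
sol = least_squares(residuals, p0)
a, c, w = np.split(sol.x, 3)
variable_fit = np.exp(-((x[:, None] - c[None, :]) / w[None, :]) ** 2) @ a

print("max error, fixed basis   :", np.max(np.abs(linear_fit - target)))
print("max error, variable basis:", np.max(np.abs(variable_fit - target)))
```

With the same number of units, the variable-basis fit typically attains a noticeably smaller maximum error here because the centers and widths migrate toward the steep regions of the target.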
NASA Astrophysics Data System (ADS)
Berthold, T.; Milbradt, P.; Berkhahn, V.
2018-04-01
This paper presents a model for the approximation of multiple, spatially distributed grain size distributions based on a feedforward neural network. Since a classical feedforward network does not guarantee valid cumulative distribution functions as output, a priori information is incorporated into the model by applying weight and architecture constraints. The model is derived in two steps. First, a model is presented that is able to produce a valid distribution function for a single sediment sample. Although initially developed for sediment samples, the model is not limited to this application; it can also be used to approximate any other multimodal continuous distribution function. In the second part, the network is extended in order to capture the spatial variation of the sediment samples, which were obtained from 48 locations in the investigation area. Results show that the model provides an adequate approximation of grain size distributions, satisfying the requirements of a cumulative distribution function.
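As a hedged illustration of the idea of building validity into the network rather than checking it afterwards, the sketch below fits a convex combination of sigmoids with positivity-constrained slopes, which is monotone nondecreasing and bounded in (0, 1) by construction. The data, number of units, and this particular parameterization are illustrative assumptions, not the architecture used in the paper:

```python
# A minimal sketch, not the paper's architecture: one way to make a small
# feedforward model that is a valid CDF by construction, using a convex
# combination of sigmoids with non-negative slopes (weight constraints).
import numpy as np
from scipy.optimize import least_squares
from scipy.special import expit, softmax

# hypothetical sieve data: grain sizes (log10 mm) and empirical CDF values
d = np.array([-3.0, -2.0, -1.5, -1.0, -0.5, 0.0, 0.5, 1.0])
F_emp = np.array([0.02, 0.10, 0.25, 0.45, 0.70, 0.85, 0.95, 0.99])

K = 3  # number of sigmoid units (mixture components)

def cdf(params, x):
    alpha, beta, b = np.split(params, 3)
    p = softmax(alpha)          # mixture weights: non-negative, sum to 1
    w = np.exp(beta)            # slopes constrained to be positive
    return expit(w * (x[:, None] - b)) @ p

theta0 = np.concatenate([np.zeros(K), np.zeros(K), np.array([-2.0, -0.5, 0.5])])
fit = least_squares(lambda t: cdf(t, d) - F_emp, theta0)

grid = np.linspace(-4, 2, 7)
print(np.round(cdf(fit.x, grid), 3))   # nondecreasing, within (0, 1)
```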
Jalalian, Athena; Tay, Francis E H; Arastehfar, Soheil; Liu, Gabriel
2017-06-01
Load-displacement relationships of spinal motion segments are crucial factors in characterizing the stiffness of scoliotic spine models to mimic the spine's responses to loads. Although a nonlinear approach to approximating these relationships can be superior to a linear one, little mention has been made of deriving personalized nonlinear load-displacement relationships in previous studies. A method is developed for nonlinear approximation of load-displacement relationships of spinal motion segments to assist in characterizing in vivo the stiffness of spine models. We propose approximation by tangent functions and focus on rotational displacements in the lateral direction. The tangent functions are characterized using a lateral bending test. A multi-body model was characterized for 18 patients and utilized to simulate four spine positions: right bending, left bending, neutral, and traction. The same was done using linear functions to assess the performance of the proposed tangent function in comparison with the linear function. The root-mean-square error (RMSE) of the displacements estimated by the tangent functions was 44% smaller than that of the linear functions. This shows the ability of our tangent function to approximate the relationships over the range of infinitesimal to large displacements involved in the spine movement to the four positions. In addition, the models based on the tangent functions yielded 67%, 55%, and 39% smaller RMSEs for the Ferguson angles, locations of vertebrae, and orientations of vertebrae, respectively, implying better estimates of spine responses to loads. Overall, it can be concluded that our method for approximating load-displacement relationships of spinal motion segments can offer good estimates of scoliotic spine stiffness.
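A minimal sketch of the comparison described above, with made-up moment-rotation data rather than patient measurements; the parameterization a*tan(b*theta) is one plausible reading of "approximation by tangent functions", not necessarily the authors' exact form:

```python
# Sketch: fit a tangent function and a straight line to hypothetical
# lateral moment-rotation data for one motion segment and compare RMSE.
import numpy as np
from scipy.optimize import curve_fit

theta = np.deg2rad(np.array([0.5, 1.0, 2.0, 4.0, 6.0, 8.0]))   # rotation, rad
moment = np.array([0.3, 0.65, 1.4, 3.4, 6.2, 10.5])            # load, N*m

def tangent_model(t, a, b):
    return a * np.tan(b * t)        # stiffness grows with displacement

(a, b), _ = curve_fit(tangent_model, theta, moment,
                      p0=[1.0, 5.0], bounds=([0.0, 0.0], [50.0, 10.0]))
k = np.polyfit(theta, moment, 1)    # linear (constant-stiffness) fit

rmse = lambda pred: np.sqrt(np.mean((pred - moment) ** 2))
print("tangent RMSE:", rmse(tangent_model(theta, a, b)))
print("linear  RMSE:", rmse(np.polyval(k, theta)))
```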
Approximate formulas for elasticity of the Tornquist functions and some their advantages
NASA Astrophysics Data System (ADS)
Issin, Meyram
2017-09-01
In this article, functions of demand for first-necessity goods, second-necessity goods, and luxury goods, depending on income, are considered. These functions are called Tornquist functions. By means of the return model, the demand for first-necessity goods and second-necessity goods is approximately described. Then, on the basis of the method of least squares, approximate formulas for the elasticity of these Tornquist functions are obtained. To obtain an approximate formula for the elasticity of the demand function for luxury goods, a linear asymptotic formula is constructed for this function. Some benefits of the approximate formulas for the elasticity of Tornquist functions are then indicated.
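For concreteness, the sketch below shows the first Tornquist demand function, its exact income elasticity, and a least-squares fit to hypothetical data; the approximate elasticity formulas derived in the article itself are not reproduced here:

```python
# Illustrative sketch (not the article's approximate formulas): the first
# Tornquist demand function y = a*x/(x + b), its exact income elasticity
# E(x) = b/(x + b), and a least-squares fit to hypothetical demand data.
import numpy as np
from scipy.optimize import curve_fit

def tornquist1(x, a, b):
    return a * x / (x + b)

def elasticity1(x, a, b):            # d(ln y)/d(ln x); independent of a
    return b / (x + b)

income = np.array([1.0, 2.0, 4.0, 8.0, 16.0, 32.0])        # hypothetical
demand = np.array([0.9, 1.5, 2.2, 2.7, 3.1, 3.3])          # hypothetical

(a, b), _ = curve_fit(tornquist1, income, demand, p0=[4.0, 3.0])
print("fitted a, b        :", a, b)
print("elasticity at x = 4:", elasticity1(4.0, a, b))
```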
Narimani, Mohammand; Lam, H K; Dilmaghani, R; Wolfe, Charles
2011-06-01
Relaxed linear-matrix-inequality-based stability conditions for fuzzy-model-based control systems with imperfect premise matching are proposed. First, the derivative of the Lyapunov function, containing the product terms of the fuzzy model and fuzzy controller membership functions, is derived. Then, in the partitioned operating domain of the membership functions, the relations between the state variables and the mentioned product terms are represented by approximated polynomials in each subregion. Next, the stability conditions containing the information of all subsystems and the approximated polynomials are derived. In addition, the concept of the S-procedure is utilized to release the conservativeness caused by considering the whole operating region for approximated polynomials. It is shown that the well-known stability conditions can be special cases of the proposed stability conditions. Simulation examples are given to illustrate the validity of the proposed approach.
Theory and applications of a deterministic approximation to the coalescent model
Jewett, Ethan M.; Rosenberg, Noah A.
2014-01-01
Under the coalescent model, the random number n_t of lineages ancestral to a sample is nearly deterministic as a function of time when n_t is moderate to large in value, and it is well approximated by its expectation E[n_t]. In turn, this expectation is well approximated by simple deterministic functions that are easy to compute. Such deterministic functions have been applied to estimate allele age, effective population size, and genetic diversity, and they have been used to study properties of models of infectious disease dynamics. Although a number of simple approximations of E[n_t] have been derived and applied to problems of population-genetic inference, the theoretical accuracy of the formulas and the inferences obtained using these approximations is not known, and the range of problems to which they can be applied is not well understood. Here, we demonstrate general procedures by which the approximation n_t ≈ E[n_t] can be used to reduce the computational complexity of coalescent formulas, and we show that the resulting approximations converge to their true values under simple assumptions. Such approximations provide alternatives to exact formulas that are computationally intractable or numerically unstable when the number of sampled lineages is moderate or large. We also extend an existing class of approximations of E[n_t] to the case of multiple populations of time-varying size with migration among them. Our results facilitate the use of the deterministic approximation n_t ≈ E[n_t] for deriving functionally simple, computationally efficient, and numerically stable approximations of coalescent formulas under complicated demographic scenarios. PMID:24412419
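The core idea, replacing the random lineage count by a deterministic curve, can be sketched as follows. The closed form below comes from treating n_t as continuous and solving dn/dt = -n(n-1)/2 in coalescent time units; it is meant only as an illustration and is not necessarily one of the specific approximations of E[n_t] analyzed in the paper:

```python
# Deterministic approximation of the number of ancestral lineages versus a
# Monte Carlo estimate of E[n_t] under the Kingman coalescent.
import numpy as np

def n_deterministic(t, n0):
    # closed-form solution of dn/dt = -n(n-1)/2 with n(0) = n0
    return n0 / (n0 - (n0 - 1.0) * np.exp(-t / 2.0))

def n_simulated(t, n0, reps=2000, seed=1):
    rng = np.random.default_rng(seed)
    counts = np.zeros(reps)
    for r in range(reps):
        n, elapsed = n0, 0.0
        while n > 1:
            elapsed += rng.exponential(2.0 / (n * (n - 1)))  # rate n(n-1)/2
            if elapsed > t:
                break
            n -= 1
        counts[r] = n
    return counts.mean()

for t in (0.1, 0.5, 1.0, 2.0):
    print(t, n_deterministic(t, 20), n_simulated(t, 20))
```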
Reflection and emission models for deserts derived from Nimbus-7 ERB scanner measurements
NASA Technical Reports Server (NTRS)
Staylor, W. F.; Suttles, J. T.
1986-01-01
Broadband shortwave and longwave radiance measurements obtained from the Nimbus-7 Earth Radiation Budget scanner were used to develop reflectance and emittance models for the Sahara-Arabian, Gibson, and Saudi Deserts. The models were established by fitting the satellite measurements to analytic functions. For the shortwave, the model function is based on an approximate solution to the radiative transfer equation. The bidirectional-reflectance function was obtained from a single-scattering approximation with a Rayleigh-like phase function. The directional-reflectance model followed from integration of the bidirectional model and is a function of the sum and product of cosine solar and viewing zenith angles, thus satisfying reciprocity between these angles. The emittance model was based on a simple power-law of cosine viewing zenith angle.
NASA Technical Reports Server (NTRS)
Kukreja, Sunil L.; Vio, Gareth A.; Andrianne, Thomas; Razak, Norizham Abdul; Dimitriadis, Grigorios
2012-01-01
The stall flutter response of a rectangular wing in a low speed wind tunnel is modelled using a nonlinear difference equation description. Static and dynamic tests are used to select a suitable model structure and basis function. Bifurcation criteria such as the Hopf condition and vibration amplitude variation with airspeed were used to ensure the model was representative of experimentally measured stall flutter phenomena. Dynamic test data were used to estimate model parameters and estimate an approximate basis function.
Smith, J. C.; Pribram-Jones, A.; Burke, K.
2016-06-14
Thermal density functional theory calculations often use the Mermin-Kohn-Sham scheme, but employ ground-state approximations to the exchange-correlation (XC) free energy. In the simplest solvable nontrivial model, an asymmetric Hubbard dimer, we calculate the exact many-body energies and the exact Mermin-Kohn-Sham functionals for this system and extract the exact XC free energy. For moderate temperatures and weak correlation, we find this approximation to be excellent. Here we extract various exact free-energy correlation components and the exact adiabatic connection formula.
NASA Technical Reports Server (NTRS)
Simpson, Timothy W.
1998-01-01
The use of response surface models and kriging models is compared for approximating non-random, deterministic computer analyses. After discussing the traditional response surface approach for constructing polynomial models for approximation, kriging is presented as an alternative statistically based approximation method for the design and analysis of computer experiments. Both approximation methods are applied to the multidisciplinary design and analysis of an aerospike nozzle, which consists of a computational fluid dynamics model and a finite element analysis model. Error analysis of the response surface and kriging models is performed along with a graphical comparison of the approximations. Four optimization problems are formulated and solved using both approximation models. While neither approximation technique consistently outperforms the other in this example, the kriging models using only a constant for the underlying global model and a Gaussian correlation function perform as well as the second-order polynomial response surface models.
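The comparison can be reproduced in miniature on a toy deterministic function (standing in for the expensive nozzle analyses, which are not available here): a second-order polynomial response surface versus kriging with a constant global model and a Gaussian (RBF) correlation, using off-the-shelf scikit-learn components:

```python
# Sketch of the comparison on a toy deterministic "analysis" function.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures
from sklearn.pipeline import make_pipeline

def analysis(x):                       # stand-in for an expensive simulation
    return np.sin(x[:, 0]) * np.cos(x[:, 1]) + 0.1 * x[:, 0] * x[:, 1]

rng = np.random.default_rng(0)
X_train = rng.uniform(-2, 2, size=(25, 2))      # design of experiments
y_train = analysis(X_train)
X_test = rng.uniform(-2, 2, size=(400, 2))
y_test = analysis(X_test)

# second-order polynomial response surface
rsm = make_pipeline(PolynomialFeatures(degree=2), LinearRegression())
rsm.fit(X_train, y_train)

# kriging: constant global model plus Gaussian correlation
krig = GaussianProcessRegressor(kernel=ConstantKernel() * RBF([1.0, 1.0]),
                                normalize_y=True)
krig.fit(X_train, y_train)

for name, model in [("response surface", rsm), ("kriging", krig)]:
    err = model.predict(X_test) - y_test
    print(f"{name:16s}  RMSE={np.sqrt(np.mean(err**2)):.4f}  "
          f"max|err|={np.max(np.abs(err)):.4f}")
```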
NASA Technical Reports Server (NTRS)
Gorski, Krzysztof M.
1993-01-01
Simple and easy-to-implement elementary-function approximations are introduced for the spectral window functions needed in calculations of model predictions of the cosmic microwave background (CMB) anisotropy. These approximations allow the investigator to obtain model delta T/T predictions in terms of single integrals over the power spectrum of cosmological perturbations and to avoid the necessity of performing the additional integrations. The high accuracy of these approximations is demonstrated here for the CDM theory-based calculations of the expected delta T/T signal in several experiments searching for the CMB anisotropy.
Cosmological applications of Padé approximant
NASA Astrophysics Data System (ADS)
Wei, Hao; Yan, Xiao-Peng; Zhou, Ya-Nan
2014-01-01
As is well known, in mathematics, any function can be approximated by the Padé approximant. The Padé approximant is the best approximation of a function by a rational function of given order. In fact, the Padé approximant often gives a better approximation of the function than truncating its Taylor series, and it may still work where the Taylor series does not converge. In the present work, we consider the Padé approximant in two applications. First, we obtain the analytical approximation of the luminosity distance for the flat XCDM model, and find that the relative error is fairly small. Second, we propose several parameterizations for the equation-of-state parameter (EoS) of dark energy based on the Padé approximant. They are well motivated from the mathematical and physical points of view. We confront these EoS parameterizations with the latest observational data, and find that they can work well. In these practices, we show that the Padé approximant could be a useful tool in cosmology, and it deserves further investigation.
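A small, self-contained illustration of the general point, unrelated to the paper's XCDM fit: a Padé approximant built from the same Taylor coefficients can remain accurate far outside the radius of convergence of the truncated Taylor series (ln(1 + x) is used here purely as a convenient test function):

```python
# Padé approximant versus truncated Taylor series for ln(1 + x).
import numpy as np
from scipy.interpolate import pade

# Taylor coefficients of ln(1 + x) about x = 0: 0, 1, -1/2, 1/3, ...
order = 8
coeffs = [0.0] + [(-1.0) ** (k + 1) / k for k in range(1, order + 1)]

p, q = pade(coeffs, 4)                 # [4/4] Padé approximant
taylor = np.polynomial.polynomial.Polynomial(coeffs)

for x in (0.5, 1.0, 3.0, 9.0):         # Taylor radius of convergence is 1
    exact = np.log1p(x)
    print(f"x={x:4.1f}  exact={exact:8.4f}  "
          f"taylor={taylor(x):14.4f}  pade={p(x) / q(x):8.4f}")
```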
Approximation of Optimal Infinite Dimensional Compensators for Flexible Structures
NASA Technical Reports Server (NTRS)
Gibson, J. S.; Mingori, D. L.; Adamian, A.; Jabbari, F.
1985-01-01
The infinite-dimensional compensator for a large class of flexible structures, modeled as distributed systems, is discussed, as well as an approximation scheme for designing finite-dimensional compensators to approximate the infinite-dimensional compensator. The approximation scheme is applied to develop a compensator for a space antenna model based on wrap-rib antennas currently being built. While the present model has been simplified, it retains the salient features of rigid-body modes and several distributed components of different characteristics. The control and estimator gains are represented by functional gains, which provide graphical representations of the control and estimator laws. These functional gains also indicate the convergence of the finite-dimensional compensators and show which modes the optimal compensator ignores.
Derivation of phase functions from multiply scattered sunlight transmitted through a hazy atmosphere
NASA Technical Reports Server (NTRS)
Weinman, J. A.; Twitty, J. T.; Browning, S. R.; Herman, B. M.
1975-01-01
The intensity of sunlight multiply scattered in model atmospheres is derived from the equation of radiative transfer by an analytical small-angle approximation. The approximate analytical solutions are compared to rigorous numerical solutions of the same problem. Results obtained from an aerosol-laden model atmosphere are presented. Agreement between the rigorous and the approximate solutions is found to be within a few per cent. The analytical solution to the problem which considers an aerosol-laden atmosphere is then inverted to yield a phase function which describes a single scattering event at small angles. The effect of noisy data on the derived phase function is discussed.
Padé Approximant and Minimax Rational Approximation in Standard Cosmology
NASA Astrophysics Data System (ADS)
Zaninetti, Lorenzo
2016-02-01
The luminosity distance in the standard cosmology as given by ΛCDM, and consequently the distance modulus for supernovae, can be defined by the Padé approximant. A comparison with a known analytical solution shows that the Padé approximant for the luminosity distance has an error of 4% at redshift z = 10. A similar procedure for the Taylor expansion of the luminosity distance gives an error of 4% at redshift z = 0.7; this means that for the luminosity distance, the Padé approximation is superior to the Taylor series. The availability of an analytical expression for the distance modulus allows applying the Levenberg-Marquardt method to derive the fundamental parameters from the available compilations for supernovae. A new luminosity function for galaxies derived from the truncated gamma probability density function models the observed luminosity function for galaxies when the observed range in absolute magnitude is modeled by the Padé approximant. A comparison of ΛCDM with other cosmologies is done adopting a statistical point of view.
The luminosity function of quasars
NASA Technical Reports Server (NTRS)
Pei, Yichuan C.
1995-01-01
We propose a new evolutionary model for the optical luminosity function of quasars. Our analytical model is derived from fits to the empirical luminosity function estimated by Hartwick and Schade and Warren, Hewett, and Osmer on the basis of more than 1200 quasars over the range of redshifts 0 ≲ z ≲ 4.5. We find that the evolution of quasars over this entire redshift range can be well fitted by a Gaussian distribution, while the shape of the luminosity function can be well fitted by either a double power law or an exponential L^(1/4) law. The predicted number counts of quasars, as a function of either apparent magnitude or redshift, are fully consistent with the observed ones. Our model indicates that the evolution of quasars reaches its maximum at z ≈ 2.8 and declines at higher redshifts. An extrapolation of the evolution to z ≳ 4.5 implies that quasars may have started their cosmic fireworks at z_f ≈ 5.2-5.5. Forthcoming surveys of quasars at these redshifts will be critical to constrain the epoch of quasar formation. All the results we derived are based on observed quasars and are therefore subject to the bias of obscuration by dust in damped Ly alpha systems. Future surveys of these absorption systems at z ≳ 3 will also be important if the formation epoch of quasars is to be known unambiguously.
Fast computation of the electrolyte-concentration transfer function of a lithium-ion cell model
NASA Astrophysics Data System (ADS)
Rodríguez, Albert; Plett, Gregory L.; Trimboli, M. Scott
2017-08-01
One approach to creating physics-based reduced-order models (ROMs) of battery-cell dynamics requires first generating linearized Laplace-domain transfer functions of all cell internal electrochemical variables of interest. Then, the resulting infinite-dimensional transfer functions can be reduced by various means in order to find an approximate low-dimensional model. These methods include Padé approximation or the Discrete-Time Realization algorithm. In a previous article, Lee and colleagues developed a transfer function of the electrolyte concentration for a porous-electrode pseudo-two-dimensional lithium-ion cell model. Their approach used separation of variables and Sturm-Liouville theory to compute an infinite-series solution to the transfer function, which they then truncated to a finite number of terms for reasons of practicality. Here, we instead use a variation-of-parameters approach to arrive at a different representation of the identical solution that does not require a series expansion. The primary benefits of the new approach are speed of computation of the transfer function and the removal of the requirement to approximate the transfer function by truncating the number of terms evaluated. Results show that the speedup of the new method can be more than 3800.
Frank, Patrick; George, Serena DeBeer; Anxolabéhère-Mallart, Elodie; Hedman, Britt; Hodgson, Keith O
2006-11-27
Sulfur K-edge X-ray absorption spectroscopy (XAS) was used to characterize the approximately 0.1% sulfur found both in native reticulated vitreous carbon (RVC) foam and in RVC oxidatively modified using 0.2 M KMnO4 in 2 M H2SO4. Sulfur valences and functional groups were assessed using K-edge XAS spectral curve-fitting and employing explicit sulfur compounds as models. For native RVC, these were episulfide (approximately 3%), thianthrene (approximately 9%), disulfide (approximately 10%), sulfenate ester (approximately 12%), benzothiophene (approximately 24%), N,N'-thiobisphthalimide (approximately 30%), alkyl sulfonate (approximately 1.2%), alkyl sulfate monoester (approximately 6%), and sulfate dianion (approximately 6%). Permanganate oxidation of RVC diminished sulfenic sulfur to approximately 9%, thianthrenic sulfur to approximately 7%, and sulfate dianion to approximately 1% but increased sulfate monoester to approximately 12%, and newly produced sulfone (approximately 2%) and sulfate diester (approximately 5%). A simple thermodynamic model was derived that allows proportionate functional group comparisons despite differing (approximately +/-15%) total sulfur contents between RVC batches. The limits of accuracy in the XAS curve-fitting analysis are discussed in terms of microenvironments and extended structures in RVC carbon that cannot be exactly modeled by small molecules. Sulfate esters cover approximately 0.15% of the RVC surface, increasing to approximately 0.51% following permanganate/sulfuric acid treatment. The detection of episulfide directly corroborates a proposed mechanism for the migration of elemental sulfur through carbon.
Comparison of Response Surface and Kriging Models for Multidisciplinary Design Optimization
NASA Technical Reports Server (NTRS)
Simpson, Timothy W.; Korte, John J.; Mauery, Timothy M.; Mistree, Farrokh
1998-01-01
In this paper, we compare and contrast the use of second-order response surface models and kriging models for approximating non-random, deterministic computer analyses. After reviewing the response surface method for constructing polynomial approximations, kriging is presented as an alternative approximation method for the design and analysis of computer experiments. Both methods are applied to the multidisciplinary design of an aerospike nozzle which consists of a computational fluid dynamics model and a finite-element model. Error analysis of the response surface and kriging models is performed along with a graphical comparison of the approximations, and four optimization problems are formulated and solved using both sets of approximation models. The second-order response surface models and the kriging models (using a constant underlying global model and a Gaussian correlation function) yield comparable results.
LVQ and backpropagation neural networks applied to NASA SSME data
NASA Technical Reports Server (NTRS)
Doniere, Timothy F.; Dhawan, Atam P.
1993-01-01
Feedforward neural networks with backpropagation learning have been used as function approximators for modeling the space shuttle main engine (SSME) sensor signals. The modeling of these sensor signals is aimed at the development of a sensor fault detection system that can be used during ground test firings. The generalization capability of a neural network based function approximator depends on the training vectors, which in this application may be derived from a number of SSME ground test firings. This yields a large number of training vectors. Large training sets can cause the time required to train the network to be very large. Also, the network may not be able to generalize for large training sets. To reduce the size of the training sets, the SSME test-firing data are reduced using a learning vector quantization (LVQ) based technique. Different compression ratios were used to obtain compressed data for training the neural network model. The performance of the neural model trained using reduced sets of training patterns is presented and compared with the performance of the model trained using complete data. The LVQ can also be used as a function approximator. The performance of the LVQ as a function approximator using reduced training sets is presented and compared with the performance of the backpropagation network.
Course 4: Density Functional Theory, Methods, Techniques, and Applications
NASA Astrophysics Data System (ADS)
Chrétien, S.; Salahub, D. R.
Contents:
1 Introduction
2 Density functional theory
  2.1 Hohenberg and Kohn theorems
  2.2 Levy's constrained search
  2.3 Kohn-Sham method
3 Density matrices and pair correlation functions
4 Adiabatic connection or coupling strength integration
5 Comparing and contrasting KS-DFT and HF-CI
6 Preparing new functionals
7 Approximate exchange and correlation functionals
  7.1 The Local Spin Density Approximation (LSDA)
  7.2 Gradient Expansion Approximation (GEA)
  7.3 Generalized Gradient Approximation (GGA)
  7.4 meta-Generalized Gradient Approximation (meta-GGA)
  7.5 Hybrid functionals
  7.6 The Optimized Effective Potential method (OEP)
  7.7 Comparison between various approximate functionals
8 LAP correlation functional
9 Solving the Kohn-Sham equations
  9.1 The Kohn-Sham orbitals
  9.2 Coulomb potential
  9.3 Exchange-correlation potential
  9.4 Core potential
  9.5 Other choices and sources of error
  9.6 Functionality
10 Applications
  10.1 Ab initio molecular dynamics for an alanine dipeptide model
  10.2 Transition metal clusters: The ecstasy, and the agony...
  10.3 The conversion of acetylene to benzene on Fe clusters
11 Conclusions
Solving bi-level optimization problems in engineering design using kriging models
NASA Astrophysics Data System (ADS)
Xia, Yi; Liu, Xiaojie; Du, Gang
2018-05-01
Stackelberg game-theoretic approaches are applied extensively in engineering design to handle distributed collaboration decisions. Bi-level genetic algorithms (BLGAs) and response surfaces have been used to solve the corresponding bi-level programming models. However, the computational costs for BLGAs often increase rapidly with the complexity of lower-level programs, and optimal solution functions sometimes cannot be approximated by response surfaces. This article proposes a new method, namely the optimal solution function approximation by kriging model (OSFAKM), in which kriging models are used to approximate the optimal solution functions. A detailed example demonstrates that OSFAKM can obtain better solutions than BLGAs and response surface-based methods, and at the same time reduce the workload of computation remarkably. Five benchmark problems and a case study of the optimal design of a thin-walled pressure vessel are also presented to illustrate the feasibility and potential of the proposed method for bi-level optimization in engineering design.
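The flavor of the approach can be conveyed with a stripped-down sketch: sample the lower-level optimal solution at a few upper-level decisions, fit a kriging model to that optimal-solution function, and optimize the upper level on the surrogate. The objectives below are hypothetical stand-ins, not the article's algorithm or test problems:

```python
# Stripped-down illustration of approximating the lower-level optimal
# solution function y*(x) with a kriging model in a bi-level problem.
import numpy as np
from scipy.optimize import minimize, minimize_scalar
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

def lower_optimal(x):
    # follower's response: argmin_y of a hypothetical lower-level objective
    res = minimize(lambda y: (y[0] - np.sin(3 * x)) ** 2 + 0.1 * y[0] ** 2,
                   x0=[0.0])
    return res.x[0]

# sample the (expensive) optimal-solution function at a few leader decisions
X = np.linspace(0.0, 3.0, 12).reshape(-1, 1)
Y = np.array([lower_optimal(x[0]) for x in X])

krig = GaussianProcessRegressor(ConstantKernel() * RBF(0.5), normalize_y=True)
krig.fit(X, Y)

def upper_objective(x):
    # leader's cost evaluated with the kriging surrogate of y*(x)
    y_hat = krig.predict(np.array([[x]]))[0]
    return (x - 2.0) ** 2 + (y_hat - 0.5) ** 2

res = minimize_scalar(upper_objective, bounds=(0.0, 3.0), method="bounded")
print("leader decision:", res.x, " follower response:", lower_optimal(res.x))
```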
Local density approximation in site-occupation embedding theory
NASA Astrophysics Data System (ADS)
Senjean, Bruno; Tsuchiizu, Masahisa; Robert, Vincent; Fromager, Emmanuel
2017-01-01
Site-occupation embedding theory (SOET) is a density functional theory (DFT)-based method which aims at modelling strongly correlated electrons. It is in principle exact and applicable to model and quantum chemical Hamiltonians. The theory is presented here for the Hubbard Hamiltonian. In contrast to conventional DFT approaches, the site (or orbital) occupations are deduced in SOET from a partially interacting system consisting of one (or more) impurity site(s) and non-interacting bath sites. The correlation energy of the bath is then treated implicitly by means of a site-occupation functional. In this work, we propose a simple impurity-occupation functional approximation based on the two-level (2L) Hubbard model which is referred to as two-level impurity local density approximation (2L-ILDA). Results obtained on a prototypical uniform eight-site Hubbard ring are promising. The extension of the method to larger systems and more sophisticated model Hamiltonians is currently in progress.
Guidelines for Use of the Approximate Beta-Poisson Dose-Response Model.
Xie, Gang; Roiko, Anne; Stratton, Helen; Lemckert, Charles; Dunn, Peter K; Mengersen, Kerrie
2017-07-01
For dose-response analysis in quantitative microbial risk assessment (QMRA), the exact beta-Poisson model is a two-parameter mechanistic dose-response model with parameters α>0 and β>0, which involves the Kummer confluent hypergeometric function. Evaluation of a hypergeometric function is a computational challenge. Denoting P_I(d) as the probability of infection at a given mean dose d, the widely used dose-response model P_I(d) = 1 − (1 + d/β)^(−α) is an approximate formula for the exact beta-Poisson model. Notwithstanding the required conditions α ≪ β and β ≫ 1, issues related to the validity and approximation accuracy of this approximate formula have remained largely ignored in practice, partly because these conditions are too general to provide clear guidance. Consequently, this study proposes a probability measure Pr(0 < r < 1 | α̂, β̂) as a validity measure (r is a random variable that follows a gamma distribution; α̂ and β̂ are the maximum likelihood estimates of α and β in the approximate model), and the constraint condition β̂ > (22α̂)^0.50 for 0.02 < α̂ < 2 as a rule of thumb to ensure an accurate approximation (e.g., Pr(0 < r < 1 | α̂, β̂) > 0.99). This validity measure and rule of thumb were validated by application to all the completed beta-Poisson models (related to 85 data sets) from the QMRA community portal (QMRA Wiki). The results showed that the higher the probability Pr(0 < r < 1 | α̂, β̂), the better the approximation. The results further showed that, among the total 85 models examined, 68 models were identified as valid approximate model applications, which all had a near-perfect match to the corresponding exact beta-Poisson model dose-response curve. © 2016 Society for Risk Analysis.
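The quantities discussed above are easy to compute once the parameterization is fixed; in the sketch below the random variable r is taken to follow a gamma distribution with shape α̂ and rate β̂, which is our reading of the abstract rather than a detail stated in it:

```python
# Approximate beta-Poisson dose-response curve plus the proposed validity
# measure Pr(0 < r < 1 | alpha, beta). Gamma parameterization is assumed.
import numpy as np
from scipy.stats import gamma

def p_infection_approx(d, alpha, beta):
    # widely used approximate beta-Poisson formula
    return 1.0 - (1.0 + d / beta) ** (-alpha)

def validity_measure(alpha, beta):
    # Pr(0 < r < 1) with r ~ Gamma(shape=alpha, rate=beta)  (assumption)
    return gamma.cdf(1.0, a=alpha, scale=1.0 / beta)

alpha, beta = 0.2, 40.0          # hypothetical fitted values
doses = np.array([1.0, 10.0, 100.0, 1000.0])
print("P_I(d):", np.round(p_infection_approx(doses, alpha, beta), 4))
print("Pr(0 < r < 1):", round(validity_measure(alpha, beta), 4))
print("rule of thumb satisfied:", beta > (22 * alpha) ** 0.5)
```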
Approximate Dynamic Programming: Combining Regional and Local State Following Approximations.
Deptula, Patryk; Rosenfeld, Joel A; Kamalapurkar, Rushikesh; Dixon, Warren E
2018-06-01
An infinite-horizon optimal regulation problem for a control-affine deterministic system is solved online using a local state following (StaF) kernel and a regional model-based reinforcement learning (R-MBRL) method to approximate the value function. Unlike traditional methods such as R-MBRL that aim to approximate the value function over a large compact set, the StaF kernel approach aims to approximate the value function in a local neighborhood of the state that travels within a compact set. In this paper, the value function is approximated using a state-dependent convex combination of the StaF-based and the R-MBRL-based approximations. As the state enters a neighborhood containing the origin, the value function transitions from being approximated by the StaF approach to the R-MBRL approach. Semiglobal uniformly ultimately bounded (SGUUB) convergence of the system states to the origin is established using a Lyapunov-based analysis. Simulation results are provided for two, three, six, and ten-state dynamical systems to demonstrate the scalability and performance of the developed method.
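Schematically, the combined approximator has the form V(x) = λ(x)·V_StaF(x) + (1 − λ(x))·V_R-MBRL(x) with λ(x) → 0 near the origin. The sketch below uses hypothetical features, fixed weights, and an ad hoc switching function purely to show the blending; it does not reproduce the paper's weight-update laws or stability analysis:

```python
# Schematic state-dependent convex combination of a local (StaF-style) and a
# regional (R-MBRL-style) value-function approximation.
import numpy as np

def v_regional(x, w_r):
    # regional approximation: fixed features over a large compact set
    phi = np.array([x[0] ** 2, x[1] ** 2, x[0] * x[1]])
    return w_r @ phi

def v_local(x, w_l, centers):
    # local approximation: kernels placed in a neighborhood of the state
    phi = np.exp(-np.sum((centers - x) ** 2, axis=1))
    return w_l @ phi

def blend(x, radius=1.0):
    # ~1 far from the origin (favor local), ~0 near the origin (favor regional)
    return 1.0 - np.exp(-np.dot(x, x) / radius ** 2)

def v_combined(x, w_l, w_r, centers):
    lam = blend(x)
    return lam * v_local(x, w_l, centers) + (1.0 - lam) * v_regional(x, w_r)

w_r = np.array([1.0, 1.0, 0.2])
for x in (np.array([2.0, 1.0]), np.array([0.1, 0.05])):
    centers = x + 0.3 * np.array([[1.0, 0.0], [-0.5, 0.87], [-0.5, -0.87]])
    w_l = np.array([0.4, 0.4, 0.4])          # would be updated online
    print(x, round(v_combined(x, w_l, w_r, centers), 4))
```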
From Bethe–Salpeter Wave functions to Generalised Parton Distributions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mezrag, C.; Moutarde, H.; Rodríguez-Quintero, J.
2016-06-06
We review recent works on the modelling of Generalised Parton Distributions within the Dyson-Schwinger formalism. We highlight how covariant computations, using the impulse approximation, allow one to fulfil most of the theoretical constraints of the GPDs. Specific attention is given to chiral properties, especially the so-called soft pion theorem and its link with the Axial-Vector Ward-Takahashi identity. The limitations of the impulse approximation are also explained. Beyond-impulse-approximation computations are reviewed in the forward case. Finally, we stress the advantages of the overlap of lightcone wave functions, and possible ways to construct covariant GPD models within this framework, in a two-body approximation.
Atomic density functional and diagram of structures in the phase field crystal model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ankudinov, V. E., E-mail: vladimir@ankudinov.org; Galenko, P. K.; Kropotin, N. V.
2016-02-15
The phase field crystal model provides a continual description of the atomic density over the diffusion time of reactions. We consider a homogeneous structure (liquid) and a perfect periodic crystal, which are constructed from the one-mode approximation of the phase field crystal model. A diagram of 2D structures is constructed from the analytic solutions of the model using atomic density functionals. The diagram predicts equilibrium atomic configurations for transitions from the metastable state and includes the domains of existence of homogeneous, triangular, and striped structures corresponding to a liquid, a body-centered cubic crystal, and a longitudinal cross section of cylindrical tubes. The method developed here is employed for constructing the diagram for the homogeneous liquid phase and the body-centered iron lattice. The expression for the free energy is derived analytically from density functional theory. The specific features of approximating the phase field crystal model are compared with the approximations and conclusions of the weak crystallization and 2D melting theories.
Smith, Kyle K G; Poulsen, Jens Aage; Nyman, Gunnar; Rossky, Peter J
2015-06-28
We develop two classes of quasi-classical dynamics that are shown to conserve the initial quantum ensemble when used in combination with the Feynman-Kleinert approximation of the density operator. These dynamics are used to improve the Feynman-Kleinert implementation of the classical Wigner approximation for the evaluation of quantum time correlation functions known as Feynman-Kleinert linearized path-integral. As shown, both classes of dynamics are able to recover the exact classical and high temperature limits of the quantum time correlation function, while a subset is able to recover the exact harmonic limit. A comparison of the approximate quantum time correlation functions obtained from both classes of dynamics is made with the exact results for the challenging model problems of the quartic and double-well potentials. It is found that these dynamics provide a great improvement over the classical Wigner approximation, in which purely classical dynamics are used. In a special case, our first method becomes identical to centroid molecular dynamics.
Nonlinear functional approximation with networks using adaptive neurons
NASA Technical Reports Server (NTRS)
Tawel, Raoul
1992-01-01
A novel mathematical framework for the rapid learning of nonlinear mappings and topological transformations is presented. It is based on allowing the neuron's parameters to adapt as a function of learning. This fully recurrent adaptive neuron model (ANM) has been successfully applied to complex nonlinear function approximation problems such as the highly degenerate inverse kinematics problem in robotics.
Chemical association in simple models of molecular and ionic fluids. III. The cavity function
NASA Astrophysics Data System (ADS)
Zhou, Yaoqi; Stell, George
1992-01-01
Exact equations which relate the cavity function to excess solvation free energies and equilibrium association constants are rederived by using a thermodynamic cycle. A zeroth-order approximation, derived previously by us as a simple interpolation scheme, is found to be very accurate if the associative bonding occurs on or near the surface of the repulsive core of the interaction potential. If the bonding radius is substantially less than the core radius, the approximation overestimates the association degree and the association constant. For binary association, the zeroth-order approximation is equivalent to the first-order thermodynamic perturbation theory (TPT) of Wertheim. For n-particle association, the combination of the zeroth-order approximation with a "linear" approximation (for n-particle distribution functions in terms of the two-particle function) yields the first-order TPT result. Using our exact equations to go beyond TPT, near-exact analytic results for binary hard-sphere association are obtained. Solvent effects on binary hard-sphere association and ionic association are also investigated. A new rule which generalizes Le Chatelier's principle is used to describe the three distinct forms of behavior involving solvent effects that we find. The replacement of the dielectric-continuum solvent model by a dipolar hard-sphere model leads to improved agreement with an experimental observation. Finally, an equation of state for an n-particle flexible linear-chain fluid is derived on the basis of a one-parameter approximation that interpolates between the generalized Kirkwood superposition approximation and the linear approximation. A value of the parameter that appears to be near optimal in the context of this application is obtained from comparison with computer-simulation data.
A strategy for improved computational efficiency of the method of anchored distributions
NASA Astrophysics Data System (ADS)
Over, Matthew William; Yang, Yarong; Chen, Xingyuan; Rubin, Yoram
2013-06-01
This paper proposes a strategy for improving the computational efficiency of model inversion using the method of anchored distributions (MAD) by "bundling" similar model parametrizations in the likelihood function. Inferring the likelihood function typically requires a large number of forward model (FM) simulations for each possible model parametrization; as a result, the process is quite expensive. To ease this prohibitive cost, we present an approximation for the likelihood function called bundling that relaxes the requirement for high quantities of FM simulations. This approximation redefines the conditional statement of the likelihood function as the probability of a set of similar model parametrizations "bundle" replicating field measurements, which we show is neither a model reduction nor a sampling approach to improving the computational efficiency of model inversion. To evaluate the effectiveness of these modifications, we compare the quality of predictions and computational cost of bundling relative to a baseline MAD inversion of 3-D flow and transport model parameters. Additionally, to aid understanding of the implementation we provide a tutorial for bundling in the form of a sample data set and script for the R statistical computing language. For our synthetic experiment, bundling achieved a 35% reduction in overall computational cost and had a limited negative impact on predicted probability distributions of the model parameters. Strategies for minimizing error in the bundling approximation, for enforcing similarity among the sets of model parametrizations, and for identifying convergence of the likelihood function are also presented.
Elfwing, Stefan; Uchibe, Eiji; Doya, Kenji
2016-12-01
Free-energy based reinforcement learning (FERL) was proposed for learning in high-dimensional state and action spaces. However, the FERL method only really works well with binary, or close to binary, state input, where the number of active states is fewer than the number of non-active states. In the FERL method, the value function is approximated by the negative free energy of a restricted Boltzmann machine (RBM). In our earlier study, we demonstrated that the performance and the robustness of the FERL method can be improved by scaling the free energy by a constant that is related to the size of the network. In this study, we propose that RBM function approximation can be further improved by approximating the value function by the negative expected energy (EERL), instead of the negative free energy, as well as by being able to handle continuous state input. We validate our proposed method by demonstrating that EERL: (1) outperforms FERL, as well as standard neural network and linear function approximation, for three versions of a gridworld task with high-dimensional image state input; (2) achieves new state-of-the-art results in stochastic SZ-Tetris in both model-free and model-based learning settings; and (3) significantly outperforms FERL and standard neural network function approximation for a robot navigation task with raw and noisy RGB images as state input and a large number of actions. Copyright © 2016 The Author(s). Published by Elsevier Ltd. All rights reserved.
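The two value estimates being compared can be written down directly from the RBM energy function. The sketch below uses arbitrary small weights and treats the continuous state-action vector simply as the visible input, which glosses over how the paper actually encodes continuous states; it is meant only to show the difference between the negative free energy and the negative expected energy:

```python
# Negative free energy (FERL) versus negative expected energy (EERL) of a
# small RBM, evaluated at one state-action input. Sizes/weights are arbitrary.
import numpy as np

rng = np.random.default_rng(0)
n_visible, n_hidden = 8, 5
W = 0.1 * rng.standard_normal((n_visible, n_hidden))
b = 0.1 * rng.standard_normal(n_visible)      # visible biases
c = 0.1 * rng.standard_normal(n_hidden)       # hidden biases

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def value_neg_free_energy(v):
    # -F(v) = b.v + sum_j log(1 + exp(c_j + (W^T v)_j))
    z = c + v @ W
    return b @ v + np.sum(np.logaddexp(0.0, z))

def value_neg_expected_energy(v):
    # -<E(v,h)> under p(h|v) = b.v + sum_j sigmoid(z_j) * z_j
    z = c + v @ W
    return b @ v + np.sum(sigmoid(z) * z)

v = rng.uniform(0.0, 1.0, n_visible)          # continuous state-action input
print("negative free energy    :", value_neg_free_energy(v))
print("negative expected energy:", value_neg_expected_energy(v))
```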
Monotone Boolean approximation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hulme, B.L.
1982-12-01
This report presents a theory of approximation of arbitrary Boolean functions by simpler, monotone functions. Monotone increasing functions can be expressed without the use of complements. Nonconstant monotone increasing functions are important in their own right since they model a special class of systems known as coherent systems. It is shown here that when Boolean expressions for noncoherent systems become too large to treat exactly, then monotone approximations are easily defined. The algorithms proposed here not only provide simpler formulas but also produce best possible upper and lower monotone bounds for any Boolean function. This theory has practical application for the analysis of noncoherent fault trees and event tree sequences.
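The notion of best possible monotone bounds can be illustrated by brute force on a small truth table: the least monotone increasing upper bound takes the maximum of f over all points below x, and the greatest monotone lower bound takes the minimum over all points above x. This is an illustration of the concept, not necessarily the report's algorithms:

```python
# Tightest monotone increasing upper and lower bounds of a Boolean function.
from itertools import product

def leq(y, x):                       # componentwise y <= x on 0/1 tuples
    return all(a <= b for a, b in zip(y, x))

def monotone_bounds(f, n):
    points = list(product((0, 1), repeat=n))
    upper = {x: max(f(y) for y in points if leq(y, x)) for x in points}
    lower = {x: min(f(y) for y in points if leq(x, y)) for x in points}
    return upper, lower              # upper >= f >= lower, both monotone

# example: a noncoherent structure function (uses a complemented variable)
f = lambda x: (x[0] and not x[1]) or x[2]

upper, lower = monotone_bounds(f, 3)
for x in product((0, 1), repeat=3):
    print(x, int(lower[x]), int(f(x)), int(upper[x]))
```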
Li, Chen; Requist, Ryan; Gross, E K U
2018-02-28
We perform model calculations for a stretched LiF molecule, demonstrating that nonadiabatic charge transfer effects can be accurately and seamlessly described within a density functional framework. In alkali halides like LiF, there is an abrupt change in the ground state electronic distribution due to an electron transfer at a critical bond length R = R_c, where an avoided crossing of the lowest adiabatic potential energy surfaces calls the validity of the Born-Oppenheimer approximation into doubt. Modeling the R-dependent electronic structure of LiF within a two-site Hubbard model, we find that nonadiabatic electron-nuclear coupling produces a sizable elongation of the critical R_c by 0.5 bohr. This effect is very accurately captured by a simple and rigorously derived correction, with an M^(-1) prefactor, to the exchange-correlation potential in density functional theory (M = reduced nuclear mass). Since this nonadiabatic term depends on gradients of the nuclear wave function and conditional electronic density, ∇_R χ(R) and ∇_R n(r, R), it couples the Kohn-Sham equations at neighboring R points. Motivated by an observed localization of nonadiabatic effects in nuclear configuration space, we propose a local conditional density approximation, an approximation that reduces the search for nonadiabatic density functionals to the search for a single function y(n).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhou, Yongxi; Ernzerhof, Matthias, E-mail: Matthias.Ernzerhof@UMontreal.ca; Bahmann, Hilke
Drawing on the adiabatic connection of density functional theory, exchange-correlation functionals of Kohn-Sham density functional theory are constructed which interpolate between the extreme limits of the electron-electron interaction strength. The first limit is the non-interacting one, where there is only exchange. The second limit is the strongly correlated one, characterized as the minimum of the electron-electron repulsion energy. The exchange-correlation energy in the strong-correlation limit is approximated through a model for the exchange-correlation hole that is referred to as the nonlocal-radius model [L. O. Wagner and P. Gori-Giorgi, Phys. Rev. A 90, 052512 (2014)]. Using the non-interacting and strongly correlated extremes, various interpolation schemes are presented that yield new approximations to the adiabatic connection and thus to the exchange-correlation energy. Some of them rely on empiricism while others do not. Several of the proposed approximations yield the exact exchange-correlation energy for one-electron systems, where local and semi-local approximations often fail badly. Other proposed approximations generalize existing global hybrids by using a fraction of the exchange-correlation energy in the strong-correlation limit to replace an equal fraction of the semi-local approximation to the exchange-correlation energy in the strong-correlation limit. The performance of the proposed approximations is evaluated for molecular atomization energies, total atomic energies, and ionization potentials.
Impact-parameter dependence of the energy loss of fast molecular clusters in hydrogen
NASA Astrophysics Data System (ADS)
Fadanelli, R. C.; Grande, P. L.; Schiwietz, G.
2008-03-01
The electronic energy loss of molecular clusters as a function of impact parameter is far less understood than atomic energy losses. For instance, there are no analytical expressions for the energy loss as a function of impact parameter for cluster ions. In this work, we describe two procedures to evaluate the combined energy loss of molecules: ab initio calculations within the semiclassical approximation and the coupled-channels method using atomic orbitals; and simplified models for the electronic cluster energy loss as a function of the impact parameter, namely the molecular perturbative convolution approximation (MPCA, an extension of the corresponding atomic model PCA) and the molecular unitary convolution approximation (MUCA, a molecular extension of the previous unitary convolution approximation UCA). In this work, an improved ansatz for MPCA is proposed, extending its validity to very compact clusters. For the simplified models, the physical inputs are the oscillator strengths of the target atoms and the target-electron density. The results from these models applied to an atomic hydrogen target yield remarkable agreement with their corresponding ab initio counterparts for different angles between the cluster axis and the velocity direction at specific energies of 150 and 300 keV/u.
Functional Data Approximation on Bounded Domains using Polygonal Finite Elements.
Cao, Juan; Xiao, Yanyang; Chen, Zhonggui; Wang, Wenping; Bajaj, Chandrajit
2018-07-01
We construct and analyze piecewise approximations of functional data on arbitrary 2D bounded domains using generalized barycentric finite elements, and particularly quadratic serendipity elements for planar polygons. We compare approximation qualities (precision/convergence) of these partition-of-unity finite elements through numerical experiments, using Wachspress coordinates, natural neighbor coordinates, Poisson coordinates, mean value coordinates, and quadratic serendipity bases over polygonal meshes on the domain. For a convex n-sided polygon, the quadratic serendipity elements have 2n basis functions, associated in a Lagrange-like fashion to each vertex and each edge midpoint, rather than the usual n(n + 1)/2 basis functions to achieve quadratic convergence. Two greedy algorithms are proposed to generate Voronoi meshes for adaptive functional/scattered data approximations. Experimental results show space/accuracy advantages for these quadratic serendipity finite elements on polygonal domains versus traditional finite elements over simplicial meshes. Polygonal meshes and parameter coefficients of the quadratic serendipity finite elements obtained by our greedy algorithms can be further refined using an L2-optimization to improve the piecewise functional approximation. We conduct several experiments to demonstrate the efficacy of our algorithm for modeling features/discontinuities in functional data/image approximation.
Velocity statistics of the Nagel-Schreckenberg model
NASA Astrophysics Data System (ADS)
Bain, Nicolas; Emig, Thorsten; Ulm, Franz-Josef; Schreckenberg, Michael
2016-02-01
The statistics of velocities in the cellular automaton model of Nagel and Schreckenberg for traffic are studied. From numerical simulations, we obtain the probability distribution function (PDF) for vehicle velocities and the velocity-velocity (vv) covariance function. We identify the probability to find a standing vehicle as a potential order parameter that nicely signals the transition between free and congested flow for a sufficiently large number of velocity states. Our results for the vv covariance function resemble features of a second-order phase transition. We develop a 3-body approximation that allows us to relate the PDFs for velocities and headways. Using this relation, an approximation to the velocity PDF is obtained from the headway PDF observed in simulations. We find a remarkable agreement between this approximation and the velocity PDF obtained from simulations.
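For readers who want to reproduce the velocity statistics, a minimal Nagel-Schreckenberg simulation is sketched below with the standard four update rules; the lattice size, density, and randomization probability are arbitrary choices, not the parameter sets studied in the paper:

```python
# Minimal Nagel-Schreckenberg cellular automaton on a ring; estimates the
# velocity PDF and the probability of a standing vehicle (order parameter).
import numpy as np

def simulate(n_cells=1000, density=0.15, v_max=5, p_slow=0.25,
             steps=2000, warmup=500, seed=0):
    rng = np.random.default_rng(seed)
    n_cars = int(density * n_cells)
    pos = np.sort(rng.choice(n_cells, size=n_cars, replace=False))
    vel = np.zeros(n_cars, dtype=int)
    velocity_counts = np.zeros(v_max + 1)

    for step in range(steps):
        gaps = (np.roll(pos, -1) - pos - 1) % n_cells   # cells to car ahead
        vel = np.minimum(vel + 1, v_max)                # acceleration
        vel = np.minimum(vel, gaps)                     # braking
        vel = np.where(rng.random(n_cars) < p_slow,
                       np.maximum(vel - 1, 0), vel)     # random slowdown
        pos = (pos + vel) % n_cells                     # movement
        if step >= warmup:
            velocity_counts += np.bincount(vel, minlength=v_max + 1)

    return velocity_counts / velocity_counts.sum()

pdf = simulate()
print("velocity PDF   :", np.round(pdf, 4))
print("P(standing car):", round(pdf[0], 4))
```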
NASA Astrophysics Data System (ADS)
Rose, F.; Dupuis, N.
2018-05-01
We present an approximation scheme of the nonperturbative renormalization group that preserves the momentum dependence of correlation functions. This approximation scheme can be seen as a simple improvement of the local potential approximation (LPA) where the derivative terms in the effective action are promoted to arbitrary momentum-dependent functions. As in the LPA, the only field dependence comes from the effective potential, which allows us to solve the renormalization-group equations at a relatively modest numerical cost (as compared, e.g., to the Blaizot-Mendéz-Galain-Wschebor approximation scheme). As an application we consider the two-dimensional quantum O(N) model at zero temperature. We discuss not only the two-point correlation function but also higher-order correlation functions such as the scalar susceptibility (which allows for an investigation of the "Higgs" amplitude mode) and the conductivity. In particular, we show how, using Padé approximants to perform the analytic continuation iω_n → ω + i0⁺ of imaginary-frequency correlation functions χ(iω_n) computed numerically from the renormalization-group equations, one can obtain spectral functions in the real-frequency domain.
A new model for approximating RNA folding trajectories and population kinetics
NASA Astrophysics Data System (ADS)
Kirkpatrick, Bonnie; Hajiaghayi, Monir; Condon, Anne
2013-01-01
RNA participates both in functional aspects of the cell and in gene regulation. The interactions of these molecules are mediated by their secondary structure, which can be viewed as a planar circle graph with arcs for all the chemical bonds between pairs of bases in the RNA sequence. The problem of predicting RNA secondary structure, specifically the chemically most probable structure, has many useful and efficient algorithms. This leaves RNA folding, the problem of predicting the dynamic behavior of RNA structure over time, as the main open problem. RNA folding is important for functional understanding because some RNA molecules change secondary structure in response to interactions with the environment. The full RNA folding model on at most O(3^n) secondary structures is the gold standard. We present a new subset approximation model for the full model, give methods to analyze its accuracy and discuss the relative merits of our model as compared with a pre-existing subset approximation. The main advantage of our model is that it generates Monte Carlo folding pathways with the same probabilities with which they are generated under the full model. The pre-existing subset approximation does not have this property.
Tensor Based Representation and Analysis of Diffusion-Weighted Magnetic Resonance Images
ERIC Educational Resources Information Center
Barmpoutis, Angelos
2009-01-01
Cartesian tensor bases have been widely used to model spherical functions. In medical imaging, tensors of various orders can approximate the diffusivity function at each voxel of a diffusion-weighted MRI data set. This approximation produces tensor-valued datasets that contain information about the underlying local structure of the scanned tissue.…
Approximate Model of Zone Sedimentation
NASA Astrophysics Data System (ADS)
Dzianik, František
2011-12-01
The process of zone sedimentation is affected by many factors that cannot be expressed analytically. For this reason, zone settling is evaluated in practice experimentally or by applying an empirical mathematical description of the process. The paper presents the development of an approximate model of zone settling, i.e. a general function that should properly approximate the behaviour of the settling process within its entire range and at various conditions. Furthermore, the specification of the model parameters by regression analysis of settling test results is shown. The suitability of the model is reviewed by means of graphical dependencies and by statistical coefficients of correlation. The approximate model could also be useful for simplifying the process design of continuous settling tanks and thickeners.
Thermally Driven One-Fluid Electron-Proton Solar Wind: Eight-Moment Approximation
NASA Astrophysics Data System (ADS)
Olsen, Espen Lyngdal; Leer, Egil
1996-05-01
In an effort to improve the "classical" solar wind model, we study an eight-moment approximation hydrodynamic solar wind model, in which the full conservation equation for the heat conductive flux is solved together with the conservation equations for mass, momentum, and energy. We consider two different cases: In one model the energy flux needed to drive the solar wind is supplied as heat flux from a hot coronal base, where both the density and temperature are specified. In the other model, the corona is heated. In that model, the coronal base density and temperature are also specified, but the temperature increases outward from the coronal base due to a specified energy flux that is dissipated in the corona. The eight-moment approximation solutions are compared with the results from a "classical" solar wind model in which the collision-dominated gas expression for the heat conductive flux is used. It is shown that the "classical" expression for the heat conductive flux is generally not valid in the solar wind. In collisionless regions of the flow, the eight-moment approximation gives a larger thermalization of the heat conductive flux than the models using the collision-dominated gas approximation for the heat flux, but the heat flux is still larger than the "saturation heat flux." This leads to a breakdown of the electron distribution function, which turns negative in the collisionless region of the flow. By increasing the interaction between the electrons, the heat flux is reduced, and a reasonable shape is obtained on the distribution function. By solving the full set of equations consistent with the eight-moment distribution function for the electrons, we are thus able to draw inferences about the validity of the eight-moment description of the solar wind as well as the validity of the very commonly used collision-dominated gas approximation for the heat conductive flux in the solar wind.
Simulations of sooting turbulent jet flames using a hybrid flamelet/stochastic Eulerian field method
NASA Astrophysics Data System (ADS)
Consalvi, Jean-Louis; Nmira, Fatiha; Burot, Daria
2016-03-01
The stochastic Eulerian field method is applied to simulate 12 turbulent C1-C3 hydrocarbon jet diffusion flames covering a wide range of Reynolds numbers and fuel sooting propensities. The joint scalar probability density function (PDF) is a function of the mixture fraction, enthalpy defect, scalar dissipation rate and representative soot properties. Soot production is modelled by a semi-empirical acetylene/benzene-based soot model. Spectral gas and soot radiation is modelled using a wide-band correlated-k model. Emission turbulent radiation interactions (TRIs) are taken into account by means of the PDF method, whereas absorption TRIs are modelled using the optically thin fluctuation approximation. Model predictions are found to be in reasonable agreement with experimental data in terms of flame structure, soot quantities and radiative loss. Mean soot volume fractions are predicted within a factor of two of the experiments whereas radiant fractions and peaks of wall radiative fluxes are within 20%. The study also aims to assess approximate radiative models, namely the optically thin approximation (OTA) and grey medium approximation. These approximations affect significantly the radiative loss and should be avoided if accurate predictions of the radiative flux are desired. At atmospheric pressure, the relative errors that they produced on the peaks of temperature and soot volume fraction are within both experimental and model uncertainties. However, these discrepancies are found to increase with pressure, suggesting that spectral models describing properly the self-absorption should be considered at over-atmospheric pressure.
Giese, Timothy J; York, Darrin M
2010-12-28
We extend the Kohn-Sham potential energy expansion (VE) to include variations of the kinetic energy density and use the VE formulation with a 6-31G* basis to perform a "Jacob's ladder" comparison of small molecule properties using density functionals classified as being either LDA, GGA, or meta-GGA. We show that the VE reproduces standard Kohn-Sham DFT results well if all integrals are performed without further approximation, and there is no substantial improvement in using meta-GGA functionals relative to GGA functionals. The advantages of using GGA versus LDA functionals become apparent when modeling hydrogen bonds. We furthermore examine the effect of using integral approximations to compute the zeroth-order energy and first-order matrix elements, and the results suggest that the short-range repulsive potential within self-consistent charge density-functional tight-binding methods arises mainly from the approximations made to the first-order matrix elements.
NASA Astrophysics Data System (ADS)
Validi, AbdoulAhad
2014-03-01
This study introduces a non-intrusive approach in the context of low-rank separated representation to construct a surrogate of high-dimensional stochastic functions, e.g., PDEs/ODEs, in order to decrease the computational cost of Markov Chain Monte Carlo simulations in Bayesian inference. The surrogate model is constructed via a regularized alternating least-squares regression with Tikhonov regularization, using a roughening matrix that computes the gradient of the solution, in conjunction with a perturbation-based error indicator to detect optimal model complexities. The model approximates a vector of a continuous solution at discrete values of a physical variable. The required number of random realizations to achieve a successful approximation depends linearly on the function dimensionality. The computational cost of the model construction is quadratic in the number of random inputs, which potentially tackles the curse of dimensionality in high-dimensional stochastic functions. Furthermore, this vector-valued separated representation-based model, in comparison to the available scalar-valued case, leads to a significant reduction in the cost of approximation by an order of magnitude equal to the vector size. The performance of the method is studied through its application to three numerical examples including a 41-dimensional elliptic PDE and a 21-dimensional cavity flow.
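As an illustration of the kind of Tikhonov-regularized least-squares solve that each pass of a regularized alternating least-squares construction reduces to, here is a minimal numpy sketch; the first-difference roughening matrix, the identity design matrix, and all dimensions and weights are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def gradient_matrix(n, h=1.0):
    """First-difference (roughening) operator approximating the gradient."""
    D = np.zeros((n - 1, n))
    for i in range(n - 1):
        D[i, i], D[i, i + 1] = -1.0 / h, 1.0 / h
    return D

def tikhonov_solve(A, b, D, lam):
    """Solve min_x ||A x - b||^2 + lam * ||D x||^2 via the normal equations."""
    return np.linalg.solve(A.T @ A + lam * (D.T @ D), A.T @ b)

# Illustrative use: recover a smooth solution vector from noisy samples.
# (A is the identity here; in an alternating least-squares pass A would be
# the design matrix of the factor currently being updated.)
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 200)
u_true = np.sin(2 * np.pi * t)
b = u_true + 0.1 * rng.standard_normal(t.size)
u = tikhonov_solve(np.eye(t.size), b, gradient_matrix(t.size, h=t[1] - t[0]), lam=1e-3)
print(np.linalg.norm(u - u_true) / np.linalg.norm(u_true))
```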
Schearer, Eric M.; Liao, Yu-Wei; Perreault, Eric J.; Tresch, Matthew C.; Memberg, William D.; Kirsch, Robert F.; Lynch, Kevin M.
2016-01-01
We present a method to identify the dynamics of a human arm controlled by an implanted functional electrical stimulation neuroprosthesis. The method uses Gaussian process regression to predict shoulder and elbow torques given the shoulder and elbow joint positions and velocities and the electrical stimulation inputs to muscles. We compare the accuracy of torque predictions of nonparametric, semiparametric, and parametric model types. The most accurate of the three model types is a semiparametric Gaussian process model that combines the flexibility of a black box function approximator with the generalization power of a parameterized model. The semiparametric model predicted torques during stimulation of multiple muscles with errors less than 20% of the total muscle torque and passive torque needed to drive the arm. The identified model allows us to define an arbitrary reaching trajectory and approximately determine the muscle stimulations required to drive the arm along that trajectory. PMID:26955041
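The semiparametric structure described above (a parametric mean model plus a flexible Gaussian-process residual) can be sketched generically as follows; this is not the identified arm model, and the linear mean, RBF kernel, noise level, and synthetic inputs are all assumptions made for illustration.

```python
import numpy as np

def rbf_kernel(X1, X2, length=1.0, signal=1.0):
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return signal**2 * np.exp(-0.5 * d2 / length**2)

def fit_semiparametric_gp(X, y, noise=0.1):
    """Least-squares parametric mean (linear in the inputs) plus a zero-mean
    GP with an RBF kernel fitted to the residual."""
    H = np.hstack([X, np.ones((X.shape[0], 1))])
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)
    K = rbf_kernel(X, X) + noise**2 * np.eye(X.shape[0])
    alpha = np.linalg.solve(K, y - H @ beta)
    return beta, alpha, X

def predict(model, Xs):
    beta, alpha, Xtrain = model
    Hs = np.hstack([Xs, np.ones((Xs.shape[0], 1))])
    return Hs @ beta + rbf_kernel(Xs, Xtrain) @ alpha

# Hypothetical inputs: joint positions/velocities and stimulation levels -> torque.
rng = np.random.default_rng(1)
X = rng.uniform(-1.0, 1.0, size=(200, 6))
y = 2.0 * X[:, 0] - 1.5 * X[:, 3] + np.sin(3.0 * X[:, 5]) + 0.05 * rng.standard_normal(200)
model = fit_semiparametric_gp(X, y)
print(predict(model, X[:5]))
```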
Modeling corneal surfaces with rational functions for high-speed videokeratoscopy data compression.
Schneider, Martin; Iskander, D Robert; Collins, Michael J
2009-02-01
High-speed videokeratoscopy is an emerging technique that enables study of the corneal surface and tear-film dynamics. Unlike its static predecessor, this new technique results in a very large amount of digital data for which storage needs become significant. We aimed to design a compression technique that would use mathematical functions to parsimoniously fit corneal surface data with a minimum number of coefficients. Since the Zernike polynomial functions that have been traditionally used for modeling corneal surfaces may not necessarily correctly represent given corneal surface data in terms of its optical performance, we introduced the concept of Zernike polynomial-based rational functions. Modeling optimality criteria were employed in terms of both the rms surface error as well as the point spread function cross-correlation. The parameters of approximations were estimated using a nonlinear least-squares procedure based on the Levenberg-Marquardt algorithm. A large number of retrospective videokeratoscopic measurements were used to evaluate the performance of the proposed rational-function-based modeling approach. The results indicate that the rational functions almost always outperform the traditional Zernike polynomial approximations with the same number of coefficients.
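The Levenberg-Marquardt fitting step can be sketched with scipy on a generic one-dimensional rational function; the actual models in the paper are ratios of Zernike expansions over the corneal surface, so the profile, coefficients, and degrees below are purely illustrative stand-ins.

```python
import numpy as np
from scipy.optimize import least_squares

def rational(x, p):
    """R(x) = (p0 + p1*x + p2*x^2) / (1 + p3*x + p4*x^2); a 1-D stand-in for
    the Zernike-based rational surfaces described in the abstract."""
    return (p[0] + p[1] * x + p[2] * x**2) / (1.0 + p[3] * x + p[4] * x**2)

def residuals(p, x, z):
    return rational(x, p) - z

rng = np.random.default_rng(2)
x = np.linspace(-1.0, 1.0, 400)
z = np.tanh(2.0 * x) + 0.01 * rng.standard_normal(x.size)  # synthetic surface profile
fit = least_squares(residuals, x0=np.zeros(5), args=(x, z), method='lm')
print(fit.x, np.sqrt(np.mean(fit.fun**2)))  # fitted coefficients and rms error
```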
A hybrid Pade-Galerkin technique for differential equations
NASA Technical Reports Server (NTRS)
Geer, James F.; Andersen, Carl M.
1993-01-01
A three-step hybrid analysis technique, which successively uses the regular perturbation expansion method, the Pade expansion method, and then a Galerkin approximation, is presented and applied to some model boundary value problems. In the first step of the method, the regular perturbation method is used to construct an approximation to the solution in the form of a finite power series in a small parameter epsilon associated with the problem. In the second step of the method, the series approximation obtained in step one is used to construct a Pade approximation in the form of a rational function in the parameter epsilon. In the third step, the various powers of epsilon which appear in the Pade approximation are replaced by new (unknown) parameters (delta_j). These new parameters are determined by requiring that the residual formed by substituting the new approximation into the governing differential equation is orthogonal to each of the perturbation coordinate functions used in step one. The technique is applied to model problems involving ordinary or partial differential equations. In general, the technique appears to provide good approximations to the solution even when the perturbation and Pade approximations fail to do so. The method is discussed and topics for future investigations are indicated.
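Steps one and two of such a procedure can be illustrated compactly: given the truncated power series from the perturbation step, a Pade approximant is a rational function whose own expansion reproduces that series. The sketch below uses the series of exp(x) as a stand-in for a step-one result; the Galerkin step (recomputing the delta_j coefficients from an orthogonality condition) is problem-specific and not shown.

```python
import numpy as np
from math import factorial
from scipy.interpolate import pade

# Step-one surrogate: truncated power series of the "solution" (here exp(x)).
an = [1.0 / factorial(k) for k in range(5)]

# Step two: the [2/2] Pade approximant built from those series coefficients.
p, q = pade(an, 2)

x = 1.5
series = sum(c * x**k for k, c in enumerate(an))
print(np.exp(x), series, p(x) / q(x))  # the Pade value is closer than the series
```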
NASA Astrophysics Data System (ADS)
Li, Chen; Requist, Ryan; Gross, E. K. U.
2018-02-01
We perform model calculations for a stretched LiF molecule, demonstrating that nonadiabatic charge transfer effects can be accurately and seamlessly described within a density functional framework. In alkali halides like LiF, there is an abrupt change in the ground state electronic distribution due to an electron transfer at a critical bond length R = Rc, where an avoided crossing of the lowest adiabatic potential energy surfaces calls the validity of the Born-Oppenheimer approximation into doubt. Modeling the R-dependent electronic structure of LiF within a two-site Hubbard model, we find that nonadiabatic electron-nuclear coupling produces a sizable elongation of the critical Rc by 0.5 bohr. This effect is very accurately captured by a simple and rigorously derived correction, with an M⁻¹ prefactor (where M is the reduced nuclear mass), to the exchange-correlation potential in density functional theory. Since this nonadiabatic term depends on gradients of the nuclear wave function and conditional electronic density, ∇Rχ(R) and ∇Rn(r, R), it couples the Kohn-Sham equations at neighboring R points. Motivated by an observed localization of nonadiabatic effects in nuclear configuration space, we propose a local conditional density approximation—an approximation that reduces the search for nonadiabatic density functionals to the search for a single function y(n).
Finite state modeling of aeroelastic systems
NASA Technical Reports Server (NTRS)
Vepa, R.
1977-01-01
A general theory of finite state modeling of aerodynamic loads on thin airfoils and lifting surfaces performing completely arbitrary, small, time-dependent motions in an airstream is developed and presented. The nature of the behavior of the unsteady airloads in the frequency domain is explained, using as raw materials any of the unsteady linearized theories that have been mechanized for simple harmonic oscillations. Each desired aerodynamic transfer function is approximated by means of an appropriate Pade approximant, that is, a rational function of finite degree polynomials in the Laplace transform variable. The modeling technique is applied to several two dimensional and three dimensional airfoils. Circular, elliptic, rectangular and tapered planforms are considered as examples. Identical functions are also obtained for control surfaces for two and three dimensional airfoils.
Revisiting the Landau fluid closure.
NASA Astrophysics Data System (ADS)
Hunana, P.; Zank, G. P.; Webb, G. M.; Adhikari, L.
2017-12-01
Advanced fluid models that are much closer to the full kinetic description than the usual magnetohydrodynamic description are a very useful tool for studying astrophysical plasmas and for interpreting solar wind observational data. The development of advanced fluid models that contain certain kinetic effects is complicated and has attracted much attention in recent years. Here we focus on fluid models that incorporate the simplest possible forms of Landau damping, derived from linear kinetic theory expanded about a leading-order (gyrotropic) bi-Maxwellian distribution function f_0, under the approximation that the perturbed distribution function f_1 is gyrotropic as well. Specifically, we focus on various Pade approximants to the usual plasma response function (and to the plasma dispersion function) and examine possibilities that lead to a closure of the linear kinetic hierarchy of fluid moments. We present a re-examination of the simplest Landau fluid closures.
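For readers who want to reproduce such comparisons numerically, the exact plasma dispersion function is available through the Faddeeva function in scipy; the sketch below evaluates it together with its leading large-argument asymptotics, the two limits that Pade-type closures are built to interpolate between. It does not reproduce the specific approximants examined in the paper.

```python
import numpy as np
from scipy.special import wofz

def Z(zeta):
    """Plasma dispersion function, Z(zeta) = i*sqrt(pi)*w(zeta),
    with w the Faddeeva function (scipy.special.wofz)."""
    return 1j * np.sqrt(np.pi) * wofz(zeta)

def Z_asymptotic(zeta):
    """Leading terms of the large-argument expansion, -1/zeta - 1/(2 zeta^3)."""
    return -1.0 / zeta - 1.0 / (2.0 * zeta**3)

for z in (0.3, 1.0, 3.0):
    print(z, Z(z), Z_asymptotic(z))
```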
Shotorban, Babak
2010-04-01
The dynamic least-squares kernel density (LSQKD) model [C. Pantano and B. Shotorban, Phys. Rev. E 76, 066705 (2007)] is used to solve the Fokker-Planck equations. In this model the probability density function (PDF) is approximated by a linear combination of basis functions with unknown parameters whose governing equations are determined by a global least-squares approximation of the PDF in the phase space. In this work basis functions are set to be Gaussian for which the mean, variance, and covariances are governed by a set of partial differential equations (PDEs) or ordinary differential equations (ODEs) depending on what phase-space variables are approximated by Gaussian functions. Three sample problems of univariate double-well potential, bivariate bistable neurodynamical system [G. Deco and D. Martí, Phys. Rev. E 75, 031913 (2007)], and bivariate Brownian particles in a nonuniform gas are studied. The LSQKD is verified for these problems as its results are compared against the results of the method of characteristics in nondiffusive cases and the stochastic particle method in diffusive cases. For the double-well potential problem it is observed that for low to moderate diffusivity the dynamic LSQKD well predicts the stationary PDF for which there is an exact solution. A similar observation is made for the bistable neurodynamical system. In both these problems least-squares approximation is made on all phase-space variables resulting in a set of ODEs with time as the independent variable for the Gaussian function parameters. In the problem of Brownian particles in a nonuniform gas, this approximation is made only for the particle velocity variable leading to a set of PDEs with time and particle position as independent variables. Solving these PDEs, a very good performance by LSQKD is observed for a wide range of diffusivities.
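As a point of reference of the kind mentioned above (a stochastic particle solution of the diffusive double-well problem, whose stationary PDF is known exactly), here is a minimal Euler-Maruyama sketch; the potential, diffusivity, and step sizes are illustrative choices, not parameters from the paper.

```python
import numpy as np

# Overdamped Langevin dynamics in the double-well potential V(x) = x^4/4 - x^2/2,
# i.e. dX = -V'(X) dt + sqrt(2 D) dW.
rng = np.random.default_rng(3)
D, dt, nsteps, npart = 0.2, 1e-3, 10000, 2000
x = rng.normal(0.0, 0.1, size=npart)
for _ in range(nsteps):
    x += -(x**3 - x) * dt + np.sqrt(2.0 * D * dt) * rng.standard_normal(npart)

# Compare the sampled histogram with the exact stationary PDF ~ exp(-V(x)/D).
edges = np.linspace(-2.5, 2.5, 51)
hist, _ = np.histogram(x, bins=edges, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
exact = np.exp(-(centers**4 / 4 - centers**2 / 2) / D)
exact /= exact.sum() * (centers[1] - centers[0])
print(np.max(np.abs(hist - exact)))
```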
Analytical approximation of the InGaZnO thin-film transistors surface potential
NASA Astrophysics Data System (ADS)
Colalongo, Luigi
2016-10-01
Surface-potential-based mathematical models are among the most accurate and physically based compact models of thin-film transistors, and in turn of indium gallium zinc oxide TFTs, available today. However, the need for iterative computation of the surface potential limits their computational efficiency and their adoption in CAD applications. The existing closed-form approximations of the surface potential are based on regional approximations and empirical smoothing functions that may not be accurate enough, in particular for modeling transconductances and transcapacitances. In this work we present an extremely accurate (in the range of nV) and computationally efficient non-iterative approximation of the surface potential that can serve as a basis for advanced surface-potential-based indium gallium zinc oxide TFT models.
Density-functional expansion methods: Grand challenges.
Giese, Timothy J; York, Darrin M
2012-03-01
We discuss the source of errors in semiempirical density functional expansion (VE) methods. In particular, we show that VE methods are capable of well-reproducing their standard Kohn-Sham density functional method counterparts, but suffer from large errors upon using one or more of these approximations: the limited size of the atomic orbital basis, the Slater monopole auxiliary basis description of the response density, and the one- and two-body treatment of the core-Hamiltonian matrix elements. In the process of discussing these approximations and highlighting their symptoms, we introduce a new model that supplements the second-order density-functional tight-binding model with a self-consistent charge-dependent chemical potential equalization correction; we review our recently reported method for generalizing the auxiliary basis description of the atomic orbital response density; and we decompose the first-order potential into a summation of additive atomic components and many-body corrections, and from this examination, we provide new insights and preliminary results that motivate and inspire new approximate treatments of the core-Hamiltonian.
NASA Astrophysics Data System (ADS)
Lee, Ji-Hwan; Tak, Youngjoo; Lee, Taehun; Soon, Aloysius
Ceria (CeO2-x) is widely studied as an electrolyte material of choice for intermediate-temperature (~ 800 K) solid oxide fuel cells. At this temperature, maintaining the chemical stability and thermal-mechanical integrity of this oxide is of utmost importance. To understand their thermal-elastic properties, we first test the influence of various approximations to the density-functional theory (DFT) xc functionals on specific thermal-elastic properties of both CeO2 and Ce2O3. Namely, we consider the local-density approximation (LDA), the generalized gradient approximation (GGA-PBE) with and without an additional Hubbard U as applied to the 4f electrons of Ce, as well as the recently popularized hybrid functional due to Heyd-Scuseria-Ernzerhof (HSE06). Next, we couple this to a volume-dependent Debye-Grüneisen model to determine the thermodynamic quantities of ceria at arbitrary temperatures. We find that an explicit description of the strong correlation (e.g. via the DFT + U and hybrid functional approach) is necessary to obtain good agreement with experimental values, in contrast to the mean-field treatment in standard xc approximations (such as LDA or GGA-PBE). We acknowledge support from the Samsung Research Funding Center of Samsung Electronics (SRFC-MA1501-03).
Computational methods for estimation of parameters in hyperbolic systems
NASA Technical Reports Server (NTRS)
Banks, H. T.; Ito, K.; Murphy, K. A.
1983-01-01
Approximation techniques for estimating spatially varying coefficients and unknown boundary parameters in second order hyperbolic systems are discussed. Methods for state approximation (cubic splines, tau-Legendre) and approximation of function space parameters (interpolatory splines) are outlined and numerical findings for use of the resulting schemes in model "one dimensional seismic inversion" problems are summarized.
Site-occupation embedding theory using Bethe ansatz local density approximations
NASA Astrophysics Data System (ADS)
Senjean, Bruno; Nakatani, Naoki; Tsuchiizu, Masahisa; Fromager, Emmanuel
2018-06-01
Site-occupation embedding theory (SOET) is an alternative formulation of density functional theory (DFT) for model Hamiltonians where the fully interacting Hubbard problem is mapped, in principle exactly, onto an impurity-interacting (rather than a noninteracting) one. It provides a rigorous framework for combining wave-function (or Green function)-based methods with DFT. In this work, exact expressions for the per-site energy and double occupation of the uniform Hubbard model are derived in the context of SOET. As readily seen from these derivations, the so-called bath contribution to the per-site correlation energy is, in addition to the latter, the key density functional quantity to model in SOET. Various approximations based on Bethe ansatz and perturbative solutions to the Hubbard and single-impurity Anderson models are constructed and tested on a one-dimensional ring. The self-consistent calculation of the embedded impurity wave function has been performed with the density-matrix renormalization group method. It has been shown that promising results are obtained in specific regimes of correlation and density. Possible further developments have been proposed in order to provide reliable embedding functionals and potentials.
Functional Based Adaptive and Fuzzy Sliding Controller for Non-Autonomous Active Suspension System
NASA Astrophysics Data System (ADS)
Huang, Shiuh-Jer; Chen, Hung-Yi
In this paper, an adaptive sliding controller is developed for controlling a vehicle active suspension system. The functional approximation technique is employed to approximate the unknown non-autonomous functions of the suspension system and relax the model-based requirement of the sliding mode control algorithm. In order to improve the control performance and reduce implementation difficulties, a fuzzy strategy with online learning ability is added to compensate for the functional approximation error. The update laws of the functional approximation coefficients and the fuzzy tuning parameters are derived from the Lyapunov theorem to guarantee the system stability. The proposed controller is implemented on a quarter-car hydraulically actuated active suspension system test rig. The experimental results show that the proposed controller suppresses the oscillation amplitude of the suspension system effectively.
Spectral density method to Anderson-Holstein model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chebrolu, Narasimha Raju, E-mail: narasimharaju.phy@gmail.com; Chatterjee, Ashok
The two-parameter spectral density function of a magnetic impurity electron in a non-magnetic metal is calculated within the framework of the Anderson-Holstein model using the spectral density approximation method. The effect of the electron-phonon interaction on the spectral function is investigated.
Local Approximation and Hierarchical Methods for Stochastic Optimization
NASA Astrophysics Data System (ADS)
Cheng, Bolong
In this thesis, we present local and hierarchical approximation methods for two classes of stochastic optimization problems: optimal learning and Markov decision processes. For the optimal learning problem class, we introduce a locally linear model with radial basis functions for estimating the posterior mean of the unknown objective function. The method uses a compact representation of the function which avoids storing the entire history, as is typically required by nonparametric methods. We derive a knowledge gradient policy with the locally parametric model, which maximizes the expected value of information. We show that the policy is asymptotically optimal in theory, and experimental work suggests that the method can reliably find the optimal solution on a range of test functions. For the Markov decision processes problem class, we are motivated by an application where we want to co-optimize a battery for multiple revenue streams, in particular energy arbitrage and frequency regulation. The nature of this problem requires the battery to make charging and discharging decisions at different time scales while accounting for stochastic information such as load demand, electricity prices, and regulation signals. Computing the exact optimal policy becomes intractable due to the large state space and the number of time steps. We propose two methods to circumvent the computation bottleneck. First, we propose a nested MDP model that structures the co-optimization problem into smaller sub-problems with reduced state space. This new model allows us to understand how the battery behaves down to the two-second dynamics (that of the frequency regulation market). Second, we introduce a low-rank value function approximation for backward dynamic programming. This new method only requires computing the exact value function for a small subset of the state space and approximates the entire value function via low-rank matrix completion. We test these methods on historical price data from the PJM Interconnect and show that they outperform the baseline approach used in the industry.
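The low-rank completion idea in the last step can be sketched with a generic hard-impute style loop: compute exact values on a sampled subset of states, then alternately truncate to a fixed rank and re-impose the observed entries. This is only a simple illustration of low-rank matrix completion, not the algorithm developed in the thesis, and the synthetic value matrix, rank, and sampling rate are assumptions.

```python
import numpy as np

def complete_low_rank(V_obs, mask, rank, iters=300):
    """Fill unobserved entries by repeatedly truncating to 'rank' via the SVD
    and re-imposing the observed entries (a simple hard-impute heuristic)."""
    X = np.where(mask, V_obs, V_obs[mask].mean())
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        X = (U[:, :rank] * s[:rank]) @ Vt[:rank]
        X[mask] = V_obs[mask]
    return X

# Synthetic rank-3 "value function" over a 2-D state grid, observed on 30% of states.
rng = np.random.default_rng(4)
A = rng.standard_normal((100, 3)) @ rng.standard_normal((3, 80))
mask = rng.random(A.shape) < 0.3
V_hat = complete_low_rank(np.where(mask, A, 0.0), mask, rank=3)
print(np.linalg.norm(V_hat - A) / np.linalg.norm(A))
```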
Pleiotropy Analysis of Quantitative Traits at Gene Level by Multivariate Functional Linear Models
Wang, Yifan; Liu, Aiyi; Mills, James L.; Boehnke, Michael; Wilson, Alexander F.; Bailey-Wilson, Joan E.; Xiong, Momiao; Wu, Colin O.; Fan, Ruzong
2015-01-01
In genetics, pleiotropy describes the genetic effect of a single gene on multiple phenotypic traits. A common approach is to analyze the phenotypic traits separately using univariate analyses and combine the test results through multiple comparisons. This approach may lead to low power. Multivariate functional linear models are developed to connect genetic variant data to multiple quantitative traits adjusting for covariates for a unified analysis. Three types of approximate F-distribution tests based on Pillai–Bartlett trace, Hotelling–Lawley trace, and Wilks’s Lambda are introduced to test for association between multiple quantitative traits and multiple genetic variants in one genetic region. The approximate F-distribution tests provide much more significant results than those of F-tests of univariate analysis and optimal sequence kernel association test (SKAT-O). Extensive simulations were performed to evaluate the false positive rates and power performance of the proposed models and tests. We show that the approximate F-distribution tests control the type I error rates very well. Overall, simultaneous analysis of multiple traits can increase power performance compared to an individual test of each trait. The proposed methods were applied to analyze (1) four lipid traits in eight European cohorts, and (2) three biochemical traits in the Trinity Students Study. The approximate F-distribution tests provide much more significant results than those of F-tests of univariate analysis and SKAT-O for the three biochemical traits. The approximate F-distribution tests of the proposed functional linear models are more sensitive than those of the traditional multivariate linear models that in turn are more sensitive than SKAT-O in the univariate case. The analysis of the four lipid traits and the three biochemical traits detects more association than SKAT-O in the univariate case. PMID:25809955
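The multivariate test statistics named above (Pillai-Bartlett trace, Hotelling-Lawley trace, Wilks's Lambda) and their approximate F statistics can be computed for a joint regression of several traits with statsmodels; the sketch below is only an illustration on synthetic data with hypothetical column names, not the functional linear model machinery of the paper, which additionally expands the genetic variant data in basis functions.

```python
import numpy as np
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

# Synthetic example: three quantitative traits regressed jointly on two
# genetic predictors and a covariate; all names and effect sizes are invented.
rng = np.random.default_rng(5)
n = 500
g1, g2 = rng.binomial(2, 0.3, n), rng.binomial(2, 0.2, n)
age = rng.normal(50.0, 10.0, n)
df = pd.DataFrame({
    'g1': g1, 'g2': g2, 'age': age,
    'trait1': 0.3 * g1 + 0.01 * age + rng.standard_normal(n),
    'trait2': 0.2 * g1 + 0.1 * g2 + rng.standard_normal(n),
    'trait3': 0.02 * age + rng.standard_normal(n),
})

fit = MANOVA.from_formula('trait1 + trait2 + trait3 ~ g1 + g2 + age', data=df)
# The report lists Wilks' lambda, Pillai's trace, the Hotelling-Lawley trace and
# Roy's greatest root, each with an approximate F statistic, for every term.
print(fit.mv_test())
```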
Comparing two Bayes methods based on the free energy functions in Bernoulli mixtures.
Yamazaki, Keisuke; Kaji, Daisuke
2013-08-01
Hierarchical learning models are ubiquitously employed in information science and data engineering. Their structure makes the posterior distribution complicated in the Bayes method. As a result, prediction, which requires constructing the posterior, is not tractable, although the advantages of the method are empirically well known. The variational Bayes method is widely used as an approximation method in applications; it yields a tractable posterior based on the variational free energy function. The asymptotic behavior has been studied in many hierarchical models and a phase transition is observed. The exact form of the asymptotic variational Bayes energy is derived in Bernoulli mixture models and the phase diagram shows that there are three types of parameter learning. However, the approximation accuracy or the interpretation of the transition point has not been clarified yet. The present paper precisely analyzes the Bayes free energy function of the Bernoulli mixtures. Comparing the free energy functions of these two Bayes methods, we can determine the approximation accuracy and elucidate the behavior of the parameter learning. Our results show that the Bayes free energy has the same learning types while the transition points are different. Copyright © 2013 Elsevier Ltd. All rights reserved.
An operator calculus for surface and volume modeling
NASA Technical Reports Server (NTRS)
Gordon, W. J.
1984-01-01
The mathematical techniques which form the foundation for most of the surface and volume modeling techniques used in practice are briefly described. An outline of what may be termed an operator calculus for the approximation and interpolation of functions of more than one independent variable is presented. By considering the linear operators associated with bivariate and multivariate interpolation/approximation schemes, it is shown how they can be compounded by operator multiplication and Boolean addition to obtain a distributive lattice of approximation operators. It is then demonstrated via specific examples how this operator calculus leads to practical techniques for sculptured surface and volume modeling.
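A concrete instance of the Boolean addition mentioned above is the bilinearly blended Coons patch: for two lofting (univariate interpolation) operators P1 and P2, the Boolean sum (P1 + P2 - P1 P2) interpolates all four boundary curves of a patch. The sketch below uses invented, corner-compatible boundary curves; it is a textbook-style illustration rather than anything specific to the report.

```python
import numpy as np

def coons_patch(c0, c1, d0, d1, u, v):
    """Boolean sum P1 + P2 - P1*P2 of lofting in v and lofting in u:
    c0(u)=S(u,0), c1(u)=S(u,1), d0(v)=S(0,v), d1(v)=S(1,v)."""
    P1 = (1 - v) * c0(u) + v * c1(u)
    P2 = (1 - u) * d0(v) + u * d1(v)
    P12 = ((1 - u) * (1 - v) * c0(0) + u * (1 - v) * c0(1)
           + (1 - u) * v * c1(0) + u * v * c1(1))   # bilinear corner interpolant
    return P1 + P2 - P12

# Illustrative boundary curves (heights over the unit square).
c0 = lambda u: np.sin(np.pi * u)                            # edge v = 0
c1 = lambda u: 0.5 * u                                      # edge v = 1
d0 = lambda v: np.zeros_like(np.asarray(v, dtype=float))    # edge u = 0
d1 = lambda v: 0.5 * v                                      # edge u = 1

u, v = np.meshgrid(np.linspace(0, 1, 5), np.linspace(0, 1, 5))
print(coons_patch(c0, c1, d0, d1, u, v))
```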
A Galerkin approximation for linear elastic shallow shells
NASA Astrophysics Data System (ADS)
Figueiredo, I. N.; Trabucho, L.
1992-03-01
This work is a generalization to shallow shell models of previous results for plates by B. Miara (1989). Using the same basis functions as in the plate case, we construct a Galerkin approximation of the three-dimensional linearized elasticity problem, and establish some error estimates as a function of the thickness, the curvature, the geometry of the shell, the forces and the Lamé constants.
Effects of plasmon pole models on the G0W0 electronic structure of various oxides
NASA Astrophysics Data System (ADS)
Miglio, A.; Waroquiers, D.; Antonius, G.; Giantomassi, M.; Stankovski, M.; Côté, M.; Gonze, X.; Rignanese, G.-M.
2012-09-01
The electronic properties of three different oxides (ZnO, SnO2 and SiO2) are investigated within many-body perturbation theory in the G0W0 approximation. The frequency dependence of the dielectric function is either approximated using two different well-established plasmon-pole models (one of which enforces the fulfillment of the f-sum rule) or treated explicitly by means of the contour-deformation approach. Comparing these results, it is found that the plasmon-pole model enforcing the f-sum rule gives less accurate results for all three oxides. The calculated electronic properties are also compared with the available experimental data and previous ab initio results, focusing on the d-state binding energies. The G0W0 approach leads to significantly improved band gaps with respect to calculations based on density functional theory in the local density approximation.
NASA Astrophysics Data System (ADS)
Rogers, Jeremy D.
2016-03-01
Numerous methods have been developed to quantify the light scattering properties of tissue. These properties are of interest in diagnostic and screening applications due to their sensitivity to changes in tissue ultrastructure and changes associated with diseases such as cancer. Tissue is considered a weak scatterer because the mean free path is much larger than the correlation length. When this is the case, all scattering properties can be calculated from the refractive index correlation function Bn(r). Direct measurement of Bn(r) is challenging because it requires refractive index measurement at high resolution over a large tissue volume. Instead, a model is usually assumed. One particularly useful model, the Whittle-Matern function, includes several realistic function types such as mass fractal and exponential. Optical scattering properties for weakly scattering media can be determined analytically from Bn(r) by applying the Rayleigh-Gans-Debye (RGD) or Born approximation, and so measured scattering properties are used to fit parameters of the model function. Direct measurement of Bn(r) would provide confirmation that the function is a good representation of tissue or help in identifying the length scale at which changes occur. The RGD approximation relates the scattering phase function to the refractive index correlation function through a Fourier transform. This can be inverted without approximation, so goniometric measurement of the scattering can be converted to Bn(r). However, geometric constraints of the measurement of the phase function, angular resolution, and wavelength result in a band-limited measurement of Bn(r). These limits are discussed and example measurements are described.
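Numerically, the Fourier-transform relationship can be sketched as follows: under the RGD (Born) approximation the scattered intensity at momentum transfer q = 2k sin(theta/2) is proportional to the three-dimensional power spectrum of Bn(r). The sketch uses the exponential member of the family with an analytic check; the correlation length, wavelength, and amplitude are illustrative values, not measured tissue parameters.

```python
import numpy as np
from scipy.integrate import quad

def Bn(r, ln=1.0):
    """Exponential refractive-index correlation (one member of the Whittle-Matern
    family); amplitude normalized to 1, length scale ln in micrometres."""
    return np.exp(-r / ln)

def power_spectrum(q, ln=1.0):
    """Isotropic 3-D Fourier transform: Phi(q) = 4*pi * int Bn(r) sin(qr)/(qr) r^2 dr."""
    integrand = lambda r: Bn(r, ln) * np.sinc(q * r / np.pi) * r**2
    val, _ = quad(integrand, 0.0, 50.0 * ln, limit=200)
    return 4.0 * np.pi * val

# Under RGD the phase function vs. scattering angle is proportional to Phi(q)
# at q = 2 k sin(theta/2).  For the exponential Bn there is a closed form,
# 8*pi*ln^3 / (1 + q^2 ln^2)^2, used here as a check on the quadrature.
wavelength_um, ln = 0.6, 1.0
k = 2.0 * np.pi / wavelength_um
for theta in np.radians([5.0, 30.0, 90.0, 150.0]):
    q = 2.0 * k * np.sin(theta / 2.0)
    analytic = 8.0 * np.pi * ln**3 / (1.0 + (q * ln)**2)**2
    print(round(np.degrees(theta)), power_spectrum(q, ln), analytic)
```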
Prospects of second generation artificial intelligence tools in calibration of chemical sensors.
Braibanti, Antonio; Rao, Rupenaguntla Sambasiva; Ramam, Veluri Anantha; Rao, Gollapalli Nageswara; Rao, Vaddadi Venkata Panakala
2005-05-01
Multivariate data-driven calibration models with neural networks (NNs) are developed for binary (Cu++ and Ca++) and quaternary (K+, Ca++, NO3- and Cl-) ion-selective electrode (ISE) data. The response profiles of ISEs with concentrations are non-linear and sub-Nernstian. This task represents function approximation of multi-variate, multi-response, correlated, non-linear data with unknown noise structure, i.e. multi-component calibration/prediction in chemometric parlance. Radial basis function (RBF) and Fuzzy-ARTMAP-NN models implemented in the software packages TRAJAN and Professional II are employed for the calibration. The optimum NN models reported are based on residuals in concentration space. Being a data-driven information technology, NN does not require a model, a prior or posterior distribution of the data, or a noise structure. Missing information, spikes or newer trends in different concentration ranges can be modeled through novelty detection. Two simulated data sets generated from mathematical functions are modeled as a function of the number of data points and network parameters such as the number of neurons and nearest neighbors. The success of RBF and Fuzzy-ARTMAP-NNs in developing adequate calibration models for experimental data and function approximation models for more complex simulated data sets establishes AI2 (artificial intelligence, 2nd generation) as a promising technology for quantitation.
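A generic RBF calibration of this kind (Gaussian radial basis functions with least-squares output weights, mapping electrode potentials back to concentrations) can be sketched in a few lines; this is not the TRAJAN or Professional II implementation, and the synthetic Nernstian-style responses, widths, and center counts are all invented for illustration.

```python
import numpy as np

def rbf_design(X, centers, width):
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * width**2))

def fit_rbf(X, Y, n_centers=25, width=10.0, seed=0):
    """Pick centers from the training data and solve for output weights by
    linear least squares; a generic stand-in for the packages cited."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), n_centers, replace=False)]
    W, *_ = np.linalg.lstsq(rbf_design(X, centers, width), Y, rcond=None)
    return centers, width, W

def predict_rbf(model, X):
    centers, width, W = model
    return rbf_design(X, centers, width) @ W

# Hypothetical calibration set: two electrode potentials (mV) -> two concentrations.
rng = np.random.default_rng(6)
C = rng.uniform(0.1, 1.0, size=(300, 2))                # concentrations
E = 59.0 * np.log10(C) + rng.standard_normal(C.shape)   # noisy responses (mV)
model = fit_rbf(E, C)
print(np.abs(predict_rbf(model, E) - C).mean())
```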
A new estimator for VLBI baseline length repeatability
NASA Astrophysics Data System (ADS)
Titov, O.
2009-11-01
The goal of this paper is to introduce a more effective technique to approximate the “repeatability-baseline length” relationship that is used to evaluate the quality of geodetic VLBI results. Traditionally, this relationship is approximated by a quadratic function of baseline length over all baselines. The new model incorporates the mean number of observed group delays of the reference radio sources (i.e. estimated as global parameters) used in the estimation of each baseline. It is shown that the new method provides a better approximation of the “repeatability-baseline length” relationship than the traditional model. Further development of the new approach comes down to modeling the repeatability as a function of two parameters: baseline length and baseline slewing rate. Within the framework of this new approach the station vertical and horizontal uncertainties can be treated as a function of baseline length. While the previous relationship indicated that the station vertical uncertainties are generally 4-5 times larger than the horizontal uncertainties, the vertical uncertainties as determined by the new method are larger by a factor of only 1.44 over all baseline lengths.
Supervised nonlinear spectral unmixing using a postnonlinear mixing model for hyperspectral imagery.
Altmann, Yoann; Halimi, Abderrahim; Dobigeon, Nicolas; Tourneret, Jean-Yves
2012-06-01
This paper presents a nonlinear mixing model for hyperspectral image unmixing. The proposed model assumes that the pixel reflectances are nonlinear functions of pure spectral components contaminated by an additive white Gaussian noise. These nonlinear functions are approximated using polynomial functions leading to a polynomial postnonlinear mixing model. A Bayesian algorithm and optimization methods are proposed to estimate the parameters involved in the model. The performance of the unmixing strategies is evaluated by simulations conducted on synthetic and real data.
Physical models for the normal YORP and diurnal Yarkovsky effects
NASA Astrophysics Data System (ADS)
Golubov, O.; Kravets, Y.; Krugly, Yu. N.; Scheeres, D. J.
2016-06-01
We propose an analytic model for the normal Yarkovsky-O'Keefe-Radzievskii-Paddack (YORP) and diurnal Yarkovsky effects experienced by a convex asteroid. Both the YORP torque and the Yarkovsky force are expressed as integrals of a universal function over the surface of an asteroid. Although in general this function can only be calculated numerically from the solution of the heat conductivity equation, approximate solutions can be obtained in quadratures for important limiting cases. We consider three such simplified models: Rubincam's approximation (zero heat conductivity), low thermal inertia limit (including the next order correction and thus valid for small heat conductivity), and high thermal inertia limit (valid for large heat conductivity). All three simplified models are compared with the exact solution.
Two-Term Asymptotic Approximation of a Cardiac Restitution Curve*
Cain, John W.; Schaeffer, David G.
2007-01-01
If spatial extent is neglected, ionic models of cardiac cells consist of systems of ordinary differential equations (ODEs) which have the property of excitability, i.e., a brief stimulus produces a prolonged evolution (called an action potential in the cardiac context) before the eventual return to equilibrium. Under repeated stimulation, or pacing, cardiac tissue exhibits electrical restitution: the steady-state action potential duration (APD) at a given pacing period B shortens as B is decreased. Independent of ionic models, restitution is often modeled phenomenologically by a one-dimensional mapping of the form APDnext = f(B – APDprevious). Under some circumstances, a restitution function f can be derived as an asymptotic approximation to the behavior of an ionic model. In this paper, extending previous work, we derive the next term in such an asymptotic approximation for a particular ionic model consisting of two ODEs. The two-term approximation exhibits excellent quantitative agreement with the actual restitution curve, whereas the leading-order approximation significantly underestimates actual APD values. PMID:18080006
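The phenomenological mapping itself is easy to iterate numerically; the sketch below uses an assumed exponential restitution function with made-up constants (not values derived from the ionic model in the paper) simply to show how the steady-state APD at a pacing period B is obtained from APDnext = f(B - APDprevious).

```python
import numpy as np

def restitution(di, apd_max=300.0, amp=120.0, tau=60.0):
    """Illustrative exponential restitution curve f(DI), in ms."""
    return apd_max - amp * np.exp(-di / tau)

def pace(B, n_beats=200, apd0=200.0):
    """Iterate APD_{n+1} = f(B - APD_n) at pacing period B (ms)."""
    apd = apd0
    for _ in range(n_beats):
        di = B - apd
        if di <= 0:        # stimulus falls within the previous action potential
            return np.nan  # loss of 1:1 capture in this simple map
        apd = restitution(di)
    return apd  # at short B the map may alternate rather than settle

for B in (500.0, 400.0, 320.0, 280.0):
    print(B, pace(B))
```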
Strong potential wave functions with elastic channel distortion
DOE Office of Scientific and Technical Information (OSTI.GOV)
Macek, J.; Taulbjerg, K.
1989-06-01
The strong-potential Born approximation is analyzed in a channel-distorted-wave approach. Channel-distorted SPB wave functions are reduced to a conventional form in which the standard off-energy-shell factor g has been replaced by a modified factor γ, which represents a suitable average of g over the momentum distribution of the distorted-channel function. The modified factor is evaluated in a physically realistic model for the distortion potential, and it is found that γ is well represented by a slowly varying phase factor. The channel-distorted SPB approximation is accordingly identical to the impulse approximation if the phase variation of γ can be ignored. This is generally the case in applications to radiative electron capture and to a good approximation for ordinary capture at not too small velocities.
A trust region approach with multivariate Padé model for optimal circuit design
NASA Astrophysics Data System (ADS)
Abdel-Malek, Hany L.; Ebid, Shaimaa E. K.; Mohamed, Ahmed S. A.
2017-11-01
Since the optimization process requires a significant number of consecutive function evaluations, it is recommended to replace the function by an easily evaluated approximation model during the optimization process. The model suggested in this article is based on a multivariate Padé approximation. This model is constructed using data points of ?, where ? is the number of parameters. The model is updated over a sequence of trust regions. This model avoids the slow convergence of linear models of ? and has features of quadratic models that need interpolation data points of ?. The proposed approach is tested by applying it to several benchmark problems. Yield optimization using such a direct method is applied to some practical circuit examples. Minimax solution leads to a suitable initial point to carry out the yield optimization process. The yield is optimized by the proposed derivative-free method for active and passive filter examples.
Barycentric parameterizations for isotropic BRDFs.
Stark, Michael M; Arvo, James; Smits, Brian
2005-01-01
A bidirectional reflectance distribution function (BRDF) is often expressed as a function of four real variables: two spherical coordinates in each of the "incoming" and "outgoing" directions. However, many BRDFs reduce to functions of fewer variables. For example, isotropic reflection can be represented by a function of three variables. Some BRDF models can be reduced further. In this paper, we introduce new sets of coordinates which we use to reduce the dimensionality of several well-known analytic BRDFs as well as empirically measured BRDF data. The proposed coordinate systems are barycentric with respect to a triangular support with a direct physical interpretation. One coordinate set is based on the BRDF model proposed by Lafortune. Another set, based on a model of Ward, is associated with the "halfway" vector common in analytical BRDF formulas. Through these coordinate sets we establish lower bounds on the approximation error inherent in the models on which they are based. We present a third set of coordinates, not based on any analytical model, that performs well in approximating measured data. Finally, our proposed variables suggest novel ways of constructing and visualizing BRDFs.
Unimolecular diffusion-mediated reactions with a nonrandom time-modulated absorbing barrier
NASA Technical Reports Server (NTRS)
Bashford, D.; Weaver, D. L.
1986-01-01
A diffusion-reaction model with time-dependent reactivity is formulated and applied to unimolecular reactions. The model is solved exactly numerically and approximately analytically for the unreacted fraction as a function of time. It is shown that the approximate analytical solution is valid even when the system is far from equilibrium, and when the reactivity probability is more complicated than a square-wave function of time. A discussion is also given of an approach to problems of this type using a stochastically fluctuating reactivity, and the first-passage time for a particular example is derived.
Liu, Jian; Miller, William H
2007-06-21
It is shown how quantum mechanical time correlation functions [defined, e.g., in Eq. (1.1)] can be expressed, without approximation, in the same form as the linearized approximation of the semiclassical initial value representation (LSC-IVR), or classical Wigner model, for the correlation function [cf. Eq. (2.1)], i.e., as a phase space average (over initial conditions for trajectories) of the Wigner functions corresponding to the two operators. The difference is that the trajectories involved in the LSC-IVR evolve classically, i.e., according to the classical equations of motion, while in the exact theory they evolve according to generalized equations of motion that are derived here. Approximations to the exact equations of motion are then introduced to achieve practical methods that are applicable to complex (i.e., large) molecular systems. Four such methods are proposed in the paper--the full Wigner dynamics (full WD) and the second order WD based on "Wigner trajectories" [H. W. Lee and M. D. Scully, J. Chem. Phys. 77, 4604 (1982)] and the full Donoso-Martens dynamics (full DMD) and the second order DMD based on "Donoso-Martens trajectories" [A. Donoso and C. C. Martens, Phys. Rev. Lett. 87, 223202 (2001)]--all of which can be viewed as generalizations of the original LSC-IVR method. Numerical tests of the four versions of this new approach are made for two anharmonic model problems, and for each the momentum autocorrelation function (i.e., operators linear in coordinate or momentum operators) and the force autocorrelation function (nonlinear operators) have been calculated. These four new approximate treatments are indeed seen to be significant improvements to the original LSC-IVR approximation.
NASA Astrophysics Data System (ADS)
Hellgren, Maria; Gross, E. K. U.
2013-11-01
We present a detailed study of the exact-exchange (EXX) kernel of time-dependent density-functional theory with an emphasis on its discontinuity at integer particle numbers. It was recently found that this exact property leads to sharp peaks and step features in the kernel that diverge in the dissociation limit of diatomic systems [Hellgren and Gross, Phys. Rev. A 85, 022514 (2012)]. To further analyze the discontinuity of the kernel, we here make use of two different approximations to the EXX kernel: the Petersilka-Gossmann-Gross (PGG) approximation and a common energy denominator approximation (CEDA). It is demonstrated that whereas the PGG approximation neglects the discontinuity, the CEDA includes it explicitly. By studying model molecular systems it is shown that the so-called field-counteracting effect in the density-functional description of molecular chains can be viewed in terms of the discontinuity of the static kernel. The role of the frequency dependence is also investigated, highlighting its importance for long-range charge-transfer excitations as well as inner-shell excitations.
On the exchange-hole model of London dispersion forces
NASA Astrophysics Data System (ADS)
Ángyán, János G.
2007-07-01
First-principles derivation is given for the heuristic exchange-hole model of London dispersion forces by Becke and Johnson [J. Chem. Phys. 122, 154104 (2005)]. A one-term approximation is used for the dynamic charge density response function, and it is shown that a central nonempirical ingredient of the approximate nonexpanded dispersion energy is the charge density autocorrelation function, a two-particle property, related to the exchange-correlation hole. In the framework of a dipolar approximation of the Coulomb interaction around the molecular origin, one obtains the so-called Salem-Tang-Karplus approximation to the C6 dispersion coefficient. Alternatively, by expanding the Coulomb interaction around the center of charge (centroid) of the exchange-correlation hole associated with each point in the molecular volume, a multicenter expansion is obtained around the centroids of electron localization domains, always in terms of the exchange-correlation hole. In order to get a formula analogous to that of Becke and Johnson, which involves the exchange-hole only, further assumptions are needed, related to the difficulties of obtaining the expectation value of a two-electron operator from a single determinant. Thus a connection could be established between the conventional fluctuating charge density model of London dispersion forces and the notion of the "exchange-hole dipole moment" shedding some light on the true nature of the approximations implicit in the Becke-Johnson model.
NASA Astrophysics Data System (ADS)
Pei, Jin-Song; Mai, Eric C.
2007-04-01
This paper presents a continuing effort towards the development of a heuristic initialization methodology for constructing multilayer feedforward neural networks to model nonlinear functions. In this and the previous studies that this work is built upon, including the one presented at SPIE 2006, the authors do not presume to provide a universal method to approximate arbitrary functions; rather, the focus is on the development of a rational and unambiguous initialization procedure that applies to the approximation of nonlinear functions in the specific domain of engineering mechanics. The applications of this exploratory work can be numerous, including those associated with potential correlation and interpretation of the inner workings of neural networks, such as damage detection. The goal of this study is fulfilled by utilizing the governing physics and mathematics of nonlinear functions and the strength of the sigmoidal basis function. A step-by-step graphical procedure utilizing a few neural network prototypes as "templates" to approximate commonly seen memoryless nonlinear functions of one or two variables is further developed in this study. Decomposition of complex nonlinear functions into a summation of simpler nonlinear functions is utilized to exploit this prototype-based initialization methodology. Training examples are presented to demonstrate the rationality and efficiency of the proposed methodology when compared with the popular Nguyen-Widrow initialization algorithm. Future work is also identified.
Approximating a nonlinear advanced-delayed equation from acoustics
NASA Astrophysics Data System (ADS)
Teodoro, M. Filomena
2016-10-01
We approximate the solution of a particular non-linear mixed-type functional differential equation from physiology, the mucosal wave model of the vocal oscillation during phonation. The mathematical equation models a superficial wave propagating through the tissues. The numerical scheme is adapted from the work presented in [1, 2, 3], using the homotopy analysis method (HAM) to solve the non-linear mixed-type equation under study.
Constraint on the second functional derivative of the exchange-correlation energy
NASA Astrophysics Data System (ADS)
Joubert, D. P.
2012-09-01
Using the density functional adiabatic connection approach for an N-electron system, it is shown that ?, where γ is the coupling constant that scales the electron-electron interaction strength. For the non-interacting Kohn-Sham Hamiltonian γ = 0 and for the fully interacting system γ = 1. ? is the Hartree plus exchange-correlation energy, while f0(r) and fγ(r) are the Fukui functions of the non-interacting and interacting systems, respectively. This identity can serve to test the internal self-consistency or quality of approximate functionals. The quality of some popular approximate exchange and correlation functionals is tested for a simple model system.
NASA Astrophysics Data System (ADS)
Cummings, Patrick
We consider the approximation of solutions of two complicated, physical systems via the nonlinear Schrodinger equation (NLS). In particular, we discuss the evolution of wave packets and long waves in two physical models. Due to the complicated nature of the equations governing many physical systems and the in-depth knowledge we have of solutions of the nonlinear Schrodinger equation, it is advantageous to use approximation results of this kind to model these physical systems. The approximations are simple enough that we can use them to understand the qualitative and quantitative behavior of the solutions, and by justifying them we can show that the behavior of the approximation captures the behavior of solutions to the original equation, at least for long, but finite time. We first consider a model of the water wave equations which can be approximated by wave packets using the NLS equation. We discuss a new proof that both simplifies and strengthens previous justification results of Schneider and Wayne. Rather than using analytic norms, as was done by Schneider and Wayne, we construct a modified energy functional so that the approximation holds for the full interval of existence of the approximate NLS solution as opposed to a subinterval (as is seen in the analytic case). Furthermore, the proof avoids problems associated with inverting the normal form transform by working with a modified energy functional motivated by Craig and Hunter et al. We then consider the Klein-Gordon-Zakharov system and prove a long wave approximation result. In this case there is a non-trivial resonance that cannot be eliminated via a normal form transform. By combining the normal form transform for small Fourier modes and using analytic norms elsewhere, we obtain a justification result on a time scale of order 1/epsilon^2.
Liu, Jian; Miller, William H
2006-12-14
The thermal Gaussian approximation (TGA) recently developed by Frantsuzov et al. [Chem. Phys. Lett. 381, 117 (2003)] has been demonstrated to be a practical way for approximating the Boltzmann operator exp(-betaH) for multidimensional systems. In this paper the TGA is combined with semiclassical (SC) initial value representations (IVRs) for thermal time correlation functions. Specifically, it is used with the linearized SC-IVR (LSC-IVR, equivalent to the classical Wigner model), and the "forward-backward semiclassical dynamics" approximation developed by Shao and Makri [J. Phys. Chem. A 103, 7753 (1999); 103, 9749 (1999)]. Use of the TGA with both of these approximate SC-IVRs allows the oscillatory part of the IVR to be integrated out explicitly, providing an extremely simple result that is readily applicable to large molecular systems. Calculation of the force-force autocorrelation for a strongly anharmonic oscillator demonstrates its accuracy, and calculation of the velocity autocorrelation function (and thus the diffusion coefficient) of liquid neon demonstrates its applicability.
Online adaptive decision trees: pattern classification and function approximation.
Basak, Jayanta
2006-09-01
Recently we have shown that decision trees can be trained in the online adaptive (OADT) mode (Basak, 2004), leading to better generalization scores. OADTs were bottlenecked by the fact that they are able to handle only two-class classification tasks with a given structure. In this article, we provide an architecture based on OADT, ExOADT, which can handle multiclass classification tasks and is able to perform function approximation. ExOADT is structurally similar to OADT extended with a regression layer. We also show that ExOADT not only is capable of adapting the local decision hyperplanes in the nonterminal nodes but also has the potential to smoothly change the structure of the tree depending on the data samples. We provide the learning rules based on steepest gradient descent for the new model ExOADT. Experimentally we demonstrate the effectiveness of ExOADT in pattern classification and function approximation tasks. Finally, we briefly discuss the relationship of ExOADT with other classification models.
Nonlinear identification using a B-spline neural network and chaotic immune approaches
NASA Astrophysics Data System (ADS)
dos Santos Coelho, Leandro; Pessôa, Marcelo Wicthoff
2009-11-01
One of the important applications of the B-spline neural network (BSNN) is to approximate nonlinear functions defined on a compact subset of a Euclidean space in a highly parallel manner. Recently, the BSNN, a type of basis function neural network, has received increasing attention and has been applied in the field of nonlinear identification. BSNNs have the potential to "learn" the process model from input-output data or "learn" fault knowledge from past experience. BSNNs can also be used as function approximators to construct the analytical model for residual generation. However, a BSNN is trained by gradient-based methods that may fall into local minima during the learning procedure. When using feed-forward BSNNs, the quality of approximation depends on the placement of the control points (knots) of the spline functions. This paper describes the application of a modified artificial immune network inspired optimization method - the opt-aiNet - combined with sequences generated by the Hénon map to provide a stochastic search to adjust the control points of a BSNN. The numerical results presented here indicate that artificial immune network optimization methods are useful for building good BSNN models for the nonlinear identification of two case studies: (i) the benchmark Box and Jenkins gas furnace, and (ii) an experimental ball-and-tube system.
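The chaotic driving sequence is straightforward to generate; a minimal sketch of the classical Hénon map is given below (its coupling to opt-aiNet and to the B-spline knot adjustment is not reproduced here, and the map parameters are the standard textbook values rather than anything specific to the paper).

```python
import numpy as np

def henon_sequence(n, a=1.4, b=0.3, x0=0.1, y0=0.1):
    """Classical Henon map: x_{k+1} = 1 - a*x_k^2 + y_k,  y_{k+1} = b*x_k."""
    xs = np.empty(n)
    x, y = x0, y0
    for k in range(n):
        x, y = 1.0 - a * x * x + y, b * x
        xs[k] = x
    return xs

# Map the bounded chaotic values to [0, 1) for use as search perturbations
# in place of a pseudo-random number generator.
seq = henon_sequence(1000)
u = (seq - seq.min()) / (seq.max() - seq.min() + 1e-12)
print(u[:5])
```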
NASA Astrophysics Data System (ADS)
Siegmund, Marc; Pankratov, Oleg
2011-01-01
We show that the exchange-correlation scalar and vector potentials obtained from the optimized effective potential (OEP) equations and from the Krieger-Li-Iafrate (KLI) approximation for the current-density functional theory (CDFT) change under a gauge transformation such that the energy functional remains invariant. This alone does not assure, however, the theory’s compliance with the continuity equation. Using the model of a quantum ring with a broken angular symmetry which is penetrated by a magnetic flux we demonstrate that the physical current density calculated with the exact-exchange CDFT in the KLI approximation violates the continuity condition. In contrast, the current found from a solution of the full OEP equations satisfies this condition. We argue that the continuity violation stems from the fact that the KLI potentials are not (in general) the exact functional derivatives of a gauge-invariant exchange-correlation functional.
NASA Astrophysics Data System (ADS)
Tarantino, Walter; Mendoza, Bernardo S.; Romaniello, Pina; Berger, J. A.; Reining, Lucia
2018-04-01
Many-body perturbation theory is often formulated in terms of an expansion in the dressed instead of the bare Green’s function, and in the screened instead of the bare Coulomb interaction. However, screening can be calculated on different levels of approximation, and it is important to define what is the most appropriate choice. We explore this question by studying a zero-dimensional model (so called ‘one-point model’) that retains the structure of the full equations. We study both linear and non-linear response approximations to the screening. We find that an expansion in terms of the screening in the random phase approximation is the most promising way for an application in real systems. Moreover, by making use of the nonperturbative features of the Kadanoff-Baym equation for the one-body Green’s function, we obtain an approximate solution in our model that is very promising, although its applicability to real systems has still to be explored.
Actuator and aerodynamic modeling for high-angle-of-attack aeroservoelasticity
NASA Technical Reports Server (NTRS)
Brenner, Martin J.
1993-01-01
Accurate prediction of airframe/actuation coupling is required by the imposing demands of modern flight control systems. In particular, for agility enhancement at high angle of attack and low dynamic pressure, structural integration characteristics such as hinge moments, effective actuator stiffness, and airframe/control surface damping can have a significant effect on stability predictions. Actuator responses are customarily represented with low-order transfer functions matched to actuator test data, and control surface stiffness is often modeled as a linear spring. The inclusion of the physical properties of actuation and its installation on the airframe is therefore addressed in this paper using detailed actuator models which consider the physical, electrical, and mechanical elements of actuation. The aeroservoelastic analysis procedure is described in which the actuators are modeled as detailed high-order transfer functions and as approximate low-order transfer functions. The impacts of unsteady aerodynamic modeling on aeroservoelastic stability are also investigated in this paper by varying the order of approximation, or number of aerodynamic lag states, in the analysis. Test data from a thrust-vectoring configuration of an F/A-18 aircraft are compared to predictions to determine the effects on accuracy as a function of modeling complexity.
Prediction of spectral acceleration response ordinates based on PGA attenuation
Graizer, V.; Kalkan, E.
2009-01-01
Developed herein is a new peak ground acceleration (PGA)-based predictive model for 5% damped pseudospectral acceleration (SA) ordinates of the free-field horizontal component of ground motion from shallow-crustal earthquakes. The predictive model of ground motion spectral shape (i.e., normalized spectrum) is generated as a continuous function of a few parameters. The proposed model eliminates the classical exhaustive matrix of estimator coefficients and provides significant ease in its implementation. It is structured on the Next Generation Attenuation (NGA) database with a number of additions from recent Californian events, including the 2003 San Simeon and 2004 Parkfield earthquakes. A unique feature of the model is its new functional form explicitly integrating PGA as a scaling factor. The spectral shape model is parameterized within an approximation function using moment magnitude, closest distance to the fault (fault distance) and VS30 (average shear-wave velocity in the upper 30 m) as independent variables. Mean values of its estimator coefficients were computed by fitting an approximation function to the spectral shape of each record using robust nonlinear optimization. The proposed spectral shape model is independent of the PGA attenuation, allowing utilization of various PGA attenuation relations to estimate the response spectrum of earthquake recordings.
Arooj, Mahreen; Thangapandian, Sundarapandian; John, Shalini; Hwang, Swan; Park, Jong K; Lee, Keun W
2012-12-01
To provide a new idea for drug design, a computational investigation is performed on chymase and its novel 1,4-diazepane-2,5-dione inhibitors that explores the crucial molecular features contributing to binding specificity. Molecular docking studies of inhibitors within the active site of chymase were carried out to rationalize the inhibitory properties of these compounds and understand their inhibition mechanism. The density functional theory method was used to optimize molecular structures with the subsequent analysis of highest occupied molecular orbital, lowest unoccupied molecular orbital, and molecular electrostatic potential maps, which revealed that negative potentials near the 1,4-diazepane-2,5-dione ring are essential for effective binding of inhibitors at the active site of the enzyme. The Bayesian model with a receiver operating characteristic (ROC) statistic of 0.82 also identified arylsulfonyl and aminocarbonyl as the molecular features favoring and not favoring inhibition of chymase, respectively. Moreover, genetic function approximation was applied to construct 3D quantitative structure-activity relationship models. Two models (genetic function approximation model 1, r² = 0.812, and genetic function approximation model 2, r² = 0.783) performed better in terms of correlation coefficients and cross-validation analysis. In general, this study is used as an example to illustrate how the combined use of 2D/3D quantitative structure-activity relationship modeling techniques, molecular docking, frontier molecular orbital density fields (highest occupied molecular orbital and lowest unoccupied molecular orbital), and molecular electrostatic potential analysis may be useful to gain insight into the binding mechanism between an enzyme and its inhibitors. © 2012 John Wiley & Sons A/S.
A Modeling and Data Analysis of Laser Beam Propagation in the Maritime Domain
2015-05-18
One approach to computing pdfs is the Kernel Density Method (Reference [9] has an introduction to the method), which we apply to compute the pdf of our ... The project has two parts: (1) a computational analysis of different probability density function approximation techniques; and (2) preliminary steps towards developing a ...
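As a generic illustration of the kernel density method mentioned in the abstract (not the report's own implementation), the sketch below estimates a pdf from synthetic samples with a Gaussian kernel.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(1)
samples = rng.normal(loc=0.0, scale=1.0, size=2000)  # placeholder measurement samples

kde = gaussian_kde(samples)        # Gaussian kernels, bandwidth from Scott's rule by default
grid = np.linspace(-4.0, 4.0, 201)
pdf_estimate = kde(grid)           # estimated probability density evaluated on the grid

print(float(pdf_estimate.max()))
```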
Gaussian functional regression for output prediction: Model assimilation and experimental design
NASA Astrophysics Data System (ADS)
Nguyen, N. C.; Peraire, J.
2016-03-01
In this paper, we introduce a Gaussian functional regression (GFR) technique that integrates multi-fidelity models with model reduction to efficiently predict the input-output relationship of a high-fidelity model. The GFR method combines the high-fidelity model with a low-fidelity model to provide an estimate of the output of the high-fidelity model in the form of a posterior distribution that can characterize uncertainty in the prediction. A reduced basis approximation is constructed upon the low-fidelity model and incorporated into the GFR method to yield an inexpensive posterior distribution of the output estimate. As this posterior distribution depends crucially on a set of training inputs at which the high-fidelity models are simulated, we develop a greedy sampling algorithm to select the training inputs. Our approach results in an output prediction model that inherits the fidelity of the high-fidelity model and has the computational complexity of the reduced basis approximation. Numerical results are presented to demonstrate the proposed approach.
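The general flavour of combining a cheap model with a statistical correction can be sketched as follows: a Gaussian process is fitted to the discrepancy between a hypothetical high-fidelity and low-fidelity model at a few training inputs. This is a simplified stand-in, not the GFR formulation or the reduced basis construction of the paper; all models and inputs below are placeholders.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

def high_fidelity(x):        # hypothetical expensive model
    return np.sin(3 * x) + 0.2 * x**2

def low_fidelity(x):         # hypothetical cheap approximation
    return np.sin(3 * x)

X_train = np.linspace(0.0, 2.0, 8).reshape(-1, 1)                     # few expensive runs
residual = high_fidelity(X_train.ravel()) - low_fidelity(X_train.ravel())

gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(), normalize_y=True)
gp.fit(X_train, residual)                                             # learn the discrepancy

X_test = np.linspace(0.0, 2.0, 50).reshape(-1, 1)
mean_resid, std_resid = gp.predict(X_test, return_std=True)
prediction = low_fidelity(X_test.ravel()) + mean_resid                # corrected output estimate
```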
NASA Astrophysics Data System (ADS)
Ben Abdessalem, Anis; Dervilis, Nikolaos; Wagg, David; Worden, Keith
2018-01-01
This paper will introduce the use of the approximate Bayesian computation (ABC) algorithm for model selection and parameter estimation in structural dynamics. ABC is a likelihood-free method typically used when the likelihood function is either intractable or cannot be approached in a closed form. To circumvent the evaluation of the likelihood function, simulation from a forward model is at the core of the ABC algorithm. The algorithm offers the possibility to use different metrics and summary statistics representative of the data to carry out Bayesian inference. The efficacy of the algorithm in structural dynamics is demonstrated through three different illustrative examples of nonlinear system identification: cubic and cubic-quintic models, the Bouc-Wen model and the Duffing oscillator. The obtained results suggest that ABC is a promising alternative to deal with model selection and parameter estimation issues, specifically for systems with complex behaviours.
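A minimal sketch of likelihood-free rejection ABC follows (only the generic idea; the forward model, prior, summary statistics, and tolerance below are placeholders, not those used in the paper's case studies).

```python
import numpy as np

rng = np.random.default_rng(2)

def forward_model(theta, n=100):
    """Hypothetical forward model: noisy response with unknown stiffness parameter theta."""
    x = np.linspace(0.0, 1.0, n)
    return theta * x**3 + 0.05 * rng.standard_normal(n)

observed = forward_model(2.0)                       # pretend these are the measured data
summary = lambda y: np.array([y.mean(), y.std()])   # summary statistics of the response

accepted, tolerance = [], 0.05
for _ in range(20000):
    theta = rng.uniform(0.0, 5.0)                   # draw from a flat prior
    dist = np.linalg.norm(summary(forward_model(theta)) - summary(observed))
    if dist < tolerance:                            # keep draws whose simulations match the data
        accepted.append(theta)

if accepted:
    print("approximate posterior mean:", np.mean(accepted))
```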
Models of determining deformations
NASA Astrophysics Data System (ADS)
Gladilin, V. N.
2016-12-01
In recent years, a lot of functions have been designed to determine deformation values, which occur mostly as a result of the settlement of structures and industrial equipment. Some authors suggest such advanced mathematical functions for approximating deformations as general methods for the determination of deformations. The article describes models of deformations as physical processes. When comparing static, kinematic and dynamic models, it was found that the dynamic model reflects the deformation of structures and industrial equipment most reliably.
Atmospheric Turbulence Modeling for Aerospace Vehicles: Fractional Order Fit
NASA Technical Reports Server (NTRS)
Kopasakis, George (Inventor)
2015-01-01
An improved model for simulating atmospheric disturbances is disclosed. A Kolmogorov spectrum may be scaled to convert it into a finite-energy von Karman spectrum, and a fractional-order pole-zero transfer function (TF) may be derived from the von Karman spectrum. The fractional-order atmospheric turbulence may then be approximated with an integer-order pole-zero TF fit, and the approximation may be stored in memory.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mueller, Juliane
MISO is an optimization framework for solving computationally expensive mixed-integer, black-box, global optimization problems. MISO uses surrogate models to approximate the computationally expensive objective function. Hence, derivative information, which is generally unavailable for black-box simulation objective functions, is not needed. MISO allows the user to choose the initial experimental design strategy, the type of surrogate model, and the sampling strategy.
Numerical scheme approximating solution and parameters in a beam equation
NASA Astrophysics Data System (ADS)
Ferdinand, Robert R.
2003-12-01
We present a mathematical model which describes vibration in a metallic beam about its equilibrium position. This model takes the form of a nonlinear second-order (in time) and fourth-order (in space) partial differential equation with boundary and initial conditions. A finite-element Galerkin approximation scheme is used to estimate model solution. Infinite-dimensional model parameters are then estimated numerically using an inverse method procedure which involves the minimization of a least-squares cost functional. Numerical results are presented and future work to be done is discussed.
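The class of equations described above (nonlinear, second order in time and fourth order in space, with boundary and initial conditions) has the generic form sketched below; the symbols and the particular nonlinearity are illustrative and are not the paper's exact formulation.

```latex
\rho A\, u_{tt}(x,t) + \gamma\, u_t(x,t) + \big(EI\, u_{xx}(x,t)\big)_{xx}
  + \mathcal{N}\!\big(u, u_x, u_{xx}\big) = f(x,t), \qquad 0 < x < L,\; t > 0,
```

with, for example, clamped boundary conditions $u = u_x = 0$ at $x = 0$ and $x = L$, and prescribed initial displacement $u(x,0)$ and velocity $u_t(x,0)$.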
Self-consistency in the phonon space of the particle-phonon coupling model
NASA Astrophysics Data System (ADS)
Tselyaev, V.; Lyutorovich, N.; Speth, J.; Reinhard, P.-G.
2018-04-01
In this paper, a nonlinear generalization of the time blocking approximation (TBA) is presented. The TBA is one of the versions of the extended random-phase approximation (RPA) developed within the Green-function method and the particle-phonon coupling model. In the generalized version of the TBA, the self-consistency principle is extended to the phonon space of the model. The numerical examples show that this nonlinear version of the TBA leads to convergence of the results with respect to enlarging the phonon space of the model.
Comparison of universal approximators incorporating partial monotonicity by structure.
Minin, Alexey; Velikova, Marina; Lang, Bernhard; Daniels, Hennie
2010-05-01
Neural networks applied in control loops and safety-critical domains have to meet more requirements than just the overall best function approximation. On the one hand, a small approximation error is required; on the other hand, the smoothness and the monotonicity of selected input-output relations have to be guaranteed. Otherwise, the stability of most of the control laws is lost. In this article we compare two neural network-based approaches incorporating partial monotonicity by structure, namely the Monotonic Multi-Layer Perceptron (MONMLP) network and the Monotonic MIN-MAX (MONMM) network. We show the universal approximation capabilities of both types of network for partially monotone functions. On a number of datasets, we investigate the advantages and disadvantages of these approaches related to approximation performance, training of the model and convergence. 2009 Elsevier Ltd. All rights reserved.
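As a rough illustration of building monotonicity in by structure (a simplified sketch, not the MONMLP or MONMM implementations compared in the paper), the network below constrains the weights attached to the monotone input to be positive via an exponential reparameterization; sizes, data, and parameter values are placeholders.

```python
import numpy as np

rng = np.random.default_rng(3)

# Weights for the monotone input are stored as logs, so the effective weights
# exp(log_w_mono) are always positive; together with positive output weights this
# makes the network output nondecreasing in that input. The other input is free.
log_w_mono = rng.normal(size=(1, 8))
w_free = rng.normal(size=(1, 8))
b_hidden = np.zeros(8)
log_w_out = rng.normal(size=8)
b_out = 0.0

def forward(x_mono, x_free):
    hidden = np.tanh(x_mono * np.exp(log_w_mono) + x_free * w_free + b_hidden)
    return hidden @ np.exp(log_w_out) + b_out

# Output is nondecreasing in x_mono for any fixed x_free:
print(forward(0.1, 0.5) <= forward(0.9, 0.5))
```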
What is the correct cost functional for variational data assimilation?
NASA Astrophysics Data System (ADS)
Bröcker, Jochen
2018-03-01
Variational approaches to data assimilation, and weak-constraint four-dimensional variational data assimilation (WC-4DVar) in particular, are important in the geosciences but also in other communities (often under different names). The cost functions and the resulting optimal trajectories may have a probabilistic interpretation, for instance by linking data assimilation with maximum a posteriori (MAP) estimation. This is possible in particular if the unknown trajectory is modelled as the solution of a stochastic differential equation (SDE), as is increasingly the case in weather forecasting and climate modelling. In this situation, the MAP estimator (or "most probable path" of the SDE) is obtained by minimising the Onsager-Machlup functional. Although this fact is well known, there seems to be some confusion in the literature, with the energy (or "least squares") functional sometimes being claimed to yield the most probable path. The first aim of this paper is to address this confusion and show that the energy functional does not, in general, provide the most probable path. The second aim is to discuss the implications in practice. Although the mentioned results pertain to stochastic models in continuous time, they do have consequences in practice where SDEs are approximated by discrete time schemes. It turns out that using an approximation to the SDE and calculating its most probable path does not necessarily yield a good approximation to the most probable path of the SDE proper. This suggests that even in discrete time, a version of the Onsager-Machlup functional should be used, rather than the energy functional, at least if the solution is to be interpreted as a MAP estimator.
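For the simplest case of a scalar SDE with constant diffusion, $dX_t = b(X_t)\,dt + \sigma\,dW_t$, the distinction discussed in the abstract can be written schematically as follows (an illustrative special case, not the paper's general setting):

```latex
% energy ("least squares") functional
E[x] \;=\; \frac{1}{2\sigma^{2}} \int_{0}^{T} \big(\dot{x}(t) - b(x(t))\big)^{2}\, dt ,
\qquad
% Onsager-Machlup functional: extra drift-divergence term
I[x] \;=\; E[x] \;+\; \frac{1}{2} \int_{0}^{T} b'(x(t))\, dt .
```

The most probable path minimises $I[x]$, not $E[x]$; the two differ by the drift-divergence term.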
Multicategorical Spline Model for Item Response Theory.
ERIC Educational Resources Information Center
Abrahamowicz, Michal; Ramsay, James O.
1992-01-01
A nonparametric multicategorical model for multiple-choice data is proposed as an extension of the binary spline model of J. O. Ramsay and M. Abrahamowicz (1989). Results of two Monte Carlo studies illustrate the model, which approximates probability functions by rational splines. (SLD)
The relationship between stochastic and deterministic quasi-steady state approximations.
Kim, Jae Kyoung; Josić, Krešimir; Bennett, Matthew R
2015-11-23
The quasi steady-state approximation (QSSA) is frequently used to reduce deterministic models of biochemical networks. The resulting equations provide a simplified description of the network in terms of non-elementary reaction functions (e.g. Hill functions). Such deterministic reductions are frequently a basis for heuristic stochastic models in which non-elementary reaction functions are used to define reaction propensities. Despite their popularity, it remains unclear when such stochastic reductions are valid. It is frequently assumed that the stochastic reduction can be trusted whenever its deterministic counterpart is accurate. However, a number of recent examples show that this is not necessarily the case. Here we explain the origin of these discrepancies, and demonstrate a clear relationship between the accuracy of the deterministic and the stochastic QSSA for examples widely used in biological systems. With an analysis of a two-state promoter model, and numerical simulations for a variety of other models, we find that the stochastic QSSA is accurate whenever its deterministic counterpart provides an accurate approximation over a range of initial conditions which cover the likely fluctuations from the quasi steady-state (QSS). We conjecture that this relationship provides a simple and computationally inexpensive way to test the accuracy of reduced stochastic models using deterministic simulations. The stochastic QSSA is one of the most popular multi-scale stochastic simulation methods. While the use of QSSA, and the resulting non-elementary functions has been justified in the deterministic case, it is not clear when their stochastic counterparts are accurate. In this study, we show how the accuracy of the stochastic QSSA can be tested using their deterministic counterparts providing a concrete method to test when non-elementary rate functions can be used in stochastic simulations.
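A standard example of such a non-elementary reduction is the Michaelis-Menten form obtained from the deterministic QSSA; it is shown here only as a generic illustration, with symbols not tied to the paper's specific two-state promoter or other models:

```latex
\frac{dS}{dt} \;=\; -\,\frac{v_{\max}\, S}{K_M + S},
\qquad
a(S) \;=\; \frac{v_{\max}\, S}{K_M + S}
\quad \text{(propensity used in the heuristic stochastic model).}
```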
Macrocell path loss prediction using artificial intelligence techniques
NASA Astrophysics Data System (ADS)
Usman, Abraham U.; Okereke, Okpo U.; Omizegba, Elijah E.
2014-04-01
The prediction of propagation loss is a practical non-linear function approximation problem which linear regression or auto-regression models are limited in their ability to handle. However, some computational intelligence techniques such as artificial neural networks (ANNs) and adaptive neuro-fuzzy inference systems (ANFISs) have been shown to have great ability to handle non-linear function approximation and prediction problems. In this study, a multiple layer perceptron neural network (MLP-NN), a radial basis function neural network (RBF-NN) and an ANFIS network were trained using actual signal strength measurements taken in certain suburban areas of the Bauchi metropolis, Nigeria. The trained networks were then used to predict propagation losses in the stated areas under differing conditions. The predictions were compared with the prediction accuracy of the popular Hata model. It was observed that the ANFIS model gave a better fit in all cases, having higher R² values in each case, and on average is more robust than the MLP and RBF models as it generalises better to different data.
Lu, Dan; Zhang, Guannan; Webster, Clayton G.; ...
2016-12-30
In this paper, we develop an improved multilevel Monte Carlo (MLMC) method for estimating cumulative distribution functions (CDFs) of a quantity of interest, coming from numerical approximation of large-scale stochastic subsurface simulations. Compared with Monte Carlo (MC) methods, which require a significantly large number of high-fidelity model executions to achieve a prescribed accuracy when computing statistical expectations, MLMC methods were originally proposed to significantly reduce the computational cost with the use of multifidelity approximations. The improved performance of the MLMC methods depends strongly on the decay of the variance of the integrand as the level increases. However, the main challenge in estimating CDFs is that the integrand is a discontinuous indicator function whose variance decays slowly. To address this difficult task, we approximate the integrand using a smoothing function that accelerates the decay of the variance. In addition, we design a novel a posteriori optimization strategy to calibrate the smoothing function, so as to balance the computational gain and the approximation error. The combined proposed techniques are integrated into a very general and practical algorithm that can be applied to a wide range of subsurface problems for high-dimensional uncertainty quantification, such as a fine-grid oil reservoir model considered in this effort. The numerical results reveal that with the use of the calibrated smoothing function, the improved MLMC technique significantly reduces the computational complexity compared to the standard MC approach. Finally, we discuss several factors that affect the performance of the MLMC method and provide guidance for effective and efficient usage in practice.
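The role of a smoothing function can be illustrated with a toy example in which the discontinuous indicator 1{Q ≤ q} is replaced by a smooth sigmoid before averaging; this is only a generic sketch and does not reproduce the paper's calibrated smoothing or the MLMC level structure.

```python
import numpy as np

rng = np.random.default_rng(4)
samples = rng.lognormal(mean=0.0, sigma=0.5, size=10000)  # stand-in for a quantity of interest
q = 1.5                                                   # CDF evaluation point

indicator_cdf = np.mean(samples <= q)                     # discontinuous integrand

def smoothed_indicator(x, q, delta=0.1):
    """Smooth approximation of the indicator 1{x <= q}; delta sets the smoothing width."""
    return 1.0 / (1.0 + np.exp((x - q) / delta))

smoothed_cdf = np.mean(smoothed_indicator(samples, q))

print(indicator_cdf, smoothed_cdf)
```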
Kobayashi, Seiji
2002-05-10
A point-spread function (PSF) is commonly used as a model of an optical disk readout channel. However, the model given by the PSF does not contain the quadratic distortion generated by the photo-detection process. We introduce a model for calculating an approximation of the quadratic component of a signal. We show that this model can be further simplified when a read-only-memory (ROM) disk is assumed. We introduce an edge-spread function by which a simple nonlinear model of an optical ROM disk readout channel is created.
Arigovindan, Muthuvel; Shaevitz, Joshua; McGowan, John; Sedat, John W; Agard, David A
2010-03-29
We address the problem of computational representation of image formation in 3D widefield fluorescence microscopy with depth varying spherical aberrations. We first represent 3D depth-dependent point spread functions (PSFs) as a weighted sum of basis functions that are obtained by principal component analysis (PCA) of experimental data. This representation is then used to derive an approximating structure that compactly expresses the depth variant response as a sum of few depth invariant convolutions pre-multiplied by a set of 1D depth functions, where the convolving functions are the PCA-derived basis functions. The model offers an efficient and convenient trade-off between complexity and accuracy. For a given number of approximating PSFs, the proposed method results in a much better accuracy than the strata based approximation scheme that is currently used in the literature. In addition to yielding better accuracy, the proposed methods automatically eliminate the noise in the measured PSFs.
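The approximating structure described above (a few depth-invariant convolutions weighted by 1D depth functions) can be sketched schematically as follows; the basis PSFs, depth weights, and object below are random placeholders rather than the PCA-derived quantities of the paper.

```python
import numpy as np
from scipy.signal import fftconvolve

rng = np.random.default_rng(5)
nz, ny, nx, K = 16, 64, 64, 3                     # volume size and number of basis PSFs

obj = rng.random((nz, ny, nx))                    # placeholder 3D object
basis_psfs = [rng.random((7, 7)) for _ in range(K)]
basis_psfs = [p / p.sum() for p in basis_psfs]    # normalized basis PSFs
depth_weights = rng.random((K, nz))               # a_k(z): 1D depth functions

# image ~ sum_k [ PSF_k convolved with a_k(z)-weighted object ]
image = np.zeros_like(obj)
for k in range(K):
    weighted = depth_weights[k][:, None, None] * obj      # pre-multiply by a_k(z)
    for z in range(nz):
        image[z] += fftconvolve(weighted[z], basis_psfs[k], mode="same")
```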
TOPICAL REVIEW: Nonlinear aspects of the renormalization group flows of Dyson's hierarchical model
NASA Astrophysics Data System (ADS)
Meurice, Y.
2007-06-01
We review recent results concerning the renormalization group (RG) transformation of Dyson's hierarchical model (HM). This model can be seen as an approximation of a scalar field theory on a lattice. We introduce the HM and show that its large symmetry group drastically simplifies the block-spinning procedure. Several equivalent forms of the recursion formula are presented with unified notations. Rigorous and numerical results concerning the recursion formula are summarized. It is pointed out that the recursion formula of the HM is inequivalent to both Wilson's approximate recursion formula and Polchinski's equation in the local potential approximation (despite the very small difference with the exponents of the latter). We draw a comparison between the RG of the HM and functional RG equations in the local potential approximation. The construction of the linear and nonlinear scaling variables is discussed in an operational way. We describe the calculation of non-universal critical amplitudes in terms of the scaling variables of two fixed points. This question appears as a problem of interpolation between these fixed points. Universal amplitude ratios are calculated. We discuss the large-N limit and the complex singularities of the critical potential calculable in this limit. The interpolation between the HM and more conventional lattice models is presented as a symmetry breaking problem. We briefly introduce models with an approximate supersymmetry. One important goal of this review is to present a configuration-space counterpart, suitable for lattice formulations, of functional RG equations formulated in momentum space (often called exact RG equations and abbreviated ERGE).
NASA Astrophysics Data System (ADS)
Aymard, François; Gulminelli, Francesca; Margueron, Jérôme
2016-08-01
The problem of determination of nuclear surface energy is addressed within the framework of the extended Thomas Fermi (ETF) approximation using Skyrme functionals. We propose an analytical model for the density profiles with variationally determined diffuseness parameters. In this first paper, we consider the case of symmetric nuclei. In this situation, the ETF functional can be exactly integrated, leading to an analytical formula expressing the surface energy as a function of the couplings of the energy functional. The importance of non-local terms is stressed and it is shown that they cannot be deduced simply from the local part of the functional, as it was suggested in previous works.
Ionization potential depression and optical spectra in a Debye plasma model
NASA Astrophysics Data System (ADS)
Lin, Chengliang; Röpke, Gerd; Reinholz, Heidi; Kraeft, Wolf-Dietrich
2017-11-01
We show how optical spectra in dense plasmas are determined by the shift of energy levels as well as the broadening owing to collisions with the plasma particles. In the lowest approximation, the interaction with the plasma particles is described by the RPA dielectric function, leading to the Debye shift of the continuum edge. The bound states remain nearly unshifted, and their broadening is calculated in the Born approximation. The roles of ionization potential depression as well as the Inglis-Teller effect are shown. The model calculations have to be improved by going beyond the lowest (RPA) approximation when applied to WDM spectra.
Nagel, Corey; Beach, Jack; Iribagiza, Chantal; Thomas, Evan A
2015-12-15
In rural sub-Saharan Africa, where handpumps are common, 10-67% are nonfunctional at any one time, and many never get repaired. Increased reliability requires improved monitoring and responsiveness of maintenance providers. In 2014, 181 cellular enabled water pump use sensors were installed in three provinces of Rwanda. In three arms, the nominal maintenance model was compared against a "best practice" circuit rider model, and an "ambulance" service model. In only the ambulance model was the sensor data available to the implementer, and used to dispatch technicians. The study ran for seven months in 2014-2015. In the study period, the nominal maintenance group had a median time to successful repair of approximately 152 days, with a mean per-pump functionality of about 68%. In the circuit rider group, the median time to successful repair was nearly 57 days, with a per-pump functionality mean of nearly 73%. In the ambulance service group, the successful repair interval was nearly 21 days with a functionality mean of nearly 91%. An indicative cost analysis suggests that the cost per functional pump per year is approximately similar between the three models. However, the benefits of reliable water service may justify greater focus on servicing models over installation models.
A Poisson process approximation for generalized K-S confidence regions
NASA Technical Reports Server (NTRS)
Arsham, H.; Miller, D. R.
1982-01-01
One-sided confidence regions for continuous cumulative distribution functions are constructed using empirical cumulative distribution functions and the generalized Kolmogorov-Smirnov distance. The band width of such regions becomes narrower in the right or left tail of the distribution. To avoid tedious computation of confidence levels and critical values, an approximation based on the Poisson process is introduced. This approximation provides a conservative confidence region; moreover, the approximation error decreases monotonically to 0 as sample size increases. Critical values necessary for implementation are given. Applications are made to the areas of risk analysis, investment modeling, reliability assessment, and analysis of fault tolerant systems.
NASA Technical Reports Server (NTRS)
Cogley, A. C.; Borucki, W. J.
1976-01-01
When incorporating formulations of instantaneous solar heating or photolytic rates as functions of altitude and sun angle into long range forecasting models, it may be desirable to replace the time integrals by daily average rates that are simple functions of latitude and season. This replacement is accomplished by approximating the integral over the solar day by a pure exponential. This gives a daily average rate as a multiplication factor times the instantaneous rate evaluated at an appropriate sun angle. The accuracy of the exponential approximation is investigated by a sample calculation using an instantaneous ozone heating formulation available in the literature.
Modified hyperbolic sine model for titanium dioxide-based memristive thin films
NASA Astrophysics Data System (ADS)
Abu Bakar, Raudah; Syahirah Kamarozaman, Nur; Fazlida Hanim Abdullah, Wan; Herman, Sukreen Hana
2018-03-01
Since the emergence of the memristor as the newest fundamental circuit element, studies on memristor modeling have evolved. To date, the developed models were based on the linear model, the linear ionic drift model using different window functions, the tunnelling barrier model and the hyperbolic-sine function based model. Although the hyperbolic-sine function model could predict the memristor electrical properties, the model was not well fitted to the experimental data. In order to improve the performance of the hyperbolic-sine function model, the state variable equation was modified. On the one hand, the addition of a window function could not provide an improved fitting; on the other hand, multiplying Yakopcic's state variable model with Chang's model resulted in closer agreement with the TiO2 thin film experimental data. The percentage error was approximately 2.15%.
NASA Astrophysics Data System (ADS)
Jurčo, Branislav
We describe an integrable model, related to the Gaudin magnet, and its relation to the matrix model of Brézin, Itzykson, Parisi and Zuber. The relation is based on the Bethe ansatz for the integrable model and its interpretation using orthogonal polynomials and the saddle point approximation. The large-N limit of the matrix model corresponds to the thermodynamic limit of the integrable system. In this limit the (functional) Bethe ansatz is the same as the generating function for correlators of the matrix model.
NASA Astrophysics Data System (ADS)
Allphin, Devin
Computational fluid dynamics (CFD) solution approximations for complex fluid flow problems have become a common and powerful engineering analysis technique. These tools, though qualitatively useful, remain limited in practice by their underlying inverse relationship between simulation accuracy and overall computational expense. While a great volume of research has focused on remedying these issues inherent to CFD, one traditionally overlooked area of resource reduction for engineering analysis concerns the basic definition and determination of functional relationships for the studied fluid flow variables. This artificial relationship-building technique, called meta-modeling or surrogate/offline approximation, uses design of experiments (DOE) theory to efficiently approximate non-physical coupling between the variables of interest in a fluid flow analysis problem. By mathematically approximating these variables, DOE methods can effectively reduce the required quantity of CFD simulations, freeing computational resources for other analytical focuses. An idealized interpretation of a fluid flow problem can also be employed to create suitably accurate approximations of fluid flow variables for the purposes of engineering analysis. When used in parallel with a meta-modeling approximation, a closed-form approximation can provide useful feedback concerning proper construction, suitability, or even necessity of an offline approximation tool. It also provides a short-circuit pathway for further reducing the overall computational demands of a fluid flow analysis, again freeing resources for otherwise unsuitable resource expenditures. To validate these inferences, a design optimization problem was presented requiring the inexpensive estimation of aerodynamic forces applied to a valve operating on a simulated piston-cylinder heat engine. The determination of these forces was to be found using parallel surrogate and exact approximation methods, thus evidencing the comparative benefits of this technique. For the offline approximation, latin hypercube sampling (LHS) was used for design space filling across four (4) independent design variable degrees of freedom (DOF). Flow solutions at the mapped test sites were converged using STAR-CCM+ with aerodynamic forces from the CFD models then functionally approximated using Kriging interpolation. For the closed-form approximation, the problem was interpreted as an ideal 2-D converging-diverging (C-D) nozzle, where aerodynamic forces were directly mapped by application of the Euler equation solutions for isentropic compression/expansion. A cost-weighting procedure was finally established for creating model-selective discretionary logic, with a synthesized parallel simulation resource summary provided.
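The offline-approximation workflow described above can be sketched generically as Latin hypercube sampling of the design space followed by Kriging (Gaussian-process) interpolation of the sampled responses; the objective function below is a placeholder for the CFD-derived aerodynamic force, and the library calls shown are one possible implementation, not the tools used in the study.

```python
import numpy as np
from scipy.stats import qmc
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def expensive_response(x):
    """Placeholder for a CFD-computed aerodynamic force over 4 design variables."""
    return np.sin(x[:, 0]) + x[:, 1] ** 2 + 0.5 * x[:, 2] * x[:, 3]

sampler = qmc.LatinHypercube(d=4, seed=0)        # space-filling design over 4 DOF
X_train = sampler.random(n=40)
y_train = expensive_response(X_train)

kriging = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
kriging.fit(X_train, y_train)                    # Kriging surrogate of the sampled responses

X_new = sampler.random(n=5)
y_pred, y_std = kriging.predict(X_new, return_std=True)  # predictions with uncertainty
```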
Wu, Wensheng; Zhang, Canyang; Lin, Wenjing; Chen, Quan; Guo, Xindong; Qian, Yu; Zhang, Lijuan
2015-01-01
Self-assembled nano-micelles of amphiphilic polymers represent a novel anticancer drug delivery system. However, their full clinical utilization remains challenging because the quantitative structure-property relationship (QSPR) between the polymer structure and the efficacy of micelles as a drug carrier is poorly understood. Here, we developed a series of QSPR models to account for the drug loading capacity of polymeric micelles using the genetic function approximation (GFA) algorithm. These models were further evaluated by internal and external validation and a Y-randomization test in terms of stability and generalization, yielding an optimization model that is applicable to an expanded materials regime. As confirmed by experimental data, the relationship between microstructure and drug loading capacity can be well-simulated, suggesting that our models are readily applicable to the quantitative evaluation of the drug-loading capacity of polymeric micelles. Our work may offer a pathway to the design of formulation experiments.
Moncho, Salvador; Autschbach, Jochen
2010-01-12
A benchmark study for relativistic density functional calculations of NMR spin-spin coupling constants has been performed. The test set contained 47 complexes with heavy metal atoms (W, Pt, Hg, Tl, Pb) with a total of 88 coupling constants involving one or two heavy metal atoms. One-, two-, three-, and four-bond spin-spin couplings have been computed at different levels of theory (nonhybrid vs hybrid DFT, scalar vs two-component relativistic). The computational model was based on geometries fully optimized at the BP/TZP scalar relativistic zeroth-order regular approximation (ZORA) and the conductor-like screening model (COSMO) to include solvent effects. The NMR computations also employed the continuum solvent model. Computations in the gas phase were performed in order to assess the importance of the solvation model. The relative median deviations between various computational models and experiment were found to range between 13% and 21%, with the highest-level computational model (hybrid density functional computations including scalar plus spin-orbit relativistic effects, the COSMO solvent model, and a Gaussian finite-nucleus model) performing best.
NASA Astrophysics Data System (ADS)
García-Aldea, David; Alvarellos, J. E.
2009-03-01
We present several nonlocal exchange energy density functionals that reproduce the linear response function of the free electron gas. These nonlocal functionals are constructed following a procedure similar to that used previously for nonlocal kinetic energy density functionals by Chacón-Alvarellos-Tarazona, García-González et al., Wang-Govind-Carter and García-Aldea-Alvarellos. The exchange response function is not known, but we have used the approximate response function developed by Utsumi and Ichimaru, though we must remark that the same ansatz can be used to reproduce any other response function with the same scaling properties. We have developed two families of new nonlocal functionals: one is constructed with a mathematical structure based on the LDA approximation - the Dirac functional for the exchange - and for the second one the structure of the second-order gradient expansion approximation is taken as a model. The functionals are constructed in such a way that they can be used in localized systems (using real-space calculations) and in extended systems (using momentum space, and achieving quasilinear scaling with the system size if a constant reference electron density is defined).
Balance of baryon number in the quark coalescence model
NASA Astrophysics Data System (ADS)
Bialas, A.; Rafelski, J.
2006-02-01
The charge and baryon balance functions are studied in the coalescence hadronization mechanism of the quark-gluon plasma. Assuming that in the plasma phase the q q̄ pairs form uncorrelated clusters whose decay is also uncorrelated, one can understand the observed small width of the charge balance function in the Gaussian approximation. The coalescence model predicts an even smaller width for the baryon-antibaryon balance function: σ_{BB̄}/σ_{+−} = √(2/3).
NASA Astrophysics Data System (ADS)
Inochkin, F. M.; Kruglov, S. K.; Bronshtein, I. G.; Kompan, T. A.; Kondratjev, S. V.; Korenev, A. S.; Pukhov, N. F.
2017-06-01
A new method for precise subpixel edge estimation is presented. The principle of the method is iterative image approximation in 2D with subpixel accuracy until a simulated image matching the acquired image is found. A numerical image model is presented consisting of three parts: an edge model, an object and background brightness distribution model, and a lens aberration model including diffraction. The optimal values of the model parameters are determined by means of conjugate-gradient numerical optimization of a merit function corresponding to the L2 distance between the acquired and simulated images. A computationally efficient procedure for the merit function calculation, along with a sufficient gradient approximation, is described. Subpixel-accuracy image simulation is performed in the Fourier domain with theoretically unlimited precision of edge point locations. The method is capable of compensating lens aberrations and obtaining edge information with increased resolution. Experimental verification of the method, using a digital micromirror device to physically simulate an object with known edge geometry, is shown. Experimental results for various high-temperature materials within the temperature range of 1000°C to 2400°C are presented.
A rapid radiative transfer model for reflection of solar radiation
NASA Technical Reports Server (NTRS)
Xiang, X.; Smith, E. A.; Justus, C. G.
1994-01-01
A rapid analytical radiative transfer model for reflection of solar radiation in plane-parallel atmospheres is developed based on the Sobolev approach and the delta function transformation technique. A distinct advantage of this model over alternative two-stream solutions is that, in addition to yielding the irradiance components, which turn out to be mathematically equivalent to the delta-Eddington approximation, the radiance field can also be expanded in a mathematically consistent fashion. Tests of the model against a more precise multistream discrete ordinate model over a wide range of input parameters demonstrate that the new approximate method typically produces average radiance differences of less than 5%, with worst average differences of approximately 10%-15%. By the same token, the computational speed of the new model is some tens to thousands of times faster than that of the more precise model when the latter's stream resolution is set to generate precise calculations.
On the "Optimal" Choice of Trial Functions for Modelling Potential Fields
NASA Astrophysics Data System (ADS)
Michel, Volker
2015-04-01
There are many trial functions (e.g. on the sphere) available which can be used for the modelling of a potential field. Among them are orthogonal polynomials such as spherical harmonics and radial basis functions such as spline or wavelet basis functions. Their pros and cons have been widely discussed in the last decades. We present an algorithm, the Regularized Functional Matching Pursuit (RFMP), which is able to choose trial functions of different kinds in order to combine them to a stable approximation of a potential field. One main advantage of the RFMP is that the constructed approximation inherits the advantages of the different basis systems. By including spherical harmonics, coarse global structures can be represented in a sparse way. However, the additional use of spline basis functions allows a stable handling of scattered data grids. Furthermore, the inclusion of wavelets and scaling functions yields a multiscale analysis of the potential. In addition, ill-posed inverse problems (like a downward continuation or the inverse gravimetric problem) can be regularized with the algorithm. We show some numerical examples to demonstrate the possibilities which the RFMP provides.
Robust Bayesian decision theory applied to optimal dosage.
Abraham, Christophe; Daurès, Jean-Pierre
2004-04-15
We give a model for constructing a utility function u(θ, d) in a dose prescription problem, where θ and d denote respectively the patient's state of health and the dose. The construction of u is based on the conditional probabilities of several variables. These probabilities are described by logistic models. Obviously, u is only an approximation of the true utility function, and that is why we investigate the sensitivity of the final decision with respect to the utility function. We construct a class of utility functions from u and approximate the set of all Bayes actions associated with that class. Then, we measure the sensitivity as the greatest difference between the expected utilities of two Bayes actions. Finally, we apply these results to weighing up a chemotherapy treatment of lung cancer. This application emphasizes the importance of measuring robustness through the utility of decisions rather than the decisions themselves. Copyright 2004 John Wiley & Sons, Ltd.
NASA Astrophysics Data System (ADS)
Shiju, S.; Sumitra, S.
2017-12-01
In this paper, multiple kernel learning (MKL) is formulated as a supervised classification problem. We deal with binary classification data, and hence the data modelling problem involves the computation of two decision boundaries, one related to kernel learning and the other to the input data. In our approach, they are found with the aid of a single cost function by constructing a global reproducing kernel Hilbert space (RKHS) as the direct sum of the RKHSs corresponding to the decision boundaries of kernel learning and input data, and searching for the function in the global RKHS that can be represented as the direct sum of the decision boundaries under consideration. In our experimental analysis, the proposed model showed superior performance in comparison with the existing two-stage function approximation formulation of MKL, in which the decision functions of kernel learning and input data are found separately using two different cost functions. This is due to the fact that the single-stage representation helps the knowledge transfer between the computation procedures for finding the decision boundaries of kernel learning and input data, which in turn boosts the generalisation capacity of the model.
Toward a generalized theory of epidemic awareness in social networks
NASA Astrophysics Data System (ADS)
Wu, Qingchu; Zhu, Wenfang
We discuss the dynamics of a susceptible-infected-susceptible (SIS) model with local awareness in networks. Individual awareness of the infectious disease is characterized by a general function of the epidemic information in its neighborhood. We build a high-accuracy approximate equation governing the spreading dynamics and derive an approximate epidemic threshold above which the epidemic spreads over the whole network. Our results extend previous work and show that the epidemic threshold depends on the awareness function in terms of one infectious neighbor. Interestingly, when a power-law awareness function is chosen, the epidemic threshold can emerge in infinite networks.
NASA Astrophysics Data System (ADS)
Sergeev, A.; Alharbi, F. H.; Jovanovic, R.; Kais, S.
2016-04-01
The gradient expansion of the kinetic energy density functional, when applied to atoms or finite systems, usually grossly overestimates the energy in the fourth order and generally diverges in the sixth order. We avoid the divergence of the integral by replacing the asymptotic series including the sixth order term in the integrand by a rational function. Padé approximants show moderate improvements in accuracy in comparison with partial sums of the series. The results are discussed for atoms and Hooke’s law model for two-electron atoms.
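As a generic illustration of replacing a truncated asymptotic series by a rational function (not the specific kinetic-energy functional treated in the paper), the sketch below builds a Padé approximant from the first few Taylor coefficients of exp(x) and compares it with the partial sum.

```python
import math
import numpy as np
from scipy.interpolate import pade

# Taylor coefficients of exp(x) up to 6th order: 1, 1, 1/2!, ..., 1/6!
coeffs = [1.0 / math.factorial(n) for n in range(7)]

p, q = pade(coeffs, 3)        # [3/3] Padé approximant: numerator and denominator polynomials

x = 2.0
partial_sum = sum(c * x**n for n, c in enumerate(coeffs))
pade_value = p(x) / q(x)

print(partial_sum, pade_value, np.exp(x))   # the Padé value is closer to exp(2) than the partial sum
```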
Neural networks for function approximation in nonlinear control
NASA Technical Reports Server (NTRS)
Linse, Dennis J.; Stengel, Robert F.
1990-01-01
Two neural network architectures are compared with a classical spline interpolation technique for the approximation of functions useful in a nonlinear control system. A standard back-propagation feedforward neural network and a cerebellar model articulation controller (CMAC) neural network are presented, and their results are compared with a B-spline interpolation procedure that is updated using recursive least-squares parameter identification. Each method is able to accurately represent a one-dimensional test function. Tradeoffs between size requirements, speed of operation, and speed of learning indicate that neural networks may be practical for identification and adaptation in a nonlinear control environment.
Exchange-Correlation Effects for Noncovalent Interactions in Density Functional Theory.
Otero-de-la-Roza, A; DiLabio, Gino A; Johnson, Erin R
2016-07-12
In this article, we develop an understanding of how errors from exchange-correlation functionals affect the modeling of noncovalent interactions in dispersion-corrected density-functional theory. Computed CCSD(T) reference binding energies for a collection of small-molecule clusters are decomposed via a molecular many-body expansion and are used to benchmark density-functional approximations, including the effect of semilocal approximation, exact-exchange admixture, and range separation. Three sources of error are identified. Repulsion error arises from the choice of semilocal functional approximation. This error affects intermolecular repulsions and is present in all n-body exchange-repulsion energies with a sign that alternates with the order n of the interaction. Delocalization error is independent of the choice of semilocal functional but does depend on the exact exchange fraction. Delocalization error misrepresents the induction energies, leading to overbinding in all induction n-body terms, and underestimates the electrostatic contribution to the 2-body energies. Deformation error affects only monomer relaxation (deformation) energies and behaves similarly to bond-dissociation energy errors. Delocalization and deformation errors affect systems with significant intermolecular orbital interactions (e.g., hydrogen- and halogen-bonded systems), whereas repulsion error is ubiquitous. Many-body errors from the underlying exchange-correlation functional greatly exceed in general the magnitude of the many-body dispersion energy term. A functional built to accurately model noncovalent interactions must contain a dispersion correction, semilocal exchange, and correlation components that minimize the repulsion error independently and must also incorporate exact exchange in such a way that delocalization error is absent.
Approximate Green's function methods for HZE transport in multilayered materials
NASA Technical Reports Server (NTRS)
Wilson, John W.; Badavi, Francis F.; Shinn, Judy L.; Costen, Robert C.
1993-01-01
A nonperturbative analytic solution of the high charge and energy (HZE) Green's function is used to implement a computer code for laboratory ion beam transport in multilayered materials. The code is established to operate on the Langley nuclear fragmentation model used in engineering applications. Computational procedures are established to generate linear energy transfer (LET) distributions for a specified ion beam and target for comparison with experimental measurements. The code was found to be highly efficient and compared well with the perturbation approximation.
NASA Technical Reports Server (NTRS)
Estefan, J. A.; Sovers, O. J.
1994-01-01
The standard tropospheric calibration model implemented in the operational Orbit Determination Program is the seasonal model developed by C. C. Chao in the early 1970s. The seasonal model has seen only slight modification since its release, particularly in the format and content of the zenith delay calibrations. Chao's most recent standard mapping tables, which are used to project the zenith delay calibrations along the station-to-spacecraft line of sight, have not been modified since they were first published in late 1972. This report focuses principally on proposed upgrades to the zenith delay mapping process, although modeling improvements to the zenith delay calibration process are also discussed. A number of candidate approximation models for the tropospheric mapping are evaluated, including the semi-analytic mapping function of Lanyi, and the semi-empirical mapping functions of Davis et al. ('CfA-2.2'), of Ifadis (global solution model), of Herring ('MTT'), and of Niell ('NMF'). All of the candidate mapping functions are superior to the Chao standard mapping tables and approximation formulas when evaluated against the current Deep Space Network Mark 3 intercontinental very long baseline interferometry database.
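Most of the modern mapping functions compared above (CfA-2.2, MTT, NMF) share, roughly, a truncated continued-fraction dependence on the elevation angle ε of the following form, with coefficients a, b, c that depend on site, season, and the particular model; this is shown only as a schematic, not as any one model's exact definition.

```latex
m(\varepsilon) \;=\;
\frac{\displaystyle 1 + \frac{a}{\,1 + \dfrac{b}{1 + c}\,}}
     {\displaystyle \sin\varepsilon + \frac{a}{\,\sin\varepsilon + \dfrac{b}{\sin\varepsilon + c}\,}} .
```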
Multigrid based First-Principles Molecular Dynamics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fattebert, Jean-Luc; Osei-Kuffuor, Daniel; Dunn, Ian
2017-06-01
MGmol is a first-principles molecular dynamics code. It relies on the Born-Oppenheimer approximation and models the electronic structure using density functional theory, with either the LDA or the PBE functional. Norm-conserving pseudopotentials are used to model the atomic cores.
Density-functional theory applied to d- and f-electron systems
NASA Astrophysics Data System (ADS)
Wu, Xueyuan
Density functional theory (DFT) has been applied to study the electronic and geometric structures of prototype d- and f-electron systems. For the d-electron system, all electron DFT with gradient corrections to the exchange and correlation functionals has been used to investigate the properties of small neutral and cationic vanadium clusters. Results are in good agreement with available experimental and other theoretical data. For the f-electron system, a hybrid DFT, namely, B3LYP (Becke's 3-parameter hybrid functional using the correlation functional of Lee, Yang and Parr) with relativistic effective core potentials and cluster models has been applied to investigate the nature of chemical bonding of both the bulk and the surfaces of plutonium monoxide and dioxide. Using periodic models, the electronic and geometric structures of PuO2 and its (110) surface, as well as water adsorption on this surface have also been investigated using DFT in both local density approximation (LDA) and generalized gradient approximation (GGA) formalisms.
System Identification for Nonlinear Control Using Neural Networks
NASA Technical Reports Server (NTRS)
Stengel, Robert F.; Linse, Dennis J.
1990-01-01
An approach to incorporating artificial neural networks in nonlinear, adaptive control systems is described. The controller contains three principal elements: a nonlinear inverse dynamic control law whose coefficients depend on a comprehensive model of the plant, a neural network that models system dynamics, and a state estimator whose outputs drive the control law and train the neural network. Attention is focused on the system identification task, which combines an extended Kalman filter with generalized spline function approximation. Continual learning is possible during normal operation, without taking the system off line for specialized training. Nonlinear inverse dynamic control requires smooth derivatives as well as function estimates, imposing stringent goals on the approximating technique.
2012-08-01
An implication of the compactness of the Hessian is that, for small data noise and model error, the discrete Hessian can be approximated by a low-rank matrix. This in turn enables fast solution of an appropriately ... The probability distribution is given by the inverse of the Hessian of the negative log-likelihood function. For Gaussian data noise and model error, this ...
The unitary convolution approximation for heavy ions
NASA Astrophysics Data System (ADS)
Grande, P. L.; Schiwietz, G.
2002-10-01
The convolution approximation for the impact-parameter dependent energy loss is reviewed with emphasis on the determination of the stopping force for heavy projectiles. In this method, the energy loss in different impact-parameter regions is well determined and interpolated smoothly. The physical inputs of the model are the projectile screening function (in the case of dressed ions), and the electron density and oscillator strengths of the target atoms. Moreover, the convolution approximation, in the perturbative mode (called PCA), yields remarkable agreement with full semi-classical-approximation (SCA) results for bare as well as for screened ions at all impact parameters. In the unitary mode (called UCA), the method contains some higher-order effects (yielding in some cases rather good agreement with full coupled-channel calculations) and approaches the classical regime similarly to the Bohr model for large perturbations (Z/v ≫ 1). The results are then used for comparison with experimental values of the non-equilibrium stopping force as a function of the projectile charge, as well as with the equilibrium energy loss under non-aligned and channeling conditions.
Representing Functions in n Dimensions to Arbitrary Accuracy
NASA Technical Reports Server (NTRS)
Scotti, Stephen J.
2007-01-01
A method of approximating a scalar function of n independent variables (where n is a positive integer) to arbitrary accuracy has been developed. This method is expected to be attractive for use in engineering computations in which it is necessary to link global models with local ones or in which it is necessary to interpolate noiseless tabular data that have been computed from analytic functions or numerical models in n-dimensional spaces of design parameters.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Taylor, J.A.; Brasseur, G.P.; Zimmerman, P.R.
Using the hydroxyl radical field calibrated to the methyl chloroform observations, the globally averaged release of methane and its spatial and temporal distribution were investigated. Two source function models of the spatial and temporal distribution of the flux of methane to the atmosphere were developed. The first model was based on the assumption that methane is emitted as a proportion of net primary productivity (NPP). With the average hydroxyl radical concentration fixed, the methane source term was computed as approximately 623 Tg CH4, giving an atmospheric lifetime for methane of approximately 8.3 years. The second model identified source regions for methane from rice paddies, wetlands, enteric fermentation, termites, and biomass burning based on high-resolution land use data. This methane source distribution resulted in an estimate of the global total methane source of approximately 611 Tg CH4, giving an atmospheric lifetime for methane of approximately 8.5 years. The most significant difference between the two models was the predicted methane fluxes over China and South East Asia, the location of most of the world's rice paddies. Using a recent measurement of the reaction rate of hydroxyl radical and methane leads to estimates of the global total methane source of approximately 524 Tg CH4 for SF1, giving an atmospheric lifetime of approximately 10.0 years, and of approximately 514 Tg CH4 for SF2, yielding a lifetime of approximately 10.2 years.
Applicability of Kinematic and Diffusive models for mud-flows: a steady state analysis
NASA Astrophysics Data System (ADS)
Di Cristo, Cristiana; Iervolino, Michele; Vacca, Andrea
2018-04-01
The paper investigates the applicability of Kinematic and Diffusive Wave models for mud-flows with a power-law shear-thinning rheology. In analogy with a well-known approach for turbulent clear-water flows, the study compares the steady flow depth profiles predicted by the approximated models with those of the Full Dynamic Wave one. For all the models and assuming an infinitely wide channel, the analytical solution of the flow depth profiles, in terms of hypergeometric functions, is derived. The accuracy of the approximated models is assessed by computing the average, along the channel length, of the errors, for several values of the Froude and kinematic wave numbers. Assuming a threshold value of the error equal to 5%, the applicability conditions of the two approximations have been identified for several values of the power-law exponent, showing a crucial role of the rheology. The comparison with the clear-water results indicates that applicability criteria for clear-water flows do not apply to shear-thinning fluids, potentially leading to an incorrect use of approximated models if the rheology is not properly accounted for.
NASA Astrophysics Data System (ADS)
Custo, Anna; Wells, William M., III; Barnett, Alex H.; Hillman, Elizabeth M. C.; Boas, David A.
2006-07-01
An efficient computation of the time-dependent forward solution for photon transport in a head model is a key capability for performing accurate inversion for functional diffuse optical imaging of the brain. The diffusion approximation to photon transport is much faster to simulate than the physically correct radiative transport equation (RTE); however, it is commonly assumed that scattering lengths must be much smaller than all system dimensions and all absorption lengths for the approximation to be accurate. Neither of these conditions is satisfied in the cerebrospinal fluid (CSF). Since line-of-sight distances in the CSF are small, of the order of a few millimeters, we explore the idea that the CSF scattering coefficient may be modeled by any value from zero up to the order of the typical inverse line-of-sight distance, or approximately 0.3 mm⁻¹, without significantly altering the calculated detector signals or the partial path lengths relevant for functional measurements. We demonstrate this in detail by using a Monte Carlo simulation of the RTE in a three-dimensional head model based on clinical magnetic resonance imaging data, with realistic optode geometries. Our findings lead us to expect that the diffusion approximation will be valid even in the presence of the CSF, with consequences for faster solution of the inverse problem.
A degradation function consistent with Cocks–Ashby porosity kinetics
Moore, John A.
2017-10-14
Here, the load carrying capacity of ductile materials degrades as a function of porosity, stress state and strain-rate. The effect of these variables on porosity kinetics is captured by the Cocks–Ashby model; however, the Cocks–Ashby model does not account for material degradation directly. This work uses a yield criterion to form a degradation function that is consistent with Cocks–Ashby porosity kinetics and is a function of porosity, stress state and strain-rate dependence. Approximations of this degradation function for pure hydrostatic stress states are also explored.
A deterministic width function model
NASA Astrophysics Data System (ADS)
Puente, C. E.; Sivakumar, B.
Use of a deterministic fractal-multifractal (FM) geometric method to model width functions of natural river networks, as derived distributions of simple multifractal measures via fractal interpolating functions, is reported. It is first demonstrated that the FM procedure may be used to simulate natural width functions, preserving their most relevant features like their overall shape and texture and their observed power-law scaling on their power spectra. It is then shown, via two natural river networks (Racoon and Brushy creeks in the United States), that the FM approach may also be used to closely approximate existing width functions.
NASA Technical Reports Server (NTRS)
Tiffany, Sherwood H.; Karpel, Mordechay
1989-01-01
Various control analysis, design, and simulation techniques for aeroelastic applications require the equations of motion to be cast in a linear time-invariant state-space form. Unsteady aerodynamic forces have to be approximated as rational functions of the Laplace variable in order to put them in this framework. For the minimum-state method, the number of denominator roots in the rational approximation determines the number of augmenting aerodynamic states. Results are shown of applying various approximation enhancements (including optimization, frequency-dependent weighting of the tabular data, and constraint selection) with the minimum-state formulation to the active flexible wing wind-tunnel model. The results demonstrate that good models can be developed which have an order of magnitude fewer augmenting aerodynamic equations than traditional approaches. This reduction facilitates the design of lower order control systems, analysis of control system performance, and near real-time simulation of aeroservoelastic phenomena.
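As a rough illustration of the kind of rational-function fit these methods build on, here is a minimal least-squares sketch in the spirit of a Roger-type approximation (a simpler relative of the minimum-state formulation). The reduced frequencies, lag roots, and tabular data are assumed placeholders, not the wind-tunnel values.

```python
import numpy as np

def rational_fit(k, Q, lags):
    """Least-squares fit of Q(ik) ~ A0 + A1*(ik) + A2*(ik)^2 + sum_j Aj*(ik)/(ik + b_j)
    to tabulated data Q at reduced frequencies k, for fixed lag roots b_j."""
    s = 1j * k                                    # Laplace variable on the imaginary axis
    cols = [np.ones_like(s), s, s**2] + [s / (s + b) for b in lags]
    M = np.column_stack(cols)
    # Stack real and imaginary parts so the fitted coefficients come out real.
    A, *_ = np.linalg.lstsq(np.vstack([M.real, M.imag]),
                            np.concatenate([Q.real, Q.imag]), rcond=None)
    return A

# Illustrative tabular aerodynamic data (assumed, not measured or computed values).
k = np.linspace(0.05, 1.0, 12)
Q_tab = 1.0 + 0.4j * k - 0.2 * k**2 + (1j * k) / (1j * k + 0.3)
coeffs = rational_fit(k, Q_tab, lags=[0.2, 0.5])
print(coeffs)
```

Each additional lag root adds one augmenting aerodynamic state to the state-space model, which is why minimizing the number of roots (and optimizing their values) reduces model order.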
NASA Technical Reports Server (NTRS)
Guo, Tong-Yi; Hwang, Chyi; Shieh, Leang-San
1994-01-01
This paper deals with the multipoint Cauer matrix continued-fraction expansion (MCFE) for model reduction of linear multi-input multi-output (MIMO) systems with various numbers of inputs and outputs. A salient feature of the proposed MCFE approach to model reduction of MIMO systems with square transfer matrices is its equivalence to the matrix Pade approximation approach. The Cauer second form of the ordinary MCFE for a square transfer function matrix is generalized in this paper to a multipoint and nonsquare-matrix version. An interesting connection of the multipoint Cauer MCFE method to the multipoint matrix Pade approximation method is established. Also, algorithms for obtaining the reduced-degree matrix-fraction descriptions and reduced-dimensional state-space models from a transfer function matrix via the multipoint Cauer MCFE algorithm are presented. Practical advantages of using the multipoint Cauer MCFE are discussed and a numerical example is provided to illustrate the algorithms.
Calculation of phonon dispersion relation using new correlation functional
NASA Astrophysics Data System (ADS)
Jitropas, Ukrit; Hsu, Chung-Hao
2017-06-01
To extend the use of the Local Density Approximation (LDA), a new analytical correlation functional is introduced. Correlation energy is an essential ingredient in density functional theory and is used to determine the ground state energy and other properties, including the phonon dispersion relation. Except in the high- and low-density limits, the general expression of the correlation energy is unknown, so an approximation is required. The accuracy of the modelled system depends on the quality of the correlation energy approximation. Typical correlation functionals used in LDA, such as Vosko-Wilk-Nusair (VWN) and Perdew-Wang (PW), were obtained by parameterizing the near-exact quantum Monte Carlo data of Ceperley and Alder. These functionals have a complicated form and are inconvenient to implement. Alternatively, the recently published Chachiyo correlation functional provides results comparable to those much more complicated functionals. In addition, it offers more predictive power because it is based on a first-principles approach rather than on fitted functionals. Nevertheless, the performance of the Chachiyo formula for calculating the phonon dispersion relation (a key to the thermal properties of materials) has not yet been tested. Here, the implementation of the new correlation functional to calculate the phonon dispersion relation is initiated. Its accuracy and validity will be explored.
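For orientation, the Chachiyo functional is commonly quoted as the closed form εc(rs) = a ln(1 + b/rs + b/rs²) for the spin-unpolarized uniform electron gas. The sketch below evaluates that form; the constants are the widely cited ones and should be treated as assumptions to verify against the original paper before any production use.

```python
import numpy as np

# Commonly quoted constants for the paramagnetic case (Hartree units):
# A is the exact high-density coefficient, B a fitted/derived constant.
A = (np.log(2.0) - 1.0) / (2.0 * np.pi**2)
B = 20.4562557

def e_c_chachiyo(rs):
    """Correlation energy per electron as a function of the Wigner-Seitz radius rs."""
    return A * np.log(1.0 + B / rs + B / rs**2)

for rs in (1.0, 2.0, 5.0, 10.0):
    print(f"rs = {rs:5.1f}   e_c = {e_c_chachiyo(rs):+.6f} Ha")
```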
Optimized Reduction of Unsteady Radial Forces in a Single-Channel Pump for Wastewater Treatment
NASA Astrophysics Data System (ADS)
Kim, Jin-Hyuk; Cho, Bo-Min; Choi, Young-Seok; Lee, Kyoung-Yong; Peck, Jong-Hyeon; Kim, Seon-Chang
2016-11-01
A single-channel pump for wastewater treatment was optimized to reduce unsteady radial force sources caused by impeller-volute interactions. The steady and unsteady Reynolds-averaged Navier-Stokes equations using the shear-stress transport turbulence model were discretized by finite volume approximations and solved on tetrahedral grids to analyze the flow in the single-channel pump. The sweep area of radial force during one revolution and the distance of the sweep-area center of mass from the origin were selected as the objective functions; the two design variables were related to the internal flow cross-sectional area of the volute. These objective functions were integrated into one objective function by applying the weighting factor for optimization. Latin hypercube sampling was employed to generate twelve design points within the design space. A response-surface approximation model was constructed as a surrogate model for the objectives, based on the objective function values at the generated design points. The optimized results showed considerable reduction in the unsteady radial force sources in the optimum design, relative to those of the reference design.
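A minimal sketch of the surrogate-based workflow described above: Latin hypercube sampling of two design variables, a single weighted objective, and a quadratic response-surface fit. The objective values here are synthetic placeholders standing in for the CFD results, and the weighting factor is assumed.

```python
import numpy as np
from scipy.stats import qmc

sampler = qmc.LatinHypercube(d=2, seed=0)
X = sampler.random(n=12)                          # 12 design points in [0, 1]^2

# Placeholder objectives standing in for the CFD-derived sweep area (f1) and
# center-of-mass distance (f2); real values would come from the flow solver.
f1 = 1.0 + 3.0 * (X[:, 0] - 0.4) ** 2 + 0.5 * X[:, 1]
f2 = 0.5 + 2.0 * (X[:, 1] - 0.6) ** 2 + 0.3 * X[:, 0]
w = 0.5                                           # assumed weighting factor
f = w * f1 + (1.0 - w) * f2                       # single combined objective

# Quadratic response surface: f ~ c0 + c1*x1 + c2*x2 + c3*x1^2 + c4*x2^2 + c5*x1*x2
def design(X):
    x1, x2 = X[:, 0], X[:, 1]
    return np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])

c, *_ = np.linalg.lstsq(design(X), f, rcond=None)

# Crude surrogate minimization on a grid (a proper optimizer would be used in practice).
g = np.linspace(0.0, 1.0, 101)
cand = np.array([[a, b] for a in g for b in g])
best = cand[np.argmin(design(cand) @ c)]
print("approximate optimum design variables:", best)
```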
Hamiltonian Monte Carlo acceleration using surrogate functions with random bases.
Zhang, Cheng; Shahbaba, Babak; Zhao, Hongkai
2017-11-01
For big data analysis, the high computational cost of Bayesian methods often limits their applications in practice. In recent years, there have been many attempts to improve the computational efficiency of Bayesian inference. Here we propose an efficient and scalable computational technique for a state-of-the-art Markov chain Monte Carlo method, namely, Hamiltonian Monte Carlo. The key idea is to explore and exploit the structure and regularity in parameter space for the underlying probabilistic model to construct an effective approximation of its geometric properties. To this end, we build a surrogate function to approximate the target distribution using properly chosen random bases and an efficient optimization process. The resulting method provides a flexible, scalable, and efficient sampling algorithm, which converges to the correct target distribution. We show that by choosing the basis functions and optimization process differently, our method can be related to other approaches for the construction of surrogate functions such as generalized additive models or Gaussian process models. Experiments based on simulated and real data show that our approach leads to substantially more efficient sampling algorithms compared to existing state-of-the-art methods.
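A minimal sketch of one ingredient of this idea: approximating a target log-density with random Fourier bases fitted by ridge regression. This is not the authors' exact construction, and the coupling to Hamiltonian dynamics is omitted; the toy log-density and basis scale are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def log_target(x):
    # Toy 2-D log-density (banana-shaped); stands in for an expensive posterior.
    return -0.5 * (x[:, 0] ** 2 + (x[:, 1] - x[:, 0] ** 2) ** 2)

# Points where the (expensive) log-density has already been evaluated.
X = rng.normal(size=(200, 2))
y = log_target(X)

# Random Fourier bases phi_j(x) = cos(w_j . x + b_j).
n_basis = 100
W = rng.normal(scale=1.0, size=(2, n_basis))
b = rng.uniform(0.0, 2.0 * np.pi, size=n_basis)
Phi = np.cos(X @ W + b)

# Ridge-regularized least squares for the surrogate coefficients.
lam = 1e-3
coef = np.linalg.solve(Phi.T @ Phi + lam * np.eye(n_basis), Phi.T @ y)

def surrogate_logp(x):
    return np.cos(x @ W + b) @ coef               # cheap to evaluate and differentiate

x_test = rng.normal(size=(5, 2))
print(np.c_[log_target(x_test), surrogate_logp(x_test)])
```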
A reduced-order model from high-dimensional frictional hysteresis
Biswas, Saurabh; Chatterjee, Anindya
2014-01-01
Hysteresis in material behaviour includes both signum nonlinearities and high dimensionality. Available models for component-level hysteretic behaviour are empirical. Here, we derive a low-order model for rate-independent hysteresis from a high-dimensional massless frictional system. The original system, being given in terms of signs of velocities, is first solved incrementally using a linear complementarity problem formulation. From this numerical solution, to develop a reduced-order model, basis vectors are chosen using the singular value decomposition. The slip direction in generalized coordinates is identified as the minimizer of a dissipation-related function. That function includes terms for frictional dissipation through signum nonlinearities at many friction sites. Luckily, it allows a convenient analytical approximation. Upon solution of the approximated minimization problem, the slip direction is found. A final evolution equation for a few states is then obtained that gives a good match with the full solution. The model obtained here may lead to new insights into hysteresis as well as better empirical modelling thereof. PMID:24910522
Adaptation of the Carter-Tracy water influx calculation to groundwater flow simulation
Kipp, Kenneth L.
1986-01-01
The Carter-Tracy calculation for water influx is adapted to groundwater flow simulation, with additional clarifying explanation not present in the original papers. The Van Everdingen and Hurst aquifer-influence functions for radial flow from an outer aquifer region are employed. This technique, based on convolution of unit-step response functions, offers a simple but approximate method for embedding an inner region of groundwater flow simulation within a much larger aquifer region where flow can be treated in an approximate fashion. The use of aquifer-influence functions in groundwater flow modeling reduces the size of the computational grid, with a corresponding reduction in computer storage and execution time. The Carter-Tracy approximation to the convolution integral enables the aquifer influence function calculation to be made with an additional storage requirement of only twice the number of boundary nodes beyond that required for the inner-region simulation. It is a good approximation for constant flow rates but is poor for time-varying flow rates where the variation is large relative to the mean. A variety of outer aquifer region geometries, exterior boundary conditions, and flow rate versus potentiometric head relations can be used. The radial, transient-flow case presented is representative. An analytical approximation to the functions of Van Everdingen and Hurst for the dimensionless potentiometric head versus dimensionless time is given.
This is SPIRAL-TAP: Sparse Poisson Intensity Reconstruction ALgorithms--theory and practice.
Harmany, Zachary T; Marcia, Roummel F; Willett, Rebecca M
2012-03-01
Observations in many applications consist of counts of discrete events, such as photons hitting a detector, which cannot be effectively modeled using an additive bounded or Gaussian noise model, and instead require a Poisson noise model. As a result, accurate reconstruction of a spatially or temporally distributed phenomenon (f*) from Poisson data (y) cannot be effectively accomplished by minimizing a conventional penalized least-squares objective function. The problem addressed in this paper is the estimation of f* from y in an inverse problem setting, where the number of unknowns may potentially be larger than the number of observations and f* admits sparse approximation. The optimization formulation considered in this paper uses a penalized negative Poisson log-likelihood objective function with nonnegativity constraints (since Poisson intensities are naturally nonnegative). In particular, the proposed approach incorporates key ideas of using separable quadratic approximations to the objective function at each iteration and penalization terms related to l1 norms of coefficient vectors, total variation seminorms, and partition-based multiscale estimation methods.
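A minimal sketch of the kind of separable-quadratic (proximal-gradient) step the abstract describes, with an l1 penalty and a nonnegativity constraint. The forward operator, curvature estimate, and penalty weight are illustrative assumptions; SPIRAL-TAP itself includes further refinements (step-size selection, other penalties) not shown here.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 64, 32
A = rng.uniform(0.0, 1.0, size=(m, n))            # nonnegative forward operator (assumed)
f_true = np.zeros(n)
f_true[rng.choice(n, 5, replace=False)] = rng.uniform(1.0, 3.0, 5)
y = rng.poisson(A @ f_true)                       # Poisson count data

def poisson_nll_grad(f, eps=1e-10):
    Af = A @ f + eps
    return A.T @ (1.0 - y / Af)                   # gradient of sum(Af - y*log(Af))

alpha = np.linalg.norm(A, 2) ** 2                 # conservative curvature estimate
tau = 0.1                                         # assumed l1 penalty weight

def prox_step(f):
    """One separable-quadratic step: gradient move, soft threshold (l1 prox),
    then projection onto the nonnegative orthant."""
    g = f - poisson_nll_grad(f) / alpha
    f_new = np.sign(g) * np.maximum(np.abs(g) - tau / alpha, 0.0)
    return np.maximum(f_new, 0.0)                 # Poisson intensities are nonnegative

f = np.ones(n)
for _ in range(500):
    f = prox_step(f)
print("largest estimated intensities at indices:", np.sort(np.argsort(f)[-5:]))
print("true support:                             ", np.sort(np.nonzero(f_true)[0]))
```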
A class of reduced-order models in the theory of waves and stability.
Chapman, C J; Sorokin, S V
2016-02-01
This paper presents a class of approximations to a type of wave field for which the dispersion relation is transcendental. The approximations have two defining characteristics: (i) they give the field shape exactly when the frequency and wavenumber lie on a grid of points in the (frequency, wavenumber) plane and (ii) the approximate dispersion relations are polynomials that pass exactly through points on this grid. Thus, the method is interpolatory in nature, but the interpolation takes place in (frequency, wavenumber) space, rather than in physical space. Full details are presented for a non-trivial example, that of antisymmetric elastic waves in a layer. The method is related to partial fraction expansions and barycentric representations of functions. An asymptotic analysis is presented, involving Stirling's approximation to the psi function, and a logarithmic correction to the polynomial dispersion relation.
NASA Astrophysics Data System (ADS)
Archer, Andrew J.; Chacko, Blesson; Evans, Robert
2017-07-01
In classical density functional theory (DFT), the part of the Helmholtz free energy functional arising from attractive inter-particle interactions is often treated in a mean-field or van der Waals approximation. On the face of it, this is a somewhat crude treatment as the resulting functional generates the simple random phase approximation (RPA) for the bulk fluid pair direct correlation function. We explain why using standard mean-field DFT to describe inhomogeneous fluid structure and thermodynamics is more accurate than one might expect based on this observation. By considering the pair correlation function g(x) and structure factor S(k) of a one-dimensional model fluid, for which exact results are available, we show that the mean-field DFT, employed within the test-particle procedure, yields results much superior to those from the RPA closure of the bulk Ornstein-Zernike equation. We argue that one should not judge the quality of a DFT based solely on the approximation it generates for the bulk pair direct correlation function.
NASA Technical Reports Server (NTRS)
Merrill, W. C.
1978-01-01
The Routh approximation technique for reducing the complexity of system models was applied in the frequency domain to a 16th-order, state variable model of the F100 engine and to a 43rd-order, transfer function model of a launch vehicle boost pump pressure regulator. The results motivate extending the frequency domain formulation of the Routh method to the time domain in order to handle the state variable formulation directly. The time domain formulation was derived and a characterization that specifies all possible Routh similarity transformations was given. The characterization was computed by solving two eigenvalue-eigenvector problems. The application of the time domain Routh technique to the state variable engine model is described, and some results are given. Additional computational problems are discussed, including an optimization procedure that can improve the approximation accuracy by taking advantage of the transformation characterization.
An Approximate Dissipation Function for Large Strain Rubber Thermo-Mechanical Analyses
NASA Technical Reports Server (NTRS)
Johnson, Arthur R.; Chen, Tzi-Kang
2003-01-01
Mechanically induced viscoelastic dissipation is difficult to compute. When the constitutive model is defined by history integrals, the formula for dissipation is a double convolution integral. Since double convolution integrals are difficult to approximate, coupled thermo-mechanical analyses of highly viscous rubber-like materials cannot be made with most commercial finite element software. In this study, we present a method to approximate the dissipation for history integral constitutive models that represent Maxwell-like materials without approximating the double convolution integral. The method requires that the total stress can be separated into elastic and viscous components, and that the relaxation form of the constitutive law is defined with a Prony series. Numerical data is provided to demonstrate the limitations of this approximate method for determining dissipation. Rubber cylinders with imbedded steel disks and with an imbedded steel ball are dynamically loaded, and the nonuniform heating within the cylinders is computed.
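A minimal one-dimensional, small-strain sketch of the separation the authors rely on: the viscous overstress is obtained by convolving the strain-rate history with a Prony-series relaxation kernel, and the power delivered to the viscous branch (whose cycle average approximates the dissipation) follows directly. The moduli, relaxation times, and loading are assumptions, and this is not the paper's large-strain finite-element formulation.

```python
import numpy as np

g = np.array([0.3, 0.2])          # assumed Prony moduli (relative)
tau = np.array([0.05, 0.5])       # assumed relaxation times [s]
E0 = 1.0                          # assumed instantaneous modulus

dt, T = 1e-3, 2.0
t = np.arange(0.0, T, dt)
strain = 0.1 * np.sin(2 * np.pi * 2.0 * t)        # prescribed cyclic strain (4 cycles)
dstrain = np.gradient(strain, dt)

# Viscous overstress via discrete convolution with the Prony relaxation kernel
# k(t) = E0 * sum_i g_i * exp(-t / tau_i).
kernel = E0 * np.sum(g[:, None] * np.exp(-t[None, :] / tau[:, None]), axis=0)
sigma_v = np.convolve(kernel, dstrain)[: t.size] * dt

# Power fed into the viscous branch; averaged over whole cycles it approximates
# the dissipation rate without any double convolution.
power_viscous = sigma_v * dstrain
print("cycle-averaged dissipation rate:", power_viscous.mean())
```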
NASA Astrophysics Data System (ADS)
Zhang, G.; Lu, D.; Ye, M.; Gunzburger, M.
2011-12-01
Markov Chain Monte Carlo (MCMC) methods have been widely used in many fields of uncertainty analysis to estimate the posterior distributions of parameters and credible intervals of predictions in the Bayesian framework. However, in practice, MCMC may be computationally unaffordable due to slow convergence and the excessive number of forward model executions required, especially when the forward model is expensive to compute. Both disadvantages arise from the curse of dimensionality, i.e., the posterior distribution is usually a multivariate function of parameters. Recently, the sparse grid method has been demonstrated to be an effective technique for coping with high-dimensional interpolation or integration problems. Thus, in order to accelerate the forward model and avoid the slow convergence of MCMC, we propose a new method for uncertainty analysis based on sparse grid interpolation and quasi-Monte Carlo sampling. First, we construct a polynomial approximation of the forward model in the parameter space by using sparse grid interpolation. This approximation then defines an accurate surrogate posterior distribution that can be evaluated repeatedly at minimal computational cost. Second, instead of using MCMC, a quasi-Monte Carlo method is applied to draw samples in the parameter space. Then, the desired probability density function of each prediction is approximated by accumulating the posterior density values of all the samples according to the prediction values. Our method has the following advantages: (1) the polynomial approximation of the forward model on the sparse grid provides a very efficient evaluation of the surrogate posterior distribution; (2) the quasi-Monte Carlo method retains the same accuracy in approximating the PDF of predictions but avoids all disadvantages of MCMC. The proposed method is applied to a controlled numerical experiment of groundwater flow modeling. The results show that our method attains the same accuracy much more efficiently than traditional MCMC.
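A minimal sketch of the two-step idea, with a dense quadratic fit standing in for the sparse-grid interpolant and Sobol quasi-Monte Carlo sampling of the resulting surrogate posterior. The forward model, observation, noise level, and flat prior are toy assumptions, not the groundwater-flow setup.

```python
import numpy as np
from scipy.stats import qmc

def forward(theta):
    """Toy 'expensive' forward model of two parameters (placeholder)."""
    return np.sin(theta[:, 0]) + 0.5 * theta[:, 1] ** 2

# Step 1: polynomial surrogate of the forward model on [0, 1]^2
# (a dense quadratic fit stands in here for sparse-grid interpolation).
g = np.linspace(0.0, 1.0, 9)
T1, T2 = np.meshgrid(g, g)
theta_train = np.column_stack([T1.ravel(), T2.ravel()])

def design(th):
    x1, x2 = th[:, 0], th[:, 1]
    return np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])

c, *_ = np.linalg.lstsq(design(theta_train), forward(theta_train), rcond=None)
surrogate = lambda th: design(th) @ c

# Step 2: quasi-Monte Carlo (Sobol) sampling of the surrogate posterior.
d_obs, sigma = 0.9, 0.05                          # assumed observation and noise level
theta_qmc = qmc.Sobol(d=2, scramble=True, seed=0).random_base2(m=12)   # 4096 points
log_post = -0.5 * ((surrogate(theta_qmc) - d_obs) / sigma) ** 2        # flat prior assumed
w = np.exp(log_post - log_post.max())
w /= w.sum()

# Approximate the PDF of a prediction (here simply the model output) by
# accumulating the posterior weights over prediction-value bins.
pred = surrogate(theta_qmc)
pdf, edges = np.histogram(pred, bins=40, weights=w, density=True)
print("posterior-weighted prediction mean:", np.sum(w * pred))
```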
Quantum corrections of the truncated Wigner approximation applied to an exciton transport model.
Ivanov, Anton; Breuer, Heinz-Peter
2017-04-01
We modify the path integral representation of exciton transport in open quantum systems such that an exact description of the quantum fluctuations around the classical evolution of the system is possible. As a consequence, the time evolution of the system observables is obtained by calculating the average of a stochastic difference equation which is weighted with a product of pseudoprobability density functions. From the exact equation of motion one can clearly identify the terms that are also present if we apply the truncated Wigner approximation. This description of the problem is used as a basis for the derivation of a new approximation, whose validity goes beyond the truncated Wigner approximation. To demonstrate this we apply the formalism to a donor-acceptor transport model.
NASA Astrophysics Data System (ADS)
Edwards, Brian J.
2002-05-01
Given the premise that a set of dynamical equations must possess a definite, underlying mathematical structure to ensure local and global thermodynamic stability, as has been well documented, several different models for describing liquid crystalline dynamics are examined with respect to said structure. These models, each derived during the past several years using a specific closure approximation for the fourth moment of the distribution function in Doi's rigid rod theory, are all shown to be inconsistent with this basic mathematical structure. The source of this inconsistency lies in Doi's expressions for the extra stress tensor and temporal evolution of the order parameter, which are rederived herein using a transformation that allows for internal compatibility with the underlying mathematical structure that is present on the distribution function level of description.
mBEEF-vdW: Robust fitting of error estimation density functionals
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lundgaard, Keld T.; Wellendorff, Jess; Voss, Johannes
2016-06-15
Here, we propose a general-purpose semilocal/nonlocal exchange-correlation functional approximation, named mBEEF-vdW. The exchange is a meta generalized gradient approximation, and the correlation is a semilocal and nonlocal mixture, with the Rutgers-Chalmers approximation for van der Waals (vdW) forces. The functional is fitted within the Bayesian error estimation functional (BEEF) framework. We improve the previously used fitting procedures by introducing a robust MM-estimator based loss function, reducing the sensitivity to outliers in the datasets. To more reliably determine the optimal model complexity, we furthermore introduce a generalization of the bootstrap 0.632 estimator with hierarchical bootstrap sampling and geometric mean estimator over the training datasets. Using this estimator, we show that the robust loss function leads to a 10% improvement in the estimated prediction error over the previously used least-squares loss function. The mBEEF-vdW functional is benchmarked against popular density functional approximations over a wide range of datasets relevant for heterogeneous catalysis, including datasets that were not used for its training. Overall, we find that mBEEF-vdW has a higher general accuracy than competing popular functionals, and it is one of the best performing functionals on chemisorption systems, surface energies, lattice constants, and dispersion. We also show the potential-energy curve of graphene on the nickel(111) surface, where mBEEF-vdW matches the experimental binding length. mBEEF-vdW is currently available in gpaw and other density functional theory codes through Libxc, version 3.0.0.
Two-dimensional analytic weighting functions for limb scattering
NASA Astrophysics Data System (ADS)
Zawada, D. J.; Bourassa, A. E.; Degenstein, D. A.
2017-10-01
Through the inversion of limb scatter measurements it is possible to obtain vertical profiles of trace species in the atmosphere. Many of these inversion methods require what is often referred to as weighting functions, or derivatives of the radiance with respect to concentrations of trace species in the atmosphere. Several radiative transfer models have implemented analytic methods to calculate weighting functions, alleviating the computational burden of traditional numerical perturbation methods. Here we describe the implementation of analytic two-dimensional weighting functions, where derivatives are calculated relative to atmospheric constituents in a two-dimensional grid of altitude and angle along the line of sight direction, in the SASKTRAN-HR radiative transfer model. Two-dimensional weighting functions are required for two-dimensional inversions of limb scatter measurements. Examples are presented where the analytic two-dimensional weighting functions are calculated with an underlying one-dimensional atmosphere. It is shown that the analytic weighting functions are more accurate than ones calculated with a single scatter approximation, and are orders of magnitude faster than a typical perturbation method. Evidence is presented that weighting functions for stratospheric aerosols calculated under a single scatter approximation may not be suitable for use in retrieval algorithms under solar backscatter conditions.
A quark model analysis of the transversity distribution
NASA Astrophysics Data System (ADS)
Scopetta, Sergio; Vento, Vicente
1998-04-01
The feasibility of measuring chiral-odd parton distribution functions in polarized Drell-Yan and semi-inclusive experiments has renewed theoretical interest in their study. Models of hadron structure have proven successful in describing the gross features of the chiral-even structure functions. Similar expectations motivated our study of the transversity parton distributions in the Isgur-Karl and MIT bag models. We confirm, by performing an NLO calculation, the diverse low-x behaviors of the transversity and spin structure functions at the experimental scale and show that this is fundamentally a consequence of the different behaviors under evolution of these functions. The inequalities of Soffer establish constraints between data and model calculations of the chiral-odd transversity function. The approximate compatibility of our model calculations with these constraints lends credibility to our estimates.
The Validity of Quasi-Steady-State Approximations in Discrete Stochastic Simulations
Kim, Jae Kyoung; Josić, Krešimir; Bennett, Matthew R.
2014-01-01
In biochemical networks, reactions often occur on disparate timescales and can be characterized as either fast or slow. The quasi-steady-state approximation (QSSA) utilizes timescale separation to project models of biochemical networks onto lower-dimensional slow manifolds. As a result, fast elementary reactions are not modeled explicitly, and their effect is captured by nonelementary reaction-rate functions (e.g., Hill functions). The accuracy of the QSSA applied to deterministic systems depends on how well timescales are separated. Recently, it has been proposed to use the nonelementary rate functions obtained via the deterministic QSSA to define propensity functions in stochastic simulations of biochemical networks. In this approach, termed the stochastic QSSA, fast reactions that are part of nonelementary reactions are not simulated, greatly reducing computation time. However, it is unclear when the stochastic QSSA provides an accurate approximation of the original stochastic simulation. We show that, unlike the deterministic QSSA, the validity of the stochastic QSSA does not follow from timescale separation alone, but also depends on the sensitivity of the nonelementary reaction rate functions to changes in the slow species. The stochastic QSSA becomes more accurate when this sensitivity is small. Different types of QSSAs result in nonelementary functions with different sensitivities, and the total QSSA results in less sensitive functions than the standard or the prefactor QSSA. We prove that, as a result, the stochastic QSSA becomes more accurate when nonelementary reaction functions are obtained using the total QSSA. Our work provides an apparently novel condition for the validity of the QSSA in stochastic simulations of biochemical reaction networks with disparate timescales. PMID:25099817
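A minimal sketch of the stochastic QSSA idea: a Gillespie-type simulation in which a nonelementary Hill-function rate, of the kind obtained from a deterministic QSSA, is used directly as a propensity so that the fast binding steps are never simulated. The rate constants are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Birth-death model for a protein P whose production is self-repressed.
# The Hill-function propensity stands in for fast, unsimulated binding reactions.
beta, K, n_hill, gamma = 20.0, 15.0, 2.0, 0.1     # assumed rate constants

def propensities(P):
    prod = beta * K**n_hill / (K**n_hill + P**n_hill)   # nonelementary (Hill) rate
    deg = gamma * P                                     # first-order degradation
    return np.array([prod, deg])

P, t, t_end = 0, 0.0, 500.0
trace = []
while t < t_end:
    a = propensities(P)
    a0 = a.sum()
    t += rng.exponential(1.0 / a0)                # time to the next reaction
    P += 1 if rng.uniform() * a0 < a[0] else -1   # choose production or degradation
    trace.append((t, P))

print("final copy number:", P, " mean over trace:", np.mean([p for _, p in trace]))
```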
The Dipole Segment Model for Axisymmetrical Elongated Asteroids
NASA Astrophysics Data System (ADS)
Zeng, Xiangyuan; Zhang, Yonglong; Yu, Yang; Liu, Xiangdong
2018-02-01
Various simplified models have been investigated as a way to understand the complex dynamical environment near irregular asteroids. A dipole segment model is explored in this paper, one that is composed of a massive straight segment and two point masses at the extremities of the segment. Given an explicitly simple form of the potential function that is associated with the dipole segment model, five topological cases are identified with different sets of system parameters. Locations, stabilities, and variation trends of the system equilibrium points are investigated in a parametric way. The exterior potential distribution of nearly axisymmetrical elongated asteroids is approximated by minimizing the acceleration error in a test zone. The acceleration error minimization process determines the parameters of the dipole segment. The near-Earth asteroid (8567) 1996 HW1 is chosen as an example to evaluate the effectiveness of the approximation method for the exterior potential distribution. The advantages of the dipole segment model over the classical dipole and the traditional segment are also discussed. Percent error of acceleration and the degree of approximation are illustrated by using the dipole segment model to approximate four more asteroids. The high efficiency of the simplified model over the polyhedron is clearly demonstrated by comparing the CPU time.
NASA Astrophysics Data System (ADS)
Bourlier, C.; Berginc, G.
2004-07-01
In this paper the first- and second-order Kirchhoff approximation is applied to study the backscattering enhancement phenomenon, which appears when the surface rms slope is greater than 0.5. The formulation is reduced to the geometric optics approximation in which the second-order illumination function is taken into account. This study is developed for a two-dimensional (2D) anisotropic stationary rough dielectric surface and for any surface slope and height distributions assumed to be statistically even. Using the Weyl representation of the Green function (which introduces an absolute value over the surface elevation in the phase term), the incoherent scattering coefficient under the stationary phase assumption is expressed as the sum of three terms. The incoherent scattering coefficient then requires the numerical computation of a ten-dimensional integral. To reduce the number of numerical integrations, the geometric optics approximation is applied, which assumes that the correlation between two adjacent points is very strong. The model is then proportional to two surface slope probabilities, for which the slopes would specularly reflect the beams in the double scattering process. In addition, the slope distributions are related with each other by a propagating function, which accounts for the second-order illumination function. The companion paper is devoted to the simulation of this model and comparisons with an 'exact' numerical method.
NASA Astrophysics Data System (ADS)
Basu, A.; Das, B.; Middya, T. R.; Bhattacharya, D. P.
2017-01-01
The phonon growth characteristic in a degenerate semiconductor has been calculated under the condition of low temperature. If the lattice temperature is high, the energy of the intravalley acoustic phonon is negligibly small compared to the average thermal energy of the electrons. Hence one can traditionally assume the electron-phonon collisions to be elastic and approximate the Bose-Einstein (B.E.) distribution for the phonons by the simple equipartition law. However, in the present analysis at low lattice temperatures, the interaction of the non-equilibrium electrons with the acoustic phonons becomes inelastic and the simple equipartition law for the phonon distribution is not valid. Hence the analysis is made taking into account the inelastic collisions and the complete form of the B.E. distribution. The high-field distribution function of the carriers, given by the Fermi-Dirac (F.D.) function at the field-dependent carrier temperature, has been approximated by a well-tested model that overcomes the intrinsic problem of correctly evaluating the integrals involving products and powers of the Fermi function. The results thus obtained are more reliable than the rough estimates one would obtain by using the exact F.D. function together with oversimplified approximations.
A Gaussian-based rank approximation for subspace clustering
NASA Astrophysics Data System (ADS)
Xu, Fei; Peng, Chong; Hu, Yunhong; He, Guoping
2018-04-01
Low-rank representation (LRR) has been shown to be successful in seeking low-rank structures of data relationships in a union of subspaces. Generally, LRR and LRR-based variants need to solve nuclear norm-based minimization problems. Beyond the success of such methods, it has been widely noted that the nuclear norm may not be a good rank approximation because it simply adds all singular values of a matrix together, so that large singular values may dominate the weight. This results in a far from satisfactory rank approximation and may degrade the performance of low-rank models based on the nuclear norm. In this paper, we propose a novel nonconvex rank approximation based on the Gaussian distribution function, which has appealing properties that make it a better rank approximation than the nuclear norm. A low-rank model is then proposed based on the new rank approximation, with application to motion segmentation. Experimental results show significant improvements and verify the effectiveness of our method.
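A minimal sketch contrasting the nuclear norm with a Gaussian-type nonconvex rank surrogate of the assumed form Σᵢ (1 − exp(−σᵢ²/(2γ²))); the exact functional used in the paper may differ, so treat this form and the parameter γ as assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def nuclear_norm(X):
    return np.linalg.svd(X, compute_uv=False).sum()

def gaussian_rank_surrogate(X, gamma=1.0):
    """Nonconvex rank surrogate: each singular value contributes at most one,
    so a few very large singular values no longer dominate (assumed form)."""
    s = np.linalg.svd(X, compute_uv=False)
    return np.sum(1.0 - np.exp(-s**2 / (2.0 * gamma**2)))

# Rank-3 matrix with one very large singular value.
U, _ = np.linalg.qr(rng.normal(size=(50, 3)))
V, _ = np.linalg.qr(rng.normal(size=(40, 3)))
X = U @ np.diag([100.0, 1.0, 0.5]) @ V.T

print("true rank:         ", np.linalg.matrix_rank(X))
print("nuclear norm:      ", nuclear_norm(X))            # dominated by the large value
print("Gaussian surrogate:", gaussian_rank_surrogate(X)) # bounded above by the rank
```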
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lei, Huan; Baker, Nathan A.; Li, Xiantao
We present a data-driven approach to determine the memory kernel and random noise of the generalized Langevin equation. To facilitate practical implementations, we parameterize the kernel function in the Laplace domain by a rational function, with coefficients directly linked to the equilibrium statistics of the coarse-grain variables. Further, we show that such an approximation can be constructed to arbitrarily high order. Within these approximations, the generalized Langevin dynamics can be embedded in an extended stochastic model without memory. We demonstrate how to introduce the stochastic noise so that the fluctuation-dissipation theorem is exactly satisfied.
Roussel, Marc R; Tang, Terry
2006-12-07
A slow manifold is a low-dimensional invariant manifold to which trajectories nearby are rapidly attracted on the way to the equilibrium point. The exact computation of the slow manifold simplifies the model without sacrificing accuracy on the slow time scales of the system. The Maas-Pope intrinsic low-dimensional manifold (ILDM) [Combust. Flame 88, 239 (1992)] is frequently used as an approximation to the slow manifold. This approximation is based on a linearized analysis of the differential equations and thus neglects curvature. We present here an efficient way to calculate an approximation equivalent to the ILDM. Our method, called functional equation truncation (FET), first develops a hierarchy of functional equations involving higher derivatives which can then be truncated at second-derivative terms to explicitly neglect the curvature. We prove that the ILDM and FET-approximated (FETA) manifolds are identical for the one-dimensional slow manifold of any planar system. In higher-dimensional spaces, the ILDM and FETA manifolds agree to numerical accuracy almost everywhere. Solution of the FET equations is, however, expected to generally be faster than the ILDM method.
Uncertainty Analysis via Failure Domain Characterization: Polynomial Requirement Functions
NASA Technical Reports Server (NTRS)
Crespo, Luis G.; Munoz, Cesar A.; Narkawicz, Anthony J.; Kenny, Sean P.; Giesy, Daniel P.
2011-01-01
This paper proposes an uncertainty analysis framework based on the characterization of the uncertain parameter space. This characterization enables the identification of worst-case uncertainty combinations and the approximation of the failure and safe domains with a high level of accuracy. Because these approximations are comprised of subsets of readily computable probability, they enable the calculation of arbitrarily tight upper and lower bounds to the failure probability. A Bernstein expansion approach is used to size hyper-rectangular subsets while a sum of squares programming approach is used to size quasi-ellipsoidal subsets. These methods are applicable to requirement functions whose functional dependency on the uncertainty is a known polynomial. Some of the most prominent features of the methodology are the substantial desensitization of the calculations from the uncertainty model assumed (i.e., the probability distribution describing the uncertainty) as well as the accommodation for changes in such a model with a practically insignificant amount of computational effort.
Uncertainty Analysis via Failure Domain Characterization: Unrestricted Requirement Functions
NASA Technical Reports Server (NTRS)
Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.
2011-01-01
This paper proposes an uncertainty analysis framework based on the characterization of the uncertain parameter space. This characterization enables the identification of worst-case uncertainty combinations and the approximation of the failure and safe domains with a high level of accuracy. Because these approximations are comprised of subsets of readily computable probability, they enable the calculation of arbitrarily tight upper and lower bounds to the failure probability. The methods developed herein, which are based on nonlinear constrained optimization, are applicable to requirement functions whose functional dependency on the uncertainty is arbitrary and whose explicit form may even be unknown. Some of the most prominent features of the methodology are the substantial desensitization of the calculations from the assumed uncertainty model (i.e., the probability distribution describing the uncertainty) as well as the accommodation for changes in such a model with a practically insignificant amount of computational effort.
NASA Astrophysics Data System (ADS)
Chevrié, Mathieu; Farges, Christophe; Sabatier, Jocelyn; Guillemard, Franck; Pradere, Laetitia
2017-04-01
In the automotive field, reducing electric conductor dimensions is important for decreasing the embedded mass and the manufacturing costs. It is thus essential to develop tools to optimize the wire diameter according to thermal constraints, together with protection algorithms to maintain a high level of safety. In order to develop such tools and algorithms, accurate electro-thermal models of electric wires are required. However, the thermal equation solutions lead to implicit fractional transfer functions involving an exponential that cannot be embedded in a car calculator. This paper thus proposes an integer-order transfer function approximation methodology based on a spatial discretization for this class of fractional transfer functions. Moreover, the H2-norm is used to minimize the approximation error. The accuracy of the proposed approach is confirmed with measured data on a 1.5 mm² wire implemented in a dedicated test bench.
The three-point function as a probe of models for large-scale structure
NASA Astrophysics Data System (ADS)
Frieman, Joshua A.; Gaztanaga, Enrique
1994-04-01
We analyze the consequences of models of structure formation for higher order (n-point) galaxy correlation functions in the mildly nonlinear regime. Several variations of the standard Omega = 1 cold dark matter model with scale-invariant primordial perturbations have recently been introduced to obtain more power on large scales, Rp ≈ 20/h Mpc, e.g., low matter-density (nonzero cosmological constant) models, 'tilted' primordial spectra, and scenarios with a mixture of cold and hot dark matter. They also include models with an effective scale-dependent bias, such as the cooperative galaxy formation scenario of Bower et al. We show that higher-order (n-point) galaxy correlation functions can provide a useful test of such models and can discriminate between models with true large-scale power in the density field and those where the galaxy power arises from scale-dependent bias: a bias with rapid scale dependence leads to a dramatic decrease of the hierarchical amplitudes QJ at large scales, r ≳ Rp. Current observational constraints on the three-point amplitudes Q3 and S3 can place limits on the bias parameter(s) and appear to disfavor, but not yet rule out, the hypothesis that scale-dependent bias is responsible for the extra power observed on large scales.
A Multi-Resolution Nonlinear Mapping Technique for Design and Analysis Applications
NASA Technical Reports Server (NTRS)
Phan, Minh Q.
1998-01-01
This report describes a nonlinear mapping technique where the unknown static or dynamic system is approximated by a sum of dimensionally increasing functions (one-dimensional curves, two-dimensional surfaces, etc.). These lower dimensional functions are synthesized from a set of multi-resolution basis functions, where the resolutions specify the level of details at which the nonlinear system is approximated. The basis functions also cause the parameter estimation step to become linear. This feature is taken advantage of to derive a systematic procedure to determine and eliminate basis functions that are less significant for the particular system under identification. The number of unknown parameters that must be estimated is thus reduced and compact models obtained. The lower dimensional functions (identified curves and surfaces) permit a kind of "visualization" into the complexity of the nonlinearity itself.
Probability density function learning by unsupervised neurons.
Fiori, S
2001-10-01
In a recent work, we introduced the concept of the pseudo-polynomial adaptive activation function neuron (FAN) and presented an unsupervised information-theoretic learning theory for such a structure. The learning model is based on entropy optimization and provides a way of learning probability distributions from incomplete data. The aim of the present paper is to illustrate some theoretical features of the FAN neuron, to extend its learning theory to asymmetrical density function approximation, and to provide an analytical and numerical comparison with other known density function estimation methods, with special emphasis on the universal approximation ability. The paper also provides a survey of PDF learning from incomplete data, as well as results of several experiments performed on real-world problems and signals.
NASA Astrophysics Data System (ADS)
Messica, A.
2016-10-01
The probability distribution function of a weighted sum of non-identical lognormal random variables is required in various fields of science and engineering and specifically in finance for portfolio management as well as exotic options valuation. Unfortunately, it has no known closed form and therefore has to be approximated. Most of the approximations presented to date are complex as well as complicated for implementation. This paper presents a simple, and easy to implement, approximation method via modified moments matching and a polynomial asymptotic series expansion correction for a central limit theorem of a finite sum. The method results in an intuitively-appealing and computation-efficient approximation for a finite sum of lognormals of at least ten summands and naturally improves as the number of summands increases. The accuracy of the method is tested against the results of Monte Carlo simulations and also compared against the standard central limit theorem and the commonly practiced Markowitz' portfolio equations.
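For context, the classical Fenton-Wilkinson moment-matching baseline (not the paper's modified-moments-plus-series method) fits a single lognormal by matching the mean and variance of the weighted sum; the sketch below checks it against Monte Carlo for assumed weights and parameters, with independent summands.

```python
import numpy as np

rng = np.random.default_rng(0)

# Weighted sum S = sum_i w_i * exp(N(mu_i, sigma_i^2)), independent summands assumed.
w = np.array([0.2, 0.3, 0.5])
mu = np.array([0.0, 0.3, -0.2])
sigma = np.array([0.4, 0.25, 0.5])

# Fenton-Wilkinson: match the mean and variance of S with a single lognormal.
m1 = np.sum(w * np.exp(mu + 0.5 * sigma**2))
var = np.sum(w**2 * np.exp(2 * mu + sigma**2) * (np.exp(sigma**2) - 1.0))
sigma_z2 = np.log(1.0 + var / m1**2)
mu_z = np.log(m1) - 0.5 * sigma_z2

# Monte Carlo check of the first two moments.
samples = (w * rng.lognormal(mean=mu, sigma=sigma, size=(200000, 3))).sum(axis=1)
print("mean (FW vs MC):", m1, samples.mean())
print("var  (FW vs MC):", var, samples.var())
print("fitted lognormal parameters:", mu_z, np.sqrt(sigma_z2))
```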
NASA Technical Reports Server (NTRS)
Tiffany, Sherwood H.; Adams, William M., Jr.
1988-01-01
The approximation of unsteady generalized aerodynamic forces in the equations of motion of a flexible aircraft is discussed. Two methods of formulating these approximations are extended to include the same flexibility in constraining the approximations and the same methodology in optimizing nonlinear parameters as another currently used extended least-squares method. Optimal selection of nonlinear parameters is made in each of the three methods by use of the same nonlinear, nongradient optimizer. The objective of the nonlinear optimization is to obtain rational approximations to the unsteady aerodynamics whose state-space realization is of lower order than that required when no optimization of the nonlinear terms is performed. The free linear parameters are determined using the least-squares matrix techniques of a Lagrange multiplier formulation of an objective function which incorporates selected linear equality constraints. State-space mathematical models resulting from the different approaches are described, and results are presented that show comparative evaluations from application of each of the extended methods to a numerical example.
Evaluation of Analytical Modeling Functions for the Phonation Onset Process.
Petermann, Simon; Kniesburges, Stefan; Ziethe, Anke; Schützenberger, Anne; Döllinger, Michael
2016-01-01
The human voice originates from oscillations of the vocal folds in the larynx. The duration of the voice onset (VO), called the voice onset time (VOT), is currently under investigation as a clinical indicator for correct laryngeal functionality. Different analytical approaches for computing the VOT based on endoscopic imaging were compared to determine the most reliable method to quantify automatically the transient vocal fold oscillations during VO. Transnasal endoscopic imaging in combination with a high-speed camera (8000 fps) was applied to visualize the phonation onset process. Two different definitions of VO interval were investigated. Six analytical functions were tested that approximate the envelope of the filtered or unfiltered glottal area waveform (GAW) during phonation onset. A total of 126 recordings from nine healthy males and 210 recordings from 15 healthy females were evaluated. Three criteria were analyzed to determine the most appropriate computation approach: (1) reliability of the fit function for a correct approximation of VO; (2) consistency represented by the standard deviation of VOT; and (3) accuracy of the approximation of VO. The results suggest the computation of VOT by a fourth-order polynomial approximation in the interval between 32.2 and 67.8% of the saturation amplitude of the filtered GAW.
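A minimal sketch of the recommended computation on a synthetic glottal area waveform: take the oscillation envelope, fit a fourth-order polynomial, and read off the time between 32.2% and 67.8% of the saturation amplitude. The synthetic signal, fundamental frequency, and envelope extraction via the Hilbert transform are assumptions; the real analysis works on filtered endoscopic GAW data.

```python
import numpy as np
from scipy.signal import hilbert

fs = 8000.0                                   # high-speed camera frame rate [fps]
t = np.arange(0.0, 0.4, 1.0 / fs)

# Synthetic GAW: oscillation amplitude grows smoothly toward a saturation value.
amp = 1.0 / (1.0 + np.exp(-(t - 0.15) / 0.02))
gaw = amp * np.sin(2 * np.pi * 180.0 * t)     # assumed 180 Hz fundamental

envelope = np.abs(hilbert(gaw))               # amplitude envelope of the GAW
coeffs = np.polyfit(t, envelope, 4)           # fourth-order polynomial approximation
env_fit = np.polyval(coeffs, t)

sat = env_fit.max()
t_lo = t[np.argmax(env_fit >= 0.322 * sat)]   # first crossing of 32.2% of saturation
t_hi = t[np.argmax(env_fit >= 0.678 * sat)]   # first crossing of 67.8% of saturation
print("estimated VOT [ms]:", 1e3 * (t_hi - t_lo))
```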
Variational Gaussian approximation for Poisson data
NASA Astrophysics Data System (ADS)
Arridge, Simon R.; Ito, Kazufumi; Jin, Bangti; Zhang, Chen
2018-02-01
The Poisson model is frequently employed to describe count data, but in a Bayesian context it leads to an analytically intractable posterior probability distribution. In this work, we analyze a variational Gaussian approximation to the posterior distribution arising from the Poisson model with a Gaussian prior. This is achieved by seeking an optimal Gaussian distribution minimizing the Kullback-Leibler divergence from the posterior distribution to the approximation, or equivalently maximizing the lower bound for the model evidence. We derive an explicit expression for the lower bound, and show the existence and uniqueness of the optimal Gaussian approximation. The lower bound functional can be viewed as a variant of classical Tikhonov regularization that penalizes also the covariance. Then we develop an efficient alternating direction maximization algorithm for solving the optimization problem, and analyze its convergence. We discuss strategies for reducing the computational complexity via low rank structure of the forward operator and the sparsity of the covariance. Further, as an application of the lower bound, we discuss hierarchical Bayesian modeling for selecting the hyperparameter in the prior distribution, and propose a monotonically convergent algorithm for determining the hyperparameter. We present extensive numerical experiments to illustrate the Gaussian approximation and the algorithms.
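A minimal scalar sketch of a variational Gaussian approximation for a single Poisson count, assuming a log-link parameterization (the paper's forward map may differ): with a Gaussian prior on the log-rate and q = N(m, v), the lower bound is available in closed form because E_q[exp(x)] = exp(m + v/2), and it can be maximized numerically.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

y = 7                       # observed Poisson count (assumed)
mu0, sig0_sq = 0.0, 1.0     # Gaussian prior on the log-rate x (assumed log-link model)

def neg_lower_bound(params):
    m, rho = params
    v = np.exp(rho)                                # variance > 0 via reparameterization
    e_loglik = y * m - np.exp(m + 0.5 * v) - gammaln(y + 1)   # E_q[log p(y | x)]
    kl = 0.5 * ((v + (m - mu0) ** 2) / sig0_sq - 1.0 + np.log(sig0_sq / v))
    return -(e_loglik - kl)                        # maximize the evidence lower bound

res = minimize(neg_lower_bound, x0=np.array([0.0, 0.0]))
m_opt, v_opt = res.x[0], np.exp(res.x[1])
print("variational Gaussian: mean =", m_opt, " variance =", v_opt)
print("lower bound value   :", -res.fun)
```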
Modeling the Swift BAT Trigger Algorithm with Machine Learning
NASA Technical Reports Server (NTRS)
Graff, Philip B.; Lien, Amy Y.; Baker, John G.; Sakamoto, Takanori
2015-01-01
To draw inferences about gamma-ray burst (GRB) source populations based on Swift observations, it is essential to understand the detection efficiency of the Swift burst alert telescope (BAT). This study considers the problem of modeling the Swift BAT triggering algorithm for long GRBs, a computationally expensive procedure, and models it using machine learning algorithms. A large sample of simulated GRBs from Lien et al. (2014) is used to train various models: random forests, boosted decision trees (with AdaBoost), support vector machines, and artificial neural networks. The best models have accuracies of greater than approximately 97% (less than approximately 3% error), which is a significant improvement on a cut in GRB flux, which has an accuracy of 89.6% (10.4% error). These models are then used to measure the detection efficiency of Swift as a function of redshift z, which is used to perform Bayesian parameter estimation on the GRB rate distribution. We find a local GRB rate density of η₀ ≈ 0.48 (+0.41/−0.23) Gpc⁻³ yr⁻¹ with power-law indices of η₁ ≈ 1.7 (+0.6/−0.5) and η₂ ≈ −5.9 (+5.7/−0.1) for GRBs above and below a break point of z₁ ≈ 6.8 (+2.8/−3.2). This methodology is able to improve upon earlier studies by more accurately modeling Swift detection and using this for fully Bayesian model fitting. The code used in this analysis is publicly available online.
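A minimal sketch of the workflow on synthetic GRB-like features: train a random forest to predict detection and tabulate the predicted detection efficiency as a function of redshift. The feature model, flux proxy, and detection rule below are placeholders, not the Lien et al. (2014) simulation or the BAT trigger logic.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 20000

# Synthetic GRB-like sample (placeholder): redshift, log luminosity, log duration.
z = rng.uniform(0.1, 10.0, n)
logL = rng.normal(52.0, 0.7, n)
logT90 = rng.normal(1.3, 0.4, n)
logflux = logL - 2.0 * np.log10(z + 1.0) - 51.0           # crude flux proxy
detected = (logflux + 0.1 * rng.normal(size=n) > 0.8).astype(int)

X = np.column_stack([z, logL, logT90, logflux])
X_tr, X_te, y_tr, y_te = train_test_split(X, detected, test_size=0.3, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("hold-out accuracy:", clf.score(X_te, y_te))

# Detection efficiency as a function of redshift, as seen by the trained model.
bins = np.linspace(0.1, 10.0, 11)
pred = clf.predict(X_te)
for lo, hi in zip(bins[:-1], bins[1:]):
    sel = (X_te[:, 0] >= lo) & (X_te[:, 0] < hi)
    if sel.any():
        print(f"z in [{lo:.1f}, {hi:.1f}): efficiency = {pred[sel].mean():.2f}")
```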
Chen, Zhenhua; Hoffmann, Mark R
2012-07-07
A unitary wave operator, exp(G), G(+) = -G, is considered to transform a multiconfigurational reference wave function Φ to the potentially exact, within basis set limit, wave function Ψ = exp(G)Φ. To obtain a useful approximation, the Hausdorff expansion of the similarity transformed effective Hamiltonian, exp(-G)H exp(G), is truncated at second order and the excitation manifold is limited; an additional separate perturbation approximation can also be made. In the perturbation approximation, which we refer to as multireference unitary second-order perturbation theory (MRUPT2), the Hamiltonian operator in the highest order commutator is approximated by a Møller-Plesset-type one-body zero-order Hamiltonian. If a complete active space self-consistent field wave function is used as reference, then the energy is invariant under orbital rotations within the inactive, active, and virtual orbital subspaces for both the second-order unitary coupled cluster method and its perturbative approximation. Furthermore, the redundancies of the excitation operators are addressed in a novel way, which is potentially more efficient compared to the usual full diagonalization of the metric of the excited configurations. Despite the loss of rigorous size-extensivity, possibly due to the use of a variational approach rather than a projective one in the solution of the amplitudes, test calculations show that the size-extensivity errors are very small. Compared to other internally contracted multireference perturbation theories, MRUPT2 only needs reduced density matrices up to three-body even with a non-complete active space reference wave function when two-body excitations within the active orbital subspace are involved in the wave operator, exp(G). Both the coupled cluster and perturbation theory variants are amenable to large, incomplete model spaces. Applications to some widely studied model systems that can be problematic because of geometry dependent quasidegeneracy, H4, P4, and BeH(2), are performed in order to test the new methods on problems where full configuration interaction results are available.
Computational properties of networks of synchronous groups of spiking neurons.
Dayhoff, Judith E
2007-09-01
We demonstrate a model in which synchronously firing ensembles of neurons are networked to produce computational results. Each ensemble is a group of biological integrate-and-fire spiking neurons, with probabilistic interconnections between groups. An analogy is drawn in which each individual processing unit of an artificial neural network corresponds to a neuronal group in a biological model. The activation value of a unit in the artificial neural network corresponds to the fraction of active neurons, synchronously firing, in a biological neuronal group. Weights of the artificial neural network correspond to the product of the interconnection density between groups, the group size of the presynaptic group, and the postsynaptic potential heights in the synchronous group model. All three of these parameters can modulate connection strengths between neuronal groups in the synchronous group models. We give an example of nonlinear classification (XOR) and a function approximation example in which the capability of the artificial neural network can be captured by a neural network model with biological integrate-and-fire neurons configured as a network of synchronously firing ensembles of such neurons. We point out that the general function approximation capability proven for feedforward artificial neural networks appears to be approximated by networks of neuronal groups that fire in synchrony, where the groups comprise integrate-and-fire neurons. We discuss the advantages of this type of model for biological systems, its possible learning mechanisms, and the associated timing relationships.
Boluda-Ruiz, Rubén; García-Zambrana, Antonio; Castillo-Vázquez, Carmen; Castillo-Vázquez, Beatriz
2016-10-03
A novel accurate and useful approximation of the well-known Beckmann distribution is presented here, which is used to model generalized pointing errors in the context of free-space optical (FSO) communication systems. We derive an approximate closed-form probability density function (PDF) for the composite gamma-gamma (GG) atmospheric turbulence with the pointing error model using the proposed approximation of the Beckmann distribution, which is valid for most practical terrestrial FSO links. This approximation takes into account the effect of the beam width, different jitters for the elevation and the horizontal displacement and the simultaneous effect of nonzero boresight errors for each axis at the receiver plane. Additionally, the proposed approximation allows us to delimit two different FSO scenarios. The first of them is when atmospheric turbulence is the dominant effect in relation to generalized pointing errors, and the second one when generalized pointing error is the dominant effect in relation to atmospheric turbulence. The second FSO scenario has not been studied in-depth by the research community. Moreover, the accuracy of the method is measured both visually and quantitatively using curve-fitting metrics. Simulation results are further included to confirm the analytical results.
Efficient Actor-Critic Algorithm with Hierarchical Model Learning and Planning.
Zhong, Shan; Liu, Quan; Fu, QiMing
2016-01-01
To improve the convergence rate and the sample efficiency, two efficient learning methods AC-HMLP and RAC-HMLP (AC-HMLP with ℓ 2 -regularization) are proposed by combining actor-critic algorithm with hierarchical model learning and planning. The hierarchical models consisting of the local and the global models, which are learned at the same time during learning of the value function and the policy, are approximated by local linear regression (LLR) and linear function approximation (LFA), respectively. Both the local model and the global model are applied to generate samples for planning; the former is used only if the state-prediction error does not surpass the threshold at each time step, while the latter is utilized at the end of each episode. The purpose of taking both models is to improve the sample efficiency and accelerate the convergence rate of the whole algorithm through fully utilizing the local and global information. Experimentally, AC-HMLP and RAC-HMLP are compared with three representative algorithms on two Reinforcement Learning (RL) benchmark problems. The results demonstrate that they perform best in terms of convergence rate and sample efficiency.
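A minimal sketch of the local-model idea described in this abstract, assuming (rather than taken from the paper) a k-nearest-neighbour local linear regression over stored transitions and a fixed error threshold that gates whether model-generated samples are used for planning:

# Sketch of a local linear regression (LLR) state predictor with an error
# threshold; the class name, neighbourhood size and threshold are assumptions.
import numpy as np

class LocalLinearModel:
    def __init__(self, k=8):
        self.k = k
        self.states, self.actions, self.next_states = [], [], []

    def add(self, s, a, s_next):
        self.states.append(np.atleast_1d(s))
        self.actions.append(np.atleast_1d(a))
        self.next_states.append(np.atleast_1d(s_next))

    def predict(self, s, a):
        S, A = np.vstack(self.states), np.vstack(self.actions)
        Y = np.vstack(self.next_states)
        X = np.hstack([S, A, np.ones((len(S), 1))])
        q = np.concatenate([np.atleast_1d(s), np.atleast_1d(a), [1.0]])
        # k nearest stored transitions in (state, action) space
        idx = np.argsort(np.linalg.norm(X[:, :-1] - q[:-1], axis=1))[: self.k]
        W, *_ = np.linalg.lstsq(X[idx], Y[idx], rcond=None)
        return q @ W

# Usage idea: after each real transition, model.add(s, a, s_next); a planning
# sample (s, a, model.predict(s, a)) is used only while the one-step
# state-prediction error stays below a chosen threshold.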
NASA Astrophysics Data System (ADS)
Cheng, Rongjun; Sun, Fengxin; Wei, Qi; Wang, Jufeng
2018-02-01
The space-fractional advection-dispersion equation (SFADE) can describe particle transport in a variety of fields more accurately than classical integer-order models. Because of the nonlocal property of the integro-differential operator of the space-fractional derivative, fractional models are very challenging to treat numerically, and few such studies have been reported in the literature. In this paper, a numerical analysis of the two-dimensional SFADE is carried out by the element-free Galerkin (EFG) method. The trial functions for the SFADE are constructed by the moving least-squares (MLS) approximation. The energy functional is formulated from the Galerkin weak form, and its minimization yields the final system of algebraic equations. The Riemann-Liouville operator is discretized by the Grünwald formula. With the central difference method, the EFG method and the Grünwald formula, fully discrete approximation schemes for the SFADE are established. The computed approximate solutions are compared with exact results and with results from other well-known methods, and are presented in tables and graphs. The results demonstrate the validity, efficiency and accuracy of the proposed techniques. Furthermore, the error is computed, and the proposed method shows reasonable convergence rates in the spatial and temporal discretizations.
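For reference, the Grünwald formula mentioned above can be sketched as follows; the unshifted form, the uniform grid, and the x^2 test function are illustrative assumptions rather than details taken from the paper:

# Grünwald discretization of the Riemann-Liouville derivative:
#   D^alpha f(x) ~ h^(-alpha) * sum_k g_k f(x - k h),
# with g_k = (-1)^k * binom(alpha, k) built by the standard recurrence.
import numpy as np
from math import gamma

def grunwald_coeffs(alpha, n):
    g = np.empty(n + 1)
    g[0] = 1.0
    for k in range(1, n + 1):
        g[k] = g[k - 1] * (k - 1 - alpha) / k
    return g

def gl_fractional_derivative(f_vals, alpha, h):
    """First-order accurate approximation of D^alpha f on a uniform grid."""
    n = len(f_vals)
    g = grunwald_coeffs(alpha, n - 1)
    out = np.empty(n)
    for i in range(n):
        out[i] = np.dot(g[: i + 1], f_vals[i::-1]) / h**alpha
    return out

h = 1e-3
x = np.arange(0.0, 1.0 + h, h)
approx = gl_fractional_derivative(x**2, alpha=0.5, h=h)
exact = 2.0 / gamma(2.5) * x**1.5          # known half-derivative of x^2
print("max abs error:", np.max(np.abs(approx - exact)))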
ERIC Educational Resources Information Center
Wetsel, Grover C., Jr.
1978-01-01
Calculates the energy-band structure of noninteracting electrons in a one-dimensional crystal using exact and approximate methods for a rectangular-well atomic potential. A comparison of the two solutions as a function of potential-well depth and ratio of lattice spacing to well width is presented. (Author/GA)
Development and application of accurate analytical models for single active electron potentials
NASA Astrophysics Data System (ADS)
Miller, Michelle; Jaron-Becker, Agnieszka; Becker, Andreas
2015-05-01
The single active electron (SAE) approximation is a theoretical model frequently employed to study scenarios in which inner-shell electrons may productively be treated as frozen spectators to a physical process of interest, and accurate analytical approximations for these potentials are sought as a useful simulation tool. Density functional theory is often used to construct a SAE potential, requiring a further approximation for the exchange-correlation functional. In this study, we employ the Krieger, Li, and Iafrate (KLI) modification to the optimized-effective-potential (OEP) method to reduce the complexity of the problem to the straightforward solution of a system of linear equations through simple arguments regarding the behavior of the exchange-correlation potential in regions where a single orbital dominates. We employ this method for the solution of atomic and molecular potentials, and use the resultant curve to devise a systematic construction for highly accurate and useful analytical approximations for several systems. Supported by the U.S. Department of Energy (Grant No. DE-FG02-09ER16103), and the U.S. National Science Foundation (Graduate Research Fellowship, Grants No. PHY-1125844 and No. PHY-1068706).
Vertical spatial coherence model for a transient signal forward-scattered from the sea surface
Yoerger, E.J.; McDaniel, S.T.
1996-01-01
The treatment of acoustic energy forward scattered from the sea surface, which is modeled as a random communications scatter channel, is the basis for developing an expression for the time-dependent coherence function across a vertical receiving array. The derivation of this model uses linear filter theory applied to the Fresnel-corrected Kirchhoff approximation in obtaining an equation for the covariance function for the forward-scattered problem. The resulting formulation is used to study the dependence of the covariance on experimental and environmental factors. The modeled coherence functions are then formed for various geometrical and environmental parameters and compared to experimental data.
Optimization of the Monte Carlo code for modeling of photon migration in tissue.
Zołek, Norbert S; Liebert, Adam; Maniewski, Roman
2006-10-01
The Monte Carlo method is frequently used to simulate light transport in turbid media because of its simplicity and flexibility, allowing the analysis of complicated geometrical structures. Monte Carlo simulations are, however, time consuming because of the necessity to track the paths of individual photons. The time-consuming computation is mainly associated with the calculation of the logarithmic and trigonometric functions as well as the generation of pseudo-random numbers. In this paper, the Monte Carlo algorithm was developed and optimized by approximating the logarithmic and trigonometric functions. The approximations were based on polynomial and rational functions, and the errors of these approximations are less than 1% of the values of the original functions. The proposed algorithm was verified by simulations of the time-resolved reflectance at several source-detector separations. The results of the calculation using the approximated algorithm were compared with those of the Monte Carlo simulations obtained with an exact computation of the logarithmic and trigonometric functions as well as with the solution of the diffusion equation. The errors of the moments of the simulated distributions of times of flight of photons (total number of photons, mean time of flight and variance) are less than 2% for a range of optical properties typical of living tissues. The proposed approximated algorithm speeds up the Monte Carlo simulations by a factor of 4. The developed code can be used on parallel machines, allowing for further acceleration.
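The general idea, though not the paper's specific polynomial and rational forms, can be sketched by replacing the cosine and sine calls used for azimuthal sampling with low-order polynomial fits and checking the worst-case error:

# Illustrative sketch: low-order polynomial fits to cos and sin on [0, 2*pi],
# which could replace library trig calls inside a photon-migration loop.
import numpy as np

x = np.linspace(0.0, 2.0 * np.pi, 2001)
cos_fit = np.polynomial.Polynomial.fit(x, np.cos(x), deg=10)
sin_fit = np.polynomial.Polynomial.fit(x, np.sin(x), deg=11)

print("max |cos error|:", np.max(np.abs(cos_fit(x) - np.cos(x))))
print("max |sin error|:", np.max(np.abs(sin_fit(x) - np.sin(x))))

# Inside the loop, the azimuthal direction would then be obtained as
# phi = 2*pi*xi; ux, uy = cos_fit(phi), sin_fit(phi) for a random xi in [0, 1).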
NASA Astrophysics Data System (ADS)
Koitz, Ralph; Soini, Thomas M.; Genest, Alexander; Trickey, S. B.; Rösch, Notker
2012-07-01
The performance of eight generalized gradient approximation exchange-correlation (xc) functionals is assessed by a series of scalar relativistic all-electron calculations on octahedral palladium model clusters Pdn with n = 13, 19, 38, 55, 79, 147 and the analogous clusters Aun (for n up through 79). For these model systems, we determined the cohesive energies and average bond lengths of the optimized octahedral structures. We extrapolate these values to the bulk limits and compare with the corresponding experimental values. While the well-established functionals BP, PBE, and PW91 are the most accurate at predicting energies, the more recent forms PBEsol, VMTsol, and VT{84}sol significantly improve the accuracy of geometries. The observed trends are largely similar for both Pd and Au. In the same spirit, we also studied the scalability of the ionization potentials and electron affinities of the Pd clusters, and extrapolated those quantities to estimates of the work function. Overall, the xc functionals can be classified into four distinct groups according to the accuracy of the computed parameters. These results allow a judicious selection of xc approximations for treating transition metal clusters.
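The bulk-limit extrapolation referred to above is commonly done by fitting cluster cohesive energies linearly in N^(-1/3) and reading off the intercept; the scaling form and the numbers in this sketch are illustrative assumptions, not the paper's data:

# Sketch of a cluster-to-bulk extrapolation: E_coh(N) fitted linearly in
# N^(-1/3); the cohesive energies below are made-up placeholder values.
import numpy as np

N = np.array([13, 19, 38, 55, 79, 147])
E_coh = np.array([2.1, 2.3, 2.6, 2.7, 2.8, 3.0])   # eV/atom, illustrative only

x = N ** (-1.0 / 3.0)
slope, intercept = np.polyfit(x, E_coh, 1)
print(f"extrapolated bulk cohesive energy ~ {intercept:.2f} eV/atom")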
NASA Astrophysics Data System (ADS)
Malekan, Mohammad; Barros, Felício B.
2017-12-01
Generalized or extended finite element method (G/XFEM) models the crack by enriching partition-of-unity functions with discontinuous functions that represent well the physical behavior of the problem. However, these enrichment functions are not available for all problem types. Thus, one can use numerically built (global-local) enrichment functions to obtain a better approximation procedure. This paper investigates the effects of micro-defects/inhomogeneities on the behavior of a main crack by modeling the micro-defects/inhomogeneities in the local problem using a two-scale G/XFEM. The global-local enrichment functions are influenced by the micro-defects/inhomogeneities from the local problem and thus change the approximate solution of the global problem with the main crack. This approach is presented in detail by solving three different linear elastic fracture mechanics problems for different cases: two plane stress problems and a Reissner-Mindlin plate problem. The numerical results obtained with the two-scale G/XFEM are compared with reference solutions: analytical results, numerical solutions obtained with the standard G/XFEM method and with ABAQUS, and results from the literature.
Linear and non-linear dynamic models of a geared rotor-bearing system
NASA Technical Reports Server (NTRS)
Kahraman, Ahmet; Singh, Rajendra
1990-01-01
A three-degree-of-freedom non-linear model of a geared rotor-bearing system with gear backlash and radial clearances in rolling element bearings is proposed here. This reduced-order model can be used to describe the transverse-torsional motion of the system. It is justified by comparing the eigensolutions of the corresponding linear model with finite element method results. The nature of the nonlinearities in the bearings is examined and two approximate nonlinear stiffness functions are proposed. These approximate bearing models are verified by comparing their frequency responses with the results given by the exact form of the nonlinearity. The proposed nonlinear dynamic model of the geared rotor-bearing system can be used to investigate the dynamic behavior and chaos.
NASA Astrophysics Data System (ADS)
Yuan, Shifei; Jiang, Lei; Yin, Chengliang; Wu, Hongjie; Zhang, Xi
2017-06-01
To guarantee safety, high efficiency and long lifetime for a lithium-ion battery, an advanced battery management system requires a physically meaningful yet computationally efficient battery model. The pseudo-two-dimensional (P2D) electrochemical model can provide physical information about the lithium concentration and potential distributions across the cell dimension. However, the extensive computational burden caused by the temporal and spatial discretization limits its real-time application. In this research, we propose a new simplified electrochemical model (SEM) by modifying the boundary conditions for the electrolyte diffusion equations, which significantly facilitates the analytical solving process. Then, to obtain a reduced-order transfer function, the Padé approximation method is adopted to simplify the derived transcendental impedance solution. The proposed model with the reduced-order transfer function is fast to compute and preserves physical meaning through parameters such as the solid/electrolyte diffusion coefficients (Ds & De) and the particle radius. Simulations show that the proposed simplified model maintains high accuracy for electrolyte-phase concentration (Ce) predictions, with modeling errors of 0.8% and 0.24%, respectively, when compared to the rigorous model under 1C-rate pulse charge/discharge and urban dynamometer driving schedule (UDDS) profiles. Meanwhile, the simplified model carries a significantly reduced computational burden, which benefits its real-time application.
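A minimal demonstration of the Padé step using SciPy, with exp(x) as a stand-in transcendental function rather than the battery model's electrolyte-diffusion transfer function:

# Pade approximation from Taylor coefficients; exp(x) is only a stand-in
# for the transcendental impedance solution discussed above.
import numpy as np
from math import factorial
from scipy.interpolate import pade

taylor = [1.0 / factorial(k) for k in range(6)]   # Taylor coefficients of exp
p, q = pade(taylor, 2)                            # [3/2] rational approximant

x = np.linspace(-1.0, 1.0, 5)
print(np.max(np.abs(p(x) / q(x) - np.exp(x))))    # small on a modest interval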
NASA Technical Reports Server (NTRS)
Hibbard, William L.; Dyer, Charles R.; Paul, Brian E.
1994-01-01
The VIS-AD data model integrates metadata about the precision of values, including missing data indicators and the way that arrays sample continuous functions, with the data objects of a scientific programming language. The data objects of this data model form a lattice, ordered by the precision with which they approximate mathematical objects. We define a similar lattice of displays and study visualization processes as functions from data lattices to display lattices. Such functions can be applied to visualize data objects of all data types and are thus polymorphic.
A new probability distribution model of turbulent irradiance based on Born perturbation theory
NASA Astrophysics Data System (ADS)
Wang, Hongxing; Liu, Min; Hu, Hao; Wang, Qian; Liu, Xiguo
2010-10-01
The subject of the PDF (probability density function) of irradiance fluctuations in a turbulent atmosphere is still unsettled. Theory reliably describes the behavior in the weak-turbulence regime, but the theoretical description in the strong and whole-turbulence regimes is still controversial. Based on Born perturbation theory, the physical manifestations and correlations of three typical PDF models (Rice-Nakagami, exponential-Bessel and negative-exponential distributions) were theoretically analyzed. It is shown that these models can be derived by separately making circular-Gaussian, strong-turbulence and strong-turbulence-circular-Gaussian approximations in Born perturbation theory, which contradicts the viewpoint that the Rice-Nakagami model is only applicable in the extremely weak turbulence regime and provides theoretical arguments for choosing rational models in practical applications. A common shortcoming of the three models is that they are all approximations. A new model, called the Maclaurin-spread distribution, is proposed without any approximation except for assuming the correlation coefficient to be zero; the new model is therefore considered to reflect Born perturbation theory exactly. Simulated results confirm the accuracy of this new model.
Goličnik, Marko
2011-06-01
Many pharmacodynamic processes can be described by the nonlinear saturation kinetics that are most frequently based on the hyperbolic Michaelis-Menten equation. Thus, various time-dependent solutions for drugs obeying such kinetics can be expressed in terms of the Lambert W(x)-omega function. However, unfortunately, computer programs that can perform the calculations for W(x) are not widely available. To avoid this problem, the replacement of the integrated Michaelis-Menten equation with an empiric integrated 1--exp alternative model equation was proposed recently by Keller et al. (Ther Drug Monit. 2009;31:783-785), although, as shown here, it was not necessary. Simulated concentrations of model drugs obeying Michaelis-Menten elimination kinetics were generated by two approaches: 1) calculation of time-course data based on an approximation equation W2*(x) performed using Microsoft Excel; and 2) calculation of reference time-course data based on an exact W(x) function built in to the Wolfram Mathematica. I show here that the W2*(x) function approximates the actual W(x) accurately. W2*(x) is expressed in terms of elementary mathematical functions and, consequently, it can be easily implemented using any of the widely available software. Hence, with the example of a hypothetical drug, I demonstrate here that an equation based on this approximation is far better, because it is nearly equivalent to the original solution, whereas the same characteristics cannot be fully confirmed for the 1--exp model equation. The W2*(x) equation proposed here might have an important role as a useful shortcut in optional software to estimate kinetic parameters from experimental data for drugs, and it might represent an easy and universal analytical tool for simulating and designing dosing regimens.
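For reference, the integrated Michaelis-Menten time course in terms of the Lambert W function can be evaluated directly with SciPy; the kinetic parameters below are illustrative:

# Closed-form substrate time course under Michaelis-Menten elimination,
# S(t) = Km * W[(S0/Km) * exp((S0 - Vmax*t)/Km)], via scipy.special.lambertw.
import numpy as np
from scipy.special import lambertw

def substrate(t, S0, Vmax, Km):
    arg = (S0 / Km) * np.exp((S0 - Vmax * t) / Km)
    return Km * lambertw(arg).real

t = np.linspace(0.0, 10.0, 6)
print(substrate(t, S0=5.0, Vmax=1.0, Km=2.0))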
Kosmidis, Kosmas; Argyrakis, Panos; Macheras, Panos
2003-07-01
To verify the Higuchi law and study drug release from cylindrical and spherical matrices by means of Monte Carlo computer simulation. A one-dimensional matrix, based on the theoretical assumptions of the derivation of the Higuchi law, was simulated and its time evolution was monitored. Cylindrical and spherical three-dimensional lattices were simulated, with sites at the boundary of the lattice denoted as leak sites. Particles were allowed to move inside the lattice using the random walk model. Excluded-volume interactions between the particles were assumed. We monitored the system time evolution for different lattice sizes and different initial particle concentrations. The Higuchi law was verified using the Monte Carlo technique in a one-dimensional lattice. It was found that Fickian drug release from cylindrical matrices can be approximated nicely with the Weibull function. A simple linear relation between the Weibull function parameters and the specific surface of the system was found. Drug release from a matrix, as a result of a diffusion process assuming excluded-volume interactions between the drug molecules, can be described using a Weibull function. This model, although approximate and semiempirical, has the benefit of providing a simple physical connection between the model parameters and the system geometry, something that was missing from other semiempirical models.
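A small sketch of fitting the Weibull release function, f(t) = 1 - exp(-a t^b), to release data with SciPy; the data here are synthetic and only illustrate the procedure:

# Weibull description of fractional drug release fitted by nonlinear least
# squares; the "data" are generated from assumed parameters plus noise.
import numpy as np
from scipy.optimize import curve_fit

def weibull_release(t, a, b):
    return 1.0 - np.exp(-a * t**b)

t = np.linspace(0.0, 50.0, 26)
rng = np.random.default_rng(1)
data = weibull_release(t, a=0.08, b=0.85) + rng.normal(0.0, 0.01, t.size)

(a_hat, b_hat), _ = curve_fit(weibull_release, t, data, p0=(0.1, 1.0))
print(f"a ~ {a_hat:.3f}, b ~ {b_hat:.3f}")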
NASA Technical Reports Server (NTRS)
Alexandrov, N. M.; Nielsen, E. J.; Lewis, R. M.; Anderson, W. K.
2000-01-01
First-order approximation and model management is a methodology for the systematic use of variable-fidelity models or approximations in optimization. The intent of model management is to attain convergence to high-fidelity solutions with minimal expense in high-fidelity computations. The savings in terms of computationally intensive evaluations depend on the ability of the available lower-fidelity model, or suite of models, to predict the improvement trends for the high-fidelity problem. Variable-fidelity models can be represented by data-fitting approximations, variable-resolution models, variable-convergence models, or variable-physical-fidelity models. The present work considers the use of variable-fidelity physics models. We demonstrate the performance of model management on an aerodynamic optimization of a multi-element airfoil designed to operate in the transonic regime. The Reynolds-averaged Navier-Stokes equations represent the high-fidelity model, while the Euler equations represent the low-fidelity model. An unstructured mesh-based analysis code, FUN2D, evaluates functions and sensitivity derivatives for both models. Model management for the present demonstration problem yields fivefold savings in terms of high-fidelity evaluations compared to optimization done with high-fidelity computations alone.
mBEEF-vdW: Robust fitting of error estimation density functionals
NASA Astrophysics Data System (ADS)
Lundgaard, Keld T.; Wellendorff, Jess; Voss, Johannes; Jacobsen, Karsten W.; Bligaard, Thomas
2016-06-01
We propose a general-purpose semilocal/nonlocal exchange-correlation functional approximation, named mBEEF-vdW. The exchange is a meta generalized gradient approximation, and the correlation is a semilocal and nonlocal mixture, with the Rutgers-Chalmers approximation for van der Waals (vdW) forces. The functional is fitted within the Bayesian error estimation functional (BEEF) framework [J. Wellendorff et al., Phys. Rev. B 85, 235149 (2012), 10.1103/PhysRevB.85.235149; J. Wellendorff et al., J. Chem. Phys. 140, 144107 (2014), 10.1063/1.4870397]. We improve the previously used fitting procedures by introducing a robust MM-estimator based loss function, reducing the sensitivity to outliers in the datasets. To more reliably determine the optimal model complexity, we furthermore introduce a generalization of the bootstrap 0.632 estimator with hierarchical bootstrap sampling and geometric mean estimator over the training datasets. Using this estimator, we show that the robust loss function leads to a 10 % improvement in the estimated prediction error over the previously used least-squares loss function. The mBEEF-vdW functional is benchmarked against popular density functional approximations over a wide range of datasets relevant for heterogeneous catalysis, including datasets that were not used for its training. Overall, we find that mBEEF-vdW has a higher general accuracy than competing popular functionals, and it is one of the best performing functionals on chemisorption systems, surface energies, lattice constants, and dispersion. We also show the potential-energy curve of graphene on the nickel(111) surface, where mBEEF-vdW matches the experimental binding length. mBEEF-vdW is currently available in gpaw and other density functional theory codes through Libxc, version 3.0.0.
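The abstract's point about robust loss functions can be illustrated generically (using SciPy's Huber-type loss as a stand-in for the MM-estimator actually used in the paper): a single gross outlier pulls a least-squares fit noticeably but barely moves the robust one.

# Least-squares vs. robust (Huber-type) fitting of a straight line with one
# outlier; this only illustrates the general idea of a robust loss.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(2)
x = np.linspace(0.0, 1.0, 40)
y = 2.0 * x + 1.0 + rng.normal(0.0, 0.05, x.size)
y[5] += 3.0                                   # a single gross outlier

def residuals(theta):
    return theta[0] * x + theta[1] - y

fit_ls = least_squares(residuals, x0=[0.0, 0.0])
fit_robust = least_squares(residuals, x0=[0.0, 0.0], loss="huber", f_scale=0.1)

print("least squares :", fit_ls.x)
print("robust (huber):", fit_robust.x)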
Approximate inference on planar graphs using loop calculus and belief propagation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chertkov, Michael; Gomez, Vicenc; Kappen, Hilbert
We introduce novel results for approximate inference on planar graphical models using the loop calculus framework. The loop calculus (Chertkov and Chernyak, 2006b) allows one to express the exact partition function Z of a graphical model as a finite sum of terms that can be evaluated once the belief propagation (BP) solution is known. In general, full summation over all correction terms is intractable. We develop an algorithm for the approach presented in Chertkov et al. (2008), which represents an efficient truncation scheme on planar graphs and a new representation of the series in terms of Pfaffians of matrices. We analyze in detail both the loop series and the Pfaffian series for models with binary variables and pairwise interactions, and show that the first term of the Pfaffian series can provide very accurate approximations. The algorithm outperforms previous truncation schemes of the loop series and is competitive with other state-of-the-art methods for approximate inference.
An asymptotically consistent approximant method with application to soft- and hard-sphere fluids.
Barlow, N S; Schultz, A J; Weinstein, S J; Kofke, D A
2012-11-28
A modified Padé approximant is used to construct an equation of state, which has the same large-density asymptotic behavior as the model fluid being described, while still retaining the low-density behavior of the virial equation of state (virial series). Within this framework, all sequences of rational functions that are analytic in the physical domain converge to the correct behavior at the same rate, eliminating the ambiguity of choosing the correct form of Padé approximant. The method is applied to fluids composed of "soft" spherical particles with separation distance r interacting through an inverse-power pair potential, φ = ε(σ/r)^n, where ε and σ are model parameters and n is the "hardness" of the spheres. For n < 9, the approximants provide a significant improvement over the 8-term virial series, when compared against molecular simulation data. For n ≥ 9, both the approximants and the 8-term virial series give an accurate description of the fluid behavior, when compared with simulation data. When taking the limit as n → ∞, an equation of state for hard spheres is obtained, which is closer to simulation data than the 10-term virial series for hard spheres, and is comparable in accuracy to other recently proposed equations of state. By applying a least-squares fit to the approximants, we obtain a general and accurate soft-sphere equation of state as a function of n, valid over the full range of density in the fluid phase.
Kondo necklace model in approximants of Fibonacci chains
NASA Astrophysics Data System (ADS)
Reyes, Daniel; Tarazona, H.; Cuba-Supanta, G.; Landauro, C. V.; Espinoza, R.; Quispe-Marcatoma, J.
2017-11-01
The low energy behavior of the one dimensional Kondo necklace model with structural aperiodicity is studied using a representation for the localized and conduction electron spins, in terms of local Kondo singlet and triplet operators at zero temperature. A decoupling scheme on the double time Green's functions is used to find the dispersion relation for the excitations of the system. We determine the dependence between the structural aperiodicity modulation and the spin gap in a Fibonacci approximant chain at zero temperature and in the paramagnetic side of the phase diagram.
The stability of perfect elliptic disks. 1: The maximum streaming case
NASA Technical Reports Server (NTRS)
Levine, Stephen E.; Sparke, Linda S.
1994-01-01
Self-consistent distribution functions are constructed for two-dimensional perfect elliptic disks (for which the potential is exactly integrable) in the limit of maximum streaming; these are tested for stability by N-body integration. To obtain a discrete representation for each model, simulated annealing is used to choose a set of orbits which sample the distribution function and reproduce the required density profile while carrying the greatest possible amount of angular momentum. A quiet start technique is developed to place particles on these orbits uniformly in action-angle space, making the initial conditions as smooth as possible. The roundest models exhibit spiral instabilities similar to those of cold axisymmetric disks; the most elongated models show bending instabilities like those seen in prolate systems. Between these extremes, there is a range of axial ratios 0.25 ≲ b/a ≲ 0.6 within which these models appear to be stable. All the methods developed in this investigation can easily be extended to integrable potentials in three dimensions.
NASA Technical Reports Server (NTRS)
Davis, J. L.; Herring, T. A.; Shapiro, I. I.; Rogers, A. E. E.; Elgered, G.
1985-01-01
Analysis of very long baseline interferometry data indicates that systematic errors in prior estimates of baseline length, of order 5 cm for approximately 8000-km baselines, were due primarily to mismodeling of the electrical path length of the troposphere and mesosphere ('atmospheric delay'). Here, observational evidence for the existence of such errors in the previously used models for the atmospheric delay is discussed, and a new 'mapping' function for the elevation-angle dependence of this delay is developed. The delay predicted by this new mapping function differs from ray-trace results by less than approximately 5 mm at all elevations down to 5 deg, and introduces errors into the estimates of baseline length of less than about 1 cm for the multistation intercontinental experiment analyzed here.
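The simplest mapping functions scale the zenith delay by roughly 1/sin(E); improved ones use a continued-fraction form. The sketch below uses that general (Marini-type) form with placeholder coefficients, not the values derived in this work:

# Elevation-angle dependence of the atmospheric delay: 1/sin(E) versus a
# continued-fraction mapping function normalized to 1 at zenith.  The
# coefficients a, b, c are illustrative placeholders.
import numpy as np

def mapping_simple(elev_deg):
    return 1.0 / np.sin(np.radians(elev_deg))

def mapping_cf(elev_deg, a=1.2e-3, b=3.0e-3, c=7.0e-2):
    s = np.sin(np.radians(elev_deg))
    top = 1.0 + a / (1.0 + b / (1.0 + c))
    return top / (s + a / (s + b / (s + c)))

for e in (90.0, 30.0, 10.0, 5.0):
    print(f"E = {e:4.1f} deg: 1/sin(E) = {mapping_simple(e):6.3f}, "
          f"continued fraction = {mapping_cf(e):6.3f}")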
Functional model of biological neural networks.
Lo, James Ting-Ho
2010-12-01
A functional model of biological neural networks, called temporal hierarchical probabilistic associative memory (THPAM), is proposed in this paper. THPAM comprises functional models of dendritic trees for encoding inputs to neurons, a first type of neuron for generating spike trains, a second type of neuron for generating graded signals to modulate neurons of the first type, supervised and unsupervised Hebbian learning mechanisms for easy learning and retrieval, an arrangement of dendritic trees for maximizing generalization, hardwiring for rotation-translation-scaling invariance, and feedback connections with different delay durations for neurons to make full use of present and past information generated by neurons in the same and higher layers. These functional models and their processing operations have many functions of biological neural networks that have not been achieved by other models in the open literature and provide logically coherent answers to many long-standing neuroscientific questions. However, biological justifications of these functional models and their processing operations are required for THPAM to qualify as a macroscopic model (or low-order approximation) of biological neural networks.
Nonequilibrium Green's functions and atom-surface dynamics: Simple views from a simple model system
NASA Astrophysics Data System (ADS)
Boström, E.; Hopjan, M.; Kartsev, A.; Verdozzi, C.; Almbladh, C.-O.
2016-03-01
We employ Non-equilibrium Green's functions (NEGF) to describe the real-time dynamics of an adsorbate-surface model system exposed to ultrafast laser pulses. For a finite number of electronic orbitals, the system is solved exactly and within different levels of approximation. Specifically i) the full exact quantum mechanical solution for electron and nuclear degrees of freedom is used to benchmark ii) the Ehrenfest approximation (EA) for the nuclei, with the electron dynamics still treated exactly. Then, using the EA, electronic correlations are treated with NEGF within iii) 2nd Born and with iv) a recently introduced hybrid scheme, which mixes 2nd Born self-energies with non-perturbative, local exchange- correlation potentials of Density Functional Theory (DFT). Finally, the effect of a semi-infinite substrate is considered: we observe that a macroscopic number of de-excitation channels can hinder desorption. While very preliminary in character and based on a simple and rather specific model system, our results clearly illustrate the large potential of NEGF to investigate atomic desorption, and more generally, the non equilibrium dynamics of material surfaces subject to ultrafast laser fields.
Ionic structures and transport properties of hot dense W and U plasmas
NASA Astrophysics Data System (ADS)
Hou, Yong; Yuan, Jianmin
2016-10-01
We have combined the average-atom model with the hyper-netted chain approximation (AAHNC) to describe the electronic and ionic structure of uranium and tungsten in the hot dense matter regime. When the electronic structure is described within the average-atom model, the effects of the other ions on the electronic structure are accounted for through the correlation functions, and the ionic structure is calculated using the hyper-netted chain (HNC) approximation. The ion-ion pair potential is calculated using the modified Gordon-Kim model based on the electronic density distribution from temperature-dependent density functional theory, and the electronic and ionic structures are determined self-consistently. On the basis of the ion-ion pair potential, we perform classical (CMD) and Langevin (LMD) molecular dynamics simulations of the ionic transport properties, such as the ionic self-diffusion and shear viscosity coefficients, through the ionic velocity correlation functions. Because the number of free electrons increases with plasma temperature, the influence of electron-ion collisions on the transport properties becomes increasingly important.
Data-driven parameterization of the generalized Langevin equation
Lei, Huan; Baker, Nathan A.; Li, Xiantao
2016-11-29
We present a data-driven approach to determine the memory kernel and random noise of the generalized Langevin equation. To facilitate practical implementations, we parameterize the kernel function in the Laplace domain by a rational function, with coefficients directly linked to the equilibrium statistics of the coarse-grain variables. Further, we show that such an approximation can be constructed to arbitrarily high order. Within these approximations, the generalized Langevin dynamics can be embedded in an extended stochastic model without memory. We demonstrate how to introduce the stochastic noise so that the fluctuation-dissipation theorem is exactly satisfied.
Indicators of ecosystem function identify alternate states in the sagebrush steppe.
Kachergis, Emily; Rocca, Monique E; Fernandez-Gimenez, Maria E
2011-10-01
Models of ecosystem change that incorporate nonlinear dynamics and thresholds, such as state-and-transition models (STMs), are increasingly popular tools for land management decision-making. However, few models are based on systematic collection and documentation of ecological data, and of these, most rely solely on structural indicators (species composition) to identify states and transitions. As STMs are adopted as an assessment framework throughout the United States, finding effective and efficient ways to create data-driven models that integrate ecosystem function and structure is vital. This study aims to (1) evaluate the utility of functional indicators (indicators of rangeland health, IRH) as proxies for more difficult ecosystem function measurements and (2) create a data-driven STM for the sagebrush steppe of Colorado, USA, that incorporates both ecosystem structure and function. We sampled soils, plant communities, and IRH at 41 plots with similar clayey soils but different site histories to identify potential states and infer the effects of management practices and disturbances on transitions. We found that many IRH were correlated with quantitative measures of functional indicators, suggesting that the IRH can be used to approximate ecosystem function. In addition to a reference state that functions as expected for this soil type, we identified four biotically and functionally distinct potential states, consistent with the theoretical concept of alternate states. Three potential states were related to management practices (chemical and mechanical shrub treatments and seeding history) while one was related only to ecosystem processes (erosion). IRH and potential states were also related to environmental variation (slope, soil texture), suggesting that there are environmental factors within areas with similar soils that affect ecosystem dynamics and should be noted within STMs. Our approach generated an objective, data-driven model of ecosystem dynamics for rangeland management. Our findings suggest that the IRH approximate ecosystem processes and can distinguish between alternate states and communities and identify transitions when building data-driven STMs. Functional indicators are a simple, efficient way to create data-driven models that are consistent with alternate state theory. Managers can use them to improve current model-building methods and thus apply state-and-transition models more broadly for land management decision-making.
NASA Astrophysics Data System (ADS)
He, Zhenzong; Qi, Hong; Yao, Yuchen; Ruan, Liming
2014-12-01
The Ant Colony Optimization algorithm based on the probability density function (PDF-ACO) is applied to estimate the bimodal aerosol particle size distribution (PSD). The direct problem is solved by the modified Anomalous Diffraction Approximation (ADA, an approximation for optically large and soft spheres, i.e., χ ≫ 1 and |m-1| ≪ 1) and the Beer-Lambert law. First, a popular bimodal aerosol PSD and three other bimodal PSDs are retrieved in the dependent model by the multi-wavelength extinction technique. All the results reveal that the PDF-ACO algorithm can be used as an effective technique to investigate the bimodal PSD. Then, the Johnson SB (J-SB) function and the modified beta (M-β) function are employed as general distribution functions to retrieve the bimodal PSDs under the independent model. Finally, the J-SB and M-β functions are applied to recover actual measured aerosol PSDs over Beijing and Shanghai obtained from the Aerosol Robotic Network (AERONET). The numerical simulation and experimental results demonstrate that these two general functions, especially the J-SB function, can be used as versatile distribution functions to retrieve the bimodal aerosol PSD when no a priori information about the PSD is available.
Rapid State Space Modeling Tool for Rectangular Wing Aeroservoelastic Studies
NASA Technical Reports Server (NTRS)
Suh, Peter M.; Conyers, Howard J.; Mavris, Dimitri N.
2015-01-01
This paper introduces a modeling and simulation tool for aeroservoelastic analysis of rectangular wings with trailing-edge control surfaces. The inputs to the code are planform design parameters such as wing span, aspect ratio, and number of control surfaces. Using this information, the generalized forces are computed using the doublet-lattice method. Using Roger's approximation, a rational function approximation is computed. The output, computed in a few seconds, is a state space aeroservoelastic model which can be used for analysis and control design. The tool is fully parameterized with default information so there is little required interaction with the model developer. All parameters can be easily modified if desired. The focus of this paper is on tool presentation, verification, and validation. These processes are carried out in stages throughout the paper. The rational function approximation is verified against computed generalized forces for a plate model. A model composed of finite element plates is compared to a modal analysis from commercial software and an independently conducted experimental ground vibration test analysis. Aeroservoelastic analysis is the ultimate goal of this tool, therefore, the flutter speed and frequency for a clamped plate are computed using damping-versus-velocity and frequency-versus-velocity analysis. The computational results are compared to a previously published computational analysis and wind-tunnel results for the same structure. A case study of a generic wing model with a single control surface is presented. Verification of the state space model is presented in comparison to damping-versus-velocity and frequency-versus-velocity analysis, including the analysis of the model in response to a 1-cos gust.
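A scalar sketch of the rational-function-approximation step (Roger's form) fitted by linear least squares over tabulated reduced frequencies; the "tabulated" data and lag roots below are synthetic placeholders, not output of the doublet-lattice code:

# Roger's form for a single generalized-force entry,
#   Q(ik) ~ A0 + A1*(ik) + A2*(ik)^2 + sum_j A_{2+j} * ik/(ik + b_j),
# with real coefficients recovered by stacking real and imaginary parts.
import numpy as np

k = np.linspace(0.05, 1.0, 12)          # reduced frequencies
ik = 1j * k
lags = np.array([0.2, 0.6])             # assumed lag roots b_j

# Synthetic stand-in for tabulated generalized-force data
Q_tab = 1.0 + 0.5 * ik - 0.3 * ik**2 + 0.8 * ik / (ik + 0.2) + 0.4 * ik / (ik + 0.6)

basis = np.column_stack([np.ones_like(ik), ik, ik**2] + [ik / (ik + b) for b in lags])
A = np.vstack([basis.real, basis.imag])
rhs = np.concatenate([Q_tab.real, Q_tab.imag])
coeffs, *_ = np.linalg.lstsq(A, rhs, rcond=None)
print("fitted coefficients:", np.round(coeffs, 3))   # recovers the stand-in values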
Technical Note: Approximate Bayesian parameterization of a process-based tropical forest model
NASA Astrophysics Data System (ADS)
Hartig, F.; Dislich, C.; Wiegand, T.; Huth, A.
2014-02-01
Inverse parameter estimation of process-based models is a long-standing problem in many scientific disciplines. A key question for inverse parameter estimation is how to define the metric that quantifies how well model predictions fit to the data. This metric can be expressed by general cost or objective functions, but statistical inversion methods require a particular metric, the probability of observing the data given the model parameters, known as the likelihood. For technical and computational reasons, likelihoods for process-based stochastic models are usually based on general assumptions about variability in the observed data, and not on the stochasticity generated by the model. Only in recent years have new methods become available that allow the generation of likelihoods directly from stochastic simulations. Previous applications of these approximate Bayesian methods have concentrated on relatively simple models. Here, we report on the application of a simulation-based likelihood approximation for FORMIND, a parameter-rich individual-based model of tropical forest dynamics. We show that approximate Bayesian inference, based on a parametric likelihood approximation placed in a conventional Markov chain Monte Carlo (MCMC) sampler, performs well in retrieving known parameter values from virtual inventory data generated by the forest model. We analyze the results of the parameter estimation, examine its sensitivity to the choice and aggregation of model outputs and observed data (summary statistics), and demonstrate the application of this method by fitting the FORMIND model to field data from an Ecuadorian tropical forest. Finally, we discuss how this approach differs from approximate Bayesian computation (ABC), another method commonly used to generate simulation-based likelihood approximations. Our results demonstrate that simulation-based inference, which offers considerable conceptual advantages over more traditional methods for inverse parameter estimation, can be successfully applied to process-based models of high complexity. The methodology is particularly suitable for heterogeneous and complex data structures and can easily be adjusted to other model types, including most stochastic population and individual-based models. Our study therefore provides a blueprint for a fairly general approach to parameter estimation of stochastic process-based models.
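A toy sketch of the parametric likelihood approximation placed in a Metropolis sampler: summary statistics from repeated stochastic simulations are fitted with a Gaussian, whose density at the observed summaries plays the role of the likelihood. The one-parameter stochastic model below is only a stand-in for a simulator like FORMIND.

# Synthetic (parametric) likelihood inside a plain Metropolis random walk;
# model, summaries, proposal width and chain length are all illustrative.
import numpy as np

rng = np.random.default_rng(3)

def simulate(theta, n=200):
    """Stochastic stand-in model: returns one simulated data set."""
    return rng.normal(theta, 1.0 + 0.5 * abs(theta), size=n)

def summaries(data):
    return np.array([data.mean(), data.std()])

def synthetic_loglik(theta, s_obs, n_rep=40):
    S = np.array([summaries(simulate(theta)) for _ in range(n_rep)])
    mu, cov = S.mean(axis=0), np.cov(S.T) + 1e-9 * np.eye(2)
    diff = s_obs - mu
    return -0.5 * (diff @ np.linalg.solve(cov, diff) + np.log(np.linalg.det(cov)))

s_obs = summaries(simulate(1.5))              # "observed" summaries (truth = 1.5)

theta, ll = 0.0, synthetic_loglik(0.0, s_obs)
chain = []
for _ in range(2000):
    prop = theta + rng.normal(0.0, 0.3)
    ll_prop = synthetic_loglik(prop, s_obs)
    if np.log(rng.uniform()) < ll_prop - ll:   # flat prior assumed
        theta, ll = prop, ll_prop
    chain.append(theta)

print("posterior mean ~", np.mean(chain[500:]))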
Technical Note: Approximate Bayesian parameterization of a complex tropical forest model
NASA Astrophysics Data System (ADS)
Hartig, F.; Dislich, C.; Wiegand, T.; Huth, A.
2013-08-01
Inverse parameter estimation of process-based models is a long-standing problem in ecology and evolution. A key problem of inverse parameter estimation is to define a metric that quantifies how well model predictions fit to the data. Such a metric can be expressed by general cost or objective functions, but statistical inversion approaches are based on a particular metric, the probability of observing the data given the model, known as the likelihood. Deriving likelihoods for dynamic models requires making assumptions about the probability for observations to deviate from mean model predictions. For technical reasons, these assumptions are usually derived without explicit consideration of the processes in the simulation. Only in recent years have new methods become available that allow generating likelihoods directly from stochastic simulations. Previous applications of these approximate Bayesian methods have concentrated on relatively simple models. Here, we report on the application of a simulation-based likelihood approximation for FORMIND, a parameter-rich individual-based model of tropical forest dynamics. We show that approximate Bayesian inference, based on a parametric likelihood approximation placed in a conventional MCMC, performs well in retrieving known parameter values from virtual field data generated by the forest model. We analyze the results of the parameter estimation, examine the sensitivity towards the choice and aggregation of model outputs and observed data (summary statistics), and show results from using this method to fit the FORMIND model to field data from an Ecuadorian tropical forest. Finally, we discuss differences of this approach to Approximate Bayesian Computing (ABC), another commonly used method to generate simulation-based likelihood approximations. Our results demonstrate that simulation-based inference, which offers considerable conceptual advantages over more traditional methods for inverse parameter estimation, can successfully be applied to process-based models of high complexity. The methodology is particularly suited to heterogeneous and complex data structures and can easily be adjusted to other model types, including most stochastic population and individual-based models. Our study therefore provides a blueprint for a fairly general approach to parameter estimation of stochastic process-based models in ecology and evolution.
Constitutive Modelling of Resins in the Stiffness Domain
NASA Astrophysics Data System (ADS)
Klasztorny, M.
2004-09-01
An analytic method for inverting the constitutive compliance equations of viscoelasticity for resins is developed. These equations describe the HWKK/H rheological model, which makes it possible to simulate, with good accuracy, short-, medium- and long-term viscoelastic processes in epoxy and polyester resins. These processes are of first-rank reversible isothermal type. The time histories of deviatoric stresses are simulated with three independent strain history functions of fractional and normal exponential types. The stiffness equations are described by two elastic and six viscoelastic constants with a clear physical meaning (three long-term relaxation coefficients and three relaxation times). The time histories of axiatoric stresses are simulated as perfectly elastic. The inversion method utilizes approximate constitutive stiffness equations of viscoelasticity for the HWKK/H model. The constitutive compliance equations for the model are a basis for determining the exact complex shear stiffness, whereas the approximate constitutive stiffness equations are used for determining the approximate complex shear stiffness. The viscoelastic constants in the stiffness domain are derived by equating the exact and approximate complex shear stiffnesses. The viscoelastic constants are obtained for Epidian 53 epoxy and Polimal 109 polyester resins. The accuracy of the approximate constitutive stiffness equations is assessed by comparing the approximate and exact complex shear stiffnesses. The constitutive stiffness equations for the HWKK/H model are presented in uncoupled (shear/bulk) and coupled forms. Formulae for converting the constants of shear viscoelasticity into the constants of coupled viscoelasticity are given as well.
The Surface Density Distribution in the Solar Nebula
NASA Technical Reports Server (NTRS)
Davis, Sanford S.
2004-01-01
The commonly used minimum mass power law representation of the pre-solar nebula is reanalyzed using a new cumulative-mass-model. This model predicts a smoother surface density approximation compared with methods based on direct computation of surface density. The density is quantified using two independent analytical formulations. First, a best-fit transcendental function is applied directly to the basic planetary data. Next a solution to the time-dependent disk evolution equation is parametrically adapted to the solar nebula data. The latter model is shown to be a good approximation to the finite-size early Solar Nebula, and by extension to other extra solar protoplanetary disks.
Semi-empirical and phenomenological instrument functions for the scanning tunneling microscope
NASA Astrophysics Data System (ADS)
Feuchtwang, T. E.; Cutler, P. H.; Notea, A.
1988-08-01
Recent progress in the development of a convenient algorithm for the determination of a quantitative local density of states (LDOS) of the sample, from data measured in the STM, is reviewed. It is argued that the sample LDOS strikes a good balance between the information content of a surface characteristic and the effort required to obtain it experimentally. Hence, procedures to determine the sample LDOS as directly and as tip-model-independently as possible are emphasized. The solution of the STM's "inverse" problem in terms of novel versions of the instrument (or Green) function technique is considered in preference to the well-known, more direct solutions. Two types of instrument functions are considered: approximations of the basic tip-instrument function obtained from the transfer Hamiltonian theory of the STM-STS, and phenomenological instrument functions devised as a systematic scheme for semi-empirical first-order corrections of "ideal" models. The instrument function, in this case, describes the corrections as the response of an independent component of the measuring apparatus inserted between the "ideal" instrument and the measured data. This linear response theory of measurement is reviewed and applied. A procedure for the estimation of the consistency of the model and the systematic errors due to the use of an approximate instrument function is presented. The independence of the instrument function techniques from explicit microscopic models of the tip is noted. The need for semi-empirical, as opposed to strictly empirical or analytical, determination of the instrument function is discussed. The extension of the theory to the scanning tunneling spectrometer is noted, as well as its use in a theory of resolution.
Bypassing the malfunction junction in warm dense matter simulations
NASA Astrophysics Data System (ADS)
Cangi, Attila; Pribram-Jones, Aurora
2015-03-01
Simulation of warm dense matter requires computational methods that capture both quantum and classical behavior efficiently under high-temperature and high-density conditions. The state-of-the-art approach to model electrons and ions under those conditions is density functional theory molecular dynamics, but this method's computational cost skyrockets as temperatures and densities increase. We propose finite-temperature potential functional theory as an in-principle-exact alternative that suffers no such drawback. In analogy to the zero-temperature theory developed previously, we derive an orbital-free free energy approximation through a coupling-constant formalism. Our density approximation and its associated free energy approximation demonstrate the method's accuracy and efficiency. A.C. has been partially supported by NSF Grant CHE-1112442. A.P.J. is supported by DOE Grant DE-FG02-97ER25308.
Buda, I. G.; Lane, C.; Barbiellini, B.; ...
2017-03-23
We discuss self-consistently obtained ground-state electronic properties of monolayers of graphene and a number of 'beyond graphene' compounds, including films of transition-metal dichalcogenides (TMDs), using the recently proposed strongly constrained and appropriately normed (SCAN) meta-generalized gradient approximation (meta-GGA) to the density functional theory. The SCAN meta-GGA results are compared with those based on the local density approximation (LDA) as well as the generalized gradient approximation (GGA). As expected, the GGA yields expanded lattices and softened bonds in relation to the LDA, but the SCAN meta-GGA systematically improves the agreement with experiment. Our study suggests the efficacy of the SCAN functional for accurate modeling of electronic structures of layered materials in high-throughput calculations more generally.
Classical Testing in Functional Linear Models.
Kong, Dehan; Staicu, Ana-Maria; Maity, Arnab
2016-01-01
We extend four tests common in classical regression - Wald, score, likelihood ratio and F tests - to functional linear regression, for testing the null hypothesis that there is no association between a scalar response and a functional covariate. Using functional principal component analysis, we re-express the functional linear model as a standard linear model, where the effect of the functional covariate can be approximated by a finite linear combination of the functional principal component scores. In this setting, we consider the application of the four traditional tests. The proposed testing procedures are investigated theoretically for densely observed functional covariates when the number of principal components diverges. Using the theoretical distribution of the tests under the alternative hypothesis, we develop a procedure for sample size calculation in the context of functional linear regression. The four tests are further compared numerically for both densely and sparsely observed noisy functional data in simulation experiments and using two real data applications.
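The FPCA-based reduction described above can be illustrated compactly. The following sketch is not the authors' code: the simulated curves, grid size, and number of retained components K are illustrative assumptions. It projects densely observed functional covariates onto their leading principal components and applies an ordinary F-test to the resulting standard linear model.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, m, K = 200, 101, 3                       # curves, grid points, retained FPCs
t = np.linspace(0, 1, m)

# Simulated densely observed functional covariates X_i(t) and scalar responses y_i
X = np.array([np.cos(2 * np.pi * rng.uniform(1, 3) * t)
              + 0.1 * rng.standard_normal(m) for _ in range(n)])
y = X @ np.sin(np.pi * t) / m + 0.2 * rng.standard_normal(n)

# FPCA via SVD of the centered data matrix; scores are the projections
Xc = X - X.mean(axis=0)
U, s, _ = np.linalg.svd(Xc, full_matrices=False)
scores = U[:, :K] * s[:K]                   # n x K matrix of FPC scores

# Standard linear model y ~ intercept + scores, then F-test of
# H0: all score coefficients are zero (no association)
Z = np.column_stack([np.ones(n), scores])
coef, *_ = np.linalg.lstsq(Z, y, rcond=None)
rss_full = np.sum((y - Z @ coef) ** 2)
rss_null = np.sum((y - y.mean()) ** 2)
F = ((rss_null - rss_full) / K) / (rss_full / (n - K - 1))
print("F =", F, " p =", stats.f.sf(F, K, n - K - 1))
```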
Geometrical-optics approximation of forward scattering by gradient-index spheres.
Li, Xiangzhen; Han, Xiang'e; Li, Renxian; Jiang, Huifen
2007-08-01
By means of geometrical optics we present an approximation method to accelerate the computation of the scattering intensity distribution within a forward angular range (0-60 degrees) for gradient-index spheres illuminated by a plane wave. The incident angle of the reflected light is determined by the scattering angle, thus improving the approximation accuracy. The scattering angle and the optical path length are numerically integrated by a general-purpose integrator. With some special index models, the scattering angle and the optical path length can be expressed by a single function and the calculation is faster. This method proves effective for transparent particles with size parameters greater than 50. It fails to give good approximation results at scattering angles whose refracted rays are in the backward direction. For different index models, the geometrical-optics approximation is effective only for forward angles, typically those less than 60 degrees, or when the refractive-index difference of a particle is less than a certain value.
Demidenko, Eugene
2017-09-01
The exact density distribution of the nonlinear least squares estimator in the one-parameter regression model is derived in closed form and expressed through the cumulative distribution function of the standard normal variable. Several proposals to generalize this result are discussed. The exact density is extended to the estimating equation (EE) approach and the nonlinear regression with an arbitrary number of linear parameters and one intrinsically nonlinear parameter. For a very special nonlinear regression model, the derived density coincides with the distribution of the ratio of two normally distributed random variables previously obtained by Fieller (1932), unlike approximations previously suggested by other authors. Approximations to the density of the EE estimators are discussed in the multivariate case. Numerical complications associated with nonlinear least squares are illustrated, such as nonexistence and/or multiple solutions, as major factors contributing to poor density approximation. The nonlinear Markov-Gauss theorem is formulated based on the near exact EE density approximation.
Relaxation approximations to second-order traffic flow models by high-resolution schemes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nikolos, I.K.; Delis, A.I.; Papageorgiou, M.
2015-03-10
A relaxation-type approximation of second-order non-equilibrium traffic models, written in conservation or balance law form, is considered. Using the relaxation approximation, the nonlinear equations are transformed to a semi-linear diagonalizable problem with linear characteristic variables and stiff source terms, with the attractive feature that neither Riemann solvers nor characteristic decompositions are needed. In particular, it is only necessary to provide the flux and source term functions and an estimate of the characteristic speeds. To discretize the resulting relaxation system, high-resolution reconstructions in space are considered. Emphasis is given to a fifth-order WENO scheme and its performance. The computations reported demonstrate the simplicity and versatility of relaxation schemes as numerical solvers.
Application of finite element approach to transonic flow problems
NASA Technical Reports Server (NTRS)
Hafez, M. M.; Murman, E. M.; Wellford, L. C., Jr.
1976-01-01
A variational finite element model for transonic small disturbance calculations is described. Different strategies are adopted in the subsonic and supersonic regions, and blending elements are introduced between different regions. In the supersonic region, no upstream effect is allowed. If rectangular elements with linear shape functions are used, the model is similar to Murman's finite difference operators. Higher order shape functions, nonrectangular elements, and discontinuous approximation of shock waves are also discussed.
Diffusion of Super-Gaussian Profiles
ERIC Educational Resources Information Center
Rosenberg, C.-J.; Anderson, D.; Desaix, M.; Johannisson, P.; Lisak, M.
2007-01-01
The present analysis describes an analytically simple and systematic approximation procedure for modelling the free diffusive spreading of initially super-Gaussian profiles. The approach is based on a self-similar ansatz for the evolution of the diffusion profile, and the parameter functions involved in the modelling are determined by suitable…
Padé approximant for normal stress differences in large-amplitude oscillatory shear flow
NASA Astrophysics Data System (ADS)
Poungthong, P.; Saengow, C.; Giacomin, A. J.; Kolitawong, C.; Merger, D.; Wilhelm, M.
2018-04-01
Analytical solutions for the normal stress differences in large-amplitude oscillatory shear flow (LAOS), for continuum or molecular models, normally take the inexact form of the first few terms of a series expansion in the shear rate amplitude. Here, we improve the accuracy of these truncated expansions by replacing them with rational functions called Padé approximants. The recent advent of exact solutions in LAOS presents an opportunity to identify accurate and useful Padé approximants. For this identification, we replace the truncated expansion for the corotational Jeffreys fluid with its Padé approximants for the normal stress differences. We uncover the most accurate and useful approximant, the [3,4] approximant, and then test its accuracy against the exact solution [C. Saengow and A. J. Giacomin, "Normal stress differences from Oldroyd 8-constant framework: Exact analytical solution for large-amplitude oscillatory shear flow," Phys. Fluids 29, 121601 (2017)]. We use Ewoldt grids to show the stunning accuracy of our [3,4] approximant in LAOS. We quantify this accuracy with an objective function and then map it onto the Pipkin space. Our two applications illustrate how to use our new approximant reliably. For this, we use the Spriggs relations to generalize our best approximant to multimode, and then, we compare with measurements on molten high-density polyethylene and on dissolved polyisobutylene in isobutylene oligomer.
The three-point function as a probe of models for large-scale structure
NASA Technical Reports Server (NTRS)
Frieman, Joshua A.; Gaztanaga, Enrique
1993-01-01
The consequences of models of structure formation for higher-order (n-point) galaxy correlation functions in the mildly non-linear regime are analyzed. Several variations of the standard Omega = 1 cold dark matter model with scale-invariant primordial perturbations were recently introduced to obtain more power on large scales, R_p ≈ 20 h^-1 Mpc, e.g., low-matter-density (non-zero cosmological constant) models, 'tilted' primordial spectra, and scenarios with a mixture of cold and hot dark matter. They also include models with an effective scale-dependent bias, such as the cooperative galaxy formation scenario of Bower et al. It is shown that higher-order (n-point) galaxy correlation functions can provide a useful test of such models and can discriminate between models with true large-scale power in the density field and those where the galaxy power arises from scale-dependent bias: a bias with rapid scale-dependence leads to a dramatic decrease of the hierarchical amplitudes Q_J at large scales, r ≳ R_p. Current observational constraints on the three-point amplitudes Q_3 and S_3 can place limits on the bias parameter(s) and appear to disfavor, but not yet rule out, the hypothesis that scale-dependent bias is responsible for the extra power observed on large scales.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Säkkinen, Niko; Peng, Yang; Fritz-Haber-Institut der Max-Planck-Gesellschaft, Faradayweg 4-6, 14195 Berlin-Dahlem
2015-12-21
We present a Kadanoff-Baym formalism to study time-dependent phenomena for systems of interacting electrons and phonons in the framework of many-body perturbation theory. The formalism takes correctly into account effects of the initial preparation of an equilibrium state and allows for an explicit time-dependence of both the electronic and phononic degrees of freedom. The method is applied to investigate the charge neutral and non-neutral excitation spectra of a homogeneous, two-site, two-electron Holstein model. This is an extension of a previous study of the ground state properties in the Hartree (H), partially self-consistent Born (Gd) and fully self-consistent Born (GD) approximations published in Säkkinen et al. [J. Chem. Phys. 143, 234101 (2015)]. Here, the homogeneous ground state solution is shown to become unstable for a sufficiently strong interaction while a symmetry-broken ground state solution is shown to be stable in the Hartree approximation. Signatures of this instability are observed for the partially self-consistent Born approximation but are not found for the fully self-consistent Born approximation. By understanding the stability properties, we are able to study the linear response regime by calculating the density-density response function by time-propagation. This amounts to a solution of the Bethe-Salpeter equation with a sophisticated kernel. The results indicate that none of the approximations is able to describe the response function during or beyond the bipolaronic crossover for the parameters investigated. Overall, we provide an extensive discussion on when the approximations are valid and how they fail to describe the studied exact properties of the chosen model system.
A multilayer shallow water system for polydisperse sedimentation
NASA Astrophysics Data System (ADS)
Fernández-Nieto, E. D.; Koné, E. H.; Morales de Luna, T.; Bürger, R.
2013-04-01
This work considers the flow of a fluid containing one disperse substance consisting of small particles that belong to different species differing in size and density. The flow is modelled by combining a multilayer shallow water approach with a polydisperse sedimentation process. This technique allows one to keep information on the vertical distribution of the solid particles in the mixture, and thereby to model the segregation of the particle species from each other, and from the fluid, taking place in the vertical direction of the gravity body force only. This polydisperse sedimentation process is described by the well-known Masliyah-Lockett-Bassoon (MLB) velocity functions. The resulting multilayer sedimentation-flow model can be written as a hyperbolic system with nonconservative products. The definitions of the nonconservative products are related to the hydrostatic pressure and to the mass and momentum hydrodynamic transfer terms between the layers. For the numerical discretization a strategy of two steps is proposed, where the first one is also divided into two parts. In the first step, instead of approximating the complete model, we approximate a reduced model with a smaller number of unknowns. Then, taking advantage of the fact that the concentrations are passive scalars in the system, we approximate the concentrations of the different species by an upwind scheme related to the numerical flux of the total concentration. In the second step, the effect of the transference terms defined in terms of the MLB model is introduced. These transfer terms are approximated by using a numerical flux function used to discretize the 1D vertical polydisperse model, see Bürger et al. [ R. Bürger, A. García, K.H. Karlsen, J.D. Towers, A family of numerical schemes for kinematic flows with discontinuous flux, J. Eng. Math. 60 (2008) 387-425]. Finally, some numerical examples are presented. Numerical results suggest that the multilayer shallow water model could be adequate in situations where the settling takes place from a suspension that undergoes horizontal movement.
NASA Astrophysics Data System (ADS)
Mukhopadhyay, Saumyadip; Abraham, John
2012-07-01
The unsteady flamelet progress variable (UFPV) model has been proposed by Pitsch and Ihme ["An unsteady/flamelet progress variable method for LES of nonpremixed turbulent combustion," AIAA Paper No. 2005-557, 2005] for modeling the averaged/filtered chemistry source terms in Reynolds averaged simulations and large eddy simulations of reacting non-premixed combustion. In the UFPV model, a look-up table of source terms is generated as a function of mixture fraction Z, scalar dissipation rate χ, and progress variable C by solving the unsteady flamelet equations. The assumption is that the unsteady flamelet represents the evolution of the reacting mixing layer in the non-premixed flame. We assess the accuracy of the model in predicting autoignition and flame development in compositionally stratified n-heptane/air mixtures using direct numerical simulations (DNS). The focus in this work is primarily on the assessment of accuracy of the probability density functions (PDFs) employed for obtaining averaged source terms. The performance of commonly employed presumed functions, such as the Dirac delta distribution function, the β distribution function, and the statistically most likely distribution (SMLD) approach, in approximating the shapes of the PDFs of the reactive and the conserved scalars is evaluated. For unimodal distributions, it is observed that functions that need two-moment information, e.g., the β distribution function and the SMLD approach with two-moment closure, are able to reasonably approximate the actual PDF. As the distribution becomes multimodal, higher moment information is required. Differences are observed between the ignition trends obtained from DNS and those predicted by the look-up table, especially for smaller gradients where the flamelet assumption becomes less applicable. The formulation assumes that the shape of the χ(Z) profile can be modeled by an error function which remains unchanged in the presence of heat release. We show that this assumption is not accurate.
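As one concrete illustration of the two-moment presumed-PDF idea discussed above, the sketch below (not the authors' code; the source-term shape, mean, and variance are hypothetical) matches a beta distribution to a given mean and variance of the mixture fraction and averages a source term against it by quadrature.

```python
import numpy as np
from scipy import stats, integrate

def beta_params(mean, var):
    """Match a Beta(a, b) distribution to a given mean and variance on [0, 1]."""
    k = mean * (1.0 - mean) / var - 1.0   # requires var < mean * (1 - mean)
    return mean * k, (1.0 - mean) * k

def averaged_source(omega, mean, var):
    """Average a source term omega(Z) over the presumed beta PDF of Z."""
    a, b = beta_params(mean, var)
    pdf = stats.beta(a, b).pdf
    val, _ = integrate.quad(lambda z: omega(z) * pdf(z), 0.0, 1.0)
    return val

# Illustrative source term peaked near a stoichiometric mixture fraction of 0.3
omega = lambda z: np.exp(-((z - 0.3) / 0.05) ** 2)
print(averaged_source(omega, mean=0.35, var=0.01))
```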
Prediction of Unsteady Aerodynamic Coefficients at High Angles of Attack
NASA Technical Reports Server (NTRS)
Pamadi, Bandu N.; Murphy, Patrick C.; Klein, Vladislav; Brandon, Jay M.
2001-01-01
The nonlinear indicial response method is used to model the unsteady aerodynamic coefficients in the low speed longitudinal oscillatory wind tunnel test data of the 0.1 scale model of the F-16XL aircraft. Exponential functions are used to approximate the deficiency function in the indicial response. Using one set of oscillatory wind tunnel data and parameter identification method, the unknown parameters in the exponential functions are estimated. The genetic algorithm is used as a least square minimizing algorithm. The assumed model structures and parameter estimates are validated by comparing the predictions with other sets of available oscillatory wind tunnel test data.
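A minimal sketch of the parameter-estimation step described above, under stated assumptions: the deficiency function is approximated by a sum of two exponentials and fitted by least squares, with SciPy's differential evolution standing in for the genetic-algorithm minimizer used by the authors; the synthetic response data are purely illustrative.

```python
import numpy as np
from scipy.optimize import differential_evolution

t = np.linspace(0.0, 10.0, 50)
# Synthetic "measured" deficiency-function data (illustrative only)
data = 0.6 * np.exp(-0.8 * t) + 0.3 * np.exp(-0.1 * t)
data += 0.01 * np.random.default_rng(1).standard_normal(t.size)

def model(p, t):
    a1, b1, a2, b2 = p
    return a1 * np.exp(-b1 * t) + a2 * np.exp(-b2 * t)

def sse(p):
    # Least-squares objective minimized by the global optimizer
    return np.sum((data - model(p, t)) ** 2)

bounds = [(0, 2), (0, 5), (0, 2), (0, 5)]
result = differential_evolution(sse, bounds, seed=1)
print("estimated parameters:", result.x)
```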
Development of confidence limits by pivotal functions for estimating software reliability
NASA Technical Reports Server (NTRS)
Dotson, Kelly J.
1987-01-01
The utility of pivotal functions is established for assessing software reliability. Based on the Moranda geometric de-eutrophication model of reliability growth, confidence limits for attained reliability and prediction limits for the time to the next failure are derived using a pivotal function approach. Asymptotic approximations to the confidence and prediction limits are considered and are shown to be inadequate in cases where only a few bugs are found in the software. Departures from the assumed exponentially distributed interfailure times in the model are also investigated. The effect of these departures is discussed relative to restricting the use of the Moranda model.
Kaye, Stephen B
2009-04-01
The aim is to provide a scalar measure of refractive error, based on geometric lens power through principal, orthogonal and oblique meridians, that is not limited to the paraxial and sag height approximations. A function is derived to model sections through the principal meridian of a lens, followed by rotation of the section through orthogonal and oblique meridians. The average focal length is determined using the definition of the average of a function. Average univariate power in the principal meridian (including spherical aberration) can be computed from the average of a function over the angle of incidence as determined by the parameters of the given lens, or adequately computed from an integrated series function. Average power through orthogonal and oblique meridians can be similarly determined using the derived formulae. The widely used computation for measuring refractive error, the spherical equivalent, introduces non-constant approximations, leading to a systematic bias. The proposed equations provide a good univariate representation of average lens power and are not subject to a systematic bias. They are particularly useful for the analysis of aggregate data, for correlating with biological treatment variables, and for developing analyses which require a scalar equivalent representation of refractive power.
Variational Bayesian identification and prediction of stochastic nonlinear dynamic causal models.
Daunizeau, J; Friston, K J; Kiebel, S J
2009-11-01
In this paper, we describe a general variational Bayesian approach for approximate inference on nonlinear stochastic dynamic models. This scheme extends established approximate inference on hidden-states to cover: (i) nonlinear evolution and observation functions, (ii) unknown parameters and (precision) hyperparameters and (iii) model comparison and prediction under uncertainty. Model identification or inversion entails the estimation of the marginal likelihood or evidence of a model. This difficult integration problem can be finessed by optimising a free-energy bound on the evidence using results from variational calculus. This yields a deterministic update scheme that optimises an approximation to the posterior density on the unknown model variables. We derive such a variational Bayesian scheme in the context of nonlinear stochastic dynamic hierarchical models, for both model identification and time-series prediction. The computational complexity of the scheme is comparable to that of an extended Kalman filter, which is critical when inverting high dimensional models or long time-series. Using Monte-Carlo simulations, we assess the estimation efficiency of this variational Bayesian approach using three stochastic variants of chaotic dynamic systems. We also demonstrate the model comparison capabilities of the method, its self-consistency and its predictive power.
A diffusion approximation for ocean wave scatterings by randomly distributed ice floes
NASA Astrophysics Data System (ADS)
Zhao, Xin; Shen, Hayley
2016-11-01
This study presents a continuum approach using a diffusion approximation method to solve the scattering of ocean waves by randomly distributed ice floes. In order to model both strong and weak scattering, the proposed method decomposes the wave action density function into two parts: the transmitted part and the scattered part. For a given wave direction, the transmitted part of the wave action density is defined as the part of wave action density in the same direction before the scattering; and the scattered part is a first order Fourier series approximation for the directional spreading caused by scattering. An additional approximation is also adopted for simplification, in which the net directional redistribution of wave action by a single scatterer is assumed to be the reflected wave action of a normally incident wave into a semi-infinite ice cover. Other required input includes the mean shear modulus, diameter and thickness of ice floes, and the ice concentration. The directional spreading of wave energy from the diffusion approximation is found to be in reasonable agreement with the previous solution using the Boltzmann equation. The diffusion model provides an alternative method to implement wave scattering into an operational wave model.
The two-level model at finite temperature
DOE Office of Scientific and Technical Information (OSTI.GOV)
Goodman, A.L.
1980-07-01
The finite-temperature HFB cranking equations are solved for the two-level model. The pair gap, moment of inertia and internal energy are determined as functions of spin and temperature. Thermal excitations and rotations collaborate to destroy the pair correlations. Raising the temperature eliminates the backbending effect and improves the HFB approximation.
Universality for 1d Random Band Matrices: Sigma-Model Approximation
NASA Astrophysics Data System (ADS)
Shcherbina, Mariya; Shcherbina, Tatyana
2018-02-01
The paper continues the development of the rigorous supersymmetric transfer matrix approach to random band matrices started in (J Stat Phys 164:1233-1260, 2016; Commun Math Phys 351:1009-1044, 2017). We consider random Hermitian block band matrices consisting of W × W random Gaussian blocks (parametrized by j, k ∈ Λ = [1,n]^d ∩ Z^d) with a fixed entry variance J_{jk} = δ_{j,k} W^{-1} + β Δ_{j,k} W^{-2}, β > 0, in each block. Taking the limit W → ∞ with fixed n and β, we derive the sigma-model approximation of the second correlation function, similar to Efetov's. Then, considering the limit β, n → ∞, we prove that in dimension d = 1 the behaviour of the sigma-model approximation in the bulk of the spectrum, for β ≫ n, is determined by the classical Wigner-Dyson statistics.
Survival Bayesian Estimation of Exponential-Gamma Under Linex Loss Function
NASA Astrophysics Data System (ADS)
Rizki, S. W.; Mara, M. N.; Sulistianingsih, E.
2017-06-01
This paper presents a study of censored survival data from cancer patients after treatment, using Bayesian estimation under the Linex loss function for a survival model assumed to follow an exponential distribution. Combining a Gamma prior with the likelihood function produces a Gamma posterior distribution. The posterior distribution is used to find the estimator \hat{λ}_BL by means of the Linex approximation. From \hat{λ}_BL, the estimators of the hazard function \hat{h}_BL and the survival function \hat{S}_BL can be obtained. Finally, we compare the results of Maximum Likelihood Estimation (MLE) and the Linex approximation to find the better method for this observation by identifying the smaller MSE. The results show that the MSEs of the hazard and survival functions under MLE are 2.91728E-07 and 0.000309004, while under Bayesian Linex they are 2.8727E-07 and 0.000304131, respectively. It is concluded that the Bayesian Linex estimator is better than MLE.
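A minimal sketch of the workflow summarized above, assuming standard notation: exponential survival times with rate λ, a Gamma(a0, b0) prior, and the Linex-loss Bayes estimator λ_BL = (α/a) ln(1 + a/β) for a Gamma(α, β) posterior (a standard closed form under Linex loss, not quoted from the paper). The data, prior, and Linex parameter below are illustrative.

```python
import numpy as np

times  = np.array([5.2, 3.1, 8.4, 2.7, 6.0, 9.5])   # observed times (illustrative)
events = np.array([1,   1,   0,   1,   1,   0])     # 1 = event, 0 = censored
a0, b0 = 1.0, 1.0                                    # Gamma prior (shape, rate)
a_linex = 0.5                                        # Linex asymmetry parameter

# Posterior for an exponential model with right censoring: Gamma(alpha, beta)
alpha = a0 + events.sum()
beta  = b0 + times.sum()

lam_mle = events.sum() / times.sum()                    # maximum likelihood estimate
lam_bl  = (alpha / a_linex) * np.log1p(a_linex / beta)  # Linex-loss Bayes estimate

t_grid = np.linspace(0.0, 15.0, 4)
print("MLE   survival S(t):", np.exp(-lam_mle * t_grid))
print("Linex survival S(t):", np.exp(-lam_bl * t_grid))
```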
NASA Astrophysics Data System (ADS)
Mo, Shaoxing; Lu, Dan; Shi, Xiaoqing; Zhang, Guannan; Ye, Ming; Wu, Jianfeng; Wu, Jichun
2017-12-01
Global sensitivity analysis (GSA) and uncertainty quantification (UQ) for groundwater modeling are challenging because of the model complexity and significant computational requirements. To reduce the massive computational cost, a cheap-to-evaluate surrogate model is usually constructed to approximate and replace the expensive groundwater models in the GSA and UQ. Constructing an accurate surrogate requires actual model simulations on a number of parameter samples. Thus, a robust experimental design strategy is desired to locate informative samples so as to reduce the computational cost in surrogate construction and consequently to improve the efficiency in the GSA and UQ. In this study, we develop a Taylor expansion-based adaptive design (TEAD) that aims to build an accurate global surrogate model with a small training sample size. TEAD defines a novel hybrid score function to search informative samples, and a robust stopping criterion to terminate the sample search that guarantees that the resulting approximation errors satisfy the desired accuracy. The good performance of TEAD in building global surrogate models is demonstrated on seven analytical functions with different dimensionality and complexity in comparison to two widely used experimental design methods. The application of the TEAD-based surrogate method in two groundwater models shows that the TEAD design can effectively improve the computational efficiency of GSA and UQ for groundwater modeling.
NASA Astrophysics Data System (ADS)
Korelin, Ivan A.; Porshnev, Sergey V.
2018-01-01
The paper demonstrates the possibility of calculating the characteristics of the flow of visitors to venues hosting mass events as they pass through checkpoints. The mathematical model is based on a non-stationary queuing system (NQS) in which the dependence of the request input rate on time is described by a function. This function was chosen so that its properties resemble the real dependence of the rate of visitor arrival at the stadium for football matches. A piecewise-constant approximation of the function is used when performing statistical modeling of the NQS. The authors calculated the dependence of the queue length and of the waiting time for service (time in queue) on time for different laws. The time required to serve the entire queue and the number of visitors entering the stadium by the beginning of the match were also calculated. We found the dependence of macroscopic quantitative characteristics of the NQS on the number of averaging sections of the input rate.
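The piecewise-constant treatment of the arrival rate lends itself to a short simulation sketch. The one below is illustrative only (the segment boundaries, rates, number of checkpoints, and service time are assumptions, not the paper's data): arrivals follow a piecewise-constant non-homogeneous Poisson process, pass through c parallel exponential servers, and per-visitor waiting times are recorded.

```python
import heapq
import numpy as np

rng = np.random.default_rng(0)
edges = np.array([0, 30, 60, 90, 120])        # minutes after gates open
rates = np.array([2.0, 8.0, 20.0, 6.0])       # arrivals per minute, per segment
c, mu = 10, 0.25                              # servers, mean service time (min)

# Piecewise-constant non-homogeneous Poisson arrivals
arrivals = []
for lo, hi, lam in zip(edges[:-1], edges[1:], rates):
    n = rng.poisson(lam * (hi - lo))
    arrivals.extend(rng.uniform(lo, hi, n))
arrivals = np.sort(arrivals)

# c-server FIFO queue: track when each server next becomes free
free_at = [0.0] * c
heapq.heapify(free_at)
waits = []
for a in arrivals:
    earliest = heapq.heappop(free_at)
    start = max(a, earliest)
    waits.append(start - a)
    heapq.heappush(free_at, start + rng.exponential(mu))

print(f"{len(arrivals)} visitors, mean wait {np.mean(waits):.2f} min, "
      f"max wait {np.max(waits):.2f} min")
```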
NASA Technical Reports Server (NTRS)
El-Alaoui, M.; Ashour-Abdalla, M.; Raeder, J.; Peroomian, V.; Frank, L. A.; Paterson, W. R.; Bosqued, J. M.
1998-01-01
On February 9, 1995, the Comprehensive Plasma Instrumentation (CPI) on the Geotail spacecraft observed a complex, structured ion distribution function near the magnetotail midplane at x ≈ -30 R_E. On this same day the Wind spacecraft observed a quiet solar wind and an interplanetary magnetic field (IMF) that was northward for more than five hours, with an IMF B_y component whose magnitude was comparable to that of the IMF B_z component. In this study, we determined the sources of the ions in this distribution function by following approximately 90,000 ion trajectories backward in time, using the time-dependent electric and magnetic fields obtained from a global MHD simulation. The Wind observations were used as input for the MHD model. The ion distribution function observed by Geotail at 1347 UT was found to consist primarily of particles from the dawn-side low-latitude boundary layer (LLBL) and from the dusk-side LLBL; fewer than 2% of the particles originated in the ionosphere.
Phonon and magnetic structure in δ-plutonium from density-functional theory
Söderlind, Per; Zhou, F.; Landa, A.; ...
2015-10-30
We present phonon properties of plutonium metal obtained from a combination of density-functional-theory (DFT) electronic structure and the recently developed compressive sensing lattice dynamics (CSLD). The CSLD model is here trained on DFT total energies of several hundreds of quasi-random atomic configurations for best possible accuracy of the phonon properties. The calculated phonon dispersions compare better with experiment than earlier results obtained from dynamical mean-field theory. The density-functional model of the electronic structure consists of disordered magnetic moments with all relativistic effects and explicit orbital-orbital correlations. The magnetic disorder is approximated in two ways: (i) a special quasi-random structure and (ii) the disordered-local-moment (DLM) method within the coherent potential approximation. Magnetism in plutonium has been debated intensely; however, the present magnetic approach for plutonium is validated by the close agreement between the predicted magnetic form factor and that of recent neutron-scattering experiments.
NASA Technical Reports Server (NTRS)
Tal-Ezer, Hillel
1987-01-01
During the process of solving a mathematical model numerically, there is often a need to operate on a vector v by an operator which can be expressed as f(A), where A is an N×N matrix (e.g., exp(A), sin(A), A^{-1}). Except for very simple matrices, it is impractical to construct the matrix f(A) explicitly; usually an approximation to it is used. In the present research, an algorithm is developed which uses a polynomial approximation to f(A). The task is reduced to the problem of approximating f(z) by a polynomial in z, where z belongs to a domain D in the complex plane which includes all the eigenvalues of A. This approximation problem is approached by interpolating the function f(z) at a certain set of points known to have some maximal properties; the approximation thus achieved is nearly best. Implementation of the algorithm for some practical problems is described. Since the solution of a linear system Ax = b is x = A^{-1} b, an iterative solution of it can be regarded as a polynomial approximation to f(A) = A^{-1}. Implementing the algorithm in this case is also described.
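A minimal sketch of the general idea, not Tal-Ezer's algorithm itself: for a symmetric matrix A, f(A)v is approximated by a Chebyshev interpolant of f on an interval containing the spectrum, applied through the three-term recurrence so that only matrix-vector products are needed. The choice f = exp, the interpolation degree, and the small random test matrix are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import expm

def cheb_matfun_apply(f, A, v, deg=30):
    """Approximate f(A) @ v via degree-`deg` Chebyshev interpolation (A symmetric)."""
    lam = np.linalg.eigvalsh(A)              # spectrum bounds (cheap here; use
    a, b = lam.min(), lam.max()              # Lanczos estimates in practice)
    # Chebyshev coefficients of f mapped from [a, b] to [-1, 1]
    k = np.arange(deg + 1)
    theta = np.pi * (k + 0.5) / (deg + 1)
    g = f(0.5 * (b - a) * np.cos(theta) + 0.5 * (b + a))
    c = 2.0 / (deg + 1) * np.cos(np.outer(k, theta)) @ g
    c[0] *= 0.5
    # Apply sum_k c_k T_k(B) v with B = (2A - (a+b)I) / (b-a) via the recurrence
    B = (2.0 * A - (a + b) * np.eye(len(A))) / (b - a)
    t_prev, t_curr = v, B @ v
    y = c[0] * t_prev + c[1] * t_curr
    for ck in c[2:]:
        t_prev, t_curr = t_curr, 2.0 * B @ t_curr - t_prev
        y += ck * t_curr
    return y

rng = np.random.default_rng(0)
M = rng.standard_normal((50, 50))
A = 0.5 * (M + M.T)                          # symmetric test matrix
v = rng.standard_normal(50)
approx = cheb_matfun_apply(np.exp, A, v, deg=30)
ref = expm(A) @ v                            # dense reference for comparison
print("relative error:", np.linalg.norm(approx - ref) / np.linalg.norm(ref))
```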
Theory of dissociative tunneling ionization
NASA Astrophysics Data System (ADS)
Svensmark, Jens; Tolstikhin, Oleg I.; Madsen, Lars Bojer
2016-05-01
We present a theoretical study of the dissociative tunneling ionization process. Analytic expressions for the nuclear kinetic energy distribution of the ionization rates are derived. A particularly simple expression for the spectrum is found by using the Born-Oppenheimer (BO) approximation in conjunction with the reflection principle. These spectra are compared to exact non-BO ab initio spectra obtained through model calculations with a quantum mechanical treatment of both the electronic and nuclear degrees of freedom. In the regime where the BO approximation is applicable, imaging of the BO nuclear wave function is demonstrated to be possible through reverse use of the reflection principle, when accounting appropriately for the electronic ionization rate. A qualitative difference between the exact and BO wave functions in the asymptotic region of large electronic distances is shown. Additionally, the behavior of the wave function across the turning line is seen to be reminiscent of light refraction. For weak fields, where the BO approximation does not apply, the weak-field asymptotic theory describes the spectrum accurately.
Two approximations of the present value distribution of a disability annuity
NASA Astrophysics Data System (ADS)
Spreeuw, Jaap
2006-02-01
The distribution function of the present value of a cash flow can be approximated by means of a distribution function of a random variable, which is also the present value of a sequence of payments, but with a simpler structure. The corresponding random variable has the same expectation as the random variable corresponding to the original distribution function and is a stochastic upper bound of convex order. A sharper upper bound can be obtained if more information about the risk is available. In this paper, it will be shown that such an approach can be adopted for disability annuities (also known as income protection policies) in a three state model under Markov assumptions. Benefits are payable during any spell of disability whilst premiums are only due whenever the insured is healthy. The quality of the two approximations is investigated by comparing the distributions obtained with the one derived from the algorithm presented in the paper by Hesselager and Norberg [Insurance Math. Econom. 18 (1996) 35-42].
Novel harmonic regularization approach for variable selection in Cox's proportional hazards model.
Chu, Ge-Jin; Liang, Yong; Wang, Jia-Xuan
2014-01-01
Variable selection is an important issue in regression and a number of variable selection methods have been proposed involving nonconvex penalty functions. In this paper, we investigate a novel harmonic regularization method, which can approximate nonconvex Lq (1/2 < q < 1) regularizations, to select key risk factors in the Cox proportional hazards model using microarray gene expression data. The harmonic regularization method can be efficiently solved using our proposed direct path seeking approach, which can produce solutions that closely approximate those for the convex loss function and the nonconvex regularization. Simulation results based on artificial datasets and four real microarray gene expression datasets, such as the diffuse large B-cell lymphoma (DLBCL), lung cancer, and AML datasets, show that the harmonic regularization method can be more accurate for variable selection than existing Lasso-type methods.
Sire, Clément
2004-09-24
We study the autocorrelation function of a conserved spin system following a quench at the critical temperature. Defining the correlation length L(t) ~ t^{1/z}, we find that for times t' and t satisfying L(t')
Approximate Bayesian computation in large-scale structure: constraining the galaxy-halo connection
NASA Astrophysics Data System (ADS)
Hahn, ChangHoon; Vakili, Mohammadjavad; Walsh, Kilian; Hearin, Andrew P.; Hogg, David W.; Campbell, Duncan
2017-08-01
Standard approaches to Bayesian parameter inference in large-scale structure assume a Gaussian functional form (chi-squared form) for the likelihood. This assumption, in detail, cannot be correct. Likelihood-free inference methods such as approximate Bayesian computation (ABC) relax these restrictions and make inference possible without making any assumptions on the likelihood. Instead, ABC relies on a forward generative model of the data and a metric for measuring the distance between the model and data. In this work, we demonstrate that ABC is feasible for LSS parameter inference by using it to constrain parameters of the halo occupation distribution (HOD) model for populating dark matter haloes with galaxies. Using a specific implementation of ABC supplemented with population Monte Carlo importance sampling, a generative forward model based on the HOD, and a distance metric based on the galaxy number density, the two-point correlation function and the galaxy group multiplicity function, we constrain the HOD parameters of a mock observation generated from selected 'true' HOD parameters. The parameter constraints we obtain from ABC are consistent with the 'true' HOD parameters, demonstrating that ABC can be reliably used for parameter inference in LSS. Furthermore, we compare our ABC constraints to constraints we obtain using a pseudo-likelihood function of Gaussian form with MCMC and find consistent HOD parameter constraints. Ultimately, our results suggest that ABC can and should be applied in parameter inference for LSS analyses.
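A minimal sketch of plain ABC rejection sampling, with a toy Gaussian generative model standing in for the HOD forward model, summaries, and distance metric used in the paper; the "observed" summaries, prior ranges, and tolerance are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def forward_model(theta, n=100):
    """Toy generative model: data are summarized by their mean and std."""
    mu, sigma = theta
    x = rng.normal(mu, sigma, n)
    return np.array([x.mean(), x.std()])

obs_summary = np.array([1.0, 2.0])                  # pretend observed summaries
prior_draw = lambda: np.array([rng.uniform(-5, 5), rng.uniform(0.1, 5)])
distance = lambda s: np.linalg.norm(s - obs_summary)

epsilon, accepted = 0.3, []
while len(accepted) < 200:                          # ABC rejection loop
    theta = prior_draw()
    if distance(forward_model(theta)) < epsilon:
        accepted.append(theta)

posterior = np.array(accepted)
print("approximate posterior mean of (mu, sigma):", posterior.mean(axis=0))
```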
NASA Technical Reports Server (NTRS)
Chadwick, C.
1984-01-01
This paper describes the development and use of an algorithm to compute approximate statistics of the magnitude of a single random trajectory correction maneuver (TCM) Delta v vector. The TCM Delta v vector is modeled as a three-component Cartesian vector, each of whose components is a random variable having a normal (Gaussian) distribution with zero mean and possibly unequal standard deviations. The algorithm uses these standard deviations as input to produce approximations to (1) the mean and standard deviation of the magnitude of Delta v, (2) points of the probability density function of the magnitude of Delta v, and (3) points of the cumulative and inverse cumulative distribution functions of Delta v. The approximations are based on Monte Carlo techniques developed in a previous paper by the author and extended here. The algorithm described is expected to be useful in both pre-flight planning and in-flight analysis of maneuver propellant requirements for space missions.
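A minimal sketch of the underlying Monte Carlo idea (the paper's algorithm builds approximations on top of such techniques): sample the three Cartesian Delta v components from zero-mean normals with unequal standard deviations and estimate statistics of the magnitude. The sigma values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(42)
sigmas = np.array([0.5, 1.0, 2.0])           # per-axis standard deviations, m/s (assumed)
samples = rng.standard_normal((100_000, 3)) * sigmas
dv_mag = np.linalg.norm(samples, axis=1)     # magnitude of each sampled Delta v

print("mean |dv|       :", dv_mag.mean())
print("std  |dv|       :", dv_mag.std())
print("95th percentile :", np.quantile(dv_mag, 0.95))   # inverse CDF point
print("P(|dv| < 3 m/s) :", np.mean(dv_mag < 3.0))       # CDF point
```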
NASA Astrophysics Data System (ADS)
Ji, Xuewu; He, Xiangkun; Lv, Chen; Liu, Yahui; Wu, Jian
2018-06-01
Modelling uncertainty, parameter variation and unknown external disturbance are the major concerns in the development of an advanced controller for vehicle stability at the limits of handling. The sliding mode control (SMC) method has proved to be robust against parameter variation and unknown external disturbance with satisfactory tracking performance. But modelling uncertainty, such as errors introduced by model simplification, is inevitable in model-based controller design, resulting in lowered control quality. The adaptive radial basis function network (ARBFN) can effectively improve the control performance against large system uncertainty by learning to approximate arbitrary nonlinear functions and ensure the global asymptotic stability of the closed-loop system. In this paper, a novel vehicle dynamics stability control strategy is proposed using the adaptive radial basis function network sliding mode control (ARBFN-SMC) to learn system uncertainty and eliminate its adverse effects. This strategy adopts a hierarchical control structure which consists of a reference model layer, a yaw moment control layer, a braking torque allocation layer and an executive layer. Co-simulation using MATLAB/Simulink and AMESim is conducted on a verified 15-DOF nonlinear vehicle system model with the integrated-electro-hydraulic brake system (I-EHB) actuator in a Sine With Dwell manoeuvre. The simulation results show that the ARBFN-SMC scheme exhibits superior stability and tracking performance in different running conditions compared with the SMC scheme.
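A minimal sketch of the core ARBFN ingredient on a toy problem (not the paper's vehicle model or control law): a radial basis function network with fixed Gaussian centers whose output weights are adapted online to approximate an unknown nonlinear function from streaming samples.

```python
import numpy as np

centers = np.linspace(-3, 3, 15)             # fixed RBF centers
width = 0.5
w = np.zeros_like(centers)                   # adaptive output weights
eta = 0.1                                    # adaptation gain

def phi(x):
    # Gaussian radial basis functions evaluated at a scalar input x
    return np.exp(-((x - centers) ** 2) / (2 * width ** 2))

unknown = lambda x: 0.8 * np.sin(2 * x) + 0.3 * x   # stand-in "model uncertainty"

rng = np.random.default_rng(0)
for _ in range(5000):                        # online adaptation loop
    x = rng.uniform(-3, 3)
    err = unknown(x) - w @ phi(x)
    w += eta * err * phi(x)                  # gradient-style weight update

x_test = np.linspace(-3, 3, 7)
print(np.round([w @ phi(x) for x in x_test], 3))   # network output
print(np.round(unknown(x_test), 3))                # true function
```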
Optimal clinical trial design based on a dichotomous Markov-chain mixed-effect sleep model.
Steven Ernest, C; Nyberg, Joakim; Karlsson, Mats O; Hooker, Andrew C
2014-12-01
D-optimal designs for discrete-type responses have been derived using generalized linear mixed models, simulation based methods and analytical approximations for computing the Fisher information matrix (FIM) of non-linear mixed effect models with homogeneous probabilities over time. In this work, D-optimal designs using an analytical approximation of the FIM for a dichotomous, non-homogeneous, Markov-chain phase advanced sleep non-linear mixed effect model were investigated. The non-linear mixed effect model consisted of transition probabilities of dichotomous sleep data estimated as logistic functions using piecewise linear functions. Theoretical linear and nonlinear dose effects were added to the transition probabilities to modify the probability of being in either sleep stage. D-optimal designs were computed by determining an analytical approximation of the FIM for each Markov component (one where the previous state was awake and another where the previous state was asleep). Each Markov component FIM was weighted either equally or by the average probability of response being awake or asleep over the night and summed to derive the total FIM (FIM(total)). The reference designs were placebo and 0.1-, 1-, 6-, 10- and 20-mg dosing for a 2- to 6-way crossover study in six dosing groups. Optimized design variables were dose and number of subjects in each dose group. The designs were validated using stochastic simulation/re-estimation (SSE). Contrary to expectations, the predicted parameter uncertainty obtained via FIM(total) was larger than the uncertainty in parameter estimates computed by SSE. Nevertheless, the D-optimal designs decreased the uncertainty of parameter estimates relative to the reference designs. Additionally, the improvement for the D-optimal designs was more pronounced using SSE than predicted via FIM(total). Through the use of an approximate analytic solution and weighting schemes, the FIM(total) for a non-homogeneous, dichotomous Markov-chain phase advanced sleep model was computed and provided more efficient trial designs and increased nonlinear mixed-effects modeling parameter precision.
Unified control/structure design and modeling research
NASA Technical Reports Server (NTRS)
Mingori, D. L.; Gibson, J. S.; Blelloch, P. A.; Adamian, A.
1986-01-01
To demonstrate the applicability of the control theory for distributed systems to large flexible space structures, research was focused on a model of a space antenna which consists of a rigid hub, flexible ribs, and a mesh reflecting surface. The space antenna model used is discussed along with the finite element approximation of the distributed model. The basic control problem is to design an optimal or near-optimal compensator to suppress the linear vibrations and rigid-body displacements of the structure. The application of an infinite dimensional Linear Quadratic Gaussian (LQG) control theory to flexible structures is discussed. Two basic approaches for robustness enhancement were investigated: loop transfer recovery and sensitivity optimization. A third approach synthesized from elements of these two basic approaches is currently under development. The control-driven finite element approximation of flexible structures is discussed. Three sets of finite element basis vectors for computing functional control gains are compared. The possibility of constructing a finite element scheme to approximate the infinite dimensional Hamiltonian system directly, instead of indirectly, is discussed.
NASA Astrophysics Data System (ADS)
Li, Xiaoyu; Fan, Guodong; Pan, Ke; Wei, Guo; Zhu, Chunbo; Rizzoni, Giorgio; Canova, Marcello
2017-11-01
The design of a lumped parameter battery model preserving physical meaning is especially desired by the automotive researchers and engineers due to the strong demand for battery system control, estimation, diagnosis and prognostics. In light of this, a novel simplified fractional order electrochemical model is developed for electric vehicle (EV) applications in this paper. In the model, a general fractional order transfer function is designed for the solid phase lithium ion diffusion approximation. The dynamic characteristics of the electrolyte concentration overpotential are approximated by a first-order resistance-capacitor transfer function in the electrolyte phase. The Ohmic resistances and electrochemical reaction kinetics resistance are simplified to a lumped Ohmic resistance parameter. Overall, the number of model parameters is reduced from 30 to 9, yet the accuracy of the model is still guaranteed. In order to address the dynamics of phase-change phenomenon in the active particle during charging and discharging, variable solid-state diffusivity is taken into consideration in the model. Also, the observability of the model is analyzed on two types of lithium ion batteries subsequently. Results show the fractional order model with variable solid-state diffusivity agrees very well with experimental data at various current input conditions and is suitable for electric vehicle applications.
Issues related to the Fermion mass problem
NASA Astrophysics Data System (ADS)
Murakowski, Janusz Adam
1998-09-01
This thesis is divided into three parts. Each illustrates a different aspect of the fermion mass issue in elementary particle physics. In the first part, the possibility of chiral symmetry breaking in the presence of uniform magnetic and electric fields is investigated. The system is studied nonperturbatively with the use of basis functions compatible with the external field configuration, the parabolic cylinder functions. It is found that chiral symmetry, broken by a uniform magnetic field, is restored by an electric field. The obtained result is nonperturbative in nature: even the tiniest deviation of the electric field from zero restores chiral symmetry. In the second part, heavy quarkonium systems are investigated. To study these systems, a phenomenological nonrelativistic model is built. Approximate solutions to this model are found with the use of a specially designed Padé approximation and by direct numerical integration of the Schrödinger equation. The results are compared with experimental measurements of the respective meson masses. Good agreement between theoretical calculations and experimental results is found. Advantages and shortcomings of the new approximation method are analysed. In the third part, an extension of the standard model of elementary particles is studied. The extension, called the aspon model, was originally introduced to cure the so-called strong CP problem. In addition to fulfilling its original purpose, the aspon model modifies the couplings of the standard model quarks to the Z boson. As a result, the decay rates of the Z boson to quarks are altered. By using the recent precise measurements of the decay rates Z → bb̄ and Z → cc̄, new constraints on the aspon model parameters are found.
Parametric reduced models for the nonlinear Schrödinger equation
NASA Astrophysics Data System (ADS)
Harlim, John; Li, Xiantao
2015-05-01
Reduced models for the (defocusing) nonlinear Schrödinger equation are developed. In particular, we develop reduced models that only involve the low-frequency modes given noisy observations of these modes. The ansatz of the reduced parametric models are obtained by employing a rational approximation and a colored-noise approximation, respectively, on the memory terms and the random noise of a generalized Langevin equation that is derived from the standard Mori-Zwanzig formalism. The parameters in the resulting reduced models are inferred from noisy observations with a recently developed ensemble Kalman filter-based parametrization method. The forecasting skill across different temperature regimes are verified by comparing the moments up to order four, a two-time correlation function statistics, and marginal densities of the coarse-grained variables.
NASA Technical Reports Server (NTRS)
Box, M. A.; Deepak, A.
1981-01-01
The propagation of photons in a medium with strongly anisotropic scattering is a problem with a considerable history. Like the propagation of electrons in metal foils, it may be solved in the small-angle scattering approximation by the use of Fourier-transform techniques. In certain limiting cases, one may even obtain analytic expressions. This paper presents some of these results in a model-independent form and also illustrates them by the use of four different phase-function models. Sample calculations are provided for comparison purposes.
Landau-Zener extension of the Tavis-Cummings model: Structure of the solution
Sun, Chen; Sinitsyn, Nikolai A.
2016-09-07
We explore the recently discovered solution of the driven Tavis-Cummings model (DTCM). It describes interaction of an arbitrary number of two-level systems with a bosonic mode that has linearly time-dependent frequency. We derive compact and tractable expressions for transition probabilities in terms of the well-known special functions. In this form, our formulas are suitable for fast numerical calculations and analytical approximations. As an application, we obtain the semiclassical limit of the exact solution and compare it to prior approximations. Furthermore, we also reveal connection between DTCM and q-deformed binomial statistics.
Note on the eigensolution of a homogeneous equation with semi-infinite domain
NASA Technical Reports Server (NTRS)
Wadia, A. R.
1980-01-01
The 'variation-iteration' method using Green's functions to find the eigenvalues and the corresponding eigenfunctions of a homogeneous Fredholm integral equation is employed for the stability analysis of fluid hydromechanics problems with a semi-infinite (infinite) domain of application. The objective of the study is to develop a suitable numerical approach to the solution of such equations in order to better understand the full set of equations for 'real-world' flow models. The study involves a search for a suitable value of the length of the domain which is a fair finite approximation to infinity, which makes the eigensolution an approximation dependent on the length of the interval chosen. In the examples investigated y = 1 = a seems to be the best approximation of infinity; for y greater than unity this method fails due to the polynomial nature of Green's functions.
NASA Technical Reports Server (NTRS)
Wetherill, G. W.; Cox, L. P.
1985-01-01
The validity of the two-body approximation in calculating encounters between planetesimals has been evaluated as a function of the ratio of unperturbed planetesimal velocity (with respect to a circular orbit) to mutual escape velocity when their surfaces are in contact (V/V_e). Impact rates as a function of this ratio are calculated to within about 20 percent by numerical integration of the equations of motion. It is found that when the ratio is greater than 0.4 the two-body approximation is a good one. Consequences of reducing the ratio to less than 0.02 are examined. Factors leading to an optimal size for growth of planetesimals from a swarm of given eccentricity and placing a limit on the extent of runaway accretion are derived.
NASA Astrophysics Data System (ADS)
González, Diego Luis; Pimpinelli, Alberto; Einstein, T. L.
2011-07-01
We study the configurational structure of the point-island model for epitaxial growth in one dimension. In particular, we calculate the island gap and capture zone distributions. Our model is based on an approximate description of nucleation inside the gaps. Nucleation is described by the joint probability density pnXY(x,y), which represents the probability density to have nucleation at position x within a gap of size y. Our proposed functional form for pnXY(x,y) describes excellently the statistical behavior of the system. We compare our analytical model with extensive numerical simulations. Our model retains the most relevant physical properties of the system.
Non-Asymptotic Oracle Inequalities for the High-Dimensional Cox Regression via Lasso.
Kong, Shengchun; Nan, Bin
2014-01-01
We consider finite sample properties of the regularized high-dimensional Cox regression via lasso. Existing literature focuses on linear models or generalized linear models with Lipschitz loss functions, where the empirical risk functions are the summations of independent and identically distributed (iid) losses. The summands in the negative log partial likelihood function for censored survival data, however, are neither iid nor Lipschitz. We first approximate the negative log partial likelihood function by a sum of iid non-Lipschitz terms, then derive the non-asymptotic oracle inequalities for the lasso penalized Cox regression using pointwise arguments to tackle the difficulties caused by lacking iid Lipschitz losses.
Fixed gain and adaptive techniques for rotorcraft vibration control
NASA Technical Reports Server (NTRS)
Roy, R. H.; Saberi, H. A.; Walker, R. A.
1985-01-01
The results of an analysis effort performed to demonstrate the feasibility of employing approximate dynamical models and frequency-shaped cost functional control law design techniques for helicopter vibration suppression are presented. Both fixed gain and adaptive control designs based on linear second order dynamical models were implemented in a detailed Rotor Systems Research Aircraft (RSRA) simulation to validate these active vibration suppression control laws. Approximate models of fuselage flexibility were included in the RSRA simulation in order to more accurately characterize the structural dynamics. The results for both the fixed gain and adaptive approaches are promising and provide a foundation for pursuing further validation in more extensive simulation studies and in wind tunnel and/or flight tests.
An explicit canopy BRDF model and inversion. [Bidirectional Reflectance Distribution Function]
NASA Technical Reports Server (NTRS)
Liang, Shunlin; Strahler, Alan H.
1992-01-01
Based on a rigorous canopy radiative transfer equation, the multiple scattering radiance is approximated by the asymptotic theory, and the single scattering radiance calculation, which requires a numerical integration due to consideration of the hotspot effect, is simplified. A new formulation is presented to obtain a more exact angular dependence of the sky radiance distribution. The unscattered solar radiance and single scattering radiance are calculated exactly, and the multiple scattering is approximated by the delta two-stream atmospheric radiative transfer model. Numerical results show that the parametric canopy model is very accurate, especially when the viewing angles are smaller than 55 deg. The Powell algorithm is used to retrieve biospheric parameters from ground-measured multiangle observations.
An approach to the analysis of performance of quasi-optimum digital phase-locked loops.
NASA Technical Reports Server (NTRS)
Polk, D. R.; Gupta, S. C.
1973-01-01
An approach to the analysis of performance of quasi-optimum digital phase-locked loops (DPLL's) is presented. An expression for the characteristic function of the prior error in the state estimate is derived, and from this expression an infinite dimensional equation for the prior error variance is obtained. The prior error-variance equation is a function of the communication system model and the DPLL gain and is independent of the method used to derive the DPLL gain. Two approximations are discussed for reducing the prior error-variance equation to finite dimension. The effectiveness of one approximation in analyzing DPLL performance is studied.
Exact Time-Dependent Exchange-Correlation Potential in Electron Scattering Processes
NASA Astrophysics Data System (ADS)
Suzuki, Yasumitsu; Lacombe, Lionel; Watanabe, Kazuyuki; Maitra, Neepa T.
2017-12-01
We identify peak and valley structures in the exact exchange-correlation potential of time-dependent density functional theory that are crucial for time-resolved electron scattering in a model one-dimensional system. These structures are completely missed by adiabatic approximations that, consequently, significantly underestimate the scattering probability. A recently proposed nonadiabatic approximation is shown to correctly capture the approach of the electron to the target when the initial Kohn-Sham state is chosen judiciously, and it is more accurate than standard adiabatic functionals but ultimately fails to accurately capture reflection. These results may explain the underestimation of scattering probabilities in some recent studies on molecules and surfaces.
Development of a grid-independent approximate Riemann solver. Ph.D. Thesis - Michigan Univ.
NASA Technical Reports Server (NTRS)
Rumsey, Christopher Lockwood
1991-01-01
A grid-independent approximate Riemann solver for use with the Euler and Navier-Stokes equations was introduced and explored. The two-dimensional Euler and Navier-Stokes equations are described in Cartesian and generalized coordinates, as well as the traveling wave form of the Euler equations. The spatial and temporal discretizations are described for both explicit and implicit time-marching schemes. The grid-aligned flux function of Roe is outlined, while the 5-wave grid-independent flux function is derived. The stability and monotonicity analysis of the 5-wave model is presented. Two-dimensional results are provided, and the approach is extended to three dimensions with the corresponding results presented.
Time-dependent spin-density-functional-theory description of He+-He collisions
NASA Astrophysics Data System (ADS)
Baxter, Matthew; Kirchner, Tom; Engel, Eberhard
2017-09-01
Theoretical total cross-section results for all ionization and capture processes in the He+-He collision system are presented in the approximate impact energy range of 10-1000 keV/amu. Calculations were performed within the framework of time-dependent spin-density functional theory. The Krieger-Li-Iafrate approximation was used to determine an accurate exchange-correlation potential in the exchange-only limit. The results of two models, one where electron translation factors in the orbitals used to calculate the potential are ignored and another where partial electron translation factors are included, are compared with available experimental data as well as a selection of previous theoretical calculations.
Jia, Shaoyang; Pennington, M. R.
2017-08-01
With the introduction of a spectral representation, the Schwinger-Dyson equation (SDE) for the fermion propagator is formulated in Minkowski space in QED. After imposing the on-shell renormalization conditions, analytic solutions for the fermion propagator spectral functions are obtained in four dimensions with a renormalizable version of the Gauge Technique ansatz for the fermion-photon vertex in the quenched approximation in the Landau gauge. Despite the limitations of this model, having an explicit solution provides a guiding example of the fermion propagator with the correct analytic structure. The Padé approximation for the spectral functions is also investigated.
Generating functionals and Gaussian approximations for interruptible delay reactions
NASA Astrophysics Data System (ADS)
Brett, Tobias; Galla, Tobias
2015-10-01
We develop a generating functional description of the dynamics of non-Markovian individual-based systems in which delay reactions can be terminated before completion. This generalizes previous work in which a path-integral approach was applied to dynamics in which delay reactions complete with certainty. We construct a more widely applicable theory, and from it we derive Gaussian approximations of the dynamics, valid in the limit of large, but finite, population sizes. As an application of our theory we study predator-prey models with delay dynamics due to gestation or lag periods to reach the reproductive age. In particular, we focus on the effects of delay on noise-induced cycles.
Size-dependent error of the density functional theory ionization potential in vacuum and solution
Sosa Vazquez, Xochitl A.; Isborn, Christine M.
2015-12-22
Density functional theory is often the method of choice for modeling the energetics of large molecules and including explicit solvation effects. It is preferable to use a method that treats systems of different sizes and with different amounts of explicit solvent on equal footing. However, recent work suggests that approximate density functional theory has a size-dependent error in the computation of the ionization potential. We here investigate the lack of size-intensivity of the ionization potential computed with approximate density functionals in vacuum and solution. We show that local and semi-local approximations to exchange do not yield a constant ionization potential for an increasing number of identical isolated molecules in vacuum. Instead, as the number of molecules increases, the total energy required to ionize the system decreases. Rather surprisingly, we find that this is still the case in solution, whether using a polarizable continuum model or with explicit solvent that breaks the degeneracy of each solute, and we find that explicit solvent in the calculation can exacerbate the size-dependent delocalization error. We demonstrate that increasing the amount of exact exchange changes the character of the polarization of the solvent molecules; for small amounts of exact exchange the solvent molecules contribute a fraction of their electron density to the ionized electron, but for larger amounts of exact exchange they properly polarize in response to the cationic solute. As a result, in vacuum and explicit solvent, the ionization potential can be made size-intensive by optimally tuning a long-range corrected hybrid functional.
Orientational analysis of planar fibre systems observed as a Poisson shot-noise process.
Kärkkäinen, Salme; Lantuéjoul, Christian
2007-10-01
We consider two-dimensional fibrous materials observed as a digital greyscale image. The problem addressed is to estimate the orientation distribution of unobservable thin fibres from a greyscale image modelled by a planar Poisson shot-noise process. The classical stereological approach is not straightforward, because the point intensities of thin fibres along sampling lines may not be observable. For such cases, Kärkkäinen et al. (2001) suggested the use of scaled variograms determined from grey values along sampling lines in several directions. Their method is based on the assumption that the proportion between the scaled variograms and point intensities in all directions of sampling lines is constant. This assumption is proved to be valid asymptotically for Boolean models and dead leaves models, under some regularity conditions. In this work, we derive the scaled variogram and its approximations for a planar Poisson shot-noise process using the modified Bessel function. In the case of reasonably high resolution of the observed image, the scaled variogram has an approximate functional relation to the point intensity, and in the case of high resolution the relation is proportional. As the obtained relations are approximate, they are tested on simulations. The existing orientation analysis method based on the proportional relation is further tested on images with different resolutions. The new result, the asymptotic proportionality between the scaled variograms and the point intensities for a Poisson shot-noise process, completes the earlier results for the Boolean models and for the dead leaves models.
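As an illustration of the kind of quantity involved, the sketch below (Python; the synthetic Poisson grey-value field and the plain along-line variogram are our assumptions, and the exact scaling used by Kärkkäinen et al. may differ) computes half the mean squared grey-value increment along sampling lines in two directions:

    import numpy as np

    def line_variogram(image, step, lag=1):
        # Half the mean squared grey-value increment along one direction.
        # image: 2-D array of grey values; step: (dy, dx) pixel step defining
        # the sampling-line direction; lag: number of steps between pixels.
        dy, dx = lag * step[0], lag * step[1]
        a = image[max(dy, 0):image.shape[0] + min(dy, 0),
                  max(dx, 0):image.shape[1] + min(dx, 0)]
        b = image[max(-dy, 0):image.shape[0] + min(-dy, 0),
                  max(-dx, 0):image.shape[1] + min(-dx, 0)]
        return 0.5 * np.mean((a - b) ** 2)

    # Compare several directions; anisotropy of the variogram reflects the
    # orientation distribution of the underlying fibre process.
    rng = np.random.default_rng(0)
    img = rng.poisson(5.0, size=(256, 256)).astype(float)
    for name, step in {"horizontal": (0, 1), "vertical": (1, 0)}.items():
        print(name, line_variogram(img, step))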
Electron cyclotron thruster new modeling results preparation for initial experiments
NASA Technical Reports Server (NTRS)
Hooper, E. Bickford
1993-01-01
The following topics are discussed: a whistler-based electron cyclotron resonance heating (ECRH) thruster; cross-field coupling in the helicon approximation; wave propagation; wave structure; plasma density; wave absorption; the electron distribution function; isothermal and adiabatic plasma flow; ECRH thruster modeling; a PIC code model; electron temperature; electron energy; and initial experimental tests. The discussion is presented in vugraph form.
Prior-knowledge-based feedforward network simulation of true boiling point curve of crude oil.
Chen, C W; Chen, D Z
2001-11-01
Theoretical results and practical experience indicate that feedforward networks can approximate a wide class of functional relationships very well. This property is exploited in modeling chemical processes. Given finite and noisy training data, it is important to encode prior knowledge in neural networks to improve the fit precision and the prediction ability of the model. In this paper, for three-layer feedforward networks under a monotonicity constraint, the unconstrained method, Joerding's penalty function method, the interpolation method, and the constrained optimization method are analyzed first. Then two novel methods, the exponential weight method and the adaptive method, are proposed. These methods are applied to simulating the true boiling point curve of a crude oil under the constraint of increasing monotonicity. The simulation results show that the network models trained by the novel methods approximate the actual process well. Finally, all these methods are discussed and compared with each other.
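A minimal sketch of one reading of the exponential weight idea (our assumption of its form, not the authors' code): positivity of the weights is enforced by the reparameterization W = exp(U), which, combined with a monotone activation, makes the network output non-decreasing in its input:

    import numpy as np

    rng = np.random.default_rng(1)

    def monotone_net(x, U1, b1, u2, b2):
        # Three-layer feedforward net whose output is non-decreasing in x:
        # the exponential reparameterization W = exp(U) keeps all weights
        # positive, and tanh is monotone, so the composition is monotone.
        h = np.tanh(x @ np.exp(U1) + b1)
        return h @ np.exp(u2) + b2

    # Toy true-boiling-point-like curve: monotone in the distilled fraction x.
    x = np.linspace(0.0, 1.0, 50).reshape(-1, 1)
    U1, b1 = rng.normal(size=(1, 8)), np.zeros(8)
    u2, b2 = rng.normal(size=(8, 1)), 0.0
    y = monotone_net(x, U1, b1, u2, b2)
    assert np.all(np.diff(y[:, 0]) >= -1e-12)  # monotone by construction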
Orientation-dependent integral equation theory for a two-dimensional model of water
NASA Astrophysics Data System (ADS)
Urbič, T.; Vlachy, V.; Kalyuzhnyi, Yu. V.; Dill, K. A.
2003-03-01
We develop an integral equation theory that applies to strongly associating orientation-dependent liquids, such as water. In an earlier treatment, we developed a Wertheim integral equation theory (IET) that we tested against NPT Monte Carlo simulations of the two-dimensional Mercedes Benz model of water. The main approximation in the earlier calculation was an orientational averaging in the multidensity Ornstein-Zernike equation. Here we improve the theory by explicit introduction of an orientation dependence in the IET, based upon expanding the two-particle angular correlation function in orthogonal basis functions. We find that the new orientation-dependent IET (ODIET) yields a considerable improvement of the predicted structure of water, when compared to the Monte Carlo simulations. In particular, ODIET predicts more long-range order than the original IET, with hexagonal symmetry, as expected for the hydrogen bonded ice in this model. The new theoretical approximation still errs in some subtle properties; for example, it does not predict liquid water's density maximum with temperature or the negative thermal expansion coefficient.
NASA Astrophysics Data System (ADS)
Chan, Kevin T.; Lee, Hoonkyung; Cohen, Marvin L.
2011-10-01
Graphene provides many advantages for controlling the electronic structure of adatoms and other adsorbates via gating. Using the projected density of states and charge density obtained from first-principles density-functional periodic supercell calculations, we investigate the possibility of performing “alchemy” of adatoms on graphene, i.e., transforming the electronic structure of one species of adatom into that of another species by application of a gate voltage. Gating is modeled as a change in the number of electrons in the unit cell, with the inclusion of a compensating uniform background charge. Within this model and the generalized gradient approximation to the exchange-correlation functional, we find that such transformations are possible for K, Ca, and several transition-metal adatoms. Gate control of the occupation of the p states of In on graphene is also investigated. The validity of the supercell approximation with uniform compensating charge and the model for exchange and correlation is also discussed.
Double multiple streamtube model with recent improvements
NASA Astrophysics Data System (ADS)
Paraschivoiu, I.; Delclaux, F.
1983-06-01
The objective of the present paper is to show the new capabilities of the double multiple streamtube (DMS) model for predicting the aerodynamic loads and performance of the Darrieus vertical-axis turbine. The original DMS model has been improved (DMSV model) by considering the variation in the upwind and downwind induced velocities as a function of the azimuthal angle for each streamtube. A comparison is made of the rotor performance for several blade geometries (parabola, catenary, troposkien, and Sandia shape). A new formulation is given for an approximate troposkien shape by considering the effect of the gravitational field. The effects of three NACA symmetrical profiles, 0012, 0015 and 0018, on the aerodynamic performance of the turbine are shown. Finally, a semiempirical dynamic-stall model has been incorporated and a better approximation obtained for modeling the local aerodynamic forces and performance for a Darrieus rotor.
Aerodynamic parameter estimation via Fourier modulating function techniques
NASA Technical Reports Server (NTRS)
Pearson, A. E.
1995-01-01
Parameter estimation algorithms are developed in the frequency domain for systems modeled by input/output ordinary differential equations. The approach is based on Shinbrot's method of moment functionals utilizing Fourier based modulating functions. Assuming white measurement noises for linear multivariable system models, an adaptive weighted least squares algorithm is developed which approximates a maximum likelihood estimate and cannot be biased by unknown initial or boundary conditions in the data owing to a special property attending Shinbrot-type modulating functions. Application is made to perturbation equation modeling of the longitudinal and lateral dynamics of a high performance aircraft using flight-test data. Comparative studies are included which demonstrate potential advantages of the algorithm relative to some well established techniques for parameter identification. Deterministic least squares extensions of the approach are made to the frequency transfer function identification problem for linear systems and to the parameter identification problem for a class of nonlinear-time-varying differential system models.
Stochastic derivative-free optimization using a trust region framework
Larson, Jeffrey; Billups, Stephen C.
2016-02-17
This study presents a trust region algorithm to minimize a function f when one has access only to noise-corrupted function values f¯. The model-based algorithm dynamically adjusts its step length, taking larger steps when the model and function agree and smaller steps when the model is less accurate. The method does not require the user to specify a fixed pattern of points used to build local models and does not repeatedly sample points. If f is sufficiently smooth and the noise is independent and identically distributed with mean zero and finite variance, we prove that our algorithm produces iterates such that the corresponding function gradients converge in probability to zero. As a result, we present a prototype of our algorithm that, while simplistic in its management of previously evaluated points, solves benchmark problems in fewer function evaluations than do existing stochastic approximation methods.
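A greatly simplified, illustrative sketch of the model-based trust-region idea for noisy objectives (not the authors' algorithm; the Cauchy-like step and the acceptance threshold are our assumptions):

    import numpy as np

    def noisy_tr_minimize(f, x0, delta=1.0, iters=100):
        # Minimal trust-region loop for a noisy objective f: estimate a
        # gradient at the trust-region scale, take a Cauchy-like step, and
        # grow or shrink the radius according to the ratio rho of actual to
        # predicted decrease.
        x = np.asarray(x0, dtype=float)
        fx = f(x)
        for _ in range(iters):
            g = np.array([(f(x + delta * e) - f(x - delta * e)) / (2 * delta)
                          for e in np.eye(x.size)])
            step = -delta * g / (np.linalg.norm(g) + 1e-12)
            pred = -g @ step                      # predicted decrease (>= 0)
            f_new = f(x + step)
            rho = (fx - f_new) / (pred + 1e-12)
            if rho > 0.1:                         # accept and possibly expand
                x, fx = x + step, f_new
                delta = min(2.0 * delta, 10.0)
            else:                                 # reject and shrink
                delta *= 0.5
        return x

    # Usage: a quadratic with additive observation noise.
    rng = np.random.default_rng(0)
    noisy = lambda x: np.sum((x - 3.0) ** 2) + 0.01 * rng.normal()
    print(noisy_tr_minimize(noisy, np.zeros(2)))   # approaches [3, 3]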
On the work function and the charging of small ( r ≤ 5 nm) nanoparticles in plasmas
NASA Astrophysics Data System (ADS)
Kalered, E.; Brenning, N.; Pilch, I.; Caillault, L.; Minéa, T.; Ojamäe, L.
2017-01-01
The growth of nanoparticles (NPs) in plasmas is an attractive technique where improved theoretical understanding is needed for quantitative modeling. The variation of the work function W with size for small NPs, r_NP ≤ 5 nm, is a key quantity for modeling three NP charging processes that become increasingly important at smaller sizes: electron field emission, thermionic electron emission, and electron impact detachment. Here we report the theoretical values of the work function in this size range. Density functional theory is used to calculate the work functions for a set of NP charge numbers, sizes, and shapes, using copper for a case study. An analytical approximation is shown to give quite accurate work functions provided that r_NP > 0.4 nm, i.e., for NPs consisting of more than about 20 atoms, and provided also that the NPs have relaxed to a close-to-spherical shape. For smaller sizes, W deviates from the approximation and also depends on the charge number. Some consequences of these results for nanoparticle charging are outlined. In particular, a decrease in W for NP radii below about 1 nm has fundamental consequences for their charge in a plasma environment, and thereby for the important processes of NP nucleation, early growth, and agglomeration.
Function approximation and documentation of sampling data using artificial neural networks.
Zhang, Wenjun; Barrion, Albert
2006-11-01
Biodiversity studies in ecology often begin with the fitting and documentation of sampling data. This study makes function approximations of sampling data and documents the sampling information using artificial neural network algorithms, based on invertebrate data sampled in an irrigated rice field. Three types of sampling data, i.e., the curve of species richness vs. sample size, the rarefaction curve, and the curve of mean abundance of newly sampled species vs. sample size, are fitted and documented using a BP (backpropagation) network and an RBF (radial basis function) network. For comparison, the Arrhenius model, the rarefaction model, and a power function are tested for their ability to fit these data. The results show that the BP and RBF networks fit the data better than these models, with smaller errors. BP and RBF networks can fit non-linear functions (sampling data) to a specified accuracy and do not require mathematical assumptions. In addition to interpolation, the BP network is used to extrapolate the functions, and the asymptote of the sampling data can be drawn. The BP network takes longer to train and its results are less stable than those of the RBF network. The RBF network requires more neurons to fit functions and generally may not be used to extrapolate the functions. The mathematical function for sampling data can be fitted exactly using artificial neural network algorithms by adjusting the desired accuracy and maximum iterations. The total number of functional species of invertebrates in the tropical irrigated rice field is extrapolated as 140 to 149 using the trained BP network, which is similar to the observed richness.
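A hedged sketch of the same kind of fit using a small multilayer perceptron on a synthetic species-accumulation curve (the data, network size, and solver here are our assumptions, not the study's settings):

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    # Hypothetical species-accumulation data: richness vs. number of samples.
    rng = np.random.default_rng(0)
    n = np.arange(1, 61, dtype=float)
    richness = 149 * n / (12 + n) + rng.normal(0.0, 1.5, n.size)

    net = MLPRegressor(hidden_layer_sizes=(10,), activation="tanh",
                       solver="lbfgs", max_iter=5000, random_state=0)
    net.fit(n.reshape(-1, 1), richness)

    # Interpolate and (cautiously) extrapolate towards the asymptote.
    print(net.predict(np.array([[30.0], [100.0], [200.0]])))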
ERIC Educational Resources Information Center
Esposito, Alena G.; Baker-Ward, Lynne
2013-01-01
This investigation is an initial examination of possible enhancement of executive function through a dual-language (50:50) education model. The ethnically diverse, low-income sample of 120 children from Grades K, 2, and 4 consisted of approximately equal numbers of children enrolled in dual-language and traditional classrooms. Dual-language…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gu, Z.; Ching, W.Y.
Based on the Sterne-Inkson model for the self-energy correction to the single-particle energy in the local-density approximation (LDA), we have implemented an approximate energy-dependent and k-dependent GW correction scheme to the orthogonalized linear combination of atomic orbital-based local-density calculation for insulators. In contrast to the approach of Jenkins, Srivastava, and Inkson, we evaluate the on-site exchange integrals using the LDA Bloch functions throughout the Brillouin zone. By using a k-weighted band gap E_g…
Approximate Dynamic Programming Algorithms for United States Air Force Officer Sustainment
2015-03-26
level of correction needed. While paying bonuses has an easily calculable cost, RIFs have more subtle costs. Mone (1994) discovered that in a steady… a regression is performed utilizing instrumental variables to minimize Bellman error. This algorithm uses a set of basis functions to approximate the… transitioned to an all-volunteer force. Charnes et al. (1972) utilize a goal programming model for General Schedule civilian manpower management in the…
Nuclear effects in (anti)neutrino charged-current quasielastic scattering at MINERνA kinematics
NASA Astrophysics Data System (ADS)
Ivanov, M. V.; Antonov, A. N.; Megias, G. D.; González-Jiménez, R.; Barbaro, M. B.; Caballero, J. A.; Donnelly, T. W.; Udías, J. M.
2018-05-01
We compare the characteristics of charged-current quasielastic (anti)neutrino scattering obtained in two different nuclear models, the phenomenological SuperScaling Approximation and a model using a realistic spectral function S(p, ɛ) that gives a scaling function in accordance with the (e, e′) scattering data, with the recent data published by the MiniBooNE, MINERνA, and NOMAD collaborations. The spectral function accounts for nucleon-nucleon (NN) correlations by using natural orbitals from the Jastrow correlation method and has a realistic energy dependence. Both models provide a good description of the MINERνA and NOMAD data without the need for an ad hoc increase of the value of the mass parameter in the axial-vector dipole form factor. The models considered in this work, based on the impulse approximation (IA), underpredict the MiniBooNE data for the flux-averaged charged-current quasielastic ν_μ (ν̄_μ) + ¹²C differential cross section per nucleon and for the total cross sections, although the shape of the cross sections is reproduced by these approaches. The discrepancy is most likely due to effects beyond the IA that are missing from the models, e.g., the 2p-2h meson-exchange currents that contribute to the transverse responses.
Case-Deletion Diagnostics for Nonlinear Structural Equation Models
ERIC Educational Resources Information Center
Lee, Sik-Yum; Lu, Bin
2003-01-01
In this article, a case-deletion procedure is proposed to detect influential observations in a nonlinear structural equation model. The key idea is to develop the diagnostic measures based on the conditional expectation of the complete-data log-likelihood function in the EM algorithm. An one-step pseudo approximation is proposed to reduce the…
On the Power of Multivariate Latent Growth Curve Models to Detect Correlated Change
ERIC Educational Resources Information Center
Hertzog, Christopher; Lindenberger, Ulman; Ghisletta, Paolo; Oertzen, Timo von
2006-01-01
We evaluated the statistical power of single-indicator latent growth curve models (LGCMs) to detect correlated change between two variables (covariance of slopes) as a function of sample size, number of longitudinal measurement occasions, and reliability (measurement error variance). Power approximations following the method of Satorra and Saris…
Zaĭtseva, N V; Trusov, P V; Kir'ianov, D A
2012-01-01
The mathematical concept model presented describes the accumulation of functional disorders associated with environmental factors, plays a predictive role, and is designed for assessment of the possible effects caused by heterogeneous factors with variable exposures. Considering exposure changes together with the self-restoration process opens prospects for using the model to evaluate, analyse and manage occupational risks. To develop current theoretical approaches, the authors suggested a model that considers age-related body peculiarities, systemic interactions of organs (including neuro-humoral regulation), accumulation of functional disorders due to external factors, and rehabilitation of functions during treatment. The general problem setting involves defining over a hundred unknown coefficients that characterize the speed of various processes within the body. To solve this problem, the authors used an iterative approach of successive identification, which starts from an initial approximation of the model parameters and performs subsequent updating on the basis of new theoretical and empirical knowledge.
NASA Astrophysics Data System (ADS)
Noah, Joyce E.
Time correlation functions of density fluctuations of liquids at equilibrium can be used to relate the microscopic dynamics of a liquid to its macroscopic transport properties. Time correlation functions are especially useful since they can be generated in a variety of ways, from scattering experiments to computer simulation to analytic theory. The kinetic theory of fluctuations in equilibrium liquids is an analytic theory for calculating correlation functions using memory functions. In this work, we use a diagrammatic formulation of the kinetic theory to develop a series of binary collision approximations for the collisional part of the memory function. We define binary collisions as collisions between two distinct density fluctuations whose identities are fixed during the duration of a collision. R approximations are for the short time part of the memory function, and build upon the work of Ranganathan and Andersen. These approximations have purely repulsive interactions between the fluctuations. The second type of approximation, RA approximations, is for the longer time part of the memory function, where the density fluctuations now interact via repulsive and attractive forces. Although RA approximations are a natural extension of R approximations, they permit two density fluctuations to become trapped in the wells of the interaction potential, leading to long-lived oscillatory behavior, which is unphysical. Therefore we consider S approximations, which describe binary particles that experience the random effects of the surroundings while interacting via repulsive, or repulsive and attractive, interactions. For each of these approximations for the memory function we numerically solve the kinetic equation to generate correlation functions. These results are compared to molecular dynamics results for the correlation functions. Comparing the successes and failures of the different approximations, we conclude that R approximations give more accurate intermediate and long time results, while RA and S approximations do particularly well at predicting the short time behavior. Lastly, we develop a series of non-graphically derived approximations and use an optimization procedure to determine the underlying memory function from the simulation data. These approaches provide valuable information about the memory function that will be used in the development of future kinetic theories.
A study of different modeling choices for simulating platelets within the immersed boundary method
Shankar, Varun; Wright, Grady B.; Fogelson, Aaron L.; Kirby, Robert M.
2012-01-01
The Immersed Boundary (IB) method is a widely-used numerical methodology for the simulation of fluid–structure interaction problems. The IB method utilizes an Eulerian discretization for the fluid equations of motion while maintaining a Lagrangian representation of structural objects. Operators are defined for transmitting information (forces and velocities) between these two representations. Most IB simulations represent their structures with piecewise linear approximations and utilize Hookean spring models to approximate structural forces. Our specific motivation is the modeling of platelets in hemodynamic flows. In this paper, we study two alternative representations – radial basis functions (RBFs) and Fourier-based (trigonometric polynomials and spherical harmonics) representations – for the modeling of platelets in two and three dimensions within the IB framework, and compare our results with the traditional piecewise linear approximation methodology. For different representative shapes, we examine the geometric modeling errors (position and normal vectors), force computation errors, and computational cost and provide an engineering trade-off strategy for when and why one might select to employ these different representations. PMID:23585704
Probability density and exceedance rate functions of locally Gaussian turbulence
NASA Technical Reports Server (NTRS)
Mark, W. D.
1989-01-01
A locally Gaussian model of turbulence velocities is postulated which consists of the superposition of a slowly varying strictly Gaussian component representing slow temporal changes in the mean wind speed and a more rapidly varying locally Gaussian turbulence component possessing a temporally fluctuating local variance. Series expansions of the probability density and exceedance rate functions of the turbulence velocity model, based on Taylor's series, are derived. Comparisons of the resulting two-term approximations with measured probability density and exceedance rate functions of atmospheric turbulence velocity records show encouraging agreement, thereby confirming the consistency of the measured records with the locally Gaussian model. Explicit formulas are derived for computing all required expansion coefficients from measured turbulence records.
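One way to read the construction (our assumed formulas, not necessarily the paper's exact expansion): the marginal density is a variance mixture of Gaussians, and a two-term Taylor expansion about the mean local variance gives

    p(u) = \int_0^{\infty} \frac{1}{\sqrt{2\pi\sigma^2}} \exp\Big(-\frac{u^2}{2\sigma^2}\Big) f(\sigma^2)\, d\sigma^2
         \approx \phi(u;\bar{\sigma}^2) + \frac{1}{2}\operatorname{Var}(\sigma^2)\,
           \frac{\partial^2 \phi(u;\sigma^2)}{\partial(\sigma^2)^2}\Big|_{\sigma^2=\bar{\sigma}^2}

where φ(·; σ²) is the zero-mean Gaussian density, f is the distribution of the fluctuating local variance, and σ̄² = E[σ²]; the first-order term vanishes because the expansion is taken at the mean variance.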
Free-energy functional of the Debye-Hückel model of simple fluids
NASA Astrophysics Data System (ADS)
Piron, R.; Blenski, T.
2016-12-01
The Debye-Hückel approximation to the free energy of a simple fluid is written as a functional of the pair correlation function. This functional can be seen as the Debye-Hückel equivalent to the functional derived in the hypernetted chain framework by Morita and Hiroike, as well as by Lado. It allows one to obtain the Debye-Hückel integral equation through a minimization with respect to the pair correlation function, leads to the correct form of the internal energy, and fulfills the virial theorem.
NASA Astrophysics Data System (ADS)
Scherstjanoi, M.; Kaplan, J. O.; Lischke, H.
2014-07-01
To be able to simulate climate change effects on forest dynamics over the whole of Switzerland, we adapted the second-generation DGVM (dynamic global vegetation model) LPJ-GUESS (Lund-Potsdam-Jena General Ecosystem Simulator) to the Alpine environment. We modified model functions, tuned model parameters, and implemented new tree species to represent the potential natural vegetation of Alpine landscapes. Furthermore, we increased the computational efficiency of the model to enable area-covering simulations at a fine resolution (1 km) sufficient for the complex topography of the Alps, which resulted in more than 32 000 simulation grid cells. To this aim, we applied the recently developed method GAPPARD (approximating GAP model results with a Probabilistic Approach to account for stand Replacing Disturbances) (Scherstjanoi et al., 2013) to LPJ-GUESS. GAPPARD derives mean output values from a combination of simulation runs without disturbances and a patch age distribution defined by the disturbance frequency. With this computationally efficient method, which increased the model's speed by approximately a factor of 8, we were able to detect the shortcomings of LPJ-GUESS functions and parameters more quickly. We used the adapted LPJ-GUESS together with GAPPARD to assess the influence of one climate change scenario on the dynamics of tree species composition and biomass throughout the 21st century in Switzerland. To allow for comparison with the original model, we additionally simulated forest dynamics along a north-south transect through Switzerland. The results from this transect confirmed the high value of the GAPPARD method despite some limitations with respect to extreme climatic events. It allowed us, for the first time, to obtain area-wide, detailed high-resolution LPJ-GUESS simulation results for a large part of the Alpine region.
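Roughly, the GAPPARD post-processing can be thought of as an expectation over the patch-age distribution implied by the disturbance frequency (a sketch under our assumptions; the exact weighting is given in Scherstjanoi et al., 2013):

    \bar{y}(t) \approx \int_0^{\infty} y_{\mathrm{nd}}(t,a)\, p(a)\, da,
    \qquad p(a) = \frac{1}{\tau}\, e^{-a/\tau}

where y_nd(t, a) is the output of a disturbance-free run for a patch of age a and τ is the mean disturbance return interval.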
Cox, G; Beresford, N A; Alvarez-Farizo, B; Oughton, D; Kis, Z; Eged, K; Thørring, H; Hunt, J; Wright, S; Barnett, C L; Gil, J M; Howard, B J; Crout, N M J
2005-01-01
A spatially implemented model designed to assist the identification of optimal countermeasure strategies for radioactively contaminated regions is described. Collective and individual ingestion doses for people within the affected area are estimated together with collective exported ingestion dose. A range of countermeasures are incorporated within the model, and environmental restrictions have been included as appropriate. The model evaluates the effectiveness of a given combination of countermeasures through a cost function which balances the benefit obtained through the reduction in dose with the cost of implementation. The optimal countermeasure strategy is the combination of individual countermeasures (and when and where they are implemented) which gives the lowest value of the cost function. The model outputs should not be considered as definitive solutions, rather as interactive inputs to the decision making process. As a demonstration the model has been applied to a hypothetical scenario in Cumbria (UK). This scenario considered a published nuclear power plant accident scenario with a total deposition of 1.7×10^14, 1.2×10^13, 2.8×10^10 and 5.3×10^9 Bq for Cs-137, Sr-90, Pu-239/240 and Am-241, respectively. The model predicts that if no remediation measures were implemented the resulting collective dose would be approximately 36 000 person-Sv (predominantly from Cs-137) over a 10-year period post-deposition. The optimal countermeasure strategy is predicted to avert approximately 33 000 person-Sv at a cost of approximately 160 million pounds. The optimal strategy comprises a mixture of ploughing, AFCF (ammonium-ferric hexacyano-ferrate) administration, potassium fertiliser application, clean feeding of livestock and food restrictions. The model recommends specific areas within the contaminated area and time periods where these measures should be implemented.
Strategies for Efficient Computation of the Expected Value of Partial Perfect Information
Madan, Jason; Ades, Anthony E.; Price, Malcolm; Maitland, Kathryn; Jemutai, Julie; Revill, Paul; Welton, Nicky J.
2014-01-01
Expected value of information methods evaluate the potential health benefits that can be obtained from conducting new research to reduce uncertainty in the parameters of a cost-effectiveness analysis model, hence reducing decision uncertainty. Expected value of partial perfect information (EVPPI) provides an upper limit to the health gains that can be obtained from conducting a new study on a subset of parameters in the cost-effectiveness analysis and can therefore be used as a sensitivity analysis to identify parameters that most contribute to decision uncertainty and to help guide decisions around which types of study are of most value to prioritize for funding. A common general approach is to use nested Monte Carlo simulation to obtain an estimate of EVPPI. This approach is computationally intensive, can lead to significant sampling bias if an inadequate number of inner samples are obtained, and incorrect results can be obtained if correlations between parameters are not dealt with appropriately. In this article, we set out a range of methods for estimating EVPPI that avoid the need for nested simulation: reparameterization of the net benefit function, Taylor series approximations, and restricted cubic spline estimation of conditional expectations. For each method, we set out the generalized functional form that net benefit must take for the method to be valid. By specifying this functional form, our methods are able to focus on components of the model in which approximation is required, avoiding the complexities involved in developing statistical approximations for the model as a whole. Our methods also allow for any correlations that might exist between model parameters. We illustrate the methods using an example of fluid resuscitation in African children with severe malaria. PMID:24449434
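A compact sketch of the regression idea for a single-parameter EVPPI (Python; a polynomial stands in for the restricted cubic spline used in the paper, and the toy data are invented):

    import numpy as np

    def evppi_single_parameter(theta, nb, degree=4):
        # Regression-based EVPPI: fit E[NB_d | theta] for each decision d
        # (a polynomial stands in for the restricted cubic spline), then
        # EVPPI = E_theta[max_d E(NB_d | theta)] - max_d E(NB_d).
        fitted = np.column_stack([
            np.polyval(np.polyfit(theta, nb[:, d], degree), theta)
            for d in range(nb.shape[1])
        ])
        return fitted.max(axis=1).mean() - nb.mean(axis=0).max()

    # Toy example: two decisions, one uncertain parameter.
    rng = np.random.default_rng(0)
    theta = rng.normal(size=5000)
    nb = np.column_stack([1000.0 * theta + rng.normal(0, 500, 5000),
                          200.0 + rng.normal(0, 500, 5000)])
    print(evppi_single_parameter(theta, nb))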
Electron localisation in static and time-dependent one-dimensional model systems
NASA Astrophysics Data System (ADS)
Durrant, T. R.; Hodgson, M. J. P.; Ramsden, J. D.; Godby, R. W.
2018-02-01
The most direct signature of electron localisation is the tendency of an electron in a many-body system to exclude other same-spin electrons from its vicinity. By applying this concept directly to the exact many-body wavefunction, we find that localisation can vary considerably between different ground-state systems, and can also be strongly disrupted, as a function of time, when a system is driven by an applied electric field. We use this measure to assess the well-known electron localisation function (ELF), both in its approximate single-particle form (often applied within density-functional theory) and its full many-particle form. The full ELF always gives an excellent description of localisation, but the approximate ELF fails in time-dependent situations, even when the exact Kohn-Sham orbitals are employed.
Lutchen, K R
1990-08-01
A sensitivity analysis based on weighted least-squares regression is presented to evaluate alternative methods for fitting lumped-parameter models to respiratory impedance data. The goal is to maintain parameter accuracy simultaneously with practical experiment design. The analysis focuses on predicting parameter uncertainties using a linearized approximation for joint confidence regions. Applications are with four-element parallel and viscoelastic models for 0.125- to 4-Hz data and a six-element model with separate tissue and airway properties for input and transfer impedance data from 2-64 Hz. The criterion function form was evaluated by comparing parameter uncertainties when data are fit as magnitude and phase, dynamic resistance and compliance, or real and imaginary parts of input impedance. The proper choice of weighting can make all three criterion variables comparable. For the six-element model, parameter uncertainties were predicted when both input impedance and transfer impedance are acquired and fit simultaneously. A fit to both data sets from 4 to 64 Hz could reduce parameter estimate uncertainties considerably from those achievable by fitting either alone. For the four-element models, use of an independent, but noisy, measure of static compliance was assessed as a constraint on model parameters. This may allow acceptable parameter uncertainties with a minimum frequency of 0.275-0.375 Hz rather than 0.125 Hz, reducing the data acquisition requirement from a 16-s to a 5.33- to 8-s breath-holding period. These results are approximations, and the impact of using the linearized approximation for the confidence regions is discussed.
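The linearized joint-confidence-region machinery this analysis relies on can be summarized as (notation assumed here):

    \operatorname{Cov}(\hat{\theta}) \approx \hat{\sigma}^2 \left(J^{\top} W J\right)^{-1},
    \qquad J_{ij} = \frac{\partial Z(\omega_i;\theta)}{\partial \theta_j}\Big|_{\theta=\hat{\theta}}

where Z is the modeled impedance at frequency ω_i, W the weighting matrix, and the parameter uncertainties are read off the diagonal of this covariance.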
Creating Weather System Ensembles Through Synergistic Process Modeling and Machine Learning
NASA Astrophysics Data System (ADS)
Chen, B.; Posselt, D. J.; Nguyen, H.; Wu, L.; Su, H.; Braverman, A. J.
2017-12-01
Earth's weather and climate are sensitive to a variety of control factors (e.g., initial state, forcing functions). Characterizing the response of the atmosphere to a change in initial conditions or model forcing is critical for weather forecasting (ensemble prediction) and climate change assessment. Input-response relationships can be quantified by generating an ensemble of multiple (100s to 1000s of) realistic realizations of weather and climate states. Atmospheric numerical models generate simulated data through discretized numerical approximation of the partial differential equations (PDEs) governing the underlying physics. However, the computational expense of running high-resolution atmospheric state models makes generation of more than a few simulations infeasible. Here, we discuss an experiment wherein we approximate the numerical PDE solver within the Weather Research and Forecasting (WRF) Model using neural networks trained on a subset of model run outputs. Once trained, these neural nets can produce a large number of realizations of weather states from a small number of deterministic simulations, with speeds that are orders of magnitude faster than the underlying PDE solver. Our neural network architecture is inspired by the governing partial differential equations. These equations are location-invariant and consist of first and second derivatives. As such, we use a 3x3 lon-lat grid of atmospheric profiles as the predictor in the neural net to provide the network with the information necessary to compute the first and second moments. Results indicate that the neural network algorithm can approximate the PDE outputs with a high degree of accuracy (less than 1% error), and that this error increases as a function of the prediction time lag.
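A minimal sketch of the surrogate setup (Python; the diffusion-like toy field, grid size, and network shape are our assumptions standing in for WRF output): each 3x3 neighbourhood at time t is the predictor for the centre value at t+1:

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    # Synthetic stand-in for model output: a diffusion-like field evolving on
    # a periodic 20 x 20 grid (a placeholder for the WRF PDE solver).
    rng = np.random.default_rng(0)
    fields = [rng.normal(size=(20, 20))]
    for _ in range(200):
        f = fields[-1]
        fields.append(f + 0.2 * (np.roll(f, 1, 0) + np.roll(f, -1, 0)
                                 + np.roll(f, 1, 1) + np.roll(f, -1, 1) - 4 * f))
    fields = np.array(fields)

    # Training pairs: the 3x3 neighbourhood at time t predicts the centre
    # value at t + 1, mirroring the locality of the governing equations.
    X, y = [], []
    for t in range(fields.shape[0] - 1):
        for i in range(1, 19):
            for j in range(1, 19):
                X.append(fields[t, i - 1:i + 2, j - 1:j + 2].ravel())
                y.append(fields[t + 1, i, j])

    emulator = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=300,
                            random_state=0)
    emulator.fit(np.array(X), np.array(y))
    print(emulator.score(np.array(X), np.array(y)))   # in-sample fit quality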
Li, Tsung-Lung; Lu, Wen-Cai
2015-10-05
In this work, Koopmans' theorem for Kohn-Sham density functional theory (KS-DFT) is applied to the photoemission spectra (PES) modeling over the entire valence-band. To examine the validity of this application, a PES modeling scheme is developed to facilitate a full valence-band comparison of theoretical PES spectra with experiments. The PES model incorporates the variations of electron ionization cross-sections over atomic orbitals and a linear dispersion of spectral broadening widths. KS-DFT simulations of pristine rubrene (5,6,11,12-tetraphenyltetracene) and potassium-rubrene complex are performed, and the simulation results are used as the input to the PES models. Two conclusions are reached. First, decompositions of the theoretical total spectra show that the dissociated electron of the potassium mainly remains on the backbone and has little effect on the electronic structures of phenyl side groups. This and other electronic-structure results deduced from the spectral decompositions have been qualitatively obtained with the anionic approximation to potassium-rubrene complexes. The qualitative validity of the anionic approximation is thus verified. Second, comparison of the theoretical PES with the experiments shows that the full-scale simulations combined with the PES modeling methods greatly enhance the agreement on spectral shapes over the anionic approximation. This agreement of the theoretical PES spectra with the experiments over the full valence-band can be regarded, to some extent, as a collective validation of the application of Koopmans' theorem for KS-DFT to valence-band PES, at least, for this hydrocarbon and its alkali-adsorbed complex. Copyright © 2015 Elsevier B.V. All rights reserved.
Gaussian approximation potential modeling of lithium intercalation in carbon nanostructures
NASA Astrophysics Data System (ADS)
Fujikake, So; Deringer, Volker L.; Lee, Tae Hoon; Krynski, Marcin; Elliott, Stephen R.; Csányi, Gábor
2018-06-01
We demonstrate how machine-learning based interatomic potentials can be used to model guest atoms in host structures. Specifically, we generate Gaussian approximation potential (GAP) models for the interaction of lithium atoms with graphene, graphite, and disordered carbon nanostructures, based on reference density functional theory data. Rather than treating the full Li-C system, we demonstrate how the energy and force differences arising from Li intercalation can be modeled and then added to a (preexisting and unmodified) GAP model of pure elemental carbon. Furthermore, we show the benefit of using an explicit pair potential fit to capture "effective" Li-Li interactions and to improve the performance of the GAP model. This provides proof-of-concept for modeling guest atoms in host frameworks with machine-learning based potentials and in the longer run is promising for carrying out detailed atomistic studies of battery materials.
Approximate likelihood calculation on a phylogeny for Bayesian estimation of divergence times.
dos Reis, Mario; Yang, Ziheng
2011-07-01
The molecular clock provides a powerful way to estimate species divergence times. If information on some species divergence times is available from the fossil or geological record, it can be used to calibrate a phylogeny and estimate divergence times for all nodes in the tree. The Bayesian method provides a natural framework to incorporate different sources of information concerning divergence times, such as information in the fossil and molecular data. Current models of sequence evolution are intractable in a Bayesian setting, and Markov chain Monte Carlo (MCMC) is used to generate the posterior distribution of divergence times and evolutionary rates. This method is computationally expensive, as it involves the repeated calculation of the likelihood function. Here, we explore the use of Taylor expansion to approximate the likelihood during MCMC iteration. The approximation is much faster than conventional likelihood calculation. However, the approximation is expected to be poor when the proposed parameters are far from the likelihood peak. We explore the use of parameter transforms (square root, logarithm, and arcsine) to improve the approximation to the likelihood curve. We found that the new methods, particularly the arcsine-based transform, provided very good approximations under relaxed clock models and also under the global clock model when the global clock is not seriously violated. The approximation is poorer for analysis under the global clock when the global clock is seriously wrong and should thus not be used. The results suggest that the approximate method may be useful for Bayesian dating analysis using large data sets.
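The approximation in question is the second-order Taylor expansion of the log-likelihood about its maximum, taken on transformed branch lengths (square-root, logarithm, or arcsine transform):

    \ell(\theta) \approx \ell(\hat{\theta}) + \tfrac{1}{2}(\theta-\hat{\theta})^{\top} H (\theta-\hat{\theta}),
    \qquad H = \nabla^2 \ell(\hat{\theta})

so each MCMC iteration evaluates only a quadratic form instead of the full likelihood; the gradient term vanishes because the expansion is at the maximum.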
Laplace approximation for Bessel functions of matrix argument
NASA Astrophysics Data System (ADS)
Butler, Ronald W.; Wood, Andrew T. A.
2003-06-01
We derive Laplace approximations to three functions of matrix argument which arise in statistics and elsewhere: the matrix Bessel function A_ν; the matrix Bessel function B_ν; and the type II confluent hypergeometric function of matrix argument, Ψ. We examine the theoretical and numerical properties of the approximations. On the theoretical side, it is shown that the Laplace approximations to A_ν, B_ν and Ψ given here, together with the Laplace approximations to the matrix-argument functions 1F1 and 2F1 presented in Butler and Wood (Laplace approximations to hypergeometric functions with matrix argument, Ann. Statist. (2002)), satisfy all the important confluence relations and symmetry relations enjoyed by the original functions.
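For orientation, the scalar version of the Laplace approximation underlying these results is (standard form, stated here for d-dimensional Euclidean integrals; the matrix-argument case replaces the integral by one over positive-definite matrices):

    \int_{\mathbb{R}^d} e^{-n\,h(x)}\, dx \approx e^{-n\,h(\hat{x})} \left(\frac{2\pi}{n}\right)^{d/2} \left|\nabla^2 h(\hat{x})\right|^{-1/2}

where x̂ is the minimizer of h and the determinant is that of the Hessian at that point.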
High-Dimensional Function Approximation With Neural Networks for Large Volumes of Data.
Andras, Peter
2018-02-01
Approximation of high-dimensional functions is a challenge for neural networks due to the curse of dimensionality. Often the data for which the approximated function is defined reside on a low-dimensional manifold, and in principle the approximation of the function over this manifold should improve the approximation performance. It has been shown that projecting the data manifold into a lower dimensional space, followed by neural network approximation of the function over this space, provides a more precise approximation of the function than approximation with neural networks in the original data space. However, if the data volume is very large, the projection into the low-dimensional space has to be based on a limited sample of the data. Here, we investigate the nature of the approximation error of neural networks trained over the projection space. We show that such neural networks should have better approximation performance than neural networks trained on high-dimensional data, even if the projection is based on a relatively sparse sample of the data manifold. We also find that it is preferable to use a uniformly distributed sparse sample of the data for the purpose of generating the low-dimensional projection. We illustrate these results by considering the practical neural network approximation of a set of functions defined on high-dimensional data, including real-world data.
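An illustrative sketch of the projection-then-approximation pipeline (Python; the linear embedding, PCA projection, and target function are our own toy choices):

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)

    # High-dimensional data lying on a low-dimensional manifold: a 3-D latent
    # signal embedded (linearly, for simplicity) in 100 dimensions.
    latent = rng.uniform(-1.0, 1.0, size=(20000, 3))
    X = latent @ rng.normal(size=(3, 100))
    y = np.sin(latent[:, 0]) + latent[:, 1] * latent[:, 2]

    # The projection is learned from a sparse, uniformly drawn subsample only.
    sample = rng.choice(X.shape[0], size=500, replace=False)
    proj = PCA(n_components=3).fit(X[sample])

    net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=500,
                       random_state=0)
    net.fit(proj.transform(X), y)
    print(net.score(proj.transform(X), y))   # fit quality in projected space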
Resumming the large-N approximation for time evolving quantum systems
NASA Astrophysics Data System (ADS)
Mihaila, Bogdan; Dawson, John F.; Cooper, Fred
2001-05-01
In this paper we discuss two methods of resumming the leading and next-to-leading order in 1/N diagrams for the quartic O(N) model. These two approaches have the property that they preserve both boundedness and positivity for expectation values of operators in our numerical simulations. These approximations can be understood either in terms of a truncation to the infinitely coupled Schwinger-Dyson hierarchy of equations, or by choosing a particular two-particle irreducible vacuum energy graph in the effective action of the Cornwall-Jackiw-Tomboulis formalism. We confine our discussion to the case of quantum mechanics where the Lagrangian is L(x, ẋ) = (1/2) Σ_{i=1}^{N} ẋ_i² − (g/8N) [Σ_{i=1}^{N} x_i² − r_0²]². The key to these approximations is to treat both the x propagator and the x² propagator on a similar footing, which leads to a theory whose graphs have the same topology as QED, with the x² propagator playing the role of the photon. The bare vertex approximation is obtained by replacing the exact vertex function by the bare one in the exact Schwinger-Dyson equations for the one- and two-point functions. The second approximation, which we call the dynamic Debye screening approximation, makes the further approximation of replacing the exact x² propagator by its value at leading order in the 1/N expansion. These two approximations are compared with exact numerical simulations for the quantum roll problem. The bare vertex approximation captures the physics at large and modest N better than the dynamic Debye screening approximation.
Novel Harmonic Regularization Approach for Variable Selection in Cox's Proportional Hazards Model
Chu, Ge-Jin; Liang, Yong; Wang, Jia-Xuan
2014-01-01
Variable selection is an important issue in regression, and a number of variable selection methods have been proposed involving nonconvex penalty functions. In this paper, we investigate a novel harmonic regularization method, which can approximate nonconvex L_q (1/2 < q < 1) regularizations, to select key risk factors in Cox's proportional hazards model using microarray gene expression data. The harmonic regularization method can be efficiently solved using our proposed direct path seeking approach, which can produce solutions that closely approximate those for the convex loss function and the nonconvex regularization. Simulation results based on artificial datasets and four real microarray gene expression datasets, such as the diffuse large B-cell lymphoma (DLBCL), lung cancer, and AML datasets, show that the harmonic regularization method can be more accurate for variable selection than existing Lasso series methods. PMID:25506389
The positronium and the dipositronium in a Hartree-Fock approximation of quantum electrodynamics
NASA Astrophysics Data System (ADS)
Sok, Jérémy
2016-02-01
The Bogoliubov-Dirac-Fock (BDF) model is a no-photon approximation of quantum electrodynamics. It allows one to study relativistic electrons in interaction with the Dirac sea. A state is fully characterized by its one-body density matrix, an infinite-rank non-negative projector. We prove the existence of the para-positronium, the bound state of an electron and a positron with antiparallel spins, in the BDF model, represented by a critical point of the energy functional in the absence of an external field. We also prove the existence of the dipositronium, a molecule made of two electrons and two positrons, which also appears as a critical point. More generally, for any half-integer j ∈ 1/2 + ℤ₊, we prove the existence of a critical point of the energy functional made of 2j + 1 electrons and 2j + 1 positrons.
NASA Astrophysics Data System (ADS)
Yuan, Shifei; Jiang, Lei; Yin, Chengliang; Wu, Hongjie; Zhang, Xi
2017-06-01
The electrochemistry-based battery model can provide physics-meaningful knowledge about the lithium-ion battery system, but with extensive computational burdens. To motivate the development of reduced-order battery models, three major contributions are made throughout this paper: (1) a transfer-function type of simplified electrochemical model is proposed to address the current-voltage relationship, using the Padé approximation method and modified boundary conditions for the electrolyte diffusion equations. The model performance has been verified under pulse charge/discharge and dynamic stress test (DST) profiles, with a standard deviation of less than 0.021 V and a runtime 50 times faster. (2) The parametric relationship between the equivalent circuit model and the simplified electrochemical model has been established, which will enhance the comprehension of the two models with more in-depth physical significance and provide new methods for electrochemical model parameter estimation. (3) Four simplified electrochemical model parameters, the equivalent resistance R_eq, the effective diffusion coefficient in the electrolyte phase D_e^eff, the electrolyte phase volume fraction ε, and the open circuit voltage (OCV), have been identified by the recursive least squares (RLS) algorithm with the modified DST profiles at 45, 25 and 0 °C. The simulation results indicate that the proposed model coupled with the RLS algorithm can achieve high accuracy for electrochemical parameter identification in dynamic scenarios.
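A generic recursive least squares update of the kind used for such identification (Python sketch; the two-parameter regressor below is an invented stand-in, not the paper's battery model):

    import numpy as np

    def rls_update(theta, P, phi, y, lam=0.999):
        # One recursive least squares step with forgetting factor lam.
        # theta: parameter estimate, P: covariance, phi: regressor, y: output.
        phi = phi.reshape(-1, 1)
        k = P @ phi / (lam + phi.T @ P @ phi)          # gain vector
        theta = theta + (k * (y - phi.T @ theta)).ravel()
        P = (P - k @ phi.T @ P) / lam
        return theta, P

    # Usage: identify a two-parameter output model y = a*u + b from noisy data.
    rng = np.random.default_rng(0)
    theta, P = np.zeros(2), 1e3 * np.eye(2)
    for _ in range(500):
        u = rng.uniform(-1.0, 1.0)
        y = 0.05 * u + 3.7 + 0.001 * rng.normal()      # "true" resistance, OCV
        theta, P = rls_update(theta, P, np.array([u, 1.0]), y)
    print(theta)                                        # approaches [0.05, 3.7]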
Topics in elementary particle physics
NASA Astrophysics Data System (ADS)
Jin, Xiang
The author of this thesis discusses two topics in elementary particle physics:
Approximation theory for LQG (Linear-Quadratic-Gaussian) optimal control of flexible structures
NASA Technical Reports Server (NTRS)
Gibson, J. S.; Adamian, A.
1988-01-01
An approximation theory is presented for the LQG (Linear-Quadratic-Gaussian) optimal control problem for flexible structures whose distributed models have bounded input and output operators. The main purpose of the theory is to guide the design of finite dimensional compensators that approximate closely the optimal compensator. The optimal LQG problem separates into an optimal linear-quadratic regulator problem and an optimal state estimation problem. The solution of the former problem lies in the solution to an infinite dimensional Riccati operator equation. The approximation scheme approximates the infinite dimensional LQG problem with a sequence of finite dimensional LQG problems defined for a sequence of finite dimensional, usually finite element or modal, approximations of the distributed model of the structure. Two Riccati matrix equations determine the solution to each approximating problem. The finite dimensional equations for numerical approximation are developed, including formulas for converting matrix control and estimator gains to their functional representation to allow comparison of gains based on different orders of approximation. Convergence of the approximating control and estimator gains and of the corresponding finite dimensional compensators is studied. Also, convergence and stability of the closed-loop systems produced with the finite dimensional compensators are discussed. The convergence theory is based on the convergence of the solutions of the finite dimensional Riccati equations to the solutions of the infinite dimensional Riccati equations. A numerical example with a flexible beam, a rotating rigid body, and a lumped mass is given.
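For reference, each finite-dimensional approximating LQG problem of the type described is solved by the standard pair of algebraic Riccati equations (written here for the n-th approximation (A_n, B_n, C_n); the specific weighting operators follow the paper):

    A_n^{\top} P_n + P_n A_n - P_n B_n R^{-1} B_n^{\top} P_n + Q_n = 0, \qquad K_n = R^{-1} B_n^{\top} P_n
    A_n \Sigma_n + \Sigma_n A_n^{\top} - \Sigma_n C_n^{\top} V^{-1} C_n \Sigma_n + W_n = 0, \qquad L_n = \Sigma_n C_n^{\top} V^{-1}

with K_n the regulator gain and L_n the estimator gain whose convergence, after conversion to functional form, is the object of the analysis.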
Modeling Sound Propagation Through Non-Axisymmetric Jets
NASA Technical Reports Server (NTRS)
Leib, Stewart J.
2014-01-01
A method for computing the far-field adjoint Green's function of the generalized acoustic analogy equations under a locally parallel mean flow approximation is presented. The method is based on expanding the mean-flow-dependent coefficients in the governing equation and the scalar Green's function in truncated Fourier series in the azimuthal direction and a finite difference approximation in the radial direction in circular cylindrical coordinates. The combined spectral/finite difference method yields a highly banded system of algebraic equations that can be efficiently solved using a standard sparse system solver. The method is applied to test cases, with mean flow specified by analytical functions, corresponding to two noise reduction concepts of current interest: the offset jet and the fluid shield. Sample results for the Green's function are given for these two test cases and recommendations made as to the use of the method as part of a RANS-based jet noise prediction code.
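A minimal sketch of the spectral/finite-difference idea, assuming a toy radial boundary-value problem for a single azimuthal mode m: finite differences in r give a tridiagonal system solved with a banded solver. The Bessel-type operator and Gaussian source are placeholders; the actual coefficients of the acoustic analogy depend on the specified mean flow.

```python
import numpy as np
from scipy.linalg import solve_banded

def solve_mode(m, k=5.0, R=1.0, N=200):
    """Solve u'' + u'/r + (k^2 - m^2/r^2) u = f on (0, R] with a tridiagonal FD scheme."""
    r = np.linspace(R / N, R, N)
    h = r[1] - r[0]
    lower = 1.0 / h**2 - 1.0 / (2 * h * r[1:])    # sub-diagonal coefficients
    diag = -2.0 / h**2 + k**2 - m**2 / r**2       # main diagonal
    upper = 1.0 / h**2 + 1.0 / (2 * h * r[:-1])   # super-diagonal coefficients
    ab = np.zeros((3, N))                         # banded storage for solve_banded
    ab[0, 1:], ab[1, :], ab[2, :-1] = upper, diag, lower
    f = np.exp(-50.0 * (r - 0.5 * R) ** 2)        # toy source term
    return r, solve_banded((1, 1), ab, f)

r, u0 = solve_mode(m=0)
```

The real problem couples the azimuthal Fourier modes through the non-axisymmetric mean flow, so the banded system is wider than tridiagonal, but the solve pattern is the same.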
Subsystem functional and the missing ingredient of confinement physics in density functionals.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Armiento, Rickard Roberto; Mattsson, Ann Elisabet; Hao, Feng
2010-08-01
The subsystem functional scheme is a promising approach recently proposed for constructing exchange-correlation density functionals. In this scheme, the physics in each part of real materials is described by mapping to a characteristic model system. The 'confinement physics,' an essential physical ingredient that has been left out of present functionals, is studied by employing the harmonic-oscillator (HO) gas model. By performing the potential → density and the density → exchange energy per particle mappings based on two model systems characterizing the physics in the interior (uniform electron-gas model) and surface regions (Airy gas model) of materials for the HO gases, we show that the confinement physics emerges when only the lowest subband of the HO gas is occupied by electrons. We examine the approximations of the exchange energy by several state-of-the-art functionals for the HO gas, and none of them produces adequate accuracy in the confinement-dominated cases. A generic functional that incorporates the description of the confinement physics is needed.
Approximate Bayesian Computation in the estimation of the parameters of the Forbush decrease model
NASA Astrophysics Data System (ADS)
Wawrzynczak, A.; Kopka, P.
2017-12-01
Realistic modeling of a complicated phenomenon such as the Forbush decrease of the galactic cosmic ray intensity is quite a challenging task. One aspect is the numerical solution of the Fokker-Planck equation in five-dimensional space (three spatial variables, time and particle energy). The second difficulty arises from a lack of detailed knowledge about the spatial and time profiles of the parameters responsible for the creation of the Forbush decrease. Among these parameters, the diffusion coefficient plays the central role. Assessment of the correctness of the proposed model can be done only by comparison of the model output with the experimental observations of the galactic cosmic ray intensity. We apply the Approximate Bayesian Computation (ABC) methodology to match the Forbush decrease model to experimental data. The ABC method is becoming increasingly exploited for dynamic complex problems in which the likelihood function is costly to compute. The main idea of all ABC methods is to accept a sample as an approximate posterior draw if its associated modeled data are close enough to the observed data. In this paper, we present an application of the Sequential Monte Carlo Approximate Bayesian Computation algorithm scanning the space of the diffusion coefficient parameters. The proposed algorithm is applied to create the model of the Forbush decrease observed by the neutron monitors at the Earth in March 2002. The model of the Forbush decrease is based on the stochastic approach to the solution of the Fokker-Planck equation.
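The accept/reject idea at the heart of ABC, shown in its simplest rejection form on a toy one-parameter "model"; the paper's Sequential Monte Carlo variant refines this by evolving a population of samples through decreasing tolerances. The simulator, prior range, summary distance, and tolerance below are placeholders, not the Fokker-Planck forward model.

```python
import numpy as np

rng = np.random.default_rng(0)
observed = rng.normal(1.5, 0.2, size=50)            # stand-in for GCR intensity data

def simulate(diffusion_coeff, n=50):
    # placeholder forward model; the real one solves the Fokker-Planck equation
    return rng.normal(diffusion_coeff, 0.2, size=n)

def distance(sim, obs):
    return abs(sim.mean() - obs.mean())             # placeholder summary distance

posterior = []
while len(posterior) < 500:
    theta = rng.uniform(0.0, 5.0)                   # draw from the prior
    if distance(simulate(theta), observed) < 0.05:  # accept if "close enough"
        posterior.append(theta)
```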
NASA Astrophysics Data System (ADS)
Aymard, François; Gulminelli, Francesca; Margueron, Jérôme
2016-08-01
We have recently addressed the problem of the determination of the nuclear surface energy for symmetric nuclei in the framework of the extended Thomas-Fermi (ETF) approximation using Skyrme functionals. We presently extend this formalism to the case of asymmetric nuclei and the question of the surface symmetry energy. We propose an approximate expression for the diffuseness and the surface energy. These quantities are analytically related to the parameters of the energy functional. In particular, the influence of the different equation of state parameters can be explicitly quantified. Detailed analyses of the different energy components (local/non-local, isoscalar/isovector, surface/curvature and higher order) are also performed. Our analytical solution of the ETF integral improves previous models and leads to a precision of better than 200 keV per nucleon in the determination of the nuclear binding energy for dripline nuclei.
Approximating quantum many-body wave functions using artificial neural networks
NASA Astrophysics Data System (ADS)
Cai, Zi; Liu, Jinguo
2018-01-01
In this paper, we demonstrate the expressibility of artificial neural networks (ANNs) in quantum many-body physics by showing that a feed-forward neural network with a small number of hidden layers can be trained to approximate with high precision the ground states of some notable quantum many-body systems. We consider the one-dimensional free bosons and fermions, spinless fermions on a square lattice away from half-filling, as well as frustrated quantum magnetism with a rapidly oscillating ground-state characteristic function. In the latter case, an ANN with a standard architecture fails, while one with a slightly modified architecture successfully learns the frustration-induced complex sign rule in the ground state and approximates the ground states with high precision. As an example of the practical use of our method, we also perform the variational method to explore the ground state of an antiferromagnetic J1-J2 Heisenberg model.
Yang, Weitao; Mori-Sánchez, Paula; Cohen, Aron J
2013-09-14
The exact conditions for density functionals and density matrix functionals in terms of fractional charges and fractional spins are known, and their violation in commonly used functionals has been shown to be the root of many major failures in practical applications. However, approximate functionals are designed for physical systems with integer charges and spins, not in terms of the fractional variables. Here we develop a general framework for extending approximate density functionals and many-electron theory to fractional-charge and fractional-spin systems. Our development allows for the fractional extension of any approximate theory that is a functional of G(0), the one-electron Green's function of the non-interacting reference system. The extension to fractional charge and fractional spin systems is based on the ensemble average of the basic variable, G(0). We demonstrate the fractional extension for the following theories: (1) any explicit functional of the one-electron density, such as the local density approximation and generalized gradient approximations; (2) any explicit functional of the one-electron density matrix of the non-interacting reference system, such as the exact exchange functional (or Hartree-Fock theory) and hybrid functionals; (3) many-body perturbation theory; and (4) random-phase approximations. A general rule for such an extension has also been derived through scaling the orbitals and should be useful for functionals where the link to the Green's function is not obvious. The development thus enables the examination of approximate theories against known exact conditions on the fractional variables and the analysis of their failures in chemical and physical applications in terms of violations of exact conditions of the energy functionals. The present work should facilitate the calculation of chemical potentials and fundamental bandgaps with approximate functionals and many-electron theories through the energy derivatives with respect to the fractional charge. It should play an important role in developing accurate approximate density functionals and many-body theory.
NASA Astrophysics Data System (ADS)
Constantin, Lucian A.; Fabiano, Eduardo; Della Sala, Fabio
2018-05-01
Orbital-free density functional theory (OF-DFT) promises to describe the electronic structure of very large quantum systems, since its computational cost scales linearly with the system size. However, the OF-DFT accuracy strongly depends on the approximation made for the kinetic energy (KE) functional. To date, the most accurate KE functionals are nonlocal functionals based on the linear-response kernel of the homogeneous electron gas, i.e., the jellium model. Here, we use the linear-response kernel of the jellium-with-gap model to construct a simple nonlocal KE functional (named KGAP) which depends on the band-gap energy. In the limit of vanishing energy gap (i.e., in the case of metals), the KGAP is equivalent to the Smargiassi-Madden (SM) functional, which is accurate for metals. For a series of semiconductors (with different energy gaps), the KGAP performs much better than SM, and results are close to the state-of-the-art functionals with sophisticated density-dependent kernels.
Simulation-based decision support framework for dynamic ambulance redeployment in Singapore.
Lam, Sean Shao Wei; Ng, Clarence Boon Liang; Nguyen, Francis Ngoc Hoang Long; Ng, Yih Yng; Ong, Marcus Eng Hock
2017-10-01
Dynamic ambulance redeployment policies introduce much more flexibility in improving ambulance resource allocation by capitalizing on the definite geospatial-temporal variations in ambulance demand patterns over time-of-day and day-of-week effects. A novel modelling framework based on the Approximate Dynamic Programming (ADP) approach, leveraging a Discrete Events Simulation (DES) model for dynamic ambulance redeployment in Singapore, is proposed in this paper. The study was based on Singapore's national Emergency Medical Services (EMS) system. Based on a dataset comprising 216,973 valid incidents over a continuous two-year study period from 1 January 2011 to 31 December 2012, a DES model for the EMS system was developed. An ADP model based on linear value function approximations was then evaluated using the DES model via the temporal difference (TD) learning family of algorithms. The objective of the ADP model is to derive approximate optimal dynamic redeployment policies based on the primary outcome of ambulance coverage. Considering an 8 min response time threshold, an estimated 5% reduction in the proportion of calls that cannot be reached within the threshold (equivalent to approximately 8000 dispatches) was observed in the computational experiments. The study also revealed that redeployment policies which are restricted within the same operational division could potentially result in a more promising response time performance. Furthermore, the best policy involved the combination of redeploying ambulances whenever they are released from service and relocating ambulances that are idle in bases. This study demonstrated the successful application of an approximate modelling framework based on ADP that leverages a detailed DES model of Singapore's EMS system to generate approximate optimal dynamic redeployment plans. Various policies and scenarios relevant to the Singapore EMS system were evaluated. Copyright © 2017 Elsevier B.V. All rights reserved.
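A minimal sketch of the core update behind an ADP scheme of this kind: temporal-difference learning with a linear value-function approximation. The states, features, rewards, and transition logic are placeholders, not the Singapore EMS discrete events simulator.

```python
import numpy as np

rng = np.random.default_rng(1)
n_states, n_features = 10, 8
feature_table = rng.random((n_states, n_features))  # placeholder feature map phi(s)
w = np.zeros(n_features)                             # linear value-function weights
alpha, gamma = 0.01, 0.95

def step(state, action):
    # placeholder transition: returns (next_state, reward); the real model is the DES
    return int(rng.integers(n_states)), -float(rng.random())

state = 0
for _ in range(10_000):
    action = int(rng.integers(3))                    # placeholder redeployment decision
    next_state, reward = step(state, action)
    phi, phi_next = feature_table[state], feature_table[next_state]
    td_error = reward + gamma * w @ phi_next - w @ phi
    w += alpha * td_error * phi                      # TD(0) update of the weights
    state = next_state
```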
Modeling of microporous silicon betaelectric converter with 63Ni plating in GEANT4 toolkit
NASA Astrophysics Data System (ADS)
Zelenkov, P. V.; Sidorov, V. G.; Lelekov, E. T.; Khoroshko, A. Y.; Bogdanov, S. V.; Lelekov, A. T.
2016-04-01
A model of the electron-hole pair generation rate distribution in the semiconductor is needed to optimize the parameters of a microporous silicon betaelectric converter, which uses 63Ni isotope radiation. Using Monte Carlo methods of the GEANT4 software with ultra-low-energy electron physics models, this distribution in silicon was calculated and approximated with an exponential function. The optimal pore configuration was estimated.
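A quick sketch of the final fitting step, assuming synthetic depth-resolved generation-rate data in place of the GEANT4 Monte Carlo output; the decay length and amplitude below are invented for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(5)
depth = np.linspace(0.0, 10.0, 50)                  # depth into silicon (arbitrary units)
g_sim = 1e12 * np.exp(-depth / 2.5) * (1 + 0.05 * rng.normal(size=depth.size))

def g_model(z, g0, z0):
    return g0 * np.exp(-z / z0)                     # exponential generation-rate profile

(g0_fit, z0_fit), _ = curve_fit(g_model, depth, g_sim, p0=[1e12, 1.0])
```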
Jurcisinová, E; Jurcisin, M; Remecký, R
2009-10-01
The influence of weak uniaxial small-scale anisotropy on the stability of the scaling regime and on the anomalous scaling of the single-time structure functions of a passive scalar advected by a velocity field governed by the stochastic Navier-Stokes equation is investigated by the field theoretic renormalization group and operator-product expansion within the one-loop approximation of perturbation theory. The explicit analytical expressions for the coordinates of the corresponding fixed point of the renormalization-group equations as functions of the anisotropy parameters are found, the stability of the three-dimensional Kolmogorov-like scaling regime is demonstrated, and the borderline dimension d_c ∈ (2,3] between stable and unstable scaling regimes is found as a function of the anisotropy parameters. The dependence of the turbulent Prandtl number on the anisotropy parameters is also briefly discussed. The influence of weak small-scale anisotropy on the anomalous scaling of the structure functions of a passive scalar field is studied by the operator-product expansion, and their explicit dependence on the anisotropy parameters is presented. It is shown that the anomalous dimensions of the structure functions, which are the same (universal) for the Kraichnan model, for the model with finite time correlations of the velocity field, and for the model with advection by a velocity field driven by the stochastic Navier-Stokes equation in the isotropic case, can be distinguished by the assumption of the presence of small-scale anisotropy in the systems even within the one-loop approximation. The corresponding comparison of the anisotropic anomalous dimensions for the present model with those obtained within the Kraichnan rapid-change model is made.
Variational methods in supersymmetric lattice field theory: The vacuum sector
DOE Office of Scientific and Technical Information (OSTI.GOV)
Duncan, A.; Meyer-Ortmanns, H.; Roskies, R.
1987-12-15
The application of variational methods to the computation of the spectrum in supersymmetric lattice theories is considered, with special attention to O(N) supersymmetric sigma models. Substantial cancellations are found between bosonic and fermionic contributions even in approximate Ansätze for the vacuum wave function. The nonlinear limit of the linear sigma model is studied in detail, and it is shown how to construct an appropriate non-Gaussian vacuum wave function for the nonlinear model. The vacuum energy is shown to be of order unity in lattice units in the latter case, after infinite cancellations.
Mo, Shaoxing; Lu, Dan; Shi, Xiaoqing; ...
2017-12-27
Global sensitivity analysis (GSA) and uncertainty quantification (UQ) for groundwater modeling are challenging because of the model complexity and significant computational requirements. To reduce the massive computational cost, a cheap-to-evaluate surrogate model is usually constructed to approximate and replace the expensive groundwater models in the GSA and UQ. Constructing an accurate surrogate requires actual model simulations on a number of parameter samples. Thus, a robust experimental design strategy is desired to locate informative samples so as to reduce the computational cost in surrogate construction and consequently to improve the efficiency in the GSA and UQ. In this study, we develop a Taylor expansion-based adaptive design (TEAD) that aims to build an accurate global surrogate model with a small training sample size. TEAD defines a novel hybrid score function to search informative samples, and a robust stopping criterion to terminate the sample search that guarantees the resulting approximation errors satisfy the desired accuracy. The good performance of TEAD in building global surrogate models is demonstrated on seven analytical functions with different dimensionality and complexity in comparison to two widely used experimental design methods. The application of the TEAD-based surrogate method in two groundwater models shows that the TEAD design can effectively improve the computational efficiency of GSA and UQ for groundwater modeling.
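A generic adaptive-design loop in the spirit of the description above, with a hybrid score mixing an exploration term (distance to existing samples) and an exploitation term (disagreement between the surrogate and a local Taylor-style estimate). The weights, surrogate, stopping rule, and test function are illustrative assumptions, not the exact TEAD formulation.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

def expensive_model(x):                        # stands in for a groundwater model run
    return np.sin(3 * x[:, 0]) * np.cos(2 * x[:, 1])

rng = np.random.default_rng(2)
X = rng.random((10, 2))                        # initial design
y = expensive_model(X)

for _ in range(30):                            # fixed budget instead of TEAD's stopping rule
    surrogate = RBFInterpolator(X, y)
    cand = rng.random((500, 2))                # candidate pool
    dists = np.linalg.norm(cand[:, None] - X[None], axis=-1)
    explore = dists.min(axis=1)                # distance to nearest existing sample
    nearest = X[dists.argmin(axis=1)]
    # zeroth-order "Taylor" estimate: surrogate value carried over from the nearest sample
    exploit = np.abs(surrogate(cand) - surrogate(nearest))
    score = explore / explore.max() + exploit / (exploit.max() + 1e-12)
    x_new = cand[np.argmax(score)]             # most informative candidate
    X = np.vstack([X, x_new])
    y = np.append(y, expensive_model(x_new[None]))
```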
Tachyon warm-intermediate inflationary universe model in high dissipative regime
DOE Office of Scientific and Technical Information (OSTI.GOV)
Setare, M.R.; Kamali, V., E-mail: rezakord@ipm.ir, E-mail: vkamali1362@gmail.com
2012-08-01
We consider tachyonic warm-inflationary models in the context of intermediate inflation. We derive the characteristics of this model in the slow-roll approximation and develop our model in two cases: (1) a constant dissipative parameter Γ, and (2) Γ as a function of the tachyon field φ. We also describe scalar and tensor perturbations for this scenario. The parameters appearing in our model are constrained by recent observational data. We find that the level of non-Gaussianity for this model is comparable with that of the non-tachyonic model.
Average focal length and power of a section of any defined surface.
Kaye, Stephen B
2010-04-01
To provide a method to allow calculation of the average focal length and power of a lens through a specified meridian of any defined surface, not limited to the paraxial approximations. University of Liverpool, Liverpool, United Kingdom. Functions were derived to model back-vertex focal length and representative power through a meridian containing any defined surface. Average back-vertex focal length was based on the definition of the average of a function, using the angle of incidence as an independent variable. Univariate functions allowed determination of average focal length and power through a section of any defined or topographically measured surface of a known refractive index. These functions incorporated aberrations confined to the section. The proposed method closely approximates the average focal length, and by inference power, of a section (meridian) of a surface to a single or scalar value. It is not dependent on the paraxial and other nonconstant approximations and includes aberrations confined to that meridian. A generalization of this method to include all orthogonal and oblique meridians is needed before a comparison with measured wavefront values can be made. Copyright (c) 2010 ASCRS and ESCRS. Published by Elsevier Inc. All rights reserved.
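A small numerical sketch of the "average of a function" idea, assuming a synthetic focal-length profile f(θ) along one meridian with the angle of incidence θ as the independent variable; real profiles would come from the defined or topographically measured surface.

```python
import numpy as np
from scipy.integrate import simpson

theta = np.linspace(0.0, np.radians(25.0), 200)     # angle of incidence (rad)
f_theta = 22.0 - 0.8 * np.sin(theta) ** 2           # synthetic focal-length profile (mm)

# mean of f over the meridian: (1 / (b - a)) * integral of f(theta) d(theta)
f_avg = simpson(f_theta, x=theta) / (theta[-1] - theta[0])
power_avg = 1000.0 / f_avg                          # average power in dioptres (f in mm, in air)
```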
Silvestrelli, Pier Luigi; Ambrosetti, Alberto
2014-03-28
The Density Functional Theory (DFT)/van der Waals-Quantum Harmonic Oscillator-Wannier function (vdW-QHO-WF) method, recently developed to include the vdW interactions in approximated DFT by combining the quantum harmonic oscillator model with the maximally localized Wannier function technique, is applied to the cases of atoms and small molecules (X=Ar, CO, H2, H2O) weakly interacting with benzene and with the ideal planar graphene surface. Comparison is also presented with the results obtained by other DFT vdW-corrected schemes, including PBE+D, vdW-DF, vdW-DF2, rVV10, and by the simpler Local Density Approximation (LDA) and semilocal generalized gradient approximation approaches. While for the X-benzene systems all the considered vdW-corrected schemes perform reasonably well, it turns out that an accurate description of the X-graphene interaction requires a proper treatment of many-body contributions and of short-range screening effects, as demonstrated by adopting an improved version of the DFT/vdW-QHO-WF method. We also comment on the widespread attitude of relying on LDA to get a rough description of weakly interacting systems.
NASA Astrophysics Data System (ADS)
Yao, Yi; Kanai, Yosuke
Our ability to correctly model the association of oppositely charged ions in water is fundamental in physical chemistry and essential to various technological and biological applications of molecular dynamics (MD) simulations. MD simulations using classical force fields often show strong clustering of NaCl in aqueous ionic solutions as a consequence of a deep contact-pair minimum in the potential of mean force (PMF) curve. First-principles molecular dynamics (FPMD) based on density functional theory (DFT) with the popular PBE exchange-correlation approximation, on the other hand, shows a different result, with a shallow contact-pair minimum in the PMF. We employed two of the most promising exchange-correlation approximations, ωB97X-V by Mardirossian and Head-Gordon and SCAN by Sun, Ruzsinszky and Perdew, to examine the PMF using FPMD simulations. ωB97X-V is a highly empirically optimized range-separated hybrid functional with a dispersion correction, while SCAN is the most recent meta-GGA functional, constructed by satisfying various known conditions in well-defined physical limits. We will discuss our findings for the PMF, charge transfer, water dipoles, etc.
Optimal estimation of large structure model errors. [in Space Shuttle controller design
NASA Technical Reports Server (NTRS)
Rodriguez, G.
1979-01-01
In-flight estimation of large structure model errors is usually required as a means of detecting inevitable deficiencies in large structure controller/estimator models. The present paper deals with a least-squares formulation which seeks to minimize a quadratic functional of the model errors. The properties of these error estimates are analyzed. It is shown that an arbitrary model error can be decomposed as the sum of two components that are orthogonal in a suitably defined function space. Relations between true and estimated errors are defined. The estimates are found to be approximations that retain many of the significant dynamics of the true model errors. Current efforts are directed toward application of the analytical results to a reference large structure model.
Approximate solution for the electronic density profile at the surface of jellium
NASA Astrophysics Data System (ADS)
Schmickler, Wolfgang; Henderson, Douglas
1984-09-01
A simple family of trial functions for the electronic density at the surface of jellium, which accounts for Friedel oscillations and incorporates the Budd-Vannimenus theorem, is proposed. The free parameters are determined by energy minimization. Model calculations give good results for the work function and for the induced surface charge in the presence of an external field.
Adzhemyan, L Ts; Antonov, N V; Honkonen, J; Kim, T L
2005-01-01
The field theoretic renormalization group and operator-product expansion are applied to the model of a passive scalar quantity advected by a non-Gaussian velocity field with finite correlation time. The velocity is governed by the Navier-Stokes equation, subject to an external random stirring force with the correlation function proportional to δ(t − t′) k^{4−d−2ε}. It is shown that the scalar field is intermittent already for small ε, its structure functions display anomalous scaling behavior, and the corresponding exponents can be systematically calculated as series in ε. The practical calculation is accomplished to order ε² (two-loop approximation), including anisotropic sectors. As for the well-known Kraichnan rapid-change model, the anomalous scaling results from the existence in the model of composite fields (operators) with negative scaling dimensions, identified with the anomalous exponents. Thus the mechanism of the origin of anomalous scaling appears similar for the Gaussian model with zero correlation time and the non-Gaussian model with finite correlation time. It should be emphasized that, in contrast to Gaussian velocity ensembles with finite correlation time, the model and the perturbation theory discussed here are manifestly Galilean covariant. The relevance of these results for real passive advection and comparison with the Gaussian models and experiments are briefly discussed.
Few-body quark dynamics for doubly heavy baryons and tetraquarks
NASA Astrophysics Data System (ADS)
Richard, Jean-Marc; Valcarce, Alfredo; Vijande, Javier
2018-03-01
We discuss the adequate treatment of the three- and four-body dynamics for the quark model picture of double-charm baryons and tetraquarks. We stress that the variational and Born-Oppenheimer approximations give energies very close to the exact ones, while the diquark approximation might be somewhat misleading. The Hall-Post inequalities also provide very useful lower bounds that exclude the possibility of stable tetraquarks for some mass ratios and some color wave functions.
Field theoretic approach to roughness corrections
NASA Astrophysics Data System (ADS)
Wu, Hua Yao; Schaden, Martin
2012-02-01
We develop a systematic field theoretic description of roughness corrections to the Casimir free energy of a massless scalar field in the presence of parallel plates with mean separation a. Roughness is modeled by specifying a generating functional for correlation functions of the height profile. The two-point correlation function is characterized by its variance, σ², and correlation length, ℓ. We obtain the partition function of a massless scalar quantum field interacting with the height profile of the surface via a δ-function potential. The partition function is given by a holographic reduction of this model to three coupled scalar fields on a two-dimensional plane. The original three-dimensional space with a flat parallel plate at a distance a from the rough plate is encoded in the nonlocal propagators of the surface fields on its boundary. Feynman rules for this equivalent 2+1-dimensional model are derived and its counterterms constructed. The two-loop contribution to the free energy of this model gives the leading roughness correction. The effective separation, a_eff, to a rough plate is measured to a plane that is displaced a distance ρ ∝ σ²/ℓ from the mean of its profile. This definition of the separation eliminates corrections to the free energy of order 1/a⁴ and results in unitary scattering matrices. We obtain an effective low-energy model in the limit ℓ ≪ a. It determines the scattering matrix and equivalent planar scattering surface of a very rough plate in terms of the single length scale ρ. The Casimir force on a rough plate is found to always weaken with decreasing correlation length ℓ. The two-loop approximation to the free energy interpolates between the free energy of the effective low-energy model and that of the proximity force approximation, the force on a very rough plate with σ ≳ 0.5ℓ being weaker than on a planar Dirichlet surface at any separation.
High-functionality star-branched macromolecules: polymer size and virial coefficients.
Randisi, Ferdinando; Pelissetto, Andrea
2013-10-21
We perform high-statistics Monte Carlo simulations of a lattice model to compute the radius of gyration Rg, the center-to-end distance, the monomer distribution, and the second and third virial coefficients of star polymers for a wide range of functionalities f, 6 ≤ f ≤ 120. We consider systems with a large number L of monomers per arm (100 ≲ L ≲ 1000 for f ≤ 40 and 100 ≲ L ≲ 400 for f = 80, 120), which allows us to determine accurately all quantities in the scaling regime. Results are extrapolated to determine the behavior of the different quantities in the limit f → ∞. Structural results are finally compared with the predictions of the Daoud-Cotton model. It turns out that the blob picture of a star polymer is essentially correct up to the corona radius Rc, which depends on f and which varies from 0.7Rg for f = 6 to 1.0Rg for f = 40. The outer region (r > Rc), in which the monomer distribution decays exponentially, shrinks as f increases, but it does not disappear in the scaling regime even in the limit f → ∞. We also consider the Daoud-Cotton scaling relation Rg² ~ f^(1-ν) L^(2ν), which is found to hold only for f > 100.
The derivation and approximation of coarse-grained dynamics from Langevin dynamics
NASA Astrophysics Data System (ADS)
Ma, Lina; Li, Xiantao; Liu, Chun
2016-11-01
We present a derivation of a coarse-grained description, in the form of a generalized Langevin equation, from the Langevin dynamics model that describes the dynamics of bio-molecules. The focus is placed on the form of the memory kernel function, the colored noise, and the second fluctuation-dissipation theorem that connects them. Also presented is a hierarchy of approximations for the memory and random noise terms, using rational approximations in the Laplace domain. These approximations offer increasing accuracy. More importantly, they eliminate the need to evaluate the integral associated with the memory term at each time step. Direct sampling of the colored noise can also be avoided within this framework. Therefore, the numerical implementation of the generalized Langevin equation is much more efficient.
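One common way to realize a rational (Laplace-domain) approximation of the memory kernel in practice is to fit it with a short sum of decaying exponentials, which makes the memory term expressible through auxiliary variables instead of a history integral. The kernel and the two-term fit below are toy assumptions, not the paper's hierarchy of approximations.

```python
import numpy as np
from scipy.optimize import curve_fit

t = np.linspace(0.0, 10.0, 400)
K = np.exp(-0.5 * t) + 0.3 * np.exp(-3.0 * t)       # toy memory kernel K(t)

def expsum(t, c1, l1, c2, l2):
    # each exponential term corresponds to a simple pole in the Laplace domain
    return c1 * np.exp(-l1 * t) + c2 * np.exp(-l2 * t)

params, _ = curve_fit(expsum, t, K, p0=[1.0, 1.0, 0.5, 2.0])
```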
First-Order Frameworks for Managing Models in Engineering Optimization
NASA Technical Reports Server (NTRS)
Alexandrov, Natalia M.; Lewis, Robert Michael
2000-01-01
Approximation/model management optimization (AMMO) is a rigorous methodology for attaining solutions of high-fidelity optimization problems with minimal expense in high-fidelity function and derivative evaluation. First-order AMMO frameworks allow for a wide variety of models and underlying optimization algorithms. Recent demonstrations with aerodynamic optimization achieved three-fold savings in terms of high-fidelity function and derivative evaluation in the case of variable-resolution models and five-fold savings in the case of variable-fidelity physics models. The savings are problem dependent but certain trends are beginning to emerge. We give an overview of the first-order frameworks, current computational results, and an idea of the scope of the first-order framework applicability.
Coulomb matrix elements in multi-orbital Hubbard models.
Bünemann, Jörg; Gebhard, Florian
2017-04-26
Coulomb matrix elements are needed in all studies in solid-state theory that are based on Hubbard-type multi-orbital models. Due to symmetries, the matrix elements are not independent. We determine a set of independent Coulomb parameters for a d-shell and an f-shell and all point groups with up to 16 elements (O_h, O, T_d, T_h, D_6h, and D_4h). Furthermore, we express all other matrix elements as a function of the independent Coulomb parameters. Apart from the solution of the general point-group problem we investigate in detail the spherical approximation and first-order corrections to the spherical approximation.
Magnetization of the Ising model on the Sierpinski pastry-shell
NASA Astrophysics Data System (ADS)
Chame, Anna; Branco, N. S.
1992-02-01
Using a real-space renormalization group approach, we calculate the approximate magnetization in the Ising model on the Sierpinski pastry-shell. We consider, as an approximation, only two regions of the fractal: the internal surfaces, or walls (sites on the border of eliminated areas), with coupling constants JS, and the bulk (all other sites), with coupling constants JV. We obtain the mean magnetization of the two regions as a function of temperature, for different values of α = JS/JV and different geometric parameters b and l. Curves present a step-like behavior for some values of b and l, as well as different universality classes for the bulk transition.
NASA Astrophysics Data System (ADS)
Serene, J. W.; Deisz, J. J.; Hess, D. W.
1997-03-01
Calculations performed in the fluctuation exchange approximation for the single-band 2D Hubbard model on a cylinder threaded by a flux show the appearance of a finite superfluid density below T ~ 0.13t, for U = -4t and at three-eighths filling.(J.J. Deisz, D.W. Hess, Bull. Am. Phys. Soc. 41, 239 (1996); J.J. Deisz, D.W. Hess, and J.W. Serene, in preparation.) We show the evolution, with decreasing temperature, of the single-particle spectral function, the self-energy, the particle-particle T-matrix, and thermodynamic properties as the superfluid state is approached and entered.
On Using Surrogates with Genetic Programming.
Hildebrandt, Torsten; Branke, Jürgen
2015-01-01
One way to accelerate evolutionary algorithms with expensive fitness evaluations is to combine them with surrogate models. Surrogate models are efficiently computable approximations of the fitness function, derived by means of statistical or machine learning techniques from samples of fully evaluated solutions. But these models usually require a numerical representation, and therefore cannot be used with the tree representation of genetic programming (GP). In this paper, we present a new way to use surrogate models with GP. Rather than using the genotype directly as input to the surrogate model, we propose using a phenotypic characterization. This phenotypic characterization can be computed efficiently and allows us to define approximate measures of equivalence and similarity. Using a stochastic, dynamic job shop scenario as an example of simulation-based GP with an expensive fitness evaluation, we show how these ideas can be used to construct surrogate models and improve the convergence speed and solution quality of GP.
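A bare-bones sketch of the phenotypic-characterization idea: each candidate rule is summarized by its decisions on a fixed set of reference situations, and a nearest-neighbour lookup over these phenotype vectors serves as the surrogate. The dispatching rules, reference situations, and fitness below are placeholders for the job-shop simulation.

```python
import numpy as np

rng = np.random.default_rng(3)
reference_jobs = rng.random((20, 3))                 # fixed reference decision situations

def phenotype(rule):
    # decision vector of the rule over the reference situations
    return np.array([rule(job) for job in reference_jobs])

def true_fitness(rule):                              # expensive simulation stand-in
    return float(phenotype(rule).sum())

archive = []                                         # (phenotype, fitness) pairs
def surrogate_fitness(rule):
    ph = phenotype(rule)
    if not archive:
        return 0.0
    d = [np.linalg.norm(ph - p) for p, _ in archive]
    return archive[int(np.argmin(d))][1]             # 1-nearest-neighbour estimate

for _ in range(5):                                   # seed the archive with evaluated rules
    w = rng.normal(size=3)
    rule = lambda job, w=w: float(w @ job)           # toy linear dispatching rule
    archive.append((phenotype(rule), true_fitness(rule)))
```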
The Hindmarsh-Rose neuron model: bifurcation analysis and piecewise-linear approximations.
Storace, Marco; Linaro, Daniele; de Lange, Enno
2008-09-01
This paper provides a global picture of the bifurcation scenario of the Hindmarsh-Rose model. A combination between simulations and numerical continuations is used to unfold the complex bifurcation structure. The bifurcation analysis is carried out by varying two bifurcation parameters and evidence is given that the structure that is found is universal and appears for all combinations of bifurcation parameters. The information about the organizing principles and bifurcation diagrams are then used to compare the dynamics of the model with that of a piecewise-linear approximation, customized for circuit implementation. A good match between the dynamical behaviors of the models is found. These results can be used both to design a circuit implementation of the Hindmarsh-Rose model mimicking the diversity of neural response and as guidelines to predict the behavior of the model as well as its circuit implementation as a function of parameters. (c) 2008 American Institute of Physics.
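For reference, the smooth Hindmarsh-Rose system is cheap to integrate numerically; a trajectory like the one below is the kind of behaviour a piecewise-linear approximation would be matched against. The parameter values are the commonly used ones, chosen here only for illustration.

```python
import numpy as np
from scipy.integrate import solve_ivp

def hindmarsh_rose(t, state, I=3.25, a=1.0, b=3.0, c=1.0, d=5.0, r=0.005, s=4.0, x_rest=-1.6):
    x, y, z = state
    dx = y - a * x**3 + b * x**2 - z + I    # membrane potential
    dy = c - d * x**2 - y                   # fast recovery variable
    dz = r * (s * (x - x_rest) - z)         # slow adaptation current
    return [dx, dy, dz]

sol = solve_ivp(hindmarsh_rose, (0.0, 2000.0), [-1.6, -10.0, 2.0], max_step=0.1)
```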
NASA Astrophysics Data System (ADS)
Jamróz, Weronika
2016-06-01
The paper shows how energy-based models approximate the mechanical properties of hyperelastic materials. The main goal of the research was to create a method of finding the set of material constants that are included in the strain energy function which constitutes the heart of an energy-based model. The optimal set of material constants determines the best adjustment of the theoretical stress-strain relation to the experimental one. This kind of adjustment enables better prediction of the behaviour of a chosen material. In order to obtain a more precise solution, the approximation was made using data obtained in a modern experiment described in detail in [1]. To save computation time, the main algorithm is based on genetic algorithms.
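A hedged sketch of this kind of fit, assuming a Mooney-Rivlin strain-energy form and synthetic uniaxial data purely for illustration; the paper's specific energy-based model and the experimental data from [1] are not reproduced here. SciPy's differential evolution stands in for the genetic algorithm.

```python
import numpy as np
from scipy.optimize import differential_evolution

stretch = np.linspace(1.05, 2.0, 20)                           # synthetic uniaxial data
stress_exp = 0.6 * (stretch - stretch**-2) + 0.05 * (1 - stretch**-3)

def mooney_rivlin_stress(lam, c10, c01):
    # uniaxial nominal stress of an incompressible Mooney-Rivlin solid
    return 2 * (lam - lam**-2) * (c10 + c01 / lam)

def objective(params):
    c10, c01 = params
    return np.sum((mooney_rivlin_stress(stretch, c10, c01) - stress_exp) ** 2)

result = differential_evolution(objective, bounds=[(0.0, 2.0), (0.0, 2.0)], seed=0)
c10_fit, c01_fit = result.x
```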
Investigation of passive atmospheric sounding using millimeter and submillimeter wavelength channels
NASA Technical Reports Server (NTRS)
Gasiewski, Albin J.
1993-01-01
Presented in this study are the results of controlled partially polarimetric measurements of thermal emission at 91.65 GHz from a striated water surface as corroborated by a geometrical optics (GO) radiative model. The measurements were obtained outdoors using a precision polarimetric radiometer which directly measured the first three modified Stokes parameters. Significant variations in these parameters as a function of azimuthal water wave angle were found, with peak-to-peak variations in T_u of up to approximately 10 K. The measurements are well corroborated by the GO model over a range of observation angles from near nadir up to approximately 65 degrees from nadir. The model incorporates both multiple scattering and a realistic downwelling background brightness field.
Spheroidal Integral Equations for Geodetic Inversion of Geopotential Gradients
NASA Astrophysics Data System (ADS)
Novák, Pavel; Šprlák, Michal
2018-03-01
The static Earth's gravitational field has traditionally been described in geodesy and geophysics by the gravitational potential (geopotential for short), a scalar function of 3-D position. Although not directly observable, geopotential functionals such as its first- and second-order gradients are routinely measured by ground, airborne and/or satellite sensors. In geodesy, these observables are often used for recovery of the static geopotential at some simple reference surface approximating the actual Earth's surface. A generalized mathematical model is represented by a surface integral equation which originates in solving Dirichlet's boundary-value problem of the potential theory defined for the harmonic geopotential, spheroidal boundary and globally distributed gradient data. The mathematical model can be used for combining various geopotential gradients without necessity of their re-sampling or prior continuation in space. The model extends the apparatus of integral equations which results from solving boundary-value problems of the potential theory to all geopotential gradients observed by current ground, airborne and satellite sensors. Differences between spherical and spheroidal formulations of integral kernel functions of Green's kind are investigated. Estimated differences reach relative values at the level of 3% which demonstrates the significance of spheroidal approximation for flattened bodies such as the Earth. The observation model can be used for combined inversion of currently available geopotential gradients while exploring their spectral and stochastic characteristics. The model would be even more relevant to gravitational field modelling of other bodies in space with more pronounced spheroidal geometry than that of the Earth.
Combining Approach in Stages with Least Squares for fits of data in hyperelasticity
NASA Astrophysics Data System (ADS)
Beda, Tibi
2006-10-01
The present work concerns a method of continuous approximation by blocks of a continuous function; a method of approximation combining the Approach in Stages with finite-domain Least Squares. It is an identification procedure by sub-domains: basic generating functions are determined step by step, permitting their weighting effects to be felt. This procedure allows one to be in control of the signs and, to some extent, of the optimal values of the parameters estimated, and consequently it provides a unique set of solutions that should represent the real physical parameters. Illustrations and comparisons are developed in rubber hyperelastic modeling. To cite this article: T. Beda, C. R. Mecanique 334 (2006).
Kinetic response of ionospheric ions to onset of auroral electric fields
NASA Technical Reports Server (NTRS)
Chiu, Y. T.; Kan, J. R.
1981-01-01
By examining the exact analytic solution of a kinetic model of collisional interaction of ionospheric ions with atmospheric neutrals in the Bhatnagar-Gross-Krook approximation, we show that the onset of intense auroral electric fields in the topside ionosphere can produce the following kinetic effects: (1) heat the bulk ionospheric ions to approximately 2 eV, thus driving them up to higher altitudes where they can be subjected to collisionless plasma processes; (2) produce a non-Maxwellian superthermal tail in the distribution function; and (3) cause the ion distribution function to be anisotropic with respect to the magnetic field with the perpendicular average thermal energy exceeding the parallel thermal energy.
Applications of Laplace transform methods to airfoil motion and stability calculations
NASA Technical Reports Server (NTRS)
Edwards, J. W.
1979-01-01
This paper reviews the development of generalized unsteady aerodynamic theory and presents a derivation of the generalized Possio integral equation. Numerical calculations resolve questions concerning subsonic indicial lift functions and demonstrate the generation of Kutta waves at high values of reduced frequency, subsonic Mach number, or both. The use of rational function approximations of unsteady aerodynamic loads in aeroelastic stability calculations is reviewed, and a reformulation of the matrix Pade approximation technique is given. Numerical examples of flutter boundary calculations for a wing which is to be flight tested are given. Finally, a simplified aerodynamic model of transonic flow is used to study the stability of an airfoil exposed to supersonic and subsonic flow regions.
Electromagnetic wave scattering from some vegetation samples
NASA Technical Reports Server (NTRS)
Karam, Mostafa A.; Fung, Adrian K.; Antar, Yahia M.
1988-01-01
For an incident plane wave, the field inside a thin scatterer (disk and needle) is estimated by the generalized Rayleigh-Gans (GRG) approximation. This leads to a scattering amplitude tensor equal to that obtained via the Rayleigh approximation (dipole term) with a modifying function. For a finite-length cylinder the inner field is estimated by the corresponding field for the same cylinder of infinite length. The effects of different approaches in estimating the field inside the scatterer on the backscattering cross section are illustrated numerically for a circular disk, a needle, and a finite-length cylinder as a function of the wave number and the incidence angle. Finally, the modeling predictions are compared with measurements.
NASA Astrophysics Data System (ADS)
Sanders, Sören; Holthaus, Martin
2017-11-01
We explore in detail how analytic continuation of divergent perturbation series by generalized hypergeometric functions is achieved in practice. Using the example of strong-coupling perturbation series provided by the two-dimensional Bose-Hubbard model, we compare hypergeometric continuation to Shanks and Padé techniques, and demonstrate that the former yields a powerful, efficient and reliable alternative for computing the phase diagram of the Mott insulator-to-superfluid transition. In contrast to Shanks transformations and Padé approximations, hypergeometric continuation also allows us to determine the exponents which characterize the divergence of correlation functions at the transition points. Therefore, hypergeometric continuation constitutes a promising tool for the study of quantum phase transitions.
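For orientation, the two reference techniques mentioned above are easy to state concretely; the sketch applies a Shanks transformation and a [5/5] Padé approximant to the toy series of log(1+x), not to the Bose-Hubbard perturbation series, and the hypergeometric continuation itself is not reproduced.

```python
import numpy as np
from mpmath import pade, polyval

x = 0.9
coeffs = [(-1) ** (n + 1) / n for n in range(1, 11)]        # Taylor coefficients of log(1+x)
partial = np.cumsum([c * x ** n for n, c in enumerate(coeffs, start=1)])

def shanks(s):
    """One pass of the Shanks transformation on a sequence of partial sums."""
    s = np.asarray(s, dtype=float)
    return (s[2:] * s[:-2] - s[1:-1] ** 2) / (s[2:] + s[:-2] - 2 * s[1:-1])

accel = shanks(partial)                                      # accelerated partial sums

p, q = pade([0.0] + coeffs, 5, 5)                            # [5/5] Padé approximant
pade_val = polyval(p[::-1], x) / polyval(q[::-1], x)         # compare with log(1.9)
```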
Limitations of the method of complex basis functions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baumel, R.T.; Crocker, M.C.; Nuttall, J.
1975-08-01
The method of complex basis functions proposed by Rescigno and Reinhardt is applied to the calculation of the amplitude in a model problem which can be treated analytically. It is found for an important class of potentials, including some of infinite range and also the square well, that the method does not provide a converging sequence of approximations. However, in some cases, approximations of relatively low order might be close to the correct result. The method is also applied to S-wave e-H elastic scattering above the ionization threshold, and spurious "convergence" to the wrong result is found. A procedure which might overcome the difficulties of the method is proposed.
Angular correlations in pair production at the LHC in the parton Reggeization approach
NASA Astrophysics Data System (ADS)
Karpishkov, Anton; Nefedov, Maxim; Saleev, Vladimir
2017-10-01
We calculate angular correlation spectra between beauty (B) and anti-beauty mesons in proton-proton collisions in the leading order approximation of the parton Reggeization approach, consistently merged with the next-to-leading order corrections from the emission of an additional hard gluon (NLO* approximation). To describe b-quark hadronization we use the universal scale-dependent parton-to-meson fragmentation functions extracted from the combined e+e- annihilation data. The Kimber-Martin-Ryskin model for the unintegrated parton distribution functions in a proton is employed. We have obtained good agreement between our predictions and data from the CMS Collaboration at the energy TeV for angular correlations, within uncertainties and without free parameters.
Electron scattering intensities and Patterson functions of Skyrmions
NASA Astrophysics Data System (ADS)
Karliner, M.; King, C.; Manton, N. S.
2016-06-01
The scattering of electrons off nuclei is one of the best methods of probing nuclear structure. In this paper we focus on electron scattering off nuclei with spin and isospin zero within the Skyrme model. We consider two distinct methods and simplify our calculations by use of the Born approximation. The first method is to calculate the form factor of the spherically averaged Skyrmion charge density; the second uses the Patterson function to calculate the scattering intensity off randomly oriented Skyrmions, and spherically averages at the end. We compare our findings with experimental scattering data. We also find approximate analytical formulae for the first zero and first stationary point of a form factor.
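The first method reduces, in the Born approximation, to a standard radial integral; below is a minimal sketch with a placeholder Gaussian density standing in for the spherically averaged Skyrmion charge density.

```python
import numpy as np
from scipy.integrate import simpson

r = np.linspace(1e-4, 10.0, 2000)                   # radius (fm)
rho = np.exp(-r**2) / np.pi**1.5                    # toy normalized charge density

def form_factor(q):
    # F(q) = 4*pi * Integral r^2 rho(r) sin(q r)/(q r) dr
    return 4 * np.pi * simpson(r**2 * rho * np.sinc(q * r / np.pi), x=r)

q_values = np.linspace(0.0, 5.0, 100)
F = np.array([form_factor(q) for q in q_values])    # first zero marks the diffraction minimum
```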
Structure and osmotic pressure of ionic microgel dispersions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hedrick, Mary M.; Department of Chemistry and Biochemistry, North Dakota State University, Fargo, North Dakota 58108-6050; Chung, Jun Kyung
We investigate structural and thermodynamic properties of aqueous dispersions of ionic microgels—soft colloidal gel particles that exhibit unusual phase behavior. Starting from a coarse-grained model of microgel macroions as charged spheres that are permeable to microions, we perform simulations and theoretical calculations using two complementary implementations of Poisson-Boltzmann (PB) theory. Within a one-component model, based on a linear-screening approximation for effective electrostatic pair interactions, we perform molecular dynamics simulations to compute macroion-macroion radial distribution functions, static structure factors, and macroion contributions to the osmotic pressure. For the same model, using a variational approximation for the free energy, we compute both macroion and microion contributions to the osmotic pressure. Within a spherical cell model, which neglects macroion correlations, we solve the nonlinear PB equation to compute microion distributions and osmotic pressures. By comparing the one-component and cell model implementations of PB theory, we demonstrate that the linear-screening approximation is valid for moderately charged microgels. By further comparing cell model predictions with simulation data for osmotic pressure, we chart the cell model's limits in predicting osmotic pressures of salty dispersions.
A Model-Free Diagnostic for Single-Peakedness of Item Responses Using Ordered Conditional Means
ERIC Educational Resources Information Center
Polak, Marike; De Rooij, Mark; Heiser, Willem J.
2012-01-01
In this article we propose a model-free diagnostic for single-peakedness (unimodality) of item responses. Presuming a unidimensional unfolding scale and a given item ordering, we approximate item response functions of all items based on ordered conditional means (OCM). The proposed OCM methodology is based on Thurstone & Chave's (1929) "criterion…
van Turnhout, J.
2016-01-01
The dielectric spectra of colloidal systems often contain a typical low frequency dispersion, which usually remains unnoticed because of the presence of strong conduction losses. The KK relations offer a means for converting ε′ into ε″ data. This allows us to calculate conduction-free ε″ spectra in which the l.f. dispersion will show up undisturbed. This interconversion can be done online with a moving frame of logarithmically spaced ε′ data. The coefficients of the conversion frames were obtained by kernel matching and by using symbolic differential operators. Logarithmic derivatives and differences of ε′ and ε″ provide another option for conduction-free data analysis. These difference-based functions, actually derived from approximations to the distribution function, have the additional advantage of improving the resolution power of dielectric studies. A high resolution is important because of the rich relaxation structure of colloidal suspensions. The development of all-in-1 modeling facilitates the conduction-free and high resolution data analysis. This mathematical tool allows the apart-together fitting of multiple data sets and multiple model functions. It also proved useful to bypass the KK conversion altogether. This was achieved by the combined approximation of the ε′ and ε″ data with a complex rational fractional power function. The all-in-1 minimization turned out to be also highly useful for the dielectric modeling of a suspension with the complex dipolar coefficient. It guarantees a secure correction for the electrode polarization, so that the modeling with the help of the differences of ε′ and ε″ can zoom in on the genuine colloidal relaxations. PMID:27242997
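A minimal sketch of the logarithmic-derivative route to a conduction-free loss spectrum, using the generic approximation ε″_der ≈ -(π/2) dε′/d ln ω on a synthetic Debye-plus-conduction spectrum; the paper's kernel-matching frames and all-in-1 fitting are not reproduced here.

```python
import numpy as np

omega = np.logspace(-2, 6, 400)                     # angular frequency (rad/s)
eps_inf, d_eps, tau, sigma = 3.0, 20.0, 1e-3, 1e-9  # toy Debye + dc-conduction parameters
eps_real = eps_inf + d_eps / (1 + (omega * tau) ** 2)
eps_imag = d_eps * omega * tau / (1 + (omega * tau) ** 2) + sigma / (8.854e-12 * omega)

# conduction-free loss from the slope of eps' on a logarithmic frequency axis
eps_loss_derived = -0.5 * np.pi * np.gradient(eps_real, np.log(omega))
```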
Liu, Peigui; Elshall, Ahmed S.; Ye, Ming; ...
2016-02-05
Evaluating the marginal likelihood is the most critical and computationally expensive task when conducting Bayesian model averaging to quantify parametric and model uncertainties. The evaluation is commonly done by using Laplace approximations to evaluate semianalytical expressions of the marginal likelihood or by using Monte Carlo (MC) methods to evaluate the arithmetic or harmonic mean of a joint likelihood function. This study introduces a new MC method, i.e., thermodynamic integration, which has not been attempted in environmental modeling. Instead of using samples only from the prior parameter space (as in arithmetic mean evaluation) or the posterior parameter space (as in harmonic mean evaluation), the thermodynamic integration method uses samples generated gradually from the prior to the posterior parameter space. This is done through a path sampling that conducts Markov chain Monte Carlo simulation with different power coefficient values applied to the joint likelihood function. The thermodynamic integration method is evaluated using three analytical functions by comparing the method with two variants of the Laplace approximation method and three MC methods, including the nested sampling method that was recently introduced into environmental modeling. The thermodynamic integration method outperforms the other methods in terms of accuracy, convergence, and consistency. The thermodynamic integration method is also applied to a synthetic case of groundwater modeling with four alternative models. The application shows that model probabilities obtained using the thermodynamic integration method improve the predictive performance of Bayesian model averaging. As a result, the thermodynamic integration method is mathematically rigorous, and its MC implementation is computationally general for a wide range of environmental problems.
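A toy sketch of the path-sampling (power posterior) idea behind thermodynamic integration, for a conjugate Gaussian model where each power posterior can be sampled directly instead of by MCMC; the groundwater application replaces the direct sampling with Markov chain Monte Carlo over model parameters.

```python
import numpy as np

rng = np.random.default_rng(4)
data = rng.normal(1.0, 1.0, size=30)                # toy observations
mu0, tau0, sigma = 0.0, 2.0, 1.0                    # prior N(mu0, tau0^2), known sigma

def loglik(mu):
    return -0.5 * np.sum((data - mu) ** 2) / sigma**2 \
           - data.size * np.log(sigma * np.sqrt(2 * np.pi))

betas = np.linspace(0.0, 1.0, 21) ** 3              # temperature (power coefficient) schedule
expectations = []
for beta in betas:
    # power posterior prior(mu) * L(mu)^beta is Gaussian for this conjugate toy model
    prec = 1 / tau0**2 + beta * data.size / sigma**2
    mean = (mu0 / tau0**2 + beta * data.sum() / sigma**2) / prec
    mus = rng.normal(mean, 1 / np.sqrt(prec), size=5000)
    expectations.append(np.mean([loglik(m) for m in mus]))

log_marginal = np.trapz(expectations, betas)        # thermodynamic integration estimate
```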
A Short Note on the Scaling Function Constant Problem in the Two-Dimensional Ising Model
NASA Astrophysics Data System (ADS)
Bothner, Thomas
2018-02-01
We provide a simple derivation of the constant factor in the short-distance asymptotics of the tau-function associated with the 2-point function of the two-dimensional Ising model. This factor was first computed by Tracy (Commun Math Phys 142:297-311, 1991) via an exponential series expansion of the correlation function. Further simplifications in the analysis are due to Tracy and Widom (Commun Math Phys 190:697-721, 1998) using Fredholm determinant representations of the correlation function and Wiener-Hopf approximation results for the underlying resolvent operator. Our method relies on an action integral representation of the tau-function and asymptotic results for the underlying Painlevé-III transcendent from McCoy et al. (J Math Phys 18:1058-1092, 1977).
Rajeswaran, Jeevanantham; Blackstone, Eugene H; Barnard, John
2018-07-01
In many longitudinal follow-up studies, we observe more than one longitudinal outcome. Impaired renal and liver functions are indicators of poor clinical outcomes for patients who are on mechanical circulatory support and awaiting heart transplant. Hence, monitoring organ functions while waiting for heart transplant is an integral part of patient management. Longitudinal measurements of bilirubin can be used as a marker for liver function and glomerular filtration rate for renal function. We derive an approximation to evolution of association between these two organ functions using a bivariate nonlinear mixed effects model for continuous longitudinal measurements, where the two submodels are linked by a common distribution of time-dependent latent variables and a common distribution of measurement errors.
NASA Astrophysics Data System (ADS)
Sadeghi, Morteza; Ghanbarian, Behzad; Horton, Robert
2018-02-01
Thermal conductivity is an essential component in multiphysics models and coupled simulation of heat transfer, fluid flow, and solute transport in porous media. In the literature, various empirical, semiempirical, and physical models were developed for thermal conductivity and its estimation in partially saturated soils. Recently, Ghanbarian and Daigle (GD) proposed a theoretical model, using the percolation-based effective-medium approximation, whose parameters are physically meaningful. The original GD model implicitly formulates thermal conductivity λ as a function of volumetric water content θ. For the sake of computational efficiency in numerical calculations, in this study, we derive an explicit λ(θ) form of the GD model. We also demonstrate that some well-known empirical models, e.g., Chung-Horton, widely applied in the HYDRUS model, as well as mixing models are special cases of the GD model under specific circumstances. Comparison with experiments indicates that the GD model can accurately estimate soil thermal conductivity.
Asymptotic safety of quantum gravity beyond Ricci scalars
NASA Astrophysics Data System (ADS)
Falls, Kevin; King, Callum R.; Litim, Daniel F.; Nikolakopoulos, Kostas; Rahmede, Christoph
2018-04-01
We investigate the asymptotic safety conjecture for quantum gravity including curvature invariants beyond Ricci scalars. Our strategy is put to work for families of gravitational actions which depend on functions of the Ricci scalar, the Ricci tensor, and products thereof. Combining functional renormalization with high order polynomial approximations and full numerical integration we derive the renormalization group flow for all couplings and analyse their fixed points, scaling exponents, and the fixed point effective action as a function of the background Ricci curvature. The theory is characterized by three relevant couplings. Higher-dimensional couplings show near-Gaussian scaling with increasing canonical mass dimension. We find that Ricci tensor invariants stabilize the UV fixed point and lead to a rapid convergence of polynomial approximations. We apply our results to models for cosmology and establish that the gravitational fixed point admits inflationary solutions. We also compare findings with those from f(R)-type theories in the same approximation and pinpoint the key new effects due to Ricci tensor interactions. Implications for the asymptotic safety conjecture of gravity are indicated.
NASA Astrophysics Data System (ADS)
Bescond, Marc; Li, Changsheng; Mera, Hector; Cavassilas, Nicolas; Lannoo, Michel
2013-10-01
We present a one-shot current-conserving approach to model the influence of electron-phonon scattering in nano-transistors using the non-equilibrium Green's function formalism. The approach is based on the lowest order approximation (LOA) to the current and its simplest analytic continuation (LOA+AC). By means of a scaling argument, we show how both LOA and LOA+AC can be easily obtained from the first iteration of the usual self-consistent Born approximation (SCBA) algorithm. Both LOA and LOA+AC are then applied to model n-type silicon nanowire field-effect-transistors and are compared to SCBA current characteristics. In this system, the LOA fails to describe electron-phonon scattering, mainly because of the interactions with acoustic phonons at the band edges. In contrast, the LOA+AC still well approximates the SCBA current characteristics, thus demonstrating the power of analytic continuation techniques. The limits of validity of LOA+AC are also discussed, and more sophisticated and general analytic continuation techniques are suggested for more demanding cases.
Marginally specified priors for non-parametric Bayesian estimation
Kessler, David C.; Hoff, Peter D.; Dunson, David B.
2014-01-01
Prior specification for non-parametric Bayesian inference involves the difficult task of quantifying prior knowledge about a parameter of high, often infinite, dimension. A statistician is unlikely to have informed opinions about all aspects of such a parameter but will have real information about functionals of the parameter, such as the population mean or variance. The paper proposes a new framework for non-parametric Bayes inference in which the prior distribution for a possibly infinite dimensional parameter is decomposed into two parts: an informative prior on a finite set of functionals, and a non-parametric conditional prior for the parameter given the functionals. Such priors can be easily constructed from standard non-parametric prior distributions in common use and inherit the large support of the standard priors on which they are based. Additionally, posterior approximations under these informative priors can generally be made via minor adjustments to existing Markov chain approximation algorithms for standard non-parametric prior distributions. We illustrate the use of such priors in the context of multivariate density estimation using Dirichlet process mixture models, and in the modelling of high dimensional sparse contingency tables. PMID:25663813
Oscillations and Rolling for Duffing's Equation
NASA Astrophysics Data System (ADS)
Aref'eva, I. Ya.; Piskovskiy, E. V.; Volovich, I. V.
2013-01-01
The Duffing equation has been used to model nonlinear dynamics not only in mechanics and electronics but also in biology and in neurology for brain process modeling. Van der Pol's method is often used in nonlinear dynamics to improve perturbation theory results when describing small oscillations. However, in some other problems of nonlinear dynamics, particularly for the Duffing-Higgs equation in field theory, for the Einstein-Friedmann equations in cosmology, and for relaxation processes in neurology, not only the small-oscillation regime is of interest but also the regime of slow rolling. In the present work a method for approximate solution of nonlinear dynamics equations in the rolling regime is developed. It is shown that in order to improve perturbation theory in the rolling regime it is effective to use an expansion in hyperbolic functions instead of trigonometric functions, as is done in van der Pol's method in the case of small oscillations. In particular, the Duffing equation in the rolling regime is investigated using a solution expressed in terms of elliptic functions. The accuracy of the obtained approximation is estimated. The Duffing equation with dissipation is also considered.
Probabilistic and deterministic aspects of linear estimation in geodesy. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Dermanis, A.
1976-01-01
Recent advances in observational techniques related to geodetic work (VLBI, laser ranging) make it imperative that more consideration should be given to modeling problems. Uncertainties in the effect of atmospheric refraction, polar motion and precession-nutation parameters cannot be dispensed with in the context of centimeter level geodesy. Even physical processes that have generally been altogether neglected previously (station motions) must now be taken into consideration. The problem of modeling functions of time or space, or at least their values at observation points (epochs), is explored for the case when the nature of the function to be modeled is unknown. The need to include only a limited number of terms and to decide a priori upon a specific form may result in a representation which fails to sufficiently approximate the unknown function. An alternative approach of increasing application is the modeling of unknown functions as stochastic processes.
When can time-dependent currents be reproduced by the Landauer steady-state approximation?
NASA Astrophysics Data System (ADS)
Carey, Rachel; Chen, Liping; Gu, Bing; Franco, Ignacio
2017-05-01
We establish well-defined limits in which the time-dependent electronic currents across a molecular junction subject to a fluctuating environment can be quantitatively captured via the Landauer steady-state approximation. For this, we calculate the exact time-dependent non-equilibrium Green's function (TD-NEGF) current along a model two-site molecular junction, in which the site energies are subject to correlated noise, and contrast it with that obtained from the Landauer approach. The ability of the steady-state approximation to capture the TD-NEGF behavior at each instant of time is quantified via the same-time correlation function of the currents obtained from the two methods, while their global agreement is quantified by examining differences in the average currents. The Landauer steady-state approach is found to be a useful approximation when (i) the fluctuations do not disrupt the degree of delocalization of the molecular eigenstates responsible for transport and (ii) the characteristic time for charge exchange between the molecule and leads is fast with respect to the molecular correlation time. For resonant transport, when these conditions are satisfied, the Landauer approach is found to accurately describe the current, both on average and at each instant of time. For non-resonant transport, we find that while the steady-state approach fails to capture the time-dependent transport at each instant of time, it still provides a good approximation to the average currents. These criteria can be employed to adopt effective modeling strategies for transport through molecular junctions in interaction with a fluctuating environment, as is necessary to describe experiments.
Evolutionary dynamics from a variational principle.
Klimek, Peter; Thurner, Stefan; Hanel, Rudolf
2010-07-01
We demonstrate with a thought experiment that fitness-based population dynamical approaches to evolution are not able to make quantitative, falsifiable predictions about the long-term behavior of some evolutionary systems. A key characteristic of evolutionary systems is the ongoing endogenous production of new species. These novel entities change the conditions for already existing species. Even Darwin's Demon, a hypothetical entity with exact knowledge of the abundance of all species and their fitness functions at a given time, could not prestate the impact of these novelties on established populations. We argue that fitness is always a posteriori knowledge--it measures but does not explain why a species has reproductive success or not. To overcome these conceptual limitations, a variational principle is proposed in a spin-model-like setup of evolutionary systems. We derive a functional which is minimized under the most general evolutionary formulation of a dynamical system, i.e., evolutionary trajectories causally emerge as a minimization of a functional. This functional allows the derivation of analytic solutions of the asymptotic diversity for stochastic evolutionary systems within a mean-field approximation. We test these approximations by numerical simulations of the corresponding model and find good agreement in the position of phase transitions in diversity curves. The model is further able to reproduce stylized facts of time series from several man-made and natural evolutionary systems. Light will be thrown on how species and their fitness landscapes dynamically coevolve.
A Gaussian Approximation Potential for Silicon
NASA Astrophysics Data System (ADS)
Bernstein, Noam; Bartók, Albert; Kermode, James; Csányi, Gábor
We present an interatomic potential for silicon using the Gaussian Approximation Potential (GAP) approach, which uses the Gaussian process regression method to approximate the reference potential energy surface as a sum of atomic energies. Each atomic energy is approximated as a function of the local environment around the atom, which is described with the smooth overlap of atomic environments (SOAP) descriptor. The potential is fit to a database of energies, forces, and stresses calculated using density functional theory (DFT) on a wide range of configurations from zero and finite temperature simulations. These include crystalline phases, liquid, amorphous, and low coordination structures, and diamond-structure point defects, dislocations, surfaces, and cracks. We compare the results of the potential to DFT calculations, as well as to previously published models including Stillinger-Weber, Tersoff, modified embedded atom method (MEAM), and ReaxFF. We show that it is very accurate as compared to the DFT reference results for a wide range of properties, including low energy bulk phases, liquid structure, as well as point, line, and plane defects in the diamond structure.
Boundary Control of Linear Uncertain 1-D Parabolic PDE Using Approximate Dynamic Programming.
Talaei, Behzad; Jagannathan, Sarangapani; Singler, John
2018-04-01
This paper develops a near optimal boundary control method for distributed parameter systems governed by uncertain linear 1-D parabolic partial differential equations (PDE) by using approximate dynamic programming. A quadratic surface integral is proposed to express the optimal cost functional for the infinite-dimensional state space. Accordingly, the Hamilton-Jacobi-Bellman (HJB) equation is formulated in the infinite-dimensional domain without using any model reduction. Subsequently, a neural network identifier is developed to estimate the unknown spatially varying coefficient in the PDE dynamics. A novel tuning law is proposed to guarantee the boundedness of the identifier approximation error in the PDE domain. A radial basis network (RBN) is subsequently proposed to generate an approximate solution for the optimal surface kernel function online. The tuning law for the near optimal RBN weights is created such that the HJB equation error is minimized while the dynamics are identified and the closed-loop system remains stable. Ultimate boundedness (UB) of the closed-loop system is verified by using the Lyapunov theory. The performance of the proposed controller is successfully confirmed by simulation on an unstable diffusion-reaction process.
Traveling-cluster approximation for uncorrelated amorphous systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sen, A.K.; Mills, R.; Kaplan, T.
1984-11-15
We have developed a formalism for including cluster effects in the one-electron Green's function for a positionally disordered (liquid or amorphous) system without any correlation among the scattering sites. This method is an extension of the technique known as the traveling-cluster approximation (TCA) originally obtained and applied to a substitutional alloy by Mills and Ratanavararaksa. We have also proved the appropriate fixed-point theorem, which guarantees, for a bounded local potential, that the self-consistent equations always converge upon iteration to a unique, Herglotz solution. To our knowledge, this is the only analytic theory for considering cluster effects. Furthermore, we have performed some computer calculations in the pair TCA, for the model case of delta-function potentials on a one-dimensional random chain. These results have been compared with "exact calculations" (which, in principle, take into account all cluster effects) and with the coherent-potential approximation (CPA), which is the single-site TCA. The density of states for the pair TCA clearly shows some improvement over the CPA and yet, apparently, the pair approximation distorts some of the features of the exact results.
Baker, Stuart G
2018-02-01
When using risk prediction models, an important consideration is weighing performance against the cost (monetary and harms) of ascertaining predictors. The minimum test tradeoff (MTT) for ruling out a model is the minimum number of all-predictor ascertainments per correct prediction to yield a positive overall expected utility. The MTT for ruling out an added predictor is the minimum number of added-predictor ascertainments per correct prediction to yield a positive overall expected utility. An approximation to the MTT for ruling out a model is 1/[P · H(AUC_Model)], where H(AUC) = AUC − {½(1 − AUC)}^(1/2), AUC is the area under the receiver operating characteristic (ROC) curve, and P is the probability of the predicted event in the target population. An approximation to the MTT for ruling out an added predictor is 1/[P · {H(AUC_Model 2) − H(AUC_Model 1)}], where Model 2 includes an added predictor relative to Model 1. The latter approximation requires the Tangent Condition that the true positive rate at the point on the ROC curve with a slope of 1 is larger for Model 2 than Model 1. These approximations are suitable for back-of-the-envelope calculations. For example, in a study predicting the risk of invasive breast cancer, Model 2 adds to the predictors in Model 1 a set of 7 single nucleotide polymorphisms (SNPs). Based on the AUCs and the Tangent Condition, an MTT of 7200 was computed, which indicates that 7200 sets of SNPs are needed for every correct prediction of breast cancer to yield a positive overall expected utility. If ascertaining the SNPs costs $500, this MTT suggests that SNP ascertainment is not likely worthwhile for this risk prediction.
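The two approximations above are simple enough for quick calculation; the sketch below just transcribes them, using hypothetical AUC and prevalence values rather than the breast-cancer figures from the study.

```python
import math

def H(auc):
    # H(AUC) = AUC - sqrt((1 - AUC) / 2), as defined in the abstract.
    return auc - math.sqrt(0.5 * (1.0 - auc))

def mtt_rule_out_model(p_event, auc_model):
    """MTT for ruling out a model: 1 / [P * H(AUC_model)]."""
    return 1.0 / (p_event * H(auc_model))

def mtt_rule_out_added_predictor(p_event, auc_model1, auc_model2):
    """MTT for ruling out an added predictor: 1 / [P * (H(AUC_2) - H(AUC_1))].
    Assumes the Tangent Condition holds for Model 2 versus Model 1."""
    return 1.0 / (p_event * (H(auc_model2) - H(auc_model1)))

# Hypothetical numbers for illustration only.
print(round(mtt_rule_out_model(p_event=0.02, auc_model=0.70)))
print(round(mtt_rule_out_added_predictor(p_event=0.02, auc_model1=0.65, auc_model2=0.68)))
```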
NASA Astrophysics Data System (ADS)
Shankar, Praveen
The performance of nonlinear control algorithms such as feedback linearization and dynamic inversion is heavily dependent on the fidelity of the dynamic model being inverted. Incomplete or incorrect knowledge of the dynamics results in reduced performance and may lead to instability. Augmenting the baseline controller with approximators which utilize a parametrization structure that is adapted online reduces the effect of this error between the design model and actual dynamics. However, currently existing parameterizations employ a fixed set of basis functions that do not guarantee arbitrary tracking error performance. To address this problem, we develop a self-organizing parametrization structure that is proven to be stable and can guarantee arbitrary tracking error performance. The training algorithm to grow the network and adapt the parameters is derived from Lyapunov theory. In addition to growing the network of basis functions, a pruning strategy is incorporated to keep the size of the network as small as possible. This algorithm is implemented on a high performance flight vehicle such as the F-15 military aircraft. The baseline dynamic inversion controller is augmented with a Self-Organizing Radial Basis Function Network (SORBFN) to minimize the effect of the inversion error which may occur due to imperfect modeling, approximate inversion or sudden changes in aircraft dynamics. The dynamic inversion controller is simulated for different situations including control surface failures, modeling errors and external disturbances with and without the adaptive network. A performance measure of maximum tracking error is specified for both controllers a priori. Excellent tracking error minimization to a pre-specified level was achieved using the adaptive approximation based controller, while the baseline dynamic inversion controller failed to meet this performance specification. The performance of the SORBFN based controller is also compared to a fixed RBF network based adaptive controller. While the fixed RBF network based controller, which is tuned to compensate for control surface failures, fails to achieve the same performance under modeling uncertainty and disturbances, the SORBFN is able to achieve good tracking convergence under all error conditions.
Fujita, Masahiko
2016-03-01
Lesions of the cerebellum result in large errors in movements. The cerebellum adaptively controls the strength and timing of motor command signals depending on the internal and external environments of movements. The present theory describes how the cerebellar cortex can control signals for accurate and timed movements. A model network of the cerebellar Golgi and granule cells is shown to be equivalent to a multiple-input (from mossy fibers) hierarchical neural network with a single hidden layer of threshold units (granule cells) that receive a common recurrent inhibition (from a Golgi cell). The weighted sum of the hidden unit signals (Purkinje cell output) is theoretically analyzed regarding the capability of the network to perform two types of universal function approximation. The hidden units begin firing as the excitatory inputs exceed the recurrent inhibition. This simple threshold feature leads to the first approximation theory, and the network final output can be any continuous function of the multiple inputs. When the input is constant, this output becomes stationary. However, when the recurrent unit activity is triggered to decrease or the recurrent inhibition is triggered to increase through a certain mechanism (metabotropic modulation or extrasynaptic spillover), the network can generate any continuous signals for a prolonged period of change in the activity of recurrent signals, as the second approximation theory shows. By incorporating the cerebellar capability of two such types of approximations to a motor system, in which learning proceeds through repeated movement trials with accompanying corrections, accurate and timed responses for reaching the target can be adaptively acquired. Simple models of motor control can solve the motor error vs. sensory error problem, as well as the structural aspects of credit (or error) assignment problem. Two physiological experiments are proposed for examining the delay and trace conditioning of eyelid responses, as well as saccade adaptation, to investigate this novel idea of cerebellar processing. Copyright © 2015 Elsevier Ltd. All rights reserved.
Practical auxiliary basis implementation of Rung 3.5 functionals
DOE Office of Scientific and Technical Information (OSTI.GOV)
Janesko, Benjamin G., E-mail: b.janesko@tcu.edu; Scalmani, Giovanni; Frisch, Michael J.
2014-07-21
Approximate exchange-correlation functionals for Kohn-Sham density functional theory often benefit from incorporating exact exchange. Exact exchange is constructed from the noninteracting reference system's nonlocal one-particle density matrix γ(r, r′). Rung 3.5 functionals attempt to balance the strengths and limitations of exact exchange using a new ingredient, a projection of γ(r, r′) onto a semilocal model density matrix γ_SL(ρ(r), ∇ρ(r), r − r′). γ_SL depends on the electron density ρ(r) at reference point r and is closely related to semilocal model exchange holes. We present a practical implementation of Rung 3.5 functionals, expanding the r − r′ dependence of γ_SL in an auxiliary basis set. Energies and energy derivatives are obtained from 3D numerical integration as in standard semilocal functionals. We also present numerical tests of a range of properties, including molecular thermochemistry and kinetics, geometries and vibrational frequencies, and bandgaps and excitation energies. Rung 3.5 functionals typically provide accuracy intermediate between semilocal and hybrid approximations. Nonlocal potential contributions from γ_SL yield interesting successes and failures for band structures and excitation energies. The results enable and motivate continued exploration of Rung 3.5 functional forms.
Shi, Ran; Guo, Ying
2016-12-01
Human brains perform tasks via complex functional networks consisting of separated brain regions. A popular approach to characterize brain functional networks in fMRI studies is independent component analysis (ICA), which is a powerful method to reconstruct latent source signals from their linear mixtures. In many fMRI studies, an important goal is to investigate how brain functional networks change according to specific clinical and demographic variabilities. Existing ICA methods, however, cannot directly incorporate covariate effects in ICA decomposition. Heuristic post-ICA analysis to address this need can be inaccurate and inefficient. In this paper, we propose a hierarchical covariate-adjusted ICA (hc-ICA) model that provides a formal statistical framework for estimating covariate effects and testing differences between brain functional networks. Our method provides a more reliable and powerful statistical tool for evaluating group differences in brain functional networks while appropriately controlling for potential confounding factors. We present an analytically tractable EM algorithm to obtain maximum likelihood estimates of our model. We also develop a subspace-based approximate EM that runs significantly faster while retaining high accuracy. To test the differences in functional networks, we introduce a voxel-wise approximate inference procedure which eliminates the need of computationally expensive covariance matrix estimation and inversion. We demonstrate the advantages of our methods over the existing method via simulation studies. We apply our method to an fMRI study to investigate differences in brain functional networks associated with post-traumatic stress disorder (PTSD).
Anharmonic effects in the quantum cluster equilibrium method
NASA Astrophysics Data System (ADS)
von Domaros, Michael; Perlt, Eva
2017-03-01
The well-established quantum cluster equilibrium (QCE) model provides a statistical thermodynamic framework to apply high-level ab initio calculations of finite cluster structures to macroscopic liquid phases using the partition function. So far, the harmonic approximation has been applied throughout the calculations. In this article, we apply an important correction in the evaluation of the one-particle partition function and account for anharmonicity. Therefore, we implemented an analytical approximation to the Morse partition function and the derivatives of its logarithm with respect to temperature, which are required for the evaluation of thermodynamic quantities. This anharmonic QCE approach has been applied to liquid hydrogen chloride and cluster distributions, and the molar volume, the volumetric thermal expansion coefficient, and the isobaric heat capacity have been calculated. An improved description for all properties is observed if anharmonic effects are considered.
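For context on the size of the anharmonic correction, here is a small sketch comparing a truncated harmonic vibrational partition function with a direct numerical sum over Morse levels. The article's analytical approximation to the Morse partition function is not reproduced here; the level formula and the HCl-like spectroscopic constants below are standard textbook forms used as assumptions.

```python
import numpy as np

k_B = 1.380649e-23      # J/K
h = 6.62607015e-34      # J*s
c = 2.99792458e10       # cm/s (converts wavenumbers to frequencies)

def morse_levels(omega_cm, xe, n_max=None):
    """Morse vibrational levels E_n = h*c*omega*[(n + 1/2) - xe*(n + 1/2)^2], in J."""
    if n_max is None:
        n_max = int(np.floor(0.5 / xe - 0.5))   # index of the last bound level
    n = np.arange(n_max + 1)
    return h * c * omega_cm * ((n + 0.5) - xe * (n + 0.5) ** 2)

def q_vib(levels, T):
    """Vibrational partition function with energies measured from the lowest level."""
    return np.sum(np.exp(-(levels - levels[0]) / (k_B * T)))

# HCl-like constants, used as an illustrative assumption.
omega_cm, xe, T = 2990.0, 0.0174, 1000.0
morse = morse_levels(omega_cm, xe)
harmonic = h * c * omega_cm * (np.arange(len(morse)) + 0.5)   # same number of levels
print("q_harmonic =", q_vib(harmonic, T), "  q_morse =", q_vib(morse, T))
```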
NASA Technical Reports Server (NTRS)
Peck, Charles C.; Dhawan, Atam P.; Meyer, Claudia M.
1991-01-01
A genetic algorithm is used to select the inputs to a neural network function approximator. In the application considered, modeling critical parameters of the space shuttle main engine (SSME), the functional relationship between measured parameters is unknown and complex. Furthermore, the number of possible input parameters is quite large. Many approaches have been used for input selection, but they are either subjective or do not consider the complex multivariate relationships between parameters. Due to the optimization and space searching capabilities of genetic algorithms they were employed to systematize the input selection process. The results suggest that the genetic algorithm can generate parameter lists of high quality without the explicit use of problem domain knowledge. Suggestions for improving the performance of the input selection process are also provided.
N-point statistics of large-scale structure in the Zel'dovich approximation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tassev, Svetlin, E-mail: tassev@astro.princeton.edu
2014-06-01
Motivated by the results presented in a companion paper, here we give a simple analytical expression for the matter n-point functions in the Zel'dovich approximation (ZA) both in real and in redshift space (including the angular case). We present numerical results for the 2-dimensional redshift-space correlation function, as well as for the equilateral configuration for the real-space 3-point function. We compare those to the tree-level results. Our analysis is easily extendable to include Lagrangian bias, as well as higher-order perturbative corrections to the ZA. The results should be especially useful for modelling probes of large-scale structure in the linear regime, such as the Baryon Acoustic Oscillations. We make the numerical code used in this paper freely available.
Analytical Debye-Huckel model for electrostatic potentials around dissolved DNA.
Wagner, K; Keyes, E; Kephart, T W; Edwards, G
1997-07-01
We present an analytical, Green-function-based model for the electric potential of DNA in solution, treating the surrounding solvent with the Debye-Huckel approximation. The partial charge of each atom is accounted for by modeling DNA as linear distributions of atoms on concentric cylindrical surfaces. The condensed ions of the solvent are treated with the Debye-Huckel approximation. The resultant leading term of the potential is that of a continuous shielded line charge, and the higher order terms account for the helical structure. Within several angstroms of the surface there is sufficient information in the electric potential to distinguish features and symmetries of DNA. Plots of the potential and equipotential surfaces, dominated by the phosphate charges, reflect the structural differences between the A, B, and Z conformations and, to a smaller extent, the difference between base sequences. As the distances from the helices increase, the magnitudes of the potentials decrease. However, the bases and sugars account for a larger fraction of the double helix potential with increasing distance. We have found that when the solvent is treated with the Debye-Huckel approximation, the potential decays more rapidly in every direction from the surface than it did in the concentric dielectric cylinder approximation.
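As a point of reference for the leading term mentioned above, the Debye-Huckel (linearized Poisson-Boltzmann) potential of an infinite uniformly charged line is proportional to the modified Bessel function K0 of the scaled radial distance. The sketch below shows only that textbook leading term, not the authors' full multi-cylinder atomic model, and the numerical values are order-of-magnitude assumptions.

```python
import numpy as np
from scipy.special import k0

def line_charge_potential(rho, kappa, lam, eps):
    """Debye-Huckel potential of an infinite line charge of linear density lam
    at radial distance rho, with screening constant kappa and permittivity eps.
    Leading shielded-line-charge term only; helical corrections are omitted."""
    return lam / (2.0 * np.pi * eps) * k0(kappa * rho)

# Illustrative numbers (hypothetical, SI units):
eps = 80 * 8.854e-12           # water-like permittivity
kappa = 1.0 / 1.0e-9           # 1 nm Debye length
lam = -1.0e-9                  # C/m, roughly DNA-like linear charge density (assumed)
rho = np.array([1.0, 2.0, 3.0]) * 1e-9
print(line_charge_potential(rho, kappa, lam, eps))
```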
Optimal causal inference: estimating stored information and approximating causal architecture.
Still, Susanne; Crutchfield, James P; Ellison, Christopher J
2010-09-01
We introduce an approach to inferring the causal architecture of stochastic dynamical systems that extends rate-distortion theory to use causal shielding--a natural principle of learning. We study two distinct cases of causal inference: optimal causal filtering and optimal causal estimation. Filtering corresponds to the ideal case in which the probability distribution of measurement sequences is known, giving a principled method to approximate a system's causal structure at a desired level of representation. We show that in the limit in which a model-complexity constraint is relaxed, filtering finds the exact causal architecture of a stochastic dynamical system, known as the causal-state partition. From this, one can estimate the amount of historical information the process stores. More generally, causal filtering finds a graded model-complexity hierarchy of approximations to the causal architecture. Abrupt changes in the hierarchy, as a function of approximation, capture distinct scales of structural organization. For nonideal cases with finite data, we show how the correct number of the underlying causal states can be found by optimal causal estimation. A previously derived model-complexity control term allows us to correct for the effect of statistical fluctuations in probability estimates and thereby avoid overfitting.
NASA Astrophysics Data System (ADS)
Coletta, Vincent P.; Evans, Jonathan
2008-10-01
We analyze the motion of a gravity powered model race car on a downhill track of variable slope. Using a simple algebraic function to approximate the height of the track as a function of the distance along the track, and taking account of the rotational energy of the wheels, rolling friction, and air resistance, we obtain analytic expressions for the velocity and time of the car as functions of the distance traveled along the track. Photogates are used to measure the time at selected points along the track, and the measured values are in excellent agreement with the values predicted from theory. The design and analysis of model race cars provides a good application of principles of mechanics and suggests interesting projects for classes in introductory and intermediate mechanics.
Rights, Jason D; Sterba, Sonya K
2016-11-01
Multilevel data structures are common in the social sciences. Often, such nested data are analysed with multilevel models (MLMs) in which heterogeneity between clusters is modelled by continuously distributed random intercepts and/or slopes. Alternatively, the non-parametric multilevel regression mixture model (NPMM) can accommodate the same nested data structures through discrete latent class variation. The purpose of this article is to delineate analytic relationships between NPMM and MLM parameters that are useful for understanding the indirect interpretation of the NPMM as a non-parametric approximation of the MLM, with relaxed distributional assumptions. We define how seven standard and non-standard MLM specifications can be indirectly approximated by particular NPMM specifications. We provide formulas showing how the NPMM can serve as an approximation of the MLM in terms of intraclass correlation, random coefficient means and (co)variances, heteroscedasticity of residuals at level 1, and heteroscedasticity of residuals at level 2. Further, we discuss how these relationships can be useful in practice. The specific relationships are illustrated with simulated graphical demonstrations, and direct and indirect interpretations of NPMM classes are contrasted. We provide an R function to aid in implementing and visualizing an indirect interpretation of NPMM classes. An empirical example is presented and future directions are discussed. © 2016 The British Psychological Society.
Nerurkar, Nandan L; Mauck, Robert L; Elliott, Dawn M
2008-12-01
The objectives of this study were to integrate theoretical and experimental approaches for annulus fibrosus (AF) functional tissue engineering, to apply a hyperelastic constitutive model to characterize the evolution of engineered AF via scalar model parameters, and to validate the model and predict the response of engineered constructs to physiologic loading scenarios. There is a need for a tissue-engineered replacement for the degenerate AF. When evaluating engineered replacements for load-bearing tissues, it is necessary to evaluate mechanical function with respect to the native tissue, including nonlinearity and anisotropy. Aligned nanofibrous poly-epsilon-caprolactone scaffolds with prescribed fiber angles were seeded with bovine AF cells and analyzed over 8 weeks, using experimental (mechanical testing, biochemistry, histology) and theoretical methods (a hyperelastic fiber-reinforced constitutive model). The linear region modulus for phi = 0 degrees constructs increased by approximately 25 MPa, and for phi = 90 degrees by approximately 2 MPa, from 1 day to 8 weeks in culture. Infiltration and proliferation of AF cells into the scaffold and abundant deposition of s-GAG and aligned collagen were observed. The constitutive model had excellent fits to experimental data, yielding matrix and fiber parameters that increased with time in culture. Correlations were observed between biochemical measures and model parameters. The model was successfully validated and used to simulate time-varying responses of engineered AF under shear and biaxial loading. AF cells seeded on nanofibrous scaffolds elaborated an organized, anisotropic AF-like extracellular matrix, resulting in improved mechanical properties. A hyperelastic fiber-reinforced constitutive model characterized the functional evolution of engineered AF constructs and was used to simulate physiologically relevant loading configurations. Model predictions demonstrated that fibers resist shear even when the shearing direction does not coincide with the fiber direction. Further, the model suggested that the native AF fiber architecture is uniquely designed to support shear stresses encountered under multiple loading configurations.
NASA Astrophysics Data System (ADS)
Gambacurta, D.; Grasso, M.; Vasseur, O.
2018-02-01
The second random-phase-approximation model corrected by a subtraction procedure designed to cure double counting, instabilities, and ultraviolet divergences, is employed for the first time to analyze the dipole strength and polarizability in 48Ca. All the terms of the residual interaction are included, leading to a fully self-consistent scheme. Results are illustrated with two Skyrme parametrizations, SGII and SLy4. Those obtained with the SGII interaction are particularly satisfactory. In this case, the low-lying strength below the neutron threshold is well reproduced and the giant dipole resonance is described in a very satisfactory way especially in its spreading and fragmentation. Spreading and fragmentation are produced in a natural way within such a theoretical model by the coupling of 1 particle-1 hole and 2 particle-2 hole configurations. Owing to this feature, we may provide for the electric polarizability as a function of the excitation energy a curve with a similar slope around the centroid energy of the giant resonance compared to the corresponding experimental results. This represents a considerable improvement with respect to previous theoretical predictions obtained with the random-phase approximation or with several ab-initio models. In such cases, the spreading width of the excitation cannot be reproduced and the polarizability as a function of the excitation energy displays a stiff increase around the predicted centroid energy of the giant resonance.
Probabilistic inference using linear Gaussian importance sampling for hybrid Bayesian networks
NASA Astrophysics Data System (ADS)
Sun, Wei; Chang, K. C.
2005-05-01
Probabilistic inference for Bayesian networks is in general NP-hard using either exact algorithms or approximate methods. However, for very complex networks, only approximate methods such as stochastic sampling can provide a solution given any time constraint. There are several simulation methods currently available. They include logic sampling (the first proposed stochastic method for Bayesian networks), the likelihood weighting algorithm (the most commonly used simulation method because of its simplicity and efficiency), the Markov blanket scoring method, and the importance sampling algorithm. In this paper, we first briefly review and compare these available simulation methods, then we propose an improved importance sampling algorithm called the linear Gaussian importance sampling algorithm for general hybrid models (LGIS). LGIS is aimed at hybrid Bayesian networks consisting of both discrete and continuous random variables with arbitrary distributions. It uses a linear function and Gaussian additive noise to approximate the true conditional probability distribution of a continuous variable given both its parents and evidence in a Bayesian network. One of the most important features of the newly developed method is that it can adaptively learn the optimal importance function from previous samples. We test the inference performance of LGIS using a 16-node linear Gaussian model and a 6-node general hybrid model. The performance comparison with other well-known methods such as junction tree (JT) and likelihood weighting (LW) shows that LGIS is very promising.
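To make the idea of a linear-Gaussian importance function concrete, here is a hedged toy sketch (not the authors' LGIS algorithm and without its adaptive learning step): a single continuous node X with a Gaussian prior and Gaussian evidence Y = aX + noise, where the importance function for X is the precision-weighted linear-Gaussian combination of prior and evidence, used to estimate P(y). All numbers are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy hybrid fragment (hypothetical): X ~ N(mu0, s0^2), evidence Y | X ~ N(a*X, se^2).
mu0, s0, a, se = 0.0, 2.0, 1.5, 1.0
y_obs = 3.0

def log_normal(x, mu, sigma):
    return -0.5 * ((x - mu) / sigma) ** 2 - np.log(sigma * np.sqrt(2 * np.pi))

# Linear-Gaussian importance function q(x) proportional to N(x; mu0, s0^2) * N(y_obs; a*x, se^2).
prec_q = 1.0 / s0**2 + a**2 / se**2
mu_q = (mu0 / s0**2 + a * y_obs / se**2) / prec_q
s_q = np.sqrt(1.0 / prec_q)

n = 100_000
x = rng.normal(mu_q, s_q, size=n)
log_w = (log_normal(x, mu0, s0) + log_normal(y_obs, a * x, se)
         - log_normal(x, mu_q, s_q))
p_y_est = np.exp(log_w).mean()

# Exact evidence for this linear-Gaussian pair: Y ~ N(a*mu0, a^2*s0^2 + se^2).
p_y_exact = np.exp(log_normal(y_obs, a * mu0, np.sqrt(a**2 * s0**2 + se**2)))
print(p_y_est, p_y_exact)
```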
NASA Astrophysics Data System (ADS)
Henderson, Douglas; Quintana, Jacqueline; Sokołowski, Stefan
1995-03-01
A comparison of Percus-Yevick-Pynn-Lado model theory and a density functional (DF) theory of nonuniform fluids of nonspherical particles is performed. The DF used is a new generalization of Tarazona's theory. The conclusion is that DF theory provides a preferable route to describe the system under consideration. Its accuracy can be improved with better approximation for the direct correlation function (DCF) for bulk system.
Nonequilibrium self-energy functional theory
NASA Astrophysics Data System (ADS)
Hofmann, Felix; Eckstein, Martin; Arrigoni, Enrico; Potthoff, Michael
2013-10-01
The self-energy functional theory (SFT) is generalized to describe the real-time dynamics of correlated lattice-fermion models far from thermal equilibrium. This is achieved by starting from a reformulation of the original equilibrium theory in terms of double-time Green's functions on the Keldysh-Matsubara contour. With the help of a generalized Luttinger-Ward functional, we construct a functional Ω̂[Σ] which is stationary at the physical (nonequilibrium) self-energy Σ and which yields the grand potential of the initial thermal state Ω at the physical point. Nonperturbative approximations can be defined by specifying a reference system that serves to generate trial self-energies. These self-energies are varied by varying the reference system's one-particle parameters on the Keldysh-Matsubara contour. In the case of thermal equilibrium, this approach reduces to the conventional SFT. Contrary to the equilibrium theory, however, “unphysical” variations, i.e., variations that are different on the upper and the lower branches of the Keldysh contour, must be considered to fix the time dependence of the optimal physical parameters via the variational principle. Functional derivatives in the nonequilibrium SFT Euler equation are carried out analytically to derive conditional equations for the variational parameters that are accessible to a numerical evaluation via a time-propagation scheme. Approximations constructed by means of the nonequilibrium SFT are shown to be inherently causal, internally consistent, and to respect macroscopic conservation laws resulting from gauge symmetries of the Hamiltonian. This comprises the nonequilibrium dynamical mean-field theory but also dynamical-impurity and variational-cluster approximations that are specified by reference systems with a finite number of degrees of freedom. In this way, nonperturbative and consistent approximations can be set up, the numerical evaluation of which is accessible to an exact-diagonalization approach.
Approximation Of Multi-Valued Inverse Functions Using Clustering And Sugeno Fuzzy Inference
NASA Technical Reports Server (NTRS)
Walden, Maria A.; Bikdash, Marwan; Homaifar, Abdollah
1998-01-01
Finding the inverse of a continuous function can be challenging and computationally expensive when the inverse function is multi-valued. Difficulties may be compounded when the function itself is difficult to evaluate. We show that we can use fuzzy-logic approximators such as Sugeno inference systems to compute the inverse on-line. To do so, a fuzzy clustering algorithm can be used in conjunction with a discriminating function to split the function data into branches for the different values of the forward function. These data sets are then fed into a recursive least-squares learning algorithm that finds the proper coefficients of the Sugeno approximators; each Sugeno approximator finds one value of the inverse function. Discussions about the accuracy of the approximation will be included.
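A much-simplified sketch of the branch-splitting step described above (a discriminating function splits the forward-function samples into branches and a least-squares approximator is fitted per branch); the Sugeno fuzzy blending and recursive least-squares learning of the paper are omitted, and the example function and basis are assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

# Forward-function samples of f(x) = x^2 on [-2, 2]; its inverse is two-valued.
x = rng.uniform(-2.0, 2.0, 600)
y = x**2

# Split the data into branches with a simple discriminating function
# (here the sign of x; a fuzzy clustering step would serve the same purpose).
branches = {"upper": x >= 0, "lower": x < 0}

# Fit one least-squares approximator of the inverse per branch: x ~ g_branch(y).
models = {}
for name, mask in branches.items():
    # basis [1, sqrt(y), y] is an assumption chosen to suit this example
    A = np.column_stack([np.ones(mask.sum()), np.sqrt(y[mask]), y[mask]])
    w, *_ = np.linalg.lstsq(A, x[mask], rcond=None)
    models[name] = w

def inverse(y_query):
    """Return both inverse values of y = x^2 predicted by the branch models."""
    a = np.array([1.0, np.sqrt(y_query), y_query])
    return {name: float(a @ w) for name, w in models.items()}

print(inverse(1.5))   # expect roughly +sqrt(1.5) and -sqrt(1.5)
```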
Symmetric Positive 4th Order Tensors & Their Estimation from Diffusion Weighted MRI
Barmpoutis, Angelos; Jian, Bing; Vemuri, Baba C.; Shepherd, Timothy M.
2009-01-01
In Diffusion Weighted Magnetic Resonance Image (DW-MRI) processing a 2nd order tensor has been commonly used to approximate the diffusivity function at each lattice point of the DW-MRI data. It is now well known that this 2nd-order approximation fails to approximate complex local tissue structures, such as fibers crossings. In this paper we employ a 4th order symmetric positive semi-definite (PSD) tensor approximation to represent the diffusivity function and present a novel technique to estimate these tensors from the DW-MRI data guaranteeing the PSD property. There have been several published articles in literature on higher order tensor approximations of the diffusivity function but none of them guarantee the positive semi-definite constraint, which is a fundamental constraint since negative values of the diffusivity coefficients are not meaningful. In our methods, we parameterize the 4th order tensors as a sum of squares of quadratic forms by using the so called Gram matrix method from linear algebra and its relation to the Hilbert’s theorem on ternary quartics. This parametric representation is then used in a nonlinear-least squares formulation to estimate the PSD tensors of order 4 from the data. We define a metric for the higher-order tensors and employ it for regularization across the lattice. Finally, performance of this model is depicted on synthetic data as well as real DW-MRI from an isolated rat hippocampus. PMID:17633709
Heisenberg-Langevin versus quantum master equation
NASA Astrophysics Data System (ADS)
Boyanovsky, Daniel; Jasnow, David
2017-12-01
The quantum master equation is an important tool in the study of quantum open systems. It is often derived under a set of approximations, chief among them the Born (factorization) and Markov (neglect of memory effects) approximations. In this article we study the paradigmatic model of quantum Brownian motion of a harmonic oscillator coupled to a bath of oscillators with a Drude-Ohmic spectral density. We obtain analytically the exact solution of the Heisenberg-Langevin equations, with which we study correlation functions in the asymptotic stationary state. We compare the exact correlation functions to those obtained in the asymptotic long time limit with the quantum master equation in the Born approximation with and without the Markov approximation. In the latter case we implement a systematic derivative expansion that yields the exact asymptotic limit under the factorization approximation only. We find discrepancies that could be significant when the bandwidth of the bath Λ is much larger than the typical scales of the system. We study the exact interaction energy as a proxy for the correlations missed by the Born approximation and find that its dependence on Λ is similar to the discrepancy between the exact solution and that of the quantum master equation in the Born approximation. We quantify the regime of validity of the quantum master equation in the Born approximation, with or without the Markov approximation, in terms of the system's relaxation rate γ, its unrenormalized natural frequency Ω, and Λ: γ/Ω ≪ 1 and also γΛ/Ω² ≪ 1. The reliability of the Born approximation is discussed within the context of recent experimental settings and more general environments.
Ion Thermal Conductivity and Ion Distribution Function in the Banana Regime
1988-04-01
An approximate collision operator which is more general than the model operator derived by Hirshman and Sigmar is presented. By use of this collision operator ... by Hirshman and Sigmar (1976). The finite aspect ratio correction is shown to increase the ion thermal conductivity by a factor of two in the ... The operator (12) is more general than that of Hirshman and Sigmar, which can be derived by approximating C_t (l = 0, 1, 2) in (12) by simpler forms.
Hybrid Discrete-Continuous Markov Decision Processes
NASA Technical Reports Server (NTRS)
Feng, Zhengzhu; Dearden, Richard; Meuleau, Nicholas; Washington, Rich
2003-01-01
This paper proposes a Markov decision process (MDP) model that features both discrete and continuous state variables. We extend previous work by Boyan and Littman on the one-dimensional time-dependent MDP to multiple dimensions. We present the principle of lazy discretization, and piecewise constant and linear approximations of the model. Having to deal with several continuous dimensions raises several new problems that require new solutions. In the (piecewise) linear case, we use techniques from partially observable MDPs (POMDPs) to represent value functions as sets of linear functions attached to different partitions of the state space.
NASA Astrophysics Data System (ADS)
Suzuki, Kunihiro
2009-04-01
Ion implantation profiles are expressed by the Pearson function with first, second, third, and fourth moment parameters Rp, ΔRp, γ, and β. We derived an analytical model for these profile moments by solving a Lindhard-Scharff-Schiøtt (LSS) integral equation using a perturbation approximation. This analytical model reproduces Monte Carlo data that were well calibrated to reproduce a vast experimental database. The extended LSS theory is vital for instantaneously predicting ion implantation profiles for any combination of incident ions and substrate atoms, including their energy dependence.
Approximating a retarded-advanced differential equation that models human phonation
NASA Astrophysics Data System (ADS)
Teodoro, M. Filomena
2017-11-01
In [1, 2, 3] we obtained the numerical solution of a linear mixed type functional differential equation (MTFDE) introduced initially in [4], considering the autonomous and non-autonomous cases by collocation, least squares, and finite element methods with a B-spline basis set. The present work introduces a numerical scheme using the least squares method (LSM) and Gaussian basis functions to solve numerically a nonlinear mixed type equation with symmetric delay and advance which models human phonation. The preliminary results are promising. We obtain an accuracy comparable with the previous results.
Optimal remediation of unconfined aquifers: Numerical applications and derivative calculations
NASA Astrophysics Data System (ADS)
Mansfield, Christopher M.; Shoemaker, Christine A.
1999-05-01
This paper extends earlier work on derivative-based optimization for cost-effective remediation to unconfined aquifers, which have more complex, nonlinear flow dynamics than confined aquifers. Most previous derivative-based optimization of contaminant removal has been limited to consideration of confined aquifers; however, contamination is more common in unconfined aquifers. Exact derivative equations are presented, and two computationally efficient approximations, the quasi-confined (QC) and head independent from previous (HIP) unconfined-aquifer finite element equation derivative approximations, are presented and demonstrated to be highly accurate. The derivative approximations can be used with any nonlinear optimization method requiring derivatives for computation of either time-invariant or time-varying pumping rates. The QC and HIP approximations are combined with the nonlinear optimal control algorithm SALQR into the unconfined-aquifer algorithm, which is shown to compute solutions for unconfined aquifers in CPU times that were not significantly longer than those required by the confined-aquifer optimization model. Two of the three example unconfined-aquifer cases considered obtained pumping policies with substantially lower objective function values with the unconfined model than were obtained with the confined-aquifer optimization, even though the mean differences in hydraulic heads predicted by the unconfined- and confined-aquifer models were small (less than 0.1%). We suggest a possible geophysical index based on differences in drawdown predictions between unconfined- and confined-aquifer models to estimate which aquifers require unconfined-aquifer optimization and which can be adequately approximated by the simpler confined-aquifer analysis.
Scaling and percolation in the small-world network model
NASA Astrophysics Data System (ADS)
Newman, M. E. J.; Watts, D. J.
1999-12-01
In this paper we study the small-world network model of Watts and Strogatz, which mimics some aspects of the structure of networks of social interactions. We argue that there is one nontrivial length-scale in the model, analogous to the correlation length in other systems, which is well-defined in the limit of infinite system size and which diverges continuously as the randomness in the network tends to zero, giving a normal critical point in this limit. This length-scale governs the crossover from large- to small-world behavior in the model, as well as the number of vertices in a neighborhood of given radius on the network. We derive the value of the single critical exponent controlling behavior in the critical region and the finite size scaling form for the average vertex-vertex distance on the network, and, using series expansion and Padé approximants, find an approximate analytic form for the scaling function. We calculate the effective dimension of small-world graphs and show that this dimension varies as a function of the length-scale on which it is measured, in a manner reminiscent of multifractals. We also study the problem of site percolation on small-world networks as a simple model of disease propagation, and derive an approximate expression for the percolation probability at which a giant component of connected vertices first forms (in epidemiological terms, the point at which an epidemic occurs). The typical cluster radius satisfies the expected finite size scaling form with a cluster size exponent close to that for a random graph. All our analytic results are confirmed by extensive numerical simulations of the model.
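A quick numerical illustration of the large- to small-world crossover discussed above, sketched under the assumption that the networkx library is available; it reproduces the qualitative behavior only, not the paper's series-expansion or scaling results.

```python
import networkx as nx
import numpy as np

n, k = 1000, 4   # ring of n vertices, each connected to its k nearest neighbours
for p in [0.0, 0.001, 0.01, 0.1, 1.0]:           # shortcut (rewiring) probability
    # average the mean vertex-vertex distance over a few random realizations
    ell = np.mean([nx.average_shortest_path_length(
                       nx.connected_watts_strogatz_graph(n, k, p, seed=s))
                   for s in range(5)])
    print(f"p = {p:<6}  mean vertex-vertex distance ~ {ell:.1f}")
```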
Statistical time-dependent model for the interstellar gas
NASA Technical Reports Server (NTRS)
Gerola, H.; Kafatos, M.; Mccray, R.
1974-01-01
We present models for temperature and ionization structure of low, uniform-density (approximately 0.3 per cu cm) interstellar gas in a galactic disk which is exposed to soft X rays from supernova outbursts occurring randomly in space and time. The structure was calculated by computing the time record of temperature and ionization at a given point by Monte Carlo simulation. The calculation yields probability distribution functions for ionized fraction, temperature, and their various observable moments. These time-dependent models predict a bimodal temperature distribution of the gas that agrees with various observations. Cold regions in the low-density gas may have the appearance of clouds in 21-cm absorption. The time-dependent model, in contrast to the steady-state model, predicts large fluctuations in ionization rate and the existence of cold (approximately 30 K), ionized (ionized fraction equal to about 0.1) regions.
Hermite Functional Link Neural Network for Solving the Van der Pol-Duffing Oscillator Equation.
Mall, Susmita; Chakraverty, S
2016-08-01
A Hermite polynomial-based functional link artificial neural network (FLANN) is proposed here to solve the Van der Pol-Duffing oscillator equation. A single-layer Hermite neural network (HeNN) model is used, where a hidden layer is replaced by an expansion block of the input pattern using Hermite orthogonal polynomials. A feedforward neural network model with the unsupervised error backpropagation principle is used for modifying the network parameters and minimizing the computed error function. The Van der Pol-Duffing and Duffing oscillator equations may not be solved exactly. Here, approximate solutions of these types of equations have been obtained by applying the HeNN model for the first time. Three mathematical example problems and two real-life application problems of the Van der Pol-Duffing oscillator equation, extracting the features of an early mechanical failure signal and weak signal detection, are solved using the proposed HeNN method. HeNN approximate solutions have been compared with results obtained by the well-known Runge-Kutta method. Computed results are depicted in terms of graphs. After training the HeNN model, we may use it as a black box to get numerical results at any arbitrary point in the domain. Thus, the proposed HeNN method is efficient. The results reveal that this method is reliable and can be applied to other nonlinear problems too.
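As a rough illustration of the functional-link expansion itself (not the paper's unsupervised error-backpropagation training on the ODE residual), the sketch below expands a scalar input with Hermite polynomials and fits the single layer of output weights by ordinary least squares to a known target; the target function and degree are assumptions chosen for the demonstration.

```python
import numpy as np
from numpy.polynomial.hermite import hermvander

# Functional-link expansion: columns are Hermite polynomials H_0..H_degree evaluated at x.
x = np.linspace(-1.0, 1.0, 200)
degree = 8
Phi = hermvander(x, degree)

# Target to approximate (a stand-in for the solution a trained HeNN would produce).
target = np.cos(3 * x) + 0.5 * x**2

# Single layer of adjustable output weights, fitted here by least squares for brevity.
w, *_ = np.linalg.lstsq(Phi, target, rcond=None)
approx = Phi @ w
print("max abs error:", float(np.max(np.abs(approx - target))))
```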
Combined Henyey-Greenstein and Rayleigh phase function.
Liu, Quanhua; Weng, Fuzhong
2006-10-01
The phase function is an important parameter that affects the distribution of scattered radiation. In Rayleigh scattering, a scatterer is approximated by a dipole, and its phase function is analytically related to the scattering angle. For the Henyey-Greenstein (HG) approximation, the phase function preserves only the correct asymmetry factor (i.e., the first moment), which is essentially important for anisotropic scattering. When the HG function is applied to small particles, it produces a significant error in radiance. In addition, the HG function is applied only for an intensity radiative transfer. We develop a combined HG and Rayleigh (HG-Rayleigh) phase function. The HG phase function plays the role of modulator extending the application of the Rayleigh phase function for small asymmetry scattering. The HG-Rayleigh phase function guarantees the correct asymmetry factor and is valid for a polarization radiative transfer. It approaches the Rayleigh phase function for small particles. Thus the HG-Rayleigh phase function has wider applications for both intensity and polarimetric radiative transfers. For microwave radiative transfer modeling in this study, the largest errors in the brightness temperature calculations for weak asymmetry scattering are generally below 0.02 K by using the HG-Rayleigh phase function. The errors can be much larger, in the 1-3 K range, if the Rayleigh and HG functions are applied separately.
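The sketch below illustrates the modulation idea in the simplest hedged form: an HG factor multiplying the Rayleigh phase function, renormalized numerically so it integrates to one over the sphere. The closed-form normalization used by the authors, which preserves the asymmetry factor exactly, is not reproduced here; the printed first moment is only that of this simplified product.

```python
import numpy as np

def hg(mu, g):
    """Henyey-Greenstein phase function of mu = cos(scattering angle), asymmetry g,
    normalized so that (1/2) * integral over mu in [-1, 1] equals 1."""
    return (1.0 - g**2) / (1.0 + g**2 - 2.0 * g * mu) ** 1.5

def rayleigh(mu):
    """Rayleigh phase function, (3/4) * (1 + mu^2), same normalization convention."""
    return 0.75 * (1.0 + mu**2)

def trapezoid(f_vals, x_vals):
    return np.sum(0.5 * (f_vals[1:] + f_vals[:-1]) * np.diff(x_vals))

def hg_rayleigh(mu, g, n_quad=4001):
    """HG-modulated Rayleigh phase function, renormalized numerically."""
    grid = np.linspace(-1.0, 1.0, n_quad)
    norm = 0.5 * trapezoid(hg(grid, g) * rayleigh(grid), grid)
    return hg(mu, g) * rayleigh(mu) / norm

grid = np.linspace(-1.0, 1.0, 4001)
for g in [0.0, 0.2, 0.5]:
    p = hg_rayleigh(grid, g)
    asym = 0.5 * trapezoid(grid * p, grid)   # first moment of this simplified product
    print(f"g = {g}: forward value P(mu=1) = {hg_rayleigh(np.array([1.0]), g)[0]:.3f}, "
          f"first moment ~ {asym:.3f}")
```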
Daniel J. Leduc; Thomas G. Matney; Keith L. Belli; V. Clark Baldwin
2001-01-01
Artificial neural networks (NN) are becoming a popular estimation tool. Because they require no assumptions about the form of a fitting function, they can free the modeler from reliance on parametric approximating functions that may or may not satisfactorily fit the observed data. To date there have been few applications in forestry science, but as better NN software...
Quasi-radial modes of rotating stars in general relativity
NASA Astrophysics Data System (ADS)
Yoshida, Shin'ichirou; Eriguchi, Yoshiharu
2001-04-01
By using the Cowling approximation, quasi-radial modes of rotating general relativistic stars are computed along equilibrium sequences from non-rotating to maximally rotating models. The eigenfrequencies of these modes are decreasing functions of the rotational frequency. The eigenfrequency curve of each mode as a function of the rotational frequency has discontinuities, which arise from the avoided crossing with other curves of axisymmetric modes.
Function approximation using combined unsupervised and supervised learning.
Andras, Peter
2014-03-01
Function approximation is one of the core tasks that are solved using neural networks in the context of many engineering problems. However, good approximation results require good sampling of the data space, which usually demands an exponentially increasing volume of data as the dimensionality of the data increases. At the same time, high-dimensional data are often arranged around a much lower dimensional manifold. Here we propose breaking the function approximation task for high-dimensional data into two steps: (1) mapping the high-dimensional data onto a lower dimensional space corresponding to the manifold on which the data reside and (2) approximating the function using the mapped lower dimensional data. We use over-complete self-organizing maps (SOMs) for the mapping through unsupervised learning, and single hidden layer neural networks for the function approximation through supervised learning. We also extend the two-step procedure by considering support vector machines and Bayesian SOMs for the determination of the best parameters for the nonlinear neurons in the hidden layer of the neural networks used for the function approximation. We compare the approximation performance of the proposed neural networks using a set of functions and show that in most cases the neural networks using combined unsupervised and supervised learning outperform the neural networks that learn the function approximation using the original high-dimensional data.
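The sketch below illustrates the two-step idea on toy data: an unsupervised self-organizing map assigns each high-dimensional sample to coordinates on a 2-D grid, and a supervised single-hidden-layer network then approximates the target function from those coordinates. The data, grid size, training schedule, and the use of scikit-learn's MLPRegressor are assumptions, not the paper's experimental setup (which also considers over-complete and Bayesian SOMs).

import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Toy data: 10-D points that actually live near a 2-D manifold
u, v = rng.uniform(-1, 1, (2, 2000))
X = np.stack([u, v, u * v, u**2, v**2, np.sin(u), np.cos(v), u + v, u - v, u * v**2], axis=1)
X += 0.01 * rng.normal(size=X.shape)
y = np.sin(np.pi * u) * np.cos(np.pi * v)        # target function of the latent coordinates

# Step 1: train a small SOM (online updates, Gaussian neighbourhood)
gw, gh = 15, 15
grid = np.array([(i, j) for i in range(gw) for j in range(gh)], dtype=float)
W = rng.normal(scale=0.1, size=(gw * gh, X.shape[1]))
n_iter = 20000
for t in range(n_iter):
    x = X[rng.integers(len(X))]
    bmu = np.argmin(((W - x) ** 2).sum(axis=1))              # best-matching unit
    lr = 0.5 * (0.01 / 0.5) ** (t / n_iter)                  # decaying learning rate
    sigma = 4.0 * (0.5 / 4.0) ** (t / n_iter)                # shrinking neighbourhood
    h = np.exp(-((grid - grid[bmu]) ** 2).sum(axis=1) / (2 * sigma**2))
    W += lr * h[:, None] * (x - W)

# Map every sample to the grid coordinates of its best-matching unit
bmus = np.argmin(((X[:, None, :] - W[None, :, :]) ** 2).sum(axis=2), axis=1)
Z = grid[bmus] / [gw, gh]                                    # low-dimensional representation

# Step 2: supervised function approximation on the mapped data
mlp = MLPRegressor(hidden_layer_sizes=(30,), max_iter=3000, random_state=0).fit(Z, y)
print("training R^2 on SOM coordinates:", round(mlp.score(Z, y), 3))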
NASA Astrophysics Data System (ADS)
Sahni, V.; Ma, C. Q.
1980-12-01
The inhomogeneous electron gas at a jellium metal surface is studied in the Hartree-Fock approximation by Kohn-Sham density functional theory. Rigorous upper bounds to the surface energy are derived by application of the Rayleigh-Ritz variational principle for the energy, the surface kinetic, electrostatic, and nonlocal exchange energy functionals being determined exactly for the accurate linear-potential model electronic wave functions. The densities obtained by the energy minimization constraint are then employed to determine work-function results via the variationally accurate "displaced-profile change-in-self-consistent-field" expression. The theoretical basis of this non-self-consistent procedure and its demonstrated accuracy for the fully correlated system (as treated within the local-density approximation for exchange and correlation) leads us to conclude these results for the surface energies and work functions to be essentially exact. Work-function values are also determined by the Koopmans'-theorem expression, both for these densities as well as for those obtained by satisfaction of the constraint set on the electrostatic potential by the Budd-Vannimenus theorem. The use of the Hartree-Fock results in the accurate estimation of correlation-effect contributions to these surface properties of the nonuniform electron gas is also indicated. In addition, the original work and approximations made by Bardeen in this attempt at a solution of the Hartree-Fock problem are briefly reviewed in order to contrast with the present work.
NASA Astrophysics Data System (ADS)
Flory, Curt A.; Musgrave, Charles B.; Zhang, Zhiyong
2008-05-01
A number of physical processes involving quantum dots depend critically upon the “evanescent” electron eigenstate wave function that extends outside of the material surface into the surrounding region. These processes include electron tunneling through quantum dots, as well as interactions between multiple quantum dot structures. In order to unambiguously determine these evanescent fields, appropriate boundary conditions have been developed to connect the electronic solutions interior to the semiconductor quantum dot to exterior vacuum solutions. In standard envelope function theory, the interior wave function consists of products of band edge and envelope functions, and both must be considered when matching to the external solution. While the envelope functions satisfy tractable equations, the band edge functions are generally not known. In this work, symmetry arguments in the spherically symmetric approximation are used in conjunction with the known qualitative behavior of bonding and antibonding orbitals to catalog the behavior of the band edge functions at the unit cell boundary. This physical approximation allows consolidation of the influence of the band edge functions to two simple surface parameters that are incorporated into the boundary conditions and are straightforwardly computed by using numerical first-principles quantum techniques. These new boundary conditions are employed to analyze an isolated spherically symmetric semiconductor quantum dot in vacuum within the analytical model of Sercel and Vahala [Phys. Rev. Lett. 65, 239 (1990); Phys. Rev. B 42, 3690 (1990)]. Results are obtained for quantum dots made of GaAs and InP, which are compared with ab initio calculations that have appeared in the literature.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Engberg, L; KTH Royal Institute of Technology, Stockholm; Eriksson, K
Purpose: To formulate objective functions of a multicriteria fluence map optimization model that correlate well with plan quality metrics, and to solve this multicriteria model by convex approximation. Methods: In this study, objectives of a multicriteria model are formulated to explicitly either minimize or maximize a dose-at-volume measure. Given the widespread agreement that dose-at-volume levels play important roles in plan quality assessment, these objectives correlate well with plan quality metrics. This is in contrast to the conventional objectives, which are to maximize clinical goal achievement by relating to deviations from given dose-at-volume thresholds: while balancing the new objectives means explicitly balancing dose-at-volume levels, balancing the conventional objectives effectively means balancing deviations. Constituted by the inherently non-convex dose-at-volume measure, the new objectives are approximated by the convex mean-tail-dose measure (CVaR measure), yielding a convex approximation of the multicriteria model. Results: Advantages of using the convex approximation are investigated through juxtaposition with the conventional objectives in a computational study of two patient cases. Clinical goals of each case respectively point out three ROI dose-at-volume measures to be considered for plan quality assessment. This is translated in the convex approximation into minimizing three mean-tail-dose measures. Evaluations of the three ROI dose-at-volume measures on Pareto optimal plans are used to represent plan quality of the Pareto sets. Besides providing increased accuracy in terms of feasibility of solutions, the convex approximation generates Pareto sets with overall improved plan quality. In one case, the Pareto set generated by the convex approximation entirely dominates that generated with the conventional objectives. Conclusion: The initial computational study indicates that the convex approximation outperforms the conventional objectives in aspects of accuracy and plan quality.
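The following sketch shows the kind of convex surrogate described above: a mean-tail-dose (CVaR-style) objective over the hottest fraction of OAR voxels, expressed with the standard Rockafellar-Uryasev auxiliary variable and solved as a small convex program. The dose-influence matrices, prescription level, and single coverage constraint are synthetic assumptions, not the clinical setup of the study.

import numpy as np
import cvxpy as cp

rng = np.random.default_rng(1)
n_beamlets, n_target, n_oar = 60, 200, 200
A_target = rng.uniform(0.5, 1.5, (n_target, n_beamlets))   # synthetic dose-influence matrices
A_oar = rng.uniform(0.0, 0.8, (n_oar, n_beamlets))

x = cp.Variable(n_beamlets, nonneg=True)      # beamlet weights (fluence)
t = cp.Variable()                             # CVaR auxiliary threshold
alpha = 0.9                                   # mean-tail dose over the hottest 10% of OAR voxels
d_oar = A_oar @ x
cvar = t + cp.sum(cp.pos(d_oar - t)) / ((1 - alpha) * n_oar)

prescription = 60.0
constraints = [A_target @ x >= prescription]  # simple target-coverage constraint
prob = cp.Problem(cp.Minimize(cvar), constraints)
prob.solve()
print("solver status:", prob.status)
print("optimal mean-tail OAR dose:", round(float(cvar.value), 2))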
Weiss, M; Stedtler, C; Roberts, M S
1997-09-01
The dispersion model with mixed boundary conditions uses a single parameter, the dispersion number, to describe the hepatic elimination of xenobiotics and endogenous substances. An implicit a priori assumption of the model is that the transit time density of intravascular indicators is approximated by an inverse Gaussian distribution. This approximation is limited in that the model poorly describes the tail part of the hepatic outflow curves of vascular indicators. A sum of two inverse Gaussian functions is proposed as an alternative, more flexible empirical model for transit time densities of vascular references. This model suggests that a more accurate description of the tail portion of vascular reference curves yields an elimination rate constant (or intrinsic clearance) which is 40% less than predicted by the dispersion model with mixed boundary conditions. The results emphasize the need to describe outflow curves accurately when using them as a basis for determining pharmacokinetic parameters with hepatic elimination models.
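As a rough illustration of the proposed empirical model, the sketch below fits a weighted sum of two inverse Gaussian densities to a synthetic outflow curve by nonlinear least squares. The density parameterization is the standard (mean, shape) form; the data, weights, and starting values are assumptions.

import numpy as np
from scipy.optimize import curve_fit

def inv_gauss(t, mean, lam):
    # Inverse Gaussian probability density with mean `mean` and shape parameter `lam`
    return np.sqrt(lam / (2 * np.pi * t**3)) * np.exp(-lam * (t - mean) ** 2 / (2 * mean**2 * t))

def two_ig(t, w, m1, l1, m2, l2):
    # Weighted sum of two inverse Gaussians (weights w and 1 - w)
    return w * inv_gauss(t, m1, l1) + (1 - w) * inv_gauss(t, m2, l2)

# Synthetic outflow curve with a heavy tail plus measurement noise
t = np.linspace(0.5, 60.0, 240)
rng = np.random.default_rng(2)
true = two_ig(t, 0.7, 8.0, 30.0, 25.0, 15.0)
data = true + 0.0005 * rng.normal(size=t.size)

p0 = [0.5, 5.0, 20.0, 20.0, 10.0]                  # initial guess
bounds = ([0, 0.1, 0.1, 0.1, 0.1], [1, 100, 500, 100, 500])
popt, _ = curve_fit(two_ig, t, data, p0=p0, bounds=bounds)
print("fitted weight and means:", popt[0], popt[1], popt[3])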
NASA Astrophysics Data System (ADS)
Badillo-Olvera, A.; Begovich, O.; Peréz-González, A.
2017-01-01
The present paper is motivated by the detection and isolation of a single leak using the Fault Model Approach (FMA) in pipelines whose geometry changes along their length. Such changes generate a pressure drop different from that produced by friction alone, which is a common scenario in real pipeline systems. The problem arises because the dynamical model of the fluid in a pipeline considers only straight geometries without fittings. To address this situation, several papers work with a virtual pipeline model based on an equivalent straight length, so that the friction produced by the fittings is taken into account. However, when this method is applied, the leak is isolated at a position along the virtual length, which for practical purposes is not a complete solution. As a solution to the problem of leak isolation in a virtual length, this research proposes the use of a polynomial interpolation function to approximate the conversion of the virtual position into real coordinates. Experimental results on a real prototype are shown, and the proposed methodology performs well.
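A minimal sketch of the final conversion step is shown below: a low-order polynomial fitted to (virtual position, real position) calibration pairs maps the leak position isolated on the equivalent straight length back to real pipeline coordinates. The calibration pairs, polynomial degree, and example leak position are invented for illustration.

import numpy as np

# (virtual position, real position) calibration pairs, e.g. taken at known fittings [m]
virtual = np.array([0.0, 12.5, 31.0, 52.3, 78.9, 101.4])
real = np.array([0.0, 10.0, 25.0, 40.0, 60.0, 75.0])

coeffs = np.polyfit(virtual, real, deg=3)          # cubic interpolation function
to_real = np.poly1d(coeffs)

z_virtual = 45.7                                   # leak position isolated on the virtual length
print("estimated real leak position [m]:", round(float(to_real(z_virtual)), 2))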
Analysis of an electrohydraulic aircraft control surface servo and comparison with test results
NASA Technical Reports Server (NTRS)
Edwards, J. W.
1972-01-01
An analysis of an electrohydraulic aircraft control-surface system is made in which the system is modeled as a lumped, two-mass, spring-coupled system controlled by a servo valve. Both linear and nonlinear models are developed, and the effects of hinge-moment loading are included. Transfer functions of the system and approximate literal factors of the transfer functions for several cases are presented. The damping action of dynamic pressure feedback is analyzed. Comparisons of the model responses with results from tests made on a highly resonant rudder control-surface servo indicate the adequacy of the model. The effects of variations in hinge-moment loading are illustrated.
Li, Changyang; Wang, Xiuying; Eberl, Stefan; Fulham, Michael; Yin, Yong; Dagan Feng, David
2015-01-01
Automated and general medical image segmentation can be challenging because the foreground and the background may have complicated and overlapping density distributions in medical imaging. Conventional region-based level set algorithms often assume piecewise constant or piecewise smooth for segments, which are implausible for general medical image segmentation. Furthermore, low contrast and noise make identification of the boundaries between foreground and background difficult for edge-based level set algorithms. Thus, to address these problems, we suggest a supervised variational level set segmentation model to harness the statistical region energy functional with a weighted probability approximation. Our approach models the region density distributions by using the mixture-of-mixtures Gaussian model to better approximate real intensity distributions and distinguish statistical intensity differences between foreground and background. The region-based statistical model in our algorithm can intuitively provide better performance on noisy images. We constructed a weighted probability map on graphs to incorporate spatial indications from user input with a contextual constraint based on the minimization of contextual graphs energy functional. We measured the performance of our approach on ten noisy synthetic images and 58 medical datasets with heterogeneous intensities and ill-defined boundaries and compared our technique to the Chan-Vese region-based level set model, the geodesic active contour model with distance regularization, and the random walker model. Our method consistently achieved the highest Dice similarity coefficient when compared to the other methods.
Martin, Guillaume; Roques, Lionel
2016-01-01
Various models describe asexual evolution by mutation, selection, and drift. Some focus directly on fitness, typically modeling drift but ignoring or simplifying both epistasis and the distribution of mutation effects (traveling wave models). Others follow the dynamics of quantitative traits determining fitness (Fisher’s geometric model), imposing a complex but fixed form of mutation effects and epistasis, and often ignoring drift. In all cases, predictions are typically obtained in high or low mutation rate limits and for long-term stationary regimes, thus losing information on transient behaviors and the effect of initial conditions. Here, we connect fitness-based and trait-based models into a single framework, and seek explicit solutions even away from stationarity. The expected fitness distribution is followed over time via its cumulant generating function, using a deterministic approximation that neglects drift. In several cases, explicit trajectories for the full fitness distribution are obtained for arbitrary mutation rates and standing variance. For nonepistatic mutations, especially with beneficial mutations, this approximation fails over the long term but captures the early dynamics, thus complementing stationary stochastic predictions. The approximation also handles several diminishing returns epistasis models (e.g., with an optimal genotype); it can be applied at and away from equilibrium. General results arise at equilibrium, where fitness distributions display a “phase transition” with mutation rate. Beyond this phase transition, in Fisher’s geometric model, the full trajectory of fitness and trait distributions takes a simple form that is robust to the details of the mutant phenotype distribution. Analytical arguments are explored regarding why and when the deterministic approximation applies. PMID:27770037
Non-Equilibrium Dynamics with Quantum Monte Carlo
NASA Astrophysics Data System (ADS)
Dong, Qiaoyuan
This work is motivated by the fact that the investigation of non-equilibrium phenomena in strongly correlated electron systems has developed into one of the most active and exciting branches of condensed matter physics, as it provides rich new insights that could not be obtained from the study of equilibrium situations. However, a theoretical description of those phenomena is missing. Therefore, in this thesis, we develop a numerical method that can be used to study two minimal models--the Hubbard model and the Anderson impurity model with general parameter range and time dependence. We begin by introducing the theoretical framework and the general features of the Hubbard model. We then describe the dynamical mean field theory (DMFT), which was first invented by Georges in 1992. It provides a feasible way to approach strongly correlated electron systems and reduces the complexity of the calculations via a mapping of lattice models onto quantum impurity models subject to a self-consistency condition. We employ the non-equilibrium extension of DMFT and map the Hubbard model to the single impurity Anderson model (SIAM). Since the fundamental component of the DMFT method is a solver of the single impurity Anderson model, we continue with a description of the formalism to study the real-time dynamics of the impurity model starting at its thermal equilibrium state. We utilize the non-equilibrium strong-coupling perturbation theory and derive semi-analytical approximation methods such as the non-crossing approximation (NCA) and the one-crossing approximation (OCA). We then use the quantum Monte Carlo method (QMC) as a numerically exact method and present proper measurements of local observables, currents, and Green's functions. We perform simulations of the current after a quantum quench from equilibrium by rapidly applying a bias voltage in a wide range of initial temperatures. The current exhibits short equilibration times and saturates upon the decrease of temperature at all times, indicating Kondo behavior both in the transient regime and in the steady state. However, this bare QMC solver suffers from a dynamical sign problem for long time propagations. To overcome the limitations of this bare treatment, we introduce the "Inchworm algorithm", based on iteratively reusing the information obtained in previous steps to extend the propagation to longer times and stabilize the calculations. We show that this algorithm greatly reduces the required order for each simulation and re-scales the exponential challenge to quadratic in time. We introduce a method to compute Green's functions, spectral functions, and currents for inchworm Monte Carlo and show how systematic error assessments in real time can be obtained. We illustrate the capabilities of the algorithm with a study of the behavior of quantum impurities after an instantaneous voltage quench from a thermal equilibrium state. We conclude with the applications of the unbiased inchworm impurity solver to DMFT calculations. We employ the methods for a study of the one-band paramagnetic Hubbard model on the Bethe lattice in equilibrium, where the DMFT approximation becomes exact. We begin with a brief introduction of the Mott metal-insulator phase diagram. We present the results of both real-time Green's functions and spectral functions from our non-equilibrium calculations. We observe the metal-insulator crossover as the on-site interaction is increased and the formation of a quasi-particle peak as the temperature is lowered.
We also illustrate the convergence of our algorithms in different aspects.
A Two-dimensional Version of the Niblett-Bostick Transformation for Magnetotelluric Interpretations
NASA Astrophysics Data System (ADS)
Esparza, F.
2005-05-01
An imaging technique for two-dimensional magnetotelluric interpretations is developed following the well known Niblett-Bostick transformation for one-dimensional profiles. The algorithm uses a Hopfield artificial neural network to process series and parallel magnetotelluric impedances along with their analytical influence functions. The adaptive, weighted average approximation preserves part of the nonlinearity of the original problem. No initial model in the usual sense is required for the recovery of a functional model. Rather, the built-in relationship between model and data considers automatically, all at the same time, many half spaces whose electrical conductivities vary according to the data. The use of series and parallel impedances, a self-contained pair of invariants of the impedance tensor, avoids the need to decide on best angles of rotation for TE and TM separations. Field data from a given profile can thus be fed directly into the algorithm without much processing. The solutions offered by the Hopfield neural network correspond to spatial averages computed through rectangular windows that can be chosen at will. Applications of the algorithm to simple synthetic models and to the COPROD2 data set illustrate the performance of the approximation.
A New Higher-Order Composite Theory for Analysis and Design of High Speed Tilt-Rotor Blades
NASA Technical Reports Server (NTRS)
McCarthy, Thomas Robert
1996-01-01
A higher-order theory is developed to model composite box beams with arbitrary wall thicknesses. The theory, based on a refined displacement field, represents a three-dimensional model which approximates the elasticity solution. Therefore, the cross-sectional properties are not reduced to one-dimensional beam parameters. Both inplane and out-of-plane warping are automatically included in the formulation. The model accurately captures the transverse shear stresses through the thickness of each wall while satisfying all stress-free boundary conditions. Several numerical results are presented to validate the present theory. The developed theory is then used to model the load carrying member of a tilt-rotor blade which has thick-walled sections. The composite structural analysis is coupled with an aerodynamic analysis to compute the aeroelastic stability of the blade. Finally, a multidisciplinary optimization procedure is developed to improve the aerodynamic, structural and aeroelastic performance of the tilt-rotor aircraft. The Kreisselmeier-Steinhauser function is used to formulate the multiobjective function problem and a hybrid approximate analysis is used to reduce the computational effort. The optimum results are compared with the baseline values and show significant improvements in the overall performance of the tilt-rotor blade.
Preserving Lagrangian Structure in Nonlinear Model Reduction with Application to Structural Dynamics
Carlberg, Kevin; Tuminaro, Ray; Boggs, Paul
2015-03-11
Our work proposes a model-reduction methodology that preserves Lagrangian structure and achieves computational efficiency in the presence of high-order nonlinearities and arbitrary parameter dependence. As such, the resulting reduced-order model retains key properties such as energy conservation and symplectic time-evolution maps. We focus on parameterized simple mechanical systems subjected to Rayleigh damping and external forces, and consider an application to nonlinear structural dynamics. To preserve structure, the method first approximates the system's “Lagrangian ingredients”---the Riemannian metric, the potential-energy function, the dissipation function, and the external force---and subsequently derives reduced-order equations of motion by applying the (forced) Euler--Lagrange equation with these quantities. Moreover, from the algebraic perspective, key contributions include two efficient techniques for approximating parameterized reduced matrices while preserving symmetry and positive definiteness: matrix gappy proper orthogonal decomposition and reduced-basis sparsification. Our results for a parameterized truss-structure problem demonstrate the practical importance of preserving Lagrangian structure and illustrate the proposed method's merits: it reduces computation time while maintaining high accuracy and stability, in contrast to existing nonlinear model-reduction techniques that do not preserve structure.
NASA Astrophysics Data System (ADS)
Zhu, Rencheng; Li, Shunyi; Bao, Xiaofeng; Dumont, Éric
2017-02-01
The performances of two identical biofilters, filled with a new composite packing material (named CM-5) embedded with functional microorganisms or sterilized CM-5 without microorganisms, were investigated for H2S treatment. Running parameters in terms of microbial counts, pressure drops, and inlet and outlet H2S concentrations were measured. The results show that the microbial count of the CM-5 was approximately 10⁵ CFU/g before being filled into the biofilter, while that of the sterilized CM-5 was negligible. The functional microorganisms embedded in CM-5 adapted quickly to the environment containing H2S. In most cases, pressure drops of the CM-5 biofilter were slightly higher than those of the sterilized CM-5 biofilter when the gas flow rate was 0.6-2.5 m3/h. The maximum elimination capacity (EC) of the CM-5 biofilter in treating H2S could reach up to 65 g/(m3·h) when the loading rate (LR) was approximately 80 g/(m3·h). If the LR was much higher, the measured EC showed a slight downward trend. The experimental ECs of the biofilters were fitted by two typical dynamic models: the Michaelis-Menten model and the Haldane model. Compared with the Michaelis-Menten model, the Haldane model fit the experimental ECs better for the two biofilters because of the presence of substrate inhibition behaviour.
Comparing fixed and variable-width Gaussian networks.
Kůrková, Věra; Kainen, Paul C
2014-09-01
The role of width of Gaussians in two types of computational models is investigated: Gaussian radial-basis-functions (RBFs) where both widths and centers vary and Gaussian kernel networks which have fixed widths but varying centers. The effect of width on functional equivalence, universal approximation property, and form of norms in reproducing kernel Hilbert spaces (RKHS) is explored. It is proven that if two Gaussian RBF networks have the same input-output functions, then they must have the same numbers of units with the same centers and widths. Further, it is shown that while sets of input-output functions of Gaussian kernel networks with two different widths are disjoint, each such set is large enough to be a universal approximator. Embedding of RKHSs induced by "flatter" Gaussians into RKHSs induced by "sharper" Gaussians is described and growth of the ratios of norms on these spaces with increasing input dimension is estimated. Finally, large sets of argminima of error functionals in sets of input-output functions of Gaussian RBFs are described. Copyright © 2014 Elsevier Ltd. All rights reserved.
Monte Carlo turbulence simulation using rational approximations to von Karman spectra
NASA Technical Reports Server (NTRS)
Campbell, C. W.
1986-01-01
Turbulence simulation is computationally much simpler using rational spectra, but turbulence falls off as f^(-5/3) in frequency ranges of interest to aircraft response, as predicted by von Karman's model. Rational approximations to von Karman spectra should satisfy three requirements: (1) the rational spectra should provide a good approximation to the von Karman spectra in the frequency range of interest; (2) for stability, the resulting rational transfer function should have all its poles in the left half-plane; and (3) at high frequencies, the rational spectra must fall off as an integer power of frequency, and since the -2 power is closest to the -5/3 power, the rational approximation should roll off as the -2 power at high frequencies. Rational approximations to von Karman spectra that satisfy these three criteria are presented, along with spectra from simulated turbulence. Agreement between the spectra of the simulated turbulence and the von Karman spectra is excellent.
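A hedged sketch of the idea is given below: a low-order rational spectrum with strictly positive corner frequencies (hence stable left-half-plane poles) is fitted in log space to a von Karman-like shape over an assumed frequency band. The target form, band, and rational-function order are illustrative choices, not the paper's parameterization.

import numpy as np
from scipy.optimize import least_squares

w = np.logspace(-2, 2, 400)                      # frequency band of interest (rad/s)
target = (1.0 + w**2) ** (-5.0 / 6.0)            # von Karman-like shape, falling off as w^(-5/3)

def rational(p, w):
    K, a, b, c = np.exp(p)                       # positive parameters -> stable poles
    return K * (1.0 + (w / a) ** 2) / ((1.0 + (w / b) ** 2) * (1.0 + (w / c) ** 2))

def err(p):
    return np.log(rational(p, w)) - np.log(target)   # fit in log space across the band

p = least_squares(err, np.zeros(4)).x
print("fitted (K, a, b, c):", np.exp(p).round(3))
# At high frequency the rational fit behaves as w^(-2), the integer power closest to -5/3.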
Developing the Polynomial Expressions for Fields in the ITER Tokamak
NASA Astrophysics Data System (ADS)
Sharma, Stephen
2017-10-01
The two most important problems to be solved in the development of working nuclear fusion power plants are: sustained partial ignition and turbulence. These two phenomena are the subject of research and investigation through the development of analytic functions and computational models. Ansatz development through Gaussian wave-function approximations, dielectric quark models, field solutions using new elliptic functions, and better descriptions of the polynomials of the superconducting current loops are the critical theoretical developments that need to be improved. Euler-Lagrange equations of motion in addition to geodesic formulations generate the particle model which should correspond to the Dirac dispersive scattering coefficient calculations and the fluid plasma model. Feynman-Hellman formalism and Heaviside step functional forms are introduced to the fusion equations to produce simple expressions for the kinetic energy and loop currents. Conclusively, a polynomial description of the current loops, the Biot-Savart field, and the Lagrangian must be uncovered before there can be an adequate computational and iterative model of the thermonuclear plasma.
A pairwise maximum entropy model accurately describes resting-state human brain networks
Watanabe, Takamitsu; Hirose, Satoshi; Wada, Hiroyuki; Imai, Yoshio; Machida, Toru; Shirouzu, Ichiro; Konishi, Seiki; Miyashita, Yasushi; Masuda, Naoki
2013-01-01
The resting-state human brain networks underlie fundamental cognitive functions and consist of complex interactions among brain regions. However, the level of complexity of the resting-state networks has not been quantified, which has prevented comprehensive descriptions of the brain activity as an integrative system. Here, we address this issue by demonstrating that a pairwise maximum entropy model, which takes into account region-specific activity rates and pairwise interactions, can be robustly and accurately fitted to resting-state human brain activities obtained by functional magnetic resonance imaging. Furthermore, to validate the approximation of the resting-state networks by the pairwise maximum entropy model, we show that the functional interactions estimated by the pairwise maximum entropy model reflect anatomical connexions more accurately than the conventional functional connectivity method. These findings indicate that a relatively simple statistical model not only captures the structure of the resting-state networks but also provides a possible method to derive physiological information about various large-scale brain networks. PMID:23340410
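For readers who want a concrete picture of the model class, the toy sketch below fits a pairwise maximum entropy (Ising-like) model to synthetic binarized activity of a handful of regions by gradient ascent on the exact log-likelihood, enumerating all 2^N activity patterns. The data, region count, learning rate, and iteration budget are assumptions; analyses of the kind described above use binarized BOLD signals and more regions.

import numpy as np
from itertools import product

rng = np.random.default_rng(3)
N, T = 6, 5000
data = (rng.random((T, N)) < 0.4).astype(float)           # binarized "activity" (0/1)
emp_mean = data.mean(axis=0)                              # empirical activation rates
emp_corr = data.T @ data / T                              # empirical pairwise co-activations

states = np.array(list(product([0.0, 1.0], repeat=N)))    # all 2^N activity patterns

h = np.zeros(N)
J = np.zeros((N, N))
for _ in range(2000):                                     # gradient ascent on the log-likelihood
    energy = states @ h + 0.5 * np.einsum('si,ij,sj->s', states, J, states)
    p = np.exp(energy - energy.max())
    p /= p.sum()
    model_mean = p @ states
    model_corr = states.T @ (p[:, None] * states)
    h += 0.1 * (emp_mean - model_mean)
    J += 0.1 * (emp_corr - model_corr)
    np.fill_diagonal(J, 0.0)

print("max rate mismatch       :", np.abs(model_mean - emp_mean).max())
print("max correlation mismatch:", np.abs(model_corr - emp_corr).max())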
QCD-inspired spectra from Blue's functions
NASA Astrophysics Data System (ADS)
Nowak, Maciej A.; Papp, Gábor; Zahed, Ismail
1996-02-01
We use the law of addition in random matrix theory to analyze the spectral distributions of a variety of chiral random matrix models as inspired from QCD whether through symmetries or models. In terms of the Blue's functions recently discussed by Zee, we show that most of the spectral distributions in the macroscopic limit and the quenched approximation, follow algebraically from the discontinuity of a pertinent solution to a cubic (Cardano) or a quartic (Ferrari) equation. We use the end-point equation of the energy spectra in chiral random matrix models to argue for novel phase structures, in which the Dirac density of states plays the role of an order parameter.
NASA Technical Reports Server (NTRS)
Karpel, M.
1994-01-01
Various control analysis, design, and simulation techniques for aeroservoelastic systems require the equations of motion to be cast in a linear, time-invariant state-space form. In order to account for unsteady aerodynamics, rational function approximations must be obtained to represent them in the first-order equations of the state-space formulation. A computer program, MIST, has been developed which determines minimum-state approximations of the coefficient matrices of the unsteady aerodynamic forces. The Minimum-State Method facilitates the design of lower-order control systems, analysis of control system performance, and near real-time simulation of aeroservoelastic phenomena such as the outboard-wing acceleration response to gust velocity. Engineers using this program will be able to calculate minimum-state rational approximations of the generalized unsteady aerodynamic forces. Using the Minimum-State formulation of the state-space equations, they will be able to obtain state-space models with good open-loop characteristics while reducing the number of aerodynamic equations by an order of magnitude more than traditional approaches. These low-order state-space mathematical models are good for design and simulation of aeroservoelastic systems. The computer program MIST accepts tabular values of the generalized aerodynamic forces over a set of reduced frequencies. It then determines approximations to these tabular data in the Laplace domain using rational functions. MIST provides the capability to select the denominator coefficients in the rational approximations, to selectively constrain the approximations without increasing the problem size, and to determine and emphasize critical frequency ranges in determining the approximations. MIST has been written to allow two types of data weighting options. The first weighting is a traditional normalization of the aerodynamic data to the maximum unit value of each aerodynamic coefficient. The second allows weighting the importance of different tabular values in determining the approximations based upon physical characteristics of the system. Specifically, the physical weighting capability is such that each tabulated aerodynamic coefficient, at each reduced frequency value, is weighted according to the effect of an incremental error of this coefficient on the aeroelastic characteristics of the system. In both cases, the resulting approximations yield a relatively low number of aerodynamic lag states in the subsequent state-space model. MIST is written in ANSI FORTRAN 77 for DEC VAX series computers running VMS. It requires approximately 1 Mb of RAM for execution. The standard distribution medium for this package is a 9-track 1600 BPI magnetic tape in DEC VAX FILES-11 format. It is also available on a TK50 tape cartridge in DEC VAX BACKUP format. MIST was developed in 1991. DEC VAX and VMS are trademarks of Digital Equipment Corporation. FORTRAN 77 is a registered trademark of Lahey Computer Systems, Inc.
Multidisciplinary design optimization - An emerging new engineering discipline
NASA Technical Reports Server (NTRS)
Sobieszczanski-Sobieski, Jaroslaw
1993-01-01
A definition of multidisciplinary design optimization (MDO) is introduced, and the functionality and relationships of the MDO conceptual components are examined. The latter include design-oriented analysis, approximation concepts, mathematical system modeling, design space search, an optimization procedure, and a human interface.
Estimation of parameters of constant elasticity of substitution production functional model
NASA Astrophysics Data System (ADS)
Mahaboob, B.; Venkateswarlu, B.; Sankar, J. Ravi
2017-11-01
Nonlinear model building has become an increasingly important and powerful tool in mathematical economics, and in recent years the popularity of applications of nonlinear models has risen dramatically. Researchers in econometrics are often interested in the inferential aspects of nonlinear regression models [6]. The present study gives a distinct method of estimation for a more complicated and highly nonlinear model, namely the Constant Elasticity of Substitution (CES) production function model. Henningen et al. [5] proposed three solutions in 2012 to avoid serious problems when estimating CES functions: (i) removing discontinuities by using the limits of the CES function and its derivative, (ii) circumventing large rounding errors by local linear approximations, and (iii) handling ill-behaved objective functions by a multi-dimensional grid search. Joel Chongeh et al. [7] discussed the estimation of the impact of capital and labour inputs on the gross output of agri-food products using a constant elasticity of substitution production function in the Tanzanian context. Pol Antras [8] presented new estimates of the elasticity of substitution between capital and labour using data from the private sector of the U.S. economy for the period 1948-1998.
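A hedged sketch of estimating a two-input CES production function by nonlinear least squares on synthetic data is given below. The parameter names and functional form are the standard ones; the data, bounds (which also keep rho away from the problematic rho = 0 limit), and starting values are assumptions rather than those of the cited studies.

import numpy as np
from scipy.optimize import curve_fit

def ces(X, gamma, delta, rho, nu):
    # Two-input CES: Q = gamma * (delta*K^(-rho) + (1-delta)*L^(-rho))^(-nu/rho)
    K, L = X
    return gamma * (delta * K**(-rho) + (1 - delta) * L**(-rho)) ** (-nu / rho)

rng = np.random.default_rng(4)
K = rng.uniform(1.0, 10.0, 300)
L = rng.uniform(1.0, 10.0, 300)
Q = ces((K, L), 2.0, 0.4, 0.6, 1.0) * np.exp(0.05 * rng.normal(size=300))  # multiplicative noise

p0 = [1.0, 0.5, 0.5, 1.0]
bounds = ([1e-3, 1e-3, 1e-3, 1e-3], [np.inf, 1 - 1e-3, 10.0, 5.0])
popt, _ = curve_fit(ces, (K, L), Q, p0=p0, bounds=bounds)
print("estimated (gamma, delta, rho, nu):", popt.round(3))
# The implied elasticity of substitution is sigma = 1 / (1 + rho).
print("implied elasticity of substitution:", round(1.0 / (1.0 + popt[2]), 3))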
The Angular Three-Point Correlation Function in the Quasi-linear Regime
DOE Office of Scientific and Technical Information (OSTI.GOV)
Buchalter, Ari; Kamionkowski, Marc; Jaffe, Andrew H.
2000-02-10
We calculate the normalized angular three-point correlation function (3PCF), q, as well as the normalized angular skewness, s_3, assuming the small-angle approximation, for a biased mass distribution in flat and open cold dark matter (CDM) models with Gaussian initial conditions. The leading-order perturbative results incorporate the explicit dependence on the cosmological parameters, the shape of the CDM transfer function, the linear evolution of the power spectrum, the form of the assumed redshift distribution function, and linear and nonlinear biasing, which may be evolving. Results are presented for different redshift distributions, including that appropriate for the APM Galaxy Survey, as well as for a survey with a mean redshift of z ≈ 1 (such as the VLA FIRST Survey). Qualitatively, many of the results found for s_3 and q are similar to those obtained in a related treatment of the spatial skewness and 3PCF, such as a leading-order correction to the standard result for s_3 in the case of nonlinear bias (as defined for unsmoothed density fields), and the sensitivity of the configuration dependence of q to both cosmological and biasing models. We show that since angular correlation functions (CFs) are sensitive to clustering over a range of redshifts, the various evolutionary dependences included in our predictions imply that measurements of q in a deep survey might better discriminate between models with different histories, such as evolving versus nonevolving bias, that can have similar spatial CFs at low redshift. Our calculations employ a derived equation, valid for open, closed, and flat models, to obtain the angular bispectrum from the spatial bispectrum in the small-angle approximation. (c) 2000 The American Astronomical Society.
Parametric optimal control of uncertain systems under an optimistic value criterion
NASA Astrophysics Data System (ADS)
Li, Bo; Zhu, Yuanguo
2018-01-01
It is well known that the optimal control of a linear quadratic model is characterized by the solution of a Riccati differential equation. In many cases, the corresponding Riccati differential equation cannot be solved exactly, so the optimal feedback control may be a complicated function of time. In this article, a parametric optimal control problem for an uncertain linear quadratic model under an optimistic value criterion is considered in order to simplify the expression of the optimal control. Based on the equation of optimality for the uncertain optimal control problem, an approximation method is presented to solve it. As an application, a two-spool turbofan engine optimal control problem is given to show the utility of the proposed model and the efficiency of the presented approximation method.
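For context on the object being approximated, the sketch below numerically integrates a finite-horizon matrix Riccati differential equation backward in time for an ordinary (deterministic) linear quadratic regulator and evaluates the resulting feedback gain. The system matrices, weights, and horizon are assumptions, and the uncertain, optimistic-value formulation of the article is not reproduced.

import numpy as np
from scipy.integrate import solve_ivp

A = np.array([[0.0, 1.0], [-1.0, -0.5]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])
QT = np.eye(2)                     # terminal cost weight
T = 5.0                            # horizon length

def riccati_rhs(t, p_flat):
    # Matrix Riccati ODE: -dP/dt = A'P + PA - P B R^{-1} B' P + Q, with P(T) = QT
    P = p_flat.reshape(2, 2)
    dP = -(A.T @ P + P @ A - P @ B @ np.linalg.inv(R) @ B.T @ P + Q)
    return dP.ravel()

# Integrate backwards from t = T to t = 0 (solve_ivp accepts a decreasing time span)
sol = solve_ivp(riccati_rhs, [T, 0.0], QT.ravel(), rtol=1e-8)
P0 = sol.y[:, -1].reshape(2, 2)
K0 = np.linalg.inv(R) @ B.T @ P0   # time-varying feedback gain evaluated at t = 0
print("P(0) =\n", P0.round(4))
print("feedback gain K(0) =", K0.round(4))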
Error-Rate Estimation Based on Multi-Signal Flow Graph Model and Accelerated Radiation Tests.
He, Wei; Wang, Yueke; Xing, Kefei; Deng, Wei; Zhang, Zelong
2016-01-01
A method of evaluating the single-event effect soft-error vulnerability of space instruments before launch has been an active research topic in recent years. In this paper, a multi-signal flow graph model is introduced to analyze fault diagnosis and the mean time to failure (MTTF) for space instruments. A model for the system functional error rate (SFER) is proposed. In addition, an experimental method and an accelerated radiation testing system for a signal processing platform based on a field programmable gate array (FPGA) are presented. Based on experimental results for different ions (O, Si, Cl, Ti) at the HI-13 Tandem Accelerator, the SFER of the signal processing platform is approximately 10⁻³ error/particle/cm², while the MTTF is approximately 110.7 h.
An Exospheric Temperature Model Based On CHAMP Observations and TIEGCM Simulations
NASA Astrophysics Data System (ADS)
Ruan, Haibing; Lei, Jiuhou; Dou, Xiankang; Liu, Siqing; Aa, Ercha
2018-02-01
In this work, thermospheric densities from the accelerometer measurements on board the CHAMP satellite during 2002-2009 and simulations from the National Center for Atmospheric Research Thermosphere Ionosphere Electrodynamics General Circulation Model (NCAR-TIEGCM) are employed to develop an empirical exospheric temperature model (ETM). The two-dimensional basis functions of the ETM are first obtained from a principal component analysis of the TIEGCM simulations. Based on the exospheric temperatures derived from the CHAMP thermospheric densities, a global distribution of exospheric temperature is then reconstructed. Each basis-function amplitude is parameterized as a function of solar-geophysical and seasonal conditions, so that the ETM can be used to model the thermospheric temperature and mass density under specified conditions. Our results show that the average standard deviation of the ETM is generally less than 10%, compared with approximately 30% for the MSIS model. In addition, the ETM reproduces global thermospheric features, including the equatorial thermosphere anomaly.
NASA Astrophysics Data System (ADS)
Laqua, Henryk; Kussmann, Jörg; Ochsenfeld, Christian
2018-03-01
The correct description of multi-reference electronic ground states within Kohn-Sham density functional theory (DFT) requires an ensemble-state representation, employing fractionally occupied orbitals. However, the use of fractional orbital occupation leads to non-normalized exact-exchange holes, resulting in large fractional-spin errors for conventional approximative density functionals. In this communication, we present a simple approach to directly include the exact-exchange-hole normalization into DFT. Compared to conventional functionals, our model strongly improves the description for multi-reference systems, while preserving the accuracy in the single-reference case. We analyze the performance of our proposed method at the example of spin-averaged atoms and spin-restricted bond dissociation energy surfaces.
Optical-model potential for electron and positron elastic scattering by atoms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Salvat, Francesc
2003-07-01
An optical-model potential for systematic calculations of elastic scattering of electrons and positrons by atoms and positive ions is proposed. The electrostatic interaction is determined from the Dirac-Hartree-Fock self-consistent atomic electron density. In the case of electron projectiles, the exchange interaction is described by means of the local approximation of Furness and McCarthy. The correlation-polarization potential is obtained by combining the correlation potential derived from the local density approximation with a long-range polarization interaction, which is represented by means of a Buckingham potential with an empirical energy-dependent cutoff parameter. The absorption potential is obtained from the local-density approximation, using the Born-Ochkur approximation and the Lindhard dielectric function to describe the binary collisions with a free-electron gas. The strength of the absorption potential is adjusted by means of an empirical parameter, which has been determined by fitting available absolute elastic differential cross-section data for noble gases and mercury. The Dirac partial-wave analysis with this optical-model potential provides a realistic description of elastic scattering of electrons and positrons with energies in the range from ~100 eV up to ~5 keV. At higher energies, correlation-polarization and absorption corrections are small and the usual static-exchange approximation is sufficiently accurate for most practical purposes.
Computing Functions by Approximating the Input
ERIC Educational Resources Information Center
Goldberg, Mayer
2012-01-01
In computing real-valued functions, it is ordinarily assumed that the input to the function is known, and it is the output that we need to approximate. In this work, we take the opposite approach: we show how to compute the values of some transcendental functions by approximating the input to these functions, and obtaining exact answers for their…
Neural Network and Regression Soft Model Extended for PAX-300 Aircraft Engine
NASA Technical Reports Server (NTRS)
Patnaik, Surya N.; Hopkins, Dale A.
2002-01-01
In fiscal year 2001, the neural network and regression capabilities of NASA Glenn Research Center's COMETBOARDS design optimization testbed were extended to generate approximate models for the PAX-300 aircraft engine. The analytical model of the engine is defined through nine variables: the fan efficiency factor, the low pressure of the compressor, the high pressure of the compressor, the high pressure of the turbine, the low pressure of the turbine, the operating pressure, and three critical temperatures (T_4, T_vane, and T_metal). Numerical Propulsion System Simulation (NPSS) calculations of the specific fuel consumption (TSFC) as a function of these variables can become time consuming, and numerical instabilities can occur during these design calculations. "Soft" models can alleviate both deficiencies. These approximate models are generated from a set of high-fidelity input-output pairs obtained from the NPSS code and a design-of-experiments strategy. A neural network and a regression model with 45 weight factors were trained on the input/output pairs, and the trained models were then validated through a comparison with the original NPSS code. Comparisons of TSFC versus the operating pressure and of TSFC versus the three temperatures (T_4, T_vane, and T_metal) are depicted in the figures. The overall performance was satisfactory for both the regression and the neural network model. The regression model required fewer calculations than the neural network model, and it produced marginally superior results. Training the approximate methods is time consuming, but once trained, they generate the solution with only trivial computational effort, reducing the solution time from hours to less than a minute.
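A toy version of the "soft model" comparison is sketched below: a quadratic regression surrogate and a small neural network are both fitted to samples of an inexpensive analytic stand-in for the NPSS response and scored on held-out points. The test function, sample size, and scikit-learn models are assumptions, not the COMETBOARDS implementation.

import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(5)
X = rng.uniform(0.0, 1.0, (400, 9))                       # nine normalized design variables
y = 1.0 + X[:, 0] * X[:, 5] + np.sin(2 * np.pi * X[:, 6]) + 0.3 * X[:, 1] ** 2  # stand-in response

Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
reg = make_pipeline(PolynomialFeatures(degree=2), LinearRegression()).fit(Xtr, ytr)
net = MLPRegressor(hidden_layer_sizes=(20, 20), max_iter=5000, random_state=0).fit(Xtr, ytr)

print("regression surrogate R^2:", round(reg.score(Xte, yte), 3))
print("neural net surrogate R^2:", round(net.score(Xte, yte), 3))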
NASA Astrophysics Data System (ADS)
Khechiba, Khaled; Mamou, Mahmoud; Hachemi, Madjid; Delenda, Nassim; Rebhi, Redha
2017-06-01
The present study is focused on Lapwood convection in isotropic porous media saturated with non-Newtonian shear thinning fluid. The non-Newtonian rheological behavior of the fluid is modeled using the general viscosity model of Carreau-Yasuda. The convection configuration consists of a shallow porous cavity with a finite aspect ratio and subject to a vertical constant heat flux, whereas the vertical walls are maintained impermeable and adiabatic. An approximate analytical solution is developed on the basis of the parallel flow assumption, and numerical solutions are obtained by solving the full governing equations. The Darcy model with the Boussinesq approximation and energy transport equations are solved numerically using a finite difference method. The results are obtained in terms of the Nusselt number and the flow fields as functions of the governing parameters. A good agreement is obtained between the analytical approximation and the numerical solution of the full governing equations. The effects of the rheological parameters of the Carreau-Yasuda fluid and Rayleigh number on the onset of subcritical convection thresholds are demonstrated. Regardless of the aspect ratio of the enclosure and thermal boundary condition type, the subcritical convective flows are seen to occur below the onset of stationary convection. Correlations are proposed to estimate the subcritical Rayleigh number for the onset of finite amplitude convection as a function of the fluid rheological parameters. Linear stability of the convective motion, predicted by the parallel flow approximation, is studied, and the onset of Hopf bifurcation, from steady convective flow to oscillatory behavior, is found to depend strongly on the rheological parameters. In general, Hopf bifurcation is triggered earlier as the fluid becomes more and more shear-thinning.
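For reference, a minimal implementation of the Carreau-Yasuda apparent viscosity used to describe the shear-thinning saturating fluid is given below; the parameter values are arbitrary illustrative choices rather than those of the study.

import numpy as np

def carreau_yasuda(shear_rate, mu0, mu_inf, lam, a, n):
    # mu = mu_inf + (mu0 - mu_inf) * [1 + (lam*gamma_dot)^a]^((n-1)/a)
    return mu_inf + (mu0 - mu_inf) * (1.0 + (lam * shear_rate) ** a) ** ((n - 1.0) / a)

gamma_dot = np.logspace(-3, 3, 7)                 # shear rates spanning six decades
mu = carreau_yasuda(gamma_dot, mu0=1.0, mu_inf=0.01, lam=10.0, a=2.0, n=0.5)
for g, m in zip(gamma_dot, mu):
    print(f"shear rate {g:8.3f} 1/s -> apparent viscosity {m:8.4f} Pa.s")
# n < 1 gives shear-thinning behaviour; n = 1 recovers a Newtonian fluid.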
Electron Correlation from the Adiabatic Connection for Multireference Wave Functions
NASA Astrophysics Data System (ADS)
Pernal, Katarzyna
2018-01-01
An adiabatic connection (AC) formula for the electron correlation energy is derived for a broad class of multireference wave functions. The AC expression recovers dynamic correlation energy and assures a balanced treatment of the correlation energy. Coupling the AC formalism with the extended random phase approximation allows one to find the correlation energy only from reference one- and two-electron reduced density matrices. If the generalized valence bond perfect pairing model is employed, a simple closed-form expression for the approximate AC formula is obtained. This results in an overall M^5 scaling of the computational cost, making the method one of the most efficient multireference approaches accounting for dynamic electron correlation, also for strongly correlated systems.
Onion-shell model of cosmic ray acceleration in supernova remnants
NASA Technical Reports Server (NTRS)
Bogdan, T. J.; Volk, H. J.
1983-01-01
A method is devised to approximate the spatially averaged momentum distribution function for the accelerated particles at the end of the active lifetime of a supernova remnant. The analysis is confined to the test particle approximation and adiabatic losses are oversimplified, but unsteady shock motion, evolving shock strength, and non-uniform gas flow effects on the accelerated particle spectrum are included. Monoenergetic protons are injected at the shock front. It is found that the dominant effect on the resultant accelerated particle spectrum is a changing spectral index with shock strength. High energy particles are produced in early phases, and the resultant distribution function is a slowly varying power law over several orders of magnitude, independent of the specific details of the supernova remnant.
NASA Astrophysics Data System (ADS)
Rozek, A.; Breiter, S.; Vokrouhlicky, D.
2011-10-01
A semi-analytical model of the Yarkovsky-O'Keefe-Radzievskii-Paddack (YORP) effect on an asteroid spinning in a non-principal-axis rotation state is presented. Assuming zero conductivity, the YORP torque is represented by a spherical harmonics series with vector coefficients, allowing any degree and order of approximation to be used. Within the quadrupole approximation of the illumination function we find the same first integrals involving rotational momentum, obliquity, and dynamical inertia that were obtained by Cicaló and Scheeres [1]. The integrals do not exist when higher-degree terms of the illumination function are included, and then the asymptotic states known from Vokrouhlický et al. [2] appear. This resolves an apparent contradiction between earlier results. The averaged equations of motion admit stable and unstable limit cycle solutions that were not detected previously.
Study of the zinc-silver oxide battery system
NASA Technical Reports Server (NTRS)
Nanis, L.
1973-01-01
Theoretical and experimental models for the evaluation of current distribution in flooded, porous electrodes are discussed. An approximation for the local current distribution function was derived for conditions of a linear overpotential, a uniform concentration, and a very conductive matrix. By considering the porous electrode to be an analog of chemical catalyst structures, a dimensionless performance parameter was derived from the approximated current distribution function. In this manner, the electrode behavior was characterized in terms of an electrochemical Thiele parameter and an effectiveness factor. It was shown that the electrochemical engineering approach makes possible the organization of theoretical descriptions and practical experience in the form of dimensionless parameters, such as the electrochemical Thiele parameter, and hence provides useful information for the design of new electrochemical systems.
Dynamics of a spin-boson model with structured spectral density
NASA Astrophysics Data System (ADS)
Kurt, Arzu; Eryigit, Resul
2018-05-01
We report the results of a study of the dynamics of a two-state system coupled to an environment with a peaked spectral density. An exact analytical expression for the bath correlation function is obtained. The validity range of various approximations to the correlation function for calculating the population difference of the system is discussed as a function of tunneling splitting, oscillator frequency, coupling constant, damping rate and the temperature of the bath. An exact expression for the population difference, valid for a limited range of parameters, is derived.
Geometric Modeling for Computer Vision
1974-10-01
within a distance R of a locus X, Y, Z; spatial uniqueness refers to the property that physical solids cannot occupy the same space simultaneously. A... density functions W(X, Y, Z). Unfortunately such density functions cannot be written out for objects such as a typing chair or a plastic horse... be approximated by a surface function z = F(X, Y). For example, landscape may be represented by geodetic maps in such a 2-D fashion. By definition, a
Certain approximation problems for functions on the infinite-dimensional torus: Lipschitz spaces
NASA Astrophysics Data System (ADS)
Platonov, S. S.
2018-02-01
We consider some questions about the approximation of functions on the infinite-dimensional torus by trigonometric polynomials. Our main results are analogues of the direct and inverse theorems in the classical theory of approximation of periodic functions and a description of the Lipschitz spaces on the infinite-dimensional torus in terms of the best approximation.
Long-time dynamics of Rouse-Zimm polymers in dilute solutions with hydrodynamic memory.
Lisy, V; Tothova, J; Zatovsky, A V
2004-12-01
The dynamics of flexible polymers in dilute solutions is studied taking into account the hydrodynamic memory, as a consequence of fluid inertia. As distinct from the Rouse-Zimm (RZ) theory, the Boussinesq friction force acts on the monomers (beads) instead of the Stokes force, and the motion of the solvent is governed by the nonstationary Navier-Stokes equations. The obtained generalized RZ equation is solved approximately using the preaveraging of the Oseen tensor. It is shown that the time correlation functions describing the polymer motion essentially differ from those in the RZ model. The mean-square displacement (MSD) of the polymer coil is at short times ~t^2 (instead of ~t). At long times the MSD contains additional (to the Einstein term) contributions, the leading of which is ~t. The relaxation of the internal normal modes of the polymer differs from the traditional exponential decay. It is displayed in the long-time tails of their correlation functions, the longest lived being ~t^(-3/2) in the Rouse limit and ~t^(-5/2) in the Zimm case, when the hydrodynamic interaction is strong. It is discussed that the found peculiarities, in particular, an effectively slower diffusion of the polymer coil, should be observable in dynamic scattering experiments. © 2004 American Institute of Physics.
A model of head-related transfer functions based on a state-space analysis
NASA Astrophysics Data System (ADS)
Adams, Norman Herkamp
This dissertation develops and validates a novel state-space method for binaural auditory display. Binaural displays seek to immerse a listener in a 3D virtual auditory scene with a pair of headphones. The challenge for any binaural display is to compute the two signals to supply to the headphones. The present work considers a general framework capable of synthesizing a wide variety of auditory scenes. The framework models collections of head-related transfer functions (HRTFs) simultaneously. This framework improves the flexibility of contemporary displays, but it also compounds the steep computational cost of the display. The cost is reduced dramatically by formulating the collection of HRTFs in the state space and employing order-reduction techniques to design efficient approximants. Order-reduction techniques based on the Hankel operator are found to yield accurate low-cost approximants. However, the inter-aural time difference (ITD) of the HRTFs degrades the time-domain response of the approximants. Fortunately, this problem can be circumvented by employing a state-space architecture that allows the ITD to be modeled outside of the state space. Accordingly, three state-space architectures are considered. Overall, a multiple-input, single-output (MISO) architecture yields the best compromise between performance and flexibility. The state-space approximants are evaluated both empirically and psychoacoustically. An array of truncated FIR filters is used as a pragmatic reference system for comparison. For a fixed cost bound, the state-space systems yield lower approximation error than FIR arrays for D > 10, where D is the number of directions in the HRTF collection. A series of headphone listening tests is also performed to validate the state-space approach, and to estimate the minimum order N of indiscriminable approximants. For D = 50, the state-space systems yield order thresholds less than half those of the FIR arrays. Depending upon the stimulus uncertainty, a minimum state-space order of 7 ≤ N ≤ 23 appears to be adequate. In conclusion, the proposed state-space method enables a more flexible and immersive binaural display with low computational cost.
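The order-reduction step described above can be illustrated with a small sketch. The dissertation uses Hankel-operator methods; the snippet below instead uses the closely related square-root balanced truncation on a stand-in random stable system (the system, its dimensions, and the retained order are illustrative assumptions, not the author's HRTF data or code):

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov, cholesky, svd

rng = np.random.default_rng(0)

# Stand-in stable discrete-time system (e.g. a state-space realization of an HRTF filter).
n, r = 40, 12                                      # full and reduced orders (illustrative)
A = rng.normal(size=(n, n))
A *= 0.9 / np.max(np.abs(np.linalg.eigvals(A)))    # force all poles inside the unit circle
B = rng.normal(size=(n, 1))
C = rng.normal(size=(1, n))
D = np.zeros((1, 1))

# Controllability and observability Gramians: Wc = A Wc A' + B B', Wo = A' Wo A + C' C.
Wc = solve_discrete_lyapunov(A, B @ B.T)
Wo = solve_discrete_lyapunov(A.T, C.T @ C)

# Square-root balanced truncation: keep the r largest Hankel singular values.
Lc = cholesky(Wc, lower=True)
Lo = cholesky(Wo, lower=True)
U, s, Vt = svd(Lo.T @ Lc)                          # s are the Hankel singular values
S = np.diag(1.0 / np.sqrt(s[:r]))
T = Lc @ Vt[:r].T @ S                              # projection onto the dominant balanced states
Tinv = S @ U[:, :r].T @ Lo.T
Ar, Br, Cr = Tinv @ A @ T, Tinv @ B, C @ T

# Compare the frequency responses of the full and reduced models.
def freqresp(A, B, C, D, w):
    I = np.eye(A.shape[0])
    return np.array([(C @ np.linalg.solve(np.exp(1j * wi) * I - A, B) + D).item() for wi in w])

w = np.linspace(0.01, np.pi, 64)
err = np.max(np.abs(freqresp(A, B, C, D, w) - freqresp(Ar, Br, Cr, D, w)))
print("largest discarded Hankel singular value:", s[r], "max response deviation:", err)
```

The discarded Hankel singular values bound the worst-case response error, which is why they are a natural guide for choosing the reduced order.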
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hacke, Peter; Spataru, Sergiu; Terwilliger, Kent
2015-06-14
An acceleration model based on the Peck equation was applied to the power performance of crystalline silicon cell modules as a function of time and of temperature and humidity, the two main environmental stress factors that promote potential-induced degradation. This model was derived from module power degradation data obtained semi-continuously and statistically by in-situ dark current-voltage measurements in an environmental chamber. The modeling enables prediction of degradation rates and times as functions of temperature and humidity. Power degradation could be modeled as a linear function of time to the second power; additionally, we found that the charge transferred from the active cell circuit to ground during the stress test is approximately linear with time. Therefore, the power loss could be linearized as a function of coulombs squared. With this result, we observed that when the module face was completely grounded with a condensed-phase conductor, leakage current exceeded the anticipated corresponding degradation rate relative to the other tests performed in damp heat.
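A minimal sketch of the Peck-type acceleration idea is shown below (illustrative only, with hypothetical stress conditions and parameter values, not the study's measured data): the humidity/temperature dependence of the degradation rate is linear in log space and can be recovered by ordinary least squares.

```python
import numpy as np

KB = 8.617e-5   # Boltzmann constant in eV/K

# Hypothetical degradation rates observed at several damp-heat stress conditions.
T  = np.array([358.0, 358.0, 338.0, 338.0, 323.0, 323.0])   # temperature, K
RH = np.array([85.0,  60.0,  85.0,  60.0,  85.0,  60.0])    # relative humidity, %
A_true, n_true, Ea_true = 5e3, 2.0, 0.7
rng = np.random.default_rng(7)
rate = A_true * RH ** n_true * np.exp(-Ea_true / (KB * T)) * rng.lognormal(0.0, 0.05, T.size)

# The Peck acceleration model is linear in log space: ln r = ln A + n ln RH - Ea / (kB T).
X = np.column_stack([np.ones_like(T), np.log(RH), -1.0 / (KB * T)])
(lnA, n_est, Ea_est), *_ = np.linalg.lstsq(X, np.log(rate), rcond=None)
print(np.exp(lnA), n_est, Ea_est)      # recovered pre-factor, humidity exponent, activation energy

# The abstract's observation that power loss grows roughly linearly in t^2 then suggests
# extrapolating as P_loss(t; T, RH) ≈ (rate(T, RH) * t)**2 under the assumed model form.
```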
STDP Installs in Winner-Take-All Circuits an Online Approximation to Hidden Markov Model Learning
Kappel, David; Nessler, Bernhard; Maass, Wolfgang
2014-01-01
In order to cross a street without being run over, we need to be able to extract very fast hidden causes of dynamically changing multi-modal sensory stimuli, and to predict their future evolution. We show here that a generic cortical microcircuit motif, pyramidal cells with lateral excitation and inhibition, provides the basis for this difficult but all-important information processing capability. This capability emerges in the presence of noise automatically through effects of STDP on connections between pyramidal cells in Winner-Take-All circuits with lateral excitation. In fact, one can show that these motifs endow cortical microcircuits with functional properties of a hidden Markov model, a generic model for solving such tasks through probabilistic inference. Whereas in engineering applications this model is adapted to specific tasks through offline learning, we show here that a major portion of the functionality of hidden Markov models arises already from online applications of STDP, without any supervision or rewards. We demonstrate the emergent computing capabilities of the model through several computer simulations. The full power of hidden Markov model learning can be attained through reward-gated STDP. This is due to the fact that these mechanisms enable a rejection sampling approximation to theoretically optimal learning. We investigate the possible performance gain that can be achieved with this more accurate learning method for an artificial grammar task. PMID:24675787
Chow, Sy-Miin; Bendezú, Jason J.; Cole, Pamela M.; Ram, Nilam
2016-01-01
Several approaches currently exist for estimating the derivatives of observed data for model exploration purposes, including functional data analysis (FDA), generalized local linear approximation (GLLA), and generalized orthogonal local derivative approximation (GOLD). These derivative estimation procedures can be used in a two-stage process to fit mixed effects ordinary differential equation (ODE) models. While the performance and utility of these routines for estimating linear ODEs have been established, they have not yet been evaluated in the context of nonlinear ODEs with mixed effects. We compared properties of the GLLA and GOLD to an FDA-based two-stage approach denoted herein as functional ordinary differential equation with mixed effects (FODEmixed) in a Monte Carlo study using a nonlinear coupled oscillators model with mixed effects. Simulation results showed that overall, the FODEmixed outperformed both the GLLA and GOLD across all the embedding dimensions considered, but a novel use of a fourth-order GLLA approach combined with very high embedding dimensions yielded estimation results that almost paralleled those from the FODEmixed. We discuss the strengths and limitations of each approach and demonstrate how output from each stage of FODEmixed may be used to inform empirical modeling of young children’s self-regulation. PMID:27391255
Chow, Sy-Miin; Bendezú, Jason J; Cole, Pamela M; Ram, Nilam
2016-01-01
Several approaches exist for estimating the derivatives of observed data for model exploration purposes, including functional data analysis (FDA; Ramsay & Silverman, 2005 ), generalized local linear approximation (GLLA; Boker, Deboeck, Edler, & Peel, 2010 ), and generalized orthogonal local derivative approximation (GOLD; Deboeck, 2010 ). These derivative estimation procedures can be used in a two-stage process to fit mixed effects ordinary differential equation (ODE) models. While the performance and utility of these routines for estimating linear ODEs have been established, they have not yet been evaluated in the context of nonlinear ODEs with mixed effects. We compared properties of the GLLA and GOLD to an FDA-based two-stage approach denoted herein as functional ordinary differential equation with mixed effects (FODEmixed) in a Monte Carlo (MC) study using a nonlinear coupled oscillators model with mixed effects. Simulation results showed that overall, the FODEmixed outperformed both the GLLA and GOLD across all the embedding dimensions considered, but a novel use of a fourth-order GLLA approach combined with very high embedding dimensions yielded estimation results that almost paralleled those from the FODEmixed. We discuss the strengths and limitations of each approach and demonstrate how output from each stage of FODEmixed may be used to inform empirical modeling of young children's self-regulation.
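As a concrete illustration of the GLLA step discussed in the two records above, here is a minimal NumPy sketch (not the authors' implementation; the embedding dimension, lag, and test signal are illustrative choices):

```python
import numpy as np
from math import factorial

def glla(x, dt, embed=5, tau=1, order=2):
    """Generalized local linear approximation of derivatives (illustrative sketch).

    Column j of the returned array holds the estimated j-th derivative of x
    at the centre of each time-delay embedding window."""
    n = len(x) - (embed - 1) * tau
    X = np.column_stack([x[i * tau: i * tau + n] for i in range(embed)])   # embedding matrix
    offsets = (np.arange(embed) - (embed - 1) / 2.0) * tau * dt            # time offsets in the window
    L = np.column_stack([offsets ** j / factorial(j) for j in range(order + 1)])
    W = L @ np.linalg.inv(L.T @ L)
    return X @ W

# Toy check: for x(t) = sin(t), the estimated first derivative should track cos(t).
t = np.arange(0.0, 10.0, 0.05)
d = glla(np.sin(t), dt=0.05, embed=7, order=2)
print(np.max(np.abs(d[:, 1] - np.cos(t[3:-3]))))   # small error away from the series edges
```

In the two-stage ODE workflow, estimates like these (level, first and second derivatives) become the inputs to the mixed effects differential equation fit.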
Modal kinematics for multisection continuum arms.
Godage, Isuru S; Medrano-Cerda, Gustavo A; Branson, David T; Guglielmino, Emanuele; Caldwell, Darwin G
2015-05-13
This paper presents a novel spatial kinematic model for multisection continuum arms based on mode shape functions (MSFs). Modal methods have been used in many disciplines, from finite element methods to structural analysis, to approximate complex and nonlinear parametric variations with simple mathematical functions. Given certain constraints and required accuracy, this helps to simplify complex phenomena with numerically efficient implementations leading to fast computations. A successful application of the modal approximation techniques to develop a new modal kinematic model for general variable-length multisection continuum arms is discussed. The proposed method solves the limitations associated with previous models and introduces a new approach for readily deriving exact, singularity-free and unique MSFs that simplify the approach and avoid mode switching. The model is able to simulate spatial bending as well as straight arm motions (i.e., pure elongation/contraction), and introduces inverse position and orientation kinematics for multisection continuum arms. A kinematic decoupling feature, splitting position and orientation inverse kinematics, is introduced. This type of decoupling has not been presented for these types of robotic arms before. The model also carefully accounts for physical constraints in the joint space to provide enhanced insight into practical mechanics and impose actuator mechanical limitations onto the kinematics, thus generating fully realizable results. The proposed method is easily applicable to a broad spectrum of continuum arm designs.
Prototyping method for Bragg-type atom interferometers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Benton, Brandon; Krygier, Michael; Heward, Jeffrey
2011-10-15
We present a method for rapid modeling of new Bragg ultracold atom-interferometer (AI) designs useful for assessing the performance of such interferometers. The method simulates the overall effect on the condensate wave function in a given AI design using two separate elements. These are (1) modeling the effect of a Bragg pulse on the wave function and (2) approximating the evolution of the wave function during the intervals between the pulses. The actual sequence of these pulses and intervals is then followed to determine the approximate final wave function from which the interference pattern can be calculated. The exact evolution between pulses is assumed to be governed by the Gross-Pitaevskii (GP) equation whose solution is approximated using a Lagrangian variational method to facilitate rapid estimation of performance. The method presented here is an extension of an earlier one that was used to analyze the results of an experiment [J. E. Simsarian et al., Phys. Rev. Lett. 85, 2040 (2000)], where the phase of a Bose-Einstein condensate was measured using a Mach-Zehnder-type Bragg AI. We have developed both 1D and 3D versions of this method and we have determined their validity by comparing their predicted interference patterns with those obtained by numerical integration of the 1D GP equation and with the results of the above experiment. We find excellent agreement between the 1D interference patterns predicted by this method and those found by the GP equation. We show that we can reproduce all of the results of that experiment without recourse to an ad hoc velocity-kick correction needed by the earlier method, including some experimental results that the earlier model did not predict. We also found that this method provides estimates of 1D interference patterns at least four orders-of-magnitude faster than direct numerical solution of the 1D GP equation.
NASA Technical Reports Server (NTRS)
Kottarchyk, M.; Chen, S.-H.; Asano, S.
1979-01-01
The study tests the accuracy of the Rayleigh-Gans-Debye (RGD) approximation against a rigorous scattering theory calculation for a simplified model of E. coli (about 1 micron in size) - a solid spheroid. A general procedure is formulated whereby the scattered field amplitude correlation function, for both polarized and depolarized contributions, can be computed for a collection of particles. An explicit formula is presented for the scattered intensity, both polarized and depolarized, for a collection of randomly diffusing or moving particles. Two specific cases for the intermediate scattering functions are considered: diffusing particles and freely moving particles with a Maxwellian speed distribution. The formalism is applied to microorganisms suspended in a liquid medium. Sensitivity studies revealed that for values of the relative index of refraction greater than 1.03, RGD could be in serious error in computing the intensity as well as correlation functions.
NASA Technical Reports Server (NTRS)
Canavos, G. C.
1974-01-01
A study is made of the extent to which the size of the sample affects the accuracy of a quadratic or a cubic polynomial approximation of an experimentally observed quantity, and the trend with regard to improvement in the accuracy of the approximation as a function of sample size is established. The task is made possible through a simulated analysis carried out by the Monte Carlo method in which data are simulated by using several transcendental or algebraic functions as models. Contaminated data of varying amounts are fitted to either quadratic or cubic polynomials, and the behavior of the mean-squared error of the residual variance is determined as a function of sample size. Results indicate that the effect of the size of the sample is significant only for relatively small sizes and diminishes drastically for moderate and large amounts of experimental data.
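The simulation described above is easy to reproduce in outline. The sketch below (illustrative, not the original study's code; the model function, noise level, and sample sizes are assumptions) fits cubic polynomials to contaminated data and tracks how the residual-variance estimate behaves as the sample size grows:

```python
import numpy as np

rng = np.random.default_rng(1)
true_f = np.sin            # transcendental model function (illustrative stand-in)
sigma = 0.1                # contamination (noise) level

for n in (10, 20, 50, 100, 500):            # sample sizes
    res_var = []
    for _ in range(500):                    # Monte Carlo replications
        x = rng.uniform(0.0, 2.0, n)
        y = true_f(x) + rng.normal(0.0, sigma, n)
        coeff = np.polyfit(x, y, deg=3)     # cubic polynomial approximation
        resid = y - np.polyval(coeff, x)
        res_var.append(resid @ resid / (n - 4))
    res_var = np.array(res_var)
    # Mean-squared error of the residual-variance estimate about sigma**2.
    print(n, np.mean((res_var - sigma ** 2) ** 2))
```

Running this shows the same qualitative trend reported in the abstract: the error drops sharply for small n and flattens out for moderate and large samples.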
Semiclassical propagation of Wigner functions.
Dittrich, T; Gómez, E A; Pachón, L A
2010-06-07
We present a comprehensive study of semiclassical phase-space propagation in the Wigner representation, emphasizing numerical applications, in particular as an initial-value representation. Two semiclassical approximation schemes are discussed. The propagator of the Wigner function based on van Vleck's approximation replaces the Liouville propagator by a quantum spot with an oscillatory pattern reflecting the interference between pairs of classical trajectories. Employing phase-space path integration instead, caustics in the quantum spot are resolved in terms of Airy functions. We apply both to two benchmark models of nonlinear molecular potentials, the Morse oscillator and the quartic double well, to test them in standard tasks such as computing autocorrelation functions and propagating coherent states. The performance of semiclassical Wigner propagation is very good even in the presence of marked quantum effects, e.g., in coherent tunneling and in propagating Schrödinger cat states, and of classical chaos in four-dimensional phase space. We suggest options for an effective numerical implementation of our method and for integrating it in Monte-Carlo-Metropolis algorithms suitable for high-dimensional systems.
Weak limit of the three-state quantum walk on the line
NASA Astrophysics Data System (ADS)
Falkner, Stefan; Boettcher, Stefan
2014-07-01
We revisit the one-dimensional discrete time quantum walk with three states and the Grover coin, the simplest model that exhibits localization in a quantum walk. We derive analytic expressions for the localization and a long-time approximation for the entire probability density function (PDF). We find the possibility for asymmetric localization to the extreme that it vanishes completely on one site of the initial conditions. We also connect the time-averaged approximation of the PDF found by Inui et al. [Phys. Rev. E 72, 056112 (2005), 10.1103/PhysRevE.72.056112] to a spatial average of the walk. We show that this smoothed approximation predicts moments of the real PDF accurately.
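To make the localization statement above concrete, the following short simulation (an illustrative sketch, not the authors' code) runs the three-state Grover walk from a symmetric, localized initial state and shows that a finite fraction of the probability remains near the origin:

```python
import numpy as np

T = 200                                            # number of steps
G = 2.0 / 3.0 * np.ones((3, 3)) - np.eye(3)        # three-state Grover coin (unitary)
psi = np.zeros((3, 2 * T + 1), dtype=complex)
psi[:, T] = np.ones(3) / np.sqrt(3)                # walker at the origin, symmetric coin state

for _ in range(T):
    psi = G @ psi                                  # coin step
    psi[0] = np.roll(psi[0], -1)                   # left-moving component
    psi[2] = np.roll(psi[2], +1)                   # right-moving component; the middle one stays

pdf = np.sum(np.abs(psi) ** 2, axis=0)             # probability distribution over sites
print(pdf.sum(), pdf[T - 2: T + 3])                # unit norm; sizeable weight trapped near the origin
```

Changing the initial coin state changes how the trapped weight splits between the two sides, which is the asymmetric localization discussed in the abstract.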
A Measure Approximation for Distributionally Robust PDE-Constrained Optimization Problems
Kouri, Drew Philip
2017-12-19
In numerous applications, scientists and engineers acquire varied forms of data that partially characterize the inputs to an underlying physical system. This data is then used to inform decisions such as controls and designs. Consequently, it is critical that the resulting control or design is robust to the inherent uncertainties associated with the unknown probabilistic characterization of the model inputs. In this work, we consider optimal control and design problems constrained by partial differential equations with uncertain inputs. We do not assume a known probabilistic model for the inputs, but rather we formulate the problem as a distributionally robust optimization problem where the outer minimization problem determines the control or design, while the inner maximization problem determines the worst-case probability measure that matches desired characteristics of the data. We analyze the inner maximization problem in the space of measures and introduce a novel measure approximation technique, based on the approximation of continuous functions, to discretize the unknown probability measure. Finally, we prove consistency of our approximated min-max problem and conclude with numerical results.
NASA Astrophysics Data System (ADS)
Lee, Y.; Bescond, M.; Logoteta, D.; Cavassilas, N.; Lannoo, M.; Luisier, M.
2018-05-01
We propose an efficient method to quantum mechanically treat anharmonic interactions in the atomistic nonequilibrium Green's function simulation of phonon transport. We demonstrate that the so-called lowest-order approximation, implemented through a rescaling technique and analytically continued by means of the Padé approximants, can be used to accurately model third-order anharmonic effects. Although the paper focuses on a specific self-energy, the method is applicable to a very wide class of physical interactions. We apply this approach to the simulation of anharmonic phonon transport in realistic Si and Ge nanowires with uniform or discontinuous cross sections. The effect of increasing the temperature above 300 K is also investigated. In all the considered cases, we are able to obtain a good agreement with the routinely adopted self-consistent Born approximation, at a remarkably lower computational cost. In the more complicated case of high temperatures (≫300 K), we find that the first-order Richardson extrapolation applied to the sequence of the Padé approximants N-1/N results in a significant acceleration of the convergence.
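First-order Richardson extrapolation of a slowly convergent approximant sequence is simple to illustrate. The toy sequence below is an assumption for illustration, standing in for the Padé approximant sequence N-1/N mentioned above; it converges like A + c/N, and the extrapolated sequence converges an order faster in 1/N:

```python
import numpy as np

# Toy approximant sequence a_N = A + c/N + d/N^2 (illustrative stand-in).
A, c, d = 1.0, 0.7, -0.3
N = np.arange(1, 13, dtype=float)
a = A + c / N + d / N ** 2

# First-order Richardson extrapolation: r_N = N * a_N - (N - 1) * a_{N-1}.
r = N[1:] * a[1:] - N[:-1] * a[:-1]
print(abs(a[-1] - A), abs(r[-1] - A))   # the extrapolated error is an order smaller in 1/N
```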
Ranking Support Vector Machine with Kernel Approximation
Dou, Yong
2017-01-01
Learning-to-rank algorithms have become important in recent years due to their successful application in information retrieval, recommender systems, computational biology, and so forth. Ranking support vector machine (RankSVM) is one of the state-of-the-art ranking models and has been favorably used. Nonlinear RankSVM (RankSVM with nonlinear kernels) can give higher accuracy than linear RankSVM (RankSVM with a linear kernel) for complex nonlinear ranking problems. However, the learning methods for nonlinear RankSVM are still time-consuming because of the calculation of the kernel matrix. In this paper, we propose a fast ranking algorithm based on kernel approximation to avoid computing the kernel matrix. We explore two types of kernel approximation methods, namely, the Nyström method and random Fourier features. A primal truncated Newton method is used to optimize the pairwise L2-loss (squared hinge loss) objective function of the ranking model after the nonlinear kernel approximation. Experimental results demonstrate that our proposed method achieves a much faster training speed than kernel RankSVM and comparable or better performance than state-of-the-art ranking algorithms. PMID:28293256
Ranking Support Vector Machine with Kernel Approximation.
Chen, Kai; Li, Rongchun; Dou, Yong; Liang, Zhengfa; Lv, Qi
2017-01-01
Learning-to-rank algorithms have become important in recent years due to their successful application in information retrieval, recommender systems, computational biology, and so forth. Ranking support vector machine (RankSVM) is one of the state-of-the-art ranking models and has been favorably used. Nonlinear RankSVM (RankSVM with nonlinear kernels) can give higher accuracy than linear RankSVM (RankSVM with a linear kernel) for complex nonlinear ranking problems. However, the learning methods for nonlinear RankSVM are still time-consuming because of the calculation of the kernel matrix. In this paper, we propose a fast ranking algorithm based on kernel approximation to avoid computing the kernel matrix. We explore two types of kernel approximation methods, namely, the Nyström method and random Fourier features. A primal truncated Newton method is used to optimize the pairwise L2-loss (squared hinge loss) objective function of the ranking model after the nonlinear kernel approximation. Experimental results demonstrate that our proposed method achieves a much faster training speed than kernel RankSVM and comparable or better performance than state-of-the-art ranking algorithms.
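A minimal sketch of the kernel-approximation idea is shown below. It uses scikit-learn's random Fourier features and a linear squared-hinge classifier on pairwise feature differences in place of the paper's primal truncated Newton solver; the synthetic data, pair sampling, and hyperparameters are illustrative assumptions:

```python
import numpy as np
from sklearn.kernel_approximation import RBFSampler   # random Fourier features
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)

# Synthetic ranking data: relevance is a nonlinear function of the item features.
X = rng.normal(size=(500, 5))
score = np.sin(X[:, 0]) + X[:, 1] ** 2

# Preference pairs (i preferred over j) sampled from the true ordering.
i, j = rng.integers(0, 500, size=(2, 3000))
mask = score[i] != score[j]
i, j = i[mask], j[mask]

# Approximate the RBF kernel by an explicit feature map, then rank linearly.
phi = RBFSampler(gamma=0.5, n_components=200, random_state=0)
Z = phi.fit_transform(X)

# Pairwise-difference formulation with the squared hinge loss (LinearSVC default).
D = Z[i] - Z[j]
y = np.where(score[i] > score[j], 1, -1)
ranker = LinearSVC(loss="squared_hinge", C=1.0, fit_intercept=False, max_iter=5000)
ranker.fit(D, y)

# Ranking scores for items are just a dot product in the approximate feature space.
w = ranker.coef_.ravel()
print("training pair accuracy:", np.mean(np.sign(D @ w) == y))
```

Swapping RBFSampler for sklearn's Nystroem transformer gives the other approximation route discussed in the abstract without changing the rest of the pipeline.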
Control of Complex Dynamic Systems by Neural Networks
NASA Technical Reports Server (NTRS)
Spall, James C.; Cristion, John A.
1993-01-01
This paper considers the use of neural networks (NN's) in controlling a nonlinear, stochastic system with unknown process equations. The NN is used to model the resulting unknown control law. The approach here is based on using the output error of the system to train the NN controller without the need to construct a separate model (NN or other type) for the unknown process dynamics. To implement such a direct adaptive control approach, it is required that connection weights in the NN be estimated while the system is being controlled. As a result of the feedback of the unknown process dynamics, however, it is not possible to determine the gradient of the loss function for use in standard (back-propagation-type) weight estimation algorithms. Therefore, this paper considers the use of a new stochastic approximation algorithm for this weight estimation, which is based on a 'simultaneous perturbation' gradient approximation that only requires the system output error. It is shown that this algorithm can greatly enhance the efficiency over more standard stochastic approximation algorithms based on finite-difference gradient approximations.
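The simultaneous perturbation gradient approximation at the heart of the approach is compact enough to sketch. The toy problem below uses a hypothetical quadratic loss standing in for the controller's measured output error, not the paper's system; it shows the two-measurement gradient estimate and a decaying-gain update:

```python
import numpy as np

def spsa_gradient(loss, w, c, rng):
    """Two-measurement simultaneous perturbation gradient estimate."""
    delta = rng.choice([-1.0, 1.0], size=w.shape)     # random +/-1 perturbation direction
    return (loss(w + c * delta) - loss(w - c * delta)) / (2.0 * c * delta)

# Toy noisy loss: only scalar loss values are observable, not gradients.
rng = np.random.default_rng(0)
loss = lambda w: np.sum((w - 1.0) ** 2) + 0.01 * rng.normal()
w = rng.normal(size=20)                               # stand-in for the NN connection weights

for k in range(2000):
    a_k = 0.5 / (k + 50)                              # decaying gain sequence
    w = w - a_k * spsa_gradient(loss, w, c=0.01, rng=rng)

print(np.round(w[:5], 2))                             # the weights approach the optimum at 1.0
```

The key point, as in the abstract, is that each update needs only two loss evaluations regardless of the number of weights, unlike finite-difference schemes whose cost grows with the dimension.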
An expanded set of brown dwarf and very low mass star models
NASA Technical Reports Server (NTRS)
Burrows, A.; Hubbard, W. B.; Saumon, D.; Lunine, J. I.
1993-01-01
We present in this paper updated and improved theoretical models of brown dwarfs and late M dwarfs. The evolution and characteristics of objects between 0.01 and 0.2 solar mass are exhaustively investigated and special emphasis is placed on their properties at early ages. The dependence on the helium fraction, deuterium fraction, and metallicity of the masses, effective temperature and luminosities at the edge of the hydrogen main sequence are calculated. We derive luminosity functions for representative mass functions and compare our predictions to recent cluster data. We show that there are distinctive features in the theoretical luminosity functions that can serve as diagnostics of brown dwarf physics. A zero-metallicity model is presented as a bound to or approximation of a putative extreme halo population.
Self Improving Methods for Materials and Process Design
1998-08-31
using inductive coupling techniques. The first phase of the work focuses on developing an artificial neural network learning algorithm for function approximation...developing an artificial neural network learning algorithm for time-series prediction. The third phase of the work focuses on model selection. We have
Basic features of the pion valence-quark distribution function
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chang, Lei; Mezrag, Cédric; Moutarde, Hervé
2014-10-07
The impulse-approximation expression used hitherto to define the pion's valence-quark distribution function is flawed because it omits contributions from the gluons which bind quarks into the pion. A corrected leading-order expression produces the model-independent result that quarks dressed via the rainbow-ladder truncation, or any practical analogue, carry all the pion's light-front momentum at a characteristic hadronic scale. Corrections to the leading contribution may be divided into two classes, responsible for shifting dressed-quark momentum into glue and sea-quarks. Working with available empirical information, we use an algebraic model to express the principal impact of both classes of corrections. This enables a realistic comparison with experiment that allows us to highlight the basic features of the pion's measurable valence-quark distribution, q_π(x); namely, at a characteristic hadronic scale, q_π(x) ~ (1-x)^2 for x ≳ 0.85, and the valence-quarks carry approximately two-thirds of the pion's light-front momentum.
Li, Wei; Yi, Huangjian; Zhang, Qitan; Chen, Duofang; Liang, Jimin
2012-01-01
An extended finite element method (XFEM) for the forward model of 3D optical molecular imaging is developed with the simplified spherical harmonics approximation (SPN). In the XFEM scheme of the SPN equations, the signed distance function is employed to accurately represent the internal tissue boundary, and then it is used to construct the enriched basis function of the finite element scheme. Therefore, the finite element calculation can be carried out without the time-consuming internal boundary mesh generation. Moreover, the overly fine mesh that would otherwise be required to conform to the complex tissue boundary, which leads to excess time cost, can be avoided. XFEM facilitates application to tissues with complex internal structure and improves the computational efficiency. Phantom and digital mouse experiments were carried out to validate the efficiency of the proposed method. Compared with the standard finite element method and the classical Monte Carlo (MC) method, the validation results show the merits and potential of the XFEM for optical imaging. PMID:23227108
Li, Wei; Yi, Huangjian; Zhang, Qitan; Chen, Duofang; Liang, Jimin
2012-01-01
An extended finite element method (XFEM) for the forward model of 3D optical molecular imaging is developed with the simplified spherical harmonics approximation (SP(N)). In the XFEM scheme of the SP(N) equations, the signed distance function is employed to accurately represent the internal tissue boundary, and then it is used to construct the enriched basis function of the finite element scheme. Therefore, the finite element calculation can be carried out without the time-consuming internal boundary mesh generation. Moreover, the overly fine mesh that would otherwise be required to conform to the complex tissue boundary, which leads to excess time cost, can be avoided. XFEM facilitates application to tissues with complex internal structure and improves the computational efficiency. Phantom and digital mouse experiments were carried out to validate the efficiency of the proposed method. Compared with the standard finite element method and the classical Monte Carlo (MC) method, the validation results show the merits and potential of the XFEM for optical imaging.
Volatility in financial markets: stochastic models and empirical results
NASA Astrophysics Data System (ADS)
Miccichè, Salvatore; Bonanno, Giovanni; Lillo, Fabrizio; Mantegna, Rosario N.
2002-11-01
We investigate the historical volatility of the 100 most capitalized stocks traded in US equity markets. An empirical probability density function (pdf) of volatility is obtained and compared with the theoretical predictions of a lognormal model and of the Hull and White model. The lognormal model describes the pdf well in the region of low values of volatility, whereas the Hull and White model better approximates the empirical pdf for large values of volatility. Both models fail to describe the empirical pdf over a moderately large volatility range.
QRAP: A numerical code for projected (Q)uasiparticle (RA)ndom (P)hase approximation
NASA Astrophysics Data System (ADS)
Samana, A. R.; Krmpotić, F.; Bertulani, C. A.
2010-06-01
A computer code for the quasiparticle random phase approximation (QRPA) and projected quasiparticle random phase approximation (PQRPA) models of nuclear structure is explained in detail. The residual interaction is approximated by a simple δ-force. An important application of the code consists in evaluating nuclear matrix elements involved in neutrino-nucleus reactions. As an example, cross sections for 56Fe and 12C are calculated and the code output is explained. The application to other nuclei and the description of other nuclear and weak decay processes are also discussed. Program summary: Title of program: QRAP (Quasiparticle RAndom Phase approximation). Computers: The code has been created on a PC, but also runs on UNIX or LINUX machines. Operating systems: WINDOWS or UNIX. Program language used: Fortran-77. Memory required to execute with typical data: 16 MB of RAM and 2 MB of hard disk space. No. of lines in distributed program, including test data, etc.: ~8000. No. of bytes in distributed program, including test data, etc.: ~256 kB. Distribution format: tar.gz. Nature of physical problem: The program calculates neutrino- and antineutrino-nucleus cross sections as a function of the incident neutrino energy, and muon capture rates, using the QRPA or PQRPA as nuclear structure models. Method of solution: The QRPA, or PQRPA, equations are solved in a self-consistent way for even-even nuclei. The nuclear matrix elements for the neutrino-nucleus interaction are treated as the inverse beta reaction of odd-odd nuclei as a function of the transferred momentum. Typical running time: ≈5 min on a 3 GHz processor for Data set 1.
Butler, Samuel D; Nauyoks, Stephen E; Marciniak, Michael A
2015-11-02
A popular class of BRDF models comprises the microfacet models, in which geometric optics is assumed. In contrast, more complex physical optics models may more accurately predict the BRDF, but the calculation is more resource intensive. These seemingly disparate approaches are compared in detail for the rough and smooth surface approximations of the modified Beckmann-Kirchhoff BRDF model, assuming Gaussian surface statistics. An approximation relating standard Fresnel reflection with the semi-rough surface polarization term, Q, is presented for unpolarized light. For rough surfaces, the angular dependence of direction cosine space is shown to be identical to the angular dependence in the microfacet distribution function. For polished surfaces, the same comparison shows a breakdown in the microfacet models. Similarities and differences between microfacet BRDF models and the modified Beckmann-Kirchhoff model are identified. The rationale for the original Beckmann-Kirchhoff F_bk^2 geometric term relative to both microfacet models and the generalized Harvey-Shack model is presented. A modification to the geometric F_bk^2 term in the original Beckmann-Kirchhoff BRDF theory is proposed.
Mass functions from the excursion set model
NASA Astrophysics Data System (ADS)
Hiotelis, Nicos; Del Popolo, Antonino
2017-11-01
Aims: We aim to study the stochastic evolution of the smoothed overdensity δ at scale S of the form δ(S) = ∫_0^S K(S,u) dW(u), where K is a kernel and dW is the usual Wiener process. Methods: For a Gaussian density field, smoothed by the top-hat filter in real space, we used a simple kernel that gives the correct correlation between scales. A Monte Carlo procedure was used to construct random walks and to calculate first-crossing distributions and consequently mass functions for a constant barrier. Results: We show that the evolution considered here improves the agreement with the results of N-body simulations relative to analytical approximations which have been proposed for the same problem by other authors. In fact, we show that an evolution which is fully consistent with the ideas of the excursion set model describes accurately the mass function of dark matter haloes for values of ν ≤ 1 and underestimates the number of larger haloes. Finally, we show that a constant threshold of collapse, lower than is usually used, is able to produce a mass function which approximates the results of N-body simulations for a variety of redshifts and for a wide range of masses. Conclusions: A mass function in good agreement with N-body simulations can be obtained analytically using a lower than usual constant collapse threshold.
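The Monte Carlo construction of first-crossing distributions is easy to sketch for the simpler Markovian (sharp-k) case with uncorrelated steps; the paper's kernel introduces correlations between scales, which this illustrative snippet deliberately omits:

```python
import numpy as np

rng = np.random.default_rng(2)
delta_c = 1.686                    # constant collapse barrier
dS, S_max, n_walks = 0.005, 10.0, 20000

delta = np.zeros(n_walks)
first = np.full(n_walks, np.nan)   # first-crossing scale S for each walk
alive = np.ones(n_walks, dtype=bool)
for k in range(1, int(S_max / dS) + 1):
    delta[alive] += np.sqrt(dS) * rng.standard_normal(alive.sum())
    crossed = alive & (delta >= delta_c)
    first[crossed] = k * dS
    alive &= ~crossed

# Monte Carlo first-crossing distribution versus the analytic result for
# uncorrelated walks, f(S) = delta_c / sqrt(2 pi) * S**-1.5 * exp(-delta_c**2 / (2 S)).
hist, edges = np.histogram(first[np.isfinite(first)], bins=50, range=(0.0, S_max))
f_mc = hist / (n_walks * np.diff(edges))
S_mid = 0.5 * (edges[:-1] + edges[1:])
f_an = delta_c / np.sqrt(2 * np.pi) * S_mid ** -1.5 * np.exp(-delta_c ** 2 / (2 * S_mid))
print(np.max(np.abs(f_mc - f_an)))
```

Replacing the independent increments with a kernel-weighted integral of the Wiener process, as in the abstract, is what changes the resulting mass function away from this baseline.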
Spline smoothing of histograms by linear programming
NASA Technical Reports Server (NTRS)
Bennett, J. O.
1972-01-01
An algorithm is presented for obtaining an approximating function to the frequency distribution from a sample of size n. To obtain the approximating function, a histogram is made from the data. Next, Euclidean space approximations to the graph of the histogram, using central B-splines as basis elements, are obtained by linear programming. The approximating function has area one and is nonnegative.
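One plausible reading of this construction can be sketched with modern tools: fit B-spline coefficients to the histogram by linear programming, minimizing the L1 misfit subject to nonnegative coefficients and unit area. The knot layout, bin count, and objective below are illustrative assumptions, not the report's exact formulation:

```python
import numpy as np
from scipy.interpolate import BSpline
from scipy.optimize import linprog

rng = np.random.default_rng(3)
sample = rng.normal(size=1000)

# Histogram of the sample (density-normalized heights at the bin centres).
heights, edges = np.histogram(sample, bins=30, density=True)
x = 0.5 * (edges[:-1] + edges[1:])

# Cubic B-spline basis on a uniform knot vector spanning the data range.
k = 3
inner = np.linspace(edges[0], edges[-1], 10)
t = np.concatenate([[inner[0]] * k, inner, [inner[-1]] * k])
m = len(t) - k - 1
B = np.zeros((len(x), m))
for idx in range(m):
    basis = BSpline.basis_element(t[idx: idx + k + 2], extrapolate=False)
    B[:, idx] = np.nan_to_num(basis(x))

# Linear program: minimise the L1 misfit to the histogram subject to
# nonnegative coefficients (hence a nonnegative spline) and unit area.
n = len(x)
cost = np.concatenate([np.zeros(m), np.ones(n)])            # variables: coefficients, |residual| slacks
A_ub = np.block([[B, -np.eye(n)], [-B, -np.eye(n)]])
b_ub = np.concatenate([heights, -heights])
area = (t[k + 1: k + 1 + m] - t[:m]) / (k + 1)              # integral of each basis element
A_eq = np.concatenate([area, np.zeros(n)])[None, :]
res = linprog(cost, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
              bounds=[(0, None)] * (m + n))
density = BSpline(t, res.x[:m], k)                          # smooth, nonnegative, area-one estimate
print(res.status, density(0.0))
```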
Finding the Best Quadratic Approximation of a Function
ERIC Educational Resources Information Center
Yang, Yajun; Gordon, Sheldon P.
2011-01-01
This article examines the question of finding the best quadratic function to approximate a given function on an interval. The prototypical function considered is f(x) = e^x. Two approaches are considered, one based on Taylor polynomial approximations at various points in the interval under consideration, the other based on the fact…
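The flavor of the comparison is easy to reproduce numerically. The sketch below (illustrative, not the article's worked example; the interval and expansion point are assumptions) contrasts the midpoint Taylor quadratic for e^x with a degree-2 Chebyshev least-squares fit, a standard near-minimax competitor:

```python
import numpy as np
from numpy.polynomial import chebyshev as Cheb

f = np.exp
a, b = 0.0, 1.0
x = np.linspace(a, b, 2001)

# Taylor quadratic expanded about the midpoint of the interval
# (every derivative of e^x at m equals e^m).
m = 0.5 * (a + b)
taylor = f(m) * (1.0 + (x - m) + 0.5 * (x - m) ** 2)

# Degree-2 Chebyshev least-squares fit on the same interval.
cheb = Cheb.Chebyshev.fit(x, f(x), deg=2)

print("max |error|, Taylor   :", np.max(np.abs(f(x) - taylor)))
print("max |error|, Chebyshev:", np.max(np.abs(f(x) - cheb(x))))
```

The Taylor quadratic is far more accurate near the expansion point, while the Chebyshev fit spreads the error more evenly and achieves a much smaller maximum error over the whole interval.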
NASA Astrophysics Data System (ADS)
Mainardi, Francesco; Masina, Enrico; Spada, Giorgio
2018-02-01
We present a new rheological model depending on a real parameter ν ∈ [0,1], which reduces to the Maxwell body for ν = 0 and to the Becker body for ν = 1. The corresponding creep law is expressed in an integral form in which the exponential function of the Becker model is replaced and generalized by a Mittag-Leffler function of order ν. Then the corresponding non-dimensional creep function and its rate are studied as functions of time for different values of ν in order to visualize the transition from the classical Maxwell body to the Becker body. Based on the hereditary theory of linear viscoelasticity, we also approximate the relaxation function by solving numerically a Volterra integral equation of the second kind. In turn, the relaxation function is shown versus time for different values of ν to visualize again the transition from the classical Maxwell body to the Becker body. Furthermore, we provide a full characterization of the new model by computing, in addition to the creep and relaxation functions, the so-called specific dissipation Q^{-1} as a function of frequency, which is of particular relevance for geophysical applications.
Best uniform approximation to a class of rational functions
NASA Astrophysics Data System (ADS)
Zheng, Zhitong; Yong, Jun-Hai
2007-10-01
We explicitly determine the best uniform polynomial approximation to a class of rational functions of the form 1/(x-c)^2 + K(a,b,c,n)/(x-c) on [a,b], represented by their Chebyshev expansion, where a, b, and c are real numbers, n-1 denotes the degree of the best approximating polynomial, and K is a constant determined by a, b, c, and n. Our result is based on the explicit determination of a phase angle η in the representation of the approximation error by a trigonometric function. Moreover, we formulate an ansatz which offers a heuristic strategy to determine the best approximating polynomial to a function represented by its Chebyshev expansion. Combined with the phase angle method, this ansatz can be used to find the best uniform approximation to some more functions.
Approximating high-dimensional dynamics by barycentric coordinates with linear programming
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hirata, Yoshito, E-mail: yoshito@sat.t.u-tokyo.ac.jp; Aihara, Kazuyuki; Suzuki, Hideyuki
The increasing development of novel methods and techniques facilitates the measurement of high-dimensional time series but challenges our ability for accurate modeling and predictions. The use of a general mathematical model requires the inclusion of many parameters, which are difficult to be fitted for relatively short high-dimensional time series observed. Here, we propose a novel method to accurately model a high-dimensional time series. Our method extends the barycentric coordinates to high-dimensional phase space by employing linear programming, and allowing the approximation errors explicitly. The extension helps to produce free-running time-series predictions that preserve typical topological, dynamical, and/or geometric characteristics of the underlying attractors more accurately than the radial basis function model that is widely used. The method can be broadly applied, from helping to improve weather forecasting, to creating electronic instruments that sound more natural, and to comprehensively understanding complex biological data.
History-Dependent Problems with Applications to Contact Models for Elastic Beams
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bartosz, Krzysztof; Kalita, Piotr; Migórski, Stanisław
We prove an existence and uniqueness result for a class of subdifferential inclusions which involve a history-dependent operator. Then we specialize this result in the study of a class of history-dependent hemivariational inequalities. Problems of such kind arise in a large number of mathematical models which describe quasistatic processes of contact. To provide an example we consider an elastic beam in contact with a reactive obstacle. The contact is modeled with a new and nonstandard condition which involves both the subdifferential of a nonconvex and nonsmooth function and a Volterra-type integral term. We derive a variational formulation of the problem which is in the form of a history-dependent hemivariational inequality for the displacement field. Then, we use our abstract result to prove its unique weak solvability. Finally, we consider a numerical approximation of the model, solve effectively the approximate problems and provide numerical simulations.
Approximating high-dimensional dynamics by barycentric coordinates with linear programming.
Hirata, Yoshito; Shiro, Masanori; Takahashi, Nozomu; Aihara, Kazuyuki; Suzuki, Hideyuki; Mas, Paloma
2015-01-01
The increasing development of novel methods and techniques facilitates the measurement of high-dimensional time series but challenges our ability for accurate modeling and predictions. The use of a general mathematical model requires the inclusion of many parameters, which are difficult to be fitted for relatively short high-dimensional time series observed. Here, we propose a novel method to accurately model a high-dimensional time series. Our method extends the barycentric coordinates to high-dimensional phase space by employing linear programming, and allowing the approximation errors explicitly. The extension helps to produce free-running time-series predictions that preserve typical topological, dynamical, and/or geometric characteristics of the underlying attractors more accurately than the radial basis function model that is widely used. The method can be broadly applied, from helping to improve weather forecasting, to creating electronic instruments that sound more natural, and to comprehensively understanding complex biological data.
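A small sketch conveys the barycentric-coordinates-by-linear-programming idea described in the two records above (illustrative only, not the authors' code; the toy trajectory and the one-step prediction setup are assumptions):

```python
import numpy as np
from scipy.optimize import linprog

def barycentric_predict(library, successors, query):
    """Write `query` as a convex combination of library states via linear programming,
    allowing an explicit L1 approximation error, then carry the weights over to the
    successor states to predict one step ahead (illustrative sketch)."""
    n, d = library.shape
    cost = np.concatenate([np.zeros(n), np.ones(d)])            # variables: weights w, error slacks e
    I = np.eye(d)
    A_ub = np.block([[library.T, -I], [-library.T, -I]])        # |library.T @ w - query| <= e
    b_ub = np.concatenate([query, -query])
    A_eq = np.concatenate([np.ones(n), np.zeros(d)])[None, :]   # weights sum to one
    res = linprog(cost, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * (n + d))
    return successors.T @ res.x[:n]

# Toy usage: one-step prediction on a noisy circular trajectory.
theta = np.linspace(0.0, 6 * np.pi, 400)
traj = np.column_stack([np.cos(theta), np.sin(theta)])
traj += 0.01 * np.random.default_rng(4).normal(size=traj.shape)
prediction = barycentric_predict(traj[:-2], traj[1:-1], traj[-2])
print(prediction, traj[-1])      # predicted next state versus the actual one
```

Iterating the prediction on its own output gives the free-running forecasts discussed in the abstract; the explicit error slacks are what distinguish this from an exact barycentric interpolation.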
Model of chiral spin liquids with Abelian and non-Abelian topological phases
NASA Astrophysics Data System (ADS)
Chen, Jyong-Hao; Mudry, Christopher; Chamon, Claudio; Tsvelik, A. M.
2017-12-01
We present a two-dimensional lattice model for quantum spin-1/2 for which the low-energy limit is governed by four flavors of strongly interacting Majorana fermions. We study this low-energy effective theory using two alternative approaches. The first consists of a mean-field approximation. The second consists of a random phase approximation (RPA) for the single-particle Green's functions of the Majorana fermions built from their exact forms in a certain one-dimensional limit. The resulting phase diagram consists of two competing chiral phases, one with Abelian and the other with non-Abelian topological order, separated by a continuous phase transition. Remarkably, the Majorana fermions propagate in the two-dimensional bulk, as in the Kitaev model for a spin liquid on the honeycomb lattice. We identify the vison fields, which are mobile (they are static in the Kitaev model) domain walls propagating along only one of the two space directions.
Ding, Shaojie; Qian, Min; Qian, Hong; Zhang, Xuejuan
2016-12-28
The stochastic Hodgkin-Huxley model is one of the best-known examples of piecewise deterministic Markov processes (PDMPs), in which the electrical potential across a cell membrane, V(t), is coupled with a mesoscopic Markov jump process representing the stochastic opening and closing of ion channels embedded in the membrane. The rates of the channel kinetics, in turn, are voltage-dependent. Due to this interdependence, an accurate and efficient sampling of the time evolution of the hybrid stochastic systems has been challenging. The current exact simulation methods require solving a voltage-dependent hitting time problem for multiple path-dependent intensity functions with random thresholds. This paper proposes a simulation algorithm that approximates an alternative representation of the exact solution by fitting the log-survival function of the inter-jump dwell time, H(t), with a piecewise linear one. The latter uses interpolation points that are chosen according to the time evolution of the H(t), as the numerical solution to the coupled ordinary differential equations of V(t) and H(t). This computational method can be applied to all PDMPs. Pathwise convergence of the approximated sample trajectories to the exact solution is proven, and error estimates are provided. Comparison with a previous algorithm that is based on piecewise constant approximation is also presented.
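The jump-time construction at the core of the algorithm can be sketched generically. The snippet below uses a toy intensity and flow (hypothetical forms, not the Hodgkin-Huxley rates) to show how the log-survival function H(t) is accumulated along the deterministic dynamics and inverted with a piecewise-linear interpolation at a random threshold:

```python
import numpy as np

rng = np.random.default_rng(5)

def lam(v):
    # Toy voltage-dependent jump intensity (hypothetical form, not the HH rates).
    return 0.5 * (1.0 + np.tanh(v))

def flow(v):
    # Toy deterministic drift of V(t) between jumps (hypothetical leak dynamics).
    return -0.2 * (v - 1.0)

def next_jump(v0, dt=1e-3):
    """Sample the next jump time of a PDMP by integrating H(t) = -log(survival)
    = integral of lam(V(s)) ds along the flow, and inverting a piecewise-linear
    interpolant of H at a random exponential threshold (generic sketch)."""
    threshold = rng.exponential(1.0)                   # equals -log(U) with U ~ Uniform(0, 1)
    t, v, H = 0.0, v0, 0.0
    while True:
        dH = lam(v) * dt
        if H + dH >= threshold:
            return t + dt * (threshold - H) / dH, v    # linear interpolation over the last step
        H += dH
        v += flow(v) * dt                              # Euler step of the deterministic dynamics
        t += dt

t_jump, v_at_jump = next_jump(v0=-0.5)
print(t_jump, v_at_jump)
```

In the full algorithm the interpolation points are chosen adaptively from the evolution of H(t) itself, and V(t) and H(t) are advanced together as a coupled ODE system, which is what yields the pathwise convergence and error estimates mentioned above.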
Phonon scattering in nanoscale systems: lowest order expansion of the current and power expressions
NASA Astrophysics Data System (ADS)
Paulsson, Magnus; Frederiksen, Thomas; Brandbyge, Mads
2006-04-01
We use the non-equilibrium Green's function method to describe the effects of phonon scattering on the conductance of nano-scale devices. Useful and accurate approximations are developed that both provide (i) computationally simple formulas for large systems and (ii) simple analytical models. In addition, the simple models can be used to fit experimental data and provide physical parameters.
On the use of Bayesian Monte-Carlo in evaluation of nuclear data
NASA Astrophysics Data System (ADS)
De Saint Jean, Cyrille; Archier, Pascal; Privas, Edwin; Noguere, Gilles
2017-09-01
As model parameters, necessary ingredients of theoretical models, are not always predicted by theory, a formal mathematical framework associated with the evaluation work is needed to obtain the best set of parameters (resonance parameters, optical models, fission barriers, average widths, multigroup cross sections) with Bayesian statistical inference by comparing theory to experiment. The formal rule related to this methodology is to estimate the posterior probability density function of a set of parameters by solving an equation of the following type: pdf(posterior) ∝ pdf(prior) × a likelihood function. A fitting procedure can be seen as an estimation of the posterior probability density of a set of parameters (referred to as the vector x) knowing prior information on these parameters and a likelihood which gives the probability density function of observing a data set knowing x. To solve this problem, two major paths could be taken: add approximations and hypotheses and obtain an equation to be solved numerically (minimum of a cost function, or the Generalized Least Squares method, referred to as GLS), or use Monte Carlo sampling of all prior distributions and estimate the final posterior distribution. Monte Carlo methods are a natural solution for Bayesian inference problems. They avoid approximations (existing in the traditional adjustment procedure based on chi-square minimization) and propose alternatives in the choice of probability density distributions for priors and likelihoods. This paper will propose the use of what we are calling Bayesian Monte Carlo (referred to as BMC in the rest of the manuscript) over the whole energy range, from the thermal and resonance regions to the continuum, for all nuclear reaction models at these energies. Algorithms will be presented based on Monte Carlo sampling and Markov chains. The objectives of BMC are to propose a reference calculation for validating the GLS calculations and approximations, to test the effects of probability density distributions, and to provide a framework for finding the global minimum if several local minima exist. Applications to resolved resonance, unresolved resonance and continuum evaluation, as well as multigroup cross section data assimilation, will be presented.
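The "pdf(posterior) ∝ pdf(prior) × likelihood" rule realized by prior sampling can be sketched in a few lines. The toy model, prior, and data covariance below are hypothetical stand-ins, not an actual nuclear reaction model or evaluated data set:

```python
import numpy as np

rng = np.random.default_rng(6)

# Toy "nuclear model": an observable linear in two parameters (hypothetical stand-in).
def model(theta0, theta1, energies):
    return theta0 + theta1 * energies

energies = np.linspace(0.1, 2.0, 20)
data = model(1.0, 0.5, energies) + rng.normal(0.0, 0.05, energies.size)
inv_cov = np.eye(energies.size) / 0.05 ** 2

# Prior: independent Gaussians on the two parameters; sample it directly.
prior_mean, prior_sd = np.array([0.8, 0.3]), np.array([0.5, 0.5])
samples = rng.normal(prior_mean, prior_sd, size=(50000, 2))

# Likelihood weights ~ exp(-chi^2 / 2); posterior moments follow as weighted averages.
resid = model(samples[:, :1], samples[:, 1:], energies[None, :]) - data
chi2 = np.einsum('ij,jk,ik->i', resid, inv_cov, resid)
w = np.exp(-0.5 * (chi2 - chi2.min()))
post_mean = (w[:, None] * samples).sum(axis=0) / w.sum()
post_cov = np.cov(samples.T, aweights=w)
print(post_mean, np.sqrt(np.diag(post_cov)))
```

For a linear model with Gaussian prior and likelihood this weighted-sample estimate should agree with the GLS solution, which is exactly the kind of cross-validation the abstract proposes; Markov chain sampling replaces the plain prior sampling when the posterior is far from the prior.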
Minimal-Approximation-Based Decentralized Backstepping Control of Interconnected Time-Delay Systems.
Choi, Yun Ho; Yoo, Sung Jin
2016-12-01
A decentralized adaptive backstepping control design using minimal function approximators is proposed for nonlinear large-scale systems with unknown unmatched time-varying delayed interactions and unknown backlash-like hysteresis nonlinearities. Compared with existing decentralized backstepping methods, the contribution of this paper is to design a simple local control law for each subsystem, consisting of an actual control with one adaptive function approximator, without requiring the use of multiple function approximators and regardless of the order of each subsystem. The virtual controllers for each subsystem are used as intermediate signals for designing a local actual control at the last step. For each subsystem, a lumped unknown function including the unknown nonlinear terms and the hysteresis nonlinearities is derived at the last step and is estimated by one function approximator. Thus, the proposed approach only uses one function approximator to implement each local controller, while existing decentralized backstepping control methods require the number of function approximators equal to the order of each subsystem and a calculation of virtual controllers to implement each local actual controller. The stability of the total controlled closed-loop system is analyzed using the Lyapunov stability theorem.
A simple, approximate model of parachute inflation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Macha, J.M.
1992-11-01
A simple, approximate model of parachute inflation is described. The model is based on the traditional, practical treatment of the fluid resistance of rigid bodies in nonsteady flow, with appropriate extensions to accommodate the change in canopy inflated shape. Correlations for the steady drag and steady radial force as functions of the inflated radius are required as input to the dynamic model. In a novel approach, the radial force is expressed in terms of easily obtainable drag and reefing line tension measurements. A series of wind tunnel experiments provides the needed correlations. Coefficients associated with the added mass of fluid are evaluated by calibrating the model against an extensive and reliable set of flight data. A parameter is introduced which appears to universally govern the strong dependence of the axial added mass coefficient on motion history. Through comparisons with flight data, the model is shown to realistically predict inflation forces for ribbon and ringslot canopies over a wide range of sizes and deployment conditions.
A simple, approximate model of parachute inflation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Macha, J.M.
1992-01-01
A simple, approximate model of parachute inflation is described. The model is based on the traditional, practical treatment of the fluid resistance of rigid bodies in nonsteady flow, with appropriate extensions to accommodate the change in canopy inflated shape. Correlations for the steady drag and steady radial force as functions of the inflated radius are required as input to the dynamic model. In a novel approach, the radial force is expressed in terms of easily obtainable drag and reefing line tension measurements. A series of wind tunnel experiments provides the needed correlations. Coefficients associated with the added mass of fluid are evaluated by calibrating the model against an extensive and reliable set of flight data. A parameter is introduced which appears to universally govern the strong dependence of the axial added mass coefficient on motion history. Through comparisons with flight data, the model is shown to realistically predict inflation forces for ribbon and ringslot canopies over a wide range of sizes and deployment conditions.
Modeling NDT piezoelectric ultrasonic transmitters.
San Emeterio, J L; Ramos, A; Sanz, P T; Ruíz, A; Azbaid, A
2004-04-01
Ultrasonic NDT applications are frequently based on the spike excitation of piezoelectric transducers by means of efficient pulsers which usually include a power switching device (e.g. SCR or MOS-FET) and some rectifier components. In this paper we present an approximate frequency domain electro-acoustic model for pulsed piezoelectric ultrasonic transmitters which, by integrating partial models of the different stages (driving electronics, tuning/matching networks and broadband piezoelectric transducer), allows the computation of the emission transfer function and output force temporal waveform. An approximate frequency domain model is used for the evaluation of the electrical driving pulse from the spike generator. Tuning circuits, interconnecting cable and mechanical impedance matching layers are modeled by means of transmission lines and the classical quadripole approach. The KLM model is used for the piezoelectric transducer. In addition, a PSPICE scheme is used for an alternative simulation of the broadband driving spike, including the accurate evaluation of non-linear driving effects. Several examples illustrate the capabilities of the specifically developed software.
Geometrical evidence for dark matter: X-ray constraints on the mass of the elliptical galaxy NGC 720
NASA Astrophysics Data System (ADS)
Buote, David A.; Canizares, Claude R.
1994-05-01
We describe (1) a new test for dark matter and alternate theories of gravitation based on the relative geometries of the X-ray and optical surface brightness distributions and an assumed form for the potential of the optical light, (2) a technique to measure the shapes of the total gravitating matter and dark matter of an ellipsoidal system which is insensitive to the precise value of the gas temperature and to modest temperature gradients, and (3) a new method to determine the ratio of dark mass to stellar mass that depends on the functional forms for the visible star, gas, and dark matter distributions, but is independent of the distance to the galaxy and of the gas temperature. We apply these techniques to X-ray data from the ROSAT Position Sensitive Proportional Counter (PSPC) for the optically flattened elliptical galaxy NGC 720; the optical isophotes have ellipticity epsilon approximately 0.40 extending out to approximately 120 arcsec. The X-ray isophotes are significantly elongated, epsilon = 0.20-0.30 for semimajor axis a approximately 100 arcsec. The major axes of the optical and X-ray isophotes are misaligned by approximately 30 +/- 15 deg. Spectral analysis of the X-ray data reveals no evidence of temperature gradients or anisotropies and demonstrates that a single-temperature plasma (T approximately 0.6 keV) with subsolar heavy-element abundances and a two-temperature model with solar abundances describe the spectrum equally well. Considering only the relative geometries of the X-ray and optical surface brightness distributions and an assumed functional form for the potential of the optical light, we conclude that matter distributed like the optical light cannot produce the observed ellipticities of the X-ray isophotes, independent of the gas pressure, the gas temperature, and the value of the stellar mass; this comparison assumes a state of quasi-hydrostatic equilibrium, so that the three-dimensional surfaces of constant gas emissivity trace the three-dimensional isopotential surfaces. We discuss the viability of this assumption for NGC 720. Milgrom's Modification of Newtonian Dynamics (MOND) cannot dispel this manifestation of dark matter. Hence, geometrical considerations alone, without reference to pressure or temperature, require the presence of an extended, massive dark matter halo in NGC 720. Employing essentially the technique of Buote & Canizares (1992; Buote 1992), we use the shape of the X-ray surface brightness to constrain the shape of the total gravitating matter. The total matter is modeled as either an oblate or a prolate spheroid of constant shape and orientation having either a Ferrers (rho approximately r^(-n)) or a Hernquist density. Assuming the X-ray gas is in hydrostatic equilibrium, we construct a model X-ray gas distribution for various temperature profiles. We determine the ellipticity of the total gravitating matter to be epsilon approximately 0.50-0.70. Using the single-temperature model we estimate a total mass of approximately (0.41-1.4) x 10^12 h_80 solar masses interior to the ellipsoid of semimajor axis 43.6 h_80 kpc. Ferrers densities as steep as r^(-3) do not fit the data, but the r^(-2) and Hernquist models yield excellent fits. We estimate the mass distributions of the stars and the gas and fit the dark matter directly. For a given gas equation of state and functional forms for the visible stars, gas, and dark matter, these models yield a distance-independent and temperature-independent measurement of the ratio of dark mass to stellar mass, M_DM/M_stars. We estimate a minimum M_DM/M_stars greater than or equal to 4, which corresponds to a total mass slightly greater than that derived from the single-temperature models for distance D = 20 h_80 Mpc.
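The geometrical argument above rests on the fact that, for an isothermal gas in hydrostatic equilibrium, the gas density (and hence the X-ray emissivity) depends on position only through the gravitational potential, so isoemissivity surfaces trace isopotential surfaces regardless of the exact temperature. The minimal sketch below illustrates that relation; it uses a spherical Hernquist potential and purely hypothetical masses, radii, and temperatures for simplicity, not the ellipsoidal models or fitted values of the paper.

```python
import numpy as np

# Physical constants (SI)
K_B = 1.380649e-23        # Boltzmann constant [J/K]
M_P = 1.67262192e-27      # proton mass [kg]
MU  = 0.6                 # mean molecular weight of hot ionized gas (assumed value)

def hernquist_potential(r_m, M_kg, a_m):
    """Potential of a spherical Hernquist model, phi(r) = -G*M/(r + a)."""
    G = 6.674e-11
    return -G * M_kg / (r_m + a_m)

def isothermal_gas_density(phi, temperature_K, rho0=1.0, phi0=0.0):
    """Isothermal hydrostatic equilibrium: rho_gas proportional to
    exp(-mu*m_p*(phi - phi0)/(k*T)).

    Because rho_gas depends on position only through phi, surfaces of constant
    gas density (and constant emissivity for a single-temperature plasma)
    coincide with isopotential surfaces; the temperature only sets how fast the
    density falls between them, not their shape.
    """
    return rho0 * np.exp(-MU * M_P * (phi - phi0) / (K_B * temperature_K))

if __name__ == "__main__":
    kpc, Msun = 3.0857e19, 1.989e30
    r = np.array([10.0, 20.0, 40.0]) * kpc                          # hypothetical radii
    phi = hernquist_potential(r, M_kg=1e12 * Msun, a_m=10.0 * kpc)  # hypothetical model
    for T in (4e6, 7e6):                                            # hypothetical gas temperatures [K]
        print(T, isothermal_gas_density(phi, T, phi0=phi[0]))
```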
NASA Technical Reports Server (NTRS)
Manos, P.; Turner, L. R.
1972-01-01
Approximations that can be evaluated precisely using floating-point arithmetic are presented. The particular set of approximations developed so far covers the function TAN and the functions of USASI FORTRAN except SQRT and EXPONENTIATION. These approximations are further specialized to forms especially suited to a computer with a small memory, in that all of them can share one general-purpose subroutine for evaluating a polynomial in the square of the working argument.
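The structure described here, odd functions written as x times a polynomial in x^2 with a single shared polynomial-evaluation routine, is easy to illustrate. The sketch below is in Python rather than FORTRAN, and it uses plain Taylor coefficients purely for illustration; the original routines would have used coefficients tuned to their target precision and would include range reduction.

```python
import math

def poly(z, coeffs):
    """Shared general-purpose routine: evaluate a polynomial in z by Horner's rule.
    coeffs are ordered from the highest-degree term down to the constant term."""
    acc = 0.0
    for c in coeffs:
        acc = acc * z + c
    return acc

# Taylor coefficients (illustration only, not the original tuned coefficients).
_TAN_COEFFS = (62.0/2835.0, 17.0/315.0, 2.0/15.0, 1.0/3.0, 1.0)  # tan x ~ x * P(x^2)
_SIN_COEFFS = (-1.0/5040.0, 1.0/120.0, -1.0/6.0, 1.0)            # sin x ~ x * Q(x^2)

def tan_small(x):
    """tan(x) for |x| <= pi/4; larger arguments need range reduction first."""
    return x * poly(x * x, _TAN_COEFFS)

def sin_small(x):
    """sin(x) for |x| <= pi/4, sharing the same polynomial evaluator."""
    return x * poly(x * x, _SIN_COEFFS)

if __name__ == "__main__":
    for x in (0.1, 0.5, math.pi / 4):
        print(x, tan_small(x) - math.tan(x), sin_small(x) - math.sin(x))
```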
The effect of methamphetamine on an animal model of erectile function
Tar, Moses T.; Martinez, Luis R.; Nosanchuk, Joshua D.; Davies, Kelvin P.
2014-01-01
In the U.S., methamphetamine is considered a first-line treatment for attention-deficit hyperactivity disorder. It is also a common drug of abuse. Reports from patients and abusers suggest that its use results in impotence. The efficacy of phosphodiesterase-5 inhibitors (PDE5i) in restoring erectile function in these patient groups has also not been determined. In these studies we determined whether the rat is a suitable animal model for the physiological effects of methamphetamine on erectile function, and whether a PDE5i (tadalafil) affects erectile function following methamphetamine treatment. In acute-phase studies, erectile function was measured in male Sprague-Dawley rats before and after i.p. administration of 10 mg/kg methamphetamine. Chronically treated animals received escalating doses of methamphetamine (2.5 mg/kg in the first week, 5 mg/kg in the second, and 10 mg/kg in the third) i.p. daily for three weeks, and their erectile function was compared to that of untreated controls. The effect of co-administration of tadalafil was also investigated in rats treated acutely and chronically with methamphetamine. Erectile function was determined by measuring the intracorporal pressure to blood pressure ratio (ICP/BP) following cavernous nerve stimulation. In both acute- and chronic-phase studies we observed a significant increase in the rate of spontaneous erections after methamphetamine administration. In addition, following stimulation of the cavernous nerve at 4 and 6 mA, there was a significant decrease in the ICP/BP ratio (approximately 50%), indicative of impaired erectile function; tadalafil treatment reversed this effect. In chronically treated animals the ICP/BP ratio following 4 and 6 mA stimulation decreased by approximately 50% compared to untreated animals, and this erectile dysfunction was also reversed by tadalafil. Overall, our data suggest that the rat is a suitable animal model for studying the physiological effect of methamphetamine on erectile function. Our work also provides a rationale for treating patients who report erectile dysfunction associated with methamphetamine- or amphetamine-containing therapeutics with a PDE5i. PMID:24706617
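The quantitative endpoint above is simple arithmetic: the ratio of peak intracorporal pressure to blood pressure during nerve stimulation, and its change relative to controls. The small sketch below shows only that calculation; all numbers in it are hypothetical and are not the study's measurements.

```python
def icp_bp_ratio(icp_mmHg, bp_mmHg):
    """Erectile-function metric: peak intracorporal pressure over blood pressure."""
    return icp_mmHg / bp_mmHg

def percent_change(treated_ratio, control_ratio):
    """Percent change of the treated ratio relative to the control ratio."""
    return 100.0 * (treated_ratio - control_ratio) / control_ratio

if __name__ == "__main__":
    control = icp_bp_ratio(icp_mmHg=75.0, bp_mmHg=110.0)  # hypothetical control values
    treated = icp_bp_ratio(icp_mmHg=38.0, bp_mmHg=108.0)  # hypothetical post-treatment values
    print(f"control ICP/BP = {control:.2f}, treated ICP/BP = {treated:.2f}")
    print(f"change = {percent_change(treated, control):+.0f}%")  # roughly -50% for these made-up numbers
```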
Butler, Samuel D; Nauyoks, Stephen E; Marciniak, Michael A
2015-06-01
Of the many classes of bidirectional reflectance distribution function (BRDF) models, two popular classes are the microfacet model and the linear systems diffraction model. The microfacet model has the benefit of speed and simplicity, as it uses geometric optics approximations, while linear systems theory uses a diffraction approach to compute the BRDF, at the expense of greater computational complexity. In this Letter, nongrazing BRDF measurements of rough and polished surface-reflecting materials at multiple incident angles are scaled by the microfacet cross-section conversion term but plotted in the linear systems direction cosine space; in that space, the scaled data taken at the various incident angles align closely. The result is a predictive BRDF model for surface-reflecting materials at nongrazing angles that avoids some of the computational complexity of the linear systems diffraction model.
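A rough sketch of the kind of rescaling described above is given below. It assumes, for illustration, that the microfacet cross-section conversion term is the familiar 4*cos(theta_i)*cos(theta_s) denominator of microfacet BRDFs and that the in-plane direction cosine coordinate is sin(theta_s) - sin(theta_i); the Letter's exact definitions and geometry handling may differ.

```python
import numpy as np

def scaled_brdf_in_direction_cosine_space(theta_i, theta_s, brdf):
    """Rescale in-plane BRDF measurements so data taken at different incident
    angles can be overlaid in direction cosine space.

    theta_i, theta_s : incident and scattered polar angles [rad], in-plane geometry
    brdf             : measured BRDF values [1/sr]

    Returns (beta - beta0, scaled BRDF), where beta - beta0 = sin(theta_s) - sin(theta_i)
    and the BRDF is multiplied by the assumed conversion term 4*cos(theta_i)*cos(theta_s).
    """
    theta_i = np.asarray(theta_i, dtype=float)
    theta_s = np.asarray(theta_s, dtype=float)
    brdf = np.asarray(brdf, dtype=float)

    beta_minus_beta0 = np.sin(theta_s) - np.sin(theta_i)
    conversion = 4.0 * np.cos(theta_i) * np.cos(theta_s)
    return beta_minus_beta0, brdf * conversion

if __name__ == "__main__":
    # Hypothetical measurement sweep at two incident angles (20 and 40 degrees).
    theta_s = np.radians(np.linspace(-60, 60, 7))
    for inc_deg in (20.0, 40.0):
        theta_i = np.full_like(theta_s, np.radians(inc_deg))
        fake_brdf = 0.1 / np.cos(theta_s)  # placeholder data, not a real material
        x, y = scaled_brdf_in_direction_cosine_space(theta_i, theta_s, fake_brdf)
        print(inc_deg, np.round(x, 2), np.round(y, 3))
```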