#### Sample records for energy interval approximation

1. Optimal Approximation of Quadratic Interval Functions

NASA Technical Reports Server (NTRS)

Koshelev, Misha; Taillibert, Patrick

1997-01-01

Measurements are never absolutely accurate; as a result, after each measurement we do not get the exact value of the measured quantity; at best, we get an interval of its possible values. For dynamically changing quantities x, the additional problem is that we cannot measure them continuously; we can only measure them at certain discrete moments of time t_1, t_2, ... If we know that the value x(t_j) at the moment t_j of the last measurement was in the interval [x^-(t_j), x^+(t_j)], and if we know an upper bound D on the rate at which x changes, then, for any given moment of time t, we can conclude that x(t) belongs to the interval [x^-(t_j) - D(t - t_j), x^+(t_j) + D(t - t_j)]. This interval changes linearly with time and is, therefore, called a linear interval function. When we process these intervals, we get expressions that are quadratic and of higher order with respect to time t. Such "quadratic" intervals are difficult to process, and it is therefore necessary to approximate them by linear ones. In this paper, we describe an algorithm that gives the optimal approximation of quadratic interval functions by linear ones.
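The linear interval bound stated in this abstract is simple to compute. A minimal Python sketch (names are illustrative, not from the paper, and this is the bound itself, not the paper's optimal linear approximation algorithm):

```python
def propagate_interval(x_lo, x_hi, D, t_j, t):
    """Bound x(t) given x(t_j) in [x_lo, x_hi] and a rate bound |dx/dt| <= D.

    The enclosure [x_lo - D*(t - t_j), x_hi + D*(t - t_j)] widens
    linearly with elapsed time, hence "linear interval function".
    """
    dt = t - t_j
    assert dt >= 0, "t must not precede the measurement time t_j"
    return (x_lo - D * dt, x_hi + D * dt)

# interval [2, 3] measured at t_j = 0, rate bound D = 0.5, queried at t = 4
lo, hi = propagate_interval(2.0, 3.0, 0.5, t_j=0.0, t=4.0)  # (0.0, 5.0)
```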

2. Function approximation using adaptive and overlapping intervals

SciTech Connect

Patil, R.B.

1995-05-01

A problem common to many disciplines is to approximate a function given only its values at various points in the input variable space. A method is proposed for approximating a function of several input variables and one output variable. The model takes the form of a weighted average of overlapping basis functions defined over intervals. The number of such basis functions and their parameters (widths and centers) are automatically determined from the given training data by a learning algorithm. The proposed algorithm can be seen as placing a nonuniform multidimensional grid with overlapping cells in the input domain. The non-uniformity and overlap of the cells are achieved by a learning algorithm that optimizes a given objective function. This approach is motivated by the fuzzy modeling approach and by learning algorithms used for clustering and classification in pattern recognition. The basics of why and how the approach works are given. A few examples of nonlinear regression and classification are modeled. The relationship between the proposed technique and radial basis neural networks, kernel regression, probabilistic neural networks, and fuzzy modeling is explained. Finally, advantages and disadvantages are discussed.
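The weighted-averaging scheme described here can be illustrated with a toy one-dimensional version. In the paper the centers and widths are learned from training data; in this sketch they are fixed by hand and all names are hypothetical:

```python
import math

def predict(x, centers, widths, weights):
    """Normalized weighted average of overlapping Gaussian-shaped
    basis functions (one cell per (center, width) pair)."""
    acts = [math.exp(-((x - c) / w) ** 2) for c, w in zip(centers, widths)]
    s = sum(acts)
    return sum(a * wt for a, wt in zip(acts, weights)) / s

# three overlapping cells covering [0, 2]; only the middle cell is "active"
y = predict(1.0, centers=[0.0, 1.0, 2.0], widths=[1.0, 1.0, 1.0],
            weights=[0.0, 1.0, 0.0])
```

The normalization by the summed activations is what gives the fuzzy-partition behavior: each prediction is a convex combination of the cell weights.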

3. A comparison of approximate interval estimators for the Bernoulli parameter

NASA Technical Reports Server (NTRS)

Leemis, Lawrence; Trivedi, Kishor S.

1993-01-01

The goal of this paper is to compare the accuracy of two approximate confidence interval estimators for the Bernoulli parameter p. The approximate confidence intervals are based on the normal and Poisson approximations to the binomial distribution. Charts are given to indicate which approximation is appropriate for certain sample sizes and point estimators.
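The two approximations being compared can be sketched in a few lines. The exact interval constructions evaluated in the paper may differ, so treat this as an assumed, simplified form: a Wald-type normal interval using Var = p(1-p)/n, and a Poisson-motivated interval using Var ~ p/n for rare events:

```python
from statistics import NormalDist

def bernoulli_cis(x, n, conf=0.95):
    """Two approximate confidence intervals for the Bernoulli parameter p,
    given x successes in n trials (illustrative sketch)."""
    z = NormalDist().inv_cdf(0.5 + conf / 2)
    p = x / n
    half_norm = z * (p * (1 - p) / n) ** 0.5  # normal (Wald) approximation
    half_pois = z * (p / n) ** 0.5            # Poisson approximation, small p
    return ((p - half_norm, p + half_norm), (p - half_pois, p + half_pois))

norm_ci, pois_ci = bernoulli_cis(5, 100)
```

Since p(1-p) < p for p > 0, the Poisson-based interval is always the wider of the two here, which is one reason chart-based guidance on when each applies is useful.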

ERIC Educational Resources Information Center

Lambert, Zarrel V.; And Others

1991-01-01

A method is presented that eliminates some interpretational limitations arising from assumptions implicit in the use of arbitrary rules of thumb to interpret exploratory factor analytic results. The bootstrap method is presented as a way of approximating sampling distributions of estimated factor loadings. Simulated datasets illustrate the…

5. A Comparison of Approximate Interval Estimators for the Bernoulli Parameter

DTIC Science & Technology

1993-12-01

The goal of this paper is to compare the accuracy of two approximate confidence interval estimators for the Bernoulli parameter p. The approximate...is appropriate for certain sample sizes and point estimators. Keywords: Confidence interval, Binomial distribution, Bernoulli distribution, Poisson distribution.

6. Convergence of the natural approximations of piecewise monotone interval maps.

PubMed

Haydn, Nicolai

2004-06-01

We consider piecewise monotone interval mappings which are topologically mixing and satisfy the Markov property. It has previously been shown that the invariant densities of the natural approximations converge exponentially fast in uniform pointwise topology to the invariant density of the given map provided its derivative is piecewise Lipschitz continuous. We provide an example of a map which is Lipschitz continuous and for which the densities converge in the bounded variation norm at a logarithmic rate. This shows that in general one cannot expect exponential convergence in the bounded variation norm. Here we prove that if the derivative of the interval map is Hölder continuous and its variation is well approximable (gamma-uniform variation for gamma > 0), then the densities converge exponentially fast in the norm.

7. Constructing Approximate Confidence Intervals for Parameters with Structural Equation Models

ERIC Educational Resources Information Center

Cheung, Mike W. -L.

2009-01-01

Confidence intervals (CIs) for parameters are usually constructed based on the estimated standard errors. These are known as Wald CIs. This article argues that likelihood-based CIs (CIs based on likelihood ratio statistics) are often preferred to Wald CIs. It shows how the likelihood-based CIs and the Wald CIs for many statistics and psychometric…

8. Energy conservation - A test for scattering approximations

NASA Technical Reports Server (NTRS)

Acquista, C.; Holland, A. C.

1980-01-01

The roles of the extinction theorem and energy conservation in obtaining the scattering and absorption cross sections for several light scattering approximations are explored. It is shown that the Rayleigh, Rayleigh-Gans, anomalous diffraction, geometrical optics, and Shifrin approximations all lead to reasonable values of the cross sections, while the modified Mie approximation does not. Further examination of the modified Mie approximation for the ensembles of nonspherical particles reveals additional problems with that method.

10. Confidence and coverage for Bland-Altman limits of agreement and their approximate confidence intervals.

PubMed

Carkeet, Andrew; Goh, Yee Teng

2016-09-01

Bland and Altman described approximate methods in 1986 and 1999 for calculating confidence limits for their 95% limits of agreement, approximations which assume large subject numbers. In this paper, these approximations are compared with exact confidence intervals calculated using two-sided tolerance intervals for a normal distribution. The approximations are compared in terms of the tolerance factors themselves, but also in terms of the exact confidence limits and of the exact limits-of-agreement coverage corresponding to the approximate confidence interval methods. Using similar methods, the 50th percentile of the tolerance interval is compared with the k values of 1.96 and 2, which Bland and Altman used to define the limits of agreement (i.e., the mean difference ± 1.96Sd and ± 2Sd). For the outer confidence intervals of the limits of agreement, Bland and Altman's approximations are too permissive for sample sizes <40 (1999 approximation) and <76 (1986 approximation). For the inner confidence limits the approximations are poorer, being permissive for sample sizes <490 (1986 approximation) and for all practical sample sizes (1999 approximation). Exact confidence intervals for 95% limits of agreement, based on two-sided tolerance factors, can be calculated easily from tables and should be used in preference to the approximate methods, especially for small sample sizes.
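A rough sketch of the quantities involved: the 95% limits of agreement d-bar ± 1.96s, and the approximate large-sample standard error of each limit, s*sqrt(1/n + 1.96^2/(2(n-1))). This follows the commonly stated form of Bland and Altman's approximation, and uses a normal quantile where a t quantile would be more accurate, so it is illustrative only:

```python
from statistics import NormalDist, mean, stdev

def limits_of_agreement(diffs, z=1.96):
    """Bland-Altman 95% limits of agreement for paired differences,
    with approximate confidence intervals for each limit (sketch)."""
    n = len(diffs)
    d, s = mean(diffs), stdev(diffs)
    lo, hi = d - z * s, d + z * s
    se = s * (1 / n + z * z / (2 * (n - 1))) ** 0.5  # large-sample SE of a limit
    zc = NormalDist().inv_cdf(0.975)
    return (lo, hi), ((lo - zc * se, lo + zc * se),
                      (hi - zc * se, hi + zc * se))
```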

11. A Computer Simulation Analysis of a Suggested Approximate Confidence Interval for System Maintainability.

DTIC Science & Technology

The paper presents an accuracy analysis of a suggested approximate confidence interval for system maintainability parameters. Technically, the...using the method of moments. The simulation has application to the classical confidence interval for mean time to repair of a series system, under the

12. A Novel Method of the Generalized Interval-Valued Fuzzy Rough Approximation Operators

PubMed Central

Xue, Tianyu; Xue, Zhan'ao; Cheng, Huiru; Liu, Jie; Zhu, Tailong

2014-01-01

Rough set theory is a suitable tool for dealing with the imprecision, uncertainty, incompleteness, and vagueness of knowledge. In this paper, new lower and upper approximation operators for generalized fuzzy rough sets are constructed, and their definitions are expanded to the interval-valued environment. Furthermore, the properties of this type of rough sets are analyzed. These operators are shown to be equivalent to the generalized interval fuzzy rough approximation operators introduced by Dubois, which are determined by any interval-valued fuzzy binary relation expressed in a generalized approximation space. Main properties of these operators are discussed under different interval-valued fuzzy binary relations, and illustrative examples are given to demonstrate the main features of the proposed operators. PMID:25162065

13. Approximate representations of random intervals for hybrid uncertainty quantification in engineering modeling

SciTech Connect

Joslyn, C.

2004-01-01

We review our approach to the representation and propagation of hybrid uncertainties through high-complexity models, based on quantities known as random intervals. These structures have a variety of mathematical descriptions, for example as interval-valued random variables, statistical collections of intervals, or Dempster-Shafer bodies of evidence on the Borel field. But methods which provide simpler, albeit approximate, representations of random intervals are highly desirable, including p-boxes and traces. Each random interval, through its cumulative belief and plausibility measure functions, generates a unique p-box whose constituent CDFs are all of those consistent with the random interval. In turn, each p-box generates an equivalence class of random intervals consistent with it. Then, each p-box necessarily generates a unique trace which stands as the fuzzy set representation of the p-box or random interval. In turn each trace generates an equivalence class of p-boxes. The heart of our approach is to try to understand the tradeoffs between error and simplicity introduced when p-boxes or traces are used to stand in for various random interval operations. For example, Joslyn has argued that for elicitation and representation tasks, traces can be the most appropriate structure, and has proposed a method for the generation of canonical random intervals from elicited traces. But alternatively, models built as algebraic equations of uncertainty-valued variables (in our case, random-interval-valued) propagate uncertainty through convolution operations on basic algebraic expressions, and while convolution operations are defined on all three structures, we have observed that the results of only some of these operations are preserved as one moves through these three levels of specificity. We report on the status and progress of this modeling approach concerning the relations between these mathematical structures within this overall framework.
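The p-box generated by a random interval's cumulative belief and plausibility measures can be sketched for the simplest case: a finite, equiprobable collection of intervals (an assumed toy setting, not the paper's general Dempster-Shafer formulation):

```python
def pbox(intervals, x):
    """Cumulative belief and plausibility at x for an equiprobable
    collection of intervals. Bel: fraction of intervals entirely <= x;
    Pl: fraction of intervals that intersect (-inf, x]. The pair
    (Bel, Pl) traced over x is the lower/upper CDF bound of the p-box."""
    n = len(intervals)
    bel = sum(1 for lo, hi in intervals if hi <= x) / n
    pl = sum(1 for lo, hi in intervals if lo <= x) / n
    return bel, pl

ivals = [(0, 2), (1, 3), (2, 4)]
bounds_at_2 = pbox(ivals, 2)  # (1/3, 1.0)
```

Every CDF lying between these two step functions is consistent with the random interval, which is exactly the equivalence-class picture the abstract describes.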

14. Energy Equation Approximation in Fluid Mechanics

NASA Technical Reports Server (NTRS)

Goldstein, Arthur W.

1959-01-01

There is some confusion in the literature of fluid mechanics in regard to the correct form of the energy equation for the study of the flow of nearly incompressible fluids. Several forms of the energy equation and their use are therefore discussed in this note.

15. Empirical prediction intervals improve energy forecasting.

PubMed

Kaack, Lynn H; Apt, Jay; Morgan, M Granger; McSharry, Patrick

2017-08-15

Hundreds of organizations and analysts use energy projections, such as those contained in the US Energy Information Administration (EIA)'s Annual Energy Outlook (AEO), for investment and policy decisions. Retrospective analyses of past AEO projections have shown that observed values can differ from the projection by several hundred percent, and thus a thorough treatment of uncertainty is essential. We evaluate the out-of-sample forecasting performance of several empirical density forecasting methods, using the continuous ranked probability score (CRPS). The analysis confirms that a Gaussian density, estimated on past forecasting errors, gives comparatively accurate uncertainty estimates over a variety of energy quantities in the AEO, in particular outperforming scenario projections provided in the AEO. We report probabilistic uncertainties for 18 core quantities of the AEO 2016 projections. Our work frames how to produce, evaluate, and rank probabilistic forecasts in this setting. We propose a log transformation of forecast errors for price projections and a modified nonparametric empirical density forecasting method. Our findings give guidance on how to evaluate and communicate uncertainty in future energy outlooks.
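The CRPS used for the evaluation has a well-known closed form for a Gaussian density forecast, which makes this kind of scoring easy to reproduce in miniature (standard formula, per Gneiting and Raftery; lower scores are better):

```python
import math
from statistics import NormalDist

def crps_gaussian(x, mu, sigma):
    """Continuous ranked probability score of the forecast N(mu, sigma^2)
    against the realized value x, via the closed-form expression
    sigma * [z(2*Phi(z) - 1) + 2*phi(z) - 1/sqrt(pi)], z = (x - mu)/sigma."""
    nd = NormalDist()
    z = (x - mu) / sigma
    return sigma * (z * (2 * nd.cdf(z) - 1) + 2 * nd.pdf(z)
                    - 1 / math.sqrt(math.pi))
```

The score rewards both calibration and sharpness: for a realized value at the forecast mean, a tighter sigma scores better, while a badly missed mean is penalized roughly linearly.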

16. Non-Gaussian distributions of melodic intervals in music: The Lévy-stable approximation

Niklasson, Gunnar A.; Niklasson, Maria H.

2015-11-01

The analysis of structural patterns in music is of interest in order to increase our fundamental understanding of music, as well as for devising algorithms for computer-generated music, so called algorithmic composition. Musical melodies can be analyzed in terms of a “music walk” between the pitches of successive tones in a notescript, in analogy with the “random walk” model commonly used in physics. We find that the distribution of melodic intervals between tones can be approximated with a Lévy-stable distribution. Since music also exibits self-affine scaling, we propose that the “music walk” should be modelled as a Lévy motion. We find that the Lévy motion model captures basic structural patterns in classical as well as in folk music.

17. Analysis of accuracy of approximate, simultaneous, nonlinear confidence intervals on hydraulic heads in analytical and numerical test cases

USGS Publications Warehouse

Hill, M.C.

1989-01-01

Inaccuracies in parameter values, parameterization, stresses, and boundary conditions of analytical solutions and numerical models of groundwater flow produce errors in simulated hydraulic heads. These errors can be quantified in terms of approximate, simultaneous, nonlinear confidence intervals presented in the literature. Approximate confidence intervals can be applied in both error and sensitivity analysis and can be used prior to calibration or when calibration was accomplished by trial and error. The method is expanded for use in numerical problems, and the accuracy of the approximate intervals is evaluated using Monte Carlo runs. Four test cases are reported. -from Author
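Evaluating an approximate interval with Monte Carlo runs amounts to estimating its actual coverage. A toy version for a normal mean with known variance (not the groundwater-flow setting of the paper) looks like:

```python
import random
from statistics import NormalDist

def coverage(true_mean, n, trials=2000, conf=0.95, seed=1):
    """Fraction of simulated datasets whose nominal-conf interval for
    the mean contains the true mean (unit known variance; toy sketch)."""
    rng = random.Random(seed)
    z = NormalDist().inv_cdf(0.5 + conf / 2)
    hits = 0
    for _ in range(trials):
        xs = [rng.gauss(true_mean, 1.0) for _ in range(n)]
        m = sum(xs) / n
        half = z / n ** 0.5
        hits += (m - half <= true_mean <= m + half)
    return hits / trials

cov = coverage(0.0, 30)  # should sit near the nominal 0.95
```

If an approximate interval's estimated coverage falls well below the nominal level, the approximation is too permissive for that sample size, which is the kind of accuracy statement the test cases in the paper make.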

18. Accuracy in Parameter Estimation for the Root Mean Square Error of Approximation: Sample Size Planning for Narrow Confidence Intervals

ERIC Educational Resources Information Center

Kelley, Ken; Lai, Keke

2011-01-01

The root mean square error of approximation (RMSEA) is one of the most widely reported measures of misfit/fit in applications of structural equation modeling. When the RMSEA is of interest, so too should be the accompanying confidence interval. A narrow confidence interval reveals that the plausible parameter values are confined to a relatively…

20. Energy flow: image correspondence approximation for motion analysis

Wang, Liangliang; Li, Ruifeng; Fang, Yajun

2016-04-01

We propose a correspondence approximation approach between temporally adjacent frames for motion analysis. First, an energy map is established to represent image spatial features on multiple scales using Gaussian convolution. On this basis, the energy flow at each layer is estimated using Gauss-Seidel iteration according to the energy invariance constraint. More specifically, at the core of the energy invariance constraint is an "energy conservation law" assuming that the spatial energy distribution of an image does not change significantly with time. Finally, the energy flow field at the different layers is reconstructed by considering different smoothness degrees. Due to the multiresolution origin and energy-based implementation, our algorithm is able to quickly address correspondence searching issues in spite of background noise or illumination variation. We apply our correspondence approximation method to motion analysis, and experimental results demonstrate its applicability.
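The multi-scale energy map rests on repeated Gaussian-like smoothing. A toy 1-D binomial-kernel pass conveys the smallest piece of the idea (the paper operates on 2-D images, so this is only an illustration, and the kernel choice is an assumption):

```python
def smooth(row, k=(0.25, 0.5, 0.25)):
    """One smoothing pass with a binomial kernel (a discrete Gaussian
    approximation), clamping at the borders; repeated passes build
    successively coarser layers of a multi-scale representation."""
    n = len(row)
    out = []
    for i in range(n):
        acc = 0.0
        for j, w in enumerate(k, start=-1):
            acc += w * row[min(max(i + j, 0), n - 1)]
        out.append(acc)
    return out
```

The kernel sums to one, so total "energy" is preserved across layers, in the spirit of the energy invariance constraint.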

1. Potential energy changes and the Boussinesq approximation in stratified fluids

Seshadri, K.; Rottman, J. W.; Nomura, K. K.; Stretch, D. D.

2002-11-01

The evolution of the potential energy of an ideal binary fluid mixture that is initially stably stratified is re-examined. The initial stable stratification evolves to a state of uniform density under the influence of molecular diffusion. We derive the appropriate governing equations using either a mass-averaged or a volume-averaged definition of velocity, and develop an energy budget describing the changes between kinetic, potential and internal energies without invoking the Boussinesq approximation. We compare the energy evolution equations with those based on the commonly used Boussinesq approximation and clarify some subtleties associated with the exchanges between the different forms of energy in this problem. In particular, we show that the mass-averaged velocity is nonzero and that all of the increase in potential energy comes from the initial kinetic energy.

2. Approximate Interval Estimation Methods for the Reliability of Systems Using Discrete Component Data

DTIC Science & Technology

1990-09-01

Three lower confidence interval estimation procedures for system reliability of coherent systems with cyclic components are developed and their...components. The combined procedure may yield a reasonably accurate lower confidence interval procedure for the reliability of coherent systems with mixtures of continuous and cyclic components.

3. Analytic saddlepoint approximation for ionization energy loss distributions

Sjue, S. K. L.; George, R. N.; Mathews, D. G.

2017-09-01

We present a saddlepoint approximation for ionization energy loss distributions, valid for arbitrary relativistic velocities of the incident particle 0 < v/c < 1, provided that ionizing collisions are still the dominant energy loss mechanism. We derive a closed form solution closely related to Moyal's distribution. This distribution is intended for use in simulations with relatively low computational overhead. The approximation generally reproduces the Vavilov most probable energy loss and full width at half maximum to better than 1% and 10%, respectively, with significantly better agreement as Vavilov's κ approaches 1.
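Moyal's distribution, to which the derived closed form is said to be closely related, is itself one line of code; lam here is the scaled deviation from the most probable energy loss (standard textbook form, not the paper's saddlepoint result):

```python
import math

def moyal_pdf(lam):
    """Moyal density f(lam) = exp(-(lam + exp(-lam))/2) / sqrt(2*pi),
    a classic closed-form approximation to Landau-type ionization
    energy-loss distributions; the mode sits at lam = 0."""
    return math.exp(-(lam + math.exp(-lam)) / 2) / math.sqrt(2 * math.pi)
```

The asymmetric exp(-lam) term inside the exponent produces the characteristic long high-loss tail.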

4. Correlation Energies from the Two-Component Random Phase Approximation.

PubMed

Kühn, Michael

2014-02-11

The correlation energy within the two-component random phase approximation accounting for spin-orbit effects is derived. The resulting plasmon equation is rewritten-analogously to the scalar relativistic case-in terms of the trace of two Hermitian matrices for (Kramers-restricted) closed-shell systems and then represented as an integral over imaginary frequency using the resolution of the identity approximation. The final expression is implemented in the TURBOMOLE program suite. The code is applied to the computation of equilibrium distances and vibrational frequencies of heavy diatomic molecules. The efficiency is demonstrated by calculation of the relative energies of the Oh-, D4h-, and C5v-symmetric isomers of Pb6. Results within the random phase approximation are obtained based on two-component Kohn-Sham reference-state calculations, using effective-core potentials. These values are finally compared to other two-component and scalar relativistic methods, as well as experimental data.

5. From free energy to expected energy: Improving energy-based value function approximation in reinforcement learning.

PubMed

Elfwing, Stefan; Uchibe, Eiji; Doya, Kenji

2016-12-01

Free-energy based reinforcement learning (FERL) was proposed for learning in high-dimensional state and action spaces. However, the FERL method really only works well with binary, or close to binary, state input, where the number of active states is smaller than the number of non-active states. In the FERL method, the value function is approximated by the negative free energy of a restricted Boltzmann machine (RBM). In our earlier study, we demonstrated that the performance and the robustness of the FERL method can be improved by scaling the free energy by a constant related to the size of the network. In this study, we propose that RBM function approximation can be further improved by approximating the value function by the negative expected energy (EERL), instead of the negative free energy, as well as by being able to handle continuous state input. We validate our proposed method by demonstrating that EERL: (1) outperforms FERL, as well as standard neural network and linear function approximation, for three versions of a gridworld task with high-dimensional image state input; (2) achieves new state-of-the-art results in stochastic SZ-Tetris in both model-free and model-based learning settings; and (3) significantly outperforms FERL and standard neural network function approximation for a robot navigation task with raw and noisy RGB images as state input and a large number of actions.
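The contrast between the two value estimates can be sketched directly from the RBM energy: the negative free energy applies a softplus to each hidden unit's pre-activation, while the negative expected energy weights the pre-activation by its sigmoid. A minimal pure-Python sketch with hypothetical weights (not the paper's trained networks):

```python
import math

def sigmoid(x):
    return 1 / (1 + math.exp(-x))

def neg_free_energy(s, b, c, W):
    """-F(s) for an RBM with visible state s, visible bias b, hidden
    biases c, and weight rows W: the FERL value estimate."""
    total = sum(bi * si for bi, si in zip(b, s))
    for cj, wj in zip(c, W):
        pre = cj + sum(wij * si for wij, si in zip(wj, s))
        total += math.log(1 + math.exp(pre))  # softplus over hidden units
    return total

def neg_expected_energy(s, b, c, W):
    """-<E(s,h)> under p(h|s): the EERL value estimate (sketch)."""
    total = sum(bi * si for bi, si in zip(b, s))
    for cj, wj in zip(c, W):
        pre = cj + sum(wij * si for wij, si in zip(wj, s))
        total += sigmoid(pre) * pre  # expected hidden activation times input
    return total
```

Since F = \<E\> minus the entropy of p(h|s), the negative free energy always exceeds the negative expected energy by exactly that hidden-unit entropy.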

6. Restricted second random phase approximations and Tamm-Dancoff approximations for electronic excitation energy calculations

Peng, Degao; Yang, Yang; Zhang, Peng; Yang, Weitao

2014-12-01

In this article, we develop systematically second random phase approximations (RPA) and Tamm-Dancoff approximations (TDA) of particle-hole and particle-particle channels for calculating molecular excitation energies. The second particle-hole RPA/TDA can capture double excitations missed by the particle-hole RPA/TDA and time-dependent density-functional theory (TDDFT), while the second particle-particle RPA/TDA recovers non-highest-occupied-molecular-orbital excitations missed by the particle-particle RPA/TDA. With proper orbital restrictions, these restricted second RPAs and TDAs have a formal scaling of only O(N^4). The restricted versions of second RPAs and TDAs are tested with various small molecules to show some positive results. Data suggest that the restricted second particle-hole TDA (r2ph-TDA) has the best overall performance with a correlation coefficient similar to TDDFT, but with a larger negative bias. The negative bias of the r2ph-TDA may be induced by the unaccounted ground state correlation energy to be investigated further. Overall, the r2ph-TDA is recommended to study systems with both single and some low-lying double excitations with a moderate accuracy. Some expressions on excited state property evaluations, such as ⟨Ŝ²⟩, are also developed and tested.

7. Restricted second random phase approximations and Tamm-Dancoff approximations for electronic excitation energy calculations.

PubMed

Peng, Degao; Yang, Yang; Zhang, Peng; Yang, Weitao

2014-12-07

In this article, we develop systematically second random phase approximations (RPA) and Tamm-Dancoff approximations (TDA) of particle-hole and particle-particle channels for calculating molecular excitation energies. The second particle-hole RPA/TDA can capture double excitations missed by the particle-hole RPA/TDA and time-dependent density-functional theory (TDDFT), while the second particle-particle RPA/TDA recovers non-highest-occupied-molecular-orbital excitations missed by the particle-particle RPA/TDA. With proper orbital restrictions, these restricted second RPAs and TDAs have a formal scaling of only O(N^4). The restricted versions of second RPAs and TDAs are tested with various small molecules to show some positive results. Data suggest that the restricted second particle-hole TDA (r2ph-TDA) has the best overall performance with a correlation coefficient similar to TDDFT, but with a larger negative bias. The negative bias of the r2ph-TDA may be induced by the unaccounted ground state correlation energy to be investigated further. Overall, the r2ph-TDA is recommended to study systems with both single and some low-lying double excitations with a moderate accuracy. Some expressions on excited state property evaluations, such as ⟨Ŝ²⟩, are also developed and tested.

8. Restricted second random phase approximations and Tamm-Dancoff approximations for electronic excitation energy calculations

SciTech Connect

Peng, Degao; Yang, Yang; Zhang, Peng; Yang, Weitao

2014-12-07

In this article, we develop systematically second random phase approximations (RPA) and Tamm-Dancoff approximations (TDA) of particle-hole and particle-particle channels for calculating molecular excitation energies. The second particle-hole RPA/TDA can capture double excitations missed by the particle-hole RPA/TDA and time-dependent density-functional theory (TDDFT), while the second particle-particle RPA/TDA recovers non-highest-occupied-molecular-orbital excitations missed by the particle-particle RPA/TDA. With proper orbital restrictions, these restricted second RPAs and TDAs have a formal scaling of only O(N^4). The restricted versions of second RPAs and TDAs are tested with various small molecules to show some positive results. Data suggest that the restricted second particle-hole TDA (r2ph-TDA) has the best overall performance with a correlation coefficient similar to TDDFT, but with a larger negative bias. The negative bias of the r2ph-TDA may be induced by the unaccounted ground state correlation energy to be investigated further. Overall, the r2ph-TDA is recommended to study systems with both single and some low-lying double excitations with a moderate accuracy. Some expressions on excited state property evaluations, such as ⟨Ŝ²⟩, are also developed and tested.

9. Energy equipartitioning in the classical time-dependent Hartree approximation

Straub, John E.; Karplus, Martin

1991-05-01

In the classical time-dependent Hartree approximation (TDH), the dynamics of a single molecule is approximated by that of a "field" (each field being N "copies" of the molecule which are transparent to one another while interacting with the system via a scaled force). It is shown that when some molecules are represented by a field of copies, while other molecules are represented normally, the average kinetic energy of the system increases linearly with the number of copies and diverges in the limit of large N. Nevertheless, the TDH method with appropriate energy scaling can serve as a useful means of enhancing the configurational sampling for problems involving coupled systems with disparate numbers of degrees of freedom.

10. Approximate scaling properties of RNA free energy landscapes

NASA Technical Reports Server (NTRS)

1996-01-01

RNA free energy landscapes are analysed by means of "time-series" that are obtained from random walks restricted to excursion sets. The power spectra, the scaling of the jump size distribution, and the scaling of the curve length measured with different yardstick lengths are used to describe the structure of these "time series". Although they are stationary by construction, we find that their local behavior is consistent with both AR(1) and self-affine processes. Random walks confined to excursion sets (i.e., with the restriction that the fitness value exceeds a certain threshold at each step) exhibit essentially the same statistics as free random walks. We find that an AR(1) time series is in general approximately self-affine on timescales up to approximately the correlation length. We present an empirical relation between the correlation parameter rho of the AR(1) model and the exponents characterizing self-affinity.
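The AR(1) model referenced here is easy to simulate and check against its lag-1 autocorrelation, which is how one would reproduce the comparison in miniature (toy sketch; the paper's "time series" come from walks on RNA landscapes, not from this generator):

```python
import random

def ar1_series(rho, n, seed=0):
    """Simulate x_t = rho * x_{t-1} + eps_t with standard normal noise,
    the AR(1) process the landscape series are compared against."""
    rng = random.Random(seed)
    x, out = 0.0, []
    for _ in range(n):
        x = rho * x + rng.gauss(0.0, 1.0)
        out.append(x)
    return out

def lag1_autocorr(xs):
    """Sample lag-1 autocorrelation; for AR(1) it estimates rho."""
    m = sum(xs) / len(xs)
    num = sum((a - m) * (b - m) for a, b in zip(xs, xs[1:]))
    den = sum((a - m) ** 2 for a in xs)
    return num / den
```

For long series the estimated lag-1 autocorrelation recovers rho, so the correlation length (roughly -1/ln(rho) steps) sets the timescale below which the process can look self-affine.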

11. Flux tube spectra from approximate integrability at low energies

Dubovsky, S.; Flauger, R.; Gorbenko, V.

2015-03-01

We provide a detailed introduction to a method we recently proposed for calculating the spectrum of excitations of effective strings such as QCD flux tubes. The method relies on the approximate integrability of the low-energy effective theory describing the flux tube excitations and is based on the thermodynamic Bethe ansatz. The approximate integrability is a consequence of the Lorentz symmetry of QCD. For excited states, the convergence of the thermodynamic Bethe ansatz technique is significantly better than that of the traditional perturbative approach. We apply the new technique to the lattice spectra for fundamental flux tubes in gluodynamics in D = 3 + 1 and D = 2 + 1, and to k-strings in gluodynamics in D = 2 + 1. We identify a massive pseudoscalar resonance on the worldsheet of the confining strings in SU(3) gluodynamics in D = 3 + 1, and massive scalar resonances on the worldsheet of k = 2, 3 strings in SU(6) gluodynamics in D = 2 + 1.

12. Approximate confidence intervals for moment-based estimators of the between-study variance in random effects meta-analysis.

PubMed

Jackson, Dan; Bowden, Jack; Baker, Rose

2015-12-01

Moment-based estimators of the between-study variance are very popular when performing random effects meta-analyses. This type of estimation has many advantages including computational and conceptual simplicity. Furthermore, by using these estimators in large samples, valid meta-analyses can be performed without the assumption that the treatment effects follow a normal distribution. Recently proposed moment-based confidence intervals for the between-study variance are exact under the random effects model but are quite elaborate. Here, we present a much simpler method for calculating approximate confidence intervals of this type. This method uses variance-stabilising transformations as its basis and can be used for a very wide variety of moment-based estimators in both the random effects meta-analysis and meta-regression models.
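The best-known member of the moment-based estimator class discussed here is the DerSimonian-Laird estimator of the between-study variance tau^2. A compact sketch of the estimator only (the paper's confidence interval construction via variance-stabilising transformations is not reproduced):

```python
def dersimonian_laird_tau2(y, v):
    """Moment-based estimate of tau^2 from study effects y and their
    within-study variances v: tau^2 = max(0, (Q - (k-1)) / c), where
    Q is Cochran's heterogeneity statistic under inverse-variance
    weights and c = sum(w) - sum(w^2)/sum(w)."""
    w = [1 / vi for vi in v]
    sw = sum(w)
    ybar = sum(wi * yi for wi, yi in zip(w, y)) / sw
    q = sum(wi * (yi - ybar) ** 2 for wi, yi in zip(w, y))
    k = len(y)
    c = sw - sum(wi * wi for wi in w) / sw
    return max(0.0, (q - (k - 1)) / c)
```

Truncation at zero is part of what makes exact interval construction for such estimators elaborate, which motivates the simpler approximate intervals the paper proposes.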

13. Energy loss and (de)coherence effects beyond eikonal approximation

Apolinário, Liliana; Armesto, Néstor; Milhano, Guilherme; Salgado, Carlos A.

2014-11-01

The parton branching process is known to be modified in the presence of a medium. Colour decoherence processes are known to determine the process of energy loss when the density of the medium is large enough to break the correlations between partons emitted from the same parent. In order to improve existing calculations that consider eikonal trajectories for both the emitter and the hardest emitted parton, we provide in this work the calculation of all finite energy corrections for the gluon radiation off a quark in a QCD medium that exist in the small angle approximation and for static scattering centres. Using the path integral formalism, all particles are allowed to undergo Brownian motion in the transverse plane and the offspring is allowed to carry an arbitrary fraction of the initial energy. The result is a general expression that contains both coherence and decoherence regimes that are controlled by the density of the medium and by the amount of broadening that each parton acquires independently.

14. Interval training intensity affects energy intake compensation in obese men.

PubMed

Alkahtani, Shaea A; Byrne, Nuala M; Hills, Andrew P; King, Neil A

2014-12-01

Compensatory responses may attenuate the effectiveness of exercise training in weight management. The aim of this study was to compare the effect of moderate- and high-intensity interval training on eating behavior compensation. Using a crossover design, 10 overweight and obese men participated in 4-week moderate (MIIT) and high (HIIT) intensity interval training. MIIT consisted of 5-min cycling stages at ±20% of mechanical work at 45% VO2peak, and HIIT consisted of alternating 30-s work at 90% VO2peak and 30-s rest, for 30 to 45 min. Assessments included a constant-load exercise test at 45% VO2peak for 45 min followed by 60-min recovery. Appetite sensations were measured during the exercise test using a Visual Analog Scale. Food preferences (liking and wanting) were assessed using a computer-based paradigm employing 20 photographic food stimuli varying along two dimensions: fat (high or low) and taste (sweet or nonsweet). An ad libitum test meal was provided after the constant-load exercise test. Exercise-induced hunger and desire to eat decreased after HIIT, and the difference between MIIT and HIIT in desire to eat approached significance (p = .07). Exercise-induced liking for high-fat nonsweet food tended to increase after MIIT and decrease after HIIT (p = .09). Fat intake decreased by 16% after HIIT and increased by 38% after MIIT, with the difference between MIIT and HIIT approaching significance (p = .07). This study provides evidence that energy intake compensation differs between MIIT and HIIT.

15. Approximate theory of the electromagnetic energy of a solenoid in special relativity

Prastyaningrum, I.; Kartikaningsih, S.

2017-01-01

A solenoid is a device that is often used in electronic equipment. When a solenoid carries a current, it produces a magnetic field. In our analysis, we focus on the electromagnetic energy of the solenoid geometry. We propose a theoretical approach within special relativity. Our approach begins with the Biot-Savart law and the Lorentz force: the special-relativistic treatment can be derived from the Biot-Savart law, and the energy can be derived from the Lorentz force by first determining the momentum equation. We choose the solenoid geometry with the goal that, in the future, the results can be used to improve the efficiency of electrical motors.

16. Approximate Confidence Intervals for Standardized Effect Sizes in the Two-Independent and Two-Dependent Samples Design

ERIC Educational Resources Information Center

Viechtbauer, Wolfgang

2007-01-01

Standardized effect sizes and confidence intervals thereof are extremely useful devices for comparing results across different studies using scales with incommensurable units. However, exact confidence intervals for standardized effect sizes can usually be obtained only via iterative estimation procedures. The present article summarizes several…

17. Excitation energies from extended random phase approximation employed with approximate one- and two-electron reduced density matrices

Chatterjee, Koushik; Pernal, Katarzyna

2012-11-01

Starting from Rowe's equation of motion we derive extended random phase approximation (ERPA) equations for excitation energies. The ERPA matrix elements are expressed in terms of the correlated ground state one- and two-electron reduced density matrices, 1- and 2-RDM, respectively. Three ways of obtaining approximate 2-RDM are considered: linearization of the ERPA equations, obtaining 2-RDM from density matrix functionals, and employing 2-RDM corresponding to an antisymmetrized product of strongly orthogonal geminals (APSG) ansatz. Applying the ERPA equations with the exact 2-RDM to a hydrogen molecule reveals that the resulting ^1Σ_g^+ excitation energies are not exact. A correction to the ERPA excitation operator involving some double excitations is proposed, leading to the ERPA2 approach, which employs the APSG one- and two-electron reduced density matrices. For two-electron systems ERPA2 satisfies a consistency condition and yields exact singlet excitations. It is shown that 2-RDM corresponding to the APSG theory employed in the ERPA2 equations yields excellent singlet excitation energies for the Be and LiH systems, and for the N2 molecule the quality of the potential energy curves is at the coupled cluster singles and doubles level. ERPA2 nearly satisfies the consistency condition for small molecules, which partially explains its good performance.

18. Surface Segregation Energies of BCC Binaries from Ab Initio and Quantum Approximate Calculations

NASA Technical Reports Server (NTRS)

Good, Brian S.

2003-01-01

We compare dilute-limit segregation energies for selected BCC transition metal binaries computed using ab initio and quantum approximate energy methods. Ab initio calculations are carried out using the CASTEP plane-wave pseudopotential computer code, while quantum approximate results are computed using the Bozzolo-Ferrante-Smith (BFS) method with the most recent parameterization. Quantum approximate segregation energies are computed with and without atomistic relaxation. The ab initio calculations are performed without relaxation for the most part, but predicted relaxations from quantum approximate calculations are used in selected cases to compute approximate relaxed ab initio segregation energies. Results are discussed within the context of segregation models driven by strain and bond-breaking effects. We compare our results with other quantum approximate and ab initio theoretical work, and with available experimental results.

19. Long GRB with Additional High Energy Maxima after the End of the Low Energy T90 Intervals

Irene, Arkhangelskaja; Alexander, Zenin; Dmitry, Kirin; Elena, Voevodina

2013-01-01

To date, GRB high-energy γ-emission has been observed mostly by detectors onboard the Fermi and AGILE satellites. In most GRBs, the high-energy γ-emission is registered somewhat later than the low-energy trigger and lasts several hundred seconds, but its maxima lie within the low-energy t90 intervals for both short and long bursts. The temporal profiles of GRB090323, GRB090328 and GRB090626, however, show additional maxima after the low-energy t90 intervals have finished. Analysis of these bursts' temporal profiles has shown that such maxima were preceded by faint peaks in the low-energy bands close to the ends of the low-energy t90 intervals. Moreover, according to preliminary analysis, the behavior of the low-energy spectral index β of these events differs from that of usual GRBs. We suppose that these GRBs could be separated into a distinct GRB type, whose properties are discussed in the present article.

20. Density-functional correction of random-phase-approximation correlation with results for jellium surface energies

Kurth, Stefan; Perdew, John P.

1999-04-01

Since long-range electron-electron correlation is treated properly in the random phase approximation (RPA), we define short-range correlation as the correction to the RPA. The effects of short-range correlation are investigated here in the local spin density (LSD) approximation and the generalized gradient approximation (GGA). Results are presented for atoms, molecules, and jellium surfaces. It is found that (1) short-range correlation energies are less sensitive to the inclusion of density gradients than are full correlation energies, and (2) short-range correlation makes a surprisingly small contribution to surface and molecular atomization energies. In order to improve the accuracy of electronic-structure calculations, we therefore combine a GGA treatment of short-range correlation with a full RPA treatment of the exchange-correlation energy. This approach leads to jellium surface energies close to those of the LSD approximation for exchange and correlation together (but not for each separately).

1. Analytic energy-level densities of separable harmonic oscillators including approximate hindered rotor corrections

Döntgen, M.

2016-09-01

Energy-level densities are key for obtaining various chemical properties. In chemical kinetics, energy-level densities are used to predict thermochemistry and microscopic reaction rates. Here, an analytic energy-level density formulation is derived using inverse Laplace transformation of harmonic oscillator partition functions. Anharmonic contributions to the energy-level density are considered approximately using a literature model for the transition from harmonic to free motions. The present analytic energy-level density formulation for rigid rotor-harmonic oscillator systems is validated against the well-studied CO + ȮH system. The approximate hindered rotor energy-level density corrections are validated against the well-studied H2O2 system. The presented analytic energy-level density formulation gives a basis for developing novel numerical simulation schemes for chemical processes.
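Level densities of separable harmonic oscillators can also be obtained by direct count rather than analytically; as a point of contrast with the inverse-Laplace formulation above, here is a sketch of the standard Beyer-Swinehart direct-count algorithm on an integer energy grid (frequencies and grain size are illustrative, not from the paper).

```python
# Beyer-Swinehart direct count of harmonic-oscillator states on an integer
# energy grid (sketch; the paper instead derives an *analytic* level density
# via inverse Laplace transformation of the partition function).

def beyer_swinehart(freqs, emax):
    """Number of states at each grained energy 0..emax for separable
    harmonic oscillators with integer grained frequencies `freqs`."""
    t = [0] * (emax + 1)
    t[0] = 1                       # the ground state
    for f in freqs:
        for e in range(f, emax + 1):
            t[e] += t[e - f]       # add one more quantum of this mode
    return t

def sum_of_states(freqs, emax):
    """Total number of states with energy <= emax."""
    return sum(beyer_swinehart(freqs, emax))

if __name__ == "__main__":
    # One oscillator of frequency 1: exactly emax + 1 states up to emax
    print(sum_of_states([1], 5))     # 6
    # Two oscillators (1, 2): hand count of n1 + 2*n2 <= 5 gives 12 states
    print(sum_of_states([1, 2], 5))  # 12
```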

2. Assessment of Tuning Methods for Enforcing Approximate Energy Linearity in Range-Separated Hybrid Functionals.

PubMed

Gledhill, Jonathan D; Peach, Michael J G; Tozer, David J

2013-10-08

A range of tuning methods, for enforcing approximate energy linearity through a system-by-system optimization of a range-separated hybrid functional, are assessed. For a series of atoms, the accuracy of the frontier orbital energies, ionization potentials, electron affinities, and orbital energy gaps is quantified, and particular attention is paid to the extent to which approximate energy linearity is actually achieved. The tuning methods can yield significantly improved orbital energies and orbital energy gaps, compared to those from conventional functionals. For systems with integer M electrons, optimal results are obtained using a tuning norm based on the highest occupied orbital energy of the M and M + 1 electron systems, with deviations of just 0.1-0.2 eV in these quantities, compared to exact values. However, detailed examination for the carbon atom illustrates a subtle cancellation between errors arising from nonlinearity and errors in the computed ionization potentials and electron affinities used in the tuning.

3. The performance of density functional approximations for the structures and relative energies of minimum energy crossing points

Abate, Bayileyegn A.; Peralta, Juan E.

2013-12-01

The structural parameters and relative energies of the minimum-energy crossing points (MECPs) of eight small molecules are calculated using five different representative density functional theory approximations as well as MP2, MP4, and CCSD(T) as a reference. Compared to high-level wavefunction methods, the main structural features of the MECPs of the systems included in this Letter are reproduced reasonably well by density functional approximations, in agreement with previous works. Our results show that when high-level wavefunction methods are computationally prohibitive, density functional approximations offer a good alternative for locating and characterizing the MECP in spin-forbidden chemical reactions.

4. Ermod: fast and versatile computation software for solvation free energy with approximate theory of solutions.

PubMed

2014-08-05

ERmod is a software package to efficiently and approximately compute the solvation free energy using the method of energy representation. Molecular simulation is to be conducted for two condensed-phase systems: the solution of interest, and the reference solvent with test-particle insertion of the solute. The subprogram ermod in ERmod then provides a set of energy distribution functions from the simulation trajectories, and another subprogram slvfe determines the solvation free energy from the distribution functions through an approximate functional. This article describes the design and implementation of ERmod and illustrates its performance in solvent water for two organic solutes and two protein solutes. Notably, the free-energy computation with ERmod is not restricted to solvation in a homogeneous medium such as a fluid or polymer; it can also treat binding into weakly ordered systems with nano-inhomogeneity, such as micelles and lipid membranes. ERmod is available on the web at http://sourceforge.net/projects/ermod.

5. Förster Resonance Energy Transfer imaging in vivo with approximated Radiative Transfer Equation

PubMed Central

Soloviev, Vadim Y.; McGinty, James; Stuckey, Daniel W.; Laine, Romain; Wylezinska-Arridge, Marzena; Wells, Dominic J.; Sardini, Alessandro; Hajnal, Joseph V.; French, Paul M.W.; Arridge, Simon R.

2012-01-01

We describe a new light transport model that we have applied to 3-D image reconstruction of in vivo fluorescence lifetime tomography data in order to read out Förster Resonance Energy Transfer in mice. The model is an approximation to the Radiative Transfer Equation and combines light diffusion and ray optics. This approximation is well adapted to wide-field time-gated intensity-based data acquisition. Reconstructed image data are presented and compared with results obtained by using the Telegraph Equation approximation. The new approach provides improved recovery of absorption and scattering parameters while returning similar values for the fluorescence parameters. PMID:22193187

6. Numerical integration for ab initio many-electron self energy calculations within the GW approximation

SciTech Connect

Liu, Fang; Lin, Lin; Vigil-Fowler, Derek; Lischner, Johannes; Kemper, Alexander F.; Sharifzadeh, Sahar; Jornada, Felipe H. da; Deslippe, Jack; Yang, Chao; and others

2015-04-01

We present a numerical integration scheme for evaluating the convolution of a Green's function with a screened Coulomb potential on the real axis in the GW approximation of the self energy. Our scheme first takes the zero-broadening limit in the Green's function, replaces the numerator of the integrand with a piecewise polynomial approximation, and performs principal value integration on subintervals analytically. We give the error bound of our numerical integration scheme and show by numerical examples that it is more reliable and accurate than standard quadrature rules such as the composite trapezoidal rule. We also discuss the benefit of using different self energy expressions to perform the numerical convolution at different frequencies.
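The idea of the scheme can be shown in miniature: approximate the numerator by a piecewise polynomial and integrate against the singular kernel analytically on each subinterval, taking the principal value on the cell containing the pole. The sketch below assumes a piecewise-linear numerator and is our own toy version, not the authors' scheme or its error bound.

```python
import math

# Toy principal-value quadrature in the spirit of the abstract: replace the
# numerator f by a piecewise-linear interpolant and integrate f(x)/(x - c)
# analytically on each subinterval (sketch only; the paper's
# piecewise-polynomial scheme is more elaborate).

def pv_integral(f, a, b, c, n=2001):
    """Principal value of int_a^b f(x)/(x - c) dx for a < c < b.

    `n` should be chosen so that c does not coincide with a grid node.
    """
    xs = [a + (b - a) * i / n for i in range(n + 1)]
    total = 0.0
    for x0, x1 in zip(xs, xs[1:]):
        f0, f1 = f(x0), f(x1)
        s = (f1 - f0) / (x1 - x0)      # slope of the linear interpolant
        fc = f0 + s * (c - x0)         # interpolant evaluated at x = c
        if x0 < c < x1:                # singular cell: principal-value log
            log_term = math.log((x1 - c) / (c - x0))
        else:                          # regular cell: ordinary log term
            log_term = math.log(abs(x1 - c) / abs(x0 - c))
        total += s * (x1 - x0) + fc * log_term
    return total

if __name__ == "__main__":
    # PV int_{-1}^{1} x/x dx = 2 (a linear numerator is reproduced exactly)
    print(pv_integral(lambda x: x, -1.0, 1.0, 0.0))
    # PV int_0^2 dx/(x - 0.5) = ln 3
    print(pv_integral(lambda x: 1.0, 0.0, 2.0, 0.5))
```

Because the log term is handled analytically, the singular cell contributes its exact principal value, which is precisely what a naive composite trapezoidal rule gets wrong near the pole.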

7. Second-order approximation for heat conduction: dissipation principle and free energies

PubMed Central

Amendola, Giovambattista; Golden, Murrough

2016-01-01

In the context of new models of heat conduction, the second-order approximation of Tzou's theory, derived by Quintanilla and Racke, has been studied recently by two of the present authors, where it was proved equivalent to a fading memory material. The importance of determining free energy functionals for such materials, and indeed for any material with memory, is emphasized. Because the kernel does not satisfy certain convexity restrictions that allow us to obtain various traditional free energies for materials with fading memory, it is necessary to restrict the study to the minimum and related free energies, which do not require these restrictions. Thus, the major part of this work is devoted to deriving an explicit expression for the minimum free energy. Simple modifications of this expression also give an intermediate free energy and the maximum free energy for the material. These derivations differ in certain important respects from earlier work on such free energies. PMID:27118896

8. Expeditious Stochastic Calculation of Random-Phase Approximation Energies for Thousands of Electrons in Three Dimensions.

PubMed

Neuhauser, Daniel; Rabani, Eran; Baer, Roi

2013-04-04

A fast method is developed for calculating the random phase approximation (RPA) correlation energy for density functional theory. The correlation energy is given by a trace over a projected RPA response matrix, and the trace is taken by a stochastic approach using random perturbation vectors. For a fixed statistical error in the total energy per electron, the method scales, at most, quadratically with the system size; however, in practice, due to self-averaging, it requires less statistical sampling as the system grows, and the performance is close to linear scaling. We demonstrate the method by calculating the RPA correlation energy for cadmium selenide and silicon nanocrystals with over 1500 electrons. We find that the RPA correlation energies per electron are largely independent of the nanocrystal size. In addition, we show that a correlated sampling technique enables calculation of the energy difference between two slightly distorted configurations with scaling and a statistical error similar to that of the total energy per electron.
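The trace-by-random-vectors step described above is, in its simplest form, Hutchinson's stochastic trace estimator. The sketch below demonstrates it on a small explicit symmetric matrix of our own choosing; the paper applies the same idea to a projected RPA response matrix, which is not reproduced here.

```python
import random

# Stochastic trace estimation with random Rademacher (+/-1) vectors
# (Hutchinson's estimator) -- a minimal sketch of the trace-sampling idea;
# the matrix below is an arbitrary test case, not an RPA response matrix.

def stochastic_trace(matvec, dim, samples, rng):
    """Estimate tr(A) as the average of v^T (A v) over random +/-1 vectors."""
    acc = 0.0
    for _ in range(samples):
        v = [rng.choice((-1.0, 1.0)) for _ in range(dim)]
        av = matvec(v)
        acc += sum(vi * avi for vi, avi in zip(v, av))
    return acc / samples

if __name__ == "__main__":
    a = [[2.0, 0.1, 0.0],
         [0.1, 3.0, 0.1],
         [0.0, 0.1, 4.0]]          # symmetric test matrix, trace = 9
    matvec = lambda v: [sum(x * y for x, y in zip(row, v)) for row in a]
    est = stochastic_trace(matvec, 3, 20000, random.Random(0))
    print(est)                     # statistically close to 9.0
```

Note that only matrix-vector products are needed, which is what makes the approach attractive when the matrix itself is too large to form.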

9. Second-order approximation for heat conduction: dissipation principle and free energies.

PubMed

Amendola, Giovambattista; Fabrizio, Mauro; Golden, Murrough; Lazzari, Barbara

2016-02-01

In the context of new models of heat conduction, the second-order approximation of Tzou's theory, derived by Quintanilla and Racke, has been studied recently by two of the present authors, where it was proved equivalent to a fading memory material. The importance of determining free energy functionals for such materials, and indeed for any material with memory, is emphasized. Because the kernel does not satisfy certain convexity restrictions that allow us to obtain various traditional free energies for materials with fading memory, it is necessary to restrict the study to the minimum and related free energies, which do not require these restrictions. Thus, the major part of this work is devoted to deriving an explicit expression for the minimum free energy. Simple modifications of this expression also give an intermediate free energy and the maximum free energy for the material. These derivations differ in certain important respects from earlier work on such free energies.

10. Approximate method of free energy calculation for spin system with arbitrary connection matrix

Kryzhanovsky, Boris; Litinskii, Leonid

2015-01-01

The proposed method of free energy calculation is based on approximating the energy distribution in the microcanonical ensemble by a Gaussian distribution. We expect our approach to be effective for systems with long-range interaction, where a large coordination number q ensures the correctness of applying the central limit theorem. However, the method also provides good results for systems with short-range interaction, when the number q is not so large.
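The Gaussian approximation can be checked numerically on a small toy spin system (the couplings below are made up for illustration and are not the authors' model): approximate ln Z using only the mean and variance of the energy over all microstates, and compare with exact enumeration at high temperature.

```python
import math
import random
from itertools import product

# Sketch of the Gaussian free-energy approximation: treat the distribution
# of energies over all 2^N spin states as Gaussian, so that
#   ln Z ≈ N ln 2 - beta*mu + beta^2*sigma^2/2,
# and compare with exact ln Z by enumeration on a small toy Ising-type
# system (couplings here are arbitrary, for illustration only).

def toy_system(n, rng):
    """Random symmetric pair couplings J[i][j] for an n-spin toy model."""
    j = [[0.0] * n for _ in range(n)]
    for a in range(n):
        for b in range(a + 1, n):
            j[a][b] = rng.uniform(-1.0, 1.0)
    return j

def energies(j, n):
    """Energies of all 2^n spin configurations."""
    out = []
    for spins in product((-1, 1), repeat=n):
        e = -sum(j[a][b] * spins[a] * spins[b]
                 for a in range(n) for b in range(a + 1, n))
        out.append(e)
    return out

def log_z_exact(es, beta):
    m = max(-beta * e for e in es)             # log-sum-exp for stability
    return m + math.log(sum(math.exp(-beta * e - m) for e in es))

def log_z_gauss(es, beta):
    n_states = len(es)
    mu = sum(es) / n_states
    var = sum((e - mu) ** 2 for e in es) / n_states
    return math.log(n_states) - beta * mu + beta * beta * var / 2.0

if __name__ == "__main__":
    n = 8
    es = energies(toy_system(n, random.Random(1)), n)
    beta = 0.1                                  # high temperature
    print(log_z_exact(es, beta), log_z_gauss(es, beta))
```

At beta = 0 the two expressions agree exactly (both reduce to N ln 2); the error grows with beta through the neglected higher cumulants of the energy distribution.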

11. Dielectric Matrix Formulation of Correlation Energies in the Random Phase Approximation: Inclusion of Exchange Effects.

PubMed

Mussard, Bastien; Rocca, Dario; Jansen, Georg; Ángyán, János G

2016-05-10

Starting from the general expression for the ground state correlation energy in the adiabatic-connection fluctuation-dissipation theorem (ACFDT) framework, it is shown that the dielectric matrix formulation, which is usually applied to calculate the direct random phase approximation (dRPA) correlation energy, can be used for alternative RPA expressions including exchange effects. Within this framework, the ACFDT analog of the second order screened exchange (SOSEX) approximation leads to a logarithmic formula for the correlation energy similar to the direct RPA expression. Alternatively, the contribution of the exchange can be included in the kernel used to evaluate the response functions. In this case, the use of an approximate kernel is crucial to simplify the formalism and to obtain a correlation energy in logarithmic form. Technical details of the implementation of these methods are discussed, and it is shown that one can take advantage of density fitting or Cholesky decomposition techniques to improve the computational efficiency; a discussion of the numerical quadrature performed on the frequency variable is also provided. A series of test calculations on atomic correlation energies and molecular reaction energies shows that exchange effects are instrumental for improvement over direct RPA results.

12. Infinite order sudden approximation for rotational energy transfer in gaseous mixtures

NASA Technical Reports Server (NTRS)

Goldflam, R.; Kouri, D. J.; Green, S.

1977-01-01

Rotational energy transfer in gaseous mixtures is analyzed within the framework of the infinite order sudden (IOS) approximation, and a new derivation of the IOS from the coupled states Lippmann-Schwinger equation is presented. This approach shows the relation between the IOS and coupled state T matrices. The general IOS effective cross section can be factored into a finite sum of 'spectroscopic coefficients' and 'dynamical coefficients'. The evaluation of these coefficients is considered. Pressure broadening for the systems HD-He, HCl-He, CO-He, HCl-Ar, and CO2-Ar is calculated, and results based on the IOS approximation are compared with coupled state results. The IOS approximation is found to be very accurate whenever the rotor spacings are small compared to the kinetic energy, provided closed channels do not play too great a role.

13. Energy Stable Space-Time Discontinuous Galerkin Approximations of the 2-Fluid Plasma Equations

Rossmanith, James; Barth, Tim

2010-11-01

Energy stable variants of the space-time discontinuous Galerkin (DG) finite element method are developed that approximate the ideal two-fluid plasma equations. Using standard symmetrization techniques, the two-fluid plasma equations are symmetrized via a convex entropy function and the introduction of entropy variables. Using these entropy variables, the source term coupling in the two-fluid plasma equations is shown to have iso-energetic properties, so that the source term neither creates nor removes energy from the system. Finite-dimensional approximation spaces utilizing entropy variables are used in the DG discretization, yielding provable nonlinear stability and exact preservation of this iso-energetic source term property. Numerical results for the two-fluid approximation of magnetic reconnection are presented, verifying and assessing properties of the present method.

15. Exchange-correlation energy from pairing matrix fluctuation and the particle-particle random phase approximation.

PubMed

van Aggelen, Helen; Yang, Yang; Yang, Weitao

2014-05-14

Despite their unmatched success for many applications, commonly used local, semi-local, and hybrid density functionals still face challenges when it comes to describing long-range interactions, static correlation, and electron delocalization. Density functionals of both the occupied and virtual orbitals are able to address these problems. The particle-hole (ph-) Random Phase Approximation (RPA), a functional of occupied and virtual orbitals, has recently seen a revival within the density functional theory community. Following up on an idea introduced in our recent communication [H. van Aggelen, Y. Yang, and W. Yang, Phys. Rev. A 88, 030501 (2013)], we formulate more general adiabatic connections for the correlation energy in terms of pairing matrix fluctuations described by the particle-particle (pp-) propagator. With numerical examples of the pp-RPA, the lowest-order approximation to the pp-propagator, we illustrate the potential of density functional approximations based on pairing matrix fluctuations. The pp-RPA is size-extensive, self-interaction free, fully anti-symmetric, describes the strong static correlation limit in H2, and eliminates delocalization errors in H2(+) and other single-bond systems. It gives surprisingly good non-bonded interaction energies, competitive with the ph-RPA, with the correct R(-6) asymptotic decay as a function of the separation R, which we argue is mainly attributable to its correct second-order energy term. While the pp-RPA tends to underestimate absolute correlation energies, it gives good relative energies: much better atomization energies than the ph-RPA, as it has no tendency to underbind, and reaction energies of similar quality. The adiabatic connection in terms of pairing matrix fluctuation paves the way for promising new density functional approximations.

16. Exchange-correlation energy from pairing matrix fluctuation and the particle-particle random phase approximation

SciTech Connect

Aggelen, Helen van; Yang, Yang; Yang, Weitao

2014-05-14

Despite their unmatched success for many applications, commonly used local, semi-local, and hybrid density functionals still face challenges when it comes to describing long-range interactions, static correlation, and electron delocalization. Density functionals of both the occupied and virtual orbitals are able to address these problems. The particle-hole (ph-) Random Phase Approximation (RPA), a functional of occupied and virtual orbitals, has recently seen a revival within the density functional theory community. Following up on an idea introduced in our recent communication [H. van Aggelen, Y. Yang, and W. Yang, Phys. Rev. A 88, 030501 (2013)], we formulate more general adiabatic connections for the correlation energy in terms of pairing matrix fluctuations described by the particle-particle (pp-) propagator. With numerical examples of the pp-RPA, the lowest-order approximation to the pp-propagator, we illustrate the potential of density functional approximations based on pairing matrix fluctuations. The pp-RPA is size-extensive, self-interaction free, fully anti-symmetric, describes the strong static correlation limit in H{sub 2}, and eliminates delocalization errors in H{sub 2}{sup +} and other single-bond systems. It gives surprisingly good non-bonded interaction energies – competitive with the ph-RPA – with the correct R{sup −6} asymptotic decay as a function of the separation R, which we argue is mainly attributable to its correct second-order energy term. While the pp-RPA tends to underestimate absolute correlation energies, it gives good relative energies: much better atomization energies than the ph-RPA, as it has no tendency to underbind, and reaction energies of similar quality. The adiabatic connection in terms of pairing matrix fluctuation paves the way for promising new density functional approximations.

17. Comparison of overlap-based models for approximating the exchange-repulsion energy.

PubMed

Söderhjelm, Pär; Karlström, Gunnar; Ryde, Ulf

2006-06-28

Different ways of approximating the exchange-repulsion energy with a classical potential function have been investigated by fitting various expressions to the exact exchange-repulsion energy for a large set of molecular dimers. The expressions involve either the orbital overlap or the electron-density overlap. For comparison, the parameter-free exchange-repulsion model of the effective fragment potential (EFP) is also evaluated. The results show that exchange-repulsion energy is nearly proportional to both the orbital overlap and the density overlap. For accurate results, a distance-dependent correction is needed in both cases. If few parameters are desired, orbital overlap is superior to density overlap, but the fit to density overlap can be significantly improved by introducing more parameters. The EFP performs well, except for delocalized pi systems. However, an overlap expression with a few parameters seems to be slightly more accurate and considerably easier to approximate.

18. Multiscale cross-approximate entropy analysis as a measurement of complexity between ECG R-R interval and PPG pulse amplitude series among the normal and diabetic subjects.

PubMed

Wu, Hsien-Tsai; Lee, Chih-Yuan; Liu, Cyuan-Cin; Liu, An-Bang

2013-01-01

Physiological signals often show complex fluctuation (CF) under the dual influence of temporal and spatial scales, and CF can be used to assess the health of physiologic systems in the human body. This study applied multiscale cross-approximate entropy (MC-ApEn) to quantify the complex fluctuation between R-R interval series and photoplethysmography amplitude series. All subjects were divided into the following two groups: healthy upper middle-aged subjects (Group 1, age range: 41-80 years, n = 27) and upper middle-aged subjects with type 2 diabetes (Group 2, age range: 41-80 years, n = 24). There were significant differences in heart rate variability (LHR) between Groups 1 and 2 (1.94 ± 1.21 versus 1.32 ± 1.00, P = 0.031). Results also demonstrated differences in the sum of large-scale MC-ApEn (MC-ApEn(LS)) (5.32 ± 0.50 versus 4.74 ± 0.78, P = 0.003). This parameter is in good agreement with the pulse-pulse interval and pulse amplitude ratio (PAR), a simplified assessment of baroreflex activity. In conclusion, this study employed the MC-ApEn method, integrating multiple temporal and spatial scales, to quantify the complex interaction between the two physiological signals. The MC-ApEn(LS) parameter could accurately reflect the disease process in diabetics and might be another way of assessing autonomic nerve function.
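A single-scale cross-approximate entropy between two series can be sketched as below; the MC-ApEn method of the study additionally coarse-grains both series over a range of temporal scales, which is omitted here, and the template length m and tolerance r are illustrative choices rather than the study's parameters.

```python
import math
import random

# Single-scale cross-approximate entropy between two series (sketch).
# The multiscale method in the study also coarse-grains both series over
# temporal scales; that step is omitted, and m and r are illustrative.

def cross_apen(u, v, m=2, r=0.2):
    """Cross-ApEn(m, r): (ir)regularity of v's patterns relative to u's."""
    n = min(len(u), len(v))

    def phi(mm):
        eps = 1e-12                  # guard against log(0) when no match
        nt = n - mm + 1
        total = 0.0
        for i in range(nt):
            matches = 0
            for j in range(nt):
                # Chebyshev distance between length-mm template vectors
                if max(abs(u[i + k] - v[j + k]) for k in range(mm)) <= r:
                    matches += 1
            total += math.log(matches / nt + eps)
        return total / nt

    return phi(m) - phi(m + 1)

if __name__ == "__main__":
    rng = random.Random(0)
    x = [math.sin(0.3 * t) for t in range(120)]
    noisy = [xi + rng.gauss(0.0, 0.3) for xi in x]
    print(cross_apen(x, x))       # identical, highly synchronized series
    print(cross_apen(x, noisy))   # noisy counterpart
```

Lower values indicate greater synchrony between the two series; the multiscale variant repeats this computation on coarse-grained copies of both signals.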

19. Correlation matrix renormalization approximation for total energy calculations of correlated electron systems

Yao, Y. X.; Liu, C.; Liu, J.; Lu, W. C.; Wang, C. Z.; Ho, K. M.

2013-03-01

The recently introduced correlation matrix renormalization approximation (CMRA) was further developed by adopting a completely factorizable form for the renormalization z-factors, which assumes the validity of Wick's theorem with respect to the Gutzwiller wave function. This approximation (CMR-II) shows better dissociation behavior than the original one (CMR-I), which is based on the straightforward generalization of the Gutzwiller approximation to two-body interactions. We further improved the performance of CMRA by redefining the z-factors as a function f(z) of the CMR-II z-factors, which we call CMR-III. We obtained an analytical expression for f(z) by enforcing equality of the energy functionals of CMR-III and full configuration interaction for the benchmark minimal-basis H2. We show that CMR-III yields quite good binding energies and dissociation behaviors for various hydrogen clusters with a converged basis set. Finally, we apply CMR-III to hydrogen crystal phases and compare the results with quantum Monte Carlo. Research supported by the U.S. Department of Energy, Office of Basic Energy Sciences, Division of Materials Sciences and Engineering. Ames Laboratory is operated for the U.S. DOE by Iowa State University under Contract No. DE-AC02-07CH11358.

20. Assessment of correlation energies based on the random-phase approximation

Paier, Joachim; Ren, Xinguo; Rinke, Patrick; Scuseria, Gustavo E.; Grüneis, Andreas; Kresse, Georg; Scheffler, Matthias

2012-04-01

The random-phase approximation to the ground state correlation energy (RPA) in combination with exact exchange (EX) has brought the Kohn-Sham (KS) density functional theory one step closer towards a universal, ‘general purpose first-principles method’. In an effort to systematically assess the influence of several correlation energy contributions beyond RPA, this paper presents dissociation energies of small molecules and solids, activation energies for hydrogen transfer and non-hydrogen transfer reactions, as well as reaction energies for a number of common test sets. We benchmark EX + RPA and several flavors of energy functionals going beyond it: second-order screened exchange (SOSEX), single-excitation (SE) corrections, renormalized single-excitation (rSE) corrections and their combinations. Both the SE correction and the SOSEX contribution to the correlation energy significantly improve on the notorious tendency of EX + RPA to underbind. Surprisingly, activation energies obtained using EX + RPA based on a KS reference alone are remarkably accurate. RPA + SOSEX + rSE provides an equal level of accuracy for reaction as well as activation energies and overall gives the most balanced performance, because of which it can be applied to a wide range of systems and chemical reactions.

1. Magnetotail energy storage and release during the CDAW 6 substorm analysis intervals

NASA Technical Reports Server (NTRS)

Baker, D. N.; Fritz, T. A.; Mcpherron, R. L.; Fairfield, D. H.; Kamide, Y.; Baumjohann, W.

1985-01-01

The concept of the Coordinated Data Analysis Workshop (CDAW) grew out of the International Magnetospheric Study (IMS) program. According to this concept, data are pooled from a wide variety of spacecraft and ground-based sources for limited time intervals. These data provide the basis for very detailed correlative analyses, usually with fairly limited physical problems in mind. However, in the case of CDAW 6, truly global goals are involved. The primary goal is to trace the flow of energy from the solar wind through the magnetosphere to its ultimate dissipation by substorm processes. The present investigation has the specific goal of examining the evidence for the storage of solar wind energy in the magnetotail prior to substorm expansion phase onsets. Of particular interest is the determination, in individual substorm cases, of the time delays between the loading of energy into the magnetospheric system and the subsequent unloading of this energy.

2. Nucleation theory - Is replacement free energy needed?. [error analysis of capillary approximation

NASA Technical Reports Server (NTRS)

Doremus, R. H.

1982-01-01

It has been suggested that the classical theory of nucleation of a liquid from its vapor, as developed by Volmer and Weber (1926), needs modification with a factor referred to as the replacement free energy, and that the capillary approximation underlying the classical theory is in error. Here, the classical nucleation equation is derived from fluctuation theory, Gibbs' result for the reversible work to form a critical nucleus, and the rate of collision of gas molecules with a surface. The capillary approximation is not used in the derivation. The chemical potential of small drops is then considered, and it is shown that the capillary approximation can be derived from thermodynamic equations. The results show that no corrections to Volmer's equation are needed.
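The classical expressions at issue here are compact enough to evaluate directly. The sketch below, with illustrative water-vapor-like parameters (assumed values, not taken from the paper), computes the critical radius and the nucleation barrier of classical (Volmer-type) theory:

```python
import math

def critical_nucleus(sigma, v_mol, T, S, kB=1.380649e-23):
    """Classical nucleation theory: critical radius and barrier height.

    sigma : surface tension (J/m^2)
    v_mol : molecular volume of the liquid (m^3)
    T     : temperature (K)
    S     : supersaturation ratio p/p_eq (> 1)
    """
    dmu = kB * T * math.log(S)            # chemical-potential gain per molecule
    r_star = 2.0 * sigma * v_mol / dmu    # critical radius (Kelvin relation)
    dG_star = 16.0 * math.pi * sigma**3 * v_mol**2 / (3.0 * dmu**2)
    return r_star, dG_star

# Illustrative water-like numbers: sigma ~ 0.072 J/m^2, v ~ 3.0e-29 m^3
r_star, dG_star = critical_nucleus(0.072, 3.0e-29, 300.0, 4.0)
print(r_star, dG_star / (1.380649e-23 * 300.0))  # radius (m), barrier in kB*T
```

The barrier satisfies the identity ΔG* = (4π/3) σ r*², a quick consistency check on the two returned values.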

4. Introducing electron capture into the unitary-convolution-approximation energy-loss theory at low velocities

Schiwietz, G.; Grande, P. L.

2011-11-01

Recent developments in the theoretical treatment of electronic energy losses of bare and screened ions in gases are presented. Specifically, the unitary-convolution-approximation (UCA) stopping-power model has proven its strengths for the determination of nonequilibrium effects for light as well as heavy projectiles at intermediate to high projectile velocities. The focus of this contribution will be on the UCA and its extension to specific projectile energies far below 100 keV/u, by considering electron-capture contributions at charge-equilibrium conditions.

5. Introducing electron capture into the unitary-convolution-approximation energy-loss theory at low velocities

SciTech Connect

Schiwietz, G.; Grande, P. L.

2011-11-15

Recent developments in the theoretical treatment of electronic energy losses of bare and screened ions in gases are presented. Specifically, the unitary-convolution-approximation (UCA) stopping-power model has proven its strengths for the determination of nonequilibrium effects for light as well as heavy projectiles at intermediate to high projectile velocities. The focus of this contribution will be on the UCA and its extension to specific projectile energies far below 100 keV/u, by considering electron-capture contributions at charge-equilibrium conditions.

6. Consolidation of hydrophobic transition criteria by using an approximate energy minimization approach.

PubMed

Patankar, Neelesh A

2010-06-01

Recent experimental work has successfully revealed pressure induced transition from Cassie to Wenzel state on rough hydrophobic substrates. Formulas, based on geometric considerations and imposed pressure, have been developed as transition criteria. In the past, transition has also been considered as a process of overcoming the energy barrier between the Cassie and Wenzel states. A unified understanding of the various considerations of transition has not been apparent. To address this issue, in this work, we consolidate the transition criteria with a homogenized energy minimization approach. This approach decouples the problem of minimizing the energy to wet the rough substrate, from the energy of the macroscopic drop. It is seen that the transition from Cassie to Wenzel state, due to depinning of the liquid-air interface, emerges from the approximate energy minimization approach if the pressure-volume energy associated with the impaled liquid in the roughness is included. This transition can be viewed as a process in which the work done by the pressure force is greater than the barrier due to the surface energy associated with wetting the roughness. It is argued that another transition mechanism, due to a sagging liquid-air interface that touches the bottom of the roughness grooves, is not typically relevant if the substrate roughness is designed such that the Cassie state is at lower energy compared to the Wenzel state.
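For orientation, the standard Wenzel and Cassie-Baxter relations that underlie such transition criteria can be compared numerically. This is a minimal sketch of the textbook relations, not the paper's homogenized energy minimization; the parameter values are illustrative:

```python
import math

def apparent_contact_angles(theta_y_deg, r, phi_s):
    """Standard Wenzel and Cassie-Baxter relations for a rough surface.

    theta_y_deg : Young (flat-surface) contact angle, degrees
    r           : roughness factor (true area / projected area, r >= 1)
    phi_s       : solid fraction wetted by the liquid in the Cassie state (0..1)
    """
    ct = math.cos(math.radians(theta_y_deg))
    cos_w = r * ct                          # Wenzel
    cos_cb = phi_s * (ct + 1.0) - 1.0       # Cassie-Baxter
    return cos_w, cos_cb

def lower_energy_state(theta_y_deg, r, phi_s):
    """The state with the larger apparent cosine (smaller apparent angle)
    has the lower wetted-surface energy."""
    cos_w, cos_cb = apparent_contact_angles(theta_y_deg, r, phi_s)
    return "Wenzel" if cos_w > cos_cb else "Cassie"

# Hydrophobic substrate (Young angle 120 deg): high roughness favors Cassie
print(lower_energy_state(120.0, 2.0, 0.1), lower_energy_state(120.0, 1.05, 0.1))
```

Designing the texture so that the Cassie state comes out lower in energy is precisely the condition the abstract's last sentence refers to.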

7. A new heuristic method for approximating the number of local minima in partial RNA energy landscapes.

PubMed

Albrecht, Andreas A; Day, Luke; Abdelhadi Ep Souki, Ouala; Steinhöfel, Kathleen

2016-02-01

The analysis of energy landscapes plays an important role in mathematical modelling, simulation and optimisation. Among the main features of interest are the number and distribution of local minima within the energy landscape. Granier and Kallel proposed in 2002 a new sampling procedure for estimating the number of local minima. In the present paper, we focus on improved heuristic implementations of the general framework devised by Granier and Kallel with regard to run-time behaviour and accuracy of predictions. The new heuristic method is demonstrated for the case of partial energy landscapes induced by RNA secondary structures. While the computation of minimum free energy RNA secondary structures has been studied for a long time, the analysis of folding landscapes has gained momentum over the past years in the context of co-transcriptional folding and deeper insights into cell processes. The new approach has been applied to ten RNA instances of length between 99 nt and 504 nt and their respective partial energy landscapes defined by secondary structures within an energy offset ΔE above the minimum free energy conformation. The number of local minima within the partial energy landscapes ranges from 1440 to 3441. Our heuristic method produces for the best approximations on average a deviation below 3.0% from the true number of local minima.
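The abstract does not spell out the Granier-Kallel procedure, but the general idea of sampling-based counting can be illustrated on a toy discrete landscape: repeatedly descend from random starts and estimate the total number of minima from the hit statistics. Here the standard Chao1 estimator is used as an assumed stand-in, not the authors' method:

```python
import random
from collections import Counter

def steepest_descent(x, energy, neighbors):
    """Follow best-improving neighbors to a local minimum of a discrete landscape."""
    while True:
        best = min(neighbors(x), key=energy, default=x)
        if energy(best) >= energy(x):
            return x
        x = best

def chao1_estimate(minima_labels):
    """Chao1 lower-bound estimate of the number of distinct local minima,
    based on how many minima were hit once (f1) or twice (f2)."""
    counts = Counter(minima_labels)
    d = len(counts)
    f1 = sum(1 for c in counts.values() if c == 1)
    f2 = sum(1 for c in counts.values() if c == 2)
    return d + (f1 * f1) / (2.0 * f2) if f2 else d + f1 * (f1 - 1) / 2.0

# Toy 1-D lattice landscape with many basins (illustrative only)
random.seed(1)
N = 400
E = [((i * 7919) % 101) / 101.0 + 0.05 * (i % 13) for i in range(N)]
energy = lambda i: E[i]
neighbors = lambda i: [j for j in (i - 1, i + 1) if 0 <= j < N]

samples = [steepest_descent(random.randrange(N), energy, neighbors)
           for _ in range(300)]
true_minima = {i for i in range(N)
               if all(E[j] >= E[i] for j in neighbors(i))}
print(len(true_minima), len(set(samples)), chao1_estimate(samples))
```

The ratio of singly- to doubly-hit minima carries the information about how many basins the sampling has not yet seen, which is the essence of such estimators.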

8. Impact of nonlocal correlations over different energy scales: A dynamical vertex approximation study

Rohringer, G.; Toschi, A.

2016-09-01

In this paper, we investigate how nonlocal correlations affect, selectively, the physics of correlated electrons over different energy scales, from the Fermi level to the band edges. This goal is achieved by applying a diagrammatic extension of dynamical mean field theory (DMFT), the dynamical vertex approximation (D Γ A ), to study several spectral and thermodynamic properties of the unfrustrated Hubbard model in two and three dimensions. Specifically, we focus first on the low-energy regime by computing the electronic scattering rate and the quasiparticle mass renormalization for decreasing temperatures at a fixed interaction strength. This way, we obtain a precise characterization of the several steps through which the Fermi-liquid physics is progressively destroyed by nonlocal correlations. Our study is then extended to a broader energy range, by analyzing the temperature behavior of the kinetic and potential energy, as well as of the corresponding energy distribution functions. Our findings allow us to identify a smooth but definite evolution of the nature of nonlocal correlations by increasing interaction: They either increase or decrease the kinetic energy w.r.t. DMFT depending on the interaction strength being weak or strong, respectively. This reflects the corresponding evolution of the ground state from a nesting-driven (Slater) to a superexchange-driven (Heisenberg) antiferromagnet (AF), whose fingerprints are, thus, recognizable in the spatial correlations of the paramagnetic phase. Finally, a critical analysis of our numerical results of the potential energy at the largest interaction allows us to identify possible procedures to improve the ladder-based algorithms adopted in the dynamical vertex approximation.

9. Interval Data Analysis with the Energy Charting and Metrics Tool (ECAM)

SciTech Connect

Taasevigen, Danny J.; Katipamula, Srinivas; Koran, William

2011-07-07

Analyzing whole building interval data is an inexpensive but effective way to identify and improve building operations, and ultimately save money. Utilizing the Energy Charting and Metrics Tool (ECAM) add-in for Microsoft Excel, building operators and managers can begin implementing changes to their Building Automation System (BAS) after trending the interval data. The two data components needed for full analyses are whole building electricity consumption (kW or kWh) and outdoor air temperature (OAT). Using these two pieces of information, a series of plots and charts can be created in ECAM to monitor the building's performance over time, gain knowledge of how the building is operating, and make adjustments to the BAS to improve efficiency and start saving money.

10. Communication: Multipole approximations of distant pair energies in local correlation methods with pair natural orbitals

Werner, Hans-Joachim

2016-11-01

The accuracy of multipole approximations for distant pair energies in local second-order Møller-Plesset perturbation theory (LMP2) as introduced by Hetzer et al. [Chem. Phys. Lett. 290, 143 (1998)] is investigated for three chemical reactions involving molecules with up to 92 atoms. Various iterative and non-iterative approaches are compared, using different energy thresholds for distant pair selection. It is demonstrated that the simple non-iterative dipole-dipole approximation, which has been used in several recent pair natural orbitals (PNO)-LMP2 and PNO-LCCSD (local coupled-cluster with singles and doubles) methods, may underestimate the distant pair energies by up to 50% and can lead to significant errors in relative energies, unless very tight thresholds are used. The accuracy can be much improved by including higher multipole orders and by optimizing the distant pair amplitudes iteratively along with all other amplitudes. A new approach is presented in which very small special PNO domains for distant pairs are used in the iterative approach. This reduces the number of distant pair amplitudes by 3 orders of magnitude and keeps the additional computational effort for the iterative optimization of distant pair amplitudes minimal.
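The accuracy behavior described here, with dipole-dipole errors growing as pairs get closer, can be seen in a toy electrostatic analogue (not LMP2 itself): the exact Coulomb interaction of two small charge pairs versus the leading dipole-dipole multipole term, in Gaussian units:

```python
import math

def exact_interaction(pair_a, pair_b):
    """Exact Coulomb interaction energy between two sets of point charges,
    each given as [(q, (x, y, z)), ...]."""
    e = 0.0
    for qa, ra in pair_a:
        for qb, rb in pair_b:
            e += qa * qb / math.dist(ra, rb)
    return e

def dipole_dipole(p_a, p_b, R):
    """Leading multipole term: E = [p_a.p_b - 3 (p_a.n)(p_b.n)] / R^3."""
    Rn = math.sqrt(sum(c * c for c in R))
    n = [c / Rn for c in R]
    dot = sum(a * b for a, b in zip(p_a, p_b))
    pan = sum(a * c for a, c in zip(p_a, n))
    pbn = sum(b * c for b, c in zip(p_b, n))
    return (dot - 3.0 * pan * pbn) / Rn**3

def z_dipole(center_z, d=0.2):
    """+/- unit charges separated by d along z: dipole moment p = d."""
    return [(+1.0, (0.0, 0.0, center_z + d / 2)),
            (-1.0, (0.0, 0.0, center_z - d / 2))]

for R in (2.0, 5.0, 20.0):
    ex = exact_interaction(z_dipole(0.0), z_dipole(R))
    ap = dipole_dipole((0, 0, 0.2), (0, 0, 0.2), (0.0, 0.0, R))
    print(R, ex, ap, abs(ap - ex) / abs(ex))
```

The relative error of the leading term falls off with separation, which is why short-range pairs need higher multipole orders or explicit amplitudes, as the abstract argues.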

11. Communication: Multipole approximations of distant pair energies in local correlation methods with pair natural orbitals.

PubMed

Werner, Hans-Joachim

2016-11-28

The accuracy of multipole approximations for distant pair energies in local second-order Møller-Plesset perturbation theory (LMP2) as introduced by Hetzer et al. [Chem. Phys. Lett. 290, 143 (1998)] is investigated for three chemical reactions involving molecules with up to 92 atoms. Various iterative and non-iterative approaches are compared, using different energy thresholds for distant pair selection. It is demonstrated that the simple non-iterative dipole-dipole approximation, which has been used in several recent pair natural orbitals (PNO)-LMP2 and PNO-LCCSD (local coupled-cluster with singles and doubles) methods, may underestimate the distant pair energies by up to 50% and can lead to significant errors in relative energies, unless very tight thresholds are used. The accuracy can be much improved by including higher multipole orders and by optimizing the distant pair amplitudes iteratively along with all other amplitudes. A new approach is presented in which very small special PNO domains for distant pairs are used in the iterative approach. This reduces the number of distant pair amplitudes by 3 orders of magnitude and keeps the additional computational effort for the iterative optimization of distant pair amplitudes minimal.

12. Low-energy extensions of the eikonal approximation to heavy-ion scattering

SciTech Connect

Aguiar, C.E.; Aguiar, C.E.; Zardi, F.; Vitturi, A.

1997-09-01

We discuss different schemes devised to extend the eikonal approximation to the regime of low bombarding energies (below 50 MeV per nucleon) in heavy-ion collisions. From one side we consider the first- and second-order corrections derived from Wallace's expansion. As an alternative approach we examine the procedure of accounting for the distortion of the eikonal straight-line trajectory by shifting the impact parameter to the corresponding classical turning point. The two methods are tested for different combinations of colliding systems and bombarding energies, by comparing the angular distributions they provide with the exact solution of the scattering problem. We find that the best results are obtained with the shifted trajectories, the Wallace expansion showing a slow convergence at low energies, in particular for heavy systems characterized by a strong Coulomb field. © 1997 The American Physical Society
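The trajectory-shift prescription has a closed form for a pure Coulomb field: the straight-line impact parameter b is replaced by the classical distance of closest approach of the Rutherford orbit. A minimal sketch of that standard geometry (eta is the Sommerfeld parameter and k the wave number; this is textbook Rutherford kinematics, not the full coupled calculation of the paper):

```python
import math

def shifted_impact_parameter(b, eta_over_k):
    """Distance of closest approach of a classical Rutherford orbit with
    impact parameter b in a repulsive Coulomb field.

    eta_over_k = eta / k = Z1*Z2*e^2 / (2*E) is half the head-on
    closest-approach distance; the shift reduces to b itself when the
    Coulomb field vanishes (eta -> 0).
    """
    a = eta_over_k
    return a + math.sqrt(a * a + b * b)

# Strong Coulomb field pushes the effective trajectory outward
for b in (0.0, 2.0, 5.0, 10.0):
    print(b, shifted_impact_parameter(b, 3.0))
```

Evaluating the eikonal phase along r_min instead of b is what restores reasonable angular distributions at low energy, per the comparison reported above.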

13. Eikonal approximation in the theory of energy loss by fast charged particles

SciTech Connect

Matveev, V. I.; Makarov, D. N.; Gusarevich, E. S.

2011-05-15

Energy losses of fast charged particles as a result of collisions with atoms are considered in the eikonal approximation. It is shown that the nonperturbative contribution to effective stopping in the range of intermediate impact parameters (comparable with the characteristic sizes of the electron shells of the target atoms) may turn out to be significant as compared to the shell corrections to the Bethe-Bloch formula calculated in perturbation theory. The simplifying assumptions are formulated under which the Bethe-Bloch formula can be derived in the eikonal approximation. It is shown that allowance for nonperturbative effects may lead to considerable (up to 50%) corrections to the Bethe-Bloch formula. The applicability range of the Bethe-Bloch formula is analyzed. It is concluded that calculating the energy loss in the eikonal approximation (in the range of impact parameters for which the Bethe-Bloch formula is normally used) is much more advantageous than analysis based on the Bethe-Bloch formula and its modifications: not only is the Bloch correction included, but the range of intermediate impact parameters is also treated nonperturbatively, and direct generalization to collisions of complex projectiles and targets is possible.
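For orientation, the textbook nonrelativistic Bethe formula that these corrections refine can be evaluated directly. A sketch in convenient eV/nm units with illustrative water-like target parameters (this is the standard formula, not the paper's eikonal result):

```python
import math

E2 = 1.44          # e^2 in eV*nm (Gaussian units)
MEC2 = 511.0e3     # electron rest energy, eV

def bethe_stopping(z, n_e, beta2, I):
    """Nonrelativistic Bethe stopping power, -dE/dx, in eV/nm.

    z     : projectile charge number
    n_e   : electron density of the target (nm^-3)
    beta2 : (v/c)^2 of the projectile
    I     : mean excitation energy of the target (eV)
    """
    pref = 4.0 * math.pi * z * z * E2 * E2 * n_e / (MEC2 * beta2)
    return pref * math.log(2.0 * MEC2 * beta2 / I)

# Illustrative: proton in a water-like medium (n_e ~ 334 nm^-3, I ~ 75 eV)
for beta2 in (0.01, 0.02, 0.05):
    print(beta2, bethe_stopping(1, 334.0, beta2, 75.0))
```

Above the stopping-power maximum the 1/v^2 prefactor dominates the slowly growing logarithm, so the stopping falls with projectile energy; the shell and Bloch corrections discussed in the abstract modify this baseline.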

14. Optimal quasifree approximation: Reconstructing the spectrum from ground-state energies

Campos Venuti, Lorenzo

2011-07-01

The sequence of ground-state energy density at finite size, e_L, provides much more information than usually believed. Having at our disposal e_L for short lattice sizes, we show how to reconstruct an approximate quasiparticle dispersion for any interacting model. The accuracy of this method relies on the best possible quasifree approximation to the model, consistent with the observed values of the energy e_L. We also provide a simple criterion to assess whether such a quasifree approximation is valid. As a side effect, our method is able to assess whether the nature of the quasiparticles is fermionic or bosonic together with the effective boundary conditions of the model. When applied to the spin-1/2 Heisenberg model, the method produces a band of Fermi quasiparticles very close to the exact one of des Cloizeaux and Pearson. The method is further tested on a spin-1/2 Heisenberg model with explicit dimerization and on a spin-1 chain with single-ion anisotropy. A connection with the Riemann hypothesis is also pointed out.

15. Minimum energy and the end of the inspiral in the post-Newtonian approximation

Cabero, Miriam; Nielsen, Alex B.; Lundgren, Andrew P.; Capano, Collin D.

2017-03-01

The early inspiral phase of a compact binary coalescence is well modeled by the post-Newtonian (PN) approximation to the orbital energy and gravitational wave flux. The transition from the inspiral phase to the plunge can be defined by the minimum energy circular orbit (MECO). In the extreme mass-ratio limit the PN energy equals the energy of the (post-Newtonian expanded) exact Kerr solution. However, for comparable-mass systems the MECO of the PN energy does not exist when bodies have large spins and no analytical solution to the end of the inspiral is known. By including the exact Kerr limit, we extract a well-defined minimum of the orbital energy beyond which the plunge or merger occurs. We study the hybrid condition for a number of cases of both black hole and neutron stars and compare to other commonly employed definitions. Our method can be used for any known order of the post-Newtonian series and enables the MECO condition to be used to define the end of the inspiral phase for highly spinning, comparable mass systems.
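In the extreme mass-ratio limit invoked above, the MECO reduces to the innermost stable circular orbit of the exact solution. For the nonspinning (Schwarzschild) case the specific orbital energy is known in closed form, E(x)/mu = (1 - 2x)/sqrt(1 - 3x) with x = M/r, and is minimized at x = 1/6 (r = 6M). A minimal numerical sketch of extracting that minimum:

```python
def schwarzschild_energy(x):
    """Specific energy E/mu of a circular geodesic at x = M/r (G = c = 1)."""
    return (1.0 - 2.0 * x) / (1.0 - 3.0 * x) ** 0.5

def find_meco(f, x_lo=1e-4, x_hi=0.33, n=200000):
    """Coarse scan for the minimum-energy circular orbit of an energy function."""
    best_x, best_e = x_lo, f(x_lo)
    for i in range(1, n + 1):
        x = x_lo + (x_hi - x_lo) * i / n
        e = f(x)
        if e < best_e:
            best_x, best_e = x, e
    return best_x, best_e

x_meco, e_meco = find_meco(schwarzschild_energy)
print(x_meco, e_meco)  # -> approx 1/6 and ~0.9428 (ISCO at r = 6M, ~5.7% binding)
```

For comparable masses the same scan can be applied to the hybrid PN-plus-Kerr energy function the authors construct; only the closed-form `schwarzschild_energy` above would be replaced.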

16. A novel analytical approximation technique for highly nonlinear oscillators based on the energy balance method

Hosen, Md. Alal; Chowdhury, M. S. H.; Ali, Mohammad Yeakub; Ismail, Ahmad Faris

In the present paper, a novel analytical approximation technique has been proposed based on the energy balance method (EBM) to obtain approximate periodic solutions for generalized highly nonlinear oscillators. The expressions for the natural frequency-amplitude relationship are obtained in a novel analytical way. The accuracy of the proposed method is investigated on three benchmark oscillatory problems, namely, the simple relativistic oscillator, the stretched elastic wire oscillator (with a mass attached to its midpoint) and the Duffing-relativistic oscillator. For an initial oscillation amplitude A0 = 100, the maximal relative errors of the natural frequency found in the three oscillators are 2.1637%, 0.0001% and 1.201%, respectively, which are much lower than the errors found using existing methods. It is highly remarkable that excellent accuracy of the approximate natural frequency has been found, valid for the whole range of large oscillation amplitudes, as compared with the exact ones. The very simple solution procedure and the high accuracy found in the three benchmark problems reveal the novelty, reliability and wider applicability of the proposed analytical approximation technique.
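The flavor of such amplitude-frequency formulas can be shown on a generic cubic Duffing oscillator u'' + u + eps*u^3 = 0 (an illustration, not one of the paper's benchmark solutions): the leading-order energy/harmonic-balance frequency stays within about 1% of the exact quadrature result even at eps*A^2 = 1:

```python
import math

def omega_balance(eps, A):
    """Leading-order energy/harmonic-balance frequency for u'' + u + eps*u^3 = 0."""
    return math.sqrt(1.0 + 0.75 * eps * A * A)

def omega_exact(eps, A, n=20000):
    """Exact frequency from the energy integral,
    T = 4 * int_0^{pi/2} A*cos(phi) dphi / sqrt(2*(V(A) - V(A*sin(phi)))),
    evaluated by the midpoint rule (the substitution u = A*sin(phi) removes
    the turning-point singularity)."""
    V = lambda u: 0.5 * u * u + 0.25 * eps * u ** 4
    h = (math.pi / 2.0) / n
    T = 0.0
    for i in range(n):
        phi = (i + 0.5) * h
        u = A * math.sin(phi)
        T += A * math.cos(phi) / math.sqrt(2.0 * (V(A) - V(u)))
    return 2.0 * math.pi / (4.0 * T * h)

# Strongly nonlinear case eps = 1, A = 1: agreement to well under 1%
print(omega_balance(1.0, 1.0), omega_exact(1.0, 1.0))
```

The same quadrature template works for any oscillator with a known potential V(u), so approximate frequency formulas of the EBM type can be benchmarked without time integration.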

17. Subtraction method in the second random-phase approximation: First applications with a Skyrme energy functional

Gambacurta, D.; Grasso, M.; Engel, J.

2015-09-01

We make use of a subtraction procedure, introduced to overcome double-counting problems in beyond-mean-field theories, in the second random-phase-approximation (SRPA) for the first time. This procedure guarantees the stability of the SRPA (so that all excitation energies are real). We show that the method fits perfectly into nuclear density-functional theory. We illustrate applications to the monopole and quadrupole response and to low-lying 0+ and 2+ states in the nucleus 16O . We show that the subtraction procedure leads to (i) results that are weakly cutoff dependent and (ii) a considerable reduction of the SRPA downwards shift with respect to the random-phase approximation (RPA) spectra (systematically found in all previous applications). This implementation of the SRPA model will allow a reliable analysis of the effects of two particle-two hole configurations (2p2h) on the excitation spectra of medium-mass and heavy nuclei.

18. Stabilization of quantum energy flows within the approximate quantum trajectory approach.

PubMed

Garashchuk, Sophya; Rassolov, Vitaly

2007-10-18

The hydrodynamic, or the de Broglie-Bohm, formulation provides an alternative to the conventional time-dependent Schrödinger equation based on quantum trajectories. The trajectory dynamics scales favorably with the system size, but it is, generally, unstable due to singularities in the exact quantum potential. The approximate quantum potential based on the fitting of the nonclassical component of the momentum operator in terms of a small basis is numerically stable but can lead to inaccurate large net forces in bound systems. We propose to compensate errors in the approximate quantum potential by applying a semiempirical friction-like force. This significantly improves the description of zero-point energy in bound systems. Examples are given for one-dimensional models relevant to nuclear dynamics.

19. Energy transfer in structured and unstructured environments: Master equations beyond the Born-Markov approximations

SciTech Connect

Iles-Smith, Jake; Dijkstra, Arend G.; Lambert, Neill; Nazir, Ahsan

2016-01-28

We explore excitonic energy transfer dynamics in a molecular dimer system coupled to both structured and unstructured oscillator environments. By extending the reaction coordinate master equation technique developed by Iles-Smith et al. [Phys. Rev. A 90, 032114 (2014)], we go beyond the commonly used Born-Markov approximations to incorporate system-environment correlations and the resultant non-Markovian dynamical effects. We obtain energy transfer dynamics for both underdamped and overdamped oscillator environments that are in perfect agreement with the numerical hierarchical equations of motion over a wide range of parameters. Furthermore, we show that the Zusman equations, which may be obtained in a semiclassical limit of the reaction coordinate model, are often incapable of describing the correct dynamical behaviour. This demonstrates the necessity of properly accounting for quantum correlations generated between the system and its environment when the Born-Markov approximations no longer hold. Finally, we apply the reaction coordinate formalism to the case of a structured environment comprising both underdamped (i.e., sharply peaked) and overdamped (broad) components simultaneously. We find that though an enhancement of the dimer energy transfer rate can be obtained when compared to an unstructured environment, its magnitude is rather sensitive to both the dimer-peak resonance conditions and the relative strengths of the underdamped and overdamped contributions.

20. Proposal for determining the energy content of gravitational waves by using approximate symmetries of differential equations

SciTech Connect

Hussain, Ibrar; Qadir, Asghar; Mahomed, F. M.

2009-06-15

Since gravitational wave spacetimes are time-varying vacuum solutions of Einstein's field equations, there is no unambiguous means to define their energy content. However, Weber and Wheeler had demonstrated that they do impart energy to test particles. There have been various proposals to define the energy content, but they have not met with great success. Here we propose a definition using 'slightly broken' Noether symmetries. We check whether this definition is physically acceptable. The procedure adopted is to appeal to 'approximate symmetries' as defined in Lie analysis and use them in the limit of the exact symmetry holding. A problem is noted with the use of the proposal for plane-fronted gravitational waves. To attain a better understanding of the implications of this proposal we also use an artificially constructed time-varying nonvacuum metric and evaluate its Weyl and stress-energy tensors so as to obtain the gravitational and matter components separately and compare them with the energy content obtained by our proposal. The procedure is also used for cylindrical gravitational wave solutions. The usefulness of the definition is demonstrated by the fact that it leads to a result on whether gravitational waves suffer self-damping.

1. An evaluation of energy-independent heavy ion transport coefficient approximations

NASA Technical Reports Server (NTRS)

Townsend, L. W.; Wilson, J. W.

1988-01-01

Utilizing a one-dimensional transport theory for heavy ion propagation, evaluations of typical energy-independent transport coefficient approximations are made by comparing theoretical depth-dose predictions to published experimental values for incident 670 MeV/nucleon Ne-20 beams in water. Results are presented for cases where the input nuclear absorption cross sections, the input fragmentation parameters, or both are fixed. The lack of fragment charge and mass concentration resulting from the use of Silberberg-Tsao fragmentation parameters continues to be the main source of disagreement between theory and experiment.

2. Validity of the Spin-Wave Approximation for the Free Energy of the Heisenberg Ferromagnet

Correggi, Michele; Giuliani, Alessandro; Seiringer, Robert

2015-10-01

We consider the quantum ferromagnetic Heisenberg model in three dimensions, for all spins S ≥ 1/2. We rigorously prove the validity of the spin-wave approximation for the excitation spectrum, at the level of the first non-trivial contribution to the free energy at low temperatures. Our proof comes with explicit, constructive upper and lower bounds on the error term. It uses in an essential way the bosonic formulation of the model in terms of the Holstein-Primakoff representation. In this language, the model describes interacting bosons with a hard-core on-site repulsion and a nearest-neighbor attraction. This attractive interaction makes the lower bound on the free energy particularly tricky: the key idea there is to prove a differential inequality for the two-particle density, which is thereby shown to be smaller than the probability density of a suitably weighted two-particle random process on the lattice.
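The first non-trivial low-temperature contribution referred to above comes from nearly free magnons. A continuum sketch with a quadratic dispersion eps(k) = k^2 (a simplification for illustration, not the rigorous bounds of the paper) shows the free-energy scaling f ~ -T^(5/2) numerically:

```python
import math

def magnon_free_energy(T, kmax=40.0, n=200000):
    """Free energy per unit volume of free bosonic spin waves with
    dispersion eps(k) = k^2 (units J*S*a^2 = k_B = 1):
        f(T) = T * int_0^inf dk  k^2/(2*pi^2) * ln(1 - exp(-k^2/T))
    evaluated by the midpoint rule."""
    h = kmax / n
    s = 0.0
    for i in range(n):
        k = (i + 0.5) * h
        s += k * k * math.log1p(-math.exp(-k * k / T))
    return T * s * h / (2.0 * math.pi ** 2)

# Rescaling k -> sqrt(T)*q shows f(T) is proportional to -T^(5/2):
f1, f2 = magnon_free_energy(1.0), magnon_free_energy(2.0)
print(f2 / f1)  # close to 2**2.5 ~ 5.657
```

Interactions and the hard-core constraint, which the paper controls rigorously, only enter at higher order in T, which is why this free-magnon term is the leading contribution.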

3. Self-energy-modified Poisson-Nernst-Planck equations: WKB approximation and finite-difference approaches.

PubMed

Xu, Zhenli; Ma, Manman; Liu, Pei

2014-07-01

We propose a modified Poisson-Nernst-Planck (PNP) model to investigate charge transport in electrolytes with an inhomogeneous dielectric environment. The model includes the ionic polarization due to the dielectric inhomogeneity and the ion-ion correlation. This is achieved through the self-energy of test ions, obtained by solving a generalized Debye-Hückel (DH) equation. We develop numerical methods for the system composed of the PNP and DH equations. In particular, to address the numerical challenge of solving the high-dimensional DH equation, we develop an analytical WKB approximation and a numerical approach based on the selective inversion of sparse matrices. The model and numerical methods are validated by simulating charge diffusion in electrolytes between two electrodes, for which the effects of dielectrics and correlation are investigated by comparing the results with the predictions of the classical PNP theory. We find that, at length scales of the interface separation comparable to the Bjerrum length, the results of the modified equations differ significantly from the classical PNP predictions, mostly due to the dielectric effect. It is also shown that when the ion self-energy is of weak or moderate strength, the WKB approximation presents high accuracy, compared to precise finite-difference results.

4. Cohesion and promotion energies in the transition metals: Implications of the local-density approximation

Watson, R. E.; Fernando, G. W.; Weinert, M.; Wang, Y. J.; Davenport, J. W.

1991-04-01

The accuracy of the local-density (LDA) or local-spin-density (LSDA) approximations when applied to transition metals is of great concern. Estimates of the cohesive energy compare the total energy of the solid with that of the free atom. This involves choosing the reference state of the free atom which, as a rule, will not be the free atom's ground state in LDA or LSDA. Comparing one reference state versus another, e.g., d^(n-1)s vs d^(n-2)s^2 for a transition metal, corresponds to calculating an s-d promotion energy Δ, which may be compared with experiment. Gunnarsson and Jones (GJ) [Phys. Rev. B 31, 7588 (1985)] found for the 3d row that the calculated Δ displayed systematic errors, which they attributed to a difference in error within the LSDA in the treatment of the coupling of the outer-core electrons with the d versus non-d valence electrons. This study has been extended to relativistic calculations for the 3d, 4d, and 5d rows and for other promotions. The situation is more complicated than suggested by GJ, and its implications for cohesive energy estimates will be discussed.

5. Performance and energy systems contributions during upper-body sprint interval exercise

PubMed Central

Franchini, Emerson; Takito, Monica Yuri; Dal’Molin Kiss, Maria Augusta Peduti

2016-01-01

The main purpose of this study was to investigate the performance and energy systems contribution during four upper-body Wingate tests interspersed by 3-min intervals. Fourteen well-trained male adult Judo athletes voluntarily took part in the present study. These athletes were from state to national level, were in their competitive period, but not engaged in any weight loss procedure. Energy systems contributions were estimated using oxygen uptake and blood lactate measurements. The main results indicated that there was higher glycolytic contribution compared to oxidative (P<0.001) during bout 1, but lower glycolytic contribution was observed compared to the phosphagen system (adenosine triphosphate-creatine phosphate, ATP-PCr) contribution during bout 3 (P<0.001), lower glycolytic contribution compared to oxidative and ATP-PCr (P<0.001 for both comparisons) contributions during bout 4 and lower oxidative compared to ATP-PCr during bout 4 (P=0.040). For the energy system contribution across Wingate bouts, the ATP-PCr contribution during bout 1 was lower than that observed during bout 4 (P=0.005), and the glycolytic system presented higher percentage contribution in the first bout compared to the third and fourth bouts (P<0.001 for both comparisons), and higher percentage participation in the second compared to the fourth bout (P<0.001). These results suggest that absolute oxidative and ATP-PCr participations were kept constant across Wingate tests, but there was an increase in relative participation of ATP-PCr in bout 4 compared to bout 1, probably due to the partial phosphocreatine resynthesis during intervals and to the decreased glycolytic activity. PMID:28119874
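The oxygen-uptake and lactate-based partitioning described here follows the standard O2-equivalent accounting: oxidative from exercise VO2 above rest, glycolytic from net lactate accumulation at roughly 3 ml O2 per kg per mmol/L, and ATP-PCr from the fast component of post-exercise VO2. A hypothetical sketch with illustrative numbers, not the study's data:

```python
def energy_system_contributions(vo2_exercise_l, vo2_rest_l, delta_lactate_mM,
                                body_mass_kg, fast_epoc_l):
    """Estimate energy-system contributions in litres of O2 equivalents.

    vo2_exercise_l   : O2 uptake integrated over the bout (L)
    vo2_rest_l       : resting O2 uptake over the same duration (L)
    delta_lactate_mM : net blood lactate accumulation (mmol/L)
    body_mass_kg     : athlete body mass (kg)
    fast_epoc_l      : fast component of post-exercise O2 uptake (L)
    """
    oxidative = vo2_exercise_l - vo2_rest_l
    glycolytic = delta_lactate_mM * 3.0 * body_mass_kg / 1000.0  # 3 ml O2/kg/mM
    atp_pcr = fast_epoc_l
    total = oxidative + glycolytic + atp_pcr
    shares = {sys: 100.0 * v / total
              for sys, v in (("oxidative", oxidative),
                             ("glycolytic", glycolytic),
                             ("ATP-PCr", atp_pcr))}
    return oxidative, glycolytic, atp_pcr, shares

# Illustrative 30-s upper-body bout for a 75-kg athlete
ox, gly, pcr, shares = energy_system_contributions(1.1, 0.15, 4.0, 75.0, 1.0)
print(ox, gly, pcr, shares)
```

Repeating the calculation per bout, as the study does across four Wingate tests, is what exposes the shift toward ATP-PCr and away from glycolysis in later bouts.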

6. Performance and energy systems contributions during upper-body sprint interval exercise.

PubMed

Franchini, Emerson; Takito, Monica Yuri; Dal'Molin Kiss, Maria Augusta Peduti

2016-12-01

The main purpose of this study was to investigate the performance and energy systems contribution during four upper-body Wingate tests interspersed by 3-min intervals. Fourteen well-trained male adult Judo athletes voluntarily took part in the present study. These athletes were from state to national level, were in their competitive period, but not engaged in any weight loss procedure. Energy systems contributions were estimated using oxygen uptake and blood lactate measurements. The main results indicated that there was higher glycolytic contribution compared to oxidative (P<0.001) during bout 1, but lower glycolytic contribution was observed compared to the phosphagen system (adenosine triphosphate-creatine phosphate, ATP-PCr) contribution during bout 3 (P<0.001), lower glycolytic contribution compared to oxidative and ATP-PCr (P<0.001 for both comparisons) contributions during bout 4 and lower oxidative compared to ATP-PCr during bout 4 (P=0.040). For the energy system contribution across Wingate bouts, the ATP-PCr contribution during bout 1 was lower than that observed during bout 4 (P=0.005), and the glycolytic system presented higher percentage contribution in the first bout compared to the third and fourth bouts (P<0.001 for both comparisons), and higher percentage participation in the second compared to the fourth bout (P<0.001). These results suggest that absolute oxidative and ATP-PCr participations were kept constant across Wingate tests, but there was an increase in relative participation of ATP-PCr in bout 4 compared to bout 1, probably due to the partial phosphocreatine resynthesis during intervals and to the decreased glycolytic activity.

7. Development of approximate method to analyze the characteristics of latent heat thermal energy storage system

SciTech Connect

Saitoh, T.S.; Hoshi, Akira

1999-07-01

The Third Conference of the Parties to the U.N. Framework Convention on Climate Change (COP3), held last December in Kyoto, urged the industrialized nations to reduce carbon dioxide (CO{sub 2}) emissions by 5.2 percent (on average) below the 1990 level in the period between 2008 and 2012 (Kyoto protocol). This implies that even the most advanced countries, such as the US, Japan, and the EU, will need to implement drastic policies and overcome many market barriers. One idea that leads to a path of low carbon intensity is to adopt an energy storage concept. One reason the efficiency of conventional energy systems has been relatively low is the lack of an energy storage subsystem. Most past energy systems, for example air-conditioning systems, have no energy storage component and usually operate with low energy efficiency. First, the effect of reducing CO{sub 2} emissions was examined assuming LHTES subsystems were incorporated in all residential and building air-conditioning systems. Another field of application of the LHTES is, of course, transportation. Future vehicles will be electric or hybrid vehicles. However, these vehicles will need considerable energy for air-conditioning. The LHTES system can provide enough energy for this purpose by storing nighttime electricity or heat rejected from the radiator or motor. Melting and solidification of a phase change material (PCM) in a capsule is of practical importance in latent heat thermal energy storage (LHTES) systems, which are considered very promising for reducing the peak demand of electricity in the summer season and also reducing carbon dioxide (CO{sub 2}) emissions. Two melting modes are involved in melting in capsules: one is close-contact melting between the solid bulk and the capsule wall, and the other is natural convection melting in the liquid (melt) region. Close-contact melting processes for a single enclosure have been solved using several

8. Exploring the Limits of Density Functional Approximations for Interaction Energies of Molecular Precursors to Organic Electronics.

PubMed

Steinmann, Stephan N; Corminboeuf, Clemence

2012-11-13

Neutral and charged assemblies of π-conjugated molecules span the field of organic electronics. Electronic structure computations can provide valuable information regarding the nature of the intermolecular interactions within molecular precursors to organic electronics. Here, we introduce a database of neutral (Pi29n) and radical (Orel26rad) dimer complexes that represent binding energies between organic functional units. The new benchmarks are used to test approximate electronic structure methods. Achieving accurate interaction energies for neutral complexes (Pi29n) is straightforward, so long as dispersion interactions are properly taken into account. However, π-dimer radical cations (Orel26rad) are examples of highly challenging situations for density functional approximations. The role of dispersion corrections is crucial, yet simultaneously long-range corrected exchange schemes are necessary to provide the proper dimer dissociation behavior. Nevertheless, long-range corrected functionals seriously underestimate the binding energy of Orel26rad at equilibrium geometries. In fact, only ωB97X-D, an empirical exchange-correlation functional fitted together with an empirical "classical" dispersion correction, leads to suitable results. Valuable alternatives are the more demanding MP2/6-31G*(0.25) level, as well as the most cost-effective combination involving a dispersion corrected long-range functional together with a smaller practical size basis set (e.g., LC-ωPBEB95-dDsC/6-31G*). The Orel26rad test set should serve as an ideal benchmark for assessing the performance of improved schemes.

9. A Different View of Solar Spectral Irradiance Variations: Modeling Total Energy over Six-Month Intervals

Woods, Thomas N.; Snow, Martin; Harder, Jerald; Chapman, Gary; Cookson, Angela

2015-10-01

A different approach to studying solar spectral irradiance (SSI) variations, without the need for long-term (multi-year) instrument degradation corrections, is examining the total energy of the irradiance variation during 6-month periods. This duration is selected because a solar active region typically appears suddenly and then takes 5 to 7 months to decay and disperse back into the quiet-Sun network. The solar outburst energy, which is defined as the irradiance integrated over the 6-month period and thus includes the energy from all phases of active region evolution, could be considered the primary cause for the irradiance variations. Because solar cycle variation is the consequence of multiple active region outbursts, understanding the energy spectral variation may provide a reasonable estimate of the variations for the 11-year solar activity cycle. The moderate-term (6-month) variations from the Solar Radiation and Climate Experiment (SORCE) instruments can be decomposed into positive (in-phase with solar cycle) and negative (out-of-phase) contributions by modeling the variations using the San Fernando Observatory (SFO) facular excess and sunspot deficit proxies, respectively. These excess and deficit variations are fit over 6-month intervals every 2 months over the mission, and these fitted variations are then integrated over time for the 6-month energy. The dominant component indicates which wavelengths are in-phase and which are out-of-phase with solar activity. The results from this study indicate out-of-phase variations for the 1400 - 1600 nm range, with all other wavelengths having in-phase variations.

10. A Different View of Solar Spectral Irradiance Variations: Modeling Total Energy over Six-Month Intervals.

PubMed

Woods, Thomas N; Snow, Martin; Harder, Jerald; Chapman, Gary; Cookson, Angela

A different approach to studying solar spectral irradiance (SSI) variations, without the need for long-term (multi-year) instrument degradation corrections, is examining the total energy of the irradiance variation during 6-month periods. This duration is selected because a solar active region typically appears suddenly and then takes 5 to 7 months to decay and disperse back into the quiet-Sun network. The solar outburst energy, which is defined as the irradiance integrated over the 6-month period and thus includes the energy from all phases of active region evolution, could be considered the primary cause for the irradiance variations. Because solar cycle variation is the consequence of multiple active region outbursts, understanding the energy spectral variation may provide a reasonable estimate of the variations for the 11-year solar activity cycle. The moderate-term (6-month) variations from the Solar Radiation and Climate Experiment (SORCE) instruments can be decomposed into positive (in-phase with solar cycle) and negative (out-of-phase) contributions by modeling the variations using the San Fernando Observatory (SFO) facular excess and sunspot deficit proxies, respectively. These excess and deficit variations are fit over 6-month intervals every 2 months over the mission, and these fitted variations are then integrated over time for the 6-month energy. The dominant component indicates which wavelengths are in-phase and which are out-of-phase with solar activity. The results from this study indicate out-of-phase variations for the 1400 - 1600 nm range, with all other wavelengths having in-phase variations.
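The proxy-fitting step described in this abstract can be sketched as a simple least-squares decomposition: over a 6-month window, model the irradiance variation as a linear combination of a facular-excess proxy and a sunspot-deficit proxy, then integrate each fitted part to get the in-phase and out-of-phase "outburst energy". The series below are synthetic stand-ins; the SFO proxies and SORCE data themselves are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
days = np.arange(183.0)                                # ~6-month window, daily samples
facular = 1.0 + 0.5 * np.sin(2 * np.pi * days / 27)    # rotation-modulated excess proxy
sunspot = 0.8 + 0.4 * np.cos(2 * np.pi * days / 27)    # sunspot deficit proxy

a_true, b_true = 2.0, -1.5                             # in-phase / out-of-phase weights
ssi_var = a_true * facular + b_true * sunspot + 0.01 * rng.standard_normal(days.size)

# Least-squares fit of the two proxy components to the observed variation
A = np.column_stack([facular, sunspot])
(a_fit, b_fit), *_ = np.linalg.lstsq(A, ssi_var, rcond=None)

# "6-month energy": time integral of each fitted component (daily sum here)
energy_in_phase = (a_fit * facular).sum()
energy_out_of_phase = (b_fit * sunspot).sum()
```

The sign of each integrated component then indicates whether that wavelength band varies in phase or out of phase with solar activity, which is the comparison the paper makes across the SORCE spectrum.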

11. Directed energy transfer in films of CdSe quantum dots: beyond the point dipole approximation.

PubMed

Zheng, Kaibo; Žídek, Karel; Abdellah, Mohamed; Zhu, Nan; Chábera, Pavel; Lenngren, Nils; Chi, Qijin; Pullerits, Tõnu

2014-04-30

Understanding of Förster resonance energy transfer (FRET) in thin films composed of quantum dots (QDs) is of fundamental and technological significance in optimal design of QD based optoelectronic devices. The separation between QDs in the densely packed films is usually smaller than the size of QDs, so that the simple point-dipole approximation, widely used in the conventional approach, can no longer offer quantitative description of the FRET dynamics in such systems. Here, we report the investigations of the FRET dynamics in densely packed films composed of multisized CdSe QDs using ultrafast transient absorption spectroscopy and theoretical modeling. Pairwise interdot transfer time was determined in the range of 1.5 to 2 ns by spectral analyses which enable separation of the FRET contribution from intrinsic exciton decay. A rational model is suggested by taking into account the distribution of the electronic transition densities in the dots and using the film morphology revealed by AFM images. The FRET dynamics predicted by the model are in good quantitative agreement with experimental observations without adjustable parameters. Finally, we use our theoretical model to calculate dynamics of directed energy transfer in ordered multilayer QD films, which we also observe experimentally. The Monte Carlo simulations reveal that three ideal QD monolayers can provide exciton funneling efficiency above 80% from the most distant layer. Thereby, utilization of directed energy transfer can significantly improve light harvesting efficiency of QD devices.

12. Generalized gradient approximation exchange energy functional with correct asymptotic behavior of the corresponding potential

SciTech Connect

Carmona-Espíndola, Javier; Gázquez, José L.; Vela, Alberto; Trickey, S. B.

2015-02-07

A new non-empirical exchange energy functional of the generalized gradient approximation (GGA) type, which gives an exchange potential with the correct asymptotic behavior, is developed and explored. In combination with the Perdew-Burke-Ernzerhof (PBE) correlation energy functional, the new CAP-PBE (CAP stands for correct asymptotic potential) exchange-correlation functional gives heats of formation, ionization potentials, electron affinities, proton affinities, binding energies of weakly interacting systems, barrier heights for hydrogen and non-hydrogen transfer reactions, bond distances, and harmonic frequencies on standard test sets that are fully competitive with those obtained from other GGA-type functionals that do not have the correct asymptotic exchange potential behavior. Distinct from them, the new functional provides important improvements in quantities dependent upon response functions, e.g., static and dynamic polarizabilities and hyperpolarizabilities. CAP combined with the Lee-Yang-Parr correlation functional gives roughly equivalent results. Consideration of the computed dynamical polarizabilities in the context of the broad spectrum of other properties considered tips the balance to the non-empirical CAP-PBE combination. Intriguingly, these improvements arise primarily from improvements in the highest occupied and lowest unoccupied molecular orbitals, and not from shifts in the associated eigenvalues. Those eigenvalues do not change dramatically with respect to eigenvalues from other GGA-type functionals that do not provide the correct asymptotic behavior of the potential. Unexpected behavior of the potential at intermediate distances from the nucleus explains this unexpected result and indicates a clear route for improvement.

13. Two-Body Approximations in the Design of Low-Energy Transfers Between Galilean Moons

Fantino, Elena; Castelli, Roberto

Over the past two decades, the robotic exploration of the Solar System has reached the moons of the giant planets. In the case of Jupiter, strong scientific interest in its icy moons has motivated important space missions (e.g., ESA's JUICE and NASA's Europa Mission). A major issue in this context is the design of efficient trajectories enabling satellite tours, i.e., visiting several moons in succession. Concepts like the Petit Grand Tour and the Multi-Moon Orbiter have been developed for this purpose, and the literature on the subject is quite rich. The models adopted are the two-body problem (with the patched conics approximation and gravity assists) and the three-body problem (giving rise to the so-called low-energy transfers, LETs). In this contribution, we deal with the connection between two moons, Europa and Ganymede, and we investigate a two-body approximation of trajectories originating from the stable/unstable invariant manifolds of the two circular restricted three-body problems, i.e., Jupiter-Ganymede and Jupiter-Europa. We develop ad-hoc algorithms to determine the intersections of the resulting elliptical arcs, and the magnitude of the maneuver at the intersections. We provide a means to perform very fast and accurate evaluations of the minimum-cost trajectories between the two moons. Finally, we validate the methodology by comparison with numerical integrations in the three-body problem.

14. Nuclear energy surfaces at high-spin in the A{approximately}180 mass region

SciTech Connect

Chasman, R.R.; Egido, J.L.; Robledo, L.M.

1995-08-01

We are studying nuclear energy surfaces at high spin, with an emphasis on very deformed shapes using two complementary methods: (1) the Strutinsky method for making surveys of mass regions and (2) Hartree-Fock calculations using a Gogny interaction to study specific nuclei that appear to be particularly interesting from the Strutinsky method calculations. The great advantage of the Strutinsky method is that one can study the energy surfaces of many nuclides ({approximately}300) with a single set of calculations. Although the Hartree-Fock calculations are quite time-consuming relative to the Strutinsky calculations, they determine the shape at a minimum without being limited to a few deformation modes. We completed a study of {sup 182}Os using both approaches. In our cranked Strutinsky calculations, which incorporate a necking mode deformation in addition to quadrupole and hexadecapole deformations, we found three well-separated, deep, strongly deformed minima. The first is characterized by nuclear shapes with axis ratios of 1.5:1; the second by axis ratios of 2.2:1 and the third by axis ratios of 2.9:1. We also studied this nuclide with the density-dependent Gogny interaction at I = 60 using the Hartree-Fock method and found minima characterized by shapes with axis ratios of 1.5:1 and 2.2:1. A comparison of the shapes at these minima, generated in the two calculations, shows that the necking mode of deformation is extremely useful for generating nuclear shapes at large deformation that minimize the energy. The Hartree-Fock calculations are being extended to larger deformations in order to further explore the energy surface in the region of the 2.9:1 minimum.

15. Discrete Dipole Approximation for Low-Energy Photoelectron Emission from NaCl Nanoparticles

SciTech Connect

Berg, Matthew J.; Wilson, Kevin R.; Sorensen, Chris; Chakrabarti, Amit; Ahmed, Musahid

2011-09-22

This work presents a model for the photoemission of electrons from sodium chloride nanoparticles 50-500 nm in size, illuminated by vacuum ultraviolet light with energy ranging from 9.4 to 10.9 eV. The discrete dipole approximation is used to calculate the electromagnetic field inside the particles, from which the two-dimensional angular distribution of emitted electrons is simulated. The emission is found to favor the particle's geometrically illuminated side, and this asymmetry is compared to previous measurements performed at the Lawrence Berkeley National Laboratory. By modeling the nanoparticles as spheres, the Berkeley group is able to semi-quantitatively account for the observed asymmetry. Here, however, the particles are modeled as cubes, which is closer to their actual shape, and the interaction of an emitted electron with the particle surface is also considered. The end result shows that the emission asymmetry for these low-energy electrons is more sensitive to the particle-surface interaction than to the specific particle shape, i.e., a sphere or cube.

16. Validity of the local self-energy approximation: Application to coupled quantum impurities

Mitchell, Andrew K.; Bulla, Ralf

2015-10-01

We examine the quality of the local self-energy approximation, applied here to models of multiple quantum impurities coupled to an electronic bath. The local self-energy is obtained by solving a single-impurity Anderson model in an effective medium that is determined self-consistently, similar to the dynamical mean-field theory (DMFT) for correlated lattice systems. By comparing to exact results obtained by using the numerical renormalization group, we determine situations where "impurity-DMFT" is able to capture the physics of highly inhomogeneous systems and those cases where it fails. For two magnetic impurities separated in real space, the onset of the dilute limit is captured, but RKKY-dominated interimpurity singlet formation cannot be described. For parallel quantum dot devices, impurity-DMFT succeeds in capturing the underscreened Kondo physics by self-consistent generation of a critical pseudogapped effective medium. However, the quantum phase transition between high- and low-spin states upon tuning interdot coupling cannot be described.

17. Free energy of contact formation in proteins: Efficient computation in the elastic network approximation

Hamacher, Kay

2011-07-01

Biomolecular simulations have become a major tool in understanding biomolecules and their complexes. However, one can typically only investigate a few mutants or scenarios due to the severe computational demands of such simulations, leading to a great interest in method development to overcome this restriction. One way to achieve this is to reduce the complexity of the systems by an approximation of the forces acting upon the constituents of the molecule. The harmonic approximation used in elastic network models simplifies the physical complexity to the most reduced dynamics of these molecular systems. The reduced polymer modeled this way typically consists of mass points representing coarse-grained versions of, e.g., amino acids. In this work, we show how the computation of free energy contributions of contacts between two residues within the molecule can be reduced to a simple lookup operation in a precomputable matrix. Being able to compute such contributions is of great importance: protein design or molecular evolution changes introduce perturbations to these pair interactions, so we need to understand their impact. Perturbations to the interactions occur as random changes that become fixed (in molecular evolution) or as designed modifications of the protein structures (in bioengineering). These perturbations are modifications in the topology and the strength of the interactions modeled by the elastic network models. We apply the new algorithm to (1) the bovine trypsin inhibitor, a well-known protein in biomedicine, and show the connection to folding properties and the hydrophobic collapse hypothesis and (2) the serine proteinase inhibitor CI-2 and show the correlation to Φ values to characterize folding importance. Furthermore, we discuss the computational complexity and show empirical results for the average case, sampled over a library of 77 structurally diverse proteins. We found a relative speedup of up to 10 000-fold for large proteins with respect to
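The precomputation the paper exploits rests on standard elastic-network machinery: a connectivity (Kirchhoff) matrix whose pseudo-inverse encodes equilibrium fluctuations, so that the effect of perturbing a contact reduces to matrix algebra done once per topology. A toy Gaussian-network sketch of that machinery (synthetic helix coordinates, generic ENM algebra, not the authors' algorithm):

```python
import numpy as np

def kirchhoff(coords, cutoff=7.0):
    """Kirchhoff (connectivity) matrix for residues within a distance cutoff."""
    n = len(coords)
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    contact = (d < cutoff) & ~np.eye(n, dtype=bool)   # contact map, no self-contacts
    K = -contact.astype(float)
    np.fill_diagonal(K, contact.sum(axis=1))          # diagonal = residue degree
    return K

# Toy C-alpha trace: a helix-like curve standing in for a real protein
t = np.linspace(0, 4 * np.pi, 30)
coords = np.column_stack([3 * np.cos(t), 3 * np.sin(t), 1.5 * t])

K = kirchhoff(coords)
K_pinv = np.linalg.pinv(K)        # precomputable once per network topology
msf = np.diag(K_pinv)             # mean-square fluctuations, up to a kT/gamma factor
```

Once `K_pinv` is stored, fluctuation and (in the paper's formulation) contact free-energy quantities become lookups rather than fresh diagonalizations, which is where the reported speedup originates.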

18. Minimax approximation for the decomposition of energy denominators in Laplace-transformed Møller-Plesset perturbation theories

Takatsuka, Akio; Ten-No, Seiichiro; Hackbusch, Wolfgang

2008-07-01

We implement the minimax approximation for the decomposition of energy denominators in Laplace-transformed Møller-Plesset perturbation theories. The best approximation is defined by minimizing the Chebyshev norm of the quadrature error. The application to the Laplace-transformed second order perturbation theory clearly shows that the present method is much more accurate than other numerical quadratures. It is also shown that the error in the energy decays almost exponentially with respect to the number of quadrature points.
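The underlying identity is the Laplace transform of the energy denominator, 1/x = ∫₀^∞ e^(−xt) dt ≈ Σᵢ wᵢ e^(−x tᵢ). The sketch below demonstrates the decomposition with generic Gauss-Laguerre nodes rather than the paper's minimax (Chebyshev-norm) weights; the paper's point is precisely that the minimax fit beats such generic quadratures for a given number of points.

```python
import numpy as np

# Nodes/weights for integrals of the form  ∫_0^∞ e^{-t} f(t) dt  (Gauss-Laguerre),
# used here as a stand-in for the minimax exponential fit of 1/x.
t, w = np.polynomial.laguerre.laggauss(20)

def inv_x_quadrature(x):
    # 1/x = ∫_0^∞ e^{-x t} dt = ∫_0^∞ e^{-t} e^{-(x-1) t} dt ≈ Σ_i w_i e^{-(x-1) t_i}
    return float(np.sum(w * np.exp(-(x - 1.0) * t)))
```

In the Laplace-transformed MP2 energy, x is the positive orbital-energy denominator ε_a + ε_b − ε_i − ε_j, so the double sum over denominators factorizes into a short sum over quadrature points.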

19. Approximate Time to Steady-state Resting Energy Expenditure Using Indirect Calorimetry in Young, Healthy Adults

PubMed Central

Popp, Collin J.; Tisch, Jocelyn J.; Sakarcan, Kenan E.; Bridges, William C.; Jesch, Elliot D.

2016-01-01

Indirect calorimetry (IC) measurements to estimate resting energy expenditure (REE) necessitate a stable measurement period or steady state (SS). There is limited evidence regarding the time needed to reach SS in young, healthy adults. The aims of this prospective study were to determine the approximate time necessary to reach SS using open-circuit IC and to establish the appropriate duration of SS needed to estimate REE. One hundred young, healthy participants (54 males and 46 females; age = 20.6 ± 2.1 years; body weight = 73.6 ± 16.3 kg; height 172.5 ± 9.3 cm; BMI = 24.5 ± 3.8 kg/m2) completed IC measurement for approximately 30 min while the volume of oxygen (VO2) and volume of carbon dioxide (VCO2) were collected. SS was defined by variations in the VO2 and VCO2 of ≤10% coefficient of variation (%CV) over a period of five consecutive minutes. The 30-min IC measurement was divided into six 5-min segments (S1, S2, S3, S4, S5, and S6). The results show that SS was achieved during S2 (%CV = 6.81 ± 3.2%), and the %CV continued to meet the SS criteria for the duration of the IC measurement (S3 = 8.07 ± 4.4%, S4 = 7.93 ± 3.7%, S5 = 7.75 ± 4.1%, and S6 = 8.60 ± 4.6%). The current study found that in a population of young, healthy adults the duration of the IC measurement period could be a minimum of 10 min. The first 5-min segment was discarded, while SS occurred by the second 5-min segment. PMID:27857943
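The steady-state criterion above is easy to state in code: split the trace into 5-min segments and flag a segment as steady when the coefficient of variation is at or below 10%. The VO2 trace below is synthetic minute-by-minute data, not the study's measurements.

```python
import numpy as np

def percent_cv(x):
    """Coefficient of variation in percent (sample standard deviation)."""
    return 100.0 * np.std(x, ddof=1) / np.mean(x)

def steady_state_segments(vo2, seg_len=5, threshold=10.0):
    """Flag each consecutive seg_len-minute segment as steady (True) or not."""
    segments = np.asarray(vo2).reshape(-1, seg_len)
    return [percent_cv(seg) <= threshold for seg in segments]

rng = np.random.default_rng(1)
settle = np.linspace(400, 260, 5)              # unsettled first 5 minutes (mL/min)
stable = 250 + 5 * rng.standard_normal(25)     # stable remaining 25 minutes
flags = steady_state_segments(np.concatenate([settle, stable]))
```

Applied to the synthetic trace, the first segment fails the criterion and all later segments pass, mirroring the paper's finding that SS is reached by the second 5-min segment.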

20. Comparison of the two-state approximation and multistate treatments for vibration--vibration energy exchange in molecular collisions

SciTech Connect

Shin, H.K.

1981-09-01

We have solved the time-dependent Schrödinger equation based on a semiclassical collision to examine deviations of the two-state approximation from multistate treatments for vibration-vibration energy exchange processes. For the specific case of the (4,1) → (5,0) transition in D₂ + D₂, the two-state calculation of energy exchange probabilities fails at high collision energies (E ≳ 2ℏω, where ω is the frequency of the oscillator). A thermal average shows that the approximation leads to large deviations above 2000 K.

1. Analytical and numerical assessment of the accuracy of the approximated nuclear symmetry energy in the Hartree-Fock theory

2017-07-01

The nuclear symmetry energy is defined as the second derivative of the energy per nucleon with respect to the proton-neutron asymmetry, and is sometimes approximated by the energy difference between neutron matter and symmetric matter. The accuracy of this approximation is assessed analytically and numerically within the Hartree-Fock theory using effective interactions. By decomposing the nuclear-matter energy, the relative error of each term is expressed analytically; it is either a constant or a single-variable function determined by the functional form of the term. The full errors are evaluated for several effective interactions by inserting values for the parameters. Although the errors stay within 10% up to twice the normal density irrespective of the interactions, at higher densities the accuracy of the approximation depends significantly on the interactions.
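In symbols, with asymmetry δ = (ρn − ρp)/ρ and the conventional factor of 1/2 in the curvature definition, the exact definition and the approximation assessed above read:

```latex
% Exact definition (curvature of E/A at delta = 0):
S(\rho) = \frac{1}{2}\left.\frac{\partial^2 (E/A)(\rho,\delta)}{\partial \delta^2}\right|_{\delta=0}
% Approximation assessed in the paper (neutron matter minus symmetric matter):
S(\rho) \approx \frac{E}{A}(\rho,\delta=1) - \frac{E}{A}(\rho,\delta=0)
% The two coincide when E/A is quadratic in delta; the error comes from the
% quartic and higher terms of the expansion
% (E/A)(\rho,\delta) = (E/A)(\rho,0) + S_2(\rho)\,\delta^2 + S_4(\rho)\,\delta^4 + \dots
```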

2. Excitation energies from particle-particle random phase approximation: Davidson algorithm and benchmark studies.

PubMed

Yang, Yang; Peng, Degao; Lu, Jianfeng; Yang, Weitao

2014-09-28

The particle-particle random phase approximation (pp-RPA) has been used to investigate excitation problems in our recent paper [Y. Yang, H. van Aggelen, and W. Yang, J. Chem. Phys. 139, 224105 (2013)]. It has been shown to be capable of describing double, Rydberg, and charge transfer excitations, which are challenging for conventional time-dependent density functional theory (TDDFT). However, its performance on larger molecules is unknown as a result of its expensive O(N⁶) scaling. In this article, we derive and implement a Davidson iterative algorithm for the pp-RPA to calculate the lowest few excitations for large systems. The formal scaling is reduced to O(N⁴), which is comparable with the commonly used configuration interaction singles (CIS) and TDDFT methods. With this iterative algorithm, we carried out benchmark tests on molecules that are significantly larger than the molecules in our previous paper with a reasonably large basis set. Despite some self-consistent field convergence problems with ground state calculations of (N - 2)-electron systems, we are able to accurately capture the lowest few excitations for systems with converged calculations. Compared to CIS and TDDFT, there is no systematic bias for the pp-RPA with the mean signed error close to zero. The mean absolute error of pp-RPA with B3LYP or PBE references is similar to that of TDDFT, which suggests that the pp-RPA is a comparable method to TDDFT for large molecules. Moreover, excitations with relatively large non-HOMO excitation contributions are also well described in terms of excitation energies, as long as there is also a relatively large HOMO excitation contribution. These findings, in conjunction with the capability of pp-RPA for describing challenging excitations shown earlier, further demonstrate the potential of pp-RPA as a reliable and general method to describe excitations, and to be a good alternative to TDDFT methods.
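For reference, the Davidson iteration itself is compact. Below is a textbook sketch for the lowest eigenvalue of a generic symmetric, diagonally dominant matrix standing in for the pp-RPA matrix; it is not the authors' implementation, and it omits the refinements (restarts, multiple roots) a production code needs.

```python
import numpy as np

def davidson_lowest(A, tol=1e-8, max_iter=60):
    """Lowest eigenpair of a symmetric matrix A via Davidson iteration."""
    n = A.shape[0]
    diag = np.diag(A)
    V = np.zeros((n, 0))                    # search subspace
    b = np.eye(n, 1)                        # unit-vector starting guess
    for _ in range(max_iter):
        # Orthonormalize the new direction against the current subspace
        b = b - V @ (V.T @ b)
        nb = np.linalg.norm(b)
        if nb < 1e-10:                      # subspace exhausted
            break
        V = np.column_stack([V, b / nb])
        # Rayleigh-Ritz: diagonalize the small projected matrix
        theta, s = np.linalg.eigh(V.T @ A @ V)
        theta0, x = theta[0], V @ s[:, 0]
        r = A @ x - theta0 * x              # residual of the Ritz pair
        if np.linalg.norm(r) < tol:
            break
        # Diagonal (Davidson) preconditioner for the correction vector
        denom = theta0 - diag
        denom[np.abs(denom) < 1e-6] = 1e-6  # guard against tiny denominators
        b = (r / denom).reshape(n, 1)
    return theta0, x

rng = np.random.default_rng(2)
M = rng.standard_normal((50, 50))
A = (M + M.T) / 2 + np.diag(np.arange(50.0))   # symmetric, well-spread diagonal
lam, vec = davidson_lowest(A)
```

Because only matrix-vector products with A are needed, the full matrix never has to be diagonalized, which is how the O(N⁶) cost drops to O(N⁴) in the pp-RPA setting.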

3. Total daily energy expenditure is increased following a single bout of sprint interval training.

PubMed

Sevits, Kyle J; Melanson, Edward L; Swibas, Tracy; Binns, Scott E; Klochak, Anna L; Lonac, Mark C; Peltonen, Garrett L; Scalzo, Rebecca L; Schweder, Melani M; Smith, Amy M; Wood, Lacey M; Melby, Christopher L; Bell, Christopher

2013-10-01

Regular endurance exercise is an effective strategy for healthy weight maintenance, mediated via increased total daily energy expenditure (TDEE), and possibly an increase in resting metabolic rate (RMR: the single largest component of TDEE). Sprint interval training (SIT) is a low-volume alternative to endurance exercise; however, the utility of SIT for healthy weight maintenance is less clear. In this regard, it is feasible that SIT may evoke a thermogenic response above and beyond the estimates required for prevention of weight gain (i.e., >200-600 kJ). The purpose of these studies was to investigate the hypotheses that a single bout of SIT would increase RMR and/or TDEE. Study 1: RMR (ventilated hood) was determined on four separate occasions in 15 healthy men. Measurements were performed over two pairs of consecutive mornings; each pair was separated by 7 days. Immediately following either the first or third RMR measurement (randomly assigned) subjects completed a single bout of SIT (cycle ergometer exercise). RMR was unaffected by a single bout of SIT (7195 ± 285 kJ/day vs. 7147 ± 222, 7149 ± 246 and 6987 ± 245 kJ/day (mean ± SE); P = 0.12). Study 2: TDEE (whole-room calorimeter) was measured in 12 healthy men, on two consecutive days, one of which began with a single bout of SIT (random order). Sprint exercise increased TDEE in every research participant (9169 ± 243 vs. 10,111 ± 260 kJ/day; P < 0.0001); the magnitude of increase was 946 ± 62 kJ/day (∼10%). These data provide support for SIT as a strategy for increasing TDEE, and may have implications for healthy body weight maintenance.

4. Development of generalized potential-energy surfaces using many-body expansions, neural networks, and moiety energy approximations

Malshe, M.; Narulkar, R.; Raff, L. M.; Hagan, M.; Bukkapatnam, S.; Agrawal, P. M.; Komanduri, R.

2009-05-01

A general method for the development of potential-energy hypersurfaces is presented. The method combines a many-body expansion to represent the potential-energy surface with two-layer neural networks (NN) for each M-body term in the summations. The total number of NNs required is significantly reduced by employing a moiety energy approximation. An algorithm is presented that efficiently adjusts all the coupled NN parameters to the database for the surface. Application of the method to four different systems of increasing complexity shows that the fitting accuracy of the method is good to excellent. For some cases, it exceeds that available by other methods currently in literature. The method is illustrated by fitting large databases of ab initio energies for Siₙ (n = 3, 4, …, 7) clusters obtained from density functional theory calculations and for vinyl bromide (C2H3Br) and all products for dissociation into six open reaction channels (12 if the reverse reactions are counted as separate open channels) that include C-H and C-Br bond scissions, three-center HBr dissociation, and three-center H2 dissociation. The vinyl bromide database comprises the ab initio energies of 71 969 configurations computed at the MP4(SDQ) level with a 6-31G(d,p) basis set for the carbon and hydrogen atoms and Huzinaga's (4333/433/4) basis set augmented with split outer s and p orbitals (43321/4321/4) and a polarization f orbital with an exponent of 0.5 for the bromine atom. It is found that an expansion truncated after the three-body terms is sufficient to fit the Si₅ system with a mean absolute testing set error of 5.693×10⁻⁴ eV. Expansions truncated after the four-body terms for Siₙ (n = 3, 4, 5) and Siₙ (n = 3, 4, …, 7) provide fits whose mean absolute testing set errors are 0.0056 and 0.0212 eV, respectively. For vinyl bromide, a many-body expansion truncated after the four-body terms provides fitting accuracy with mean absolute testing set errors that range between 0.0782 and 0.0808 eV. These

5. Rapid approximate calculation of water binding free energies in the whole hydration domain of (bio)macromolecules.

PubMed

Reif, Maria M; Zacharias, Martin

2016-07-05

The evaluation of water binding free energies around solute molecules is important for the thermodynamic characterization of hydration or association processes. Here, a rapid approximate method to estimate water binding free energies around (bio)macromolecules from a single molecular dynamics simulation is presented. The basic idea is that endpoint free-energy calculation methods are applied and the endpoint quantities are monitored on a three-dimensional grid around the solute. Thus, a gridded map of water binding free energies around the solute is obtained, that is, from a single short simulation, a map of favorable and unfavorable water binding sites can be constructed. Among the employed free-energy calculation methods, approaches involving endpoint information pertaining to actual thermodynamic integration calculations or endpoint information as exploited in the linear interaction energy method were examined. The accuracy of the approximate approaches was evaluated on the hydration of a cage-like molecule representing either a nonpolar, polar, or charged water binding site and on α- and β-cyclodextrin molecules. Among the tested approaches, the linear interaction energy method is considered the most viable approach. Applying the linear interaction energy method on the grid around the solute, a semi-quantitative thermodynamic characterization of hydration around the whole solute is obtained. Disadvantages are the approximate nature of the method and a limited flexibility of the solute. © 2016 Wiley Periodicals, Inc.

6. Vibrational coherence and energy transfer in two-dimensional spectra with the optimized mean-trajectory approximation

PubMed Central

Alemi, Mallory; Loring, Roger F.

2015-01-01

The optimized mean-trajectory (OMT) approximation is a semiclassical method for computing vibrational response functions from action-quantized classical trajectories connected by discrete transitions that represent radiation-matter interactions. Here, we extend the OMT to include additional vibrational coherence and energy transfer processes. This generalized approximation is applied to a pair of anharmonic chromophores coupled to a bath. The resulting 2D spectra are shown to reflect coherence transfer between normal modes. PMID:26049437

7. Vibrational coherence and energy transfer in two-dimensional spectra with the optimized mean-trajectory approximation

Alemi, Mallory; Loring, Roger F.

2015-06-01

The optimized mean-trajectory (OMT) approximation is a semiclassical method for computing vibrational response functions from action-quantized classical trajectories connected by discrete transitions that represent radiation-matter interactions. Here, we extend the OMT to include additional vibrational coherence and energy transfer processes. This generalized approximation is applied to a pair of anharmonic chromophores coupled to a bath. The resulting 2D spectra are shown to reflect coherence transfer between normal modes.

8. Comparison of exact and approximate formulas for the Mott correction to energy loss of relativistic heavy ions

NASA Technical Reports Server (NTRS)

Eby, P. B.; Sung, C. C.

1986-01-01

A comparison is conducted between an exact numerical calculation and two approximate analytic expressions for the Mott correction to energy loss of heavy ions. A complete tabulation of the Mott correction is presented together with a comparison of some of the approximate expressions for the Mott correction due to Ahlen (1978) and Morgan and Eby (1973). Comparison is made with results of an experimental calibration of the HEAO-3 Heavy Cosmic Ray experimental chambers at the Bevalac.

9. Vibrational coherence and energy transfer in two-dimensional spectra with the optimized mean-trajectory approximation

SciTech Connect

Alemi, Mallory; Loring, Roger F.

2015-06-07

The optimized mean-trajectory (OMT) approximation is a semiclassical method for computing vibrational response functions from action-quantized classical trajectories connected by discrete transitions that represent radiation-matter interactions. Here, we extend the OMT to include additional vibrational coherence and energy transfer processes. This generalized approximation is applied to a pair of anharmonic chromophores coupled to a bath. The resulting 2D spectra are shown to reflect coherence transfer between normal modes.

10. Effect of initial phase on error in electron energy obtained using paraxial approximation for a focused laser pulse in vacuum

SciTech Connect

Singh, Kunwar Pal; Arya, Rashmi; Malik, Anil K.

2015-09-14

We have investigated the effect of the initial phase on the error in electron energy obtained using the paraxial approximation, for electron acceleration by a focused laser pulse in vacuum, using a three-dimensional test-particle simulation code. The error is obtained by comparing the electron energy computed with the paraxial approximation against that from the seventh-order corrected description of the fields of a Gaussian laser. The paraxial approximation predicts the wrong laser divergence and the wrong electron escape time from the pulse, which leads to the prediction of higher energy. The error shows a strong phase dependence for electrons lying along the axis of a linearly polarized laser pulse. The relative error may be significant for some specific values of the initial phase even at moderate laser spot sizes. The error shows no initial-phase dependence for a circularly polarized laser pulse.

11. Casimir bag energy in the stochastic approximation to the pure QCD vacuum

SciTech Connect

Fosco, C. D.; Oxman, L. E.

2007-01-15

We study the Casimir contribution to the bag energy coming from gluon field fluctuations, within the context of the stochastic vacuum model of pure QCD. After formulating the problem in terms of the generating functional of field strength cumulants, we argue that the resulting predictions about the Casimir energy are compatible with the phenomenologically required bag energy term.

12. Excitation energies and potential energy curves for the 19 excited electronic terms of CH: Efficiency examination of the multireference first-order polarization propagator approximation

Seleznev, Alexey O.; Khrustov, Vladimir F.; Stepanov, Nikolay F.

2013-11-01

The attainability of a uniform precision level for estimates of electronic transition characteristics with the multireference first-order polarization propagator approximation (MR-FOPPA) was examined under extension of the basis set, using CH as an example. Transitions from the ground electronic state to the 19 excited electronic terms were considered. Balanced approximations for (i) the transition energies to the studied excited states and (ii) the forms and relative dispositions of their potential energy curves were attained in the 3-21G and 6-311G(d,p) basis sets. In neither basis set was a balanced approximation achieved for the corresponding transition moments.

13. Testing the nonlocal kinetic energy functional of an inhomogeneous, two-dimensional degenerate Fermi gas within the average density approximation

Towers, J.; van Zyl, B. P.; Kirkby, W.

2015-08-01

In a recent paper [B. P. van Zyl et al., Phys. Rev. A 89, 022503 (2014), 10.1103/PhysRevA.89.022503], the average density approximation (ADA) was implemented to develop a parameter-free, nonlocal kinetic energy functional to be used in the orbital-free density functional theory of an inhomogeneous, two-dimensional (2D) Fermi gas. In this work, we provide a detailed comparison of self-consistent calculations within the ADA with the exact results of the Kohn-Sham density functional theory and the elementary Thomas-Fermi (TF) approximation. We demonstrate that the ADA for the 2D kinetic energy functional works very well under a wide variety of confinement potentials, even for relatively small particle numbers. Remarkably, the TF approximation for the kinetic energy functional, without any gradient corrections, also yields good agreement with the exact kinetic energy for all confining potentials considered, although at the expense of the spatial and kinetic energy densities exhibiting poor pointwise agreement, particularly near the TF radius. Our findings illustrate that the ADA kinetic energy functional yields accurate results for both the local and global equilibrium properties of an inhomogeneous 2D Fermi gas, without the need for any fitting parameters.

14. Inducing energy gaps in monolayer and bilayer graphene: Local density approximation calculations

Ribeiro, R. M.; Peres, N. M. R.; Coutinho, J.; Briddon, P. R.

2008-08-01

In this paper we study the formation of energy gaps in the spectrum of graphene and its bilayer when both these materials are covered with water and ammonia molecules. The energy gaps obtained are within the range 20-30 meV, values compatible with those found in experimental studies of the graphene bilayer. We further show that the binding energies are large enough for the adsorption of the molecules to be maintained even at room temperature.

15. Interpolated energy densities, correlation indicators and lower bounds from approximations to the strong coupling limit of DFT

Vuckovic, Stefan; Irons, Tom J. P.; Wagner, Lucas O.; Teale, Andrew M.; Gori-Giorgi, Paola

We investigate the construction of approximate exchange-correlation functionals obtained by interpolating locally along the adiabatic connection between the weak- and strong-coupling regimes, focusing on the effect of using approximate functionals for the strong-coupling energy densities. The gauge problem is avoided by dealing with quantities that are all locally defined in the same way. Using exact ingredients at weak coupling, we are able to isolate the error coming from the approximations at strong coupling only. We find that the nonlocal radius model, which retains some of the non-locality of the exact strong-coupling regime, yields very satisfactory results. We also use interpolation models and quantities from the weak- and strong-coupling regimes to define a correlation-type indicator and a lower bound to the exact exchange-correlation energy. Open problems, related to the nature of the local and global slope of the adiabatic connection at weak coupling, are also discussed.

16. Approximate Noether symmetries of the geodesic equations for the charged-Kerr spacetime and rescaling of energy

Hussain, Ibrar; Mahomed, F. M.; Qadir, Asghar

2009-10-01

Using approximate symmetry methods for differential equations, we have investigated the exact and approximate symmetries of a Lagrangian for the geodesic equations in the Kerr spacetime. Taking Minkowski spacetime as the exact case, it is shown that the symmetry algebra of the Lagrangian is 17 dimensional. This algebra is related to the 15 dimensional Lie algebra of conformal isometries of Minkowski spacetime. First, introducing the spin angular momentum per unit mass as a small parameter, we consider first-order approximate symmetries of the Kerr metric as a first perturbation of the Schwarzschild metric. We then consider second-order approximate symmetries of the Kerr metric as a second perturbation of the Minkowski metric. The approximate symmetries are recovered for these spacetimes and there are no nontrivial approximate symmetries. A rescaling of the arc length parameter for consistency of the trivial second-order approximate symmetries of the geodesic equations indicates that the energy in the charged-Kerr metric has to be rescaled, with an r-dependent rescaling factor. This rescaling factor is compared with that for the Reissner-Nordström metric.

17. Induced P-wave superfluidity within the full energy- and momentum-dependent Eliashberg approximation in asymmetric dilute Fermi gases

SciTech Connect

Bulgac, Aurel; Yoon, Sukjin

2009-05-15

We consider a very asymmetric system of fermions with an interaction characterized by a positive scattering length only. The minority atoms pair and form a Bose-Einstein condensate of dimers, while the surplus fermions interact only indirectly, through the exchange of Bogoliubov sound modes. This interaction has a finite range, its retardation effects are significant, and the surplus fermions form a P-wave superfluid. We compute the P-wave pairing gap in the BCS approximation and in an Eliashberg approximation retaining only the energy dependence, and demonstrate their inadequacy in comparison with a full treatment of the momentum and energy dependence of the induced interaction. The pairing gap computed with the full momentum and energy dependence is significantly larger in magnitude, which makes it more likely that this new exotic paired phase can be put in evidence in atomic-trap experiments.

18. High-intensity interval exercise induces 24-h energy expenditure similar to traditional endurance exercise despite reduced time commitment.

PubMed

Skelly, Lauren E; Andrews, Patricia C; Gillen, Jenna B; Martin, Brian J; Percival, Michael E; Gibala, Martin J

2014-07-01

Subjects performed high-intensity interval training (HIIT) and continuous moderate-intensity training (END) to evaluate 24-h oxygen consumption. Oxygen consumption during HIIT was lower versus END; however, total oxygen consumption over 24 h was similar. These data demonstrate that HIIT and END induce similar 24-h energy expenditure, which may explain the comparable changes in body composition reported despite lower total training volume and time commitment.

19. Validity of central field approximations in molecular scattering - Low energy CO-He collisions

NASA Technical Reports Server (NTRS)

Monchick, L.; Green, S.

1975-01-01

Close-coupled calculations have been carried out on collisions of helium and carbon monoxide interacting via a theoretical interaction potential which is believed to reproduce accurately the true interaction of this system. These are compared with an equivalent set of calculations for the spherical average of this potential. It is concluded that the latter approximation holds reasonably well for transport-property calculations but not for differential and total scattering cross sections. As a consequence, conservation of scattering-cross-section theorems that are based on this interaction potential do not hold well.

20. Numerical approximations for the molecular beam epitaxial growth model based on the invariant energy quadratization method

Yang, Xiaofeng; Zhao, Jia; Wang, Qi

2017-03-01

The molecular beam epitaxial (MBE) model is derived from the variation of a free energy that consists of either a fourth-order Ginzburg-Landau double-well potential or a nonlinear logarithmic potential in terms of the gradient of a height function. One challenge in solving the MBE model numerically is developing a temporal discretization of the nonlinear terms that preserves energy stability at the time-discrete level. In this paper, we resolve this issue by developing first- and second-order time-stepping schemes based on the "Invariant Energy Quadratization" (IEQ) method. The novelty is that all nonlinear terms are treated semi-explicitly, so the resulting semi-discrete equations form a linear system at each time step. Moreover, the linear operator is symmetric positive definite and can thus be solved efficiently. We then prove that all proposed schemes are unconditionally energy stable. The semi-discrete schemes are further discretized in space using finite difference methods and implemented on GPUs for high-performance computing. Various 2D and 3D numerical examples are presented to demonstrate the stability and accuracy of the proposed schemes.
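The IEQ mechanism can be illustrated on a simpler one-dimensional gradient flow (an Allen-Cahn-type equation standing in for the MBE model; grid size, time step, and the shift constant B below are illustrative). The auxiliary variable q = sqrt(F(u) + B) makes the free energy quadratic in q, the coefficient g = F'(u)/sqrt(F(u) + B) is frozen at the previous time level, and each step reduces to a single symmetric positive definite linear solve:

```python
import numpy as np

N, L_dom = 64, 2 * np.pi
h = L_dom / N
x = h * np.arange(N)
dt, B = 0.1, 1.0            # B keeps F(u) + B > 0 under the square root

F  = lambda u: 0.25 * (u ** 2 - 1.0) ** 2     # double-well bulk potential
dF = lambda u: u ** 3 - u

# Periodic finite-difference matrix for the negative Laplacian.
I = np.eye(N)
K = (2.0 * I - np.roll(I, 1, axis=1) - np.roll(I, -1, axis=1)) / h ** 2

u = 0.1 * np.cos(x)
q = np.sqrt(F(u) + B)       # auxiliary variable of the quadratization

def modified_energy(u, q):
    # Discrete quadratized energy: (h/2) u^T K u + h * sum(q^2 - B)
    return h * (0.5 * u @ (K @ u) + np.sum(q ** 2 - B))

energies = [modified_energy(u, q)]
for _ in range(50):
    g = dF(u) / np.sqrt(F(u) + B)                 # frozen at time level n
    M = I + dt * K + 0.5 * dt * np.diag(g ** 2)   # symmetric positive definite
    rhs = u + 0.5 * dt * g ** 2 * u - dt * g * q
    u_new = np.linalg.solve(M, rhs)               # one linear solve per step
    q = q + 0.5 * g * (u_new - u)                 # explicit update of q
    u = u_new
    energies.append(modified_energy(u, q))
```

The monitored quantity is the modified (quadratized) energy; its monotone decay here is unconditional in the time step, mirroring the stability result described in the abstract.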

1. Folding funnels and energy landscapes of larger proteins within the capillarity approximation

PubMed Central

Wolynes, Peter G.

1997-01-01

The characterization of protein-folding kinetics with increasing chain length under various thermodynamic conditions is addressed using the capillarity picture in which distinct spatial regions of the protein are imagined to be folded or trapped and separated by interfaces. The quantitative capillarity theory is based on the nucleation theory of first-order transitions and the droplet analysis of glasses and random magnets. The concepts of folding funnels and rugged energy landscapes are shown to be applicable in the large size limit just as for smaller proteins. An ideal asymptotic free-energy profile as a function of a reaction coordinate measuring progress down the funnel is shown to be quite broad. This renders traditional transition state theory generally inapplicable but allows a diffusive picture with a transition-state region to be used. The analysis unifies several scaling arguments proposed earlier. The importance of fluctuational fine structure both to the free-energy profile and to the glassy dynamics is highlighted. The fluctuation effects lead to a very broad trapping-time distribution. Considerations necessary for understanding the crossover between the mean field and capillarity pictures of the energy landscapes are discussed. A variety of mechanisms that may roughen the interfaces and may lead to a complex structure of the transition-state ensemble are proposed. PMID:9177189

2. Two-dimensional energy transfer in radiatively participating media with conduction by the P-N approximation

Ratzel, A. C., III; Howell, J. R.

The combined steady state conduction and radiation heat transfer problem for a gray medium within a rectangular enclosure is considered using the differential approximation. The P-1 and P-3 spherical harmonics approximations for the intensity distribution are incorporated in the equation of transfer, and modified Marshak-type boundary conditions are used in the formulation. The enclosure walls are assumed to be isothermal black surfaces. Two and five coupled second-order nonlinear partial differential equations are developed for the P-1 and P-3 approximations, respectively, using the energy conservation and moment of intensity expressions. The P-1 equations have been numerically solved using finite element techniques, while the P-3 relations have been solved using a finite difference successive-over-relaxation (SOR) algorithm. Two-dimensional temperature profiles and hot wall heat transfer results are presented for square enclosures and different optical width rectangular enclosures for a range of conduction-radiation parameters.

3. The Determination of the Spectrum Energy on the model of DNA-protein interactions using WKB approximation method

Syahroni, Edy; Suparmi, A.; Cari, C.

2017-01-01

The equation for the energy spectrum of the Killingbeck potential in a model of DNA-protein interactions was obtained using the WKB approximation method. The Killingbeck potential was substituted into the general WKB equation to determine the energy. The general equation requires the values of the classical turning points. For the general form of the Killingbeck potential, the turning-point condition becomes a cubic equation, and only its real roots are physically admissible. This requirement is satisfied when the discriminant D is less than or equal to zero: if D = 0, the cubic yields two distinct turning-point values, and if D < 0 it yields three. Both cases are treated here to complete the general equation for the energy.
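The root-counting rule above matches the depressed-cubic convention t³ + p·t + q = 0 with D = (q/2)² + (p/3)³, for which D > 0 gives one real root, D = 0 gives repeated real roots (two distinct values), and D < 0 gives three distinct real roots. A small sketch under that assumed convention (the actual Killingbeck turning-point coefficients are not given in the abstract):

```python
import numpy as np

def real_turning_points(p, q, tol=1e-6):
    """Real roots of the depressed cubic t^3 + p*t + q = 0, together with
    the discriminant D = (q/2)^2 + (p/3)^3 in the convention used above:
    D > 0 -> one real root, D = 0 -> repeated real roots (two distinct
    values), D < 0 -> three distinct real roots."""
    D = (q / 2.0) ** 2 + (p / 3.0) ** 3
    roots = np.roots([1.0, 0.0, p, q])           # companion-matrix roots
    real = np.sort(roots[np.abs(roots.imag) < tol].real)
    return D, real
```

For example, t³ − 3t = 0 has D = −1 and the three turning points −√3, 0, √3, while t³ − 8 = 0 has D = 16 and the single real root t = 2.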

4. Communication: Two-component ring-coupled-cluster computation of the correlation energy in the random-phase approximation

Krause, Katharina; Klopper, Wim

2013-11-01

Within the framework of density-functional theory, the correlation energy is computed in the random-phase approximation (RPA) using spinors obtained from a two-component relativistic Kohn-Sham calculation accounting for spin-orbit interactions. Ring-coupled-cluster equations are solved to obtain the two-component RPA correlation energy. Results are presented for the hydrides of the halogens Br, I, and At as well as of the coinage metals Cu, Ag, and Au, based on two-component relativistic exact-decoupling Kohn-Sham calculations.

5. Communication: Two-component ring-coupled-cluster computation of the correlation energy in the random-phase approximation

SciTech Connect

Krause, Katharina; Klopper, Wim

2013-11-21

Within the framework of density-functional theory, the correlation energy is computed in the random-phase approximation (RPA) using spinors obtained from a two-component relativistic Kohn–Sham calculation accounting for spin–orbit interactions. Ring-coupled-cluster equations are solved to obtain the two-component RPA correlation energy. Results are presented for the hydrides of the halogens Br, I, and At as well as of the coinage metals Cu, Ag, and Au, based on two-component relativistic exact-decoupling Kohn–Sham calculations.

6. The approximation of radiative effects in relativistic gravity - Gravitational radiation reaction and energy loss in nearly Newtonian systems

NASA Technical Reports Server (NTRS)

Walker, M.; Will, C. M.

1980-01-01

An argument is presented to determine the accuracy with which a solution of Einstein's field equations of gravitation must be approximated in order to describe the dominant effects of gravitational radiation emission from weak-field systems. Several previous calculations are compared in the light of this argument, and some apparent discrepancies among them are resolved. The majority of these calculations support the 'quadrupole formulae' for gravitational radiation energy loss and radiation reaction.
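For reference, the quadrupole formula these calculations support expresses the gravitational-radiation energy loss of a nearly Newtonian source through the third time derivative of its traceless mass quadrupole moment (a standard statement of the formula, not quoted from the paper):

```latex
\frac{dE}{dt} \;=\; -\,\frac{G}{5c^{5}}\,
\left\langle \dddot{Q}_{ij}\,\dddot{Q}_{ij} \right\rangle,
\qquad
Q_{ij} \;=\; \int \rho \left( x_i x_j - \tfrac{1}{3}\,\delta_{ij} r^{2} \right) d^{3}x .
```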

7. Minimum-Energy Flight Paths for UAVs Using Mesoscale Wind Forecasts and Approximate Dynamic Programming

DTIC Science & Technology

2007-12-01

(The extracted abstract consists only of fragments of equations and figure captions. The recoverable fragments define a path cost of the form D_ij = ∫_arc(i,j) D(x) dx, note that all processes leading to loss of energy, including drag, are eventually translated to heating, and reference wind fields as a function of altitude, up to 10 km, from a 160 km x 160 km COAMPS forecast over the Yucca, Nevada test site.)

8. The Mean Trajectory Approximation for Charge and Energy Transfer Processes at Surfaces.

DTIC Science & Technology

1985-03-01

(The extracted abstract consists only of fragments of equations. The recoverable fragments concern matrix elements <R> and <F>, the use of the eikonal method, a prescription for propagating R(t) when the incident kinetic energy is much higher than the variation of H_ii, and the classical action S(R(t), P(t)) used to compute the eikonal, with classical mechanics appearing as a device to compute the eikonal without guaranteeing that the classical trajectory is physical.)

9. A new approach to detect congestive heart failure using Teager energy nonlinear scatter plot of R-R interval series.

PubMed

Kamath, Chandrakar

2012-09-01

A novel approach to distinguishing congestive heart failure (CHF) subjects from healthy subjects is proposed. Heart rate variability (HRV) is impaired in CHF subjects. In this work, hypothesizing that capturing the moment-to-moment nonlinear dynamics of HRV will reveal cardiac patterning, we construct the nonlinear scatter plot for the Teager energy of the R-R interval series. The key feature of the Teager energy is that it models the energy of the source that generated the signal rather than the energy of the signal itself. Hence, any deviations in the genesis of HRV, through complex interactions of hemodynamic, electrophysiological, and humoral variables, as well as autonomic and central nervous regulation, become manifest in the Teager energy function. Comparison of the Teager energy scatter plot with the second-order difference plot (SODP) for normal and CHF subjects reveals significant qualitative and quantitative differences. We introduce the concept of curvilinearity for central tendency measures of the plots and define a radial distance index (RDI) that reveals the efficacy of the Teager energy scatter plot over SODP in separating CHF subjects from healthy subjects. The k-nearest neighbor classifier with RDI as the feature achieved an almost 100% classification rate.
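The discrete Teager energy operator underlying the scatter plot is Ψ[x](n) = x²(n) − x(n−1)·x(n+1). A minimal sketch of applying it to an R-R interval series and forming lag-1 pairs for a scatter plot (the exact plot construction is not specified in the abstract, so the pairing here is an assumption):

```python
import numpy as np

def teager_energy(x):
    """Discrete Teager energy operator, Psi[x](n) = x(n)^2 - x(n-1)*x(n+1),
    evaluated at the interior samples of the series."""
    x = np.asarray(x, dtype=float)
    return x[1:-1] ** 2 - x[:-2] * x[2:]

def scatter_pairs(psi):
    """Lag-1 pairs (Psi(n), Psi(n+1)) used to build a scatter plot."""
    return np.column_stack((psi[:-1], psi[1:]))
```

For a pure sinusoid A·sin(ωn) the operator returns the constant A²·sin²ω, i.e., it tracks the amplitude and frequency of the generating source rather than the instantaneous signal value, which is the property the authors exploit.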

10. Free Energy Contribution Analysis Using Response Kernel Approximation: Insights into the Acylation Reaction of a Beta-Lactamase.

PubMed

2016-09-08

A widely applicable free energy contribution analysis (FECA) method based on the quantum mechanical/molecular mechanical (QM/MM) approximation using response kernel approaches has been proposed to investigate the influences of environmental residues and/or atoms in the QM region on the free energy profile. This method can evaluate atomic contributions to the free energy along the reaction path including polarization effects on the QM region within a dramatically reduced computational time. The rate-limiting step in the deactivation of the β-lactam antibiotic cefalotin (CLS) by β-lactamase was studied using this method. The experimentally observed activation barrier was successfully reproduced by free energy perturbation calculations along the optimized reaction path that involved activation by the carboxylate moiety in CLS. It was found that the free energy profile in the QM region was slightly higher than the isolated energy and that two residues, Lys67 and Lys315, as well as water molecules deeply influenced the QM atoms associated with the bond alternation reaction in the acyl-enzyme intermediate. These facts suggested that the surrounding residues are favorable for the reactant complex and prevent the intermediate from being too stabilized to proceed to the following deacylation reaction. We have demonstrated that the free energy contribution analysis should be a useful method to investigate enzyme catalysis and to facilitate intelligent molecular design.

11. Iterative and direct methods employing distributed approximating functionals for the reconstruction of a potential energy surface from its sampled values

Szalay, Viktor

1999-11-01

The reconstruction of a function from knowledge of only its values on a finite set of grid points, that is, the construction of an analytical approximation reproducing the function with good accuracy everywhere within the sampled volume, is an important problem in all branches of science. One such problem in chemical physics is the determination of an analytical representation of Born-Oppenheimer potential energy surfaces from ab initio calculations, which give the value of the potential at a finite set of grid points in configuration space. This article describes the rudiments of iterative and direct methods of potential surface reconstruction. The major new results are the derivation, numerical demonstration, and interpretation of a reconstruction formula. The reconstruction formula derived approximates the unknown function, say V, by a linear combination of functions obtained by discretizing the continuous distributed approximating functional (DAF) approximation of V over the sampling grid. The simplest contracted and ordinary Hermite-DAFs are shown to be sufficient for reconstruction. The linear combination coefficients can be obtained either iteratively or directly, by finding the minimal-norm least-squares solution of a linear system of equations. Several numerical examples of reconstructing functions of one and two variables, and of very different shapes, are given. The examples demonstrate the robustness and high accuracy of the proposed method, as well as its caveats. As to the mathematical foundation of the method, it is shown that the reconstruction formula can be interpreted as, and in fact is, a frame expansion. By recognizing the relevance of frames to determining analytical approximations to potential energy surfaces, an extremely rich and beautiful toolbox of mathematics is placed at our disposal. Thus, the simple reconstruction method derived in this paper can be refined, extended, and improved in numerous ways.
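The direct route, finding the minimal-norm least-squares expansion coefficients over the sampling grid, can be sketched as follows. Gaussian basis functions stand in for the Hermite-DAFs, and the sampled function, grid, and basis width are illustrative choices; np.linalg.lstsq returns the minimal-norm solution when the system is rank-deficient:

```python
import numpy as np

# Sampled values of the function to reconstruct (stand-in for ab initio
# potential energy points on a grid).
x_grid = np.linspace(-2.0, 2.0, 25)
v_grid = np.exp(-x_grid ** 2) * np.cos(2.0 * x_grid)

sigma = 0.2  # width of the Gaussian basis functions

def design_matrix(x, centers, sigma):
    """Basis functions centred on the sampling grid, evaluated at x."""
    return np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2.0 * sigma ** 2))

# Minimal-norm least-squares solution for the expansion coefficients.
A = design_matrix(x_grid, x_grid, sigma)
coeffs, *_ = np.linalg.lstsq(A, v_grid, rcond=None)

# Evaluate the analytical approximation off-grid and measure the error.
x_test = np.linspace(-1.8, 1.8, 101)
v_rec = design_matrix(x_test, x_grid, sigma) @ coeffs
max_err = np.max(np.abs(v_rec - np.exp(-x_test ** 2) * np.cos(2.0 * x_test)))
```

The same construction carries over to higher dimensions by taking products of one-dimensional basis functions, as in the two-variable examples of the article.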

12. Brownian motors in the low-energy approximation: Classification and properties

SciTech Connect

Rozenbaum, V. M.

2010-04-15

We classify Brownian motors based on the expansion of their velocity in terms of the reciprocal friction coefficient. The two main classes of motors (with dichotomic fluctuations in the homogeneous force and in the periodic potential energy) are characterized by different analytical dependences of their mean velocity on the spatial and temporal asymmetry coefficients, and by different adiabatic limits. The competition between the spatial and temporal asymmetries gives rise to stopping points. The transition through these points can be achieved by varying the asymmetry coefficients, temperature, and other motor parameters, which can be used, for example, for nanoparticle segregation. The proposed classification separates out a new type of motor based on synchronous fluctuations in a symmetric potential and an applied homogeneous force. As an example of this type, we consider a near-surface motor whose two-dimensional motion (parallel and perpendicular to the substrate plane) results from fluctuations in an external force inclined to the surface.

13. Electron density errors and density-driven exchange-correlation energy errors in approximate density functional calculations.

PubMed

Mezei, Pal Daniel; Csonka, Gabor I; Kallay, Mihaly

2017-09-11

Since its formal introduction, density functional theory has achieved many successes in the fields of molecular and solid-state chemistry. According to its central theorems, the ground state of a many-electron system is fully described by its electron density, and the exact functional minimizes the energy at the exact electron density. For many years of density functional development, it was assumed that improvements in the energy are accompanied by improvements in the density, and that the approximations approach the exact functional. In a recent analysis (Medvedev et al. Science 2017, 355, 49-52), it was pointed out, for fourteen first-row (Be-Ne) atoms and cations with 2, 4, or 10 electrons, that the nowadays popular flexible but physically less rigorous approximate density functionals may yield large errors in the calculated electron densities despite accurate energies. Although far-reaching conclusions were drawn in this work, the methodology used by the authors may need improvement. Most importantly, their benchmark set was biased towards small atomic cations with compressed, high electron densities. In our paper, we construct a molecular test set with chemically relevant densities and analyze the performance of several density functional approximations, including the less-investigated double hybrids. We apply an intensive error measure for the density, its gradient, and its Laplacian, and examine how the errors in the density propagate into the semi-local exchange-correlation energy. While we have confirmed the broad conclusions of Medvedev et al., our different way of analyzing the data has led to conclusions that differ in detail. Finally, seeking a rationale behind the global hybrid and double hybrid methods from the density's point of view, we also analyze the role of exact exchange and second-order perturbative correlation mixing in PBE-based global hybrid and double hybrid functional forms.

14. Energy-loss function in the two-pair approximation for the electron liquid

Bachlechner, M. E.; Holas, A.; Böhm, H. M.; Schinner, A.

1996-07-01

The imaginary part of the proper polarizability, Im Π, arising from excitations of two electron-hole pairs, is studied in detail for electron systems of arbitrary dimensionality, taking into account arbitrary degeneracy of the electron bands. This allows application to semiconductors with degenerate valleys and to ferromagnetic metals. The results obtained not only confirm expressions already known for paramagnetic systems in the high-frequency region, but are also rigorously shown to be valid for all frequencies outside the particle-hole continuum. For sufficiently high momentum transfer, a cutoff frequency (below which Im Π = 0) is established not only for two-pair but for any n-pair processes. In contrast, there is no upper cutoff for n ≥ 1. The energy-loss function, including the discussed two-pair contributions, is calculated, and the effects of screening are investigated. Numerical results, illustrating various aspects and properties of this function, and in particular showing finite-width plasmon peaks, are obtained for a two-dimensional electron gas.

15. Thermal decay analysis of fiber Bragg gratings at different temperature annealing rates using demarcation energy approximation

Gunawardena, Dinusha Serandi; Lai, Man-Hong; Lim, Kok-Sing; Ahmad, Harith

2017-03-01

In this study, the thermal degradation of gratings inscribed in three types of fiber, namely PS 1250/1500, SM 1500, and zero-water-peak single-mode fiber, is demonstrated. A comparative investigation is carried out on the aging characteristics of the gratings at three temperature ramping rates: 3 °C/min, 6 °C/min, and 9 °C/min. During the thermal annealing treatment, a significant enhancement in the grating reflectivity is observed for the PS 1250/1500 fiber from ∼1.2 eV to 1.4 eV, which indicates a thermally induced reversible effect. Higher temperature ramping rates lead to a higher regeneration temperature. The investigation also shows that, regardless of the temperature ramping rate, the thermal decay behavior of a specific fiber can be successfully characterized when represented in the demarcation energy domain. Moreover, this technique can be applied to predict the thermal decay characteristics of a specific fiber.

16. Improved CO Adsorption Energies, Site Preferences, and Surface Formation Energies from a Meta-Generalized Gradient Approximation Exchange-Correlation Functional, M06-L.

PubMed

Luo, Sijie; Zhao, Yan; Truhlar, Donald G

2012-10-18

A notorious failing of approximate exchange-correlation functionals when applied to problems involving catalysis has been the inability of most local functionals to predict the correct adsorption site for CO on metal surfaces or to simultaneously predict accurate surface formation energies and adsorption energies for transition metals. By adding the kinetic energy density τ to the density functional, the revTPSS density functional was shown recently to achieve a balanced description of surface energies and adsorption energies. Here, we show that the older M06-L density functional, also containing τ, provides improved surface formation energies and CO adsorption energies over revTPSS for five transition metals and correctly predicted the on-top/hollow site adsorption preferences for four of the five metals, which was not achieved by most other local functionals. Because M06-L was entirely designed on the basis of atomic and molecular energies, its very good performance is a confirmation of the reasonableness of its functional form. Two GGA functionals with an expansion in the reduced gradient that is correct through second order, namely, SOGGA and SOGGA11, were also tested and found to produce the best surface formation energies of all tested GGA functionals, although they significantly overestimate the adsorption energies.

17. Vibration-translation energy transfer in anharmonic diatomic molecules. 1: A critical evaluation of the semiclassical approximation

NASA Technical Reports Server (NTRS)

Mckenzie, R. L.

1974-01-01

The semiclassical approximation is applied to anharmonic diatomic oscillators in excited initial states. Multistate numerical solutions giving the vibrational transition probabilities for collinear collisions with an inert atom are compared with equivalent, exact quantum-mechanical calculations. Several symmetrization methods are shown to correlate accurately the predictions of both theories for all initial states, transitions, and molecular types tested, but only if coupling of the oscillator motion and the classical trajectory of the incident particle is considered. In anharmonic heteronuclear molecules, the customary semiclassical method of computing the classical trajectory independently leads to transition probabilities with anomalous low-energy resonances. Proper accounting of the effects of oscillator compression and recoil on the incident particle trajectory removes the anomalies and restores the applicability of the semiclassical approximation.

18. On the approximate albedo boundary conditions for two-energy group X,Y-geometry discrete ordinates eigenvalue problems

SciTech Connect

Nunes, C. E. A.; Alves Filho, H.; Barros, R. C.

2012-07-01

We discuss in this paper the computational efficiency of approximate discrete ordinates (SN) albedo boundary conditions for two-energy-group eigenvalue problems in X,Y geometry. The non-standard SN albedo approximately substitutes for the reflector system around the active domain, as we neglect the transverse leakage terms within the non-multiplying reflector region. Should the problem have no transverse leakage terms, i.e., in one-dimensional slab geometry, the proposed albedo boundary conditions are exact. By computational efficiency we mean analyzing the accuracy of the numerical results versus the CPU execution time of each run for a given model problem. Numerical results for a typical test problem are shown to illustrate this efficiency analysis. (authors)

19. Anharmonic free energies and phonon dispersions from the stochastic self-consistent harmonic approximation: Application to platinum and palladium hydrides

Errea, Ion; Calandra, Matteo; Mauri, Francesco

2014-02-01

Harmonic calculations based on density-functional theory are generally the method of choice for the description of phonon spectra of metals and insulators. The inclusion of anharmonic effects is, however, delicate as it relies on perturbation theory requiring a considerable amount of computer time, fast increasing with the cell size. Furthermore, perturbation theory breaks down when the harmonic solution is dynamically unstable or the anharmonic correction of the phonon energies is larger than the harmonic frequencies themselves. We present here a stochastic implementation of the self-consistent harmonic approximation valid to treat anharmonicity at any temperature in the nonperturbative regime. The method is based on the minimization of the free energy with respect to a trial density matrix described by an arbitrary harmonic Hamiltonian. The minimization is performed with respect to all the free parameters in the trial harmonic Hamiltonian, namely, equilibrium positions, phonon frequencies, and polarization vectors. The gradient of the free energy is calculated following a stochastic procedure. The method can be used to calculate thermodynamic properties, dynamical properties, and even anharmonic corrections to the Eliashberg function of the electron-phonon coupling. The scaling with the system size is greatly improved with respect to perturbation theory. The validity of the method is demonstrated in the strongly anharmonic palladium and platinum hydrides. In both cases, we predict a strong anharmonic correction to the harmonic phonon spectra, far beyond the perturbative limit. In palladium hydrides, we calculate thermodynamic properties beyond the quasiharmonic approximation, while in PtH, we demonstrate that the high superconducting critical temperatures at 100 GPa predicted in previous calculations based on the harmonic approximation are strongly suppressed when anharmonic effects are included.
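The variational idea behind the stochastic self-consistent harmonic approximation (minimize a free-energy bound over trial harmonic Hamiltonians, evaluating the required averages stochastically) can be illustrated on a classical 1D quartic oscillator. This is a toy sketch of the principle only, not the authors' quantum implementation; all parameters are invented:

```python
import random
from math import log, pi, sqrt

def trial_free_energy(k_trial, a, g, beta=1.0, n_samples=20000, seed=0):
    """Gibbs-Bogoliubov bound F(k) = F_h(k) + <V - V_h>_h for a classical
    1D quartic oscillator V(x) = a*x**2/2 + g*x**4/4, with a harmonic
    trial density of stiffness k_trial.  The thermal average is evaluated
    by sampling, mirroring the stochastic evaluation used in the paper
    (classical toy model only)."""
    rng = random.Random(seed)              # common random numbers across k
    sigma = 1.0 / sqrt(beta * k_trial)     # width of the trial Gaussian
    f_harm = -(1.0 / beta) * log(sqrt(2.0 * pi / (beta * k_trial)))
    acc = 0.0
    for _ in range(n_samples):
        x = rng.gauss(0.0, sigma)
        acc += a * x**2 / 2 + g * x**4 / 4 - k_trial * x**2 / 2
    return f_harm + acc / n_samples

# Minimize over a grid of trial stiffnesses: the quartic term (g > 0)
# pushes the optimal effective stiffness above the bare value a, the
# analytic optimum being (a + sqrt(a**2 + 12*g)) / 2 = 3.0 here.
a, g = 1.0, 2.0
k_grid = [0.5 + 0.1 * i for i in range(40)]
k_opt = min(k_grid, key=lambda k: trial_free_energy(k, a, g))
```

The same structure carries over to the real method: the trial Hamiltonian's free parameters (positions, frequencies, polarization vectors) are optimized against a stochastically estimated free-energy gradient.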

20. Lateral distribution of high energy muons in EAS of sizes Ne ≈ 10^5 and Ne ≈ 10^6

NASA Technical Reports Server (NTRS)

Bazhutov, Y. N.; Ermakov, G. G.; Fomin, G. G.; Isaev, V. I.; Jarochkina, Z. V.; Kalmykov, N. N.; Khrenov, B. A.; Khristiansen, G. B.; Kulikov, G. V.; Motova, M. V.

1985-01-01

Muon energy spectra and muon lateral distributions in EAS were investigated with an underground magnetic spectrometer operating as part of the extensive air shower (EAS) array. For every registered muon, the EAS data are analyzed and the following EAS parameters are obtained: size N_e, distance r from the shower axis to the muon, and age parameter s. The number of muons with energy above some threshold E associated with EAS of fixed parameters, I_reg, is measured. To obtain the traditional characteristics, muon flux densities as a function of the distance r and muon energy E, the muon lateral distributions and energy spectra are discussed in terms of a hadron-nucleus interaction model and the composition of primary cosmic rays.

1. Influence of birth interval and child labour on family energy requirements and dependency ratios in two traditional subsistence economies in Africa.

PubMed

Ulijaszek, S J

1993-01-01

The consequences of different birth intervals on dietary energy requirements and dependency ratios at different stages of the family lifecycle are modelled for Gambian agriculturalists and !Kung hunter-gatherers. Energy requirements reach a peak at between 20 and 30 years after starting a family for the Gambians, and between 15 and 20 years for the !Kung. For the Gambians, shorter birth interval confers no economic advantage over the traditional birth interval of 30 months. For the !Kung, the lack of participation in subsistence activities by children gives an output:input ratio in excess of that reported in other studies, suggesting that they are in a state of chronic energy deficiency.
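As a rough illustration of how birth interval drives dependency ratios over the family lifecycle, the following hypothetical model counts consumers per producer for a two-parent family; the productive-age threshold and family size are assumptions for the sketch, not the paper's parameters:

```python
def dependency_ratio(years_after_start, birth_interval_yr, n_children,
                     child_productive_age=15):
    """Consumers per producer for a two-parent family at a given point in
    the family lifecycle.  Illustrative assumptions: every family member
    consumes, parents always produce, and a child only starts producing
    at child_productive_age (late for the !Kung case, where children do
    not take part in subsistence work)."""
    births = [i * birth_interval_yr for i in range(n_children)]
    born = [b for b in births if b <= years_after_start]
    producers = 2 + sum(1 for b in born
                        if years_after_start - b >= child_productive_age)
    consumers = 2 + len(born)
    return consumers / producers

# Ten years in, a 24-month interval has produced one more dependent than
# the traditional 30-month interval:
r_short = dependency_ratio(10, birth_interval_yr=2.0, n_children=6)
r_trad = dependency_ratio(10, birth_interval_yr=2.5, n_children=6)
```

Sweeping `years_after_start` reproduces the qualitative shape described in the abstract: the ratio peaks while all children are dependents and falls once they cross the productive age.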

2. Approximate constants of motion for classically chaotic vibrational dynamics - Vague tori, semiclassical quantization, and classical intramolecular energy flow

NASA Technical Reports Server (NTRS)

Shirts, R. B.; Reinhardt, W. P.

1982-01-01

Substantial short time regularity, even in the chaotic regions of phase space, is found for what is seen as a large class of systems. This regularity manifests itself through the behavior of approximate constants of motion calculated by Pade summation of the Birkhoff-Gustavson normal form expansion; it is attributed to remnants of destroyed invariant tori in phase space. The remnant torus-like manifold structures are used to justify Einstein-Brillouin-Keller semiclassical quantization procedures for obtaining quantum energy levels, even in the absence of complete tori. They also provide a theoretical basis for the calculation of rate constants for intramolecular mode-mode energy transfer. These results are illustrated by means of a thorough analysis of the Henon-Heiles oscillator problem. Possible generality of the analysis is demonstrated by brief consideration of classical dynamics for the Barbanis Hamiltonian, Zeeman effect in hydrogen and recent results of Wolf and Hase (1980) for the H-C-C fragment.

4. Interval arithmetic in calculations

Bairbekova, Gaziza; Mazakov, Talgat; Djomartova, Sholpan; Nugmanova, Salima

2016-10-01

Interval arithmetic is a mathematical structure that defines, for real intervals, operations analogous to the ordinary arithmetic ones. This field of mathematics is also called interval analysis or interval calculations. The given mathematical model is convenient for investigating various applied objects: quantities whose approximate values are known; quantities obtained during calculations whose values are not exact because of rounding errors; and random quantities. As a whole, the idea of interval calculations is the use of intervals as the basic data objects. In this paper, we consider the definition of interval mathematics, investigate its properties, prove a theorem, and show the efficiency of the new interval arithmetic. Besides, we briefly review the works devoted to interval analysis and observe the basic tendencies in the development of interval analysis and interval calculations.
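The basic operations of interval arithmetic are easy to state concretely. A minimal sketch (not the authors' formulation) of closed intervals with the standard sum, difference, and product rules:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Interval:
    """A closed real interval [lo, hi].  Each operation returns the
    tightest interval containing every possible pointwise result, which
    is the defining property of interval arithmetic."""
    lo: float
    hi: float

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __sub__(self, other):
        # Subtraction pairs opposite endpoints: [a,b] - [c,d] = [a-d, b-c]
        return Interval(self.lo - other.hi, self.hi - other.lo)

    def __mul__(self, other):
        # The product is bounded by the four endpoint products
        p = (self.lo * other.lo, self.lo * other.hi,
             self.hi * other.lo, self.hi * other.hi)
        return Interval(min(p), max(p))

# A quantity known only to lie in [1.9, 2.1] times one in [-1.0, 3.0]:
product = Interval(1.9, 2.1) * Interval(-1.0, 3.0)
```

Used as the basic data objects of a computation, such intervals propagate measurement and rounding uncertainty automatically through every operation.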

5. Potential-Energy Surfaces, the Born-Oppenheimer Approximations, and the Franck-Condon Principle: Back to the Roots.

PubMed

Mustroph, Heinz

2016-09-05

The concept of a potential-energy surface (PES) is central to our understanding of spectroscopy, photochemistry, and chemical kinetics. However, the terminology used in connection with the basic approximations is variously, and somewhat confusingly, represented with such phrases as "adiabatic", "Born-Oppenheimer", or "Born-Oppenheimer adiabatic" approximation. Concerning the closely relevant and important Franck-Condon principle (FCP), the IUPAC definition differentiates between a classical and quantum mechanical formulation. Consequently, in many publications we find terms such as "Franck-Condon (excited) state", or a vertical transition to the "Franck-Condon point" with the "Franck-Condon geometry" that relaxes to the excited-state equilibrium geometry. The Born-Oppenheimer approximation and the "classical" model of the Franck-Condon principle are typical examples of misused terms and lax interpretations of the original theories. In this essay, we revisit the original publications of pioneers of the PES concept and the FCP to help stimulate a lively discussion and clearer thinking around these important concepts. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

6. Vibrational energy relaxation rates via the linearized semiclassical approximation: applications to neat diatomic liquids and atomic-diatomic liquid mixtures.

PubMed

Ka, Being J; Shi, Qiang; Geva, Eitan

2005-06-30

We report the results obtained from the application of our previously proposed linearized semiclassical method for computing vibrational energy relaxation (VER) rates (J. Phys. Chem. A 2003, 107, 9059, 9070) to neat liquid oxygen, neat liquid nitrogen, and liquid mixtures of oxygen and argon. Our calculations are based on a semiclassical approximation for the quantum-mechanical force-force correlation function, which puts it in terms of the Wigner transforms of the force and the product of the Boltzmann operator and the force. The calculation of the multidimensional Wigner integrals is made feasible by the introduction of a local harmonic approximation. A systematic analysis has been performed of the temperature and mole-fraction dependences of the VER rate constant, as well as the relative contributions of centrifugal and potential forces, and of different types of quantum effects. The results were found to be in very good quantitative agreement with experiment, and they suggest that this semiclassical approximation can capture the quantum enhancement, by many orders of magnitude, of the experimentally observed VER rate constants over the corresponding classical predictions.

7. An approximate but efficient method to calculate free energy trends by computer simulation: Application to dihydrofolate reductase-inhibitor complexes

Gerber, Paul R.; Mark, Alan E.; van Gunsteren, Wilfred F.

1993-06-01

Derivatives of free energy differences have been calculated by molecular dynamics techniques. The systems under study were ternary complexes of Trimethoprim (TMP) with dihydrofolate reductases of E. coli and chicken liver, containing the cofactor NADPH. Derivatives are taken with respect to modification of TMP, with emphasis on altering the 3-, 4- and 5-substituents of the phenyl ring. A linear approximation allows the encompassing of a whole set of modifications in a single simulation, as opposed to a full perturbation calculation, which requires a separate simulation for each modification. In the case considered here, the proposed technique requires a factor of 1000 less computing effort than a full free energy perturbation calculation. For the linear approximation to yield a significant result, one has to find ways of choosing the perturbation evolution, such that the initial trend mirrors the full calculation. The generation of new atoms requires a careful treatment of the singular terms in the non-bonded interaction. The result can be represented by maps of the changed molecule, which indicate whether complex formation is favoured under movement of partial charges and change in atom polarizabilities. Comparison with experimental measurements of inhibition constants reveals fair agreement in the range of values covered. However, detailed comparison fails to show a significant correlation. Possible reasons for the most pronounced deviations are given.
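The linear approximation described above amounts to a first-order estimate of the free-energy change from derivative samples collected in a single simulation of the unmodified system, so many candidate modifications can share one trajectory. A schematic sketch with synthetic derivative samples (the substituent names and numbers are invented):

```python
from math import sqrt
from statistics import mean, stdev

def linear_free_energy_trend(dh_dlambda_samples, d_lambda=1.0):
    """First-order estimate dF ~ <dH/dlambda>_0 * d_lambda from derivative
    samples collected in one simulation of the unmodified system, together
    with the standard error of the mean."""
    n = len(dh_dlambda_samples)
    trend = mean(dh_dlambda_samples) * d_lambda
    err = stdev(dh_dlambda_samples) * abs(d_lambda) / sqrt(n)
    return trend, err

# One trajectory, several candidate modifications: each candidate has its
# own derivative time series evaluated on the same configurations.
candidates = {
    "3-OMe": [-1.2, -0.9, -1.1, -1.3, -1.0, -0.8],
    "4-Cl": [0.4, 0.6, 0.5, 0.3, 0.7, 0.5],
}
trends = {name: linear_free_energy_trend(s)[0] for name, s in candidates.items()}
```

This is why the approach is roughly a thousand times cheaper than full perturbation calculations: the cost of one simulation is amortized over the whole set of candidate substituents, at the price of trusting the initial trend.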

8. Accurate and efficient representation of intramolecular energy in ab initio generation of crystal structures. I. Adaptive local approximate models.

PubMed

Sugden, Isaac; Adjiman, Claire S; Pantelides, Constantinos C

2016-12-01

The global search stage of crystal structure prediction (CSP) methods requires a fine balance between accuracy and computational cost, particularly for the study of large flexible molecules. A major improvement in the accuracy and cost of the intramolecular energy function used in the CrystalPredictor II [Habgood et al. (2015). J. Chem. Theory Comput. 11, 1957-1969] program is presented, where the most efficient use of computational effort is ensured via the use of adaptive local approximate model (LAM) placement. The entire search space of the relevant molecule's conformations is initially evaluated using a coarse, low accuracy grid. Additional LAM points are then placed at appropriate points determined via an automated process, aiming to minimize the computational effort expended in high-energy regions whilst maximizing the accuracy in low-energy regions. As the size, complexity and flexibility of molecules increase, the reduction in computational cost becomes marked. This improvement is illustrated with energy calculations for benzoic acid and the ROY molecule, and a CSP study of molecule (XXVI) from the sixth blind test [Reilly et al. (2016). Acta Cryst. B72, 439-459], which is challenging due to its size and flexibility. Its known experimental form is successfully predicted as the global minimum. The computational cost of the study is tractable without the need to make unphysical simplifying assumptions.
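The adaptive placement strategy, coarse everywhere and fine only where the energy is low, can be sketched generically in one dimension. This is not the CrystalPredictor II algorithm, just the refinement idea:

```python
def adaptive_grid(energy, lo, hi, coarse_n=8, e_cut=1.0, min_width=1e-3):
    """Start from a coarse uniform grid, then repeatedly bisect any
    interval touching the low-energy region (an endpoint below e_cut).
    Points end up concentrated where accuracy matters and stay sparse in
    high-energy regions.  Generic 1D sketch of adaptive local-model
    placement."""
    pts = [lo + (hi - lo) * i / coarse_n for i in range(coarse_n + 1)]
    grid = set(pts)
    stack = list(zip(pts[:-1], pts[1:]))
    while stack:
        a, b = stack.pop()
        if b - a <= min_width:
            continue
        if min(energy(a), energy(b)) < e_cut:  # low energy: refine further
            m = (a + b) / 2.0
            grid.add(m)
            stack.extend([(a, m), (m, b)])
    return sorted(grid)

# Toy 1D "conformational energy" with a well at x = 0.3:
well = lambda x: 25.0 * (x - 0.3) ** 2
grid = adaptive_grid(well, 0.0, 1.0, e_cut=0.5)
```

In the actual CSP setting the refinement happens over a multidimensional conformational space and each added point carries a local approximate model, but the cost argument is the same: effort concentrates in the low-energy regions that decide the ranking of predicted structures.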

9. Spin-unrestricted random-phase approximation with range separation: Benchmark on atomization energies and reaction barrier heights

SciTech Connect

Mussard, Bastien; Reinhardt, Peter; Toulouse, Julien; Ángyán, János G.

2015-04-21

We consider several spin-unrestricted random-phase approximation (RPA) variants for calculating correlation energies, with and without range separation, and test them on datasets of atomization energies and reaction barrier heights. We show that range separation greatly improves the accuracy of all RPA variants for these properties. Moreover, we show that a RPA variant with exchange, hereafter referred to as RPAx-SO2, first proposed by Szabo and Ostlund [J. Chem. Phys. 67, 4351 (1977)] in a spin-restricted closed-shell formalism, and extended here to a spin-unrestricted formalism, provides on average the most accurate range-separated RPA variant for atomization energies and reaction barrier heights. Since this range-separated RPAx-SO2 method had already been shown to be among the most accurate range-separated RPA variants for weak intermolecular interactions [J. Toulouse et al., J. Chem. Phys. 135, 084119 (2011)], this work confirms range-separated RPAx-SO2 as a promising method for general chemical applications.

10. Evaluation of Hydration Free Energy by Level-Set Variational Implicit-Solvent Model with Coulomb-Field Approximation.

PubMed

Guo, Zuojun; Li, Bo; Dzubiella, Joachim; Cheng, Li-Tien; McCammon, J Andrew; Che, Jianwei

2013-03-12

In this article, we systematically apply a novel implicit-solvent model, the variational implicit-solvent model (VISM) together with the Coulomb-Field Approximation (CFA), to calculate the hydration free energy of a large set of small organic molecules. Because these molecules have been studied in detail by molecular dynamics simulations and other implicit-solvent models, they provide a good benchmark for evaluating the performance of VISM-CFA. With all-atom Amber force field parameters, VISM-CFA is able to reproduce well not only the experimental and MD-simulated total hydration free energies but also the polar and nonpolar contributions individually. The correlation between VISM-CFA and experiments is R^2 = 0.763 for the total hydration free energy, with a root-mean-square deviation (RMSD) of 1.83 kcal/mol, and the correlation to results from TIP3P explicit-water MD simulations is R^2 = 0.839 with an RMSD of 1.36 kcal/mol. In addition, we demonstrate that VISM captures dewetting phenomena in the p53/MDM2 complex and hydrophobic characteristics in the system. This work demonstrates that the level-set VISM-CFA can be used to study the energetic behavior of realistic molecular systems with complicated geometries in solvation, protein-ligand binding, protein-protein association, and protein folding processes.
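The R^2 and RMSD figures quoted above are standard comparison metrics: the squared Pearson correlation and the root-mean-square deviation between predicted and reference values. For reference (the data below are hypothetical, not the paper's):

```python
from math import sqrt
from statistics import mean

def pearson_r2(x, y):
    """Squared Pearson correlation, the R^2 usually reported when
    comparing predicted and reference free energies."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov * cov / (vx * vy)

def rmsd(x, y):
    """Root-mean-square deviation between two equal-length series."""
    return sqrt(sum((a - b) ** 2 for a, b in zip(x, y)) / len(x))

# Hypothetical predicted vs. experimental hydration free energies (kcal/mol):
pred = [-3.1, -5.0, -0.8, -7.2, -2.0]
expt = [-2.5, -4.6, -1.5, -6.8, -2.2]
r2, dev = pearson_r2(pred, expt), rmsd(pred, expt)
```

Note that the two metrics answer different questions: R^2 measures how well the trend is captured, while RMSD measures the absolute deviation, so a model can score well on one and poorly on the other.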

11. Spin-unrestricted random-phase approximation with range separation: Benchmark on atomization energies and reaction barrier heights.

PubMed

Mussard, Bastien; Reinhardt, Peter; Ángyán, János G; Toulouse, Julien

2015-04-21

We consider several spin-unrestricted random-phase approximation (RPA) variants for calculating correlation energies, with and without range separation, and test them on datasets of atomization energies and reaction barrier heights. We show that range separation greatly improves the accuracy of all RPA variants for these properties. Moreover, we show that a RPA variant with exchange, hereafter referred to as RPAx-SO2, first proposed by Szabo and Ostlund [J. Chem. Phys. 67, 4351 (1977)] in a spin-restricted closed-shell formalism, and extended here to a spin-unrestricted formalism, provides on average the most accurate range-separated RPA variant for atomization energies and reaction barrier heights. Since this range-separated RPAx-SO2 method had already been shown to be among the most accurate range-separated RPA variants for weak intermolecular interactions [J. Toulouse et al., J. Chem. Phys. 135, 084119 (2011)], this work confirms range-separated RPAx-SO2 as a promising method for general chemical applications.

12. Validity of the relativistic impulse approximation for elastic proton-nucleus scattering at energies lower than 200 MeV

SciTech Connect

Li, Z. P.; Hillhouse, G. C.; Meng, J.

2008-07-15

We present the first study to examine the validity of the relativistic impulse approximation (RIA) for describing elastic proton-nucleus scattering at incident laboratory kinetic energies lower than 200 MeV. For simplicity we choose a ²⁰⁸Pb target, which is a spin-saturated spherical nucleus for which reliable nuclear structure models exist. Microscopic scalar and vector optical potentials are generated by folding invariant scalar and vector scattering nucleon-nucleon (NN) amplitudes, based on our recently developed relativistic meson-exchange model, with Lorentz scalar and vector densities resulting from the accurately calibrated PK1 relativistic mean field model of nuclear structure. It is seen that phenomenological Pauli blocking (PB) effects and density-dependent corrections to σN and ωN meson-nucleon coupling constants modify the RIA microscopic scalar and vector optical potentials so as to provide a consistent and quantitative description of all elastic scattering observables, namely, total reaction cross sections, differential cross sections, analyzing powers and spin rotation functions. In particular, the effect of PB becomes more significant at energies lower than 200 MeV, whereas phenomenological density-dependent corrections to the NN interaction also play an increasingly important role at energies lower than 100 MeV.

13. High-intensity interval training, solutions to the programming puzzle. Part II: anaerobic energy, neuromuscular load and practical applications.

PubMed

Buchheit, Martin; Laursen, Paul B

2013-10-01

High-intensity interval training (HIT) is a well-known, time-efficient training method for improving cardiorespiratory and metabolic function and, in turn, physical performance in athletes. HIT involves repeated short (<45 s) to long (2-4 min) bouts of rather high-intensity exercise interspersed with recovery periods (refer to the previously published first part of this review). While athletes have used 'classical' HIT formats for nearly a century (e.g. repetitions of 30 s of exercise interspersed with 30 s of rest, or 2-4-min interval repetitions run at high but still submaximal intensities), there is today a surge of research interest focused on examining the effects of short sprints and all-out efforts, both in the field and in the laboratory. Prescription of HIT consists of the manipulation of at least nine variables (e.g. work interval intensity and duration, relief interval intensity and duration, exercise modality, number of repetitions, number of series, between-series recovery duration and intensity), each of which has a likely effect on the acute physiological response. Manipulating HIT appropriately is important, not only with respect to the expected middle- to long-term physiological and performance adaptations, but also to maximize daily and/or weekly training periodization. Cardiopulmonary responses are typically the first variables to consider when programming HIT (refer to Part I). However, anaerobic glycolytic energy contribution and neuromuscular load should also be considered to maximize the training outcome. Contrasting HIT formats that elicit similar (and maximal) cardiorespiratory responses have been associated with distinctly different anaerobic energy contributions. The high locomotor speed/power requirements of HIT (i.e. ≥95 % of the minimal velocity/power that elicits maximal oxygen uptake [v/pV̇O2max] to 100 % of maximal sprinting speed or power) and the accumulation of high-training volumes at high-exercise intensity (runners can
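The at-least-nine prescription variables listed in the abstract can be collected in a small record type; the field names and the example session below are illustrative, not the authors':

```python
from dataclasses import dataclass

@dataclass
class HITPrescription:
    """The nine manipulable HIT variables named in the review; field
    names are illustrative stand-ins for the quantities described."""
    work_intensity_pct: float   # % of v/pVO2max, or of max sprint speed
    work_duration_s: float
    relief_intensity_pct: float
    relief_duration_s: float
    modality: str
    repetitions: int
    series: int
    between_series_recovery_s: float
    between_series_recovery_intensity_pct: float

    def work_time_s(self):
        return self.work_duration_s * self.repetitions * self.series

    def session_time_s(self):
        per_series = (self.work_duration_s + self.relief_duration_s) * self.repetitions
        return per_series * self.series + self.between_series_recovery_s * (self.series - 1)

# A 'classical' 30 s on / 30 s off format: 2 series of 10 repetitions.
session = HITPrescription(100.0, 30.0, 50.0, 30.0, "running", 10, 2, 180.0, 30.0)
```

Making the variables explicit like this is the point of the review's framework: two sessions with identical total work time can differ in any of the other fields and so elicit very different anaerobic and neuromuscular loads.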

14. Approximating Optimal Behavioural Strategies Down to Rules-of-Thumb: Energy Reserve Changes in Pairs of Social Foragers

PubMed Central

Rands, Sean A.

2011-01-01

Functional explanations of behaviour often propose optimal strategies for organisms to follow. These ‘best’ strategies could be difficult to perform given biological constraints such as neural architecture and physiological constraints. Instead, simple heuristics or ‘rules-of-thumb’ that approximate these optimal strategies may instead be performed. From a modelling perspective, rules-of-thumb are also useful tools for considering how group behaviour is shaped by the behaviours of individuals. Using simple rules-of-thumb reduces the complexity of these models, but care needs to be taken to use rules that are biologically relevant. Here, we investigate the similarity between the outputs of a two-player dynamic foraging game (which generated optimal but complex solutions) and a computational simulation of the behaviours of the two members of a foraging pair, who instead followed a rule-of-thumb approximation of the game's output. The original game generated complex results, and we demonstrate here that the simulations following the much-simplified rules-of-thumb also generate complex results, suggesting that the rule-of-thumb was sufficient to make some of the model outcomes unpredictable. There was some agreement between both modelling techniques, but some differences arose – particularly when pair members were not identical in how they gained and lost energy. We argue that exploring how rules-of-thumb perform in comparison to their optimal counterparts is an important exercise for biologically validating the output of agent-based models of group behaviour. PMID:21765938

15. Approximating optimal behavioural strategies down to rules-of-thumb: energy reserve changes in pairs of social foragers.

PubMed

Rands, Sean A

2011-01-01

Functional explanations of behaviour often propose optimal strategies for organisms to follow. These 'best' strategies could be difficult to perform given biological constraints such as neural architecture and physiological constraints. Instead, simple heuristics or 'rules-of-thumb' that approximate these optimal strategies may instead be performed. From a modelling perspective, rules-of-thumb are also useful tools for considering how group behaviour is shaped by the behaviours of individuals. Using simple rules-of-thumb reduces the complexity of these models, but care needs to be taken to use rules that are biologically relevant. Here, we investigate the similarity between the outputs of a two-player dynamic foraging game (which generated optimal but complex solutions) and a computational simulation of the behaviours of the two members of a foraging pair, who instead followed a rule-of-thumb approximation of the game's output. The original game generated complex results, and we demonstrate here that the simulations following the much-simplified rules-of-thumb also generate complex results, suggesting that the rule-of-thumb was sufficient to make some of the model outcomes unpredictable. There was some agreement between both modelling techniques, but some differences arose - particularly when pair members were not identical in how they gained and lost energy. We argue that exploring how rules-of-thumb perform in comparison to their optimal counterparts is an important exercise for biologically validating the output of agent-based models of group behaviour.

16. Accurate and efficient calculation of excitation energies with the active-space particle-particle random phase approximation

Zhang, Du; Yang, Weitao

2016-10-01

An efficient method for calculating excitation energies based on the particle-particle random phase approximation (ppRPA) is presented. Neglecting the contributions from the high-lying virtual states and the low-lying core states leads to a significantly smaller active-space ppRPA matrix while keeping the error to within 0.05 eV from the corresponding full ppRPA excitation energies. The resulting computational cost is significantly reduced and becomes less than the construction of the non-local Fock exchange potential matrix in the self-consistent-field (SCF) procedure. With only a modest number of active orbitals, the original ppRPA singlet-triplet (ST) gaps as well as the low-lying single and double excitation energies can be accurately reproduced at much reduced computational costs, up to 100 times faster than the iterative Davidson diagonalization of the original full ppRPA matrix. For high-lying Rydberg excitations where the Davidson algorithm fails, the computational savings of active-space ppRPA with respect to the direct diagonalization is even more dramatic. The virtues of the underlying full ppRPA combined with the significantly lower computational cost of the active-space approach will significantly expand the applicability of the ppRPA method to calculate excitation energies at a cost of O(K^4), with a prefactor much smaller than a single SCF Hartree-Fock (HF)/hybrid functional calculation, thus opening up new possibilities for the quantum mechanical study of excited state electronic structure of large systems.
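The justification for dropping weakly coupled high-lying states can be seen in a two-level toy problem: removing a state at energy separation ΔE with coupling v shifts the retained eigenvalue by roughly v²/ΔE, which is negligible when ΔE is large. A sketch with invented numbers, unrelated to the actual ppRPA matrices:

```python
from math import sqrt

def lowest_eig_2x2(a, b, v):
    """Exact lowest eigenvalue of the symmetric matrix [[a, v], [v, b]]."""
    return (a + b) / 2.0 - sqrt(((a - b) / 2.0) ** 2 + v ** 2)

# 'Active' low-lying state at a = 1.0 eV, neglected high-lying state at
# b = 30.0 eV, weak coupling v = 0.2 eV (all numbers invented):
a, b, v = 1.0, 30.0, 0.2
exact = lowest_eig_2x2(a, b, v)
truncated = a                  # active-space result: drop b's row and column
error = truncated - exact      # ~ v**2 / (b - a) by perturbation theory
```

The same second-order argument explains the reported 0.05 eV error bound: states far enough outside the active window contribute shifts of order v²/ΔE to the low-lying excitation energies.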

17. Accurate and efficient calculation of excitation energies with the active-space particle-particle random phase approximation

SciTech Connect

Zhang, Du; Yang, Weitao

2016-10-13

An efficient method for calculating excitation energies based on the particle-particle random phase approximation (ppRPA) is presented. Neglecting the contributions from the high-lying virtual states and the low-lying core states leads to the significantly smaller active-space ppRPA matrix while keeping the error to within 0.05 eV from the corresponding full ppRPA excitation energies. The resulting computational cost is significantly reduced and becomes less than the construction of the non-local Fock exchange potential matrix in the self-consistent-field (SCF) procedure. With only a modest number of active orbitals, the original ppRPA singlet-triplet (ST) gaps as well as the low-lying single and double excitation energies can be accurately reproduced at much reduced computational costs, up to 100 times faster than the iterative Davidson diagonalization of the original full ppRPA matrix. For high-lying Rydberg excitations where the Davidson algorithm fails, the computational savings of active-space ppRPA with respect to the direct diagonalization is even more dramatic. The virtues of the underlying full ppRPA combined with the significantly lower computational cost of the active-space approach will significantly expand the applicability of the ppRPA method to calculate excitation energies at a cost of O(K^{4}), with a prefactor much smaller than a single SCF Hartree-Fock (HF)/hybrid functional calculation, thus opening up new possibilities for the quantum mechanical study of excited state electronic structure of large systems.

18. Accurate and efficient calculation of excitation energies with the active-space particle-particle random phase approximation

DOE PAGES

Zhang, Du; Yang, Weitao

2016-10-13

An efficient method for calculating excitation energies based on the particle-particle random phase approximation (ppRPA) is presented. Neglecting the contributions from the high-lying virtual states and the low-lying core states leads to the significantly smaller active-space ppRPA matrix while keeping the error to within 0.05 eV from the corresponding full ppRPA excitation energies. The resulting computational cost is significantly reduced and becomes less than the construction of the non-local Fock exchange potential matrix in the self-consistent-field (SCF) procedure. With only a modest number of active orbitals, the original ppRPA singlet-triplet (ST) gaps as well as the low-lying single and double excitation energies can be accurately reproduced at much reduced computational costs, up to 100 times faster than the iterative Davidson diagonalization of the original full ppRPA matrix. For high-lying Rydberg excitations where the Davidson algorithm fails, the computational savings of active-space ppRPA with respect to the direct diagonalization is even more dramatic. The virtues of the underlying full ppRPA combined with the significantly lower computational cost of the active-space approach will significantly expand the applicability of the ppRPA method to calculate excitation energies at a cost of O(K^{4}), with a prefactor much smaller than a single SCF Hartree-Fock (HF)/hybrid functional calculation, thus opening up new possibilities for the quantum mechanical study of excited state electronic structure of large systems.

19. Electron-Phonon Coupling and Energy Flow in a Simple Metal beyond the Two-Temperature Approximation

Waldecker, Lutz; Bertoni, Roman; Ernstorfer, Ralph; Vorberger, Jan

2016-04-01

The electron-phonon coupling and the corresponding energy exchange are investigated experimentally and by ab initio theory in nonequilibrium states of the free-electron metal aluminium. The temporal evolution of the atomic mean-squared displacement in laser-excited thin freestanding films is monitored by femtosecond electron diffraction. The electron-phonon coupling strength is obtained for a range of electronic and lattice temperatures from density functional theory molecular dynamics simulations. The electron-phonon coupling parameter extracted from the experimental data in the framework of a two-temperature model (TTM) deviates significantly from the ab initio values. We introduce a nonthermal lattice model (NLM) for describing nonthermal phonon distributions as a sum of thermal distributions of the three phonon branches. The contributions of individual phonon branches to the electron-phonon coupling are considered independently and found to be dominated by longitudinal acoustic phonons. Using all material parameters from first-principles calculations except the phonon-phonon coupling strength, the prediction of the energy transfer from electrons to phonons by the NLM is in excellent agreement with time-resolved diffraction data. Our results suggest that the TTM is insufficient for describing the microscopic energy flow even for simple metals like aluminium and that the determination of the electron-phonon coupling constant from time-resolved experiments by means of the TTM leads to incorrect values. In contrast, the NLM describing transient phonon populations by three parameters appears to be a sufficient model for quantitatively describing electron-lattice equilibration in aluminium. We discuss the general applicability of the NLM and provide a criterion for the suitability of the two-temperature approximation for other metals.

20. Post-mortem interval estimation of human skeletal remains by micro-computed tomography, mid-infrared microscopic imaging and energy dispersive X-ray mapping

PubMed Central

Hatzer-Grubwieser, P.; Bauer, C.; Parson, W.; Unterberger, S. H.; Kuhn, V.; Pemberger, N.; Pallua, Anton K.; Recheis, W.; Lackner, R.; Stalder, R.; Pallua, J. D.

2015-01-01

In this study, different state-of-the-art visualization methods such as micro-computed tomography (micro-CT), mid-infrared (MIR) microscopic imaging and energy dispersive X-ray (EDS) mapping were evaluated to study human skeletal remains for the determination of the post-mortem interval (PMI). PMI-specific features were identified and visualized by overlaying molecular imaging data with morphological tissue structures generated by radiological techniques and microscopic images gained from confocal microscopy (Infinite Focus (IFM)). In this way, a more distinct picture of the processes during the PMI, as well as a more realistic approximation of the PMI, was achieved. It was demonstrated that the obtained results, in combination with multivariate data analysis, can be used to predict the Ca/C ratio and bone volume (BV) over total volume (TV) for PMI estimation. A statistical limitation of this study is the small sample size, and future work will be based on more specimens to develop a screening tool for PMI based on the outcome of this multidimensional approach. PMID:25878731

1. Post-mortem interval estimation of human skeletal remains by micro-computed tomography, mid-infrared microscopic imaging and energy dispersive X-ray mapping.

PubMed

Longato, S; Wöss, C; Hatzer-Grubwieser, P; Bauer, C; Parson, W; Unterberger, S H; Kuhn, V; Pemberger, N; Pallua, Anton K; Recheis, W; Lackner, R; Stalder, R; Pallua, J D

2015-04-07

In this study, different state-of-the-art visualization methods such as micro-computed tomography (micro-CT), mid-infrared (MIR) microscopic imaging and energy dispersive X-ray (EDS) mapping were evaluated to study human skeletal remains for the determination of the post-mortem interval (PMI). PMI-specific features were identified and visualized by overlaying molecular imaging data with morphological tissue structures generated by radiological techniques and microscopic images gained from confocal microscopy (Infinite Focus (IFM)). In this way, a more distinct picture of the processes during the PMI, as well as a more realistic approximation of the PMI, was achieved. It was demonstrated that the obtained results, in combination with multivariate data analysis, can be used to predict the Ca/C ratio and bone volume (BV) over total volume (TV) for PMI estimation. A statistical limitation of this study is the small sample size, and future work will be based on more specimens to develop a screening tool for PMI based on the outcome of this multidimensional approach.

2. Analytical model for ion stopping power and range in the therapeutic energy interval for beams of hydrogen and heavier ions

Donahue, William; Newhauser, Wayne D.; Ziegler, James F.

2016-09-01

Many different approaches exist to calculate stopping power and range of protons and heavy charged particles. These methods may be broadly categorized as physically complete theories (widely applicable and complex) or semi-empirical approaches (narrowly applicable and simple). However, little attention has been paid in the literature to approaches that are both widely applicable and simple. We developed simple analytical models of stopping power and range for ions of hydrogen, carbon, iron, and uranium that spanned intervals of ion energy from 351 keV u⁻¹ to 450 MeV u⁻¹ or wider. The analytical models typically reproduced the best-available evaluated stopping powers within 1% and ranges within 0.1 mm. The analytical stopping power model was 28% faster than a full theoretical approach. The calculation of range using the analytic range model was 945 times faster than a widely used numerical integration technique. The results of this study revealed that the new, simple analytical models are accurate, fast, and broadly applicable. The new models require just 6 parameters to calculate stopping power and range for a given ion and absorber. The proposed model may be useful as an alternative to traditional approaches, especially in applications that demand fast computation speed, small memory footprint, and simplicity.
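
The range computation being compared is, in essence, the continuous-slowing-down integral R(E) = ∫₀^E dE′/S(E′). A minimal sketch using a hypothetical Bragg-Kleeman-style power-law stopping power (the constants alpha and p below are illustrative stand-ins, not the paper's fitted parameters) shows why a closed-form range model beats numerical integration on speed:

```python
import numpy as np

# Hypothetical Bragg-Kleeman-style model: R(E) = alpha * E**p, which
# implies a stopping power S(E) = dE/dx = E**(1 - p) / (alpha * p).
alpha, p = 2.2e-3, 1.77   # illustrative constants (cm, MeV units)

def inverse_stopping_power(E):
    """1/S(E) in cm/MeV; the range is the integral of this over energy."""
    return alpha * p * E**(p - 1)

def csda_range(E, n=20000):
    """Continuous-slowing-down range R(E) = integral_0^E dE'/S(E')."""
    grid = np.linspace(0.0, E, n + 1)
    f = inverse_stopping_power(grid)
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(grid)))  # trapezoid rule

E = 100.0  # MeV
print(csda_range(E))     # numerically integrated range
print(alpha * E**p)      # closed-form analytical range (same quantity)
```

With a power law, the integral has the closed form R = alpha · E^p, so the analytic model needs one power evaluation where the integration needs thousands of stopping-power evaluations.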

3. Minimax rational approximation of the Fermi-Dirac distribution

SciTech Connect

Moussa, Jonathan E.

2016-10-27

Accurate rational approximations of the Fermi-Dirac distribution are a useful component in many numerical algorithms for electronic structure calculations. The best known approximations use O(log(βΔ)log(ϵ⁻¹)) poles to achieve an error tolerance ϵ at temperature β⁻¹ over an energy interval Δ. We apply minimax approximation to reduce the number of poles by a factor of four and replace Δ with Δocc, the occupied energy interval. This is particularly beneficial when Δ >> Δocc, such as in electronic structure calculations that use a large basis set.

4. Minimax rational approximation of the Fermi-Dirac distribution

DOE PAGES

Moussa, Jonathan E.

2016-10-27

Accurate rational approximations of the Fermi-Dirac distribution are a useful component in many numerical algorithms for electronic structure calculations. The best known approximations use O(log(βΔ)log(ϵ⁻¹)) poles to achieve an error tolerance ϵ at temperature β⁻¹ over an energy interval Δ. We apply minimax approximation to reduce the number of poles by a factor of four and replace Δ with Δocc, the occupied energy interval. This is particularly beneficial when Δ >> Δocc, such as in electronic structure calculations that use a large basis set.
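
The pole-count growth that minimax approximation improves on can be seen in the classical Matsubara pole expansion of the Fermi-Dirac function, whose truncation error decays only algebraically in the number of poles. This sketch uses the standard expansion identity, not the paper's minimax construction:

```python
import numpy as np

def fermi_exact(x):
    """Fermi-Dirac occupation, with x = beta * (E - mu)."""
    return 1.0 / (np.exp(x) + 1.0)

def fermi_poles(x, n_poles):
    """Truncated Matsubara pole expansion of the Fermi function:
    f(x) = 1/2 - sum_{k=1..n} 2*x / (x**2 + ((2k - 1)*pi)**2).
    """
    k = np.arange(1, n_poles + 1)
    w = (2 * k - 1) * np.pi          # Matsubara frequencies (units of 1/beta)
    return 0.5 - np.sum(2.0 * x / (x**2 + w**2))

# Error decays only like 1/n_poles -- the slow growth that better
# pole placement (e.g., minimax) is designed to beat.
for n in (10, 100, 1000):
    print(n, abs(fermi_poles(3.0, n) - fermi_exact(3.0)))
```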

5. Density functional theory with an approximate kinetic energy functional applied to study structure and stability of weak van der Waals complexes

Wesołowski, T. A.; Ellinger, Y.; Weber, J.

1998-04-01

In view of further application to the study of molecular and atomic sticking on dust particles, we investigated the capability of the "freeze-and-thaw" cycle of the Kohn-Sham equations with constrained electron density (KSCED) to describe potential energy surfaces of weak van der Waals complexes. We report the results obtained for C6H6⋯X (X=O2, N2, and CO) as test cases. In the KSCED formalism, the exchange-correlation functional is defined as in the Kohn-Sham approach whereas the kinetic energy of the molecular complex is expressed differently, using both the analytic expressions for the kinetic energy of individual fragments and the explicit functional of electron density to approximate nonadditive contributions. As the analytical form of the kinetic energy functional is not known, the approach relies on approximations. Therefore, the applied implementation of KSCED requires the use of an approximate kinetic energy functional in addition to the approximate exchange-correlation functional in calculations following the Kohn-Sham formalism. Several approximate kinetic energy functionals derived using a general form by Lee, Lee, and Parr [Lee et al., Phys. Rev. A. 44, 768 (1991)] were considered. The functionals of this type are related to the approximate exchange energy functionals and it is possible to derive a kinetic energy functional from an exchange energy functional without the use of any additional parameters. The KSCED interaction energies obtained using the PW91 [Perdew and Wang, in Electronic Structure of Solids '91, edited by P. Ziesche and H. Eschrig (Academie Verlag, Berlin, 1991), p. 11] exchange-correlation functional and the kinetic energy functional derived from the PW91 exchange functional agree very well with the available experimental results. Other considered functionals lead to worse results. Compared to the supermolecule Kohn-Sham interaction energies, the ones derived from the KSCED calculations depend less on the choice of the

6. Estimating the Gibbs energy of hydration from molecular dynamics trajectories obtained by integral equations of the theory of liquids in the RISM approximation

Tikhonov, D. A.; Sobolev, E. V.

2011-04-01

A method of integral equations of the theory of liquids in the reference interaction site model (RISM) approximation is used to estimate the Gibbs energy averaged over equilibrium trajectories computed by molecular mechanics. The peptide oxytocin is selected as the object of interest. The Gibbs energy is calculated using all chemical potential formulas introduced in the RISM approach for the excess chemical potential of solvation and is compared with estimates from the generalized Born model. Some formulas are shown to give the wrong sign of the Gibbs energy change when the peptide passes from the gas phase into a water environment; the other formulas give overestimated Gibbs energy changes with the right sign. Note that allowance for the repulsive correction in the approximate analytical expressions for the Gibbs energy derived by thermodynamic perturbation theory is not a remedy.

7. An interval-possibilistic basic-flexible programming method for air quality management of municipal energy system through introducing electric vehicles.

PubMed

Yu, L; Li, Y P; Huang, G H; Shan, B G

2017-09-01

Contradictions between sustainable transportation development and environmental issues have been aggravated significantly and have become one of the major concerns for energy systems planning and management. A heavy emphasis is placed on the stimulation of electric vehicles (EVs) to handle these problems, which are associated with various complexities and uncertainties in a municipal energy system (MES). In this study, an interval-possibilistic basic-flexible programming (IPBFP) method is proposed for planning the MES of Qingdao, where uncertainties expressed as interval-flexible variables and interval-possibilistic parameters can be effectively reflected. Support vector regression (SVR) is used for predicting the electricity demand of the city under various scenarios. Solutions for EV stimulation levels and satisfaction levels in association with flexible constraints and predetermined necessity degrees are analyzed, which can help identify optimized energy-supply patterns that support improvement of air quality and hedge against violation of soft constraints. Results disclose that developing EVs on a large scale can help facilitate the city's energy system in an environmentally effective way. However, compared to the rapid growth of transportation, the EVs' contribution to improving the city's air quality is limited. To achieve an environmentally sustainable MES, more attention should be focused on integrating renewable energy resources, stimulating EVs, and improving energy transmission, transport and storage. Copyright © 2017 Elsevier B.V. All rights reserved.

8. Energy-averaged electron-ion momentum transport cross section in the Born approximation and Debye-Hückel potential: Comparison with the cut-off theory

Zaghloul, Mofreh R.; Bourham, Mohamed A.; Doster, J. Michael

2000-04-01

An exact analytical expression for the energy-averaged electron-ion momentum transport cross section in the Born approximation and Debye-Hückel exponentially screened potential has been derived and compared with the formulae given by other authors. A quantitative comparison between cut-off theory and quantum mechanical perturbation theory has been presented. Based on results from the Born approximation and Spitzer's formula, a new approximate formula for the quantum Coulomb logarithm has been derived and shown to be more accurate than previous expressions.

9. Energy-averaged electron-ion momentum transport cross section in the Born Approximation and Debye-Hückel potential: Comparison with the cut-off theory

Zaghloul, Mofreh R.; Bourham, Mohamed A.; Doster, J. Michael

2000-02-01

An exact analytical expression for the energy-averaged electron-ion momentum transport cross section in the Born approximation and Debye-Hückel exponentially screened potential has been derived and compared with the formulae given by other authors. A quantitative comparison between cut-off theory and quantum mechanical perturbation theory has been presented. Based on results from the Born approximation and Spitzer's formula, a new approximate formula for the quantum Coulomb logarithm has been derived and shown to be more accurate than previous expressions.

10. Approximate expression for the potential energy of the double-layer interaction between two parallel ion-penetrable membranes at small separations in an electrolyte solution.

PubMed

Ohshima, Hiroyuki

2010-10-01

An approximate expression for the potential energy of the double-layer interaction between two parallel similar ion-penetrable membranes in a symmetrical electrolyte solution is derived via a linearization method, in which the nonlinear Poisson-Boltzmann equations in the regions inside and outside the membranes are linearized with respect to the deviation of the electric potential from the Donnan potential. This approximation works quite well for small membrane separations h for all values of the density of fixed charges in the membranes (or the Donnan potential) and gives a correct limiting form of the interaction energy (or the interaction force) as h → 0.

11. Discussion on the energy content of the galactic dark matter Bose-Einstein condensate halo in the Thomas-Fermi approximation

SciTech Connect

De Souza, J.C.C.; Pires, M.O.C. E-mail: marcelo.pires@ufabc.edu.br

2014-03-01

We show that the galactic dark matter halo, considered to be composed of an axionlike-particle Bose-Einstein condensate [6] trapped by a self-gravitating potential [5], may be stable in the Thomas-Fermi approximation provided appropriate choices for the dark matter particle mass and scattering length are made. The demonstration is performed by means of the calculation of the potential, kinetic and self-interaction energy terms of a galactic halo described by a Boehmer-Harko density profile. We discuss the validity of the Thomas-Fermi approximation for the halo system, and show that the kinetic energy contribution is indeed negligible.

12. Interval Training

MedlinePlus

... before trying any type of interval training. Recent studies suggest, however, that interval training can be used safely for short periods even in individuals with heart disease. Also keep the risk of overuse injury in mind. If you rush into a strenuous workout before ...

13. Approximate forms of the pair-density-functional kinetic energy on the basis of a rigorous expression with coupling-constant integration

Higuchi, Katsuhiko; Higuchi, Masahiko

2014-12-01

We propose approximate kinetic energy (KE) functionals of the pair-density (PD)-functional theory on the basis of the rigorous expression with coupling-constant integration (RECCI) that has been recently derived [Phys. Rev. A 85, 062508 (2012), 10.1103/PhysRevA.85.062508]. These approximate functionals consist of noninteracting KE and correlation energy terms. The Thomas-Fermi-Weizsäcker functional is found to perform better as the noninteracting KE term than the Thomas-Fermi and Gaussian model functionals. It is also shown that the correlation energy term is indispensable for the reduction of the KE error, i.e., for reducing both the inappropriateness of the approximate functional and the error of the resultant PD. Concerning the correlation energy term, we further propose an approximate functional in addition to using the existing familiar functionals. This functional satisfies the scaling property of the KE functional, and yields a reasonable PD in the sense that the KE, electron-electron interaction, and potential energies tend to be improved while satisfying the virial theorem. The present results not only suggest the usefulness of the RECCI but also provide a guideline for the further improvement of RECCI-based KE functionals.

14. Fourth-grade children's dietary recall accuracy for energy intake at school meals differs by social desirability and body mass index percentile in a study concerning retention interval.

PubMed

Guinn, Caroline H; Baxter, Suzanne D; Royer, Julie A; Hardin, James W; Mackelprang, Alyssa J; Smith, Albert F

2010-05-01

Data from a study concerning retention interval and school-meal observation on children's dietary recalls were used to investigate relationships of social desirability score (SDS) and body mass index percentile (BMI%) to recall accuracy for energy for observed (n = 327) children, and to reported energy for observed and unobserved (n = 152) children. Report rates (reported/observed) correlated negatively with SDS and BMI%. Correspondence rates (correctly reported/observed) correlated negatively with SDS. Inflation ratios (overreported/observed) correlated negatively with BMI%. The relationship between reported energy and each of SDS and BMI% did not depend on observation status. Studies utilizing children's dietary recalls should assess SDS and BMI%.

15. Dual quantum electrodynamics: Dyon-dyon and charge-monopole scattering in a high-energy approximation

SciTech Connect

Gamberg, Leonard; Milton, Kimball A.

2000-04-01

We develop the quantum field theory of electron-point magnetic monopole interactions and, more generally, dyon-dyon interactions, based on the original string-dependent "nonlocal" action of Dirac and Schwinger. We demonstrate that a viable nonperturbative quantum field theoretic formulation can be constructed that results in a string independent cross section for monopole-electron and dyon-dyon scattering. Such calculations can be done only by using nonperturbative approximations such as the eikonal approximation and not by some mutilation of lowest-order perturbation theory. (c) 2000 The American Physical Society.

16. Single differential electron impact ionization cross sections in the binary-encounter-Bethe approximation for the low binding energy regime

Guerra, M.; Amaro, P.; Machado, J.; Santos, J. P.

2015-09-01

An analytical expression based on the binary-encounter-Bethe model for energy differential cross sections in the low binding energy regime is presented. Both the binary-encounter-Bethe model and its modified counterpart are extended to shells with very low binding energy by removing the constraints in the interference term of the Mott cross section, originally introduced by Kim et al. The influence of the ionic factor is also studied for such targets. All the binary-encounter-Bethe based models presented here are checked against experimental results of low binding energy targets, such as the total ionization cross sections of alkali metals. The energy differential cross sections for H and He, at several incident energies, are also compared to available experimental and theoretical values.

17. Interval neural networks

SciTech Connect

Patil, R.B.

1995-05-01

Traditional neural networks like multi-layered perceptrons (MLP) use example patterns, i.e., pairs of real-valued observation vectors (x, y), to approximate a function f(x) = y. To determine the parameters of the approximation, a special version of the gradient descent method called back-propagation is widely used. In many situations, observations of the input and output variables are not precise; instead, we usually have intervals of possible values. The imprecision could be due to the limited accuracy of the measuring instrument or could reflect genuine uncertainty in the observed variables. In such situations, input and output data consist of mixed data types: intervals and precise numbers. Function approximation in interval domains is considered in this paper. We discuss a modification of the classical backpropagation learning algorithm to interval domains. Results are presented with simple examples demonstrating a few properties of nonlinear interval mapping, such as noise resistance and finding sets of solutions to the function approximation problem.
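
As a hedged illustration of interval-valued function evaluation (forward propagation only, not the paper's modified backpropagation algorithm), interval arithmetic can be pushed through a linear layer and a monotone activation:

```python
import numpy as np

def interval_linear(lo, hi, W, b):
    """Propagate the box [lo, hi] through y = W @ x + b exactly:
    positive weights pair with like interval ends, negative with opposite."""
    W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
    return W_pos @ lo + W_neg @ hi + b, W_pos @ hi + W_neg @ lo + b

def interval_relu(lo, hi):
    """ReLU is monotone, so it maps interval ends to interval ends."""
    return np.maximum(lo, 0.0), np.maximum(hi, 0.0)

# Toy layer: 2 interval-valued inputs -> 2 hidden units.
W = np.array([[1.0, -2.0], [0.5, 1.0]])
b = np.array([0.1, -0.3])
lo, hi = np.array([0.0, 0.0]), np.array([1.0, 1.0])   # unit-box input

h_lo, h_hi = interval_relu(*interval_linear(lo, hi, W, b))
print(h_lo, h_hi)   # output intervals enclosing all point responses
```

Any precise input inside the box maps to an output inside [h_lo, h_hi], which is the basic soundness property an interval network relies on.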

18. Calculation of the Energy-Band Structure of the Kronig-Penney Model Using the Nearly-Free and Tightly-Bound-Electron Approximations

ERIC Educational Resources Information Center

Wetsel, Grover C., Jr.

1978-01-01

Calculates the energy-band structure of noninteracting electrons in a one-dimensional crystal using exact and approximate methods for a rectangular-well atomic potential. A comparison of the two solutions as a function of potential-well depth and ratio of lattice spacing to well width is presented. (Author/GA)
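
For the delta-function limit of the Kronig-Penney model, the band condition is |cos(αa) + P sin(αa)/(αa)| ≤ 1 with α = √(2mE)/ħ and lattice constant a. A short numerical scan (the dimensionless strength P below is an arbitrary illustrative choice) recovers the familiar band/gap structure:

```python
import numpy as np

# Kronig-Penney, delta-function limit: allowed energies satisfy
#   |cos(alpha*a) + P * sin(alpha*a) / (alpha*a)| <= 1.
P = 3.0 * np.pi / 2.0            # illustrative barrier strength

y = np.linspace(1e-6, 13.0, 100000)          # y = alpha * a, E ~ y**2
f = np.cos(y) + P * np.sin(y) / y
allowed = np.abs(f) <= 1.0

# Band edges: transitions between forbidden and allowed regions.
edges = np.flatnonzero(np.diff(allowed.astype(int)))
bands = [(y[i], y[j]) for i, j in zip(edges[::2], edges[1::2])]
for lo, hi in bands:
    print(f"allowed band: alpha*a in [{lo:.3f}, {hi:.3f}]")
```

Each band's upper edge falls at a multiple of π, reproducing the textbook result that the gaps open at the Brillouin-zone boundaries.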

20. Random-phase approximation correlation energies from Lanczos chains and an optimal basis set: theory and applications to the benzene dimer.

PubMed

Rocca, Dario

2014-05-14

A new ab initio approach is introduced to compute the correlation energy within the adiabatic connection fluctuation dissipation theorem in the random phase approximation. First, an optimally small basis set to represent the response functions is obtained by diagonalizing an approximate dielectric matrix containing the kinetic energy contribution only. Then, the Lanczos algorithm is used to compute the full dynamical dielectric matrix and the correlation energy. The convergence issues with respect to the number of empty states or the dimension of the basis set are avoided and the dynamical effects are easily taken into account. To demonstrate the accuracy and efficiency of this approach the binding curves for three different configurations of the benzene dimer are computed: T-shaped, sandwich, and slipped parallel.
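
The Lanczos algorithm referenced here reduces a large symmetric operator to a small tridiagonal matrix whose extremal eigenvalues converge rapidly. A generic sketch (a random test matrix, not the dielectric matrix of the paper):

```python
import numpy as np

def lanczos(A, v0, m):
    """m-step Lanczos: tridiagonal coefficients (alpha, beta) whose
    eigenvalues (Ritz values) approximate the extremal eigenvalues
    of the symmetric matrix A."""
    q = v0 / np.linalg.norm(v0)
    q_prev = np.zeros_like(q)
    alpha, beta = [], []
    b = 0.0
    for _ in range(m):
        w = A @ q - b * q_prev       # three-term recurrence
        a = q @ w
        w -= a * q
        b = np.linalg.norm(w)
        alpha.append(a)
        beta.append(b)
        q_prev, q = q, w / b
    return np.array(alpha), np.array(beta[:-1])

# Test matrix with a well-separated top eigenvalue (2.0).
rng = np.random.default_rng(0)
n = 300
evals = np.linspace(0.0, 1.0, n)
evals[-1] = 2.0
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = (Q * evals) @ Q.T                # symmetric, spectrum = evals

alpha, beta = lanczos(A, rng.standard_normal(n), 30)
T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
ritz = np.linalg.eigvalsh(T)
print(ritz[-1])                      # converges to the extremal eigenvalue 2.0
```

Thirty matrix-vector products suffice here because the top eigenvalue is well separated; this is the same economy that makes Lanczos chains attractive for large response matrices.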

1. Composition of primary cosmic rays at energies 10^15 to approximately 10^16 eV

NASA Technical Reports Server (NTRS)

Amenomori, M.; Konishi, E.; Hotta, N.; Mizutani, K.; Kasahara, K.; Kobayashi, T.; Mikumo, E.; Sato, K.; Yuda, T.; Mito, I.

1985-01-01

The ΣEγ spectrum at 1-5 × 10^3 TeV observed at Mt. Fuji suggests that the flux of primary protons at 10^15-10^16 eV is lower by a factor of 2-3 than a simple extrapolation from lower energies; the integral proton spectrum tends to be steeper than E^-1.7 around 10^14 eV, and the spectral index becomes approximately 2.0 around 10^15 eV. If the total flux of primary particles has no steepening up to approximately 10^15 eV, then the fraction of primary protons in the total flux should be approximately 20%, in contrast to approximately 45% at lower energies.

2. Few-particles generation channels in inelastic hadron-nuclear interactions at energy approximately equals 400 GeV

NASA Technical Reports Server (NTRS)

Tsomaya, P. V.

1985-01-01

The behavior of the few-particle generation channels in interactions of hadrons with nuclei of CH2, Al, Cu and Pb at a mean energy of 400 GeV was investigated. The values of the coherent production cross sections βcoh for the investigated nuclei are given. The A-dependence of coherent and noncoherent events is investigated. The results are compared with simulations based on the additive quark model (AQM).

3. Neutrino and antineutrino CCQE scattering in the SuperScaling Approximation from MiniBooNE to NOMAD energies

Megias, G. D.; Amaro, J. E.; Barbaro, M. B.; Caballero, J. A.; Donnelly, T. W.

2013-08-01

We compare the predictions of the SuperScaling model for charged-current quasielastic muonic neutrino and antineutrino scattering from ¹²C with experimental data spanning an energy range up to 100 GeV. We discuss the sensitivity of the results to different parametrizations of the nucleon vector and axial-vector form factors. Finally, we show the differences between electron and muon (anti)neutrino cross sections relevant for the νSTORM facility.

4. Convergent sum of gradient expansion of the kinetic-energy density functional up to the sixth order term using Padé approximant

Sergeev, A.; Alharbi, F. H.; Jovanovic, R.; Kais, S.

2016-04-01

The gradient expansion of the kinetic energy density functional, when applied to atoms or finite systems, usually grossly overestimates the energy in the fourth order and generally diverges in the sixth order. We avoid the divergence of the integral by replacing the asymptotic series including the sixth order term in the integrand by a rational function. Padé approximants show moderate improvements in accuracy in comparison with partial sums of the series. The results are discussed for atoms and Hooke’s law model for two-electron atoms.
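
An [L/M] Padé approximant is built from the first L+M+1 series coefficients by a small linear solve for the denominator. The generic sketch below (using the exponential series as a stand-in, not the kinetic-energy gradient expansion itself) shows the typical accuracy gain over the partial sum:

```python
import numpy as np
from math import e, factorial

def pade(c, L, M):
    """[L/M] Pade approximant from Taylor coefficients c[0..L+M].

    Returns numerator coeffs a (length L+1) and denominator coeffs b
    (length M+1, normalized so b[0] = 1), lowest order first.
    """
    c = np.asarray(c, dtype=float)
    # Denominator: sum_{j=1..M} b_j * c_{L+k-j} = -c_{L+k},  k = 1..M
    A = np.array([[c[L + k - j] if L + k - j >= 0 else 0.0
                   for j in range(1, M + 1)] for k in range(1, M + 1)])
    b = np.concatenate(([1.0], np.linalg.solve(A, -c[L + 1:L + M + 1])))
    # Numerator follows by matching the low-order terms.
    a = np.array([sum(b[j] * c[i - j] for j in range(min(i, M) + 1))
                  for i in range(L + 1)])
    return a, b

# Stand-in series: exp(x), Taylor coefficients 1/k!
c = [1.0 / factorial(k) for k in range(5)]
a, b = pade(c, 2, 2)

x = 1.0
pade_val = np.polyval(a[::-1], x) / np.polyval(b[::-1], x)
taylor_val = np.polyval(np.array(c)[::-1], x)
print(pade_val, taylor_val, e)   # Pade is closer to e than the partial sum
```

For exp(x) this reproduces the known [2/2] approximant (1 + x/2 + x²/12)/(1 - x/2 + x²/12); replacing a divergent partial sum by such a rational function is the same device applied to the sixth-order gradient expansion in the abstract.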

5. ENERGY CONSERVATION AND GRAVITY WAVES IN SOUND-PROOF TREATMENTS OF STELLAR INTERIORS. PART I. ANELASTIC APPROXIMATIONS

SciTech Connect

Brown, Benjamin P.; Zweibel, Ellen G.; Vasil, Geoffrey M.

2012-09-10

Typical flows in stellar interiors are much slower than the speed of sound. To follow the slow evolution of subsonic motions, various sound-proof equations are in wide use, particularly in stellar astrophysical fluid dynamics. These low-Mach number equations include the anelastic equations. Generally, these equations are valid in nearly adiabatically stratified regions like stellar convection zones, but may not be valid in the sub-adiabatic, stably stratified stellar radiative interiors. Understanding the coupling between the convection zone and the radiative interior is a problem of central interest and may have strong implications for solar and stellar dynamo theories, as the interface between the two, called the tachocline in the Sun, plays a crucial role in many solar dynamo models. Here, we study the properties of gravity waves in stably stratified atmospheres. In particular, we explore how gravity waves are handled in various sound-proof equations. We find that some anelastic treatments fail to conserve energy in stably stratified atmospheres, instead conserving pseudo-energies that depend on the stratification, and we demonstrate this numerically. One anelastic equation set does conserve energy in all atmospheres and we provide recommendations for converting low-Mach number anelastic codes to this set of equations.

6. Ensemble v-representable ab initio density-functional calculation of energy and spin in atoms: A test of exchange-correlation approximations

SciTech Connect

Kraisler, Eli; Makov, Guy; Kelson, Itzhak

2010-10-15

The total energies and the spin states for atoms and their first ions with Z=1-86 are calculated within the local spin-density approximation (LSDA) and the generalized-gradient approximation (GGA) to the exchange-correlation (xc) energy in density-functional theory. Atoms and ions for which the ground-state density is not pure-state v-representable are treated as ensemble v-representable with fractional occupations of the Kohn-Sham system. A recently developed algorithm which searches over ensemble v-representable densities [E. Kraisler et al., Phys. Rev. A 80, 032115 (2009)] is employed in the calculations. It is found that for many atoms, the ionization energies obtained with the GGA are only modestly improved with respect to experimental data, as compared to the LSDA. However, even in those groups of atoms where the improvement is systematic, there remains a non-negligible difference with respect to the experiment. The ab initio electronic configuration in the Kohn-Sham reference system does not always equal the configuration obtained from the spectroscopic term within the independent-electron approximation. It was shown that use of the latter configuration can prevent the energy-minimization process from converging to the global minimum, e.g., in lanthanides. The spin values calculated ab initio fit the experiment for most atoms and are almost unaffected by the choice of the xc functional. Among the systems with incorrectly obtained spin, there exist some cases (e.g., V, Pt) for which the result is found to be stable with respect to small variations in the xc approximation. These findings suggest a necessity for a significant modification of the exchange-correlation functional, probably of a nonlocal nature, to accurately describe such systems.

7. Comparative assessment of density functional methods for evaluating essential parameters to simulate SERS spectra within the excited state energy gradient approximation

2016-05-01

The challenges of reproducing and interpreting the resonance Raman properties of molecules interacting with metal clusters have prompted the present research initiative. Resonance Raman spectra based on the time-dependent gradient approximation are examined in the framework of density functional theory using different methods for representing the exchange-correlation functional. In this work the performance of different XC functionals in predicting ground state properties, excited state energies, and gradients is compared and discussed. Resonance Raman properties based on the time-dependent gradient approximation for the strongly low-lying charge transfer states are calculated and compared for different methods. We draw the following conclusions: (1) for calculating the binding energy and ground state geometry, dispersion-corrected functionals give the best performance in comparison to ab initio calculations, (2) GGA and meta GGA functionals give good accuracy in calculating vibrational frequencies, (3) excited state energies determined by hybrid and range-separated hybrid functionals are in good agreement with EOM-CCSD calculations, and (4) in calculating resonance Raman properties GGA functionals give good and reasonable performance in comparison to experiment; however, calculating the excited state gradient by using the hybrid functional on the Hessian of the GGA improves the results of the hybrid functional significantly. Finally, we conclude that the agreement of charge-transfer surface enhanced resonance Raman spectra with experiment is improved significantly by using the excited state gradient approximation.

8. On the design of high-speed energy-efficient successive-approximation logic for asynchronous SAR ADCs

Yang, Jiaqi; Li, Ting; Yu, Mingyuan; Zhang, Shuangshuang; Lin, Fujiang; He, Lin

2017-08-01

This paper analyzes the power consumption and delay mechanisms of the successive-approximation (SA) logic of a typical asynchronous SAR ADC, and provides strategies to reduce both of them. Following these strategies, a unique direct-pass SA logic is proposed based on a full-swing once-triggered DFF and a self-locking tri-state gate. The unnecessary internal switching power of a typical TSPC DFF, which is commonly used in the SA logic, is avoided. The delay of the ready detector as well as the sequencer is removed from the critical path. A prototype SAR ADC based on the proposed SA logic is fabricated in 130 nm CMOS. It achieves a peak SNDR of 56.3 dB at 1.2 V supply and 65 MS/s sampling rate, and has a total power consumption of 555 μW, while the digital part consumes only 203 μW. Project supported by the National Natural Science Foundation of China (Nos. 61204033, 61331015), the Fundamental Research Funds for the Central Universities (No. WK2100230015), and the Funds of Science and Technology on Analog Integrated Circuit Laboratory (No. 9140C090111150C09041).
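For orientation, the successive-approximation principle that the SA logic sequences is a bit-per-cycle binary search. A minimal idealized sketch (the function name and the ideal comparator/DAC model are our assumptions, not the paper's circuit):

```python
def sar_convert(vin, vref, bits=8):
    """Idealized successive-approximation register conversion:
    resolve one bit per cycle, MSB first, by comparing the input
    against the DAC voltage implied by the trial code."""
    code = 0
    for i in reversed(range(bits)):
        trial = code | (1 << i)              # tentatively set bit i
        if trial * vref / (1 << bits) <= vin:
            code = trial                     # comparator says keep it
    return code
```

Each of the `bits` comparator decisions must propagate through the SA logic before the next bit can be tried, which is why the DFF and ready-detector delays attacked in the paper sit directly on the conversion-rate critical path.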

9. The Vertical-current Approximation Nonlinear Force-free Field Code—Description, Performance Tests, and Measurements of Magnetic Energies Dissipated in Solar Flares

Aschwanden, Markus J.

2016-06-01

In this work we provide an updated description of the Vertical-Current Approximation Nonlinear Force-Free Field (VCA-NLFFF) code, which is designed to measure the evolution of the potential, non-potential, free energies, and the dissipated magnetic energies during solar flares. This code provides a complementary and alternative method to existing traditional NLFFF codes. The chief advantages of the VCA-NLFFF code over traditional NLFFF codes are the circumvention of the unrealistic assumption of a force-free photosphere in the magnetic field extrapolation method, the capability to minimize the misalignment angles between observed coronal loops (or chromospheric fibril structures) and theoretical model field lines, as well as computational speed. In performance tests of the VCA-NLFFF code, by comparing with the NLFFF code of Wiegelmann, we find agreement in the potential, non-potential, and free energy within a factor of ≲ 1.3, but the Wiegelmann code yields in the average a factor of 2 lower flare energies. The VCA-NLFFF code is found to detect decreases in flare energies in most X, M, and C-class flares. The successful detection of energy decreases during a variety of flares with the VCA-NLFFF code indicates that current-driven twisting and untwisting of the magnetic field is an adequate model to quantify the storage of magnetic energies in active regions and their dissipation during flares. The VCA-NLFFF code is also publicly available in the Solar SoftWare.

10. Approximation algorithms

PubMed Central

Schulz, Andreas S.; Shmoys, David B.; Williamson, David P.

1997-01-01

Increasing global competition, rapidly changing markets, and greater consumer awareness have altered the way in which corporations do business. To become more efficient, many industries have sought to model some operational aspects by gigantic optimization problems. It is not atypical to encounter models that capture 10^6 separate “yes” or “no” decisions to be made. Although one could, in principle, try all 2^(10^6) possible solutions to find the optimal one, such a method would be impractically slow. Unfortunately, for most of these models, no algorithms are known that find optimal solutions with reasonable computation times. Typically, industry must rely on solutions of unguaranteed quality that are constructed in an ad hoc manner. Fortunately, for some of these models there are good approximation algorithms: algorithms that produce solutions quickly that are provably close to optimal. Over the past 6 years, there has been a sequence of major breakthroughs in our understanding of the design of approximation algorithms and of limits to obtaining such performance guarantees; this area has been one of the most flourishing areas of discrete mathematics and theoretical computer science. PMID:9370525
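As a concrete instance of such a performance guarantee (our illustrative example; the abstract names no specific algorithm), the classic greedy-matching heuristic for minimum vertex cover is provably within a factor of 2 of optimal:

```python
def vertex_cover_2approx(edges):
    """2-approximation for minimum vertex cover: repeatedly take an
    uncovered edge and add BOTH of its endpoints. The chosen edges form
    a matching, and any cover must use at least one endpoint of each
    matching edge, so |cover| <= 2 * OPT."""
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            cover.update((u, v))
    return cover
```

The guarantee is unconditional: however adversarial the input graph, the returned cover is never more than twice the (possibly intractable-to-find) optimum.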

11. Introducing the mean field approximation to CDFT/MMpol method: Statistically converged equilibrium and nonequilibrium free energy calculation for electron transfer reactions in condensed phases

Nakano, Hiroshi; Sato, Hirofumi

2017-04-01

A new theoretical method to study electron transfer reactions in condensed phases is proposed by introducing the mean-field approximation into the constrained density functional theory/molecular mechanical method with a polarizable force field (CDFT/MMpol). The method enables us to efficiently calculate the statistically converged equilibrium and nonequilibrium free energies for diabatic states in an electron transfer reaction by virtue of the mean field approximation that drastically reduces the number of CDFT calculations. We apply the method to the system of a formanilide-anthraquinone dyad in dimethylsulfoxide, in which charge recombination and cis-trans isomerization reactions can take place, previously studied by the CDFT/MMpol method. Quantitative agreement of the driving force and the reorganization energy between our results and those from the CDFT/MMpol calculation and the experimental estimates supports the utility of our method. The calculated nonequilibrium free energy is analyzed by its decomposition into several contributions such as those from the averaged solute-solvent electrostatic interactions and the explicit solvent electronic polarization. The former contribution is qualitatively well described by a model composed of a coarse-grained dyad in a solution in the linear response regime. The latter contribution reduces the reorganization energy by more than 10 kcal/mol.

12. Cross-section activation measurement for U-238 through protons and deuterons in energy interval 10-14 MeV

SciTech Connect

Guzhovskii, B.Y.; Abramovich, S.N.; Zvenigorodskii, A.G.

1995-10-01

Results of cross-section measurements are presented for the nuclear reactions ²³⁸U(p,n)²³⁸Np, ²³⁸U(d,2n)²³⁸Np, ²³⁸U(d,t)²³⁷U, ²³⁸U(d,p)²³⁹U, and ²³⁸U(d,n)²³⁹Np. The projectile energy interval was 10-14 MeV. The activation method was used to measure the cross-sections. The β- and γ-activity was registered using a plastic scintillation detector and a Ge(Li) detector.

13. Shortening the retention interval of 24-hour dietary recalls increases fourth-grade children's accuracy for reporting energy and macronutrient intake at school meals.

PubMed

Baxter, Suzanne Domel; Guinn, Caroline H; Royer, Julie A; Hardin, James W; Smith, Albert F

2010-08-01

Accurate information about children's intake is crucial for national nutrition policy and for research and clinical activities. To analyze accuracy for reporting energy and nutrients, most validation studies utilize the "conventional approach," which was not designed to capture errors of reported foods and amounts. The "reporting-error-sensitive approach" captures errors of reported foods and amounts. To extend results to energy and macronutrients for a validation study concerning retention interval (elapsed time between to-be-reported meals and the interview) and accuracy for reporting school-meal intake, the conventional and reporting-error-sensitive approaches were compared. Fourth-grade children (n=374) were observed eating two school meals and interviewed to obtain a 24-hour recall using one of six interview conditions from crossing two target periods (prior 24 hours and previous day) with three interview times (morning, afternoon, and evening). Data were collected in one district during three school years (2004-2005, 2005-2006, and 2006-2007). Report rates (reported/observed), correspondence rates (correctly reported/observed), and inflation ratios (intruded/observed) were calculated for energy and macronutrients. For each outcome measure, mixed-model analysis of variance was conducted with target period, interview time, their interaction, and sex in the model; results were adjusted for school year and interviewer. With the conventional approach, report rates for energy and macronutrients did not differ by target period, interview time, their interaction, or sex. With the reporting-error-sensitive approach, correspondence rates for energy and macronutrients differed by target period (four P values <0.0001) and the target period by interview-time interaction (four P values <0.0001); inflation ratios for energy and macronutrients differed by target period (four P values <0.0001), and inflation ratios for energy and carbohydrate

14. Second-order structural phase transitions, free energy curvature, and temperature-dependent anharmonic phonons in the self-consistent harmonic approximation: Theory and stochastic implementation

Bianco, Raffaello; Errea, Ion; Paulatto, Lorenzo; Calandra, Matteo; Mauri, Francesco

2017-07-01

The self-consistent harmonic approximation is an effective harmonic theory to calculate the free energy of systems with strongly anharmonic atomic vibrations, and its stochastic implementation has proved to be an efficient method to study, from first-principles, the anharmonic properties of solids. The free energy as a function of average atomic positions (centroids) can be used to study quantum or thermal lattice instability. In particular the centroids are order parameters in second-order structural phase transitions such as, e.g., charge-density-waves or ferroelectric instabilities. According to Landau's theory, the knowledge of the second derivative of the free energy (i.e., the curvature) with respect to the centroids in a high-symmetry configuration allows the identification of the phase-transition and of the instability modes. In this work we derive the exact analytic formula for the second derivative of the free energy in the self-consistent harmonic approximation for a generic atomic configuration. The analytic derivative is expressed in terms of the atomic displacements and forces in a form that can be evaluated by a stochastic technique using importance sampling. Our approach is particularly suitable for applications based on first-principles density-functional-theory calculations, where the forces on atoms can be obtained with a negligible computational effort compared to total energy determination. Finally, we propose a dynamical extension of the theory to calculate spectral properties of strongly anharmonic phonons, as probed by inelastic scattering processes. We illustrate our method with a numerical application on a toy model that mimics the ferroelectric transition in rock-salt crystals such as SnTe or GeTe.
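In Landau-theory terms, the curvature criterion the abstract refers to can be written as follows (our notation, with the centroids R as order parameters):

```latex
F(\mathbf{R}) \;\approx\; F(\mathbf{R}_0)
  + \tfrac{1}{2}\,(\mathbf{R}-\mathbf{R}_0)^{\mathsf{T}}\, D\,(\mathbf{R}-\mathbf{R}_0),
\qquad
D_{ab} \;=\; \left.\frac{\partial^{2} F}{\partial R_a\,\partial R_b}\right|_{\mathbf{R}_0}.
```

The high-symmetry phase is locally stable while every eigenvalue of D is positive; a second-order transition occurs where the lowest eigenvalue crosses zero, and the corresponding eigenvector identifies the instability mode. The paper's contribution is an exact stochastic estimator for D within the self-consistent harmonic approximation.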

15. Distributed memory parallel implementation of energies and gradients for second-order Møller-Plesset perturbation theory with the resolution-of-the-identity approximation.

PubMed

Hättig, Christof; Hellweg, Arnim; Köhn, Andreas

2006-03-14

We present a parallel implementation of second-order Møller-Plesset perturbation theory with the resolution-of-the-identity approximation (RI-MP2). The implementation is based on a recent improved sequential implementation of RI-MP2 within the Turbomole program package and employs the message passing interface (MPI) standard for communication between distributed memory nodes. The parallel implementation extends the applicability of canonical MP2 to considerably larger systems. Examples are presented for full geometry optimizations with up to 60 atoms and 3300 basis functions and MP2 energy calculations with more than 200 atoms and 7000 basis functions.
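The distributed-memory parallelism described here ultimately reduces to assigning work units (e.g., occupied-orbital pairs in the MP2 energy sum) to MPI ranks. A minimal sketch of one such static assignment (round-robin is our assumption for illustration, not necessarily Turbomole's scheme):

```python
def pairs_for_rank(n_occ, rank, size):
    """Static round-robin distribution of occupied orbital pairs (i >= j)
    across `size` MPI ranks. Each rank computes only its own slice of the
    MP2 pair-energy sum; the partial sums are reduced at the end."""
    pairs = [(i, j) for i in range(n_occ) for j in range(i + 1)]
    return pairs[rank::size]
```

Because the pair list is generated identically on every rank, no communication is needed to agree on the partition; only the final energy reduction crosses the network.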

16. Detectability of auditory signals presented without defined observation intervals

NASA Technical Reports Server (NTRS)

Watson, C. S.; Nichols, T. L.

1976-01-01

Ability to detect tones in noise was measured without defined observation intervals. Latency density functions were estimated for the first response following a signal and, separately, for the first response following randomly distributed instances of background noise. Detection performance was measured by the maximum separation between the cumulative latency density functions for signal-plus-noise and for noise alone. Values of the index of detectability, estimated by this procedure, were approximately those obtained with a 2-dB weaker signal and defined observation intervals. Simulation of defined- and non-defined-interval tasks with an energy detector showed that this device performs very similarly to the human listener in both cases.

18. Modified feed-forward neural network structures and combined-function-derivative approximations incorporating exchange symmetry for potential energy surface fitting.

PubMed

Nguyen, Hieu T T; Le, Hung M

2012-05-10

The classical interchange (permutation) of atoms of similar identity does not have an effect on the overall potential energy. In this study, we present feed-forward neural network structures that provide permutation symmetry to the potential energy surfaces of molecules. The new feed-forward neural network structures are employed to fit the potential energy surfaces for two illustrative molecules, H(2)O and ClOOCl. Modifications are made to describe the symmetric interchange (permutation) of atoms of similar identity (or mathematically, the permutation of symmetric input parameters). The combined-function-derivative approximation algorithm (J. Chem. Phys. 2009, 130, 134101) is also implemented to fit the neural-network potential energy surfaces accurately. The combination of our symmetric neural networks and the function-derivative fitting effectively produces PES fits using fewer training data points. For H(2)O, only 282 configurations are employed as the training set; the testing root-mean-squared and mean-absolute energy errors are respectively reported as 0.0103 eV (0.236 kcal/mol) and 0.0078 eV (0.179 kcal/mol). In the ClOOCl case, 1693 configurations are required to construct the training set; the root-mean-squared and mean-absolute energy errors for the ClOOCl testing set are 0.0409 eV (0.943 kcal/mol) and 0.0269 eV (0.620 kcal/mol), respectively. Overall, we find good agreement between ab initio and NN prediction in terms of energy and gradient errors, and conclude that the new feed-forward neural-network models describe the molecules with excellent accuracy.
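One standard way to build the interchange symmetry into the network inputs (our illustrative construction; the paper's exact architecture differs in detail) is to feed symmetric functions of the interchangeable coordinates rather than the raw coordinates:

```python
def symmetric_inputs(r1, r2, theta):
    """For H2O the two O-H bond lengths r1, r2 are interchangeable.
    Feeding the network (r1 + r2, r1 * r2, theta) instead of
    (r1, r2, theta) makes any PES fitted on these features exactly
    invariant under exchange of the two hydrogens."""
    return (r1 + r2, r1 * r2, theta)
```

Because the invariance is exact by construction, the network never has to learn it from data, which is one reason symmetry-adapted fits need fewer training configurations.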

19. Estimation of the Breakup Cross-Sections in 6He + 12C Reaction Within High-Energy Approximation and Microscopic Optical Potential

Lukyanov, V. K.; Zemlyanaya, E. V.; Lukyanov, K. V.

The breakup cross-sections in the reaction 6He + 12C are calculated at about 40 MeV/nucleon using the high-energy approximation (HEA) and microscopic optical potentials (OP) for the interaction of the projectile fragments 4He and 2n with the target nucleus 12C. Treating the di-neutron h = 2n as a single particle, the h-α relative-motion wave function is chosen so as to reproduce both the separation energy of h in 6He and the rms radius of the latter. The stripping and absorption total cross-sections are calculated, and their sum is compared with the total reaction cross-section obtained within a double-folding microscopic OP for 6He + 12C scattering. It is concluded that the breakup cross-sections contribute about 50% of the total reaction cross-section.

20. Algebraic cluster approach to calculations of low-energy reactions. n⁴He scattering with realistic NN interactions. Inapplicability of the no-polarization approximation

SciTech Connect

Ustinin, M.N.; Efros, V.D. )

1989-05-01

The results of the first microscopic calculation of reactions by the resonating-group method employing realistic NN interactions are presented for n⁴He scattering with J^π = 1/2⁻ and J^π = 3/2⁻ in the region of the low-energy resonances of ⁵He. In the framework of the no-polarization approximation, convergence of the results was achieved. It was found that in the no-polarization approximation the realistic NN forces are, in contrast to conventional "effective" NN forces, too weak to reproduce the above-mentioned resonances of ⁵He. The cause of this fact is the ineffectiveness of the tensor forces. In order to reproduce the experimental phase shifts in the no-polarization approximation, one would have to strengthen the non-splitting components of the realistic NN forces by a factor of approximately 1.5. It is shown that the main contribution to the J splitting of the phase shifts is given by the ls forces, whereas the relative contribution of the tensor forces to the splitting is quite small. Upon appropriate strengthening of their non-splitting components, the realistic NN forces ensure the correct magnitude of the splitting.

1. On the piecewise convex or concave nature of ground state energy as a function of fractional number of electrons for approximate density functionals.

PubMed

Li, Chen; Yang, Weitao

2017-02-21

We provide a rigorous proof that the Hartree-Fock energy, as a function of the fractional electron number, E(N), is piecewise concave. Moreover, for semi-local density functionals, we show that the piecewise convexity of the E(N) curve, as stated in the literature, is not generally true for all fractions. By an analysis based on exchange-only local density approximation and careful examination of the E(N) curve, we find that for some systems there exists a very small concave region, corresponding to adding a small fraction of electrons to the integer system, while the remaining E(N) curve is convex. Several numerical examples are provided as verification. Although the E(N) curve is not convex everywhere in these systems, the previous conclusions on the consequences of the delocalization error in the commonly used density functional approximations, in particular the underestimation of the ionization potential, the overestimation of the electron affinity, and other related issues, remain unchanged. This suggests that instead of using the term convexity, a modified and more rigorous description for the delocalization error is that the E(N) curve lies below the straight line segment across the neighboring integer points for these approximate functionals.
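For context, the benchmark against which convexity or concavity is judged is the exact-functional condition (the well-known piecewise-linearity result of Perdew, Parr, Levy, and Balduz, not restated in the abstract):

```latex
E_{\mathrm{exact}}(N_0+\omega) \;=\; (1-\omega)\,E(N_0) \;+\; \omega\,E(N_0+1),
\qquad 0 \le \omega \le 1 .
```

An approximate functional whose E(N) dips below this chord exhibits delocalization error regardless of whether the curve is convex at every point, which is exactly the reformulation the abstract's final sentence advocates.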

2. Accurate and efficient representation of intra­molecular energy in ab initio generation of crystal structures. I. Adaptive local approximate models

PubMed Central

Sugden, Isaac; Adjiman, Claire S.; Pantelides, Constantinos C.

2016-01-01

The global search stage of crystal structure prediction (CSP) methods requires a fine balance between accuracy and computational cost, particularly for the study of large flexible molecules. A major improvement in the accuracy and cost of the intramolecular energy function used in the CrystalPredictor II [Habgood et al. (2015). J. Chem. Theory Comput. 11, 1957–1969] program is presented, where the most efficient use of computational effort is ensured via the use of adaptive local approximate model (LAM) placement. The entire search space of the relevant molecule’s conformations is initially evaluated using a coarse, low accuracy grid. Additional LAM points are then placed at appropriate points determined via an automated process, aiming to minimize the computational effort expended in high-energy regions whilst maximizing the accuracy in low-energy regions. As the size, complexity and flexibility of molecules increase, the reduction in computational cost becomes marked. This improvement is illustrated with energy calculations for benzoic acid and the ROY molecule, and a CSP study of molecule (XXVI) from the sixth blind test [Reilly et al. (2016). Acta Cryst. B72, 439–459], which is challenging due to its size and flexibility. Its known experimental form is successfully predicted as the global minimum. The computational cost of the study is tractable without the need to make unphysical simplifying assumptions. PMID:27910837

3. Generalized Gradient Approximations of the Noninteracting Kinetic Energy from the Semiclassical Atom Theory: Rationalization of the Accuracy of the Frozen Density Embedding Theory for Nonbonded Interactions.

PubMed

Laricchia, S; Fabiano, E; Constantin, L A; Della Sala, F

2011-08-09

We present a new class of noninteracting kinetic energy (KE) functionals, derived from the semiclassical-atom theory. These functionals are constructed using the link between exchange and kinetic energies and employ a generalized gradient approximation (GGA) for the enhancement factor, namely, the Perdew-Burke-Ernzerhof (PBE) one. Two of them, named APBEK and revAPBEK, recover in the slowly varying density limit the modified second-order gradient (MGE2) expansion of the KE, which is valid for a neutral atom with a large number of electrons. APBEK contains no empirical parameters, while revAPBEK has one empirical parameter derived from exchange energies, which leads to a higher degree of nonlocality. The other two functionals, APBEKint and revAPBEKint, modify the APBEK and revAPBEK enhancement factors, respectively, to recover the second-order gradient expansion (GE2) of the homogeneous electron gas. We first benchmarked the total KE of atoms/ions and jellium spheres/surfaces: we found that functionals based on the MGE2 are as accurate as the current state-of-the-art KE functionals, containing several empirical parameters. Then, we verified the accuracy of these new functionals in the context of the frozen density embedding (FDE) theory. We benchmarked 20 systems with nonbonded interactions, and we considered embedding errors in the energy and density. We found that all of the PBE-like functionals give accurate and similar embedded densities, but the revAPBEK and revAPBEKint functionals have a significant superior accuracy for the embedded energy, outperforming the current state-of-the-art GGA approaches. While the revAPBEK functional is more accurate than revAPBEKint, APBEKint is better than APBEK. To rationalize this performance, we introduce the reduced-gradient decomposition of the nonadditive kinetic energy, and we discuss how systems with different interactions can be described with the same functional form.

4. Water 16-mers and hexamers: assessment of the three-body and electrostatically embedded many-body approximations of the correlation energy or the nonlocal energy as ways to include cooperative effects.

PubMed

Qi, Helena W; Leverentz, Hannah R; Truhlar, Donald G

2013-05-30

This work presents a new fragment method, the electrostatically embedded many-body expansion of the nonlocal energy (EE-MB-NE), and shows that it, along with the previously proposed electrostatically embedded many-body expansion of the correlation energy (EE-MB-CE), produces accurate results for large systems at the level of CCSD(T) coupled cluster theory. We primarily study water 16-mers, but we also test the EE-MB-CE method on water hexamers. We analyze the distributions of two-body and three-body terms to show why the many-body expansion of the electrostatically embedded correlation energy converges faster than the many-body expansion of the entire electrostatically embedded interaction potential. The average magnitude of the dimer contributions to the pairwise additive (PA) term of the correlation energy (which neglects cooperative effects) is only one-half of that of the average dimer contribution to the PA term of the expansion of the total energy; this explains why the mean unsigned error (MUE) of the EE-PA-CE approximation is only one-half of that of the EE-PA approximation. Similarly, the average magnitude of the trimer contributions to the three-body (3B) term of the EE-3B-CE approximation is only one-fourth of that of the EE-3B approximation, and the MUE of the EE-3B-CE approximation is one-fourth that of the EE-3B approximation. Finally, we test the efficacy of two- and three-body density functional corrections. One such density functional correction method, the new EE-PA-NE method, with the OLYP or the OHLYP density functional (where the OHLYP functional is the OptX exchange functional combined with the LYP correlation functional multiplied by 0.5), has the best performance-to-price ratio of any method whose computational cost scales as the third power of the number of monomers and is competitive in accuracy in the tests presented here with even the electrostatically embedded three-body approximation.
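The bookkeeping behind any many-body expansion is the same whatever quantity is expanded (total energy, correlation energy, or nonlocal energy): each cluster contributes its energy minus all previously counted lower-order contributions. A bare sketch of that truncation, without the electrostatic embedding (function and parameter names are ours, for illustration):

```python
from itertools import combinations

def many_body_energy(fragments, energy_fn, max_order=3):
    """Truncated many-body expansion: E ~ sum of 1-body energies
    + 2-body corrections (+ 3-body corrections, ...).
    energy_fn(list_of_monomers) returns the energy of that cluster."""
    n = len(fragments)
    contrib = {(): 0.0}   # correction of each cluster, keyed by index tuple
    total = 0.0
    for order in range(1, max_order + 1):
        for subset in combinations(range(n), order):
            raw = energy_fn([fragments[i] for i in subset])
            # subtract every strictly smaller cluster's correction
            corr = raw - sum(contrib[s] for s in contrib
                             if set(s) < set(subset))
            contrib[subset] = corr
            total += corr
    return total
```

If the true energy is strictly pairwise additive, truncation at `max_order=2` is already exact; cooperative (3-body and higher) effects show up as the residual the higher orders must recover.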

5. Laplacian-Level Kinetic Energy Approximations Based on the Fourth-Order Gradient Expansion: Global Assessment and Application to the Subsystem Formulation of Density Functional Theory.

PubMed

Laricchia, Savio; Constantin, Lucian A; Fabiano, Eduardo; Della Sala, Fabio

2014-01-14

We tested Laplacian-level meta-generalized gradient approximation (meta-GGA) noninteracting kinetic energy functionals based on the fourth-order gradient expansion (GE4). We considered several well-known Laplacian-level meta-GGAs from the literature (bare GE4, modified GE4, and the MGGA functional of Perdew and Constantin (Phys. Rev. B 2007,75, 155109)), as well as two newly designed Laplacian-level kinetic energy functionals (L0.4 and L0.6). First, a general assessment of the different functionals is performed to test them for model systems (one-electron densities, Hooke's atom, and different jellium systems) and atomic and molecular kinetic energies as well as for their behavior with respect to density-scaling transformations. Finally, we assessed, for the first time, the performance of the different functionals for subsystem density functional theory (DFT) calculations on noncovalently interacting systems. We found that the different Laplacian-level meta-GGA kinetic functionals may improve the description of different properties of electronic systems, but no clear overall advantage is found over the best GGA functionals. Concerning the subsystem DFT calculations, the here-proposed L0.4 and L0.6 kinetic energy functionals are competitive with state-of-the-art GGAs, whereas all other Laplacian-level functionals fail badly. The performance of the Laplacian-level functionals is rationalized thanks to a two-dimensional reduced-gradient and reduced-Laplacian decomposition of the nonadditive kinetic energy density.

6. Momentum dependence of the electron-phonon coupling and self-energy effects in superconducting YBa2Cu3O7 within the local density approximation.

PubMed

Heid, Rolf; Bohnen, Klaus-Peter; Zeyher, Roland; Manske, Dirk

2008-04-04

Using the local density approximation and a realistic phonon spectrum we determine the momentum and frequency dependence of α²F(k,ω) in YBa₂Cu₃O₇ for the bonding, antibonding, and chain bands. The resulting self-energy Σ is rather small near the Fermi surface. For instance, for the antibonding band the maximum of Re Σ as a function of frequency is about 7 meV at the nodal point in the normal state, and the ratio of bare and renormalized Fermi velocities is 1.18. These values are a factor of 3-5 too small compared to experiment, showing that only a small part of Σ can be attributed to phonons. Furthermore, the frequency dependence of the renormalization factor Z(k,ω) is smooth and has no anomalies at the observed kink frequencies, which means that phonons cannot produce well-pronounced kinks in stoichiometric YBa₂Cu₃O₇, at least within the local density approximation.

7. Minimax confidence intervals in geomagnetism

NASA Technical Reports Server (NTRS)

Stark, Philip B.

1992-01-01

The present paper uses theory of Donoho (1989) to find lower bounds on the lengths of optimally short fixed-length confidence intervals (minimax confidence intervals) for Gauss coefficients of the field of degree 1-12 using the heat flow constraint. The bounds on optimal minimax intervals are about 40 percent shorter than Backus' intervals: no procedure for producing fixed-length confidence intervals, linear or nonlinear, can give intervals shorter than about 60 percent the length of Backus' in this problem. While both methods rigorously account for the fact that core field models are infinite-dimensional, the application of the techniques to the geomagnetic problem involves approximations and counterfactual assumptions about the data errors, and so these results are likely to be extremely optimistic estimates of the actual uncertainty in Gauss coefficients.

9. Practical approximation of the non-adiabatic coupling terms for same-symmetry interstate crossings by using adiabatic potential energies only

Baeck, Kyoung Koo; An, Heesun

2017-02-01

A very simple equation, F_ij^app = (1/2)[(∂²(V_i^a − V_j^a)/∂Q²)/(V_i^a − V_j^a)]^{1/2}, giving a reliable magnitude of non-adiabatic coupling terms (NACTs, F_ij) based on adiabatic potential energies only (V_i^a and V_j^a) was discovered, and its reliability was tested for several prototypes of same-symmetry interstate crossings in LiF, C2, NH3Cl, and C6H5SH molecules. Our theoretical derivation starts from the analysis of the relationship between the Lorentzian dependence of NACTs along a diabatization coordinate and the well-established linear vibronic coupling scheme. This analysis results in a very simple equation, α = 2κ/Δ_c, enabling the evaluation of the Lorentz-function parameter α in terms of the coupling constant κ and the energy gap Δ_c = |V_i^a − V_j^a| evaluated at the crossing point Q_c between adiabatic states. Subsequently, it was shown that Q_c corresponds to the point where F_ij^app exhibits its maximum value if we set the coupling parameter as κ = (1/2)[(V_i^a − V_j^a)·(∂²(V_i^a − V_j^a)/∂Q²)]^{1/2} evaluated at Q_c. Finally, we conjectured that this relation could give reasonable values of NACTs not only at the crossing point but also at other geometries near Q_c. In this final approximation, the pre-defined crossing point Q_c is not required. The results of our test demonstrate that the approximation works much better than initially expected. The present new method does not depend on the selection of an ab initio method for adiabatic electronic states but is currently limited to local non-adiabatic regions where only two electronic states are dominantly involved within a nuclear degree of freedom.
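Because the formula uses only adiabatic energies, it can be evaluated numerically from any two potential curves. A minimal sketch (the toy two-state linear-vibronic model and the finite-difference step are our assumptions, not the paper's data):

```python
import math

def f_app(gap, q, h=1e-4):
    """NACT magnitude from adiabatic energies only:
    F_app(Q) = 0.5 * sqrt(gap''(Q) / gap(Q)),
    with gap'' estimated by central finite differences.
    gap(Q) must return the positive adiabatic gap V_i - V_j."""
    d2 = (gap(q + h) - 2.0 * gap(q) + gap(q - h)) / h**2
    return 0.5 * math.sqrt(d2 / gap(q))

# Toy two-state model H = [[k*Q, c], [c, -k*Q]]:
# adiabatic gap = 2*sqrt((k*Q)**2 + c**2); the exact NACT is the
# Lorentzian 0.5*k*c/((k*Q)**2 + c**2), which peaks at k/(2c) at Q = 0.
k, c = 1.0, 0.05
gap = lambda q: 2.0 * math.sqrt((k * q) ** 2 + c ** 2)
```

For this linear-vibronic model the approximation reproduces the exact Lorentzian at and away from the crossing, consistent with the derivation sketched in the abstract.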

10. On the incorporation of the geometric phase in general single potential energy surface dynamics: A removable approximation to ab initio data

Malbon, Christopher L.; Zhu, Xiaolei; Guo, Hua; Yarkony, David R.

2016-12-01

For two electronic states coupled by conical intersections, the line integral of the derivative coupling can be used to construct a complex-valued multiplicative phase factor that makes the real-valued adiabatic electronic wave function single-valued, provided that the curl of the derivative coupling is zero. Unfortunately, for ab initio determined wave functions the curl is never rigorously zero. However, when the wave functions are determined from a coupled two-diabatic-state Hamiltonian Hd (fit to ab initio data), the resulting derivative couplings are by construction curl free, except at points of conical intersection. In this work we focus on a recently introduced diabatization scheme that produces Hd by fitting ab initio determined energies, energy gradients, and derivative couplings to the corresponding Hd-determined quantities in a least-squares sense, producing a removable approximation to the ab initio determined derivative coupling. This approach and related numerical issues associated with the nonremovable ab initio derivative couplings are illustrated using a full 33-dimensional representation of phenol photodissociation. The use of this approach to provide a general framework for treating the molecular Aharonov-Bohm effect is demonstrated.

11. On the incorporation of the geometric phase in general single potential energy surface dynamics: A removable approximation to ab initio data.

PubMed

Malbon, Christopher L; Zhu, Xiaolei; Guo, Hua; Yarkony, David R

2016-12-21

For two electronic states coupled by conical intersections, the line integral of the derivative coupling can be used to construct a complex-valued multiplicative phase factor that makes the real-valued adiabatic electronic wave function single-valued, provided that the curl of the derivative coupling is zero. Unfortunately, for ab initio determined wave functions the curl is never rigorously zero. However, when the wave functions are determined from a coupled two-diabatic-state Hamiltonian H(d) (fit to ab initio data), the resulting derivative couplings are by construction curl free, except at points of conical intersection. In this work we focus on a recently introduced diabatization scheme that produces the H(d) by fitting ab initio determined energies, energy gradients, and derivative couplings to the corresponding H(d)-determined quantities in a least-squares sense, producing a removable approximation to the ab initio determined derivative coupling. This approach and related numerical issues associated with the nonremovable ab initio derivative couplings are illustrated using a full 33-dimensional representation of phenol photodissociation. The use of this approach to provide a general framework for treating the molecular Aharonov-Bohm effect is demonstrated.

12. How accurate is the strongly orthogonal geminal theory in predicting excitation energies? Comparison of the extended random phase approximation and the linear response theory approaches

SciTech Connect

Pernal, Katarzyna; Chatterjee, Koushik; Kowalski, Piotr H.

2014-01-07

Performance of the antisymmetrized product of strongly orthogonal geminals (APSG) ansatz in describing ground states of molecules has been extensively explored in recent years. Not much is known, however, about the possibilities of obtaining excitation energies from methods that rely on the APSG ansatz. In this paper we investigate the recently proposed extended random phase approximations, ERPA and ERPA2, that employ APSG reduced density matrices. We also propose a time-dependent linear response APSG method (TD-APSG). Its relation to the recently proposed phase-including natural orbital theory is elucidated. The methods are applied to Li{sub 2}, BH, H{sub 2}O, and CH{sub 2}O molecules at equilibrium geometries and in the dissociation limits. It is shown that ERPA2 and TD-APSG perform better than ERPA in describing double excitations, due to the inclusion of the so-called diagonal double elements. Analysis of the potential energy curves of Li{sub 2}, BH, and H{sub 2}O reveals that ERPA2 and TD-APSG correctly describe excitation energies of dissociating molecules when the excitations involve orbitals of the breaking bonds. For single excitations of molecules at equilibrium geometries, the accuracy of the APSG-based methods approaches that of the time-dependent Hartree-Fock method as the system size increases. A possibility of improving the accuracy of the TD-APSG method for single excitations by splitting the electron-electron interaction operator into long- and short-range terms and employing density functionals to treat the latter is presented.

13. Rough Set Approximations in Formal Concept Analysis

Yamaguchi, Daisuke; Murata, Atsuo; Li, Guo-Dong; Nagai, Masatake

Conventional set approximations are based on a set of attributes; however, these approximations cannot relate an object to the corresponding attribute. In this study, a new model for set approximation based on individual attributes is proposed for interval-valued data. Defining an indiscernibility relation is omitted, since each attribute value itself is a set of values. Two types of approximations, single- and multi-attribute approximations, are presented. A multi-attribute approximation has two solutions: a maximum and a minimum solution. A maximum solution is the set of objects that satisfy the condition of approximation for at least one attribute. A minimum solution is the set of objects that satisfy the condition for all attributes. The proposed set approximation is helpful in finding the features of objects related to the condition attributes when interval-valued data are given. The proposed model contributes to feature extraction in interval-valued information systems.
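A minimal sketch of the maximum/minimum solutions described above, under assumed definitions (the data, the target interval, and the containment criterion are illustrative choices, not the paper's exact formulation): the maximum solution keeps objects where at least one attribute interval satisfies the condition, the minimum solution keeps objects where all attribute intervals do.

```python
# Hypothetical interval-valued information system: object -> one interval per attribute.
data = {
    "x1": [(1, 3), (2, 4)],
    "x2": [(0, 9), (2, 3)],
    "x3": [(5, 8), (6, 7)],
}
target = (0, 5)  # condition interval (assumed)

def inside(iv, target):
    """Assumed condition: the attribute interval lies inside the target interval."""
    lo, hi = iv
    return target[0] <= lo and hi <= target[1]

# Maximum solution: condition holds for AT LEAST ONE attribute.
maximum = {o for o, ivs in data.items() if any(inside(iv, target) for iv in ivs)}
# Minimum solution: condition holds for ALL attributes.
minimum = {o for o, ivs in data.items() if all(inside(iv, target) for iv in ivs)}
print(maximum, minimum)
```

By construction the minimum solution is always a subset of the maximum solution, mirroring lower and upper approximations in rough set theory.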

14. Nonlinear amplitude approximation for bilinear systems

Jung, Chulwoo; D'Souza, Kiran; Epureanu, Bogdan I.

2014-06-01

An efficient method to predict vibration amplitudes at the resonant frequencies of dynamical systems with piecewise-linear nonlinearity is developed. This technique is referred to as bilinear amplitude approximation (BAA). BAA constructs a single vibration cycle at each resonant frequency to approximate the periodic steady-state response of the system. It is postulated that the steady-state response is piecewise linear and can be approximated by analyzing the response over two time intervals during which the system behaves linearly. Overall the dynamics are nonlinear, but the system is in a distinct linear state during each of the two time intervals. Thus, the approximated vibration cycle is constructed using linear analyses. The equation of motion for analyzing the vibration of each state is projected onto the overlapping space spanned by the linear mode shapes active in each of the states. This overlapping space is where the vibratory energy is transferred from one state to the other when the system switches between states. The overlapping space can be obtained using singular value decomposition. The space where the energy is transferred is used, together with transition conditions of displacement and velocity compatibility, to construct a single vibration cycle and to compute the amplitude of the dynamics. Since the BAA method does not require numerical integration of nonlinear models, computational costs are very low. In this paper, the BAA method is first applied to a single-degree-of-freedom system. Then, a three-degree-of-freedom system is introduced to demonstrate a more general application of BAA. Finally, the BAA method is applied to a full bladed disk with a crack. Results comparing numerical solutions from full-order nonlinear analysis and results obtained using BAA are presented for all systems.
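The SVD step mentioned above can be sketched as follows (an assumed toy construction, not the paper's implementation): given orthonormal mode-shape bases Phi1 and Phi2 for the two linear states, singular values of Phi1ᵀΦ2 near 1 flag directions the two subspaces share, i.e. candidates for the overlapping (energy-transfer) space.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8
shared = np.eye(n)[:, :1]  # one direction deliberately common to both states

# Orthonormal mode-shape bases for the two states, each containing `shared`.
Phi1 = np.linalg.qr(np.hstack([shared, rng.standard_normal((n, 2))]))[0]
Phi2 = np.linalg.qr(np.hstack([shared, rng.standard_normal((n, 2))]))[0]

# Principal angles between the subspaces: singular values of Phi1^T Phi2.
s = np.linalg.svd(Phi1.T @ Phi2, compute_uv=False)
print(s)  # a leading singular value ~1 marks the shared direction
```

Singular values equal to 1 correspond to directions contained in both subspaces; values below 1 correspond to directions only partially shared.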

15. Energy and macronutrient content of familiar beverages interact with pre-meal intervals to determine later food intake, appetite and glycemic response in young adults.

PubMed

Panahi, Shirin; Luhovyy, Bohdan L; Liu, Ting Ting; Akhavan, Tina; El Khoury, Dalia; Goff, H Douglas; Anderson, G Harvey

2013-01-01

The objective was to compare the effects of pre-meal consumption of familiar beverages on appetite, food intake, and glycemic response in healthy young adults. Two short-term experiments compared the effect of consumption, at 30 min (experiment 1) or 120 min (experiment 2) before a pizza meal, of isovolumetric amounts (500 mL) of water (0 kcal), soy beverage (200 kcal), 2% milk (260 kcal), 1% chocolate milk (340 kcal), orange juice (229 kcal) and cow's milk-based infant formula (368 kcal) on food intake, subjective appetite, and blood glucose before and after the meal. Pre-meal ingestion of chocolate milk and infant formula reduced food intake compared to water at 30 min; however, beverage type did not affect food intake at 2 h. Pre-meal blood glucose was higher after chocolate milk than after the other caloric beverages from 0 to 30 min (experiment 1), and after chocolate milk and orange juice from 0 to 120 min (experiment 2). Only milk reduced post-meal blood glucose in both experiments, suggesting that its effects were independent of meal-time energy intake. Combined pre- and post-meal blood glucose was lower after milk compared to chocolate milk and orange juice, but did not differ from the other beverages. Thus, beverage calorie content and inter-meal intervals are primary determinants of food intake in the short term, but macronutrient composition, especially protein content and composition, may play the greater role in glycemic control.

16. Generalized Vibrational Perturbation Theory for Rotovibrational Energies of Linear, Symmetric and Asymmetric Tops: Theory, Approximations, and Automated Approaches to Deal with Medium-to-Large Molecular Systems.

PubMed

Piccardo, Matteo; Bloino, Julien; Barone, Vincenzo

2015-08-05

Models going beyond the rigid-rotor and harmonic-oscillator levels are mandatory for providing accurate theoretical predictions of several spectroscopic properties. Different strategies have been devised for this purpose. Among them, the treatment by perturbation theory of the molecular Hamiltonian after its expansion in a power series of products of vibrational and rotational operators, also referred to as vibrational perturbation theory (VPT), is particularly appealing for its computational efficiency in treating medium-to-large systems. Moreover, generalized (GVPT) strategies combining perturbative and variational formalisms can be adopted to further improve the accuracy of the results, with the first approach used for weakly coupled terms and the second one to handle tightly coupled ones. In this context, the GVPT formulation for asymmetric, symmetric, and linear tops is revisited and fully generalized to both minima and first-order saddle points of the molecular potential energy surface. The computational strategies and approximations that can be adopted in GVPT computations are pointed out, with particular attention devoted to the treatment of symmetry and degeneracies. A number of tests and applications are discussed, to show the possibilities of the developments as regards both the variety of treatable systems and eligible methods.

17. Laplacian-dependent models of the kinetic energy density: Applications in subsystem density functional theory with meta-generalized gradient approximation functionals

Śmiga, Szymon; Fabiano, Eduardo; Constantin, Lucian A.; Della Sala, Fabio

2017-02-01

The development of semilocal models for the kinetic energy density (KED) is an important topic in density functional theory (DFT). This is especially true for subsystem DFT, where these models are necessary to construct the required non-additive embedding contributions. In particular, these models can also be efficiently employed to replace the exact KED in meta-Generalized Gradient Approximation (meta-GGA) exchange-correlation functionals, allowing the applicability of subsystem DFT to be extended to the meta-GGA level of theory. Here, we present a two-dimensional scan of semilocal KED models as linear functionals of the reduced gradient and of the reduced Laplacian, for atoms and weakly bound molecular systems. We find that several models can perform well, but in any case the Laplacian contribution is extremely important for modeling the local features of the KED. Indeed, a simple model constructed as the sum of the Thomas-Fermi KED and 1/6 of the Laplacian of the density yields the best accuracy for atoms and weakly bound molecular systems. These KED models are tested within subsystem DFT with various meta-GGA exchange-correlation functionals for non-bonded systems, showing the good accuracy of the method.

18. Generalized Vibrational Perturbation Theory for Rotovibrational Energies of Linear, Symmetric and Asymmetric Tops: Theory, Approximations, and Automated Approaches to Deal with Medium-to-Large Molecular Systems

PubMed Central

Piccardo, Matteo; Bloino, Julien; Barone, Vincenzo

2015-01-01

Models going beyond the rigid-rotor and harmonic-oscillator levels are mandatory for providing accurate theoretical predictions of several spectroscopic properties. Different strategies have been devised for this purpose. Among them, the treatment by perturbation theory of the molecular Hamiltonian after its expansion in a power series of products of vibrational and rotational operators, also referred to as vibrational perturbation theory (VPT), is particularly appealing for its computational efficiency in treating medium-to-large systems. Moreover, generalized (GVPT) strategies combining perturbative and variational formalisms can be adopted to further improve the accuracy of the results, with the first approach used for weakly coupled terms and the second one to handle tightly coupled ones. In this context, the GVPT formulation for asymmetric, symmetric, and linear tops is revisited and fully generalized to both minima and first-order saddle points of the molecular potential energy surface. The computational strategies and approximations that can be adopted in GVPT computations are pointed out, with particular attention devoted to the treatment of symmetry and degeneracies. A number of tests and applications are discussed, to show the possibilities of the developments as regards both the variety of treatable systems and eligible methods. © 2015 Wiley Periodicals, Inc. PMID:26345131

19. Laplacian-dependent models of the kinetic energy density: Applications in subsystem density functional theory with meta-generalized gradient approximation functionals.

PubMed

Śmiga, Szymon; Fabiano, Eduardo; Constantin, Lucian A; Della Sala, Fabio

2017-02-14

The development of semilocal models for the kinetic energy density (KED) is an important topic in density functional theory (DFT). This is especially true for subsystem DFT, where these models are necessary to construct the required non-additive embedding contributions. In particular, these models can also be efficiently employed to replace the exact KED in meta-Generalized Gradient Approximation (meta-GGA) exchange-correlation functionals, allowing the applicability of subsystem DFT to be extended to the meta-GGA level of theory. Here, we present a two-dimensional scan of semilocal KED models as linear functionals of the reduced gradient and of the reduced Laplacian, for atoms and weakly bound molecular systems. We find that several models can perform well, but in any case the Laplacian contribution is extremely important for modeling the local features of the KED. Indeed, a simple model constructed as the sum of the Thomas-Fermi KED and 1/6 of the Laplacian of the density yields the best accuracy for atoms and weakly bound molecular systems. These KED models are tested within subsystem DFT with various meta-GGA exchange-correlation functionals for non-bonded systems, showing the good accuracy of the method.
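A small numerical check of the model mentioned above (our own sketch, not from the paper): for the hydrogen-like 1s density rho(r) = exp(-2r)/pi, the (1/6) Laplacian term integrates to essentially zero, so it reshapes the local profile of the KED without changing the integrated Thomas-Fermi value.

```python
import numpy as np

def integrate(f, r):
    """Trapezoid rule on a 1D grid."""
    return float(0.5*np.sum((f[1:] + f[:-1])*np.diff(r)))

r = np.linspace(1e-4, 20.0, 200001)          # radial grid (atomic units)
rho = np.exp(-2.0*r)/np.pi                   # hydrogen 1s density
c_tf = 0.3*(3.0*np.pi**2)**(2.0/3.0)         # Thomas-Fermi constant
tau_tf = c_tf*rho**(5.0/3.0)                 # Thomas-Fermi KED
lap = (4.0 - 4.0/r)*np.exp(-2.0*r)/np.pi     # analytic Laplacian of rho
w = 4.0*np.pi*r**2                           # radial volume element

I_tf = integrate(tau_tf*w, r)                # ~0.289 hartree (exact KE is 0.5)
I_lap = integrate((lap/6.0)*w, r)            # ~0: Laplacian term integrates away
print(I_tf, I_lap)
```

This illustrates why the Laplacian term is a purely "local" correction: it cannot change the global kinetic energy, only where the kinetic energy density sits.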

20. Low-energy dipole excitations in neon isotopes and N=16 isotones within the quasiparticle random-phase approximation and the Gogny force

SciTech Connect

Martini, M.; Peru, S.; Dupuis, M.

2011-03-15

Low-energy dipole excitations in neon isotopes and N=16 isotones are calculated with a fully consistent axially-symmetric-deformed quasiparticle random phase approximation (QRPA) approach based on Hartree-Fock-Bogolyubov (HFB) states. The same Gogny D1S effective force has been used in both the HFB and QRPA calculations. The microscopic structure of these low-lying resonances, as well as the behavior of the proton and neutron transition densities, are investigated in order to determine the isoscalar or isovector nature of the excitations. It is found that the N=16 isotones {sup 24}O, {sup 26}Ne, {sup 28}Mg, and {sup 30}Si are characterized by a similar behavior. The occupation of the 2s{sub 1/2} neutron orbit turns out to be crucial, leading to nontrivial transition densities and to small but finite collectivity. Some low-lying dipole excitations of {sup 28}Ne and {sup 30}Ne, characterized by transitions involving the {nu}1d{sub 3/2} state, present a more collective behavior and isoscalar transition densities. A collective proton low-lying excitation is identified in the {sup 18}Ne nucleus.

1. A fast parallel code for calculating energies and oscillator strengths of many-electron atoms at neutron star magnetic field strengths in adiabatic approximation

Engel, D.; Klews, M.; Wunner, G.

2009-02-01

We have developed a new method for the fast computation of wavelengths and oscillator strengths for medium-Z atoms and ions, up to iron, at neutron star magnetic field strengths. The method is a parallelized Hartree-Fock approach in adiabatic approximation based on finite-element and B-spline techniques. It turns out that typically 15-20 finite elements are sufficient to calculate energies to within a relative accuracy of 10^-5 in 4 or 5 iteration steps using B-splines of 6th order, with parallelization speed-ups of 20 on a 26-processor machine. Results have been obtained for the energies of the ground states and excited levels and for the transition strengths of astrophysically relevant atoms and ions in the range Z=2…26 in different ionization stages.
Catalogue identifier: AECC_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AECC_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 3845
No. of bytes in distributed program, including test data, etc.: 27 989
Distribution format: tar.gz
Programming language: MPI/Fortran 95 and Python
Computer: Cluster of 1-26 HP Compaq dc5750
Operating system: Fedora 7
Has the code been vectorised or parallelized?: Yes
RAM: 1 GByte
Classification: 2.1
External routines: MPI/GFortran, LAPACK, PyLab/Matplotlib
Nature of problem: Calculations of synthetic spectra [1] of strongly magnetized neutron stars are bedevilled by the lack of data for atoms in intense magnetic fields. While the behaviour of hydrogen and helium has been investigated in detail (see, e.g., [2]), complete and reliable data for heavier elements, in particular iron, are still missing. Since neutron stars are formed by the collapse of the iron cores of massive stars, it may be assumed that their atmospheres contain an iron plasma. Our objective is to fill the gap

2. Analytical approximations for X-ray cross sections 3

Biggs, Frank; Lighthill, Ruth

1988-08-01

This report updates our previous work that provided analytical approximations to cross sections for both photoelectric absorption of photons by atoms and incoherent scattering of photons by atoms. This representation is convenient for use in programmable calculators and in computer programs to evaluate these cross sections numerically. The results apply to atoms of atomic numbers between 1 and 100 and for photon energies greater than or equal to 10 eV. The photoelectric cross sections are again approximated by four-term polynomials in reciprocal powers of the photon energy. There are now more fitting intervals, however, than were used previously. The incoherent-scattering cross sections are based on the Klein-Nishina relation, but use simpler approximate equations for efficient computer evaluation. We describe the averaging scheme for applying these atomic results to any composite material. The fitting coefficients are included in tables, and the cross sections are shown graphically.
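The representation described above can be sketched as a piecewise evaluation: within each fitting interval the cross section is a four-term polynomial in reciprocal powers of the photon energy. The interval edges and coefficients below are made-up placeholders, not values from the report's tables.

```python
from bisect import bisect_right

# Hypothetical fitting intervals (eV) and per-interval coefficients (a1..a4).
edges = [10.0, 1e3, 1e5]
coeffs = [
    (1.0, 5.0, 2.0, 0.1),    # valid on [10, 1e3)
    (0.8, 4.0, 1.5, 0.05),   # valid on [1e3, 1e5)
]

def cross_section(E):
    """sigma(E) = a1/E + a2/E^2 + a3/E^3 + a4/E^4 on the interval containing E (E >= 10 eV)."""
    i = bisect_right(edges, E) - 1
    a1, a2, a3, a4 = coeffs[i]
    return a1/E + a2/E**2 + a3/E**3 + a4/E**4

print(cross_section(50.0), cross_section(5e3))
```

The bisection lookup makes the evaluation cheap regardless of how many fitting intervals the tables use, which is the point of this representation for programmable calculators and computer programs.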

3. Semiphenomenological approximation of the sums of experimental radiative strength functions for dipole gamma transitions of energy E{sub {gamma}}below the neutron binding energy B{sub n} for mass numbers in the range 40 {<=} A {<=} 200

SciTech Connect

Sukhovoj, A. M. Furman, W. I. Khitrov, V. A.

2008-06-15

The sums of radiative strength functions for primary dipole gamma transitions, k(E1) + k(M1), are approximated to a high precision by a superposition of two functional dependences in the energy range 0.5 < E{sub 1} < B{sub n} - 0.5 MeV for the {sup 40}K, {sup 60}Co, {sup 71,74}Ge, {sup 80}Br, {sup 114}Cd, {sup 118}Sn, {sup 124,125}Te, {sup 128}I, {sup 137,138,139}Ba, {sup 140}La, {sup 150}Sm, {sup 156,158}Gd, {sup 160}Tb, {sup 163,164,165}Dy, {sup 166}Ho, {sup 168}Er, {sup 170}Tm, {sup 174}Yb, {sup 176,177}Lu, {sup 181}Hf, {sup 182}Ta, {sup 183,184,185,187}W, {sup 188,190,191,193}Os, {sup 192}Ir, {sup 196}Pt, {sup 198}Au, and {sup 200}Hg nuclei. It is shown that, in all nuclei, radiative strength functions are dynamical quantities and that the values of k(E1) + k(M1) for specific gamma-transition energies and specific nuclei are determined by the structure of the decaying and excited levels, at least up to the neutron binding energy B{sub n}.

4. Semiphenomenological approximation of the sums of experimental radiative strength functions for dipole gamma transitions of energy E γ below the neutron binding energy B n for mass numbers in the range 40 ≤ A ≤ 200

Sukhovoj, A. M.; Furman, W. I.; Khitrov, V. A.

2008-06-01

The sums of radiative strength functions for primary dipole gamma transitions, k(E1) + k(M1), are approximated to a high precision by a superposition of two functional dependences in the energy range 0.5 < E 1 < B n - 0.5 MeV for the 40K, 60Co, 71,74Ge, 80Br, 114Cd, 118Sn, 124,125Te, 128I, 137,138,139Ba, 140La, 150Sm, 156,158Gd, 160Tb, 163,164,165Dy, 166Ho, 168Er, 170Tm, 174Yb, 176,177Lu, 181Hf, 182Ta, 183,184,185,187W, 188,190,191,193Os, 192Ir, 196Pt, 198Au, and 200Hg nuclei. It is shown that, in all nuclei, radiative strength functions are dynamical quantities and that the values of k(E1) + k(M1) for specific gamma-transition energies and specific nuclei are determined by the structure of the decaying and excited levels, at least up to the neutron binding energy B n.

5. Interaction between respiratory and RR interval oscillations at low frequencies.

PubMed

Aguirre, A; Wodicka, G R; Maayan, C; Shannon, D C

1990-03-01

Oscillations in RR interval between 0.02 and 1.00 cycles per second (Hz) have been related to the action of the autonomic nervous system. Respiration has been shown to influence RR interval at normal breathing frequencies between approximately 0.16 and 0.5 Hz in children and adults--a phenomenon known as respiratory sinus arrhythmia. In this study we investigated the effect of respiration on RR interval in a lower frequency range between 0.02 and 0.12 Hz. Low frequency oscillations in respiration were induced in healthy sleeping adult subjects via the administration of a bolus of CO2 during inhalation. Power spectra of RR interval and respiration were obtained before and after the CO2 pulse, and the frequency content in the low frequency range was quantitatively compared. An increase in the spectral energy in both respiration and RR interval was observed for the group. However, this increase was accounted for by six of 29 epochs. We conclude that respiration (tidal volume) can influence RR interval at frequencies below those usually associated with respiratory sinus arrhythmia. This influence may be mediated through a sympathetic reflex. This result is applicable to the measurement and interpretation of heart rate variability and to autonomic influences of low frequency fluctuations in RR interval.
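The band-energy comparison described above can be sketched as follows (an assumed analysis on synthetic data, not the study's code or data): an evenly resampled RR-interval series gains spectral energy in the 0.02-0.12 Hz band when a slow oscillation is added, mimicking the induced low-frequency respiratory influence.

```python
import numpy as np

fs = 4.0                                   # assumed resampling rate, Hz
t = np.arange(0, 256, 1/fs)                # ~4 min of data
rng = np.random.default_rng(1)
baseline = 0.8 + 0.01*rng.standard_normal(t.size)        # RR series in seconds (synthetic)
perturbed = baseline + 0.05*np.sin(2*np.pi*0.06*t)       # added 0.06 Hz oscillation

def band_power(x, fs, lo=0.02, hi=0.12):
    """Spectral energy of x in the [lo, hi] Hz band via the FFT periodogram."""
    f = np.fft.rfftfreq(x.size, 1/fs)
    p = np.abs(np.fft.rfft(x - x.mean()))**2
    return p[(f >= lo) & (f <= hi)].sum()

print(band_power(baseline, fs) < band_power(perturbed, fs))  # True
```

Comparing band power before and after a stimulus, rather than inspecting whole spectra, is the quantitative comparison the abstract describes.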

6. Piecewise linear approximation for hereditary control problems

NASA Technical Reports Server (NTRS)

Propst, Georg

1990-01-01

This paper presents finite-dimensional approximations for linear retarded functional differential equations by use of discontinuous piecewise linear functions. The approximation scheme is applied to optimal control problems, when a quadratic cost integral must be minimized subject to the controlled retarded system. It is shown that the approximate optimal feedback operators converge to the true ones both in the case where the cost integral ranges over a finite time interval, as well as in the case where it ranges over an infinite time interval. The arguments in the last case rely on the fact that the piecewise linear approximations to stable systems are stable in a uniform sense.

7. Phenomenological applications of rational approximants

Gonzàlez-Solís, Sergi; Masjuan, Pere

2016-08-01

We illustrate the power of Padé approximants (PAs) as a summation method and explore one of their extensions, the so-called quadratic approximants (QAs), to access both the space-like and (low-energy) time-like (TL) regions. As an introductory and pedagogical exercise, the function (1/z)ln(1 + z) is approximated by both kinds of approximants. Then, PAs are applied to predict pseudoscalar meson Dalitz decays and to extract Vub from semileptonic B → πℓνℓ decays. Finally, the π vector form factor in the TL region is explored using QAs.
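The pedagogical exercise can be made concrete with a worked [1/1] Padé approximant (our own derivation, not taken from the paper): matching the Taylor series ln(1+z)/z = 1 - z/2 + z²/3 - … through order z² gives the rational approximant (1 + z/6)/(1 + 2z/3).

```python
from math import log

def f_pade(z):
    """[1/1] Pade approximant of ln(1+z)/z, matched through order z^2."""
    return (1 + z/6) / (1 + 2*z/3)

def f_exact(z):
    return log(1 + z) / z

# At z = 1 the approximant gives 7/10 = 0.7 versus ln 2 = 0.6931...,
# already far better than the truncated series 1 - 1/2 + 1/3 = 0.8333...
print(f_pade(1.0), f_exact(1.0))
```

This is the typical behavior that makes PAs useful as a summation method: the rational form extends the region of good agreement well beyond the radius where the raw series converges slowly.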

8. Interpolation and Approximation Theory.

ERIC Educational Resources Information Center

Kaijser, Sten

1991-01-01

Introduced are the basic ideas of interpolation and approximation theory through a combination of theory and exercises written for extramural education at the university level. Topics treated are spline methods, Lagrange interpolation, trigonometric approximation, Fourier series, and polynomial approximation. (MDH)

9. Application of the Mean Spherical Approximation to Describe the Gibbs Solvation Energies of Monovalent Monoatomic Ions in Non-Aqueous Solvents

DTIC Science & Technology

1991-11-09

through hydrogen bonding. A plot of ΔG° (theoretical) against ΔG° (experimental) is shown for the Cl- ion data in Figure 4. Agreement between the theoretical estimate and... SOLVENTS by L. Blum* and W.R. Fawcett*. Prepared for Publication in the Journal of Physical Chemistry. *Department of Physics, POB AT, Faculty of Natural... the excess ionic properties depend on a single scaling, Debye-like parameter is still retained by this approximation. The equations for the most

10. Kramer-Pesch approximation for analyzing field-angle-resolved measurements made in unconventional superconductors: a calculation of the zero-energy density of states.

PubMed

Nagai, Yuki; Hayashi, Nobuhiko

2008-08-29

By measuring the angular-oscillations behavior of the heat capacity with respect to the applied field direction, one can detect the details of the gap structure. We introduce the Kramer-Pesch approximation as a new method to analyze the field-angle-dependent experiments, which improves the previous Doppler-shift technique. We show that the Fermi-surface anisotropy is an indispensable factor for identifying the superconducting gap symmetry.

11. Kramer-Pesch Approximation for Analyzing Field-Angle-Resolved Measurements Made in Unconventional Superconductors: A Calculation of the Zero-Energy Density of States

Nagai, Yuki; Hayashi, Nobuhiko

2008-08-01

By measuring the angular-oscillations behavior of the heat capacity with respect to the applied field direction, one can detect the details of the gap structure. We introduce the Kramer-Pesch approximation as a new method to analyze the field-angle-dependent experiments, which improves the previous Doppler-shift technique. We show that the Fermi-surface anisotropy is an indispensable factor for identifying the superconducting gap symmetry.

12. α-CASSCF: An Efficient, Empirical Correction for SA-CASSCF To Closely Approximate MS-CASPT2 Potential Energy Surfaces.

PubMed

Snyder, James W; Parrish, Robert M; Martínez, Todd J

2017-06-01

Because of its computational efficiency, the state-averaged complete active-space self-consistent field (SA-CASSCF) method is commonly employed in nonadiabatic ab initio molecular dynamics. However, SA-CASSCF does not effectively recover dynamical correlation. As a result, there can be qualitative differences between SA-CASSCF potential energy surfaces (PESs) and more accurate reference surfaces computed using multistate complete active space second-order perturbation theory (MS-CASPT2). Here we introduce an empirical correction to SA-CASSCF that scales the splitting between individual states and the state-averaged energy. We call this the α-CASSCF method, and we show here that it significantly improves the accuracy of relative energies and PESs compared with MS-CASPT2 for the chromophores of green fluorescent and photoactive yellow proteins. As such, this method may prove to be quite valuable for nonadiabatic dynamics.
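The scaling idea described in the abstract can be sketched as follows (a hedged illustration with made-up numbers, not the authors' implementation): each state's energy is shifted so that its splitting from the state-averaged energy is scaled by an empirical factor alpha, which leaves the state-averaged energy itself unchanged.

```python
# Hypothetical SA-CASSCF state energies (hartree) and empirical scale factor.
alpha = 0.8
energies = [-0.50, -0.42, -0.35]

e_avg = sum(energies) / len(energies)
# Scale each state's deviation from the state-averaged energy.
scaled = [e_avg + alpha*(e - e_avg) for e in energies]
print(scaled)
```

By construction, every inter-state gap is multiplied by alpha while the average is preserved, which is how a single parameter can bring SA-CASSCF splittings closer to MS-CASPT2 ones without re-deriving the surfaces.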

13. The {sup 2}H(d,p){sup 3}H Reaction At Astrophysical Energies Studied Via The Trojan Horse Method And Pole Approximation Validity Test

SciTech Connect

Sparta, R.; Pizzone, R. G.; Spitaleri, C.; Cherubini, S.; Crucilla, V.; Gulino, M.; La Cognata, M.; Lamia, L.; Puglia, S. M. R.; Rapisarda, G. G.; Romano, S.; Sergi, M. L.; Aliotta, M.; Burjan, V.; Hons, Z.; Kroha, V.; Mrazek, J.; Kiss, G.; McCleskey, M.; Trache, L.

2010-03-01

In order to understand primordial and stellar nucleosynthesis, we have studied the {sup 2}H(d,p){sup 3}H reaction from 0.4 MeV down to astrophysical energies. Knowledge of this S-factor is also interesting for planning energy-producing reactions in fusion reactors. The {sup 2}H(d,p){sup 3}H reaction has been studied through the Trojan Horse Method applied to the three-body reaction {sup 2}H({sup 3}He,pt)H at a beam energy of 17 MeV. Once protons and tritons are detected in coincidence and the quasi-free events are selected, the obtained S-factor is compared with direct-reaction results. These data are in agreement with the direct ones, and a pole invariance test has been performed by comparing the present result with another {sup 2}H(d,p){sup 3}H THM measurement performed with a different spectator particle (see fig. 1).

14. Analytic energy gradients for the coupled-cluster singles and doubles with perturbative triples method with the density-fitting approximation

Bozkaya, Uǧur; Sherrill, C. David

2017-07-01

An efficient implementation of analytic gradients for the coupled-cluster singles and doubles with perturbative triples [CCSD(T)] method with the density-fitting (DF) approximation, denoted as DF-CCSD(T), is reported. For the molecules considered, the DF approach substantially accelerates conventional CCSD(T) analytic gradients due to the reduced input/output time and the acceleration of the so-called "gradient terms": formation of particle density matrices (PDMs), computation of the generalized Fock-matrix (GFM), solution of the Z-vector equation, formation of the effective PDMs and GFM, back-transformation of the PDMs and GFM, from the molecular orbital to the atomic orbital (AO) basis, and computation of gradients in the AO basis. For the largest member of the molecular test set considered (C6H14), the computational times for analytic gradients (with the correlation-consistent polarized valence triple-ζ basis set in serial) are 106.2 [CCSD(T)] and 49.8 [DF-CCSD(T)] h, a speedup of more than 2-fold. In the evaluation of gradient terms, the DF approach completely avoids the use of four-index two-electron integrals. Similar to our previous studies on DF-second-order Møller-Plesset perturbation theory and DF-CCSD gradients, our formalism employs 2- and 3-index two-particle density matrices (TPDMs) instead of 4-index TPDMs. Errors introduced by the DF approximation are negligible for equilibrium geometries and harmonic vibrational frequencies.

15. Analytic energy gradients for the coupled-cluster singles and doubles with perturbative triples method with the density-fitting approximation.

PubMed

Bozkaya, Uğur; Sherrill, C David

2017-07-28

An efficient implementation of analytic gradients for the coupled-cluster singles and doubles with perturbative triples [CCSD(T)] method with the density-fitting (DF) approximation, denoted as DF-CCSD(T), is reported. For the molecules considered, the DF approach substantially accelerates conventional CCSD(T) analytic gradients due to the reduced input/output time and the acceleration of the so-called "gradient terms": formation of particle density matrices (PDMs), computation of the generalized Fock matrix (GFM), solution of the Z-vector equation, formation of the effective PDMs and GFM, back-transformation of the PDMs and GFM from the molecular orbital to the atomic orbital (AO) basis, and computation of gradients in the AO basis. For the largest member of the molecular test set considered (C6H14), the computational times for analytic gradients (with the correlation-consistent polarized valence triple-ζ basis set in serial) are 106.2 [CCSD(T)] and 49.8 [DF-CCSD(T)] h, a speedup of more than 2-fold. In the evaluation of gradient terms, the DF approach completely avoids the use of four-index two-electron integrals. Similar to our previous studies on DF-second-order Møller-Plesset perturbation theory and DF-CCSD gradients, our formalism employs 2- and 3-index two-particle density matrices (TPDMs) instead of 4-index TPDMs. Errors introduced by the DF approximation are negligible for equilibrium geometries and harmonic vibrational frequencies.

16. Biomathematics and Interval Analysis: A Prosperous Marriage

Markov, S. M.

2010-11-01

In this survey paper we focus our attention on dynamical bio-systems involving uncertainties and the use of interval methods for the modelling study of such systems. The uncertain systems envisioned are those described by a dynamical model with parameters bounded in intervals. We point out a fruitful symbiosis between dynamical modelling in biology and computational methods of interval analysis. Both fields are presently in a stage of rapid development and can benefit from each other. We highlight recent studies in the field of interval arithmetic from a new perspective: the midpoint-radius arithmetic, which explores the properties of error bounds and approximate numbers. The midpoint-radius approach provides a bridge between interval methods and the "uncertain but bounded" approach used for model estimation and identification. We briefly discuss certain recently obtained algebraic properties of errors and approximate numbers.
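
The midpoint-radius representation mentioned above can be made concrete in a few lines. The sketch below is illustrative and not from the paper: an interval is stored as a (midpoint, radius) pair, addition and subtraction add radii (error bounds), and the product radius uses the standard enclosure bound.

```python
from dataclasses import dataclass

# Illustrative midpoint-radius interval type (names are ours, not the paper's).
# An interval (m, r) represents all reals x with |x - m| <= r.
@dataclass(frozen=True)
class MRInterval:
    m: float  # midpoint
    r: float  # radius, assumed non-negative

    def __add__(self, other):
        # midpoints add; radii (error bounds) add
        return MRInterval(self.m + other.m, self.r + other.r)

    def __sub__(self, other):
        return MRInterval(self.m - other.m, self.r + other.r)

    def __mul__(self, other):
        # enclosure of the product: |xy - m1*m2| <= |m1|*r2 + |m2|*r1 + r1*r2
        return MRInterval(self.m * other.m,
                          abs(self.m) * other.r + abs(other.m) * self.r
                          + self.r * other.r)

    def contains(self, x):
        return abs(x - self.m) <= self.r

a = MRInterval(2.0, 0.1)   # the interval [1.9, 2.1]
b = MRInterval(3.0, 0.2)   # the interval [2.8, 3.2]
print(a + b)               # midpoint 5.0, radius ~0.3
print((a * b).contains(6.0))
```

The product radius over-covers slightly (here the enclosure is [5.28, 6.72] versus the exact range [5.32, 6.72]), which is the usual price of the midpoint-radius form.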

17. Approximating random quantum optimization problems

Hsu, B.; Laumann, C. R.; Läuchli, A. M.; Moessner, R.; Sondhi, S. L.

2013-06-01

We report a cluster of results regarding the difficulty of finding approximate ground states to typical instances of the quantum satisfiability problem k-body quantum satisfiability (k-QSAT) on large random graphs. As an approximation strategy, we optimize the solution space over “classical” product states, which in turn introduces a novel autonomous classical optimization problem, PSAT, over a space of continuous degrees of freedom rather than discrete bits. Our central results are (i) the derivation of a set of bounds and approximations in various limits of the problem, several of which we believe may be amenable to a rigorous treatment; (ii) a demonstration that an approximation based on a greedy algorithm borrowed from the study of frustrated magnetism performs well over a wide range in parameter space, and its performance reflects the structure of the solution space of random k-QSAT. Simulated annealing exhibits metastability in similar “hard” regions of parameter space; and (iii) a generalization of belief propagation algorithms introduced for classical problems to the case of continuous spins. This yields both approximate solutions, as well as insights into the free energy “landscape” of the approximation problem, including a so-called dynamical transition near the satisfiability threshold. Taken together, these results allow us to elucidate the phase diagram of random k-QSAT in a two-dimensional energy-density-clause-density space.

18. Approximate flavor symmetries

SciTech Connect

Rasin, A.

1994-04-01

We discuss the idea of approximate flavor symmetries. Relations between approximate flavor symmetries and natural flavor conservation and democracy models are explored. Implications for neutrino physics are also discussed.

19. Piecewise linear approximation for hereditary control problems

NASA Technical Reports Server (NTRS)

Propst, Georg

1987-01-01

Finite dimensional approximations are presented for linear retarded functional differential equations by use of discontinuous piecewise linear functions. The approximation scheme is applied to optimal control problems when a quadratic cost integral has to be minimized subject to the controlled retarded system. It is shown that the approximate optimal feedback operators converge to the true ones both when the cost integral ranges over a finite time interval and when it ranges over an infinite time interval. The arguments in the latter case rely on the fact that the piecewise linear approximations to stable systems are stable in a uniform sense. This feature is established using a vector-component stability criterion in the state space R(n) x L(2) and the favorable eigenvalue behavior of the piecewise linear approximations.

20. Analysis of experimental data on doublet neutron-deuteron scattering at energies below the deuteron-breakup threshold on the basis of the pole approximation of the effective-range function

SciTech Connect

Babenko, V. A.; Petrov, N. M.

2008-01-15

On the basis of the Bargmann representation of the S matrix, the pole approximation is obtained for the effective-range function k cot {delta}. This approximation is optimal for describing the neutron-deuteron system in the doublet spin state. The values of r{sub 0} = 412.469 fm and v{sub 2} = -35 495.62 fm{sup 3} for the doublet low-energy parameters of neutron-deuteron scattering and the value of D = 172.678 fm{sup 2} for the respective pole parameter are deduced by using experimental results for the triton binding energy E{sub T}, the doublet neutron-deuteron scattering length a{sub 2}, and van Oers-Seagrave phase shifts at energies below the deuteron-breakup threshold. With these parameters, the pole approximation of the effective-range function provides a highly precise description (the relative error does not exceed 1%) of the doublet phase shift for neutron-deuteron scattering at energies below the deuteron-breakup threshold. Physical properties of the triton in the ground (T) and virtual (v) states are calculated. The results are B{sub v} = 0.608 MeV for the virtual-level position and C{sub T}{sup 2} = 2.866 and C{sub v}{sup 2} = 0.0586 for the dimensionless asymptotic normalization constants. It is shown that, in the Whiting-Fuda approximation, the values of physical quantities characterizing the triton virtual state are determined to a high precision by one parameter, the doublet neutron-deuteron scattering length a{sub 2}. The effective triton radii in the ground ({rho}{sub T} = 1.711 fm) and virtual ({rho}{sub v} = 74.184 fm) states are calculated for the first time.

1. Exclusive experiment on nuclei with backward emitted particles by electron-nucleus collision in {approximately} 10 GeV energy range

SciTech Connect

Saito, T.; Takagi, F.

1994-04-01

Since evidence of a strong cross section in proton-nucleus backward scattering was presented in the early 1970s, this phenomenon has attracted interest for its relation to the short-range correlation between nucleons and to high-momentum components of the wave function of the nucleus. In the analysis of the first experiment on protons from the carbon target under bombardment by 1.5-5.7 GeV protons, indications were found of an effect analogous to scaling in high-energy interactions of elementary particles with protons. Moreover, it was found that the function f(p{sup 2})/{sigma}{sub tot}, which describes the spectra of the protons and deuterons emitted backward from nuclei in the laboratory system, does not depend on the energy and the type of the incident particle or on the atomic number of the target nucleus. In the following experiments the spectra of the protons emitted from the nuclei C, Al, Ti, Cu, Cd and Pb were measured in inclusive reactions with incident negative pions (1.55-6.2 GeV/c) and protons (6.2-9.0 GeV/c). The cross section f is described by f = E/p{sup 2} d{sup 2}{sigma}/dpd{Omega} = C exp({minus}Bp{sup 2}), where p is the momentum of the hadron. The function f depends linearly on the atomic weight A of the target nuclei. The slope parameter B is independent of the target nucleus and of the sort and energy of the bombarding particles. The invariant cross section {rho} = f/{sigma}{sub tot} is also described by the exponential A{sub 0} exp({minus}A{sub 1}p{sup 2}) and becomes independent of energy at initial particle energies {ge} 1.5 GeV for the C nucleus and {ge} 5 GeV for Pb, the heaviest of the investigated nuclei.

2. Rytov approximation in electron scattering

Krehl, Jonas; Lubk, Axel

2017-06-01

In this work we introduce the Rytov approximation in the scope of high-energy electron scattering with the motivation of developing better linear models for electron scattering. Such linear models play an important role in tomography and similar reconstruction techniques. Conventional linear models, such as the phase grating approximation, have reached their limits in current and foreseeable applications, most importantly in achieving three-dimensional atomic resolution using electron holographic tomography. The Rytov approximation incorporates propagation effects, which are the most pressing limitation of conventional models. While predominantly used in the weak-scattering regime of light microscopy, we show that the Rytov approximation can give reasonable results in the inherently strong-scattering regime of transmission electron microscopy.

3. Fourth-grade children’s dietary recall accuracy for energy intake at school meals differs by social desirability and body mass index percentile in a study concerning retention interval

PubMed Central

Guinn, Caroline H.; Baxter, Suzanne D.; Royer, Julie A.; Hardin, James W.; Mackelprang, Alyssa J.; Smith, Albert F.

2010-01-01

Data from a study concerning retention interval and school-meal observation on children’s dietary recalls were used to investigate relationships of social desirability score (SDS) and body mass index percentile (BMI%) to recall accuracy for energy for observed (n=327) children, and to reported energy for observed and unobserved (n=152) children. Report rates (reported/observed) correlated negatively with SDS and BMI%. Correspondence rates (correctly reported/observed) correlated negatively with SDS. Inflation ratios (overreported/observed) correlated negatively with BMI%. The relationship between reported energy and each of SDS and BMI% did not depend on observation status. Studies utilizing children’s dietary recalls should assess SDS and BMI%. PMID:20460407

4. Interval hypoxic training.

PubMed

Bernardi, L

2001-01-01

Interval hypoxic training (IHT) is a technique developed in the former Soviet Union that consists of repeated exposures to 5-7 minutes of steady or progressive hypoxia, interrupted by equal periods of recovery. It has been proposed for training in sports, to acclimatize to high altitude, and to treat a variety of clinical conditions, spanning from coronary heart disease to Cesarean delivery. Some of these results may originate from the different effects of continuous vs. intermittent hypoxia (IH), which can be obtained by manipulating the repetition rate, the duration and the intensity of the hypoxic stimulus. The present article will attempt to examine some of the effects of IH, and, whenever possible, compare them to those of typical IHT. IH can modify oxygen transport and energy utilization, alter respiratory and blood pressure control mechanisms, and induce permanent modifications in the cardiovascular system. IHT increases the hypoxic ventilatory response, increases red blood cell count and increases aerobic capacity. Some of these effects might be potentially beneficial in specific physiologic or pathologic conditions. At this stage, this technique appears interesting for its possible applications, but its mechanisms, potentials and limitations remain largely to be explored.

5. Analytic Energy Gradients and Spin Multiplicities for Orbital-Optimized Second-Order Perturbation Theory with Density-Fitting Approximation: An Efficient Implementation.

PubMed

Bozkaya, Uğur

2014-10-14

An efficient implementation of analytic energy gradients and spin multiplicities for the density-fitted orbital-optimized second-order perturbation theory (DF-OMP2) [Bozkaya, U. J. Chem. Theory Comput. 2014, 10, 2371-2378] is presented. The DF-OMP2 method is applied to a set of alkanes, conjugated dienes, and noncovalent interaction complexes to compare the cost of single point analytic gradient computations with the orbital-optimized MP2 with the resolution of the identity approach (OO-RI-MP2) [Neese, F.; Schwabe, T.; Kossmann, S.; Schirmer, B.; Grimme, S. J. Chem. Theory Comput. 2009, 5, 3060-3073]. Our results demonstrate that the DF-OMP2 method provides substantially lower computational costs for analytic gradients than OO-RI-MP2. On average, the cost of DF-OMP2 analytic gradients is 9-11 times lower than that of OO-RI-MP2 for systems considered. We also consider aromatic bond dissociation energies, for which MP2 provides poor reaction energies. The DF-OMP2 method exhibits a substantially better performance than MP2, providing a mean absolute error of 2.5 kcal mol(-1), which is more than 9 times lower than that of MP2 (22.6 kcal mol(-1)). Overall, the DF-OMP2 method appears very helpful for electronically challenging chemical systems such as free radicals or other cases where standard MP2 proves unreliable. For such problematic systems, we recommend using DF-OMP2 instead of the canonical MP2 as a more robust method with the same computational scaling.

6. Approximation of Laws

Niiniluoto, Ilkka

2014-03-01

Approximation of laws is an important theme in the philosophy of science. If we can make sense of the idea that two scientific laws are "close" to each other, then we can also analyze such methodological notions as approximate explanation of laws, approximate reduction of theories, approximate empirical success of theories, and approximate truth of laws. Proposals for measuring the distance between quantitative scientific laws were given in Niiniluoto (1982, 1987). In this paper, these definitions are reconsidered as a response to the interesting critical remarks by Liu (1999).

7. Musical intervals in speech

PubMed Central

Ross, Deborah; Choi, Jonathan; Purves, Dale

2007-01-01

Throughout history and across cultures, humans have created music using pitch intervals that divide octaves into the 12 tones of the chromatic scale. Why these specific intervals in music are preferred, however, is not known. In the present study, we analyzed a database of individually spoken English vowel phones to examine the hypothesis that musical intervals arise from the relationships of the formants in speech spectra that determine the perceptions of distinct vowels. Expressed as ratios, the frequency relationships of the first two formants in vowel phones represent all 12 intervals of the chromatic scale. Were the formants to fall outside the ranges found in the human voice, their relationships would generate either a less complete or a more dilute representation of these specific intervals. These results imply that human preference for the intervals of the chromatic scale arises from experience with the way speech formants modulate laryngeal harmonics to create different phonemes. PMID:17525146
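
As a hedged illustration of the ratio analysis described above (our sketch, not the authors' pipeline), one can map a ratio of two formant frequencies to the nearest interval of the chromatic scale, here idealized with equal-tempered semitone ratios 2^(n/12):

```python
import math

# Illustrative mapping from a frequency ratio to the nearest chromatic-scale
# interval. Equal temperament (ratios 2**(n/12)) is assumed for simplicity.
INTERVAL_NAMES = ["unison", "minor second", "major second", "minor third",
                  "major third", "perfect fourth", "tritone", "perfect fifth",
                  "minor sixth", "major sixth", "minor seventh",
                  "major seventh", "octave"]

def nearest_chromatic_interval(f1, f2):
    """Return (name, semitones) of the interval closest to the ratio f2/f1."""
    ratio = max(f1, f2) / min(f1, f2)       # fold so the ratio is >= 1
    semitones = 12.0 * math.log2(ratio)
    n = min(max(round(semitones), 0), 12)   # clamp to one octave
    return INTERVAL_NAMES[n], n

# e.g. formants near 500 Hz and 750 Hz form a 3:2 ratio, a perfect fifth
print(nearest_chromatic_interval(500.0, 750.0))
```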

8. Musical intervals in speech.

PubMed

Ross, Deborah; Choi, Jonathan; Purves, Dale

2007-06-05

Throughout history and across cultures, humans have created music using pitch intervals that divide octaves into the 12 tones of the chromatic scale. Why these specific intervals in music are preferred, however, is not known. In the present study, we analyzed a database of individually spoken English vowel phones to examine the hypothesis that musical intervals arise from the relationships of the formants in speech spectra that determine the perceptions of distinct vowels. Expressed as ratios, the frequency relationships of the first two formants in vowel phones represent all 12 intervals of the chromatic scale. Were the formants to fall outside the ranges found in the human voice, their relationships would generate either a less complete or a more dilute representation of these specific intervals. These results imply that human preference for the intervals of the chromatic scale arises from experience with the way speech formants modulate laryngeal harmonics to create different phonemes.

9. The calculation of ionization energies by perturbation, configuration interaction and approximate coupled pair techniques and comparisons with Green's function methods for Ne, H2O and N2

Bacskay, George B.

1980-05-01

The vertical valence ionization potentials of Ne, H2O and N2 have been calculated by Rayleigh-Schrödinger perturbation and configuration interaction methods. The calculations were carried out in the space of a single determinant reference state and its single and double excitations, using both the N and N - 1 electron Hartree-Fock orbitals as hole/particle bases. The perturbation series for the ion state were generally found to converge fairly slowly in the N electron Hartree-Fock (frozen) orbital basis, but considerably faster in the appropriate N - 1 electron RHF (relaxed) orbital basis. In certain cases, however, due to near-degeneracy effects, partial, and even complete, breakdown of the (non-degenerate) perturbation treatment was observed. The effects of higher excitations on the ionization potentials were estimated by the approximate coupled pair techniques CPA' and CPA″ as well as by a Davidson type correction formula. The final, fully converged CPA″ results are generally in good agreement with those from PNO-CEPA and Green's function calculations as well as experiment.

10. Programming with Intervals

Matsakis, Nicholas D.; Gross, Thomas R.

Intervals are a new, higher-level primitive for parallel programming with which programmers directly construct the program schedule. Programs using intervals can be statically analyzed to ensure that they do not deadlock or contain data races. In this paper, we demonstrate the flexibility of intervals by showing how to use them to emulate common parallel control-flow constructs like barriers and signals, as well as higher-level patterns such as bounded-buffer producer-consumer. We have implemented intervals as a publicly available library for Java and Scala.

11. Interval Graph Limits

PubMed Central

Diaconis, Persi; Holmes, Susan; Janson, Svante

2015-01-01

We work out a graph limit theory for dense interval graphs. The theory developed departs from the usual description of a graph limit as a symmetric function W (x, y) on the unit square, with x and y uniform on the interval (0, 1). Instead, we fix a W and change the underlying distribution of the coordinates x and y. We find choices such that our limits are continuous. Connections to random interval graphs are given, including some examples. We also show a continuity result for the chromatic number and clique number of interval graphs. Some results on uniqueness of the limit description are given for general graph limits. PMID:26405368
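
A minimal sketch of the random interval graph model referenced above (an illustrative construction, not the paper's): vertices are intervals with uniform endpoints, edges join overlapping intervals, and the clique number can be read off as the maximum overlap depth.

```python
import random

# Random interval graph: n vertices, each an interval with i.i.d. uniform
# endpoints in (0, 1); two vertices are adjacent iff their intervals overlap.
def random_interval_graph(n, seed=0):
    rng = random.Random(seed)
    iv = [tuple(sorted((rng.random(), rng.random()))) for _ in range(n)]
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(i + 1, n):
            (a, b), (c, d) = iv[i], iv[j]
            if a <= d and c <= b:          # closed-interval overlap test
                adj[i].add(j)
                adj[j].add(i)
    return iv, adj

def clique_number(iv):
    # For interval graphs the clique number equals the maximum number of
    # intervals covering a common point: sweep the sorted endpoints.
    # (Endpoint ties have probability zero for continuous endpoints.)
    events = sorted([(a, 1) for a, b in iv] + [(b, -1) for a, b in iv])
    depth = best = 0
    for _, step in events:
        depth += step
        best = max(best, depth)
    return best

iv, adj = random_interval_graph(6)
print(sum(len(nbrs) for nbrs in adj.values()) // 2, "edges,",
      "clique number", clique_number(iv))
```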

12. Dependence of the specific energy of the β/α interface in the VT6 titanium alloy on the heating temperature in the interval 600-975°C

Murzinova, M. A.; Zherebtsov, S. V.; Salishchev, G. A.

2016-04-01

The specific energy of interphase boundaries is an important characteristic of multiphase alloys, because it determines in many respects their microstructural stability and properties during processing and exploitation. We analyze variation of the specific energy of the β/α interface in the VT6 titanium alloy at temperatures from 600 to 975°C. Analysis is based on the model of a ledge interphase boundary and the method for computation of its energy developed by van der Merwe and Shiflet [33, 34]. Calculations use the available results of measurements of the lattice parameters of phases in the indicated temperature interval and their chemical composition. In addition, we take into account the experimental data and the results of simulation of the effect of temperature and phase composition on the elastic moduli of the α and β phases in titanium alloys. It is shown that when the temperature decreases from 975 to 600°C, the specific energy of the β/α interface increases from 0.15 to 0.24 J/m². The main contribution to the interfacial energy (about 85%) comes from edge dislocations accommodating the misfit in direction [0001]α || [110]β. The energy associated with the accommodation of the misfit in directions [2̄110]α || [11̄1]β and [01̄10]α || [1̄12]β due to the formation of "ledges" and tilt misfit dislocations is low and increases slightly upon cooling.

13. Approximate symmetries of Hamiltonians

Chubb, Christopher T.; Flammia, Steven T.

2017-08-01

We explore the relationship between approximate symmetries of a gapped Hamiltonian and the structure of its ground space. We start by considering approximate symmetry operators, defined as unitary operators whose commutators with the Hamiltonian have norms that are sufficiently small. We show that approximate symmetry operators can be restricted to the ground space while approximately preserving certain mutual commutation relations. We generalize the Stone-von Neumann theorem to matrices that approximately satisfy the canonical (Heisenberg-Weyl-type) commutation relations and use this to show that approximate symmetry operators can certify the degeneracy of the ground space even though they only approximately form a group. Importantly, the notions of "approximate" and "small" are all independent of the dimension of the ambient Hilbert space and depend only on the degeneracy in the ground space. Our analysis additionally holds for any gapped band of sufficiently small width in the excited spectrum of the Hamiltonian, and we discuss applications of these ideas to topological quantum phases of matter and topological quantum error correcting codes. Finally, in our analysis, we also provide an exponential improvement upon bounds concerning the existence of shared approximate eigenvectors of approximately commuting operators under an added normality constraint, which may be of independent interest.

14. Interval estimations in metrology

Mana, G.; Palmisano, C.

2014-06-01

This paper investigates interval estimation for a measurand that is known to be positive. Both the Neyman and Bayesian procedures are considered and the difference between the two, not always perceived, is discussed in detail. A solution is proposed to a paradox arising from the frequentist assessment of the long-run success rate of Bayesian intervals.
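
The Bayesian side of this problem can be sketched as follows (an assumption-laden illustration, not the paper's worked example): with a single observation x ~ N(θ, σ²) of a measurand known to satisfy θ ≥ 0, and a flat prior on [0, ∞), the posterior is a normal truncated at zero, and a credible interval follows by inverting its CDF.

```python
import math

def phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def bayesian_interval(x, sigma, level=0.95):
    """Credible interval for theta >= 0 given one observation x ~ N(theta,
    sigma^2) and a flat prior on [0, inf); the posterior is N(x, sigma^2)
    truncated at zero. Illustrative setup, not the paper's example."""
    z0 = phi(-x / sigma)                       # posterior mass cut off below 0
    def post_cdf(t):
        return (phi((t - x) / sigma) - z0) / (1.0 - z0)
    def inv(p):
        lo, hi = 0.0, (x if x > 0 else 0.0) + 12.0 * sigma
        for _ in range(200):                   # bisection; post_cdf is monotone
            mid = 0.5 * (lo + hi)
            if post_cdf(mid) < p:
                lo = mid
            else:
                hi = mid
        return 0.5 * (lo + hi)
    return inv((1 - level) / 2), inv((1 + level) / 2)

# Far from zero the truncation is irrelevant and the usual x +/- 1.96*sigma
# band is recovered; near (or below) zero the interval stays positive.
print(bayesian_interval(10.0, 1.0))
print(bayesian_interval(-0.5, 1.0))
```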

15. Simple physics-based analytical formulas for the potentials of mean force for the interaction of amino acid side chains in water. 1. Approximate expression for the free energy of hydrophobic association based on a Gaussian-overlap model.

PubMed

Makowski, Mariusz; Liwo, Adam; Scheraga, Harold A

2007-03-22

A physics-based model is proposed to derive approximate analytical expressions for the cavity component of the free energy of hydrophobic association of spherical and spheroidal solutes in water. The model is based on the difference between the number and context of the water molecules in the hydration sphere of a hydrophobic dimer and of two isolated hydrophobic solutes. It is assumed that the water molecules touching the convex part of the molecular surface of the dimer and those in the hydration spheres of the monomers contribute equally to the free energy of solvation, and those touching the saddle part of the molecular surface of the dimer result in a more pronounced increase in free energy because of their more restricted mobility (entropy loss) and fewer favorable electrostatic interactions with other water molecules. The density of water in the hydration sphere around a single solute particle is approximated by the derivative of a Gaussian centered on the solute molecule with respect to its standard deviation. On the basis of this approximation, the number of water molecules in different parts of the hydration sphere of the dimer is expressed in terms of the first and the second mixed derivatives of the two Gaussians centered on the first and second solute molecules, respectively, with respect to the standard deviations of these Gaussians, and plausible analytical expressions for the cavity component of the hydrophobic-association energy of spherical and spheroidal solutes are introduced. As opposed to earlier hydration-shell models, our expressions reproduce the desolvation maxima in the potentials of mean force of pairs of nonpolar solutes in water, and their advantage over the models based on molecular-surface area is that they have continuous gradients in the coordinates of solute centers.

16. Finding the Best Quadratic Approximation of a Function

ERIC Educational Resources Information Center

Yang, Yajun; Gordon, Sheldon P.

2011-01-01

This article examines the question of finding the best quadratic function to approximate a given function on an interval. The prototypical function considered is f(x) = e[superscript x]. Two approaches are considered, one based on Taylor polynomial approximations at various points in the interval under consideration, the other based on the fact…
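
The first approach, a Taylor quadratic at a point of the interval, can be compared numerically with a least-squares quadratic. The sketch below is our illustration (using f(x) = e^x on [0, 1], as in the article), measuring both candidates by maximum absolute error.

```python
import math

# Compare the Taylor quadratic of exp(x) about the interval midpoint with the
# continuous least-squares quadratic on [0, 1], via maximum absolute error.

def max_err(coeffs, n=2001):
    a, b, c = coeffs                       # p(x) = a + b*x + c*x**2
    return max(abs(math.exp(x) - (a + b * x + c * x * x))
               for x in (i / (n - 1) for i in range(n)))

# Taylor about x0 = 1/2: e^{x0}*(1 + (x-x0) + (x-x0)^2/2), expanded into
# monomial coefficients a + b*x + c*x^2.
x0, e0 = 0.5, math.exp(0.5)
taylor = (e0 * (1 - x0 + x0 * x0 / 2), e0 * (1 - x0), e0 / 2)

def solve3(A, rhs):
    # Gaussian elimination with partial pivoting for a small dense system.
    n = len(rhs)
    M = [row[:] + [rhs[i]] for i, row in enumerate(A)]
    for k in range(n):
        p = max(range(k, n), key=lambda r: abs(M[r][k]))
        M[k], M[p] = M[p], M[k]
        for r in range(k + 1, n):
            f = M[r][k] / M[k][k]
            for c in range(k, n + 1):
                M[r][c] -= f * M[k][c]
    x = [0.0] * n
    for k in reversed(range(n)):
        x[k] = (M[k][n] - sum(M[k][c] * x[c] for c in range(k + 1, n))) / M[k][k]
    return x

# Continuous least squares on [0, 1]: Hilbert-matrix normal equations with
# moments of x^i * e^x, namely e-1, 1, e-2 for i = 0, 1, 2.
e = math.e
lsq = tuple(solve3([[1.0, 1 / 2, 1 / 3], [1 / 2, 1 / 3, 1 / 4],
                    [1 / 3, 1 / 4, 1 / 5]], [e - 1.0, 1.0, e - 2.0]))

print(f"Taylor (midpoint) max error: {max_err(taylor):.4f}")
print(f"Least-squares max error:     {max_err(lsq):.4f}")
```

Neither candidate is the minimax ("best" in the uniform sense) quadratic, but the comparison shows how strongly the error criterion shapes the answer.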

17. Finding the Best Quadratic Approximation of a Function

ERIC Educational Resources Information Center

Yang, Yajun; Gordon, Sheldon P.

2011-01-01

This article examines the question of finding the best quadratic function to approximate a given function on an interval. The prototypical function considered is f(x) = e[superscript x]. Two approaches are considered, one based on Taylor polynomial approximations at various points in the interval under consideration, the other based on the fact…

18. Overconfidence in interval estimates.

PubMed

Soll, Jack B; Klayman, Joshua

2004-03-01

Judges were asked to make numerical estimates (e.g., "In what year was the first flight of a hot air balloon?"). Judges provided high and low estimates such that they were X% sure that the correct answer lay between them. They exhibited substantial overconfidence: The correct answer fell inside their intervals much less than X% of the time. This contrasts with choices between 2 possible answers to a question, which showed much less overconfidence. The authors show that overconfidence in interval estimates can result from variability in setting interval widths. However, the main cause is that subjective intervals are systematically too narrow given the accuracy of one's information-sometimes only 40% as large as necessary to be well calibrated. The degree of overconfidence varies greatly depending on how intervals are elicited. There are also substantial differences among domains and between male and female judges. The authors discuss the possible psychological mechanisms underlying this pattern of findings.

19. Direct interval volume visualization.

PubMed

Ament, Marco; Weiskopf, Daniel; Carr, Hamish

2010-01-01

We extend direct volume rendering with a unified model for generalized isosurfaces, also called interval volumes, allowing a wider spectrum of visual classification. We generalize the concept of scale-invariant opacity—typical for isosurface rendering—to semi-transparent interval volumes. Scale-invariant rendering is independent of physical space dimensions and therefore directly facilitates the analysis of data characteristics. Our model represents sharp isosurfaces as limits of interval volumes and combines them with features of direct volume rendering. Our objective is accurate rendering, guaranteeing that all isosurfaces and interval volumes are visualized in a crack-free way with correct spatial ordering. We achieve simultaneous direct and interval volume rendering by extending preintegration and explicit peak finding with data-driven splitting of ray integration and hybrid computation in physical and data domains. Our algorithm is suitable for efficient parallel processing for interactive applications as demonstrated by our CUDA implementation.

20. Approximate spatial reasoning

NASA Technical Reports Server (NTRS)

Dutta, Soumitra

1988-01-01

A model for approximate spatial reasoning using fuzzy logic to represent the uncertainty in the environment is presented. Algorithms are developed which can be used to reason about spatial information expressed in the form of approximate linguistic descriptions similar to the kind of spatial information processed by humans. Particular attention is given to static spatial reasoning.

1. Approximate spatial reasoning

NASA Technical Reports Server (NTRS)

Dutta, Soumitra

1988-01-01

A model for approximate spatial reasoning using fuzzy logic to represent the uncertainty in the environment is presented. Algorithms are developed which can be used to reason about spatial information expressed in the form of approximate linguistic descriptions similar to the kind of spatial information processed by humans. Particular attention is given to static spatial reasoning.

2. An assessment of low-lying excitation energies and triplet instabilities of organic molecules with an ab initio Bethe-Salpeter equation approach and the Tamm-Dancoff approximation.

PubMed

Rangel, Tonatiuh; Hamed, Samia M; Bruneval, Fabien; Neaton, Jeffrey B

2017-05-21

The accurate prediction of singlet and triplet excitation energies is an area of intense research of significant fundamental interest and critical for many applications. Most calculations of singlet and triplet energies use time-dependent density functional theory (TDDFT) in conjunction with an approximate exchange-correlation functional. In this work, we examine and critically assess an alternative method for predicting low-lying neutral excitations with similar computational cost, the ab initio Bethe-Salpeter equation (BSE) approach, and compare results against high-accuracy wavefunction-based methods. We consider singlet and triplet excitations of 27 prototypical organic molecules, including members of Thiel's set, the acene series, and several aromatic hydrocarbons exhibiting charge-transfer-like excitations. Analogous to its impact in TDDFT, we find that the Tamm-Dancoff approximation (TDA) overcomes triplet instabilities in the BSE approach, improving both triplet and singlet energetics relative to higher level theories. Finally, we find that BSE-TDA calculations built on effective DFT starting points, such as those utilizing optimally tuned range-separated hybrid functionals, can yield accurate singlet and triplet excitation energies for gas-phase organic molecules.

3. An assessment of low-lying excitation energies and triplet instabilities of organic molecules with an ab initio Bethe-Salpeter equation approach and the Tamm-Dancoff approximation

Rangel, Tonatiuh; Hamed, Samia M.; Bruneval, Fabien; Neaton, Jeffrey B.

2017-05-01

The accurate prediction of singlet and triplet excitation energies is an area of intense research of significant fundamental interest and critical for many applications. Most calculations of singlet and triplet energies use time-dependent density functional theory (TDDFT) in conjunction with an approximate exchange-correlation functional. In this work, we examine and critically assess an alternative method for predicting low-lying neutral excitations with similar computational cost, the ab initio Bethe-Salpeter equation (BSE) approach, and compare results against high-accuracy wavefunction-based methods. We consider singlet and triplet excitations of 27 prototypical organic molecules, including members of Thiel's set, the acene series, and several aromatic hydrocarbons exhibiting charge-transfer-like excitations. Analogous to its impact in TDDFT, we find that the Tamm-Dancoff approximation (TDA) overcomes triplet instabilities in the BSE approach, improving both triplet and singlet energetics relative to higher level theories. Finally, we find that BSE-TDA calculations built on effective DFT starting points, such as those utilizing optimally tuned range-separated hybrid functionals, can yield accurate singlet and triplet excitation energies for gas-phase organic molecules.

4. Green Ampt approximations

Barry, D. A.; Parlange, J.-Y.; Li, L.; Jeng, D.-S.; Crapper, M.

2005-10-01

The solution to the Green and Ampt infiltration equation is expressible in terms of the Lambert W₋₁ function. Approximations for Green and Ampt infiltration are thus derivable from approximations for the W₋₁ function and vice versa. An infinite family of asymptotic expansions to W₋₁ is presented. Although these expansions do not converge near the branch point of the W function (which corresponds to Green-Ampt infiltration with immediate ponding), a method is presented for approximating W₋₁ that is exact at the branch point and asymptotically, with interpolation between these limits. Some existing and several new simple and compact yet robust approximations applicable to Green-Ampt infiltration and flux are presented, the most accurate of which has a maximum relative error of 5 × 10⁻⁵%. This error is orders of magnitude lower than that of any existing analytical approximation.
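The W₋₁ connection described above can be sketched in a few lines. A minimal illustration (our own shorthand: K for hydraulic conductivity and S for the suction-moisture-deficit product ψΔθ; the bisection-based W₋₁ is a slow stand-in for the fast approximations the paper derives):

```python
import math

def lambertw_m1(z):
    """Lower branch W_-1 of Lambert's W for z in (-1/e, 0): solve
    w*exp(w) = z by bisection, using that w*exp(w) is monotone for w <= -1."""
    lo, hi = -1.0, -2.0
    while hi * math.exp(hi) <= z:   # walk hi leftward until it brackets z
        hi *= 2.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if mid * math.exp(mid) > z:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

def green_ampt_F(t, K, S):
    """Cumulative infiltration F(t) for ponded Green-Ampt flow, where F
    satisfies the implicit relation K*t = F - S*ln(1 + F/S)."""
    return -S * (1.0 + lambertw_m1(-math.exp(-1.0 - K * t / S)))
```

With K = S = 1 and t = 2, the returned F satisfies F − ln(1 + F) = 2 to machine precision, confirming the closed form F(t) = −S·[1 + W₋₁(−exp(−1 − Kt/S))].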

5. Valence excitation energies of alkenes, carbonyl compounds, and azabenzenes by time-dependent density functional theory: linear response of the ground state compared to collinear and noncollinear spin-flip TDDFT with the Tamm-Dancoff approximation.

PubMed

Isegawa, Miho; Truhlar, Donald G

2013-04-07

Time-dependent density functional theory (TDDFT) holds great promise for studying photochemistry because of its affordable cost for large systems and for repeated calculations as required for direct dynamics. The chief obstacle is uncertain accuracy. There have been many validation studies, but there are also many formulations, and there have been few studies where several formulations were applied systematically to the same problems. Another issue, when TDDFT is applied with only a single exchange-correlation functional, is that errors in the functional may mask successes or failures of the formulation. Here, to try to sort out some of the issues, we apply eight formulations of adiabatic TDDFT to the first valence excitations of ten molecules with 18 density functionals of diverse types. The formulations examined are linear response from the ground state (LR-TDDFT), linear response from the ground state with the Tamm-Dancoff approximation (TDDFT-TDA), the original collinear spin-flip approximation with the Tamm-Dancoff (TD) approximation (SF1-TDDFT-TDA), the original noncollinear spin-flip approximation with the TDA approximation (SF1-NC-TDDFT-TDA), combined self-consistent-field (SCF) and collinear spin-flip calculations in the original spin-projected form (SF2-TDDFT-TDA) or non-spin-projected (NSF2-TDDFT-TDA), and combined SCF and noncollinear spin-flip calculations (SF2-NC-TDDFT-TDA and NSF2-NC-TDDFT-TDA). Comparing LR-TDDFT to TDDFT-TDA, we observed that the excitation energy is raised by the TDA; this brings the excitation energies underestimated by full linear response closer to experiment, but sometimes it makes the results worse. For ethylene and butadiene, the excitation energies are underestimated by LR-TDDFT, and the error becomes smaller on making the TDA. Neither SF1-TDDFT-TDA nor SF2-TDDFT-TDA provides a lower mean unsigned error than LR-TDDFT or TDDFT-TDA. The comparison between collinear and noncollinear kernels shows that the noncollinear kernel

6. Valence excitation energies of alkenes, carbonyl compounds, and azabenzenes by time-dependent density functional theory: Linear response of the ground state compared to collinear and noncollinear spin-flip TDDFT with the Tamm-Dancoff approximation

Isegawa, Miho; Truhlar, Donald G.

2013-04-01

Time-dependent density functional theory (TDDFT) holds great promise for studying photochemistry because of its affordable cost for large systems and for repeated calculations as required for direct dynamics. The chief obstacle is uncertain accuracy. There have been many validation studies, but there are also many formulations, and there have been few studies where several formulations were applied systematically to the same problems. Another issue, when TDDFT is applied with only a single exchange-correlation functional, is that errors in the functional may mask successes or failures of the formulation. Here, to try to sort out some of the issues, we apply eight formulations of adiabatic TDDFT to the first valence excitations of ten molecules with 18 density functionals of diverse types. The formulations examined are linear response from the ground state (LR-TDDFT), linear response from the ground state with the Tamm-Dancoff approximation (TDDFT-TDA), the original collinear spin-flip approximation with the Tamm-Dancoff (TD) approximation (SF1-TDDFT-TDA), the original noncollinear spin-flip approximation with the TDA approximation (SF1-NC-TDDFT-TDA), combined self-consistent-field (SCF) and collinear spin-flip calculations in the original spin-projected form (SF2-TDDFT-TDA) or non-spin-projected (NSF2-TDDFT-TDA), and combined SCF and noncollinear spin-flip calculations (SF2-NC-TDDFT-TDA and NSF2-NC-TDDFT-TDA). Comparing LR-TDDFT to TDDFT-TDA, we observed that the excitation energy is raised by the TDA; this brings the excitation energies underestimated by full linear response closer to experiment, but sometimes it makes the results worse. For ethylene and butadiene, the excitation energies are underestimated by LR-TDDFT, and the error becomes smaller on making the TDA. Neither SF1-TDDFT-TDA nor SF2-TDDFT-TDA provides a lower mean unsigned error than LR-TDDFT or TDDFT-TDA. The comparison between collinear and noncollinear kernels shows that the noncollinear kernel

7. Analysis of regression confidence intervals and Bayesian credible intervals for uncertainty quantification

Lu, Dan; Ye, Ming; Hill, Mary C.

2012-09-01

Confidence intervals based on classical regression theories augmented to include prior information and credible intervals based on Bayesian theories are conceptually different ways to quantify parametric and predictive uncertainties. Because both confidence and credible intervals are used in environmental modeling, we seek to understand their differences and similarities. This is of interest in part because calculating confidence intervals typically requires tens to thousands of model runs, while Bayesian credible intervals typically require tens of thousands to millions of model runs. Given multi-Gaussian distributed observation errors, our theoretical analysis shows that, for linear or linearized-nonlinear models, confidence and credible intervals are always numerically identical when consistent prior information is used. For nonlinear models, nonlinear confidence and credible intervals can be numerically identical if parameter confidence regions defined using the approximate likelihood method and parameter credible regions estimated using Markov chain Monte Carlo realizations are numerically identical and predictions are a smooth, monotonic function of the parameters. Both occur if intrinsic model nonlinearity is small. While the conditions of Gaussian errors and small intrinsic model nonlinearity are violated by many environmental models, heuristic tests using analytical and numerical models suggest that linear and nonlinear confidence intervals can be useful approximations of uncertainty even under significantly nonideal conditions. In the context of epistemic model error for a complex synthetic nonlinear groundwater problem, the linear and nonlinear confidence and credible intervals for individual models performed similarly enough to indicate that the computationally frugal confidence intervals can be useful in many circumstances. Experiences with these groundwater models are expected to be broadly applicable to many environmental models. We suggest that for
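The linear-model case in which the two interval types coincide can be checked numerically. A minimal sketch (our own toy setup, not the groundwater models of the study): for Gaussian data with known noise and a flat prior, the classical 95% confidence interval for the mean and the central 95% posterior credible interval agree.

```python
import numpy as np

rng = np.random.default_rng(0)
sigma, n = 2.0, 50
y = rng.normal(3.0, sigma, size=n)       # synthetic observations
ybar, se = y.mean(), sigma / np.sqrt(n)

# Classical 95% confidence interval (linear model, known sigma)
z = 1.959963984540054
conf_int = (ybar - z * se, ybar + z * se)

# Bayesian 95% credible interval: with a flat prior the posterior for the
# mean is N(ybar, se^2); take the central 95% of posterior draws
draws = rng.normal(ybar, se, size=400_000)
cred_int = (np.percentile(draws, 2.5), np.percentile(draws, 97.5))
```

The two intervals match to Monte Carlo precision, illustrating the paper's point that for linear models with consistent priors the computationally frugal confidence interval loses nothing relative to the credible interval.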

8. Random-Phase Approximation Methods

Chen, Guo P.; Voora, Vamsee K.; Agee, Matthew M.; Balasubramani, Sree Ganesh; Furche, Filipp

2017-05-01

Random-phase approximation (RPA) methods are rapidly emerging as cost-effective validation tools for semilocal density functional computations. We present the theoretical background of RPA in an intuitive rather than formal fashion, focusing on the physical picture of screening and simple diagrammatic analysis. A new decomposition of the RPA correlation energy into plasmonic modes leads to an appealing visualization of electron correlation in terms of charge density fluctuations. Recent developments in the areas of beyond-RPA methods, RPA correlation potentials, and efficient algorithms for RPA energy and property calculations are reviewed. The ability of RPA to approximately capture static correlation in molecules is quantified by an analysis of RPA natural occupation numbers. We illustrate the use of RPA methods in applications to small-gap systems such as open-shell d- and f-element compounds, radicals, and weakly bound complexes, where semilocal density functional results exhibit strong functional dependence.

9. New approach to identify negative and positive pions with a scintillator range telescope in the 15-90 MeV pion energy interval

SciTech Connect

Julien, J.; Bellini, V.; Bolore, M.; Charlot, X.; Girard, J.; Pappalardo, G.S.; Poitou, J.; Roussel, L.

1984-02-01

A scintillator range telescope was designed to detect pions in a very intense background of charged particles (ca 5000 ps) and to identify pion charge in the 15-90 MeV range. Such a telescope has a solid angle of 20 msr and allows the simultaneous detection of a wide pion momentum range on the order of 70 MeV/c to 200 MeV/c for both pions plus and pions minus. Several angles can be studied simultaneously with three telescopes. The pion energy resolution of ca 3 MeV is poorer, however, than the corresponding 0.5 MeV of a magnetic spectrometer. The accuracy of the R ratio depends on the accuracy of the pion plus identification method. This identification is based on the detection of particles generated by the pion-plus-to-muon-to-positron decay sequence with a mean life of 26 ns. One method relies on the fast recovery time of the associated electronics by using an appropriate delayed coincidence between pion plus and muon plus signals. The low efficiency of such a method does not permit the determination of the pion minus contribution. In order to improve the charge identification of pions, the authors use a new approach in their experiments, based on the measurement of the charge of the particle pulses within different time gates. This paper presents the principles of this approach. Three gates--a prompt, a normal, and a delayed gate--and their respective charge analyzers are used in the discussion.
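The charge-identification idea rests on simple decay kinematics: with a 26 ns mean life, the fraction of pion-decay muons landing in a given time gate follows directly from the exponential decay law. A small sketch (the gate boundaries below are illustrative, not the experiment's actual settings):

```python
import math

TAU_PI_NS = 26.0   # pi-plus mean life quoted in the abstract, in ns

def gate_fraction(t1_ns, t2_ns, tau=TAU_PI_NS):
    """Probability that the decay-muon signal falls inside a gate
    [t1, t2] ns after the pion stop, for an exponential decay law."""
    return math.exp(-t1_ns / tau) - math.exp(-t2_ns / tau)
```

For an illustrative delayed gate of 10-100 ns, gate_fraction(10, 100) is about 0.66 for stopped positive pions, while stopped negative pions (which undergo nuclear capture rather than this decay sequence) contribute essentially nothing there; comparing the charge collected in prompt, normal, and delayed gates then separates the two species.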

10. Intrinsic Nilpotent Approximation.

DTIC Science & Technology

1985-06-01

Intrinsic Nilpotent Approximation. Technical Report LIDS-R-1482, Laboratory for Information and Decision Systems, Massachusetts Institute of Technology, Cambridge. The report concerns the approximation of certain infinite-dimensional filtered Lie algebras L by (finite-dimensional) graded nilpotent Lie algebras g, where x ∈ M and (x, ζ) ∈ T*M \ 0.

11. Anomalous diffraction approximation limits

Videen, Gorden; Chýlek, Petr

It has been reported in a recent article [Liu, C., Jonas, P.R., Saunders, C.P.R., 1996. Accuracy of the anomalous diffraction approximation to light scattering by column-like ice crystals. Atmos. Res., 41, pp. 63-69] that the anomalous diffraction approximation (ADA) accuracy does not depend on particle refractive index, but instead is dependent on the particle size parameter. Since this is at odds with previous research, we thought these results warranted further discussion.

12. Approximate spatial reasoning

NASA Technical Reports Server (NTRS)

Dutta, Soumitra

1988-01-01

Much of human reasoning is approximate in nature. Formal models of reasoning traditionally try to be precise, rejecting the fuzziness of concepts in natural use and replacing them with non-fuzzy scientific explicata by a process of precisiation. As an alternative to this approach, it has been suggested that rather than regarding human reasoning processes as approximations to some more refined and exact logical process that can be carried out with mathematical precision, the essence and power of human reasoning lies in its capability to grasp and use inexact concepts directly. This view is supported by the widespread fuzziness of simple everyday terms (e.g., near, tall) and the complexity of ordinary tasks (e.g., cleaning a room). Spatial reasoning is an area where humans consistently reason approximately with demonstrably good results. Consider the case of crossing a traffic intersection. We have only an approximate idea of the locations and speeds of various obstacles (e.g., persons and vehicles), but we nevertheless manage to cross such traffic intersections without any harm. The details of the mental processes that enable us to carry out such intricate tasks in such an apparently simple manner are not well understood. However, it is desirable that we try to incorporate such approximate reasoning techniques in our computer systems. Approximate spatial reasoning is very important for intelligent mobile agents (e.g., robots), especially for those operating in uncertain, unknown, or dynamic domains.

13. Approximate kernel competitive learning.

PubMed

Wu, Jian-Sheng; Zheng, Wei-Shi; Lai, Jian-Huang

2015-03-01

Kernel competitive learning has been successfully used to achieve robust clustering. However, kernel competitive learning (KCL) is not scalable to large scale data processing, because (1) it has to calculate and store the full kernel matrix, which is too large to be calculated and kept in memory, and (2) it cannot be computed in parallel. In this paper we develop a framework of approximate kernel competitive learning for processing large scale datasets. The proposed framework consists of two parts. First, it derives an approximate kernel competitive learning (AKCL) method, which learns kernel competitive learning in a subspace via sampling. We provide solid theoretical analysis on why the proposed approximation model works for kernel competitive learning, and furthermore, we show that the computational complexity of AKCL is largely reduced. Second, we propose a pseudo-paralleled approximate kernel competitive learning (PAKCL) method based on a set-based kernel competitive learning strategy, which overcomes the obstacle of using parallel programming in kernel competitive learning and significantly accelerates approximate kernel competitive learning for large scale clustering. The empirical evaluation on publicly available datasets shows that the proposed AKCL and PAKCL perform comparably to KCL, with a large reduction in computational cost. The proposed methods also achieve more effective clustering performance in terms of clustering precision than related approximate clustering approaches.
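The core idea of learning in a sampled kernel subspace instead of against the full kernel matrix can be sketched as follows. This is our own minimal illustration (an empirical kernel map onto m sampled landmark points plus winner-take-all updates), not the paper's AKCL algorithm; the function and parameter names are invented:

```python
import numpy as np

def rbf(X, Y, gamma=0.5):
    """Gaussian (RBF) kernel matrix between row sets X and Y."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def sampled_kernel_competitive_learning(X, k, m=20, iters=15, seed=0):
    """Winner-take-all competitive learning on an empirical kernel map
    onto m sampled landmarks: the full n-by-n kernel matrix is never
    formed, only the n-by-m block against the landmarks."""
    rng = np.random.default_rng(seed)
    landmarks = X[rng.choice(len(X), size=m, replace=False)]
    feat = rbf(X, landmarks)                     # n x m feature map
    centers = feat[rng.choice(len(X), size=k, replace=False)].copy()
    for it in range(iters):                      # online competitive learning
        lr = 0.5 / (it + 1)
        for i in rng.permutation(len(X)):
            w = int(np.argmin(((centers - feat[i]) ** 2).sum(1)))
            centers[w] += lr * (feat[i] - centers[w])   # move winner only
    return np.argmin(((feat[:, None, :] - centers[None, :, :]) ** 2).sum(-1), axis=1)

# demo: two well-separated blobs should come out as two clusters
rng = np.random.default_rng(42)
X = np.vstack([rng.normal(0.0, 0.3, size=(40, 2)),
               rng.normal(5.0, 0.3, size=(40, 2))])
labels = sampled_kernel_competitive_learning(X, k=2)
```

The per-update cost is O(km) rather than O(n), which is the kind of saving the subspace formulation is after; the paper's actual construction and its theoretical guarantees are more involved.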

14. Approximate Brueckner orbitals in electron propagator calculations

SciTech Connect

Ortiz, J.V.

1999-12-01

Orbitals and ground-state correlation amplitudes from the so-called Brueckner doubles approximation of coupled-cluster theory provide a useful reference state for electron propagator calculations. An operator manifold with hole, particle, two-hole-one-particle and two-particle-one-hole components is chosen. The resulting approximation is closely related to the 2ph-TDA, ADC(3), and 3+ methods. The enhanced versatility of this approximation is demonstrated through calculations on valence ionization energies, core ionization energies, electron detachment energies of anions, and on a molecule with partial biradical character, ozone.

15. Consistent Yokoya-Chen Approximation to Beamstrahlung (LCC-0010)

SciTech Connect

Peskin, M

2004-04-22

I reconsider the Yokoya-Chen approximate evolution equation for beamstrahlung and modify it slightly to generate simple, consistent analytical approximations for the electron and photon energy spectra. I compare these approximations to previous ones, and to simulation data.

16. Explicitly solvable complex Chebyshev approximation problems related to sine polynomials

NASA Technical Reports Server (NTRS)

Freund, Roland

1989-01-01

Explicitly solvable real Chebyshev approximation problems on the unit interval are typically characterized by simple error curves. A similar principle is presented for complex approximation problems with error curves induced by sine polynomials. As an application, some new explicit formulae for complex best approximations are derived.
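The "simple error curve" characterization on the real interval can be seen concretely. A small numerical check (a standard textbook example, not taken from the paper): the best uniform approximation to x^3 on [-1, 1] by polynomials of degree at most 2 is (3/4)x, and its error curve T_3(x)/4 equioscillates between ±1/4 at four points.

```python
import numpy as np

x = np.linspace(-1.0, 1.0, 20001)
err = x ** 3 - 0.75 * x        # error of the best approximation = T_3(x)/4

max_err = np.max(np.abs(err))                   # should equal 1/4
extrema = np.cos(np.pi * np.arange(4) / 3.0)    # 1, 1/2, -1/2, -1
signs = np.sign(extrema ** 3 - 0.75 * extrema)  # should alternate +, -, +, -
```

The alternating extrema are exactly the equioscillation condition of the Chebyshev characterization; the complex sine-polynomial problems of the paper generalize this picture off the real line.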

17. Covariant approximation averaging

Shintani, Eigo; Arthur, Rudy; Blum, Thomas; Izubuchi, Taku; Jung, Chulwoo; Lehner, Christoph

2015-06-01

We present a new class of statistical error reduction techniques for Monte Carlo simulations. Using covariant symmetries, we show that correlation functions can be constructed from inexpensive approximations without introducing any systematic bias in the final result. We introduce a new class of covariant approximation averaging techniques, known as all-mode averaging (AMA), in which the approximation takes account of contributions of all eigenmodes through the inverse of the Dirac operator computed from the conjugate gradient method with a relaxed stopping condition. In this paper we compare the performance and computational cost of our new method with traditional methods using correlation functions and masses of the pion, nucleon, and vector meson in Nf=2 +1 lattice QCD using domain-wall fermions. This comparison indicates that AMA significantly reduces statistical errors in Monte Carlo calculations over conventional methods for the same cost.
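The bias-free use of a cheap approximation can be illustrated with a toy estimator. This mock (synthetic numbers, not lattice data, and omitting the covariant translation averaging that gives AMA its name) shows only the structure: average the inexpensive approximation over many samples, then correct with the exact-minus-approximate difference measured on a few.

```python
import numpy as np

rng = np.random.default_rng(1)
N, n_exact = 4000, 200
O_exact = rng.normal(1.0, 1.0, size=N)             # "expensive" observable, true mean 1.0
O_cheap = O_exact + rng.normal(0.0, 0.05, size=N)  # strongly correlated approximation

# Corrected estimator: unbiased because E[cheap] + E[exact - cheap] = E[exact]
estimate = O_cheap.mean() + (O_exact[:n_exact] - O_cheap[:n_exact]).mean()

# Naive estimator using only the exact measurements we paid for
naive = O_exact[:n_exact].mean()
```

Because the correction term has tiny variance when the approximation tracks the exact observable closely, the corrected estimate carries nearly the statistical power of all N samples at the cost of only n_exact exact solves, which is the essence of the error reduction claimed above.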

18. Approximate Bayesian Computation

Cisewski, Jessi

2015-08-01

Explicitly specifying a likelihood function is becoming increasingly difficult for many problems in astronomy. Astronomers often specify a simpler approximate likelihood - leaving out important aspects of a more realistic model. Approximate Bayesian computation (ABC) provides a framework for performing inference in cases where the likelihood is not available or intractable. I will introduce ABC and explain how it can be a useful tool for astronomers. In particular, I will focus on the eccentricity distribution for a sample of exoplanets with multiple sub-populations.
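The basic ABC rejection recipe fits in a few lines. A minimal sketch for a toy problem (inferring a Gaussian mean; the exoplanet application would swap in its own simulator and summary statistic; for speed we simulate the sample mean directly rather than the 200-point dataset per proposal, which is equivalent here):

```python
import numpy as np

rng = np.random.default_rng(2)
data = rng.normal(4.0, 1.0, size=200)   # "observed" data
s_obs = data.mean()                     # summary statistic

# ABC rejection: draw from the prior, simulate, keep parameters whose
# simulated summary lands within epsilon of the observed one
n_prop, eps = 200_000, 0.05
mu = rng.uniform(0.0, 10.0, size=n_prop)         # flat prior on the mean
sim_means = rng.normal(mu, 1.0 / np.sqrt(200))   # simulated sample means
posterior = mu[np.abs(sim_means - s_obs) < eps]
```

The accepted draws approximate the posterior without ever evaluating a likelihood, which is the point when the likelihood is intractable; shrinking eps trades acceptance rate for fidelity.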

19. Ab initio dynamical vertex approximation

Galler, Anna; Thunström, Patrik; Gunacker, Patrik; Tomczak, Jan M.; Held, Karsten

2017-03-01

Diagrammatic extensions of dynamical mean-field theory (DMFT) such as the dynamical vertex approximation (DΓA) allow us to include nonlocal correlations beyond DMFT on all length scales and have proved their worth for model calculations. Here, we develop and implement an ab initio DΓA approach (AbinitioDΓA) for electronic structure calculations of materials. The starting point is the two-particle irreducible vertex in the two particle-hole channels, which is approximated by the bare nonlocal Coulomb interaction and all local vertex corrections. From this, we calculate the full nonlocal vertex and the nonlocal self-energy through the Bethe-Salpeter equation. The AbinitioDΓA approach naturally generates all local DMFT correlations and all nonlocal GW contributions, but also further nonlocal correlations beyond: mixed terms of the former two and nonlocal spin fluctuations. We apply this new methodology to the prototypical correlated metal SrVO3.

20. Overconfidence in Interval Estimates

ERIC Educational Resources Information Center

Soll, Jack B.; Klayman, Joshua

2004-01-01

Judges were asked to make numerical estimates (e.g., "In what year was the first flight of a hot air balloon?"). Judges provided high and low estimates such that they were X% sure that the correct answer lay between them. They exhibited substantial overconfidence: The correct answer fell inside their intervals much less than X% of the time. This…

1. Multicriteria approximation through decomposition

SciTech Connect

Burch, C.; Krumke, S.; Marathe, M.; Phillips, C.; Sundberg, E.

1998-06-01

The authors propose a general technique called solution decomposition to devise approximation algorithms with provable performance guarantees. The technique is applicable to a large class of combinatorial optimization problems that can be formulated as integer linear programs. Two key ingredients of their technique involve finding a decomposition of a fractional solution into a convex combination of feasible integral solutions and devising generic approximation algorithms based on calls to such decompositions as oracles. The technique is closely related to randomized rounding. Their method yields as corollaries unified solutions to a number of well studied problems and it provides the first approximation algorithms with provable guarantees for a number of new problems. The particular results obtained in this paper include the following: (1) the authors demonstrate how the technique can be used to provide more understanding of previous results and new algorithms for classical problems such as Multicriteria Spanning Trees, and Suitcase Packing; (2) they also show how the ideas can be extended to apply to multicriteria optimization problems, in which they wish to minimize a certain objective function subject to one or more budget constraints. As corollaries they obtain first non-trivial multicriteria approximation algorithms for problems including the k-Hurdle and the Network Inhibition problems.

2. Multicriteria approximation through decomposition

SciTech Connect

Burch, C.; Krumke, S.; Marathe, M.; Phillips, C.; Sundberg, E.

1997-12-01

The authors propose a general technique called solution decomposition to devise approximation algorithms with provable performance guarantees. The technique is applicable to a large class of combinatorial optimization problems that can be formulated as integer linear programs. Two key ingredients of the technique involve finding a decomposition of a fractional solution into a convex combination of feasible integral solutions and devising generic approximation algorithms based on calls to such decompositions as oracles. The technique is closely related to randomized rounding. The method yields as corollaries unified solutions to a number of well studied problems and it provides the first approximation algorithms with provable guarantees for a number of new problems. The particular results obtained in this paper include the following: (1) The authors demonstrate how the technique can be used to provide more understanding of previous results and new algorithms for classical problems such as Multicriteria Spanning Trees, and Suitcase Packing. (2) They show how the ideas can be extended to apply to multicriteria optimization problems, in which they wish to minimize a certain objective function subject to one or more budget constraints. As corollaries they obtain first non-trivial multicriteria approximation algorithms for problems including the k-Hurdle and the Network Inhibition problems.

3. On Stochastic Approximation.

ERIC Educational Resources Information Center

Wolff, Hans

This paper deals with a stochastic process for the approximation of the root of a regression equation. This process was first suggested by Robbins and Monro. The main result here is a necessary and sufficient condition on the iteration coefficients for convergence of the process (convergence with probability one and convergence in the quadratic…
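The Robbins-Monro process referred to above can be sketched directly. A minimal example with the classic step sizes a_n = 1/n (our illustrative choice, satisfying the conditions that the sum of a_n diverges while the sum of a_n^2 converges), finding the root of a noisy regression function g(x) = 2x - 6:

```python
import numpy as np

def robbins_monro(noisy_g, x0, n_steps=20000, seed=3):
    """Stochastic approximation x_{n+1} = x_n - a_n * G(x_n), a_n = 1/n,
    converging to the root of g(x) = E[G(x)]."""
    rng = np.random.default_rng(seed)
    x = x0
    for n in range(1, n_steps + 1):
        x -= (1.0 / n) * noisy_g(x, rng)
    return x

# noisy observations of g(x) = 2x - 6, whose root is x = 3
root = robbins_monro(lambda x, rng: 2.0 * x - 6.0 + rng.normal(0.0, 1.0), x0=0.0)
```

Even though each individual observation of g is corrupted by unit-variance noise, the shrinking step sizes average the noise away and the iterate settles near the root.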

4. Approximating Integrals Using Probability

ERIC Educational Resources Information Center

Maruszewski, Richard F., Jr.; Caudle, Kyle A.

2005-01-01

As part of a discussion on Monte Carlo methods, this article outlines how to use probability expectations to approximate the value of a definite integral. The purpose of this paper is to elaborate on this technique and then to show several examples using visual basic as a programming tool. It is an interesting method because it combines two branches of…
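The technique is one line in any language (the article uses Visual Basic; a Python equivalent with our own choice of integrand is shown here): since E[g(U)] for U ~ Uniform(0, 1) equals the integral of g over [0, 1], a sample mean approximates the integral.

```python
import numpy as np

rng = np.random.default_rng(4)
u = rng.uniform(0.0, 1.0, size=1_000_000)
estimate = np.mean(u ** 2)   # approximates the integral of x^2 over [0, 1], i.e. 1/3
```

The standard error shrinks as 1/sqrt(N), so a million draws pin the answer down to a few parts in ten thousand.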

6. High resolution time interval counter

DOEpatents

Condreva, K.J.

1994-07-26

A high resolution counter circuit measures the time interval between the occurrence of an initial and a subsequent electrical pulse to two nanoseconds resolution using an eight megahertz clock. The circuit includes a main counter for receiving electrical pulses and generating a binary word--a measure of the number of eight megahertz clock pulses occurring between the signals. A pair of first and second pulse stretchers receive the signal and generate a pair of output signals whose widths are approximately sixty-four times the time between the receipt of the signals by the respective pulse stretchers and the receipt by the respective pulse stretchers of a second subsequent clock pulse. Output signals are thereafter supplied to a pair of start and stop counters operable to generate a pair of binary output words representative of the measure of the width of the pulses to a resolution of two nanoseconds. Errors associated with the pulse stretchers are corrected by providing calibration data to both stretcher circuits, and recording start and stop counter values. Stretched initial and subsequent signals are combined with autocalibration data and supplied to an arithmetic logic unit to determine the time interval in nanoseconds between the pair of electrical pulses being measured. 3 figs.
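The arithmetic the patent describes can be mocked up to show why stretching yields roughly 2 ns resolution from an 8 MHz (125 ns) clock: counting a pulse stretched about 64 times with the same clock resolves 125/64 ≈ 1.95 ns of the original interval. A sketch with illustrative names (the real device also applies the autocalibration correction, omitted here):

```python
CLOCK_NS = 125.0   # 8 MHz clock period in nanoseconds
STRETCH = 64.0     # nominal pulse-stretcher expansion factor

def interval_ns(main_count, start_count, stop_count):
    """Combine the coarse clock count with the stretched start/stop
    vernier counts into a time interval in nanoseconds."""
    start_frac = start_count * CLOCK_NS / STRETCH  # start pulse to next clock edge
    stop_frac = stop_count * CLOCK_NS / STRETCH    # stop pulse to next clock edge
    return main_count * CLOCK_NS + start_frac - stop_frac
```

One count of a stretched pulse corresponds to 125/64 ≈ 1.95 ns of real time, matching the two-nanosecond resolution quoted in the abstract.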

7. High resolution time interval counter

DOEpatents

Condreva, Kenneth J.

1994-01-01

A high resolution counter circuit measures the time interval between the occurrence of an initial and a subsequent electrical pulse to two nanoseconds resolution using an eight megahertz clock. The circuit includes a main counter for receiving electrical pulses and generating a binary word--a measure of the number of eight megahertz clock pulses occurring between the signals. A pair of first and second pulse stretchers receive the signal and generate a pair of output signals whose widths are approximately sixty-four times the time between the receipt of the signals by the respective pulse stretchers and the receipt by the respective pulse stretchers of a second subsequent clock pulse. Output signals are thereafter supplied to a pair of start and stop counters operable to generate a pair of binary output words representative of the measure of the width of the pulses to a resolution of two nanoseconds. Errors associated with the pulse stretchers are corrected by providing calibration data to both stretcher circuits, and recording start and stop counter values. Stretched initial and subsequent signals are combined with autocalibration data and supplied to an arithmetic logic unit to determine the time interval in nanoseconds between the pair of electrical pulses being measured.

8. Optimizing the Zeldovich approximation

NASA Technical Reports Server (NTRS)

Melott, Adrian L.; Pellman, Todd F.; Shandarin, Sergei F.

1994-01-01

We have recently learned that the Zeldovich approximation can be successfully used for a far wider range of gravitational instability scenarios than formerly proposed; we study here how to extend this range. In previous work (Coles, Melott and Shandarin 1993, hereafter CMS) we studied the accuracy of several analytic approximations to gravitational clustering in the mildly nonlinear regime. We found that what we called the 'truncated Zeldovich approximation' (TZA) was better than any other (except in one case the ordinary Zeldovich approximation) over a wide range from linear to mildly nonlinear (sigma approximately 3) regimes. TZA was specified by setting Fourier amplitudes equal to zero for all wavenumbers greater than k_nl, where k_nl marks the transition to the nonlinear regime. Here, we study the cross-correlation of generalized TZA with a group of n-body simulations for three shapes of window function: sharp k-truncation (as in CMS), a top-hat in coordinate space, or a Gaussian. We also study the variation in the cross-correlation as a function of initial truncation scale within each type. We find that k-truncation, which was so much better than other things tried in CMS, is the worst of these three window shapes. We find that a Gaussian window exp(-k^2/(2 k_G^2)) applied to the initial Fourier amplitudes is the best choice. It produces a greatly improved cross-correlation in those cases which most needed improvement, e.g. those with more small-scale power in the initial conditions. The optimum choice of k_G for the Gaussian window is (a somewhat spectrum-dependent) 1 to 1.5 times k_nl. Although all three windows produce similar power spectra and density distribution functions after application of the Zeldovich approximation, the agreement of the phases of the Fourier components with the n-body simulation is better for the Gaussian window. We therefore ascribe the success of the best-choice Gaussian window to its superior treatment
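Applying the recommended Gaussian window to an initial density field is straightforward. A 2-D sketch (the study works in 3-D, and k_G would be set to roughly 1-1.5 times k_nl; the grid size and field below are illustrative):

```python
import numpy as np

def gaussian_truncate(delta, k_G, boxsize=1.0):
    """Multiply the Fourier amplitudes of a density field by
    exp(-k^2 / (2 k_G^2)) before applying the Zeldovich step."""
    n = delta.shape[0]
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=boxsize / n)
    kx, ky = np.meshgrid(k, k, indexing="ij")
    window = np.exp(-(kx ** 2 + ky ** 2) / (2.0 * k_G ** 2))
    return np.real(np.fft.ifft2(np.fft.fft2(delta) * window))

# demo on a random Gaussian field
rng = np.random.default_rng(5)
field = rng.normal(size=(64, 64))
smooth = gaussian_truncate(field, k_G=20.0)
```

The k = 0 mode is untouched (the window equals 1 there), so the mean density is preserved while small-scale power, the part that degrades the Zeldovich mapping, is suppressed smoothly rather than cut sharply.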

9. Fermion tunneling beyond semiclassical approximation

SciTech Connect

Majhi, Bibhas Ranjan

2009-02-15

Applying the Hamilton-Jacobi method beyond the semiclassical approximation prescribed in R. Banerjee and B. R. Majhi, J. High Energy Phys. 06 (2008) 095 for the scalar particle, Hawking radiation as tunneling of the Dirac particle through an event horizon is analyzed. We show that, as before, all quantum corrections in the single particle action are proportional to the usual semiclassical contribution. We also compute the modifications to the Hawking temperature and Bekenstein-Hawking entropy for the Schwarzschild black hole. Finally, the coefficient of the logarithmic correction to entropy is shown to be related with the trace anomaly.

10. Generalized Gradient Approximation Made Simple

SciTech Connect

Perdew, J.P.; Burke, K.; Ernzerhof, M.

1996-10-01

Generalized gradient approximations (GGAs) for the exchange-correlation energy improve upon the local spin density (LSD) description of atoms, molecules, and solids. We present a simple derivation of a simple GGA, in which all parameters (other than those in LSD) are fundamental constants. Only general features of the detailed construction underlying the Perdew-Wang 1991 (PW91) GGA are invoked. Improvements over PW91 include an accurate description of the linear response of the uniform electron gas, correct behavior under uniform scaling, and a smoother potential. © 1996 The American Physical Society.

11. Validity of the site-averaging approximation for modeling the dissociative chemisorption of H2 on Cu(111) surface: A quantum dynamics study on two potential energy surfaces

SciTech Connect

Liu, Tianhui; Fu, Bina; Zhang, Dong H.

2014-11-21

A new finding about the site-averaging approximation was recently reported for the dissociative chemisorption of the HCl/DCl + Au(111) surface reaction [T. Liu, B. Fu, and D. H. Zhang, J. Chem. Phys. 139, 184705 (2013); T. Liu, B. Fu, and D. H. Zhang, J. Chem. Phys. 140, 144701 (2014)]. Here, in order to investigate the dependence of the new site-averaging approximation on the initial vibrational state of H2 as well as on the PES for the dissociative chemisorption of H2 on the Cu(111) surface at normal incidence, we carried out six-dimensional quantum dynamics calculations using the initial-state-selected time-dependent wave packet approach, with H2 initially in its ground vibrational state and the first vibrational excited state. The corresponding four-dimensional site-specific dissociation probabilities are also calculated with H2 fixed at bridge, center, and top sites. These calculations are all performed on two different potential energy surfaces (PESs). It is found that the site-averaged dissociation probability over 15 fixed sites obtained from four-dimensional quantum dynamics calculations can accurately reproduce the six-dimensional dissociation probability for H2 (v = 0) and (v = 1) on both PESs.

12. Applied Routh approximation

NASA Technical Reports Server (NTRS)

Merrill, W. C.

1978-01-01

The Routh approximation technique for reducing the complexity of system models was applied in the frequency domain to a 16th-order, state-variable model of the F100 engine and to a 43rd-order, transfer-function model of a launch vehicle boost pump pressure regulator. The results motivate extending the frequency-domain formulation of the Routh method to the time domain in order to handle the state-variable formulation directly. The time-domain formulation was derived, and a characterization that specifies all possible Routh similarity transformations was given. The characterization was computed by solving two eigenvalue-eigenvector problems. The application of the time-domain Routh technique to the state-variable engine model is described, and some results are given. Additional computational problems are discussed, including an optimization procedure that can improve the approximation accuracy by taking advantage of the transformation characterization.

13. Molecular excitation energies to high-lying bound states from time-dependent density-functional response theory: Characterization and correction of the time-dependent local density approximation ionization threshold

Casida, Mark E.; Jamorski, Christine; Casida, Kim C.; Salahub, Dennis R.

1998-03-01

This paper presents an evaluation of the performance of time-dependent density-functional response theory (TD-DFRT) for the calculation of high-lying bound electronic excitation energies of molecules. TD-DFRT excitation energies are reported for a large number of states for each of four molecules: N2, CO, CH2O, and C2H4. In contrast to the good results obtained for low-lying states within the time-dependent local density approximation (TDLDA), there is a marked deterioration of the results for high-lying bound states. This is manifested as a collapse of the states above the TDLDA ionization threshold, which is at -ɛ{sub HOMO}{sup LDA} (the negative of the highest occupied molecular orbital energy in the LDA). The value -ɛ{sub HOMO}{sup LDA} is much lower than the true ionization potential because the LDA exchange-correlation potential has the wrong asymptotic behavior. For this reason, the excitation energies were also calculated using the asymptotically correct potential of van Leeuwen and Baerends (LB94) in the self-consistent field step. This was found to correct the collapse of the high-lying states that was observed with the LDA. Nevertheless, further improvement of the functional is desirable. For low-lying states the asymptotic behavior of the exchange-correlation potential is not critical and the LDA potential does remarkably well. We propose criteria delineating for which states the TDLDA can be expected to be used without serious impact from the incorrect asymptotic behavior of the LDA potential.

14. Topics in Metric Approximation

Leeb, William Edward

This thesis develops effective approximations of certain metrics that occur frequently in pure and applied mathematics. We show that distances that often arise in applications, such as the Earth Mover's Distance between two probability measures, can be approximated by easily computed formulas for a wide variety of ground distances. We develop simple and easily computed characterizations both of norms measuring a function's regularity -- such as the Lipschitz norm -- and of their duals. We are particularly concerned with the tensor product of metric spaces, where the natural notion of regularity is not the Lipschitz condition but the mixed Lipschitz condition. A theme that runs throughout this thesis is that snowflake metrics (metrics raised to a power less than 1) are often better-behaved than ordinary metrics. For example, we show that snowflake metrics on finite spaces can be approximated by the average of tree metrics with a distortion bounded by intrinsic geometric characteristics of the space and not the number of points. Many of the metrics for which we characterize the Lipschitz space and its dual are snowflake metrics. We also present applications of the characterization of certain regularity norms to the problem of recovering a matrix that has been corrupted by noise. We are able to achieve an optimal rate of recovery for certain families of matrices by exploiting the relationship between mixed-variable regularity conditions and the decay of a function's coefficients in a certain orthonormal basis.

15. Varieties of Confidence Intervals.

PubMed

Cousineau, Denis

2017-01-01

Error bars are useful to understand data and their interrelations. Here, it is shown that confidence intervals of the mean (CIMs) can be adjusted based on whether the objective is to highlight differences between measures or not and based on the experimental design (within- or between-group designs). Confidence intervals (CIs) can also be adjusted to take into account the sampling mechanisms and the population size (if not infinite). Names are proposed to distinguish the various types of CIs and the assumptions underlying them, and how to assess their validity is explained. The various CIs presented here are easily obtained from a succession of multiplicative adjustments to the basic (unadjusted) CI width. All summary results should present a measure of precision, such as CIs, as this information is complementary to effect sizes.
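
The idea of building adjusted intervals as multiplicative adjustments to the basic CI width can be sketched as follows. This is a minimal illustration, not the paper's method: the normal-approximation critical value, the sample data, and the choice of a finite-population correction as the example adjustment are all assumptions for the demo.

```python
import math
import statistics

def basic_ci(sample, crit=1.96):
    """Unadjusted CI of the mean: mean +/- crit * s / sqrt(n)."""
    n = len(sample)
    m = statistics.mean(sample)
    half = crit * statistics.stdev(sample) / math.sqrt(n)
    return (m - half, m + half)

def adjusted_ci(sample, crit=1.96, adjustments=()):
    """Apply a succession of multiplicative adjustments to the CI half-width."""
    lo, hi = basic_ci(sample, crit)
    mid, half = (lo + hi) / 2, (hi - lo) / 2
    for a in adjustments:
        half *= a
    return (mid - half, mid + half)

# Example adjustment: finite-population correction sqrt((N - n) / (N - 1))
sample = [4.1, 5.2, 4.8, 5.5, 4.9, 5.1, 4.7, 5.0]
n, N = len(sample), 50
fpc = math.sqrt((N - n) / (N - 1))
print(basic_ci(sample))
print(adjusted_ci(sample, adjustments=[fpc]))
```

Because the adjustments act only on the half-width, the interval stays centered on the sample mean while narrowing (or widening) as each factor is applied.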

16. New approach to description of (d,xn) spectra at energies below 50 MeV in Monte Carlo simulation by intra-nuclear cascade code with Distorted Wave Born Approximation

Hashimoto, S.; Iwamoto, Y.; Sato, T.; Niita, K.; Boudard, A.; Cugnon, J.; David, J.-C.; Leray, S.; Mancusi, D.

2014-08-01

A new approach to describing neutron spectra of deuteron-induced reactions in the Monte Carlo simulation for particle transport has been developed by combining the Intra-Nuclear Cascade of Liège (INCL) and the Distorted Wave Born Approximation (DWBA) calculation. We incorporated this combined method into the Particle and Heavy Ion Transport code System (PHITS) and applied it to estimate (d,xn) spectra on natLi, 9Be, and natC targets at incident energies ranging from 10 to 40 MeV. Double differential cross sections obtained by INCL and DWBA successfully reproduced broad peaks and discrete peaks, respectively, at the same energies as those observed in experimental data. Furthermore, excellent agreement was observed between experimental data and PHITS-derived results using the combined method for thick-target neutron yields over a wide range of neutron emission angles. We also applied the new method to estimate (d,xp) spectra in these reactions, and discussed its validity for proton emission spectra.

17. Approximate option pricing

SciTech Connect

Chalasani, P.; Saias, I.; Jha, S.

1996-04-08

As increasingly large volumes of sophisticated options (called derivative securities) are traded in world financial markets, determining a fair price for these options has become an important and difficult computational problem. Many valuation codes use the binomial pricing model, in which the stock price is driven by a random walk. In this model, the value of an n-period option on a stock is the expected time-discounted value of the future cash flow on an n-period stock price path. Path-dependent options are particularly difficult to value since the future cash flow depends on the entire stock price path rather than on just the final stock price. Currently such options are approximately priced by Monte Carlo methods with error bounds that hold only with high probability and which are reduced by increasing the number of simulation runs. In this paper the authors show that pricing an arbitrary path-dependent option is #P-hard. They show that certain types of path-dependent options can be valued exactly in polynomial time. Asian options are path-dependent options that are particularly hard to price, and for these they design deterministic polynomial-time approximate algorithms. They show that the value of a perpetual American put option (which can be computed in constant time) is in many cases a good approximation to the value of an otherwise identical n-period American put option. In contrast to Monte Carlo methods, the algorithms have guaranteed error bounds that are polynomially small (and in some cases exponentially small) in the maturity n. For the error analysis they derive large-deviation results for random walks that may be of independent interest.
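
The binomial pricing model the abstract starts from is easy to sketch: under the risk-neutral probability, an n-period option value is the expected discounted payoff over a recombining stock-price tree. Below is a minimal backward-induction sketch for a European put; it is not the authors' approximation algorithms, and the parameter values are illustrative assumptions.

```python
def binomial_put(S0, K, r, u, d, n):
    """Price an n-period European put in the binomial model by backward
    induction: q is the risk-neutral up probability, and each step
    discounts the expected value of the two successor nodes."""
    q = (1 + r - d) / (u - d)
    # Payoffs at maturity, indexed by the number of up-moves j
    values = [max(K - S0 * u**j * d**(n - j), 0.0) for j in range(n + 1)]
    for step in range(n, 0, -1):
        values = [(q * values[j + 1] + (1 - q) * values[j]) / (1 + r)
                  for j in range(step)]
    return values[0]

price = binomial_put(S0=100, K=100, r=0.01, u=1.1, d=0.9, n=3)
print(round(price, 4))
```

A path-dependent (e.g. Asian) option would instead need the payoff evaluated on each of the 2^n paths, which is exactly where the hardness results above bite.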

18. Beyond the Kirchhoff approximation

NASA Technical Reports Server (NTRS)

Rodriguez, Ernesto

1989-01-01

The three most successful models for describing scattering from random rough surfaces are the Kirchhoff approximation (KA), the small-perturbation method (SPM), and the two-scale-roughness (or composite roughness) surface-scattering (TSR) models. In this paper it is shown how these three models can be derived rigorously from one perturbation expansion based on the extinction theorem for scalar waves scattering from a perfectly rigid surface. It is also shown how corrections to the KA proportional to the surface curvature and higher-order derivatives may be obtained. Using these results, the scattering cross section is derived for various surface models.

19. Multichannel interval timer (MINT)

SciTech Connect

Kimball, K.B.

1982-06-01

A prototype Multichannel INterval Timer (MINT) has been built for measuring signal Time of Arrival (TOA) from sensors placed in blast environments. The MINT is intended to reduce the space, equipment costs, and data reduction efforts associated with traditional analog TOA recording methods, making it more practical to field the large arrays of TOA sensors required to characterize blast environments. This document describes the MINT design features, provides the information required for installing and operating the system, and presents proposed improvements for the next generation system.

20. Empirical Bayes interval estimates that are conditionally equal to unadjusted confidence intervals or to default prior credibility intervals.

PubMed

Bickel, David R

2012-02-21

Problems involving thousands of null hypotheses have been addressed by estimating the local false discovery rate (LFDR). A previous LFDR approach to reporting point and interval estimates of an effect-size parameter uses an estimate of the prior distribution of the parameter conditional on the alternative hypothesis. That estimated prior is often unreliable, and yet strongly influences the posterior intervals and point estimates, causing the posterior intervals to differ from fixed-parameter confidence intervals, even for arbitrarily small estimates of the LFDR. That influence of the estimated prior manifests the failure of the conditional posterior intervals, given the truth of the alternative hypothesis, to match the confidence intervals. Those problems are overcome by changing the posterior distribution conditional on the alternative hypothesis from a Bayesian posterior to a confidence posterior. Unlike the Bayesian posterior, the confidence posterior equates the posterior probability that the parameter lies in a fixed interval with the coverage rate of the coinciding confidence interval. The resulting confidence-Bayes hybrid posterior supplies interval and point estimates that shrink toward the null hypothesis value. The confidence intervals tend to be much shorter than their fixed-parameter counterparts, as illustrated with gene expression data. Simulations nonetheless confirm that the shrunken confidence intervals cover the parameter more frequently than stated. Generally applicable sufficient conditions for correct coverage are given. In addition to having those frequentist properties, the hybrid posterior can also be motivated from an objective Bayesian perspective by requiring coherence with some default prior conditional on the alternative hypothesis. That requirement generates a new class of approximate posteriors that supplement Bayes factors modified for improper priors and that dampen the influence of proper priors on the credibility intervals. While

1. Interval probabilistic neural network.

PubMed

Kowalski, Piotr A; Kulczycki, Piotr

2017-01-01

Automated classification systems have allowed for the rapid development of exploratory data analysis. Such systems increase the independence of human intervention in obtaining the analysis results, especially when inaccurate information is under consideration. The aim of this paper is to present a novel neural-network approach for classifying interval information. The presented methodology is a generalization of the probabilistic neural network to interval data processing. The simple structure of this neural classification algorithm makes it applicable for research purposes. The procedure is based on the Bayes approach, ensuring minimal expected losses arising from classification errors. In this article, the topological structure of the network and the learning process are described in detail. Of note, the correctness of the procedure proposed here has been verified by way of numerical tests. These tests include examples of both synthetic data and benchmark instances. The results of numerical verification, carried out for different shapes of data sets, as well as a comparative analysis with other methods of similar conditioning, have validated both the concept presented here and its positive features.
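
For orientation, the classical point-valued probabilistic neural network that the paper generalizes can be sketched in a few lines: one Gaussian-kernel density estimate per class, then a Bayes-style argmax. The training data, bandwidth, and function names below are illustrative assumptions, and the paper's interval-data handling is not reproduced here.

```python
import math

def pnn_classify(x, train, sigma=0.5):
    """Classic probabilistic neural network: for each class, average
    Gaussian kernels centred on that class's training points, then pick
    the class with the largest density estimate (Bayes rule with equal
    priors and losses)."""
    scores = {}
    for label, points in train.items():
        scores[label] = sum(
            math.exp(-sum((a - b) ** 2 for a, b in zip(x, p)) / (2 * sigma**2))
            for p in points
        ) / len(points)
    return max(scores, key=scores.get)

train = {
    "A": [(0.0, 0.0), (0.2, 0.1), (0.1, 0.3)],
    "B": [(1.0, 1.0), (0.9, 1.2), (1.1, 0.8)],
}
print(pnn_classify((0.15, 0.2), train))
```

The interval generalization in the paper replaces the point kernels with quantities integrated over the interval-valued observations; the overall argmax structure stays the same.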

2. Hierarchical Approximate Bayesian Computation

PubMed Central

Turner, Brandon M.; Van Zandt, Trisha

2013-01-01

Approximate Bayesian computation (ABC) is a powerful technique for estimating the posterior distribution of a model’s parameters. It is especially important when the model to be fit has no explicit likelihood function, which happens for computational (or simulation-based) models such as those that are popular in cognitive neuroscience and other areas in psychology. However, ABC is usually applied only to models with few parameters. Extending ABC to hierarchical models has been difficult because high-dimensional hierarchical models add computational complexity that conventional ABC cannot accommodate. In this paper we summarize some current approaches for performing hierarchical ABC and introduce a new algorithm called Gibbs ABC. This new algorithm incorporates well-known Bayesian techniques to improve the accuracy and efficiency of the ABC approach for estimation of hierarchical models. We then use the Gibbs ABC algorithm to estimate the parameters of two models of signal detection, one with and one without a tractable likelihood function. PMID:24297436
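
For context, the plain rejection form of ABC that hierarchical schemes such as Gibbs ABC build on can be sketched as follows. This is a toy non-hierarchical example; the uniform prior, Gaussian model, and tolerance are all assumed for the illustration and are not from the paper.

```python
import random
import statistics

def rejection_abc(observed, prior_sample, simulate, distance, eps, n_draws=5000):
    """Plain rejection ABC: draw theta from the prior, simulate data under
    theta, and keep theta whenever the simulated summary statistic lands
    within eps of the observed one."""
    accepted = []
    for _ in range(n_draws):
        theta = prior_sample()
        if distance(simulate(theta), observed) < eps:
            accepted.append(theta)
    return accepted

random.seed(0)
true_theta = 2.0
# Observed summary: mean of 30 draws from Normal(true_theta, 1)
observed_mean = statistics.mean(random.gauss(true_theta, 1) for _ in range(30))

post = rejection_abc(
    observed=observed_mean,
    prior_sample=lambda: random.uniform(-5, 5),
    simulate=lambda t: statistics.mean(random.gauss(t, 1) for _ in range(30)),
    distance=lambda a, b: abs(a - b),
    eps=0.1,
)
print(len(post), round(statistics.mean(post), 2))
```

The accepted draws approximate the posterior without ever evaluating a likelihood, which is the property that makes ABC attractive for simulation-based models; the hierarchical extensions in the paper address the poor acceptance rates this recipe suffers in high dimensions.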

3. Approximate strip exchanging.

PubMed

Roy, Swapnoneel; Thakur, Ashok Kumar

2008-01-01

Genome rearrangements have been modelled by a variety of primitives such as reversals, transpositions, block moves and block interchanges. We consider one such primitive, strip exchanges. A strip-exchanging move interchanges the positions of two chosen strips so that they merge with other strips; the strip exchange problem is to sort a given permutation using the minimum number of such moves. We present the first non-trivial 2-approximation algorithm for this problem. We also observe that sorting by strip exchanges is fixed-parameter tractable. Lastly, we discuss an application of strip exchanges in a different area, Optical Character Recognition (OCR), with an example.

4. Cosmological applications of Padé approximant

SciTech Connect

Wei, Hao; Yan, Xiao-Peng; Zhou, Ya-Nan E-mail: 764644314@qq.com

2014-01-01

As is well known in mathematics, functions can be approximated by the Padé approximant, the best approximation of a function by a rational function of given order. In fact, the Padé approximant often gives a better approximation of the function than truncating its Taylor series, and it may still work where the Taylor series does not converge. In the present work, we consider the Padé approximant in two contexts. First, we obtain an analytical approximation of the luminosity distance for the flat XCDM model, and find that the relative error is fairly small. Second, we propose several parameterizations for the equation-of-state parameter (EoS) of dark energy based on the Padé approximant. They are well motivated from the mathematical and physical points of view. We confront these EoS parameterizations with the latest observational data, and find that they work well. Through these exercises, we show that the Padé approximant can be a useful tool in cosmology, and it deserves further investigation.
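
As a small illustration of the general point, a [1/1] Padé approximant built from the same three Taylor coefficients can beat the truncated Taylor series, even outside the series' radius of convergence. The ln(1+x) example below is an assumed illustration for the demo, not one of the paper's cosmological fits.

```python
import math

def pade_1_1(c0, c1, c2):
    """[1/1] Pade approximant (a0 + a1*x)/(1 + b1*x) matched to the Taylor
    expansion c0 + c1*x + c2*x**2 (requires c1 != 0). Matching powers of x
    gives b1 = -c2/c1, a0 = c0, a1 = c1 + b1*c0."""
    b1 = -c2 / c1
    a0, a1 = c0, c1 + b1 * c0
    return lambda x: (a0 + a1 * x) / (1 + b1 * x)

# ln(1+x) has Taylor series x - x**2/2 + ... with radius of convergence 1
f = pade_1_1(0.0, 1.0, -0.5)
x = 3.0                        # outside the Taylor radius
taylor2 = x - x**2 / 2         # truncated Taylor series from the same data
print(f(x), taylor2, math.log(1 + x))
```

At x = 3 the truncated Taylor series gives a negative value while the Padé approximant x/(1 + x/2) stays close to ln 4, which is precisely the "may still work where the Taylor series does not converge" behavior the abstract describes.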

5. Metabolic response of different high-intensity aerobic interval exercise protocols.

PubMed

Gosselin, Luc E; Kozlowski, Karl F; DeVinney-Boymel, Lee; Hambridge, Caitlin

2012-10-01

Although high-intensity sprint interval training (SIT) employing the Wingate protocol results in significant physiological adaptations, it is conducted at supramaximal intensity and is potentially unsafe for sedentary middle-aged adults. We therefore evaluated the metabolic and cardiovascular response in healthy young individuals performing 4 high-intensity (~90% VO2max) aerobic interval training (HIT) protocols with similar total work output but different work-to-rest ratio. Eight young physically active subjects participated in 5 different bouts of exercise over a 3-week period. Protocol 1 consisted of 20-minute continuous exercise at approximately 70% of VO2max, whereas protocols 2-5 were interval based with a work-active rest duration (in seconds) of 30/30, 60/30, 90/30, and 60/60, respectively. Each interval protocol resulted in approximately 10 minutes of exercise at a workload corresponding to approximately 90% VO2max, but differed in the total rest duration. The 90/30 HIT protocol resulted in the highest VO2, HR, rating of perceived exertion, and blood lactate, whereas the 30/30 protocol resulted in the lowest of these parameters. The total caloric energy expenditure was lowest in the 90/30 and 60/30 protocols (~150 kcal), whereas the other 3 protocols did not differ (~195 kcal) from one another. The immediate postexercise blood pressure response was similar across all the protocols. These findings indicate that HIT performed at approximately 90% of VO2max is no more physiologically taxing than is steady-state exercise conducted at 70% VO2max, but the response during HIT is influenced by the work-to-rest ratio. This interval protocol may be used as an alternative approach to steady-state exercise training but with less time commitment.

6. Approximated integrability of the Dicke model

Relaño, A.; Bastarrachea-Magnani, M. A.; Lerma-Hernández, S.

2016-12-01

A very approximate second integral of motion of the Dicke model is identified within a broad energy region above the ground state, and for a wide range of values of the external parameters. This second integral, obtained from a Born-Oppenheimer approximation, classifies the whole regular part of the spectrum in bands, coming from different semi-classical energy surfaces, and labelled by their corresponding eigenvalues. Results obtained from this approximation are compared with exact numerical diagonalization for finite systems in the superradiant phase, with remarkable accord. The region of validity of our approach in the parameter space, which includes the resonant case, is unveiled. The energy range of validity goes from the ground state up to a certain upper energy where chaos sets in, and extends far beyond the range of applicability of a simple harmonic approximation around the minimal energy configuration. The upper energy validity limit increases for larger values of the coupling constant and the ratio between the level splitting and the frequency of the field. These results show that the Dicke model behaves like a two-degree-of-freedom integrable model for a wide range of energies and values of the external parameters.

7. Hybrid Approximate Message Passing

Rangan, Sundeep; Fletcher, Alyson K.; Goyal, Vivek K.; Byrne, Evan; Schniter, Philip

2017-09-01

The standard linear regression (SLR) problem is to recover a vector $\mathbf{x}^0$ from noisy linear observations $\mathbf{y}=\mathbf{Ax}^0+\mathbf{w}$. The approximate message passing (AMP) algorithm recently proposed by Donoho, Maleki, and Montanari is a computationally efficient iterative approach to SLR that has a remarkable property: for large i.i.d. sub-Gaussian matrices $\mathbf{A}$, its per-iteration behavior is rigorously characterized by a scalar state-evolution whose fixed points, when unique, are Bayes optimal. AMP, however, is fragile in that even small deviations from the i.i.d. sub-Gaussian model can cause the algorithm to diverge. This paper considers a "vector AMP" (VAMP) algorithm and shows that VAMP has a rigorous scalar state-evolution that holds under a much broader class of large random matrices $\mathbf{A}$: those that are right-rotationally invariant. After performing an initial singular value decomposition (SVD) of $\mathbf{A}$, the per-iteration complexity of VAMP can be made similar to that of AMP. In addition, the fixed points of VAMP's state evolution are consistent with the replica prediction of the minimum mean-squared error recently derived by Tulino, Caire, Verdú, and Shamai. The effectiveness and state evolution predictions of VAMP are confirmed in numerical experiments.

8. Countably QC-Approximating Posets

PubMed Central

Mao, Xuxin; Xu, Luoshan

2014-01-01

As a generalization of countably C-approximating posets, the concept of countably QC-approximating posets is introduced. With the countably QC-approximating property, some characterizations of generalized completely distributive lattices and generalized countably approximating posets are given. The main results are as follows: (1) a complete lattice is generalized completely distributive if and only if it is countably QC-approximating and weakly generalized countably approximating; (2) a poset L having countably directed joins is generalized countably approximating if and only if the lattice σc(L)op of all σ-Scott-closed subsets of L is weakly generalized countably approximating. PMID:25165730

9. Exponential Approximations Using Fourier Series Partial Sums

NASA Technical Reports Server (NTRS)

Banerjee, Nana S.; Geer, James F.

1997-01-01

The problem of accurately reconstructing a piece-wise smooth, 2(pi)-periodic function f and its first few derivatives, given only a truncated Fourier series representation of f, is studied and solved. The reconstruction process is divided into two steps. In the first step, the first 2N + 1 Fourier coefficients of f are used to approximate the locations and magnitudes of the discontinuities in f and its first M derivatives. This is accomplished by first finding initial estimates of these quantities based on certain properties of Gibbs phenomenon, and then refining these estimates by fitting the asymptotic form of the Fourier coefficients to the given coefficients using a least-squares approach. It is conjectured that the locations of the singularities are approximated to within O(N(sup -M-2)), and the associated jump of the k(sup th) derivative of f is approximated to within O(N(sup -M-1+k)), as N approaches infinity, and the method is robust. These estimates are then used with a class of singular basis functions, which have certain 'built-in' singularities, to construct a new sequence of approximations to f. Each of these new approximations is the sum of a piecewise smooth function and a new Fourier series partial sum. When N is proportional to M, it is shown that these new approximations, and their derivatives, converge exponentially in the maximum norm to f, and its corresponding derivatives, except in the union of a finite number of small open intervals containing the points of singularity of f. The total measure of these intervals decreases exponentially to zero as M approaches infinity. The technique is illustrated with several examples.
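
The Gibbs phenomenon that this reconstruction exploits is easy to reproduce: a Fourier partial sum of a discontinuous function converges away from the jump but overshoots by roughly 9% near it, no matter how many terms are kept. A minimal sketch with the square wave (illustrative only, not the authors' reconstruction algorithm):

```python
import math

def square_wave_partial_sum(x, N):
    """Fourier partial sum of the 2*pi-periodic square wave sign(sin x):
    the sum of (4/pi) * sin((2k+1)*x) / (2k+1) over the first N odd terms."""
    return sum(4 / math.pi * math.sin((2 * k + 1) * x) / (2 * k + 1)
               for k in range(N))

# Away from the jump at x = 0 the partial sum converges to 1 ...
print(square_wave_partial_sum(math.pi / 2, 200))
# ... but just past the jump it overshoots (Gibbs phenomenon)
print(max(square_wave_partial_sum(x / 1000 * math.pi, 200)
          for x in range(1, 100)))
```

The overshoot's location moves toward the jump as N grows, but its height does not shrink; that persistent asymptotic signature near each singularity is what the least-squares fitting step above uses to locate and measure the jumps.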

10. Energy.

ERIC Educational Resources Information Center

Online-Offline, 1998

1998-01-01

This issue focuses on the theme of "Energy," and describes several educational resources (Web sites, CD-ROMs and software, videos, books, activities, and other resources). Sidebars offer features on alternative energy, animal energy, internal combustion engines, and energy from food. Subthemes include harnessing energy, human energy, and…

12. Elucidating Potential Energy Surfaces for Singlet O2 Reactions with Protonated, Deprotonated, and Di-Deprotonated Cystine Using a Combination of Approximately Spin-Projected Density Functional Theory and Guided-Ion-Beam Mass Spectrometry.

PubMed

Lu, Wenchao; Tsai, I-Hsien Midas; Sun, Yan; Zhou, Wenjing; Liu, Jianbo

2017-08-24

The reactivity of cystine toward electronically excited singlet O2 (a(1)Δg) has been long debated, despite the fact that most organic disulfides are susceptible to oxidation by singlet O2. We report a combined experimental and computational study on reactions of singlet O2 with gas-phase cystine at different ionization and hydration states, aimed to determine reaction outcomes, mechanisms, and potential energy surfaces (PESs). Ion-molecule collisions of protonated and di-deprotonated cystine ions with singlet O2, in both the absence and the presence of a water ligand, were measured over a center-of-mass collision energy (Ecol) range from 0.1 to 1.0 eV, using a guided-ion-beam scattering tandem mass spectrometer. No oxidation was observed for these reactant ions except collision-induced dissociation at high energies. Guided by density functional theory (DFT)-calculated PESs, reaction coordinates were established to unravel the origin of the nonreactivity of cystine ions toward singlet O2. To account for mixed open- and closed-shell characters, singlet O2 and critical structures along reaction coordinates were evaluated using broken-symmetry, open-shell DFT with spin contamination errors removed by an approximate spin-projection method. It was found that collision of protonated cystine with singlet O2 follows a repulsive potential surface and possesses no chemically significant interaction and that collision-induced dissociation of protonated cystine is dominated by loss of water and CO. Collision of di-deprotonated cystine with singlet O2, on the other hand, forms a short-lived electrostatically bonded precursor complex at low Ecol. The latter may evolve to a covalently bonded persulfoxide, but the conversion is blocked by an activation barrier lying 0.39 eV above reactants. At high Ecol, C-S bond cleavage dominates the collision-induced dissociation of di-deprotonated cystine, leading to charge-separated fragmentation. Cross section for the ensuing fragment ion H2

13. Approximating maximum clique with a Hopfield network.

PubMed

Jagota, A

1995-01-01

In a graph, a clique is a set of vertices such that every pair is connected by an edge. MAX-CLIQUE is the optimization problem of finding the largest clique in a given graph and is NP-hard, even to approximate well. Several real-world and theory problems can be modeled as MAX-CLIQUE. In this paper, we efficiently approximate MAX-CLIQUE in a special case of the Hopfield network whose stable states are maximal cliques. We present several energy-descent optimizing dynamics; both discrete (deterministic and stochastic) and continuous. One of these emulates, as special cases, two well-known greedy algorithms for approximating MAX-CLIQUE. We report on detailed empirical comparisons on random graphs and on harder ones. Mean-field annealing, an efficient approximation to simulated annealing, and a stochastic dynamics are the narrow but clear winners. All dynamics approximate much better than one which emulates a "naive" greedy heuristic.
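
The "naive" greedy heuristic mentioned at the end can be sketched in a few lines: repeatedly pick the highest-degree vertex that is compatible with the clique built so far. This is an illustrative baseline on a made-up graph, not the Hopfield dynamics from the paper.

```python
def greedy_clique(adj):
    """Greedy MAX-CLIQUE heuristic: repeatedly add the highest-degree
    candidate vertex, then keep only candidates adjacent to everything
    chosen so far, until no candidates remain."""
    clique = set()
    candidates = set(adj)
    while candidates:
        v = max(candidates, key=lambda u: len(adj[u]))
        clique.add(v)
        candidates = {u for u in candidates if u != v and u in adj[v]}
    return clique

# A 4-clique {0, 1, 2, 3} with a pendant path 3-4-5 attached
adj = {
    0: {1, 2, 3}, 1: {0, 2, 3}, 2: {0, 1, 3},
    3: {0, 1, 2, 4}, 4: {3, 5}, 5: {4},
}
print(greedy_clique(adj))
```

One of the discrete energy-descent dynamics in the paper emulates exactly this kind of greedy growth as a special case; the stochastic and mean-field variants then trade extra computation for escaping its poor local choices.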

14. Fast approximate stochastic tractography.

PubMed

Iglesias, Juan Eugenio; Thompson, Paul M; Liu, Cheng-Yi; Tu, Zhuowen

2012-01-01

Many different probabilistic tractography methods have been proposed in the literature to overcome the limitations of classical deterministic tractography: (i) lack of quantitative connectivity information; and (ii) robustness to noise, partial volume effects and selection of seed region. However, these methods rely on Monte Carlo sampling techniques that are computationally very demanding. This study presents an approximate stochastic tractography algorithm (FAST) that can be used interactively, as opposed to having to wait several minutes to obtain the output after marking a seed region. In FAST, tractography is formulated as a Markov chain that relies on a transition tensor. The tensor is designed to mimic the features of a well-known probabilistic tractography method based on a random walk model and Monte-Carlo sampling, but can also accommodate other propagation rules. Compared to the baseline algorithm, our method circumvents the sampling process and provides a deterministic solution at the expense of partially sacrificing sub-voxel accuracy. Therefore, the method is strictly speaking not stochastic, but provides a probabilistic output in the spirit of stochastic tractography methods. FAST was compared with the random walk model using real data from 10 patients in two different ways: 1. the probability maps produced by the two methods on five well-known fiber tracts were directly compared using metrics from the image registration literature; and 2. the connectivity measurements between different regions of the brain given by the two methods were compared using the correlation coefficient ρ. The results show that the connectivity measures provided by the two algorithms are well-correlated (ρ = 0.83), and so are the probability maps (normalized cross correlation 0.818 ± 0.081). The maps are also qualitatively (i.e., visually) very similar. The proposed method achieves a 60x speed-up (7 s vs. 7 min) over the Monte Carlo sampling scheme, therefore

15. Volatility return intervals analysis of the Japanese market

Jung, W.-S.; Wang, F. Z.; Havlin, S.; Kaizoji, T.; Moon, H.-T.; Stanley, H. E.

2008-03-01

We investigate scaling and memory effects in return intervals between price volatilities above a certain threshold q for the Japanese stock market using daily and intraday data sets. We find that the distribution of return intervals can be approximated by a scaling function that depends only on the ratio between the return interval τ and its mean <τ>. We also find memory effects such that a large (or small) return interval follows a large (or small) interval by investigating the conditional distribution and mean return interval. The results are similar to previous studies of other markets and indicate that similar statistical features appear in different financial markets. We also compare our results between the period before and after the big crash at the end of 1989. We find that scaling and memory effects of the return intervals show similar features although the statistical properties of the returns are different.
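
Computing return intervals from a volatility series is straightforward: record the times at which the series exceeds the threshold q and take differences; the scaling analysis then studies the distribution of τ/<τ>. A minimal sketch on made-up data (the series and threshold are illustrative assumptions):

```python
import statistics

def return_intervals(series, q):
    """Return intervals: the gaps (in samples) between successive times
    the series exceeds the threshold q."""
    hits = [i for i, v in enumerate(series) if v > q]
    return [b - a for a, b in zip(hits, hits[1:])]

series = [0.1, 1.5, 0.3, 0.2, 1.7, 0.4, 1.6, 0.1, 0.2, 1.8]
taus = return_intervals(series, q=1.0)
mean_tau = statistics.mean(taus)
# Scaling analyses study tau / <tau> rather than tau itself
print(taus, [t / mean_tau for t in taus])
```

Memory effects of the kind reported above would show up as correlation between consecutive entries of `taus`, i.e. a long interval tending to follow a long one.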

16. Heart rate dependency of JT interval sections.

PubMed

Hnatkova, Katerina; Johannesen, Lars; Vicente, Jose; Malik, Marek

2017-08-09

Little experience exists with the heart rate correction of J-Tpeak and Tpeak-Tend intervals. In a population of 176 female and 176 male healthy subjects aged 32.3±9.8 and 33.1±8.4 years, respectively, curvilinear and linear relationships to heart rate were investigated for different sections of the JT interval defined by the proportions of the area under the vector magnitude of the reconstructed 3D vectorcardiographic loop. The duration of the JT sub-section between approximately just before the T peak and almost the T end was found heart rate independent. Most of the JT heart rate dependency relates to the beginning of the interval. The duration of the terminal T wave tail is only weakly heart rate dependent. The Tpeak-Tend interval is only minimally heart rate dependent and, in studies not showing substantial heart rate changes, does not need to be heart rate corrected. For any correction formula that has linear additive properties, heart rate correction of JT and JTpeak intervals is practically the same as of the QT interval. However, this does not apply to formulas of the form Int/RR{sup a}, since they do not have linear additive properties. Copyright © 2017 Elsevier Inc. All rights reserved.

17. IONIS: Approximate atomic photoionization intensities

Heinäsmäki, Sami

2012-02-01

A program to compute relative atomic photoionization cross sections is presented. The code applies the output of the multiconfiguration Dirac-Fock method for atoms in the single active electron scheme, by computing the overlap of the bound electron states in the initial and final states. The contribution from the single-particle ionization matrix elements is assumed to be the same for each final state. This method gives rather accurate relative ionization probabilities provided the single-electron ionization matrix elements do not depend strongly on energy in the region considered. The method is especially suited for open shell atoms where electronic correlation in the ionic states is large. Program summary: Program title: IONIS. Catalogue identifier: AEKK_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEKK_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html. No. of lines in distributed program, including test data, etc.: 1149. No. of bytes in distributed program, including test data, etc.: 12 877. Distribution format: tar.gz. Programming language: Fortran 95. Computer: Workstations. Operating system: GNU/Linux, Unix. Classification: 2.2, 2.5. Nature of problem: Photoionization intensities for atoms. Solution method: The code applies the output of the multiconfiguration Dirac-Fock codes Grasp92 [1] or Grasp2K [2], to compute approximate photoionization intensities. The intensity is computed within the one-electron transition approximation and by assuming that the sum of the single-particle ionization probabilities is the same for all final ionic states. Restrictions: The program gives nonzero intensities for those transitions where only one electron is removed from the initial configuration(s). Shake-type many-electron transitions are not computed. The ionized shell must be closed in the initial state. Running time: Few seconds for a

18. DALI: Derivative Approximation for LIkelihoods

Sellentin, Elena

2015-07-01

DALI (Derivative Approximation for LIkelihoods) is a fast approximation of non-Gaussian likelihoods. It extends the Fisher Matrix in a straightforward way and allows for a wider range of posterior shapes. The code is written in C/C++.

19. Genetic analyses of a seasonal interval timer.

PubMed

Prendergast, Brian J; Renstrom, Randall A; Nelson, Randy J

2004-08-01

Seasonal clocks (e.g., circannual clocks, seasonal interval timers) permit anticipation of regularly occurring environmental events by timing the onset of seasonal transitions in reproduction, metabolism, and behavior. Implicit in the concept that seasonal clocks reflect adaptations to the local environment is the unexamined assumption that heritable genetic variance exists in the critical features of such clocks, namely, their temporal properties. These experiments quantified the intraspecific variance in, and heritability of, the photorefractoriness interval timer in Siberian hamsters (Phodopus sungorus), a seasonal clock that provides temporal information to mechanisms that regulate seasonal transitions in body weight. Twenty-seven families consisting of 54 parents and 109 offspring were raised in a long-day photoperiod and transferred as adults to an inhibitory photoperiod (continuous darkness; DD). Weekly body weight measurements permitted specification of the interval of responsiveness to DD, a reflection of the duration of the interval timer, in each individual. Body weights of males and females decreased after exposure to DD, but 3 to 5 months later, somatic recrudescence occurred, indicative of photorefractoriness to DD. The interval timer was approximately 5 weeks longer and twice as variable in females relative to males. Analyses of variance of full siblings revealed an overall intraclass correlation of 0.71 +/- 0.04 (0.51 +/- 0.10 for male offspring and 0.80 +/- 0.06 for female offspring), suggesting a significant family resemblance in the duration of interval timers. Parent-offspring regression analyses yielded an overall heritability estimate of 0.61 +/- 0.2; h(2) estimates from parent-offspring regression analyses were significant for female offspring (0.91 +/- 0.4) but not for male offspring (0.35 +/- 0.2), indicating strong additive genetic components for this trait, primarily in females. In nature, individual differences, both within and between
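The parent-offspring regression estimate of heritability can be sketched on simulated data (synthetic standardized values with an assumed h² of 0.6, not the hamster measurements from the study):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
h2 = 0.6                                   # assumed true heritability
midparent = rng.normal(0.0, 1.0, n)        # standardized midparent timer durations
offspring = h2 * midparent + rng.normal(0.0, np.sqrt(1.0 - h2**2), n)

# The slope of the offspring-on-midparent regression estimates h^2
slope = np.polyfit(midparent, offspring, 1)[0]
print(round(slope, 2))                     # close to the assumed 0.6
```

With single-parent rather than midparent values, the expected slope is h²/2, which is why studies of this kind report which regression was used.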

20. Intervality and coherence in complex networks

Domínguez-García, Virginia; Johnson, Samuel; Muñoz, Miguel A.

2016-06-01

Food webs—networks of predators and prey—have long been known to exhibit "intervality": species can generally be ordered along a single axis in such a way that the prey of any given predator tend to lie on unbroken compact intervals. Although the meaning of this axis—usually identified with a "niche" dimension—has remained a mystery, it is assumed to lie at the basis of the highly non-trivial structure of food webs. With this in mind, most trophic network modelling has for decades been based on assigning species a niche value by hand. However, we argue here that intervality should not be considered the cause but rather a consequence of food-web structure. First, analysing a set of 46 empirical food webs, we find that they also exhibit predator intervality: the predators of any given species are as likely to be contiguous as the prey are, but in a different ordering. Furthermore, this property is not exclusive of trophic networks: several networks of genes, neurons, metabolites, cellular machines, airports, and words are found to be approximately as interval as food webs. We go on to show that a simple model of food-web assembly which does not make use of a niche axis can nevertheless generate significant intervality. Therefore, the niche dimension (in the sense used for food-web modelling) could in fact be the consequence of other, more fundamental structural traits. We conclude that a new approach to food-web modelling is required for a deeper understanding of ecosystem assembly, structure, and function, and propose that certain topological features thought to be specific of food webs are in fact common to many complex networks.

1. Intervality and coherence in complex networks.

PubMed

Domínguez-García, Virginia; Johnson, Samuel; Muñoz, Miguel A

2016-06-01

Food webs-networks of predators and prey-have long been known to exhibit "intervality": species can generally be ordered along a single axis in such a way that the prey of any given predator tend to lie on unbroken compact intervals. Although the meaning of this axis-usually identified with a "niche" dimension-has remained a mystery, it is assumed to lie at the basis of the highly non-trivial structure of food webs. With this in mind, most trophic network modelling has for decades been based on assigning species a niche value by hand. However, we argue here that intervality should not be considered the cause but rather a consequence of food-web structure. First, analysing a set of 46 empirical food webs, we find that they also exhibit predator intervality: the predators of any given species are as likely to be contiguous as the prey are, but in a different ordering. Furthermore, this property is not exclusive of trophic networks: several networks of genes, neurons, metabolites, cellular machines, airports, and words are found to be approximately as interval as food webs. We go on to show that a simple model of food-web assembly which does not make use of a niche axis can nevertheless generate significant intervality. Therefore, the niche dimension (in the sense used for food-web modelling) could in fact be the consequence of other, more fundamental structural traits. We conclude that a new approach to food-web modelling is required for a deeper understanding of ecosystem assembly, structure, and function, and propose that certain topological features thought to be specific of food webs are in fact common to many complex networks.
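The intervality test itself is simple once an ordering is fixed (a toy, hand-ordered five-species web; the analyses above instead search over orderings of empirical webs):

```python
def is_interval(prey_indices):
    """True if the prey indices form one unbroken run along the ordering."""
    s = sorted(prey_indices)
    return s == list(range(s[0], s[-1] + 1))

# Toy web: five species (0..4) hand-ordered along a putative niche axis;
# each predator maps to the indices of its prey under that ordering.
web = {2: [0, 1], 3: [0, 1, 2], 4: [2, 3]}
print(all(is_interval(p) for p in web.values()))   # True: fully interval
print(is_interval([0, 2]))                         # False: gap at 1
```

The same check applied to the predator sets of each species gives the "predator intervality" measured in the paper.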

2. Taylor Approximations and Definite Integrals

ERIC Educational Resources Information Center

Gordon, Sheldon P.

2007-01-01

We investigate the possibility of approximating the value of a definite integral by approximating the integrand rather than using numerical methods to approximate the value of the definite integral. Particular cases considered include examples where the integral is improper, such as an elliptic integral. (Contains 4 tables and 2 figures.)
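For instance (our example, in the spirit of the article): integrating the Taylor series of exp(-x²) term by term evaluates the integral without any quadrature rule:

```python
import math

def gauss_integral(b, terms=10):
    """Integral of exp(-x^2) on [0, b] via the termwise-integrated
    Taylor series: sum of (-1)^n b^(2n+1) / (n! (2n+1))."""
    return sum((-1)**n * b**(2*n + 1) / (math.factorial(n) * (2*n + 1))
               for n in range(terms))

approx = gauss_integral(0.5)
exact = math.sqrt(math.pi) / 2.0 * math.erf(0.5)   # closed-form reference
print(abs(approx - exact) < 1e-12)                 # True
```

Because the series alternates, the truncation error is bounded by the first omitted term, so the accuracy of the approximation is easy to control.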

4. Approximate equilibria for Bayesian games

Mallozzi, Lina; Pusillo, Lucia; Tijs, Stef

2008-07-01

In this paper the problem of the existence of approximate equilibria in mixed strategies is central. Sufficient conditions are given under which approximate equilibria exist for non-finite Bayesian games. Further one possible approach is suggested to the problem of the existence of approximate equilibria for the class of multicriteria Bayesian games.

5. Frankenstein's glue: transition functions for approximate solutions

Yunes, Nicolás

2007-09-01

Approximations are commonly employed to find approximate solutions to the Einstein equations. These solutions, however, are usually only valid in some specific spacetime region. A global solution can be constructed by gluing approximate solutions together, but this procedure is difficult because discontinuities can arise, leading to large violations of the Einstein equations. In this paper, we provide an attempt to formalize this gluing scheme by studying transition functions that join approximate analytic solutions together. In particular, we propose certain sufficient conditions on these functions and prove that these conditions guarantee that the joined solution still satisfies the Einstein equations analytically to the same order as the approximate ones. An example is also provided for a binary system of non-spinning black holes, where the approximate solutions are taken to be given by a post-Newtonian expansion and a perturbed Schwarzschild solution. For this specific case, we show that if the transition functions satisfy the proposed conditions, then the joined solution does not contain any violations to the Einstein equations larger than those already inherent in the approximations. We further show that if these functions violate the proposed conditions, then the matter content of the spacetime is modified by the introduction of a matter shell, whose stress energy tensor depends on derivatives of these functions.

6. Recommended confidence intervals for two independent binomial proportions.

PubMed

Fagerland, Morten W; Lydersen, Stian; Laake, Petter

2015-04-01

The relationship between two independent binomial proportions is commonly estimated and presented using the difference between proportions, the number needed to treat, the ratio of proportions or the odds ratio. Several different confidence intervals are available, but they can produce markedly different results. Some of the traditional approaches, such as the Wald interval for the difference between proportions and the Katz log interval for the ratio of proportions, do not perform well unless the sample size is large. Better intervals are available. This article describes and compares approximate and exact confidence intervals that are - with one exception - easy to calculate or available in common software packages. We illustrate the performances of the intervals and make recommendations for both small and moderate-to-large sample sizes. © The Author(s) 2011 Reprints and permissions: sagepub.co.uk/journalsPermissions.nav.
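One of the traditional intervals mentioned above, the Wald interval for the difference between proportions, takes only a few lines (illustrative counts; as the article notes, this interval is unreliable unless the sample size is large):

```python
import math

def wald_diff_ci(x1, n1, x2, n2, z=1.96):
    """Wald interval for p1 - p2 (simple, but anti-conservative for
    small samples, which is why better intervals are recommended)."""
    p1, p2 = x1 / n1, x2 / n2
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return p1 - p2 - z * se, p1 - p2 + z * se

lo, hi = wald_diff_ci(40, 100, 25, 100)   # e.g. 40/100 vs 25/100 events
print(f"({lo:.3f}, {hi:.3f})")            # approximately (0.022, 0.278)
```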

7. Effect Sizes, Confidence Intervals, and Confidence Intervals for Effect Sizes

ERIC Educational Resources Information Center

Thompson, Bruce

2007-01-01

The present article provides a primer on (a) effect sizes, (b) confidence intervals, and (c) confidence intervals for effect sizes. Additionally, various admonitions for reformed statistical practice are presented. For example, a very important implication of the realization that there are dozens of effect size statistics is that "authors must…

8. Temporal binding of interval markers

PubMed Central

Derichs, Christina; Zimmermann, Eckart

2016-01-01

How we estimate the passage of time is an unsolved mystery in neuroscience. Illusions of subjective time provide an experimental access to this question. Here we show that time compression and expansion of visually marked intervals result from a binding of temporal interval markers. Interval markers whose onset signals were artificially weakened by briefly flashing a whole-field mask were bound in time towards markers with a strong onset signal. We explain temporal compression as the consequence of summing response distributions of weak and strong onset signals. Crucially, temporal binding occurred irrespective of the temporal order of weak and strong onset markers, thus ruling out processing latencies as an explanation for changes in interval duration judgments. If both interval markers were presented together with a mask or the mask was shown in the temporal interval center, no compression occurred. In a sequence of two intervals, masking the middle marker led to time compression for the first and time expansion for the second interval. All these results are consistent with a model view of temporal binding that serves a functional role by reducing uncertainty in the final estimate of interval duration. PMID:27958311

9. Reference Intervals in Neonatal Hematology.

PubMed

Henry, Erick; Christensen, Robert D

2015-09-01

The various blood cell counts of neonates must be interpreted in accordance with high-quality reference intervals based on gestational and postnatal age. Using very large sample sizes, we generated neonatal reference intervals for each element of the complete blood count (CBC). Knowledge of whether a patient has CBC values that are too high (above the upper reference interval) or too low (below the lower reference interval) provides important insights into the specific disorder involved and in many instances suggests a treatment plan. Copyright © 2015 Elsevier Inc. All rights reserved.

10. A trigonometric interval method for dynamic response analysis of uncertain nonlinear systems

Liu, ZhuangZhuang; Wang, TianShu; Li, JunFeng

2015-04-01

This paper proposes a new non-intrusive trigonometric polynomial approximation interval method for the dynamic response analysis of nonlinear systems with uncertain-but-bounded parameters and/or initial conditions. This method provides tighter solution ranges than the existing approximation interval methods. We consider trigonometric approximation polynomials of three types: both cosine and sine functions, the sine function, and the cosine function. Thus, special interval arithmetic for trigonometric functions without overestimation can be used to obtain interval results. The interval method using trigonometric approximation polynomials with a cosine functional form exhibits better performance than the existing Taylor interval method and Chebyshev interval method. Finally, two typical numerical examples with nonlinearity are applied to demonstrate the effectiveness of the proposed method.
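"Interval arithmetic for trigonometric functions without overestimation" amounts to returning the exact range of the function over an input interval. A minimal sketch for the cosine (our construction, not the paper's implementation):

```python
import math

def cos_range(lo, hi):
    """Exact range of cos over [lo, hi]: take the endpoint values plus
    any extrema at integer multiples of pi inside the interval."""
    vals = [math.cos(lo), math.cos(hi)]
    for k in range(math.ceil(lo / math.pi), math.floor(hi / math.pi) + 1):
        vals.append(1.0 if k % 2 == 0 else -1.0)   # cos(k*pi) = +/-1
    return min(vals), max(vals)

print(cos_range(0.0, math.pi))   # (-1.0, 1.0)
print(cos_range(0.5, 1.0))       # endpoint values only, no overestimation
```

Naive interval evaluation of a polynomial in cos would overestimate the range; evaluating the cosine's range exactly is what keeps the method's enclosures tight.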

11. An approximation based global optimization strategy for structural synthesis

NASA Technical Reports Server (NTRS)

Sepulveda, A. E.; Schmit, L. A.

1991-01-01

A global optimization strategy for structural synthesis based on approximation concepts is presented. The methodology involves the solution of a sequence of highly accurate approximate problems using a global optimization algorithm. The global optimization algorithm implemented consists of a branch and bound strategy based on the interval evaluation of the objective function and constraint functions, combined with a local feasible directions algorithm. The approximate design optimization problems are constructed using first order approximations of selected intermediate response quantities in terms of intermediate design variables. Some numerical results for example problems are presented to illustrate the efficacy of the design procedure set forth.
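A one-dimensional toy of the interval branch-and-bound idea (the structural approximations and the feasible-directions step are omitted; the objective function and tolerance are made up):

```python
def f(x):
    """Toy objective with two global minima at x = +/- sqrt(1.5)."""
    return x**4 - 3*x**2 + 1

def f_interval(lo, hi):
    """Interval enclosure of f on [lo, hi] via s = x^2 in [s_lo, s_hi]."""
    s_lo = 0.0 if lo <= 0.0 <= hi else min(lo*lo, hi*hi)
    s_hi = max(lo*lo, hi*hi)
    return s_lo*s_lo - 3*s_hi + 1, s_hi*s_hi - 3*s_lo + 1

def branch_and_bound(lo, hi, tol=1e-6):
    """Split boxes; discard any whose interval lower bound exceeds the
    best objective value sampled so far; refine the rest."""
    best = min(f(lo), f(hi))
    boxes = [(lo, hi)]
    while boxes:
        a, b = boxes.pop()
        if f_interval(a, b)[0] > best:
            continue                      # box cannot contain the minimum
        m = 0.5 * (a + b)
        best = min(best, f(m))
        if b - a > tol:
            boxes += [(a, m), (m, b)]
    return best

print(round(branch_and_bound(-2.0, 2.0), 6))   # -1.25, the global minimum
```

The interval lower bound is what makes the pruning rigorous: a discarded box provably cannot contain the global optimum.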

12. Energy

ERIC Educational Resources Information Center

Boyer, Ernest L.

1977-01-01

Schools must teach pupils about the wide nature of our energy dilemma and prepare them for a future in which not only will conservation of energy be essential, but also the conservation and preservation of our total natural resources. (JD)

13. An approximate classical unimolecular reaction rate theory

Zhao, Meishan; Rice, Stuart A.

1992-05-01

We describe a classical theory of unimolecular reaction rate which is derived from the analysis of Davis and Gray by use of simplifying approximations. These approximations concern the calculation of the locations of, and the fluxes of phase points across, the bottlenecks to fragmentation and to intramolecular energy transfer. The bottleneck to fragment separation is represented as a vibration-rotation state dependent separatrix, which approximation is similar to but extends and improves the approximations for the separatrix introduced by Gray, Rice, and Davis and by Zhao and Rice. The novel feature in our analysis is the representation of the bottlenecks to intramolecular energy transfer as dividing surfaces in phase space; the locations of these dividing surfaces are determined by the same conditions as locate the remnants of robust tori with frequency ratios related to the golden mean (in a two degree of freedom system these are the cantori). The flux of phase points across each dividing surface is calculated with an analytic representation instead of a stroboscopic mapping. The rate of unimolecular reaction is identified with the net rate at which phase points escape from the region of quasiperiodic bounded motion to the region of free fragment motion by consecutively crossing the dividing surfaces for intramolecular energy exchange and the separatrix. This new theory generates predictions of the rates of predissociation of the van der Waals molecules HeI2, NeI2 and ArI2 which are in very good agreement with available experimental data.

14. [Normal confidence interval for a summary measure].

PubMed

Bernard, P M

2000-10-01

This paper proposes an approach for calculating the normal confidence interval of a weighted summary measure which requires a particular continuous transformation for its variance estimation. By using the transformation properties and applying the delta method, the variance of the transformed measure is easily expressed in terms of the variances of the transformed specific measures and the squared weights. The confidence limits of the summary measure are easily deduced by inverse transformation of those of the transformed measure. The method is illustrated by applying it to some well-known epidemiological measures. It is appropriate in a stratified analysis context where the sample size allows a normal approximation.
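A sketch of the recipe for one common case, a ratio of two measures handled on the log scale (delta-method variance, then inverse transformation of the limits; the numbers are invented):

```python
import math

def ratio_ci(r1, var1, r2, var2, z=1.96):
    """Normal CI for the ratio r1/r2, built on the log scale and
    back-transformed; by the delta method, Var(log r) ~ Var(r)/r^2."""
    se = math.sqrt(var1 / r1**2 + var2 / r2**2)
    log_ratio = math.log(r1 / r2)
    return math.exp(log_ratio - z * se), math.exp(log_ratio + z * se)

lo, hi = ratio_ci(0.30, 0.0004, 0.20, 0.0003)   # two rates and their variances
print(f"({lo:.2f}, {hi:.2f})")                  # approximately (1.21, 1.86)
```

The transformation keeps the interval inside the measure's natural range (here, positive ratios), which is the point of working on the transformed scale.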

15. Teaching Confidence Intervals Using Simulation

ERIC Educational Resources Information Center

Hagtvedt, Reidar; Jones, Gregory Todd; Jones, Kari

2008-01-01

Confidence intervals are difficult to teach, in part because most students appear to believe they understand how to interpret them intuitively. They rarely do. To help them abandon their misconception and achieve understanding, we have developed a simulation tool that encourages experimentation with multiple confidence intervals derived from the…
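A sketch of the kind of experiment such a tool runs, estimating how often a nominal 95% z-interval actually covers the true mean (our toy simulation, not the authors' tool):

```python
import random
import statistics

def coverage(mu=0.0, sigma=1.0, n=30, trials=2000, z=1.96, seed=42):
    """Fraction of z-intervals (known sigma) that cover the true mean."""
    rng = random.Random(seed)
    half = z * sigma / n**0.5          # half-width of each interval
    hits = 0
    for _ in range(trials):
        xbar = statistics.fmean(rng.gauss(mu, sigma) for _ in range(n))
        hits += xbar - half <= mu <= xbar + half
    return hits / trials

print(coverage())   # close to the nominal 0.95
```

Seeing roughly 5% of intervals miss the fixed true mean is what dislodges the misreading that "the parameter has a 95% chance of being in this one interval."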

16. Explorations in Statistics: Confidence Intervals

ERIC Educational Resources Information Center

Curran-Everett, Douglas

2009-01-01

Learning about statistics is a lot like learning about science: the learning is more meaningful if you can actively explore. This third installment of "Explorations in Statistics" investigates confidence intervals. A confidence interval is a range that we expect, with some level of confidence, to include the true value of a population parameter…

17. Interval Recognition in Minimal Context.

ERIC Educational Resources Information Center

Shatzkin, Merton

1984-01-01

Music majors were asked to identify an interval when it was either preceded or followed by a tone moving in the same direction. Difficulties in interval recognition in context appear to be an effect not just of placement within the context or of tonality, but of particular combinations of these aspects. (RM)

18. Children's Discrimination of Melodic Intervals.

ERIC Educational Resources Information Center

Schellenberg, E. Glenn; Trehub, Sandra E.

1996-01-01

Adults and children listened to tone sequences and were required to detect changes either from intervals with simple frequency ratios to intervals with complex ratios or vice versa. Adults performed better on changes from simple to complex ratios than on the reverse changes. Similar performance was observed for 6-year olds who had never taken…

20. Automatic Error Analysis Using Intervals

ERIC Educational Resources Information Center

Rothwell, E. J.; Cloud, M. J.

2012-01-01

A technique for automatic error analysis using interval mathematics is introduced. A comparison to standard error propagation methods shows that in cases involving complicated formulas, the interval approach gives comparable error estimates with much less effort. Several examples are considered, and numerical errors are computed using the INTLAB…
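The flavor of the approach can be shown with a minimal interval class (a toy stand-in for a library like INTLAB; rigorous implementations also round the interval endpoints outward):

```python
class Interval:
    """Toy interval arithmetic for automatic error bounds."""
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi
    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)
    def __mul__(self, other):
        p = [self.lo * other.lo, self.lo * other.hi,
             self.hi * other.lo, self.hi * other.hi]
        return Interval(min(p), max(p))

x = Interval(1.9, 2.1)        # a measurement of 2.0 +/- 0.1
y = Interval(2.9, 3.1)        # a measurement of 3.0 +/- 0.1
z = x * y + x                 # guaranteed enclosure of x*y + x
print(round(z.lo, 2), round(z.hi, 2))   # 7.41 8.61
```

Unlike first-order error propagation, the enclosure is guaranteed however complicated the formula, which is the advantage the article highlights.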

2. Binary Interval Search: a scalable algorithm for counting interval intersections

PubMed Central

Layer, Ryan M.; Skadron, Kevin; Robins, Gabriel; Hall, Ira M.; Quinlan, Aaron R.

2013-01-01

Motivation: The comparison of diverse genomic datasets is fundamental to understand genome biology. Researchers must explore many large datasets of genome intervals (e.g. genes, sequence alignments) to place their experimental results in a broader context and to make new discoveries. Relationships between genomic datasets are typically measured by identifying intervals that intersect, that is, they overlap and thus share a common genome interval. Given the continued advances in DNA sequencing technologies, efficient methods for measuring statistically significant relationships between many sets of genomic features are crucial for future discovery. Results: We introduce the Binary Interval Search (BITS) algorithm, a novel and scalable approach to interval set intersection. We demonstrate that BITS outperforms existing methods at counting interval intersections. Moreover, we show that BITS is intrinsically suited to parallel computing architectures, such as graphics processing units by illustrating its utility for efficient Monte Carlo simulations measuring the significance of relationships between sets of genomic intervals. Availability: https://github.com/arq5x/bits. Contact: arq5x@virginia.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:23129298
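The counting idea can be sketched with two binary searches per query interval: every database interval that neither ends before the query starts nor starts after it ends must intersect it (closed-interval semantics assumed; this is our simplification, not the BITS implementation):

```python
import bisect

def count_intersections(queries, db):
    """Count overlapping (query, db) pairs by elimination: per query,
    subtract the db intervals that cannot possibly overlap it."""
    starts = sorted(s for s, _ in db)
    ends = sorted(e for _, e in db)
    n = len(db)
    total = 0
    for s, e in queries:
        ends_before = bisect.bisect_left(ends, s)          # end before query starts
        starts_after = n - bisect.bisect_right(starts, e)  # start after query ends
        total += n - ends_before - starts_after
    return total

A = [(1, 5), (10, 12)]
B = [(4, 8), (6, 9), (11, 15)]
print(count_intersections(A, B))   # 2: (1,5)-(4,8) and (10,12)-(11,15)
```

Sorting the starts and ends independently is what makes the per-query cost logarithmic, and each query is independent, which is why the approach parallelizes well.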

3. Rational-spline approximation with automatic tension adjustment

NASA Technical Reports Server (NTRS)

Schiess, J. R.; Kerr, P. A.

1984-01-01

An algorithm for weighted least-squares approximation with rational splines is presented. A rational spline is a cubic function containing a distinct tension parameter for each interval defined by two consecutive knots. For zero tension, the rational spline is identical to a cubic spline; for very large tension, the rational spline is a linear function. The approximation algorithm incorporates an algorithm which automatically adjusts the tension on each interval to fulfill a user-specified criterion. Finally, an example is presented comparing results of the rational spline with those of the cubic spline.

4. VARIABLE TIME-INTERVAL GENERATOR

DOEpatents

Gross, J.E.

1959-10-31

This patent relates to a pulse generator and more particularly to a time interval generator wherein the time interval between pulses is precisely determined. The variable time generator comprises two oscillators with one having a variable frequency output and the other a fixed frequency output. A frequency divider is connected to the variable oscillator for dividing its frequency by a selected factor and a counter is used for counting the periods of the fixed oscillator occurring during a cycle of the divided frequency of the variable oscillator. This defines the period of the variable oscillator in terms of that of the fixed oscillator. A circuit is provided for selecting as a time interval a predetermined number of periods of the variable oscillator. The output of the generator consists of a first pulse produced by a trigger circuit at the start of the time interval and a second pulse marking the end of the time interval produced by the same trigger circuit.

5. Combining global and local approximations

NASA Technical Reports Server (NTRS)

Haftka, Raphael T.

1991-01-01

A method based on a linear approximation to a scaling factor, designated the 'global-local approximation' (GLA) method, is presented and shown capable of extending the range of usefulness of derivative-based approximations to a more refined model. The GLA approach refines the conventional scaling factor by means of a linearly varying, rather than constant, scaling factor. The capabilities of the method are demonstrated for a simple beam example with a crude and more refined FEM model.

6. Combining global and local approximations

SciTech Connect

Haftka, R.T. )

1991-09-01

A method based on a linear approximation to a scaling factor, designated the 'global-local approximation' (GLA) method, is presented and shown capable of extending the range of usefulness of derivative-based approximations to a more refined model. The GLA approach refines the conventional scaling factor by means of a linearly varying, rather than constant, scaling factor. The capabilities of the method are demonstrated for a simple beam example with a crude and more refined FEM model. 6 refs.

7. An improved proximity force approximation for electrostatics

SciTech Connect

Fosco, Cesar D.; Lombardo, Fernando C.; Mazzitelli, Francisco D.

2012-08-15

A quite straightforward approximation for the electrostatic interaction between two perfectly conducting surfaces suggests itself when the distance between them is much smaller than the characteristic lengths associated with their shapes. Indeed, in the so-called 'proximity force approximation' the electrostatic force is evaluated by first dividing each surface into a set of small flat patches, and then adding up the forces due to opposite pairs of patches, whose contributions are approximated as those of pairs of parallel planes. This approximation has been widely and successfully applied in different contexts, ranging from nuclear physics to Casimir effect calculations. We present here an improvement on this approximation, based on a derivative expansion for the electrostatic energy contained between the surfaces. The results obtained could be useful for discussing the geometric dependence of the electrostatic force, and also as a convenient benchmark for numerical analyses of the tip-sample electrostatic interaction in atomic force microscopes. Highlights: The proximity force approximation (PFA) has been widely used in different areas. The PFA can be improved using a derivative expansion in the shape of the surfaces. We use the improved PFA to compute electrostatic forces between conductors. The results can be used as an analytic benchmark for numerical calculations in AFM. Insight is provided for people who use the PFA to compute nuclear and Casimir forces.
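The baseline PFA recipe (before the paper's derivative-expansion improvement) is easy to sketch for a sphere above a grounded plane: sum parallel-plate contributions eps0/d over flat annular patches (the geometry and discretization here are our illustrative choices):

```python
import math

EPS0 = 8.8541878128e-12   # vacuum permittivity, F/m

def pfa_capacitance(R, a, n=100_000):
    """Sphere-plane capacitance in the proximity force approximation:
    each flat annular patch contributes like a parallel-plate capacitor,
    eps0/d(r) per unit area, with d(r) the local gap."""
    c = 0.0
    dr = R / n
    for i in range(n):
        r = (i + 0.5) * dr                         # patch radius
        d = a + R - math.sqrt(R*R - r*r)           # local sphere-plane gap
        c += EPS0 / d * 2.0 * math.pi * r * dr     # eps0/d times patch area
    return c

R, a = 1e-3, 1e-6   # 1 mm sphere, 1 micron closest gap
print(f"{pfa_capacitance(R, a):.3e} F")            # about 3.3e-13 F
```

The patch sum converges to the closed form 2*pi*eps0*((R+a)*ln((R+a)/a) - R), which makes a convenient correctness check; the force then follows by differentiating the energy C*V^2/2 with respect to the gap.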

8. Energy

DTIC Science & Technology

2003-01-01

Canada, Britain, and Spain. We found that the energy industry is not in crisis; however, U.S. government policies, laws, dollars, and even public... CIEMAT (Centro de Investigaciones Energéticas, Medioambientales y Tecnológicas): research and development... procurement or storage of standard, common use fuels. NATURAL GAS: Natural gas, abundant globally and domestically, offers energy versatility among

9. Approximating Functions with Exponential Functions

ERIC Educational Resources Information Center

Gordon, Sheldon P.

2005-01-01

The possibility of approximating a function with a linear combination of exponential functions of the form e[superscript x], e[superscript 2x], ... is considered as a parallel development to the notion of Taylor polynomials which approximate a function with a linear combination of power function terms. The sinusoidal functions sin "x" and cos "x"…
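In the same spirit (our example, not the article's): a least-squares fit of sin x on [0, 1] with the basis {e^x, e^(2x), e^(3x)}, in parallel with how a Taylor polynomial uses powers of x:

```python
import numpy as np

# Least-squares fit of sin(x) on [0, 1] using exponential basis functions.
x = np.linspace(0.0, 1.0, 200)
A = np.column_stack([np.exp(k * x) for k in (1, 2, 3)])   # basis columns
coef, *_ = np.linalg.lstsq(A, np.sin(x), rcond=None)
max_err = np.max(np.abs(A @ coef - np.sin(x)))
print(max_err < 0.1)   # True: three exponentials already track sin(x) here
```

Adding further terms e^(4x), e^(5x), ... shrinks the residual, just as raising the degree of a Taylor polynomial does.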

10. Approximation methods in gravitational-radiation theory

NASA Technical Reports Server (NTRS)

Will, C. M.

1986-01-01

The observation of gravitational-radiation damping in the binary pulsar PSR 1913 + 16 and the ongoing experimental search for gravitational waves of extraterrestrial origin have made the theory of gravitational radiation an active branch of classical general relativity. In calculations of gravitational radiation, approximation methods play a crucial role. Recent developments are summarized in two areas in which approximations are important: (a) the quadrupole approximation, which determines the energy flux and the radiation reaction forces in weak-field, slow-motion, source-within-the-near-zone systems such as the binary pulsar; and (b) the normal modes of oscillation of black holes, where the Wentzel-Kramers-Brillouin approximation gives accurate estimates of the complex frequencies of the modes.
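Point (a) reduces, at leading order, to the Peters-Mathews quadrupole luminosity; for a circular binary it is a one-liner (the masses and separation below are illustrative, roughly Hulse-Taylor-like but circular):

```python
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
M_SUN = 1.989e30     # solar mass, kg

def quadrupole_power(m1, m2, a):
    """Gravitational-wave luminosity of a circular binary in the
    quadrupole approximation: (32/5) G^4/c^5 (m1 m2)^2 (m1+m2) / a^5."""
    return (32.0 / 5.0) * G**4 / c**5 * (m1 * m2)**2 * (m1 + m2) / a**5

p = quadrupole_power(1.4 * M_SUN, 1.4 * M_SUN, 2.0e9)
print(f"{p:.2e} W")   # about 5e23 W for this circular toy system
```

The steep 1/a^5 dependence is what makes the orbital decay of the binary pulsar measurable over years of timing.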

12. Structural optimization with approximate sensitivities

NASA Technical Reports Server (NTRS)

Patnaik, S. N.; Hopkins, D. A.; Coroneos, R.

1994-01-01

Computational efficiency in structural optimization can be enhanced if the intensive computations associated with the calculation of the sensitivities, that is, gradients of the behavior constraints, are reduced. An approximation to the gradients of the behavior constraints that can be generated with a small amount of numerical calculation is proposed. Structural optimization with these approximate sensitivities produced the correct optimum solution. The approximate gradients performed well for different nonlinear programming methods, such as the sequence of unconstrained minimization technique, the method of feasible directions, sequential quadratic programming, and sequential linear programming. Structural optimization with approximate gradients can reduce by one third the CPU time that would otherwise be required to solve the problem with explicit closed-form gradients. The proposed gradient approximation shows potential to reduce the intensive computation that has been associated with traditional structural optimization.

13. Approximate circuits for increased reliability

DOEpatents

Hamlet, Jason R.; Mayo, Jackson R.

2015-12-22

Embodiments of the invention describe a Boolean circuit having a voter circuit and a plurality of approximate circuits each based, at least in part, on a reference circuit. The approximate circuits are each to generate one or more output signals based on values of received input signals. The voter circuit is to receive the one or more output signals generated by each of the approximate circuits, and is to output one or more signals corresponding to a majority value of the received signals. At least some of the approximate circuits are to generate an output value different than the reference circuit for one or more input signal values; however, for each possible input signal value, the majority values of the one or more output signals generated by the approximate circuits and received by the voter circuit correspond to output signal result values of the reference circuit.

14. Approximate circuits for increased reliability

SciTech Connect

Hamlet, Jason R.; Mayo, Jackson R.

2015-08-18

Embodiments of the invention describe a Boolean circuit having a voter circuit and a plurality of approximate circuits each based, at least in part, on a reference circuit. The approximate circuits are each to generate one or more output signals based on values of received input signals. The voter circuit is to receive the one or more output signals generated by each of the approximate circuits, and is to output one or more signals corresponding to a majority value of the received signals. At least some of the approximate circuits are to generate an output value different than the reference circuit for one or more input signal values; however, for each possible input signal value, the majority values of the one or more output signals generated by the approximate circuits and received by the voter circuit correspond to output signal result values of the reference circuit.

15. Photoelectron spectroscopy and the dipole approximation

SciTech Connect

Hemmers, O.; Hansen, D.L.; Wang, H.

1997-04-01

Photoelectron spectroscopy is a powerful technique because it directly probes, via the measurement of photoelectron kinetic energies, orbital and band structure in valence and core levels in a wide variety of samples. The technique becomes even more powerful when it is performed in an angle-resolved mode, where photoelectrons are distinguished not only by their kinetic energy, but by their direction of emission as well. Determining the probability of electron ejection as a function of angle probes the different quantum-mechanical channels available to a photoemission process, because it is sensitive to phase differences among the channels. As a result, angle-resolved photoemission has been used successfully for many years to provide stringent tests of the understanding of basic physical processes underlying gas-phase and solid-state interactions with radiation. One mainstay in the application of angle-resolved photoelectron spectroscopy is the well-known electric-dipole approximation for photon interactions. In this simplification, all higher-order terms, such as those due to electric-quadrupole and magnetic-dipole interactions, are neglected. As the photon energy increases, however, effects beyond the dipole approximation become important. To best determine the range of validity of the dipole approximation, photoemission measurements on a simple atomic system, neon, where extra-atomic effects cannot play a role, were performed at BL 8.0. The measurements show that deviations from "dipole" expectations in angle-resolved valence photoemission are observable for photon energies down to at least 0.25 keV, and are quite significant at energies around 1 keV. From these results, it is clear that non-dipole angular-distribution effects may need to be considered in any application of angle-resolved photoelectron spectroscopy that uses x-ray photons of energies as low as a few hundred eV.

16. Intertrial interval duration and learning in autistic children.

PubMed

Koegel, R L; Dunlap, G; Dyer, K

1980-01-01

This study investigated the influence of intertrial interval duration on the performance of autistic children during teaching situations. The children were taught under the same conditions existing in their regular programs, except that the length of time between trials was systematically manipulated. With both multiple baseline and repeated reversal designs, two lengths of intertrial interval were employed: short intervals with the SD for any given trial presented approximately one second following the reinforcer for the previous trial versus long intervals with the SD presented four or more seconds following the reinforcer for the previous trial. The results showed that: (1) the short intertrial intervals always produced higher levels of correct responding than the long intervals; and (2) there were improving trends in performance and rapid acquisition with the short intertrial intervals, in contrast to minimal or no change with the long intervals. The results are discussed in terms of utilizing information about child and task characteristics when selecting optimal intervals. The data suggest that manipulations made between trials have a large influence on autistic children's learning.

17. Corrected profile likelihood confidence interval for binomial paired incomplete data.

PubMed

Pradhan, Vivek; Menon, Sandeep; Das, Ujjwal

2013-01-01

Clinical trials often use paired binomial data as their clinical endpoint. The confidence interval is frequently used to estimate the treatment performance. Tang et al. (2009) have proposed exact and approximate unconditional methods for constructing a confidence interval in the presence of incomplete paired binary data. The approach proposed by Tang et al. can be overly conservative, with large expected confidence interval width (ECIW), in some situations. We propose a profile likelihood-based method with a Jeffreys' prior correction to construct the confidence interval. This approach generates confidence intervals with much better coverage probabilities and shorter ECIWs. The performance of the method, along with the corrections, is demonstrated through extensive simulation. Finally, three real world data sets are analyzed by all the methods. Statistical Analysis System (SAS) codes to execute the profile likelihood-based methods are also presented.

18. Confidence Intervals for a Mean and a Proportion in the Bounded Case.

DTIC Science & Technology

1986-11-01

This paper describes a 100x(1-alpha) confidence interval for the mean of a bounded random variable which is shorter than the interval that Chebyshev's inequality induces for small alpha and which avoids the error of approximation that assuming normality induces. The paper also presents an analogous development for deriving a 100x(1-alpha) confidence interval for a proportion.
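The Chebyshev-based interval that the paper improves upon is easy to state in code. Below is a minimal Python sketch (the function name and interface are mine, and the paper's shorter interval is not reproduced here): it bounds the variance of a variable in [lower, upper] by (upper - lower)^2 / 4 and applies Chebyshev's inequality to the sample mean.

```python
import math

def chebyshev_ci(samples, lower, upper, alpha=0.05):
    """Distribution-free 100(1-alpha)% confidence interval for the mean
    of a random variable known to lie in [lower, upper].

    Any bounded variable satisfies Var(X) <= (upper - lower)^2 / 4, so
    Chebyshev's inequality applied to the sample mean gives
    P(|mean - mu| >= t) <= (upper - lower)^2 / (4 * n * t^2).
    Setting the right-hand side to alpha and solving for t yields the
    half-width used below.
    """
    n = len(samples)
    mean = sum(samples) / n
    half_width = (upper - lower) / (2.0 * math.sqrt(n * alpha))
    return mean - half_width, mean + half_width
```

For 100 samples in [0, 1] at alpha = 0.05 the half-width is 1/(2*sqrt(5)), roughly 0.224, which illustrates why a shorter interval for small alpha is worth having.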

19. TIME-INTERVAL MEASURING DEVICE

DOEpatents

Gross, J.E.

1958-04-15

An electronic device for measuring the time interval between two control pulses is presented. The device incorporates part of a previous approach for time measurement, in that pulses from a constant-frequency oscillator are counted during the interval between the control pulses. To reduce the possible error in counting caused by the operation of the counter gating circuit at various points in the pulse cycle, the described device provides means for successively delaying the pulses for a fraction of the pulse period so that a final delay of one period is obtained, and means for counting the pulses before and after each stage of delay during the time interval, whereby a plurality of totals is obtained which may be averaged and multiplied by the pulse period to obtain an accurate time-interval measurement.

20. Simple Interval Timers for Microcomputers.

ERIC Educational Resources Information Center

McInerney, M.; Burgess, G.

1985-01-01

Discusses simple interval timers for microcomputers, including (1) the Jiffy clock; (2) CPU count timers; (3) screen count timers; (4) light pen timers; and (5) chip timers. Also examines some of the general characteristics of all types of timers. (JN)

1. Relativistic mean field approximation to baryons

SciTech Connect

Dmitri Diakonov

2005-02-01

We stress the importance of the spontaneous chiral symmetry breaking for understanding the low-energy structure of baryons. The Mean Field Approximation to baryons is formulated, which solves several outstanding paradoxes of the naive quark models, and which allows one to compute parton distributions at low virtuality in a consistent way. We explain why this approach to baryons leads to the prediction of relatively light exotic pentaquark baryons, in contrast to the constituent models which do not take seriously the importance of chiral symmetry breaking. We briefly discuss why, to our mind, it is easier to produce exotic pentaquarks at low than at high energies.

2. Development of New Density Functional Approximations

Su, Neil Qiang; Xu, Xin

2017-05-01

Kohn-Sham density functional theory has become the leading electronic structure method for atoms, molecules, and extended systems. It is in principle exact, but any practical application must rely on density functional approximations (DFAs) for the exchange-correlation energy. Here we emphasize four aspects of the subject: (a) philosophies and strategies for developing DFAs; (b) classification of DFAs; (c) major sources of error in existing DFAs; and (d) some recent developments and future directions.

3. Numerical Approximation to the Thermodynamic Integrals

Johns, S. M.; Ellis, P. J.; Lattimer, J. M.

1996-12-01

We approximate boson thermodynamic integrals as polynomials in two variables chosen to give the correct limiting expansion and to smoothly interpolate into other regimes. With 10 free parameters, an accuracy of better than 0.009% is achieved for the pressure, internal energy density, and number density. We also revisit the fermion case, originally addressed by Eggleton, Faulkner, & Flannery (1973), and substantially improve the accuracy of their fits.

4. Microscopic justification of the equal filling approximation

SciTech Connect

Perez-Martin, Sara; Robledo, L. M.

2008-07-15

The equal filling approximation, a procedure widely used in mean-field calculations to treat the dynamics of odd nuclei in a time-reversal invariant way, is justified as the consequence of a variational principle over an average energy functional. The ideas of statistical quantum mechanics are employed in the justification. As an illustration of the method, the ground and lowest-lying states of some octupole deformed radium isotopes are computed.

5. The Role of Higher Harmonics In Musical Interval Perception

Krantz, Richard; Douthett, Jack

2011-10-01

Using an alternative parameterization of the roughness curve we make direct use of critical band results to investigate the role of higher harmonics on the perception of tonal consonance. We scale the spectral amplitudes in the complex home tone and complex interval tone to simulate acoustic signals of constant energy. Our analysis reveals that even with a relatively small addition of higher harmonics the perfect fifth emerges as a consonant interval, with more of the musically important just intervals emerging as consonant as more and more energy is shifted into higher frequencies.

6. Estimation of distribution algorithms with Kikuchi approximations.

PubMed

Santana, Roberto

2005-01-01

The question of finding feasible ways for estimating probability distributions is one of the main challenges for Estimation of Distribution Algorithms (EDAs). To estimate the distribution of the selected solutions, EDAs use factorizations constructed according to graphical models. The class of factorizations that can be obtained from these probability models is highly constrained. Expanding the class of factorizations that could be employed for probability approximation is a necessary step for the conception of more robust EDAs. In this paper we introduce a method for learning a more general class of probability factorizations. The method combines a reformulation of a probability approximation procedure known in statistical physics as the Kikuchi approximation of energy, with a novel approach for finding graph decompositions. We present the Markov Network Estimation of Distribution Algorithm (MN-EDA), an EDA that uses Kikuchi approximations to estimate the distribution, and Gibbs Sampling (GS) to generate new points. A systematic empirical evaluation of MN-EDA is done in comparison with different Bayesian network based EDAs. From our experiments we conclude that the algorithm can outperform other EDAs that use traditional methods of probability approximation in the optimization of functions with strong interactions among their variables.
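As background to the factorization issue this abstract raises, here is a minimal sketch of the simplest EDA, the Univariate Marginal Distribution Algorithm (UMDA), which uses a fully factorized (independent-marginal) probability model; MN-EDA's contribution is precisely to replace this product factorization with a richer Kikuchi approximation. The function name, parameters, and the OneMax test function are illustrative, not taken from the paper.

```python
import random

def umda_onemax(n=20, pop_size=60, n_select=30, generations=40, seed=1):
    """UMDA on OneMax (maximize the number of 1-bits): each generation,
    select the fittest individuals, estimate an independent Bernoulli
    marginal per bit position, and sample the next population from the
    resulting product distribution."""
    rng = random.Random(seed)
    fitness = sum  # OneMax: fitness is the count of 1-bits
    pop = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    best = max(pop, key=fitness)
    for _ in range(generations):
        selected = sorted(pop, key=fitness, reverse=True)[:n_select]
        # Estimate per-bit marginals, clamped away from 0/1 for diversity.
        probs = [min(0.95, max(0.05, sum(ind[i] for ind in selected) / n_select))
                 for i in range(n)]
        pop = [[1 if rng.random() < p else 0 for p in probs]
               for _ in range(pop_size)]
        cand = max(pop, key=fitness)
        if fitness(cand) > fitness(best):
            best = cand
    return best
```

The product model cannot capture interactions among variables, which is exactly the limitation that motivates the Kikuchi-approximation factorizations of MN-EDA.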

7. Approximating subtree distances between phylogenies.

PubMed

Bonet, Maria Luisa; St John, Katherine; Mahindru, Ruchi; Amenta, Nina

2006-10-01

We give a 5-approximation algorithm to the rooted Subtree-Prune-and-Regraft (rSPR) distance between two phylogenies, which was recently shown to be NP-complete. This paper presents the first approximation result for this important tree distance. The algorithm follows a standard format for tree distances. The novel ideas are in the analysis. In the analysis, the cost of the algorithm uses a "cascading" scheme that accounts for possible wrong moves. This accounting is missing from previous analysis of tree distance approximation algorithms. Further, we show how all algorithms of this type can be implemented in linear time and give experimental results.

8. Partitioned-Interval Quantum Optical Communications Receiver

NASA Technical Reports Server (NTRS)

Vilnrotter, Victor A.

2013-01-01

The proposed quantum receiver in this innovation partitions each binary signal interval into two unequal segments: a short "pre-measurement" segment in the beginning of the symbol interval used to make an initial guess with better probability than 50/50 guessing, and a much longer segment used to make the high-sensitivity signal detection via field-cancellation and photon-counting detection. It was found that by assigning as little as 10% of the total signal energy to the pre-measurement segment, the initial 50/50 guess can be improved to about 70/30, using the best available measurements such as classical coherent or "optimized Kennedy" detection.

9. Tuning for temporal interval in human apparent motion detection.

PubMed

Bours, Roger J E; Stuur, Sanne; Lankheet, Martin J M

2007-01-08

Detection of apparent motion in random dot patterns requires correlation across time and space. It has been difficult to study the temporal requirements for the correlation step because motion detection also depends on temporal filtering preceding correlation and on integration at the next levels. To specifically study tuning for temporal interval in the correlation step, we performed an experiment in which prefiltering and postintegration were held constant and in which we used a motion stimulus containing coherent motion for a single interval value only. The stimulus consisted of a sparse random dot pattern in which each dot was presented in two frames only, separated by a specified interval. On each frame, half of the dots were refreshed and the other half was a displaced reincarnation of the pattern generated one or several frames earlier. Motion energy statistics in such a stimulus do not vary from frame to frame, and the directional bias in spatiotemporal correlations is similar for different interval settings. We measured coherence thresholds for left-right direction discrimination by varying motion coherence levels in a Quest staircase procedure, as a function of both step size and interval. Results show that highest sensitivity was found for an interval of 17-42 ms, irrespective of viewing distance. The falloff at longer intervals was much sharper than previously described. Tuning for temporal interval was largely, but not completely, independent of step size. The optimal temporal interval slightly decreased with increasing step size. Similarly, the optimal step size decreased with increasing temporal interval.

10. Dual approximations in optimal control

NASA Technical Reports Server (NTRS)

Hager, W. W.; Ianculescu, G. D.

1984-01-01

A dual approximation for the solution to an optimal control problem is analyzed. The differential equation is handled with a Lagrange multiplier while other constraints are treated explicitly. An algorithm for solving the dual problem is presented.

11. Exponential approximations in optimal design

NASA Technical Reports Server (NTRS)

Belegundu, A. D.; Rajan, S. D.; Rajgopal, J.

1990-01-01

One-point and two-point exponential functions have been developed and proved to be very effective approximations of structural response. The exponential has been compared to the linear, reciprocal and quadratic fit methods. Four test problems in structural analysis have been selected. The use of such approximations is attractive in structural optimization to reduce the numbers of exact analyses which involve computationally expensive finite element analysis.

12. Mathematical algorithms for approximate reasoning

NASA Technical Reports Server (NTRS)

Murphy, John H.; Chay, Seung C.; Downs, Mary M.

1988-01-01

Most state of the art expert system environments contain a single and often ad hoc strategy for approximate reasoning. Some environments provide facilities to program the approximate reasoning algorithms. However, the next generation of expert systems should have an environment which contain a choice of several mathematical algorithms for approximate reasoning. To meet the need for validatable and verifiable coding, the expert system environment must no longer depend upon ad hoc reasoning techniques but instead must include mathematically rigorous techniques for approximate reasoning. Popular approximate reasoning techniques are reviewed, including: certainty factors, belief measures, Bayesian probabilities, fuzzy logic, and Shafer-Dempster techniques for reasoning. A group of mathematically rigorous algorithms for approximate reasoning are focused on that could form the basis of a next generation expert system environment. These algorithms are based upon the axioms of set theory and probability theory. To separate these algorithms for approximate reasoning various conditions of mutual exclusivity and independence are imposed upon the assertions. Approximate reasoning algorithms presented include: reasoning with statistically independent assertions, reasoning with mutually exclusive assertions, reasoning with assertions that exhibit minimum overlay within the state space, reasoning with assertions that exhibit maximum overlay within the state space (i.e. fuzzy logic), pessimistic reasoning (i.e. worst case analysis), optimistic reasoning (i.e. best case analysis), and reasoning with assertions with absolutely no knowledge of the possible dependency among the assertions. A robust environment for expert system construction should include the two modes of inference: modus ponens and modus tollens. Modus ponens inference is based upon reasoning towards the conclusion in a statement of logical implication, whereas modus tollens inference is based upon reasoning away
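Three of the conjunction rules named above, statistically independent assertions, maximum overlay (fuzzy-logic AND), and pessimistic worst-case reasoning, can be sketched as follows; the function name and interface are mine, not from the original survey.

```python
def combine(p_a, p_b, mode):
    """Probability assigned to 'A and B' under different assumptions
    about the dependency between the two assertions."""
    if mode == "independent":   # statistically independent assertions
        return p_a * p_b
    if mode == "fuzzy":         # maximum overlay within the state space
        return min(p_a, p_b)
    if mode == "pessimistic":   # worst case: minimum possible overlap
        return max(0.0, p_a + p_b - 1.0)
    raise ValueError("unknown mode: " + mode)
```

Whatever the true dependency, the joint probability always lies between the pessimistic and fuzzy values (the Frechet bounds), which is why these two rules bracket the independent case.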

13. Approximation techniques for neuromimetic calculus.

PubMed

Vigneron, V; Barret, C

1999-06-01

Approximation Theory plays a central part in modern statistical methods, in particular in Neural Network modeling. These models are able to approximate a large amount of metric data structures in their entire range of definition or at least piecewise. We survey most of the known results for networks of neurone-like units. The connections to classical statistical ideas such as ordinary least squares (LS) are emphasized.

14. Analytic approximations to the modon dispersion relation. [in oceanography

NASA Technical Reports Server (NTRS)

Boyd, J. P.

1981-01-01

Three explicit analytic approximations are given to the modon dispersion relation developed by Flierl et al. (1980) to describe Gulf Stream rings and related phenomena in the oceans and atmosphere. The solutions are in the form of k(q), and are developed in the form of a power series in q for small q, an inverse power series in 1/q for large q, and a two-point Pade approximant. The low order Pade approximant is shown to yield a solution for the dispersion relation with a maximum relative error for the lowest branch of the function equal to one in 700 in the q interval zero to infinity.
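As an illustration of the Padé machinery used in the paper (though for a one-point [1/1] approximant built from a single Taylor expansion, not the two-point approximant Boyd constructs from both the small-q and large-q series), the following sketch assembles the approximant from the first three Taylor coefficients.

```python
def pade_1_1(c0, c1, c2):
    """[1/1] Pade approximant of a series c0 + c1*x + c2*x^2 + ...
    Returns a callable (a0 + a1*x) / (1 + b1*x) that matches the
    series through the x^2 term: a0 = c0, b1 = -c2/c1, a1 = c1 + c0*b1."""
    if c1 == 0:
        raise ValueError("c1 must be nonzero for the [1/1] approximant")
    b1 = -c2 / c1
    a0 = c0
    a1 = c1 + c0 * b1
    return lambda x: (a0 + a1 * x) / (1.0 + b1 * x)
```

For example, pade_1_1(1.0, 1.0, 0.5) applied to the series of e^x reproduces the familiar (1 + x/2)/(1 - x/2) approximant, which is already accurate to a few parts in 10^4 at x = 0.1.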

15. JIMWLK evolution in the Gaussian approximation

Iancu, E.; Triantafyllopoulos, D. N.

2012-04-01

We demonstrate that the Balitsky-JIMWLK equations describing the high-energy evolution of the n-point functions of the Wilson lines (the QCD scattering amplitudes in the eikonal approximation) admit a controlled mean field approximation of the Gaussian type, for any value of the number of colors N_c. This approximation is strictly correct in the weak scattering regime at relatively large transverse momenta, where it reproduces the BFKL dynamics, and in the strong scattering regime deeply at saturation, where it properly describes the evolution of the scattering amplitudes towards the respective black disk limits. The approximation scheme is fully specified by giving the 2-point function (the S-matrix for a color dipole), which in turn can be related to the solution to the Balitsky-Kovchegov equation, including at finite N_c. Any higher n-point function with n ≥ 4 can be computed in terms of the dipole S-matrix by solving a closed system of evolution equations (a simplified version of the respective Balitsky-JIMWLK equations) which are local in the transverse coordinates. For simple configurations of the projectile in the transverse plane, our new results for the 4-point and the 6-point functions coincide with the high-energy extrapolations of the respective results in the McLerran-Venugopalan model. One cornerstone of our construction is a symmetry property of the JIMWLK evolution, that we notice here for the first time: the fact that, with increasing energy, a hadron is expanding its longitudinal support symmetrically around the light-cone. This corresponds to invariance under time reversal for the scattering amplitudes.

16. High resolution time interval meter

DOEpatents

Martin, A.D.

1986-05-09

Method and apparatus are provided for measuring the time interval between two events to a higher resolution than is reliably available from conventional circuits and components. An internal clock pulse is provided at a frequency compatible with conventional component operating frequencies for reliable operation. Lumped constant delay circuits are provided for generating outputs at delay intervals corresponding to the desired high resolution. An initiation START pulse is input to generate first high resolution data. A termination STOP pulse is input to generate second high resolution data. Internal counters count at the low frequency internal clock pulse rate between the START and STOP pulses. The first and second high resolution data are logically combined to directly provide high resolution data to one counter and correct the count in the low resolution counter to obtain a high resolution time interval measurement.

17. Finding Nested Common Intervals Efficiently

Blin, Guillaume; Stoye, Jens

In this paper, we study the problem of efficiently finding gene clusters formalized by nested common intervals between two genomes represented either as permutations or as sequences. Considering permutations, we give several algorithms whose running time depends on the size of the actual output rather than the output in the worst case. Indeed, we first provide a straightforward O(n^3) time algorithm for finding all nested common intervals. We reduce this complexity by providing an O(n^2) time algorithm computing an irredundant output. Finally, we show, by providing a third algorithm, that finding only the maximal nested common intervals can be done in linear time. Considering sequences, we provide solutions (modifications of previously defined algorithms and a new algorithm) for different variants of the problem, depending on the treatment one wants to apply to duplicated genes.
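For readers unfamiliar with common intervals, the basic object (before the nestedness restriction studied in the paper) can be enumerated with a naive quadratic max-min test; the sketch below is not any of the paper's output-sensitive algorithms, just the textbook characterization.

```python
def common_intervals(perm1, perm2):
    """All common intervals (length >= 2) of two permutations of
    0..n-1: index ranges (i, j) of perm1 whose element set also
    occupies a contiguous block of positions in perm2."""
    pos2 = [0] * len(perm2)
    for idx, val in enumerate(perm2):
        pos2[val] = idx
    result = []
    n = len(perm1)
    for i in range(n):
        lo = hi = pos2[perm1[i]]
        for j in range(i + 1, n):
            lo = min(lo, pos2[perm1[j]])
            hi = max(hi, pos2[perm1[j]])
            # perm1[i..j] covers positions [lo, hi] in perm2; it is a
            # common interval iff that span contains no extra elements.
            if hi - lo == j - i:
                result.append((i, j))
    return result
```

A nested common interval is then a common interval contained in another one; the paper's contribution is to enumerate those directly, in output-sensitive or linear time, rather than filtering this quadratic-size list.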

18. Wavelet Sparse Approximate Inverse Preconditioners

NASA Technical Reports Server (NTRS)

Chan, Tony F.; Tang, W.-P.; Wan, W. L.

1996-01-01

There is an increasing interest in using sparse approximate inverses as preconditioners for Krylov subspace iterative methods. Recent studies of Grote and Huckle and Chow and Saad also show that sparse approximate inverse preconditioners can be effective for a variety of matrices, e.g. Harwell-Boeing collections. Nonetheless a drawback is that it requires rapid decay of the inverse entries so that a sparse approximate inverse is possible. However, for the class of matrices that come from elliptic PDE problems, this assumption may not necessarily hold. Our main idea is to look for a basis, other than the standard one, such that a sparse representation of the inverse is feasible. A crucial observation is that the kind of matrices we are interested in typically have a piecewise smooth inverse. We exploit this fact, by applying wavelet techniques to construct a better sparse approximate inverse in the wavelet basis. We shall justify theoretically and numerically that our approach is effective for matrices with smooth inverse. We emphasize that in this paper we have only presented the idea of wavelet approximate inverses and demonstrated its potential but have not yet developed a highly refined and efficient algorithm.

19. Product-State Approximations to Quantum States

Brandão, Fernando G. S. L.; Harrow, Aram W.

2016-02-01

We show that for any many-body quantum state there exists an unentangled quantum state such that most of the two-body reduced density matrices are close to those of the original state. This is a statement about the monogamy of entanglement, which cannot be shared without limit in the same way as classical correlation. Our main application is to Hamiltonians that are sums of two-body terms. For such Hamiltonians we show that there exist product states with energy that is close to the ground-state energy whenever the interaction graph of the Hamiltonian has high degree. This proves the validity of mean-field theory and gives an explicitly bounded approximation error. If we allow states that are entangled within small clusters of systems but product across clusters then good approximations exist when the Hamiltonian satisfies one or more of the following properties: (1) high degree, (2) small expansion, or (3) a ground state where the blocks in the partition have sublinear entanglement. Previously this was known only in the case of small expansion or in the regime where the entanglement was close to zero. Our approximations allow an extensive error in energy, which is the scale considered by the quantum PCP (probabilistically checkable proof) and NLTS (no low-energy trivial-state) conjectures. Thus our results put restrictions on the possible Hamiltonians that could be used for a possible proof of the qPCP or NLTS conjectures. By contrast the classical PCP constructions are often based on constraint graphs with high degree. Likewise we show that the parallel repetition that is possible with classical constraint satisfaction problems cannot also be possible for quantum Hamiltonians, unless qPCP is false. The main technical tool behind our results is a collection of new classical and quantum de Finetti theorems which do not make any symmetry assumptions on the underlying states.

20. Nonequilibrium dynamical cluster approximation study of the Falicov-Kimball model

Herrmann, Andreas J.; Tsuji, Naoto; Eckstein, Martin; Werner, Philipp

2016-12-01

We use a nonequilibrium implementation of the dynamical cluster approximation (DCA) to study the effect of short-range correlations on the dynamics of the two-dimensional Falicov-Kimball model after an interaction quench. As in the case of single-site dynamical mean-field theory, thermalization is absent in DCA simulations, and for quenches across the metal-insulator boundary, nearest-neighbor charge correlations in the nonthermal steady state are found to be larger than in the thermal state with identical energy. We investigate to what extent it is possible to define an effective temperature of the trapped state after a quench. Based on the ratio between the lesser and retarded Green's function, we conclude that a roughly thermal distribution is reached within the energy intervals corresponding to the momentum-patch dependent subbands of the spectral function. The effectively different chemical potentials of these distributions, however, lead to a very hot, or even negative, effective temperature in the energy intervals between these subbands.

1. Gadgets, approximation, and linear programming

SciTech Connect

Trevisan, L.; Sudan, M.; Sorkin, G.B.; Williamson, D.P.

1996-12-31

We present a linear-programming based method for finding "gadgets", i.e., combinatorial structures reducing constraints of one optimization problem to constraints of another. A key step in this method is a simple observation which limits the search space to a finite one. Using this new method we present a number of new, computer-constructed gadgets for several different reductions. This method also answers a previously posed question on how to prove the optimality of gadgets: we show how LP duality gives such proofs. The new gadgets improve hardness results for MAX CUT and MAX DICUT, showing that approximating these problems to within factors of 60/61 and 44/45 respectively is NP-hard. We also use the gadgets to obtain an improved approximation algorithm for MAX 3SAT which guarantees an approximation ratio of .801. This improves upon the previous best bound of .7704.

2. Rational approximations for tomographic reconstructions

Reynolds, Matthew; Beylkin, Gregory; Monzón, Lucas

2013-06-01

We use optimal rational approximations of projection data collected in x-ray tomography to improve image resolution. Under the assumption that the object of interest is described by functions with jump discontinuities, for each projection we construct its rational approximation with a small (near optimal) number of terms for a given accuracy threshold. This allows us to augment the measured data, i.e., double the number of available samples in each projection or, equivalently, extend (double) the domain of their Fourier transform. We also develop a new, fast, polar coordinate Fourier domain algorithm which uses our nonlinear approximation of projection data in a natural way. Using augmented projections of the Shepp-Logan phantom, we provide a comparison between the new algorithm and the standard filtered back-projection algorithm. We demonstrate that the reconstructed image has improved resolution without additional artifacts near sharp transitions in the image.

3. Confidence intervals for effect parameters common in cancer epidemiology.

PubMed Central

Sato, T

1990-01-01

This paper reviews approximate confidence intervals for some effect parameters common in cancer epidemiology. These methods are computationally feasible and give nearly nominal coverage rates. In the analysis of crude data, the simplest type of epidemiologic analysis, the parameters of interest are the odds ratio in case-control studies and the rate ratio and rate difference in cohort studies. These parameters can estimate the instantaneous-incidence-rate ratio and difference, which are the most meaningful effect measures in cancer epidemiology. Approximate confidence intervals for these parameters, including Cornfield's classical method, are mainly based on efficient scores. When confounding factors exist, stratified analysis and summary measures for effect parameters are needed. Since the Mantel-Haenszel estimators have been widely used by epidemiologists as summary measures, confidence intervals based on the Mantel-Haenszel estimators are described. The paper also discusses recent developments in these methods. PMID:2269246
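As a concrete illustration of the stratified analysis described above, the following sketch computes a Mantel-Haenszel summary odds ratio with an approximate confidence interval. It is not code from the paper; it uses the Robins-Breslow-Greenland variance estimate for the log odds ratio, and the example strata are hypothetical.

```python
# Hypothetical sketch: Mantel-Haenszel summary odds ratio across strata with
# an approximate 95% CI from the Robins-Breslow-Greenland variance estimate.
# Each stratum is a 2x2 table (a, b, c, d) =
# (exposed cases, exposed controls, unexposed cases, unexposed controls).
import math

def mantel_haenszel_or(strata, z=1.96):
    R = sum(a * d / (a + b + c + d) for a, b, c, d in strata)
    S = sum(b * c / (a + b + c + d) for a, b, c, d in strata)
    or_mh = R / S
    # Robins-Breslow-Greenland estimate of Var(log OR_MH), with per-stratum
    # P = (a+d)/n, Q = (b+c)/n, R_i = ad/n, S_i = bc/n
    num1 = sum((a + d) / n * (a * d / n)
               for a, b, c, d in strata for n in [a + b + c + d])
    num2 = sum((a + d) / n * (b * c / n) + (b + c) / n * (a * d / n)
               for a, b, c, d in strata for n in [a + b + c + d])
    num3 = sum((b + c) / n * (b * c / n)
               for a, b, c, d in strata for n in [a + b + c + d])
    var_log = num1 / (2 * R * R) + num2 / (2 * R * S) + num3 / (2 * S * S)
    se = math.sqrt(var_log)
    return or_mh, (or_mh * math.exp(-z * se), or_mh * math.exp(z * se))

# Two strata of a hypothetical case-control study
or_mh, (lo, hi) = mantel_haenszel_or([(10, 20, 15, 40), (8, 5, 12, 20)])
print(or_mh, lo, hi)
```

For a single stratum the estimator reduces to the crude odds ratio ad/bc, which gives a quick sanity check.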

4. Energy.

ERIC Educational Resources Information Center

Shanebrook, J. Richard

This document describes a course designed to acquaint students with the many societal and technological problems facing the United States and the world due to the increasing demand for energy. The course begins with a writing assignment that involves readings on the environmental philosophy of Native Americans and the Chernobyl catastrophe.…

5. Heat pipe transient response approximation.

SciTech Connect

Reid, R. S.

2001-01-01

A simple and concise routine that approximates the response of an alkali metal heat pipe to changes in evaporator heat transfer rate is described. This analytically based routine is compared with data from a cylindrical heat pipe with a crescent-annular wick that undergoes gradual (quasi-steady) transitions through the viscous and condenser boundary heat transfer limits. The sonic heat transfer limit can also be incorporated into this routine for heat pipes with more closely coupled condensers. The advantages and obvious limitations of this approach are discussed. For reference, a source code listing for the approximation appears at the end of this paper.

6. Adaptive approximation models in optimization

SciTech Connect

Voronin, A.N.

1995-05-01

The paper proposes a method for optimization of functions of several variables that substantially reduces the number of objective function evaluations compared to traditional methods. The method is based on the property of iterative refinement of approximation models of the optimand function in approximation domains that contract to the extremum point. It does not require subjective specification of the starting point, step length, or other parameters of the search procedure. The method is designed for efficient optimization of unimodal functions of several (not more than 10-15) variables and can be applied to find the global extremum of polymodal functions and also for optimization of scalarized forms of vector objective functions.
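The general idea above, fitting an approximation model in a domain that contracts toward the extremum, can be sketched in one dimension. This is my own minimal construction, not the paper's algorithm: fit a parabola through three points, move the domain to the parabola's vertex, and shrink it.

```python
# Minimal 1-D sketch (not the paper's method): iteratively refine a quadratic
# approximation model in a contracting search interval, so the objective is
# evaluated only a few times per iteration.
def surrogate_minimize(f, lo, hi, iters=30):
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        xs = [lo, mid, hi]
        ys = [f(x) for x in xs]
        # Coefficients of the quadratic a*x^2 + b*x + c through the 3 points
        denom = (xs[0] - xs[1]) * (xs[0] - xs[2]) * (xs[1] - xs[2])
        a = (xs[2] * (ys[1] - ys[0]) + xs[1] * (ys[0] - ys[2])
             + xs[0] * (ys[2] - ys[1])) / denom
        b = (xs[2] ** 2 * (ys[0] - ys[1]) + xs[1] ** 2 * (ys[2] - ys[0])
             + xs[0] ** 2 * (ys[1] - ys[2])) / denom
        # Vertex of the model; fall back to the midpoint if it is not convex
        x_star = -b / (2 * a) if a > 0 else mid
        x_star = min(max(x_star, lo), hi)
        # Contract the approximation domain around the model's minimizer
        half = 0.25 * (hi - lo)
        lo, hi = x_star - half, x_star + half
    return 0.5 * (lo + hi)

x = surrogate_minimize(lambda t: (t - 2.0) ** 2 + 1.0, -10.0, 10.0)
print(x)  # close to 2.0
```

Note that no starting point or step length has to be supplied, mirroring the property claimed in the abstract; only the initial bounding interval is needed.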

7. Approximating spatially exclusive invasion processes.

PubMed

Ross, Joshua V; Binder, Benjamin J

2014-05-01

A number of biological processes, such as invasive plant species and cell migration, are composed of two key mechanisms: motility and reproduction. Due to the spatially exclusive interacting behavior of these processes, a cellular automaton (CA) model is specified to simulate a one-dimensional invasion process. Three approximations (independence, Poisson, and 2D-Markov chain) are considered that attempt to capture the average behavior of the CA. We show that our 2D-Markov chain approximation accurately predicts the state of the CA for a wide range of motility and reproduction rates.
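A toy version of such a spatially exclusive CA can be sketched as follows. This is my own construction following the abstract's description, not the authors' model: occupied sites attempt to move or reproduce into an adjacent site, and the attempt fails if the target site is already occupied.

```python
# Toy 1-D spatially exclusive CA sketch (illustrative, not the paper's model):
# motility and reproduction events into adjacent sites, blocked by exclusion.
import random

def step(lattice, p_move, p_repro, rng):
    # One sweep: each site, visited in random order, attempts one event.
    n = len(lattice)
    for i in rng.sample(range(n), n):
        if not lattice[i]:
            continue
        j = i + rng.choice([-1, 1])      # random adjacent target site
        if j < 0 or j >= n or lattice[j]:
            continue                     # exclusion: occupied or off-lattice target blocks the event
        r = rng.random()
        if r < p_move:                   # motility: hop into the empty neighbour
            lattice[i], lattice[j] = False, True
        elif r < p_move + p_repro:       # reproduction: offspring fills the neighbour
            lattice[j] = True

rng = random.Random(0)
state = [True] * 5 + [False] * 45        # invasion front starting at the left edge
for _ in range(100):
    step(state, p_move=0.3, p_repro=0.2, rng=rng)
print(sum(state))
```

Under these rules occupancy can only grow (moves preserve the population, reproduction adds to it), which is the monotone invasion behavior the approximations in the record aim to capture on average.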

8. Galerkin approximations for dissipative magnetohydrodynamics

NASA Technical Reports Server (NTRS)

Chen, Hudong; Shan, Xiaowen; Montgomery, David

1990-01-01

A Galerkin approximation scheme is proposed for voltage-driven, dissipative magnetohydrodynamics. The trial functions are exact eigenfunctions of the linearized continuum equations and represent helical deformations of the axisymmetric, zero-flow, driven steady state. The lowest nontrivial truncation is explored: one axisymmetric trial function and one helical trial function each for the magnetic and velocity fields. The system resembles the Lorenz approximation to Benard convection, but in the region of believed applicability, its dynamical behavior is rather different, including relaxation to a helically deformed state similar to those that have emerged in the much higher resolution computations of Dahlburg et al.
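The Galerkin idea itself, independent of the MHD details above, can be shown in miniature: expand the unknown in a few trial functions, project the residual onto each, and solve the resulting small system. The sketch below (my own generic example, not the paper's scheme) solves -u'' = f on [0, π] with homogeneous boundary conditions using sine trial functions, for which the projected equations decouple.

```python
# Generic Galerkin sketch (illustrative): -u'' = f on [0, pi], u(0) = u(pi) = 0,
# with trial functions sin(kx). Projection gives k^2 c_k = (2/pi) <f, sin(kx)>.
import math

def galerkin_sine(f, n_modes, n_quad=2000):
    h = math.pi / n_quad
    coeffs = []
    for k in range(1, n_modes + 1):
        # midpoint-rule quadrature for the load projection <f, sin(kx)>
        fk = sum(f(h * (j + 0.5)) * math.sin(k * h * (j + 0.5))
                 for j in range(n_quad)) * h
        coeffs.append((2.0 / math.pi) * fk / k ** 2)  # stiffness of sin(kx) is k^2
    return lambda x: sum(c * math.sin((k + 1) * x) for k, c in enumerate(coeffs))

# For f = sin(3x) the exact solution is u = sin(3x)/9
u = galerkin_sine(lambda x: math.sin(3 * x), n_modes=5)
print(u(1.0), math.sin(3.0) / 9.0)
```

The truncation in the record works the same way: the trial set is cut to the lowest nontrivial modes, and the dynamics of the projected coefficients are then studied.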

9. Second Approximation to Conical Flows

DTIC Science & Technology

1950-12-01

Approved for public release; Wright Air Development Center report. [The abstract is unrecoverable OCR residue; the report computes the second approximation, i.e., the second-order terms, from the isentropic equations of motion.]

10. Approximation of Failure Probability Using Conditional Sampling

NASA Technical Reports Server (NTRS)

Giesy, Daniel P.; Crespo, Luis G.; Kenney, Sean P.

2008-01-01

In analyzing systems which depend on uncertain parameters, one technique is to partition the uncertain parameter domain into a failure set and its complement, and judge the quality of the system by estimating the probability of failure. If this is done by a sampling technique such as Monte Carlo and the probability of failure is small, accurate approximation can require so many sample points that the computational expense is prohibitive. Previous work of the authors has shown how to bound the failure event by sets of such simple geometry that their probabilities can be calculated analytically. In this paper, it is shown how to make use of these failure bounding sets and conditional sampling within them to substantially reduce the computational burden of approximating failure probability. It is also shown how the use of these sampling techniques improves the confidence intervals for the failure probability estimate for a given number of sample points and how they reduce the number of sample point analyses needed to achieve a given level of confidence.

11. High resolution time interval counter

NASA Technical Reports Server (NTRS)

Zhang, Victor S.; Davis, Dick D.; Lombardi, Michael A.

1995-01-01

In recent years, we have developed two types of high resolution, multi-channel time interval counters.
In the NIST two-way time transfer MODEM application, the counter is designed to operate primarily in the interrupt-driven mode, with 3 start channels and 3 stop channels. The intended start and stop signals are 1 PPS, although other frequencies can also be applied to start and stop the count. The time interval counters used in the NIST Frequency Measurement and Analysis System are implemented with 7 start channels and 7 stop channels. Four of the 7 start channels are devoted to the frequencies of 1 MHz, 5 MHz or 10 MHz, while triggering signals to all other start and stop channels can range from 1 PPS to 100 kHz. Time interval interpolation plays a key role in achieving the high resolution time interval measurements for both counters. With a 10 MHz time base, both counters demonstrate a single-shot resolution of better than 40 ps, and a stability of better than 5 × 10^(-12) (σ_x(τ)) after a self test of 1000 seconds. The maximum rate of time interval measurements (with no dead time) is 1.0 kHz for the counter used in the MODEM application and is 2.0 kHz for the counter used in the Frequency Measurement and Analysis System. The counters are implemented as plug-in units for an AT-compatible personal computer. This configuration provides an efficient way of using a computer not only to control and operate the counters, but also to store and process measured data.

12. Pythagorean Approximations and Continued Fractions

ERIC Educational Resources Information Center

Peralta, Javier

2008-01-01

In this article, we will show that the Pythagorean approximations of [the square root of] 2 coincide with those achieved in the 16th century by means of continued fractions. Assuming this fact and the known relation that connects the Fibonacci sequence with the golden section, we shall establish a procedure to obtain sequences of rational numbers…
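The rational approximations of √2 mentioned in the record above are the convergents of its continued fraction [1; 2, 2, 2, ...], and they satisfy a simple two-term recurrence. A minimal sketch (my own, not from the article):

```python
# Successive rational approximations of sqrt(2) from its continued fraction
# [1; 2, 2, 2, ...]: each convergent p/q obeys p' = p + 2q, q' = p + q,
# giving 1/1, 3/2, 7/5, 17/12, 41/29, ...
from fractions import Fraction

def sqrt2_convergents(n):
    p, q = 1, 1
    out = []
    for _ in range(n):
        out.append(Fraction(p, q))
        p, q = p + 2 * q, p + q
    return out

for c in sqrt2_convergents(5):
    print(c, float(c))
```

The denominators 1, 2, 5, 12, 29, ... obey the same kind of recurrence as the Fibonacci numbers, which is the connection to the golden section the abstract alludes to.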
14. Singularly Perturbed Lie Bracket Approximation

SciTech Connect

Durr, Hans-Bernd; Krstic, Miroslav; Scheinker, Alexander; Ebenbauer, Christian

2015-03-27

Here, we consider the interconnection of two dynamical systems where one has an input-affine vector field. We show that by employing a singular perturbation analysis and the Lie bracket approximation technique, the stability of the overall system can be analyzed by regarding the stability properties of two reduced, uncoupled systems.

15. Analytic approximate radiation effects due to Bremsstrahlung

SciTech Connect

Ben-Zvi, I.

2012-02-01

The purpose of this note is to provide analytic approximate expressions that can provide quick estimates of the various effects of the Bremsstrahlung radiation produced by relatively low energy electrons, such as the dumping of the beam into the beam stop at the ERL or field emission in superconducting cavities. The purpose of this work is not to replace a dependable calculation or, better yet, a measurement under real conditions, but to provide a quick but approximate estimate for guidance purposes only. These effects include dose to personnel, ozone generation in the air volume exposed to the radiation, hydrogen generation in the beam dump water cooling system and radiation damage to nearby magnets. These expressions can be used for other purposes, but one should note that the electron beam energy range is limited. In these calculations the good range is from about 0.5 MeV to 10 MeV. To help in the application of this note, calculations are presented as a worked out example for the beam dump of the R&D Energy Recovery Linac.

16. Counting independent sets using the Bethe approximation

SciTech Connect

Chertkov, Michael; Chandrasekaran, V.; Gamarnik, D.; Shah, D.; Sin, J.

2009-01-01

The authors consider the problem of counting the number of independent sets, or the partition function of a hard-core model, in a graph. The problem in general is computationally hard (#P-hard). They study the quality of the approximation provided by the Bethe free energy. Belief propagation (BP) is a message-passing algorithm that can be used to compute fixed points of the Bethe approximation; however, BP is not always guaranteed to converge. As the first result, they propose a simple message-passing algorithm that converges to a BP fixed point for any graph. They find that their algorithm converges within a multiplicative error 1 + ε of a fixed point in O(n^2 ε^(-4) log^3(n ε^(-1))) iterations for any bounded degree graph of n nodes. In a nutshell, the algorithm can be thought of as a modification of BP with "time-varying" message-passing. Next, they analyze the resulting error to the number of independent sets provided by such a fixed point of the Bethe approximation. Using the loop calculus approach recently developed by Chertkov and Chernyak, they establish that for any bounded graph with large enough girth, the error is O(n^(-γ)) for some γ > 0. As an application, they find that for a random 3-regular graph, the Bethe approximation of the log-partition function (log of the number of independent sets) is within o(1) of the correct log-partition function; this is quite surprising, as previous physics-based predictions were expecting an error of o(n). In sum, their results provide a systematic way to find Bethe fixed points for any graph quickly and allow for estimating the error in the Bethe approximation using novel combinatorial techniques.

17. An approximate quantal treatment to obtain the energy levels of tetra-atomic X···I2···Y van der Waals clusters (X,Y = He,Ne)

NASA Astrophysics Data System (ADS)

García-Vela, A.; Villarreal, P.; Delgado-Barrio, G.

1990-01-01

The structure of tetra-atomic X···I2···Y van der Waals (vdW) clusters, where X,Y = He,Ne, is studied using an approximate quantal treatment. In this model the above complexes are treated like diatomic molecules, with the rare-gas atoms playing the role of electrons in conventional diatomics. Then an H2-like molecular-orbital formalism is applied, choosing the discrete states of the triatomic systems I2···X(Y) as molecular orbitals. Calculations at fixed configurations, as well as including vdW bending motions restricted to the plane perpendicular to the I2 axis, have been carried out for the sake of comparison with previous results. Finally, the restrictions are relaxed and the vdW bending motions are incorporated in a full way within the framework of a configuration interaction. The structure of these clusters is also studied through the probability density function.

18. Discrete extrinsic curvatures and approximation of surfaces by polar polyhedra

NASA Astrophysics Data System (ADS)

Garanzha, V. A.

2010-01-01

The duality principle for approximation of geometrical objects (also known as the Eudoxus exhaustion method) was extended and perfected by Archimedes in his famous tractate "Measurement of the circle". The main idea of Archimedes' approximation method is to construct a sequence of pairs of inscribed and circumscribed polygons (polyhedra) which approximate a curvilinear convex body. This sequence allows one to approximate the length of a curve, as well as the area and volume of bodies, and to obtain error estimates for the approximation.
In this work it is shown that a sequence of pairs of locally polar polyhedra allows one to construct a piecewise-affine approximation to the spherical Gauss map, to construct convergent point-wise approximations to the mean and Gauss curvature, and to obtain natural discretizations of bending energies. The suggested approach can be applied to nonconvex surfaces and in the case of multiple dimensions.

19. Relativistic Random Phase Approximation At Finite Temperature

SciTech Connect

Niu, Y. F.; Paar, N.; Vretenar, D.; Meng, J.

2009-08-26

The fully self-consistent finite temperature relativistic random phase approximation (FTRRPA) has been established in the single-nucleon basis of the temperature dependent Dirac-Hartree model (FTDH) based on an effective Lagrangian with density dependent meson-nucleon couplings. Illustrative calculations in the FTRRPA framework show the evolution of multipole responses of ^132Sn with temperature. With increased temperature, in both monopole and dipole strength distributions additional transitions appear in the low energy region due to the newly opened particle-particle and hole-hole transition channels.

20. Testing the frozen flow approximation

NASA Technical Reports Server (NTRS)

Lucchin, Francesco; Matarrese, Sabino; Melott, Adrian L.; Moscardini, Lauro

1993-01-01

We investigate the accuracy of the frozen-flow approximation (FFA), recently proposed by Matarrese et al. (1992), for following the nonlinear evolution of cosmological density fluctuations under gravitational instability. We compare a number of statistics between results of the FFA and n-body simulations, including those used by Melott, Pellman & Shandarin (1993) to test the Zel'dovich approximation. The FFA performs reasonably well in a statistical sense, e.g. in reproducing the counts-in-cell distribution at small scales, but it does poorly in the cross-correlation with n-body, which means it is generally not moving mass to the right place, especially in models with high small-scale power.

1. Potential of the approximation method

SciTech Connect

Amano, K.; Maruoka, A.

1996-12-31

Developing some techniques for the approximation method, we establish precise versions of the following statements concerning lower bounds for circuits that detect cliques of size s in a graph with m vertices: For 5 ≤ s ≤ m/4, a monotone circuit computing CLIQUE(m, s) contains at least (1/2)·1.8^min(√s − 1/2, m/(4s)) gates. If a non-monotone circuit computes CLIQUE using a "small" amount of negation, then the circuit contains an exponential number of gates. The former is proved very simply using the so-called bottleneck counting argument within the framework of approximation, whereas the latter is verified by introducing a notion of restricting negation and generalizing the sunflower contraction.

2. Nonlinear Filtering and Approximation Techniques

DTIC Science & Technology

1991-09-01

[The abstract is unrecoverable OCR residue of a reference list, citing work on piecewise-linear filtering with small observation noise, probability methods for approximations in stochastic control, and time discretization of nonlinear filtering equations.]

3. Analytical solution approximation for bearing

NASA Astrophysics Data System (ADS)

Hanafi, Lukman; Mufid, M. Syifaul

2017-08-01

The purpose of lubrication is to separate two surfaces sliding past each other with a film of some material which can be sheared without causing any damage to the surfaces.
Reynolds equation is a basic equation for fluid lubrication which is applied in the bearing problem. This equation can be derived from the Navier-Stokes equation and the continuity equation. In this paper the Reynolds equation is solved using an analytical approximation, making simplifications to obtain the pressure distribution.

4. Ultrafast approximation for phylogenetic bootstrap.

PubMed

Minh, Bui Quang; Nguyen, Minh Anh Thi; von Haeseler, Arndt

2013-05-01

Nonparametric bootstrap has been a widely used tool in phylogenetic analysis to assess the clade support of phylogenetic trees. However, with the rapidly growing amount of data, this task remains a computational bottleneck. Recently, approximation methods such as the RAxML rapid bootstrap (RBS) and the Shimodaira-Hasegawa-like approximate likelihood ratio test have been introduced to speed up the bootstrap. Here, we suggest an ultrafast bootstrap approximation approach (UFBoot) to compute the support of phylogenetic groups in maximum likelihood (ML) based trees. To achieve this, we combine the resampling estimated log-likelihood method with a simple but effective collection scheme of candidate trees. We also propose a stopping rule that assesses the convergence of branch support values to automatically determine when to stop collecting candidate trees. UFBoot achieves a median speed up of 3.1 (range: 0.66-33.3) to 10.2 (range: 1.32-41.4) compared with RAxML RBS for real DNA and amino acid alignments, respectively. Moreover, our extensive simulations show that UFBoot is robust against moderate model violations and the support values obtained appear to be relatively unbiased compared with the conservative standard bootstrap. This provides a more direct interpretation of the bootstrap support. We offer an efficient and easy-to-use software (available at http://www.cibiv.at/software/iqtree) to perform the UFBoot analysis with ML tree inference.

5. Approximate Counting of Graphical Realizations.

PubMed

Erdős, Péter L; Kiss, Sándor Z; Miklós, István; Soukup, Lajos

2015-01-01

In 1999 Kannan, Tetali and Vempala proposed a MCMC method to uniformly sample all possible realizations of a given graphical degree sequence and conjectured its rapidly mixing nature. Recently their conjecture was proved affirmative for regular graphs (by Cooper, Dyer and Greenhill, 2007), for regular directed graphs (by Greenhill, 2011) and for half-regular bipartite graphs (by Miklós, Erdős and Soukup, 2013). Several heuristics on counting the number of possible realizations exist (via sampling processes), and while they work well in practice, so far no approximation guarantees exist for such an approach. This paper is the first to develop a method for counting realizations with provable approximation guarantee. In fact, we solve a slightly more general problem; besides the graphical degree sequence a small set of forbidden edges is also given. We show that for the general problem (which contains the Greenhill problem and the Miklós, Erdős and Soukup problem as special cases) the derived MCMC process is rapidly mixing. Further, we show that this new problem is self-reducible, and therefore it provides a fully polynomial randomized approximation scheme (a.k.a. FPRAS) for counting all realizations.

6. Approximate Counting of Graphical Realizations

PubMed Central

2015-01-01

PMID:26161994

7. Computer Experiments for Function Approximations

SciTech Connect

Chang, A; Izmailov, I; Rizzo, S; Wynter, S; Alexandrov, O; Tong, C

2007-10-15

This research project falls in the domain of response surface methodology, which seeks cost-effective ways to accurately fit an approximate function to experimental data. Modeling and computer simulation are essential tools in modern science and engineering. A computer simulation can be viewed as a function that receives input from a given parameter space and produces an output. Running the simulation repeatedly amounts to an equivalent number of function evaluations, and for complex models such function evaluations can be very time-consuming. It is then of paramount importance to intelligently choose a relatively small set of sample points in the parameter space at which to evaluate the given function, and then use this information to construct a surrogate function that is close to the original function and takes little time to evaluate. This study was divided into two parts. The first part consisted of comparing four sampling methods and two function approximation methods in terms of efficiency and accuracy for simple test functions. The sampling methods used were Monte Carlo, Quasi-Random LP_tau, Maximin Latin Hypercubes, and Orthogonal-Array-Based Latin Hypercubes.
The function approximation methods utilized were Multivariate Adaptive Regression Splines (MARS) and Support Vector Machines (SVM). The second part of the study concerned adaptive sampling methods, with a focus on creating useful sets of sample points specifically for monotonic functions, functions with a single minimum and functions with a bounded first derivative.

8. Approximate reasoning using terminological models

NASA Technical Reports Server (NTRS)

Yen, John; Vaidya, Nitin

1992-01-01

Term Subsumption Systems (TSS) form a knowledge-representation scheme in AI that can express the defining characteristics of concepts through a formal language that has a well-defined semantics and incorporates a reasoning mechanism that can deduce whether one concept subsumes another. However, TSS's have very limited ability to deal with the issue of uncertainty in knowledge bases. The objective of this research is to address issues in combining approximate reasoning with term subsumption systems. To do this, we have extended an existing AI architecture (CLASP) that is built on top of a term subsumption system (LOOM). First, the assertional component of LOOM has been extended for asserting and representing uncertain propositions. Second, we have extended the pattern matcher of CLASP for plausible rule-based inferences. Third, an approximate reasoning model has been added to facilitate various kinds of approximate reasoning. And finally, the issue of inconsistency in truth values due to inheritance is addressed using justification of those values. This architecture enhances the reasoning capabilities of expert systems by providing support for reasoning under uncertainty using knowledge captured in TSS. Also, as definitional knowledge is explicit and separate from heuristic knowledge for plausible inferences, the maintainability of expert systems could be improved.

9. Approximate particle spectra in the pyramid scheme

NASA Astrophysics Data System (ADS)

Banks, Tom; Torres, T. J.

2012-12-01

We construct a minimal model inspired by the general class of pyramid schemes [T. Banks and J.-F. Fortin, J. High Energy Phys. 07 (2009) 046], which is consistent with both supersymmetry breaking and electroweak symmetry breaking. In order to do computations, we make unjustified approximations to the low energy Kähler potential. The phenomenological viability of the resultant mass spectrum is then examined and compared with current collider limits. We show that for certain regimes of parameters the model, and thus generically the pyramid scheme, can accommodate the current collider mass constraints on physics beyond the standard model with a tree-level light Higgs mass near 125 GeV. However, in this regime the model exhibits a little hierarchy problem, and one must permit fine-tunings that are of order 5%.

10. Heat flow in the postquasistatic approximation

SciTech Connect

Rodriguez-Mueller, B.; Peralta, C.; Barreto, W.; Rosales, L.

2010-08-15

We apply the postquasistatic approximation to study the evolution of spherically symmetric fluid distributions undergoing dissipation in the form of radial heat flow. For a model that corresponds to an incompressible fluid departing from static equilibrium, it is not possible to go far from the initial state after the emission of a small amount of energy. Initially collapsing distributions of matter are not permitted. Emission of energy can be considered as a mechanism to avoid the collapse. If the distribution collapses initially and emits one hundredth of the initial mass, only the outermost layers evolve. For a model that corresponds to a highly compressed Fermi gas, only the outermost shell can evolve, with a shorter hydrodynamic time scale.

11. Existence and uniqueness results for neural network approximations.

PubMed

Williamson, R C; Helmke, U

1995-01-01

Some approximation theoretic questions concerning a certain class of neural networks are considered.
The networks considered are single input, single output, single hidden layer, feedforward neural networks with continuous sigmoidal activation functions, no input weights, but with hidden layer thresholds and output layer weights. Specifically, questions of existence and uniqueness of best approximations on a closed interval of the real line under mean-square and uniform approximation error measures are studied. A by-product of this study is a reparametrization of the class of networks considered in terms of rational functions of a single variable. This rational reparametrization is used to apply the theory of Padé approximation to the class of networks considered. In addition, a question related to the number of local minima arising in gradient algorithms for learning is examined.

12. Communication: Improved pair approximations in local coupled-cluster methods

NASA Astrophysics Data System (ADS)

Schwilk, Max; Usvyat, Denis; Werner, Hans-Joachim

2015-03-01

In local coupled cluster treatments the electron pairs can be classified according to the magnitude of their energy contributions or distances into strong, close, weak, and distant pairs. Different approximations are introduced for the latter three classes. In this communication, an improved simplified treatment of close and weak pairs is proposed, which is based on long-range cancellations of individually slowly decaying contributions in the amplitude equations. Benchmark calculations for correlation, reaction, and activation energies demonstrate that these approximations work extremely well, while pair approximations based on local second-order Møller-Plesset theory can lead to errors that are 1-2 orders of magnitude larger.

13. Fast approximate surface evolution in arbitrary dimension

PubMed Central

Malcolm, James; Rathi, Yogesh; Yezzi, Anthony; Tannenbaum, Allen

2013-01-01

The level set method is a popular technique used in medical image segmentation; however, the numerics involved make its use cumbersome. This paper proposes an approximate level set scheme that removes much of the computational burden while maintaining accuracy. Abandoning a floating point representation for the signed distance function, we use integral values to represent the signed distance function. For the cases of 2D and 3D, we detail rules governing the evolution and maintenance of these three regions. Arbitrary energies can be implemented in the framework. This scheme has several desirable properties: computations are only performed along the zero level set; the approximate distance function requires only a few simple integer comparisons for maintenance; smoothness regularization involves only a few integer calculations and may be handled apart from the energy itself; the zero level set is represented exactly, removing the need for interpolation off the interface; and evolutions proceed on the order of milliseconds per iteration on conventional uniprocessor workstations. To highlight its accuracy, flexibility and speed, we demonstrate the technique on intensity-based segmentations under various statistical metrics. Results for 3D imagery show the technique is fast even for image volumes. PMID:24392194

14. Variational extensions of the mean spherical approximation

NASA Astrophysics Data System (ADS)

Blum, L.; Ubriaco, M.

2000-04-01

In a previous work we have proposed a method to study complex systems with objects of arbitrary size. For certain specific forms of the atomic and molecular interactions, surprisingly simple and accurate theories (the Variational Mean Spherical Scaling Approximation, VMSSA) [Velazquez and Blum, J. Chem. Phys. 110 (1999) 10931; Blum and Velazquez, J. Quantum Chem. (Theochem), in press] can be obtained. The basic idea is that if the interactions can be expressed in a rapidly converging sum of (complex) exponentials, then the Ornstein-Zernike equation (OZ) has an analytical solution.
This analytical solution is used to construct a robust interpolation scheme, the variation mean spherical scaling approximation (VMSSA). The Helmholtz excess free energy Δ A=Δ E- TΔ S is then written as a function of a scaling matrix Γ. Both the excess energy Δ E( Γ) and the excess entropy Δ S( Γ) will be functionals of Γ. In previous work of this series the form of this functional was found for the two- (Blum, Herrera, Mol. Phys. 96 (1999) 821) and three-exponential closures of the OZ equation (Blum, J. Stat. Phys., submitted for publication). In this paper we extend this to M Yukawas, a complete basis set: We obtain a solution for the one-component case and give a closed-form expression for the MSA excess entropy, which is also the VMSSA entropy. 15. Neighbourhood approximation using randomized forests. PubMed Konukoglu, Ender; Glocker, Ben; Zikic, Darko; Criminisi, Antonio 2013-10-01 Leveraging available annotated data is an essential component of many modern methods for medical image analysis. In particular, approaches making use of the "neighbourhood" structure between images for this purpose have shown significant potential. Such techniques achieve high accuracy in analysing an image by propagating information from its immediate "neighbours" within an annotated database. Despite their success in certain applications, wide use of these methods is limited due to the challenging task of determining the neighbours for an out-of-sample image. This task is either computationally expensive due to large database sizes and costly distance evaluations, or infeasible due to distance definitions over semantic information, such as ground truth annotations, which is not available for out-of-sample images. This article introduces Neighbourhood Approximation Forests (NAFs), a supervised learning algorithm providing a general and efficient approach for the task of approximate nearest neighbour retrieval for arbitrary distances. 
Starting from an image training database and a user-defined distance between images, the algorithm learns to use appearance-based features to cluster images, approximating the neighbourhood structure induced by the distance. NAFs are able to efficiently infer the nearest neighbours of an out-of-sample image, even when the original distance is based on semantic information. We perform experimental evaluation in two different scenarios: (i) age prediction from brain MRI and (ii) patch-based segmentation of unregistered, arbitrary field of view CT images. The results demonstrate the performance, computational benefits, and potential of NAFs for different image analysis applications. Copyright © 2013 Elsevier B.V. All rights reserved. 16. Coulomb glass in the random phase approximation NASA Astrophysics Data System (ADS) Basylko, S. A.; Onischouk, V. A.; Rosengren, A. 2002-01-01 A three-dimensional model of electrons localized on randomly distributed donor sites of density n, with the acceptor charge uniformly smeared over these sites, -Ke on each, is considered in the random phase approximation (RPA). For the case K=1/2 the free energy, the density of the one-site energies (DOSE) ɛ, and the pair OSE correlators are found. In the high-temperature region (e^2 n^(1/3)/T) < 1 (T is the temperature), RPA energies and DOSE are in good agreement with the corresponding data of Monte Carlo simulations. Thermodynamics of the model in this region is similar to that of an electrolyte in the regime of Debye screening. In the vicinity of the Fermi level μ=0, OSE correlations depending on sgn(ɛ_1·ɛ_2), and decoupling only very slowly, have been found. The main result is that even in the temperature range where the energy of a Coulomb glass is determined by Debye screening effects, long-range correlations between the OSE still exist. 17.
Univariate approximate integration via nested Taylor multivariate function decomposition NASA Astrophysics Data System (ADS) Gürvit, Ercan; Baykara, N. A. 2014-12-01 This work is based on the idea of nesting one or more Taylor decompositions in the remainder term of a Taylor decomposition of a function. This yields a better-quality approximation to the original function. In addition to this basic idea, each side of the Taylor decomposition is integrated and the limits of integration are arranged in such a way as to obtain a universal [0;1] interval without loss of generality. Thus a univariate approximate integration technique is formed, at the cost of introducing multivariance in the remainder term. Moreover, the remainder term, expressed as an integral, permits us to apply the Fluctuationlessness theorem to it and obtain better results. 18. Topics in Multivariate Approximation Theory. DTIC Science & Technology 1982-05-01 of the Bramble-Hilbert lemma (see Bramble & Hilbert (1971)). Kergin's scheme raises some questions. In contrast to its univariate antecedent, it... J. R. Rice (1979), An adaptive algorithm for multivariate approximation giving optimal convergence rates, J. Approx. Theory 25, 337-359. J. H. Bramble ... J. Numer. Anal. 7, 112-124. J. H. Bramble & S. R. Hilbert (1971), Bounds for a class of linear functionals with applications to Hermite interpolation 19. Approximate transferability in conjugated polyalkenes NASA Astrophysics Data System (ADS) Eskandari, Keiamars; Mandado, Marcos; Mosquera, Ricardo A. 2007-03-01 QTAIM computed atomic and bond properties, as well as delocalization indices (obtained from electron densities computed at HF, MP2 and B3LYP levels) of several linear and branched conjugated polyalkenes and O- and N-containing conjugated polyenes have been employed to assess approximately transferable CH groups.
The values of these properties indicate that the effects of the functional group extend to four CH groups, whereas those of the terminal carbon affect up to three carbons. Tertiary carbons also significantly modify the properties of atoms in the α, β and γ positions. 20. Improved non-approximability results SciTech Connect Bellare, M.; Sudan, M. 1994-12-31 We indicate strong non-approximability factors for central problems: N{sup 1/4} for Max Clique; N{sup 1/10} for Chromatic Number; and 66/65 for Max 3SAT. Underlying the Max Clique result is a proof system in which the verifier examines only three "free bits" to attain an error of 1/2. Underlying the Chromatic Number result is a reduction from Max Clique which is more efficient than previous ones. 1. Approximation for Bayesian Ability Estimation. DTIC Science & Technology 1987-02-18 The posterior pdfs of the item and ability parameters are given by integral expressions (4) and (5). As shown in Tsutakawa and Lin... the inverse of the Hessian of the log of (27) with respect to the parameters, evaluated at the posterior mode. Then, under regularity conditions, the marginal posterior pdf of θ is... two-way contingency tables. Journal of Educational Statistics, 11, 33-56. Lindley, D.V. (1980). Approximate Bayesian methods. Trabajos de Estadistica, 31. 2. Measurement of anomalous cosmic ray oxygen at heliolatitudes approximately 25 deg to approximately 64 deg NASA Astrophysics Data System (ADS) Lanzerotti, L. J.; Maclennan, C. G.; Gold, R. E.; Armstrong, T. P.; Roelof, E. C.; Krimigis, S. M.; Simnett, G. M.; Sarris, E. T.; Anderson, K. A.; Pick, M. 1995-02-01 We report measurements of the oxygen component (0.5-22 MeV/nucl) of the interplanetary cosmic ray flux as a function of heliolatitude.
The measurements reported here were made with the Wart telescope of the Heliosphere Instrument for Spectra, Composition, and Anisotropy at Low Energies (HI-SCALE) low energy particle instrument on the Ulysses spacecraft as the spacecraft climbed from approximately 24 deg to approximately 64 deg south solar heliolatitude during 1993 and early 1994. As a function of heliolatitude, the O abundance at 2-2.8 MeV/nucl drops sharply at latitudes above the heliospheric current sheet. The oxygen spectrum obtained above the current sheet has a broad peak centered at an energy of approximately 2.5 MeV/nucl that is the anomalous O component at these latitudes. There is little evidence for a latitude dependence in the anomalous O fluxes as measured above the current sheet. Within the heliospheric current sheet, the O measurements are composed of both solar and anomalous origin particles. 3. Approximate penetration factors for nuclear reactions of astrophysical interest NASA Technical Reports Server (NTRS) Humblet, J.; Fowler, W. A.; Zimmerman, B. A. 1987-01-01 The ranges of validity of approximations of P(l), the penetration factor which appears in the parameterization of nuclear-reaction cross sections at low energies and is employed in the extrapolation of laboratory data to even lower energies of astrophysical interest, are investigated analytically. Consideration is given to the WKB approximation, P(l) at the energy of the total barrier, approximations derived from the asymptotic expansion of G(l) for large eta, approximations for small values of the parameter x, applications of P(l) to nuclear reactions, and the dependence of P(l) on channel radius. Numerical results are presented in tables and graphs, and parameter ranges where the danger of serious errors is high are identified. 4. Low rank approximation in G0W0 calculations DOE PAGES Shao, MeiYue; Lin, Lin; Yang, Chao; ... 
2016-06-04 The single particle energies obtained in a Kohn-Sham density functional theory (DFT) calculation are generally known to be poor approximations to electron excitation energies that are measured in transport, tunneling and spectroscopic experiments such as photo-emission spectroscopy. The correction to these energies can be obtained from the poles of a single particle Green's function derived from a many-body perturbation theory. From a computational perspective, the accuracy and efficiency of such an approach depends on how a self energy term that properly accounts for dynamic screening of electrons is approximated. The G0W0 approximation is a widely used technique in which the self energy is expressed as the convolution of a noninteracting Green's function (G0) and a screened Coulomb interaction (W0) in the frequency domain. The computational cost associated with such a convolution is high due to the high complexity of evaluating W0 at multiple frequencies. In this paper, we discuss how the cost of a G0W0 calculation can be reduced by constructing a low rank approximation to the frequency dependent part of W0. In particular, we examine the effect of such a low rank approximation on the accuracy of the G0W0 approximation. We also discuss how the numerical convolution of G0 and W0 can be evaluated efficiently and accurately by using a contour deformation technique with an appropriate choice of the contour. 5. Successive intervals analysis of preference measures in a health status index. PubMed Central Blischke, W R; Bush, J W; Kaplan, R M 1975-01-01 The method of successive intervals, a procedure for obtaining equal intervals from category data, is applied to social preference data for a health status index.
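The core step of a successive-intervals analysis, mapping cumulative category proportions through the inverse normal CDF to estimate category boundaries on an equal-interval scale, can be sketched as follows. The rating counts are invented, and the sketch omits the paper's equal-width test and end-interval regression refinements.

```python
from statistics import NormalDist

# Hypothetical counts of ratings in 5 ordered categories for three items.
counts = {
    "item A": [2, 8, 25, 45, 20],
    "item B": [10, 30, 35, 20, 5],
    "item C": [1, 4, 15, 40, 40],
}

norm = NormalDist()

def boundary_estimates(row):
    """Map cumulative proportions (excluding the final 1.0) to z-scores:
    these are the estimated category boundaries on the latent scale."""
    n = sum(row)
    cum = 0
    zs = []
    for c in row[:-1]:
        cum += c
        p = cum / n
        zs.append(norm.inv_cdf(min(max(p, 1e-6), 1 - 1e-6)))
    return zs

# Averaging boundary estimates over items gives a common interval scale.
per_item = [boundary_estimates(row) for row in counts.values()]
boundaries = [sum(col) / len(col) for col in zip(*per_item)]
print([round(b, 3) for b in boundaries])
```

Averaging the per-item boundary estimates is the simplest aggregation; a full analysis additionally rescales item locations so that interval widths are equalized across items.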
Several innovations are employed, including an approximate analysis of variance test for determining whether the intervals are of equal width, a regression model for estimating the width of the end intervals in finite scales, and a transformation to equalize interval widths and estimate item locations on the new scale. A computer program has been developed to process large data sets with a larger number of categories than previous programs. PMID:1219005 6. Laguerre approximation of random foams NASA Astrophysics Data System (ADS) Liebscher, André 2015-09-01 Stochastic models for the microstructure of foams are valuable tools to study the relations between microstructure characteristics and macroscopic properties. Owing to the physical laws behind the formation of foams, Laguerre tessellations have turned out to be suitable models for foams. Laguerre tessellations are weighted generalizations of Voronoi tessellations, where polyhedral cells are formed through the interaction of weighted generator points. While both share the same topology, the cell curvature of foams allows only an approximation by Laguerre tessellations. This makes the model fitting a challenging task, especially when the preservation of the local topology is required. In this work, we propose an inversion-based approach to fit a Laguerre tessellation model to a foam. The idea is to find a set of generator points whose tessellation best fits the foam's cell system. For this purpose, we transform the model fitting into a minimization problem that can be solved by gradient descent-based optimization. The proposed algorithm restores the generators of a tessellation if it is known to be Laguerre. If, as in the case of foams, no exact solution is possible, an approximative solution is obtained that maintains the local topology. 7. 
Wavelet Approximation in Data Assimilation NASA Technical Reports Server (NTRS) Tangborn, Andrew; Atlas, Robert (Technical Monitor) 2002-01-01 Estimation of the state of the atmosphere with the Kalman filter remains a distant goal because of the high computational cost of evolving the error covariance for both linear and nonlinear systems. Wavelet approximation is presented here as a possible solution that efficiently compresses both global and local covariance information. We demonstrate the compression characteristics on the error correlation field from a global two-dimensional chemical constituent assimilation, and implement an adaptive wavelet approximation scheme on the assimilation of the one-dimensional Burgers' equation. In the former problem, we show that 99% of the error correlation can be represented by just 3% of the wavelet coefficients, with good representation of localized features. In the Burgers' equation assimilation, the discrete linearized equations (tangent linear model) and analysis covariance are projected onto a wavelet basis and truncated to just 6% of the coefficients. A nearly optimal forecast is achieved and we show that errors due to truncation of the dynamics are no greater than the errors due to covariance truncation. 8. Rational approximations to fluid properties SciTech Connect Kincaid, J.M. 1990-05-01 The purpose of this report is to summarize some results that were presented at the Spring AIChE meeting in Orlando, Florida (20 March 1990). We report on recent attempts to develop a systematic method, based on the technique of rational approximation, for creating mathematical models of real-fluid equations of state and related properties. Equation-of-state models for real fluids are usually created by selecting a function {tilde p}(T,{rho}) that contains a set of parameters {l brace}{gamma}{sub i}{r brace}; the {l brace}{gamma}{sub i}{r brace} are chosen such that {tilde p}(T,{rho}) provides a good fit to the experimental data.
(Here p is the pressure, T the temperature and {rho} the density.) In most cases a nonlinear least-squares numerical method is used to determine {l brace}{gamma}{sub i}{r brace}. There are several drawbacks to this method: one has essentially to guess what {tilde p}(T,{rho}) should be; the critical region is seldom fit very well; and nonlinear numerical methods are time-consuming and sometimes not very stable. The rational approximation approach we describe may eliminate all of these drawbacks. In particular, it lets the data choose the function {tilde p}(T,{rho}), and its numerical implementation involves only linear algorithms. 27 refs., 5 figs. 9. An Event Restriction Interval Theory of Tense ERIC Educational Resources Information Center Beamer, Brandon Robert 2012-01-01 This dissertation presents a novel theory of tense and tense-like constructions. It is named after a key theoretical component of the theory, the event restriction interval. In Event Restriction Interval (ERI) Theory, sentences are semantically evaluated relative to an index which contains two key intervals, the evaluation interval and the event… 10. Simultaneous confidence intervals for a steady-state leaky aquifer groundwater flow model USGS Publications Warehouse Christensen, S.; Cooley, R.L. 1996-01-01 Using the optimization method of Vecchia & Cooley (1987), nonlinear Scheffé-type confidence intervals were calculated for the parameters and the simulated heads of a steady-state groundwater flow model covering 450 km² of a leaky aquifer. The nonlinear confidence intervals are compared to corresponding linear intervals. As suggested by the significant nonlinearity of the regression model, linear confidence intervals are often not accurate. The commonly made assumption that the widths of linear confidence intervals always underestimate the actual (nonlinear) widths was not correct for the head intervals.
Results show that nonlinear effects can cause the nonlinear intervals to be offset from, and either larger or smaller than, the linear approximations. Prior information on some transmissivities helps reduce and stabilize the confidence intervals, with the most notable effects occurring for the parameters on which there is prior information and for head values in parameter zones for which there is prior information on the parameters. 11. Confidence Intervals for Error Rates Observed in Coded Communications Systems NASA Astrophysics Data System (ADS) Hamkins, J. 2015-05-01 We present methods to compute confidence intervals for the codeword error rate (CWER) and bit error rate (BER) of a coded communications link. We review several methods to compute exact and approximate confidence intervals for the CWER, and specifically consider the situation in which the true CWER is so low that only a handful, if any, codeword errors are able to be simulated. In doing so, we answer the question of how long an error-free simulation must be run in order to certify that a given CWER requirement is met with a given level of confidence, and discuss the bias introduced by aborting a simulation after observing the first codeword error. Next, we turn to the lesser studied problem of determining confidence intervals for the BER of coded systems. Since bit errors in systems that use coding or higher-order modulation do not occur independently, blind application of a method that assumes independence leads to inappropriately narrow confidence intervals. We present a new method to compute the confidence interval properly, using the first and second sample moments of the number of bit errors per codeword. This is the first method we know of to compute a confidence interval for the BER of a coded or higher-order modulation system. 12. ENERGY RELAXATION OF HELIUM ATOMS IN ASTROPHYSICAL GASES SciTech Connect Lewkow, N. R.; Kharchenko, V.; Zhang, P. 
2012-09-01 We report accurate parameters describing energy relaxation of He atoms in atomic gases, important for astrophysics and atmospheric science. Collisional energy exchange between helium atoms and atomic constituents of the interstellar gas, heliosphere, and upper planetary atmosphere has been investigated. Energy transfer rates, the number of collisions required for thermalization, energy distributions of recoil atoms, and other major parameters of energy relaxation for fast He atoms in thermal H, He, and O gases have been computed in a broad interval of energies from 10 meV to 10 keV. This energy interval is important for astrophysical applications involving the energy deposition of energetic atoms and ions into atmospheres of planets and exoplanets, atmospheric evolution, and analysis of non-equilibrium processes in the interstellar gas and heliosphere. Angular- and energy-dependent cross sections, required for an accurate description of the momentum-energy transfer, are obtained using ab initio interaction potentials and quantum mechanical calculations for scattering processes. Calculation methods used include partial wave analysis for collisional energies below 2 keV and the eikonal approximation at energies higher than 100 eV, keeping a significant energy region of overlap, 0.1-2 keV, between these two methods for their mutual verification. Results from the partial wave method and the eikonal approximation agree excellently with each other and with experimental data, providing reliable cross sections in the astrophysically important interval of energies from 10 meV to 10 keV. Analytical formulae interpolating the obtained energy- and angular-dependent cross sections are presented to simplify potential applications of the reported database.
Thermalization of fast He atoms in the interstellar gas and energy relaxation of hot He and O atoms in the upper atmosphere of Mars are considered as illustrative examples of potential applications of the new database. 13. Spline approximation of quantile functions NASA Technical Reports Server (NTRS) Schiess, J. R.; Matthews, C. G. 1983-01-01 The study reported here explored the development and utility of a spline representation of the sample quantile function of a continuous probability distribution in providing a functional description of a random sample and a method of generating random variables. With a spline representation, the random samples are generated by transforming a sample of uniform random variables to the interval of interest. This is useful, for example, in simulation studies in which a random sample represents the only known information about the distribution. The spline formulation considered here consists of a linear combination of cubic basis splines (B-splines) fit in a least squares sense to the sample quantile function using equally spaced knots. The following discussion is presented in five parts. The first section highlights major results realized from the study. The second section further details the results obtained. The methodology used is described in the third section, followed by a brief discussion of previous research on quantile functions. Finally, the results of the study are evaluated. 14. Digital redesign of uncertain interval systems based on time-response resemblance via particle swarm optimization. PubMed Hsu, Chen-Chien; Lin, Geng-Yu 2009-07-01 In this paper, a particle swarm optimization (PSO) based approach is proposed to derive an optimal digital controller for redesigned digital systems having an interval plant based on time-response resemblance of the closed-loop systems. 
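Returning to the spline quantile-function study above: its generation scheme amounts to inverse-transform sampling through a fitted quantile curve. A minimal stand-in, using a piecewise-linear fit and hypothetical exponential data in place of the paper's least-squares cubic B-splines:

```python
import numpy as np

rng = np.random.default_rng(0)

# "Only known information": a sample from some unknown distribution.
sample = np.sort(rng.exponential(scale=2.0, size=1000))

# Sample quantile function Q(p) at plotting positions p_i = (i - 0.5)/n.
p = (np.arange(1, sample.size + 1) - 0.5) / sample.size

def quantile_fn(u):
    """Piecewise-linear stand-in for the paper's least-squares B-spline fit."""
    return np.interp(u, p, sample)

# Inverse-transform sampling: push uniforms through the quantile function.
new_draws = quantile_fn(rng.uniform(size=5000))
print(round(new_draws.mean(), 2))  # should be near the sample mean
```

Replacing `np.interp` with a smoothing spline fitted to (p, sample) would recover the study's setup: the spline both compresses the sample description and smooths out sampling noise in the empirical quantiles.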
Because of difficulties in obtaining time-response envelopes for interval systems, the design problem is formulated as the optimization of a cost function in terms of the aggregated deviation between the step responses corresponding to extremal energies of the redesigned digital system and those of their continuous counterpart. A proposed evolutionary framework incorporating three PSOs is subsequently presented to minimize the cost function and derive an optimal set of parameters for the digital controller, so that step response sequences corresponding to the extremal sequence energy of the redesigned digital system suitably approximate those of their continuous counterpart under the perturbation of the uncertain plant parameters. Computer simulations have shown that redesigned digital systems incorporating the PSO-derived digital controllers have better system performance than those using conventional open-loop discretization methods. 15. Analytical approximations for spiral waves SciTech Connect Löber, Jakob; Engel, Harald 2013-12-15 We propose a non-perturbative attempt to solve the kinematic equations for spiral waves in excitable media. From the eikonal equation for the wave front we derive an implicit analytical relation between rotation frequency Ω and core radius R{sub 0}. For free, rigidly rotating spiral waves our analytical prediction is in good agreement with numerical solutions of the linear eikonal equation not only for very large but also for intermediate and small values of the core radius. An equivalent Ω(R{sub +}) dependence improves the result by Keener and Tyson for spiral waves pinned to a circular defect of radius R{sub +} with Neumann boundaries at the periphery. Simultaneously, analytical approximations for the shape of free and pinned spirals are given. We discuss the reasons why the ansatz fails to correctly describe the dependence of the rotation frequency on the excitability of the medium. 16.
Interplay of approximate planning strategies. PubMed Huys, Quentin J M; Lally, Níall; Faulkner, Paul; Eshel, Neir; Seifritz, Erich; Gershman, Samuel J; Dayan, Peter; Roiser, Jonathan P 2015-03-10 Humans routinely formulate plans in domains so complex that even the most powerful computers are taxed. To do so, they seem to avail themselves of many strategies and heuristics that efficiently simplify, approximate, and hierarchically decompose hard tasks into simpler subtasks. Theoretical and cognitive research has revealed several such strategies; however, little is known about their establishment, interaction, and efficiency. Here, we use model-based behavioral analysis to provide a detailed examination of the performance of human subjects in a moderately deep planning task. We find that subjects exploit the structure of the domain to establish subgoals in a way that achieves a nearly maximal reduction in the cost of computing values of choices, but then combine partial searches with greedy local steps to solve subtasks, and maladaptively prune the decision trees of subtasks in a reflexive manner upon encountering salient losses. Subjects come idiosyncratically to favor particular sequences of actions to achieve subgoals, creating novel complex actions or "options." 17. Indexing the approximate number system. PubMed Inglis, Matthew; Gilmore, Camilla 2014-01-01 Much recent research attention has focused on understanding individual differences in the approximate number system, a cognitive system believed to underlie human mathematical competence. To date researchers have used four main indices of ANS acuity, and have typically assumed that they measure similar properties. Here we report a study which questions this assumption. We demonstrate that the numerical ratio effect has poor test-retest reliability and that it does not relate to either Weber fractions or accuracy on nonsymbolic comparison tasks. 
Furthermore, we show that Weber fractions follow a strongly skewed distribution and that they have lower test-retest reliability than a simple accuracy measure. We conclude by arguing that in the future researchers interested in indexing individual differences in ANS acuity should use accuracy figures, not Weber fractions or numerical ratio effects. 18. Approximating metal-insulator transitions NASA Astrophysics Data System (ADS) Danieli, Carlo; Rayanov, Kristian; Pavlov, Boris; Martin, Gaven; Flach, Sergej 2015-12-01 We consider quantum wave propagation in one-dimensional quasiperiodic lattices. We propose an iterative construction of quasiperiodic potentials from sequences of potentials with increasing spatial period. At each finite iteration step, the eigenstates reflect the properties of the limiting quasiperiodic potential up to a controlled maximum system size. We then observe approximate Metal-Insulator Transitions (MIT) at the finite iteration steps. We also report evidence on mobility edges, which are at variance with the celebrated Aubry-André model. The dynamics near the MIT shows a critical slowing down of the ballistic group velocity in the metallic phase, similar to the divergence of the localization length in the insulating phase. 19. Analytical approximations for spiral waves. PubMed Löber, Jakob; Engel, Harald 2013-12-01 We propose a non-perturbative attempt to solve the kinematic equations for spiral waves in excitable media. From the eikonal equation for the wave front we derive an implicit analytical relation between rotation frequency Ω and core radius R(0). For free, rigidly rotating spiral waves our analytical prediction is in good agreement with numerical solutions of the linear eikonal equation not only for very large but also for intermediate and small values of the core radius.
An equivalent Ω(R(+)) dependence improves the result by Keener and Tyson for spiral waves pinned to a circular defect of radius R(+) with Neumann boundaries at the periphery. Simultaneously, analytical approximations for the shape of free and pinned spirals are given. We discuss the reasons why the ansatz fails to correctly describe the dependence of the rotation frequency on the excitability of the medium. 20. Approximate analytic solutions to the NPDD: Short exposure approximations NASA Astrophysics Data System (ADS) Close, Ciara E.; Sheridan, John T. 2014-04-01 There have been many attempts to accurately describe the photochemical processes that take place in photopolymer materials. As the models have become more accurate, solving them has become more numerically intensive and more 'opaque'. Recent models incorporate the major photochemical reactions taking place as well as the diffusion effects resulting from the photo-polymerisation process, and have accurately described these processes in a number of different materials. It is our aim to develop accessible mathematical expressions which provide physical insights and simple quantitative predictions of practical value to material designers and users. In this paper, starting with the coupled integro-differential equations of the Non-Local Photo-Polymerisation Driven Diffusion (NPDD) model, we first simplify these equations and validate the accuracy of the resulting approximate model. This new set of governing equations is then used to produce accurate analytic solutions (polynomials) describing the evolution of the monomer and polymer concentrations, and the grating refractive index modulation, in the case of short, low-intensity sinusoidal exposures. The physical significance of the results and their consequences for holographic data storage (HDS) are then discussed. 1. Closed-form fiducial confidence intervals for some functions of independent binomial parameters with comparisons.
PubMed Krishnamoorthy, K; Lee, Meesook; Zhang, Dan 2017-02-01 Approximate closed-form confidence intervals (CIs) for estimating the difference, relative risk, odds ratio, and linear combination of proportions are proposed. These CIs are developed using the fiducial approach and the modified normal-based approximation to the percentiles of a linear combination of independent random variables. These confidence intervals are easy to calculate as the computation requires only the percentiles of beta distributions. The proposed confidence intervals are compared with the popular score confidence intervals with respect to coverage probabilities and expected widths. Comparison studies indicate that the proposed confidence intervals are comparable with the corresponding score confidence intervals, and better in some cases, for all the problems considered. The methods are illustrated using several examples. 2. Practical Scheffe-type credibility intervals for variables of a groundwater model USGS Publications Warehouse Cooley, R.L. 1999-01-01 Simultaneous Scheffe-type credibility intervals (the Bayesian version of confidence intervals) for variables of a groundwater flow model calibrated using a Bayesian maximum a posteriori procedure were derived by Cooley [1993b]. It was assumed that variances reflecting the expected differences between observed and model-computed quantities used to calibrate the model are known, whereas they would often be unknown for an actual model. In this study the variances are regarded as unknown, and variance variability from observation to observation is approximated by grouping the data so that each group is characterized by a uniform variance. The credibility intervals are calculated from the posterior distribution, which was developed by considering each group variance to be a random variable about which nothing is known a priori, then eliminating it by integration. 
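The variance-elimination step just described can be illustrated in the simplest single-group case. With a Gaussian likelihood proportional to \(\sigma^{-n}\exp(-S(\theta)/2\sigma^2)\) and the noninformative prior \(p(\sigma^2)\propto 1/\sigma^2\), integration removes the variance (a standard identity, not the paper's full multi-group derivation):

```latex
p(\theta \mid y)
  \;\propto\; \int_0^\infty (\sigma^2)^{-n/2}
  \exp\!\Bigl(-\tfrac{S(\theta)}{2\sigma^2}\Bigr)\,
  \frac{\mathrm{d}\sigma^2}{\sigma^2}
  \;\propto\; S(\theta)^{-n/2},
\qquad
S(\theta) \;=\; \sum_{i=1}^{n}\bigl(y_i - f_i(\theta)\bigr)^2 .
```

Each group variance integrates out the same way, so the posterior becomes a product of such power-law terms, one per variance group.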
Numerical experiments using two test problems illustrate some characteristics of the credibility intervals. Nonlinearity of the statistical model greatly affected some of the credibility intervals, indicating that credibility intervals computed using the standard linear model approximation may often be inadequate to characterize uncertainty for actual field problems. The parameter characterizing the probability level for the credibility intervals was, however, accurately computed using a linear model approximation, as compared with values calculated using second-order and fully nonlinear formulations. This allows the credibility intervals to be computed very efficiently. 3. Nested Containment List (NCList): a new algorithm for accelerating interval query of genome alignment and interval databases. PubMed Alekseyenko, Alexander V; Lee, Christopher J 2007-06-01 The exponential growth of sequence databases poses a major challenge to bioinformatics tools for querying alignment and annotation databases. There is a pressing need for methods for finding overlapping sequence intervals that are highly scalable to database size, query interval size, result size and construction/updating of the interval database. We have developed a new interval database representation, the Nested Containment List (NCList), whose query time is O(n + log N), where N is the database size and n is the size of the result set.
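A compact sketch of the NCList idea, containment-sorted interval lists with a binary search per list, under simplifying assumptions (half-open intervals, plain in-memory lists rather than the paper's database-backed arrays):

```python
from bisect import bisect_right

def build_nclist(intervals):
    """Build a Nested Containment List: intervals sorted by (start, -end);
    an interval whose end fits inside the previous one becomes its child."""
    order = sorted(intervals, key=lambda iv: (iv[0], -iv[1]))
    root = []                      # top-level list of nodes
    stack = []                     # nodes whose containment is still open
    for s, e in order:
        node = (s, e, [])          # (start, end, children)
        while stack and e > stack[-1][1]:
            stack.pop()            # previous interval does not contain this one
        (stack[-1][2] if stack else root).append(node)
        stack.append(node)
    return root

def query(nodes, qs, qe, out=None):
    """Report all stored intervals overlapping [qs, qe).
    Within one list ends are ascending, so bisect skips non-overlaps."""
    if out is None:
        out = []
    ends = [n[1] for n in nodes]
    i = bisect_right(ends, qs)     # first interval with end > qs
    while i < len(nodes) and nodes[i][0] < qe:
        s, e, children = nodes[i]
        out.append((s, e))
        query(children, qs, qe, out)
        i += 1
    return out

ncl = build_nclist([(0, 10), (2, 5), (3, 4), (6, 9), (8, 12), (15, 20)])
print(sorted(query(ncl, 4, 7)))    # → [(0, 10), (2, 5), (6, 9)]
```

Each list holds mutually non-nested intervals, so both starts and ends are ascending; the bisect skips everything that ends before the query starts, and recursion descends into children only when the parent itself overlaps. This is what yields the O(n + log N) query behaviour quoted above.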
In all cases tested, this query algorithm is 5-500-fold faster than the other indexing methods evaluated in this study, such as MySQL multi-column indexing, MySQL binning and R-Tree indexing. We provide performance comparisons in both simulated datasets and real-world genome alignment databases, across a wide range of database sizes and query interval widths. We also present an in-place NCList construction algorithm that yields database construction times approximately 100-fold faster than other available methods. The NCList data structure appears to provide a useful foundation for highly scalable interval database applications. The NCList data structure is part of Pygr, a bioinformatics graph database library, available at http://sourceforge.net/projects/pygr 4. Generalized local-density approximation for spherical potentials SciTech Connect Zhang, X.; Nicholson, D.M. 1999-08-01 An alternative density functional for the spherical approximation of cell potentials is formulated. It relies on overlapping atomic spheres for the calculation of the kinetic energy, similar to the atomic sphere approximation (ASA); however, a shape correction is used that has the same form as the interstitial treatment in the nonoverlapping muffin-tin (MT) approach. The intersite Coulomb energy is evaluated using the Madelung energy as computed in the MT approach, while the on-site Coulomb energy is calculated using the ASA. The Kohn-Sham equations for the functional are then solved self-consistently. The ASA is known to give poor elastic constants and good point defect energies. Conversely, the MT approach gives good elastic constants and poor point defect energies. The proposed new functional maintains the simplicity of the spherical potentials found in the ASA and MT approaches, but gives good values for both elastic constants and point defects.
This solution avoids a problem, absent in the ASA but suffered by the MT approximation, of incorrect distribution of site charges when charge transfer is large. Relaxation of atomic positions is thus facilitated. Calculations confirm that the approach gives elastic constants similar to the MT approximation and defect formation energies similar to those obtained with the ASA.

5. When Density Functional Approximations Meet Iron Oxides.

PubMed

Meng, Yu; Liu, Xing-Wu; Huo, Chun-Fang; Guo, Wen-Ping; Cao, Dong-Bo; Peng, Qing; Dearden, Albert; Gonze, Xavier; Yang, Yong; Wang, Jianguo; Jiao, Haijun; Li, Yongwang; Wen, Xiao-Dong

2016-10-11

Three density functional approximations (DFAs), PBE, PBE+U, and the Heyd-Scuseria-Ernzerhof screened hybrid functional (HSE), were employed to investigate the geometric, electronic, magnetic, and thermodynamic properties of four iron oxides, namely, α-FeOOH, α-Fe2O3, Fe3O4, and FeO. Comparing our calculated results with available experimental data, we found that HSE (a = 0.15) (containing 15% "screened" Hartree-Fock exchange) can provide reliable values of lattice constants, Fe magnetic moments, band gaps, and formation energies of all four iron oxides, while standard HSE (a = 0.25) seriously overestimates the band gaps and formation energies. For PBE+U, a suitable U value can give quite good results for the electronic properties of each iron oxide, but it is challenging to accurately obtain the other properties of the four iron oxides using the same U value. Subsequently, we calculated the Gibbs free energies of transformation reactions among iron oxides using the HSE (a = 0.15) functional and plotted the equilibrium phase diagrams of the iron oxide system under various conditions, which provide reliable theoretical insight into the phase transformations of iron oxides.

6. Caries risks and appropriate intervals between bitewing x-ray examinations in schoolchildren.
PubMed

Steiner, Marcel; Bühlmann, Saskia; Menghini, Giorgio; Imfeld, Carola; Imfeld, Thomas

2011-01-01

Short intervals between bitewing examinations favor the timely detection of lesions on approximal surfaces. Long intervals reduce the exposure to radiation. Thus, the question arises as to which intervals between bitewing examinations are appropriate. The length of intervals between bitewing examinations should be adapted to the caries risk on approximal surfaces of molars and premolars. In order to estimate the caries risk in the Swiss school population, longitudinal data of 591 schoolchildren from the Canton (County) of Zurich were analyzed. These schoolchildren had been examined at 4-year intervals. The proportion of 7-year-olds with caries increment on approximal surfaces within 4 years was 7.1%, i.e., the caries risk in the population was 7.1%. In the 11-year-olds, the caries risk was 17.6%. Seven-year-olds without caries experience on selected approximal surfaces had a low caries risk of 2.2%. However, 7-year-olds with caries experience on selected approximal surfaces had a high risk of 24.2%. The same applied to 11-year-olds: those without caries experience had a low risk (7.5%), and those with caries experience had a high risk (38.5%). For the 7-year-old schoolchildren without any caries experience, an x-ray interval of 8 years is proposed. For the 7-year-old schoolchildren with caries experience, an x-ray interval of 1 year is proposed.

7. A COMPARISON OF CONFIDENCE INTERVAL PROCEDURES IN CENSORED LIFE TESTING PROBLEMS.

DTIC Science & Technology

Obtaining a confidence interval for a parameter lambda of an exponential distribution is a frequent occurrence in life testing problems. Oftentimes...the test plan used is one in which all the observations are censored at the same time point. Several approximate confidence interval procedures are

8.
Confidence Intervals for True Scores Using the Skew-Normal Distribution

ERIC Educational Resources Information Center

Garcia-Perez, Miguel A.

2010-01-01

A recent comparative analysis of alternative interval estimation approaches and procedures has shown that confidence intervals (CIs) for true raw scores determined with the Score method--which uses the normal approximation to the binomial distribution--have actual coverage probabilities that are closest to their nominal level. It has also recently…

10. Magnus approximation in neutrino oscillations

NASA Astrophysics Data System (ADS)

Acero, Mario A.; Aguilar-Arevalo, Alexis A.; D'Olivo, J. C.

2011-04-01

Oscillations between active and sterile neutrinos remain an open possibility to explain some anomalous experimental observations. In a four-neutrino (three active plus one sterile) mixing scheme, we use the Magnus expansion of the evolution operator to study the evolution of neutrino flavor amplitudes within the Earth. We apply this formalism to calculate the transition probabilities from active to sterile neutrinos with energies of the order of a few GeV, taking into account the matter effect for a varying terrestrial density.

11. Zero-temperature second random phase approximation and its formal properties

SciTech Connect

Yannouleas, C.

1987-03-01

We derive the zero-temperature second random phase approximation using Rowe's double-commutator equation.
Subsequently, we show that the zero-temperature second random phase approximation exhibits formal properties in analogy with the usual 1p-1h random phase approximation. In particular, the second random phase approximation preserves the energy-weighted sum rule.

12. Multidimensional stochastic approximation Monte Carlo

NASA Astrophysics Data System (ADS)

Zablotskiy, Sergey V.; Ivanov, Victor A.; Paul, Wolfgang

2016-06-01

Stochastic Approximation Monte Carlo (SAMC) has been established as a mathematically founded, powerful flat-histogram Monte Carlo method used to determine the density of states, g(E), of a model system. We show here how it can be generalized for the determination of multidimensional probability distributions (or, equivalently, densities of states) of macroscopic or mesoscopic variables defined on the space of microstates of a statistical mechanical system. This establishes the method as a systematic way of coarse-graining a model system, or, in other words, of performing a renormalization group step on a model. We discuss the formulation of the Kadanoff block spin transformation and the coarse-graining procedure for polymer models in this language. We also apply it to a standard case in the literature of two-dimensional densities of states, where two competing energetic effects are present, g(E1, E2). We show when and why care has to be exercised when obtaining the microcanonical density of states g(E1 + E2) from g(E1, E2).

13. Randomized approximate nearest neighbors algorithm

PubMed Central

Jones, Peter Wilcox; Osipov, Andrei; Rokhlin, Vladimir

2011-01-01

We present a randomized algorithm for the approximate nearest neighbor problem in d-dimensional Euclidean space. Given N points {xj} in R^d, the algorithm attempts to find k nearest neighbors for each of xj, where k is a user-specified integer parameter.
The algorithm is iterative, and its running time requirements are proportional to T·N·(d·(log d) + k·(d + log k)·(log N)) + N·k²·(d + log k), with T the number of iterations performed. The memory requirements of the procedure are of the order N·(d + k). A by-product of the scheme is a data structure permitting a rapid search for the k nearest neighbors among {xj} for an arbitrary point. The cost of each such query is proportional to T·(d·(log d) + log(N/k)·k·(d + log k)), and the memory requirements for the requisite data structure are of the order N·(d + k) + T·(d + N). The algorithm utilizes random rotations and a basic divide-and-conquer scheme, followed by a local graph search. We analyze the scheme's behavior for certain types of distributions of {xj} and illustrate its performance via several numerical examples. PMID:21885738

14. Interplay of approximate planning strategies

PubMed Central

Huys, Quentin J. M.; Lally, Níall; Faulkner, Paul; Eshel, Neir; Seifritz, Erich; Gershman, Samuel J.; Dayan, Peter; Roiser, Jonathan P.

2015-01-01

Humans routinely formulate plans in domains so complex that even the most powerful computers are taxed. To do so, they seem to avail themselves of many strategies and heuristics that efficiently simplify, approximate, and hierarchically decompose hard tasks into simpler subtasks. Theoretical and cognitive research has revealed several such strategies; however, little is known about their establishment, interaction, and efficiency. Here, we use model-based behavioral analysis to provide a detailed examination of the performance of human subjects in a moderately deep planning task.
We find that subjects exploit the structure of the domain to establish subgoals in a way that achieves a nearly maximal reduction in the cost of computing values of choices, but then combine partial searches with greedy local steps to solve subtasks, and maladaptively prune the decision trees of subtasks in a reflexive manner upon encountering salient losses. Subjects come idiosyncratically to favor particular sequences of actions to achieve subgoals, creating novel complex actions or "options." PMID:25675480

15. Femtolensing: Beyond the semiclassical approximation

NASA Technical Reports Server (NTRS)

Ulmer, Andrew; Goodman, Jeremy

1995-01-01

Femtolensing is a gravitational lensing effect in which the magnification is a function not only of the position and sizes of the source and lens, but also of the wavelength of light. Femtolensing is the only known effect of small (10^-13 to 10^-16 solar mass) dark-matter objects and may possibly be detectable in cosmological gamma-ray burst spectra. We present a new and efficient algorithm for femtolensing calculations in general potentials. The physical optics results presented here differ at low frequencies from the semiclassical approximation, in which the flux is attributed to a finite number of mutually coherent images. At higher frequencies, our results agree well with the semiclassical predictions. Applying our method to a point-mass lens with external shear, we find complex events that have structure at both large and small spectral resolution. In this way, we show that femtolensing may be observable for lenses up to 10^-11 solar mass, much larger than previously believed. Additionally, we discuss the possibility of a search for femtolensing of white dwarfs in the Large Magellanic Cloud at optical wavelengths.

16. Approximating Densities of States with Gaps

NASA Astrophysics Data System (ADS)

Haydock, Roger; Nex, C. M. M.
2011-03-01

Reconstructing a density of states or similar distribution from moments or continued fractions is an important problem in calculating the electronic and vibrational structure of defective or non-crystalline solids. For single bands, a quadratic boundary condition introduced previously [Phys. Rev. B 74, 205121 (2006)] produces results which compare favorably with maximum entropy and even give analytic continuations of Green functions to the unphysical sheet. In this paper, the previous boundary condition is generalized to an energy-independent condition for densities with multiple bands separated by gaps. As an example, it is applied to a chain of atoms with s, p, and d bands of different widths with different gaps between them. The results are compared with maximum entropy for different levels of approximation. Generalized hypergeometric functions associated with multiple bands satisfy the new boundary condition exactly. Supported by the Richmond F. Snyder Fund.

17. The Bloch Approximation in Periodically Perforated Media

SciTech Connect

Conca, C.; Gomez, D.; Lobo, M.; Perez, E.

2005-06-15

We consider a periodically heterogeneous and perforated medium filling an open domain Ω of R^N. Assuming that the size of the periodicity of the structure and of the holes is O(ε), we study the asymptotic behavior, as ε → 0, of the solution of an elliptic boundary value problem with strongly oscillating coefficients posed in Ω^ε (Ω^ε being Ω minus the holes) with a Neumann condition on the boundary of the holes. We use Bloch wave decomposition to introduce an approximation of the solution in the energy norm which can be computed from the homogenized solution and the first Bloch eigenfunction. We first consider the case where Ω is R^N and then localize the problem for a bounded domain Ω, considering a homogeneous Dirichlet condition on the boundary of Ω.

18.
Approximate Sensory Data Collection: A Survey.

PubMed

Cheng, Siyao; Cai, Zhipeng; Li, Jianzhong

2017-03-10

With the rapid development of the Internet of Things (IoTs), wireless sensor networks (WSNs) and related techniques, the amount of sensory data manifests an explosive growth. In some applications of IoTs and WSNs, the size of sensory data has already exceeded several petabytes annually, which brings many troubles and challenges for data collection, a primary operation in IoTs and WSNs. Since exact data collection is not affordable for many WSN and IoT systems due to the limitations on bandwidth and energy, many approximate data collection algorithms have been proposed in the last decade. This survey reviews the state of the art of approximate data collection algorithms. We classify them into three categories: the model-based ones, the compressive sensing based ones, and the query-driven ones. For each category of algorithms, the advantages and disadvantages are elaborated, some challenges and unsolved problems are pointed out, and the research prospects are forecasted.

19. Error bounded conic spline approximation for NC code

NASA Astrophysics Data System (ADS)

Shen, Liyong

2012-01-01

Curve fitting is an important preliminary work for data compression and path interpolators in numerical control (NC). The paper gives a simple conic spline approximation algorithm for G01 code. The algorithm is mainly formed by three steps: divide the G01 code into subsets by discrete curvature detection, find the polygon line segment approximation for each subset within a given error, and finally fit each polygon line segment approximation with a conic Bezier spline. Naturally, a B-spline curve can be obtained by proper knot selection. The algorithm is straightforward and efficient, without solving any global equation system or optimization problem. It is complete with the selection of the curve's weight.
To design the curve more suitable for NC, we present an interval for the weight selection, and the error is then computed.

1. Approximate Sensory Data Collection: A Survey

PubMed Central

Cheng, Siyao; Cai, Zhipeng; Li, Jianzhong

2017-01-01

With the rapid development of the Internet of Things (IoTs), wireless sensor networks (WSNs) and related techniques, the amount of sensory data manifests an explosive growth. In some applications of IoTs and WSNs, the size of sensory data has already exceeded several petabytes annually, which brings many troubles and challenges for data collection, a primary operation in IoTs and WSNs. Since exact data collection is not affordable for many WSN and IoT systems due to the limitations on bandwidth and energy, many approximate data collection algorithms have been proposed in the last decade. This survey reviews the state of the art of approximate data collection algorithms.
We classify them into three categories: the model-based ones, the compressive sensing based ones, and the query-driven ones. For each category of algorithms, the advantages and disadvantages are elaborated, some challenges and unsolved problems are pointed out, and the research prospects are forecasted. PMID:28287440

2. Orders on Intervals Over Partially Ordered Sets: Extending Allen's Algebra and Interval Graph Results

SciTech Connect

Zapata, Francisco; Kreinovich, Vladik; Joslyn, Cliff A.; Hogan, Emilie A.

2013-08-01

To make a decision, we need to compare the values of quantities. In many practical situations, we know the values with interval uncertainty. In such situations, we need to compare intervals. Allen's algebra describes all possible relations between intervals on the real line, and ordering relations between such intervals are well studied. In this paper, we extend this description to intervals in an arbitrary partially ordered set (poset). In particular, we explicitly describe ordering relations between intervals that generalize relations between points. As auxiliary results, we provide a logical interpretation of the relation between intervals, and extend the results about interval graphs to intervals over posets.

3. A criterion for the best uniform approximation by simple partial fractions in terms of alternance. II

NASA Astrophysics Data System (ADS)

Komarov, M. A.

2017-06-01

In the problem of approximating real functions f by simple partial fractions of order ≤ n on closed intervals K = [c - ϱ, c + ϱ] ⊂ R, we obtain a criterion for the best uniform approximation which is similar to Chebyshev's alternance theorem and considerably generalizes previous results: under the same condition z_j^*

4. Pigeons' Choices between Fixed-Interval and Random-Interval Schedules: Utility of Variability?
ERIC Educational Resources Information Center

Andrzejewski, Matthew E.; Cardinal, Claudia D.; Field, Douglas P.; Flannery, Barbara A.; Johnson, Michael; Bailey, Kathleen; Hineline, Philip N.

2005-01-01

Pigeons' choosing between fixed-interval and random-interval schedules of reinforcement was investigated in three experiments using a discrete-trial procedure. In all three experiments, the random-interval schedule was generated by sampling a probability distribution at an interval (and in multiples of the interval) equal to that of the…

6. Dissimilar Physiological and Perceptual Responses Between Sprint Interval Training and High-Intensity Interval Training.

PubMed

Wood, Kimberly M; Olive, Brittany; LaValle, Kaylyn; Thompson, Heather; Greer, Kevin; Astorino, Todd A

2016-01-01

High-intensity interval training (HIIT) and sprint interval training (SIT) elicit similar cardiovascular and metabolic adaptations vs. endurance training. No study, however, has investigated acute physiological changes during HIIT vs. SIT. This study compared acute changes in heart rate (HR), blood lactate concentration (BLa), oxygen uptake (VO2), affect, and rating of perceived exertion (RPE) during HIIT and SIT.
Active adults (4 women and 8 men, age = 24.2 ± 6.2 years) initially performed a VO2max test to determine the workload for both sessions on the cycle ergometer, whose order was randomized. Sprint interval training consisted of 8 bouts of 30 seconds of all-out cycling at 130% of maximum watts (Wmax). High-intensity interval training consisted of eight 60-second bouts at 85% Wmax. Heart rate, VO2, BLa, affect, and RPE were continuously assessed throughout exercise. Repeated-measures analysis of variance revealed a significant difference between HIIT and SIT for VO2 (p < 0.001), HR (p < 0.001), RPE (p = 0.03), and BLa (p = 0.049). Conversely, there was no significant difference between regimens for affect (p = 0.12). Energy expenditure was significantly higher (p = 0.02) in HIIT (209.3 ± 40.3 kcal) vs. SIT (193.5 ± 39.6 kcal). During HIIT, subjects burned significantly more calories and reported lower perceived exertion than during SIT. The higher VO2 and lower BLa in HIIT vs. SIT reflected dissimilar metabolic perturbation between regimens, which may elicit unique long-term adaptations. If an individual is seeking to burn slightly more calories, maintain a higher oxygen uptake, and perceive less exertion during exercise, HIIT is the recommended routine.

7. Cost effective mass standard calibration intervals

SciTech Connect

Shull, A.H.; Clark, J.P.

1995-11-01

National Institute of Standards and Technology (NIST) traceable standard weights serve as the foundation of mass measurement control programs. These standards are normally recalibrated annually at a cost of approximately $100 per weight. The Savannah River Site (SRS) has more than 4,000 standard weights. Most have recalibration intervals of 1 year. The cost effectiveness of the current practice was questioned. Are these mass standards being calibrated too often, and are all of these standards needed for calibration and QC activities?
Statistical analyses of data from the calibration histories were performed on a random sample of eight weight sets. The analyses indicated no time effects or significant trends in the weight masses over periods of 5 to 8 years. In other words, calibration checks were being performed too frequently. In addition, current electronic balance technology does not require a traditional set of standard weights covering the entire weighing range of a balance; at most, only 2 or 3 standards are needed for most weighing systems. Hence, by increasing weight set recalibration intervals from 1 to 3 years, and by reducing the number of standards calibrated by 80%, annual cost savings of over $400,000 are attainable at SRS. Details of the data analysis, technological advances, and cost savings are included in the paper.
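The cost arithmetic behind this record can be sketched as follows. All figures here are illustrative assumptions consistent with, but not taken verbatim from, the abstract (the paper says "more than 4,000" weights at "approximately $100" each; the 4,600-weight count below is hypothetical):

```python
# Illustrative recalibration-cost model. The weight count (4,600), per-weight
# cost ($100), and the proposed regime (20% of standards on a 3-year cycle)
# are assumptions for this sketch, loosely matching the abstract's figures.
def annual_recal_cost(n_weights, cost_per_cal, interval_years, fraction_calibrated=1.0):
    """Average annual calibration cost for a pool of standards."""
    return n_weights * fraction_calibrated * cost_per_cal / interval_years

baseline = annual_recal_cost(4600, 100, 1)          # every weight, every year
proposed = annual_recal_cost(4600, 100, 3, 0.20)    # 20% of weights, 3-year cycle
savings = baseline - proposed
print(round(baseline), round(proposed), round(savings))
```

With these assumed counts the model reproduces savings on the order of the $400,000 figure quoted in the abstract.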

8. Factors that influence the compatibility of pulse-pulse intervals with R-R intervals.

PubMed

Liu, An-Bang; Wu, Hsien-Tsai; Liu, Cyuan-Cin; Hsu, Chun-Hsiang; Chen, Ding-Yuan

2013-01-01

Cardiac autonomic dysfunction assessed by power spectral analysis of electrocardiographic (ECG) R-R intervals (RRI) is a useful method in clinical research. The compatibility of pulse-pulse intervals (PPI) acquired by photoplethysmography (PPG) with RRI is equivocal. In this study, we investigated factors that influence this compatibility. We recruited 25 young, healthy subjects divided into two groups: normal-weight subjects (Group 1, BMI < 24, n = 15) and overweight subjects (Group 2, BMI >= 24, n = 10). ECG and PPG were measured for 5 minutes. Cross-approximate entropy (CAE) and the fast Fourier transform (FFT) were used to assess the compatibility between RRI and PPI. The CAE value in Group 1 was significantly lower than in Group 2 (1.71 ± 0.12 vs. 1.83 ± 0.11, P = 0.011), and there was a positive linear relationship between the CAE value and risk factors of metabolic syndrome. There was no significant difference between the LFP/HFP ratio of RRI (LHRRRI) and the LFP/HFP ratio of PPI (LHRPPI) in Group 1 (1.42 ± 0.19 vs. 1.38 ± 0.17, P = 0.064), but LHRRRI was significantly higher than LHRPPI in Group 2 (2.18 ± 0.37 vs. 1.93 ± 0.30, P = 0.005). Care should therefore be taken when using PPI to assess autonomic function in obese subjects or patients with metabolic syndrome.
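The LFP/HFP ratio used in this record can be sketched from an interval series. The LF (0.04-0.15 Hz) and HF (0.15-0.40 Hz) bands are the conventional heart-rate-variability bands; the 4 Hz resampling rate and the simple periodogram are assumed preprocessing choices for this sketch, not details from the paper:

```python
# Sketch of an LF/HF ratio computation for an R-R (or pulse-pulse) interval
# series. Band edges are the standard HRV bands; resampling at 4 Hz is an
# assumed preprocessing step.
import numpy as np

def lf_hf_ratio(beat_times, intervals, fs=4.0):
    """Resample the interval series evenly, then integrate periodogram power
    over the LF and HF bands and return their ratio."""
    t_even = np.arange(beat_times[0], beat_times[-1], 1.0 / fs)
    x = np.interp(t_even, beat_times, intervals)
    x = x - x.mean()                       # remove the mean interval
    psd = np.abs(np.fft.rfft(x)) ** 2 / len(x)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    lf = psd[(freqs >= 0.04) & (freqs < 0.15)].sum()
    hf = psd[(freqs >= 0.15) & (freqs < 0.40)].sum()
    return lf / hf

# Synthetic 5-minute recording: a 1 s beat modulated at 0.10 Hz (LF)
# more strongly than at 0.25 Hz (HF), so the ratio should exceed 1.
t = np.cumsum(np.full(300, 1.0))
rri = 1.0 + 0.05 * np.sin(2 * np.pi * 0.10 * t) + 0.02 * np.sin(2 * np.pi * 0.25 * t)
ratio = lf_hf_ratio(t, rri)
```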

9. Analytic Interatomic Forces in the Random Phase Approximation

Ramberger, Benjamin; Schäfer, Tobias; Kresse, Georg

2017-03-01

We show that in the random phase approximation (RPA) the first derivative of the energy with respect to the Green's function is the self-energy in the GW approximation. This relationship allows us to derive compact equations for the RPA interatomic forces. We also show that position-dependent overlap operators are elegantly incorporated in the present framework. The RPA force equations have been implemented in the projector augmented wave formalism, and we present illustrative applications, including ab initio molecular dynamics simulations, the calculation of phonon dispersion relations for diamond and graphite, as well as structural relaxations for water on boron nitride. The present derivation establishes a concise framework for forces within perturbative approaches and is also applicable to more involved approximations for the correlation energy.
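The key relation stated in this abstract can be written schematically (a sketch of the chain rule only, not the authors' full projector-augmented-wave expressions; the notation for the ionic position $R_I$ is assumed here):

```latex
\frac{\delta E_{\mathrm{RPA}}}{\delta G} = \Sigma^{GW},
\qquad
\frac{dE_{\mathrm{RPA}}}{dR_I}
  = \frac{\partial E_{\mathrm{RPA}}}{\partial R_I}
  + \mathrm{Tr}\!\left[\,\Sigma^{GW}\,\frac{\partial G}{\partial R_I}\right]
```

so the interatomic force follows from the explicit position dependence plus a trace over the GW self-energy contracted with the derivative of the Green's function.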

10. Weighted regression analysis and interval estimators

Treesearch

Donald W. Seegrist

1974-01-01

A method is presented for deriving the weighted least squares estimators of the parameters of a multiple regression model. Confidence intervals for expected values and prediction intervals for the means of future samples are given.
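The estimator described here can be sketched directly. The formulas (beta = (XᵀWX)⁻¹XᵀWy with weights equal to inverse error variances) are standard weighted least squares; the specific example data and the 1.96 normal quantile (95% interval with known variance scale) are assumptions for illustration, not the report's own derivation:

```python
# Minimal weighted least squares sketch: estimate beta = (X'WX)^{-1} X'Wy and
# a confidence interval for an expected value E[y | x0]. The example data and
# the 95% normal quantile are illustrative assumptions.
import numpy as np

def wls(X, y, w):
    """Weighted least squares estimate and (unscaled) parameter covariance."""
    W = np.diag(w)
    XtWX = X.T @ W @ X
    beta = np.linalg.solve(XtWX, X.T @ W @ y)
    cov = np.linalg.inv(XtWX)   # covariance when weights are inverse variances
    return beta, cov

def ci_expected_value(x0, beta, cov, scale=1.0, z=1.96):
    """Confidence interval for the expected value x0 . beta."""
    mu = x0 @ beta
    se = np.sqrt(scale * x0 @ cov @ x0)
    return mu - z * se, mu + z * se

rng = np.random.default_rng(0)
x = np.linspace(1, 10, 50)
X = np.column_stack([np.ones_like(x), x])
var_i = 0.1 * x                              # heteroscedastic error variances
y = 2.0 + 0.5 * x + rng.normal(0, np.sqrt(var_i))
beta, cov = wls(X, y, 1.0 / var_i)           # weights = inverse variances
lo, hi = ci_expected_value(np.array([1.0, 5.0]), beta, cov)
```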

11. Interpregnancy interval and obstetrical complications.

PubMed

Shachar, Bat Zion; Lyell, Deirdre J

2012-09-01

Obstetricians are often presented with questions regarding the optimal interpregnancy interval (IPI). Short IPI has been associated with adverse perinatal and maternal outcomes, ranging from preterm birth and low birth weight to neonatal and maternal morbidity and mortality. Long IPI has in turn been associated with increased risk for preeclampsia and labor dystocia. In this review, we discuss the data regarding these associations along with recent studies revealing associations of short IPI with birth defects, schizophrenia, and autism. The optimal IPI may vary for different subgroups. We discuss the consequences of short IPI in women with a prior cesarean section, in particular the increased risk for uterine rupture and the considerations regarding a trial of labor in this subgroup. We review studies examining the interaction between short IPI and advanced maternal age and discuss the risk-benefit assessment for these women. Finally, we turn our attention to women after a stillbirth or an abortion, who often desire to conceive again with minimal delay. We discuss studies speaking in favor of a shorter IPI in this group. The accumulated data allow for the reevaluation of current IPI recommendations and management guidelines for women in general and among subpopulations with special circumstances. In particular, we suggest lowering the current minimal IPI recommendation to only 18 months (vs 24 months according to the latest World Health Organization recommendations), with even shorter recommended minimal IPI for women of advanced age and those who conceive after a spontaneous or induced abortion.

12. Scaling and memory in the return intervals of realized volatility

Ren, Fei; Gu, Gao-Feng; Zhou, Wei-Xing

2009-11-01

We perform return interval analysis of 1-min realized volatility, defined by the sum of absolute high-frequency intraday returns, for the Shanghai Stock Exchange Composite Index (SSEC) and 22 constituent stocks of the SSEC. The scaling behavior and memory effect of the return intervals between successive realized volatilities above a certain threshold q are carefully investigated. In comparison with the volatility defined by the tick prices closest to the minute marks, the return interval distribution for the realized volatility shows better scaling behavior, in that 20 of the 22 stocks and the SSEC pass the Kolmogorov-Smirnov (KS) test and exhibit scaling behaviors, among which the scaling function for 8 stocks could be approximated well by a stretched exponential distribution according to the KS goodness-of-fit test at the 5% significance level. The improved scaling behavior is further confirmed by the relation between the fitted exponent γ and the threshold q. In addition, the similarity of the return interval distributions for different stocks is also observed for the realized volatility. The investigation of the conditional probability distribution and the detrended fluctuation analysis (DFA) shows that both short-term and long-term memory exist in the return intervals of realized volatility.
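The return-interval construction underlying this analysis is simple to state: the gaps between successive times at which the volatility exceeds a threshold q. A minimal sketch on synthetic data (the i.i.d. test series and the 95th-percentile threshold are assumptions for illustration, not the paper's data):

```python
# Sketch of return-interval extraction: gaps between successive exceedances
# of a threshold q in a volatility series. The synthetic series and the
# 95th-percentile threshold are illustrative assumptions.
import numpy as np

def return_intervals(series, q):
    """Index gaps between consecutive values exceeding threshold q."""
    idx = np.flatnonzero(series > q)
    return np.diff(idx)

rng = np.random.default_rng(1)
vol = np.abs(rng.standard_normal(10_000))    # stand-in volatility series
q = np.quantile(vol, 0.95)                   # threshold selecting top-5% events
tau = return_intervals(vol, q)
# For an uncorrelated series the mean interval is about 1/p, with p the
# exceedance probability; memory effects show up as deviations from this.
```

In the paper, the distribution of tau (suitably rescaled by its mean) is what is tested for scaling and for a stretched-exponential form.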

13. Estimation of confidence intervals of global horizontal irradiance obtained from a weather prediction model

Ohtake, Hideaki; Gari da Silva Fonseca, Joao, Jr.; Takashima, Takumi; Oozeki, Takashi; Yamada, Yoshinori

2014-05-01

Many photovoltaic (PV) systems have been installed in Japan after the introduction of the Feed-in-Tariff. For energy management of electric power systems that include many PV systems, forecasts of PV power production are a useful technology. Recently, numerical weather predictions have been applied to forecast PV power production, but the forecasted values invariably carry forecast errors specific to each modeling system, so the forecast data must be used with its error taken into account. In this study, we attempted to estimate confidence intervals for hourly forecasts of global horizontal irradiance (GHI) values obtained from a mesoscale model (MSM) developed by the Japan Meteorological Agency. In a recent study, we found that the forecasted GHI values of the MSM have two systematic forecast errors: first, the forecast values of the GHI depend on the clearness indices, defined as the GHI values divided by the extraterrestrial solar irradiance; second, the forecast errors show seasonal variations, with overestimation of the GHI forecasts in winter and underestimation in summer. Information on the errors of the hourly GHI forecasts, that is, the confidence intervals of the forecasts, is of great significance for planning the energy management of a grid that includes many PV systems by an electric company. For PV systems, confidence intervals of the GHI forecasts are required either for a pinpoint area or for a relatively large area controlling the power system. For the relatively large area, a spatial-smoothing method of the GHI values is applied to both the observations and the forecasts. The spatial-smoothing method reduced the confidence intervals of the hourly GHI forecasts in an extreme event of the GHI forecast (a case of large forecast error) over the relatively large area of the Tokyo electric company (by approximately 68% relative to a pinpoint forecast). For more credible estimation of the confidence

14. A consistent collinear triad approximation for operational wave models

Salmon, J. E.; Smit, P. B.; Janssen, T. T.; Holthuijsen, L. H.

2016-08-01

In shallow water, the spectral evolution associated with energy transfers due to three-wave (or triad) interactions is important for the prediction of nearshore wave propagation and wave-driven dynamics. The numerical evaluation of these nonlinear interactions involves the evaluation of a weighted convolution integral in both frequency and directional space for each frequency-direction component in the wave field. For reasons of efficiency, operational wave models often rely on a so-called collinear approximation that assumes that energy is only exchanged between wave components travelling in the same direction (collinear propagation) to eliminate the directional convolution. In this work, we show that the collinear approximation as presently implemented in operational models is inconsistent. This causes energy transfers to become unbounded in the limit of unidirectional waves (narrow aperture), and results in the underestimation of energy transfers in short-crested wave conditions. We propose a modification to the collinear approximation to remove this inconsistency and to make it physically more realistic. Through comparison with laboratory observations and results from Monte Carlo simulations, we demonstrate that the proposed modified collinear model is consistent, remains bounded, smoothly converges to the unidirectional limit, and is numerically more robust. Our results show that the modifications proposed here result in a consistent collinear approximation, which remains bounded and can provide an efficient approximation to model nonlinear triad effects in operational wave models.

15. Comparison between the coherent-pair approximation and projection from a hedgehog Fock state in chiral soliton models

SciTech Connect

Harvey, M.; Goeke, K.; Urbano, J.N.

1987-10-01

Comparisons are shown for approximations to the lowest-energy solution of a schematic Hamiltonian using either the coherent-pair approximation of Bolsterli or the hedgehog approximation with variation after projection as given by Fiolhais and Rosina.

16. The Measurement of the QT Interval

PubMed Central

Postema, Pieter G; Wilde, Arthur A.M

2014-01-01

The evaluation of every electrocardiogram should also include an effort to interpret the QT interval to assess the risk of malignant arrhythmias and sudden death associated with an aberrant QT interval. The QT interval is measured from the beginning of the QRS complex to the end of the T-wave, and should be corrected for heart rate to enable comparison with reference values. However, the correct determination of the QT interval, and of its value, appears to be a daunting task. Although computerized analysis and interpretation of the QT interval are widely available, these may well over- or underestimate the QT interval and may thus either result in unnecessary treatment or preclude appropriate measures from being taken. This is particularly evident with difficult T-wave morphologies and technically suboptimal ECGs. Similarly, accurate manual assessment of the QT interval appears to be difficult for many physicians worldwide. In this review we delineate the history of the measurement of the QT interval, its underlying pathophysiological mechanisms, and the current standards of its measurement; we provide a glimpse into the future; and we discuss several issues troubling accurate measurement of the QT interval. These issues include the lead choice, U-waves, determination of the end of the T-wave, different heart rate correction formulas, arrhythmias, and the definition of normal and aberrant QT intervals. Furthermore, we provide recommendations that may serve as guidance to address these complexities and which support accurate assessment of the QT interval and its interpretation. PMID:24827793
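The heart-rate correction mentioned above can be made concrete with two widely used formulas, Bazett and Fridericia. The formulas themselves are standard; the example numbers are ours, not the review's recommendations.

```python
def qtc_bazett(qt_ms: float, rr_s: float) -> float:
    """Bazett correction: QTc = QT / sqrt(RR), with RR in seconds."""
    return qt_ms / rr_s ** 0.5

def qtc_fridericia(qt_ms: float, rr_s: float) -> float:
    """Fridericia correction: QTc = QT / RR**(1/3)."""
    return qt_ms / rr_s ** (1.0 / 3.0)

# Example: measured QT = 400 ms at heart rate 75 bpm (RR = 60/75 = 0.8 s).
rr = 60.0 / 75.0
print(round(qtc_bazett(400.0, rr)))      # 447
print(round(qtc_fridericia(400.0, rr)))  # 431
```

At 60 bpm (RR = 1 s) both formulas leave the QT interval unchanged; at faster rates Bazett corrects more aggressively than Fridericia, one reason different formulas yield different "normal" thresholds.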

17. Min and Max Extreme Interval Values

ERIC Educational Resources Information Center

Jance, Marsha L.; Thomopoulos, Nick T.

2011-01-01

The paper shows how to find the min and max extreme interval values for the exponential and triangular distributions from the min and max uniform extreme interval values. Tables are provided to show the min and max extreme interval values for the uniform, exponential, and triangular distributions for different probabilities and observation sizes.
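One way to obtain exponential extreme interval values from uniform ones is via order-statistic quantiles and the inverse CDF, as sketched below. The parameter choices (n = 10, p = 0.95, rate λ = 1) are arbitrary illustrations, and this is a minimal sketch of the mapping idea, not necessarily the authors' exact tabulation method; the triangular case is omitted.

```python
import math

def max_quantile_uniform(p: float, n: int) -> float:
    """p-quantile of the max of n iid Uniform(0,1) draws: p**(1/n)."""
    return p ** (1.0 / n)

def min_quantile_uniform(p: float, n: int) -> float:
    """p-quantile of the min of n iid Uniform(0,1) draws."""
    return 1.0 - (1.0 - p) ** (1.0 / n)

def to_exponential(u: float, lam: float = 1.0) -> float:
    """Map a Uniform(0,1) quantile to Exponential(lam) via the inverse CDF."""
    return -math.log(1.0 - u) / lam

n, p = 10, 0.95
u_max = max_quantile_uniform(p, n)   # uniform extreme value
x_max = to_exponential(u_max)        # corresponding exponential extreme value
print(round(u_max, 4), round(x_max, 3))
```

Because the inverse CDF is monotone, applying it to the uniform extreme quantile directly yields the extreme quantile of the transformed distribution, which is what makes the uniform table the starting point for the others.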

18. Dynamical nonlocal coherent-potential approximation for itinerant electron magnetism.

PubMed

Rowlands, D A; Zhang, Yu-Zhong

2014-11-26

A dynamical generalisation of the nonlocal coherent-potential approximation is derived based upon the functional integral approach to the interacting electron problem. The free energy is proven to be variational with respect to the self-energy provided a self-consistency condition on a cluster of sites is satisfied. In the present work, calculations are performed within the static approximation and the effect of the nonlocal physics on the formation of the local moment state in a simple model is investigated. The results reveal the importance of the dynamical correlations.

19. Military Applicability of Interval Training for Health and Performance.

PubMed

Gibala, Martin J; Gagnon, Patrick J; Nindl, Bradley C

2015-11-01

Militaries from around the globe have predominantly used endurance training as their primary mode of aerobic physical conditioning, with historical emphasis placed on the long distance run. In contrast to this traditional exercise approach to training, interval training is characterized by brief, intermittent bouts of intense exercise, separated by periods of lower intensity exercise or rest for recovery. Although hardly a novel concept, research over the past decade has shed new light on the potency of interval training to elicit physiological adaptations in a time-efficient manner. This work has largely focused on the benefits of low-volume interval training, which involves a relatively small total amount of exercise, as compared with the traditional high-volume approach to training historically favored by militaries. Studies that have directly compared interval and moderate-intensity continuous training have shown similar improvements in cardiorespiratory fitness and the capacity for aerobic energy metabolism, despite large differences in total exercise and training time commitment. Interval training can also be applied in a calisthenics manner to improve cardiorespiratory fitness and strength, and this approach could easily be incorporated into a military conditioning environment. Although interval training can elicit physiological changes in both men and women, the potential for sex-specific differences in the adaptive response to interval training warrants further investigation. Additional work is needed to clarify adaptations occurring over the longer term; however, interval training deserves consideration from a military applicability standpoint as a time-efficient training strategy to enhance soldier health and performance. There is value for military leaders in identifying strategies that reduce the time required for exercise, but nonetheless provide an effective training stimulus.

20. Reinforcing value of interval and continuous physical activity in children.

PubMed

Barkley, Jacob E; Epstein, Leonard H; Roemmich, James N

2009-08-04

During play children engage in short bouts of intense activity, much like interval training. This natural preference for interval-type activity may have important implications for prescribing the most motivating type of physical activity, but the motivation of children to be physically active in an interval or continuous fashion has not yet been examined. In the present study, ventilatory threshold (VT) and VO2 peak were determined in boys (n=16) and girls (n=16) aged 10 +/- 1.3 years. Children sampled interval and continuous constant-load physical activity protocols on a cycle ergometer at intensities 20% above and below the VT on another day. The physical activity protocols were matched for energy expenditure. Children then completed an operant button-pressing task using a progressive fixed-ratio schedule to assess the relative reinforcing value (RRV) of interval versus continuous physical activity. The number of button presses performed to gain access to interval or continuous physical activity and the output maximum (O(max)) were the primary outcome variables. Children performed more button presses (P<0.005) and had a greater O(max) (P<0.005) when working to gain access to interval compared to continuous physical activity at intensities >VT and <VT. Interval-type physical activity was more reinforcing than continuous constant-load physical activity for children when exercising both >VT and <VT.

1. Multifactor analysis of multiscaling in volatility return intervals.

PubMed

Wang, Fengzhong; Yamasaki, Kazuko; Havlin, Shlomo; Stanley, H Eugene

2009-01-01

We study the volatility time series of the 1137 most traded stocks in the U.S. stock markets for the two-year period 2001-2002 and analyze their return intervals τ, which are time intervals between volatilities above a given threshold q. We explore the probability density function of τ, P_q(τ), assuming a stretched exponential function, P_q(τ) ~ e^(−τ^γ). We find that the exponent γ depends on the threshold in the range between q = 1 and 6 standard deviations of the volatility. This finding supports the multiscaling nature of the return interval distribution. To better understand the multiscaling origin, we study how γ depends on four essential factors: capitalization, risk, number of trades, and return. We show that γ depends on the capitalization, risk, and return but almost does not depend on the number of trades. This suggests that γ relates to portfolio selection but not to market activity. To further characterize the multiscaling of individual stocks, we fit the moments of τ, μ_m ≡ ⟨(τ/⟨τ⟩)^m⟩^(1/m), in the range 10 < τ ≤ 100 by a power law, μ_m ∝ ⟨τ⟩^δ. The exponent δ is found also to depend on the capitalization, risk, and return but not on the number of trades, and its tendency is opposite to that of γ. Moreover, we show that δ decreases with increasing γ approximately by a linear relation. The return intervals demonstrate the temporal structure of volatilities, and our findings suggest that their multiscaling features may be helpful for portfolio optimization.
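A stretched-exponential exponent like γ can be estimated by linearizing ln(−ln P) against ln τ, as sketched below on synthetic data. The values of γ and the scale τ0 are illustrative only (not the paper's estimates), and a unit prefactor is assumed.

```python
import math

# Synthetic tail probabilities P_q(tau) = exp(-(tau/tau0)**gamma).
gamma_true, tau0 = 0.4, 5.0
taus = [2.0 * i for i in range(1, 40)]
probs = [math.exp(-(t / tau0) ** gamma_true) for t in taus]

# Linearize: ln(-ln P) = gamma * ln(tau) - gamma * ln(tau0),
# so gamma is the slope of ln(-ln P) versus ln(tau).
xs = [math.log(t) for t in taus]
ys = [math.log(-math.log(p)) for p in probs]

n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
print(round(slope, 3))  # slope estimates gamma = 0.4
```

On real return-interval histograms the same regression would be applied to empirical tail frequencies, with the caveat that noise and binning make the fitted γ threshold-dependent, which is exactly the multiscaling effect the paper studies.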

2. Bond selective chemistry beyond the adiabatic approximation

SciTech Connect

Butler, L.J.

1993-12-01

One of the most important challenges in chemistry is to develop predictive ability for the branching between energetically allowed chemical reaction pathways. Such predictive capability, coupled with a fundamental understanding of the important molecular interactions, is essential to the development and utilization of new fuels and the design of efficient combustion processes. Existing transition state and exact quantum theories successfully predict the branching between available product channels for systems in which each reaction coordinate can be adequately described by different paths along a single adiabatic potential energy surface. In particular, unimolecular dissociation following thermal, infrared multiphoton, or overtone excitation in the ground state yields a branching between energetically allowed product channels which can be successfully predicted by the application of statistical theories, i.e. the weakest bond breaks. (The predictions are particularly good for competing reactions in which there is no saddle point along the reaction coordinates, as in simple bond fission reactions.) The predicted lack of bond selectivity results from the assumption of rapid internal vibrational energy redistribution and the implicit use of a single adiabatic Born-Oppenheimer potential energy surface for the reaction. However, the adiabatic approximation is not valid for the reactions of a wide variety of energetic materials and organic fuels; coupling between the electronic states of the reacting species plays a key role in determining the selectivity of the chemical reactions induced. The work described below investigated the central role played by coupling between electronic states in polyatomic molecules in determining the selective branching between energetically allowed fragmentation pathways in two key systems.

3. Producing approximate answers to database queries

NASA Technical Reports Server (NTRS)

Vrbsky, Susan V.; Liu, Jane W. S.

1993-01-01

We have designed and implemented a query processor, called APPROXIMATE, that makes approximate answers available if part of the database is unavailable or if there is not enough time to produce an exact answer. The accuracy of the approximate answers produced improves monotonically with the amount of data retrieved to produce the result. The exact answer is produced if all of the needed data are available and query processing is allowed to continue until completion. The monotone query processing algorithm of APPROXIMATE works within the standard relational algebra framework and can be implemented on a relational database system with little change to the relational architecture. We describe here the approximation semantics of APPROXIMATE that serves as the basis for meaningful approximations of both set-valued and single-valued queries. We show how APPROXIMATE is implemented to make effective use of semantic information, provided by an object-oriented view of the database, and describe the additional overhead required by APPROXIMATE.
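A minimal sketch of the monotone-approximation idea: represent a partial answer as a pair of certain/possible tuple sets that tightens as more data becomes available. The relation, predicate, and partitioning below are hypothetical, and APPROXIMATE's actual relational-algebra semantics are richer than this toy.

```python
# Hypothetical fragmented relation: employee tuples split across storage sites.
partitions = [
    [("alice", "sales"), ("bob", "eng")],
    [("carol", "eng"), ("dave", "hr")],
    [("erin", "eng")],
]

def approximate_select(pred, available):
    """Monotone approximation of a selection query.

    Returns (certain, possible): 'certain' tuples are known to satisfy the
    predicate; 'possible' tuples live in partitions not yet retrieved and
    might still qualify.  Retrieving more partitions can only grow the
    certain set and shrink the possible set, so accuracy improves
    monotonically, and with all partitions the answer is exact.
    """
    certain, possible = set(), set()
    for i, part in enumerate(partitions):
        if i in available:
            certain |= {t for t in part if pred(t)}
        else:
            possible |= set(part)
    return certain, possible

pred = lambda t: t[1] == "eng"
c1, p1 = approximate_select(pred, {0})        # partial answer
c2, p2 = approximate_select(pred, {0, 1, 2})  # exact answer
print(len(c1), len(p1))  # 1 3
print(len(c2), len(p2))  # 3 0
```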

4. Interval Management Display Design Study

NASA Technical Reports Server (NTRS)

Baxley, Brian T.; Beyer, Timothy M.; Cooke, Stuart D.; Grant, Karlus A.

2014-01-01

In 2012, the Federal Aviation Administration (FAA) estimated that U.S. commercial air carriers moved 736.7 million passengers over 822.3 billion revenue-passenger miles. In that same report, the FAA also forecast an average annual increase in passenger traffic of 2.2 percent per year for the next 20 years, which amounts to roughly one-and-a-half times today's aircraft operations and passengers by the year 2033. If airspace capacity and throughput remain unchanged, then flight delays will increase, particularly at those airports already operating near or at capacity. It is therefore critical to create new and improved technologies, communications, and procedures for use by air traffic controllers and pilots. The National Aeronautics and Space Administration (NASA), the FAA, and the aviation industry are working together to improve the efficiency of the National Airspace System and reduce the cost of operating in it in several ways, one of which is the creation of the Next Generation Air Transportation System (NextGen). NextGen is intended to provide airspace users with more precise information about traffic, routing, and weather, as well as to improve the control mechanisms within the air traffic system. NASA's Air Traffic Management Technology Demonstration-1 (ATD-1) Project is designed to contribute to the goals of NextGen, and accomplishes this by integrating three NASA technologies to enable fuel-efficient arrival operations into high-density airports. The three NASA technologies and procedures combined in the ATD-1 concept are advanced arrival scheduling, controller decision support tools, and aircraft avionics that together enable multiple time-deconflicted and fuel-efficient arrival streams in high-density terminal airspace.

5. Approximate Green's function methods for HZE transport in multilayered materials

NASA Technical Reports Server (NTRS)

Wilson, John W.; Badavi, Francis F.; Shinn, Judy L.; Costen, Robert C.

1993-01-01

A nonperturbative analytic solution of the high charge and energy (HZE) Green's function is used to implement a computer code for laboratory ion beam transport in multilayered materials. The code is established to operate on the Langley nuclear fragmentation model used in engineering applications. Computational procedures are established to generate linear energy transfer (LET) distributions for a specified ion beam and target for comparison with experimental measurements. The code was found to be highly efficient and compared well with the perturbation approximation.

6. Intervals in evolutionary algorithms for global optimization

SciTech Connect

Patil, R.B.

1995-05-01

Optimization is of central concern to a number of disciplines. Interval arithmetic methods for global optimization provide us with (guaranteed) verified results. These methods are mainly restricted to classes of objective functions that are twice differentiable, and use a simple strategy of eliminating or splitting larger regions of the search space in the global optimization process. An efficient approach that combines this strategy from interval global optimization methods with the robustness of evolutionary algorithms is proposed. In the proposed approach, search begins with randomly created interval vectors with interval widths equal to the whole domain. Before the beginning of the evolutionary process, the fitness of these interval parameter vectors is defined by evaluating the objective function at the center of the initial interval vectors. In the subsequent evolutionary process the local optimization process returns an estimate of the bounds of the objective function over the interval vectors. Though these bounds may not be correct at the beginning, due to large interval widths and complicated function properties, the process of reducing interval widths over time and a selection approach similar to simulated annealing help in estimating reasonably correct bounds as the population evolves. The interval parameter vectors at these estimated bounds (local optima) are then subjected to crossover and mutation operators. This evolutionary process continues for a predetermined number of generations in the search for the global optimum.
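The interval-bounding and elimination step that the hybrid builds on can be sketched on its own (without the crossover/mutation machinery) as interval branch-and-bound. The objective f(x) = (x − 1)² and all tolerances below are hypothetical choices for illustration.

```python
def f_interval(lo, hi):
    """Interval extension of f(x) = (x - 1)**2: bounds of f over [lo, hi]."""
    a, b = lo - 1.0, hi - 1.0
    # Lower bound is 0 if the shifted interval straddles 0.
    f_lo = 0.0 if a <= 0.0 <= b else min(a * a, b * b)
    return f_lo, max(a * a, b * b)

def interval_minimize(lo, hi, tol=1e-6):
    """Branch-and-bound: split boxes, discard any box whose guaranteed
    lower bound exceeds the best upper bound found so far."""
    boxes = [(lo, hi)]
    best_upper = float("inf")
    while boxes:
        next_boxes = []
        for a, b in boxes:
            f_lo, f_hi = f_interval(a, b)
            best_upper = min(best_upper, f_hi)   # f_hi is attained somewhere
            if f_lo > best_upper or (b - a) < tol:
                continue  # box eliminated, or already narrow enough
            m = 0.5 * (a + b)
            next_boxes += [(a, m), (m, b)]
        boxes = next_boxes
    return best_upper

result = interval_minimize(-4.0, 5.0)
print(result)  # guaranteed upper bound on the global minimum (true value 0)
```

The guarantee comes from the interval extension: a box can only be discarded when its lower bound provably exceeds a value the function actually attains, which is the "verified" behavior the abstract attributes to interval methods.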

7. Signal Approximation with a Wavelet Neural Network

DTIC Science & Technology

1992-12-01

…specialized electronic devices like the Intel Electronically Trainable Analog Neural Network (ETANN) chip. The WNN representation allows the… accurately approximated with a WNN trained with irregularly sampled data. Keywords: signal approximation, wavelet neural network.

8. High-Intensity Interval Exercise and Postprandial Triacylglycerol.

PubMed

Burns, Stephen F; Miyashita, Masashi; Stensel, David J

2015-07-01

This review examined if high-intensity interval exercise (HIIE) reduces postprandial triacylglycerol (TAG) levels. Fifteen studies were identified, in which the effect of interval exercise conducted at an intensity of >65% of maximal oxygen uptake was evaluated on postprandial TAG levels. Analysis was divided between studies that included supramaximal exercise and those that included submaximal interval exercise. Ten studies examined the effect of a single session of low-volume HIIE including supramaximal sprints on postprandial TAG. Seven of these studies noted reductions in the postprandial total TAG area under the curve the morning after exercise of between ~10 and 21% compared with rest, but three investigations found no significant difference in TAG levels. Variations in the HIIE protocol used, inter-individual variation or insufficient time post-exercise for an increase in lipoprotein lipase activity are proposed reasons for the divergent results among studies. Five studies examined the effect of high-volume submaximal interval exercise on postprandial TAG. Four of these studies were characterised by high exercise energy expenditure and effectively attenuated total postprandial TAG levels by ~15-30%, but one study with a lower energy expenditure found no effect on TAG. The evidence suggests that supramaximal HIIE can induce large reductions in postprandial TAG levels but findings are inconsistent. Submaximal interval exercise offers no TAG metabolic or time advantage over continuous aerobic exercise but could be appealing in nature to some individuals. Future research should examine if submaximal interval exercise can reduce TAG levels in line with more realistic and achievable exercise durations of 30 min per day.

9. Approximate solution for high-frequency Q-switched lasers.

PubMed

Agnesi, Antonio

2016-06-01

A simple approximation for the energy, pulse width, and build-up time valid for high-repetition-rate Q-switched lasers is discussed. This particular regime of operation is most common in industrial applications where manufacturing time must be minimized. Limits of validity and some considerations for the choice of the most appropriate laser system for specific applications are briefly discussed.

10. Approximate Approaches to the One-Dimensional Finite Potential Well

ERIC Educational Resources Information Center

Singh, Shilpi; Pathak, Praveen; Singh, Vijay A.

2011-01-01

The one-dimensional finite well is a textbook problem. We propose approximate approaches to obtain the energy levels of the well. The finite well is also encountered in semiconductor heterostructures where the carrier mass inside the well (m[subscript i]) is taken to be distinct from mass outside (m[subscript o]). A relevant parameter is the mass…
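The energy levels of the finite well come from a transcendental matching condition that must be solved numerically. Below is a minimal sketch for the even-parity states, assuming equal carrier masses inside and outside the well (so the mass-mismatch parameter discussed in the abstract is ignored) and a hypothetical well-strength parameter z0 = 8.

```python
import math

z0 = 8.0  # assumed well-strength parameter, z0 = (L/2) * sqrt(2*m*V0) / hbar

def even_condition(z):
    """Even-parity bound states satisfy z*tan(z) = sqrt(z0**2 - z**2);
    this returns the difference of the two sides."""
    return z * math.tan(z) - math.sqrt(z0 * z0 - z * z)

def bisect(f, a, b, iters=100):
    """Simple bisection; assumes f changes sign on [a, b]."""
    for _ in range(iters):
        m = 0.5 * (a + b)
        if f(a) * f(m) <= 0.0:
            b = m
        else:
            a = m
    return 0.5 * (a + b)

# The ground state lies in (0, pi/2), where tan(z) is positive and finite.
z1 = bisect(even_condition, 1e-9, math.pi / 2 - 1e-9)
E_over_V0 = (z1 / z0) ** 2   # bound-state energy as a fraction of well depth
print(round(z1, 4), round(E_over_V0, 4))
```

Higher even states come from the intervals (π, 3π/2), (2π, 5π/2), …, and the odd states from the companion condition −z cot(z) = sqrt(z0² − z²); the mass-mismatch case modifies the matching condition rather than the solution procedure.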

12. Spin-1 Heisenberg ferromagnet using pair approximation method

SciTech Connect

Mert, Murat; Mert, Gülistan; Kılıç, Ahmet

2016-06-08

Thermodynamic properties for Heisenberg ferromagnet with spin-1 on the simple cubic lattice have been calculated using pair approximation method. We introduce the single-ion anisotropy and the next-nearest-neighbor exchange interaction. We found that for negative single-ion anisotropy parameter, the internal energy is positive and heat capacity has two peaks.

13. Analytic Approximations for the Extrapolation of Lattice Data

SciTech Connect

Masjuan, Pere

2010-12-22

We present analytic approximations of chiral SU(3) amplitudes for the extrapolation of lattice data to the physical masses and the determination of Next-to-Next-to-Leading-Order low-energy constants. Lattice data for the ratio F_K/F_π is used to test the method.

14. An approximation technique for jet impingement flow

SciTech Connect

Najafi, Mahmoud; Fincher, Donald; Rahni, Taeibi; Javadi, KH.; Massah, H.

2015-03-10

The analytical approximate solution of a non-linear jet impingement flow model will be demonstrated. We will show that this is an improvement over the series approximation obtained via the Adomian decomposition method, which is itself a powerful method for analysing non-linear differential equations. The results of these approximations will be compared with the Runge-Kutta approximation in order to demonstrate their validity.

15. Approximation method for the kinetic Boltzmann equation

NASA Technical Reports Server (NTRS)

Shakhov, Y. M.

1972-01-01

The further development of a method for approximating the Boltzmann equation is considered, and the case of pseudo-Maxwellian molecules is treated in detail. A method of approximating the collision frequency is discussed, along with a method for approximating the moments of the Boltzmann collision integral. Since the return collision integral and the collision frequency are expressed through the distribution function moments, use of the proposed methods makes it possible to reduce the Boltzmann equation to a series of approximating equations.

16. Approximating the Helium Wavefunction in Positronium-Helium Scattering

NASA Technical Reports Server (NTRS)

DiRienzi, Joseph; Drachman, Richard J.

2003-01-01

In the Kohn variational treatment of the positronium-hydrogen scattering problem the scattering wave function is approximated by an expansion in some appropriate basis set, but the target and projectile wave functions are known exactly. In the positronium-helium case, however, a difficulty immediately arises in that the wave function of the helium target atom is not known exactly, and there are several ways to deal with the associated eigenvalue in formulating the variational scattering equations to be solved. In this work we will use the Kohn variational principle in the static exchange approximation to determine the zero-energy scattering length for the Ps-He system, using a suite of approximate target functions. The results we obtain will be compared with each other and with corresponding values found by other approximation techniques.

17. Non-ideal boson system in the Gaussian approximation

SciTech Connect

Tommasini, P.R.; de Toledo Piza, A.F.

1997-01-01

We investigate ground-state and thermal properties of a system of non-relativistic bosons interacting through repulsive, two-body interactions in a self-consistent Gaussian mean-field approximation which consists in writing the variationally determined density operator as the most general Gaussian functional of the quantized field operators. Finite temperature results are obtained in a grand canonical framework. Contact is made with the results of Lee, Yang, and Huang in terms of particular truncations of the Gaussian approximation. The full Gaussian approximation supports a free phase or a thermodynamically unstable phase when contact forces and a standard renormalization scheme are used. When applied to a Hamiltonian with zero range forces interpreted as an effective theory with a high momentum cutoff, the full Gaussian approximation generates a quasi-particle spectrum having an energy gap, in conflict with perturbation theory results. © 1997 Academic Press, Inc.

18. Compressive Imaging via Approximate Message Passing

DTIC Science & Technology

2015-09-04

We propose novel compressive imaging algorithms that employ approximate message passing (AMP), which is an iterative signal estimation algorithm that… Keywords: approximate message passing, compressive imaging, compressive sensing, hyperspectral imaging, signal reconstruction.

19. Fractal Trigonometric Polynomials for Restricted Range Approximation

Chand, A. K. B.; Navascués, M. A.; Viswanathan, P.; Katiyar, S. K.

2016-05-01

One-sided approximation tackles the problem of approximation of a prescribed function by simple traditional functions such as polynomials or trigonometric functions that lie completely above or below it. In this paper, we use the concept of fractal interpolation function (FIF), precisely of fractal trigonometric polynomials, to construct one-sided uniform approximants for some classes of continuous functions.
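A naive way to obtain a one-sided approximant from any ordinary approximant is to shift it past its maximum overshoot, so the result lies entirely below (or above) the target. The sketch below uses f = sin and the crude approximant p(x) = x on [0, π] as assumed examples; it illustrates the one-sidedness constraint only, not the paper's fractal construction.

```python
import math

f = math.sin
p = lambda x: x                      # crude approximant; lies above sin on [0, pi]

xs = [i * math.pi / 1000.0 for i in range(1001)]   # evaluation grid
shift = max(p(x) - f(x) for x in xs)               # maximum overshoot of p over f

# Shifting p down by its worst overshoot yields a lower one-sided approximant.
p_lower = lambda x: p(x) - shift

print(round(shift, 4))  # here the overshoot peaks at x = pi, so shift ≈ pi
```

The shifted approximant is valid on the grid by construction; better one-sided approximants (including the fractal trigonometric polynomials of the paper) aim to achieve the constraint with far smaller uniform error than this crude shift.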

20. On Approximation of Distribution and Density Functions.

ERIC Educational Resources Information Center

Wolff, Hans

Stochastic approximation algorithms for least square error approximation to density and distribution functions are considered. The main results are necessary and sufficient parameter conditions for the convergence of the approximation processes and a generalization to some time-dependent density and distribution functions. (Author)